- [1] Model-Aware Parallelization Strategy for Deep Neural Networks' Distributed Training [C]. 2019 Seventh International Conference on Advanced Cloud and Big Data (CBD), 2019: 61-66
- [2] An Optimization Strategy for Deep Neural Networks Training [C]. 2022 International Conference on Image Processing, Computer Vision and Machine Learning (ICICML), 2022: 596-603
- [3] Accelerating Training for Distributed Deep Neural Networks in MapReduce [C]. Web Services - ICWS 2018, 2018, 10966: 181-195
- [4] Training deep neural networks: a static load balancing approach [J]. The Journal of Supercomputing, 2020, 76(12): 9739-9754
- [6] An In-Depth Analysis of Distributed Training of Deep Neural Networks [C]. 2021 IEEE 35th International Parallel and Distributed Processing Symposium (IPDPS), 2021: 994-1003
- [8] Parallel and Distributed Training of Deep Neural Networks: A brief overview [C]. 2020 IEEE 24th International Conference on Intelligent Engineering Systems (INES 2020), 2020: 165-170
- [10] Alleviating Imbalance in Synchronous Distributed Training of Deep Neural Networks [C]. 19th IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA/BDCloud/SocialCom/SustainCom 2021), 2021: 405-412