- [1] EmbRace: Accelerating Sparse Communication for Distributed Training of Deep Neural Networks [C]. 51st International Conference on Parallel Processing (ICPP 2022), 2022
- [2] Distributed B-SDLM: Accelerating the Training Convergence of Deep Neural Networks Through Parallelism [C]. PRICAI 2016: Trends in Artificial Intelligence, 2016, 9810: 243-250
- [3] Centered Weight Normalization in Accelerating Training of Deep Neural Networks [C]. 2017 IEEE International Conference on Computer Vision (ICCV), 2017: 2822-2830
- [4] Accelerating distributed deep neural network training with pipelined MPI allreduce [J]. Cluster Computing, 2021, 24(4): 3797-3813
- [6] Accelerating Training of Deep Neural Networks via Sparse Edge Processing [C]. Artificial Neural Networks and Machine Learning - ICANN 2017, Part I, 2017, 10613: 273-280
- [7] An In-Depth Analysis of Distributed Training of Deep Neural Networks [C]. 2021 IEEE 35th International Parallel and Distributed Processing Symposium (IPDPS), 2021: 994-1003
- [9] Parallel and Distributed Training of Deep Neural Networks: A Brief Overview [C]. 2020 IEEE 24th International Conference on Intelligent Engineering Systems (INES 2020), 2020: 165-170
- [10] Alleviating Imbalance in Synchronous Distributed Training of Deep Neural Networks [C]. 19th IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA/BDCloud/SocialCom/SustainCom 2021), 2021: 405-412