共 50 条
- [1] Accelerating distributed deep neural network training with pipelined MPI allreduce [J]. CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2021, 24 (04): : 3797 - 3813
- [2] Evaluation of MPI Allreduce for Distributed Training of Convolutional Neural Networks [J]. 2021 29TH EUROMICRO INTERNATIONAL CONFERENCE ON PARALLEL, DISTRIBUTED AND NETWORK-BASED PROCESSING (PDP 2021), 2021, : 109 - 116
- [4] Analyzing the impact of the MPI allreduce in distributed training of convolutional neural networks [J]. Computing, 2023, 105 : 1101 - 1119
- [5] Accelerating Training for Distributed Deep Neural Networks in MapReduce [J]. WEB SERVICES - ICWS 2018, 2018, 10966 : 181 - 195
- [6] Accelerating Data Loading in Deep Neural Network Training [J]. 2019 IEEE 26TH INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING, DATA, AND ANALYTICS (HIPC), 2019, : 235 - 245
- [7] PipeCompress: Accelerating Pipelined Communication for Distributed Deep Learning [J]. IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 207 - 212
- [8] An Allreduce Algorithm and Network Co-design for Large-Scale Training of Distributed Deep Learning [J]. 21ST IEEE/ACM INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND INTERNET COMPUTING (CCGRID 2021), 2021, : 396 - 405
- [9] EmbRace: Accelerating Sparse Communication for Distributed Training of Deep Neural Networks [J]. 51ST INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2022, 2022,