Total: 50 entries
- [1] Parameter Hub: a Rack-Scale Parameter Server for Distributed Deep Neural Network Training [J]. Proceedings of the 2018 ACM Symposium on Cloud Computing (SoCC '18), 2018: 41-54
- [2] H-PS: A Heterogeneous-Aware Parameter Server With Distributed Neural Network Training [J]. IEEE Access, 2021, 9: 44049-44058
- [3] Distributed Deep Neural Network Training on Edge Devices [J]. SEC '19: Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, 2019: 304-306
- [4] Performance Modeling for Distributed Training of Convolutional Neural Networks [J]. 2021 29th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP 2021), 2021: 99-108
- [7] Distributed Parameter Modeling for Coupled Striplines Based on Artificial Neural Network [J]. 2022 IEEE 10th Asia-Pacific Conference on Antennas and Propagation (APCAP), 2022
- [8] Modeling and Optimizing the Scaling Performance in Distributed Deep Learning Training [J]. Proceedings of the ACM Web Conference 2022 (WWW '22), 2022: 1764-1773
- [9] Accelerating distributed deep neural network training with pipelined MPI allreduce [J]. Cluster Computing, 2021, 24(4): 3797-3813