- [2] Convolutional Neural Network Training with Distributed K-FAC [C]. Proceedings of SC20: The International Conference for High Performance Computing, Networking, Storage and Analysis (SC20), 2020.
- [3] Inefficiency of K-FAC for Large Batch Size Training [C]. Thirty-Fourth AAAI Conference on Artificial Intelligence, the Thirty-Second Innovative Applications of Artificial Intelligence Conference and the Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, 2020, 34: 5053-5060.
- [4] Accelerating Distributed K-FAC with Smart Parallelism of Computing and Communication Tasks [C]. 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS 2021), 2021: 550-560.
- [5] Scalable Data Parallel Distributed Training for Graph Neural Networks [C]. 2022 IEEE 36th International Parallel and Distributed Processing Symposium Workshops (IPDPSW 2022), 2022: 699-707.
- [6] Partitioning Sparse Deep Neural Networks for Scalable Training and Inference [C]. Proceedings of the 2021 ACM International Conference on Supercomputing (ICS 2021), 2021: 254-265.
- [7] Accelerating Training for Distributed Deep Neural Networks in MapReduce [C]. Web Services - ICWS 2018, 2018, 10966: 181-195.
- [8] DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks [C]. SC21: International Conference for High Performance Computing, Networking, Storage and Analysis, 2021.
- [9] A Scalable GPU-enabled Framework for Training Deep Neural Networks [C]. 2016 2nd International Conference on Green High Performance Computing (ICGHPC), 2016.