- [31] Evaluating Multi-Level Checkpointing for Distributed Deep Neural Network Training [J]. SCWS 2021: 2021 SC WORKSHOPS SUPPLEMENTARY PROCEEDINGS, 2021, : 60 - 67
- [32] Performance Modeling and Analysis of Distributed Deep Neural Network Training with Parameter Server [J]. IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 4140 - 4145
- [33] Distributed Deep Learning Framework based on Shared Memory for Fast Deep Neural Network Training [J]. 2018 INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY CONVERGENCE (ICTC), 2018, : 1239 - 1242
- [34] FPRaker: A Processing Element For Accelerating Neural Network Training [J]. PROCEEDINGS OF 54TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, MICRO 2021, 2021, : 857 - 869
- [35] Accelerating neural network training using weight extrapolations [J]. NEURAL NETWORKS, 1999, 12 (09) : 1285 - 1299
- [36] Near-Optimal Sparse Allreduce for Distributed Deep Learning [J]. PPOPP'22: PROCEEDINGS OF THE 27TH ACM SIGPLAN SYMPOSIUM ON PRINCIPLES AND PRACTICE OF PARALLEL PROGRAMMING, 2022, : 135 - 149
- [37] Accelerating the Deep Reinforcement Learning with Neural Network Compression [J]. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019
- [38] Centered Weight Normalization in Accelerating Training of Deep Neural Networks [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 2822 - 2830
- [40] LEARNAE: Distributed and Resilient Deep Neural Network Training for Heterogeneous Peer to Peer Topologies [J]. ENGINEERING APPLICATIONS OF NEURAL NETWORKS, 2019, 1000 : 286 - 298