50 records in total
- [21] AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks Through Accuracy Gradient [J]. IEEE Access, 2022, 10: 108997-109007
- [22] Accelerating Deep Neural Networks implementation: A survey [J]. IET Computers and Digital Techniques, 2021, 15(2): 79-96
- [23] Accelerating Sparse Deep Neural Networks on FPGAs [C]. 2019 IEEE High Performance Extreme Computing Conference (HPEC), 2019
- [24] Accelerating Deep Neural Networks Using FPGA [C]. 2018 30th International Conference on Microelectronics (ICM), 2018: 176-179
- [25] Accelerating Data Loading in Deep Neural Network Training [C]. 2019 IEEE 26th International Conference on High Performance Computing, Data, and Analytics (HiPC), 2019: 235-245
- [26] A Flexible Research-Oriented Framework for Distributed Training of Deep Neural Networks [C]. 2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2021: 730-739
- [27] DLB: A Dynamic Load Balance Strategy for Distributed Training of Deep Neural Networks [J]. IEEE Transactions on Emerging Topics in Computational Intelligence, 2023, 7(4): 1217-1227
- [28] Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability [C]. Proceedings of 2016 2nd Workshop on Machine Learning in HPC Environments (MLHPC), 2016: 19-26
- [30] Model-Aware Parallelization Strategy for Deep Neural Networks' Distributed Training [C]. 2019 Seventh International Conference on Advanced Cloud and Big Data (CBD), 2019: 61-66