50 entries in total
- [1] Memory Efficient Deep Neural Network Training [J]. Euro-Par 2021: Parallel Processing Workshops, 2022, 13098: 515-519
- [2] Logarithmic Compression for Memory Footprint Reduction in Neural Network Training [J]. 2017 Fifth International Symposium on Computing and Networking (CANDAR), 2017: 291-297
- [3] EPMC: efficient parallel memory compression in deep neural network training [J]. Neural Computing and Applications, 2022, 34 (01): 757-769
- [6] Memory Saving Method for Enhanced Convolution of Deep Neural Network [J]. 2018 11th International Symposium on Computational Intelligence and Design (ISCID), Vol 1, 2018: 185-188
- [7] Deep Neural Network Training Method Based on Individual Differences of Training Samples [J]. Ruan Jian Xue Bao/Journal of Software, 2022, 33 (12): 4534-4544
- [8] Distributed Deep Learning Framework Based on Shared Memory for Fast Deep Neural Network Training [J]. 2018 International Conference on Information and Communication Technology Convergence (ICTC), 2018: 1239-1242
- [10] FloatPIM: In-Memory Acceleration of Deep Neural Network Training with High Precision [J]. Proceedings of the 2019 46th International Symposium on Computer Architecture (ISCA '19), 2019: 802-815