共 50 条
- [2] Improving Training Time of Deep Neural Network With Asynchronous Averaged Stochastic Gradient Descent [J]. 2014 9TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP), 2014, : 446 - 449
- [4] DAC-SGD: A Distributed Stochastic Gradient Descent Algorithm Based on Asynchronous Connection [J]. IIP'17: PROCEEDINGS OF THE 2ND INTERNATIONAL CONFERENCE ON INTELLIGENT INFORMATION PROCESSING, 2017,
- [6] Evolutionary Stochastic Gradient Descent for Optimization of Deep Neural Networks [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
- [7] Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
- [8] Explicit loss asymptotics in the gradient descent training of neural networks [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
- [9] Gradient Descent Analysis: On Visualizing the Training of Deep Neural Networks [J]. PROCEEDINGS OF THE 14TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS - VOL 3: IVAPP, 2019, : 338 - 345