Faster Distributed Deep Net Training: Computation and Communication Decoupled Stochastic Gradient Descent

Cited: 0
Authors
Shen, Shuheng [1 ,2 ]
Xu, Linli [1 ,2 ]
Liu, Jingchang [1 ,2 ]
Liang, Xianfeng [1 ,2 ]
Cheng, Yifei [1 ,3 ]
Affiliations
[1] Univ Sci & Technol China, Anhui Prov Key Lab Big Data Anal & Applicat, Hefei, Anhui, Peoples R China
[2] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei, Anhui, Peoples R China
[3] Univ Sci & Technol China, Sch Data Sci, Hefei, Anhui, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
With the increase in the amount of data and the expansion of model scale, distributed parallel training has become an important and successful technique for addressing the resulting optimization challenges. Nevertheless, although distributed stochastic gradient descent (SGD) algorithms can achieve a linear iteration speedup, in practice they are significantly limited by the communication cost, making a linear time speedup difficult to achieve. In this paper, we propose a computation and communication decoupled stochastic gradient descent (CoCoD-SGD) algorithm that runs computation and communication in parallel to reduce the communication cost. We prove that CoCoD-SGD has a linear iteration speedup with respect to the total computation capability of the hardware resources. In addition, it has lower communication complexity and a better time speedup compared with traditional distributed SGD algorithms. Experiments on deep neural network training demonstrate the significant improvements of CoCoD-SGD: when training ResNet18 and VGG16 with 16 GeForce GTX 1080Ti GPUs, CoCoD-SGD is up to 2-3× faster than traditional synchronous SGD.
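The decoupling described in the abstract can be illustrated with a short PyTorch-style sketch. This is a minimal illustration under assumptions, not the authors' implementation: it assumes a torch.distributed process group is already initialized, and the names cocod_sgd_round, local_steps and data_iter are hypothetical. Each worker starts a non-blocking all-reduce of a parameter snapshot, keeps taking local SGD steps while that communication is in flight, and only then folds its local progress into the averaged snapshot.

import torch
import torch.distributed as dist

def cocod_sgd_round(model, optimizer, data_iter, local_steps, world_size):
    # Snapshot the current parameters and start averaging them in the
    # background; async_op=True makes the all-reduce non-blocking.
    snapshot = [p.detach().clone() for p in model.parameters()]
    handles = [dist.all_reduce(t, op=dist.ReduceOp.SUM, async_op=True)
               for t in snapshot]

    # Local computation overlaps with the communication launched above.
    local_start = [p.detach().clone() for p in model.parameters()]
    for _ in range(local_steps):
        x, y = next(data_iter)
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()

    # Wait for the averaged snapshot, then combine it with the local
    # progress: new params = average(snapshot) + (local params - snapshot).
    for h in handles:
        h.wait()
    with torch.no_grad():
        for p, avg, start in zip(model.parameters(), snapshot, local_start):
            p.copy_(avg / world_size + (p - start))

In synchronous SGD the all-reduce would block before every step; in this sketch the wait happens only after local_steps local updates, so communication is hidden behind computation, which is the source of the time speedup claimed in the abstract.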
Pages: 4582-4589
Page count: 8
Related Papers (50 total)
  • [41] Iakovidou, Charikleia; Wei, Ermin. Nested Distributed Gradient Methods with Stochastic Computation Errors. 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2019: 339-346.
  • [42] Xia, Lu; Massei, Stefano; Hochstenbach, Michiel E.; Koren, Barry. On Stochastic Roundoff Errors in Gradient Descent with Low-Precision Computation. Journal of Optimization Theory and Applications, 2024, 200(2): 634-668.
  • [43] Jentzen, Arnulf; Welti, Timo. Overall Error Analysis for the Training of Deep Neural Networks via Stochastic Gradient Descent with Random Initialisation. Applied Mathematics and Computation, 2023, 455.
  • [44] Vasudevan, Shrihari. Mutual Information Based Learning Rate Decay for Stochastic Gradient Descent Training of Deep Neural Networks. Entropy, 2020, 22(5).
  • [45] Guo, Pengzhan; Ye, Zeyang; Xiao, Keli; Zhu, Wei. Weighted Aggregating Stochastic Gradient Descent for Parallel Deep Learning. IEEE Transactions on Knowledge and Data Engineering, 2022, 34(10): 5037-5050.
  • [46] Assran, Mahmoud; Loizou, Nicolas; Ballas, Nicolas; Rabbat, Michael. Stochastic Gradient Push for Distributed Deep Learning. International Conference on Machine Learning, Vol. 97, 2019.
  • [47] Cui, Xiaodong; Zhang, Wei; Tuske, Zoltan; Picheny, Michael. Evolutionary Stochastic Gradient Descent for Optimization of Deep Neural Networks. Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018.
  • [48] Le Trieu Phong. Privacy-Preserving Stochastic Gradient Descent with Multiple Distributed Trainers. Network and System Security, 2017, 10394: 510-518.
  • [49] Hanna, Serge Kas; Bitar, Rawad; Parag, Parimal; Dasari, Venkat; El Rouayheb, Salim. Adaptive Distributed Stochastic Gradient Descent for Minimizing Delay in the Presence of Stragglers. 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2020: 4262-4266.
  • [50] Jin, Richeng; He, Xiaofan; Dai, Huaiyu. Distributed Byzantine Tolerant Stochastic Gradient Descent in the Era of Big Data. ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019.