An asynchronous distributed training algorithm based on Gossip communication and Stochastic Gradient Descent

Cited by: 2
Authors
Tu, Jun [1 ]
Zhou, Jia [1 ]
Ren, Donglin [1 ]
Affiliations
[1] Hubei Univ Technol, Sch Comp Sci, Wuhan, Peoples R China
Keywords
CPS; Decentralized distribution; Gossip; Asynchronous; Redundancy; OPTIMIZATION
DOI
10.1016/j.comcom.2022.09.010
Chinese Library Classification
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Cyber-Physical Systems (CPS) applications play an increasingly important role in our lives, and centralized distributed machine learning (ML) has begun to see widespread use in securing such applications. However, existing centralized distributed ML algorithms have significant shortcomings in CPS scenarios: their synchronous communication incurs high latency and is sensitive to node drop-out, which compromises the security of the CPS. Combining the Gossip protocol with Stochastic Gradient Descent (SGD), this paper proposes a communication framework for machine learning, Gossip Ring SGD (GR-SGD). GR-SGD is decentralized and asynchronous, and it eliminates long communication waits. This paper uses the ImageNet data set and the ResNet model to verify the feasibility of the algorithm and compares it with Ring AllReduce and D-PSGD. The results also indicate that a modest amount of data redundancy can reduce communication overhead and increase fault tolerance, so the framework can be applied to CPS and to a wide range of machine learning models.
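The record contains no code, so the following is only a minimal sketch of the general idea the abstract describes: workers on a ring each run local SGD on a private data shard and periodically average parameters with a ring neighbor. All names (`Worker`, `gossip_ring_step`, the toy least-squares problem, step counts, and learning rate) are illustrative assumptions, not taken from the paper, and the rounds are simulated in lockstep purely for brevity; GR-SGD itself performs these exchanges asynchronously.

```python
# Minimal sketch of gossip-averaging SGD on a ring of workers (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: each worker holds a private data shard.
DIM, N_WORKERS, SHARD = 5, 4, 32
w_true = rng.normal(size=DIM)

class Worker:
    def __init__(self):
        X = rng.normal(size=(SHARD, DIM))
        self.X, self.y = X, X @ w_true + 0.1 * rng.normal(size=SHARD)
        self.w = np.zeros(DIM)                      # local model copy

    def sgd_step(self, lr=0.05, batch=8):
        idx = rng.choice(SHARD, batch, replace=False)
        Xb, yb = self.X[idx], self.y[idx]
        grad = Xb.T @ (Xb @ self.w - yb) / batch    # stochastic gradient
        self.w -= lr * grad

def gossip_ring_step(workers):
    # Each worker averages its model with its ring successor. In the real
    # asynchronous protocol this exchange is a non-blocking message between
    # neighbors, not the synchronized sweep simulated here.
    new = [0.5 * (workers[i].w + workers[(i + 1) % len(workers)].w)
           for i in range(len(workers))]
    for wk, w in zip(workers, new):
        wk.w = w

workers = [Worker() for _ in range(N_WORKERS)]
for step in range(200):
    for wk in workers:
        wk.sgd_step()                               # local stochastic update
    gossip_ring_step(workers)                       # periodic gossip averaging

print("max distance to w_true:",
      max(np.linalg.norm(wk.w - w_true) for wk in workers))
```

Because each worker only ever talks to a ring neighbor, no node waits on a global barrier, which is the latency and drop-out advantage the abstract claims over synchronous schemes such as Ring AllReduce; the paper's data-redundancy idea (overlapping shards across workers) would further let surviving nodes cover for a dropped one.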
Pages: 416-423
Number of pages: 8
Related Papers
50 records in total
  • [1] DAC-SGD: A Distributed Stochastic Gradient Descent Algorithm Based on Asynchronous Connection
    He, Aijia
    Chen, Zehong
    Li, Weichen
    Li, Xingying
    Li, Hongjun
    Zhao, Xin
    [J]. IIP'17: PROCEEDINGS OF THE 2ND INTERNATIONAL CONFERENCE ON INTELLIGENT INFORMATION PROCESSING, 2017,
  • [2] Developing a Loss Prediction-based Asynchronous Stochastic Gradient Descent Algorithm for Distributed Training of Deep Neural Networks
    Li, Junyu
    He, Ligang
    Ren, Shenyuan
    Mao, Rui
    [J]. PROCEEDINGS OF THE 49TH INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2020, 2020,
  • [3] ASYNCHRONOUS STOCHASTIC GRADIENT DESCENT FOR DNN TRAINING
    Zhang, Shanshan
    Zhang, Ce
    You, Zhao
    Zheng, Rong
    Xu, Bo
    [J]. 2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 6660 - 6663
  • [4] Distributed and asynchronous Stochastic Gradient Descent with variance reduction
    Ming, Yuewei
    Zhao, Yawei
    Wu, Chengkun
    Li, Kuan
    Yin, Jianping
    [J]. NEUROCOMPUTING, 2018, 281 : 27 - 36
  • [5] Distributed Stochastic Gradient Descent With Compressed and Skipped Communication
    Phuong, Tran Thi
    Phong, Le Trieu
    Fukushima, Kazuhide
    [J]. IEEE ACCESS, 2023, 11 : 99836 - 99846
  • [6] Communication-Censored Distributed Stochastic Gradient Descent
    Li, Weiyu
    Wu, Zhaoxian
    Chen, Tianyi
    Li, Liping
    Ling, Qing
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (11) : 6831 - 6843
  • [7] Faster Distributed Deep Net Training: Computation and Communication Decoupled Stochastic Gradient Descent
    Shen, Shuheng
    Xu, Linli
    Liu, Jingchang
    Liang, Xianfeng
    Cheng, Yifei
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 4582 - 4589
  • [8] Distributed Stochastic Gradient Descent with Event-Triggered Communication
    George, Jemin
    Gurram, Prudhvi
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 7169 - 7178
  • [9] Hinge Classification Algorithm Based on Asynchronous Gradient Descent
    Yan, Xiaodan
    Zhang, Tianxin
    Cui, Baojiang
    Deng, Jiangdong
    [J]. ADVANCES ON BROAD-BAND WIRELESS COMPUTING, COMMUNICATION AND APPLICATIONS, BWCCA-2017, 2018, 12 : 459 - 468
  • [10] Gossip-based distributed stochastic mirror descent for constrained optimization
    Fang, Xianju
    Zhang, Baoyong
    Yuan, Deming
    [J]. NEURAL NETWORKS, 2024, 175