Distributed Stochastic Gradient Descent Using LDGM Codes

Cited by: 0
Authors
Horii, Shunsuke [1 ]
Yoshida, Takahiro [2 ]
Kobayashi, Manabu [1 ]
Matsushima, Toshiyasu [1 ]
Affiliations
[1] Waseda Univ, Tokyo, Japan
[2] Yokohama Coll Commerce, Yokohama, Kanagawa, Japan
Funding
Japan Society for the Promotion of Science;
Keywords
DOI
10.1109/isit.2019.8849580
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology];
Subject Classification Number
0812;
Abstract
We consider a distributed learning problem in which the computation is carried out on a system consisting of a master node and multiple worker nodes. In such systems, slow-running machines, called stragglers, can cause a significant drop in performance. Recently, a coding-theoretic framework for mitigating stragglers in distributed learning, named Gradient Coding (GC), was established by Tandon et al. Most studies on GC aim to recover the gradient information completely, assuming that the Gradient Descent (GD) algorithm is used as the learning algorithm. On the other hand, if the Stochastic Gradient Descent (SGD) algorithm is used, it is not necessary to recover the gradient information completely; an unbiased estimator of the gradient is sufficient for learning. In this paper, we propose a distributed SGD scheme using Low-Density Generator Matrix (LDGM) codes. The proposed scheme may take longer than existing GC methods to recover the gradient information completely; however, it enables the master node to obtain a high-quality unbiased estimator of the gradient at low computational cost, which leads to an overall performance improvement.
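As a rough illustration of the general idea only, the minimal sketch below shows coded distributed SGD with an unbiased gradient estimator, under the simplifying assumption that each worker independently finishes in time with probability p. The cyclic sparse assignment, the inverse-probability weighting, and the helper names (sparse_assignment, run_round) are illustrative assumptions, not the paper's LDGM construction or its estimation procedure.

```python
# Minimal sketch of coded distributed SGD with an unbiased gradient estimate.
# Assumption: each worker independently finishes in time with probability p.
# The cyclic assignment and helper names are illustrative, not the paper's scheme.
import numpy as np

rng = np.random.default_rng(0)

def sparse_assignment(n_workers, n_partitions, row_weight, p):
    """Sparse generator-like matrix: worker i combines `row_weight` data
    partitions, scaled so the sum over non-straggling workers is unbiased."""
    A = np.zeros((n_workers, n_partitions))
    for i in range(n_workers):
        cols = [(i + t) % n_partitions for t in range(row_weight)]
        A[i, cols] = 1.0
    d = A.sum(axis=0)          # number of workers covering each partition
    return A / (p * d)         # inverse-probability weighting

def run_round(G, partition_grads, p):
    """One round: workers send sparse combinations of partition gradients,
    stragglers drop out at random, and the master sums whatever arrives."""
    finished = rng.random(G.shape[0]) < p
    coded = G[finished] @ partition_grads   # one message per finished worker
    return coded.sum(axis=0)                # unbiased estimate of the full gradient

# Toy check: the average over many rounds approaches the exact full gradient.
n_workers, n_partitions, dim, p = 20, 10, 4, 0.7
partition_grads = rng.normal(size=(n_partitions, dim))
G = sparse_assignment(n_workers, n_partitions, row_weight=2, p=p)
avg = np.mean([run_round(G, partition_grads, p) for _ in range(5000)], axis=0)
print(np.allclose(avg, partition_grads.sum(axis=0), atol=0.1))
```

The point of the toy check is that the master's simple sum of whatever messages arrive is, in expectation, the full gradient, which is all SGD needs; the paper pursues the same goal with sparse LDGM generator matrices aimed at low-cost, high-quality unbiased estimates.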
Pages: 1417 - 1421
Number of pages: 5
Related Papers
50 records in total
  • [21] Fast Convergence for Stochastic and Distributed Gradient Descent in the Interpolation Limit
    Mitra, Partha P.
    2018 26TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2018, : 1890 - 1894
  • [22] Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents
    Akbay, Abdullah Basar
    Tepedelenlioglu, Cihan
    2022 56TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2022, : 1238 - 1242
  • [23] Distributed Stochastic Gradient Descent: Nonconvexity, Nonsmoothness, and Convergence to Local Minima
    Swenson, Brian
    Murray, Ryan
    Poor, H. Vincent
    Kar, Soummya
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [24] Distributed stochastic gradient descent for link prediction in signed social networks
    Zhang, Han
    Wu, Gang
    Ling, Qing
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING, 2019, 2019 (1)
  • [25] Privacy-Preserving Stochastic Gradient Descent with Multiple Distributed Trainers
    Le Trieu Phong
    NETWORK AND SYSTEM SECURITY, 2017, 10394 : 510 - 518
  • [26] Adaptive Distributed Stochastic Gradient Descent for Minimizing Delay in the Presence of Stragglers
    Hanna, Serge Kas
    Bitar, Rawad
    Parag, Parimal
    Dasari, Venkat
    El Rouayheb, Salim
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 4262 - 4266
  • [27] A DAG Model of Synchronous Stochastic Gradient Descent in Distributed Deep Learning
    Shi, Shaohuai
    Wang, Qiang
    Chu, Xiaowen
    Li, Bo
    2018 IEEE 24TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS 2018), 2018, : 425 - 432
  • [28] ColumnSGD: A Column-oriented Framework for Distributed Stochastic Gradient Descent
    Zhang, Zhipeng
    Wu, Wentao
    Jiang, Jiawei
    Yu, Lele
    Cui, Bin
    Zhang, Ce
    2020 IEEE 36TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2020), 2020, : 1513 - 1524
  • [29] Distributed Byzantine Tolerant Stochastic Gradient Descent in the Era of Big Data
    Jin, Richeng
    He, Xiaofan
    Dai, Huaiyu
    ICC 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2019,