Distributed Stochastic Gradient Descent Using LDGM Codes

Cited: 0
Authors
Horii, Shunsuke [1 ]
Yoshida, Takahiro [2 ]
Kobayashi, Manabu [1 ]
Matsushima, Toshiyasu [1 ]
Affiliations
[1] Waseda Univ, Tokyo, Japan
[2] Yokohama Coll Commerce, Yokohama, Kanagawa, Japan
Funding
Japan Society for the Promotion of Science (JSPS);
DOI
10.1109/isit.2019.8849580
CLC number (Chinese Library Classification)
TP [automation technology; computer technology];
Subject classification code
0812;
Abstract
We consider a distributed learning problem in which computation is carried out on a system consisting of a master node and multiple worker nodes. In such systems, slow-running machines, called stragglers, can cause a significant drop in performance. Recently, a coding-theoretic framework for mitigating stragglers in distributed learning, named Gradient Coding (GC), was established by Tandon et al. Most studies on GC aim to recover the gradient information completely, assuming that the Gradient Descent (GD) algorithm is used as the learning algorithm. If the Stochastic Gradient Descent (SGD) algorithm is used instead, complete recovery of the gradient information is unnecessary: an unbiased estimator of the gradient is sufficient for learning. In this paper, we propose a distributed SGD scheme using Low-Density Generator Matrix (LDGM) codes. The proposed system may take longer than existing GC methods to recover the gradient information completely; however, it enables the master node to obtain a high-quality unbiased estimator of the gradient at low computational cost, which leads to an overall performance improvement.
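The abstract describes a master/worker scheme in which each worker returns a sparse linear combination of partition gradients and the master forms an unbiased gradient estimator from whichever messages arrive before a deadline. The paper's own construction is not reproduced here; what follows is a minimal NumPy sketch of that idea under stated assumptions: a random binary assignment matrix B with constant column weight d stands in for an LDGM-style sparse generator matrix, workers respond independently with probability p_arrive, and scaling the sum of received messages by 1/(p_arrive * d) makes the estimator unbiased. All identifiers (B, d, p_arrive, partition_gradient) are illustrative, not from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: 400 samples split into k partitions.
k, n, d = 8, 8, 2              # partitions, workers, replication (column weight)
p_arrive = 0.7                 # assumed probability a worker is not a straggler
X = rng.normal(size=(400, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=400)
parts = np.array_split(np.arange(400), k)

def partition_gradient(w, j):
    # Gradient of the squared loss summed over partition j.
    Xj, yj = X[parts[j]], y[parts[j]]
    return Xj.T @ (Xj @ w - yj)

# Binary assignment matrix B (n workers x k partitions): every partition is
# replicated on exactly d randomly chosen workers. This sparse random B is a
# stand-in for an LDGM-style generator matrix, not the paper's construction.
B = np.zeros((n, k), dtype=int)
for j in range(k):
    B[rng.choice(n, size=d, replace=False), j] = 1

w = np.zeros(10)
lr = 1e-3
for step in range(300):
    # Each worker sends one coded message: the sum of its assigned gradients.
    msgs = np.stack([
        sum(partition_gradient(w, j) for j in range(k) if B[i, j])
        if B[i].any() else np.zeros(10)
        for i in range(n)
    ])
    arrived = rng.random(n) < p_arrive    # stragglers modeled as non-arrivals
    # Each partition appears in d messages and each message arrives with
    # probability p_arrive, so dividing by p_arrive * d yields an estimator
    # whose expectation is the exact full (summed) gradient.
    g_hat = msgs[arrived].sum(axis=0) / (p_arrive * d)
    w -= lr * g_hat

print("parameter error:", np.linalg.norm(w - w_true))

With constant column weight d, the estimator satisfies E[g_hat] = sum_j g_j regardless of which workers straggle. The trade-off the abstract points to is that a sparse matrix keeps per-worker computation and decoding cheap and yields a usable unbiased estimate from early arrivals, while exact recovery of the full gradient may require waiting for more workers than denser GC schemes.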
Pages: 1417-1421
Number of pages: 5
Related Papers
50 records in total
  • [1] Bayesian Distributed Stochastic Gradient Descent
    Teng, Michael
    Wood, Frank
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [2] Controlling Stochastic Gradient Descent Using Stochastic Approximation for Robust Distributed Optimization
    Jain, Adit
    Krishnamurthy, Vikram
    NUMERICAL ALGEBRA CONTROL AND OPTIMIZATION, 2024,
  • [3] Improving Distributed Gradient Descent Using Reed-Solomon Codes
    Halbawi, Wael
    Azizan, Navid
    Salehi, Fariborz
    Hassibi, Babak
    2018 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2018, : 2027 - 2031
  • [4] Predicting Throughput of Distributed Stochastic Gradient Descent
    Li, Zhuojin
    Paolieri, Marco
    Golubchik, Leana
    Lin, Sung-Han
    Yan, Wumo
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (11) : 2900 - 2912
  • [5] Distributed stochastic gradient descent with discriminative aggregating
    Chen, Zhen-Hong
    Lan, Yan-Yan
    Guo, Jia-Feng
    Cheng, Xue-Qi
    Jisuanji Xuebao/Chinese Journal of Computers, 2015, 38 (10): 2054 - 2063
  • [6] A parallel and distributed stochastic gradient descent implementation using commodity clusters
    Kennedy, Robert K. L.
    Khoshgoftaar, Taghi M.
    Villanustre, Flavio
    Humphrey, Timothy
    JOURNAL OF BIG DATA, 2019, 6 (01)
  • [7] Convergence analysis of distributed stochastic gradient descent with shuffling
    Meng, Qi
    Chen, Wei
    Wang, Yue
    Ma, Zhi-Ming
    Liu, Tie-Yan
    NEUROCOMPUTING, 2019, 337 : 46 - 57
  • [8] Distributed and asynchronous Stochastic Gradient Descent with variance reduction
    Ming, Yuewei
    Zhao, Yawei
    Wu, Chengkun
    Li, Kuan
    Yin, Jianping
    NEUROCOMPUTING, 2018, 281 : 27 - 36
  • [9] Distributed Stochastic Gradient Descent With Compressed and Skipped Communication
    Phuong, Tran Thi
    Phong, Le Trieu
    Fukushima, Kazuhide
    IEEE ACCESS, 2023, 11 : 99836 - 99846