Communication-Efficient Distributed SGD With Compressed Sensing

Cited by: 3
Authors
Tang, Yujie [1 ]
Ramanathan, Vikram [1 ]
Zhang, Junshan [2 ]
Li, Na [1 ]
Affiliations
[1] Harvard Univ, Sch Engn & Appl Sci, Allston, MA 02134 USA
[2] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85287 USA
Source
IEEE Control Systems Letters
Funding
National Science Foundation (US)
Keywords
Servers; Compressed sensing; Sensors; Stochastic processes; Sparse matrices; Optimization; Convergence; Optimization algorithms; large-scale systems; distributed optimization; compressed sensing
DOI
10.1109/LCSYS.2021.3137859
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
We consider large-scale distributed optimization over a set of edge devices connected to a central server, where the limited communication bandwidth between the server and the edge devices imposes a significant bottleneck on the optimization procedure. Inspired by recent advances in federated learning, we propose a distributed stochastic gradient descent (SGD) type algorithm that exploits the sparsity of the gradient, when possible, to reduce the communication burden. At the heart of the algorithm, compressed sensing techniques are employed on the device side to compress the local stochastic gradients; on the server side, a sparse approximation of the global stochastic gradient is then recovered from the noisy aggregated compressed local gradients. We analyze the convergence of the algorithm in the presence of the noise perturbation introduced by the communication channels, and present numerical experiments that corroborate its effectiveness.
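The abstract describes the device-side compression and server-side sparse recovery only at a high level. Below is a minimal sketch of one communication round of such a scheme, assuming a shared random Gaussian measurement matrix and iterative hard thresholding (IHT) as the sparse-recovery routine; the function names, dimensions, sparsity level, and the choice of IHT are illustrative assumptions and are not taken from the paper.

import numpy as np

def make_measurement_matrix(m, d, seed=0):
    # Shared random Gaussian measurement matrix (an assumption; the paper's
    # construction and scaling may differ).
    rng = np.random.default_rng(seed)
    return rng.standard_normal((m, d)) / np.sqrt(m)

def compress_gradient(A, grad):
    # Device side: project the d-dimensional local stochastic gradient down to
    # m << d measurements before transmission.
    return A @ grad

def iht_recover(A, y, k, n_iter=50):
    # Server side: recover a k-sparse approximation of the gradient from the
    # aggregated measurements y via iterative hard thresholding (IHT); the
    # recovery routine used in the paper may differ.
    d = A.shape[1]
    x = np.zeros(d)
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x)                   # gradient step on ||y - Ax||^2
        keep = np.argpartition(np.abs(x), -k)[-k:]  # indices of the k largest entries
        pruned = np.zeros(d)
        pruned[keep] = x[keep]
        x = pruned
    return x

# One toy communication round (all dimensions and the learning rate are illustrative).
d, m, k, n_devices, lr = 1000, 200, 20, 5, 0.1
rng = np.random.default_rng(1)
A = make_measurement_matrix(m, d)
theta = np.zeros(d)

# Devices: form sparse local stochastic gradients and compress them.
compressed = []
for _ in range(n_devices):
    g = np.zeros(d)
    support = rng.choice(d, size=k, replace=False)
    g[support] = rng.standard_normal(k)
    compressed.append(compress_gradient(A, g))

# Server: average the (possibly noisy) compressed gradients, recover a sparse
# approximation of the global gradient, and take an SGD step.
y = np.mean(compressed, axis=0)
g_hat = iht_recover(A, y, k)
theta -= lr * g_hat

The compression ratio in this sketch is m/d = 0.2; in practice the number of measurements m, the sparsity level k, and the recovery routine would be tuned to the gradient statistics, the channel noise, and the available bandwidth.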
Pages: 2054-2059
Number of pages: 6
Related Papers
50 records in total
  • [1] AC-SGD: Adaptively Compressed SGD for Communication-Efficient Distributed Learning
    Yan, Guangfeng
    Li, Tan
    Huang, Shao-Lun
    Lan, Tian
    Song, Linqi
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2022, 40 (09) : 2678 - 2693
  • [2] Communication-efficient Distributed SGD with Sketching
    Ivkin, Nikita
    Rothchild, Daniel
    Ullah, Enayat
    Braverman, Vladimir
    Stoica, Ion
    Arora, Raman
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [3] DQ-SGD: Dynamic Quantization in SGD for Communication-Efficient Distributed Learning
    Yan, Guangfeng
    Huang, Shao-Lun
    Lan, Tian
    Song, Linqi
    2021 IEEE 18TH INTERNATIONAL CONFERENCE ON MOBILE AD HOC AND SMART SYSTEMS (MASS 2021), 2021, : 136 - 144
  • [4] Communication-Efficient Distributed SGD with Error-Feedback, Revisited
    Tran Thi Phuong
    Le Trieu Phong
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2021, 14 (01) : 1373 - 1387
  • [5] A Random Access based Approach to Communication-Efficient Distributed SGD
    Choi, Jinho
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 4486 - 4491
  • [6] A Convergence Analysis of Distributed SGD with Communication-Efficient Gradient Sparsification
    Shi, Shaohuai
    Zhao, Kaiyong
    Wang, Qiang
    Tang, Zhenheng
    Chu, Xiaowen
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 3411 - 3417
  • [7] cpSGD: Communication-efficient and differentially-private distributed SGD
    Agarwal, Naman
    Suresh, Ananda Theertha
    Yu, Felix
    Kumar, Sanjiv
    McMahan, H. Brendan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [8] CE-SGD: Communication-Efficient Distributed Machine Learning
    Tao, Zeyi
    Xia, Qi
    Li, Qun
    Cheng, Songqing
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [9] Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback
    Zheng, Shuai
    Huang, Ziyue
    Kwok, James T.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [10] Adaptive Top-K in SGD for Communication-Efficient Distributed Learning
    Ruan, Mengzhe
    Yan, Guangfeng
    Xiao, Yuanzhang
    Song, Linqi
    Xu, Weitao
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 5280 - 5285