Lazily Aggregated Quantized Gradient Innovation for Communication-Efficient Federated Learning

Cited by: 48
Authors
Sun, Jun [1]
Chen, Tianyi [2]
Giannakis, Georgios B. [3,4]
Yang, Qinmin [1]
Yang, Zaiyue [5]
Affiliations
[1] Zhejiang Univ, Coll Control Sci & Engn, State Key Lab Ind Control Technol, Hangzhou 310027, Peoples R China
[2] Rensselaer Polytech Inst, Dept Elect Comp & Syst Engn, Troy, NY 12180 USA
[3] Univ Minnesota, Dept Elect & Comp Engn, Minneapolis, MN 55455 USA
[4] Univ Minnesota, Digital Technol Ctr, Minneapolis, MN 55455 USA
[5] Southern Univ Sci & Technol, Dept Mech & Energy Engn, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Quantization (signal); Servers; Technological innovation; Convergence; Frequency modulation; Distributed databases; Collaborative work; Federated learning; communication-efficient; gradient innovation; quantization;
DOI
10.1109/TPAMI.2020.3033286
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper focuses on the communication-efficient federated learning problem and develops a novel distributed quantized gradient approach, characterized by adaptive communication of quantized gradients. Specifically, federated learning builds upon the server-worker infrastructure, in which the workers compute local gradients and upload them to the server; the server then obtains the global gradient by aggregating all the local gradients and uses it to update the model parameter. The key idea for saving worker-to-server communication is to quantize gradients and to skip less informative quantized gradient communications by reusing previous gradients. Quantizing and skipping result in 'lazy' worker-server communication, which justifies the term Lazily Aggregated Quantized (LAQ) gradient. Theoretically, the LAQ algorithm achieves the same linear convergence as gradient descent in the strongly convex case, while effecting major savings in communication in terms of transmitted bits and communication rounds. Empirically, extensive experiments using realistic data corroborate a significant communication reduction compared with state-of-the-art gradient- and stochastic-gradient-based algorithms.
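As a rough illustration of the mechanism described in the abstract, the sketch below runs a LAQ-style loop on a toy least-squares problem: each worker quantizes the innovation of its gradient relative to the last value it uploaded and transmits that innovation only when it is informative enough; otherwise the upload is skipped and the server keeps reusing the stored contribution. The uniform quantizer, the fixed `skip_threshold` rule, the synthetic data, and all hyper-parameters are illustrative assumptions for this sketch, not the paper's exact quantizer or skipping criterion.

```python
# Minimal LAQ-style sketch on a toy least-squares problem (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
num_workers, dim, lr, num_bits, skip_threshold = 5, 10, 0.05, 4, 1e-3

# Each worker m holds private data (A[m], b[m]) defining a local quadratic loss.
A = [rng.standard_normal((20, dim)) for _ in range(num_workers)]
b = [rng.standard_normal(20) for _ in range(num_workers)]

def local_grad(m, theta):
    """Gradient of the local least-squares loss at worker m."""
    return A[m].T @ (A[m] @ theta - b[m]) / len(b[m])

def quantize(v, bits):
    """Uniform quantization of v onto 2**bits levels spanning [-R, R], with R = max|v|."""
    radius = np.max(np.abs(v)) + 1e-12
    step = 2 * radius / (2 ** bits - 1)
    return np.round((v + radius) / step) * step - radius

theta = np.zeros(dim)
last_q = [np.zeros(dim) for _ in range(num_workers)]  # last uploaded quantized gradient per worker
server_sum = np.zeros(dim)                            # server's running sum of stored gradients
uploads = 0

for it in range(200):
    for m in range(num_workers):
        g = local_grad(m, theta)
        innovation = g - last_q[m]                    # gradient innovation w.r.t. the last upload
        if np.linalg.norm(innovation) ** 2 > skip_threshold:  # simple (assumed) skipping rule
            q_innov = quantize(innovation, num_bits)  # quantize and "transmit" the innovation
            server_sum += q_innov                     # server refreshes this worker's contribution
            last_q[m] += q_innov
            uploads += 1
        # otherwise the worker skips this round; server_sum silently reuses last_q[m]
    theta -= lr * server_sum                          # gradient step with the aggregated (lazy) gradient

print("uploads:", uploads, "of", 200 * num_workers, "possible")
print("final loss:", sum(0.5 * np.mean((A[m] @ theta - b[m]) ** 2) for m in range(num_workers)))
```

The point of the sketch is the bookkeeping: the server never needs the full gradients, only quantized innovations from the workers that chose to upload, which is where the savings in bits and communication rounds come from.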
Pages: 2031 - 2044
Number of pages: 14
Related Papers
50 records in total
  • [1] LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
    Chen, Tianyi
    Giannakis, Georgios B.
    Sun, Tao
    Yin, Wotao
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [2] Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients
    Sun, Jun
    Chen, Tianyi
    Giannakis, Georgios B.
    Yang, Zaiyue
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [3] LAGC: Lazily Aggregated Gradient Coding for Straggler-Tolerant and Communication-Efficient Distributed Learning
    Zhang, Jingjing
    Simeone, Osvaldo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (03) : 962 - 974
  • [4] Communication-Efficient Design for Quantized Decentralized Federated Learning
    Chen, Li
    Liu, Wei
    Chen, Yunfei
    Wang, Weidong
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2024, 72 : 1175 - 1188
  • [5] Communication-efficient Federated Learning via Quantized Clipped SGD
    Jia, Ninghui
    Qu, Zhihao
    Ye, Baoliu
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, WASA 2021, PT I, 2021, 12937 : 559 - 571
  • [6] Communication-Efficient Federated Learning via Quantized Compressed Sensing
    Oh, Yongjeong
    Lee, Namyoon
    Jeon, Yo-Seb
    Poor, H. Vincent
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2023, 22 (02) : 1087 - 1100
  • [7] SIGNGD with Error Feedback Meets Lazily Aggregated Technique: Communication-Efficient Algorithms for Distributed Learning
    Deng, Xiaoge
    Sun, Tao
    Liu, Feng
    Li, Dongsheng
    TSINGHUA SCIENCE AND TECHNOLOGY, 2022, 27 (01) : 174 - 185
  • [8] Communication-efficient federated learning
    Chen, Mingzhe
    Shlezinger, Nir
    Poor, H. Vincent
    Eldar, Yonina C.
    Cui, Shuguang
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (17)
  • [9] FedSC: Compatible Gradient Compression for Communication-Efficient Federated Learning
    Yu, Xinlei
    Gao, Zhipeng
    Zhao, Chen
    Mo, Zijia
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2023, PT I, 2024, 14487 : 360 - 379