Communication-Efficient Federated DNN Training: Convert, Compress, Correct

Cited: 0
Authors
Chen, Zhong-Jing [1 ]
Hernandez, Eduin E. [2 ]
Huang, Yu-Chih [1 ]
Rini, Stefano [1 ]
Affiliations
[1] National Yang Ming Chiao Tung University, Institute of Communications Engineering, Hsinchu 30010, Taiwan
[2] National Yang Ming Chiao Tung University, Department of Electronics and Electrical Engineering, Hsinchu 30010, Taiwan
DOI
10.1109/JIOT.2024.3456857
Abstract
In the federated training of a deep neural network (DNN), model updates are transmitted from the remote users to the parameter server (PS). In many scenarios of practical relevance, one is interested in reducing the communication overhead to enhance training efficiency. To address this challenge, we introduce CO3. CO3 takes its name from the three processing steps applied to reduce the communication load when transmitting the local DNN gradients from the remote users to the PS, namely: 1) gradient quantization through floating-point conversion; 2) lossless compression of the quantized gradient; and 3) correction of the quantization error. We carefully design each of the steps above to ensure good training performance under a constraint on the communication rate. In particular, in steps 1) and 2), we adopt the assumption that DNN gradients are distributed according to a generalized normal distribution, which is validated numerically in this article. For step 3), we utilize error feedback with a memory decay mechanism to correct the quantization error introduced in step 1). We argue that the memory decay coefficient, similar to the learning rate, can be optimally tuned to improve convergence. A rigorous convergence analysis of the proposed CO3 with stochastic gradient descent (SGD) is provided. Moreover, with extensive simulations, we show that CO3 offers improved performance compared with existing gradient compression schemes in the literature that employ sketching and nonuniform quantization of the local gradients. © 2014 IEEE.
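The three steps described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the fp16 conversion, the `beta` memory-decay coefficient, and the use of `zlib` as the lossless coder are assumptions standing in for the paper's specific design choices.

```python
import zlib
import numpy as np

def co3_step(grad, memory, beta=0.9):
    """One CO3-style client-side compression step (illustrative sketch).

    grad   -- local gradient (float32 array)
    memory -- accumulated quantization error from previous rounds
    beta   -- hypothetical memory-decay coefficient (tunable, like a learning rate)
    """
    # Step 3 (error correction): add the decayed error memory to the fresh gradient.
    corrected = grad + beta * memory
    # Step 1 (quantization): convert to a low-precision floating-point format.
    quantized = corrected.astype(np.float16)
    # Step 2 (lossless compression): entropy-code the quantized bytes. The paper
    # designs this stage around a generalized normal model of the gradients;
    # plain zlib is used here only as a stand-in lossless coder.
    payload = zlib.compress(quantized.tobytes())
    # Carry the new quantization error into the next round's memory.
    new_memory = corrected - quantized.astype(np.float32)
    return payload, new_memory

def ps_decode(payload, shape):
    """PS side: decompress and dequantize the received gradient."""
    raw = zlib.decompress(payload)
    return np.frombuffer(raw, dtype=np.float16).astype(np.float32).reshape(shape)
```

With `memory` initialized to zeros, the decoded gradient plus the updated memory reconstructs the original gradient, which is the invariant that error feedback relies on.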
Pages: 40431 - 40447
Related Papers (50 total)
  • [1] Communication-efficient federated learning with stagewise training strategy
    Cheng, Yifei
    Shen, Shuheng
    Liang, Xianfeng
    Liu, Jingchang
    Chen, Joya
    Zhang, Tie
    Chen, Enhong
    NEURAL NETWORKS, 2023, 167 : 460 - 472
  • [2] Communication-efficient federated learning
    Chen, Mingzhe
    Shlezinger, Nir
    Poor, H. Vincent
    Eldar, Yonina C.
    Cui, Shuguang
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (17)
  • [3] Communication-Efficient Agnostic Federated Averaging
    Ro, Jae
    Chen, Mingqing
    Mathews, Rajiv
    Mohri, Mehryar
    Suresh, Ananda Theertha
    INTERSPEECH 2021, 2021, : 871 - 875
  • [4] Communication-Efficient Vertical Federated Learning
    Khan, Afsana
    ten Thij, Marijn
    Wilbik, Anna
    ALGORITHMS, 2022, 15 (08)
  • [5] Communication-Efficient Adaptive Federated Learning
    Wang, Yujia
    Lin, Lu
    Chen, Jinghui
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [6] Communication-Efficient Federated Indoor Localization with Layerwise Swapping Training-FedAvg
    Liang, Jinjie
    Liu, Zhenyu
    Zhou, Zhiheng
    Xu, Yan
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2022, E105A (11) : 1493 - 1502
  • [7] Neuron Pruning-Based Federated Learning for Communication-Efficient Distributed Training
    Guan, Jianfeng
    Wang, Pengcheng
    Yao, Su
    Zhang, Jing
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2023, PT IV, 2024, 14490 : 63 - 81
  • [8] Communication-Efficient Federated Learning with Heterogeneous Devices
    Chen, Zhixiong
    Yi, Wenqiang
    Liu, Yuanwei
    Nallanathan, Arumugam
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 3602 - 3607
  • [9] Communication-Efficient Federated Learning for Decision Trees
    Zhao, Shuo
    Zhu, Zikun
    Li, Xin
    Chen, Ying-Chi
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (11) : 5478 - 5492