A Communication-Efficient Federated Learning by Dynamic Quantization and Free-Ride Coding

Cited: 0
Authors
Chen, Junjie [1 ]
Wang, Qianfan [1 ]
Wan, Hai [1 ]
Ma, Xiao [1 ]
Affiliations
[1] Sun Yat-sen University, School of Computer Science and Engineering, Guangzhou, People's Republic of China
Funding
National Key R&D Program of China
Keywords
Communication efficient; federated learning; free-ride coding; low-density parity-check (LDPC) codes; quantization;
DOI
10.1109/WCNC57260.2024.10571243
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
This paper focuses on the design of dynamic quantization (DQ) and coded transmission schemes for federated learning (FL). In a conventional FL system, the divergences of the updates typically decrease over communication rounds as training proceeds. We first study the impact of the quantization bit-width on FL performance in the error-free transmission scenario and the impact of the bit error rate (BER) in the practical transmission scenario. We then propose a DQ scheme based on the fixed B-bit or 1-bit quantization schemes, in which each device quantizes its local updates with a dynamic bit-width chosen according to the test accuracy of the global model and the device-to-server signal-to-noise ratio (SNR). Because the quantization bit-width is dynamic, the devices need the test accuracy (one kind of extra data) to determine the bit-width, and they must inform the server of the resulting bit-width (another kind of extra data). To transmit these extra data reliably without consuming extra transmission resources, we employ free-ride coding, in which the extra data are embedded into the low-density parity-check (LDPC) coded payload data. Numerical results show that, in the practical scenario, the B-bit (B > 1) quantization scheme achieves fast convergence and high final accuracy in the high-SNR region, while the 1-bit quantization scheme exhibits greater robustness in the low-SNR region. They also show that the proposed FL scheme with DQ and free-ride coding not only significantly reduces the communication overhead, with a negligible performance gap to the error-free upper bound even in the low-SNR region, but also outperforms the fixed-quantization coded transmission FL scheme in accuracy and convergence speed.
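To make the quantization schemes in the abstract concrete, the sketch below shows B-bit uniform quantization of a local update vector, the 1-bit sign-and-scale variant, and a toy bit-width rule driven by test accuracy and SNR. This is an illustration only, not the authors' implementation; the function names, the uniform quantizer design, and the bit-width rule are all assumptions.

```python
import numpy as np

def quantize_update(update, bits):
    """Quantize a local update vector to the given bit-width and
    return the dequantized (reconstructed) values.

    bits == 1: transmit only signs plus one scale factor (mean magnitude).
    bits  > 1: map values onto 2**bits uniform levels spanning [min, max].
    """
    update = np.asarray(update, dtype=np.float64)
    if bits == 1:
        scale = np.mean(np.abs(update))      # single scalar sent alongside signs
        return np.sign(update) * scale
    lo, hi = update.min(), update.max()
    if hi == lo:                             # constant vector: nothing to quantize
        return np.full_like(update, lo)
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    indices = np.round((update - lo) / step) # integers in [0, levels], what is transmitted
    return lo + indices * step               # receiver-side reconstruction

def choose_bitwidth(test_acc, snr_db, b_max=8):
    """Hypothetical DQ rule: use fewer bits as the global model's test
    accuracy saturates, and fall back to robust 1-bit quantization
    when the device-to-server SNR is low."""
    if snr_db < 0:
        return 1
    return max(1, int(round(b_max * (1.0 - test_acc))))
```

The quantization indices (plus `lo`/`hi` or the 1-bit scale) are what a device would actually send; the chosen bit-width itself is the small piece of extra data that, per the paper, can be embedded into the LDPC-coded payload via free-ride coding instead of consuming separate channel resources.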
Pages: 6