FedQClip: Accelerating Federated Learning via Quantized Clipped SGD

Cited by: 1
Authors
Qu, Zhihao [1 ]
Jia, Ninghui [1 ]
Ye, Baoliu [2 ]
Hu, Shihong [1 ]
Guo, Song [3 ]
Affiliations
[1] Hohai Univ, Key Lab Water Big Data Technol, Minist Water Resources, Nanjing 211100, Peoples R China
[2] Nanjing Univ, Dept Comp Sci & Technol, State Key Lab Novel Software Technol, Nanjing 210023, Peoples R China
[3] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Kowloon, Hong Kong 999077, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Convergence; Quantization (signal); Training; Servers; Computers; Computational modeling; Optimization methods; Distance learning; Costs; Computer aided instruction; Federated learning; communication compression; clipped SGD; optimization analysis;
DOI
10.1109/TC.2024.3477972
CLC number
TP3 [Computing Technology, Computer Technology];
Subject classification code
0812;
Abstract
Federated Learning (FL) has emerged as a promising technique for collaboratively training machine learning models among multiple participants while preserving privacy-sensitive data. However, the conventional parameter server architecture presents challenges in terms of communication overhead when employing iterative optimization methods such as Stochastic Gradient Descent (SGD). Although communication compression techniques can reduce the traffic cost of FL during each training round, they often lead to degraded convergence rates, mainly due to compression errors and data heterogeneity. To address these issues, this paper presents FedQClip, an innovative approach that combines quantization and Clipped SGD. FedQClip leverages an adaptive step size inversely proportional to the ℓ2 norm of the gradient, effectively mitigating the negative impact of quantization errors. Additionally, clipping operations can be applied both locally and globally to further expedite training. Theoretical analyses show that, even under Non-IID (non-independent and identically distributed) data settings, FedQClip achieves a convergence rate of O(1/√T), effectively addressing the convergence degradation caused by compression errors. Furthermore, our theoretical analysis highlights the importance of selecting an appropriate number of local updates to enhance the convergence of FL training. Through extensive experiments, we demonstrate that FedQClip outperforms state-of-the-art methods in terms of communication efficiency and convergence rate.
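The update rule described in the abstract can be illustrated with a minimal sketch: a clipped-SGD step whose effective step size min(gamma, tau/||g||) becomes inversely proportional to the ℓ2 gradient norm once that norm exceeds the clipping threshold, applied to a gradient that has already passed through an unbiased stochastic quantizer. This is not the authors' implementation; the quantizer, the step-size rule, and all parameter names (gamma, tau, levels) are generic assumptions for illustration only.

```python
import numpy as np

def stochastic_quantize(v, levels=256):
    """Generic unbiased stochastic quantizer (illustrative, not the paper's scheme).

    Maps each coordinate of v onto a uniform grid of `levels` points spanning
    [min(v), max(v)], rounding up or down at random so that E[quantized] = v.
    """
    lo, hi = v.min(), v.max()
    if hi == lo:
        return v.copy()
    scale = (hi - lo) / (levels - 1)
    normalized = (v - lo) / scale            # coordinate position on the grid
    floor = np.floor(normalized)
    prob_up = normalized - floor             # rounding up with this probability keeps the quantizer unbiased
    rounded = floor + (np.random.rand(*v.shape) < prob_up)
    return lo + rounded * scale

def clipped_sgd_step(w, grad, gamma=0.1, tau=1.0):
    """One clipped-SGD step: the effective step size min(gamma, tau/||grad||)
    is constant for small gradients and inversely proportional to the l2 norm
    for large ones."""
    step = min(gamma, tau / (np.linalg.norm(grad) + 1e-12))
    return w - step * grad

# Hypothetical single round: a client quantizes its local gradient before
# uploading, and the server applies the clipped update to the global model.
w_global = np.zeros(10)
local_grad = np.random.randn(10)
q_grad = stochastic_quantize(local_grad)
w_global = clipped_sgd_step(w_global, q_grad)
```

In FedQClip itself, clipping is described as being applicable both locally (to client updates) and globally (to the aggregated update); the sketch above shows only a single client-to-server step.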
Pages: 717-730
Number of pages: 14
Related papers (50 in total)
  • [1] Communication-efficient Federated Learning via Quantized Clipped SGD
    Jia, Ninghui
    Qu, Zhihao
    Ye, Baoliu
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, WASA 2021, PT I, 2021, 12937 : 559 - 571
  • [2] Fitting ReLUs via SGD and Quantized SGD
    Kalan, Seyed Mohammadreza Mousavi
    Soltanolkotabi, Mahdi
    Avestimehr, A. Salman
    2019 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2019, : 2469 - 2473
  • [3] Accelerating Federated Edge Learning via Topology Optimization
    Huang, Shanfeng
    Zhang, Zezhong
    Wang, Shuai
    Wang, Rui
    Huang, Kaibin
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (03) : 2056 - 2070
  • [4] Accelerating Federated Learning via Momentum Gradient Descent
    Liu, Wei
    Chen, Li
    Chen, Yunfei
    Zhang, Wenyi
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2020, 31 (08) : 1754 - 1766
  • [5] QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning
    Ozkara, Kaan
    Singh, Navjot
    Data, Deepesh
    Diggavi, Suhas
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [6] FedVQCS: Federated Learning via Vector Quantized Compressed Sensing
    Oh, Yongjeong
    Jeon, Yo-Seb
    Chen, Mingzhe
    Saad, Walid
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (03) : 1755 - 1770
  • [7] Escaping Saddle Points in Heterogeneous Federated Learning via Distributed SGD with Communication Compression
    Chen, Sijin
    Li, Zhize
    Chi, Yuejie
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [8] Communication-Efficient Federated Learning via Quantized Compressed Sensing
    Oh, Yongjeong
    Lee, Namyoon
    Jeon, Yo-Seb
    Poor, H. Vincent
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2023, 22 (02) : 1087 - 1100
  • [9] Accelerating Federated Edge Learning
    Nguyen, Tuan Dung
    Balef, Amir R.
    Dinh, Canh T.
    Tran, Nguyen H.
    Ngo, Duy T.
    Anh Le, Tuan
    Vo, Phuong L.
    IEEE COMMUNICATIONS LETTERS, 2021, 25 (10) : 3282 - 3286
  • [10] Accelerating Vertical Federated Learning
    Cai, Dongqi
    Fan, Tao
    Kang, Yan
    Fan, Lixin
    Xu, Mengwei
    Wang, Shangguang
    Yang, Qiang
    IEEE TRANSACTIONS ON BIG DATA, 2024, 10 (06) : 752 - 760