Communication-Efficient Nonconvex Federated Learning With Error Feedback for Uplink and Downlink

Cited: 0
Authors
Zhou, Xingcai [1 ]
Chang, Le [1 ]
Cao, Jinde [2 ,3 ]
Affiliations
[1] Nanjing Audit Univ, Sch Stat & Data Sci, Nanjing 211815, Peoples R China
[2] Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
[3] Yonsei Univ, Yonsei Frontier Lab, Seoul 03722, South Korea
Funding
National Natural Science Foundation of China;
Keywords
Uplink; Downlink; Costs; Convergence; Federated learning; Servers; Markov processes; Communication-efficient; distributed learning; error feedback (EF); lazily aggregated gradient (LAG); nonconvex learning; uplink and downlink;
DOI
None
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Facing large-scale online learning, the reliance on sophisticated model architectures often leads to nonconvex distributed optimization, which is more challenging than convex problems. Online recruited workers, such as mobile phones, laptops, and desktop computers, often have narrower uplink bandwidths than downlink bandwidths. In this article, we propose two communication-efficient nonconvex federated learning algorithms that combine error feedback 2021 (EF21) and lazily aggregated gradient (LAG) to adapt to uplink and downlink communication. EF21 is a new and theoretically stronger error-feedback (EF) mechanism, which consistently and substantially outperforms vanilla EF in practice. LAG is a gradient-filtering technique for adaptive communication. To reduce uplink communication costs, we design an effective LAG rule and propose the EF21-with-LAG (EF-LAG) algorithm, which combines EF21 with this rule. We also present a bidirectional EF-LAG (BiEF-LAG) algorithm that reduces both uplink and downlink communication costs. Theoretically, our proposed algorithms enjoy the same fast O(1/T) convergence rate as gradient descent (GD) for smooth nonconvex learning; that is, they greatly reduce communication costs without sacrificing the quality of learning. Numerical experiments on both synthetic data and deep learning benchmarks show the significant empirical superiority of our algorithms in communication.
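The abstract combines two known ideas: an EF21-style error-feedback update, in which each worker maintains a gradient estimate and uploads only a compressed correction, and a LAG-style rule that skips the upload entirely when the fresh gradient barely differs from the stored estimate. The following is a minimal sketch of that combination, not the paper's EF-LAG algorithm itself: the top-k compressor, the simple norm threshold `thresh`, and all parameter values are illustrative assumptions.

```python
import numpy as np

def top_k(v, k):
    """Top-k sparsifier: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_lag(grads, x0, lr=0.1, k=1, thresh=1e-3, T=200):
    """EF21-style error feedback with a LAG-style upload-skipping rule (sketch).

    grads: list of per-worker gradient functions.
    Each worker i keeps a gradient estimate g[i]; it uploads a compressed
    correction only when its fresh gradient differs enough from g[i].
    Returns the final iterate and the number of uplink messages used.
    """
    x = x0.copy()
    g = np.stack([gi(x) for gi in grads])   # initial estimates (one full upload)
    uploads = 0
    for _ in range(T):
        x = x - lr * g.mean(axis=0)         # server step with averaged estimates
        for i, gi in enumerate(grads):
            diff = gi(x) - g[i]
            if np.linalg.norm(diff) > thresh:   # LAG-style rule: upload only if the change is large
                g[i] = g[i] + top_k(diff, k)    # EF21-style update with a compressed correction
                uploads += 1
            # else: reuse the stale estimate g[i], saving one uplink message
    return x, uploads
```

On two quadratic workers f_i(x) = ½‖x − a_i‖², the iterate approaches the minimizer of the average, mean(a_i), while the threshold suppresses uploads once the per-worker gradients stop changing, so far fewer than T messages per worker are sent.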
Pages: 1003-1014
Page count: 12
Related papers
50 records in total
  • [31] Communication-efficient Federated Learning with Cooperative Filter Selection
    Yang, Zhao
    Sun, Qingshuang
    2022 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 22), 2022, : 2172 - 2176
  • [32] Communication-Efficient Federated Learning with Adaptive Consensus ADMM
    He, Siyi
    Zheng, Jiali
    Feng, Minyu
    Chen, Yixin
    APPLIED SCIENCES-BASEL, 2023, 13 (09):
  • [33] Communication-Efficient Federated Learning With Gradual Layer Freezing
    Malan, Erich
    Peluso, Valentino
    Calimera, Andrea
    Macii, Enrico
    IEEE EMBEDDED SYSTEMS LETTERS, 2023, 15 (01) : 25 - 28
  • [34] Communication-Efficient Federated Learning Based on Compressed Sensing
    Li, Chengxi
    Li, Gang
    Varshney, Pramod K.
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (20) : 15531 - 15541
  • [35] On the Convergence of Communication-Efficient Local SGD for Federated Learning
    Gao, Hongchang
    Xu, An
    Huang, Heng
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 7510 - 7518
  • [36] A Cooperative Analysis to Incentivize Communication-Efficient Federated Learning
    Li, Youqi
    Li, Fan
    Yang, Song
    Zhang, Chuan
    Zhu, Liehuang
    Wang, Yu
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (10) : 10175 - 10190
  • [37] FedDQ: Communication-Efficient Federated Learning with Descending Quantization
    Qu, Linping
    Song, Shenghui
    Tsui, Chi-Ying
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 281 - 286
  • [38] Communication-Efficient Generalized Neuron Matching for Federated Learning
    Hu, Sixu
    Li, Qinbin
    He, Bingsheng
    PROCEEDINGS OF THE 52ND INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2023, 2023, : 254 - 263
  • [39] DIFFERENTIAL ERROR FEEDBACK FOR COMMUNICATION-EFFICIENT DECENTRALIZED OPTIMIZATION
    Nassif, Roula
    Vlaski, Stefan
    Carpentiero, Marco
    Matta, Vincenzo
    Sayed, Ali H.
    2024 IEEE 13TH SENSOR ARRAY AND MULTICHANNEL SIGNAL PROCESSING WORKSHOP, SAM 2024, 2024,
  • [40] Communication-efficient federated learning with stagewise training strategy
    Cheng, Yifei
    Shen, Shuheng
    Liang, Xianfeng
    Liu, Jingchang
    Chen, Joya
    Zhang, Tie
    Chen, Enhong
    NEURAL NETWORKS, 2023, 167 : 460 - 472