FedASMU: Efficient Asynchronous Federated Learning with Dynamic Staleness-Aware Model Update

Cited by: 0
Authors
Liu, Ji [1]
Jia, Juncheng [2,3]
Che, Tianshi [4]
Huo, Chao [2]
Ren, Jiaxiang [4]
Zhou, Yang [4]
Dai, Huaiyu [5]
Dou, Dejing [6]
Affiliations
[1] Hithink RoyalFlush Informat Network Co Ltd, Hangzhou, Peoples R China
[2] Soochow Univ, Suzhou, Peoples R China
[3] Collaborat Innovat Ctr Novel Software Technol & I, Beijing, Peoples R China
[4] Auburn Univ, Auburn, AL USA
[5] North Carolina State Univ, Raleigh, NC USA
[6] Boston Consulting Grp Inc, Beijing, Peoples R China
Funding
US National Science Foundation;
Keywords
DOI: Not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory];
Discipline Classification Codes: 081104; 0812; 0835; 1405;
Abstract
As a promising approach to dealing with distributed data, Federated Learning (FL) has achieved major advances in recent years. FL enables collaborative model training by exploiting the raw data dispersed across multiple edge devices. However, the data is generally non-independent and identically distributed (non-IID), i.e., statistical heterogeneity, and the edge devices differ significantly in both computation and communication capacity, i.e., system heterogeneity. Statistical heterogeneity leads to severe accuracy degradation, while system heterogeneity significantly prolongs the training process. To address these heterogeneity issues, we propose FedASMU, an Asynchronous Staleness-aware Model Update FL framework with two novel methods. First, we propose an asynchronous FL system model with a dynamic model aggregation method between updated local models and the global model on the server for superior accuracy and high efficiency. Second, we propose an adaptive local model adjustment method that aggregates the fresh global model with local models on devices to further improve accuracy. Extensive experiments with 6 models and 5 public datasets demonstrate that FedASMU significantly outperforms baseline approaches in both accuracy (0.60% to 23.90% higher) and efficiency (3.54% to 97.98% faster).
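The abstract describes two complementary update rules: a server-side merge of each asynchronously arriving local model into the global model, weighted by the update's staleness, and a device-side blend of a freshly pulled global model into the in-progress local model. The minimal NumPy sketch below illustrates that pattern under stated assumptions: the function names, the polynomial staleness discount alpha = base_mix / (1 + staleness) ** decay (in the style of FedAsync), and the fixed factors base_mix, decay, and mu are all illustrative; FedASMU computes these weights dynamically, and the abstract does not specify its exact formulas.

# Hedged sketch (not from the paper): illustrates the two update rules the
# abstract describes, with assumed fixed hyper-parameters where FedASMU is
# dynamic and adaptive.
import numpy as np

def server_aggregate(global_model, local_model, staleness, base_mix=0.6, decay=0.5):
    """Merge one asynchronously arriving local model into the global model.

    The mixing weight shrinks as staleness (the number of global versions
    the local model lags behind) grows. The polynomial discount
    alpha = base_mix / (1 + staleness) ** decay is an assumption in the
    style of FedAsync; FedASMU adjusts this weight dynamically.
    """
    alpha = base_mix / (1.0 + staleness) ** decay
    return (1.0 - alpha) * global_model + alpha * local_model

def device_adjust(local_model, fresh_global, mu=0.3):
    """Blend a freshly pulled global model into an in-progress local model.

    mu is an assumed fixed blending factor; FedASMU adapts it so that slow
    devices do not keep training on a badly outdated global model.
    """
    return (1.0 - mu) * local_model + mu * fresh_global

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    global_model = rng.normal(size=4)                        # current global model on the server
    local_model = global_model + 0.1 * rng.normal(size=4)    # after some local training steps
    local_model = device_adjust(local_model, global_model)   # device pulls a fresh global model
    new_global = server_aggregate(global_model, local_model, staleness=3)
    print(new_global)

With decay > 0, very stale local models shift the global model only slightly, which captures the accuracy/efficiency trade-off that the abstract attributes to staleness awareness.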
Pages: 13900-13908
Number of pages: 9
Related Papers
50 records in total
• [1] Manu, Daniel; Alazzwi, Abee; Yao, Jingjing; Lin, Youzuo; Sun, Xiang. AsyncFedGAN: An Efficient and Staleness-Aware Asynchronous Federated Learning Framework for Generative Adversarial Networks. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2025, 36(03): 553-569.
• [2] Qiao, Dewen; Guo, Songtao; Zhao, Jun; Le, Junqing; Zhou, Pengzhan; Li, Mingyan; Chen, Xuetao. ASMAFL: Adaptive Staleness-Aware Momentum Asynchronous Federated Learning in Edge Computing. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24(04): 3390-3406.
• [3] Chen, Ming; Mao, Bingcheng; Ma, Tianyi. FedSA: A staleness-aware asynchronous Federated Learning algorithm with non-IID data. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2021, 120: 1-12.
• [4] Yu, Miri; Choi, Jiheon; Lee, Jaehyun; Oh, Sangyoon. Staleness aware semi-asynchronous federated learning. JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2024, 93.
• [5] Zhu, Hongbin; Kuang, Junqian; Yang, Miao; Qian, Hua. Client Selection With Staleness Compensation in Asynchronous Federated Learning. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72(03): 4124-4129.
• [6] Sun, Sheng; Zhang, Zengqi; Pan, Quyang; Liu, Min; Wang, Yuwei; He, Tianliu; Chen, Yali; Wu, Zhiyuan. Staleness-Controlled Asynchronous Federated Learning: Accuracy and Efficiency Tradeoff. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23(12): 12621-12634.
• [7] Zhang, Yu; Liu, Duo; Duan, Moming; Li, Li; Chen, Xianzhang; Ren, Ao; Tan, Yujuan; Wang, Chengliang. FedMDS: An Efficient Model Discrepancy-Aware Semi-Asynchronous Clustered Federated Learning Framework. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34(03): 1007-1019.
• [8] Chen, Yang; Sun, Xiaoyan; Jin, Yaochu. Communication-Efficient Federated Deep Learning With Layerwise Asynchronous Model Update and Temporally Weighted Aggregation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31(10): 4229-4238.
• [9] Zhu, Feng; Hao, Jiangshan; Chen, Zhong; Zhao, Yanchao; Chen, Bing; Tan, Xiaoyang. STAFL: Staleness-Tolerant Asynchronous Federated Learning on Non-iid Dataset. ELECTRONICS, 2022, 11(03).
• [10] Shi, Hongrui; Radu, Valentin. Data Selection for Efficient Model Update in Federated Learning. PROCEEDINGS OF THE 2022 2ND EUROPEAN WORKSHOP ON MACHINE LEARNING AND SYSTEMS (EUROMLSYS '22), 2022: 72-78.