Semi-Asynchronous Model Design for Federated Learning in Mobile Edge Networks

Cited by: 1
Authors
Zhang, Jinfeng [1 ]
Liu, Wei [1 ]
He, Yejun [1 ]
He, Zhou [2 ]
Guizani, Mohsen [3 ]
Affiliations
[1] Shenzhen Univ, Guangdong Engn Res Ctr Base Stn Antennas, State Key Lab Radio Frequency Heterogeneous Integr, Coll Elect & Informat Engn,Shenzhen Key Lab Antenn, Shenzhen, Peoples R China
[2] Univ Maryland, Dept Mech Engn, College Pk, MD 20742 USA
[3] Mohamed Bin Zayed Univ Artificial Intelligence MBZ, Abu Dhabi 51133, U Arab Emirates
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; mobile edge networks; deep deterministic policy gradient; semi-asynchronous update model; energy efficiency;
DOI
10.1109/TVT.2023.3298787
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
Federated learning (FL) is a distributed machine learning (ML) paradigm. Distributed clients train locally and need only upload their model parameters to collaboratively learn a global model under the coordination of an aggregation server. Although client privacy is protected, this requires multiple rounds of parameter uploads between the clients and the server to ensure the accuracy of the global model. Inevitably, this causes latency and energy-consumption issues due to limited communication resources. Therefore, mobile edge computing (MEC) has been proposed to address communication delay and energy consumption in federated learning. In this paper, we first analyze how to select the gradient values that help the global model converge quickly, and establish a theoretical analysis of the relationship between the convergence rate and the gradient direction. To efficiently reduce the energy consumption of clients during training, while preserving local training accuracy and the convergence rate of the global model, we adopt the deep deterministic policy gradient (DDPG) algorithm, which adaptively allocates resources according to different clients' requests to minimize energy consumption. To improve flexibility and scalability, we propose a new semi-asynchronous federated update model, which allows clients to be aggregated asynchronously on the server and accelerates the convergence of the global model. Empirical results show that the proposed Algorithm 1 not only accelerates the convergence of the global model but also reduces the size of the parameters that need to be uploaded. In addition, the proposed Algorithm 2 reduces the time difference caused by client heterogeneity. Finally, the semi-asynchronous update model outperforms the synchronous update model in communication time.
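The semi-asynchronous idea described in the abstract — the server merges client updates as they arrive instead of waiting for every client, while discounting updates computed against an outdated global model — can be sketched as follows. This is an illustrative staleness-weighted mixing rule, not a reproduction of the paper's Algorithm 2; the weight function `staleness_weight` and the decay constant `alpha` are assumptions for demonstration.

```python
def staleness_weight(staleness: int, alpha: float = 0.5) -> float:
    """Mixing weight for a client update; decays as the update grows stale.

    `alpha` is an assumed base learning weight; staleness is how many
    global versions have passed since the client pulled the model.
    """
    return alpha / (1 + staleness)

def semi_async_aggregate(global_params, global_version, arrivals):
    """Apply client updates in arrival order with staleness-aware weights.

    global_params: flattened model parameters (list of floats)
    global_version: current global model version number
    arrivals: iterable of (client_params, trained_on_version) pairs,
              processed one by one, as a semi-asynchronous server would.
    Returns the updated parameters and the new global version.
    """
    params = list(global_params)
    version = global_version
    for client_params, base_version in arrivals:
        w = staleness_weight(version - base_version)
        # Convex combination: stale updates move the global model less.
        params = [(1 - w) * g + w * c for g, c in zip(params, client_params)]
        version += 1
    return params, version
```

A fresh update (staleness 0) is mixed in with weight `alpha`, while an update trained on a model one version old only gets `alpha / 2`, so fast clients are never blocked by slow ones yet slow clients cannot drag the global model toward outdated gradients.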
Pages: 16280-16292
Page count: 13
Related Papers
50 records total
  • [1] Semi-Asynchronous Hierarchical Federated Learning Over Mobile Edge Networks
    Chen, Qimei
    You, Zehua
    Wu, Jing
    Liu, Yunpeng
    Jiang, Hao
    IEEE ACCESS, 2023, 11 : 18887 - 18899
  • [2] ASFL: Adaptive Semi-asynchronous Federated Learning for Balancing Model Accuracy and Total Latency in Mobile Edge Networks
    Yu, Jieling
    Zhou, Ruiting
    Chen, Chen
    Li, Bo
    Dong, Fang
    PROCEEDINGS OF THE 52ND INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2023, 2023, : 443 - 451
  • [3] FedSA: A Semi-Asynchronous Federated Learning Mechanism in Heterogeneous Edge Computing
    Ma, Qianpiao
    Xu, Yang
    Xu, Hongli
    Jiang, Zhida
    Huang, Liusheng
    Huang, He
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (12) : 3654 - 3672
  • [4] Semi-Asynchronous Federated Edge Learning for Over-the-air Computation
    Kou, Zhoubin
    Ji, Yun
    Zhong, Xiaoxiong
    Zhang, Sheng
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 1351 - 1356
  • [5] Staleness aware semi-asynchronous federated learning
    Yu, Miri
    Choi, Jiheon
    Lee, Jaehyun
    Oh, Sangyoon
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2024, 93
  • [6] Energy Aware Task Allocation for Semi-Asynchronous Mobile Edge Learning
    Mohammad, Umair
    Sorour, Sameh
    Hefeida, Mohamed
    IEEE TRANSACTIONS ON GREEN COMMUNICATIONS AND NETWORKING, 2023, 7 (04): : 1766 - 1777
  • [7] CSAFL: A Clustered Semi-Asynchronous Federated Learning Framework
    Zhang, Yu
    Duan, Moming
    Liu, Duo
    Li, Li
    Ren, Ao
    Chen, Xianzhang
    Tan, Yujuan
    Wang, Chengliang
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [8] Time Efficient Federated Learning with Semi-asynchronous Communication
    Hao, Jiangshan
    Zhao, Yanchao
    Zhang, Jiale
    2020 IEEE 26TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2020, : 156 - 163
  • [9] Hermes: Fast Semi-Asynchronous Federated Learning in LEO Constellations
    Chen, Yan
    Liu, Jun
    Zhao, Jiejie
    Jiang, Guanjun
    Wang, Haiquan
    Du, Bowen
    2024 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC 2024, 2024,
  • [10] FedSEA: A Semi-Asynchronous Federated Learning Framework for Extremely Heterogeneous Devices
    Sun, Jingwei
    Li, Ang
    Duan, Lin
    Alam, Samiul
    Deng, Xuliang
    Guo, Xin
    Wang, Haiming
    Gorlatova, Maria
    Zhang, Mi
    Li, Hai
    Chen, Yiran
    PROCEEDINGS OF THE TWENTIETH ACM CONFERENCE ON EMBEDDED NETWORKED SENSOR SYSTEMS, SENSYS 2022, 2022, : 106 - 119