Semi-Asynchronous Model Design for Federated Learning in Mobile Edge Networks

Cited by: 1
Authors
Zhang, Jinfeng [1 ]
Liu, Wei [1 ]
He, Yejun [1 ]
He, Zhou [2 ]
Guizani, Mohsen [3 ]
Affiliations
[1] Shenzhen Univ, Guangdong Engn Res Ctr Base Stn Antennas, State Key Lab Radio Frequency Heterogeneous Integr, Coll Elect & Informat Engn,Shenzhen Key Lab Antenn, Shenzhen, Peoples R China
[2] Univ Maryland, Dept Mech Engn, College Pk, MD 20742 USA
[3] Mohamed Bin Zayed Univ Artificial Intelligence MBZ, Abu Dhabi 51133, U Arab Emirates
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; mobile edge networks; deep deterministic policy gradient; semi-asynchronous update model; energy efficiency;
DOI
10.1109/TVT.2023.3298787
CLC Classification
TM [Electrotechnics]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Federated learning (FL) is a distributed machine learning (ML) paradigm in which distributed clients train locally and only need to upload model parameters to collaboratively learn a global model under the coordination of an aggregation server. Although this protects client privacy, it requires multiple rounds of parameter exchange between the clients and the server to ensure the accuracy of the global model, which inevitably causes latency and energy-consumption problems under limited communication resources. Mobile edge computing (MEC) has therefore been proposed to mitigate the communication delay and energy consumption of federated learning. In this paper, we first analyze how to select the gradient values that help the global model converge quickly, and establish a theoretical analysis of the relationship between the convergence rate and the gradient direction. To efficiently reduce the energy consumption of clients during training, while preserving local training accuracy and the convergence rate of the global model, we adopt the deep deterministic policy gradient (DDPG) algorithm, which adaptively allocates resources according to different clients' requests to minimize energy consumption. To improve flexibility and scalability, we propose a new semi-asynchronous federated update model that allows clients to aggregate asynchronously on the server and accelerates the convergence of the global model. Empirical results show that the proposed Algorithm 1 not only accelerates the convergence of the global model but also reduces the size of the parameters that must be uploaded. In addition, the proposed Algorithm 2 reduces the time differences caused by client heterogeneity. Finally, the semi-asynchronous update model outperforms the synchronous update model in communication time.
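The semi-asynchronous update model described in the abstract can be illustrated with a minimal sketch: the server aggregates as soon as a small buffer of client updates arrives, rather than waiting for all clients (synchronous) or merging each update individually (fully asynchronous). The buffer size, the staleness-based weighting, and the server mixing rate below are illustrative assumptions, not details taken from the paper.

```python
def staleness_weight(staleness, alpha=0.5):
    # Down-weight updates computed against an older global model.
    return alpha ** staleness

def semi_async_round(global_model, updates, buffer_size=2):
    """One semi-asynchronous aggregation step (illustrative sketch).

    `global_model` is a flat list of parameters; `updates` is a list of
    (client_params, staleness) pairs in arrival order. The server merges
    the first `buffer_size` arrivals, weighted by staleness, then blends
    the merged update into the global model.
    """
    batch = updates[:buffer_size]
    total_w = sum(staleness_weight(s) for _, s in batch)
    merged = [0.0] * len(global_model)
    for params, staleness in batch:
        w = staleness_weight(staleness) / total_w
        for i, p in enumerate(params):
            merged[i] += w * p
    beta = 0.5  # server mixing rate (hypothetical value)
    return [(1 - beta) * g + beta * m for g, m in zip(global_model, merged)]

# Example: a fresh update ([1,1], staleness 0) outweighs a stale one
# ([3,3], staleness 1); weights are 1/1.5 and 0.5/1.5 respectively.
new_global = semi_async_round([0.0, 0.0],
                              [([1.0, 1.0], 0), ([3.0, 3.0], 1)])
```

Aggregating per small buffer is what lets fast clients proceed without waiting for stragglers, which is the source of the communication-time advantage the abstract reports over the synchronous model.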
Pages: 16280 - 16292 (13 pages)
Related Papers
50 items in total
  • [41] Scheduling and Aggregation Design for Asynchronous Federated Learning Over Wireless Networks
    Hu, Chung-Hsuan
    Chen, Zheng
    Larsson, Erik G.
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2023, 41 (04) : 874 - 886
  • [42] Optimal Device Selection for Federated Learning over Mobile Edge Networks
    Ching, Cheng-Wei
    Liu, Yu-Chun
    Yang, Chung-Kai
    Kuo, Jian-Jhih
    Su, Feng-Ting
    2020 IEEE 40TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS), 2020, : 1298 - 1303
  • [43] Joint Contract Design and Task Reorganization for Semi-Decentralized Federated Edge Learning in Vehicular Networks
    Xu, Bo
    Zhao, Haitao
    Cao, Haotong
    Lu, Xiaozhen
    Zhu, Hongbo
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (07) : 10539 - 10553
  • [44] Robust Design of Federated Learning for Edge-Intelligent Networks
    Qi, Qiao
    Chen, Xiaoming
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2022, 70 (07) : 4469 - 4481
  • [45] Mobility-Aware Asynchronous Federated Learning for Edge-Assisted Vehicular Networks
    Wang, Siyuan
    Wu, Qiong
    Fan, Qiang
    Fan, Pingyi
    Wang, Jiangzhou
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 3621 - 3626
  • [46] MSSA-FL: High-Performance Multi-stage Semi-asynchronous Federated Learning with Non-IID Data
    Wei, Xiaohui
    Hou, Mingkai
    Ren, Chenghao
    Li, Xiang
    Yue, Hengshan
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT II, 2022, 13369 : 172 - 187
  • [47] An Efficient Asynchronous Federated Learning Protocol for Edge Devices
    Li, Qian
    Gao, Ziyi
    Sun, Yetao
    Wang, Yan
    Wang, Rui
    Zhu, Haiyan
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (17): : 28798 - 28808
  • [48] FedLC: Accelerating Asynchronous Federated Learning in Edge Computing
    Xu, Yang
    Ma, Zhenguo
    Xu, Hongli
    Chen, Suo
    Liu, Jianchun
    Xue, Yinxing
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (05) : 5327 - 5343
  • [49] An Asynchronous Federated Learning Mechanism for Edge Network Computing
    Lu X.
    Liao Y.
    Lio P.
    Pan H.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2020, 57 (12): : 2571 - 2582
  • [50] Optimized Federated Multitask Learning in Mobile Edge Networks: A Hybrid Client Selection and Model Aggregation Approach
    Hamood M.
    Albaseer A.
    Abdallah M.
    Al-Fuqaha A.
    Mohamed A.
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (11) : 1 - 17