Asynchronous Decentralized Federated Learning for Heterogeneous Devices

Cited by: 0
Authors
Liao, Yunming [1 ,2 ]
Xu, Yang [1 ,2 ]
Xu, Hongli [1 ,2 ]
Chen, Min [3 ]
Wang, Lun [1 ,2 ]
Qiao, Chunming [4 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230027, Peoples R China
[2] Univ Sci & Technol China, Suzhou Inst Adv Res, Suzhou 215000, Peoples R China
[3] Huawei Cloud Comp Technol Co Ltd, Hangzhou 310052, Peoples R China
[4] Univ Buffalo, State Univ New York, Dept Comp Sci & Engn, Buffalo, NY 14068 USA
Funding
U.S. National Science Foundation;
Keywords
Edge computing; decentralized federated learning; directed communication; neighbor selection; consensus; optimization; topology
DOI
10.1109/TNET.2024.3424444
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Data generated at the network edge can be processed locally by leveraging the emerging technology of Federated Learning (FL). However, non-IID data degrade model accuracy, and the heterogeneity of edge nodes inevitably slows down model training. Moreover, to avoid the potential communication bottleneck of parameter-server-based FL, we concentrate on Decentralized Federated Learning (DFL), which performs distributed model training in a Peer-to-Peer (P2P) manner. To address these challenges, we propose an asynchronous DFL system that incorporates neighbor selection and gradient push. Specifically, each edge node pushes gradients to only a subset of its neighbors for resource efficiency. We first give a theoretical convergence analysis of the proposed system under the complicated non-IID and heterogeneous scenario, and then design a priority-based algorithm to dynamically select neighbors for each edge node, achieving a trade-off between communication cost and model performance. We evaluate the system through extensive experiments on a physical platform with 30 NVIDIA Jetson edge devices. The results show that, compared to the baselines, the system reduces communication cost by 57% and completion time by about 35% to reach the same test accuracy, and improves model accuracy by at least 6% under the non-IID scenario.
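The abstract names two mechanisms, asynchronous gradient push and priority-based neighbor selection, without giving the algorithm itself. The Python sketch below illustrates how one such round might look at a single edge node, under stated assumptions: every name in it (Node, push_round, priority, the constants K_PUSH, ALPHA, BETA, and the staleness-minus-cost score) is a hypothetical illustration, not the paper's actual design.

```python
# A minimal, self-contained sketch of one asynchronous round at a single edge
# node, following the abstract's description: train locally, then push
# gradients to only a priority-selected subset of neighbors. All names and
# constants here are assumptions for illustration, not the paper's algorithm.
import random
from dataclasses import dataclass, field

K_PUSH = 3              # assumed: number of neighbors to push to per round
ALPHA, BETA = 1.0, 0.5  # assumed weights: staleness benefit vs. link cost
LR = 0.1                # assumed local learning rate

@dataclass
class Node:
    node_id: int
    model: list[float]                                  # toy flat weight vector
    inbox: list[list[float]] = field(default_factory=list)

def local_gradient(node: Node) -> list[float]:
    """Stand-in for local SGD on the node's private (possibly non-IID) data."""
    return [random.gauss(0.0, 0.1) for _ in node.model]

def priority(staleness: int, link_cost: float) -> float:
    """Assumed priority score: favor neighbors holding a stale copy of our
    updates (high benefit) and reachable over a cheap link (low cost)."""
    return ALPHA * staleness - BETA * link_cost

def push_round(node: Node, neighbors: list[Node],
               staleness: dict[int, int], link_cost: dict[int, float]) -> None:
    grad = local_gradient(node)
    # Asynchronous: apply the local update immediately, with no global barrier.
    node.model = [w - LR * g for w, g in zip(node.model, grad)]
    # Rank neighbors by priority and push gradients to the top K_PUSH only,
    # trading communication cost against model performance.
    ranked = sorted(neighbors,
                    key=lambda n: priority(staleness[n.node_id],
                                           link_cost[n.node_id]),
                    reverse=True)
    for nbr in ranked[:K_PUSH]:
        nbr.inbox.append(grad)       # directed gradient push to this neighbor
        staleness[nbr.node_id] = 0   # this neighbor is now up to date
    for nbr in ranked[K_PUSH:]:
        staleness[nbr.node_id] += 1  # skipped neighbors grow staler

# Toy usage: node 0 runs one round against four peers with random link costs.
nodes = [Node(i, [0.0] * 4) for i in range(5)]
peers = nodes[1:]
push_round(nodes[0], peers,
           staleness={n.node_id: 1 for n in peers},
           link_cost={n.node_id: random.random() for n in peers})
```

In a real system each node would also drain its inbox to aggregate received gradients into its local model; the score above merely mimics the communication-cost versus model-performance trade-off that the abstract describes.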
Pages: 4535-4550 (16 pages)
Related Papers (50 total)
  • [1] AEDFL: Efficient Asynchronous Decentralized Federated Learning with Heterogeneous Devices
    Liu, Ji; Che, Tianshi; Zhou, Yang; Jin, Ruoming; Dai, Huaiyu; Dou, Dejing; Valduriez, Patrick
    Proceedings of the 2024 SIAM International Conference on Data Mining (SDM), 2024: 833-841
  • [2] Asynchronous federated learning on heterogeneous devices: A survey
    Xu, Chenhao; Qu, Youyang; Xiang, Yong; Gao, Longxiang
    Computer Science Review, 2023, 50
  • [3] Asynchronous Semi-Decentralized Federated Edge Learning for Heterogeneous Clients
    Sun, Yuchang; Shao, Jiawei; Mao, Yuyi; Zhang, Jun
    IEEE International Conference on Communications (ICC 2022), 2022: 5196-5201
  • [4] Grouped Federated Learning: A Decentralized Learning Framework with Low Latency for Heterogeneous Devices
    Yin, Tong; Li, Lixin; Lin, Wensheng; Ma, Donghui; Han, Zhu
    2022 IEEE International Conference on Communications Workshops (ICC Workshops), 2022: 55-60
  • [5] FedSEA: A Semi-Asynchronous Federated Learning Framework for Extremely Heterogeneous Devices
    Sun, Jingwei; Li, Ang; Duan, Lin; Alam, Samiul; Deng, Xuliang; Guo, Xin; Wang, Haiming; Gorlatova, Maria; Zhang, Mi; Li, Hai; Chen, Yiran
    Proceedings of the Twentieth ACM Conference on Embedded Networked Sensor Systems (SenSys 2022), 2022: 106-119
  • [6] Towards a resource-efficient semi-asynchronous federated learning for heterogeneous devices
    Sasindran, Zitha; Yelchuri, Harsha; Prabhakar, T. V.
    2024 National Conference on Communications (NCC), 2024
  • [7] Accelerating Decentralized Federated Learning in Heterogeneous Edge Computing
    Wang, Lun; Xu, Yang; Xu, Hongli; Chen, Min; Huang, Liusheng
    IEEE Transactions on Mobile Computing, 2023, 22(9): 5001-5016
  • [8] Decentralized Federated Learning With Adaptive Configuration for Heterogeneous Participants
    Liao, Yunming; Xu, Yang; Xu, Hongli; Wang, Lun; Qian, Chen; Qiao, Chunming
    IEEE Transactions on Mobile Computing, 2024, 23(6): 7453-7469
  • [9] An Efficient Asynchronous Federated Learning Protocol for Edge Devices
    Li, Qian; Gao, Ziyi; Sun, Yetao; Wang, Yan; Wang, Rui; Zhu, Haiyan
    IEEE Internet of Things Journal, 2024, 11(17): 28798-28808
  • [10] A Superquantile Approach to Federated Learning with Heterogeneous Devices
    Laguel, Yassine; Pillutla, Krishna; Malick, Jerome; Harchaoui, Zaid
    2021 55th Annual Conference on Information Sciences and Systems (CISS), 2021