Asynchronous Decentralized Federated Learning for Heterogeneous Devices

Cited: 0
Authors
Liao, Yunming [1 ,2 ]
Xu, Yang [1 ,2 ]
Xu, Hongli [1 ,2 ]
Chen, Min [3 ]
Wang, Lun [1 ,2 ]
Qiao, Chunming [4 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230027, Peoples R China
[2] Univ Sci & Technol China, Suzhou Inst Adv Res, Suzhou 215000, Peoples R China
[3] Huawei Cloud Comp Technol Co Ltd, Hangzhou 310052, Peoples R China
[4] Univ Buffalo, State Univ New York, Dept Comp Sci & Engn, Buffalo, NY 14068 USA
Funding
U.S. National Science Foundation;
Keywords
Edge computing; decentralized federated learning; directed communication; neighbor selection; CONSENSUS; OPTIMIZATION; TOPOLOGY;
DOI
10.1109/TNET.2024.3424444
Chinese Library Classification (CLC)
TP3 [Computing and Computer Technology];
Discipline Code
0812;
Abstract
Data generated at the network edge can be processed locally by leveraging the emerging technology of Federated Learning (FL). However, non-IID data degrade model accuracy, and the heterogeneity of edge nodes inevitably slows down model training. Moreover, to avoid the potential communication bottleneck of parameter-server-based FL, we concentrate on Decentralized Federated Learning (DFL), which performs distributed model training in a Peer-to-Peer (P2P) manner. To address these challenges, we propose an asynchronous DFL system that incorporates neighbor selection and gradient push. Specifically, we require each edge node to push gradients only to a subset of its neighbors for resource efficiency. We first give a theoretical convergence analysis of the proposed system under the complicated non-IID and heterogeneous scenario, and then design a priority-based algorithm to dynamically select neighbors for each edge node so as to achieve a trade-off between communication cost and model performance. We evaluate the proposed system through extensive experiments on a physical platform with 30 NVIDIA Jetson edge devices. Evaluation results show that, compared to the baselines, it reduces the communication cost by 57% and the completion time by about 35% for achieving the same test accuracy, and improves model accuracy by at least 6% under the non-IID scenario.
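The abstract's priority-based neighbor selection, where each node pushes gradients only to a subset of its neighbors, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the priority function and the hypothetical `staleness` and `link_cost` fields are assumptions standing in for whatever criteria the authors use to balance communication cost against model performance.

```python
import heapq


def select_push_targets(neighbors, k):
    """Pick the k highest-priority neighbors to push gradients to.

    `neighbors` maps a neighbor id to a dict with two illustrative
    fields: `staleness` (rounds since we last pushed to it) and
    `link_cost` (relative communication cost of that link).  The
    priority favors neighbors that have not been updated recently and
    are cheap to reach, so each round a node contacts only a subset of
    its neighbors instead of broadcasting to all of them.
    """
    def priority(neighbor_id):
        info = neighbors[neighbor_id]
        return info["staleness"] / (1.0 + info["link_cost"])

    # heapq.nlargest returns the k ids with the highest priority,
    # sorted in descending order of priority.
    return heapq.nlargest(k, neighbors, key=priority)


# Example: with three neighbors, a node budgeted for k=2 pushes skips
# the fresh, low-priority neighbor "b".
neighbors = {
    "a": {"staleness": 5, "link_cost": 1.0},
    "b": {"staleness": 1, "link_cost": 0.1},
    "c": {"staleness": 4, "link_cost": 3.0},
}
targets = select_push_targets(neighbors, 2)
```

Any real DFL system would recompute these priorities each round as pushes complete asynchronously; the sketch only shows the per-round selection step.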
Pages: 4535 - 4550
Page count: 16
Related Papers
50 items in total
  • [31] A decentralised federated learning scheme for heterogeneous devices in cognitive IoT
    Ge, Huanhuan
    Yang, Xingtao
    Wang, Jinlong
    Lyu, Zhihan
    [J]. International Journal of Cognitive Computing in Engineering, 2024, 5 : 357 - 366
  • [32] Asynchronous Decentralized Optimization in Heterogeneous Systems
    Rabbat, Michael G.
    Tsianos, Konstantinos I.
    [J]. 2014 IEEE 53RD ANNUAL CONFERENCE ON DECISION AND CONTROL (CDC), 2014, : 1125 - 1130
  • [33] Adaptive asynchronous federated learning
    Lu, Renhao
    Zhang, Weizhe
    Li, Qiong
    He, Hui
    Zhong, Xiaoxiong
    Yang, Hongwei
    Wang, Desheng
    Xu, Zenglin
    Alazab, Mamoun
    [J]. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 152 : 193 - 206
  • [34] Towards asynchronous federated learning for heterogeneous edge-powered internet of things
    Chen, Zheyi
    Liao, Weixian
    Hua, Kun
    Lu, Chao
    Yu, Wei
    [J]. DIGITAL COMMUNICATIONS AND NETWORKS, 2021, 7 (03) : 317 - 326
  • [35] FedSA: A Semi-Asynchronous Federated Learning Mechanism in Heterogeneous Edge Computing
    Ma, Qianpiao
    Xu, Yang
    Xu, Hongli
    Jiang, Zhida
    Huang, Liusheng
    Huang, He
    [J]. IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (12) : 3654 - 3672
  • [37] Learning Client Selection Strategy for Federated Learning across Heterogeneous Mobile Devices
    Zhang, Sai Qian
    Lin, Jieyu
    Zhang, Qi
    Chen, Yu-Jia
    [J]. 2024 25TH INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN, ISQED 2024, 2024,
  • [38] Asynchronous Online Federated Learning for Edge Devices with Non-IID Data
    Chen, Yujing
    Ning, Yue
    Slawski, Martin
    Rangwala, Huzefa
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020, : 15 - 24
  • [39] Asynchronous Decentralized Online Learning
    Jiang, Jiyan
    Zhang, Wenpeng
    Gu, Jinjie
    Zhu, Wenwu
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [40] No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices
    Liu, Ruixuan
    Wu, Fangzhao
    Wu, Chuhan
    Wang, Yanlin
    Lyu, Lingjuan
    Chen, Hong
    Xie, Xing
    [J]. PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 3398 - 3406