Anarchic Federated Learning with Delayed Gradient Averaging

Cited: 0
Authors
Li, Dongsheng [1]
Gong, Xiaowen [1]
Institutions
[1] Auburn Univ, Dept Elect & Comp Engn, Auburn, AL 36849 USA
Keywords
federated learning; delayed gradient; asynchronous; DESIGN;
DOI
10.1145/3565287.3610273
CLC number
TP [automation technology; computer technology]
Discipline code
0812
Abstract
The rapid advances in federated learning (FL) in recent years have inspired a great deal of research on this emerging topic. Existing work on FL often assumes that clients participate in the learning process with a particular pattern (such as balanced participation), and/or in a synchronous manner, and/or with the same number of local iterations, yet these assumptions can be hard to satisfy in practice. In this paper, we propose AFL-DGA, an Anarchic Federated Learning algorithm with Delayed Gradient Averaging, which gives maximum freedom to clients. In particular, AFL-DGA allows clients to 1) participate in any rounds; 2) participate asynchronously; 3) participate with any number of local iterations; and 4) perform gradient computations and gradient communications in parallel. The proposed AFL-DGA algorithm enables clients to participate in FL flexibly, according to their heterogeneous and time-varying computation and communication capabilities, and also efficiently, by improving the utilization of their computation and communication resources. We characterize performance bounds on the learning loss of AFL-DGA as a function of clients' local iteration numbers, local model delays, and global model delays. Our results show that the AFL-DGA algorithm achieves a convergence rate of O(1/√(NT)) and a linear convergence speedup, matching existing benchmarks. The results also characterize the impacts of various system parameters on the learning loss, providing useful insights. Numerical results demonstrate the efficiency of the proposed algorithm.
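The abstract describes AFL-DGA only at a high level. As a rough illustration (not the paper's pseudocode), the following toy simulation sketches three of the "anarchic" freedoms on scalar quadratic client losses: random per-round participation, reads of a stale (delayed) global model, and client-chosen local iteration counts. All names, constants, and the exact update rule here are hypothetical assumptions; the parallel computation/communication of point 4 is not modeled.

```python
import random

# Each client i minimizes f_i(w) = 0.5 * (w - target_i)^2, so the
# global optimum is the mean of the targets. This is an illustrative
# sketch of anarchic federated updates with delayed gradients, not
# the authors' AFL-DGA algorithm.

def local_grad(w, target):
    return w - target

def run_anarchic_fl(targets, rounds=200, lr=0.1, max_delay=2, seed=0):
    rng = random.Random(seed)
    w_global = 0.0
    history = [w_global]  # past global models, read by stale clients
    for _ in range(rounds):
        # Anarchy: a random subset of clients participates this round...
        participants = [i for i in range(len(targets)) if rng.random() < 0.7]
        if not participants:
            history.append(w_global)
            continue
        updates = []
        for i in participants:
            # ...each reading a possibly stale global model (async delay)
            delay = rng.randint(0, min(max_delay, len(history) - 1))
            base = history[-1 - delay]
            w_local = base
            # ...and running its own number of local SGD iterations
            for _ in range(rng.randint(1, 5)):
                w_local -= lr * local_grad(w_local, targets[i])
            updates.append(w_local - base)
        # Server averages the (delayed) local updates into the global model
        w_global += sum(updates) / len(updates)
        history.append(w_global)
    # Tail-average to smooth out participation noise
    tail = history[-50:]
    return sum(tail) / len(tail)
```

With targets [0.0, 2.0, 4.0] the run settles near the optimum 2.0, despite staleness and uneven participation, loosely mirroring the convergence behavior the abstract claims for the full algorithm.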
Pages: 21-30
Number of pages: 10