ARFL: Adaptive and Robust Federated Learning

Cited by: 1
Authors
Uddin, Md Palash [1 ]
Xiang, Yong [1 ]
Cai, Borui [1 ]
Lu, Xuequan [2 ]
Yearwood, John [1 ]
Gao, Longxiang [3 ,4 ]
Affiliations
[1] Deakin Univ, Sch Informat Technol, Geelong, Vic 3220, Australia
[2] La Trobe Univ, Bundoora, Vic 3086, Australia
[3] Qilu Univ Technol, Shandong Acad Sci, Jinan 250316, Shandong, Peoples R China
[4] Nat Supercomp Ctr Jinan, Shandong Comp Sci Ctr, Jinan 250101, Shandong, Peoples R China
Funding
Australian Research Council;
Keywords
Distributed learning; federated learning; parallel optimization; communication overhead; adaptive workload; adaptive step size; proximal term; robust aggregation;
DOI
10.1109/TMC.2023.3310248
CLC number
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
Federated Learning (FL) is a machine learning technique that enables multiple clients holding individual datasets to collaboratively train a model without exchanging those datasets. Conventional FL approaches often assign a fixed workload (number of local epochs) and step size (learning rate) to the clients during client-side local training, and weight all collaborating clients' trained model parameters evenly during server-side global aggregation. Consequently, they frequently suffer from data heterogeneity and high communication costs. In this paper, we propose a novel FL approach to mitigate these problems. On the client side, we propose an adaptive model update scheme that allocates the required number of local epochs, dynamically adjusts the learning rate during local training, and regularizes the conventional objective function with a proximal term. On the server side, we propose a robust aggregation strategy that replaces outlier local updates (model weights) prior to aggregation. We provide theoretical convergence results and perform extensive experiments on different data setups over the MNIST, CIFAR-10, and Shakespeare datasets, which show that our FL scheme surpasses the baselines in terms of communication speedup, test-set performance, and global convergence.
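The abstract's two client/server ideas can be illustrated with a minimal NumPy sketch: a proximal local gradient step that pulls each client's weights toward the global model (FedProx-style), and an aggregation rule that supplants outlier client updates before averaging. This is an assumed simplification, not the paper's actual ARFL algorithm; the function names, the MAD-style outlier rule, and the `z` threshold are all illustrative.

```python
import numpy as np

def proximal_sgd_step(w, w_global, grad, lr, mu):
    """One proximal local step: the gradient of the local loss plus
    the gradient of (mu/2)||w - w_global||^2, which pulls w toward
    the global model and limits client drift under heterogeneity."""
    return w - lr * (grad + mu * (w - w_global))

def robust_aggregate(client_weights, z=2.0):
    """Supplant outlier client updates with the coordinate-wise
    median before averaging. A client counts as an outlier when its
    distance to the median exceeds z times the median of all such
    distances (a simple MAD-style rule)."""
    W = np.stack(client_weights)          # shape: (n_clients, dim)
    med = np.median(W, axis=0)
    dists = np.linalg.norm(W - med, axis=1)
    scale = np.median(dists) + 1e-12      # avoid division issues
    outliers = dists > z * scale
    W[outliers] = med                     # replace flagged updates
    return W.mean(axis=0)
```

For example, with three honest clients near [1, 1] and one client reporting [100, 100], the malicious update is replaced by the median, so the aggregate stays close to the honest average rather than being dragged toward 100.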
Pages: 5401 - 5417 (17 pages)
Related Papers
50 records in total (items [31]-[40] shown)
  • [31] Adaptive Network Pruning for Wireless Federated Learning
    Liu, Shengli
    Yu, Guanding
    Yin, Rui
    Yuan, Jiantao
    [J]. IEEE WIRELESS COMMUNICATIONS LETTERS, 2021, 10 (07) : 1572 - 1576
  • [32] Adaptive Vertical Federated Learning on Unbalanced Features
    Zhang, Jie
    Guo, Song
    Qu, Zhihao
    Zeng, Deze
    Wang, Haozhao
    Liu, Qifeng
    Zomaya, Albert Y.
    [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (12) : 4006 - 4018
  • [33] Personalized Federated Learning with Adaptive Batchnorm for Healthcare
    Lu W.
    Wang J.
    Chen Y.
    Qin X.
    Xu R.
    Dimitriadis D.
    Qin T.
    [J]. IEEE Transactions on Big Data, 2024, 10 (06): 1 - 1
  • [34] Federated Causality Learning with Explainable Adaptive Optimization
    Yang, Dezhi
    He, Xintong
    Wang, Jun
    Yu, Guoxian
    Domeniconi, Carlotta
    Zhang, Jinglin
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024, : 16308 - 16315
  • [35] Adaptive Regularization and Resilient Estimation in Federated Learning
    Uddin, Md Palash
    Xiang, Yong
    Zhao, Yao
    Ali, Mumtaz
    Zhang, Yushu
    Gao, Longxiang
    [J]. IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (04) : 1369 - 1381
  • [36] Adaptive privacy-preserving federated learning
    Xiaoyuan Liu
    Hongwei Li
    Guowen Xu
    Rongxing Lu
    Miao He
    [J]. Peer-to-Peer Networking and Applications, 2020, 13 : 2356 - 2366
  • [37] FedSW: Federated learning with adaptive sample weights
    Zhao, Xingying
    Shen, Dong
    [J]. INFORMATION SCIENCES, 2024, 654
  • [38] Adaptive privacy-preserving federated learning
    Liu, Xiaoyuan
    Li, Hongwei
    Xu, Guowen
    Lu, Rongxing
    He, Miao
    [J]. PEER-TO-PEER NETWORKING AND APPLICATIONS, 2020, 13 (06) : 2356 - 2366
  • [39] Adaptive Federated Learning in Presence of Concept Drift
    Canonaco, Giuseppe
    Bergamasco, Alex
    Mongelluzzo, Alessi
    Roveri, Manuel
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021
  • [40] Communication-Efficient Adaptive Federated Learning
    Wang, Yujia
    Lin, Lu
    Chen, Jinghui
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022