Optimized and Adaptive Federated Learning for Straggler-Resilient Device Selection

Cited by: 3
Authors
Banerjee, Sourasekhar [1 ]
Vu, Xuan-Son [1 ]
Bhuyan, Monowar [1 ]
Affiliations
[1] Umea Univ, Dept Comp Sci, SE-90781 Umea, Sweden
Keywords
Federated learning; Adaptive device selection; Statistical heterogeneity; Multi-objective optimization; Straggler-resilient device;
DOI
10.1109/IJCNN55064.2022.9892777
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated Learning (FL) has emerged as a promising distributed learning paradigm in which data samples are distributed over massively connected devices in an IID (independent and identically distributed) or non-IID manner. FL follows a collaborative training approach in which each device trains a local model on its local data, and the server builds a global model by aggregating the local models' parameters. However, FL is vulnerable to system heterogeneity when local devices have varying computational, storage, and communication capabilities over time. The presence of stragglers, i.e., low-performing devices, in the learning process severely impacts the scalability of FL algorithms and significantly delays convergence. To mitigate this problem, we propose Fed-MOODS, a Multi-Objective Optimization-based Device Selection approach that reduces the effect of stragglers in the FL process. The primary optimization criteria are to maximize (i) the available processing capacity of each device, (ii) the available memory of each device, and (iii) the bandwidth capacity of the participating devices. The multi-objective optimization ranks devices from fast to slow; the approach involves faster devices in early global rounds and gradually incorporates slower devices from later Pareto fronts to improve the model's accuracy. The overall training time of Fed-MOODS is 18x and 148x faster than the baseline (FedAvg with random device selection) on MNIST and FMNIST non-IID data, respectively. Fed-MOODS is extensively evaluated under multiple experimental settings, and the results show that it significantly improves the model's convergence and performance. Fed-MOODS maintains fairness in the prioritized participation of devices and in the model for both IID and non-IID settings.
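The device-prioritization idea in the abstract — non-dominated sorting of devices into Pareto fronts over processing capacity, memory, and bandwidth, then admitting slower fronts round by round — can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function names, device tuples, and the one-front-per-round admission policy are assumptions.

```python
def dominates(a, b):
    """a dominates b if a >= b in every objective and > in at least one
    (all three objectives — CPU, memory, bandwidth — are maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_fronts(devices):
    """Non-dominated sorting: split devices into Pareto fronts F1 (fastest)
    through Fk (slowest). `devices` maps device id -> (cpu, mem, bw)."""
    remaining = dict(devices)
    fronts = []
    while remaining:
        front = [i for i, obj in remaining.items()
                 if not any(dominates(other, obj)
                            for j, other in remaining.items() if j != i)]
        fronts.append(front)
        for i in front:
            del remaining[i]
    return fronts

def select_devices(fronts, round_idx, per_round=2):
    """Involve faster devices in early global rounds; widen the candidate
    pool by one additional (slower) front each round."""
    pool = [d for front in fronts[:round_idx + 1] for d in front]
    return pool[:per_round]
```

For example, a device whose CPU, memory, and bandwidth are all at least as good as another's (and strictly better in one) lands in an earlier front and is therefore eligible for earlier global rounds.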
Pages: 9
Related Papers
50 records in total
  • [1] ADAPTIVE NODE PARTICIPATION FOR STRAGGLER-RESILIENT FEDERATED LEARNING
    Reisizadeh, Amirhossein
    Tziotis, Isidoros
    Hassani, Hamed
    Mokhtari, Aryan
    Pedarsani, Ramtin
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 8762 - 8766
  • [2] Straggler-Resilient Secure Aggregation for Federated Learning
    Schlegel, Reent
    Kumar, Siddhartha
    Rosnes, Eirik
    Graell i Amat, Alexandre
    [J]. 2022 30TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2022), 2022, : 712 - 716
  • [3] A Straggler-resilient Federated Learning Framework for Non-IID Data Based on Harmonic Coding
    Tang, Weiheng
    Chen, Lin
    Chen, Xu
    [J]. 2023 19TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING, MSN 2023, 2023, : 512 - 519
  • [4] Straggler-Resilient Differentially-Private Decentralized Learning
    Yakimenka, Yauhen
    Weng, Chung-Wei
    Lin, Hsuan-Yin
    Rosnes, Eirik
    Kliewer, Jorg
    [J]. 2022 IEEE INFORMATION THEORY WORKSHOP (ITW), 2022, : 708 - 713
  • [5] ASR-Fed: agnostic straggler-resilient semi-asynchronous federated learning technique for secured drone network
    Ihekoronye, Vivian Ukamaka
    Nwakanma, Cosmas Ifeanyi
    Kim, Dong-Seong
    Lee, Jae Min
    [J]. INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, : 5303 - 5319
  • [6] DSAG: A Mixed Synchronous-Asynchronous Iterative Method for Straggler-Resilient Learning
    Severinson, Albin
    Rosnes, Eirik
    El Rouayheb, Salim
    Amat, Alexandre Graell i
    [J]. IEEE TRANSACTIONS ON COMMUNICATIONS, 2023, 71 (02) : 808 - 822
  • [7] Straggler-Resilient Asynchronous Decentralized ADMM for Consensus Optimization
    He, Jeannie
    Xiao, Ming
    Skoglund, Mikael
    [J]. 2024 58TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS, CISS, 2024,
  • [8] Generalized Pseudorandom Secret Sharing and Efficient Straggler-Resilient Secure Computation
    Benhamouda, Fabrice
    Boyle, Elette
    Gilboa, Niv
    Halevi, Shai
    Ishai, Yuval
    Nof, Ariel
    [J]. THEORY OF CRYPTOGRAPHY, TCC 2021, PT II, 2021, 13043 : 129 - 161
  • [9] Optimized Device Selection and Power Control for Wireless Federated Learning
    Guo, Wei
    Li, Ran
    Huang, Chuan
    Qin, Xiaoqi
    Shen, Kaiming
    Zhang, Wei
    [J]. 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 4710 - 4715
  • [10] Adaptive Deadline Determination for Mobile Device Selection in Federated Learning
    Lee, Jaewook
    Ko, Haneul
    Pack, Sangheon
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71 (03) : 3367 - 3371