Adaptive client selection with personalization for communication efficient Federated Learning

Cited by: 2
Authors
de Souza, Allan M. [1 ]
Maciel, Filipe [1 ,4 ]
da Costa, Joahannes B. D. [1 ]
Bittencourt, Luiz F. [1 ]
Cerqueira, Eduardo [2 ]
Loureiro, Antonio A. F. [3 ]
Villas, Leandro A. [1 ]
Affiliations
[1] Univ Estadual Campinas, Campinas, Brazil
[2] Fed Univ Para, Belem, Brazil
[3] Univ Fed Minas Gerais, Belo Horizonte, Brazil
[4] Univ Fed Ceara, Russas, Brazil
Keywords
Federated Learning; Client selection; Personalization; Communication efficient;
DOI
10.1016/j.adhoc.2024.103462
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Federated Learning (FL) is a distributed approach to collaboratively training machine learning models. FL requires a high level of communication between the devices and a central server, thus imposing several challenges, including communication bottlenecks and network scalability. This article introduces ACSP-FL, a solution to reduce the overall communication and computation costs for training a model in FL environments. ACSP-FL employs a client selection strategy that dynamically adapts the number of devices training the model and the number of rounds required to achieve convergence. Moreover, ACSP-FL enables model personalization to improve clients' performance. A use case based on human activity recognition datasets shows the impact and benefits of ACSP-FL compared to state-of-the-art approaches. Experimental evaluations show that ACSP-FL minimizes the overall communication and computation overheads to train a model and enables the system to converge efficiently. In particular, ACSP-FL reduces communication by up to 95% compared to literature approaches while providing good convergence even in scenarios where data is distributed in a non-independent and identically distributed (non-IID) way across client devices.
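The abstract describes a client selection strategy that dynamically adapts the number of participating devices per round. The paper's actual ACSP-FL algorithm is not reproduced here; the following is only a minimal illustrative sketch of the general idea of adaptive client selection, where the server shrinks the participant pool as the global model's round-to-round accuracy gain flattens. All function names and the 0.05 gain threshold are hypothetical choices for this sketch, not values from the paper.

```python
import random


def select_clients(clients, accuracy_history, min_clients=2, max_clients=10):
    """Illustrative adaptive client selection (not the ACSP-FL algorithm).

    Early rounds sample many clients; as the accuracy improvement between
    consecutive rounds shrinks, fewer clients are selected, which reduces
    per-round communication.
    """
    if len(accuracy_history) < 2:
        n = max_clients  # no trend yet: train broadly
    else:
        gain = accuracy_history[-1] - accuracy_history[-2]
        # Map the recent gain to a fraction of the pool; 0.05 is an
        # arbitrary saturation threshold chosen for this sketch.
        frac = min(1.0, max(0.0, gain / 0.05))
        n = max(min_clients, int(round(frac * max_clients)))
    return random.sample(clients, min(n, len(clients)))
```

For example, with a flat accuracy history the server would fall back to the minimum pool size, while a large recent gain keeps participation high.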
Pages: 13
Related Papers
50 in total
  • [1] Compressed Client Selection for Efficient Communication in Federated Learning
    Mohamed, Aissa Hadj
    Assumpcao, Nicolas R. G.
    Astudillo, Carlos A.
    de Souza, Allan M.
    Bittencourt, Luiz F.
    Villas, Leandro A.
    [J]. 2023 IEEE 20TH CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2023,
  • [2] Greedy Shapley Client Selection for Communication-Efficient Federated Learning
    Singhal, Pranava
    Pandey, Shashi Raj
    Popovski, Petar
    [J]. IEEE Networking Letters, 2024, 6 (02): : 134 - 138
  • [3] Adaptive client and communication optimizations in Federated Learning
    Wu, Jiagao
    Wang, Yu
    Shen, Zhangchi
    Liu, Linfeng
    [J]. INFORMATION SYSTEMS, 2023, 116
  • [4] An Efficient Client Selection for Wireless Federated Learning
    Chen, Jingyi
    Wang, Qiang
    Zhang, Wenqi
    [J]. 2023 28TH ASIA PACIFIC CONFERENCE ON COMMUNICATIONS, APCC 2023, 2023, : 291 - 296
  • [5] Bandit-based Communication-Efficient Client Selection Strategies for Federated Learning
    Cho, Yae Jee
    Gupta, Samarth
    Joshi, Gauri
    Yagan, Osman
    [J]. 2020 54TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2020, : 1066 - 1069
  • [6] On Adaptive Client/Miner Selection for Efficient Blockchain-Based Decentralized Federated Learning
    Tomimasu, Yuta
    Sato, Koya
    [J]. 2023 IEEE 98TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-FALL, 2023,
  • [7] Adaptive client selection and model aggregation for heterogeneous federated learning
    Zhai, Rui
    Jin, Haozhe
    Gong, Wei
    Lu, Ke
    Liu, Yanhong
    Song, Yalin
    Yu, Junyang
    [J]. MULTIMEDIA SYSTEMS, 2024, 30 (04)
  • [8] AQUILA: Communication Efficient Federated Learning With Adaptive Quantization in Device Selection Strategy
    Zhao, Zihao
    Mao, Yuzhu
    Shi, Zhenpeng
    Liu, Yang
    Lan, Tian
    Ding, Wenbo
    Zhang, Xiao-Ping
    [J]. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (06) : 7363 - 7376
  • [9] Communication-Efficient Adaptive Federated Learning
    Wang, Yujia
    Lin, Lu
    Chen, Jinghui
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [10] Reducing communication in federated learning via efficient client sampling
    Ribero, Monica
    Vikalo, Haris
    [J]. PATTERN RECOGNITION, 2024, 148