Efficient Client Sampling with Compression in Heterogeneous Federated Learning

Cited: 0
Authors
Marnissi, Ouiame [1 ]
El Hammouti, Hajar [1 ]
Bergou, El Houcine [1 ]
Affiliations
[1] Mohammed VI Polytechnic University (UM6P), College of Computing, Ben Guerir, Morocco
Keywords
Federated learning; Client sampling; Heterogeneity; Resource allocation;
DOI
10.1109/INFOCOMWKSHPS61880.2024.10620859
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Classification Code
081202
Abstract
Federated Learning (FL) has emerged as a promising decentralized machine learning (ML) paradigm in which distributed clients collaboratively train models without sharing their private data. However, because of their limited resources and heterogeneous properties, only a small subset of clients can participate at a given time. Furthermore, the high dimensionality of ML models incurs a massive communication overhead that considerably slows down the convergence of FL. To address these challenges, we propose FedHSC, a framework that accounts for both system and statistical heterogeneity. Specifically, at each communication round, clients are sampled based on their data properties combined with the importance of their local learning updates. After completing local training, the selected clients share compressed updates with the server for aggregation, and the compression rate is adjusted per client to meet the communication delay requirement. Experimental results on CIFAR-10 show the efficiency of our approach and its robustness to non-IID data.
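The abstract describes a round structure of importance-aware client sampling, local training, per-client compression sized to a delay budget, and server-side aggregation. The sketch below illustrates one such round under stated assumptions, since the abstract does not give the exact formulation: sampling probability is taken proportional to update importance times local data share, top-k sparsification stands in for the compression operator, and the delay model is simply payload bits divided by bandwidth. All names here (fedhsc_round, local_update, bandwidth_bps, etc.) are hypothetical and not from the paper.

```python
import numpy as np

def top_k_compress(update, k):
    """Keep the k largest-magnitude entries of the update, zero the rest
    (an illustrative compression operator, not necessarily the paper's)."""
    flat = update.ravel()
    if k >= flat.size:
        return update.copy()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(update.shape)

def sample_clients(importances, data_sizes, m, rng):
    """Sample m distinct clients with probability proportional to
    (update importance) x (local data share): one plausible reading of
    'data properties combined with the importance of their local update'."""
    scores = np.asarray(importances, dtype=float) * np.asarray(data_sizes, dtype=float)
    probs = scores / scores.sum()
    return rng.choice(len(scores), size=m, replace=False, p=probs)

def choose_k(dim, bandwidth_bps, delay_budget_s, bits_per_entry=64):
    """Largest number of (index, value) pairs that fits the round's delay budget,
    under a simple payload_bits / bandwidth delay model."""
    budget_bits = bandwidth_bps * delay_budget_s
    return int(max(1, min(dim, budget_bits // bits_per_entry)))

def fedhsc_round(global_w, clients, m, delay_budget_s, rng):
    """One FedHSC-style round: sample, train locally, compress to meet the
    delay budget, then aggregate. Each client dict carries assumed fields:
    'local_update' (callable w -> update array), 'importance', 'data_size',
    'bandwidth_bps'."""
    chosen = sample_clients([c["importance"] for c in clients],
                            [c["data_size"] for c in clients], m, rng)
    agg = np.zeros_like(global_w)
    total = 0.0
    for i in chosen:
        c = clients[i]
        update = c["local_update"](global_w)                      # local training
        k = choose_k(update.size, c["bandwidth_bps"], delay_budget_s)
        agg += c["data_size"] * top_k_compress(update, k)         # compressed upload
        total += c["data_size"]
    return global_w + agg / total                                 # data-size-weighted aggregation
```

In this reading, the per-client k plays the role of the adjustable compression rate: a client with lower bandwidth transmits fewer coordinates, so every selected client can meet the same round deadline.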
Pages: 2
Related Papers (50 in total; entries [31]-[40] shown)
  • [31] Adaptive Client Sampling in Federated Learning via Online Learning with Bandit Feedback
    Zhao, Boxin
    Wang, Lingxiao
    Liu, Ziqi
    Zhang, Zhiqiang
    Zhou, Jun
    Chen, Chaochao
    Kolar, Mladen
    JOURNAL OF MACHINE LEARNING RESEARCH, 2025, 26 : 1 - 67
  • [32] Communication-Efficient Federated Learning With Adaptive Aggregation for Heterogeneous Client-Edge-Cloud Network
    Luo, Long
    Zhang, Chi
    Yu, Hongfang
    Sun, Gang
    Luo, Shouxi
    Dustdar, Schahram
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (06) : 3241 - 3255
  • [33] Learning Client Selection Strategy for Federated Learning across Heterogeneous Mobile Devices
    Zhang, Sai Qian
    Lin, Jieyu
    Zhang, Qi
    Chen, Yu-Jia
    2024 25TH INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN, ISQED 2024, 2024,
  • [34] Model Compression for Communication Efficient Federated Learning
    Shah, Suhail Mohmad
    Lau, Vincent K. N.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (09) : 5937 - 5951
  • [35] Differentially Private Federated Learning with Shuffling and Client Self-Sampling
    Girgis, Antonious M.
    Data, Deepesh
    Diggavi, Suhas
    2021 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2021, : 338 - 343
  • [36] Tackling System and Statistical Heterogeneity for Federated Learning with Adaptive Client Sampling
    Luo, Bing
    Xiao, Wenli
    Wang, Shiqiang
    Huang, Jianwei
    Tassiulas, Leandros
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2022), 2022, : 1739 - 1748
  • [37] Federated Learning for Heterogeneous Mobile Edge Device: A Client Selection Game
    Liu, Tongfei
    Wang, Hui
    Ma, Maode
    2022 18TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING, MSN, 2022, : 897 - 902
  • [38] Target informed client recruitment for efficient federated learning in healthcare
    Scheltjens, Vincent
    Momo, Lyse Naomi Wamba
    Verbeke, Wouter
    De Moor, Bart
    BMC MEDICAL INFORMATICS AND DECISION MAKING, 2024, 24 (01)
  • [39] Adaptive client selection with personalization for communication efficient Federated Learning
    de Souza, Allan M.
    Maciel, Filipe
    da Costa, Joahannes B. D.
    Bittencourt, Luiz F.
    Cerqueira, Eduardo
    Loureiro, Antonio A. F.
    Villas, Leandro A.
    AD HOC NETWORKS, 2024, 157
  • [40] Stabilizing and Accelerating Federated Learning on Heterogeneous Data With Partial Client Participation
    Zhang, Hao
    Li, Chenglin
    Dai, Wenrui
    Zheng, Ziyang
    Zou, Junni
    Xiong, Hongkai
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47 (01) : 67 - 83