FedCO: Communication-Efficient Federated Learning via Clustering Optimization

Cited by: 6
Authors
Al-Saedi, Ahmed A. [1 ]
Boeva, Veselka [1 ]
Casalicchio, Emiliano [1 ,2 ]
Affiliations
[1] Blekinge Inst Technol, Dept Comp Sci, SE-37179 Karlskrona, Sweden
[2] Sapienza Univ Rome, Dept Comp Sci, I-00185 Rome, Italy
Keywords
federated learning; Internet of Things; clustering; communication efficiency; convolutional neural network
DOI
10.3390/fi14120377
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
Federated Learning (FL) provides a promising solution for preserving privacy when learning shared models on distributed devices, without sharing local data with a central server. However, most existing work shows that FL incurs high communication costs. To address this challenge, we propose a clustering-based federated solution, entitled Federated Learning via Clustering Optimization (FedCO), which optimizes model aggregation and reduces communication costs. To reduce the communication costs, we first divide the participating workers into groups based on the similarity of their model parameters and then select only one representative, the best-performing worker, from each group to communicate with the central server. Then, in each successive round, we apply the Silhouette validation technique to check whether each representative still fits tightly within its current cluster. If not, the representative is either moved into a more appropriate cluster or forms a singleton cluster. Finally, we use split optimization to update and improve the whole clustering solution. The updated clustering is used to select new cluster representatives. In this way, the proposed FedCO approach updates clusters by repeatedly evaluating them and splitting them when doing so improves the workers' partitioning. The potential of the proposed method is demonstrated on publicly available datasets and LEAF datasets under both IID and Non-IID data distribution settings. The experimental results indicate that our proposed FedCO approach is superior to state-of-the-art FL approaches, i.e., FedAvg, FedProx, and CMFL, in reducing communication costs and achieving better accuracy in both the IID and Non-IID cases.
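
Editor's note: the abstract above outlines a per-round workflow (parameter-similarity clustering, best-performing representative selection, a Silhouette check, and cluster splitting). The minimal Python sketch below only illustrates that workflow under assumptions of ours; it is not the authors' implementation. In particular, k-means on flattened model parameters stands in for the grouping step, and the cluster count k, the accuracy array, and the negative-silhouette splitting rule are illustrative choices.

    # Illustrative sketch of a FedCO-style round (assumptions noted above),
    # not the reference implementation from the paper.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_samples

    def cluster_workers(param_vectors, k):
        """Group workers by similarity of their flattened model parameters."""
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(param_vectors)
        return km.labels_

    def pick_representatives(labels, accuracies):
        """Select the best-performing worker of each cluster as its representative."""
        reps = {}
        for c in np.unique(labels):
            members = np.where(labels == c)[0]
            reps[c] = members[np.argmax(accuracies[members])]
        return reps

    def refine_clusters(param_vectors, labels, reps):
        """Silhouette check: a representative with a poor score is split off into a
        singleton cluster (a simplified stand-in for the paper's split optimization)."""
        scores = silhouette_samples(param_vectors, labels)
        next_label = labels.max() + 1
        for c, rep in reps.items():
            if scores[rep] < 0:           # representative no longer tight with its cluster
                labels[rep] = next_label  # form a singleton cluster
                next_label += 1
        return labels

In such a sketch, the server would call cluster_workers once at the start, then alternate pick_representatives and refine_clusters across communication rounds, aggregating only the representatives' updates.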
Pages: 27
Related Papers
50 records in total
  • [1] Communication-efficient federated learning via knowledge distillation
    Wu, Chuhan
    Wu, Fangzhao
    Lyu, Lingjuan
    Huang, Yongfeng
    Xie, Xing
    NATURE COMMUNICATIONS, 2022, 13 (01)
  • [2] Communication-efficient federated learning
    Chen, Mingzhe
    Shlezinger, Nir
    Poor, H. Vincent
    Eldar, Yonina C.
    Cui, Shuguang
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (17)
  • [3] Communication-Efficient Federated Learning via Predictive Coding
    Yue, Kai
    Jin, Richeng
    Wong, Chau-Wai
    Dai, Huaiyu
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2022, 16 (03) : 369 - 380
  • [4] Communication-efficient federated learning via personalized filter pruning
    Min, Qi
    Luo, Fei
    Dong, Wenbo
    Gu, Chunhua
    Ding, Weichao
    INFORMATION SCIENCES, 2024, 678
  • [5] Communication-efficient clustered federated learning via model distance
    Zhang, Mao
    Zhang, Tie
    Cheng, Yifei
    Bao, Changcun
    Cao, Haoyu
    Jiang, Deqiang
    Xu, Linli
    MACHINE LEARNING, 2024, 113 (06) : 3869 - 3888
  • [6] PBFL: Communication-Efficient Federated Learning via Parameter Predicting
    Li, Kaiju
    Xiao, Chunhua
    COMPUTER JOURNAL, 2023, 66 (03): : 626 - 642
  • [7] Communication-efficient Federated Learning via Quantized Clipped SGD
    Jia, Ninghui
    Qu, Zhihao
    Ye, Baoliu
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, WASA 2021, PT I, 2021, 12937 : 559 - 571
  • [8] Communication-Efficient Vertical Federated Learning
    Khan, Afsana
    ten Thij, Marijn
    Wilbik, Anna
    ALGORITHMS, 2022, 15 (08)
  • [9] Communication-Efficient Federated Learning via Quantized Compressed Sensing
    Oh, Yongjeong
    Lee, Namyoon
    Jeon, Yo-Seb
    Poor, H. Vincent
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2023, 22 (02) : 1087 - 1100