How to Prevent the Poor Performance Clients for Personalized Federated Learning?

Cited by: 3
Authors
Qu, Zhe [1 ]
Li, Xingyu [2 ]
Han, Xiao
Duan, Rui [3 ]
Shen, Chengchao [1 ]
Chen, Lixing [4 ]
Affiliations
[1] Cent South Univ, Changsha, Peoples R China
[2] Mississippi State Univ, Mississippi State, MS USA
[3] Univ S Florida, Tampa, FL USA
[4] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPR52729.2023.01171
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Personalized federated learning (pFL) collaboratively trains personalized models, providing customized model solutions for individual clients in the presence of heterogeneous, distributed local data. Although many recent studies have applied various algorithms to enhance personalization in pFL, they mainly focus on improving performance from the perspective of the average or top-performing clients. However, some clients may fall into poor performance, and this issue is rarely discussed explicitly. How to prevent such poorly performing clients therefore deserves careful consideration. Intuitively, their poor performance may stem from biased universal information shared with other clients. To address this issue, we propose a novel pFL strategy, called Personalize Locally, Generalize Universally (PLGU). PLGU generalizes the fine-grained universal information and moderates its biased performance through a Layer-Wised Sharpness Aware Minimization (LWSAM) algorithm, while keeping the personalization local. Specifically, we embed the proposed PLGU strategy into the two pFL schemes considered in this paper, with and without a global model, and present the training procedures in detail. Through an in-depth study, we show that the proposed PLGU strategy achieves competitive generalization bounds under both pFL schemes. Extensive experimental results show that all the proposed PLGU-based algorithms achieve state-of-the-art performance.
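For readers unfamiliar with sharpness-aware training, the sketch below illustrates a generic layer-wise Sharpness-Aware Minimization step in PyTorch: each parameter tensor is perturbed along its own normalized gradient before the update gradient is computed at the perturbed point. This is only a minimal illustration under assumed names (layer_wise_sam_step, model, loss_fn, base_optimizer, rho); it is not the paper's LWSAM implementation or the PLGU training procedure.

# Minimal sketch of one layer-wise SAM step (illustrative only, not the paper's LWSAM).
import torch

def layer_wise_sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    """Perturb each parameter tensor (layer) along its own normalized gradient,
    then update using the gradients computed at the perturbed point."""
    # 1) First forward/backward pass: clean gradients.
    base_optimizer.zero_grad()
    loss_fn(model(x), y).backward()

    # 2) Perturb every layer separately (layer-wise gradient normalization).
    eps = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (p.grad.norm() + 1e-12)
            p.add_(e)
            eps[name] = e

    # 3) Second forward/backward pass at the perturbed weights.
    base_optimizer.zero_grad()
    loss_fn(model(x), y).backward()

    # 4) Undo the perturbation and apply the sharpness-aware update.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in eps:
                p.sub_(eps[name])
    base_optimizer.step()

In a pFL setting, such a step would typically run inside each client's local training loop; how PLGU scales or restricts the perturbation across layers is detailed in the paper itself and is not reproduced here.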
Pages: 12167-12176
Number of pages: 10
Related Papers (50 records in total)
  • [1] Yao, Zhiwei; Liu, Jianchun; Xu, Hongli; Wang, Lun; Qian, Chen; Liao, Yunming. Ferrari: A Personalized Federated Learning Framework for Heterogeneous Edge Clients. IEEE Transactions on Mobile Computing, 2024, 23(10): 10031-10045.
  • [2] Ye, Tiandi; Chen, Cen; Wang, Yinggui; Li, Xiang; Gao, Ming. UPFL: Unsupervised Personalized Federated Learning towards New Clients. Proceedings of the 2024 SIAM International Conference on Data Mining (SDM), 2024: 851-859.
  • [3] Wang, Shuyi; Liu, Bing; Zuccon, Guido. How to Forget Clients in Federated Online Learning to Rank? Advances in Information Retrieval (ECIR 2024), Pt. III, 2024, 14610: 105-121.
  • [4] Ma, Chuan; Li, Jun; Ding, Ming; Wei, Kang; Chen, Wen; Poor, H. Vincent. Federated Learning With Unreliable Clients: Performance Analysis and Mechanism Design. IEEE Internet of Things Journal, 2021, 8(24): 17308-17319.
  • [5] Onsu, Murat Arda; Kantarci, Burak; Boukerche, Azzedine. How to cope with malicious federated learning clients: An unsupervised learning-based approach. Computer Networks, 2023, 234.
  • [6] Liu, Xiaofeng; Li, Yinchuan; Wang, Qing; Zhang, Xu; Shao, Yunfeng; Geng, Yanhui. Sparse Personalized Federated Learning. IEEE Transactions on Neural Networks and Learning Systems, 2023, 35(9): 12027-12041.
  • [7] Matsuda, Koji; Sasaki, Yuya; Xiao, Chuan; Onizuka, Makoto. Benchmark for Personalized Federated Learning. IEEE Open Journal of the Computer Society, 2024, 5: 2-13.
  • [8] Baek, Jinheon; Jeong, Wonyong; Jin, Jiongdao; Yoon, Jaehong; Hwang, Sung Ju. Personalized Subgraph Federated Learning. International Conference on Machine Learning, Vol. 202, 2023.
  • [9] Tan, Alysa Ziying; Yu, Han; Cui, Lizhen; Yang, Qiang. Toward Personalized Federated Learning. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(12): 9587-9603.
  • [10] Gurung, Dev; Pokhrel, Shiva Raj. A Personalized Quantum Federated Learning. Proceedings of the 8th Asia-Pacific Workshop on Networking (APNET 2024), 2024: 175-176.