Towards Fast and Stable Federated Learning: Confronting Heterogeneity via Knowledge Anchor

Cited by: 0
Authors
Chen, Jinqian [1 ]
Zhu, Jihua [1 ]
Zheng, Qinghai [2 ]
Affiliations
[1] Xi'an Jiaotong University, Xi'an, Shaanxi, China
[2] Fuzhou University, Fuzhou, Fujian, China
Source
Proceedings of the 31st ACM International Conference on Multimedia (MM 2023), 2023
Keywords
Federated learning; Non-IID; Knowledge preservation
DOI
10.1145/3581783.3612597
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Federated learning encounters a critical challenge of data heterogeneity, adversely affecting the performance and convergence of the federated model. Various approaches have been proposed to address this issue, yet their effectiveness is still limited. Recent studies have revealed that the federated model suffers severe forgetting in local training, leading to global forgetting and performance degradation. Although the analysis provides valuable insights, a comprehensive understanding of the vulnerable classes and their impact factors is yet to be established. In this paper, we aim to bridge this gap by systematically analyzing the forgetting degree of each class during local training across different communication rounds. Our observations are: (1) Both missing and non-dominant classes suffer similar severe forgetting during local training, while dominant classes show improvement in performance. (2) When dynamically reducing the sample size of a dominant class, catastrophic forgetting occurs abruptly when the proportion of its samples is below a certain threshold, indicating that the local model struggles to leverage a few samples of a specific class effectively to prevent forgetting. Motivated by these findings, we propose a novel and straightforward algorithm called Federated Knowledge Anchor (FedKA). Assuming that all clients have a single shared sample for each class, the knowledge anchor is constructed before each local training stage by extracting shared samples for missing classes and randomly selecting one sample per class for non-dominant classes. The knowledge anchor is then utilized to correct the gradient of each mini-batch towards the direction of preserving the knowledge of the missing and non-dominant classes. Extensive experimental results demonstrate that our proposed FedKA achieves fast and stable convergence, significantly improving accuracy on popular benchmarks.
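As a concrete illustration of the mechanism summarized in the abstract, the sketch below shows one plausible way a client might build such a knowledge anchor and use it during a local mini-batch update. It is a minimal PyTorch sketch under stated assumptions: the function names (`build_knowledge_anchor`, `local_step`), the `dominance_threshold` and `anchor_weight` values, and the use of a plain auxiliary cross-entropy term on the anchor samples are illustrative stand-ins, not the paper's actual gradient-correction rule.

```python
# Hypothetical sketch of the knowledge-anchor idea described in the abstract.
# The exact FedKA gradient correction is not specified here; an auxiliary
# cross-entropy loss on the anchor samples is used as a simple stand-in.
import torch
import torch.nn.functional as F


def build_knowledge_anchor(local_dataset, shared_samples, num_classes,
                           dominance_threshold=0.2):
    """Collect one sample per missing/non-dominant class before local training.

    local_dataset       : list of (x, y) pairs held by the client (x: tensors of equal shape)
    shared_samples      : dict mapping class id -> one globally shared sample
    dominance_threshold : fraction of local data above which a class counts as
                          dominant (illustrative value, not from the paper)
    """
    counts = torch.zeros(num_classes)
    for _, y in local_dataset:
        counts[y] += 1
    fractions = counts / max(counts.sum().item(), 1)

    anchor_x, anchor_y = [], []
    for c in range(num_classes):
        if fractions[c] >= dominance_threshold:
            continue                                  # dominant class: no anchor needed
        if counts[c] == 0:
            anchor_x.append(shared_samples[c])        # missing class: use the shared sample
        else:
            # non-dominant class: randomly pick one local sample
            candidates = [x for x, y in local_dataset if y == c]
            anchor_x.append(candidates[torch.randint(len(candidates), (1,)).item()])
        anchor_y.append(c)

    if not anchor_x:                                  # every class is dominant locally
        return None
    return torch.stack(anchor_x), torch.tensor(anchor_y)


def local_step(model, optimizer, batch, anchor, anchor_weight=1.0):
    """One mini-batch update whose gradient also accounts for the anchor classes."""
    x, y = batch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    if anchor is not None:
        anchor_x, anchor_y = anchor
        # Auxiliary term pulling the update toward also fitting the anchor
        # samples, i.e. toward preserving missing/non-dominant classes.
        loss = loss + anchor_weight * F.cross_entropy(model(anchor_x), anchor_y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full communication round, each client would call build_knowledge_anchor once before its local training stage and then apply local_step to every mini-batch, before returning the updated model to the server for aggregation.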
Pages: 8697-8706
Number of pages: 10
Related Papers (50 records in total; entries [21]-[30] shown)
  • [21] Chang, Shan; Liu, Ye; Lin, Zhijian; Zhu, Hongzi; Zhu, Bingzhu; Wang, Cong. FedTrojan: Corrupting Federated Learning via Zero-Knowledge Federated Trojan Attacks. IEEE International Workshop on Quality of Service (IWQoS), 2024.
  • [22] Wu, Chuhan; Wu, Fangzhao; Lyu, Lingjuan; Huang, Yongfeng; Xie, Xing. Communication-efficient federated learning via knowledge distillation. Nature Communications, 2022, 13(1).
  • [23] Wang, Jiaqi; Yang, Xingyi; Cui, Suhan; Che, Liwei; Lyu, Lingjuan; Xu, Dongkuan; Ma, Fenglong. Towards Personalized Federated Learning via Heterogeneous Model Reassembly. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
  • [24] Shao, Huikai; Zhong, Dexing. Towards privacy palmprint recognition via federated hash learning. Electronics Letters, 2020, 56(25): 1418-1420.
  • [25] Song, Yalin; Liu, Hang; Zhao, Shuai; Jin, Haozhe; Yu, Junyang; Liu, Yanhong; Zhai, Rui; Wang, Longge. Fedadkd: Heterogeneous federated learning via adaptive knowledge distillation. Pattern Analysis and Applications, 2024, 27(4).
  • [26] Pan, Chenglu; Xu, Jiarong; Yu, Yue; Yang, Ziqi; Wu, Qingbiao; Wang, Chunping; Chen, Lei; Yang, Yang. Towards Fair Graph Federated Learning via Incentive Mechanisms. Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024, 38(13): 14499-14507.
  • [27] Shi, Hongrui; Radu, Valentin. Towards Federated Learning with Attention Transfer to Mitigate System and Data Heterogeneity of Clients. Proceedings of the 4th International Workshop on Edge Systems, Analytics and Networking (EdgeSys'21), 2021: 61-66.
  • [28] Zhou, Xu; Lei, Xinyu; Yang, Cong; Shi, Yichun; Zhang, Xiao; Shi, Jingwen. Handling Data Heterogeneity for IoT Devices in Federated Learning: A Knowledge Fusion Approach. IEEE Internet of Things Journal, 2024, 11(5): 8090-8104.
  • [29] Qiao, Yu; Adhikary, Apurba; Kim, Ki Tae; Zhang, Chaoning; Hong, Choong Seon. Knowledge Distillation Assisted Robust Federated Learning: Towards Edge Intelligence. ICC 2024 - IEEE International Conference on Communications, 2024: 843-848.
  • [30] Wang, Sai; Gong, Yi. Fast-Convergence Federated Edge Learning via Bilevel Optimization. 2023 28th Asia Pacific Conference on Communications (APCC 2023), 2023: 87-92.