Towards Fast and Stable Federated Learning: Confronting Heterogeneity via Knowledge Anchor

Citations: 0
Authors
Chen, Jinqian [1]
Zhu, Jihua [1]
Zheng, Qinghai [2]
Affiliations
[1] Xi'an Jiaotong University, Xi'an, Shaanxi, China
[2] Fuzhou University, Fuzhou, Fujian, China
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Keywords
Federated learning; Non-IID; Knowledge preservation
DOI
10.1145/3581783.3612597
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Federated learning encounters a critical challenge of data heterogeneity, adversely affecting the performance and convergence of the federated model. Various approaches have been proposed to address this issue, yet their effectiveness is still limited. Recent studies have revealed that the federated model suffers severe forgetting in local training, leading to global forgetting and performance degradation. Although the analysis provides valuable insights, a comprehensive understanding of the vulnerable classes and their impact factors is yet to be established. In this paper, we aim to bridge this gap by systematically analyzing the forgetting degree of each class during local training across different communication rounds. Our observations are: (1) Both missing and non-dominant classes suffer similar severe forgetting during local training, while dominant classes show improvement in performance. (2) When dynamically reducing the sample size of a dominant class, catastrophic forgetting occurs abruptly when the proportion of its samples is below a certain threshold, indicating that the local model struggles to leverage a few samples of a specific class effectively to prevent forgetting. Motivated by these findings, we propose a novel and straightforward algorithm called Federated Knowledge Anchor (FedKA). Assuming that all clients have a single shared sample for each class, the knowledge anchor is constructed before each local training stage by extracting shared samples for missing classes and randomly selecting one sample per class for non-dominant classes. The knowledge anchor is then utilized to correct the gradient of each mini-batch towards the direction of preserving the knowledge of the missing and non-dominant classes. Extensive experimental results demonstrate that our proposed FedKA achieves fast and stable convergence, significantly improving accuracy on popular benchmarks.
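To make the algorithm description above concrete, the following is a minimal, hypothetical Python/PyTorch sketch of a knowledge-anchor-style local update. The helper names (build_knowledge_anchor, local_update), the data structures (a per-class dictionary of local samples plus one globally shared sample per class), and the specific correction rule (an auxiliary cross-entropy term on the anchor, weighted by lam) are assumptions made for illustration; the paper's actual FedKA gradient-correction mechanism may differ.

```python
# Hypothetical sketch of a knowledge-anchor-style local update.
# Names and the exact correction rule are assumptions, not the paper's code.
import random
import torch
import torch.nn.functional as F


def build_knowledge_anchor(local_samples_by_class, shared_sample_by_class,
                           dominant_classes, num_classes):
    """Collect one sample per missing or non-dominant class before local training.

    - Missing classes: fall back to the single globally shared sample.
    - Non-dominant classes: pick one local sample at random.
    - Dominant classes: skipped, since local data already covers them.
    """
    anchor_x, anchor_y = [], []
    for c in range(num_classes):
        if c in dominant_classes:
            continue
        samples = local_samples_by_class.get(c, [])
        x = random.choice(samples) if samples else shared_sample_by_class[c]
        anchor_x.append(x)
        anchor_y.append(c)
    return torch.stack(anchor_x), torch.tensor(anchor_y)


def local_update(model, optimizer, loader, anchor_x, anchor_y, lam=1.0):
    """One local epoch whose mini-batch gradient is corrected by an anchor term."""
    model.train()
    for x, y in loader:
        optimizer.zero_grad()
        batch_loss = F.cross_entropy(model(x), y)
        # Auxiliary loss on the anchor pulls each update toward directions
        # that also preserve accuracy on missing / non-dominant classes.
        anchor_loss = F.cross_entropy(model(anchor_x), anchor_y)
        (batch_loss + lam * anchor_loss).backward()
        optimizer.step()
```

The intent mirrors the abstract: dominant classes are left to the local data, while missing and non-dominant classes each contribute exactly one anchor sample, so every local step also receives a signal that discourages forgetting them.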
Pages: 8697-8706
Number of pages: 10