Towards Fast and Stable Federated Learning: Confronting Heterogeneity via Knowledge Anchor

Cited: 0
Authors
Chen, Jinqian [1 ]
Zhu, Jihua [1 ]
Zheng, Qinghai [2 ]
Affiliations
[1] Xi'an Jiaotong University, Xi'an, Shaanxi, China
[2] Fuzhou University, Fuzhou, Fujian, China
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023
Keywords
Federated learning; Non-IID; Knowledge preservation
DOI
10.1145/3581783.3612597
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Federated learning encounters a critical challenge of data heterogeneity, adversely affecting the performance and convergence of the federated model. Various approaches have been proposed to address this issue, yet their effectiveness is still limited. Recent studies have revealed that the federated model suffers severe forgetting in local training, leading to global forgetting and performance degradation. Although the analysis provides valuable insights, a comprehensive understanding of the vulnerable classes and their impact factors is yet to be established. In this paper, we aim to bridge this gap by systematically analyzing the forgetting degree of each class during local training across different communication rounds. Our observations are: (1) Both missing and non-dominant classes suffer similarly severe forgetting during local training, while dominant classes show improvement in performance. (2) When dynamically reducing the sample size of a dominant class, catastrophic forgetting occurs abruptly when the proportion of its samples falls below a certain threshold, indicating that the local model struggles to leverage a few samples of a specific class effectively to prevent forgetting. Motivated by these findings, we propose a novel and straightforward algorithm called Federated Knowledge Anchor (FedKA). Assuming that all clients have a single shared sample for each class, the knowledge anchor is constructed before each local training stage by extracting shared samples for missing classes and randomly selecting one sample per class for non-dominant classes. The knowledge anchor is then utilized to correct the gradient of each mini-batch towards the direction of preserving the knowledge of the missing and non-dominant classes. Extensive experimental results demonstrate that our proposed FedKA achieves fast and stable convergence, significantly improving accuracy on popular benchmarks.
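The anchor-construction rule described in the abstract can be sketched as follows. This is an illustrative reading, not the paper's actual interface: the threshold `dominant_frac`, the per-class sample containers, and the shared one-sample-per-class pool are all assumptions for the sketch.

```python
import random
from collections import Counter

def build_knowledge_anchor(local_labels, local_samples_by_class, shared_pool,
                           num_classes, dominant_frac=0.2, seed=0):
    """Build a one-sample-per-class knowledge anchor before local training.

    Hypothetical reading of the construction rule in the abstract:
      - missing classes (no local samples): take the client's shared sample;
      - non-dominant classes (below ``dominant_frac`` of the local data):
        randomly select one local sample;
      - dominant classes contribute nothing, since they do not suffer
        forgetting during local training.
    """
    rng = random.Random(seed)
    counts = Counter(local_labels)
    total = len(local_labels)
    anchor = {}
    for c in range(num_classes):
        n = counts.get(c, 0)
        if n == 0:
            # Missing class: fall back to the globally shared sample.
            anchor[c] = shared_pool[c]
        elif n / total < dominant_frac:
            # Non-dominant class: one randomly chosen local sample.
            anchor[c] = rng.choice(local_samples_by_class[c])
        # Dominant classes are skipped.
    return anchor
```

During local training, the anchor samples would then be appended to (or evaluated alongside) each mini-batch so the update direction also preserves accuracy on the missing and non-dominant classes; the exact gradient-correction rule is given in the paper.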
Pages: 8697-8706
Page count: 10
Related papers
50 records in total
  • [41] Tackling data-heterogeneity variations in federated learning via adaptive aggregate weights
    Yin, Qiaoyun
    Feng, Zhiyong
    Li, Xiaohong
    Chen, Shizhan
    Wu, Hongyue
    Han, Gaoyong
    KNOWLEDGE-BASED SYSTEMS, 2024, 304
  • [42] MetaGater: Fast Learning of Conditional Channel Gated Networks via Federated Meta-Learning
    Lin, Sen
    Yang, Li
    He, Zhezhi
    Fan, Deliang
    Zhang, Junshan
    2021 IEEE 18TH INTERNATIONAL CONFERENCE ON MOBILE AD HOC AND SMART SYSTEMS (MASS 2021), 2021, : 164 - 172
  • [43] Secure and fast asynchronous Vertical Federated Learning via cascaded hybrid optimization
    Wang, Ganyu
    Zhang, Qingsong
    Li, Xiang
    Wang, Boyu
    Gu, Bin
    Ling, Charles X.
    MACHINE LEARNING, 2024, 113 (09) : 6413 - 6451
  • [44] Towards Fair Federated Recommendation Learning: Characterizing the Inter-Dependence of System and Data Heterogeneity
    Maeng, Kiwan
    Lu, Haiyu
    Melis, Luca
    Nguyen, John
    Rabbat, Mike
    Wu, Carole-Jean
    PROCEEDINGS OF THE 16TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2022, 2022, : 156 - 167
  • [45] Towards Fast Network Intrusion Detection based on Efficiency-preserving Federated Learning
    Dong, Tian
    Qiu, Han
    Lu, Jialiang
    Qiu, Meikang
    Fan, Chun
    19TH IEEE INTERNATIONAL SYMPOSIUM ON PARALLEL AND DISTRIBUTED PROCESSING WITH APPLICATIONS (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM 2021), 2021, : 468 - 475
  • [46] Towards Accurate and Fast Federated Learning in End-Edge-Cloud Orchestrated Networks
    Li, Mingze
    Sun, Peng
    Zhou, Huan
    Zhao, Liang
    Liu, Xuxun
    Leung, Victor C. M.
    2023 IEEE 43RD INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS, ICDCS, 2023, : 1079 - 1080
  • [47] Towards Attack-tolerant Federated Learning via Critical Parameter Analysis
    Han, Sungwon
    Park, Sungwon
    Wu, Fangzhao
    Kim, Sundong
    Zhu, Bin
    Xie, Xing
    Cha, Meeyoung
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 4976 - 4985
  • [48] Personalized and privacy-enhanced federated learning framework via knowledge distillation
    Yu, Fangchao
    Wang, Lina
    Zeng, Bo
    Zhao, Kai
    Yu, Rongwei
    NEUROCOMPUTING, 2024, 575
  • [49] Towards Efficient Federated Learning via Vehicle Selection and Resource Optimization in IoV
    Gong, Nan
    Yan, Guozhi
    Zhang, Hao
    Xiao, Ke
    Yang, Zuoxiu
    Li, Chuzhao
    Liu, Kai
    NEURAL COMPUTING FOR ADVANCED APPLICATIONS, NCAA 2024, PT II, 2025, 2182 : 117 - 131
  • [50] Robust Heterogeneous Federated Learning via Data-Free Knowledge Amalgamation
    Ma, Jun
    Fan, Zheng
    Fan, Chaoyu
    Kang, Qi
    ADVANCES IN SWARM INTELLIGENCE, PT II, ICSI 2024, 2024, 14789 : 61 - 71