Towards Long-Term Remembering in Federated Continual Learning

Cited by: 0
Authors
Zhao, Ziqin [1 ]
Lyu, Fan [2 ]
Li, Linyan [3 ]
Hu, Fuyuan [4 ]
Gu, Minming [5 ]
Sun, Li [6 ]
Affiliations
[1] Suzhou Univ Sci & Technol, Suzhou 215000, Peoples R China
[2] CASIA, CRIPAC, MAIS, Beijing 100000, Peoples R China
[3] Suzhou Inst Trade & Commerce, Suzhou 215000, Peoples R China
[4] Suzhou Key Lab Intelligent Low Carbon Technol Appl, Suzhou 215009, Peoples R China
[5] Jiangsu Ind Intelligent & Low Carbon Technol Engn, Suzhou 215000, Peoples R China
[6] Xi An Jiao Tong Univ, Xian 710000, Peoples R China
Keywords
Federated learning; Continual learning; Long-term remembering; Fisher information;
DOI
10.1007/s12559-024-10314-z
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Background: Federated Continual Learning (FCL) learns from distributed data on edge devices as knowledge arrives incrementally. However, current FCL methods struggle to retain long-term memories on the server. Method: In this paper, we introduce Fisher INformation Accumulation Learning (FINAL) to address catastrophic forgetting in FCL. First, we accumulate a global Fisher information matrix task by task from a federated Fisher information matrix formed across clients, so that long-term knowledge is retained. Second, we present a novel multi-node collaborative integration strategy for assembling the federated Fisher, which reveals the task-specific co-importance of parameters among clients. Finally, we propose a Fisher balancing method that combines the global and federated Fishers so that new learning is not neglected and catastrophic forgetting is avoided. Results: We evaluated the method on four FCL datasets, and the results show that FINAL effectively maintains long-term knowledge on the server. Conclusions: The strong performance of the method indicates its value for future FCL research.
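The abstract describes three steps: building a federated Fisher information matrix from the clients of the current task, accumulating it into a global Fisher on the server, and balancing the two. The sketch below illustrates one plausible reading of this pipeline in PyTorch: per-client diagonal Fisher estimation, server-side weighted aggregation, a convex blend with the accumulated global Fisher, and an EWC-style penalty that uses the balanced Fisher. All function names (client_fisher, aggregate_federated_fisher, balance_fishers, ewc_penalty), the weighted-average aggregation rule, and the blending coefficient alpha are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of Fisher-information accumulation for federated continual
# learning, loosely following the FINAL description above. Assumptions are
# marked in the comments; this is not the authors' implementation.
import torch
import torch.nn.functional as F


def client_fisher(model, data_loader, device="cpu"):
    """Diagonal empirical Fisher estimate on one client's local data
    (batch-level approximation of the per-sample Fisher diagonal)."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    n_samples = 0
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2 * x.size(0)
        n_samples += x.size(0)
    return {n: f / max(n_samples, 1) for n, f in fisher.items()}


def aggregate_federated_fisher(client_fishers, client_weights):
    """Server-side weighted combination of per-client Fisher diagonals
    (one plausible form of 'multi-node collaborative integration')."""
    total = sum(client_weights)
    return {
        n: sum(w * cf[n] for w, cf in zip(client_weights, client_fishers)) / total
        for n in client_fishers[0]
    }


def balance_fishers(global_fisher, federated_fisher, alpha=0.5):
    """Blend the accumulated global Fisher with the new task's federated
    Fisher; alpha trades stability (old tasks) against plasticity (new task)."""
    if global_fisher is None:
        return federated_fisher
    return {n: alpha * global_fisher[n] + (1 - alpha) * federated_fisher[n]
            for n in federated_fisher}


def ewc_penalty(model, global_fisher, old_params, lam=1.0):
    """EWC-style quadratic penalty that anchors parameters the accumulated
    global Fisher marks as important for previously learned tasks."""
    penalty = torch.zeros(1, device=next(model.parameters()).device)
    for n, p in model.named_parameters():
        penalty = penalty + (global_fisher[n] * (p - old_params[n]) ** 2).sum()
    return lam / 2.0 * penalty
```

Under this reading, after each task the server would call aggregate_federated_fisher on the clients' uploads, fold the result into its running global Fisher with balance_fishers, and broadcast the balanced Fisher so that clients add ewc_penalty to their local loss when training on the next task.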
Pages: 2803 - 2811
Number of pages: 9
Related Articles (50 in total)
  • [21] Client Selection and Bandwidth Allocation in Wireless Federated Learning Networks: A Long-Term Perspective
    Xu, Jie
    Wang, Heqiang
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2021, 20 (02) : 1188 - 1200
  • [22] Long-Term Privacy-Preserving Aggregation With User-Dynamics for Federated Learning
    Liu, Ziyao
    Lin, Hsiao-Ying
    Liu, Yamin
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 2398 - 2412
  • [23] Federated Learning Enhanced by Continual Learning for Common and Uncommon Features
    Mori, Junki
    Teranishi, Isamu
    Furukawa, Ryo
    Transactions of the Japanese Society for Artificial Intelligence, 2024, 39 (03):
  • [24] Towards long-term prediction
    Judd, K.
    Small, M.
    PHYSICA D, 2000, 136 (1-2): 31 - 44
  • [25] Long-term retention: Organization and research on remembering forms
    Ehrlich, M. F.
    Lecoutre, M. P.
    ANNEE PSYCHOLOGIQUE, 1973, 73 (01): 85 - 100
  • [26] Remembering Norman Schwarzkopf: Evidence for two distinct long-term fact learning mechanisms
    Kapur, N.
    COGNITIVE NEUROPSYCHOLOGY, 1994, 11 (06) : 661 - 670
  • [27] FedViT: Federated continual learning of vision transformer at edge
    Zuo, Xiaojiang
    Luopan, Yaxin
    Han, Rui
    Zhang, Qinglong
    Liu, Chi Harold
    Wang, Guoren
    Chen, Lydia Y.
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 154 : 1 - 15
  • [28] Urban Traffic Forecasting using Federated and Continual Learning
    Lanza, Chiara
    Angelats, Eduard
    Miozzo, Marco
    Dini, Paolo
    2023 6TH CONFERENCE ON CLOUD AND INTERNET OF THINGS, CIOT, 2023, : 1 - 8
  • [29] Towards Learning Ocean Models for Long-term Navigation in Dynamic Environments
    Padrao, Paulo
    Dominguez, Alberto
    Bobadilla, Leonardo
    Smith, Ryan N.
    OCEANS 2022, 2022,
  • [30] Federated Continual Learning via Knowledge Fusion: A Survey
    Yang, Xin
    Yu, Hao
    Gao, Xin
    Wang, Hao
    Zhang, Junbo
    Li, Tianrui
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (08) : 3832 - 3850