SYNERGY BETWEEN SYNAPTIC CONSOLIDATION AND EXPERIENCE REPLAY FOR GENERAL CONTINUAL LEARNING

Cited: 0
Authors
Sarfraz, Fahad [1 ]
Arani, Elahe [1 ]
Zonooz, Bahram [1 ]
Affiliations
[1] NavInfo Europe, Adv Res Lab, Eindhoven, Netherlands
Keywords
MEMORY;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Continual learning (CL) in the brain is facilitated by a complex set of mechanisms, including the interplay of multiple memory systems for consolidating information, as posited by the complementary learning systems (CLS) theory, and synaptic consolidation for protecting acquired knowledge from erasure. We therefore propose a general CL method that creates a synergy between SYNaptic consolidation and dual-memory Experience Replay (SYNERgy). Our method maintains a semantic memory that accumulates and consolidates information across tasks and interacts with the episodic memory for effective replay. It further employs synaptic consolidation by tracking the importance of parameters during the training trajectory and anchoring them to the consolidated parameters in the semantic memory. To the best of our knowledge, our study is the first to employ dual-memory experience replay in conjunction with synaptic consolidation in a setting suitable for general CL, whereby the network does not utilize task boundaries or task labels during training or inference. Our evaluation on various challenging CL scenarios and characteristic analyses demonstrates the efficacy of incorporating both synaptic consolidation and CLS theory for enabling effective CL in DNNs.(1)
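The abstract describes three interacting components: an episodic memory for replay, a semantic memory that consolidates weights across tasks, and a synaptic-consolidation penalty that anchors important parameters to the consolidated ones. The toy sketch below illustrates how such components can fit together on a linear regression problem. It is not the paper's implementation: the semantic memory as an exponential moving average of the working weights, importance as a running squared-gradient estimate, reservoir sampling for the buffer, and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class EpisodicMemory:
    """Fixed-capacity replay buffer filled by reservoir sampling."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = rng.integers(0, self.seen)  # uniform over all samples seen
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        if not self.data:
            return []
        k = min(k, len(self.data))
        idx = rng.choice(len(self.data), size=k, replace=False)
        return [self.data[i] for i in idx]

def grad(w, x, y):
    """Gradient of 0.5 * ||w @ x - y||^2 w.r.t. w."""
    return np.outer(w @ x - y, x)

# Illustrative hyperparameters (not taken from the paper).
alpha_ema = 0.99    # decay for semantic memory (EMA of working weights)
lam_anchor = 0.1    # strength of the synaptic-consolidation penalty
lr = 0.05

w = rng.normal(size=(2, 3))     # working model
w_sem = w.copy()                # semantic memory: consolidated weights
fisher = np.zeros_like(w)       # running per-parameter importance estimate
buffer = EpisodicMemory(capacity=50)

for step in range(200):
    x = rng.normal(size=3)
    y = np.array([x[0], -x[1]])  # toy regression target
    g = grad(w, x, y)
    # Experience replay: add gradients from buffered past samples.
    for xr, yr in buffer.sample(4):
        g += grad(w, xr, yr)
    # Synaptic consolidation: pull important weights toward semantic memory.
    g += lam_anchor * fisher * (w - w_sem)
    w -= lr * g
    # Track importance with a squared-gradient EMA (an illustrative proxy).
    fisher = 0.99 * fisher + 0.01 * g ** 2
    # Consolidate the working weights into semantic memory.
    w_sem = alpha_ema * w_sem + (1 - alpha_ema) * w
    buffer.add(x, y)
```

Because the data stream here is stationary, the sketch only demonstrates the mechanics; in the general-CL setting targeted by the paper, the same updates run over a task-free stream with no boundary or label information.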
Pages: 17