Saliency Guided Experience Packing for Replay in Continual Learning

Cited by: 2
Authors
Saha, Gobinda [1 ]
Roy, Kaushik [1 ]
Affiliation
[1] Purdue University, Elmore Family School of Electrical and Computer Engineering, West Lafayette, IN 47907, USA
Funding
U.S. National Science Foundation
DOI
10.1109/WACV56688.2023.00524
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Artificial learning systems aspire to mimic human intelligence by continually learning from a stream of tasks without forgetting past knowledge. One way to enable such learning is to store past experiences, in the form of input examples, in episodic memory and replay them when learning new tasks. However, the performance of such methods suffers as the memory size becomes smaller. In this paper, we propose a new approach to experience replay in which we select past experiences by examining saliency maps, which provide visual explanations for the model's decisions. Guided by these saliency maps, we pack the memory with only those parts, or patches, of the input images that are important for the model's predictions. While learning a new task, we replay these memory patches with appropriate zero padding to remind the model of its past decisions. We evaluate our algorithm on the CIFAR-100, miniImageNet, and CUB datasets and report better performance than state-of-the-art approaches. Through qualitative and quantitative analyses, we show that our method captures richer summaries of past experiences without any increase in memory and hence performs well with a small episodic memory.
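
The abstract outlines the core mechanism: compute a saliency map for each candidate example, keep only the image patch that the map marks as most important for the model's prediction, and, when training on later tasks, replay that patch zero-padded back to the full input resolution. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: it assumes a simple input-gradient saliency and a fixed square patch, and the helpers saliency_map, most_salient_patch, and pad_to_full (plus all sizes) are illustrative.

# Minimal sketch of saliency-guided patch selection and zero-padded replay.
# Assumption (not from the abstract): saliency is approximated by the absolute
# input gradient of the target-class score; the stored patch is the window
# with the highest total saliency. Names and sizes are illustrative.
import torch
import torch.nn.functional as F

def saliency_map(model, image, label):
    """Absolute input-gradient saliency for one image of shape (C, H, W)."""
    image = image.clone().detach().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, label]   # score of the target class
    score.backward()
    return image.grad.abs().sum(dim=0)            # (H, W) saliency map

def most_salient_patch(image, sal, patch=16):
    """Return the patch with the highest summed saliency and its top-left corner."""
    # Average pooling over every patch-sized window, rescaled to a per-window sum.
    window = F.avg_pool2d(sal[None, None], patch, stride=1) * patch * patch
    idx = window.flatten().argmax().item()
    width = window.shape[-1]
    top, left = idx // width, idx % width
    return image[:, top:top + patch, left:left + patch], (top, left)

def pad_to_full(patch_img, corner, full_hw):
    """Zero-pad a stored patch back to the full image size at its original location."""
    c, ph, pw = patch_img.shape
    top, left = corner
    canvas = torch.zeros(c, *full_hw)
    canvas[:, top:top + ph, left:left + pw] = patch_img
    return canvas

if __name__ == "__main__":
    # Toy model and image, standing in for the trained task model and a memory candidate.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    image, label = torch.rand(3, 32, 32), 3
    sal = saliency_map(model, image, label)
    patch_img, corner = most_salient_patch(image, sal, patch=16)
    replayed = pad_to_full(patch_img, corner, (32, 32))  # what replay would feed the model
    print(patch_img.shape, replayed.shape)

Since a patch takes less storage than a full image, the same memory budget can presumably summarize more past examples, which is consistent with the abstract's claim of richer summaries without any memory increase.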
Pages: 5262-5272 (11 pages)
Related Papers
  • [1] Online continual learning with saliency-guided experience replay using tiny episodic memory
    Saha, Gobinda
    Roy, Kaushik
    MACHINE VISION AND APPLICATIONS, 2023, 34 (04)
  • [2] Experience Replay for Continual Learning
    Rolnick, David
    Ahuja, Arun
    Schwarz, Jonathan
    Lillicrap, Timothy P.
    Wayne, Greg
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [3] Prototype-Guided Memory Replay for Continual Learning
    Ho, Stella
    Liu, Ming
    Du, Lan
    Gao, Longxiang
    Xiang, Yong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (08) : 10973 - 10983
  • [4] Rethinking Experience Replay: Bag of Tricks for Continual Learning
    Buzzega, Pietro
    Boschini, Matteo
    Porrello, Angelo
    Calderara, Simone
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 2180 - 2187
  • [5] Coordinating Experience Replay: A Harmonious Experience Retention approach for Continual Learning
    Ji, Zhong
    Liu, Jiayi
    Wang, Qiang
    Zhang, Zhongfei
    KNOWLEDGE-BASED SYSTEMS, 2021, 234
  • [6] Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning
    Yan, Qingsen
    Gong, Dong
    Liu, Yuhang
    van den Hengel, Anton
    Shi, Javen Qinfeng
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 109 - 118
  • [7] AdaER: An adaptive experience replay approach for continual lifelong learning
    Li, Xingyu
    Tang, Bo
    Li, Haifeng
    NEUROCOMPUTING, 2024, 572
  • [8] Marginal Replay vs Conditional Replay for Continual Learning
    Lesort, Timothee
    Gepperth, Alexander
    Stoian, Andrei
    Filliat, David
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: DEEP LEARNING, PT II, 2019, 11728 : 466 - 480
  • [9] The Inter-batch Diversity of Samples in Experience Replay for Continual Learning
    Krutsylo, Andrii
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 21, 2024, : 23395 - 23396