Hierarchical Correlations Replay for Continual Learning

Cited by: 5
Authors
Wang, Qiang [1 ]
Liu, Jiayi [1 ]
Ji, Zhong [1 ,2 ]
Pang, Yanwei [1 ,2 ]
Zhang, Zhongfei [3 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Tianjin Key Lab Brain Inspired Intelligence Techn, Tianjin 300308, Peoples R China
[3] SUNY Binghamton, Comp Sci Dept, Binghamton, NY 13902 USA
Funding
National Natural Science Foundation of China;
Keywords
Continual learning; Catastrophic forgetting; Experience replay; Image classification; CLASSIFICATION; KNOWLEDGE;
DOI
10.1016/j.knosys.2022.109052
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Continual Learning (CL) aims to incrementally learn new knowledge from an infinite stream of data while preserving old knowledge. An ideal CL method is expected to retain old knowledge as fully as possible to alleviate the catastrophic interference caused by new knowledge. However, most existing methods focus only on the knowledge carried by each instance itself and neglect inter-instance and inter-class correlations, which are also valuable for preserving knowledge. To this end, we propose a novel method, dubbed Hierarchical Correlations Replay (HCR), consisting of an Instance-level Correlation Replay (ICR) module and a Class-level Correlation Replay (CCR) module, which retain instance-level and class-level correlations, respectively, to consolidate old knowledge. Specifically, the ICR module employs a correlation matrix to represent the instance-level correlation, while the CCR module constructs the class-level correlation from a random triplet probability. Extensive experiments on five benchmark image datasets show that HCR is competitive with or superior to state-of-the-art methods under diverse continual learning settings. (C) 2022 Elsevier B.V. All rights reserved.
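The abstract only names the two replay modules; the paper's exact formulation is not reproduced in this record. As a hedged illustration of the instance-level idea, the sketch below (the function names and the mean-squared distillation form are assumptions, not the authors' code) builds a pairwise cosine-similarity matrix over a replayed batch's features and penalizes its drift between the old model and the current one, which is one common way to preserve inter-instance correlations.

```python
import numpy as np

def correlation_matrix(features: np.ndarray) -> np.ndarray:
    """Pairwise cosine-similarity matrix over a batch of feature vectors."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)  # row-normalize
    return unit @ unit.T

def icr_loss(old_feats: np.ndarray, new_feats: np.ndarray) -> float:
    """Instance-level correlation replay (sketch): penalize the drift
    between the correlation matrices that the old and current models
    produce on the same replayed batch."""
    diff = correlation_matrix(old_feats) - correlation_matrix(new_feats)
    return float(np.mean(diff ** 2))

# Toy usage: identical features incur zero loss; perturbed features do not.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
print(icr_loss(feats, feats))                                  # 0.0
print(icr_loss(feats, feats + 0.1 * rng.normal(size=(8, 16))) > 0)
```

In practice the two feature sets would come from frozen and trainable network copies, and this term would be added to the classification loss on the replay buffer; the class-level (CCR) term would analogously match triplet-based probabilities, but its exact construction is not specified in this record.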
Pages: 9
Related Papers
50 records
  • [1] Experience Replay for Continual Learning
    Rolnick, David
    Ahuja, Arun
    Schwarz, Jonathan
    Lillicrap, Timothy P.
    Wayne, Greg
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [2] The hippocampal formation as a hierarchical generative model supporting generative replay and continual learning
    Stoianov, Ivilin
    Maisto, Domenico
    Pezzulo, Giovanni
    PROGRESS IN NEUROBIOLOGY, 2022, 217
  • [3] Marginal Replay vs Conditional Replay for Continual Learning
    Lesort, Timothee
    Gepperth, Alexander
    Stoian, Andrei
    Filliat, David
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: DEEP LEARNING, PT II, 2019, 11728 : 466 - 480
  • [4] Continual Learning with Deep Generative Replay
    Shin, Hanul
    Lee, Jung Kwon
    Kim, Jaehong
    Kim, Jiwon
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [5] Knowledge Capture and Replay for Continual Learning
    Gopalakrishnan, Saisubramaniam
    Singh, Pranshu Ranjan
    Fayek, Haytham
    Ramasamy, Savitha
    Ambikapathi, ArulMurugan
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 337 - 345
  • [6] Generative negative replay for continual learning
    Graffieti, Gabriele
    Maltoni, Davide
    Pellegrini, Lorenzo
    Lomonaco, Vincenzo
    NEURAL NETWORKS, 2023, 162 : 369 - 383
  • [7] Memory Enhanced Replay for Continual Learning
    Xu, Guixun
    Guo, Wenhui
    Wang, Yanjiang
    2022 16TH IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP2022), VOL 1, 2022, : 218 - 222
  • [8] Posterior Meta-Replay for Continual Learning
    Henning, Christian
    Cervera, Maria R.
    D'Angelo, Francesco
    von Oswald, Johannes
    Traber, Regina
    Ehret, Benjamin
    Kobayashi, Seijin
    Grewe, Benjamin F.
    Sacramento, Joao
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [9] Beneficial Effect of Combined Replay for Continual Learning
    Solinas, M.
    Rousset, S.
    Cohendet, R.
    Bourrier, Y.
    Mainsant, M.
    Molnos, A.
    Reyboz, M.
    Mermillod, M.
    ICAART: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 2, 2021, : 205 - 217
  • [10] Prototype-Guided Memory Replay for Continual Learning
    Ho, Stella
    Liu, Ming
    Du, Lan
    Gao, Longxiang
    Xiang, Yong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (08) : 10973 - 10983