Rainbow Memory: Continual Learning with a Memory of Diverse Samples

Cited by: 121
|
Authors
Bang, Jihwan [1 ]
Kim, Heesu [2 ,3 ]
Yoo, YoungJoon [2 ,3 ]
Ha, Jung-Woo [2 ,3 ]
Choi, Jonghyun [4 ]
Affiliations
[1] Search Solut Inc, Thousand Oaks, CA USA
[2] NAVER CLOVA, Seongnam, South Korea
[3] NAVER AI Lab, Seongnam, South Korea
[4] GIST, Gwangju, South Korea
Funding
National Research Foundation of Singapore;
Keywords
DOI
10.1109/CVPR46437.2021.00812
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Continual learning is a realistic learning scenario for AI models. The prevalent continual learning scenario, however, assumes disjoint sets of classes as tasks and is less realistic, rather artificial. Instead, we focus on the 'blurry' task boundary, where tasks share classes, which is more realistic and practical. To address such tasks, we argue for the importance of the diversity of samples in an episodic memory. To enhance the sample diversity in the memory, we propose a novel memory management strategy based on per-sample classification uncertainty and data augmentation, named Rainbow Memory (RM). With extensive empirical validations on the MNIST, CIFAR10, CIFAR100, and ImageNet datasets, we show that the proposed method significantly improves accuracy in blurry continual learning setups, outperforming the state of the art by large margins despite its simplicity.
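The abstract describes managing episodic memory via per-sample classification uncertainty. A minimal sketch of that idea, not the authors' implementation: uncertainty is estimated from how much a model's predictions disagree across data augmentations of the same sample, and the memory is then filled with samples spread evenly along the uncertainty ranking so that both easy (certain) and hard (uncertain) examples are retained. All function names and the disagreement-based uncertainty proxy here are illustrative assumptions.

```python
import numpy as np

def uncertainty_scores(probs_per_aug):
    """Disagreement-based uncertainty proxy (an assumption, not RM's exact measure).

    probs_per_aug: array of shape (n_augmentations, n_samples, n_classes)
    holding softmax outputs for several augmented views of each sample.
    Returns one score per sample in [0, 1]: 0 if every augmented view is
    classified the same, approaching 1 as predictions disagree more.
    """
    preds = probs_per_aug.argmax(axis=2)  # (n_augmentations, n_samples)
    n_aug, n_samples = preds.shape
    scores = np.empty(n_samples)
    for i in range(n_samples):
        counts = np.bincount(preds[:, i])
        scores[i] = 1.0 - counts.max() / n_aug  # 1 - agreement rate
    return scores

def diverse_memory_indices(scores, memory_size):
    """Pick memory samples spread evenly over the uncertainty ranking,
    so the memory spans the easy-to-hard spectrum rather than only
    the most (or least) uncertain samples."""
    order = np.argsort(scores)               # easiest -> hardest
    step = max(len(order) // memory_size, 1)
    return order[::step][:memory_size]
```

The even-stride selection is what gives the memory its diversity: a top-k rule would keep only boundary samples, while this keeps representatives at every confidence level.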
Pages: 8214 - 8223
Page count: 10
Related papers
50 records in total
  • [1] Memory Bounds for Continual Learning
    Chen, Xi
    Papadimitriou, Christos
    Peng, Binghui
    [J]. 2022 IEEE 63RD ANNUAL SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS), 2022, : 519 - 530
  • [2] Neural inhibition for continual learning and memory
    Barron, Helen C.
    [J]. CURRENT OPINION IN NEUROBIOLOGY, 2021, 67 : 85 - 94
  • [3] Bilateral Memory Consolidation for Continual Learning
    Nie, Xing
    Xu, Shixiong
    Liu, Xiyan
    Meng, Gaofeng
    Huo, Chunlei
    Xiang, Shiming
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 16026 - 16035
  • [4] Online continual learning with declarative memory
    Xiao, Zhe
    Du, Zhekai
    Wang, Ruijin
    Gan, Ruimeng
    Li, Jingjing
    [J]. NEURAL NETWORKS, 2023, 163 : 146 - 155
  • [5] Memory Efficient Continual Learning with Transformers
    Ermis, Beyza
    Zappella, Giovanni
    Wistuba, Martin
    Rawal, Aditya
    Archambeau, Cedric
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [6] Gradient Episodic Memory for Continual Learning
    Lopez-Paz, David
    Ranzato, Marc'Aurelio
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [7] Condensed Composite Memory Continual Learning
    Wiewel, Felix
    Yang, Bin
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [8] Memory Enhanced Replay for Continual Learning
    Xu, Guixun
    Guo, Wenhui
    Wang, Yanjiang
    [J]. 2022 16TH IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP2022), VOL 1, 2022, : 218 - 222
  • [9] CarM: Hierarchical Episodic Memory for Continual Learning
    Lee, Soobee
    Weerakoon, Minindu
    Choi, Jonghyun
    Zhang, Minjia
    Wang, Di
    Jeon, Myeongjae
    [J]. PROCEEDINGS OF THE 59TH ACM/IEEE DESIGN AUTOMATION CONFERENCE, DAC 2022, 2022, : 1147 - 1152
  • [10] Continual Learning Long Short Term Memory
    Guo, Xin
    Tian, Yu
    Xue, Qinghan
    Lampropoulos, Panos
    Eliuk, Steven
    Barner, Kenneth
    Wang, Xiaolong
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 1817 - 1822