Computationally Efficient Rehearsal for Online Continual Learning

Cited by: 2
Authors
Davalas, Charalampos [1]
Michail, Dimitrios [1]
Diou, Christos [1]
Varlamis, Iraklis [1]
Tserpes, Konstantinos [1]
Affiliations
[1] Harokopio Univ Athens, Dept Informat & Telemat, Athens 17778, Greece
Keywords
Catastrophic forgetting; Continual learning; Online learning
DOI
10.1007/978-3-031-06433-3_4
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Continual learning is a crucial ability for learning systems that must adapt to changing data distributions without degrading their performance on what they have already learned. Rehearsal methods offer a simple countermeasure against this catastrophic forgetting, which frequently occurs in dynamic settings and is a major limitation of machine learning models. These methods continuously train neural networks on a mix of data from the stream and from a rehearsal buffer, which maintains past training samples. Although the rehearsal approach is reasonable and simple to implement, its effectiveness and efficiency are significantly affected by several hyperparameters, such as the number of training iterations performed at each step, the choice of learning rate, and whether to retrain the model at each step. These choices are especially important in the resource-constrained environments commonly found in online continual learning for image analysis. This work evaluates several rehearsal training strategies for online continual learning and proposes the combined use of a drift detector that decides (a) when to train using data from the buffer and the online stream, and (b) how to train, based on a combination of heuristics. Experiments on the MNIST and CIFAR-10 image classification datasets demonstrate the effectiveness of the proposed approach over baseline training strategies at a fraction of the computational cost.
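The mechanism the abstract describes, a rehearsal buffer of past samples plus a drift detector that gates when a (costly) training step is performed, can be sketched minimally. The classes and parameters below are illustrative assumptions, not the paper's actual implementation: a reservoir-sampled buffer, and a simple error-rate detector standing in for more sophisticated tests such as ADWIN or DDM.

```python
import random


class ReservoirBuffer:
    """Fixed-size rehearsal buffer filled by reservoir sampling,
    so every stream item has equal probability of being retained."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Replace a random slot with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))


class ErrorRateDriftDetector:
    """Flags drift when the recent-window error rate exceeds the
    long-run error rate by a fixed margin."""

    def __init__(self, window=50, margin=0.2):
        self.window = window
        self.margin = margin
        self.recent = []      # sliding window of 0/1 prediction errors
        self.total_err = 0
        self.total_n = 0

    def update(self, error):
        self.total_err += error
        self.total_n += 1
        self.recent.append(error)
        if len(self.recent) > self.window:
            self.recent.pop(0)
        if len(self.recent) < self.window:
            return False          # not enough evidence yet
        short_rate = sum(self.recent) / len(self.recent)
        long_rate = self.total_err / self.total_n
        return short_rate > long_rate + self.margin


def online_step(model_update, batch, buffer, detector, errors):
    """One step of the gated rehearsal loop: store the batch, update the
    detector with the batch's 0/1 errors, and train on stream + replay
    data only when drift is flagged."""
    for item in batch:
        buffer.add(item)
    drift = False
    for e in errors:              # record every error, keep any drift flag
        drift = detector.update(e) or drift
    if drift:
        model_update(batch + buffer.sample(len(batch)))
    return drift
```

Under this sketch, stretches of the stream where the short-term error rate stays close to the long-run rate trigger no training at all, which is where the computational savings over train-at-every-step baselines would come from.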
Pages: 39-49 (11 pages)
Related Papers
50 records
  • [1] A rehearsal framework for computational efficiency in online continual learning
    Davalas, Charalampos
    Michail, Dimitrios
    Diou, Christos
    Varlamis, Iraklis
    Tserpes, Konstantinos
    APPLIED INTELLIGENCE, 2024, 54 (08) : 6383 - 6399
  • [2] Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning
    Gao, Xinyuan
    Dong, Songlin
    He, Yuhang
    Wang, Qiang
    Gong, Yihong
    COMPUTER VISION - ECCV 2024, PT LXXXV, 2025, 15143 : 89 - 106
  • [3] Rehearsal-Free Online Continual Learning for Automatic Speech Recognition
    Vander Eeckt, Steven
    Van Hamme, Hugo
    INTERSPEECH 2023, 2023, : 944 - 948
  • [4] Repeated Augmented Rehearsal: A Simple but Strong Baseline for Online Continual Learning
    Zhang, Yaqian
    Pfahringer, Bernhard
    Frank, Eibe
    Bifet, Albert
    Lim, Nick Jin Sean
    Jia, Yunzhe
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [5] Example forgetting and rehearsal in continual learning
    Benko, Beatrix
    PATTERN RECOGNITION LETTERS, 2024, 179 : 65 - 72
  • [6] Rehearsal-free Continual Language Learning via Efficient Parameter Isolation
    Wang, Zhicheng
    Liu, Yufang
    Ji, Tao
    Wang, Xiaoling
    Wu, Yuanbin
    Jiang, Congcong
    Chao, Ye
    Han, Zhencong
    Wang, Ling
    Shao, Xu
    Zeng, Wenqiu
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 10933 - 10946
  • [7] Efficient Data-Parallel Continual Learning with Asynchronous Distributed Rehearsal Buffers
    Bouvier, Thomas
    Nicolae, Bogdan
    Chaugier, Hugo
    Costan, Alexandru
    Foster, Ian
    Antoniu, Gabriel
    2024 IEEE 24TH INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND INTERNET COMPUTING, CCGRID 2024, 2024, : 245 - 254
  • [8] Online Prototype Learning for Online Continual Learning
    Wei, Yujie
    Ye, Jiaxin
    Huang, Zhizhong
    Zhang, Junping
    Shan, Hongming
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 18718 - 18728
  • [9] On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning
    Bonicelli, Lorenzo
    Boschini, Matteo
    Porrello, Angelo
    Spampinato, Concetto
    Calderara, Simone
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [10] Towards Causal Replay for Knowledge Rehearsal in Continual Learning
    Churamani, Nikhil
    Cheong, Jiaee
    Kalkan, Sinan
    Gunes, Hatice
    AAAI BRIDGE PROGRAM ON CONTINUAL CAUSALITY, VOL 208, 2023, 208 : 63 - 70