Computationally Efficient Rehearsal for Online Continual Learning

Cited by: 2
Authors
Davalas, Charalampos [1 ]
Michail, Dimitrios [1 ]
Diou, Christos [1 ]
Varlamis, Iraklis [1 ]
Tserpes, Konstantinos [1 ]
Affiliations
[1] Harokopio Univ Athens, Dept Informat & Telemat, Athens 17778, Greece
Keywords
Catastrophic forgetting; Continual learning; Online learning;
DOI
10.1007/978-3-031-06433-3_4
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Continual learning is a crucial ability for learning systems that must adapt to changing data distributions without reducing their performance on what they have already learned. Rehearsal methods offer a simple countermeasure against this catastrophic forgetting, which frequently occurs in dynamic situations and is a major limitation of machine learning models. These methods continuously train neural networks on a mix of data from the stream and from a rehearsal buffer, which maintains past training samples. Although the rehearsal approach is reasonable and simple to implement, its effectiveness and efficiency are significantly affected by several hyperparameters, such as the number of training iterations performed at each step, the choice of learning rate, and the choice of whether to retrain the model at each step. These options are especially important in resource-constrained environments commonly found in online continual learning for image analysis. This work evaluates several rehearsal training strategies for continual online learning and proposes the combined use of a drift detector that decides (a) when to train using data from the buffer and the online stream, and (b) how to train, based on a combination of heuristics. Experiments on the MNIST and CIFAR-10 image classification datasets demonstrate the effectiveness of the proposed approach over baseline training strategies at a fraction of the computational cost.
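The abstract's two core ingredients — a rehearsal buffer of past samples and a drift detector that gates when (re)training happens — can be illustrated with a minimal sketch. The paper's actual buffer policy and detector are not specified in this record, so the reservoir-sampling buffer and the loss-statistics heuristic below (`ReservoirBuffer`, `LossDriftDetector`, and their parameters) are illustrative assumptions, not the authors' method:

```python
import random

class ReservoirBuffer:
    """Fixed-capacity rehearsal buffer using reservoir sampling,
    so every item seen on the stream has an equal chance of
    being retained for later replay. (Illustrative assumption;
    the paper's buffer policy may differ.)"""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Replace a stored item with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        """Draw a rehearsal mini-batch of up to k stored samples."""
        return self.rng.sample(self.items, min(k, len(self.items)))

class LossDriftDetector:
    """Flags drift when the current loss exceeds the running mean
    by `threshold` standard deviations (a DDM-style heuristic,
    assumed here for illustration). Uses Welford's online algorithm
    for the running mean/variance."""
    def __init__(self, threshold=2.0, warmup=10):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0
        self.threshold = threshold
        self.warmup = warmup

    def update(self, loss):
        self.n += 1
        delta = loss - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (loss - self.mean)
        if self.n < self.warmup:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return loss > self.mean + self.threshold * std
```

In an online loop, each incoming batch would be added to the buffer and scored; a training step mixing stream and buffer samples would only run when the detector fires, which is how the "when to train" decision saves computation relative to retraining at every step.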
Pages: 39-49
Page count: 11