Effects of Auxiliary Knowledge on Continual Learning

Cited by: 1
Authors
Bellitto, Giovanni [1 ]
Pennisi, Matteo [1 ]
Palazzo, Simone [1 ]
Spampinato, Concetto [1 ]
Bonicelli, Lorenzo [2 ]
Boschini, Matteo [2 ]
Calderara, Simone [2 ]
Affiliations
[1] Univ Catania, PeRCeiVe Lab, Catania, Italy
[2] Univ Modena & Reggio Emilia, AImageLab, Modena, Italy
DOI
10.1109/ICPR56361.2022.9956694
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In Continual Learning (CL), a neural network is trained on a stream of data whose distribution changes over time. In this context, the main problem is how to learn new information without forgetting old knowledge (i.e., Catastrophic Forgetting). Most existing CL approaches focus on preserving acquired knowledge, thus operating on the model's "past". However, we argue that, since the model has to continually learn new tasks, it is also important to focus on "present" knowledge that could improve the learning of subsequent tasks. In this paper we propose a new, simple CL algorithm that focuses on solving the current task in a way that might facilitate the learning of the next ones. More specifically, our approach combines the main data stream with a secondary, diverse and uncorrelated stream from which the network can draw auxiliary knowledge. This helps the model in several ways: auxiliary data may contain features useful for both the current and future tasks, and incoming task classes can be mapped onto auxiliary classes. Furthermore, adding data to the current task implicitly makes the classifier more robust, as it forces the extraction of more discriminative features. Our method outperforms existing state-of-the-art models on the most common CL image classification benchmarks.
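As a concrete illustration of the mechanism the abstract describes, below is a minimal PyTorch-style sketch of a training loop that interleaves the current task's stream with a secondary auxiliary stream. It assumes a classifier whose output head covers both the current task's classes and the auxiliary classes, with auxiliary labels already offset into their own output units. The names train_task, aux_loader, and aux_weight are illustrative assumptions, not the authors' actual implementation.

```python
import torch.nn.functional as F

def train_task(model, optimizer, task_loader, aux_loader, aux_weight=1.0):
    """Train on one task, pairing each main-stream batch with a batch
    drawn from a diverse, uncorrelated auxiliary stream (hypothetical
    sketch of the idea in the abstract)."""
    aux_iter = iter(aux_loader)
    model.train()
    for x, y in task_loader:
        try:
            x_aux, y_aux = next(aux_iter)
        except StopIteration:
            # Restart the auxiliary stream when it runs out.
            aux_iter = iter(aux_loader)
            x_aux, y_aux = next(aux_iter)

        optimizer.zero_grad()
        # Standard classification loss on the current task's classes.
        loss = F.cross_entropy(model(x), y)
        # Auxiliary loss: auxiliary classes occupy separate output units,
        # pushing the network to extract more discriminative features.
        loss = loss + aux_weight * F.cross_entropy(model(x_aux), y_aux)
        loss.backward()
        optimizer.step()
```

In this reading, aux_weight balances the two objectives; setting it to 0 recovers plain per-task training, while larger values emphasize the auxiliary stream.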
Pages: 1357-1363
Page count: 7
Related Papers
50 records in total
  • [1] Continual Auxiliary Task Learning
    McLeod, Matthew
    Lo, Chunlok
    Schlegel, Matthew
    Jacobsen, Andrew
    Kumaraswamy, Raksha
    White, Martha
    White, Adam
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [2] Continual variational dropout: a view of auxiliary local variables in continual learning
    Hai, Nam Le
    Nguyen, Trang
    Van, Linh Ngo
    Nguyen, Thien Huu
    Than, Khoat
    [J]. MACHINE LEARNING, 2024, 113 (01) : 281 - 323
  • [3] Continual Learning of Knowledge Graph Embeddings
    Daruna, Angel
    Gupta, Mehul
    Sridharan, Mohan
    Chernova, Sonia
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (02) : 1128 - 1135
  • [4] Knowledge Capture and Replay for Continual Learning
    Gopalakrishnan, Saisubramaniam
    Singh, Pranshu Ranjan
    Fayek, Haytham
    Ramasamy, Savitha
    Ambikapathi, ArulMurugan
    [J]. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 337 - 345
  • [5] Evaluating Knowledge Retention in Continual Learning
    Krutsylo, Andrii
    [J]. 39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 1053 - 1055
  • [6] A Theory for Knowledge Transfer in Continual Learning
    Benavides-Prado, Diana
    Riddle, Patricia
    [J]. CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 199, 2022, 199
  • [7] Continual Learning Based on Knowledge Distillation and Representation Learning
    Chen, Xiu-Yan
    Liu, Jian-Wei
    Li, Wen-Tao
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT IV, 2022, 13532 : 27 - 38
  • [8] Continual Learning with Knowledge Transfer for Sentiment Classification
    Ke, Zixuan
    Liu, Bing
    Wang, Hao
    Shu, Lei
    [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT III, 2021, 12459 : 683 - 698
  • [9] Auxiliary Local Variables for Improving Regularization/Prior Approach in Continual Learning
    Van, Linh Ngo
    Hai, Nam Le
    Pham, Hoang
    Than, Khoat
    [J]. ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2022, PT I, 2022, 13280 : 16 - 28