Preempting Catastrophic Forgetting in Continual Learning Models by Anticipatory Regularization

Times Cited: 0
Authors
El Khatib, Alaa [1 ]
Karray, Fakhri [1 ]
Affiliations
[1] Univ Waterloo, Elect & Comp Engn, Waterloo, ON, Canada
Keywords
DOI
10.1109/ijcnn.2019.8852426
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural networks trained sequentially on tasks tend to degrade in performance, on average, the more tasks they see, as the representations learned for one task are progressively modified while learning subsequent tasks. This phenomenon, known as catastrophic forgetting, is a major obstacle on the road toward designing agents that can continually learn new concepts and tasks the way, say, humans do. A common approach to containing catastrophic forgetting is to use regularization to slow down learning on weights deemed important to previously learned tasks. We argue in this paper that, on their own, such post hoc measures to safeguard what has been learned can, even in their more sophisticated variants, paralyze the network and degrade its capacity to learn and counter forgetting as the number of learned tasks grows. We propose instead (or possibly in conjunction) that, in anticipation of future tasks, regularization be applied to drive the optimization of network weights toward reusable solutions. We show that one way to achieve this is through an auxiliary unsupervised reconstruction loss that encourages the learned representations not only to be useful for solving, say, the current classification task, but also to reflect the content of the data being processed, content that is generally richer than what is discriminative for any one task. We compare our approach to the recent elastic weight consolidation (EWC) regularization approach and show that, although we do not explicitly try to preserve important weights or pass on any information about the data distribution of learned tasks, our model is comparable in performance and in some cases better.
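To make the described objective concrete, below is a minimal PyTorch sketch of a shared encoder feeding both a task classifier and a reconstruction decoder, with the total loss being the classification loss plus a weighted reconstruction term. This is only an illustration of the idea in the abstract, not the authors' implementation; the layer sizes, the MSE reconstruction loss, and the recon_weight hyperparameter are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnticipatoryRegularizedNet(nn.Module):
    """Shared encoder with a classifier head and a decoder head.

    The decoder's reconstruction loss acts as the anticipatory regularizer:
    it pushes the shared representation to retain input content beyond
    what the current classification task needs.
    """

    def __init__(self, in_dim=784, hidden_dim=256, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden_dim, num_classes)
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)


def training_loss(model, x, y, recon_weight=1.0):
    """Combined objective: classification loss + lambda * reconstruction loss."""
    logits, x_hat = model(x)
    loss_cls = F.cross_entropy(logits, y)   # supervised, current task
    loss_rec = F.mse_loss(x_hat, x)         # unsupervised, task-agnostic
    return loss_cls + recon_weight * loss_rec


# Illustrative usage on random data (shapes are assumptions, e.g. flattened 28x28 images).
model = AnticipatoryRegularizedNet()
x = torch.rand(32, 784)
y = torch.randint(0, 10, (32,))
loss = training_loss(model, x, y, recon_weight=1.0)
loss.backward()
```

In a continual setting, the same loss would be applied to each task in sequence; the unsupervised reconstruction term plays the anticipatory role, shaping the representation toward content-preserving, reusable features before future tasks arrive.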
Pages: 7
Related Papers
50 records in total
  • [1] Catastrophic Forgetting in Continual Concept Bottleneck Models
    Marconato, Emanuele
    Bontempo, Gianpaolo
    Teso, Stefano
    Ficarra, Elisa
    Calderara, Simone
    Passerini, Andrea
    IMAGE ANALYSIS AND PROCESSING, ICIAP 2022 WORKSHOPS, PT II, 2022, 13374 : 539 - 547
  • [2] Quantum Continual Learning Overcoming Catastrophic Forgetting
    Jiang, Wenjie
    Lu, Zhide
    Deng, Dong-Ling
    CHINESE PHYSICS LETTERS, 2022, 39 (05)
  • [3] Quantum Continual Learning Overcoming Catastrophic Forgetting
    Jiang, Wenjie
    Lu, Zhide
    Deng, Dong-Ling
    CHINESE PHYSICS LETTERS, 2022, 39 (05) : 29 - 41
  • [4] Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models
    Umer, Muhammad
    Polikar, Robi
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [5] Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning
    Winata, Genta Indra
    Xie, Lingjue
    Radhakrishnan, Karthik
    Wu, Shijie
    Jin, Xisen
    Cheng, Pengxiang
    Kulkarni, Mayank
    Preotiuc-Pietro, Daniel
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 768 - 777
  • [6] Continual Learning for Instance Segmentation to Mitigate Catastrophic Forgetting
    Lee, Jeong Jun
    Lee, Seung Il
    Kim, Hyun
    18TH INTERNATIONAL SOC DESIGN CONFERENCE 2021 (ISOCC 2021), 2021, : 85 - 86
  • [7] CONSISTENCY IS THE KEY TO FURTHER MITIGATING CATASTROPHIC FORGETTING IN CONTINUAL LEARNING
    Bhat, Prashant
    Zonooz, Bahram
    Arani, Elahe
    CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 199, 2022, 199
  • [8] Understanding Catastrophic Forgetting of Gated Linear Networks in Continual Learning
    Munari, Matteo
    Pasa, Luca
    Zambon, Daniele
    Alippi, Cesare
    Navarin, Nicolo
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [9] Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting
    Li, Xilai
    Zhou, Yingbo
    Wu, Tianfu
    Socher, Richard
    Xiong, Caiming
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [10] Continual Deep Reinforcement Learning to Prevent Catastrophic Forgetting in Jamming Mitigation
    Davaslioglu, Kemal
    Kompella, Sastry
    Erpek, Tugba
    Sagduyu, Yalin E.
    arXiv,