NEO: Neuron State Dependent Mechanisms for Efficient Continual Learning

Cited by: 0
Authors
Daram, Anurag [1]
Kudithipudi, Dhireesha [1]
Affiliations
[1] Univ Texas San Antonio, Neuromorph AI Lab, San Antonio, TX 78284 USA
Source
PROCEEDINGS OF THE 2023 ANNUAL NEURO-INSPIRED COMPUTATIONAL ELEMENTS CONFERENCE, NICE 2023 | 2023
Keywords
catastrophic forgetting; task agnostic; task incremental learning; domain incremental learning; neuron importance
DOI
10.1145/3584954.3584960
CLC number
TP301 [Theory and Methods]
Subject classification code
081202
Abstract
Continual learning (sequential learning of tasks) is challenging for deep neural networks, mainly because of catastrophic forgetting: the tendency for accuracy on previously trained tasks to drop when new tasks are learned. Although several biologically inspired techniques have been proposed for mitigating catastrophic forgetting, they typically require additional memory and/or computational overhead. Here, we propose a novel regularization approach that combines neuronal activation-based importance measurement with neuron state-dependent learning mechanisms to alleviate catastrophic forgetting in both task-aware and task-agnostic scenarios. We introduce a neuronal state-dependent mechanism driven by neuronal activity traces and selective learning rules, whose storage requirements for regularization parameters grow with the number of neurons, whereas schemes that compute per-weight importance have storage that grows quadratically with layer width. The proposed model, NEO, achieves performance comparable to other state-of-the-art regularization-based approaches to catastrophic forgetting while operating with a reduced memory overhead.
Pages: 11-19
Page count: 9
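The abstract outlines the mechanism at a high level: a per-neuron activity trace serves as an importance measure, and learning is modulated so that weights feeding important neurons change less on later tasks. Below is a minimal sketch of that idea in PyTorch; it is not the authors' released implementation, and the names NeuronTrace, decay, and gradient_mask are illustrative assumptions. It mainly shows why storage scales with neuron count rather than weight count.

import torch
import torch.nn as nn

class NeuronTrace:
    """Exponential moving average of per-neuron activation magnitude (hypothetical sketch)."""
    def __init__(self, layer: nn.Linear, decay: float = 0.9):
        self.decay = decay
        # One scalar per output neuron: storage grows with the neuron count,
        # not with the (quadratic) number of weights.
        self.trace = torch.zeros(layer.out_features)

    def update(self, activations: torch.Tensor):
        # activations: (batch, out_features), post-nonlinearity outputs
        batch_mean = activations.abs().mean(dim=0).detach()
        self.trace = self.decay * self.trace + (1 - self.decay) * batch_mean

    def gradient_mask(self) -> torch.Tensor:
        # Neurons with high traces are treated as important; shrink updates to
        # the weights that feed them (rows of the weight matrix).
        importance = self.trace / (self.trace.max() + 1e-8)
        return (1.0 - importance).unsqueeze(1)  # (out_features, 1), broadcasts over inputs

# Usage: after loss.backward() on a new task, rescale each layer's gradient.
layer = nn.Linear(784, 256)
tracker = NeuronTrace(layer)

x = torch.randn(32, 784)
h = torch.relu(layer(x))
tracker.update(h)

loss = h.sum()
loss.backward()
with torch.no_grad():
    layer.weight.grad *= tracker.gradient_mask()            # protect important neurons
    layer.bias.grad *= tracker.gradient_mask().squeeze(1)

Under these assumptions the regularization state is a single vector per layer (256 floats here), versus one value per weight (784 x 256) for per-weight importance schemes, which is the memory contrast the abstract draws.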
Related papers
50 records in total
  • [1] Continual Learning with Neuron Activation Importance
    Kim, Sohee
    Lee, Seungkyu
    IMAGE ANALYSIS AND PROCESSING, ICIAP 2022, PT I, 2022, 13231 : 310 - 321
  • [2] Mitigating Forgetting in Online Continual Learning with Neuron Calibration
    Yin, Haiyan
    Yang, Peng
    Li, Ping
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [3] Efficient Architecture Search for Continual Learning
    Gao, Qiang
    Luo, Zhipeng
    Klabjan, Diego
    Zhang, Fengli
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (11) : 8555 - 8565
  • [4] EsaCL: An Efficient Continual Learning Algorithm
    Ren, Weijieying
    Honavar, Vasant G.
    PROCEEDINGS OF THE 2024 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM, 2024, : 163 - 171
  • [5] Memory Efficient Continual Learning with Transformers
    Ermis, Beyza
    Zappella, Giovanni
    Wistuba, Martin
    Rawal, Aditya
    Archambeau, Cedric
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [6] Selective Freezing for Efficient Continual Learning
    Sorrenti, Amelia
    Bellitto, Giovanni
    Salanitri, Federica Proietto
    Pennisi, Matteo
    Spampinato, Concetto
    Palazzo, Simone
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 3542 - 3551
  • [7] Neurobiological mechanisms of state-dependent learning
    Radulovic, Jelena
    Jovasevic, Vladimir
    Meyer, Mariah A. A.
    CURRENT OPINION IN NEUROBIOLOGY, 2017, 45 : 92 - 98
  • [8] Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning
    Gao, Xinyuan
    Dong, Songlin
    He, Yuhang
    Wang, Qiang
    Gong, Yihong
    COMPUTER VISION - ECCV 2024, PT LXXXV, 2025, 15143 : 89 - 106
  • [9] Computationally Efficient Rehearsal for Online Continual Learning
    Davalas, Charalampos
    Michail, Dimitrios
    Diou, Christos
    Varlamis, Iraklis
    Tserpes, Konstantinos
    IMAGE ANALYSIS AND PROCESSING, ICIAP 2022, PT III, 2022, 13233 : 39 - 49
  • [10] From IID to the Independent Mechanisms Assumption in Continual Learning
    Ostapenko, Oleksiy
    Rodriguez, Pau
    Lacoste, Alexandre
    Charlin, Laurent
    AAAI BRIDGE PROGRAM ON CONTINUAL CAUSALITY, VOL 208, 2023, 208 : 25 - 29