Continual learning with selective nets

Cited by: 0
Authors
Luu, Hai Tung [1 ]
Szemenyei, Marton [1 ]
Institutions
[1] Budapest Univ Technol & Econ, Control Engn & Informat Technol, Budapest, Hungary
Keywords
Continual learning; Computer vision; Image classification; Machine learning;
DOI
10.1007/s10489-025-06497-z
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The widespread adoption of foundation models has significantly transformed machine learning, enabling even straightforward architectures to achieve results comparable to state-of-the-art methods. Inspired by the brain's natural learning process, where studying a new concept activates distinct neural pathways and recalling that memory requires a specific stimulus to fully recover the information, we present a novel approach to dynamic task identification and submodel selection in continual learning. Our method leverages the DINOv2 foundation model (trained to learn robust visual features without supervision) to handle multi-experience datasets, dividing them into multiple experiences that each represent a subset of classes. To build a memory of these classes, we employ strategies such as using random real images, distilled images, k-nearest neighbours (kNN) to identify the samples closest to each cluster, and support vector machines (SVM) to select the most representative samples. During testing, where the task identity (ID) is not provided, we extract features of the test image and use distance measurements to match it against the stored features. Additionally, we introduce a new forgetting metric specifically designed to measure the forgetting rate in task-agnostic continual learning scenarios, unlike traditional task-specific approaches; this metric captures the extent of knowledge loss across tasks when the task identity is unknown during inference. Despite its simple architecture, our method delivers competitive performance across various datasets, surpassing state-of-the-art results in certain instances.
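The task-agnostic inference step described in the abstract (extracting features of a test image and matching them by distance against stored class memories) can be sketched roughly as follows. This is a minimal illustration, not the paper's exact procedure: the function name, the cosine-similarity choice, and the prototype dictionary layout are assumptions, and the toy vectors stand in for embeddings from a frozen backbone such as DINOv2.

```python
import numpy as np

def identify_task(feature, prototypes):
    """Route a test sample by matching its feature to stored prototypes.

    feature: (d,) embedding of the test image from a frozen feature
             extractor (e.g. a DINOv2 backbone; hypothetical here).
    prototypes: dict mapping (task_id, class_id) -> (d,) stored feature.
    Returns the (task_id, class_id) whose prototype is closest in cosine
    similarity, i.e. the task/submodel the sample is routed to.
    """
    f = feature / np.linalg.norm(feature)
    best_key, best_sim = None, -np.inf
    for key, proto in prototypes.items():
        p = proto / np.linalg.norm(proto)
        sim = float(f @ p)  # cosine similarity of unit vectors
        if sim > best_sim:
            best_key, best_sim = key, sim
    return best_key

# Toy memory: two experiences (tasks), one prototype per class.
protos = {
    (0, "cat"): np.array([1.0, 0.0, 0.0]),
    (0, "dog"): np.array([0.0, 1.0, 0.0]),
    (1, "car"): np.array([0.0, 0.0, 1.0]),
}
identify_task(np.array([0.1, 0.0, 0.9]), protos)  # -> (1, 'car')
```

In the paper's setting the prototypes would instead be built from stored samples (random real images, distilled images, or kNN/SVM-selected representatives), but the routing decision at test time reduces to a nearest-match search of this kind.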
Pages: 15
Related Papers
50 items in total
  • [1] Selective Freezing for Efficient Continual Learning
    Sorrenti, Amelia
    Bellitto, Giovanni
    Salanitri, Federica Proietto
    Pennisi, Matteo
    Spampinato, Concetto
    Palazzo, Simone
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 3542 - 3551
  • [2] Visually Grounded Continual Language Learning with Selective Specialization
    Ahrens, Kyra
    Bengtson, Lennart
    Lee, Jae Hee
    Wermter, Stefan
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 7037 - 7054
  • [3] Continual learning and its industrial applications: A selective review
    Lian, J.
    Choi, K.
    Veeramani, B.
    Hu, A.
    Murli, S.
    Freeman, L.
    Bowen, E.
    Deng, X.
    WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2024, 14 (06)
  • [4] Selective Replay Enhances Learning in Online Continual Analogical Reasoning
    Hayes, Tyler L.
    Kanan, Christopher
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 3497 - 3507
  • [5] Continual learning with selective nets
    Hai Tung Luu
    Marton Szemenyei
    Applied Intelligence, 2025, 55 (7)
  • [6] Efficient Spiking Neural Networks with Sparse Selective Activation for Continual Learning
    Shen, Jiangrong
    Ni, Wenyao
    Xu, Qi
    Tang, Huajin
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 1, 2024, : 611 - 619
  • [7] Continual learning
    King, Denise
    JOURNAL OF EMERGENCY NURSING, 2008, 34 (04) : 283 - 283
  • [8] CONTINUAL LEARNING
    BROWN, WE
    JOURNAL OF THE AMERICAN DENTAL ASSOCIATION, 1965, 71 (04): 935
  • [9] Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models
    Heng, Alvin
    Soh, Harold
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [10] Continual compression model for online continual learning
    Ye, Fei
    Bors, Adrian G.
    APPLIED SOFT COMPUTING, 2024, 167