Continual learning with selective nets

Cited: 0
Authors
Luu, Hai Tung [1 ]
Szemenyei, Marton [1 ]
Affiliations
[1] Budapest Univ Technol & Econ, Control Engn & Informat Technol, Budapest, Hungary
Keywords
Continual learning; Computer vision; Image classification; Machine learning
DOI
10.1007/s10489-025-06497-z
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The widespread adoption of foundation models has significantly transformed machine learning, enabling even simple architectures to achieve results comparable to state-of-the-art methods. Inspired by the brain's natural learning process, in which studying a new concept activates distinct neural pathways and recalling that memory requires a specific stimulus to fully recover the information, we present a novel approach to dynamic task identification and submodel selection in continual learning. Our method leverages the DINOv2 foundation model (trained to learn robust visual features without supervision) to handle multi-experience datasets, dividing each dataset into multiple experiences that each cover a subset of classes. To build a memory of these classes, we employ strategies such as storing random real images, storing distilled images, using k-nearest neighbours (kNN) to identify the samples closest to each cluster, and using support vector machines (SVM) to select the most representative samples. At test time, where the task identity (ID) is not provided, we extract features from the test image and match them against the stored features using distance measurements. Additionally, we introduce a new forgetting metric designed to measure the forgetting rate in task-agnostic continual learning scenarios, unlike traditional task-specific approaches; it captures the extent of knowledge loss across tasks when the task identity is unknown during inference. Despite its simple architecture, our method delivers competitive performance across various datasets, surpassing state-of-the-art results in some cases.
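The task-identification step described in the abstract lends itself to a short illustration. The following is a minimal sketch, not the authors' implementation: it assumes DINOv2 features are obtained through the public torch.hub entry point, that a small bank of exemplar features is stored per experience (however those exemplars were selected), and that the query is assigned to the experience whose stored features are closest by cosine similarity. The helper names embed, build_memory, and identify_task are illustrative assumptions.

```python
# Hypothetical sketch of distance-based task identification over stored
# DINOv2 features; not the paper's actual code.
import torch
import torch.nn.functional as F

# Public torch.hub entry point for a small DINOv2 backbone (ViT-S/14).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """L2-normalised DINOv2 CLS features for a batch of images.
    Images are assumed preprocessed (ImageNet-normalised, side length
    a multiple of the 14-pixel patch size, e.g. 224x224)."""
    feats = backbone(images)            # (B, 384) for ViT-S/14
    return F.normalize(feats, dim=-1)

def build_memory(per_task_images: dict[int, torch.Tensor]) -> dict[int, torch.Tensor]:
    """Store an exemplar feature bank per experience. The exemplars could be
    random real images, distilled images, or kNN-/SVM-selected samples,
    as the abstract describes; here they are taken as given."""
    return {tid: embed(imgs) for tid, imgs in per_task_images.items()}

def identify_task(test_image: torch.Tensor, memory: dict[int, torch.Tensor]) -> int:
    """Assign the query to the experience with the nearest stored exemplar
    (maximum cosine similarity, since all features are unit-normalised)."""
    q = embed(test_image.unsqueeze(0))  # (1, D)
    best_tid, best_sim = -1, -float("inf")
    for tid, bank in memory.items():
        sim = (q @ bank.T).max().item()
        if sim > best_sim:
            best_tid, best_sim = tid, sim
    return best_tid
```

Once the experience ID is predicted this way, inference would proceed with the submodel associated with that experience; the matching rule above (max over exemplars) is one plausible choice, and a mean-prototype or kNN vote over the bank would be an equally simple alternative.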
Pages: 15
Related Papers
50 records in total
  • [31] The Present and Future of Continual Learning
    Bae, Heechul
    Song, Soonyong
    Park, Junhee
11TH INTERNATIONAL CONFERENCE ON ICT CONVERGENCE: DATA, NETWORK, AND AI IN THE AGE OF UNTACT (ICTC 2020), 2020: 1193 - 1195
  • [32] Partial Hypernetworks for Continual Learning
    Hemati, Hamed
    Lomonaco, Vincenzo
    Bacciu, Davide
    Borth, Damian
    CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 232, 2023, 232 : 318 - 336
  • [33] Adaptive Progressive Continual Learning
    Xu, Ju
    Ma, Jin
    Gao, Xuesong
    Zhu, Zhanxing
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (10) : 6715 - 6728
  • [34] Continual learning in the presence of repetition
    Hemati, Hamed
    Pellegrini, Lorenzo
    Duan, Xiaotian
    Zhao, Zixuan
    Xia, Fangfang
    Masana, Marc
    Tscheschner, Benedikt
    Veas, Eduardo
    Zheng, Yuxiang
    Zhao, Shiji
    Li, Shao-Yuan
    Huang, Sheng-Jun
    Lomonaco, Vincenzo
    van de Ven, Gido M.
    NEURAL NETWORKS, 2025, 183
  • [35] Continual Learning and Private Unlearning
    Liu, Bo
    Liu, Qiang
    Stone, Peter
    CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 199, 2022, 199
  • [36] Adversary Aware Continual Learning
    Umer, Muhammad
    Polikar, Robi
    IEEE ACCESS, 2024, 12 : 126108 - 126121
  • [37] Continual Learning, Fast and Slow
    Pham, Quang
    Liu, Chenghao
    Hoi, Steven C. H.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (01) : 134 - 149
  • [38] Dynamic Consolidation for Continual Learning
    Li, Hang
    Ma, Chen
    Chen, Xi
    Liu, Xue
    NEURAL COMPUTATION, 2023, 35 (02) : 228 - 248
  • [39] Experience Replay for Continual Learning
    Rolnick, David
    Ahuja, Arun
    Schwarz, Jonathan
    Lillicrap, Timothy P.
    Wayne, Greg
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [40] Traceable Federated Continual Learning
    Wang, Qiang
    Li, Yawen
    Liu, Bingyan
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024: 12872 - 12881