Continual learning with selective nets

Cited by: 0
Authors
Luu, Hai Tung [1 ]
Szemenyei, Marton [1 ]
Affiliations
[1] Budapest Univ Technol & Econ, Control Engn & Informat Technol, Budapest, Hungary
Keywords
Continual learning; Computer vision; Image classification; Machine learning
DOI
10.1007/s10489-025-06497-z
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The widespread adoption of foundation models has significantly transformed machine learning, enabling even straightforward architectures to achieve results comparable to state-of-the-art methods. Inspired by the brain's natural learning process, in which studying a new concept activates distinct neural pathways and recalling that memory requires a specific stimulus to fully recover the information, we present a novel approach to dynamic task identification and submodel selection in continual learning. Our method leverages the DINOv2 foundation model, which learns robust visual features without supervision, to handle datasets divided into multiple experiences, each representing a subset of classes. To build a memory of these classes, we employ strategies such as selecting random real images, generating distilled images, applying k-nearest neighbours (kNN) to identify the samples closest to each cluster, and using support vector machines (SVM) to select the most representative samples. During testing, where the task identity (ID) is not provided, we extract features from the test image and match them against the stored features using distance measurements. Additionally, we introduce a new forgetting metric designed specifically for task-agnostic continual learning scenarios, unlike traditional task-specific approaches; it captures the extent of knowledge loss across tasks when the task identity is unknown during inference. Despite its simple architecture, our method delivers competitive performance across various datasets, surpassing state-of-the-art results in certain instances.
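
The abstract describes the test-time matching step only at a high level, so a minimal sketch may help. It assumes a frozen DINOv2 backbone loaded via torch.hub and uses mean-feature class prototypes as the stored memory; the function names (extract_features, build_memory, identify) and the prototype-mean shortcut are illustrative assumptions, not the authors' implementation (the paper itself considers random exemplars, distilled images, kNN, and SVM selection for building the memory).

import torch
import torch.nn.functional as F

# Hedged sketch, not the authors' code: a frozen DINOv2 backbone acts as the
# feature extractor, and one mean prototype per class serves as the memory.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> torch.Tensor:
    """Embed a batch of (N, 3, H, W) images; H and W must be multiples of 14."""
    feats = model(images)                  # (N, D) CLS-token features
    return F.normalize(feats, dim=-1)      # unit norm, so dot product = cosine

@torch.no_grad()
def build_memory(exemplars: dict[int, torch.Tensor]) -> dict[int, torch.Tensor]:
    """Store one prototype per class: the mean of its exemplar embeddings
    (a stand-in for the paper's exemplar-selection strategies)."""
    return {c: F.normalize(extract_features(x).mean(0), dim=-1)
            for c, x in exemplars.items()}

@torch.no_grad()
def identify(image: torch.Tensor, memory: dict[int, torch.Tensor]) -> int:
    """Task-agnostic inference: match the test embedding to the nearest
    stored prototype by cosine distance; the winning class implies the task."""
    q = extract_features(image.unsqueeze(0)).squeeze(0)
    classes = list(memory)
    protos = torch.stack([memory[c] for c in classes])  # (C, D)
    return classes[int((1.0 - protos @ q).argmin())]

Because the backbone stays frozen, matching happens entirely in feature space: no retraining is needed per experience, which is what permits inference without a task ID and, in turn, the kind of task-agnostic forgetting measurement the abstract proposes.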
Pages: 15