Memory Efficient Class-Incremental Learning for Image Classification

Cited by: 38
Authors
Zhao, Hanbin [1 ]
Wang, Hui [1 ]
Fu, Yongjian [1 ]
Wu, Fei [1 ]
Li, Xi [1 ,2 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Shanghai Inst Adv Study, Shanghai 201210, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Knowledge transfer; Data mining; Adaptation models; Training; Noise measurement; Knowledge engineering; Catastrophic forgetting; class-incremental learning (CIL); classification; exemplar; memory efficient;
DOI
10.1109/TNNLS.2021.3072041
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Under memory-resource-limited constraints, class-incremental learning (CIL) usually suffers from the "catastrophic forgetting" problem when updating the joint classification model upon the arrival of newly added classes. To cope with the forgetting problem, many CIL methods transfer the knowledge of old classes by preserving some exemplar samples in the size-constrained memory buffer. To utilize the memory buffer more efficiently, we propose to keep more auxiliary low-fidelity exemplar samples rather than the original real high-fidelity exemplar samples. Such a memory-efficient exemplar preserving scheme makes the old-class knowledge transfer more effective. However, the low-fidelity exemplar samples are often distributed in a different domain from that of the original exemplar samples, that is, a domain shift. To alleviate this problem, we propose a duplet learning scheme that seeks to construct domain-compatible feature extractors and classifiers, which greatly narrows the above domain gap. As a result, these low-fidelity auxiliary exemplar samples can moderately replace the original exemplar samples at a lower memory cost. In addition, we present a robust classifier adaptation scheme, which further refines the biased classifier (learned with samples containing distillation label knowledge about old classes) with the help of samples with pure true class labels. Experimental results demonstrate the effectiveness of this work against state-of-the-art approaches. We will release the code, baselines, and training statistics for all models to facilitate future research.
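The memory-efficiency argument in the abstract can be made concrete: under a fixed byte budget, storing 2x-downsampled exemplars lets four times as many samples fit in the buffer. A minimal sketch of this trade-off (the budget, image sizes, and function names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Hypothetical budget: bytes for 20 full-resolution 32x32 RGB exemplars.
MEMORY_BUDGET = 32 * 32 * 3 * 20


def downsample(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Produce a low-fidelity exemplar by average-pooling factor x factor blocks."""
    h, w, c = image.shape
    return image.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))


def exemplar_capacity(budget_bytes: int, sample_bytes: int) -> int:
    """How many exemplars of a given size fit in the memory buffer."""
    return budget_bytes // sample_bytes


full = np.zeros((32, 32, 3), dtype=np.uint8)                    # 3072 bytes
low = downsample(full.astype(np.float32)).astype(np.uint8)       # 16x16x3, 768 bytes

print(exemplar_capacity(MEMORY_BUDGET, full.nbytes))  # 20 high-fidelity exemplars
print(exemplar_capacity(MEMORY_BUDGET, low.nbytes))   # 80 low-fidelity exemplars
```

The paper's contribution is then to close the domain gap between such low-fidelity exemplars and the original training distribution, so that the extra samples actually help rather than hurt.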
Pages: 5966-5977
Page count: 12
Related Papers
50 records total
  • [41] Class incremental learning with analytic learning for hyperspectral image classification
    Zhuang, Huiping
    Yan, Yue
    He, Run
    Zeng, Ziqian
    JOURNAL OF THE FRANKLIN INSTITUTE, 2024, 361 (18)
  • [42] Few-shot class-incremental audio classification via discriminative prototype learning
    Xie, Wei
    Li, Yanxiong
    He, Qianhua
    Cao, Wenchang
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 225
  • [43] Efficient Class-Incremental Learning Based on Bag-of-Sequencelets Model for Activity Recognition
    Lee, Jong-Woo
    Hong, Ki-Sang
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2019, E102A (09) : 1293 - 1302
  • [44] Class-incremental Continual Learning for Instance Segmentation with Image-level Weak Supervision
    Hsieh, Yu-Hsing
    Chen, Guan-Sheng
    Cai, Shun-Xian
    Wei, Ting-Yun
    Yang, Huei-Fang
    Chen, Chu-Song
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 1250 - 1261
  • [45] A survey on few-shot class-incremental learning
    Tian, Songsong
    Li, Lusi
    Li, Weijun
    Ran, Hang
    Ning, Xin
    Tiwari, Prayag
    NEURAL NETWORKS, 2024, 169 : 307 - 324
  • [46] On the Stability-Plasticity Dilemma of Class-Incremental Learning
    Kim, Dongwan
    Han, Bohyung
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 20196 - 20204
  • [47] Dynamic Task Subspace Ensemble for Class-Incremental Learning
    Zhang, Weile
    He, Yuanjian
    Cong, Yulai
    ARTIFICIAL INTELLIGENCE, CICAI 2023, PT II, 2024, 14474 : 322 - 334
  • [48] Mixup-Inspired Video Class-Incremental Learning
    Long, Jinqiang
    Gao, Yizhao
    Lu, Zhiwu
    23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING, ICDM 2023, 2023, : 1181 - 1186
  • [49] Distilling Causal Effect of Data in Class-Incremental Learning
    Hu, Xinting
    Tang, Kaihua
    Miao, Chunyan
    Hua, Xian-Sheng
    Zhang, Hanwang
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 3956 - 3965
  • [50] FOSTER: Feature Boosting and Compression for Class-Incremental Learning
    Wang, Fu-Yun
    Zhou, Da-Wei
    Ye, Han-Jia
    Zhan, De-Chuan
    COMPUTER VISION, ECCV 2022, PT XXV, 2022, 13685 : 398 - 414