Multi-granularity knowledge distillation and prototype consistency regularization for class-incremental learning

Cited by: 9
Authors
Shi, Yanyan [1 ]
Shi, Dianxi [2 ]
Qiao, Ziteng [2 ]
Wang, Zhen [2 ]
Zhang, Yi [2 ]
Yang, Shaowu [1 ]
Qiu, Chunping [2 ]
Affiliations
[1] Natl Univ Def Technol, Coll Comp, Changsha 410073, Peoples R China
[2] Natl Innovat Inst Def Technol, Beijing 100071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Class-incremental learning; Knowledge distillation; Consistency regularization; Image classification;
DOI
10.1016/j.neunet.2023.05.006
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks (DNNs) are prone to the notorious catastrophic forgetting problem when learning new tasks incrementally. Class-incremental learning (CIL) is a promising solution that tackles this challenge by learning new classes without forgetting old ones. Existing CIL approaches rely on stored representative exemplars or complex generative models to achieve good performance. However, storing data from previous tasks raises memory and privacy issues, and the training of generative models is unstable and inefficient. This paper proposes a method based on multi-granularity knowledge distillation and prototype consistency regularization (MDPCR) that performs well even when the previous training data is unavailable. First, we design knowledge distillation losses in the deep feature space to constrain the incremental model trained on new data. Multi-granularity knowledge is thereby captured from three aspects: distilling multi-scale self-attentive features, feature-similarity probabilities, and global features, which maximizes the retention of previous knowledge and effectively alleviates catastrophic forgetting. In addition, we preserve the prototype of each old class and employ prototype consistency regularization (PCR) to ensure that the old prototypes and their semantically enhanced counterparts produce consistent predictions, which enhances the robustness of the old prototypes and reduces classification bias. Extensive experiments on three CIL benchmark datasets confirm that MDPCR significantly outperforms exemplar-free methods and even surpasses typical exemplar-based approaches. (c) 2023 Elsevier Ltd. All rights reserved.
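Note: the abstract names two loss families but gives no implementation details. The PyTorch sketch below only illustrates what such losses could look like; the attention construction, the Gaussian perturbation used to stand in for "semantic enhancement" of prototypes, the tensor shapes, and the equal loss weighting are assumptions for illustration, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def multi_granularity_kd_loss(old_feats, new_feats):
    # Feature-space distillation between the frozen old model (old_feats)
    # and the incremental model (new_feats). Both are dicts (assumed layout):
    #   'scales': list of multi-scale feature maps, each [B, C, H, W]
    #   'global': pooled global features, [B, D]
    # (1) Multi-scale self-attentive feature distillation: derive a simple
    #     spatial attention map per scale and match the new model's
    #     attention to the old model's.
    attn_loss = 0.0
    for f_old, f_new in zip(old_feats['scales'], new_feats['scales']):
        a_old = torch.softmax(f_old.pow(2).mean(dim=1).flatten(1), dim=1)
        a_new = torch.softmax(f_new.pow(2).mean(dim=1).flatten(1), dim=1)
        attn_loss = attn_loss + F.mse_loss(a_new, a_old.detach())
    # (2) Feature-similarity-probability distillation: match the pairwise
    #     similarity distribution over the batch of global features.
    def sim_prob(f):
        f = F.normalize(f, dim=1)
        return torch.softmax(f @ f.t(), dim=1)
    sim_loss = F.kl_div(sim_prob(new_feats['global']).log(),
                        sim_prob(old_feats['global']).detach(),
                        reduction='batchmean')
    # (3) Global feature distillation.
    glob_loss = F.mse_loss(new_feats['global'], old_feats['global'].detach())
    return attn_loss + sim_loss + glob_loss

def prototype_consistency_loss(prototypes, classifier, sigma=0.1):
    # Prototype consistency regularization (PCR): stored old-class prototypes
    # [K, D] and their "semantically enhanced" versions (modeled here as
    # Gaussian-perturbed prototypes) should yield consistent predictions
    # under the current classifier.
    enhanced = prototypes + sigma * torch.randn_like(prototypes)
    p_old = torch.softmax(classifier(prototypes), dim=1)
    log_p_enh = torch.log_softmax(classifier(enhanced), dim=1)
    return F.kl_div(log_p_enh, p_old.detach(), reduction='batchmean')

# Toy usage with random tensors (shapes are illustrative only):
if __name__ == '__main__':
    B, D = 8, 128
    old = {'scales': [torch.randn(B, 64, 16, 16)], 'global': torch.randn(B, D)}
    new = {'scales': [torch.randn(B, 64, 16, 16)], 'global': torch.randn(B, D)}
    classifier = torch.nn.Linear(D, 20)
    prototypes = torch.randn(10, D)  # one stored mean feature per old class
    total = (multi_granularity_kd_loss(old, new)
             + prototype_consistency_loss(prototypes, classifier))
    print(total.item())

Detaching the old-model features and the unperturbed prototype predictions treats them as fixed teacher targets, so gradients only update the incremental model; this is a standard choice in distillation-style CIL, though the paper may weight or combine the terms differently.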
Pages: 617-630
Page count: 14
Related Papers
50 records in total
  • [1] Multi-granularity for knowledge distillation
    Shao, Baitan
    Chen, Ying
    [J]. IMAGE AND VISION COMPUTING, 2021, 115
  • [2] Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation
    Kang, Minsoo
    Park, Jaeyoo
    Han, Bohyung
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 16050 - 16059
  • [3] CLASS-INCREMENTAL LEARNING FOR REMOTE SENSING IMAGES BASED ON KNOWLEDGE DISTILLATION
    Song, Jingduo
    Jia, Hecheng
    Xu, Feng
    [J]. IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2023, : 5026 - 5028
  • [4] Hyperspectral Image Classification Based on Class-Incremental Learning with Knowledge Distillation
    Xu, Meng
    Zhao, Yuanyuan
    Liang, Yajun
    Ma, Xiaorui
    [J]. REMOTE SENSING, 2022, 14 (11)
  • [5] Multi-Granularity Regularized Re-Balancing for Class Incremental Learning
    Chen, Huitong
    Wang, Yu
    Hu, Qinghua
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (07) : 7263 - 7277
  • [6] Class-incremental learning via prototype similarity replay and similarity-adjusted regularization
    Chen, Runji
    Chen, Guangzhu
    Liao, Xiaojuan
    Xiong, Wenjie
    [J]. APPLIED INTELLIGENCE, 2024, 54 (20) : 9971 - 9986
  • [7] Class-Incremental Learning of Plant and Disease Detection: Growing Branches with Knowledge Distillation
    Page-Fortin, Mathieu
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 593 - 603
  • [8] Few-Shot Class-Incremental Learning via Relation Knowledge Distillation
    Dong, Songlin
    Hong, Xiaopeng
    Tao, Xiaoyu
    Chang, Xinyuan
    Wei, Xing
    Gong, Yihong
    [J]. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 1255 - 1263
  • [9] Class-Incremental Exemplar Compression for Class-Incremental Learning
    Luo, Zilin
    Liu, Yaoyao
    Schiele, Bernt
    Sun, Qianru
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 11371 - 11380
  • [10] Knowledge Restore and Transfer for Multi-Label Class-Incremental Learning
    Dong, Songlin
    Luo, Haoyu
    He, Yuhang
    Wei, Xing
    Cheng, Jie
    Gong, Yihong
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 18665 - 18674