Complementary Calibration: Boosting General Continual Learning With Collaborative Distillation and Self-Supervision

Cited by: 4
Authors:
Ji, Zhong [1 ,2 ]
Li, Jin [1 ,2 ]
Wang, Qiang [1 ,2 ]
Zhang, Zhongfei [3 ]
Affiliations:
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Tianjin Univ, Tianjin Key Lab Brain Inspired Intelligence Technol, Tianjin 300072, Peoples R China
[3] SUNY Binghamton, Dept Comp Sci, Binghamton, NY 13902 USA
Funding:
National Natural Science Foundation of China
Keywords:
General continual learning; complementary calibration; knowledge distillation; self-supervised learning; supervised contrastive learning
DOI: 10.1109/TIP.2022.3230457
Chinese Library Classification (CLC):
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes:
081104; 0812; 0835; 1405
Abstract
General Continual Learning (GCL) aims to learn from a non-independent and identically distributed (non-i.i.d.) data stream without catastrophically forgetting old tasks, and without relying on task boundaries during either training or testing. We reveal that relation deviation and feature deviation are crucial causes of catastrophic forgetting: relation deviation refers to the deficient modeling of the relationships among all classes during knowledge distillation, and feature deviation refers to indiscriminative feature representations. To this end, we propose a Complementary Calibration (CoCa) framework that mines the complementary model's outputs and features to alleviate both deviations during GCL. Specifically, we propose a new collaborative distillation approach to address relation deviation. It distills the model's outputs by using the ensemble dark knowledge of the new model's outputs and the reserved outputs, which maintains performance on old tasks while balancing the relationships among all classes. Furthermore, we explore a collaborative self-supervision idea that leverages pretext tasks and supervised contrastive learning to address feature deviation by learning complete and discriminative features for all classes. Extensive experiments on six popular datasets show that our CoCa framework achieves superior performance against state-of-the-art methods.
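The abstract names two loss components: a distillation loss toward an ensemble of the new model's outputs and the reserved (buffered) outputs, and a supervised contrastive loss for discriminative features. The PyTorch sketch below illustrates how such losses are commonly implemented; it is not the paper's exact formulation, and the function names, temperature T, and mixing weight alpha are illustrative assumptions.

import torch
import torch.nn.functional as F

def collaborative_distillation_loss(new_logits, reserved_logits, T=2.0, alpha=0.5):
    # Ensemble dark knowledge (assumed form): blend the current model's
    # predictions with the logits reserved alongside replayed samples,
    # then distill the current model toward the blended soft target.
    ensemble = alpha * new_logits.detach() + (1.0 - alpha) * reserved_logits
    soft_target = F.softmax(ensemble / T, dim=1)
    log_pred = F.log_softmax(new_logits / T, dim=1)
    # Temperature-scaled KL divergence, rescaled by T^2 as is standard.
    return F.kl_div(log_pred, soft_target, reduction="batchmean") * T * T

def supervised_contrastive_loss(features, labels, tau=0.1):
    # Standard supervised contrastive loss (Khosla et al., 2020):
    # samples sharing a label are pulled together, others pushed apart.
    features = F.normalize(features, dim=1)          # (N, d) embeddings
    sim = features @ features.T / tau                # pairwise similarities
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                           # anchors with positives
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return (loss[valid] / pos_counts[valid]).mean()

In this sketch, new_logits.detach() keeps the soft target fixed during backpropagation; whether the actual method blocks gradients through the target, and how the two losses are weighted against the classification loss, are details the abstract does not specify.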
Pages: 657-667
Page count: 11