Self-Supervised Models are Continual Learners

Cited by: 29
Authors
Fini, Enrico [1 ,2 ]
da Costa, Victor G. Turrisi [1 ]
Alameda-Pineda, Xavier [2 ]
Ricci, Elisa [1 ,3 ]
Alahari, Karteek [2 ]
Mairal, Julien [2 ]
Affiliations
[1] Univ Trento, Trento, Italy
[2] INRIA, Paris, France
[3] Fdn Bruno Kessler, Trento, Italy
Funding
EU Horizon 2020
DOI
10.1109/CVPR52688.2022.00940
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Self-supervised models have been shown to produce comparable or better visual representations than their supervised counterparts when trained offline on unlabeled data at scale. However, their efficacy is catastrophically reduced in a Continual Learning (CL) scenario where data is presented to the model sequentially. In this paper, we show that self-supervised loss functions can be seamlessly converted into distillation mechanisms for CL by adding a predictor network that maps the current state of the representations to their past state. This enables us to devise a framework for Continual self-supervised visual representation Learning that (i) significantly improves the quality of the learned representations, (ii) is compatible with several state-of-the-art self-supervised objectives, and (iii) needs little to no hyperparameter tuning. We demonstrate the effectiveness of our approach empirically by training six popular self-supervised models in various CL settings. Code: github.com/DonkeyShot21/cassle
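The core idea in the abstract — a predictor network that maps the current state of the representations to their past state, with the self-supervised similarity objective reused as the distillation loss — can be sketched as follows. This is a minimal NumPy illustration of that principle, not the paper's implementation (see the linked repository for that); the linear predictor `W`, the feature shapes, and the negative-cosine loss choice are assumptions for illustration.

```python
import numpy as np

def cosine_distill_loss(pred, target):
    """Negative cosine similarity between the predicted features and the
    frozen past features, averaged over the batch. Reusing the similarity
    objective as a distillation term is the mechanism the abstract
    describes; -1 means the prediction perfectly matches the past state."""
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    target = target / np.linalg.norm(target, axis=1, keepdims=True)
    return -np.mean(np.sum(pred * target, axis=1))

# Toy setup (hypothetical shapes): batch of 4 samples, feature dim 8.
rng = np.random.default_rng(0)
z_current = rng.normal(size=(4, 8))   # features from the current encoder
z_past = rng.normal(size=(4, 8))      # features from the frozen past encoder
W = 0.1 * rng.normal(size=(8, 8))     # toy linear predictor g

z_pred = z_current @ W                # g(z_current): predict the past state
loss = cosine_distill_loss(z_pred, z_past)
```

In training, `loss` would be added to the self-supervised objective and gradients would flow only through the current encoder and the predictor, while the past encoder stays frozen.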
Pages: 9611-9620 (10 pages)