CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning

Cited by: 64
Authors
Smith, James Seale [1,2]
Karlinsky, Leonid [2,4]
Gutta, Vyshnavi [1]
Cascante-Bonilla, Paola [2,3]
Kim, Donghyun [2,4]
Arbelle, Assaf [4]
Panda, Rameswar [2,4]
Feris, Rogerio [2,4]
Kira, Zsolt [1]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] MIT, IBM Watson AI Lab, Cambridge, MA 02139 USA
[3] Rice Univ, Houston, TX USA
[4] IBM Res, Armonk, NY USA
DOI
10.1109/CVPR52729.2023.01146
CLC Number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Computer vision models suffer from a phenomenon known as catastrophic forgetting when learning novel concepts from continuously shifting training data. Typical solutions for this continual learning problem require extensive rehearsal of previously seen data, which increases memory costs and may violate data privacy. Recently, the emergence of large-scale pre-trained vision transformer models has enabled prompting approaches as an alternative to data rehearsal. These approaches rely on a key-query mechanism to generate prompts and have been found to be highly resistant to catastrophic forgetting in the well-established rehearsal-free continual learning setting. However, the key mechanism of these methods is not trained end-to-end with the task sequence. Our experiments show that this reduces their plasticity, thereby sacrificing new-task accuracy, and prevents them from benefiting from expanded parameter capacity. We instead propose to learn a set of prompt components that are assembled with input-conditioned weights to produce input-conditioned prompts, resulting in a novel attention-based end-to-end key-query scheme. Our experiments show that we outperform the current SOTA method DualPrompt on established benchmarks by as much as 4.5% in average final accuracy. We also outperform the state of the art by as much as 4.4% accuracy on a continual learning benchmark that contains both class-incremental and domain-incremental task shifts, corresponding to many practical settings. Our code is available at https://github.com/GT-RIPL/CODA-Prompt.
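The abstract describes the mechanism only at a high level: a pool of learnable prompt components is weighted by an input-conditioned key-query attention and summed into a single prompt for a frozen vision transformer. Below is a minimal PyTorch sketch of that idea; the class name, parameter shapes, and the cosine-similarity weighting are illustrative assumptions drawn from the abstract, not the authors' released implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedPromptPool(nn.Module):
    """Sketch of decomposed, attention-based prompting:
    M learnable prompt components are combined with weights produced by
    matching an image query against learnable keys.
    Shapes and names are assumptions, not the paper's code."""

    def __init__(self, num_components: int = 100, prompt_len: int = 8, dim: int = 768):
        super().__init__()
        # Learnable prompt components, keys, and per-component attention vectors.
        self.components = nn.Parameter(0.02 * torch.randn(num_components, prompt_len, dim))
        self.keys = nn.Parameter(0.02 * torch.randn(num_components, dim))
        self.attn = nn.Parameter(0.02 * torch.randn(num_components, dim))

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (B, dim) feature of the input image from a frozen pre-trained ViT.
        q = query.unsqueeze(1) * self.attn          # (B, M, dim) attention-modulated query
        q = F.normalize(q, dim=-1)
        k = F.normalize(self.keys, dim=-1)          # (M, dim)
        alpha = (q * k).sum(dim=-1)                 # (B, M) cosine-similarity weights
        # Weighted sum of components -> one input-conditioned prompt per image.
        prompt = torch.einsum("bm,mld->bld", alpha, self.components)  # (B, prompt_len, dim)
        return prompt
```

In use, the returned prompt tokens would be injected into the frozen transformer's attention layers, and the components, keys, and attention vectors would be optimized end-to-end with the classification loss over the task sequence, with no rehearsal buffer.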
Pages: 11909-11919
Number of pages: 11
Related Papers
38 records in total (10 shown below)
  • [1] DualPrompt: Complementary Prompting for Rehearsal-Free Continual Learning
    Wang, Zifeng; Zhang, Zizhao; Ebrahimi, Sayna; Sun, Ruoxi; Zhang, Han; Lee, Chen-Yu; Ren, Xiaoqi; Su, Guolong; Perot, Vincent; Dy, Jennifer; Pfister, Tomas
    Computer Vision, ECCV 2022, Pt XXVI, 2022, 13686: 631-648
  • [2] Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning
    Gao, Xinyuan; Dong, Songlin; He, Yuhang; Wang, Qiang; Gong, Yihong
    Computer Vision - ECCV 2024, Pt LXXXV, 2025, 15143: 89-106
  • [3] KC-Prompt: End-to-End Knowledge-Complementary Prompting for Rehearsal-Free Continual Learning
    Li, Yaowei; Liu, Yating; Cheng, Xuxin; Zhu, Zhihong; Li, HongXiang; Yang, Bang; Huang, Zhiqi
    2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2024), 2024: 6860-6864
  • [4] Task-Wise Prompt Query Function for Rehearsal-Free Continual Learning
    Chen, Shuai; Zhang, Mingyi; Zhang, Junge; Huang, Kaiqi
    2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2024), 2024: 6320-6324
  • [5] RECALL: Rehearsal-free Continual Learning for Object Classification
    Knauer, Markus; Denninger, Maximilian; Triebel, Rudolph
    2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022: 63-70
  • [6] Rehearsal-Free Online Continual Learning for Automatic Speech Recognition
    Vander Eeckt, Steven; Van Hamme, Hugo
    Interspeech 2023, 2023: 944-948
  • [7] Generating Instance-level Prompts for Rehearsal-free Continual Learning
    Jung, Dahuin; Han, Dongyoon; Bang, Jihwan; Song, Hwanjun
    2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023: 11813-11823
  • [8] Rehearsal-free Continual Language Learning via Efficient Parameter Isolation
    Wang, Zhicheng; Liu, Yufang; Ji, Tao; Wang, Xiaoling; Wu, Yuanbin; Jiang, Congcong; Chao, Ye; Han, Zhencong; Wang, Ling; Shao, Xu; Zeng, Wenqiu
    Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023): Long Papers, Vol 1, 2023: 10933-10946
  • [9] Rehearsal-Free Continual Learning over Small Non-IID Batches
    Lomonaco, Vincenzo; Maltoni, Davide; Pellegrini, Lorenzo
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2020), 2020: 989-998
  • [10] Rehearsal-Free Domain Continual Face Anti-Spoofing: Generalize More and Forget Less
    Cai, Rizhao; Cui, Yawen; Li, Zhi; Yu, Zitong; Li, Haoliang; Hu, Yongjian; Kot, Alex
    2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023: 8003-8014