Learning to Prompt for Continual Learning

Cited by: 252
|
Authors
Wang, Zifeng [1 ]
Zhang, Zizhao [2 ]
Lee, Chen-Yu [2 ]
Zhang, Han [3 ]
Sun, Ruoxi [2 ]
Ren, Xiaoqi [2 ]
Su, Guolong [3 ]
Perot, Vincent [3 ]
Dy, Jennifer [1 ]
Pfister, Tomas [2 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] Google Cloud AI, Sunnyvale, CA USA
[3] Google Res, Sunnyvale, CA USA
Keywords
SYSTEMS;
DOI
10.1109/CVPR52688.2022.00024
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The mainstream paradigm behind continual learning has been to adapt the model parameters to non-stationary data distributions, where catastrophic forgetting is the central challenge. Typical methods rely on a rehearsal buffer or known task identity at test time to retrieve learned knowledge and address forgetting, while this work presents a new paradigm for continual learning that aims to train a more succinct memory system without accessing task identity at test time. Our method learns to dynamically prompt (L2P) a pre-trained model to learn tasks sequentially under different task transitions. In our proposed framework, prompts are small learnable parameters, which are maintained in a memory space. The objective is to optimize prompts to instruct the model prediction and explicitly manage task-invariant and task-specific knowledge while maintaining model plasticity. We conduct comprehensive experiments under popular image classification benchmarks with different challenging continual learning settings, where L2P consistently outperforms prior state-of-the-art methods. Surprisingly, L2P achieves competitive results against rehearsal-based methods even without a rehearsal buffer and is directly applicable to challenging task-agnostic continual learning. Source code is available at https://github.com/google-research/l2p.
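The abstract describes prompts kept in a shared memory space and selected per input without task identity. A minimal, hypothetical sketch of that instance-wise lookup, assuming a pool of prompt keys matched to a query feature by cosine similarity (the function and variable names here are illustrative; the actual method queries a pre-trained ViT's features against jointly learned keys):

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def select_prompts(query, keys, top_n=2):
    # Rank every prompt key against the input's query feature and
    # return the indices of the top-N prompts; these prompts would
    # then be prepended to the input embedding sequence.
    scores = [(cosine(query, k), i) for i, k in enumerate(keys)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:top_n]]

# Toy example: a pool of 4 prompt keys in a 3-d feature space.
keys = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]]
query = [0.9, 0.1, 0.0]
print(select_prompts(query, keys))  # → [0, 3]
```

Because selection depends only on the input's own feature, no task identity is needed at test time; inputs from the same underlying task tend to retrieve the same prompts, which is how the pool separates task-specific from shared knowledge.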
Pages: 139 - 149
Number of pages: 11
Related papers
50 records in total
  • [31] Online Prototype Learning for Online Continual Learning
    Wei, Yujie
    Ye, Jiaxin
    Huang, Zhizhong
    Zhang, Junping
    Shan, Hongming
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 18718 - 18728
  • [32] Clinical applications of continual learning machine learning
    Lee, Cecilia S.
    Lee, Aaron Y.
    LANCET DIGITAL HEALTH, 2020, 2 (06): : E279 - E281
  • [33] Meta-Learning Representations for Continual Learning
    Javed, Khurram
    White, Martha
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [34] Learning on the Job: Online Lifelong and Continual Learning
    Liu, Bing
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 13544 - 13549
  • [35] Exemplary Care and Learning Sites: Linking the Continual Improvement of Learning and the Continual Improvement of Care
    Headrick, Linda A.
    Shalaby, Marc
    Baum, Karyn D.
    Fitzsimmons, Anne B.
    Hoffman, Kimberly G.
    Hoglund, Par J.
    Ogrinc, Greg
    Thorne, Karin
    ACADEMIC MEDICINE, 2011, 86 (11) : E6 - E7
  • [36] Continual Learning Through Research
    Berndt, Dawn
    JOURNAL OF INFUSION NURSING, 2023, 46 (05) : 253 - 254
  • [37] Continual Unsupervised Representation Learning
    Rao, Dushyant
    Visin, Francesco
    Rusu, Andrei A.
    Teh, Yee Whye
    Pascanu, Razvan
    Hadsell, Raia
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [38] Continual Information Cascade Learning
    Zhou, Fan
    Jing, Xin
    Xu, Xovee
    Zhong, Ting
    Trajcevski, Goce
    Wu, Jin
    2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2020,
  • [39] Continual Auxiliary Task Learning
    McLeod, Matthew
    Lo, Chunlok
    Schlegel, Matthew
    Jacobsen, Andrew
    Kumaraswamy, Raksha
    White, Martha
    White, Adam
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [40] Reinforced Continual Learning for Graphs
    Rakaraddi, Appan
    Kei, Lam Siew
    Pratama, Mahardhika
    de Carvalho, Marcus
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 1666 - 1674