DePT: Decoupled Prompt Tuning

Cited by: 1
Authors
Zhang, Ji [1 ]
Wu, Shihan [1 ]
Gao, Lianli [2 ]
Shen, Heng Tao [3 ]
Song, Jingkuan [1 ,3 ]
Affiliations
[1] Univ Elect Sci & Technol China UESTC, Chengdu, Peoples R China
[2] UESTC, Shenzhen Inst Adv Study, Shenzhen, Peoples R China
[3] Tongji Univ, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/CVPR52733.2024.01228
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This work breaks through the Base-New Tradeoff (BNT) dilemma in prompt tuning, i.e., the better the tuned model generalizes to the base (or target) task, the worse it generalizes to new tasks, and vice versa. Specifically, through an in-depth analysis of the learned features of the base and new tasks, we observe that the BNT stems from a channel bias issue - the vast majority of feature channels are occupied by base-specific knowledge, leading to the collapse of task-shared knowledge important to new tasks. To address this, we propose the Decoupled Prompt Tuning (DePT) framework, which decouples base-specific knowledge from feature channels into an isolated feature space during prompt tuning, so as to maximally preserve task-shared knowledge in the original feature space for achieving better zero-shot generalization on new tasks. Importantly, our DePT is orthogonal to existing prompt tuning approaches, and can enhance them with negligible additional computational cost. Extensive experiments on several datasets show the flexibility and effectiveness of DePT. Code is available at https://github.com/Koorye/DePT.
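The decoupling idea in the abstract can be illustrated with a minimal sketch. The names below (`isolated_space`, the specific head shapes, and the random stand-in features) are assumptions for illustration only, not the authors' exact implementation: a lightweight linear head projects encoder features into an isolated space where the base-task classifier operates, while zero-shot scoring for new tasks stays in the original feature space, so task-shared channels are not overwritten by base-specific knowledge.

```python
import numpy as np

rng = np.random.default_rng(0)

D, C_BASE = 8, 3  # feature dimension, number of base-task classes

# Stand-in for a frozen-encoder feature of one image (e.g., a CLIP visual feature).
feat = rng.standard_normal(D)

# Hypothetical linear head mapping features into an ISOLATED space,
# where base-specific knowledge is allowed to accumulate during tuning.
W_iso = rng.standard_normal((D, D)) * 0.1
b_iso = np.zeros(D)

def isolated_space(f):
    """Project a feature into the base-specific (isolated) feature space."""
    return W_iso.T @ f + b_iso

# Base-task logits are computed in the isolated space, so the base
# classification loss only shapes the decoupled head ...
W_base_cls = rng.standard_normal((D, C_BASE))
base_logits = isolated_space(feat) @ W_base_cls

# ... while zero-shot inference on new tasks scores the ORIGINAL feature
# against frozen class (text) embeddings, preserving task-shared knowledge.
class_emb = rng.standard_normal((C_BASE, D))
zero_shot_logits = class_emb @ feat

print(base_logits.shape, zero_shot_logits.shape)
```

In this sketch the base-task loss would backpropagate only through `W_iso`, `b_iso`, `W_base_cls`, and the prompts, leaving the original-space zero-shot path untouched; that is the sense in which the two feature spaces are decoupled.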
Pages: 12924-12933
Page count: 10
Related papers
50 records in total
  • [1] Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization
    Razdaibiedina, Anastasia
    Mao, Yuning
    Khabsa, Madian
    Lewis, Mike
    Hou, Rui
    Ba, Jimmy
    Almahairi, Amjad
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 6740 - 6757
  • [2] Visual Prompt Tuning
    Jia, Menglin
    Tang, Luming
    Chen, Bor-Chun
    Cardie, Claire
    Belongie, Serge
    Hariharan, Bharath
    Lim, Ser-Nam
    COMPUTER VISION - ECCV 2022, PT XXXIII, 2022, 13693 : 709 - 727
  • [3] Prompt-aligned Gradient for Prompt Tuning
    Zhu, Beier
    Niu, Yulei
    Han, Yucheng
    Wu, Yue
    Zhang, Hanwang
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 15613 - 15623
  • [4] Compressed Video Prompt Tuning
    Li, Bing
    Chen, Jiaxin
    Bao, Xiuguo
    Huang, Di
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [5] Universality and Limitations of Prompt Tuning
    Wang, Yihan
    Chauhan, Jatin
    Wang, Wei
    Hsieh, Cho-Jui
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [6] When Adversarial Training Meets Prompt Tuning: Adversarial Dual Prompt Tuning for Unsupervised Domain Adaptation
    Cui, Chaoran
    Liu, Ziyi
    Gong, Shuai
    Zhu, Lei
    Zhang, Chunyun
    Liu, Hui
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 1427 - 1440
  • [7] LION: Implicit Vision Prompt Tuning
    Wang, Haixin
    Chang, Jianlong
    Zhai, Yihang
    Luo, Xiao
    Sun, Jinan
    Lin, Zhouchen
    Tian, Qi
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6, 2024, : 5372 - 5380
  • [8] PTE: Prompt tuning with ensemble verbalizers
    Liang, Liheng
    Wang, Guancheng
    Lin, Cong
    Feng, Zhuowen
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 262
  • [9] Prompt Tuning in Biomedical Relation Extraction
    He, Jianping
    Li, Fang
    Li, Jianfu
    Hu, Xinyue
    Nian, Yi
    Xiang, Yang
    Wang, Jingqi
    Wei, Qiang
    Li, Yiming
    Xu, Hua
    Tao, Cui
    JOURNAL OF HEALTHCARE INFORMATICS RESEARCH, 2024, 8 (02) : 206 - 224
  • [10] Review of Research on Adapter and Prompt Tuning
    Lin, Lingde
    Liu, Na
    Wang, Zhengan
    Computer Engineering and Applications, 59 (02): 12 - 21