Parameter and Computation Efficient Transfer Learning for Vision-Language Pre-trained Models

Cited: 0
Authors
Wu, Qiong [1 ,2 ]
Yu, Wei [1 ,2 ]
Zhou, Yiyi [1 ,2 ]
Huang, Shubin [1 ]
Sun, Xiaoshuai [1 ,2 ]
Ji, Rongrong [1 ,2 ]
Affiliations
[1] Xiamen Univ, Key Lab Multimedia Trusted Percept & Efficient Co, Minist Educ China, Xiamen 361005, Peoples R China
[2] Xiamen Univ, Inst Artificial Intelligence, Xiamen 361005, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
With ever-increasing parameters and computation, vision-language pre-trained (VLP) models exhibit prohibitive expenditure in downstream task adaptation. Recent endeavors mainly focus on parameter efficient transfer learning (PETL) for VLP models by updating only a small number of parameters. However, excessive computational overhead still plagues the application of VLPs. In this paper, we aim at parameter and computation efficient transfer learning (PCETL) for VLP models. In particular, PCETL not only needs to limit the number of trainable parameters in VLP models, but also to reduce computational redundancy during inference, thus enabling a more efficient transfer. To approach this target, we propose a novel dynamic architecture skipping (DAS) approach towards PCETL. Instead of directly optimizing the intrinsic architectures of VLP models, DAS first observes the significance of their modules to downstream tasks via a reinforcement learning (RL)-based process, and then skips the redundant ones with lightweight networks, i.e., adapters, according to the obtained rewards. In this way, the VLP model can maintain the scale of trainable parameters while speeding up its inference on downstream tasks. To validate DAS, we apply it to a set of representative VLP models and conduct extensive experiments on a range of VL tasks. The experimental results not only show the great advantages of DAS in reducing computational complexity, e.g., -11.97% FLOPs of METER on VQA2.0, but also confirm its competitiveness against existing PETL methods in terms of parameter scale and performance. Our source code is available at https://github.com/DoubtedSteam/DAS.
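The abstract describes DAS in two steps: estimate each module's significance to the downstream task via an RL-style search, then replace the lowest-scoring modules with lightweight adapters. The following is a minimal, hypothetical PyTorch sketch of the second step only; the names (`Adapter`, `skip_redundant_layers`) and the toy reward tensor are illustrative assumptions, not the authors' implementation, and the RL-based significance estimation is stubbed out as a placeholder.

```python
# Hypothetical sketch of the "skip with adapters" idea from the abstract.
# Assumes a stack of transformer encoder layers and per-layer rewards
# (significance scores) already produced by some RL-style search.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter standing in for a skipped transformer layer."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual path: the skipped layer becomes near-identity plus a
        # small learned correction, so little information is lost.
        return x + self.up(self.act(self.down(x)))

def skip_redundant_layers(layers: nn.ModuleList,
                          rewards: torch.Tensor,
                          num_skip: int,
                          dim: int) -> nn.ModuleList:
    """Replace the `num_skip` lowest-reward layers with adapters.

    `rewards` is a placeholder for RL-estimated per-layer significances.
    """
    skip_ids = set(torch.topk(-rewards, num_skip).indices.tolist())
    return nn.ModuleList(
        [Adapter(dim) if i in skip_ids else layer
         for i, layer in enumerate(layers)]
    )

# Usage: with 12 encoder layers and toy rewards, skip the 4 least useful.
layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
     for _ in range(12)]
)
rewards = torch.rand(12)  # stub for the RL significance estimates
compressed = skip_redundant_layers(layers, rewards, num_skip=4, dim=768)
x = torch.randn(2, 16, 768)
for block in compressed:
    x = block(x)
```

The payoff is that each adapter costs two small linear maps instead of a full attention-plus-FFN block, so inference FLOPs drop roughly in proportion to the number of skipped layers while the trainable-parameter budget stays small, which matches the FLOPs reduction the abstract reports.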
Pages: 17
Related Papers (50 in total)
  • [31] Generalization of vision pre-trained models for histopathology
    Sikaroudi, Milad
    Hosseini, Maryam
    Gonzalez, Ricardo
    Rahnamayan, Shahryar
    Tizhoosh, H. R.
    [J]. SCIENTIFIC REPORTS, 2023, 13 (01)
  • [33] Hadamard Adapter: An Extreme Parameter-Efficient Adapter Tuning Method for Pre-trained Language Models
    Chen, Yuyan
    Fu, Qiang
    Fan, Ge
    Du, Lun
    Lou, Jian-Guang
    Han, Shi
    Zhang, Dongmei
    Li, Zhixu
    Xiao, Yanghua
    [J]. PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 276 - 285
  • [34] X2-VLM: All-in-One Pre-Trained Model for Vision-Language Tasks
    Zeng, Yan
    Zhang, Xinsong
    Li, Hang
    Wang, Jiawei
    Zhang, Jipeng
    Zhou, Wangchunshu
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (05) : 3156 - 3168
  • [35] Parameter-efficient fine-tuning of large-scale pre-trained language models
    Ding, Ning
    Qin, Yujia
    Yang, Guang
    Wei, Fuchao
    Yang, Zonghan
    Su, Yusheng
    Hu, Shengding
    Chen, Yulin
    Chan, Chi-Min
    Chen, Weize
    Yi, Jing
    Zhao, Weilin
    Wang, Xiaozhi
    Liu, Zhiyuan
    Zheng, Hai-Tao
    Chen, Jianfei
    Liu, Yang
    Tang, Jie
    Li, Juanzi
    Sun, Maosong
    [J]. NATURE MACHINE INTELLIGENCE, 2023, 5 (03) : 220 - 235
  • [36] Neural Transfer Learning For Vietnamese Sentiment Analysis Using Pre-trained Contextual Language Models
    Le, An Pha
    Pham, Tran Vu
    Le, Thanh-Van
    Huynh, Duy V.
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLIED NETWORK TECHNOLOGIES (ICMLANT II), 2021, : 84 - 88
  • [37] Learning to Prompt for Vision-Language Models
    Zhou, Kaiyang
    Yang, Jingkang
    Loy, Chen Change
    Liu, Ziwei
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 (09) : 2337 - 2348
  • [38] Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization
    Yu, Tiezheng
    Dai, Wenliang
    Liu, Zihan
    Fung, Pascale
    [J]. 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 3995 - 4007
  • [40] Annotating Columns with Pre-trained Language Models
    Suhara, Yoshihiko
    Li, Jinfeng
    Li, Yuliang
    Zhang, Dan
    Demiralp, Cagatay
    Chen, Chen
    Tan, Wang-Chiew
    [J]. PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA (SIGMOD '22), 2022, : 1493 - 1503