Black Box Few-Shot Adaptation for Vision-Language Models

Cited: 5
Authors
Ouali, Yassine [1 ]
Bulat, Adrian [1 ]
Martinez, Brais [1 ]
Tzimiropoulos, Georgios [1 ,2 ]
Affiliations
[1] Samsung AI Cambridge, Cambridge, England
[2] Queen Mary Univ London, London, England
Keywords
SHAPE;
DOI
10.1109/ICCV51070.2023.01424
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Vision-Language (V-L) models trained with contrastive learning to align the visual and language modalities have been shown to be strong few-shot learners. Soft prompt learning is the method of choice for few-shot downstream adaptation, aiming to bridge the modality gap caused by the distribution shift induced by the new domain. While parameter-efficient, prompt learning still requires access to the model weights and can be computationally infeasible for large models with billions of parameters. To address these shortcomings, in this work, we describe a black-box method for V-L few-shot adaptation that (a) operates on pre-computed image and text features and hence works without access to the model's weights, (b) is orders of magnitude faster at training time, (c) is amenable to both supervised and unsupervised training, and (d) can even be used to align image and text features computed from uni-modal models. To achieve this, we propose Linear Feature Alignment (LFA), a simple linear approach for V-L re-alignment in the target domain. LFA is initialized from a closed-form solution to a least-squares problem and then iteratively updated by minimizing a re-ranking loss. Despite its simplicity, our approach can even surpass soft-prompt learning methods, as shown by extensive experiments on 11 image and 2 video datasets. Code available at: https://github.com/saic-fi/LFA
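A minimal sketch of the two-stage recipe described in the abstract, operating on pre-computed features: a closed-form least-squares initialization of a linear map W, followed by iterative refinement with a re-ranking objective. The abstract does not specify the exact re-ranking loss, optimizer, or hyper-parameters, so the hinge-margin objective, AdamW settings, pairing convention, and toy feature shapes below are illustrative assumptions rather than the authors' implementation (see the linked repository for that).

# Minimal PyTorch sketch of the two-stage LFA recipe from the abstract.
# The hinge-margin "re-ranking" loss is an assumed stand-in; the paper's
# exact loss and hyper-parameters are not given in the abstract.
import torch
import torch.nn.functional as F

def lfa_init(img_feats, txt_feats):
    # Closed-form least-squares initialization: W* = argmin_W ||XW - Y||_F.
    return torch.linalg.lstsq(img_feats, txt_feats).solution  # (d, d)

def lfa_refine(W, img_feats, labels, class_txt_feats,
               steps=100, lr=1e-3, margin=0.1):
    # Iteratively update W so that each projected image feature ranks its own
    # class text feature above every other class by at least a margin.
    W = W.clone().requires_grad_(True)
    opt = torch.optim.AdamW([W], lr=lr)
    for _ in range(steps):
        proj = F.normalize(img_feats @ W, dim=-1)        # aligned image features
        txt = F.normalize(class_txt_feats, dim=-1)       # one text feature per class
        sims = proj @ txt.t()                            # cosine similarities (N, C)
        true_sim = sims.gather(1, labels[:, None])       # similarity to true class
        mask = F.one_hot(labels, num_classes=sims.size(1)).bool()
        hinge = F.relu(margin + sims - true_sim).masked_fill(mask, 0.0)
        loss = hinge.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return W.detach()

# Toy example with random stand-in features (real usage: frozen V-L encoder outputs).
C, shots, d = 50, 16, 512
N = C * shots
img = F.normalize(torch.randn(N, d), dim=-1)
labels = torch.arange(C).repeat(shots)
cls_txt = F.normalize(torch.randn(C, d), dim=-1)
W = lfa_init(img, cls_txt[labels])                       # pair each image with its class text feature
W = lfa_refine(W, img, labels, cls_txt)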
Pages: 15488 - 15500
Page count: 13
Related Papers
50 items in total
  • [21] Vision-Language Alignment Learning Under Affinity and Divergence Principles for Few-Shot Out-of-Distribution Generalization
    Zhu, Lin
    Yin, Weihan
    Yang, Yiyao
    Wu, Fan
    Zeng, Zhaoyu
    Gu, Qinying
    Wang, Xinbing
    Zhou, Chenghu
    Ye, Nanyang
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (09) : 3375 - 3407
  • [22] Large Language Models Enable Few-Shot Clustering
    Viswanathan, Vijay
    Gashteovski, Kiril
    Lawrence, Carolin
    Wu, Tongshuang
    Neubig, Graham
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2024, 12 : 321 - 333
  • [23] Multimodal Few-Shot Learning with Frozen Language Models
    Tsimpoukelli, Maria
    Menick, Jacob
    Cabi, Serkan
    Eslami, S. M. Ali
    Vinyals, Oriol
    Hill, Felix
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [24] H2R Bridge: Transferring vision-language models to few-shot intention meta-perception in human robot collaboration
    Wu, Duidi
    Zhao, Qianyou
    Fan, Junming
    Qi, Jin
    Zheng, Pai
    Hu, Jie
    JOURNAL OF MANUFACTURING SYSTEMS, 2025, 80 : 524 - 535
  • [25] Disease-Informed Adaptation of Vision-Language Models
    Zhang, Jiajin
    Wang, Ge
    Kalra, Mannudeep K.
    Yan, Pingkun
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT XI, 2024, 15011 : 232 - 242
  • [26] CoCoOpter: Pre-train, prompt, and fine-tune the vision-language model for few-shot image classification
    Yan, Jie
    Xie, Yuxiang
    Guo, Yanming
    Wei, Yingmei
    Zhang, Xiaoping
    Luan, Xidao
    INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL, 2023, 12 (02)
  • [27] CoCoOpter: Pre-train, prompt, and fine-tune the vision-language model for few-shot image classification
    Yan, Jie
    Xie, Yuxiang
    Guo, Yanming
    Wei, Yingmei
    Zhang, Xiaoping
    Luan, Xidao
    INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL, 2023, 12
  • [28] Advancing Few-Shot Black-Box Attack With Alternating Training
    Meng, Lingzhuang
    Shao, Mingwen
    Wang, Fan
    Qiao, Yuanjian
    Xu, Zhaofei
    IEEE TRANSACTIONS ON RELIABILITY, 2024, 73 (03) : 1544 - 1558
  • [29] Improving Diversity in Black-Box Few-Shot Knowledge Distillation
    Vo, Tri-Nhan
    Nguyen, Dang
    Do, Kien
    Gupta, Sunil
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, PT II, ECML PKDD 2024, 2024, 14942 : 178 - 196
  • [30] VL-Few: Vision Language Alignment for Multimodal Few-Shot Meta Learning
    Ma, Han
    Fan, Baoyu
    Ng, Benjamin K.
    Lam, Chan-Tong
    APPLIED SCIENCES-BASEL, 2024, 14 (03):