Open-set domain adaptation with visual-language foundation models

Cited: 0
Authors
Yu, Qing [1 ]
Irie, Go [2 ]
Aizawa, Kiyoharu [1 ]
Affiliations
[1] Univ Tokyo, Dept Informat & Commun Engn, Tokyo 1138656, Japan
[2] Tokyo Univ Sci, Dept Informat & Comp Technol, Tokyo 1258585, Japan
Keywords
Deep learning; Cross-domain learning; Open-set recognition; Domain adaptation
DOI
10.1016/j.cviu.2024.104230
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Unsupervised domain adaptation (UDA) has proven highly effective at transferring knowledge from a source domain with labeled data to a target domain with unlabeled data. Because the target domain lacks labels and may contain classes unseen in the source domain, open-set domain adaptation (ODA) has emerged as a way to identify these unknown classes during training. Although existing ODA approaches aim to overcome the distribution shift between the source and target domains, most of them fine-tune ImageNet pre-trained models on the labeled source domain and then adapt them to the target domain. Recent visual-language foundation models (VLFMs), such as Contrastive Language-Image Pre-Training (CLIP), are robust to many distribution shifts and should therefore substantially improve ODA performance. In this work, we explore generic ways to apply CLIP, a popular VLFM, to ODA. We first investigate the performance of zero-shot prediction with CLIP, and then propose an entropy optimization strategy that assists ODA models with the outputs of CLIP. The proposed approach achieves state-of-the-art results on various benchmarks, demonstrating its effectiveness in addressing the ODA problem.
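The abstract names two ingredients that can be sketched concretely: zero-shot prediction with CLIP over the known (source) class names, and an entropy computed on CLIP's output distribution to flag probable unknown-class samples. The snippet below is a minimal illustration of that general idea, not the paper's implementation; the prompt template, the sample file name, and the threshold tau are assumptions introduced purely for the example, and only the standard OpenAI clip package API (clip.load, clip.tokenize, encode_image, encode_text) is used.

```python
# Minimal sketch: CLIP zero-shot prediction plus an entropy-based unknown check.
# Assumptions (not from the paper): the prompt template "a photo of a {c}",
# the file "target_sample.jpg", and the threshold `tau` are illustrative only.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

known_classes = ["bicycle", "backpack", "keyboard"]  # source-domain label set
prompts = clip.tokenize([f"a photo of a {c}" for c in known_classes]).to(device)

image = preprocess(Image.open("target_sample.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(prompts)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    # CLIP's zero-shot logits: scaled cosine similarity to each class prompt.
    probs = (100.0 * image_feat @ text_feat.T).float().softmax(dim=-1)

# Entropy of the zero-shot distribution: low = confident known-class guess,
# high = the sample may belong to an unknown class.
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

tau = 0.5 * torch.log(torch.tensor(float(len(known_classes))))  # illustrative threshold
if entropy.item() > tau.item():
    print("likely unknown class (entropy %.3f)" % entropy.item())
else:
    print("pseudo-label:", known_classes[probs.argmax(dim=-1).item()])
```

In the paper, CLIP's outputs guide an entropy optimization strategy during adaptation of the ODA model; the sketch above only covers the zero-shot scoring side.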
Pages: 8
Related Papers
50 in total
  • [1] Visual-language foundation models in medicine
    Liu, Chunyu
    Jin, Yixiao
    Guan, Zhouyu
    Li, Tingyao
    Qin, Yiming
    Qian, Bo
    Jiang, Zehua
    Wu, Yilan
    Wang, Xiangning
    Zheng, Ying Feng
    Zeng, Dian
    VISUAL COMPUTER, 2025, 41 (04): 2953 - 2972
  • [2] Open-set domain adaptation by deconfounding domain gaps
    Zhao, Xin
    Wang, Shengsheng
    Sun, Qianru
    APPLIED INTELLIGENCE, 2023, 53 (07) : 7862 - 7875
  • [3] Domain Adaptation with Dynamic Open-Set Targets
    Wu, Jun
    He, Jingrui
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 2039 - 2049
  • [4] Open-Set Graph Domain Adaptation via Separate Domain Alignment
    Wang, Yu
    Zhu, Ronghang
    Ji, Pengsheng
    Li, Sheng
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8, 2024, : 9142 - 9150
  • [5] Extending Partial Domain Adaptation Algorithms to the Open-Set Setting
    Pikramenos, George
    Spyrou, Evaggelos
    Perantonis, Stavros J.
    APPLIED SCIENCES-BASEL, 2022, 12 (19):
  • [6] Open-Set Domain Adaptation Classification Via Adversarial Learning
    Zhao, Yunbin
    Zhu, Songhao
    Liang, Zhiwei
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 7059 - 7063
  • [7] Simplifying open-set video domain adaptation with contrastive learning
    Zara, Giacomo
    da Costa, Victor Guilherme Turrisi
    Roy, Subhankar
    Rota, Paolo
    Ricci, Elisa
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2024, 241
  • [8] GRAPH NEURAL NETWORK BASED OPEN-SET DOMAIN ADAPTATION
    Zhao, Shan
    Saha, Sudipan
    Zhu, Xiao Xiang
    XXIV ISPRS CONGRESS: IMAGING TODAY, FORESEEING TOMORROW, COMMISSION III, 2022, 43-B3 : 1407 - 1413
  • [9] Self-Paced Learning for Open-Set Domain Adaptation
    Liu X.
    Zhou Y.
    Zhou T.
    Qin J.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2023, 60 (08): 1711 - 1726