Open-set domain adaptation with visual-language foundation models

Cited by: 0
Authors
Yu, Qing [1 ]
Irie, Go [2 ]
Aizawa, Kiyoharu [1 ]
Affiliations
[1] Univ Tokyo, Dept Informat & Commun Engn, Tokyo 1138656, Japan
[2] Tokyo Univ Sci, Dept Informat & Comp Technol, Tokyo 1258585, Japan
Keywords
Deep learning; Cross-domain learning; Open-set recognition; Domain adaptation
DOI
10.1016/j.cviu.2024.104230
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge obtained from a source domain with labeled data to a target domain with unlabeled data. Owing to the lack of labeled data in the target domain and the possible presence of unknown classes, open-set domain adaptation (ODA) has emerged as a potential solution for identifying these classes during the training phase. Although existing ODA approaches aim to address the distribution shift between the source and target domains, most methods fine-tune ImageNet pre-trained models on the source domain and then adapt them to the target domain. Recent visual-language foundation models (VLFM), such as Contrastive Language-Image Pre-Training (CLIP), are robust to many distribution shifts and, therefore, should substantially improve the performance of ODA. In this work, we explore generic ways to adopt CLIP, a popular VLFM, for ODA. We investigate the performance of zero-shot prediction using CLIP, and then propose an entropy optimization strategy to assist the ODA models with the outputs of CLIP. The proposed approach achieves state-of-the-art results on various benchmarks, demonstrating its effectiveness in addressing the ODA problem.
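As a rough illustration of the two ingredients named in the abstract (CLIP zero-shot prediction and entropy-based screening of target samples), the sketch below uses the publicly available openai "clip" package. The class names, the image path, and the threshold tau are hypothetical placeholders, not values from the paper, and the paper's full entropy-optimization training strategy is not reproduced here.

```python
# Minimal sketch: CLIP zero-shot prediction on a target-domain image plus an
# entropy check to flag likely unknown-class samples. Assumes the openai
# "clip" package (pip install git+https://github.com/openai/CLIP.git).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical list of known (source-domain) class names.
known_classes = ["bicycle", "backpack", "monitor", "keyboard"]
text = clip.tokenize([f"a photo of a {c}" for c in known_classes]).to(device)

# Hypothetical target-domain image path.
image = preprocess(Image.open("target_image.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    # CLIP returns image-text similarity logits; softmax over the text
    # prompts gives zero-shot probabilities over the known classes.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).squeeze(0)

# Entropy of the zero-shot distribution: a high value suggests the sample
# does not fit any known class and may belong to an unknown class.
entropy = -(probs * probs.clamp_min(1e-12).log()).sum()

tau = 1.0  # hypothetical threshold; the paper instead optimizes entropy during training
if entropy > tau:
    print(f"entropy={entropy:.3f}: likely unknown class")
else:
    pred = known_classes[probs.argmax().item()]
    print(f"entropy={entropy:.3f}: predicted known class '{pred}'")
```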
Pages: 8
Related papers
50 records in total
  • [41] Adversarial Domain Adaptation With Dual Auxiliary Classifiers for Cross-Domain Open-Set Intelligent Fault Diagnosis
    Wang, Bo
    Zhang, Meng
    Xu, Hao
    Wang, Chao
    Yang, Wenglong
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73
  • [42] Distance-based Hyperspherical Classification for Multi-source Open-Set Domain Adaptation
    Bucci, Silvia
    Borlino, Francesco Cappio
    Caputo, Barbara
    Tommasi, Tatiana
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 1030 - 1039
  • [43] Toward domain adaptation with open-set target data: Review of theory and computer vision applications
    Ghaffari, Reyhane
    Sadegh Helfroush, Mohammad
    Khosravi, Abbas
    Kazemi, Kamran
    Danyali, Habibollah
    Rutkowski, Leszek
    INFORMATION FUSION, 2023, 100
  • [44] Open Set Domain Adaptation
    Busto, Pau Panareda
    Gall, Juergen
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 754 - 763
  • [45] EdgeFM: Leveraging Foundation Model for Open-set Learning on the Edge
    Yang, Bufang
    He, Lixing
    Ling, Neiwen
    Yan, Zhenyu
    Xing, Guoliang
    Shuai, Xian
    Ren, Xiaozhe
    Jiang, Xin
    PROCEEDINGS OF THE 21ST ACM CONFERENCE ON EMBEDDED NETWORKED SENSOR SYSTEMS, SENSYS 2023, 2023, : 111 - 124
  • [46] Multi-source Open-Set Image Classification Based on Deep Adversarial Domain Adaptation
    Zhang, Haitao
    Liu, Xinran
    Han, Qilong
    Lu, Dan
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT V, 2023, 14258 : 143 - 156
  • [47] Multiweight Adversarial Open-Set Domain Adaptation Network for Machinery Fault Diagnosis With Unknown Faults
    Wang, Rui
    Huang, Weiguo
    Shi, Mingkuan
    Ding, Chuancang
    Wang, Jun
    IEEE SENSORS JOURNAL, 2023, 23 (24) : 31483 - 31492
  • [48] A domain adaptation method based on interpolation and centroid representation for open-set fault diagnosis of bearing
    Bo, Lin
    Sun, Kong
    Wei, Daiping
    MEASUREMENT, 2023, 216
  • [49] Open-Set Black-Box Domain Adaptation for Remote Sensing Image Scene Classification
    Zhao, Xin
    Wang, Shengsheng
    Lin, Jun
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2023, 20
  • [50] OPEN-SET DOMAIN GENERALIZATION VIA METRIC LEARNING
    Katsumata, Kai
    Kishida, Ikki
    Amma, Ayako
    Nakayama, Hideki
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 459 - 463