Confidence-based Visual Dispersal for Few-shot Unsupervised Domain Adaptation

Cited by: 1
Authors
Xiong, Yizhe [1 ,2 ,3 ]
Chen, Hui [2 ]
Lin, Zijia [1 ]
Zhao, Sicheng [2 ]
Ding, Guiguang [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Sch Software, Beijing, Peoples R China
[2] Beijing Natl Res Ctr Informat Sci & Technol BNRist, Beijing, Peoples R China
[3] Hangzhou Zhuoxi Inst Brain & Intelligence, Hangzhou, Peoples R China
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
DOI
10.1109/ICCV51070.2023.01067
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised domain adaptation aims to transfer knowledge from a fully-labeled source domain to an unlabeled target domain. However, in real-world scenarios, providing abundant labeled data even in the source domain can be infeasible due to the difficulty and high expense of annotation. To address this issue, recent works consider the Few-shot Unsupervised Domain Adaptation (FUDA) where only a few source samples are labeled, and conduct knowledge transfer via self-supervised learning methods. Yet existing methods generally overlook that the sparse label setting hinders learning reliable source knowledge for transfer. Additionally, the differences in learning difficulty among target samples are ignored, leaving hard target samples poorly classified. To tackle both deficiencies, in this paper, we propose a novel Confidence-based Visual Dispersal Transfer learning method (C-VisDiT) for FUDA. Specifically, C-VisDiT consists of a cross-domain visual dispersal strategy that transfers only high-confidence source knowledge for model adaptation and an intra-domain visual dispersal strategy that guides the learning of hard target samples with easy ones. We conduct extensive experiments on Office-31, Office-Home, VisDA-C, and DomainNet benchmark datasets and the results demonstrate that the proposed C-VisDiT significantly outperforms state-of-the-art FUDA methods. Our code is available at https://github.com/Bostoncake/C-VisDiT.
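As a rough illustration of the confidence-based selection idea described in the abstract (not the authors' actual method; the helper name, threshold value, and weighting scheme below are assumptions made only for this sketch), one could rank source samples by classifier confidence and restrict the transfer loss to high-confidence predictions:

    import torch
    import torch.nn.functional as F

    def confidence_weights(logits: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
        # Hypothetical helper: gate samples by their top-1 softmax confidence.
        # C-VisDiT's actual dispersal strategies are more involved; this only
        # illustrates limiting knowledge transfer to confident predictions.
        probs = F.softmax(logits, dim=1)    # class probabilities
        conf, _ = probs.max(dim=1)          # top-1 confidence per sample
        return (conf >= threshold).float()  # 1.0 for confident samples, else 0.0

    # Usage sketch: down-weight low-confidence source samples in a transfer loss.
    logits = torch.randn(8, 31)             # e.g., 8 samples, 31 Office-31 classes
    labels = torch.randint(0, 31, (8,))
    weights = confidence_weights(logits)
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    transfer_loss = (weights * per_sample_loss).sum() / weights.sum().clamp(min=1.0)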
Pages: 11587 - 11597
Number of pages: 11
Related Papers
50 records in total
  • [1] Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation
    Zhang, Jiajin
    Chao, Hanqing
    Dhurandhar, Amit
    Chen, Pin-Yu
    Tajer, Ali
    Xu, Yangyang
    Yan, Pingkun
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT I, 2023, 14220 : 728 - 738
  • [2] Inductive Unsupervised Domain Adaptation for Few-Shot Classification via Clustering
    Cong, Xin
    Yu, Bowen
    Liu, Tingwen
    Cui, Shiyao
    Tang, Hengzhu
    Wang, Bin
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT II, 2021, 12458 : 624 - 639
  • [3] Few-Shot Adversarial Domain Adaptation
    Motiian, Saeid
    Jones, Quinn
    Iranmanesh, Seyed Mehdi
    Doretto, Gianfranco
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [4] Prompt-induced prototype alignment for few-shot unsupervised domain adaptation
    Li, Yongguang
    Long, Sifan
    Wang, Shengsheng
    Zhao, Xin
    Li, Yiyang
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 269
  • [5] Few-shot time-series anomaly detection with unsupervised domain adaptation
    Li, Hongbo
    Zheng, Wenli
    Tang, Feilong
    Zhu, Yanmin
    Huang, Jielong
    INFORMATION SCIENCES, 2023, 649
  • [6] Prototype-Augmented Contrastive Learning for Few-Shot Unsupervised Domain Adaptation
    Gong, Lu
    Zhang, Wen
    Li, Mingkang
    Zhang, Jiali
    Zhang, Zili
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT IV, KSEM 2023, 2023, 14120 : 197 - 210
  • [7] Marginalized Augmented Few-Shot Domain Adaptation
    Jing, Taotao
    Xia, Haifeng
    Hamm, Jihun
    Ding, Zhengming
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 35 (09) : 12459 - 12469
  • [8] Few-Shot Domain Adaptation with Polymorphic Transformers
    Li, Shaohua
    Sui, Xiuchao
    Fu, Jie
    Fu, Huazhu
    Luo, Xiangde
    Feng, Yangqin
    Xu, Xinxing
    Liu, Yong
    Ting, Daniel S. W.
    Goh, Rick Siow Mong
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT II, 2021, 12902 : 330 - 340
  • [9] ConfMix: Unsupervised Domain Adaptation for Object Detection via Confidence-based Mixing
    Mattolin, Giulio
    Zanella, Luca
    Ricci, Elisa
    Wang, Yiming
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 423 - 433
  • [10] High-Level Semantic Feature Matters Few-Shot Unsupervised Domain Adaptation
    Yu, Lei
    Yang, Wanqi
    Huang, Shengqi
    Wang, Lei
    Yang, Ming
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023, : 11025 - 11033