Class Relationship Embedded Learning for Source-Free Unsupervised Domain Adaptation

Cited by: 5
Authors
Zhang, Yixin [1 ,2 ]
Wang, Zilei [2 ]
He, Weinan [2 ]
Affiliations
[1] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei, Peoples R China
[2] Univ Sci & Technol China, Hefei, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
DOI
10.1109/CVPR52729.2023.00736
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This work focuses on a practical knowledge transfer task defined as Source-Free Unsupervised Domain Adaptation (SFUDA), where only a well-trained source model and unlabeled target data are available. To fully utilize source knowledge, we propose to transfer the class relationship, which is domain-invariant but still under-explored in previous works. To this end, we first regard the classifier weights of the source model as class prototypes to compute the class relationship, and then propose a novel probability-based similarity between target-domain samples that embeds the source-domain class relationship, resulting in the Class Relationship embedded Similarity (CRS). In particular, the inter-class terms are taken into account to represent the similarity between two samples more accurately, with the source prior on the class relationship used as their weights. Finally, we propose to embed CRS into contrastive learning in a unified form, where both class-aware and instance-discrimination contrastive losses are employed; the two are complementary to each other. We combine the proposed method with existing representative methods to evaluate its efficacy in multiple SFUDA settings. Extensive experimental results show that our method achieves state-of-the-art performance owing to the transfer of the domain-invariant class relationship.
Pages: 7619-7629
Number of pages: 11
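
The abstract above describes two components that lend themselves to a brief illustration: a class-relationship matrix derived from the source classifier weights (regarded as class prototypes) and a probability-based Class Relationship embedded Similarity (CRS) between target samples, whose inter-class terms are weighted by that source prior. The sketch below is only a rough rendering of that idea, not the paper's implementation; the function names, tensor shapes, the cosine-similarity prototype relationship, and the bilinear form over softmax predictions are all assumptions.

    import torch
    import torch.nn.functional as F

    def class_relationship(classifier_weight: torch.Tensor) -> torch.Tensor:
        # Pairwise cosine similarity between classifier weight vectors,
        # treated here as source-domain class prototypes (assumption).
        # classifier_weight: [C, D] -> returns [C, C].
        w = F.normalize(classifier_weight, dim=1)
        return w @ w.t()

    def crs_similarity(p_i: torch.Tensor, p_j: torch.Tensor, rel: torch.Tensor) -> torch.Tensor:
        # Hypothetical class-relationship-embedded similarity between paired
        # target samples, given softmax predictions p_i, p_j ([B, C]) and the
        # source class-relationship matrix rel ([C, C]). The bilinear form keeps
        # the intra-class term (diagonal of rel) and weights the inter-class
        # terms by the source prior, in the spirit of the abstract.
        return torch.einsum('bc,cd,bd->b', p_i, rel, p_j)

    if __name__ == "__main__":
        # Toy usage: all shapes and values are illustrative only.
        C, D, B = 12, 256, 4
        source_classifier_weight = torch.randn(C, D)   # stand-in for trained weights
        rel = class_relationship(source_classifier_weight)
        p1 = F.softmax(torch.randn(B, C), dim=1)       # predictions for one sample set
        p2 = F.softmax(torch.randn(B, C), dim=1)       # predictions for the paired set
        print(crs_similarity(p1, p2, rel))             # [B] similarity scores

In the full method such a similarity would feed the class-aware and instance-discrimination contrastive losses mentioned in the abstract; those losses are omitted from this sketch.
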
Related Papers
50 records in total
  • [31] Collaborative Learning of Diverse Experts for Source-free Universal Domain Adaptation
    Shen, Meng
    Lu, Yanzuo
    Hu, Yanxu
    Ma, Andy J.
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 2054 - 2065
  • [32] Unified multi-level neighbor clustering for Source-Free Unsupervised Domain Adaptation
    Xiao, Yuzhe
    Xiao, Guangyi
    Chen, Hao
    PATTERN RECOGNITION, 2024, 153
  • [33] Source-free domain adaptation with unrestricted source hypothesis
    He, Jiujun
    Wu, Liang
    Tao, Chaofan
    Lv, Fengmao
    PATTERN RECOGNITION, 2024, 149
  • [34] Adversarial Source Generation for Source-Free Domain Adaptation
    Cui, Chaoran
    Meng, Fan'an
    Zhang, Chunyun
    Liu, Ziyi
    Zhu, Lei
    Gong, Shuai
    Lin, Xue
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (06) : 4887 - 4898
  • [35] Enhancing and Adapting in the Clinic: Source-Free Unsupervised Domain Adaptation for Medical Image Enhancement
    Li, Heng
    Lin, Ziqin
    Qiu, Zhongxi
    Li, Zinan
    Niu, Ke
    Guo, Na
    Fu, Huazhu
    Hu, Yan
    Liu, Jiang
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43 (04) : 1323 - 1336
  • [36] Guiding Pseudo-labels with Uncertainty Estimation for Source-free Unsupervised Domain Adaptation
    Litrico, Mattia
    Del Bue, Alessio
    Morerio, Pietro
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 7640 - 7650
  • [37] SSDA: Secure Source-Free Domain Adaptation
    Ahmed, Sabbir
    Al Arafat, Abdullah
    Rizve, Mamshad Nayeem
    Hossain, Rahim
    Guo, Zhishan
    Rakin, Adnan Siraj
    PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, 2023, : 19123 - 19133
  • [38] Robust self-supervised learning for source-free domain adaptation
    Tian, Liang
    Zhou, Lihua
    Zhang, Hao
    Wang, Zhenbin
    Ye, Mao
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (05) : 2405 - 2413
  • [39] Source-free domain adaptation for image segmentation
    Bateson, Mathilde
    Kervadec, Hoel
    Dolz, Jose
    Lombaert, Herve
    Ben Ayed, Ismail
    MEDICAL IMAGE ANALYSIS, 2022, 82
  • [40] USDAP: universal source-free domain adaptation based on prompt learning
    Shao, Xun
    Shao, Mingwen
    Chen, Sijie
    Liu, Yuanyuan
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (05)