Domain-Specificity Inducing Transformers for Source-Free Domain Adaptation

Cited: 0
Authors
Sanyal, Sunandini [1 ]
Asokan, Ashish Ramayee [1 ]
Bhambri, Suvaansh [1 ]
Kulkarni, Akshay [1 ]
Kundu, Jogendra Nath [1 ]
Babu, R. Venkatesh [1 ]
Affiliations
[1] Indian Inst Sci, Vis & AI Lab, Bengaluru, India
DOI
10.1109/ICCV51070.2023.01735
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Conventional Domain Adaptation (DA) methods aim to learn domain-invariant feature representations to improve target adaptation performance. However, we argue that domain-specificity is equally important, since in-domain trained models hold crucial domain-specific properties that are beneficial for adaptation. Hence, we propose to build a framework that supports disentanglement and learning of domain-specific factors and task-specific factors in a unified model. Motivated by the success of vision transformers in several multi-modal vision problems, we find that queries can be leveraged to extract the domain-specific factors. We therefore propose a novel Domain-Specificity inducing Transformer (DSiT) framework for disentangling and learning both domain-specific and task-specific factors. To achieve disentanglement, we propose to construct novel Domain-Representative Inputs (DRI) with domain-specific information to train a domain classifier with a novel domain token. We are the first to utilize vision transformers for domain adaptation in a privacy-oriented source-free setting, and our approach achieves state-of-the-art performance on single-source, multi-source, and multi-target benchmarks.
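The abstract's central idea of a "domain token" trained against a domain classifier can be illustrated with a minimal sketch: prepend a learnable token to the patch-embedding sequence (analogous to a ViT class token), run self-attention, and classify the domain from that token's output. All names, shapes, and the single-head attention here are illustrative assumptions, not the paper's DSiT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_patches, n_domains = 16, 8, 3  # toy sizes (assumptions)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product attention over the token sequence.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(d))
    return A @ V

patch_emb = rng.normal(size=(n_patches, d))        # patch embeddings of one image
domain_token = rng.normal(size=(1, d))             # learnable domain token
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
W_cls = rng.normal(size=(d, n_domains)) * 0.1      # domain-classifier head

# Prepend the domain token so attention lets it gather
# domain-specific information from all patches.
tokens = np.concatenate([domain_token, patch_emb], axis=0)  # (1 + n_patches, d)
out = self_attention(tokens, Wq, Wk, Wv)

# Classify the domain from the domain token's output embedding.
domain_logits = out[0] @ W_cls
domain_probs = softmax(domain_logits)
print(domain_probs.shape)  # (3,)
```

In training, the cross-entropy loss on `domain_probs` would be backpropagated to the token and classifier weights, encouraging the token to absorb domain-specific factors while the task head remains domain-agnostic.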
Pages: 18882-18891
Page count: 10
Related Papers (50 total; first 10 shown)
  • [1] Generalized Source-free Domain Adaptation
    Yang, Shiqi
    Wang, Yaxing
    van de Weijer, Joost
    Herranz, Luis
    Jui, Shangling
    [J]. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 8958 - 8967
  • [2] Universal Source-Free Domain Adaptation
    Kundu, Jogendra Nath
    Venkat, Naveen
    Rahul, M. V.
    Babu, R. Venkatesh
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 4543 - 4552
  • [3] Imbalanced Source-free Domain Adaptation
    Li, Xinhao
    Li, Jingjing
    Zhu, Lei
    Wang, Guoqing
    Huang, Zi
    [J]. PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 3330 - 3339
  • [4] Source bias reduction for source-free domain adaptation
    Tian, Liang
    Ye, Mao
    Zhou, Lihua
    Wang, Zhenbin
    [J]. SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (SUPPL 1) : 883 - 893
  • [5] Continual Source-Free Unsupervised Domain Adaptation
    Ahmed, Waqar
    Morerio, Pietro
    Murino, Vittorio
    [J]. IMAGE ANALYSIS AND PROCESSING, ICIAP 2023, PT I, 2023, 14233 : 14 - 25
  • [6] Source-free unsupervised domain adaptation: A survey
    Fang, Yuqi
    Yap, Pew-Thian
    Lin, Weili
    Zhu, Hongtu
    Liu, Mingxia
    [J]. NEURAL NETWORKS, 2024, 174
  • [7] Source-free domain adaptation with unrestricted source hypothesis
    He, Jiujun
    Wu, Liang
    Tao, Chaofan
    Lv, Fengmao
    [J]. PATTERN RECOGNITION, 2024, 149
  • [8] SSDA: Secure Source-Free Domain Adaptation
    Ahmed, Sabbir
    Al Arafat, Abdullah
    Rizve, Mamshad Nayeem
    Hossain, Rahim
    Guo, Zhishan
    Rakin, Adnan Siraj
    [J]. PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, 2023, : 19123 - 19133
  • [9] Adversarial Source Generation for Source-Free Domain Adaptation
    Cui, Chaoran
    Meng, Fan'an
    Zhang, Chunyun
    Liu, Ziyi
    Zhu, Lei
    Gong, Shuai
    Lin, Xue
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (06) : 4887 - 4898