Unsupervised Domain Adaptation via Bidirectional Cross-Attention Transformer

Cited: 0
Authors
Wang, Xiyu [1 ,2 ]
Guo, Pengxin [1 ]
Zhang, Yu [1 ,3 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen, Peoples R China
[2] Univ Sydney, Sch Comp Sci, Fac Engn, Camperdown, NSW, Australia
[3] Peng Cheng Lab, Shenzhen, Peoples R China
Keywords
Unsupervised Domain Adaptation; Transformer; Cross-Attention;
DOI
10.1007/978-3-031-43424-2_19
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised Domain Adaptation (UDA) seeks to utilize the knowledge acquired from a source domain, abundant in labeled data, and apply it to a target domain that contains only unlabeled data. The majority of existing UDA research focuses on learning domain-invariant feature representations for both domains by minimizing the domain gap using convolution-based neural networks. Recently, vision transformers have made significant strides in enhancing performance across various visual tasks. In this paper, we introduce a Bidirectional Cross-Attention Transformer (BCAT) for UDA, which is built upon vision transformers with the goal of improving performance. The proposed BCAT employs an attention mechanism to extract implicit source and target mixup feature representations, thereby reducing the domain discrepancy. More specifically, BCAT is designed as a weight-sharing quadruple-branch transformer with a bidirectional cross-attention mechanism, allowing it to learn domain-invariant feature representations. Comprehensive experiments indicate that our proposed BCAT model outperforms existing state-of-the-art UDA methods, both convolution-based and transformer-based, on four benchmark datasets.
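The bidirectional cross-attention mechanism described in the abstract can be sketched as follows. This is an illustrative single-head NumPy implementation, not the authors' code: queries from one domain attend to keys and values from the other domain, in both directions, with shared projection matrices standing in for BCAT's weight-sharing design. All names (`cross_attention`, `Wq`, `Wk`, `Wv`) are hypothetical, and details such as the quadruple-branch layout and multi-head structure are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, Wq, Wk, Wv):
    """Single-head cross-attention: queries come from one domain,
    keys and values from the other."""
    Q = q_feats @ Wq
    K = kv_feats @ Wk
    V = kv_feats @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # scaled dot-product
    return softmax(scores, axis=-1) @ V

def bidirectional_cross_attention(Xs, Xt, Wq, Wk, Wv):
    # Source tokens attend to target tokens, and vice versa,
    # reusing the same projections in both directions (weight sharing).
    s2t = cross_attention(Xs, Xt, Wq, Wk, Wv)
    t2s = cross_attention(Xt, Xs, Wq, Wk, Wv)
    return s2t, t2s

rng = np.random.default_rng(0)
d = 8
Xs = rng.standard_normal((4, d))   # 4 source-domain tokens
Xt = rng.standard_normal((5, d))   # 5 target-domain tokens
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
s2t, t2s = bidirectional_cross_attention(Xs, Xt, Wq, Wk, Wv)
print(s2t.shape, t2s.shape)  # (4, 8) (5, 8)
```

Each output row is a convex combination of the other domain's value vectors, which is why the paper can describe the result as an implicit source–target mixup representation.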
Pages: 309-325
Page count: 17
Related Papers
50 in total
  • [1] Unsupervised Domain Adaptive Dose Prediction Via Cross-Attention Transformer and Target-Specific Knowledge Preservation
    Cui, Jiaqi
    Xiao, Jianghong
    Hou, Yun
    Wu, Xi
    Zhou, Jiliu
    Peng, Xingchen
    Wang, Yan
    [J]. INTERNATIONAL JOURNAL OF NEURAL SYSTEMS, 2023, 33 (11)
  • [2] Bidirectional feature enhancement transformer for unsupervised domain adaptation
    Hao, Zhiwei
    Wang, Shengsheng
    Long, Sifan
    Li, Yiyang
    Chai, Hao
    [J]. VISUAL COMPUTER, 2024, 40 (09): 6261-6277
  • [3] Towards Unsupervised Domain Adaptation via Domain-Transformer
    Ren, Chuan-Xian
    Zhai, Yiming
    Luo, You-Wei
    Yan, Hong
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024
  • [4] Cross-Attention Transformer for Video Interpolation
    Kim, Hannah Halin
    Yu, Shuzhi
    Yuan, Shuai
    Tomasi, Carlo
    [J]. COMPUTER VISION - ACCV 2022 WORKSHOPS, 2023, 13848: 325-342
  • [5] CAT-DTI: cross-attention and Transformer network with domain adaptation for drug-target interaction prediction
    Zeng, Xiaoting
    Chen, Weilin
    Lei, Baiying
    [J]. BMC BIOINFORMATICS, 2024, 25 (01)
  • [6] SCATT: Transformer tracking with symmetric cross-attention
    Zhang, Jianming
    Chen, Wentao
    Dai, Jiangxin
    Zhang, Jin
    [J]. APPLIED INTELLIGENCE, 2024, 54 (08): 6069-6084
  • [7] Deblurring transformer tracking with conditional cross-attention
    Sun, Fuming
    Zhao, Tingting
    Zhu, Bing
    Jia, Xu
    Wang, Fasheng
    [J]. MULTIMEDIA SYSTEMS, 2023, 29 (03): 1131-1144
  • [9] Learning cross-domain representations by vision transformer for unsupervised domain adaptation
    Ye, Yifan
    Fu, Shuai
    Chen, Jing
    [J]. NEURAL COMPUTING AND APPLICATIONS, 2023, 35 (15): 10847-10860
  • [10] Cross-Domain Urban Land Use Classification via Scenewise Unsupervised Multisource Domain Adaptation With Transformer
    Li, Mengmeng
    Zhang, Congcong
    Zhao, Wufan
    Zhou, Wen
    [J]. IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17: 10051-10066