DACS: Domain Adaptation via Cross-domain Mixed Sampling

Cited by: 161
Authors
Tranheden, Wilhelm [1 ,2 ]
Olsson, Viktor [1 ,2 ]
Pinto, Juliano [1 ]
Svensson, Lennart [1 ]
Affiliations
[1] Chalmers Univ Technol, Gothenburg, Sweden
[2] Volvo Cars, Gothenburg, Sweden
DOI
10.1109/WACV48630.2021.00142
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Semantic segmentation models based on convolutional neural networks have recently displayed remarkable performance for a multitude of applications. However, these models typically do not generalize well when applied to new domains, especially when going from synthetic to real data. In this paper we address the problem of unsupervised domain adaptation (UDA), which attempts to train on labelled data from one domain (the source domain) while simultaneously learning from unlabelled data in the domain of interest (the target domain). Existing methods have seen success by training on pseudo-labels for these unlabelled images. Multiple techniques have been proposed to mitigate the low-quality pseudo-labels arising from the domain shift, with varying degrees of success. We propose DACS: Domain Adaptation via Cross-domain mixed Sampling, which mixes images from the two domains along with the corresponding labels and pseudo-labels. The network is then trained on these mixed samples in addition to the labelled source data. We demonstrate the effectiveness of our solution by achieving state-of-the-art results for GTA5 to Cityscapes, a common synthetic-to-real semantic segmentation benchmark for UDA.
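The cross-domain mixing described in the abstract can be sketched roughly as follows. This is a minimal illustration, assuming a ClassMix-style selection of half the classes from the source label map; the function and variable names are hypothetical and not taken from the paper or its code release:

```python
import numpy as np

def dacs_mix(src_img, src_lbl, tgt_img, tgt_pseudo, rng=None):
    """Mix one source and one target sample, ClassMix-style:
    half of the classes present in the source label map are
    selected, and their pixels (image and label alike) are
    pasted onto the target image and its pseudo-label map."""
    if rng is None:
        rng = np.random.default_rng()
    classes = np.unique(src_lbl)
    # pick half of the source classes (at least one)
    chosen = rng.choice(classes, size=max(1, len(classes) // 2), replace=False)
    mask = np.isin(src_lbl, chosen)            # H x W boolean paste mask
    mixed_img = np.where(mask[..., None], src_img, tgt_img)
    mixed_lbl = np.where(mask, src_lbl, tgt_pseudo)
    return mixed_img, mixed_lbl
```

In a training loop, `tgt_pseudo` would come from the network's own predictions on the target image, and the resulting `(mixed_img, mixed_lbl)` pairs would be trained on alongside the ordinary labelled source batch.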
Pages: 1378-1388
Page count: 11
Related Papers
50 items total
  • [31] Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain Adaptation. Du, Zhekai; Li, Jingjing; Su, Hongzu; Zhu, Lei; Lu, Ke. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021: 3936-3945
  • [32] Cross-Domain Graph Convolutions for Adversarial Unsupervised Domain Adaptation. Zhu, Ronghang; Jiang, Xiaodong; Lu, Jiasen; Li, Sheng. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34 (08): 3847-3858
  • [33] Cross-Domain Few-Shot Relation Extraction via Representation Learning and Domain Adaptation. Yuan, Zhongju; Wang, Zhenkun; Li, Genghui. 2023 International Joint Conference on Neural Networks (IJCNN), 2023
  • [34] Cross-Domain Human Parsing via Adversarial Feature and Label Adaptation. Liu, Si; Sun, Yao; Zhu, Defa; Ren, Guanghui; Chen, Yu; Feng, Jiashi; Han, Jizhong. Thirty-Second AAAI Conference on Artificial Intelligence / Thirtieth Innovative Applications of Artificial Intelligence Conference / Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, 2018: 7146-7153
  • [35] Cross-domain Semantic Feature Learning via Adversarial Adaptation Networks. Li, Rui; Cao, Wenming; Qian, Sheng; Wong, Hau-San; Wu, Si. 2018 24th International Conference on Pattern Recognition (ICPR), 2018: 37-42
  • [36] ACDC: Online unsupervised cross-domain adaptation. de Carvalho, Marcus; Pratama, Mahardhika; Zhang, Jie; Yee, Edward Yapp Kien. Knowledge-Based Systems, 2022, 253
  • [37] Cross-domain policy adaptation with dynamics alignment. Gui, Haiyuan; Pang, Shanchen; Yu, Shihang; Qiao, Sibo; Qi, Yufeng; He, Xiao; Wang, Min; Zhai, Xue. Neural Networks, 2023, 167: 104-117
  • [38] Cross-Domain Adaptation for Animal Pose Estimation. Cao, Jinkun; Tang, Hongyang; Fang, Hao-Shu; Shen, Xiaoyong; Lu, Cewu; Tai, Yu-Wing. 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019: 9497-9506
  • [39] Robustness via Cross-Domain Ensembles. Yeo, Teresa; Kar, Oguzhan Fatih; Zamir, Amir. 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021: 12169-12179
  • [40] Cross-domain damage identification based on conditional adversarial domain adaptation. Li, Zuoqiang; Weng, Shun; Xia, Yong; Yu, Hong; Yan, Yongyi; Yin, Pengcheng. Engineering Structures, 2024, 321