A multi-grained unsupervised domain adaptation approach for semantic segmentation

Cited: 4
Authors
Li, Luyang [1 ]
Ma, Tai [2 ]
Lu, Yue [2 ]
Li, Qingli [2 ]
He, Lianghua [3 ]
Wen, Ying [2 ]
Affiliations
[1] East China Normal Univ, Sch Comp Sci & Technol, Shanghai, Peoples R China
[2] East China Normal Univ, Sch Commun & Elect Engn, Shanghai, Peoples R China
[3] Donghua Univ, Sch Comp Sci & Technol, Shanghai, Peoples R China
Keywords
Domain adaptation; Unsupervised semantic segmentation; Neural network;
DOI
10.1016/j.patcog.2023.109841
CLC classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
When knowledge is transferred between datasets, domain mismatch severely degrades a model's performance, and domain adaptation has been proposed to address this problem. Traditional methods that focus on either global or local alignment alone offer limited improvement. In this paper, we propose a multi-grained unsupervised domain adaptation approach (Muda) for semantic segmentation. Muda enforces multi-grained semantic consistency between domains by aligning them at both the global and the category level. Specifically, coarse-grained adaptation applies global adversarial learning to an image translation model and a main segmentation model, which respectively eliminate appearance differences between domains and encourage similar segmentation maps for the two domains, while fine-grained adaptation employs an auxiliary model to adapt category-level information and refine the pseudo labels of the target data. Experiments and ablation studies on two synthetic-to-real benchmarks, GTA5 → Cityscapes and SYNTHIA → Cityscapes, show that our model outperforms state-of-the-art methods.
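The abstract describes two interacting alignment levels: coarse-grained global adversarial learning over segmentation outputs (alongside an image translation model) and fine-grained refinement of target pseudo labels by an auxiliary model. Below is a minimal PyTorch sketch of those two ideas only; it is not the authors' implementation. The tiny networks, the 0.01 adversarial weight, the agreement-plus-confidence rule in refine_pseudo_labels, and the 0.9 threshold are illustrative assumptions, and the image translation stage is omitted.

```python
# Minimal sketch of coarse-grained (global adversarial) and fine-grained
# (pseudo-label refinement) adaptation. All module names, loss weights, and
# thresholds are illustrative assumptions, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 19  # e.g. Cityscapes

class TinySegNet(nn.Module):
    """Stand-in segmentation network (the paper uses a larger backbone)."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        return self.classifier(self.features(x))

class OutputDiscriminator(nn.Module):
    """Discriminator over softmax segmentation maps for global alignment."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def coarse_grained_losses(seg, disc, src_img, src_lbl, tgt_img):
    """Supervised source loss plus an adversarial loss that pushes target
    predictions toward the source output distribution (global alignment)."""
    src_logits = seg(src_img)
    tgt_logits = seg(tgt_img)
    sup_loss = F.cross_entropy(src_logits, src_lbl, ignore_index=255)
    # Fool the discriminator: target outputs should look source-like.
    d_tgt = disc(F.softmax(tgt_logits, dim=1))
    adv_loss = F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt))
    return sup_loss, adv_loss, tgt_logits

def refine_pseudo_labels(main_logits, aux_logits, threshold=0.9):
    """Fine-grained step (sketch): keep a target pixel's pseudo label only
    where the main and auxiliary models agree and the main model is confident."""
    main_prob, main_pred = F.softmax(main_logits, dim=1).max(dim=1)
    aux_pred = aux_logits.argmax(dim=1)
    keep = (main_pred == aux_pred) & (main_prob > threshold)
    return torch.where(keep, main_pred, torch.full_like(main_pred, 255))  # 255 = ignore

if __name__ == "__main__":
    seg, aux, disc = TinySegNet(), TinySegNet(), OutputDiscriminator()
    src_img = torch.randn(2, 3, 64, 64)
    src_lbl = torch.randint(0, NUM_CLASSES, (2, 64, 64))
    tgt_img = torch.randn(2, 3, 64, 64)

    sup, adv, tgt_logits = coarse_grained_losses(seg, disc, src_img, src_lbl, tgt_img)
    pseudo = refine_pseudo_labels(tgt_logits, aux(tgt_img))
    total = sup + 0.01 * adv  # adversarial weight is an assumption
    if (pseudo != 255).any():  # guard: untrained nets may reject every pixel
        total = total + F.cross_entropy(tgt_logits, pseudo, ignore_index=255)
    total.backward()
```

In a full training loop the discriminator would be updated in alternation with the segmentation network, and the pseudo labels would be regenerated periodically as the auxiliary model adapts to the target domain.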
Pages: 8
Related papers
50 records in total
  • [1] Multi-modal unsupervised domain adaptation for semantic image segmentation
    Hu, Sijie
    Bonardi, Fabien
    Bouchafa, Samia
    Sidibe, Desire
    [J]. PATTERN RECOGNITION, 2023, 137
  • [2] Unsupervised Domain Adaptation in Semantic Segmentation: A Review
    Toldo, Marco
    Maracani, Andrea
    Michieli, Umberto
    Zanuttigh, Pietro
    [J]. TECHNOLOGIES, 2020, 8 (02)
  • [3] Multichannel Semantic Segmentation with Unsupervised Domain Adaptation
    Watanabe, Kohei
    Saito, Kuniaki
    Ushiku, Yoshitaka
    Harada, Tatsuya
    [J]. COMPUTER VISION - ECCV 2018 WORKSHOPS, PT V, 2019, 11133 : 600 - 616
  • [4] Geometric Unsupervised Domain Adaptation for Semantic Segmentation
    Guizilini, Vitor
    Li, Jie
    Ambrus, Rares
    Gaidon, Adrien
    [J]. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 8517 - 8527
  • [5] Unsupervised Domain Adaptation for Referring Semantic Segmentation
    Shi, Haonan
    Pan, Wenwen
    Zhao, Zhou
    Zhang, Mingmin
    Wu, Fei
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 5807 - 5818
  • [6] Rethinking unsupervised domain adaptation for semantic segmentation
    Wang, Zhijie
    Suganuma, Masanori
    Okatani, Takayuki
    [J]. PATTERN RECOGNITION LETTERS, 2024, 186 : 119 - 125
  • [7] A Fine-Grained Unsupervised Domain Adaptation Framework for Semantic Segmentation of Remote Sensing Images
    Wang, Luhan
    Xiao, Pengfeng
    Zhang, Xueliang
    Chen, Xinyang
    [J]. IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2023, 16 : 4109 - 4121
  • [8] Multi-Head Distillation for Continual Unsupervised Domain Adaptation in Semantic Segmentation
    Saporta, Antoine
    Douillard, Arthur
    Vu, Tuan-Hung
    Perez, Patrick
    Cord, Matthieu
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 3750 - 3759
  • [9] Unsupervised Adversarial Domain Adaptation Network for Semantic Segmentation
    Liu, Wei
    Su, Fulin
    [J]. IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2020, 17 (11) : 1978 - 1982
  • [10] Towards Unsupervised Online Domain Adaptation for Semantic Segmentation
    Kuznietsov, Yevhen
    Proesmans, Marc
    Van Gool, Luc
    [J]. 2022 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW 2022), 2022, : 261 - 271