Bidirectional Self-Training with Multiple Anisotropic Prototypes for Domain Adaptive Semantic Segmentation

Cited by: 9
Authors
Lu, Yulei [1 ]
Luo, Yawei [1 ]
Zhang, Li [2 ]
Li, Zheyang [3 ]
Yang, Yi [1 ]
Xiao, Jun [1 ]
Affiliations
[1] Zhejiang University, Hangzhou, China
[2] Zhejiang Insigma Digital Technology Co., Ltd., Hangzhou, China
[3] Hikvision Research Institute, Hangzhou, China
Funding
National Natural Science Foundation of China; Natural Science Foundation of Zhejiang Province;
Keywords
Semantic Segmentation; Unsupervised Domain Adaptation; Gaussian Mixture Model; Self-training;
DOI
10.1145/3503161.3548225
Chinese Library Classification
TP39 [Computer Applications];
Discipline Classification Code
081203 ; 0835 ;
Abstract
A thriving trend in domain adaptive segmentation is to generate high-quality pseudo labels for the target domain and retrain the segmentor on them. Under this self-training paradigm, some competitive methods have resorted to latent-space information, establishing the feature centroids (a.k.a. prototypes) of the semantic classes and determining the pseudo-label candidates by their distances from these centroids. In this paper, we argue that the latent space contains more information to be exploited, and we take one step further to capitalize on it. Firstly, instead of merely using the source-domain prototypes to determine the target pseudo labels, as most traditional methods do, we bidirectionally produce target-domain prototypes to degrade those source features which might be too hard or disturbed for the adaptation. Secondly, existing attempts simply model each category as a single, isotropic prototype while ignoring the variance of the feature distribution, which can lead to confusion between similar categories. To cope with this issue, we propose to represent each category with multiple anisotropic prototypes via a Gaussian Mixture Model, in order to fit the de facto distribution of the source domain and estimate the likelihood of target samples based on the probability density. We apply our method to the GTA5->Cityscapes and Synthia->Cityscapes tasks and achieve 61.2% and 62.8% mean IoU respectively, substantially outperforming other competitive self-training methods. Notably, on categories which severely suffer from categorical confusion, such as "truck" and "bus", our method achieves 56.4% and 68.8% respectively, which further demonstrates the effectiveness of our design. The code and model are available at https://github.com/luyvlei/BiSMAPs.
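To make the two ideas in the abstract concrete, the following is a minimal, illustrative sketch rather than the authors' released code (see the linked repository for the official implementation). It assumes already-extracted pixel features of a made-up dimension FEAT_DIM, fits one full-covariance (multi-component, anisotropic) Gaussian Mixture Model per class on source features, assigns target pseudo labels by the highest class log-likelihood, and, in the reverse direction, uses target-domain class centroids to down-weight source features that lie far from the target distribution. All function names, thresholds, and shapes here are assumptions for illustration only.

```python
"""Illustrative sketch of GMM-based anisotropic prototypes and bidirectional
prototype weighting for self-training (assumed shapes and thresholds; not the
official BiSMAPs implementation)."""
import numpy as np
from sklearn.mixture import GaussianMixture

NUM_CLASSES = 19          # Cityscapes label set
FEAT_DIM = 16             # assumed feature dimension after projection
COMPONENTS_PER_CLASS = 3  # "multiple anisotropic prototypes" per class


def fit_class_gmms(src_feats, src_labels):
    """Fit one full-covariance GMM per class on source-domain features.

    src_feats : (N, FEAT_DIM) array of pixel/region features.
    src_labels: (N,) array of ground-truth class ids.
    """
    gmms = {}
    for c in range(NUM_CLASSES):
        feats_c = src_feats[src_labels == c]
        if len(feats_c) < 10 * COMPONENTS_PER_CLASS:
            continue  # too few samples to fit a stable mixture for this class
        gmms[c] = GaussianMixture(
            n_components=COMPONENTS_PER_CLASS,
            covariance_type="full",  # anisotropic: full covariance per component
            reg_covar=1e-3,
        ).fit(feats_c)
    return gmms


def gmm_pseudo_labels(gmms, tgt_feats, margin=0.0):
    """Label each target feature with the class of highest GMM log-density;
    samples whose best score does not beat the runner-up by `margin`
    are marked ignore (-1), mimicking a confidence-based selection step."""
    n = len(tgt_feats)
    scores = np.full((n, NUM_CLASSES), -np.inf)
    for c, gmm in gmms.items():
        scores[:, c] = gmm.score_samples(tgt_feats)  # per-sample log-likelihood
    order = np.argsort(scores, axis=1)
    best, second = order[:, -1], order[:, -2]
    gap = scores[np.arange(n), best] - scores[np.arange(n), second]
    return np.where(gap > margin, best, -1)


def source_weights_from_target_prototypes(src_feats, src_labels, tgt_feats, tgt_pseudo):
    """Reverse direction: weight each source sample by cosine similarity to the
    centroid of its class computed on pseudo-labelled target features, so that
    source pixels far from the target distribution contribute less."""
    weights = np.ones(len(src_feats))
    for c in range(NUM_CLASSES):
        tgt_c = tgt_feats[tgt_pseudo == c]
        mask = src_labels == c
        if len(tgt_c) == 0 or not mask.any():
            continue
        proto = tgt_c.mean(axis=0)
        sim = src_feats[mask] @ proto / (
            np.linalg.norm(src_feats[mask], axis=1) * np.linalg.norm(proto) + 1e-8)
        weights[mask] = np.clip(sim, 0.0, 1.0)  # hard/disturbed source pixels get small weight
    return weights


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src_x = rng.normal(size=(4000, FEAT_DIM))
    src_y = rng.integers(0, NUM_CLASSES, 4000)
    tgt_x = rng.normal(size=(1500, FEAT_DIM))
    gmms = fit_class_gmms(src_x, src_y)
    pseudo = gmm_pseudo_labels(gmms, tgt_x)
    w = source_weights_from_target_prototypes(src_x, src_y, tgt_x, pseudo)
    print("pseudo-labelled:", int((pseudo >= 0).sum()),
          "mean source weight:", round(float(w.mean()), 3))
```

In a real pipeline the features would come from the segmentation network's decoder, the per-class GMMs would be refit periodically during self-training, and the source weights would rescale the source segmentation loss; the sketch only conveys the selection and weighting logic described in the abstract.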
Pages: 1405-1415
Page count: 11
Related Papers
50 in total
  • [1] Adversarial Self-Training with Domain Mask for Semantic Segmentation
    Hsin, Hsien-Kai
    Chiu, Hsiao-Chien
    Lin, Chun-Chen
    Chen, Chih-Wei
    Tsung, Pei-Kuei
    [J]. 2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2019, : 3689 - 3695
  • [2] Combining Semantic Self-Supervision and Self-Training for Domain Adaptation in Semantic Segmentation
    Niemeijer, Joshua
    Schaefer, Joerg P.
    [J]. 2021 IEEE INTELLIGENT VEHICLES SYMPOSIUM WORKSHOPS (IV WORKSHOPS), 2021, : 364 - 371
  • [3] Pseudo Features-Guided Self-Training for Domain Adaptive Semantic Segmentation of Satellite Images
    Zhang, Fahong
    Shi, Yilei
    Xiong, Zhitong
    Huang, Wei
    Zhu, Xiao Xiang
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [4] Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation
    Marsden, Robert A.
    Bartler, Alexander
    Doebler, Mario
    Yang, Bin
    [J]. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [5] Cooperative Self-Training for Multi-Target Adaptive Semantic Segmentation
    Zhang, Yangsong
    Roy, Subhankar
    Lu, Hongtao
    Ricci, Elisa
    Lathuiliere, Stephane
    [J]. 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 5593 - 5602
  • [6] Unsupervised Domain Adaptation with Multiple Domain Discriminators and Adaptive Self-Training
    Spadotto, Teo
    Toldo, Marco
    Michieli, Umberto
    Zanuttigh, Pietro
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 2845 - 2852
  • [7] Bidirectional Domain Mixup for Domain Adaptive Semantic Segmentation
    Kim, Daehan
    Seo, Minseok
    Park, Kwanyong
    Shin, Inkyu
    Woo, Sanghyun
    Kweon, In-So
    Choi, Dong-Geol
    [J]. THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 1, 2023, : 1114 - 1123
  • [8] Domain Adaptive Semantic Segmentation via Entropy-Ranking and Uncertain Learning-Based Self-Training
    Peng, Chengli
    Ma, Jiayi
    [J]. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2022, 9 (08) : 1524 - 1527
  • [9] Self-Training for Class-Incremental Semantic Segmentation
    Yu, Lu
    Liu, Xialei
    van de Weijer, Joost
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (11) : 9116 - 9127