LARGE SCALE UNSUPERVISED DOMAIN ADAPTATION OF SEGMENTATION NETWORKS WITH ADVERSARIAL LEARNING

Cited by: 0
Authors
Deng, Xueqing [1 ,2 ]
Yang, Hsiuhan Lexie [2 ]
Makkar, Nikhil [2 ]
Lunga, Dalton [2 ]
Affiliations
[1] Univ Calif, Merced, CA 95343 USA
[2] Oak Ridge Natl Lab, Natl Secur Sci Directorate, Oak Ridge, TN 37831 USA
Keywords
large scale mapping; adversarial learning; domain adaptation;
DOI
10.1109/igarss.2019.8900277
Chinese Library Classification (CLC)
P [Astronomy, Earth Sciences];
Discipline Code
07;
Abstract
Most current state-of-the-art methods for semantic segmentation of remote sensing imagery require large amounts of labeled data, which are scarcely available. Due to the distribution shift inherent in remote sensing imagery, reusing pre-trained models on new areas of interest rarely yields satisfactory results. In this paper, we approach this problem from an adversarial learning perspective toward unsupervised domain adaptation. The core concept is to combine fully convolutional neural networks with adversarial networks for semantic segmentation, assuming that the scene structures and objects of interest are similar in the two sets of images. Models are trained on a source dataset where ground truth is available and adapted to a new target dataset iteratively via an adversarial loss on unlabeled samples. We validate the framework on two real-world large-scale datasets: 1) cross-city road extraction and 2) cross-country building extraction. The preliminary results show the usefulness of adversarial learning for indirect reuse of pre-trained models, and experimental validation suggests significant benefits over models without adaptation.
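The abstract describes a segmentation network trained with supervision on source data while a discriminator, fed the network's prediction maps, drives an adversarial loss on unlabeled target data. Below is a minimal PyTorch-style sketch of such a training step; it is an illustration under assumptions, not the authors' released code, and all network definitions, tensor shapes, and the lambda_adv weight are hypothetical placeholders.

```python
# Sketch (assumption, not the paper's code): adversarial unsupervised domain adaptation
# for segmentation. The segmentation net is trained with cross-entropy on labeled source
# images; a discriminator tries to tell source prediction maps from target prediction
# maps, and the segmentation net is also updated to fool it on unlabeled target images.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 2  # e.g. background vs. road/building (placeholder)

# Toy fully convolutional segmentation network (stand-in for the pre-trained FCN).
seg_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, 1),
)

# Fully convolutional discriminator over softmax prediction maps (source vs. target).
disc = nn.Sequential(
    nn.Conv2d(NUM_CLASSES, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),  # per-patch source/target logits
)

opt_seg = torch.optim.Adam(seg_net.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lambda_adv = 0.01  # weight of the adversarial term (illustrative value)

def adaptation_step(src_img, src_label, tgt_img):
    """One iteration: supervised loss on source + adversarial loss on target."""
    # --- update segmentation network ---
    opt_seg.zero_grad()
    src_pred = seg_net(src_img)
    seg_loss = F.cross_entropy(src_pred, src_label)

    tgt_prob = F.softmax(seg_net(tgt_img), dim=1)
    # Fool the discriminator: make target prediction maps look like source ones (label 1).
    d_out_tgt = disc(tgt_prob)
    adv_loss = bce(d_out_tgt, torch.ones_like(d_out_tgt))

    (seg_loss + lambda_adv * adv_loss).backward()
    opt_seg.step()

    # --- update discriminator: source maps labeled 1, target maps labeled 0 ---
    opt_disc.zero_grad()
    d_src = disc(F.softmax(src_pred.detach(), dim=1))
    d_tgt = disc(tgt_prob.detach())
    disc_loss = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    disc_loss.backward()
    opt_disc.step()
    return seg_loss.item(), adv_loss.item(), disc_loss.item()

# Smoke test with random tensors standing in for source/target image patches.
src_img = torch.randn(2, 3, 64, 64)
src_label = torch.randint(0, NUM_CLASSES, (2, 64, 64))
tgt_img = torch.randn(2, 3, 64, 64)
print(adaptation_step(src_img, src_label, tgt_img))
```

In this sketch the target images need no labels: only the adversarial term, computed from the discriminator's response to the target prediction maps, updates the segmentation network on the new domain.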
Pages: 4955-4958 (4 pages)
Related Papers (50 total; entries 31-40 shown)
  • [31] Unsupervised Domain Adaptation for Remote Sensing Image Segmentation Based on Adversarial Learning and Self-Training. Liang, Chenbin; Cheng, Bo; Xiao, Baihua; Dong, Yunyun. IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2023, 20.
  • [32] Adversarial unsupervised domain adaptation for 3D semantic segmentation with multi-modal learning. Liu, Wei; Luo, Zhiming; Cai, Yuanzheng; Yu, Ying; Ke, Yang; Marcato Junior, Jose; Goncalves, Wesley Nunes; Li, Jonathan. ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2021, 176: 211-221.
  • [33] Adversarial Robustness for Unsupervised Domain Adaptation. Awais, Muhammad; Zhou, Fengwei; Xu, Hang; Hong, Lanqing; Luo, Ping; Bae, Sung-Ho; Li, Zhenguo. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021: 8548-8557.
  • [34] Scale variance minimization for unsupervised domain adaptation in image segmentation. Guan, Dayan; Huang, Jiaxing; Lu, Shijian; Xiao, Aoran. PATTERN RECOGNITION, 2021, 112.
  • [35] Unsupervised domain adaptation for cross-modality liver segmentation via joint adversarial learning and self-learning. Hong, Jin; Yu, Simon Chun-Ho; Chen, Weitian. APPLIED SOFT COMPUTING, 2022, 121.
  • [36] Deep cycle autoencoder for unsupervised domain adaptation with generative adversarial networks. Zhou, Qiang; Zhou, Wen'an; Yang, Bin; Huan, Jun. IET COMPUTER VISION, 2019, 13 (07): 659-665.
  • [37] Unsupervised Domain Adaptation with Generative Adversarial Networks for Facial Emotion Recognition. Fan, Yingruo; Lam, Jacqueline C. K.; Li, Victor O. K. 2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2018: 4460-4464.
  • [38] Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks. Bousmalis, Konstantinos; Silberman, Nathan; Dohan, David; Erhan, Dumitru; Krishnan, Dilip. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017: 95-104.
  • [39] Deep Multi-Modality Adversarial Networks for Unsupervised Domain Adaptation. Ma, Xinhong; Zhang, Tianzhu; Xu, Changsheng. IEEE TRANSACTIONS ON MULTIMEDIA, 2019, 21 (09): 2419-2431.
  • [40] Unsupervised Feature-Level Domain Adaptation with Generative Adversarial Networks. Wu, Z.; Yang, Z.; Pu, X.; Xu, J.; Cao, S.; Ren, Y. Dianzi Keji Daxue Xuebao/Journal of the University of Electronic Science and Technology of China, 2022, 51 (04): 580-585, 607.