Fully convolutional DenseNet with adversarial training for semantic segmentation of high-resolution remote sensing images

Cited by: 9
Authors
Guo, Xuejun [1 ,2 ]
Chen, Zehua [2 ]
Wang, Chengyi [1 ]
Affiliations
[1] Chinese Acad Sci, Aerosp Informat Res Inst, Beijing, Peoples R China
[2] Taiyuan Univ Technol, Coll Data Sci, Taiyuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
semantic segmentation; generative adversarial network; fully convolutional neural network; high resolution remote sensing; DenseNets; NEURAL-NETWORK; CLASSIFICATION; EXTRACTION; MULTISCALE; FRAMEWORK;
DOI
10.1117/1.JRS.15.016520
Chinese Library Classification (CLC)
X [Environmental science, safety science];
Discipline codes
08; 0830;
Abstract
Semantic segmentation is an important and foundational task in the application of high-resolution remote sensing images (HRRSIs). However, HRRSIs exhibit high intra-class variance and low inter-class variance, which poses a significant challenge to their high-accuracy semantic segmentation. To address this issue and obtain strong feature expressiveness, a deep conditional generative adversarial network (DCGAN), integrating fully convolutional DenseNet (FC-DenseNet) and Pix2pix, is proposed. The DCGAN is composed of a generator-discriminator pair built on a modified downsampling unit of FC-DenseNet. The proposed method possesses strong feature expression ability because of the skip connections, very deep network structure, and multiscale supervision introduced by FC-DenseNet, as well as the supervision from the discriminator. Experiments on the DeepGlobe Land Cover dataset demonstrate the feasibility and effectiveness of this approach for the semantic segmentation of HRRSIs. The results also show that the method can mitigate the influence of class imbalance. This approach to precise semantic segmentation can effectively facilitate the application of HRRSIs. (C) 2021 Society of Photo-Optical Instrumentation Engineers (SPIE)
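The dense skip connections the abstract credits for the method's feature reuse can be illustrated with a toy NumPy sketch. This is not the paper's implementation: the function name, the growth rate, and the random channel-mixing matrix standing in for a learned 1x1 convolution are all illustrative assumptions; it only shows how each layer in a DenseNet-style block consumes the concatenation of all earlier feature maps.

```python
import numpy as np

def dense_block(x, n_layers=4, growth_rate=12, seed=0):
    """Toy DenseNet-style dense block: every layer receives the
    concatenation of the block input and all earlier layer outputs
    (the dense skip connections used by FC-DenseNet). A random
    channel-mixing matrix stands in for a learned 1x1 convolution."""
    rng = np.random.default_rng(seed)
    features = [x]                                 # list of (C, H, W) maps
    for _ in range(n_layers):
        inp = np.concatenate(features, axis=0)     # dense connectivity
        w = rng.standard_normal((growth_rate, inp.shape[0]))
        out = np.maximum(np.einsum("oc,chw->ohw", w, inp), 0.0)  # "conv" + ReLU
        features.append(out)                       # reused by all later layers
    return np.concatenate(features, axis=0)

x = np.ones((3, 8, 8))        # 3-channel toy input
y = dense_block(x)
print(y.shape)                # (51, 8, 8): channels grow to 3 + 4 * 12
```

Because every layer's output is appended and re-concatenated, the channel count grows linearly with depth, which is the feature-reuse property the paper builds its generator and discriminator on.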
Pages: 12
Related papers
50 in total
  • [31] Enhanced Lightweight End-to-End Semantic Segmentation for High-Resolution Remote Sensing Images
    Dong, He
    Yu, Baoguo
    Wu, Wanqing
    He, Chenglong
    [J]. IEEE Access, 2022, 10 : 70947 - 70954
  • [32] FSegNet: A Semantic Segmentation Network for High-Resolution Remote Sensing Images That Balances Efficiency and Performance
    Luo, Wen
    Deng, Fei
    Jiang, Peifan
    Dong, Xiujun
    Zhang, Gulan
    [J]. IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2024, 21 : 1 - 5
  • [33] Global Multi-Attention UResNeXt for Semantic Segmentation of High-Resolution Remote Sensing Images
    Chen, Zhong
    Zhao, Jun
    Deng, He
    [J]. REMOTE SENSING, 2023, 15 (07)
  • [34] RAANet: A Residual ASPP with Attention Framework for Semantic Segmentation of High-Resolution Remote Sensing Images
    Liu, Runrui
    Tao, Fei
    Liu, Xintao
    Na, Jiaming
    Leng, Hongjun
    Wu, Junjie
    Zhou, Tong
    [J]. REMOTE SENSING, 2022, 14 (13)
  • [35] Gated Convolutional Neural Network for Semantic Segmentation in High-Resolution Images
    Wang, Hongzhen
    Wang, Ying
    Zhang, Qian
    Xiang, Shiming
    Pan, Chunhong
    [J]. REMOTE SENSING, 2017, 9 (05)
  • [36] Research on Semantic Segmentation of High-resolution Remote Sensing Image Based on Full Convolutional Neural Network
    Fu, Xiaomeng
    Qu, Huiming
    [J]. 2018 12TH INTERNATIONAL SYMPOSIUM ON ANTENNAS, PROPAGATION AND ELECTROMAGNETIC THEORY (ISAPE), 2018,
  • [37] Semantic segmentation of high-resolution images
    Wang, Juhong
    Liu, Bin
    Xu, Kun
    [J]. SCIENCE CHINA-INFORMATION SCIENCES, 2017, 60 (12) : 256 - 261
  • [40] Convolutional Neural Network for the Semantic Segmentation of Remote Sensing Images
    Alam, Muhammad
    Wang, Jian-Feng
    Cong, Guangpei
    Lv, Yunrong
    Chen, Yuanfang
    [J]. Mobile Networks and Applications, 2021, 26 : 200 - 215