Multi-temporal remote sensing imagery semantic segmentation color consistency adversarial network

Cited: 0
Authors
Li X. [1 ]
Zhang L. [1 ]
Wang Q. [1 ]
Ai H. [1 ]
Affiliations
[1] Chinese Academy of Surveying and Mapping, Beijing
Keywords
Attention mechanism; Color consistency; Generative adversarial networks; Multi-temporal remote sensing imagery; Semantic segmentation;
DOI
10.11947/j.AGCS.2020.20190439
Abstract
Using deep convolutional neural networks (CNN) to intelligently extract buildings from remote sensing images is of great significance for digital city construction, disaster detection, and land management. Color differences between multi-temporal remote sensing images reduce the generalization ability of building semantic segmentation models. In view of this, this paper proposes the attention-guided color consistency adversarial network (ACGAN). The algorithm takes reference color style images and the images to be corrected, acquired over the same area at different phases, as the training set, and adopts a consistency adversarial network with a U-shaped attention mechanism to train the color consistency model. In the prediction stage, the model transforms the hue of the images to be corrected into that of the reference color style images; this stage relies on the reasoning ability of the deep learning model, so the corresponding reference color style image is no longer needed. To verify the effectiveness of the algorithm, we first compare it with traditional image processing algorithms and other consistency adversarial networks. The results show that images after ACGAN color consistency processing are more similar to the reference color style images. Second, we carried out building semantic segmentation experiments on the images processed by the above color consistency algorithms, which proved that the proposed method is more conducive to improving the generalization ability of multi-temporal remote sensing image semantic segmentation models. © 2020, Surveying and Mapping Press. All rights reserved.
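For context, the traditional image processing algorithms that the abstract says ACGAN is compared against include global statistics-based color transfer. The sketch below is an illustrative assumption of that classical baseline (Reinhard-style mean/std matching applied per RGB channel), not the ACGAN architecture itself; ACGAN replaces such a hand-crafted mapping with a learned adversarial model, so no paired reference image is needed at prediction time.

```python
import numpy as np

def color_transfer(source, reference):
    """Shift each channel of `source` so its mean and standard deviation
    match those of `reference` — a classical global color-transfer baseline.
    """
    source = source.astype(np.float64)
    reference = reference.astype(np.float64)
    out = np.empty_like(source)
    for c in range(source.shape[-1]):
        s_mean, s_std = source[..., c].mean(), source[..., c].std()
        r_mean, r_std = reference[..., c].mean(), reference[..., c].std()
        # Normalize the source channel, then rescale to the reference statistics.
        out[..., c] = (source[..., c] - s_mean) / (s_std + 1e-8) * r_std + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

A global mapping like this cannot adapt to local land-cover differences, which is one motivation for the attention-guided, learned approach described in the abstract.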
Pages: 1473-1484
Page count: 11
Related papers
32 items in total
  • [1] JI Shunping, WEI Shiqing, LU Meng, Fully convolutional networks for multisource building extraction from an open aerial and satellite imagery data set, IEEE Transactions on Geoscience and Remote Sensing, 57, 1, pp. 574-586, (2019)
  • [2] ZUO Zongcheng, ZHANG Wen, ZHANG Dongying, A remote sensing image semantic segmentation method by combining deformable convolution with conditional random fields, Acta Geodaetica et Cartographica Sinica, 48, 6, pp. 718-726, (2019)
  • [3] ZHANG Hongyi, CISSE M, DAUPHIN Y N, Et al., Mixup: beyond empirical risk minimization, (2017)
  • [4] INOUE H., Data augmentation by pairing samples for images classification, (2018)
  • [5] CUBUK E D, ZOPH B, MANE D, Et al., AutoAugment: learning augmentation strategies from data, Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 113-123, (2019)
  • [6] CANTY M J, NIELSEN A A., Automatic radiometric normalization of multitemporal satellite imagery with the iteratively re-weighted MAD transformation, Remote Sensing of Environment, 112, 3, pp. 1025-1036, (2008)
  • [7] HUANG T W, CHEN H T., Landmark-based sparse color representations for color transfer, Proceedings of 2009 IEEE International Conference on Computer Vision, pp. 199-204, (2009)
  • [8] LI Deren, WANG Mi, PAN Jun, Auto-dodging processing and its application for optical RS images, Geomatics and Information Science of Wuhan University, 31, 9, pp. 753-756, (2006)
  • [9] HORN B K P, WOODHAM R J., Destriping Landsat MSS images by histogram modification, Computer Graphics and Image Processing, 10, 1, pp. 69-83, (1979)
  • [10] PITIE F, KOKARAM A., The linear Monge-Kantorovitch colour mapping for example-based colour transfer, Proceedings of the 4th European Conference on Visual Media Production, pp. 27-28, (2007)