Cultivated land extraction from high-resolution remote sensing images based on BECU-Net model with edge enhancement

Cited: 0
Authors
Dong Z. [1,2,3]
Li J. [1]
Zhang J. [1]
Yu J. [1]
An S. [1]
Institutions
[1] School of Computer and Information, Hefei University of Technology, Hefei
[2] Anhui Key Laboratory of Industrial Safety and Emergency Technology, Hefei University of Technology, Hefei
[3] Anhui Provincial Laboratory of Intelligent Interconnection System, Hefei University of Technology, Hefei
Funding
Natural Science Foundation of Anhui Province;
Keywords
cultivated land extraction; edge enhancement; high resolution remote sensing image; remote sensing; semantic segmentation; U-Net;
DOI
10.11834/jrs.20222268
Abstract
Cultivated land cover, as an important technical index reflecting the dynamics of human activity and the degree of land resource utilization, is widely used in food security assessment and land management decision making. Existing information extraction methods ignore the differing characteristics of individual plots and the rich information contained in edge details, which leads to fragmented extraction results with fuzzy boundaries. Therefore, an improved model that couples a semantic segmentation network with edge enhancement is proposed to better address the insufficient fitting of cultivated land edges and to fully exploit the rich semantic features and edge information in remote sensing images; an edge loss is designed accordingly to further improve training accuracy and model performance.

We design an edge branch subnetwork composed of CoT units, gated convolution, and the SCSE attention mechanism to make edge and depth features complementary, and we construct a joint edge enhancement loss function with constraints, called BE-Loss, to strengthen the model's attention to boundary information. On this basis, we build the cultivated land information extraction model BECU-Net by combining an EfficientNet backbone with the U-Net framework. In the multi-feature input layer of this model, index and texture features are pre-extracted from the preprocessed data and the input structure is adjusted, which improves the feature expression ability of the network.

The extraction accuracy for cultivated land is 94.13%, and the F1-score is 95.17%. Compared with PANet, the extraction accuracy increases by 15.01% and the F1-score by 7.93%; compared with the DeeplabV3+ network, the extraction accuracy increases by 2.03% and the F1-score by 1.15%. The cultivated land edges extracted by the BECU-Net model are clear and close to the true edge shapes, with few holes and islands. Extracted large parcels are complete with sharp edges and corners, and extracted small parcels have clear outlines and little deformation. At various gaps and complex edges on the GID dataset, the extraction results are significantly better than those of the five other models. The model is particularly effective for edge extraction, and the sawtooth and cavity artifacts in cultivated land patches are effectively suppressed.

(1) The multi-feature input layer, which incorporates index and texture features, can effectively reflect the characteristics of cultivated land. (2) The edge branch subnetwork focuses on shape information to better identify boundary details in cultivated land images; its edge features complement the depth features of the EfficientNet encoder and can be cascaded with them to fully exploit shallow details. (3) The improved combined loss function BE-Loss, with a regularization term, alleviates the problems of imbalanced training sample categories and a loss dominated by non-edge pixels. Overall, the algorithm in this study provides a technical reference for further resolving fuzzy boundaries when extracting cultivated land information and offers theoretical support for the accurate delineation of complex boundaries. © 2023 Science Press. All rights reserved.
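The abstract does not specify which index and texture features the multi-feature input layer pre-extracts. A minimal sketch of the general idea, assuming an NDVI-style spectral index and a local-variance texture measure stacked onto the raw bands (the band order, feature choices, and the function name build_multifeature_input are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def build_multifeature_input(image):
    """Stack illustrative index and texture features onto the raw bands.

    image: float array of shape (H, W, 4), bands assumed ordered R, G, B, NIR
           (band order and feature choices are assumptions, not from the paper).
    Returns an array of shape (H, W, 6).
    """
    red, nir = image[..., 0], image[..., 3]

    # Spectral index feature: NDVI = (NIR - R) / (NIR + R)
    ndvi = (nir - red) / (nir + red + 1e-6)

    # Simple texture feature: local variance of the red band in a 5x5 window
    mean = uniform_filter(red, size=5)
    mean_sq = uniform_filter(red ** 2, size=5)
    texture = mean_sq - mean ** 2

    # Concatenate raw bands with the pre-extracted features along the channel axis
    return np.concatenate([image, ndvi[..., None], texture[..., None]], axis=-1)
```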
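The exact edge branch architecture (CoT units, gated convolution, SCSE) is only named in the abstract. The sketch below shows, under assumptions, one way gated convolution can let semantic decoder features gate shallow edge features before an SCSE attention block; the module names, channel sizes, and fusion scheme are hypothetical rather than the paper's design:

```python
import torch
import torch.nn as nn

class SCSE(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.cse = nn.Sequential(                      # channel attention branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())  # spatial attention branch

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)

class GatedEdgeFusion(nn.Module):
    """Gate shallow edge features with semantic decoder features (illustrative)."""
    def __init__(self, edge_ch, sem_ch, out_ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(edge_ch + sem_ch, 1, 1), nn.Sigmoid())
        self.proj = nn.Conv2d(edge_ch, out_ch, 3, padding=1)
        self.scse = SCSE(out_ch)

    def forward(self, edge_feat, sem_feat):
        # Semantic features decide where edge responses are kept (the "gate"),
        # then the gated edge features are projected and re-weighted by SCSE.
        g = self.gate(torch.cat([edge_feat, sem_feat], dim=1))
        return self.scse(self.proj(edge_feat * g))
```

For example, `GatedEdgeFusion(edge_ch=64, sem_ch=128, out_ch=64)` would fuse a 64-channel shallow edge map with a 128-channel decoder feature map of the same spatial size.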
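The abstract states only that BE-Loss combines an edge enhancement term with a regularization ("regular") term to counter class imbalance and the dominance of non-edge pixels; its exact definition is given in the full paper. One illustrative form, with all symbols and weights assumed here rather than taken from the paper, is:

$$
\mathcal{L}_{\mathrm{BE}} = \mathcal{L}_{\mathrm{seg}} + \lambda\,\mathcal{L}_{\mathrm{edge}} + \mu\,\mathcal{R},
\qquad
\mathcal{L}_{\mathrm{edge}} = -\frac{1}{|E|}\sum_{i\in E}\Big[w\,y_i\log\hat{y}_i + (1-y_i)\log(1-\hat{y}_i)\Big],
$$

where $\mathcal{L}_{\mathrm{seg}}$ is a standard segmentation loss over all pixels, $E$ is the set of pixels in an edge neighborhood, $w$ re-weights the minority class, $\lambda$ and $\mu$ balance the terms, and $\mathcal{R}$ stands for the regularization term mentioned in conclusion (3).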
Pages: 2847-2859
Page count: 12