Road extraction networks fusing multiscale and edge features

Cited by: 0
Authors
Sun, Genyun [1 ,2 ,3 ]
Sun, Chao [1 ]
Zhang, Aizhu [1 ]
Affiliations
[1] College of Oceanography and Space Informatics, China University of Petroleum (East China), Qingdao 266580, China
[2] Key Laboratory of Natural Resources Monitoring in Tropical and Subtropical Area of South China, Ministry of Natural Resources, Guangzhou 510700, China
[3] Laboratory for Marine Mineral Resources, Qingdao National Laboratory for Marine Science and Technology, Qingdao 266237, China
Funding
National Natural Science Foundation of China
Keywords
Edge detection; Urban growth
DOI
10.11947/j.AGCS.2024.20230291
Abstract
Extracting roads from remote sensing images is of great significance to urban development. However, factors such as the variable scale of roads and their susceptibility to occlusion lead to missed road detections and incomplete edges. To address these problems, this paper proposes MeD-Net, a network for road extraction from remote sensing images that integrates multi-scale features and focuses on edge detail features. MeD-Net consists of two parts: road segmentation and edge extraction. The road segmentation network uses a multi-scale deep feature processing (MDFP) module to extract multi-scale features that account for both global and local information, and the model is trained with group normalization applied after convolution. The edge extraction network uses a detail-guided fusion algorithm to enhance the detail information of deep edge features and applies an attention mechanism for feature fusion. To verify the algorithm's performance, experiments were conducted on the Massachusetts road dataset and a GF-2 road dataset of the Qingdao area. The experiments show that MeD-Net achieves the highest accuracy on both datasets in terms of intersection-over-union (IoU) and F1 score, and it can extract roads at different scales while preserving road edges more completely. © 2025 SinoMaps Press. All rights reserved.
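This record does not include the authors' code. As a rough illustration of two ingredients named in the abstract (parallel multi-scale convolution with global context, and group normalization after convolution), the following PyTorch sketch shows a hypothetical stand-in for the MDFP module; the class names, dilation rates, and group count are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ConvGN(nn.Module):
    """3x3 convolution followed by group normalization and ReLU,
    mirroring the abstract's 'group normalization after convolution'.
    The group count of 8 is an assumption."""

    def __init__(self, in_ch, out_ch, groups=8):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.gn = nn.GroupNorm(groups, out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.gn(self.conv(x)))


class MultiScaleBlock(nn.Module):
    """Hypothetical stand-in for the MDFP module: parallel dilated
    convolutions cover several local receptive fields, and a global
    average pooling branch contributes image-level context."""

    def __init__(self, ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=d, dilation=d, bias=False),
                nn.GroupNorm(8, ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        self.global_ctx = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1))
        self.fuse = ConvGN(ch * (len(dilations) + 1), ch)  # merge all branches

    def forward(self, x):
        g = self.global_ctx(x).expand_as(x)            # broadcast global context
        feats = [branch(x) for branch in self.branches] + [g]
        return self.fuse(torch.cat(feats, dim=1))      # concatenate and fuse


if __name__ == "__main__":
    x = torch.randn(1, 32, 128, 128)      # dummy feature map
    print(MultiScaleBlock(32)(x).shape)   # torch.Size([1, 32, 128, 128])
```

Concatenating the dilated branches with a pooled global-context branch before a fusing convolution is one common way to combine local and global information; whether MeD-Net fuses its scales this way is not specified in the abstract.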
Pages: 2233-2243