CDANet: Contextual Detail-Aware Network for High-Spatial-Resolution Remote-Sensing Imagery Shadow Detection

Cited by: 12
|
Authors
Zhu, Qiqi [1 ]
Yang, Yang [1 ]
Sun, Xiongli [2 ]
Guo, Mingqiang [1 ,3 ]
Affiliations
[1] China Univ Geosci, Sch Geog & Informat Engn, Wuhan 430074, Peoples R China
[2] Res Inst Informat Technol, China Construct Engn Bur 3, Wuhan 430070, Peoples R China
[3] Minist Nat Resources, Key Lab Urban Land Resources Monitoring & Simulat, Shenzhen 518034, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Task analysis; Semantics; Decoding; Learning systems; Deep learning; Image reconstruction; Contextual information; deep learning; remote sensing; shadow detection; AERIAL IMAGES; FEATURE-EXTRACTION; CLASSIFICATION; RECONSTRUCTION; COMPENSATION; ILLUMINATION; AREAS;
DOI
10.1109/TGRS.2022.3143886
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Discipline Classification Codes
0708; 070902;
Abstract
Shadow detection automatically marks shadow pixels in high-spatial-resolution (HSR) imagery with specific categories based on meaningful colorific features. Accurate shadow mapping is crucial for interpreting images and recovering radiometric information. Recent studies have demonstrated the superiority of deep learning for shadow detection in very-high-resolution satellite imagery. However, previous methods usually stack convolutional layers, which causes the loss of spatial information. In addition, shadows vary in scale and shape, and small, irregular shadows are challenging to detect. Moreover, the unbalanced distribution of foreground and background biases the common binary cross-entropy loss function, which seriously affects model training. To remedy these issues, a contextual detail-aware network (CDANet), a novel framework for extracting accurate and complete shadows, is proposed. In CDANet, a double-branch module is embedded in the encoder-decoder structure to effectively alleviate the loss of low-level local information during convolution. A contextual semantic fusion connection with a residual dilation module is proposed to provide multiscale contextual information for diverse shadows. A hybrid loss function is designed to retain the detailed information of tiny shadows; it computes the shadow distribution per pixel and improves the robustness of the model. The performance of the proposed method is validated on two distinct shadow detection datasets, and CDANet shows higher portability and robustness than other methods.
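The abstract states only that CDANet uses a per-pixel hybrid loss to counter foreground/background imbalance and to preserve tiny shadows, without giving its exact formulation. The following is a minimal sketch of one common way to realize such a loss, combining binary cross-entropy with a soft Dice term; the class name `HybridShadowLoss`, the 0.5 weighting, and the smoothing constant are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of a hybrid shadow-detection loss: per-pixel BCE plus a soft
# Dice term. This is an assumed realization of the idea described in the
# abstract, not CDANet's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridShadowLoss(nn.Module):
    """Weighted sum of per-pixel BCE and a soft Dice loss (hypothetical weighting)."""

    def __init__(self, bce_weight: float = 0.5, eps: float = 1e-6):
        super().__init__()
        self.bce_weight = bce_weight
        self.eps = eps

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits, target: (N, 1, H, W); target holds 0/1 shadow labels as floats.
        bce = F.binary_cross_entropy_with_logits(logits, target)

        prob = torch.sigmoid(logits)
        inter = (prob * target).sum(dim=(1, 2, 3))
        union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        dice = 1.0 - (2.0 * inter + self.eps) / (union + self.eps)

        # The Dice term is largely insensitive to class imbalance, so small
        # shadow regions still contribute meaningfully to the gradient.
        return self.bce_weight * bce + (1.0 - self.bce_weight) * dice.mean()


if __name__ == "__main__":
    loss_fn = HybridShadowLoss()
    logits = torch.randn(2, 1, 64, 64)                  # raw network outputs
    target = (torch.rand(2, 1, 64, 64) > 0.9).float()   # sparse shadow mask
    print(loss_fn(logits, target).item())
```

The region-based Dice term is one standard remedy for the bias of plain binary cross-entropy under heavy class imbalance; other choices (e.g., focal or weighted BCE variants) would serve the same purpose.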
Pages: 15