Convolutional block attention module U-Net: a method to improve attention mechanism and U-Net for remote sensing images

Cited: 6
Authors
Zhang, Yanjun [1 ]
Kong, Jiayuan [2 ]
Long, Sifang [3 ]
Zhu, Yuanhao [1 ]
He, Fushuai [4 ]
Affiliations
[1] China Univ Min & Technol Beijing, Coll Geosci & Surveying Engn, Beijing, Peoples R China
[2] Taiyuan Univ Technol, Sch Min Engn, Taiyuan, Peoples R China
[3] Zhejiang Univ, Coll Biosyst Engn & Food Sci, Hangzhou, Peoples R China
[4] Power China Zhongnan Engn Corp Ltd, Changsha, Hunan, Peoples R China
Keywords
remote sensing image; building extraction; U-Net; attention mechanism; feature transmission;
DOI
10.1117/1.JRS.16.026516
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science];
Subject Classification Codes
08 ; 0830 ;
Abstract
Building extraction from high-resolution remote sensing images is fundamental to many fields, but existing algorithms cannot extract building details well because of complex backgrounds and occlusion by interfering objects. To address the low accuracy and incomplete boundaries of traditional building extraction methods, ProCBAM, a parallel attention mechanism based on the U-Net network, is added to the feature transmission step of U-Net. By combining the spatial-dimension image feature map with the channel-dimension optimized feature map, the network learns more detailed image information and reduces image recognition errors. Experiments are carried out on the Massachusetts building dataset, the Wuhan University dataset, and the IND.v2 dataset, and the results demonstrate the effectiveness of the method for building extraction from remote sensing images. (c) 2022 Society of Photo-Optical Instrumentation Engineers (SPIE)
Pages: 15
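
The abstract describes ProCBAM only at a high level: channel-dimension and spatial-dimension attention computed in parallel on the U-Net feature transmission (skip connection) path. The PyTorch sketch below is one plausible reading of such a block, not the authors' published code; the class name ParallelCBAM, the reduction ratio, the 7x7 spatial kernel, and the additive fusion of the two branches are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParallelCBAM(nn.Module):
    """Hypothetical CBAM-style block with channel and spatial attention
    computed in parallel from the same input (a sketch, not the paper's code)."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Channel attention: shared MLP over globally avg- and max-pooled features.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        # Spatial attention: one conv over channel-wise avg- and max-maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention map, shape (B, C, 1, 1).
        ca = torch.sigmoid(
            self.mlp(F.adaptive_avg_pool2d(x, 1)) + self.mlp(F.adaptive_max_pool2d(x, 1))
        )
        # Spatial attention map, shape (B, 1, H, W).
        sa = torch.sigmoid(
            self.spatial(
                torch.cat([x.mean(dim=1, keepdim=True), x.max(dim=1, keepdim=True).values], dim=1)
            )
        )
        # Parallel branches fused by summation; the fusion rule is an
        # assumption, as the abstract does not specify it.
        return x * ca + x * sa


# Usage sketch: refine an encoder feature map before it is concatenated
# with the corresponding decoder feature map in a U-Net.
skip = ParallelCBAM(channels=64)(torch.randn(1, 64, 128, 128))
```

In a U-Net, such a block would sit on each skip connection, so the decoder receives attention-refined encoder features rather than raw ones, which matches the abstract's claim of adding attention to the feature transmission step.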