Embedded Attention Network for Semantic Segmentation

Cited by: 3
Authors
Lv, Qingxuan [1 ,2 ]
Feng, Mingzhe [1 ,2 ]
Sun, Xin [1 ,2 ]
Dong, Junyu [1 ,2 ]
Chen, Changrui [1 ,2 ]
Zhang, Yu [1 ,2 ]
Affiliations
[1] Ocean Univ China, Coll Informat Sci & Engn, Haide Coll, Qingdao 266100, Shandong, Peoples R China
[2] Ocean Univ China, Inst Adv Ocean Study, Qingdao 266100, Shandong, Peoples R China
Source
IEEE ROBOTICS AND AUTOMATION LETTERS
Funding
National Natural Science Foundation of China
Keywords
Semantics; Task analysis; Sun; Image segmentation; Costs; Convolution; Computational modeling; Computer vision for transportation; object detection; segmentation and categorization; deep learning; self-attention; ENVIRONMENTS
DOI
10.1109/LRA.2021.3126892
CLC Classification Number
TP24 [Robotics]
Subject Classification Code
080202; 1405
Abstract
Semantic segmentation, as a fundamental task in computer vision, provides perception capability for many robot applications, such as autonomous navigation. To enhance segmentation accuracy, the self-attention mechanism has been adopted as a key technique for capturing long-range dependencies and enlarging receptive fields. However, it incurs high computational complexity and GPU memory consumption. In this letter, we propose an Embedded Attention Network to relieve this undesired computational cost. Specifically, we introduce an Embedded Attention (EA) block that improves both segmentation performance and efficiency. First, the EA block generates a group of compact yet coarse feature bases, which greatly reduces the computation cost. Then, an embedded attention collects global contextual information and updates the representation of the coarse bases from a global view. Finally, the updated bases are used to estimate the attention similarity, and feature aggregation is performed with these well-estimated bases. Our approach achieves a considerable reduction in computation cost, making it better suited than its counterparts to most robot platforms. We conduct extensive experiments on two benchmark semantic segmentation datasets, Cityscapes and ADE20K. The results demonstrate that the proposed Embedded Attention Network delivers comparable performance with high efficiency.
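
As a rough illustration of the mechanism the abstract describes, the sketch below gives one plausible PyTorch reading of the EA block: pooled features serve as the compact-yet-coarse bases, a base-to-pixel cross-attention updates the bases with global context, and each pixel then attends to the updated bases to aggregate features. The class name EmbeddedAttentionBlock, the pooling-based basis generation, the 1x1 projections, the residual connection, and the num_bases default are all assumptions made for this sketch, not the authors' released implementation.

# A minimal sketch of the Embedded Attention (EA) block as described in the
# abstract; the structure and all hyperparameters are assumptions, not the
# authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddedAttentionBlock(nn.Module):
    def __init__(self, channels: int, num_bases: int = 64):
        super().__init__()
        # 1x1 convolutions producing query/key/value features (an assumption).
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Bases come from adaptive pooling here; num_bases << H*W is what
        # removes the quadratic attention cost.
        self.pool_size = int(num_bases ** 0.5)
        self.scale = channels ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C)
        k = self.key(x).flatten(2).transpose(1, 2)    # (B, HW, C)
        v = self.value(x).flatten(2).transpose(1, 2)  # (B, HW, C)

        # Step 1: a compact-yet-coarse set of feature bases.
        bases = F.adaptive_avg_pool2d(x, self.pool_size)  # (B, C, s, s)
        bases = bases.flatten(2).transpose(1, 2)          # (B, K, C), K = s*s

        # Step 2: embedded attention -- bases query every pixel to collect
        # global context and update their representation.
        attn = torch.softmax(bases @ k.transpose(1, 2) * self.scale, dim=-1)  # (B, K, HW)
        bases = attn @ v                                                      # (B, K, C)

        # Step 3: pixels attend to the updated bases and aggregate features;
        # cost is O(HW*K) instead of the O((HW)^2) of full self-attention.
        sim = torch.softmax(q @ bases.transpose(1, 2) * self.scale, dim=-1)   # (B, HW, K)
        out = sim @ bases                                                     # (B, HW, C)
        return x + out.transpose(1, 2).reshape(b, c, h, w)  # residual output

if __name__ == "__main__":
    feats = torch.randn(2, 256, 64, 128)             # backbone features, 1/8 scale
    print(EmbeddedAttentionBlock(256)(feats).shape)  # torch.Size([2, 256, 64, 128])

With K = 64 bases, both attention products scale linearly in the number of pixels, O(HW*K), rather than the O((HW)^2) of full self-attention, which is where the cost reduction claimed in the abstract would come from.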
Pages: 326-333
Page count: 8