Lightweight Self-Attention Network for Semantic Segmentation

Cited by: 1
|
Authors
Zhou, Yan [1 ]
Zhou, Haibin [2 ]
Li, Nanjun [3 ]
Li, Jianxun [4 ]
Wang, Dongli [1 ]
Affiliations
[1] Xiangtan Univ, Sch Automat & Elect Informat, Xiangtan 411105, Peoples R China
[2] Xiangtan Univ, Sch Math & Computat Sci, Xiangtan 411105, Peoples R China
[3] Shenzhen CBPM KEXIN Banking Technol CO LTD, Shenzhen 518000, Peoples R China
[4] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
Source
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2022
Funding
National Natural Science Foundation of China;
Keywords
Semantic segmentation; Attention module; Encoder-decoder architecture;
DOI
10.1109/IJCNN55064.2022.9891928
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural network models based on self-attention (SA) have been widely adopted in semantic segmentation to capture rich contextual information. However, the standard self-attention module has high computational complexity, which limits its use in practice. In this work, we propose the Lightweight Self-Attention Network (LSANet) for semantic segmentation. Specifically, the Lightweight Self-Attention Module (LSAM) captures contextual information using a hand-designed compact feature representation and a weighted fusion of position information. In the decoder, an improved up-sampling module is proposed; compared with bilinear upsampling, it restores image details more accurately. Experimental results on the PASCAL VOC 2012 and Cityscapes datasets show the effectiveness of our method, which simplifies operations and improves performance.
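
To make the idea concrete, the sketch below shows one common way to build a reduced-complexity self-attention block of the kind the abstract describes: keys and values are pooled to a small spatial grid before attention, and a learned scalar weight fuses a simple positional map into the input. This is only an illustrative approximation under those assumptions; the record does not specify the actual LSAM design, its compact feature representation, or its position-fusion scheme, so the class and parameter names here (LightweightSelfAttention, reduced_hw, pos_weight) are hypothetical.

# Minimal sketch of a lightweight self-attention block (NOT the authors' LSAM).
# Assumption: complexity is reduced by average-pooling keys/values to a small
# spatial grid before attention, and a learned scalar weight fuses a simple
# positional map into the input.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LightweightSelfAttention(nn.Module):
    def __init__(self, channels: int, reduced_hw: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(reduced_hw)    # compact K/V representation
        self.pos_weight = nn.Parameter(torch.zeros(1))  # weighted fusion of position info
        self.gamma = nn.Parameter(torch.zeros(1))       # residual scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Normalized-coordinate positional map, fused into the input features.
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        xin = x + self.pos_weight * (ys + xs) / 2

        q = self.query(xin).flatten(2).transpose(1, 2)              # B x HW x C'
        k = self.key(self.pool(xin)).flatten(2)                     # B x C' x S, S = reduced_hw^2
        v = self.value(self.pool(xin)).flatten(2).transpose(1, 2)   # B x S x C

        # Attention is HW x S instead of HW x HW, reducing the cost.
        attn = F.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out


# Usage example with a hypothetical encoder feature map.
if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    print(LightweightSelfAttention(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
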
Pages: 8
Related Papers
50 records
  • [1] Self-attention feature fusion network for semantic segmentation
    Zhou, Zhen
    Zhou, Yan
    Wang, Dongli
    Mu, Jinzhen
    Zhou, Haibin
    NEUROCOMPUTING, 2021, 453 : 50 - 59
  • [2] Pyramid Self-attention for Semantic Segmentation
    Qi, Jiyang
    Wang, Xinggang
    Hu, Yao
    Tang, Xu
    Liu, Wenyu
    PATTERN RECOGNITION AND COMPUTER VISION, PT I, 2021, 13019 : 480 - 492
  • [3] CaSaFormer: A cross- and self-attention based lightweight network for large-scale building semantic segmentation
    Li, Jiayi
    Hu, Yuping
    Huang, Xin
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2024, 130
  • [4] FsaNet: Frequency Self-Attention for Semantic Segmentation
    Zhang, Fengyu
    Panahi, Ashkan
    Gao, Guangjun
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 4757 - 4772
  • [5] Lunet: an enhanced upsampling fusion network with efficient self-attention for semantic segmentation
    Zhou, Yan
    Zhou, Haibin
    Yang, Yin
    Li, Jianxun
    Irampaye, Richard
    Wang, Dongli
    Zhang, Zhengpeng
    VISUAL COMPUTER, 2024: 3109 - 3128
  • [6] Real-Time Semantic Segmentation Network Based on Regional Self-Attention
    Bao Hailong
    Wan Min
    Liu Zhongxian
    Qin Mian
    Cui Haoyu
    LASER & OPTOELECTRONICS PROGRESS, 2021, 58 (08)
  • [7] SATS: Self-attention transfer for continual semantic segmentation
    Qiu, Yiqiao
    Shen, Yixing
    Sun, Zhuohao
    Zheng, Yanchong
    Chang, Xiaobin
    Zheng, Weishi
    Wang, Ruixuan
    PATTERN RECOGNITION, 2023, 138
  • [8] Lightweight Semantic Segmentation Network Based on Attention Coding
    Chen Xiaolong
    Zhao Ji
    Chen Siyi
    LASER & OPTOELECTRONICS PROGRESS, 2021, 58 (14)
  • [9] Saliency Guided Self-Attention Network for Weakly and Semi-Supervised Semantic Segmentation
    Yao, Qi
    Gong, Xiaojin
    IEEE ACCESS, 2020, 8 : 14413 - 14423
  • [10] Lightweight Semi-Supervised Semantic Segmentation Algorithm Based on Dual-Polarization Self-Attention
    Ma, Dongmei
    Li, Yueyuan
    Chen, Xi
    COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (08): 225 - 233