Real-Time Semantic Segmentation With Fast Attention

Cited by: 86
Authors
Hu, Ping [1 ,2 ]
Perazzi, Federico [3 ]
Heilbron, Fabian Caba [4 ]
Wang, Oliver [4 ]
Lin, Zhe [4 ]
Saenko, Kate [1 ,2 ]
Sclaroff, Stan [1 ,2 ]
Affiliations
[1] Boston Univ, Dept Comp Sci, 111 Cummington St, Boston, MA 02215 USA
[2] MIT IBM Watson AI Lab, Cambridge, MA 02142 USA
[3] Facebook, Menlo Pk, CA 94025 USA
[4] Adobe, San Jose, CA 95110 USA
Funding
US National Science Foundation
Keywords
Semantics; Real-time systems; Feature extraction; Computational modeling; Computational efficiency; Videos; Computer architecture; Semantic segmentation; real-time speed; fast attention;
DOI
10.1109/LRA.2020.3039744
Chinese Library Classification
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
In deep CNN based models for semantic segmentation, high accuracy relies on rich spatial context (large receptive fields) and fine spatial details (high resolution), both of which incur high computational costs. In this letter, we propose a novel architecture that addresses both challenges and achieves state-of-the-art performance for semantic segmentation of high-resolution images and videos in real-time. The proposed architecture relies on our fast spatial attention, a simple yet efficient modification of the popular self-attention mechanism that captures the same rich spatial context at a small fraction of the computational cost by changing the order of operations. Moreover, to efficiently process high-resolution input, we apply an additional spatial reduction to intermediate feature stages of the network with minimal loss in accuracy, thanks to the use of the fast attention module to fuse features. We validate our method with a series of experiments on multiple datasets, demonstrating better accuracy and speed than existing approaches for real-time semantic segmentation. On Cityscapes, our network achieves 74.4% mIoU at 72 FPS and 75.5% mIoU at 58 FPS on a single Titan X GPU, which is ~50% faster than the state-of-the-art while retaining the same accuracy.
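The "changing the order of operations" idea from the abstract can be sketched numerically. The sketch below assumes the softmax in self-attention is replaced by L2-normalizing Q and K along the channel dimension (as the fast-attention formulation requires, so that the product becomes purely associative); shapes and variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 64, 8  # n spatial positions (H*W), c channels; n >> c in practice
q = rng.standard_normal((n, c))
k = rng.standard_normal((n, c))
v = rng.standard_normal((n, c))

# Replace softmax with L2 normalization of Q and K along channels,
# so attention reduces to a chain of plain matrix products.
qn = q / np.linalg.norm(q, axis=1, keepdims=True)
kn = k / np.linalg.norm(k, axis=1, keepdims=True)

# Standard order: (Q K^T) V builds an n-by-n affinity map -> O(n^2 c)
slow = (qn @ kn.T) @ v / n

# Fast attention: Q (K^T V) builds a c-by-c matrix first -> O(n c^2)
fast = qn @ (kn.T @ v) / n

# Matrix multiplication is associative, so both orders agree.
assert np.allclose(slow, fast)
```

Since n grows quadratically with image resolution while c stays fixed, computing the small c-by-c matrix K^T V first is what makes the same context aggregation affordable at high resolution.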
Pages: 263-270
Page count: 8
Related Papers
50 in total
  • [1] Stripe Pooling Attention for Real-Time Semantic Segmentation
    Lyu J.
    Sun Y.
    Xu P.
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2023, 35 (09): : 1395 - 1404
  • [2] A Fast Attention-Guided Hierarchical Decoding Network for Real-Time Semantic Segmentation
    Hu, Xuegang
    Feng, Jing
    SENSORS, 2024, 24 (01)
  • [3] Efficient real-time semantic segmentation: accelerating accuracy with fast non-local attention
    Lan, Tianye
    Dou, Furong
    Feng, Ziliang
    Zhang, Chengfang
    VISUAL COMPUTER, 2024, 40 (08): : 5783 - 5796
  • [4] Contextual Attention Refinement Network for Real-Time Semantic Segmentation
    Hao, Shijie
    Zhou, Yuan
    Zhang, Youming
    Guo, Yanrong
    IEEE ACCESS, 2020, 8 (08): : 55230 - 55240
  • [5] A lightweight network with attention decoder for real-time semantic segmentation
    Wang, Kang
    Yang, Jinfu
    Yuan, Shuai
    Li, Mingai
    VISUAL COMPUTER, 2022, 38 (07): : 2329 - 2339
  • [6] BiAttnNet: Bilateral Attention for Improving Real-Time Semantic Segmentation
    Li, Genling
    Li, Liang
    Zhang, Jiawan
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 46 - 50
  • [9] Bilateral attention decoder: A lightweight decoder for real-time semantic segmentation
    Peng, Chengli
    Tian, Tian
    Chen, Chen
    Guo, Xiaojie
    Ma, Jiayi
    NEURAL NETWORKS, 2021, 137 : 188 - 199
  • [10] Attention based lightweight asymmetric network for real-time semantic segmentation
    Liu, Qian
    Wang, Cunbao
    Li, Zhensheng
    Qi, Youwei
    Fang, Jiongtao
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 130