Head-Free Lightweight Semantic Segmentation with Linear Transformer

Cited: 0
Authors
Dong, Bo [1 ]
Wang, Pichao [1 ]
Wang, Fan [1 ]
Affiliations
[1] Alibaba Group, Hangzhou, People's Republic of China
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Existing semantic segmentation work has mainly focused on designing effective decoders; however, the computational load introduced by the overall structure has long been ignored, which hinders applications on resource-constrained hardware. In this paper, we propose a head-free lightweight architecture specifically for semantic segmentation, named Adaptive Frequency Transformer (AFFormer). AFFormer adopts a parallel architecture that leverages prototype representations as specific learnable local descriptions, which replace the decoder and preserve rich image semantics on high-resolution features. Although removing the decoder compresses most of the computation, the accuracy of the parallel structure is still limited by low computational resources. Therefore, we employ heterogeneous operators (CNN and Vision Transformer) for pixel embedding and prototype representations to further reduce computational cost. Moreover, it is very difficult to linearize the complexity of the Vision Transformer in the spatial domain. Because semantic segmentation is very sensitive to frequency information, we construct a lightweight prototype learning block with an adaptive frequency filter of complexity O(n) to replace standard self-attention of complexity O(n^2). Extensive experiments on widely adopted datasets demonstrate that AFFormer achieves superior accuracy while retaining only 3M parameters. On the ADE20K dataset, AFFormer achieves 41.8 mIoU at 4.6 GFLOPs, which is 4.4 mIoU higher than SegFormer with 45% fewer GFLOPs. On the Cityscapes dataset, AFFormer achieves 78.7 mIoU at 34.4 GFLOPs, which is 2.5 mIoU higher than SegFormer with 72.5% fewer GFLOPs. Code is available at https://github.com/dongbo811/AFFormer.
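
Note: the following is a minimal, illustrative PyTorch sketch of the linear-complexity idea described in the abstract: a small, fixed set of learnable prototype tokens exchanges information with the n pixel tokens via cross-attention, so the cost grows as O(n*k) = O(n) for a constant number of prototypes k, instead of the O(n^2) of full self-attention. It is not the authors' adaptive frequency filter or the actual AFFormer block; the module name, hyperparameters, and structure are assumptions made purely for illustration.

import torch
import torch.nn as nn

class PrototypeCrossAttention(nn.Module):
    # Hypothetical sketch: token mixing through k learnable prototypes, O(n*k) cost.
    def __init__(self, dim: int, num_prototypes: int = 16, num_heads: int = 4):
        super().__init__()
        # Learnable prototype tokens acting as compact local descriptions.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        self.gather = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.broadcast = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) flattened high-resolution pixel embeddings.
        b = x.size(0)
        protos = self.prototypes.unsqueeze(0).expand(b, -1, -1)  # (B, K, C)
        ctx, _ = self.gather(query=protos, key=x, value=x)       # prototypes pool pixel context, O(N*K)
        out, _ = self.broadcast(query=x, key=ctx, value=ctx)     # pixels read the context back, O(N*K)
        return self.norm(x + out)

if __name__ == "__main__":
    feats = torch.randn(2, 64 * 64, 96)       # B=2, N=4096 pixel tokens, C=96
    block = PrototypeCrossAttention(dim=96)
    print(block(feats).shape)                 # torch.Size([2, 4096, 96])

Because k stays fixed while n grows with image resolution, the two cross-attention passes remain linear in the number of pixels, which is the property the abstract's prototype-based design relies on.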
Pages: 516-524
Number of pages: 9
Related Papers
50 records in total
  • [21] Transformer Scale Gate for Semantic Segmentation
    Shi, Hengcan
    Hayat, Munawar
    Cai, Jianfei
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 3051 - 3060
  • [22] TransRVNet: LiDAR Semantic Segmentation With Transformer
    Cheng, Hui-Xian
    Han, Xian-Feng
    Xiao, Guo-Qiang
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (06) : 5895 - 5907
  • [23] Pyramid Fusion Transformer for Semantic Segmentation
    Qin, Zipeng
    Liu, Jianbo
    Zhang, Xiaolin
    Tian, Maoqing
    Zhou, Aojun
    Yi, Shuai
    Li, Hongsheng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 9630 - 9643
  • [24] Independent control of head and gaze movements during head-free pursuit in humans
    Collins, CJS
    Barnes, GR
    JOURNAL OF PHYSIOLOGY-LONDON, 1999, 515 (01): : 299 - 314
  • [25] Tunnel crack segmentation based on lightweight Transformer
    Kuang, Xianyan
    Xu, Yaoming
    Lei, Hui
    Cheng, Fujun
    Huan, Xianglan
    Journal of Railway Science and Engineering, 2024, 21 (08) : 3421 - 3433
  • [26] GAZE SHIFT DURING OPTOKINETIC STIMULATION IN HEAD-FREE CATS
    SCHWEIGART, G
    NEUROSCIENCE LETTERS, 1995, 183 (1-2) : 124 - 126
  • [27] Light4Mars: A lightweight transformer model for semantic segmentation on unstructured environment like Mars
    Xiong Y.
    Xiao X.
    Yao M.
    Cui H.
    Fu Y.
    ISPRS Journal of Photogrammetry and Remote Sensing, 2024, 214 : 167 - 178
  • [28] The Visual Input to the Retina during Natural Head-Free Fixation
    Aytekin, Murat
    Victor, Jonathan D.
    Rucci, Michele
    JOURNAL OF NEUROSCIENCE, 2014, 34 (38): : 12701 - 12715
  • [29] THE ROLE OF PREDICTION IN HEAD-FREE PURSUIT AND VESTIBULOOCULAR REFLEX SUPPRESSION
    BARNES, GR
    GREALY, MA
    ANNALS OF THE NEW YORK ACADEMY OF SCIENCES, 1992, 656 : 687 - 694
  • [30] Head-centric computing for vestibular stimulation under head-free conditions
    La Scaleia, Barbara
    Brunetti, Claudia
    Lacquaniti, Francesco
    Zago, Myrka
    FRONTIERS IN BIOENGINEERING AND BIOTECHNOLOGY, 2023, 11