ScatterFormer: Efficient Voxel Transformer with Scattered Linear Attention

Times Cited: 0
Authors
He, Chenhang [1]
Li, Ruihuang [1,2]
Zhang, Guowen [1]
Zhang, Lei [1,2]
Affiliations
[1] Hong Kong Polytech Univ, Hong Kong, Peoples R China
[2] OPPO Res, Shenzhen, Peoples R China
Source
Keywords
3D Object Detection; Voxel Transformer;
DOI
10.1007/978-3-031-73397-0_5
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Window-based transformers excel in large-scale point cloud understanding by capturing context-aware representations with affordable attention computation in a more localized manner. However, the sparse nature of point clouds leads to significant variance in the number of voxels per window. Existing methods group the voxels in each window into fixed-length sequences through extensive sorting and padding operations, resulting in non-negligible computational and memory overhead. In this paper, we introduce ScatterFormer, which, to the best of our knowledge, is the first to directly apply attention to voxels across different windows as a single sequence. The key to ScatterFormer is a Scattered Linear Attention (SLA) module, which leverages the pre-computation of key-value pairs in linear attention to enable parallel computation on the variable-length voxel sequences divided by windows. Leveraging the hierarchical structure of GPUs and their shared memory, we propose a chunk-wise algorithm that reduces the SLA module's latency to less than 1 millisecond on moderate GPUs. Furthermore, we develop a cross-window interaction module that improves the locality and connectivity of voxel features across different windows, eliminating the need for extensive window shifting. Our proposed ScatterFormer achieves 73.8 mAP (L2) on the Waymo Open Dataset and 72.4 NDS on the NuScenes dataset, running at an outstanding detection rate of 23 FPS. The code is available at https://github.com/skyhehe123/ScatterFormer.
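
The abstract's description of SLA translates naturally into a padding-free formulation: in linear attention, each window only needs its summed key-value statistics, which can be accumulated over a flat voxel sequence with scatter-adds. Below is a minimal sketch of this idea in plain PyTorch. The function and variable names (scattered_linear_attention, window_ids, feature_map) are ours for illustration, the ELU+1 feature map is an assumption, and the paper's chunk-wise shared-memory CUDA kernel is not reproduced here.

# A minimal sketch of scattered linear attention over variable-length windows.
# Hypothetical names for illustration; not the paper's CUDA implementation.
import torch
import torch.nn.functional as F

def feature_map(x):
    # Positive feature map commonly used in linear attention (assumption: ELU + 1).
    return F.elu(x) + 1.0

def scattered_linear_attention(q, k, v, window_ids, num_windows):
    """
    q, k, v:     (N, d) voxel queries/keys/values flattened across all windows.
    window_ids:  (N,) long tensor mapping each voxel to its window.
    Computes linear attention independently inside each window without padding:
    per-window key-value statistics are accumulated with scatter-adds, then
    gathered back to every voxel.
    """
    q, k = feature_map(q), feature_map(k)
    d = q.shape[-1]

    # Per-window sum of outer products  S_w = sum_j k_j v_j^T   -> (W, d, d)
    kv = torch.einsum("nd,ne->nde", k, v)
    S = torch.zeros(num_windows, d, d, dtype=kv.dtype, device=kv.device)
    S.index_add_(0, window_ids, kv)

    # Per-window key sums  z_w = sum_j k_j                      -> (W, d)
    z = torch.zeros(num_windows, d, dtype=k.dtype, device=k.device)
    z.index_add_(0, window_ids, k)

    # Gather each voxel's window statistics and apply its query.
    S_i = S[window_ids]                       # (N, d, d)
    z_i = z[window_ids]                       # (N, d)
    num = torch.einsum("nd,nde->ne", q, S_i)  # q_i^T S_w
    den = (q * z_i).sum(-1, keepdim=True).clamp_min(1e-6)
    return num / den

# Example: 6 voxels spread over 3 windows of sizes 3, 1, 2 (no padding needed).
# q = k = v = torch.randn(6, 32); ids = torch.tensor([0, 0, 0, 1, 2, 2])
# out = scattered_linear_attention(q, k, v, ids, num_windows=3)

Because every window is reduced to fixed-size statistics of shape (d, d) and (d,), windows of very different sizes are handled by the same two scatter-adds, which is what removes the sorting and padding overhead mentioned in the abstract.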
Pages: 74-92
Number of pages: 19