FlatFormer: Flattened Window Attention for Efficient Point Cloud Transformer

Cited by: 28
Authors
Liu, Zhijian [1 ]
Yang, Xinyu [1 ,2 ]
Tang, Haotian [1 ]
Yang, Shang [1 ,3 ]
Han, Song [1 ]
Affiliations
[1] MIT, Cambridge, MA 02139 USA
[2] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[3] Tsinghua Univ, Beijing, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
VISION;
DOI
10.1109/CVPR52729.2023.00122
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Transformer, as an alternative to the CNN, has proven effective in many modalities (e.g., text and images). For 3D point cloud transformers, existing efforts focus primarily on pushing their accuracy to the state-of-the-art level. However, their latency lags behind sparse convolution-based models (3x slower), hindering their use in resource-constrained, latency-sensitive applications (such as autonomous driving). This inefficiency comes from point clouds' sparse and irregular nature, whereas transformers are designed for dense, regular workloads. This paper presents FlatFormer to close this latency gap by trading spatial proximity for better computational regularity. We first flatten the point cloud with window-based sorting and partition points into groups of equal size rather than windows of equal shape. This effectively avoids expensive structuring and padding overheads. We then apply self-attention within groups to extract local features, alternate the sorting axis to gather features from different directions, and shift windows to exchange features across groups. FlatFormer delivers state-of-the-art accuracy on the Waymo Open Dataset with a 4.6x speedup over (transformer-based) SST and a 1.4x speedup over (sparse convolutional) CenterPoint. This is the first point cloud transformer that achieves real-time performance on edge GPUs and is faster than sparse convolutional methods while achieving on-par or even superior accuracy on large-scale benchmarks.
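The flatten-and-group step described in the abstract (window-based sorting followed by partitioning into equal-size groups) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, 2D coordinates, and the tail-padding rule (repeating the last point) are all assumptions made for the sketch.

```python
# Hedged sketch of FlatFormer's flattening step (illustrative, not the paper's code):
# 1) assign each point to a window, 2) sort points by (window id, in-window position),
# 3) partition the sorted sequence into groups of equal size.

def flatten_and_group(points, window_size, group_size, axis_order=(0, 1)):
    """points: list of (x, y) coordinates. Returns a list of equal-size groups
    of point indices; the last group is padded by repeating its final index
    (an assumption -- the padding scheme is not specified in the abstract).
    Swapping axis_order alternates the sorting axis between blocks."""
    a, b = axis_order
    def key(i):
        p = points[i]
        wa, wb = int(p[a] // window_size), int(p[b] // window_size)
        return (wa, wb, p[a], p[b])  # window id first, then position inside the window
    order = sorted(range(len(points)), key=key)
    groups = [order[i:i + group_size] for i in range(0, len(order), group_size)]
    if groups and len(groups[-1]) < group_size:
        groups[-1] += [groups[-1][-1]] * (group_size - len(groups[-1]))  # pad tail group
    return groups
```

Because every group has the same number of points, self-attention within groups becomes a dense, regular batched operation, which is the computational regularity the abstract trades spatial proximity for.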
Pages: 1200-1211
Page count: 12
Related papers
50 items in total (showing 41-50)
  • [41] Point attention network for point cloud semantic segmentation
    Dayong Ren
    Zhengyi Wu
    Jiawei Li
    Piaopiao Yu
    Jie Guo
    Mingqiang Wei
    Yanwen Guo
    Science China Information Sciences, 2022, 65
  • [42] Point cloud downsampling based on the transformer features
    Dehghanpour, Alireza
    Sharifi, Zahra
    Dehyadegari, Masoud
    VISUAL COMPUTER, 2025, 41 (04): : 2629 - 2638
  • [43] Latent diffusion transformer for point cloud generation
    Ji, Junzhong
    Zhao, Runfeng
    Lei, Minglong
    VISUAL COMPUTER, 2024, 40 (06): : 3903 - 3917
  • [44] Transformer-Based Point Cloud Classification
    Wu, Xianfeng
    Liu, Xinyi
    Wang, Junfei
    Wu, Xianzu
    Lai, Zhongyuan
    Zhou, Jing
    Liu, Xia
    ARTIFICIAL INTELLIGENCE AND ROBOTICS, ISAIR 2022, PT I, 2022, 1700 : 218 - 225
  • [45] FSwin Transformer: Feature-Space Window Attention Vision Transformer for Image Classification
    Yoo, Dayeon
    Kim, Jeesu
    Yoo, Jinwoo
    IEEE ACCESS, 2024, 12 : 72598 - 72606
  • [46] Local Window Attention Transformer for Polarimetric SAR Image Classification
    Jamali, Ali
    Roy, Swalpa Kumar
    Bhattacharya, Avik
    Ghamisi, Pedram
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2023, 20
  • [47] DcTr: Noise-robust point cloud completion by dual-channel transformer with cross-attention
    Fei, Ben
    Yang, Weidong
    Ma, Lipeng
    Chen, Wen-Ming
    PATTERN RECOGNITION, 2023, 133
  • [48] SAT3D: Slot Attention Transformer for 3D Point Cloud Semantic Segmentation
    Ibrahim, Muhammad
    Akhtar, Naveed
    Anwar, Saeed
    Mian, Ajmal
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (05) : 5456 - 5466
  • [49] PCMG:3D point cloud human motion generation based on self-attention and transformer
    Ma, Weizhao
    Yin, Mengxiao
    Li, Guiqing
    Yang, Feng
    Chang, Kan
    VISUAL COMPUTER, 2024, 40 (05): : 3765 - 3780