Learning Cross-Attention Point Transformer With Global Porous Sampling

Cited by: 0
Authors
Duan, Yueqi [1 ]
Sun, Haowen [2 ]
Yan, Juncheng [2 ]
Lu, Jiwen [2 ]
Zhou, Jie [2 ]
Affiliations
[1] Tsinghua University, Department of Electronic Engineering, Beijing 100084, China
[2] Tsinghua University, Department of Automation, Beijing, China
Funding
National Natural Science Foundation of China
Keywords
Point cloud compression; Transformers; Global Positioning System; Convolution; Three-dimensional displays; Geometry; Feature extraction; Training data; Sun; Shape; Point cloud; 3D deep learning; transformer; cross-attention; NETWORK;
DOI
10.1109/TIP.2024.3486612
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose a point-based cross-attention transformer named CrossPoints with a parametric Global Porous Sampling (GPS) strategy. The attention module is crucial for capturing the correlations between different tokens in transformers. Most existing point-based transformers design multi-scale self-attention operations over point clouds down-sampled by the widely used Farthest Point Sampling (FPS) strategy. However, FPS only generates sub-clouds with holistic structures and fails to fully exploit the flexibility of points to generate diversified tokens for the attention module. To address this, we design a cross-attention module with parametric GPS and Complementary GPS (C-GPS) strategies that generates a series of diversified tokens through controllable parameters. We show that FPS is a degenerate case of GPS, and that the network learns richer relational information about structure and geometry when we perform consecutive cross-attention over the tokens generated from GPS- and C-GPS-sampled points. More specifically, we set evenly sampled points as queries and design our cross-attention layers with GPS- and C-GPS-sampled points as keys and values. To further improve the diversity of tokens, we design a deformable operation that adaptively adjusts the points according to the input. Extensive experimental results on both shape classification and indoor scene segmentation tasks show promising gains over recent point cloud transformers. We also conduct ablation studies that demonstrate the effectiveness of our proposed cross-attention module with the GPS strategy.
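To make the structure described in the abstract concrete, below is a minimal PyTorch sketch, not the authors' code. Everything in it is an assumption for illustration: porous_sample stands in for the parametric GPS strategy (here reduced to a plain strided subsample, with the stride playing the role of the controllable parameter), its leftover points stand in for C-GPS, and PointCrossAttention only mirrors the stated layout of evenly sampled queries cross-attending consecutively to the two complementary token sets. The paper's actual sampling rules and its deformable point adjustment are not reproduced here.

import torch
import torch.nn as nn

def porous_sample(tokens, stride):
    # Illustrative "porous" subsampling: keep every `stride`-th token and
    # return both the kept set (stand-in for GPS) and its complement
    # (stand-in for C-GPS). The paper's GPS is parametric; this is not it.
    n = tokens.shape[1]
    keep = torch.arange(0, n, stride, device=tokens.device)
    mask = torch.ones(n, dtype=torch.bool, device=tokens.device)
    mask[keep] = False
    return tokens[:, keep], tokens[:, mask]

class PointCrossAttention(nn.Module):
    # Evenly sampled queries attend first to the GPS-like token set, then
    # to its complement: the consecutive cross-attention the abstract names.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, queries, tokens, stride=2):
        kv_a, kv_b = porous_sample(tokens, stride)
        out, _ = self.attn_a(queries, kv_a, kv_a)  # keys/values: sampled set
        out, _ = self.attn_b(out, kv_b, kv_b)      # keys/values: complement
        return out

# Toy usage: 8 evenly sampled query tokens over a 64-token feature cloud.
tokens = torch.randn(1, 64, 32)      # (batch, points, feature dim)
queries = tokens[:, ::8]             # evenly sampled queries, as in the abstract
layer = PointCrossAttention(dim=32)
print(layer(queries, tokens).shape)  # torch.Size([1, 8, 32])

The stride here is only a placeholder for the paper's controllable sampling parameters; the point of the sketch is the two-stage query/key-value wiring, in which the sampled tokens and their complement give the queries two structurally different sets of attention targets rather than one holistic sub-cloud.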
Pages: 6283-6297
Number of pages: 15
Related Papers
50 records in total
  • [21] DeepCoVDR: deep transfer learning with graph transformer and cross-attention for predicting COVID-19 drug response
    Huang, Zhijian
    Zhang, Pan
    Deng, Lei
    BIOINFORMATICS, 2023, 39 : i475 - i483
  • [22] Image-text multimodal classification via cross-attention contextual transformer with modality-collaborative learning
    Shi, Qianyao
    Xu, Wanru
    Miao, Zhenjiang
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (04)
  • [23] An efficient object tracking based on multi-head cross-attention transformer
    Dai, Jiahai
    Li, Huimin
    Jiang, Shan
    Yang, Hongwei
    EXPERT SYSTEMS, 2025, 42 (02)
  • [24] A Novel Transformer Network With Shifted Window Cross-Attention for Spatiotemporal Weather Forecasting
    Bojesomo, Alabi
    Almarzouqi, Hasan
    Liatsis, Panos
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17 : 45 - 55
  • [26] Cross-attention Based Text-image Transformer for Visual Question Answering
    Rezapour, M.
    RECENT ADVANCES IN COMPUTER SCIENCE AND COMMUNICATIONS, 2024, 17 (04) : 72 - 78
  • [27] Dual Cross-Attention Transformer Networks for Temporal Predictive Modeling of Industrial Process
    Wang, Jie
    Xie, Yongfang
    Xie, Shiwen
    Chen, Xiaofang
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73 : 1 - 11
  • [28] RGB-Sonar Tracking Benchmark and Spatial Cross-Attention Transformer Tracker
    Li, Yunfeng
    Wang, Bo
    Sun, Jiuran
    Wu, Xueyi
    Li, Ye
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (03) : 2260 - 2275
  • [29] Unsupervised Cross-Domain Rumor Detection with Contrastive Learning and Cross-Attention
    Ran, Hongyan
    Jia, Caiyan
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11, 2023, : 13510 - 13518
  • [30] PCAM: Product of Cross-Attention Matrices for Rigid Registration of Point Clouds
    Cao, Anh-Quan
    Puy, Gilles
    Boulch, Alexandre
    Marlet, Renaud
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 13209 - 13218