A point contextual transformer network for point cloud completion

Cited by: 1
Authors
Leng, Siyi [1 ,2 ,3 ]
Zhang, Zhenxin [1 ,2 ]
Zhang, Liqiang [4 ]
Affiliations
[1] Capital Normal Univ, Key Lab Informat Acquisit & Applicat 3D, MOE, Beijing 100048, Peoples R China
[2] Capital Normal Univ, Coll Resource Environm & Tourism, Beijing 100048, Peoples R China
[3] Xinjiang Normal Univ, Coll Geosci & Tourism, Urumqi 830054, Peoples R China
[4] Beijing Normal Univ, State Key Lab Remote Sensing Sci, Beijing 100875, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
Point cloud completion; Feature extraction; Point contextual transformer; Attention mechanism;
DOI
10.1016/j.eswa.2024.123672
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Point cloud completion is an essential task that recovers a complete point cloud from a partial observation to support downstream applications such as object detection and reconstruction. Existing point cloud completion networks primarily rely on large-scale datasets to learn the mapping between partial and complete shapes, and they often adopt a multi-stage strategy to progressively generate complete point clouds with finer details. However, these networks remain hampered by the underutilization of shape priors and by complex modelling frameworks. To address these issues, we propose a point contextual transformer (PCoT) for point cloud completion (PCoT-Net). The PCoT adaptively fuses static and dynamic point contextual information, enabling the effective capture of fine-grained local contextual features. We then propose a one-stage network with a feature completion module that directly generates credible and detailed complete point clouds. Furthermore, we incorporate lightweight External Attention (EA) into the feature completion module, which further improves the learning of complete features and the reconstruction of the complete point cloud. Extensive experiments on various datasets validate the effectiveness of our PCoT-based approach and the EA-enhanced feature completion module, which achieve superior quantitative performance in Chamfer Distance (CD) and F1-Score. Compared with PMP-Net++ (Wen et al., 2022), our method improves the F1-Score by 0.010, 0.022, and 0.026 and reduces the CD by 0.16, 0.95, and 1.74 on the MVP, CRN, and ScanNet datasets, respectively, while producing visually superior results with more fine-grained details and smoother reconstructed surfaces.
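The External Attention (EA) module mentioned in the abstract follows the general mechanism of Guo et al. (2021), which replaces costly point-to-point self-attention with attention against two small learnable external memories. The sketch below is only a minimal PyTorch illustration of that generic mechanism applied to per-point features; it is not the authors' implementation, and the class name, memory size, and feature dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalAttention(nn.Module):
    """Minimal sketch of External Attention (Guo et al., 2021).

    Attention is computed between the input features and two small learnable
    external memory units (M_k, M_v), so the cost grows linearly with the
    number of points instead of quadratically as in self-attention.
    """

    def __init__(self, d_model: int, memory_size: int = 64):
        super().__init__()
        self.mk = nn.Linear(d_model, memory_size, bias=False)  # key memory M_k
        self.mv = nn.Linear(memory_size, d_model, bias=False)  # value memory M_v

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, d_model) per-point features of a (partial) point cloud
        attn = self.mk(x)                                      # (B, N, S)
        attn = F.softmax(attn, dim=1)                          # normalize over the N points
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)  # double normalization over memory slots
        return self.mv(attn)                                   # (B, N, d_model)

# Toy usage: 2 point clouds, 2048 points each, 256-D features (shapes are assumptions).
feats = torch.randn(2, 2048, 256)
out = ExternalAttention(d_model=256)(feats)
print(out.shape)  # torch.Size([2, 2048, 256])
```

Because the attention is taken against a fixed-size memory rather than between all point pairs, the module stays lightweight as the number of points grows, which is consistent with the abstract's description of the EA-enhanced feature completion module.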
Pages: 13
Related papers
50 records in total
  • [41] FACNet: Feature alignment fast point cloud completion network
    Yu, Xinxing
    Li, Jianyi
    Wong, Chi-Chong
    Vong, Chi-Man
    Liang, Yanyan
    COMPUTATIONAL VISUAL MEDIA, 2025, 11 (01) : 141 - 157
  • [42] Dense Point Cloud Completion Based on Generative Adversarial Network
    Cheng, Ming
    Li, Guoyan
    Chen, Yiping
    Chen, Jun
    Wang, Cheng
    Li, Jonathan
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [43] SoftPool++: An Encoder–Decoder Network for Point Cloud Completion
    Wang, Yida
    Tan, David Joseph
    Navab, Nassir
    Tombari, Federico
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 : 1145 - 1164
  • [44] Multi-feature fusion point cloud completion network
    Chen, Xiu
    Li, Yujie
    Li, Yun
    WORLD WIDE WEB, 2022, 25 : 1551 - 1564
  • [45] Stage-Aware Interaction Network for Point Cloud Completion
    Wu, Hang
    Miao, Yubin
    ELECTRONICS, 2024, 13 (16)
  • [46] SAPCNet: symmetry-aware point cloud completion network
    Xue, Yazhang
    Wang, Guoqi
    Fan, Xin
    Yu, Long
    Tian, Shengwei
    Zhang, Huang
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (05)
  • [47] MLFT-Net: Point Cloud Completion Using MultiLevel Feature Transformer
    Du, Yueling
    Xie, Jin
    2022 6TH INTERNATIONAL SYMPOSIUM ON COMPUTER SCIENCE AND INTELLIGENT CONTROL, ISCSIC, 2022, : 159 - 165
  • [48] PCT: Point cloud transformer
    Guo, Meng-Hao
    Cai, Jun-Xiong
    Liu, Zheng-Ning
    Mu, Tai-Jiang
    Martin, Ralph R.
    Hu, Shi-Min
    COMPUTATIONAL VISUAL MEDIA, 2021, 7 (02) : 187 - 199