TreeNet: Structure preserving multi-class 3D point cloud completion

Cited by: 2
Authors
Xi, Long [1]
Tang, Wen [1]
Wan, Tao Ruan [2]
Affiliations
[1] Bournemouth Univ, Dept Creat Technol, Poole BH12 5BB, England
[2] Univ Bradford, Dept Informat, Bradford BD7 1DP, England
Keywords
3D point cloud completion; Multi-class training; Hierarchical tree; Computer vision; Artificial intelligence; Deep learning
DOI
10.1016/j.patcog.2023.109476
Chinese Library Classification
TP18 (Theory of artificial intelligence)
Discipline codes
081104; 0812; 0835; 1405
Abstract
Generating the missing data of 3D object point clouds from partial observations is a challenging task. Existing state-of-the-art learning-based 3D point cloud completion methods tend to use a limited number of categories/classes of training data and regenerate the entire point cloud based on the training datasets. As a result, output 3D point clouds generated by such methods may lose details (i.e., sharp edges and topology changes) due to the lack of multi-class training. These methods also lose the structural and spatial details of partial inputs because their models do not separate the reconstructed partial input from the missing points in the output. In this paper, we propose a novel deep learning network, TreeNet, for 3D point cloud completion. TreeNet has two networks in hierarchical tree-based structures: TreeNet-multiclass focuses on multi-class training, with a specific class of the completion task on each sub-tree, to improve the quality of the point cloud output; TreeNet-binary focuses on generating points in missing areas while fully preserving the original partial input. TreeNet-multiclass and TreeNet-binary are both network decoders and can be trained independently. The TreeNet decoder is the combination of TreeNet-multiclass and TreeNet-binary and is trained with an encoder from existing methods (i.e., the PointNet encoder). We compare the proposed TreeNet with five state-of-the-art learning-based methods on fifty classes of the public ShapeNet dataset and on unknown classes, which shows that TreeNet provides a significant improvement in overall quality and exhibits strong generalization to unknown classes that were not trained on. (c) 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
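The abstract's key structural idea — an encoder producing a global feature, a decoder that generates only the missing-region points, and an output that passes the partial input through untouched — can be illustrated with a minimal toy sketch. This is not the paper's implementation: all function names and the placeholder encoder/decoder logic below are invented for illustration, standing in for the PointNet encoder and the TreeNet-binary decoder.

```python
def encode(partial_points):
    """Stand-in for a PointNet-style encoder: a permutation-invariant
    global feature (here, a simple coordinate-wise max pool)."""
    return tuple(max(p[i] for p in partial_points) for i in range(3))

def decode_missing(feature, n_missing):
    """Stand-in for the TreeNet-binary decoder: generates points for the
    missing region from the global feature (toy placeholder logic)."""
    return [tuple(c + 0.1 * k for c in feature) for k in range(1, n_missing + 1)]

def complete(partial_points, n_missing):
    """Assemble the completion: generated points are appended to the
    partial input, so every observed point survives verbatim."""
    feature = encode(partial_points)
    generated = decode_missing(feature, n_missing)
    return list(partial_points) + generated

partial = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.2)]
output = complete(partial, n_missing=3)
assert output[:len(partial)] == partial   # structure preservation
assert len(output) == len(partial) + 3
```

The point of the sketch is the assembly step: because the decoder emits only missing points and the partial input is concatenated unchanged, the structural and spatial details of the observation cannot be degraded by the generator, which is the property the abstract contrasts against methods that regenerate the entire cloud.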
Pages: 13