3D Orientation and Object Classification from Partial Model Point Cloud based on PointNet

Cited by: 0
Authors
Tuan Anh Nguyen [1 ]
Lee, Sukhan [1 ]
Affiliation
[1] Sungkyunkwan Univ, Inst Elect & Comp Engn, Intelligent Syst Res, Suwon 2066, South Korea
Keywords
3D Orientation Estimation; 3D Object Recognition; PointNet; Deep Learning
DOI
Not available
CLC Classification Number
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
In this paper, we propose a deep network based on PointNet to estimate the orientations and predict the object classes of oriented 3D objects from their partial model point clouds. More specifically, our network exploits the advantages of PointNet to extract global features from two kinds of point clouds: 1) the partial model point cloud, which is part of a 3D object in an observed orientation, and 2) the full model point cloud of the 3D object in the reference orientation, which is used to specify orientations. We then associate the partial model global features with the corresponding reference global features through an association subnetwork, which takes the partial model global features as input and outputs a reconstruction of the corresponding reference features. We use this reconstructed global feature as an orientation-aligned global feature to infer the object class of the partial model point cloud. To predict the orientation of an object from its partial model point cloud, we concatenate the partial model global features with the reconstructed reference features and use this concatenation as the orientation feature for network learning against orientation targets. Using an orientation dataset of partial model point clouds built from ModelNet, our experiments show better object classification performance than the vanilla PointNet, as well as the robustness of the proposed network in orientation estimation.
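A minimal PyTorch sketch of the pipeline the abstract describes is given below. The module names (PointNetEncoder, OrientationClassNet), layer widths, and the unit-quaternion orientation output are assumptions made for illustration; the abstract only fixes the overall structure: a PointNet backbone producing global features, an association subnetwork reconstructing the reference-orientation features from the partial-model features, a classifier on the reconstruction, and an orientation head on the concatenation of both features.

import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    # Simplified PointNet backbone: shared per-point MLP followed by max
    # pooling, producing one global feature vector per cloud. (The input
    # transform nets of the original PointNet are omitted here.)
    def __init__(self, feat_dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1), nn.ReLU(),
        )

    def forward(self, pts):                            # pts: (B, 3, N)
        return torch.max(self.mlp(pts), dim=2).values  # (B, feat_dim)

class OrientationClassNet(nn.Module):
    def __init__(self, feat_dim=1024, num_classes=40):
        super().__init__()
        self.encoder = PointNetEncoder(feat_dim)
        # Association subnetwork: maps the partial-model global feature to a
        # reconstruction of the reference-orientation global feature.
        self.assoc = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim),
        )
        # Object classifier operating on the reconstructed (aligned) feature.
        self.cls_head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )
        # Orientation head on [partial feature || reconstructed feature];
        # a unit-quaternion parameterization is assumed here.
        self.ori_head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 4),
        )

    def forward(self, partial_pts):
        f_partial = self.encoder(partial_pts)  # partial global feature
        f_ref_hat = self.assoc(f_partial)      # reconstructed reference feature
        logits = self.cls_head(f_ref_hat)      # class prediction
        q = self.ori_head(torch.cat([f_partial, f_ref_hat], dim=1))
        q = q / q.norm(dim=1, keepdim=True)    # normalize to unit quaternion
        return logits, q, f_ref_hat

# Usage sketch: a batch of 2 partial clouds with 1024 points each.
net = OrientationClassNet(num_classes=40)
logits, quat, f_ref_hat = net(torch.randn(2, 3, 1024))

During training, a natural reconstruction target for f_ref_hat would be the encoder output of the full model point cloud in the reference orientation (e.g. an L2 loss), combined with cross-entropy for classification and a regression loss on the orientation targets; the loss weighting is not specified in the abstract.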
Pages: 192-197 (6 pages)