Towards robustness and generalization of point cloud representation: A geometry coding method and a large-scale object-level dataset

Cited: 0
Authors
Xu, Mingye [1 ,2 ]
Zhou, Zhipeng [5 ]
Wang, Yali [1 ,4 ]
Qiao, Yu [1 ,3 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Guangdong Hong Kong Macao Joint Lab Human Machine, Shenzhen 518000, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Shanghai AI Lab, Shanghai 200001, Peoples R China
[4] Shenzhen Inst Artificial Intelligence & Robot Soc, SIAT Branch, Shenzhen 518000, Peoples R China
[5] Alibaba DAMO Acad, Hangzhou 242332, Peoples R China
Source
COMPUTATIONAL VISUAL MEDIA | 2024, Vol. 10, No. 01
Funding
National Natural Science Foundation of China;
Keywords
geometry coding; self-supervised learning; point cloud; classification; segmentation; 3D analysis; SEGMENTATION;
D O I
10.1007/s41095-022-0305-5
Chinese Library Classification
TP31 [Computer Software];
Subject Classification Codes
081202 ; 0835 ;
Abstract
Robustness and generalization are two challenging problems for learning point cloud representation. To tackle these problems, we first design a novel geometry coding model, which can effectively use an invariant eigengraph to group points with similar geometric information, even when such points are far from each other. We also introduce a large-scale point cloud dataset, PCNet184. It consists of 184 categories and 51,915 synthetic objects, which brings new challenges for point cloud classification, and provides a new benchmark to assess point cloud cross-domain generalization. Finally, we perform extensive experiments on point cloud classification, using ModelNet40, ScanObjectNN, and our PCNet184, and segmentation, using ShapeNetPart and S3DIS. Our method achieves comparable performance to state-of-the-art methods on these datasets, for both supervised and unsupervised learning. Code and our dataset are available at https://github.com/MingyeXu/PCNet184.
Pages: 27 - 43 (17 pages)
Related Papers
38 items in total
  • [31] Pointsoup: High-Performance and Extremely Low-Decoding-Latency Learned Geometry Codec for Large-Scale Point Cloud Scenes
    You, Kang
    Liu, Kai
    Li Yu
    Gao, Pan
    Ding, Dandan
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 5380 - 5388
  • [32] MDU-sampling: Multi-domain uniform sampling method for large-scale outdoor LiDAR point cloud registration
    Ou, Wengjun
    Zheng, Mingkui
    Zheng, Haifeng
    ELECTRONICS LETTERS, 2024, 60 (05)
  • [33] PCGOR: A Novel Plane Constraints-Based Guaranteed Outlier Removal Method for Large-Scale LiDAR Point Cloud Registration
    Ma, Gang
    Wei, Hui
    Lin, Runfeng
    Wu, Jialiang
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [35] Y-Net: Learning Domain Robust Feature Representation for ground camera image and large-scale image-based point cloud registration
    Liu, Weiquan
    Wang, Cheng
    Chen, Shuting
    Bian, Xuesheng
    Lai, Baiqi
    Shen, Xuelun
    Cheng, Ming
    Lai, Shang-Hong
    Weng, Dongdong
    Li, Jonathan
    INFORMATION SCIENCES, 2021, 581 : 655 - 677
  • [36] LCL_FDA: Local context learning and full-level decoder aggregation network for large-scale point cloud semantic segmentation
    Li, Yong
    Ye, Zhenqin
    Huang, Xingwen
    Heli, Yubin
    Shuang, Feng
    NEUROCOMPUTING, 2025, 621
  • [37] HouseCat6D - A Large-Scale Multi-Modal Category Level 6D Object Perception Dataset with Household Objects in Realistic Scenarios
    FAU Erlangen-Nürnberg, Germany
    PROCEEDINGS OF THE IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION : 22498 - 22508
  • [38] TopSPR-Net: Topology Aware Segment-Level Point Cloud Learning Descriptors for Three-Dimensional Place Recognition in Large-Scale Environments
    Kong, Dong
    Li, Xu
    Ni, Peizhou
    Hu, Yue
    Hu, Jinchao
    Hu, Weiming
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2024, 71 (10) : 13406 - 13416