Efficient 3D Object Recognition from Cluttered Point Cloud

Cited by: 12
Authors
Li, Wei [1 ]
Cheng, Hongtai [2 ]
Zhang, Xiaohua [1 ]
Affiliations
[1] Dalian Univ Technol, Fac Elect Informat & Elect Engn, Dalian 116024, Peoples R China
[2] Northeastern Univ, Sch Mech Engn & Automat, Shenyang 110167, Peoples R China
Keywords
object recognition; point cloud; SAC-IA; RANSAC; RANDOMIZED RANSAC; HISTOGRAMS; FEATURES
DOI
10.3390/s21175850
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Classification Codes
070302; 081704;
Abstract
Recognizing 3D objects and estimating their poses in a complex scene is a challenging task. Sample Consensus Initial Alignment (SAC-IA) is a commonly used point-cloud-based method for this purpose; however, its efficiency is low, which prevents its use in real-time applications. This paper analyzes the most time-consuming parts of the SAC-IA algorithm: sample generation and sample evaluation. We propose two improvements to increase efficiency. First, in the initial alignment stage, instead of sampling key points, correspondence pairs between model and scene key points are generated in advance and drawn from in each iteration, which eliminates redundant correspondence-search operations. Second, a geometric filter is proposed to keep invalid samples out of the evaluation step, which is the most time-consuming operation because it requires transforming one point cloud and computing its distance to the other. The geometric filter significantly increases sample quality and reduces the number of samples required. Experiments are performed on our own datasets captured with a Kinect v2 camera and on the Bologna 1 dataset. The results show that the proposed method increases the efficiency of the original SAC-IA method by 10-30x without sacrificing accuracy.
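The paper's implementation is not reproduced here, but the two ideas named in the abstract (drawing from precomputed correspondences, and a geometric consistency check that rejects a sample before the costly transform-and-score step) can be sketched as follows. This is a minimal illustration under stated assumptions: the function names, the relative distance tolerance, and the 3-point Kabsch residual used as a stand-in for full-cloud evaluation are all hypothetical, not the authors' code. The filter relies on the fact that a rigid transform preserves pairwise distances, so a 3-point sample whose model-side distances disagree with its scene-side distances cannot yield a valid alignment.

```python
import numpy as np

def geometric_filter(model_pts, scene_pts, tol=0.05):
    """Reject a 3-point sample if pairwise distances between the model
    points disagree with those of their scene correspondences. Rigid
    transforms preserve distances, so such samples need no evaluation."""
    for i in range(3):
        for j in range(i + 1, 3):
            dm = np.linalg.norm(model_pts[i] - model_pts[j])
            ds = np.linalg.norm(scene_pts[i] - scene_pts[j])
            if abs(dm - ds) > tol * max(dm, ds):
                return False
    return True

def align_error(m, s):
    """Fit a rigid transform (Kabsch) mapping model points m to scene
    points s and return the mean residual distance. A stand-in for the
    full transform-and-score step of SAC-IA, which scores whole clouds."""
    mc, sc = m - m.mean(0), s - s.mean(0)
    U, _, Vt = np.linalg.svd(mc.T @ sc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # reflection-corrected rotation
    t = s.mean(0) - R @ m.mean(0)
    return np.linalg.norm(m @ R.T + t - s, axis=1).mean()

def sac_ia_fast(correspondences, model_kp, scene_kp, n_iter=1000, rng=None):
    """Sketch of the modified SAC-IA loop: correspondences are computed
    once up front, and each random 3-sample must pass the geometric
    filter before the expensive evaluation step runs."""
    rng = rng or np.random.default_rng(0)
    best_score, best_sample = np.inf, None
    for _ in range(n_iter):
        idx = rng.choice(len(correspondences), size=3, replace=False)
        m = np.array([model_kp[correspondences[i][0]] for i in idx])
        s = np.array([scene_kp[correspondences[i][1]] for i in idx])
        if not geometric_filter(m, s):
            continue  # skip evaluation of geometrically invalid samples
        score = align_error(m, s)
        if score < best_score:
            best_score, best_sample = score, idx
    return best_sample, best_score
```

The efficiency gain comes from the ordering: the filter costs three distance comparisons, while evaluation requires an SVD-based fit (and, in the real algorithm, transforming and scoring an entire cloud), so every rejected sample saves the dominant cost.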
Pages: 16
Related Papers
50 records
  • [1] 3D Small-Scale Object Recognition Network in Cluttered Point Cloud Scenes
    Sun, Zhengmao
    Sun, Junhua
    Zhang, Jie
    [J]. AOPC 2021: INFRARED DEVICE AND INFRARED TECHNOLOGY, 2021, 12061
  • [2] Training-based Object Recognition in Cluttered 3D Point Clouds
    Pang, Guan
    Neumann, Ulrich
    [J]. 2013 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2013), 2013, : 87 - 94
  • [3] Using spin images for efficient object recognition in cluttered 3D scenes
    Johnson, AE
    Hebert, M
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 1999, 21 (05) : 433 - 449
  • [4] An Improved Local Descriptor based Object Recognition in Cluttered 3D Point Clouds
    Liu, X.
    Lu, Y.
    Wu, T.
    Yuan, T.
    [J]. INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL, 2018, 13 (02) : 221 - 234
  • [5] Object Segmentation and Recognition in 3D Point Cloud with Language Model
    Yang Yi
    Yan Guang
    Zhu Hao
    Fu Meng-yin
    Wang Mei-ling
    [J]. PROCEEDINGS OF 2014 INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INFORMATION INTEGRATION FOR INTELLIGENT SYSTEMS (MFI), 2014,
  • [6] A Survey of Adversarial Attacks on 3D Point Cloud Object Recognition
    Liu, Weiquan
    Zheng, Shijun
    Guo, Yu
    Wang, Cheng
    [J]. Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2024, 46 (05): : 1645 - 1657
  • [7] 3D Object Recognition Based on Improved Point Cloud Descriptors
    Wen, Weiwei
    Wen, Gongjian
    Hui, Bingwei
    Qiu, Shaohua
    [J]. TENTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2018), 2018, 10806
  • [8] Object Recognition in 3D Point Cloud of Urban Street Scene
    Babahajiani, Pouria
    Fan, Lixin
    Gabbouj, Moncef
    [J]. COMPUTER VISION - ACCV 2014 WORKSHOPS, PT I, 2015, 9008 : 177 - 190
  • [9] 3-D OBJECT RECOGNITION FROM POINT CLOUD DATA
    Smith, W.
    Walker, A. S.
    Zhang, B.
    [J]. ISPRS HANNOVER WORKSHOP 2011: HIGH-RESOLUTION EARTH IMAGING FOR GEOSPATIAL INFORMATION, 2011, 39-4 (W19): : 353 - 358
  • [10] Efficient 3D object recognition using foveated point clouds
    Gomes, Rafael Beserra
    Ferreira da Silva, Bruno Marques
    de Medeiros Rocha, Lourena Karin
    Aroca, Rafael Vidal
    Pacheco Rodrigues Velho, Luiz Carlos
    Garcia Goncalves, Luiz Marcos
    [J]. COMPUTERS & GRAPHICS-UK, 2013, 37 (05): : 496 - 508