6D pose estimation and unordered picking of stacked cluttered objects

Cited: 0
Authors
Zhai J. [1 ]
Huang L. [1 ]
Affiliations
[1] School of Mechanical & Automotive Engineering, South China University of Technology, Guangzhou
Keywords
3D vision; Object recognition; Pose estimation; Robot; Stacked cluttered objects; Unordered picking
DOI
10.11918/202110081
Abstract
Aiming at the problem of robotic picking in scenes of stacked, cluttered objects, an unordered picking system covering target screening, recognition, and 6D pose estimation was established. The Locally Convex Connected Patches (LCCP) method was used to segment the scene point cloud captured by a Kinect V2 camera into separate object subsets, and the uppermost unoccluded object was selected as the grasping target by defining a grasp score, so that the robot could always grasp objects from top to bottom. According to the picking requirements of different kinds of objects, 3D targets were recognized and grasping points were located based on a matching similarity function. A 6D pose estimation model was established by combining the TEASER (Truncated least squares Estimation And SEmidefinite Relaxation) algorithm with the ICP (Iterative Closest Point) algorithm, ensuring accurate registration between the target point cloud and the model point cloud under a low overlap rate. Experiments on 6D pose estimation and robotic unordered picking were carried out on self-collected data. The results show that, compared with several popular methods, the proposed method obtains the 6D pose of the target more quickly and accurately: the root mean square distance error is less than 3.3 mm and the root mean square angle error is less than 5.6°. The visual processing time is far shorter than the motion time of the robot arm, and the whole real-time grasping process was accomplished in an actual scene. Copyright ©2022 Journal of Harbin Institute of Technology. All rights reserved.
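For illustration, a minimal coarse-to-fine registration sketch in the spirit of the TEASER + ICP pipeline summarized above is given below. It assumes the teaserpp-python bindings and Open3D are available; the function name teaser_icp_pose, the correspondence inputs src_corr/tgt_corr (3xN arrays of putative matches, e.g. from feature matching), and all parameter values are illustrative assumptions, not the authors' implementation.

# Coarse-to-fine 6D pose estimation sketch: TEASER for a robust coarse
# alignment, then ICP refinement. All names and parameters are illustrative.
import numpy as np
import open3d as o3d
import teaserpp_python

def teaser_icp_pose(src_corr, tgt_corr, src_cloud, tgt_cloud,
                    noise_bound=0.005, icp_dist=0.01):
    # --- Coarse step: TEASER (truncated least squares + semidefinite relaxation)
    # operating on putative correspondences (3xN numpy arrays).
    params = teaserpp_python.RobustRegistrationSolver.Params()
    params.noise_bound = noise_bound        # expected inlier noise (metres)
    params.cbar2 = 1.0
    params.estimate_scaling = False
    params.rotation_estimation_algorithm = \
        teaserpp_python.RobustRegistrationSolver.ROTATION_ESTIMATION_ALGORITHM.GNC_TLS
    params.rotation_gnc_factor = 1.4
    params.rotation_max_iterations = 100
    params.rotation_cost_threshold = 1e-12

    solver = teaserpp_python.RobustRegistrationSolver(params)
    solver.solve(src_corr, tgt_corr)
    sol = solver.getSolution()

    T_coarse = np.eye(4)                    # assemble 4x4 coarse transform
    T_coarse[:3, :3] = sol.rotation
    T_coarse[:3, 3] = sol.translation

    # --- Fine step: point-to-point ICP refinement on the full clouds,
    # initialized with the coarse TEASER estimate.
    result = o3d.pipelines.registration.registration_icp(
        src_cloud, tgt_cloud, icp_dist, T_coarse,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation            # 4x4 pose of source in target frame

The coarse TEASER estimate is robust to a large fraction of outlier correspondences, which is what allows the subsequent ICP refinement to converge even when the target and model point clouds overlap only partially.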
Pages: 136-142
Number of pages: 6