Grasp Pose Detection in Point Clouds

Cited by: 350
Authors:
ten Pas, Andreas [1 ]
Gualtieri, Marcus [1 ]
Saenko, Kate [2 ]
Platt, Robert [1 ]
Affiliations:
[1] Northeastern Univ, 360 Huntington Ave, Boston, MA 02115 USA
[2] Boston Univ, Boston, MA 02215 USA
Funding: US National Science Foundation
Keywords: grasping; manipulation; perception; grasp detection
DOI: 10.1177/0278364917735594
CLC Classification: TP24 [Robotics]
Discipline Codes: 080202; 1405
Abstract:
Recently, a number of grasp detection methods have been proposed that can be used to localize robotic grasp configurations directly from sensor data without estimating object pose. The underlying idea is to treat grasp perception analogously to object detection in computer vision. These methods take as input a noisy and partially occluded RGBD image or point cloud and produce as output pose estimates of viable grasps, without assuming a known CAD model of the object. Although these methods generalize grasp knowledge to new objects well, they have not yet been demonstrated to be reliable enough for wide use. Many grasp detection methods achieve grasp success rates (grasp successes as a fraction of the total number of grasp attempts) between 75% and 95% for novel objects presented in isolation or in light clutter. Not only are these success rates too low for practical grasping applications, but the light clutter scenarios that are evaluated often do not reflect the realities of real-world grasping. This paper proposes a number of innovations that together result in an improvement in grasp detection performance. The specific improvement in performance due to each of our contributions is quantitatively measured either in simulation or on robotic hardware. Ultimately, we report a series of robotic experiments that average a 93% end-to-end grasp success rate for novel objects presented in dense clutter.
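The success-rate metric quoted in the abstract is simply successful grasps divided by total attempts. A minimal sketch of that arithmetic (the trial counts below are hypothetical, chosen to match the reported 93% figure):

```python
def grasp_success_rate(successes: int, attempts: int) -> float:
    """Grasp success rate: successful grasps as a fraction of all attempts."""
    if attempts == 0:
        raise ValueError("no grasp attempts recorded")
    return successes / attempts

# Hypothetical run: 93 successful grasps out of 100 attempts in dense clutter
rate = grasp_success_rate(93, 100)
print(f"{rate:.0%}")  # → 93%
```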
Pages: 1455-1473 (19 pages)
Related Papers (50 total):
  • [31] Lu, Yuhao; Deng, Beixing; Wang, Zhenyu; Zhi, Peiyuan; Li, Yali; Wang, Shengjin. Hybrid Physical Metric for 6-DoF Grasp Pose Detection. 2022 IEEE International Conference on Robotics and Automation (ICRA 2022), 2022: 8238-8244.
  • [32] Liu, Xiaofeng; Huang, Congyu; Li, Jie; Wan, Weiwei; Yang, Chenguang. Two-Stage Grasp Detection Method for Robotics Using Point Clouds and Deep Hierarchical Feature Learning Network. IEEE Transactions on Cognitive and Developmental Systems, 2024, 16(2): 720-731.
  • [33] Sun, Jingkang; Zhang, Keqin; Yang, Genke; Chu, Jian. A Model-Free 6-DOF Grasp Detection Method Based on Point Clouds of Local Sphere Area. Advanced Robotics, 2023, 37(11): 679-690.
  • [34] Guo, Yulan; Bennamoun, Mohammed; Sohel, Ferdous; Lu, Min; Wan, Jianwei. An Integrated Framework for 3-D Modeling, Object Detection, and Pose Estimation from Point-Clouds. IEEE Transactions on Instrumentation and Measurement, 2015, 64(3): 683-693.
  • [35] Ruan, Jian; Liu, Houde; Xue, Anshun; Wang, Xueqian; Liang, Bin. Grasp Quality Evaluation Network for Surface-to-Surface Contacts in Point Clouds. 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), 2020: 1467-1472.
  • [36] Kanoulas, Dimitrios; Lee, Jinoh; Caldwell, Darwin G.; Tsagarakis, Nikos G. Visual Grasp Affordance Localization in Point Clouds Using Curved Contact Patches. International Journal of Humanoid Robotics, 2017, 14(1).
  • [37] ten Pas, Andreas; Platt, Robert. Using Geometry to Detect Grasp Poses in 3D Point Clouds. Robotics Research, Vol. 1, 2018, 2: 307-324.
  • [38] Mendrzik, Rico; Meyer, Florian. Probabilistic Scan Matching: Bayesian Pose Estimation from Point Clouds. 2021 IEEE International Conference on Robotics and Automation (ICRA 2021), 2021: 10228-10234.
  • [39] Zhang, Yang; Lv, Qiang; Lin, Hui-Can; Qi, Ke-Xin. An Adaptive Pose Tracking Method Based on Sparse Point Clouds Matching. Journal of Interdisciplinary Mathematics, 2018, 21(5): 1115-1120.
  • [40] Li, Yunpeng; Snavely, Noah; Huttenlocher, Dan; Fua, Pascal. Worldwide Pose Estimation Using 3D Point Clouds. Computer Vision - ECCV 2012, Pt. I, 2012, 7572: 15-29.