DSQNet: A Deformable Model-Based Supervised Learning Algorithm for Grasping Unknown Occluded Objects

Cited by: 9
Authors
Kim, Seungyeon [1 ]
Ahn, Taegyun [2 ]
Lee, Yonghyeon [1 ]
Kim, Jihwan [1 ]
Wang, Michael Yu [3 ]
Park, Frank C. [1 ]
Affiliations
[1] Seoul Natl Univ, Dept Mech Engn, Seoul 08826, South Korea
[2] Saige Res, Seoul 06656, South Korea
[3] Monash Univ, Dept Mech & Aerosp Engn, Clayton, Vic 3800, Australia
Keywords
Shape; Point cloud compression; Grasping; Grippers; Training data; Object recognition; Deep learning; Robotic grasping; object shape recognition; geometric shape primitives; supervised deep learning; superquadrics; recovery
DOI
10.1109/TASE.2022.3184873
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Grasping previously unseen objects for the first time, when only partially occluded views of the object are available, remains a difficult challenge. Despite their recent successes, deep learning-based end-to-end methods remain impractical when training data and resources are limited and multiple grippers are used. Two-step methods that first identify the object shape and structure using deformable shape templates, then plan and execute the grasp, are free from those limitations but also have difficulty with partially occluded objects. In this paper, we propose a two-step method that merges a richer set of shape primitives, the deformable superquadrics, with a deep learning network, DSQNet, that is trained to identify complete object shapes from partial point cloud data. Grasps are then generated that take into account the kinematic and structural properties of the gripper while exploiting the closed-form equations available for deformable superquadrics. A seven-DoF robotic arm equipped with a parallel-jaw gripper is used to conduct experiments involving a collection of household objects, achieving average grasp success rates of 93% (compared to 86% for existing methods), with object recognition times that are ten times faster. Code is available at https://github.com/seungyeon-k/DSQNet-public

Note to Practitioners: This paper provides a comprehensive two-step method for grasping previously unseen objects, in which only partially occluded views of the object may be available. End-to-end deep learning-based methods typically require large amounts of training data, in the form of images of the objects taken from different angles and with different levels of occlusion, and grasping experiments that record the success and failure of each attempt; if a new gripper is used, more often than not the training data must be recollected and a new set of experiments performed. Two-step methods that first identify the object structure and shape using deformable shape templates, then plan the grasp based on knowledge of the object shape, are currently a more practical solution, but also have difficulty when only occluded views of the object are available. Our newly proposed two-step method takes advantage of a more flexible set of shape primitives and also uses a supervised deep learning network to identify the object from occluded views. Our experimental results indicate improved grasp success rates against the state of the art, with recognition times that are up to ten times faster. Our method achieves high recognition and grasping performance and is applicable to most general household objects, but it cannot be directly applied to more diverse public 3D datasets because it requires human-annotated segmentation labels. In future research, we will extend our deep learning network to learn segmentation automatically, without human-annotated labels, allowing it to recognize more complex and diverse object shapes.
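The grasp-planning step described in the abstract relies on the closed-form equations of (deformable) superquadrics. As a rough, non-authoritative sketch of what such a primitive looks like computationally, the Python snippet below evaluates the standard superquadric inside-outside function with a simple Barr-style linear tapering along the z-axis; the exact deformable-superquadric parameterization used by DSQNet (for example, its bending terms and how the network predicts the parameters) may differ, and all names in the snippet are illustrative rather than taken from the released code.

```python
import numpy as np

def superquadric_inside_outside(points, size, exponents, taper=(0.0, 0.0)):
    """Inside-outside function of a tapered superquadric (illustrative sketch).

    points    : (N, 3) query points expressed in the primitive's local frame.
    size      : (a1, a2, a3) semi-axis lengths along x, y, z.
    exponents : (e1, e2) shape exponents (e1: north-south, e2: east-west).
    taper     : (kx, ky) linear tapering coefficients in (-1, 1), applied
                along z in the style of Barr's global deformations.

    Returns F(p) for each point: F < 1 inside, F = 1 on the surface,
    F > 1 outside the primitive.
    """
    a1, a2, a3 = size
    e1, e2 = exponents
    kx, ky = taper

    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    # Undo the tapering so the standard closed-form expression applies.
    # For |k| < 1 and |z| <= a3 the scale factors stay strictly positive.
    x = x / (kx * z / a3 + 1.0)
    y = y / (ky * z / a3 + 1.0)

    # Standard superquadric inside-outside function.
    t1 = np.abs(x / a1) ** (2.0 / e2)
    t2 = np.abs(y / a2) ** (2.0 / e2)
    t3 = np.abs(z / a3) ** (2.0 / e1)
    return (t1 + t2) ** (e2 / e1) + t3
```

For intuition, exponents near (0.1, 0.1) yield a box-like primitive, (1.0, 1.0) an ellipsoid, and (0.1, 1.0) a cylinder-like shape; a grasp planner can query F(p) < 1 to test whether candidate gripper points would penetrate the recovered primitive, and can differentiate F in closed form to obtain surface normals.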
Pages: 1721-1734
Number of pages: 14
Related papers
50 records in total
  • [1] DSQNet: A Deformable Model-Based Supervised Learning Algorithm for Grasping Unknown Occluded Objects
    Kim, Seungyeon
    Ahn, Taegyun
    Lee, Yonghyeon
    Kim, Jihwan
    Wang, Michael Yu
    Park, Frank C.
    [J]. IEEE Transactions on Automation Science and Engineering, 2023, 20(3): 1721-1734
  • [2] Model-Based Grasping of Unknown Objects from a Random Pile
    Sauvet, Bruno
    Levesque, Francois
    Park, SeungJae
    Cardou, Philippe
    Gosselin, Clement
    [J]. Robotics, 2019, 8(3)
  • [3] A Grasping Pose Detection Algorithm for Occluded Objects
    Wang, Xuyang
    Li, Qinghua
    Liu, Kaiyue
    Zhang, Kun
    Zhu, Zhaoxin
    Feng, Chao
    [J]. Proceedings - 2023 China Automation Congress, CAC 2023, 2023: 2287-2292
  • [4] Survey on model-based manipulation planning of deformable objects
    Jimenez, P.
    [J]. Robotics and Computer-Integrated Manufacturing, 2012, 28(2): 154-163
  • [5] A Graph-Based Deep Reinforcement Learning Approach to Grasping Fully Occluded Objects
    Zuo, Guoyu
    Tong, Jiayuan
    Wang, Zihao
    Gong, Daoxiong
    [J]. Cognitive Computation, 2023, 15(1): 36-49
  • [6] Model-based strategy for grasping 3D deformable objects using a multi-fingered robotic hand
    Zaidi, Lazher
    Corrales, Juan Antonio
    Bouzgarrou, Belhassen Chedli
    Mezouar, Youcef
    Sabourin, Laurent
    [J]. Robotics and Autonomous Systems, 2017, 95: 196-206
  • [7] Grasping unknown objects based on 3D model reconstruction
    Wang, B.
    Jiang, L.
    Li, J. W.
    Cai, H. G.
    Liu, H.
    [J]. 2005 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Vols. 1 and 2, 2005: 461-466
  • [8] Robotic Grasping of Unknown Objects Based on Deep Learning-Based Feature Detection
    Khor, Kai Sherng
    Liu, Chao
    Cheah, Chien Chern
    [J]. Sensors, 2024, 24(15)
  • [9] Empty the Basket - A Shape Based Learning Approach for Grasping Piles of Unknown Objects
    Fischinger, David
    Vincze, Markus
    [J]. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012: 2051-2057