Robotic Continuous Grasping System by Shape Transformer-Guided Multiobject Category-Level 6-D Pose Estimation

Cited by: 14
Authors
Liu, Jian [1,2]
Sun, Wei [3,4]
Liu, Chongpei [1,2]
Zhang, Xing [1,2]
Fu, Qiang [1,2]
Affiliations
[1] Hunan Univ, Natl Engn Res Ctr Robot Visual Percept & Control, Coll Elect & Informat Engn, Changsha 410082, Peoples R China
[2] Hunan Univ, State Key Lab Adv Design & Mfg Vehicle Body, Changsha 410082, Peoples R China
[3] Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Peoples R China
[4] Hunan Univ, Shenzhen Res Inst, Shenzhen 518052, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Grasping; Shape; Robots; Three-dimensional displays; Robot kinematics; Pose estimation; Feature extraction; Category-level 6-D pose estimation; global shape; robotic continuous grasping; shape transformer; NETWORK;
DOI
10.1109/TII.2023.3244348
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Robotic grasping is one of the key functions for realizing industrial automation and human-machine interaction. However, current robotic grasping methods for unknown objects mainly focus on generating 6-D grasp poses, which cannot obtain rich object pose information and are not robust in challenging scenes. Motivated by this, in this article, we propose a robotic continuous grasping system that achieves end-to-end robotic grasping of intraclass unknown objects in 3-D space through accurate category-level 6-D object pose estimation. Specifically, to achieve object pose estimation, we first propose a global shape extraction network (GSENet) based on ResNet1D to extract the global shape of an object category from the 3-D models of intraclass known objects. Then, with the global shape as the prior feature, we propose a transformer-guided network to reconstruct the shape of an intraclass unknown object. The proposed network effectively introduces internal and mutual communication among the prior feature, the current feature, and their difference feature: internal communication is performed by self-attention, and mutual communication is performed by cross-attention to strengthen their correlation. To achieve robotic grasping of multiple objects, we propose a low-computation yet effective grasping strategy based on a predefined vector orientation, and we develop a graphical user interface for monitoring and control. Experiments on two benchmark datasets demonstrate that our system achieves state-of-the-art 6-D pose estimation accuracy. Moreover, real-world experiments show that our system also achieves superior robotic grasping performance, with a grasping success rate of 81.6% for multiple objects.
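The "internal and mutual communication" the abstract describes can be pictured as self-attention within each feature stream followed by cross-attention between streams. The following is a minimal NumPy sketch of that attention pattern only; the feature dimensions, the choice of which stream queries which, and the final fusion step are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def attention(query, key, value):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ value

rng = np.random.default_rng(0)
n_points, dim = 64, 32
prior = rng.standard_normal((n_points, dim))    # global shape prior feature
current = rng.standard_normal((n_points, dim))  # feature of the observed object
diff = current - prior                          # their difference feature

# Internal communication: self-attention within each feature stream.
prior_sa = attention(prior, prior, prior)
current_sa = attention(current, current, current)
diff_sa = attention(diff, diff, diff)

# Mutual communication: cross-attention, e.g. the current feature attends
# to the prior, and the difference feature attends to the current.
current_ca = attention(current_sa, prior_sa, prior_sa)
diff_ca = attention(diff_sa, current_sa, current_sa)

# A simple concatenation fusion of the communicated streams (illustrative).
fused = np.concatenate([prior_sa, current_ca, diff_ca], axis=-1)
print(fused.shape)  # (64, 96)
```

In the paper, such fused features would feed the shape reconstruction head; here the sketch only shows how self- and cross-attention couple the three streams.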
Pages: 11171-11181
Page count: 11
Related Papers
50 records in total
  • [41] Query6DoF: Learning Sparse Queries as Implicit Shape Prior for Category-Level 6DoF Pose Estimation
    Wang, Ruiqi
    Wang, Xinggang
    Li, Te
    Yang, Rong
    Wan, Minhong
    Liu, Wenyu
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 14009 - 14018
  • [42] SSP-Pose: Symmetry-Aware Shape Prior Deformation for Direct Category-Level Object Pose Estimation
    Zhang, Ruida
    Di, Yan
    Manhardt, Fabian
    Tombari, Federico
    Ji, Xiangyang
    [J]. 2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 7452 - 7459
  • [43] Synthetic Depth Image-Based Category-Level Object Pose Estimation With Effective Pose Decoupling and Shape Optimization
    Yu, Sheng
    Zhai, Di-Hua
    Xia, Yuanqing
    [J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73
  • [44] Robotic Grasp Detection Based on Category-Level Object Pose Estimation With Self-Supervised Learning
    Yu, Sheng
    Zhai, Di-Hua
    Xia, Yuanqing
    [J]. IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2024, 29 (01) : 625 - 635
  • [45] CatTrack: Single-Stage Category-Level 6D Object Pose Tracking via Convolution and Vision Transformer
    Yu, Sheng
    Zhai, Di-Hua
    Xia, Yuanqing
    Li, Dong
    Zhao, Shiqi
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 1665 - 1680
  • [46] Category-Level 6D Pose Estimation Using Geometry-Guided Instance-Aware Prior and Multi-Stage Reconstruction
    Nie, Tong
    Ma, Jie
    Zhao, Yuehua
    Fan, Ziming
    Wen, Junjie
    Sun, Mengxuan
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (04) : 2381 - 2388
  • [47] DTF-Net: Category-Level Pose Estimation and Shape Reconstruction via Deformable Template Field
    Wang, Haowen
    Fan, Zhipeng
    Zhao, Zhen
    Che, Zhengping
    Xu, Zhiyuan
    Liu, Dong
    Feng, Feifei
    Huang, Yakun
    Qiao, Xiuquan
    Tang, Jian
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 3676 - 3685
  • [49] SOCS: Semantically-aware Object Coordinate Space for Category-Level 6D Object Pose Estimation under Large Shape Variations
    Wan, Boyan
    Shi, Yifei
    Xu, Kai
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 14019 - 14028
  • [50] FS-Net: Fast Shape-based Network for Category-Level 6D Object Pose Estimation with Decoupled Rotation Mechanism
    Chen, Wei
    Jia, Xi
    Chang, Hyung Jin
    Duan, Jinming
    Shen, Linlin
    Leonardis, Ales
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 1581 - 1590