Sim-Grasp: Learning 6-DOF Grasp Policies for Cluttered Environments Using a Synthetic Benchmark

Cited by: 0
Authors
Li, Juncheng [1 ,2 ]
Cappelleri, David J. [1 ,2 ]
Affiliations
[1] Purdue Univ, Sch Mech Engn, Multiscale Robot & Automat Lab, W Lafayette, IN 47906 USA
[2] Purdue Univ, Weldon Sch Biomed Engn By Courtesy, W Lafayette, IN 47906 USA
Source: IEEE ROBOTICS AND AUTOMATION LETTERS
Keywords
Point cloud compression; Grasping; Benchmark testing; 6-DOF; Robot learning; Object recognition; Clutter; mobile manipulation; deep learning in grasping and manipulation; data sets for robot learning
DOI
10.1109/LRA.2024.3430712
Chinese Library Classification
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
In this letter, we present Sim-Grasp, a robust 6-DOF two-finger grasping system that integrates advanced language models for enhanced object manipulation in cluttered environments. We introduce the Sim-Grasp-Dataset, which includes 1,550 objects across 500 scenarios with 7.9 million annotated labels, and develop Sim-GraspNet to generate grasp poses from point clouds. The Sim-Grasp-Policies achieve grasping success rates of 97.14% for single objects and 87.43% and 83.33% for mixed clutter scenarios of Levels 1-2 and Levels 3-4 objects, respectively. By incorporating language models for target identification through text and box prompts, Sim-Grasp enables both object-agnostic and target-specific picking, pushing the boundaries of intelligent robotic systems.
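The abstract outlines a pipeline in which Sim-GraspNet consumes a scene point cloud and proposes scored 6-DOF grasp poses, while a language-grounded detector (driven by text or box prompts) identifies a target object so that only grasps on that object are executed. The following is a minimal Python sketch of that flow under stated assumptions; predict_grasps, filter_to_target, the dummy scoring, and all array shapes are hypothetical illustrations, not the authors' released code or API.

# Hypothetical sketch of a prompt-conditioned 6-DOF grasp-inference loop.
# All names and shapes are illustrative placeholders.
import numpy as np

def predict_grasps(points: np.ndarray, num_candidates: int = 64):
    """Stand-in for a learned grasp network: returns candidate 6-DOF poses and scores.

    points: (N, 3) scene point cloud.
    Returns poses (num_candidates, 4, 4), scores (num_candidates,), and the
    index of the scene point each candidate is anchored to.
    """
    rng = np.random.default_rng(0)
    idx = rng.choice(len(points), size=num_candidates, replace=False)
    poses = np.tile(np.eye(4), (num_candidates, 1, 1))
    poses[:, :3, 3] = points[idx]                         # place each candidate at a scene point
    scores = rng.uniform(0.0, 1.0, size=num_candidates)   # dummy grasp-quality scores
    return poses, scores, idx

def filter_to_target(poses, scores, idx, target_mask):
    """Keep only candidates anchored on points belonging to the target object."""
    keep = target_mask[idx]
    return poses[keep], scores[keep]

if __name__ == "__main__":
    cloud = np.random.rand(2048, 3)            # placeholder scene point cloud
    target_mask = np.zeros(2048, dtype=bool)   # placeholder per-point target segmentation
    target_mask[:512] = True                   # pretend the first 512 points are the target
    poses, scores, idx = predict_grasps(cloud)
    t_poses, t_scores = filter_to_target(poses, scores, idx, target_mask)
    best_pose = t_poses[np.argmax(t_scores)]   # execute the highest-scoring on-target grasp
    print(f"{len(t_poses)} on-target grasps; best pose:\n{best_pose}")

In a real system the placeholder network would be replaced by the learned grasp model and the placeholder mask by the output of the prompted detector; selecting the highest-scoring on-target grasp would remain essentially the same.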
Pages: 7645-7652
Page count: 8
Related Papers
50 records in total
  • [31] CenterGrasp: Object-Aware Implicit Representation Learning for Simultaneous Shape Reconstruction and 6-DoF Grasp Estimation
    Chisari, Eugenio
    Heppert, Nick
    Welschehold, Tim
    Burgard, Wolfram
    Valada, Abhinav
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (06): 5094-5101
  • [32] Simulating Complete Points Representations for Single-View 6-DoF Grasp Detection
    Liu, Zhixuan
    Chen, Zibo
    Zheng, Wei-Shi
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (03): 2901-2908
  • [33] Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations
    Jiang, Zhenyu
    Zhu, Yifeng
    Svetlik, Maxwell
    Fang, Kuan
    Zhu, Yuke
    ROBOTICS: SCIENCE AND SYSTEMS XVII, 2021
  • [34] GraspNeRF: Multiview-based 6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF
    Dai, Qiyu
    Zhu, Yan
    Geng, Yiran
    Ruan, Ciyu
    Zhang, Jiazhao
    Wang, He
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2023: 1757-1763
  • [35] A Methodology of Stable 6-DoF Grasp Detection for Complex Shaped Object Using End-to-End Network
    Jeong, Woojin
    Gu, Yongwoo
    Lee, Jaewook
    Yi, June-sup
    2024 21ST INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS (UR), 2024: 257-264
  • [36] Integration of Deep Q-Learning with a Grasp Quality Network for Robot Grasping in Cluttered Environments
    Huang, Chih-Yung
    Shao, Yu-Hsiang
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2024, 110 (03)
  • [37] Enabling two finger virtual grasping on a single grasp 6-DOF interface, by using just one force sensor
    Balandra, Alfonso
    Gruppelaar, Virgilio
    Mitake, Hironori
    Hasegawa, Shoichi
    2017 IEEE WORLD HAPTICS CONFERENCE (WHC), 2017: 382-387
  • [38] 6-DoF grasp estimation method that fuses RGB-D data based on external attention
    Ran, Haosong
    Chen, Diansheng
    Chen, Qinshu
    Li, Yifei
    Luo, Yazhe
    Zhang, Xiaoyu
    Li, Jiting
    Zhang, Xiaochuan
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 101
  • [39] HAGrasp: Hybrid Action Grasp Control in Cluttered Scenes using Deep Reinforcement Learning
    Song, Kai-Tai
    Chen, Hsiang-Hsi
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2024: 3131-3137
  • [40] Robot Grasp in Cluttered Scene Using a Multi-Stage Deep Learning Model
    Wei, Dujia
    Cao, Jianmin
    Gu, Ye
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (07): 6512-6519