Grasp Pose Learning from Human Demonstration with Task Constraints

Cited by: 0
Authors
Yinghui Liu
Kun Qian
Xin Xu
Bo Zhou
Fang Fang
Affiliations
[1] Southeast University,School of Automation
[2] Southeast University,The Key Laboratory of Measurement and Control of CSE, Ministry of Education
Keywords
Learning from demonstration; Robot grasping; Grasp pose detection; Superquadric; Task constraints;
DOI: not available
Abstract
To learn grasp constraints from human demonstrations, we propose a method that combines data-driven grasp constraint learning with one-shot human demonstration of tasks. Task constraints are represented in a GMM-based, gripper-independent form and learned from simulated data with self-labeled grasp quality scores. Given a human demonstration of the task and a real-world object, the learned task constraint model infers both the unknown grasping task and the probability density distributions of the task constraints over the object point cloud. In addition, we extend the superquadric-based grasp estimation method to reproduce the grasping task with two-finger grippers. The task constraints restrict the search scope of the grasp pose, so the geometrically best grasp pose within the task-constrained regions can be obtained. The effectiveness of our method is verified in experiments on a UR5 robot equipped with a two-finger gripper.
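The core idea of the abstract, scoring an object point cloud under a GMM-based task-constraint model and restricting the grasp search to the densest region, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the data, feature choice (raw 3D contact points), and thresholding rule are all assumptions.

```python
# Hedged sketch: GMM-based task-constraint region scoring on a point cloud.
# All names and data here are illustrative, not from the paper's pipeline.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated "demonstration" contact points, clustered near one region
# of the object (e.g. a handle); stands in for self-labeled sim data.
demo_points = rng.normal(loc=[0.05, 0.0, 0.10], scale=0.01, size=(200, 3))

# Fit a GMM as a gripper-independent task-constraint model.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(demo_points)

# Object point cloud: a mix of task-relevant and other surface points.
cloud = np.vstack([
    rng.normal(loc=[0.05, 0.0, 0.10], scale=0.01, size=(50, 3)),
    rng.normal(loc=[-0.10, 0.0, 0.00], scale=0.02, size=(50, 3)),
])

# Log-density under the GMM scores how task-compatible each point is.
log_density = gmm.score_samples(cloud)

# Keep only the densest points: the task-constrained region that would
# bound the subsequent superquadric-based grasp pose search.
threshold = np.quantile(log_density, 0.5)
constrained_idx = np.where(log_density >= threshold)[0]
```

With this toy data, nearly all surviving indices fall in the first (task-relevant) cluster, which is the intended effect: the grasp pose optimizer only ever evaluates poses anchored in that region.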
Related papers
50 results
  • [1] Grasp Pose Learning from Human Demonstration with Task Constraints
    Liu, Yinghui
    Qian, Kun
    Xu, Xin
    Zhou, Bo
    Fang, Fang
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2022, 105 (02)
  • [2] Interactive grasp learning based on human demonstration
    Ekvall, S
    Kragic, D
    2004 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-5, PROCEEDINGS, 2004, : 3519 - 3524
  • [3] Learning Task Constraints from Demonstration for Hybrid Force/Position Control
    Conkey, Adam
    Hermans, Tucker
    2019 IEEE-RAS 19TH INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2019, : 162 - 169
  • [4] Recognizing the grasp intention from human demonstration
    de Souza, Ravin
    El-Khoury, Sahar
    Santos-Victor, Jose
    Billard, Aude
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2015, 74 : 108 - 121
  • [5] Learning robots to grasp by demonstration
    De Coninck, Elias
    Verbelen, Tim
    Van Molle, Pieter
    Simoens, Pieter
    Dhoedt, Bart
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2020, 127
  • [6] SingleDemoGrasp: Learning to Grasp From a Single Image Demonstration
    Sefat, Amir Mehman
    Angleraud, Alexandre
    Rahtu, Esa
    Pieters, Roel
    2022 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2022, : 390 - 396
  • [7] Learning Logic Constraints from Demonstration
    Baert, Mattijs
    Leroux, Sam
    Simoens, Pieter
    NEURAL-SYMBOLIC LEARNING AND REASONING 2023, NESY 2023, 2023,
  • [8] Robot Learning from Human Demonstration of Peg-in-Hole Task
    Wang, Peng
    Zhu, Jianxin
    Feng, Wei
    Ou, Yongsheng
    2018 IEEE 8TH ANNUAL INTERNATIONAL CONFERENCE ON CYBER TECHNOLOGY IN AUTOMATION, CONTROL, AND INTELLIGENT SYSTEMS (IEEE-CYBER), 2018, : 318 - 322
  • [9] Demonstration of the EMPATHIC Framework for Task Learning from Implicit Human Feedback
    Cui, Yuchen
    Zhang, Qiping
    Jain, Sahil
    Allievi, Alessandro
    Stone, Peter
    Niekum, Scott
    Knox, W. Bradley
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 16017 - 16019
  • [10] Learning a Pick-and-Place Robot Task from Human Demonstration
    Lin, Hsien-I
    Cheng, Chia-Hsien
    Chen, Wei-Kai
    2013 CACS INTERNATIONAL AUTOMATIC CONTROL CONFERENCE (CACS), 2013, : 312 - +