Grasp Pose Learning from Human Demonstration with Task Constraints

Cited: 0
Authors
Yinghui Liu
Kun Qian
Xin Xu
Bo Zhou
Fang Fang
Affiliations
[1] Southeast University,School of Automation
[2] Southeast University,The Key Laboratory of Measurement and Control of CSE, Ministry of Education
Keywords
Learning from demonstration; Robot grasping; Grasp pose detection; Superquadric; Task constraints;
DOI: not available
Abstract
To learn grasp constraints from human demonstrations, we propose a method that combines data-driven grasp-constraint learning with one-shot human demonstration of tasks. By representing task constraints in a GMM-based, gripper-independent form, the constraints are learned from simulated data with self-labeled grasp quality scores. Given a human demonstration of the task and a real-world object, the learned task-constraint model infers both the unknown grasping task and the probability density distributions of the task constraints over the object point cloud. In addition, we extend the superquadric-based grasp estimation method to reproduce the grasping task with 2-finger grippers. The task constraints restrict the search scope of the grasp pose, so the geometrically best grasp pose within the task-constrained regions can be obtained. The effectiveness of our method is verified in experiments with a UR5 robot equipped with a 2-finger gripper.
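The core idea of restricting the grasp search with a GMM-based task-constraint model can be illustrated with a minimal sketch. All parameters below (mixture weights, means, covariances, the toy point cloud, and the density threshold) are hypothetical illustrations, not values from the paper: a pre-learned two-component mixture assigns a constraint density to each point of an object point cloud, and only high-density points are kept as candidate grasp regions.

```python
import numpy as np

def gmm_density(points, weights, means, covs):
    """Evaluate a Gaussian-mixture density at each 3-D point."""
    d = points.shape[1]
    total = np.zeros(len(points))
    for w, mu, cov in zip(weights, means, covs):
        diff = points - mu
        inv = np.linalg.inv(cov)
        norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
        expo = -0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)
        total += w * norm * np.exp(expo)
    return total

# Hypothetical 2-component constraint model (e.g. one mode near a
# mug handle, one near the rim); not parameters from the paper.
weights = np.array([0.6, 0.4])
means = np.array([[0.05, 0.0, 0.10],
                  [0.00, 0.0, 0.18]])
covs = np.array([np.eye(3) * 1e-4, np.eye(3) * 1e-4])

# Toy "object point cloud": keep only points whose constraint
# density exceeds a threshold, restricting the grasp search scope.
cloud = np.array([[0.05, 0.0, 0.10],   # near the first mode
                  [0.00, 0.0, 0.18],   # near the second mode
                  [0.20, 0.2, 0.00]])  # far from both modes
density = gmm_density(cloud, weights, means, covs)
feasible = cloud[density > 1e-3]
print(len(feasible))  # prints 2: only the points near the modes survive
```

In the paper's pipeline, a downstream superquadric-based grasp estimator would then search for the geometrically best gripper pose only among such task-feasible regions, rather than over the whole object surface.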