Learning potential functions from human demonstrations with encapsulated dynamic and compliant behaviors

Cited by: 54
Authors
Khansari-Zadeh, Seyed Mohammad [1 ]
Khatib, Oussama [1 ]
Affiliations
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
Funding
Swiss National Science Foundation;
Keywords
Potential field; Variable impedance control; Compliant control; Robot learning; Physical interaction control; Motion control; Imitation learning; Motion primitives; TIME OBSTACLE AVOIDANCE; IMPEDANCE CONTROL; ROBOT CONTROL; MANIPULATORS; TASK;
DOI
10.1007/s10514-015-9528-y
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We consider the problem of devising a unified control policy capable of regulating both the robot motion and its physical interaction with the environment. We formulate this control policy with a non-parametric potential function and a dissipative field, both of which can be learned from human demonstrations. We show that the robot's motion and stiffness behaviors can be encapsulated by the potential function's gradient and curvature, respectively. The dissipative field can also be used to model the desired damping behavior throughout the motion, thus generating motions that follow the same velocity profile as the demonstrations. The proposed controller can be viewed as a unification of "real-time motion generation" and "variable impedance control", with the advantages of guaranteed stability and no reliance on following a reference trajectory. Our approach, called unified motion and variable impedance control (UMIC), is completely time-invariant and can be learned from a few demonstrations by solving two convex constrained quadratic optimization problems. We validate UMIC on a library of 30 human handwriting motions and in a set of experiments on a 7-DoF KUKA lightweight robot.
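To make the structure of such a control policy concrete, the sketch below is a minimal illustration only, not the authors' implementation: it substitutes a toy quadratic potential and a constant damping matrix for the learned non-parametric potential function and dissipative field, and commands f = -grad(phi(x)) - D * x_dot, so the potential's gradient drives the motion, its curvature acts as stiffness, and D shapes the damping. All numerical values and names (x_goal, K, D) are assumptions chosen for illustration.

import numpy as np

# Minimal sketch of the control-law structure described in the abstract:
#   f = -grad(phi(x)) - D * x_dot
# phi is a hypothetical quadratic potential and D a constant damping matrix;
# in UMIC both are non-parametric quantities learned from demonstrations.

x_goal = np.array([0.0, 0.0])     # assumed goal position (attractor)
K = np.diag([80.0, 40.0])         # curvature of phi -> stiffness (toy values)
D = np.diag([12.0, 8.0])          # dissipative field -> damping (toy values)

def potential(x):
    """Toy quadratic potential standing in for the learned phi(x)."""
    e = x - x_goal
    return 0.5 * e @ K @ e

def control_force(x, x_dot):
    """Unified motion/impedance command: -grad(phi) - D * x_dot."""
    grad_phi = K @ (x - x_goal)   # gradient of the quadratic potential
    return -grad_phi - D @ x_dot

# Usage: simulate a unit-mass point robot under this controller.
x, x_dot, dt = np.array([0.5, -0.3]), np.zeros(2), 0.001
for _ in range(5000):
    x_dot = x_dot + dt * control_force(x, x_dot)   # unit mass: accel = force
    x = x + dt * x_dot
print(x)   # converges toward x_goal

Because the command depends only on the current state (x, x_dot) and not on time or a reference trajectory, this toy controller also illustrates the time-invariance property claimed for UMIC.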
Pages: 45-69
Page count: 25