Learning potential functions from human demonstrations with encapsulated dynamic and compliant behaviors

Cited: 54
Authors
Khansari-Zadeh, Seyed Mohammad [1 ]
Khatib, Oussama [1 ]
Affiliations
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
Funding
Swiss National Science Foundation;
Keywords
Potential field; Variable impedance control; Compliant control; Robot learning; Physical interaction control; Motion control; Imitation learning; Motion primitives; TIME OBSTACLE AVOIDANCE; IMPEDANCE CONTROL; ROBOT CONTROL; MANIPULATORS; TASK;
DOI
10.1007/s10514-015-9528-y
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
We consider the problem of devising a unified control policy capable of regulating both the robot motion and its physical interaction with the environment. We formulate this control policy by a non-parametric potential function and a dissipative field, both of which can be learned from human demonstrations. We show that the robot motion and its stiffness behaviors can be encapsulated by the potential function's gradient and curvature, respectively. The dissipative field can also be used to model desired damping behavior throughout the motion, hence generating motions that follow the same velocity profile as the demonstrations. The proposed controller can be seen as a unification of "real-time motion generation" and "variable impedance control", with the advantages of guaranteed stability and no reliance on a reference trajectory. Our approach, called unified motion and variable impedance control (UMIC), is completely time-invariant and can be learned from a few demonstrations by solving two (convex) constrained quadratic optimization problems. We validate UMIC on a library of 30 human handwriting motions and in a set of experiments on a 7-DoF KUKA lightweight robot.
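The control law described in the abstract can be pictured with a minimal sketch. The sketch below assumes an RBF-parameterized potential and a constant damping matrix, both purely illustrative choices; the paper's non-parametric potential and learned state-dependent dissipative field differ. It shows the structure of a command u = -∇φ(x) − Dẋ, where the gradient of φ drives the motion and its curvature (Hessian) plays the role of a local stiffness.

```python
# Illustrative sketch (not the paper's implementation): a command of the form
# u = -grad(phi)(x) - D @ x_dot. Here phi is a sum of Gaussian RBF bumps and
# D is a constant damping matrix; UMIC instead learns a non-parametric
# potential and a dissipative field from demonstrations via two convex QPs.
import numpy as np

def potential_grad(x, centers, weights, sigma=1.0):
    """Analytic gradient of phi(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 sigma^2)).

    The Hessian of phi at x would act as the local stiffness matrix.
    """
    diff = x - centers                                       # (K, dim)
    k = np.exp(-np.sum(diff**2, axis=1) / (2 * sigma**2))    # RBF activations (K,)
    return -(weights * k) @ diff / sigma**2                  # (dim,)

def control(x, x_dot, centers, weights, damping):
    """Unified command: descend the potential gradient, dissipate velocity."""
    return -potential_grad(x, centers, weights) - damping @ x_dot

# Toy usage: a unit-mass point attracted toward two potential wells.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
weights = np.array([-2.0, -1.0])        # negative weights carve attractive wells
D = 0.8 * np.eye(2)                     # constant damping (illustrative only)
x, v, dt = np.array([1.5, -0.5]), np.zeros(2), 0.01
for _ in range(5000):                   # forward-Euler double integrator
    v += dt * control(x, v, centers, weights, D)
    x += dt * v
print("settled near:", x)               # converges toward a minimum of phi
```

Because the command depends only on the current state (x, ẋ) and never on a clock or a reference trajectory, the sketch also illustrates why the approach is time-invariant.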
Pages: 45-69
Page count: 25
Related papers
50 records in total
  • [1] Learning potential functions from human demonstrations with encapsulated dynamic and compliant behaviors
    Seyed Mohammad Khansari-Zadeh
    Oussama Khatib
    [J]. Autonomous Robots, 2017, 41 : 45 - 69
  • [2] Learning Motion and Impedance Behaviors from Human Demonstrations
    Saveriano, Matteo
    Lee, Dongheui
    [J]. 2014 11TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS AND AMBIENT INTELLIGENCE (URAI), 2014, : 368 - 373
  • [3] Learning Lyapunov (Potential) Functions from Counterexamples and Demonstrations
    Ravanbakhsh, Hadi
    Sankaranarayanan, Sriram
    [J]. ROBOTICS: SCIENCE AND SYSTEMS XIII, 2017,
  • [4] Learning Physical Collaborative Robot Behaviors From Human Demonstrations
    Rozo, Leonel
    Calinon, Sylvain
    Caldwell, Darwin G.
    Jimenez, Pablo
    Torras, Carme
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2016, 32 (03) : 513 - 527
  • [5] Sequential learning unification controller from human demonstrations for robotic compliant manipulation
    Duan, Jianghua
    Ou, Yongsheng
    Xu, Sheng
    Liu, Ming
    [J]. NEUROCOMPUTING, 2019, 366 : 35 - 45
  • [6] Learning Compliant Manipulation Tasks from Force Demonstrations
    Duan, Jianghua
    Ou, Yongsheng
    Xu, Sheng
    Wang, Zhiyang
    Peng, Ansi
    Wu, Xinyu
    Feng, Wei
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON CYBORG AND BIONIC SYSTEMS (CBS), 2018, : 449 - 454
  • [7] Learning compliant dynamical system from human demonstrations for stable force control in unknown environments
    Ge, Dongsheng
    Zhao, Huan
    Wang, Yiwei
    Li, Dianxi
    Li, Xiangfei
    Ding, Han
    [J]. ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2024, 86
  • [8] Learning Reward Functions by Integrating Human Demonstrations and Preferences
    Palan, Malayandi
    Shevchuk, Gleb
    Landolfi, Nicholas C.
    Sadigh, Dorsa
    [J]. ROBOTICS: SCIENCE AND SYSTEMS XV, 2019,
  • [9] Objective learning from human demonstrations
    Lin, Jonathan Feng-Shun
    Carreno-Medrano, Pamela
    Parsapour, Mahsa
    Sakr, Maram
    Kulic, Dana
    [J]. ANNUAL REVIEWS IN CONTROL, 2021, 51 : 111 - 129
  • [10] Learning control Lyapunov functions from counterexamples and demonstrations
    Hadi Ravanbakhsh
    Sriram Sankaranarayanan
    [J]. Autonomous Robots, 2019, 43 : 275 - 307