Residual Learning From Demonstration: Adapting DMPs for Contact-Rich Manipulation

Cited by: 18
Authors
Davchev, Todor [1 ]
Luck, Kevin Sebastian [2 ]
Burke, Michael [3 ]
Meier, Franziska [5 ]
Schaal, Stefan [4 ]
Ramamoorthy, Subramanian [1 ]
Affiliations
[1] Univ Edinburgh, Sch Informat, Edinburgh EH8 9AB, Midlothian, Scotland
[2] Aalto Univ, Dept Elect Engn & Automat, Intelligent Robot, Espoo 02150, Finland
[3] Monash Univ, ECSE, Melbourne, Vic 3800, Australia
[4] Google X Intrins, Mountain View, CA 94043 USA
[5] Facebook AI Res, Menlo Pk, CA 94025 USA
Source
IEEE ROBOTICS AND AUTOMATION LETTERS
Funding
UK Engineering and Physical Sciences Research Council (EPSRC); Academy of Finland;
Keywords
Task analysis; Robots; Adaptation models; Friction; Trajectory; Gears; Couplings; Learning from demonstration; reinforcement learning; sensorimotor learning; ENVIRONMENT; SKILLS; MODELS;
DOI
10.1109/LRA.2022.3150024
Chinese Library Classification
TP24 [Robotics];
Subject Classification Code
080202; 1405;
Abstract
Manipulation skills involving contact and friction are inherent to many robotics tasks. Using the class of motor primitives for peg-in-hole-like insertions, we study how robots can learn such skills. Dynamic Movement Primitives (DMPs) are a popular way of extracting such policies through behaviour cloning (BC), but they can struggle in the context of insertion. Policy adaptation strategies such as residual learning can help improve the overall performance of policies in contact-rich manipulation. However, it is not clear how best to do this with DMPs. We therefore consider several possible ways of adapting a DMP formulation and propose "residual Learning from Demonstration" (rLfD), a framework that combines DMPs with Reinforcement Learning (RL) to learn a residual correction policy. Our evaluations suggest that applying residual learning directly in task space and operating on the full pose of the robot can significantly improve the overall performance of DMPs. We show that rLfD offers a solution that is gentle on the joints, improves the task success and generalisation of DMPs, and enables transfer to different geometries and frictions through few-shot task adaptation. The proposed framework is evaluated on a set of tasks in which a simulated robot and a physical robot have to successfully insert pegs, gears and plugs into their respective sockets.
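The abstract's core idea, a behaviour-cloned DMP providing the base motion plus a learned residual correction applied in task space, can be illustrated with a minimal sketch. This is not the authors' implementation: the one-dimensional DMP, the toy demonstration, and the placeholder residual_policy (which in rLfD would be a full-pose correction policy trained with RL) are assumptions made purely for illustration.

```python
# Minimal sketch of the rLfD idea under the assumptions stated above:
# a DMP is fitted to a demonstration (behaviour cloning) and, at execution
# time, a learned residual policy adds a correction to each commanded pose.
import numpy as np

class DMP1D:
    """Single-DoF discrete Dynamic Movement Primitive (standard formulation)."""
    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
        self.n_basis, self.alpha_z, self.beta_z, self.alpha_x = n_basis, alpha_z, beta_z, alpha_x
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centres over the phase
        self.h = 1.0 / np.gradient(self.c) ** 2                 # basis widths
        self.w = np.zeros(n_basis)                               # forcing-term weights

    def _forcing(self, x):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return x * (psi @ self.w) / (psi.sum() + 1e-10)

    def fit(self, y_demo, dt):
        """Behaviour cloning: locally weighted regression of the forcing term."""
        T = len(y_demo)
        y0, g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(T) * dt)            # canonical phase
        f_target = ydd - self.alpha_z * (self.beta_z * (g - y_demo) - yd)
        s = x * (g - y0)
        for i in range(self.n_basis):
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = (s * psi) @ f_target / ((s * psi) @ s + 1e-10)
        self.y0, self.g = y0, g

    def rollout(self, T, dt):
        y, yd, x = self.y0, 0.0, 1.0
        traj = []
        for _ in range(T):
            ydd = self.alpha_z * (self.beta_z * (self.g - y) - yd) \
                  + self._forcing(x) * (self.g - self.y0)
            yd += ydd * dt
            y += yd * dt
            x += -self.alpha_x * x * dt
            traj.append(y)
        return np.array(traj)

def residual_policy(pose, observation):
    """Placeholder for the learned RL correction (e.g. a small off-policy actor).
    Here it simply returns a zero correction of the same shape as the pose."""
    return np.zeros_like(pose)

if __name__ == "__main__":
    dt, T = 0.01, 200
    demo = np.linspace(0.0, 0.1, T) ** 2        # toy 1-D demonstration trajectory
    dmp = DMP1D()
    dmp.fit(demo, dt)
    base = dmp.rollout(T, dt)                   # behaviour-cloned base trajectory
    # rLfD-style execution: commanded pose = DMP pose + residual correction
    commanded = [y + residual_policy(np.array([y]), None)[0] for y in base]
```

In the full framework the correction acts on the complete task-space pose (position and orientation) rather than on a single coordinate, and only the residual policy is trained with RL while the DMP remains fixed.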
Pages: 4488-4495
Number of pages: 8
Related Papers
(50 in total)
  • [1] Residual Feedback Learning for Contact-Rich Manipulation Tasks with Uncertainty
    Ranjbar, Alireza
    Vien, Ngo Anh
    Ziesche, Hanna
    Boedecker, Joschka
    Neumann, Gerhard
    [J]. 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 2383 - 2390
  • [2] Augmentation Enables One-Shot Generalization In Learning From Demonstration for Contact-Rich Manipulation
    Li, Xing
    Baum, Manuel
    Brock, Oliver
    [J]. 2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, IROS, 2023, : 3656 - 3663
  • [3] Variable Impedance Skill Learning for Contact-Rich Manipulation
    Yang, Quantao
    Durr, Alexander
    Topp, Elin Anna
    Stork, Johannes A.
    Stoyanov, Todor
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (03): 8391 - 8398
  • [4] Learning Dense Rewards for Contact-Rich Manipulation Tasks
    Wu, Zheng
    Lian, Wenzhao
    Unhelkar, Vaibhav
    Tomizuka, Masayoshi
    Schaal, Stefan
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 6214 - 6221
  • [5] Combining Learning from Demonstration with Learning by Exploration to Facilitate Contact-Rich Tasks
    Shi, Yunlei
    Chen, Zhaopeng
    Wu, Yansong
    Henkel, Dimitri
    Riedel, Sebastian
    Liu, Hongxu
    Feng, Qian
    Zhang, Jianwei
    [J]. 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 1062 - 1069
  • [6] Variable-Impedance and Force Control for Robust Learning of Contact-rich Manipulation Tasks from User Demonstration
    Enayati, Nima
    Mariani, Stefano
    Wahrburg, Arne
    Zanchettin, Andrea M.
    [J]. IFAC PAPERSONLINE, 2020, 53 (02): 9834 - 9840
  • [7] Learning Contact-Rich Manipulation Skills with Guided Policy Search
    Levine, Sergey
    Wagener, Nolan
    Abbeel, Pieter
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2015, : 156 - 163
  • [8] A System for Imitation Learning of Contact-Rich Bimanual Manipulation Policies
    Stepputtis, Simon
    Bandari, Maryam
    Schaal, Stefan
    Ben Amor, Heni
    [J]. 2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 11810 - 11817
  • [9] Stability-Guaranteed Reinforcement Learning for Contact-Rich Manipulation
    Khader, Shahbaz Abdul
    Yin, Hang
    Falco, Pietro
    Kragic, Danica
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (01) : 1 - 8
  • [10] A review on reinforcement learning for contact-rich robotic manipulation tasks
    Elguea-Aguinaco, Inigo
    Serrano-Munoz, Antonio
    Chrysostomou, Dimitrios
    Inziarte-Hidalgo, Ibai
    Bogh, Simon
    Arana-Arexolaleiba, Nestor
    [J]. ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2023, 81