Data-Efficient Reinforcement Learning for Variable Impedance Control

Cited by: 0
Authors
Anand, Akhil S. [1 ]
Kaushik, Rituraj [2 ]
Gravdahl, Jan Tommy [1 ]
Abu-Dakka, Fares J. [3 ]
Affiliations
[1] Norwegian Univ Sci & Technol NTNU, Dept Engn Cybernet, N-7491 Trondheim, Norway
[2] Aalto Univ, Dept Elect Engn & Automat EEA, Intelligent Robot Grp, Espoo 00076, Finland
[3] Mondragon Univ, Fac Engn, Dept Elect & Informat, Arrasate Mondragon 20500, Spain
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Model-based reinforcement learning; variable impedance learning control; Gaussian processes; covariance matrix adaptation; EVOLUTIONARY OPTIMIZATION; FORCE CONTROL; CONTACT; ROBOT; ENVIRONMENT; ADAPTATION; MOTION;
DOI
10.1109/ACCESS.2024.3355311
CLC Classification Code
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
One of the most crucial steps toward achieving human-like manipulation skills in robots is to incorporate compliance into the robot controller. Compliance not only makes the robot's behaviour safe but also makes it more energy efficient. In this direction, the variable impedance control (VIC) approach provides a framework for a robot to adapt its compliance during execution by employing an adaptive impedance law. Nevertheless, autonomously adapting the compliance profile as demanded by the task remains a challenging problem in practice. In this work, we introduce a reinforcement learning (RL)-based approach called DEVILC (Data-Efficient Variable Impedance Learning Controller) that learns a variable impedance controller through the robot's real-world interactions. More concretely, we use a model-based RL approach in which, after every interaction, the robot iteratively learns a probabilistic model of its dynamics using Gaussian process regression. The model is then used to optimize a neural-network policy that modulates the robot's impedance such that the long-term task reward is maximized. Thanks to the model-based RL framework, DEVILC allows a robot to learn the VIC policy with only a few interactions, making it practical for real-world applications. In simulations and experiments, we evaluate DEVILC on a Franka Emika Panda manipulator for different manipulation tasks in Cartesian space. The results show that DEVILC is a promising direction toward autonomously learning compliant manipulation skills directly in the real world through interactions. A video of the experiments is available at https://youtu.be/_uyr0Vye5no.
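The abstract describes a two-stage model-based RL loop: fit a probabilistic dynamics model from a handful of real interactions, then optimize a policy against rollouts of that learned model rather than against the real robot. The sketch below illustrates that loop in one dimension; the toy dynamics, the linear "impedance gain" policy, the grid search over gains, and all function names are illustrative assumptions, not the paper's implementation (DEVILC uses a neural-network policy and covariance-matrix-adaptation-style optimization).

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of row vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, Xs, noise=1e-4):
    """GP posterior mean at test points Xs given interaction data (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    return rbf_kernel(Xs, X) @ np.linalg.solve(K, y)

def true_dynamics(s, a):
    # Unknown 1-D plant standing in for the real robot.
    return 0.9 * s + 0.2 * np.tanh(a)

def rollout_reward(policy_gain, X, y, s0=1.0, horizon=10):
    """Simulate the learned GP model under a linear policy a = -g * s."""
    s, total = s0, 0.0
    for _ in range(horizon):
        a = -policy_gain * s                               # policy picks the action
        s = gp_predict(X, y, np.array([[s, a]]))[0]        # model predicts next state
        total += -s**2                                     # reward: regulate state to zero
    return total

rng = np.random.default_rng(0)
# 1) Collect a few real interactions (data efficiency: small dataset).
X = rng.uniform(-1, 1, size=(30, 2))     # (state, action) pairs
y = true_dynamics(X[:, 0], X[:, 1])      # observed next states
# 2) The non-parametric GP *is* the learned model; 3) optimize the
#    policy parameter purely on model rollouts, not real interactions.
gains = np.linspace(0.0, 5.0, 26)
best_gain = max(gains, key=lambda g: rollout_reward(g, X, y))
```

Because the policy is improved on the cheap learned model, each real interaction is only needed to refresh the dataset, which is what makes the approach sample efficient.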
Pages: 15631-15641 (11 pages)