Model-Based Reinforcement Learning Variable Impedance Control for Human-Robot Collaboration

Cited by: 0
Authors
Loris Roveda
Jeyhoon Maskani
Paolo Franceschi
Arash Abdi
Francesco Braghin
Lorenzo Molinari Tosatti
Nicola Pedrocchi
Affiliations
[1] Institute of Intelligent Industrial Systems and Technologies for Advanced Manufacturing (STIIMA-CNR), Istituto Dalle Molle di studi sull’Intelligenza Artificiale (IDSIA)
[2] Scuola Universitaria Professionale della Svizzera Italiana (SUPSI)
[3] Università della Svizzera Italiana (USI), IDSIA-SUPSI
[4] School of Industrial and Information Engineering, Politecnico di Milano
Source
Journal of Intelligent & Robotic Systems, 2020, 100 (02): 417-433
Keywords
Human-robot collaboration; Machine learning; Industry 4.0; Model-based reinforcement learning control; Variable impedance control;
DOI
Not available
Abstract
Industry 4.0 is placing human-robot collaboration at the center of the production environment. Collaborative robots enhance productivity and flexibility while reducing human fatigue and the risk of injuries, exploiting advanced control methodologies. However, there is a lack of real-time model-based controllers that account for the complex human-robot interaction dynamics. To this end, this paper proposes a Model-Based Reinforcement Learning (MBRL) variable impedance controller to assist human operators in collaborative tasks. In more detail, an ensemble of Artificial Neural Networks (ANNs) is used to learn a human-robot interaction dynamics model while capturing its uncertainties, and the learned model is kept updated during the execution of collaborative tasks. The learned model is then used by a Model Predictive Controller (MPC) with the Cross-Entropy Method (CEM). The aim of the MPC+CEM is to optimize online the stiffness and damping impedance control parameters, minimizing the human effort (i.e., minimizing the human-robot interaction forces). The proposed approach has been validated through an experimental procedure. A lifting task has been considered as the reference validation application (the 10 kg weight of the manipulated part is unknown to the robot controller). A KUKA LBR iiwa 14 R820 has been used as the test platform. Qualitative performance (i.e., a questionnaire on perceived collaboration) has been evaluated. The achieved results have been compared with previously developed offline model-free optimized controllers and with the robot's manual guidance controller. The proposed MBRL variable impedance controller shows improved human-robot collaboration: it is capable of actively assisting the human in the target task, compensating for the unknown part weight. The human-robot interaction dynamics model was trained with only 30 initial experiments. In addition, keeping the learning of the human-robot interaction dynamics active allows accounting for the adaptation of the human motor system.
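To make the approach described in the abstract more concrete, the following is a minimal sketch (not the authors' implementation) of an MPC loop that uses the Cross-Entropy Method to select variable impedance parameters from an ensemble of learned dynamics networks. A PyTorch-style model is assumed; all class names, state and parameter dimensions, bounds, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the MBRL scheme summarized in the abstract: an ensemble of
# small neural networks models the human-robot interaction dynamics, and a
# CEM-based MPC picks the stiffness/damping impedance parameters that minimize
# the predicted interaction force. Names, dimensions, and ranges are assumptions.
import numpy as np
import torch
import torch.nn as nn

class DynamicsNet(nn.Module):
    """One ensemble member: predicts the next interaction state from the
    current state and the chosen impedance parameters (stiffness, damping)."""
    def __init__(self, state_dim, param_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + param_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, params):
        return self.net(torch.cat([state, params], dim=-1))

def cem_plan(ensemble, state, horizon=5, pop=100, elites=10, iters=4,
             param_dim=2, param_low=(100.0, 10.0), param_high=(2000.0, 200.0)):
    """Cross-Entropy Method over sequences of impedance parameters.
    The cost is the interaction force predicted by the ensemble (averaged over
    members, so model uncertainty is taken into account)."""
    low, high = np.array(param_low), np.array(param_high)
    mean = np.tile((low + high) / 2.0, (horizon, 1))
    std = np.tile((high - low) / 4.0, (horizon, 1))
    for _ in range(iters):
        # Sample candidate parameter sequences, clipped to the admissible range.
        samples = np.clip(np.random.normal(mean, std, (pop, horizon, param_dim)),
                          low, high)
        costs = np.zeros(pop)
        for i in range(pop):
            s = torch.tensor(state, dtype=torch.float32)
            for t in range(horizon):
                p = torch.tensor(samples[i, t], dtype=torch.float32)
                with torch.no_grad():
                    # Average the ensemble predictions to get the next state.
                    s = torch.stack([m(s, p) for m in ensemble]).mean(dim=0)
                # Assume the last state entry is the human-robot interaction force.
                costs[i] += abs(float(s[-1]))
        elite = samples[np.argsort(costs)[:elites]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    # MPC: apply only the first set of impedance parameters, then replan.
    return mean[0]
```

At each control step, a routine like cem_plan would be called with the current interaction state, the first optimized stiffness/damping pair would be sent to the robot's Cartesian impedance controller, and the ensemble would be retrained on the newly collected data, mirroring the online model update described in the abstract.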
Pages: 417-433
Number of pages: 16
Related Papers
50 records in total
  • [1] Model-Based Reinforcement Learning Variable Impedance Control for Human-Robot Collaboration
    Roveda, Loris
    Maskani, Jeyhoon
    Franceschi, Paolo
    Abdi, Arash
    Braghin, Francesco
    Tosatti, Lorenzo Molinari
    Pedrocchi, Nicola
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2020, 100 (02) : 417 - 433
  • [2] Reinforcement Learning Based Variable Impedance Control for High Precision Human-robot Collaboration Tasks
    Meng, Yan
    Su, Jianhua
    Wu, Jiaxi
    2021 6TH IEEE INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2021), 2021, : 560 - 565
  • [3] Shared Impedance Control Based on Reinforcement Learning in a Human-Robot Collaboration Task
    Wu, Min
    He, Yanhao
    Liu, Steven
    ADVANCES IN SERVICE AND INDUSTRIAL ROBOTICS, 2020, 980 : 95 - 103
  • [4] Q-Learning-based model predictive variable impedance control for physical human-robot collaboration
    Roveda, Loris
    Testa, Andrea
    Shahid, Asad Ali
    Braghin, Francesco
    Piga, Dario
    ARTIFICIAL INTELLIGENCE, 2022, 312
  • [5] Variable Impedance Control with Simplex Gradient based Iterative Learning for Human-Robot Collaboration
    Tran Duc Liem
    Yashima, Masahito
    Yamawaki, Tasuku
    Horade, Mitsuhiro
    2022 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII 2022), 2022, : 351 - 354
  • [6] Q-Learning-Based Model Predictive Variable Impedance Control for Physical Human-Robot Collaboration (Extended Abstract)
    Roveda, Loris
    Testa, Andrea
    Shahid, Asad Ali
    Braghin, Francesco
    Piga, Dario
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 6959 - 6963
  • [7] Explainable Reinforcement Learning for Human-Robot Collaboration
    Iucci, Alessandro
    Hata, Alberto
    Terra, Ahmad
    Inam, Rafia
    Leite, Iolanda
    2021 20TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS (ICAR), 2021, : 927 - 934
  • [8] Motion Planning for Human-Robot Collaboration based on Reinforcement Learning
    Yu, Tian
    Chang, Qing
    2022 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2022, : 1866 - 1871
  • [9] Iterative Learning of Variable Impedance Control for Human-Robot Cooperation
    Yamawaki, Tasuku
    Ishikawa, Hiroki
    Yashima, Masahito
    2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2016), 2016, : 839 - 844
  • [10] A Model-Based Human Activity Recognition for Human-Robot Collaboration
    Lee, Sang Uk
    Hofmann, Andreas
    Williams, Brian
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 736 - 743