Model-based Reinforcement Learning for Sim-to-Real Transfer in Robotics using HTM neural networks

Cited by: 0
Authors
Diprasetya, M. R. [1 ]
Pullani, A. N. [1 ]
Schwung, D. [2 ]
Schwung, A. [1 ]
Affiliations
[1] South Westphalia Univ Appl Sci, Dept Automat Technol & Learning Syst, Soest, Germany
[2] Hsch Dusseldorf Univ Appl Sci, Dept Artificial Intelligence & Data Sci, Dusseldorf, Germany
Keywords
Model-based reinforcement learning; Homogeneous transformation matrix; Robotics; Sim-to-real transfer
DOI
10.1109/CoDIT62066.2024.10708424
CLC Classification Number
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
In this work, we propose a novel approach based on model-based Reinforcement Learning (RL) for the sim-to-real transfer of industrial robots. Specifically, we employ a recently developed kinematics-informed, modular neural network as a learnable environment model within the world-model framework. The kinematics-informed model makes training of the world model more efficient, resulting in faster convergence. Furthermore, the approach allows industrial robots to be trained on specific tasks entirely within a simulation of the system, saving time and energy. Training in simulation ensures safe and controlled execution and permits parallelization to increase training speed. We conduct various experiments that underline the effectiveness of the proposed method, showing that training the RL algorithm solely in simulation results in a 100% task completion rate in both simulation and real-world experiments.
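The paper's kinematics-informed environment model is not detailed in this record, but the "Homogeneous transformation matrix" keyword points to standard robot forward kinematics: chaining one 4x4 homogeneous transform per joint to obtain the end-effector pose, which a kinematics-informed network can embed as fixed structure instead of learning it from data. A minimal sketch of that composition, using standard Denavit-Hartenberg parameters (the parameter values below are illustrative, not from the paper):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """4x4 homogeneous transformation matrix for one joint,
    built from standard Denavit-Hartenberg parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Compose per-joint transforms into the end-effector pose.

    joint_angles: theta value per joint (radians)
    dh_params:    list of (d, a, alpha) tuples, one per joint
    Returns the 4x4 pose of the end effector in the base frame.
    """
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Illustrative 2-link planar arm: alpha = d = 0, link lengths 1.0 and 0.5.
# With the first joint at 90 degrees, the end effector sits at (0, 1.5).
params = [(0.0, 1.0, 0.0), (0.0, 0.5, 0.0)]
pose = forward_kinematics([np.pi / 2, 0.0], params)
```

In a modular, kinematics-informed design, learnable components (e.g. for contact dynamics or actuator behavior) would sit alongside this analytic chain rather than replace it, which is what makes world-model training more sample-efficient.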
Pages: 43-48
Page count: 6
Related Papers
50 items in total
  • [21] Reinforcement Learning and Sim-to-Real Transfer of Reorientation and Landing Control for Quadruped Robots on Asteroids
    Qi, Ji
    Gao, Haibo
    Su, Huanli
    Huo, Mingying
    Yu, Haitao
    Deng, Zongquan
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2024, 71 (11) : 14392 - 14400
  • [22] Dual Action Policy for Robust Sim-to-Real Reinforcement Learning
    Terence, Ng Wen Zheng
    Chen Jianda
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT IV, 2024, 15019 : 369 - 380
  • [23] Torque-Based Deep Reinforcement Learning for Task-and-Robot Agnostic Learning on Bipedal Robots Using Sim-to-Real Transfer
    Kim, Donghyeon
    Berseth, Glen
    Schwartz, Mathew
    Park, Jaeheung
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (10) : 6251 - 6258
  • [24] Sim-to-Real Control of Trifinger Robot by Deep Reinforcement Learning
    Wan, Qiang
    Wu, Tianyang
    Ye, Jiawei
    Wan, Lipeng
    Lau, Xuguang
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2024, PT VI, 2025, 15206 : 300 - 314
  • [25] Dynamic Bipedal Turning through Sim-to-Real Reinforcement Learning
    Yu, Fangzhou
    Batke, Ryan
    Dao, Jeremy
    Hurst, Jonathan
    Green, Kevin
    Fern, Alan
    2022 IEEE-RAS 21ST INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2022, : 903 - 910
  • [26] Model-Based Reinforcement Learning in Robotics: A Survey
    Sun S.
    Lan X.
    Zhang H.
    Zheng N.
    Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2022, 35 (01): : 1 - 16
  • [27] Sim-to-real transfer in reinforcement learning-based, non-steady-state control for chemical plants
    Kubosawa S.
    Onishi T.
    Tsuruoka Y.
    SICE Journal of Control, Measurement, and System Integration, 2022, 15 (01) : 10 - 23
  • [28] Sim-to-real transfer reinforcement learning for control of thermal effects of an atmospheric pressure plasma jet
    Witman, Matthew
    Gidon, Dogan
    Graves, David B.
    Smit, Berend
    Mesbah, Ali
    PLASMA SOURCES SCIENCE & TECHNOLOGY, 2019, 28 (09):
  • [29] Real-Time Parameter Control for Trajectory Generation Using Reinforcement Learning With Zero-Shot Sim-to-Real Transfer
    Ji, Chang-Hun
    Lim, Gyeonghun
    Han, Youn-Hee
    Moon, Sungtae
    IEEE ACCESS, 2024, 12 : 171662 - 171674
  • [30] Reinforcement Learning-based Sim-to-Real Impedance Parameter Tuning for Robotic Assembly
    Kim, Yong-Geon
    Na, Minwoo
    Song, Jae-Bok
    2021 21ST INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2021), 2021, : 833 - 836