Offline Reinforcement Learning of Robotic Control Using Deep Kinematics and Dynamics

Cited by: 1
Authors
Li, Xiang [1]
Shang, Weiwei [1]
Cong, Shuang [1]
Affiliations
[1] Univ Sci & Technol China, Dept Automat, Hefei 230027, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Computed-torque controller; kinematic and dynamic model learning; model-based reinforcement learning (MBRL); robotic control; trajectory tracking; neural networks
DOI
10.1109/TMECH.2023.3336316
CLC Number (Chinese Library Classification)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
With the rapid development of deep learning, model-free reinforcement learning algorithms have achieved remarkable results in many fields. However, their high sample complexity and the risk of damaging environments and robots pose severe challenges to real-world deployment. Model-based reinforcement learning algorithms are often used to reduce sample complexity, but they inevitably suffer from modeling errors. While black-box models can fit complex state-transition dynamics, they ignore existing knowledge from physics and robotics, in particular the kinematic and dynamic models of robotic manipulators. Compared with black-box models, physics-inspired deep models yield interpretable kinematic and dynamic models without requiring system-specific knowledge. In model-based reinforcement learning, these models can simulate the motion and, because they share the same form as the traditional analytical models, can be combined with classical controllers to achieve higher-precision tracking. In this work, we utilize physics-inspired deep models to learn the kinematics and dynamics of a robotic manipulator, and we propose a model-based offline reinforcement learning algorithm that learns the parameters of a traditional computed-torque controller. Experiments on trajectory tracking control of the Baxter manipulator, in both joint and operational space, are conducted in simulation and on the real robot. The results demonstrate that our algorithm significantly improves tracking accuracy and exhibits strong generalization and robustness.
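The structural point the abstract relies on is that a physics-inspired deep model produces the same terms as the classical rigid-body dynamics, M(q), C(q, q_dot), and g(q), so it can be dropped directly into a computed-torque law whose feedback gains are then learned offline. The sketch below is not code from the paper; it only illustrates that interface under the assumption that the learned model is exposed as callables M_hat, C_hat, and g_hat, and the gain matrices Kp and Kd stand in for the RL-tuned controller parameters. All of these names are hypothetical placeholders.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M_hat, C_hat, g_hat, Kp, Kd):
    """Computed-torque control with a learned dynamics model.

    tau = M_hat(q) (qdd_des + Kd e_dot + Kp e) + C_hat(q, qd) qd + g_hat(q),
    with e = q_des - q and e_dot = qd_des - qd.

    M_hat, C_hat, g_hat: callables standing in for the learned, physics-structured
    model terms (hypothetical interface, not the paper's API).
    Kp, Kd: gain matrices, here playing the role of the parameters that the
    offline RL stage would tune.
    """
    e = q_des - q          # joint position error
    e_dot = qd_des - qd    # joint velocity error
    # Feedback-linearizing acceleration reference
    a = qdd_des + Kd @ e_dot + Kp @ e
    # Inverse dynamics evaluated with the learned model terms
    return M_hat(q) @ a + C_hat(q, qd) @ qd + g_hat(q)


if __name__ == "__main__":
    n = 7  # e.g., one 7-DoF Baxter arm
    # Stand-in "learned" model: identity inertia, no Coriolis, no gravity.
    M_hat = lambda q: np.eye(n)
    C_hat = lambda q, qd: np.zeros((n, n))
    g_hat = lambda q: np.zeros(n)
    Kp, Kd = 50.0 * np.eye(n), 10.0 * np.eye(n)  # gains an offline RL stage would tune

    q, qd = np.zeros(n), np.zeros(n)
    q_des, qd_des, qdd_des = 0.1 * np.ones(n), np.zeros(n), np.zeros(n)
    print(computed_torque(q, qd, q_des, qd_des, qdd_des, M_hat, C_hat, g_hat, Kp, Kd))
```

Because the learned terms keep the analytical structure of the rigid-body model, the usual computed-torque machinery (feedback linearization plus PD gains) applies unchanged, which is the property the abstract credits for the higher-precision tracking.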
Pages: 2428-2439
Number of pages: 12
Related Papers
50 records in total
  • [1] Foveation Control of a Robotic Eye Using Deep Reinforcement Learning
    Rajendran, Sunil Kumar
    Wei, Qi
    Zhang, Feitian
    [J]. PROCEEDINGS OF THE ASME 11TH ANNUAL DYNAMIC SYSTEMS AND CONTROL CONFERENCE, 2018, VOL 1, 2018,
  • [2] Robotic Grasping using Deep Reinforcement Learning
    Joshi, Shirin
    Kumra, Sulabh
    Sahin, Ferat
    [J]. 2020 IEEE 16TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2020, : 1461 - 1466
  • [3] Autonomous Building Control Using Offline Reinforcement Learning
    Schepers, Jorren
    Eyckerman, Reinout
    Elmaz, Furkan
    Casteels, Wim
    Latre, Steven
    Hellinckx, Peter
    [J]. ADVANCES ON P2P, PARALLEL, GRID, CLOUD AND INTERNET COMPUTING, 3PGCIC-2021, 2022, 343 : 246 - 255
  • [4] Warfarin Dose Management Using Offline Deep Reinforcement Learning
    Ji, Hannah
    Gill, Matthew F.
    Draper, Evan W.
    Liedl, David A.
    Hodge, David O.
    Houghton, Damon E.
    Casanegra, Ana I.
    [J]. CIRCULATION, 2023, 148
  • [5] Robotic Control of the Deformation of Soft Linear Objects Using Deep Reinforcement Learning
    Zakaria, Melodie Hani Daniel
    Aranda, Miguel
    Lequievre, Laurent
    Lengagne, Sebastien
    Corrales Ramon, Juan Antonio
    Mezouar, Youcef
    [J]. 2022 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2022, : 1516 - 1522
  • [6] Offline reinforcement learning with Anderson acceleration for robotic tasks
    Zuo, Guoyu
    Huang, Shuai
    Li, Jiangeng
    Gong, Daoxiong
    [J]. APPLIED INTELLIGENCE, 2022, 52 (09) : 9885 - 9898
  • [7] RMBench: Benchmarking Deep Reinforcement Learning for Robotic Manipulator Control
    Xiang, Yanfei
    Wang, Xin
    Hu, Shu
    Zhu, Bin
    Huang, Xiaomeng
    Wu, Xi
    Lyu, Siwei
    [J]. 2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, IROS, 2023, : 1207 - 1214
  • [8] A maintenance planning framework using online and offline deep reinforcement learning
    Bukhsh, Zaharah A.
    Molegraaf, Hajo
    Jansen, Nils
    [J]. NEURAL COMPUTING & APPLICATIONS, 2023,
  • [9] Training Dynamic Motion Primitives using Deep Reinforcement Learning to Control a Robotic Tadpole
    Hameed, Imran
    Chao, Xu
    Navarro-Alarcon, David
    Jing, Xingjian
    [J]. 2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 6881 - 6887