Development of Push-Recovery control system for humanoid robots using deep reinforcement learning

Cited by: 6
Authors
Aslan, Emrah [1 ]
Arserim, Muhammet Ali [2 ]
Ucar, Aysegul [3 ]
Affiliations
[1] Dicle University, Silvan Vocational School, Diyarbakir, Türkiye
[2] Dicle University, Faculty of Engineering, Diyarbakir, Türkiye
[3] Firat University, Faculty of Engineering, Elazig, Türkiye
Keywords
Deep reinforcement learning; Deep Q Network (DQN); Double Deep Q Network (DDQN); Humanoid robot; Robotis OP2; Push-recovery
DOI
10.1016/j.asej.2023.102167
CLC Classification Number
T [Industrial Technology]
Subject Classification Code
08
Abstract
This paper focuses on the push-recovery problem of bipedal humanoid robots subjected to external forces and pushes. Because humanoid robots are structurally unstable, balance is their most critical problem. Our purpose is to design and implement a fully autonomous push-recovery control system that imitates human responses. An active balance controller is presented that keeps the robot balanced while standing or walking and counteracts disturbances caused by external forces. Push-recovery controllers employ three strategies: the ankle strategy, the hip strategy, and the step strategy. These strategies correspond to the biomechanical responses humans exhibit when their balance is disturbed. In our application, both simulation and real-world tests were performed. The simulation tests were carried out with 3D models in the Webots environment, and the real-world tests were performed on the Robotis-OP2 humanoid robot. External pushing forces were applied to the robot while gyroscope, accelerometer, and motor data were recorded from its sensors; balance was then restored using the recorded data and the ankle strategy. To make the robot completely autonomous, the Deep Q Network (DQN) and Double Deep Q Network (DDQN) methods from the family of Deep Reinforcement Learning (DRL) algorithms were applied. The DDQN algorithm yielded 21.03% more successful results than the DQN algorithm, and the results obtained in the real-environment tests were consistent with the simulation results.
(c) 2023 THE AUTHORS. Published by Elsevier B.V. on behalf of Faculty of Engineering, Ain Shams University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
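The 21.03% gap reported above rests on the one algorithmic difference between DQN and DDQN: how the bootstrapped TD target is computed. The sketch below illustrates that difference; the two-layer network, the six-dimensional state (e.g. gyroscope/accelerometer features), and the three discrete actions (e.g. ankle-torque bins) are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of the DQN vs. Double DQN (DDQN) target computation.
# All sizes below are assumed for illustration, not taken from the paper.
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 6, 3, 0.99  # assumed: sensor features, ankle-torque bins

def make_q_net():
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

online_net, target_net = make_q_net(), make_q_net()
target_net.load_state_dict(online_net.state_dict())

def td_target(reward, next_state, done, double=True):
    """Bootstrapped TD target for a batch of transitions."""
    with torch.no_grad():
        if double:
            # DDQN: the online net SELECTS the next action, the target net
            # EVALUATES it, reducing the max-operator over-estimation bias.
            best_action = online_net(next_state).argmax(dim=-1, keepdim=True)
            next_q = target_net(next_state).gather(-1, best_action).squeeze(-1)
        else:
            # DQN: the target net both selects and evaluates (plain max).
            next_q = target_net(next_state).max(dim=-1).values
    return reward + gamma * next_q * (1.0 - done)

# Usage on a dummy batch of transitions:
s_next = torch.randn(4, state_dim)
r, done = torch.ones(4), torch.zeros(4)
print(td_target(r, s_next, done, double=False))  # DQN target
print(td_target(r, s_next, done, double=True))   # DDQN target
```

Decoupling action selection (online network) from action evaluation (target network) is what mitigates the over-estimation bias of plain DQN, which is consistent with the higher success rate the abstract reports for DDQN.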
Pages: 11
Related Papers
50 records in total
  • [21] Rebula, John; Canas, Fabian; Pratt, Jerry; Goswami, Ambarish. Learning Capture Points for Humanoid Push Recovery. HUMANOIDS 2007: 7th IEEE-RAS International Conference on Humanoid Robots, 2007: 65+.
  • [22] Tian, Chong; Shaik, Shahil; Wang, Yue. Deep reinforcement learning for shared control of mobile robots. IET Cyber-Systems and Robotics, 2021, 3(4): 315-330.
  • [23] Aslan, Emrah; Arserim, Muhammet Ali; Ucar, Aysegul. Comparison of push-recovery control methods for Robotics-OP2 using ankle strategy. Journal of the Faculty of Engineering and Architecture of Gazi University, 2024, 39(4): 2551-2566.
  • [24] Wang, Xiaoying; Zhang, Tong. Reinforcement learning with imitative behaviors for humanoid robots navigation: synchronous planning and control. Autonomous Robots, 2024, 48(2).
  • [25] Katic, Dusko; Rodic, Aleksandar; Bayro-Corrochano, Eduardo Jose. Hybrid Control Algorithm for Humanoid Robots Walking Based on Episodic Reinforcement Learning. 2012 World Automation Congress (WAC), 2012.
  • [26] Leottau, Leonardo; Celemin, Carlos; Ruiz-del-Solar, Javier. Ball Dribbling for Humanoid Biped Robots: A Reinforcement Learning and Fuzzy Control Approach. RoboCup 2014: Robot World Cup XVIII, 2015, 8992: 549-561.
  • [27] Semwal, Vijay Bhaskar; Mondal, Kaushik; Nandi, G. C. Robust and accurate feature selection for humanoid push recovery and classification: deep learning approach. Neural Computing and Applications, 2017, 28(3): 565-574.
  • [29] Dwiel, Zach; Candadai, Madhavun; Phielipp, Mariano. On Training Flexible Robots using Deep Reinforcement Learning. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019: 4666-4671.
  • [30] Yi, Seung-Joon; Zhang, Byoung-Tak; Hong, Dennis; Lee, Daniel D. Online Learning of Low Dimensional Strategies for High-Level Push Recovery in Bipedal Humanoid Robots. 2013 IEEE International Conference on Robotics and Automation (ICRA), 2013: 1649-1655.