Development of Push-Recovery control system for humanoid robots using deep reinforcement learning

Cited by: 6
Authors
Aslan, Emrah [1 ]
Arserim, Muhammet Ali [2 ]
Ucar, Aysegul [3 ]
Affiliations
[1] Dicle Univ, Silvan Vocat Sch, Diyarbakir, Turkiye
[2] Dicle Univ, Engn Fac, Diyarbakir, Turkiye
[3] Firat Univ, Engn Fac, Elazig, Turkiye
Keywords
Deep reinforcement learning; Deep Q Network (DQN); Double Deep Q Network (DDQN); Humanoid robot; Robotis OP2; Push-recovery; GENERATION
DOI
10.1016/j.asej.2023.102167
CLC Number
T [Industrial Technology]
Discipline Code
08
Abstract
This paper focuses on the push-recovery problem of bipedal humanoid robots subjected to external forces and pushes. Because humanoid robots are structurally unstable, balance is their most critical problem. Our purpose is to design and implement a completely autonomous push-recovery control system that can imitate human reactions. An active balance controller is presented that enables humanoid robots to remain balanced while standing or walking and to counteract disturbances caused by external forces. Push-recovery controllers combine three strategies: the ankle strategy, the hip strategy, and the step strategy. These strategies mirror the biomechanical responses that humans exhibit when their balance is disturbed. In our application, both simulation and real-world tests were performed. The simulation tests were carried out with 3D models in the Webots environment, and the real-world tests were performed on the Robotis-OP2 humanoid robot. Gyroscope, accelerometer, and motor data from the robot's sensors were recorded while an external pushing force was applied, and the robot's balance was restored using the recorded data and the ankle strategy. To make the robot completely autonomous, the Deep Q Network (DQN) and Double Deep Q Network (DDQN) methods from Deep Reinforcement Learning (DRL) were applied. The DDQN algorithm yielded 21.03% more successful results than the DQN algorithm, and the results obtained in the real-environment tests closely paralleled the simulation results. (c) 2023 THE AUTHORS. Published by Elsevier BV on behalf of Faculty of Engineering, Ain Shams University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
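The abstract contrasts the DQN and DDQN update rules; the sketch below illustrates the difference on toy data. It is a minimal illustration of the standard target computations, not the authors' implementation: the Q-value tables, reward values, and discount factor are invented placeholders standing in for the networks trained on the robot's gyroscope and accelerometer data.

```python
import random

random.seed(0)

# Toy Q-values for a batch of next states: one row per sample, one
# column per discrete action (e.g. candidate ankle-torque commands).
# These stand in for network outputs; the paper's actual DQN/DDQN
# networks and state encoding are not reproduced here.
q_online = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
q_target = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
rewards = [1.0, 0.5, -1.0, 1.0]   # e.g. +1 per step the robot stays balanced
gamma = 0.99                      # discount factor

def dqn_target(r, q_t):
    # DQN: the target network both selects and evaluates the greedy
    # action, which is known to overestimate action values.
    return r + gamma * max(q_t)

def ddqn_target(r, q_o, q_t):
    # DDQN: the online network selects the action, the target network
    # evaluates it, which reduces the overestimation bias.
    a = max(range(len(q_o)), key=q_o.__getitem__)
    return r + gamma * q_t[a]

for r, q_o, q_t in zip(rewards, q_online, q_target):
    # For any state, the DQN target is >= the DDQN target,
    # since max(q_t) >= q_t[a] for every action a.
    print(f"DQN: {dqn_target(r, q_t):+.3f}  DDQN: {ddqn_target(r, q_o, q_t):+.3f}")
```

The two rules differ only in which network picks the greedy action; decoupling selection from evaluation is what produced the 21.03% improvement the abstract reports for DDQN over DQN.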
Pages: 11
Related Papers (50 records)
  • [1] Learning a Push-Recovery Controller for Quadrupedal Robots
    Li, Peiyang
    Chen, Wei
    Han, Xinyu
    Zhao, Mingguo
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (IEEE-ROBIO 2021), 2021: 944-949
  • [2] Push Recovery Control for Humanoid Robot using Reinforcement Learning
    Seo, Donghyeon
    Kim, Harin
    Kim, Donghan
    [J]. 2019 THIRD IEEE INTERNATIONAL CONFERENCE ON ROBOTIC COMPUTING (IRC 2019), 2019: 488-492
  • [3] Learning Push Recovery Behaviors for Humanoid Walking Using Deep Reinforcement Learning
    Melo, Dicksiano C.
    Maximo, Marcos R. O. A.
    da Cunha, Adilson Marques
    [J]. JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2022, 106
  • [4] Learning Push Recovery Behaviors for Humanoid Walking Using Deep Reinforcement Learning
    Melo, Dicksiano C.
    Maximo, Marcos R. O. A.
    da Cunha, Adilson Marques
    [J]. JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2022, 106(01)
  • [5] Learning Full Body Push Recovery Control for Small Humanoid Robots
    Yi, Seung-Joon
    Zhang, Byoung-Tak
    Hong, Dennis
    Lee, Daniel D.
    [J]. 2011 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2011: 2047-2052
  • [6] Reactive Stepping for Humanoid Robots using Reinforcement Learning: Application to Standing Push Recovery on the Exoskeleton Atalante
    Duburcq, Alexis
    Schramm, Fabian
    Boeris, Guilhem
    Bredeche, Nicolas
    Chevaleyre, Yann
    [J]. 2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022: 9302-9309
  • [7] Push-recovery strategies implemented on a compliant humanoid robot
    Hyon, Sang-Ho
    Osu, Rieko
    Otaka, Yohei
    Morimoto, Jun
    [J]. NEUROSCIENCE RESEARCH, 2009, 65: S183
  • [8] Learning to Move an Object by the Humanoid Robots by Using Deep Reinforcement Learning
    Aslan, Simge Nur
    Tasci, Burak
    Ucar, Aysegul
    Guzelis, Cuneyt
    [J]. INTELLIGENT ENVIRONMENTS 2021, 2021, 29: 143-155
  • [9] On the Emergence of Whole-Body Strategies From Humanoid Robot Push-Recovery Learning
    Ferigo, Diego
    Camoriano, Raffaello
    Viceconte, Paolo Maria
    Calandriello, Daniele
    Traversaro, Silvio
    Rosasco, Lorenzo
    Pucci, Daniele
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6(04): 8561-8568
  • [10] Visual Navigation for Biped Humanoid Robots Using Deep Reinforcement Learning
    Lobos-Tsunekawa, Kenzo
    Leiva, Francisco
    Ruiz-del-Solar, Javier
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, 3(04): 3247-3254