Intelligent Posture Control of Humanoid Robot in Variable Environment

Cited by: 1
Authors
Shi Q. [1 ]
Lü L. [1 ]
Xie J. [1 ]
Affiliations
[1] School of Mechanical and Electrical Engineering and Automation, Shanghai University, Shanghai
Keywords
Bipedal walking; Deep reinforcement learning; Motion control
DOI
10.3901/JME.2020.03.064
Abstract
To address the motion instability of humanoid robots on variable, uncertain, unstructured terrain and the low accuracy of their motion control, an intelligent posture control algorithm is proposed. Deep reinforcement learning over continuous action and state spaces is applied to posture control, and an intelligent posture controller for humanoid robot motion is established. To overcome the small sample size and low efficiency of training on the physical prototype, an identified robot model is used to pre-train the posture controller offline; the pre-trained controller then serves as prior knowledge for continued learning in the real physical environment, improving training efficiency in the later stage. The optimized posture controller is applied to the motion control of the robot. Compared with robot motion under PID, MPC, and PID+MPC controllers, the standard deviation of the tracking error of the upper-body pitch posture trajectory is reduced by 60.97%, 46.36%, and 23.98%, respectively, in the environmental-transition walking test. In the ground-obstacle walking test, the standard deviations of the tracking errors of the upper-body pitch posture trajectory are reduced by 60.38%, 26.38%, and 9.52%, respectively. © 2020 Journal of Mechanical Engineering.
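The record does not include the authors' training code. As a minimal illustrative sketch of the pre-training idea in the abstract — using an identified model as a cheap offline environment before learning on the real robot — the following stands in a hand-picked linearized pitch model (`theta'' = a*theta + b*u`) and a simple random-search policy improvement for the paper's deep reinforcement learning method; the constants `a`, `b`, the linear policy form, and the search routine are all assumptions, not the published algorithm.

```python
import numpy as np

def rollout(gains, steps=200, dt=0.01, a=9.0, b=1.0, theta0=0.2):
    """Simulate an assumed identified linearized pitch model
    theta'' = a*theta + b*u under the linear posture policy
    u = -k1*theta - k2*theta_dot (continuous action), and return
    the accumulated squared pitch error (cost of tracking upright)."""
    k1, k2 = gains
    th, thd = theta0, 0.0
    cost = 0.0
    for _ in range(steps):
        u = -k1 * th - k2 * thd      # continuous control action
        thdd = a * th + b * u        # identified model dynamics
        thd += thdd * dt             # explicit Euler integration
        th += thd * dt
        cost += th * th
    return cost

def pretrain(gains=(0.0, 0.0), iters=300, sigma=0.5, seed=0):
    """Offline pre-training on the identified model: greedy random
    search that keeps a perturbed gain vector only when it lowers
    the rollout cost, so the result is never worse than the start."""
    rng = np.random.default_rng(seed)
    gains = np.asarray(gains, dtype=float)
    best = rollout(gains)
    for _ in range(iters):
        cand = gains + sigma * rng.standard_normal(2)
        c = rollout(cand)
        if c < best:
            gains, best = cand, c
    return gains, best
```

In this sketch the pre-trained `gains` would play the role of the prior knowledge transferred to learning on the physical robot; the paper's actual controller is a deep network over continuous states and actions rather than a two-gain linear policy.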
Pages: 64-72 (8 pages)
References (20 entries)
  • [1] Reher J., Cousineau E.A., Hereid A., et al., Realizing dynamic and efficient bipedal locomotion on the humanoid robot DURUS, IEEE International Conference on Robotics and Automation, pp. 1794-1801, (2016)
  • [2] Ding C., Study on dynamic response characteristics of planar biped robot under randomly uncertain disturbance, (2016)
  • [3] Zhou S., Song G., Ren Z., et al., Nonlinear dynamic analysis of coupled gear-rotor-bearing system with the effect of internal and external excitations, Chinese Journal of Mechanical Engineering, 30, 2, pp. 281-292, (2016)
  • [4] Chen Q., Research and implementation of reinforcement learning on walking stability control of humanoid robots, (2016)
  • [5] Wang W., Xiao S., Meng X., et al., Agent-based hierarchical reinforcement learning model and architecture, Journal of Mechanical Engineering, 46, 2, pp. 76-82, (2010)
  • [6] Shao S., Sun W., Yan R., et al., A deep learning approach for fault diagnosis of induction motors in manufacturing, Chinese Journal of Mechanical Engineering, 30, 6, pp. 1347-1356, (2017)
  • [7] Hou W., Ye M., Li W., Fault classification of rolling bearings based on improved stack noise reduction self-coding, Journal of Mechanical Engineering, 54, 7, pp. 87-96, (2018)
  • [8] Hwang K.S., Lin J.L., Li J.S., Biped balance control by reinforcement learning, Journal of Information Science and Engineering, 32, 4, pp. 1041-1060, (2016)
  • [9] Silva I.J., Perico D.H., Homem T.P.D., et al., Using reinforcement learning to improve the stability of a humanoid robot: Walking on sloped terrain, 2015 12th Latin American Robotics Symposium (LARS) and 2015 3rd Brazilian Symposium on Robotics (LARS-SBR), pp. 210-215, (2015)
  • [10] Wu W., Gao L., Posture self-stabilizer of a biped robot based on training platform and reinforcement learning, Robotics and Autonomous Systems, 98, pp. 42-55, (2017)