Hierarchical Multicontact Motion Planning of Hexapod Robots With Incremental Reinforcement Learning

Cited: 0
|
Authors
Tang, Kaiqiang [1 ]
Fu, Huiqiao [1 ]
Deng, Guizhou [2 ]
Wang, Xinpeng [2 ]
Chen, Chunlin [1 ]
Affiliations
[1] Nanjing Univ, Sch Management & Engn, Dept Control Sci & Intelligent Engn, Nanjing 210093, Peoples R China
[2] Southwest Univ Sci & Technol, Dept Proc Equipment & Control Engn, Sch Mfg Sci & Engn, Mianyang 621000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Planning; Robots; Legged locomotion; Dynamics; Heuristic algorithms; Kinematics; Trajectory; Dynamic environments; incremental reinforcement learning (IRL); legged locomotion; multicontact motion planning; unstructured environments; NAVIGATION;
DOI
10.1109/TCDS.2023.3345539
Chinese Library Classification (CLC) Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Legged locomotion in unstructured environments with static and dynamic obstacles is challenging. This article proposes a novel hierarchical multicontact motion planning method with incremental reinforcement learning (HMC-IRL) that enables hexapod robots to traverse large-scale, discrete, complex unstructured environments in which local changes occur. First, a novel hierarchical structure and an information fusion mechanism are developed to decompose multicontact motion planning into two stages: planning the high-level prior grid path, and planning the low-level detailed center of mass (COM) and foothold sequences based on that prior grid path. Second, the HMC-IRL method's incremental architecture enables swift adaptation to local changes in the environment; it comprises an incremental soft Q-learning (ISQL) algorithm to obtain the optimal prior grid path and an incremental proximal policy optimization (IPPO) algorithm to obtain the COM and foothold sequences in the dynamic plum blossom pile environment. Finally, the integrated HMC-IRL method is tested on both simulated and real systems. All experimental results demonstrate the feasibility and efficiency of the proposed method. Videos are available at http://www.hexapod.cn/hmcirl.html.
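The abstract's high-level stage obtains a prior grid path via soft Q-learning. As a hedged illustration of that underlying idea only (a plain tabular soft value iteration on a hypothetical 4x4 occupancy grid; this is not the authors' ISQL algorithm and omits the incremental update), the grid-path stage might be sketched as:

```python
import numpy as np

def soft_value_iteration(grid, alpha=0.1, gamma=0.95, iters=200):
    """Tabular soft value iteration on a 2-D occupancy grid.

    grid: 2-D array, 0 = free cell, 1 = obstacle.
    Returns soft Q-values of shape (rows, cols, 4) for moves U/D/L/R.
    Reaching the bottom-right free cell ends the episode; every other
    step costs -1, so the greedy path is a shortest grid path.
    """
    rows, cols = grid.shape
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    goal = (rows - 1, cols - 1)
    Q = np.zeros((rows, cols, 4))
    for _ in range(iters):
        # Soft state value: V(s) = alpha * log sum_a exp(Q(s, a) / alpha)
        with np.errstate(divide="ignore"):  # obstacle cells underflow to -inf
            V = alpha * np.log(np.sum(np.exp(Q / alpha), axis=2))
        V[goal] = 0.0        # terminal state contributes no future cost
        V[grid == 1] = -1e6  # obstacles are (effectively) absorbing dead ends
        for a, (dr, dc) in enumerate(moves):
            for r in range(rows):
                for c in range(cols):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == 0:
                        Q[r, c, a] = -1.0 + gamma * V[nr, nc]
                    else:
                        Q[r, c, a] = -1e6  # off-grid or blocked move
    return Q

def greedy_path(grid, Q, start=(0, 0), max_steps=100):
    """Follow argmax-Q actions from start toward the bottom-right goal."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    goal = (grid.shape[0] - 1, grid.shape[1] - 1)
    path, pos = [start], start
    for _ in range(max_steps):
        if pos == goal:
            break
        dr, dc = moves[int(np.argmax(Q[pos]))]
        pos = (pos[0] + dr, pos[1] + dc)
        path.append(pos)
    return path

grid = np.array([[0, 0, 0, 0],
                 [1, 1, 0, 1],
                 [0, 0, 0, 0],
                 [0, 1, 1, 0]])
Q = soft_value_iteration(grid)
path = greedy_path(grid, Q)
print(path)  # a collision-free path from (0, 0) to (3, 3)
```

As the temperature `alpha` approaches zero, the soft backup reduces to standard value iteration; an incremental variant in the spirit of the paper would warm-start `Q` from the previous solution after a local change to `grid` instead of replanning from scratch.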
Pages: 1327-1341
Number of pages: 15
Related Papers
50 records in total
  • [31] Harnessing Reinforcement Learning for Neural Motion Planning
    Jurgenson, Tom
    Tamar, Aviv
    ROBOTICS: SCIENCE AND SYSTEMS XV, 2019,
  • [32] Federated reinforcement learning for generalizable motion planning
    Yuan, Zhenyuan
    Xu, Siyuan
    Zhu, Minghui
    2023 AMERICAN CONTROL CONFERENCE, ACC, 2023, : 78 - 83
  • [33] An incremental learning approach to motion planning with roadmap management
    Li, TY
    Shie, YC
    2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS I-IV, PROCEEDINGS, 2002, : 3411 - 3416
  • [34] An incremental learning approach to motion planning with roadmap management
    Li, Tsai-Yen
    Shie, Yang-Chuan
    JOURNAL OF INFORMATION SCIENCE AND ENGINEERING, 2007, 23 (02) : 525 - 538
  • [35] Motion generation of virtual human with hierarchical reinforcement learning
    Mukai, T
    Kuriyama, S
    Kaneko, T
    ELECTRONICS AND COMMUNICATIONS IN JAPAN PART III-FUNDAMENTAL ELECTRONIC SCIENCE, 2004, 87 (11): : 34 - 43
  • [36] Hierarchical reinforcement learning for transportation infrastructure maintenance planning
    Hamida, Zachary
    Goulet, James-A.
    RELIABILITY ENGINEERING & SYSTEM SAFETY, 2023, 235
  • [37] Combined Reinforcement Learning and CPG Algorithm to Generate Terrain-Adaptive Gait of Hexapod Robots
    Li, Daxian
    Wei, Wu
    Qiu, Zhiying
    ACTUATORS, 2023, 12 (04)
  • [38] A LEARNING FUZZY ALGORITHM FOR MOTION PLANNING OF MOBILE ROBOTS
    WU, CJ
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 1994, 11 (03) : 209 - 221
  • [39] Learning fuzzy algorithm for motion planning of mobile robots
    Wu, Chia-Ju
    Journal of Intelligent and Robotic Systems: Theory and Applications, 1994, 11 (03): : 209 - 221
  • [40] Motion Profile Optimization in Industrial Robots using Reinforcement Learning
    Wen, Yunshi
    He, Honglu
    Julius, Agung
    Wen, John T.
    2023 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS, AIM, 2023, : 1309 - 1316