Multi-Modal Legged Locomotion Framework With Automated Residual Reinforcement Learning

Cited by: 6
Authors
Yu, Chen [1 ]
Rosendo, Andre [1 ]
Affiliations
[1] ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai 201210, Peoples R China
Source
IEEE Robotics and Automation Letters
Keywords
Evolutionary robotics; legged robots; multi-modal locomotion; reinforcement learning; HUMANOID ROBOTS; OPTIMIZATION; WALKING; DESIGN;
DOI
10.1109/LRA.2022.3191071
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
While quadruped robots usually offer good stability and load capacity, bipedal robots provide greater flexibility and adaptability to different tasks and environments. A multi-modal legged robot can take the best of both worlds. In this paper, we propose a multi-modal locomotion framework composed of a hand-crafted transition motion and a learning-based bipedal controller learned with a novel algorithm called Automated Residual Reinforcement Learning. This framework aims to endow arbitrary quadruped robots with the ability to walk bipedally. In particular, we 1) design an additional supporting structure for a quadruped robot and a sequential multi-modal transition strategy; and 2) propose a novel class of Reinforcement Learning algorithms for bipedal control and evaluate their performance in both simulation and the real world. Experimental results show that our proposed algorithms perform best in simulation and maintain good performance on a real-world robot. Overall, our multi-modal robot can successfully switch between bipedal and quadrupedal modes and walk in both.
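The central technical idea named in the abstract is residual reinforcement learning: a learned policy outputs only a corrective term that is added to a fixed, hand-crafted base controller. The sketch below illustrates that action composition in Python. It is a minimal illustration under assumed interfaces, not the paper's implementation; names such as `base_controller` and `ResidualPolicy` are placeholders chosen for this example.

```python
import numpy as np

# Minimal sketch of residual-RL action composition: a fixed base controller
# supplies a nominal joint command, and a learned policy adds a small,
# bounded correction on top. All names and dimensions here are illustrative.

def base_controller(obs: np.ndarray) -> np.ndarray:
    """Hand-crafted nominal bipedal command, e.g. a scripted gait (placeholder)."""
    return np.zeros(12)  # 12 actuated joints, as an example

class ResidualPolicy:
    """Tiny linear policy standing in for the learned residual controller."""
    def __init__(self, obs_dim: int, act_dim: int, scale: float = 0.1):
        self.W = np.zeros((act_dim, obs_dim))  # parameters updated by the RL algorithm
        self.scale = scale                     # keeps the residual a small correction

    def __call__(self, obs: np.ndarray) -> np.ndarray:
        return self.scale * np.tanh(self.W @ obs)

def composed_action(obs: np.ndarray, policy: ResidualPolicy) -> np.ndarray:
    # Residual RL: final command = hand-crafted base action + learned correction.
    return base_controller(obs) + policy(obs)

if __name__ == "__main__":
    obs = np.zeros(30)                      # placeholder observation vector
    policy = ResidualPolicy(obs_dim=30, act_dim=12)
    print(composed_action(obs, policy))     # equals the base action until trained
```

Because the residual policy starts near zero, the composed command initially reproduces the hand-crafted behaviour, and the learner only has to discover corrections around it; this is the usual motivation for residual formulations in robot control.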
Pages
10312 - 10319 (8 pages)
Related Papers
50 items in total
  • [1] Safe Reinforcement Learning for Legged Locomotion
    Yang, Tsung-Yen
    Zhang, Tingnan
    Luu, Linda
    Ha, Sehoon
    Tan, Jie
    Yu, Wenhao
    [J]. 2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 2454 - 2461
  • [2] Reinforcement Learning of Single Legged Locomotion
    Fankhauser, Peter
    Hutter, Marco
    Gehring, Christian
    Bloesch, Michael
    Hoepflinger, Mark A.
    Siegwart, Roland
    [J]. 2013 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2013, : 188 - 193
  • [3] MMDP: A Mobile-IoT Based Multi-Modal Reinforcement Learning Service Framework
    Wang, Puming
    Yang, Laurence T.
    Li, Jintao
    Li, Xue
    Zhou, Xiaokang
    [J]. IEEE TRANSACTIONS ON SERVICES COMPUTING, 2020, 13 (04) : 675 - 684
  • [4] A unified framework for multi-modal federated learning
    Xiong, Baochen
    Yang, Xiaoshan
    Qi, Fan
    Xu, Changsheng
    [J]. NEUROCOMPUTING, 2022, 480 : 110 - 118
  • [5] A Framework of Multi-modal Corpus for Mandarin Learning
    Liu, Yang
    Yang, Chunting
    [J]. 2009 IITA INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS ENGINEERING, PROCEEDINGS, 2009, : 476 - 479
  • [6] Rough-Terrain Locomotion and Unilateral Contact Force Regulations With a Multi-Modal Legged Robot
    Liang, Kaier
    Sihite, Eric
    Dangol, Pravin
    Lessieur, Andrew
    Ramezani, Alireza
    [J]. 2021 AMERICAN CONTROL CONFERENCE (ACC), 2021, : 1762 - 1769
  • [7] A Deep Reinforcement Learning Recommendation Model with Multi-modal Features
    Pan, Huali
    Xie, Jun
    Gao, Jing
    Xu, Xinying
    Wang, Changzheng
    [J]. DATA ANALYSIS AND KNOWLEDGE DISCOVERY, 2023, 7 (04) : 114 - 128
  • [8] Bayesian decomposition of multi-modal dynamical systems for reinforcement learning
    Kaiser, Markus
    Otte, Clemens
    Runkler, Thomas A.
    Ek, Carl Henrik
    [J]. NEUROCOMPUTING, 2020, 416 : 352 - 359
  • [9] Learning Efficient and Robust Multi-Modal Quadruped Locomotion: A Hierarchical Approach
    Xu, Shaohang
    Zhu, Lijun
    Ho, Chin Pang
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022, : 4649 - 4655
  • [10] Learning to Climb: Constrained Contextual Bayesian Optimisation on a Multi-Modal Legged Robot
    Yu, Chen
    Cao, Jinyue
    Rosendo, Andre
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04) : 9881 - 9888