Multi-Modal Legged Locomotion Framework With Automated Residual Reinforcement Learning

Cited: 6
Authors
Yu, Chen [1 ]
Rosendo, Andre [1 ]
Affiliation
[1] ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai 201210, Peoples R China
Source
Keywords
Evolutionary robotics; legged robots; multi-modal locomotion; reinforcement learning; HUMANOID ROBOTS; OPTIMIZATION; WALKING; DESIGN;
DOI
10.1109/LRA.2022.3191071
Chinese Library Classification
TP24 [Robotics];
Discipline Code
080202; 1405;
Abstract
While quadruped robots usually offer good stability and load capacity, bipedal robots provide greater flexibility and adaptability to different tasks and environments. A multi-modal legged robot can combine the best of both worlds. In this paper, we propose a multi-modal locomotion framework composed of a hand-crafted transition motion and a learning-based bipedal controller trained with a novel algorithm called Automated Residual Reinforcement Learning. This framework aims to endow arbitrary quadruped robots with the ability to walk bipedally. In particular, we 1) design an additional supporting structure for a quadruped robot along with a sequential multi-modal transition strategy; and 2) propose a novel class of reinforcement learning algorithms for bipedal control and evaluate their performance in both simulation and the real world. Experimental results show that our proposed algorithms achieve the best performance in simulation and maintain good performance on a real-world robot. Overall, our multi-modal robot can successfully switch between bipedal and quadrupedal modes and walk in both.
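The record does not detail the Automated Residual Reinforcement Learning algorithm itself, but the residual-control idea it builds on can be sketched minimally: the executed action is a hand-crafted base action plus a learned corrective residual, clipped to actuator limits. All names, gains, and the placeholder residual below are illustrative assumptions, not the authors' implementation.

```python
def base_controller(state, target, kp=2.0):
    """Hand-crafted proportional controller driving joints toward a target pose."""
    return [kp * (t - s) for s, t in zip(state, target)]

def residual_policy(state):
    """Stand-in for the learned policy; in residual RL this would be a
    trained network outputting small corrections to the base action."""
    return [0.1 for _ in state]  # constant placeholder residual

def residual_action(state, target, limit=1.0):
    """Combine base action and learned residual, clipped to actuator limits."""
    base = base_controller(state, target)
    residual = residual_policy(state)
    return [max(-limit, min(limit, b + r)) for b, r in zip(base, residual)]

# Example: joint 0 is off-target, joint 1 is on-target.
action = residual_action(state=[0.0, 0.5], target=[0.2, 0.5])
```

Because the base controller already produces a roughly correct gait, the learned residual only has to explore a small correction space, which is the usual motivation for residual RL over learning a controller from scratch.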
Pages: 10312-10319
Page count: 8