ViTAL: Vision-Based Terrain-Aware Locomotion for Legged Robots

Cited by: 6
Authors
Fahmi, Shamel [1, 2]
Barasuol, Victor [1]
Esteban, Domingo [1]
Villarreal, Octavio [1]
Semini, Claudio [1]
Affiliations
[1] Istituto Italiano di Tecnologia, Dynamic Legged Systems Lab, I-16163 Genoa, Italy
[2] MIT, Biomimetic Robotics Lab, Cambridge, MA 02139 USA
Keywords
Legged robots; optimization and optimal control; visual learning; whole-body motion planning and control; rough terrain; quadruped locomotion; model; adaptation
DOI
10.1109/TRO.2022.3222958
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline Classification Code
080202; 1405
Abstract
This work is on vision-based planning strategies for legged robots that separate locomotion planning into foothold selection and pose adaptation. Current pose adaptation strategies optimize the robot's body pose relative to given footholds; if these footholds are not reached, the robot may end up in a state with no reachable safe footholds. Therefore, we present a Vision-Based Terrain-Aware Locomotion (ViTAL) strategy that consists of novel pose adaptation and foothold selection algorithms. ViTAL introduces a different paradigm in pose adaptation: rather than optimizing the body pose relative to given footholds, it finds the body pose that maximizes the chances of the legs reaching safe footholds. ViTAL plans footholds and poses based on skills that characterize the robot's capabilities and its terrain awareness. We use the 90 kg HyQ and 140 kg HyQReal quadruped robots to validate ViTAL and show that they are able to traverse various obstacles, including stairs, gaps, and rough terrain, at different speeds and gaits. We compare ViTAL with a baseline strategy that selects the robot pose based on preselected footholds and show that ViTAL outperforms the baseline.
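For illustration only, the following minimal Python sketch conveys the pose-adaptation idea summarized in the abstract: instead of fitting the body pose to a fixed set of footholds, each candidate pose is scored by how many safe footholds every leg can still reach, and the pose with the best worst-leg score is chosen. This is not the authors' implementation; the function names, the grid of candidate poses, and the radial-reach heuristic are assumptions made for this sketch.

    import itertools
    import numpy as np

    def reachable_safe_footholds(pose, hip_offset, safe_cells, reach_radius=0.35):
        # Count terrain cells that are both safe and within a leg's (approximate)
        # reach for a candidate body pose (x, y, z, yaw). Illustrative heuristic only.
        x, y, z, yaw = pose
        c, s = np.cos(yaw), np.sin(yaw)
        hip_xy = np.array([x + c * hip_offset[0] - s * hip_offset[1],
                           y + s * hip_offset[0] + c * hip_offset[1]])
        dists = np.linalg.norm(safe_cells - hip_xy, axis=1)
        return int(np.sum(dists < reach_radius))

    def select_pose(candidate_poses, hip_offsets, safe_cells):
        # Pick the body pose whose *worst* leg still sees the most safe footholds,
        # i.e. maximize the chances that every leg can reach a safe foothold.
        def score(pose):
            return min(reachable_safe_footholds(pose, h, safe_cells) for h in hip_offsets)
        return max(candidate_poses, key=score)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        safe_cells = rng.uniform(-1.0, 1.0, size=(200, 2))   # (x, y) of safe terrain cells
        hip_offsets = [(0.35, 0.2), (0.35, -0.2), (-0.35, 0.2), (-0.35, -0.2)]
        candidate_poses = [(x, y, 0.5, 0.0)
                           for x, y in itertools.product(np.linspace(-0.3, 0.3, 7), repeat=2)]
        print("selected pose:", select_pose(candidate_poses, hip_offsets, safe_cells))

A full implementation would replace the radial-reach heuristic with the robot's actual leg workspace and a learned terrain-safety evaluation, as the paper's skill-based formulation does.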
Pages: 885-904
Number of pages: 20
Related papers
50 records in total
  • [31] When and where to step: Terrain-aware real-time footstep location and timing optimization for bipedal robots
    Wang, Ke
    Hu, Zhaoyang Jacopo
    Tisnikar, Peter
    Helander, Oskar
    Chappell, Digby
    Kormushev, Petar
    Robotics and Autonomous Systems, 2024, 179
  • [32] Terrain-aware path planning via semantic segmentation and uncertainty rejection filter with adversarial noise for mobile robots
    Lee, Kangneoung
    Lee, Kiju
    Journal of Field Robotics, 2024
  • [33] Vision-based PID control of planar robots
    Cervantes, I
    Garrido, R
    Alvarez-Ramirez, J
    Martinez, A
    IEEE/ASME Transactions on Mechatronics, 2004, 9(1): 132-136
  • [34] BRIDGELOC: Bridging Vision-Based Localization for Robots
    Zhai, Qiang
    Yang, Fan
    Champion, Adam C.
    Peng, Chunyi
    Wang, Jingchuan
    Xuan, Dong
    Zhao, Wei
    2017 IEEE 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS), 2017: 362-370
  • [35] Vision-Based Path Learning for Home Robots
    Ueno, Atsushi
    Kajihara, Natsuki
    Fujii, Natsuko
    Takubo, Tomohito
    2014 Tenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP 2014), 2014: 411-414
  • [36] Vision-based exponential stabilization of mobile robots
    Lopez-Nicolas, G.
    Saguees, C.
    Autonomous Robots, 2011, 30(3): 293-306
  • [37] Vision-Based Kinematic Calibration of Spherical Robots
    Agand, Pedram
    Taghirad, Hamid D.
    Molaee, Amir
    2015 3rd RSI International Conference on Robotics and Mechatronics (ICROM), 2015: 395-400
  • [38] Vision-Based Navigation of Omnidirectional Mobile Robots
    Ferro, Marco
    Paolillo, Antonio
    Cherubini, Andrea
    Vendittelli, Marilena
    IEEE Robotics and Automation Letters, 2019, 4(3): 2691-2698
  • [39] Vision-based maze navigation for humanoid robots
    Paolillo, Antonio
    Faragasso, Angela
    Oriolo, Giuseppe
    Vendittelli, Marilena
    Autonomous Robots, 2017, 41(2): 293-309
  • [40] Vision-Based Interfaces Applied to Assistive Robots
    Perez, Elisa
    Soria, Carlos
    Lopez, Natalia M.
    Nasisi, Oscar
    Mut, Vicente
    International Journal of Advanced Robotic Systems, 2013, 10