ALPHRED: A Multi-Modal Operations Quadruped Robot for Package Delivery Applications

Cited by: 54
Authors
Hooks, Joshua [1 ]
Ahn, Min Sung [1 ]
Yu, Jeffrey [1 ]
Zhang, Xiaoguang [1 ]
Zhu, Taoyuanmin [1 ]
Chae, Hosik [1 ]
Hong, Dennis [1 ]
Institutions
[1] Univ Calif Los Angeles UCLA, Robot & Mech Lab RoMeLa, Los Angeles, CA 90095 USA
Source
Keywords
Mobile robots; legged locomotion; robot kinematics; robot control; force control; SYSTEM;
DOI
10.1109/LRA.2020.3007482
CLC Classification Code
TP24 [Robotics];
Subject Classification Code
080202 ; 1405 ;
Abstract
Modern quadruped robots are more capable than ever before at performing robust, dynamic locomotion over a variety of terrains, but are still mostly used as mobile inspection platforms. This paper presents ALPHRED version 2, a multi-modal operations quadruped robot designed for both locomotion and manipulation. ALPHRED is equipped with high-force-bandwidth proprioceptive actuators and simple one-degree-of-freedom end-effectors. Additionally, ALPHRED has a unique radially symmetric kinematic design that provides a superior end-effector workspace and allows the robot to reconfigure itself into different modes to accomplish different tasks. For locomotion tasks, ALPHRED is capable of fast dynamic trotting, continuous hopping and jumping, and efficient rolling on passive caster wheels, and it even has the potential for bipedal walking. For manipulation tasks, ALPHRED has a tripod mode that provides single-arm manipulation capabilities that are strong enough to punch through a wooden board. Additionally, ALPHRED can go into a bipedal mode to allow for dual-arm manipulation, capable of picking up a box off a one-meter-tall table and placing it on the ground.
Pages: 5409-5416
Page count: 8
Related Papers
50 items in total
  • [41] Extraction of multi-modal object representations in a robot vision system
    Pugeault, Nicolas
    Baseski, Emre
    Kraft, Dirk
    Worgotter, Florentin
    Kruger, Norbert
    [J]. ROBOT VISION, 2007, : 126 - +
  • [42] Mixed Reality Deictic Gesture for Multi-Modal Robot Communication
    Williams, Tom
    Bussing, Matthew
    Cabrol, Sebastian
    Boyle, Elizabeth
    Nhan Tran
    [J]. HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2019, : 191 - 201
  • [43] Situated robot learning for multi-modal instruction and imitation of grasping
    Steil, M
    Röthling, F
    Haschke, R
    Ritter, H
    [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2004, 47 (2-3) : 129 - 141
  • [44] Predict Robot Grasp Outcomes based on Multi-Modal Information
    Yang, Chao
    Du, Peng
    Sun, Fuchun
    Fang, Bin
    Zhou, Jie
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO), 2018, : 1563 - 1568
  • [45] The Design and Control of the Multi-Modal Locomotion Origami Robot, Tribot
    Zhakypov, Zhenishbek
    Falahi, Mohsen
    Shah, Manan
    Paik, Jamie
    [J]. 2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2015, : 4349 - 4355
  • [46] Robot and cloud-assisted multi-modal healthcare system
    Yujun Ma
    Yin Zhang
    Jiafu Wan
    Daqiang Zhang
    Ning Pan
    [J]. Cluster Computing, 2015, 18 : 1295 - 1306
  • [47] Hybrid parameter identification of a multi-modal underwater soft robot
    Giorgio-Serchi, F.
    Arienti, A.
    Corucci, F.
    Giorelli, M.
    Laschi, C.
    [J]. BIOINSPIRATION & BIOMIMETICS, 2017, 12 (02) : 1 - 15
  • [48] Multi-modal Motion Planning for a Humanoid Robot Manipulation Task
    Hauser, Kris
    Ng-Thow-Hing, Victor
    Gonzalez-Banos, Hector
    [J]. ROBOTICS RESEARCH, 2010, 66 : 307 - +
  • [49] Multi-Modal Posture Recognition System for Healthcare Applications
    Sreeni, Siddarth
    Hari, S. R.
    Harikrishnan, R.
    Sreejith, V
    [J]. PROCEEDINGS OF TENCON 2018 - 2018 IEEE REGION 10 CONFERENCE, 2018, : 0373 - 0376
  • [50] Multi-modal biometrics with PKIs for border control applications
    Kwon, T
    Moon, H
    [J]. COMPUTATIONAL SCIENCE AND ITS APPLICATIONS - ICCSA 2005, PT 1, 2005, 3480 : 584 - 590