MEM: Multi-Modal Elevation Mapping for Robotics and Learning

Cited by: 7
Authors
Erni, Gian [1 ]
Frey, Jonas [1 ,2 ]
Miki, Takahiro [1 ]
Mattamala, Matias [3 ]
Hutter, Marco [1 ]
Affiliations
[1] Swiss Fed Inst Technol, Dept Mech & Proc Engn, CH-8092 Zurich, Switzerland
[2] Max Planck Inst Intelligent Syst, D-72076 Tubingen, Germany
[3] Univ Oxford, Oxford Robot Inst, Oxford, England
Funding
Swiss National Science Foundation
DOI
10.1109/IROS55552.2023.10342108
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Elevation maps are commonly used to represent the environment of mobile robots and are instrumental for locomotion and navigation tasks. However, pure geometric information is insufficient for many field applications that require appearance or semantic information, which limits their applicability to other platforms or domains. In this work, we extend a 2.5D robot-centric elevation mapping framework by fusing multi-modal information from multiple sources into a popular map representation. The framework allows inputting data contained in point clouds or images in a unified manner. To manage the different nature of the data, we also present a set of fusion algorithms that can be selected based on the information type and user requirements. Our system is designed to run on the GPU, making it real-time capable for various robotic and learning tasks. We demonstrate the capabilities of our framework by deploying it on multiple robots with varying sensor configurations and showcasing a range of applications that utilize multi-modal layers, including line detection, human detection, and colorization.
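The abstract describes fusing point-cloud and image data into a layered 2.5D elevation map, with the fusion algorithm chosen per information type. As a rough illustration of that idea only, here is a minimal NumPy sketch; the class, method names, and the three fusion rules shown are hypothetical choices for this example, not the authors' GPU implementation or API.

```python
# Hypothetical sketch of multi-modal 2.5D elevation-map fusion.
# Names and fusion rules are illustrative assumptions, not the paper's API.
import numpy as np

class MultiModalElevationMap:
    """Robot-centric 2.5D grid: a geometric layer plus semantic/appearance layers."""

    def __init__(self, size=200, resolution=0.05):
        self.resolution = resolution                     # meters per cell
        self.size = size
        self.elevation = np.full((size, size), np.nan)   # height layer
        self.semantics = np.zeros((size, size))          # e.g. per-cell class confidence
        self.color = np.zeros((size, size, 3))           # RGB appearance layer

    def _to_index(self, xy):
        # Convert robot-centric metric x/y to grid indices (robot at map center).
        idx = np.floor(xy / self.resolution).astype(int) + self.size // 2
        return idx[:, 0], idx[:, 1]

    def fuse_pointcloud(self, points, semantic, rgb, alpha=0.3):
        """Fuse one point cloud carrying per-point semantic scores and colors.

        Each layer uses a different fusion rule, mirroring the paper's idea
        of selecting the algorithm per information type:
          elevation -> keep the maximum height seen per cell
          semantics -> exponential moving average
          color     -> overwrite with the latest measurement
        """
        ix, iy = self._to_index(points[:, :2])
        valid = (ix >= 0) & (ix < self.size) & (iy >= 0) & (iy < self.size)
        for k in np.flatnonzero(valid):
            i, j, z = ix[k], iy[k], points[k, 2]
            if np.isnan(self.elevation[i, j]) or z > self.elevation[i, j]:
                self.elevation[i, j] = z                            # max fusion
            self.semantics[i, j] = ((1 - alpha) * self.semantics[i, j]
                                    + alpha * semantic[k])          # EMA fusion
            self.color[i, j] = rgb[k]                               # latest wins

# Usage: fuse 1000 random points, each carrying a semantic score and a color.
m = MultiModalElevationMap()
pts = np.random.uniform(-4.0, 4.0, (1000, 3))
m.fuse_pointcloud(pts, semantic=np.random.rand(1000), rgb=np.random.rand(1000, 3))
```

Per the abstract, the actual framework runs these updates on the GPU and also accepts image inputs via a unified interface; this sketch covers only point-cloud inputs on the CPU.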
Pages: 11011-11018
Page count: 8
Related Papers
50 records in total
  • [1] Multi-modal mapping
    Yates, Darran
    NATURE REVIEWS NEUROSCIENCE, 2016, 17 (09) : 536 - 536
  • [2] Multi-Modal Interaction for Robotics Mules
    Taylor, Glenn
    Quist, Michael
    Lanting, Matthew
    Dunham, Cory
    Muench, Paul
    UNMANNED SYSTEMS TECHNOLOGY XIX, 2017, 10195
  • [3] Multi-modal anchor adaptation learning for multi-modal summarization
    Chen, Zhongfeng
    Lu, Zhenyu
    Rong, Huan
    Zhao, Chuanjun
    Xu, Fan
    NEUROCOMPUTING, 2024, 570
  • [4] Motion description languages for multi-modal control in robotics
    Egerstedt, M
    CONTROL PROBLEMS IN ROBOTICS, 2003, 4 : 75 - 89
  • [5] Unsupervised Multi-modal Learning
    Iqbal, Mohammed Shameer
    ADVANCES IN ARTIFICIAL INTELLIGENCE (AI 2015), 2015, 9091 : 343 - 346
  • [6] Learning Multi-modal Similarity
    McFee, Brian
    Lanckriet, Gert
    JOURNAL OF MACHINE LEARNING RESEARCH, 2011, 12 : 491 - 523
  • [7] On Robustness of Multi-Modal Fusion-Robotics Perspective
    Bednarek, Michal
    Kicki, Piotr
    Walas, Krzysztof
    ELECTRONICS, 2020, 9 (07) : 1 - 17
  • [8] A Survey of Multi-modal Question Answering Systems for Robotics
    Liu, Xiaomeng
    Long, Fei
    2017 2ND INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM), 2017, : 189 - 194
  • [9] A multi-modal haptic interface for virtual reality and robotics
    Folgheraiter, Michele
    Gini, Giuseppina
    Vercesi, Dario
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2008, 52 (3-4) : 465 - 488