MEM: Multi-Modal Elevation Mapping for Robotics and Learning

Cited by: 7
Authors
Erni, Gian [1]
Frey, Jonas [1,2]
Miki, Takahiro [1]
Mattamala, Matias [3]
Hutter, Marco [1]
Affiliations
[1] Swiss Fed Inst Technol, Dept Mech & Proc Engn, CH-8092 Zurich, Switzerland
[2] Max Planck Inst Intelligent Syst, D-72076 Tubingen, Germany
[3] Univ Oxford, Oxford Robot Inst, Oxford, England
Funding
Swiss National Science Foundation
Keywords
DOI
10.1109/IROS55552.2023.10342108
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Elevation maps are commonly used to represent the environment of mobile robots and are instrumental for locomotion and navigation tasks. However, pure geometric information is insufficient for many field applications that require appearance or semantic information, which limits their applicability to other platforms or domains. In this work, we extend a 2.5D robot-centric elevation mapping framework by fusing multi-modal information from multiple sources into a popular map representation. The framework accepts input data from point clouds or images in a unified manner. To handle the heterogeneous nature of the data, we also present a set of fusion algorithms that can be selected based on the information type and user requirements. Our system is designed to run on the GPU, making it real-time capable for various robotic and learning tasks. We demonstrate the capabilities of our framework by deploying it on multiple robots with varying sensor configurations and showcasing a range of applications that utilize multi-modal layers, including line detection, human detection, and colorization.
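The layered, multi-modal map with selectable fusion algorithms described in the abstract can be sketched as follows. This is an illustrative CPU sketch in NumPy (the paper's system runs on the GPU); the class name, method names, and the two fusion modes shown (exponential averaging for continuous data, latest-value overwrite for categorical data) are assumptions chosen for illustration, not the authors' API.

```python
import numpy as np

class MultiModalElevationMap:
    """A minimal 2.5D grid with named layers (hypothetical, for illustration)."""

    def __init__(self, shape=(100, 100)):
        # Each layer is a 2D grid of per-cell values; NaN marks unobserved cells.
        self.shape = shape
        self.layers = {"elevation": np.full(shape, np.nan)}

    def add_layer(self, name):
        # Extra layers hold appearance or semantic data, e.g. color or class scores.
        self.layers[name] = np.full(self.shape, np.nan)

    def fuse(self, name, cells, values, mode="exponential", alpha=0.3):
        """Fuse new per-cell measurements into a layer.

        mode="exponential": exponential moving average (continuous data, e.g. color).
        mode="latest":      overwrite with the newest value (categorical data).
        """
        layer = self.layers[name]
        rows, cols = cells
        if mode == "latest":
            layer[rows, cols] = values
        else:
            old = layer[rows, cols]
            # Unobserved cells take the measurement directly; observed cells blend.
            layer[rows, cols] = np.where(
                np.isnan(old), values, (1.0 - alpha) * old + alpha * values
            )
```

Selecting the fusion mode per layer mirrors the paper's idea that the right algorithm depends on the information type: averaging suits noisy continuous measurements, while semantic labels such as detected humans are better overwritten than averaged.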
Pages: 11011-11018
Page count: 8