MEM: Multi-Modal Elevation Mapping for Robotics and Learning

Cited by: 7
Authors:
Erni, Gian [1 ]
Frey, Jonas [1 ,2 ]
Miki, Takahiro [1 ]
Mattamala, Matias [3 ]
Hutter, Marco [1 ]
Affiliations:
[1] Swiss Fed Inst Technol, Dept Mech & Proc Engn, CH-8092 Zurich, Switzerland
[2] Max Planck Inst Intelligent Syst, D-72076 Tübingen, Germany
[3] Univ Oxford, Oxford Robot Inst, Oxford, England
Funding: Swiss National Science Foundation
DOI: 10.1109/IROS55552.2023.10342108
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline classification codes: 081104; 0812; 0835; 1405
Abstract
Elevation maps are commonly used to represent the environment of mobile robots and are instrumental for locomotion and navigation tasks. However, pure geometric information is insufficient for many field applications that require appearance or semantic information, which limits their applicability to other platforms or domains. In this work, we extend a 2.5D robot-centric elevation mapping framework by fusing multi-modal information from multiple sources into a popular map representation. The framework allows inputting data contained in point clouds or images in a unified manner. To manage the different nature of the data, we also present a set of fusion algorithms that can be selected based on the information type and user requirements. Our system is designed to run on the GPU, making it real-time capable for various robotic and learning tasks. We demonstrate the capabilities of our framework by deploying it on multiple robots with varying sensor configurations and showcasing a range of applications that utilize multi-modal layers, including line detection, human detection, and colorization.
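The abstract's core idea, a robot-centric 2.5D grid holding a geometric elevation layer plus arbitrary appearance or semantic layers, each updated by a fusion rule chosen per information type, can be illustrated with a small sketch. This is a toy CPU illustration under assumed names (`MultiModalElevationMap`, `add_layer`, `fuse` are hypothetical), not the paper's GPU implementation; the two fusion rules shown (overwrite-with-latest and exponential averaging) are plausible examples of the kind of selectable fusion the abstract describes.

```python
import numpy as np

class MultiModalElevationMap:
    """Toy 2.5D grid with extra per-cell layers and selectable fusion rules."""

    def __init__(self, size=20, resolution=0.1):
        self.size = size
        self.resolution = resolution
        # Geometric layer: one height value per cell (NaN = unobserved).
        self.layers = {"elevation": np.full((size, size), np.nan)}
        self.fusion = {"elevation": "latest"}

    def add_layer(self, name, fusion="exponential"):
        # Register an extra modality (e.g. color, semantic score) with its rule.
        self.layers[name] = np.full((self.size, self.size), np.nan)
        self.fusion[name] = fusion

    def _cells(self, points_xy):
        # Map metric xy coordinates to grid indices (map origin at a corner here).
        return np.clip((points_xy / self.resolution).astype(int), 0, self.size - 1)

    def fuse(self, layer, points_xy, values, alpha=0.3):
        """Write per-point values into `layer` using its configured fusion rule."""
        grid = self.layers[layer]
        for (i, j), v in zip(self._cells(points_xy), values):
            if self.fusion[layer] == "latest" or np.isnan(grid[i, j]):
                grid[i, j] = v  # overwrite: suited to appearance/color data
            elif self.fusion[layer] == "exponential":
                # Exponential moving average: smooths noisy semantic scores.
                grid[i, j] = (1 - alpha) * grid[i, j] + alpha * v

# Usage: fuse a geometric point and a (hypothetical) person-detection score.
m = MultiModalElevationMap(size=10, resolution=0.5)
m.add_layer("person_score", fusion="exponential")
pts = np.array([[1.0, 1.0]])          # one point at x=1.0 m, y=1.0 m -> cell (2, 2)
m.fuse("elevation", pts, [0.2])       # height measurement
m.fuse("person_score", pts, [1.0])    # first score initializes the cell
m.fuse("person_score", pts, [0.0])    # later score is blended: 0.7*1.0 + 0.3*0.0
```

A unified point-cloud/image front end, as in the paper, would only change how `points_xy` and `values` are produced (e.g. by projecting image pixels onto map cells); the per-layer fusion step stays the same.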
Pages: 11011-11018 (8 pages)
Related papers (showing items 31-40 of 50):
  • [31] Yang, Yanwu; Ye, Chenfei; Guo, Xutao; Wu, Tao; Xiang, Yang; Ma, Ting. Mapping Multi-Modal Brain Connectome for Brain Disorder Diagnosis via Cross-Modal Mutual Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43 (01): 108-121.
  • [32] Hafner, Sebastian; Ban, Yifang. Multi-Modal Deep Learning for Multi-Temporal Urban Mapping with a Partly Missing Optical Modality. IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2023: 6843-6846.
  • [33] Mascarich, Frank; Khattak, Shehryar; Papachristos, Christos; Alexis, Kostas. A Multi-Modal Mapping Unit for Autonomous Exploration and Mapping of Underground Tunnels. 2018 IEEE AEROSPACE CONFERENCE, 2018.
  • [34] Li, Yaoyi; Lu, Hongtao. On Multi-modal Fusion Learning in constraint propagation. INFORMATION SCIENCES, 2018, 462: 204-217.
  • [35] Chakraborty, Saikat; Ray, Baishakhi. On Multi-Modal Learning of Editing Source Code. 2021 36TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE 2021), 2021: 443-455.
  • [36] Kefato, Zekarias T.; Sheikh, Nasrullah; Montresor, Alberto. Mineral: Multi-modal Network Representation Learning. MACHINE LEARNING, OPTIMIZATION, AND BIG DATA (MOD 2017), 2018, 10710: 286-298.
  • [37] Gong, Chen; Yang, Jian; Tao, Dacheng. Multi-Modal Curriculum Learning over Graphs. ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2019, 10 (04).
  • [38] Wei, Yiran; Chen, Xi; Zhu, Lei; Zhang, Lipei; Schonlieb, Carola-Bibiane; Price, Stephen; Li, Chao. Multi-Modal Learning for Predicting the Genotype of Glioma. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2023, 42 (11): 3167-3178.
  • [39] Fang, Zihan; Zou, Ying; Lan, Shiyang; Du, Shide; Tan, Yanchao; Wang, Shiping. Scalable multi-modal representation learning networks. ARTIFICIAL INTELLIGENCE REVIEW, 58 (7).
  • [40] Wang, Qifan; Si, Luo; Shen, Bin. Learning to Hash on Partial Multi-Modal Data. PROCEEDINGS OF THE TWENTY-FOURTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI), 2015: 3904-3910.