TransFusionOdom: Transformer-Based LiDAR-Inertial Fusion Odometry Estimation

Cited by: 14
Authors
Sun, Leyuan [1 ,2 ]
Ding, Guanqun [3 ]
Qiu, Yue [2 ]
Yoshiyasu, Yusuke [2 ]
Kanehiro, Fumio [1 ,4 ]
Affiliations
[1] Natl Inst Adv Ind Sci & Technol, CNRS AIST Joint Robot Lab JRL, IRL, Tsukuba 3058560, Japan
[2] Natl Inst Adv Ind Sci & Technol, Comp Vis Res Team, Artificial Intelligence Res Ctr AIRC, Tsukuba 3058560, Japan
[3] Natl Inst Adv Ind Sci & Technol, Digital Architecture Res Ctr DigiARC, Tokyo 1350064, Japan
[4] Univ Tsukuba, Grad Sch Sci & Technol, Dept Intelligent & Mech Interact Syst, Tsukuba 3050006, Japan
Funding
Japan Society for the Promotion of Science;
Keywords
Attention mechanisms; LiDAR-inertial odometry (LIO); multimodal learning; sensor data fusion; transformer; ROBUST; DEPTH; CNN;
DOI
10.1109/JSEN.2023.3302401
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Multimodal sensor fusion is a common approach to improving odometry estimation, a fundamental module for mobile robots. Learning-based approaches have recently garnered attention in this field because of their robust, non-handcrafted designs. However, how to fuse different modalities in a supervised sensor-fusion odometry task remains a challenging open question. Simple operations, such as elementwise summation and concatenation, cannot assign adaptive attention weights to integrate different modalities efficiently, which makes competitive odometry results difficult to achieve. Meanwhile, the Transformer architecture has shown potential for multimodal fusion tasks, particularly in vision-language domains. In this work, we propose an end-to-end supervised Transformer-based LiDAR-inertial fusion framework, TransFusionOdom, for odometry estimation. Its multi-attention fusion module applies different fusion approaches to homogeneous and heterogeneous modalities, addressing the overfitting that can arise from blindly increasing model complexity. To interpret the learning process of the Transformer-based multimodal interactions, a general visualization approach is introduced to illustrate the interactions between modalities. Exhaustive ablation studies evaluate different multimodal fusion strategies and verify the performance of the proposed one. A synthetic multimodal dataset is made public to validate the generalization ability of the proposed fusion strategy, which also extends to other combinations of modalities. Quantitative and qualitative odometry evaluations on the KITTI dataset verify that TransFusionOdom achieves superior performance compared with other learning-based related works.
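The abstract's core contrast can be illustrated with a minimal sketch: concatenation fuses modality features with fixed, input-independent weighting, whereas scaled dot-product cross-attention (the mechanism underlying Transformer fusion) computes data-dependent weights over the other modality's tokens. This is an illustrative NumPy toy, not the paper's actual architecture; all names, token counts, and dimensions are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def concat_fusion(lidar_vec, imu_vec):
    # Non-adaptive fusion: features are stacked with equal,
    # input-independent contribution from each modality.
    return np.concatenate([lidar_vec, imu_vec], axis=-1)

def attention_fusion(lidar_tokens, imu_tokens):
    # Scaled dot-product cross-attention: LiDAR tokens act as queries
    # over IMU tokens, so the weight on each IMU token adapts to the input.
    d = lidar_tokens.shape[-1]
    scores = lidar_tokens @ imu_tokens.T / np.sqrt(d)  # (N_lidar, N_imu)
    weights = softmax(scores, axis=-1)                 # each row sums to 1
    return weights @ imu_tokens                        # (N_lidar, d)

rng = np.random.default_rng(0)
lidar = rng.normal(size=(4, 8))  # 4 hypothetical LiDAR feature tokens, dim 8
imu = rng.normal(size=(6, 8))    # 6 hypothetical IMU feature tokens, dim 8

fused = attention_fusion(lidar, imu)               # shape (4, 8)
stacked = concat_fusion(lidar.mean(axis=0),        # pooled global features,
                        imu.mean(axis=0))          # shape (16,)
```

The key difference is that `weights` is recomputed per input, letting the model emphasize whichever IMU tokens are informative for each LiDAR query, while `concat_fusion` always mixes the modalities the same way.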
Pages: 22064-22079
Page count: 16
Related Papers
50 records in total
  • [41] Jung, Minwoo; Jung, Sangwoo; Kim, Ayoung. Asynchronous Multiple LiDAR-Inertial Odometry Using Point-Wise Inter-LiDAR Uncertainty Propagation. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8(7): 4211-4218.
  • [42] Cai, Yunpiao; Qian, Weixing; Dong, Jiayi; Zhao, Jiaqi; Wang, Kerui; Shen, Tianxiao. A LiDAR-Inertial SLAM Method Based on Virtual Inertial Navigation System. ELECTRONICS, 2023, 12(12).
  • [43] Setterfield, Timothy P.; Hewitt, Robert A.; Chen, Po-Ting; Espinoza, Antonio Teran; Trawny, Nikolas; Katake, Anup. LiDAR-Inertial Based Navigation and Mapping for Precision Landing. 2021 IEEE AEROSPACE CONFERENCE (AEROCONF 2021), 2021.
  • [44] Meng, Kai; Sun, Hui; Qi, Jiangtao; Wang, Hongbo. Section-LIO: A High Accuracy LiDAR-Inertial Odometry Using Undistorted Sectional Point. IEEE ACCESS, 2023, 11: 144918-144927.
  • [45] Tang, Jiaqiao; Zhang, Xudong; Zou, Yuan; Li, Yuanyuan; Du, Guodong. A High-Precision LiDAR-Inertial Odometry via Kalman Filter and Factor Graph Optimization. IEEE SENSORS JOURNAL, 2023, 23(11): 11218-11231.
  • [46] Zhang, Jiachen; Wen, Weisong; Huang, Feng; Chen, Xiaodong; Hsu, Li-Ta. Coarse-to-Fine Loosely-Coupled LiDAR-Inertial Odometry for Urban Positioning and Mapping. REMOTE SENSING, 2021, 13(12).
  • [47] Yoon, David J.; Burnett, Keenan; Laconte, Johann; Chen, Yi; Vhavle, Heethesh; Kammel, Soeren; Reuther, James; Barfoot, Timothy D. Need for Speed: Fast Correspondence-Free Lidar-Inertial Odometry Using Doppler Velocity. 2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023: 5304-5311.
  • [48] Zhang, Yanfeng; Tian, Yunong; Wang, Wanguo; Yang, Guodong; Li, Zhishuo; Jing, Fengshui; Tan, Min. RI-LIO: Reflectivity Image Assisted Tightly-Coupled LiDAR-Inertial Odometry. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8(3): 1802-1809.
  • [49] Huang, Junjie; Zhang, Yunzhou; Xu, Qingdong; Wu, Song; Liu, Jun; Wang, Guiyuan; Liu, Wei. LA-LIO: Robust Localizability-Aware LiDAR-Inertial Odometry for Challenging Scenes. 2024 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2024), 2024: 10145-10152.
  • [50] Zuo, Xingxing; Geneva, Patrick; Lee, Woosik; Liu, Yong; Huang, Guoquan. LIC-Fusion: LiDAR-Inertial-Camera Odometry. 2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019: 5848-5854.