InertialNet: Toward Robust SLAM via Visual Inertial Measurement

Times cited: 0
Authors
Liu, Tse-An [1 ]
Lin, Huei-Yung [2 ,3 ]
Lin, Wei-Yang [4 ]
Affiliations
[1] ZMP Inc, Bunkyo Ku, 5-41-10, Koishikawa, Tokyo 1120002, Japan
[2] Natl Chung Cheng Univ, Dept Elect Engn, Chiayi 621, Taiwan
[3] Natl Chung Cheng Univ, Adv Inst Mfg High Tech Innovat, Chiayi 621, Taiwan
[4] Natl Chung Cheng Univ, Dept Comp Sci & Informat Engn, Chiayi 621, Taiwan
Keywords
SLAM; VIO; deep learning; IMU; optical flow;
DOI
Not available
Chinese Library Classification
U [Transportation];
Discipline Classification Code
08; 0823;
Abstract
SLAM (simultaneous localization and mapping) is commonly considered a crucial component for achieving autonomous robot navigation. Most existing visual 3D SLAM systems, however, are still not robust enough: image blur, illumination changes, and low-texture scenes can all lead to registration failures. Enabling visual odometry (VO) to handle these problems makes the workflow of traditional approaches bulky and complicated. The advancement of deep learning, on the other hand, brings new opportunities. In this paper, we use a deep network model to predict complex camera motion. Unlike previous supervised learning VO research, our approach requires no ground-truth camera trajectories, which are difficult to obtain. Using image input and IMU output as end-to-end training pairs makes data collection cost-effective, and the optical flow structure makes the system independent of the appearance of the training sets. The experimental results show that the proposed architecture converges faster during training and uses significantly fewer model parameters. Our method retains a certain degree of robustness under image blur, illumination changes, and low-texture scenes, and it predicts correctly on the EuRoC dataset, which is more challenging than the KITTI dataset.
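The abstract specifies the training signal rather than the network internals, but the core idea (pairing optical flow computed from consecutive frames with synchronized IMU readings as end-to-end supervision, so no camera trajectory ground truth is needed) can be sketched as below. This is a minimal illustration under assumed design choices: the class name `FlowToIMUNet`, the layer sizes, and the 6-value gyroscope-plus-accelerometer target are hypothetical and are not taken from the paper's actual InertialNet architecture.

```python
# Minimal sketch (assumptions noted above): regress an IMU measurement
# from a dense optical flow field, trained end-to-end with MSE loss.
import torch
import torch.nn as nn

class FlowToIMUNet(nn.Module):
    """Map an optical flow field to a predicted IMU reading."""
    def __init__(self, imu_dim: int = 6):
        super().__init__()
        # Optical flow has 2 channels: per-pixel (u, v) displacements.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> input-size independent
        )
        self.regressor = nn.Linear(128, imu_dim)

    def forward(self, flow: torch.Tensor) -> torch.Tensor:
        # flow: (batch, 2, H, W), the flow between two consecutive frames
        features = self.encoder(flow).flatten(1)
        return self.regressor(features)

# One training step on a stand-in batch; in practice the flow would be
# computed from consecutive camera frames and the target taken from the
# time-synchronized IMU stream.
model = FlowToIMUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
flow = torch.randn(8, 2, 64, 64)   # stand-in optical flow batch
imu_target = torch.randn(8, 6)     # stand-in synchronized IMU readings
loss = nn.functional.mse_loss(model(flow), imu_target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Feeding optical flow rather than raw images is what the abstract credits for appearance independence: the network only sees pixel motion, not scene texture, so it need not generalize over the visual appearance of the training sets.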
Pages: 1311-1316
Number of pages: 6