InertialNet: Toward Robust SLAM via Visual Inertial Measurement

Cited by: 0
Authors
Liu, Tse-An [1 ]
Lin, Huei-Yung [2 ,3 ]
Lin, Wei-Yang [4 ]
Affiliations
[1] ZMP Inc, Bunkyo Ku, 5-41-10,Koishikawa, Tokyo 1120002, Japan
[2] Natl Chung Cheng Univ, Dept Elect Engn, Chiayi 621, Taiwan
[3] Natl Chung Cheng Univ, Adv Inst Mfg High Tech Innovat, Chiayi 621, Taiwan
[4] Natl Chung Cheng Univ, Dept Comp Sci & Informat Engn, Chiayi 621, Taiwan
Keywords
SLAM; VIO; deep learning; IMU; optical flow;
DOI
N/A
CLC Classification
U [Transportation];
Subject Classification
08 ; 0823 ;
Abstract
SLAM (simultaneous localization and mapping) is widely regarded as a crucial component of autonomous robot navigation. Currently, most existing visual 3D SLAM systems are still not robust enough: image blur, illumination changes, and low-texture scenes can cause registration failures. To make visual odometry (VO) cope with these problems, the workflows of traditional approaches have become bulky and complicated. On the other hand, advances in deep learning bring new opportunities. In this paper, we use a deep network model to predict complex camera motion. Unlike previous supervised-learning VO studies, it requires no ground-truth camera trajectories, which are difficult to obtain. Using the image input and IMU output as an end-to-end training pair makes data collection cost-effective, and the optical-flow front end makes the system independent of the appearance of the training sets. The experimental results show that the proposed architecture converges faster during training and uses significantly fewer model parameters. Our method retains a degree of robustness under image blur, illumination changes, and low-texture scenes, and it also makes correct predictions on the EuRoC dataset, which is more challenging than the KITTI dataset.
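The abstract's central idea — pairing image-derived optical-flow inputs with synchronized IMU readings as end-to-end training targets — can be sketched as below. This is a hypothetical illustration of the data-pairing step only, not the paper's implementation; the sampling rates, function names, and the 1-D gyro signal are assumptions for the example.

```python
# Hypothetical sketch: building end-to-end training pairs from camera
# timestamps and IMU readings, as described in the abstract. A flow field
# would be computed between each consecutive image pair; here we only
# show the timestamp alignment that attaches an IMU target to each pair.

from bisect import bisect_left

def nearest_imu(imu_times, t):
    """Index of the IMU sample whose timestamp is closest to t."""
    i = bisect_left(imu_times, t)
    if i == 0:
        return 0
    if i == len(imu_times):
        return len(imu_times) - 1
    # Pick whichever neighbor is closer in time.
    return i if imu_times[i] - t < t - imu_times[i - 1] else i - 1

def build_training_pairs(image_times, imu_times, imu_readings):
    """For each consecutive image pair, use the IMU reading nearest the
    midpoint of the two frame timestamps as the regression target."""
    pairs = []
    for t0, t1 in zip(image_times, image_times[1:]):
        mid = 0.5 * (t0 + t1)
        pairs.append(((t0, t1), imu_readings[nearest_imu(imu_times, mid)]))
    return pairs

# Toy example: a 20 Hz camera against a 200 Hz IMU (single-axis gyro).
image_times = [0.00, 0.05, 0.10]
imu_times = [i * 0.005 for i in range(40)]
imu_readings = [0.1 * t for t in imu_times]  # synthetic angular rate
pairs = build_training_pairs(image_times, imu_times, imu_readings)
```

Because the supervision signal comes directly from the IMU log, no ground-truth trajectory (e.g. from a motion-capture system) is needed, which is the cost advantage the abstract claims.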
Pages: 1311 - 1316 (6 pages)
Related Papers (50 total)
  • [1] Fast and Robust Initialization for Visual-Inertial SLAM
    Campos, Carlos
    Montiel, Jose M. M.
    Tardos, Juan D.
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 1288 - 1294
  • [2] InertialNet: Inertial Measurement Learning for Simultaneous Localization and Mapping
    Lin, Huei-Yung
    Liu, Tse-An
    Lin, Wei-Yang
    Klein, Itzik
    Yao, Yiqing
    SENSORS, 2023, 23 (24)
  • [3] Robust Indoor Visual-Inertial SLAM with Pedestrian Detection
    Zhang, Heng
    Huang, Ran
    Yuan, Liang
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (IEEE-ROBIO 2021), 2021, : 802 - 807
  • [4] A Robust Visual-Inertial SLAM in Complex Indoor Environments
    Zhong, Min
    You, Yinghui
    Zhou, Shuai
    Xu, Xiaosu
    IEEE SENSORS JOURNAL, 2023, 23 (17) : 19986 - 19994
  • [5] Robust High Accuracy Visual-Inertial-Laser SLAM System
    Wang, Zengyuan
    Zhang, Jianhua
    Chen, Shengyong
    Yuan, Conger
    Zhang, Jingqian
    Zhang, Jianwei
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 6636 - 6641
  • [6] Robust Collaborative Visual-Inertial SLAM for Mobile Augmented Reality
    Pan, Xiaokun
    Huang, Gan
    Zhang, Ziyang
    Li, Jinyu
    Bao, Hujun
    Zhang, Guofeng
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2024, 30 (11) : 7354 - 7363
  • [7] A Robust Parallel Initialization Method for Monocular Visual-Inertial SLAM
    Zhong, Min
    Yao, Yiqing
    Xu, Xiaosu
    Wei, Hongyu
    SENSORS, 2022, 22 (21)
  • [8] Dynam-SLAM: An Accurate, Robust Stereo Visual-Inertial SLAM Method in Dynamic Environments
    Yin, Hesheng
    Li, Shaomiao
    Tao, Yu
    Guo, Junlong
    Huang, Bo
    IEEE TRANSACTIONS ON ROBOTICS, 2022
  • [9] Dynam-SLAM: An Accurate, Robust Stereo Visual-Inertial SLAM Method in Dynamic Environments
    Yin, Hesheng
    Li, Shaomiao
    Tao, Yu
    Guo, Junlong
    Huang, Bo
    IEEE TRANSACTIONS ON ROBOTICS, 2023, 39 (01) : 289 - 308
  • [10] LVI-Fusion: A Robust Lidar-Visual-Inertial SLAM Scheme
    Liu, Zhenbin
    Li, Zengke
    Liu, Ao
    Shao, Kefan
    Guo, Qiang
    Wang, Chuanhao
    REMOTE SENSING, 2024, 16 (09)