InertialNet: Toward Robust SLAM via Visual Inertial Measurement

Cited: 0
Authors
Liu, Tse-An [1]
Lin, Huei-Yung [2,3]
Lin, Wei-Yang [4]
Affiliations
[1] ZMP Inc, Bunkyo Ku, 5-41-10, Koishikawa, Tokyo 1120002, Japan
[2] Natl Chung Cheng Univ, Dept Elect Engn, Chiayi 621, Taiwan
[3] Natl Chung Cheng Univ, Adv Inst Mfg High Tech Innovat, Chiayi 621, Taiwan
[4] Natl Chung Cheng Univ, Dept Comp Sci & Informat Engn, Chiayi 621, Taiwan
Keywords
SLAM; VIO; deep learning; IMU; optical flow;
DOI
Not available
CLC Number
U [Transportation]
Discipline Codes
08; 0823
Abstract
SLAM (simultaneous localization and mapping) is commonly considered a crucial component of autonomous robot navigation. Most existing visual 3D SLAM systems, however, are still not robust enough: image blur, illumination changes, and low-texture scenes can all lead to registration failures. Making visual odometry (VO) cope with these problems has left the workflows of traditional approaches bulky and complicated. The advancement of deep learning, on the other hand, brings new opportunities. In this paper, we use a deep network model to predict complex camera motion. Unlike previous supervised-learning VO research, our approach requires no ground-truth camera trajectories, which are difficult to obtain; using the image input and the IMU output as an end-to-end training pair makes data collection cost-effective. The optical-flow input also makes the system independent of the appearance of the training sets. Experimental results show that the proposed architecture converges faster during training and has significantly fewer model parameters. Our method retains a degree of robustness under image blur, illumination changes, and low-texture scenes, and it produces correct predictions on the EuRoC dataset, which is more challenging than the KITTI dataset.
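The abstract describes the training setup only at a high level: dense optical flow between consecutive frames is fed to a network trained end-to-end against the synchronized IMU output, so no ground-truth trajectory is required. A minimal sketch of that idea, assuming a PyTorch-style convolutional flow-to-IMU regressor (the class name InertialNetSketch, all layer sizes, and the six-value gyroscope-plus-accelerometer output are illustrative assumptions, not the authors' published architecture):

import torch
import torch.nn as nn

class InertialNetSketch(nn.Module):
    """Hypothetical flow-to-IMU regressor; layer sizes are illustrative only."""
    def __init__(self):
        super().__init__()
        # Input: a 2-channel optical-flow field (u, v) between consecutive frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Output: 3-axis angular velocity + 3-axis acceleration (one IMU sample).
        self.head = nn.Linear(128, 6)

    def forward(self, flow):                 # flow: (B, 2, H, W)
        z = self.encoder(flow).flatten(1)    # (B, 128)
        return self.head(z)                  # (B, 6) predicted IMU reading

# Training pairs come cheaply: each flow field between frames t and t+1 is
# paired with the IMU measurement over the same interval; no ground-truth
# camera trajectory is needed.
model = InertialNetSketch()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

flow_batch = torch.randn(8, 2, 256, 256)  # placeholder flow fields
imu_batch = torch.randn(8, 6)             # placeholder synchronized IMU samples
optimizer.zero_grad()
loss = loss_fn(model(flow_batch), imu_batch)
loss.backward()
optimizer.step()

Because the regression target is the IMU stream rather than a pose trajectory, and the input is optical flow rather than raw pixels, the sketch reflects the two properties the abstract emphasizes: cheap data collection and independence from training-set appearance.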
Pages: 1311-1316
Page count: 6
Related Papers
50 records in total
  • [31] Zhang, Yinlong; Liang, Wei; Li, Yang; An, Haibo; Tan, Jindong. Robust orientation estimate via inertial guided visual sample consensus. Personal and Ubiquitous Computing, 2018, 22: 259-274.
  • [32] Mur-Artal, Raul; Tardos, Juan D. Visual-Inertial Monocular SLAM With Map Reuse. IEEE Robotics and Automation Letters, 2017, 2(2): 796-803.
  • [33] Kleinert, Markus; Stilla, Uwe. On Sensor Pose Parameterization for Inertial Aided Visual SLAM. 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2012.
  • [34] Wang, Ding; Wang, Junhua; Tian, Yuhan; Fang, Yi; Yuan, Zheng; Xu, Min. PAL-SLAM2: Visual and visual-inertial monocular SLAM for panoramic annular lens. ISPRS Journal of Photogrammetry and Remote Sensing, 2024, 211: 35-48.
  • [35] Keivan, Nima; Sibley, Gabe. Asynchronous adaptive conditioning for visual-inertial SLAM. International Journal of Robotics Research, 2015, 34(13): 1573-1589.
  • [36] Liu, Yanqing; Yang, Dongdong; Li, Jiamao; Gu, Yuzhang; Pi, Jiatian; Zhang, Xiaolin. Stereo Visual-Inertial SLAM With Points and Lines. IEEE Access, 2018, 6: 69381-69392.
  • [37] Guillemard, Richard; Helenon, Francois; Petit, Bruno; Gay-Bellile, Vincent; Carrier, Mathieu. Stationary Detector for Monocular Visual-Inertial SLAM. 2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2019.
  • [38] Keivan, Nima; Patron-Perez, Alonso; Sibley, Gabe. Asynchronous Adaptive Conditioning for Visual-Inertial SLAM. Experimental Robotics, 2016, 109: 309-321.
  • [39] Shi, J.; Zha, F.; Sun, L.; Guo, W.; Wang, P.; Li, M. A Survey of Visual-Inertial SLAM for Mobile Robots. Jiqiren/Robot, 2020, 42(6): 734-748.
  • [40] Fink, Geoff; Franke, Mirko; Lynch, Alan F.; Roebenack, Klaus; Godbolt, Bryan. Visual Inertial SLAM: Application to Unmanned Aerial Vehicles. IFAC PapersOnLine, 2017, 50(1): 1965-1970.