InertialNet: Toward Robust SLAM via Visual Inertial Measurement

Cited by: 0
Authors
Liu, Tse-An [1]
Lin, Huei-Yung [2,3]
Lin, Wei-Yang [4]
Affiliations
[1] ZMP Inc, Bunkyo Ku, 5-41-10, Koishikawa, Tokyo 1120002, Japan
[2] Natl Chung Cheng Univ, Dept Elect Engn, Chiayi 621, Taiwan
[3] Natl Chung Cheng Univ, Adv Inst Mfg High Tech Innovat, Chiayi 621, Taiwan
[4] Natl Chung Cheng Univ, Dept Comp Sci & Informat Engn, Chiayi 621, Taiwan
Keywords
SLAM; VIO; deep learning; IMU; optical flow;
DOI
Not available
Chinese Library Classification (CLC)
U [Transportation];
Discipline Classification Codes
08; 0823;
Abstract
SLAM (simultaneous localization and mapping) is commonly considered a crucial component for achieving autonomous robot navigation. However, most existing visual 3D SLAM systems are still not robust enough: image blur, illumination changes, and low-texture scenes can all cause registration failures. Enabling visual odometry (VO) to cope with these problems makes the workflow of traditional approaches bulky and complicated. The advancement of deep learning, on the other hand, brings new opportunities. In this paper, we use a deep network model to predict complex camera motion. Unlike previous supervised-learning VO research, it requires no ground-truth camera trajectories, which are difficult to obtain. Using image input and IMU output as end-to-end training pairs makes data collection cost-effective, and the optical-flow-based structure makes the system independent of the appearance of the training sets. Experimental results show that the proposed architecture converges faster during training and uses significantly fewer model parameters. Our method retains a degree of robustness under image blur, illumination changes, and low-texture scenes, and it predicts correctly on the EuRoC dataset, which is more challenging than the KITTI dataset.
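The abstract suggests a training setup in which dense optical flow computed from consecutive frames is regressed to the synchronized IMU reading, so that no ground-truth trajectory is needed. Below is a minimal, hypothetical PyTorch sketch of that idea; the name FlowToIMUNet, the layer sizes, and the 6-D gyroscope-plus-accelerometer target are illustrative assumptions, not the authors' actual InertialNet architecture.

# Hypothetical sketch: map a 2-channel dense optical flow field to the
# corresponding 6-D IMU sample (3-axis angular velocity + 3-axis acceleration).
import torch
import torch.nn as nn

class FlowToIMUNet(nn.Module):
    def __init__(self):
        super().__init__()
        # small convolutional encoder over the (u, v) flow field
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # regress a 6-D IMU vector
        self.regressor = nn.Linear(64, 6)

    def forward(self, flow):            # flow: (B, 2, H, W)
        feat = self.encoder(flow).flatten(1)
        return self.regressor(feat)     # (B, 6)

# Training pair: optical flow between two frames as input, IMU sample as target.
model = FlowToIMUNet()
flow = torch.randn(4, 2, 240, 320)      # dummy batch of flow fields
imu_target = torch.randn(4, 6)          # dummy synchronized IMU readings
loss = nn.functional.mse_loss(model(flow), imu_target)
loss.backward()

Because the target is the IMU measurement rather than a pose trajectory, such image/IMU pairs can be logged directly from the sensor rig, which is what makes the data collection described in the abstract cost-effective.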
Pages: 1311 - 1316
Number of pages: 6
Related Papers
50 in total
  • [41] COVINS: Visual-Inertial SLAM for Centralized Collaboration
    Schmuck, Patrik
    Ziegler, Thomas
    Karrer, Marco
    Perraudin, Jonathan
    Chli, Margarita
    2021 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY ADJUNCT PROCEEDINGS (ISMAR-ADJUNCT 2021), 2021, : 171 - 176
  • [42] DynaVINS: A Visual-Inertial SLAM for Dynamic Environments
    Song, Seungwon
    Lim, Hyungtae
    Lee, Alex Junho
    Myung, Hyun
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04): : 11523 - 11530
  • [43] Fast Feature Matching in Visual-Inertial SLAM
    Feng, Lin
    Qu, Xinyi
    Ye, Xuetong
    Wang, Kang
    Li, Xueyuan
    2022 17TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV), 2022, : 500 - 504
  • [44] Collaborative Visual Inertial SLAM for Multiple Smart Phones
    Liu, Jialing
    Liu, Ruyu
    Chen, Kaiqi
    Zhang, Jianhua
    Guo, Dongyan
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 11553 - 11559
  • [45] Accurate and robust visual SLAM with a novel ray-to-ray line measurement model
    Zhang, Chengran
    Fang, Zheng
    Luo, Xingjian
    Liu, Wei
    IMAGE AND VISION COMPUTING, 2023, 140
  • [46] LR-SLAM: Visual Inertial SLAM System with Redundant Line Feature Elimination
    Jiang, Hao
    Cang, Naimeng
    Lin, Yuan
    Guo, Dongsheng
    Zhang, Weidong
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2024, 110 (04)
  • [47] RWT-SLAM: Robust Visual SLAM for Weakly Textured Environments
    Peng, Qihao
    Zhao, Xijun
    Dang, Ruina
    Xiang, Zhiyu
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024, : 913 - 919
  • [48] UVS: underwater visual SLAM—a robust monocular visual SLAM system for lifelong underwater operations
    Leonardi, Marco
    Stahl, Annette
    Brekke, Edmund Førland
    Ludvigsen, Martin
    AUTONOMOUS ROBOTS, 2023, 47 : 1367 - 1385
  • [49] Robust Onboard Visual SLAM for Autonomous MAVs
    Yang, Shaowu
    Scherer, Sebastian A.
    Zell, Andreas
    INTELLIGENT AUTONOMOUS SYSTEMS 13, 2016, 302 : 361 - 373
  • [50] Robust Visual SLAM with Point and Line Features
    Zuo, Xingxing
    Xie, Xiaojia
    Liu, Yong
    Huang, Guoquan
    2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2017, : 1775 - 1782