Survey and evaluation of monocular visual-inertial SLAM algorithms for augmented reality

Cited by: 4
Authors
Jinyu L. [1 ]
Bangbang Y. [1 ]
Danpeng C. [1 ]
Nan W. [2 ]
Guofeng Z. [1 ]
Hujun B. [1 ]
Affiliations
[1] State Key Lab of CAD&CG, Zhejiang University, Hangzhou
[2] SenseTime Research, Hangzhou
Funding
National Natural Science Foundation of China
Keywords
Augmented reality; Localization; Mapping; Odometry; Tracking; Visual-inertial SLAM;
DOI
10.1016/j.vrih.2019.07.002
Abstract
Although VSLAM/VISLAM has achieved great success, it is still difficult to quantitatively evaluate the localization results of different kinds of SLAM systems from the perspective of augmented reality, due to the lack of an appropriate benchmark. In practical AR applications, a variety of challenging situations (e.g., fast motion, strong rotation, severe motion blur, dynamic interference) are easily encountered, since a home user may not move the AR device carefully and the real environment may be quite complex. In addition, for a good AR experience, the frequency of camera tracking loss should be minimized, and recovery from the failure state should be fast and accurate. Existing SLAM datasets/benchmarks generally provide only an evaluation of pose accuracy, and their camera motions are relatively simple and do not fit the common cases in mobile AR applications well. With the above motivation, we build a new visual-inertial dataset, together with a series of evaluation criteria for AR. We also review existing monocular VSLAM/VISLAM approaches with detailed analyses and comparisons. In particular, we select 8 representative monocular VSLAM/VISLAM approaches/systems and quantitatively evaluate them on our benchmark. Our dataset, sample code, and corresponding evaluation tools are available at the benchmark website http://www.zjucvg.net/eval-vislam/. © 2019 Beijing Zhongke Journal Publishing Co. Ltd
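The abstract notes that existing benchmarks evaluate pose accuracy. The standard metric for this is the absolute trajectory error (ATE): the estimated trajectory is first rigidly aligned to the ground truth (since a monocular VISLAM system reports poses in its own gravity-aligned frame), and the RMSE of the remaining position errors is reported. The sketch below is an illustrative implementation of this generic metric, not the paper's own evaluation code; it assumes the two trajectories are already time-synchronized, and uses the Umeyama/Kabsch closed-form alignment without scale correction (scale is observable with an IMU).

```python
import numpy as np

def align_rigid(est, gt):
    """Least-squares rigid alignment (rotation R, translation t) mapping
    estimated positions onto ground truth (Umeyama/Kabsch method)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the SVD solution.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE of position residuals) after
    rigid alignment. est, gt: (N, 3) arrays of synchronized positions."""
    R, t = align_rigid(est, gt)
    residuals = (est @ R.T + t) - gt
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```

A perfectly estimated trajectory that merely lives in a rotated/translated frame yields an ATE of (numerically) zero after alignment, which is why alignment must precede the error computation.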
Pages: 386-410 (24 pages)