Review of visual-inertial navigation system initialization method

Cited by: 0
Authors
Liu Z. [1 ]
Shi D. [2 ]
Yang S. [1 ]
Li R. [2 ]
Affiliations
[1] College of Computer Science and Technology, National University of Defense Technology, Changsha
[2] National Innovation Institute of Defense Technology, Academy of Military Sciences, Beijing
Keywords
inertial measurement processing; initialization; sensor fusion; visual-inertial navigation systems
DOI: 10.11887/j.cn.202302002
Abstract
Visual-inertial navigation systems (VINS) use initialization to estimate a set of parameters required for state estimation, such as the metric scale, gravity vector, velocity, and inertial measurement unit (IMU) biases, thereby improving the accuracy of the system's navigation, positioning, and environmental perception. According to how the sensing information is fused, VINS initialization methods can be divided into three categories: joint initialization, disjoint initialization, and semi-joint initialization. Building on existing research, the current mainstream VINS initialization methods were reviewed from four aspects: basic theory, development and classification, existing methods, and performance evaluation, and future development trends were summarized. This review helps readers gain a general understanding of VINS initialization methods and grasp their direction of development. © 2023 National University of Defense Technology. All rights reserved.
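To make the initialization problem named in the abstract concrete, the following is a minimal toy sketch, not taken from the paper: a linear least-squares alignment in the spirit of joint initialization, which jointly solves for the metric scale `s` and the gravity vector `g` that best explain IMU-derived displacements given up-to-scale visual positions. Velocity and IMU-bias unknowns are omitted for brevity, and the measurement model, function name, and variables are all illustrative assumptions.

```python
import numpy as np

def align_scale_gravity(p_visual, alpha, dt):
    """Toy visual-inertial alignment (illustrative, not the paper's method).

    Solves the stacked linear system
        s * (p[k+1] - p[k]) - 0.5 * g * dt^2 = alpha[k]
    in the least-squares sense for the unknowns x = [s, gx, gy, gz],
    where p_visual are up-to-scale visual positions and alpha[k] are
    IMU-derived displacement measurements over a fixed interval dt.
    """
    n = len(alpha)
    A = np.zeros((3 * n, 4))
    b = np.zeros(3 * n)
    for k in range(n):
        dp = p_visual[k + 1] - p_visual[k]
        A[3 * k:3 * k + 3, 0] = dp                             # scale column
        A[3 * k:3 * k + 3, 1:4] = -0.5 * dt ** 2 * np.eye(3)   # gravity block
        b[3 * k:3 * k + 3] = alpha[k]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[0], x[1:4]  # (estimated scale, estimated gravity vector)
```

With synthetic displacements generated from a known scale and gravity, this solver recovers both parameters; real initialization pipelines additionally estimate per-frame velocities and IMU biases and build the measurements from IMU preintegration.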
Pages: 15-26 (11 pages)