A Survey of Loop-Closure Detection Method of Visual SLAM in Complex Environments

Cited by: 0
Authors
Liu Q. [1 ]
Duan F. [1 ]
Sang Y. [1 ]
Zhao J. [1 ]
Affiliations
[1] School of Mechanical Engineering, Dalian University of Technology, Dalian
Source
Jiqiren/Robot | 2019, Vol. 41, No. 01
Keywords
Decision model; Loop-closure detection; Performance evaluation; Place description; Visual SLAM;
DOI
10.13973/j.cnki.robot.180004
Abstract
With the rapid development of autonomous driving and virtual reality technologies, visual simultaneous localization and mapping (SLAM) has become a research hotspot in recent years. Three main problems of loop-closure detection for visual SLAM in complex environments are surveyed: place description, decision models, and performance evaluation of loop-closure detection. First, place description methods based on classical image features, deep learning, depth information, and time-varying maps are introduced, and the advantages and disadvantages of the different methods are analyzed in detail. Second, the decision models commonly used in loop recognition based on place description are summarized, with emphasis on probabilistic models and sequence matching. Third, performance evaluation methods for loop-closure detection are explained, and their connection with backend optimization is analyzed. Finally, future directions that may contribute to the development of loop-closure detection are discussed, focusing on several key points such as deep learning, backend optimization, and the fusion of multiple descriptors. © 2019, Science Press. All rights reserved.
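To make the evaluation notion concrete, loop-closure detectors are conventionally scored by precision and recall against ground-truth loop pairs. The sketch below is not from the surveyed paper; it is a minimal illustration assuming detections and ground truth are given as sets of hypothetical (query frame, match frame) pairs.

```python
def precision_recall(detections, ground_truth):
    """Score a loop-closure detector.

    detections and ground_truth are sets of (query_frame, match_frame)
    pairs; a detection is a true positive iff it appears in ground_truth.
    """
    true_positives = len(detections & ground_truth)
    precision = true_positives / len(detections) if detections else 1.0
    recall = true_positives / len(ground_truth) if ground_truth else 1.0
    return precision, recall


# Hypothetical example: 4 true loops, 3 detections, 2 of them correct.
gt = {(10, 2), (11, 3), (50, 20), (51, 21)}
det = {(10, 2), (11, 3), (40, 5)}
p, r = precision_recall(det, gt)
print(p, r)  # prints 0.666... 0.5
```

In SLAM practice the operating point at 100% precision matters most, since a single false loop closure can corrupt the pose graph during backend optimization.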
Pages: 112-123, 136
References (90 in total)
  • [1] Cheng J., Jiang Z., Zhang Y., et al., Toward robust linear SLAM, IEEE International Conference on Mechatronics and Automation, pp. 705-710, (2014)
  • [2] Lowe D.G., Object recognition from local scale-invariant features, IEEE International Conference on Computer Vision, pp. 1150-1157, (1999)
  • [3] Se S., Lowe D.G., Little J.J., Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks, International Journal of Robotics Research, 21, 8, pp. 735-760, (2002)
  • [4] Stumm E., Mei C., Lacroix S., Probabilistic place recognition with covisibility maps, IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4158-4163, (2013)
  • [5] Kosecka J., Li F., Yang X., Global localization and relative positioning based on scale-invariant key points, Robotics & Autonomous Systems, 52, 1, pp. 27-38, (2005)
  • [6] Bay H., Tuytelaars T., Van Gool L., SURF: Speeded up robust features, European Conference on Computer Vision, pp. 404-417, (2006)
  • [7] Rublee E., Rabaud V., Konolige K., et al., ORB: An efficient alternative to SIFT or SURF, IEEE International Conference on Computer Vision, pp. 2564-2571, (2011)
  • [8] Sivic J., Zisserman A., Video Google: A text retrieval approach to object matching in videos, IEEE International Conference on Computer Vision, pp. 1470-1477, (2003)
  • [9] Galvez-Lopez D., Tardos J.D., Bags of binary words for fast place recognition in image sequences, IEEE Transactions on Robotics, 28, 5, pp. 1188-1197, (2012)
  • [10] Mur-Artal R., Tardos J.D., ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras, IEEE Transactions on Robotics, 33, 5, pp. 1255-1262, (2017)