Visual localization technology of AGV based on global sparse map

Cited by: 0
Authors
Zhang H. [1 ]
Cheng X. [1 ]
Liu C. [1 ]
Sun J. [1 ]
Affiliations
[1] School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing
Keywords
Automated guided vehicle (AGV); Coded point; Sparse map; Three-dimensional reconstruction; Visual localization;
DOI
10.13700/j.bh.1001-5965.2018.0272
Abstract
In order to realize high-precision localization of an automated guided vehicle (AGV) in complex industrial environments and to overcome the influence of environmental change, a visual localization method based on a global sparse map was proposed. First, a large-capacity two-dimensional coded point was designed and placed on the ground as an artificial landmark. Based on a quad recognition algorithm, the coded points were accurately segmented and identified in the complex industrial environment, and the coded information they carry allowed feature points from different images to be matched reliably. Then, a block-optimization three-dimensional reconstruction algorithm was designed to map a large-scale industrial environment, providing a sparse electronic map for AGV visual localization. The visual localization of the AGV was realized by matching the feature points from the visual sensor against the sparse electronic map. The repeated localization precision of the AGV is better than 0.5 mm, the angle deviation is less than 0.5°, and the average displacement error of the trajectory is less than 0.1%. Practical application shows that the method can realize visual localization of the AGV in complex industrial environments; the speed and precision of localization both meet the requirements of industrial application, providing a new approach for vision-based AGV localization. © 2019, Editorial Board of JBUAA. All rights reserved.
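The paper does not provide source code; as a minimal sketch of the final localization step described above, the snippet below assumes that each detected coded point yields an ID and pixel coordinates, that the ID is looked up in the sparse map to obtain its 3D coordinates, and that the camera pose is then recovered with a standard PnP solve via OpenCV. The names sparse_map, detections, and localize, the sample coordinates, and the choice of PnP are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
import cv2  # OpenCV; assumed available

# Hypothetical sparse map: coded-point ID -> 3D ground coordinates (metres),
# e.g. as produced by an offline block-optimization reconstruction step.
sparse_map = {
    101: np.array([0.00, 0.00, 0.0]),
    102: np.array([1.20, 0.00, 0.0]),
    103: np.array([1.20, 0.80, 0.0]),
    104: np.array([0.00, 0.80, 0.0]),
}

def localize(detections, camera_matrix, dist_coeffs):
    """Estimate camera pose from detected coded points.

    detections: dict mapping coded-point ID -> 2D image coordinates (pixels).
    camera_matrix, dist_coeffs: calibrated intrinsics of the AGV camera.
    Returns (R, t): rotation matrix and translation of the map frame
    expressed in the camera frame, or None if too few points are matched.
    """
    # Keep only detections whose ID exists in the sparse map.
    ids = [i for i in detections if i in sparse_map]
    if len(ids) < 4:  # PnP needs at least 4 non-degenerate points
        return None
    object_pts = np.array([sparse_map[i] for i in ids], dtype=np.float64)
    image_pts = np.array([detections[i] for i in ids], dtype=np.float64)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
    return R, tvec
```

Inverting the returned pose (R, t) gives the camera, and hence the AGV, in the map frame; in practice the planar ground constraint would further reduce this to a 2D position plus heading.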
Pages: 218-226
Number of pages: 8