Algorithm for dealing with motion blur in visual SLAM

Cited by: 0
Authors
Guo K. [1,2]
Fang J. [1]
Wang X. [1]
Zhang X. [1]
Liu X. [1]
Affiliations
[1] Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing
[2] University of Chinese Academy of Sciences, Beijing
Keywords
Coordinate difference; Frame removal; Mobile robot; Motion blur; SLAM
DOI
10.11918/j.issn.0367-6234.201901208
Abstract
Motion blur caused by high-speed motion is common on low-cost devices and is a major factor limiting the accuracy of Simultaneous Localization and Mapping (SLAM). Approaches that handle motion blur by estimating a blur kernel or performing blind deconvolution are unsuitable for mobile phones, Unmanned Aerial Vehicles (UAVs), and other platforms with limited processing capacity, which restricts the application of SLAM algorithms. In this study, a correspondence was established between the coordinate difference of features in adjacent images and the extent of motion blur by analyzing how motion blur is generated. The average displacement of feature points was used to form the EBL parameter, which represents the blur degree of a frame, and was combined with a frame-removal algorithm to continuously discard heavily blurred images. With only a small amount of additional computation, the accuracy of localization and mapping under motion blur could be improved. Experiments verified the validity of the EBL parameter and the improvement in the accuracy of the SLAM system. Results show that the proposed algorithm clearly reduces camera trajectory error; for datasets with severe blur, the error could be reduced by 20% with an appropriately sized window. © 2019, Editorial Board of Journal of Harbin Institute of Technology. All rights reserved.
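The abstract describes the approach only at a high level. The following is a minimal Python/OpenCV sketch of that general idea, not the authors' implementation: it assumes EBL is taken as the mean pixel displacement of ORB features matched between adjacent frames, and that one most-blurred frame is dropped per sliding window. The function names estimate_ebl and select_frames, the ORB/BFMatcher choices, and the removal policy are illustrative assumptions.

# Sketch only: approximate the paper's EBL idea by measuring the average pixel
# displacement of matched ORB features between adjacent frames, then skipping
# the most-blurred frame inside each sliding window. The equivalence
# "EBL == mean feature displacement" and the one-frame-per-window removal
# policy are assumptions, not details confirmed by the abstract.

import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_ebl(prev_gray, curr_gray):
    """Mean displacement (pixels) of keypoints matched between two adjacent frames."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return float("inf")  # no detectable features usually indicates heavy blur
    matches = matcher.match(des1, des2)
    if not matches:
        return float("inf")
    disp = [np.linalg.norm(np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt))
            for m in matches]
    return float(np.mean(disp))

def select_frames(frames, window=5):
    """Within each window of `window` frames, drop the frame with the largest EBL."""
    ebl = [0.0]  # the first frame has no predecessor, so treat it as unblurred
    for prev, curr in zip(frames, frames[1:]):
        ebl.append(estimate_ebl(prev, curr))
    keep = []
    for start in range(0, len(frames), window):
        idx = list(range(start, min(start + window, len(frames))))
        if len(idx) > 1:
            idx.remove(max(idx, key=lambda i: ebl[i]))  # discard the most blurred frame
        keep.extend(idx)
    return [frames[i] for i in keep]

In use, `frames` would be a list of grayscale images fed to the SLAM front end; the surviving frames are then passed on for tracking and mapping, which matches the abstract's claim that only a small amount of extra computation is added before the standard pipeline.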
Pages: 116-121
Page count: 5