A Visual Feature Mismatch Detection Algorithm for Optical Flow-Based Visual Odometry

Cited by: 1
Authors
Li, Ruichen [1 ]
Shen, Han [2 ]
Wang, Linan [2 ]
Liu, Congyi [3 ]
Yi, Xiaojian [4 ]
Affiliations
[1] Beijing Inst Technol, Sch Automat, Beijing 100081, Peoples R China
[2] Southeast Univ, Sch Math, Dept Syst Sci, Nanjing 211189, Peoples R China
[3] Southeast Univ, Sch Cyber Sci & Engn, Nanjing 211189, Peoples R China
[4] Beijing Inst Technol, Sch Mechatron Engn, Beijing 100081, Peoples R China
Keywords
Optical flow; visual simultaneous localization and mapping (VSLAM); visual odometry (VO); mismatch detection; robust
DOI
10.1142/S2301385025410031
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Camera-based visual simultaneous localization and mapping (VSLAM) algorithms extract and track feature points in their front-ends; the tracked feature points are then forwarded to the back-end for camera pose estimation. However, feature matches produced by optical flow tracking are prone to visual feature mismatches. To address this problem, this paper introduces a novel visual feature mismatch detection algorithm. First, the algorithm computes the pixel displacement of every feature point pair tracked by optical flow between consecutive images. Mismatches are then detected using a pixel displacement threshold derived from the statistical characteristics of the tracking results. In addition, bound values are imposed on the threshold to improve the accuracy of the filtered matches and to keep the algorithm adaptable to different environments. From the filtered matches, the algorithm estimates the fundamental matrix, which is used to further refine the matches before they are sent to the back-end for camera pose estimation. The algorithm is integrated into a state-of-the-art VSLAM system, enhancing its overall robustness. Extensive experiments on public datasets and on our unmanned surface vehicles (USVs) validate the performance of the proposed algorithm.
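As a rough illustration of the pipeline outlined in the abstract, the following Python snippet is a minimal sketch assuming OpenCV pyramidal Lucas-Kanade tracking. The function name filter_matches, the statistical rule (mean + 2*std), and the bound constants are illustrative assumptions, not the authors' implementation: it filters optical-flow matches with a bounded displacement threshold and then refines them using a RANSAC-estimated fundamental matrix.

import numpy as np
import cv2

DISP_LOWER_BOUND = 5.0   # assumed lower bound on the displacement threshold (pixels)
DISP_UPPER_BOUND = 50.0  # assumed upper bound on the displacement threshold (pixels)

def filter_matches(prev_img, curr_img, prev_pts):
    """Track prev_pts into curr_img and return mismatch-filtered point pairs."""
    # Track feature points between consecutive images with pyramidal LK optical flow.
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_img, curr_img, prev_pts, None, winSize=(21, 21), maxLevel=3)
    tracked = status.ravel().astype(bool)
    p0 = prev_pts.reshape(-1, 2)[tracked]
    p1 = curr_pts.reshape(-1, 2)[tracked]
    if len(p0) == 0:
        return p0, p1

    # Pixel displacement of every tracked feature point pair.
    disp = np.linalg.norm(p1 - p0, axis=1)

    # Threshold from the statistics of the tracking results (mean + 2*std here, an
    # assumed rule), clamped to bound values so it adapts to different environments.
    thresh = np.clip(disp.mean() + 2.0 * disp.std(), DISP_LOWER_BOUND, DISP_UPPER_BOUND)
    keep = disp < thresh
    p0, p1 = p0[keep], p1[keep]

    # Estimate the fundamental matrix on the coarsely filtered matches and use its
    # RANSAC inlier mask to refine them before handing the matches to the back-end.
    if len(p0) >= 8:
        _F, inliers = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
        if inliers is not None:
            mask = inliers.ravel().astype(bool)
            p0, p1 = p0[mask], p1[mask]
    return p0, p1

Here prev_pts would be an (N, 1, 2) float32 array such as the output of cv2.goodFeaturesToTrack; in a VSLAM front-end the refined pairs would replace the raw optical-flow matches forwarded to the back-end for pose estimation.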
Pages: 14