Semantic Visual Odometry Based on Panoramic Annular Imaging

Cited by: 5
|
Authors
Chen Hao [1 ]
Yang Kailun [2 ]
Hu Weijian [1 ]
Bai Jian [1 ]
Wang Kaiwei [1 ]
Affiliations
[1] Zhejiang Univ, Natl Engn Res Ctr Opt Instrumentat, Hangzhou 310058, Zhejiang, Peoples R China
[2] Karlsruhe Inst Technol, Inst Anthropomat & Robot, D-76131 Karlsruhe, Germany
Keywords
machine vision; visual odometry; panoramic annular lens; semantic segmentation; pose estimation;
DOI
10.3788/AOS202141.2215002
CLC classification number
O43 [Optics];
Subject classification number
070207 ; 0803 ;
Abstract
Visual odometry is widely used in applications such as intelligent robots and self-driving cars. However, traditional visual odometry algorithms based on pinhole cameras with a limited field of view (FOV) are usually fragile to moving objects in the environment and to fast camera rotation, resulting in insufficient robustness and accuracy in practical use. This paper proposes panoramic annular semantic visual odometry as a solution to this problem. By introducing a panoramic annular imaging system with an ultra-wide FOV into visual odometry and coupling the semantic information provided by deep-learning-based panoramic annular semantic segmentation into each module of the algorithm, the effects of moving objects and fast rotation are reduced, and the performance of visual odometry in these challenging scenarios is improved. Experimental results show that, compared with traditional visual odometry systems, the proposed algorithm achieves more accurate and robust pose estimation in realistic scenarios.
Pages: 11
Cited references
35 in total
  • [1] Lucas-Kanade 20 years on: A unifying framework
    Baker, S
    Matthews, I
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2004, 56 (03) : 221 - 255
  • [2] On Autonomous Spatial Exploration with Small Hexapod Walking Robot using Tracking Camera Intel RealSense T265
    Bayer, Jan
    Faigl, Jan
    [J]. 2019 EUROPEAN CONFERENCE ON MOBILE ROBOTS (ECMR), 2019,
  • [3] DynaSLAM: Tracking, Mapping, and Inpainting in Dynamic Scenes
    Bescos, Berta
    Facil, Jose M.
    Civera, Javier
    Neira, Jose
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, 3 (04): : 4076 - 4083
  • [4] Bowman Sean L., 2017, 2017 IEEE International Conference on Robotics and Automation (ICRA), P1722, DOI 10.1109/ICRA.2017.7989203
  • [5] ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial, and Multimap SLAM
    Campos, Carlos
    Elvira, Richard
    Gomez Rodriguez, Juan J.
    Montiel, Jose M. M.
    Tardos, Juan D.
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2021, 37 (06) : 1874 - 1890
  • [6] PALVO: visual odometry based on panoramic annular lens
    Chen, Hao
    Wang, Kaiwei
    Hu, Weijian
    Yang, Kailun
    Cheng, Ruiqi
    Huang, Xiao
    Bai, Jian
    [J]. OPTICS EXPRESS, 2019, 27 (17) : 24481 - 24497
  • [7] Direct Sparse Odometry
    Engel, Jakob
    Koltun, Vladlen
    Cremers, Daniel
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (03) : 611 - 625
  • [8] SVO: Semidirect Visual Odometry for Monocular and Multicamera Systems
    Forster, Christian
    Zhang, Zichao
    Gassner, Michael
    Werlberger, Manuel
    Scaramuzza, Davide
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2017, 33 (02) : 249 - 265
  • [9] Forster C, 2014, IEEE INT CONF ROBOT, P15, DOI 10.1109/ICRA.2014.6906584
  • [10] Ganti P., 2018, SIVO SEMANTICALLY IN