ViPR: Visual-Odometry-aided Pose Regression for 6DoF Camera Localization

Cited by: 6
Authors
Ott, Felix [1 ]
Feigl, Tobias [1 ,2 ]
Loeffler, Christoffer [1 ,2 ]
Mutschler, Christopher [1 ,3 ]
Affiliations
[1] Fraunhofer Inst Integrated Circuits IIS, Nurnberg, Germany
[2] FAU Erlangen Nuremberg, Dept Comp Sci, Erlangen, Germany
[3] Ludwig Maximilians Univ LMU, Dept Stat, Munich, Germany
Source
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020) | 2020
Keywords
AUGMENTED REALITY; EFFICIENT
DOI
10.1109/CVPRW50498.2020.00029
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Visual Odometry (VO) accumulates a positional drift in long-term robot navigation tasks. Although Convolutional Neural Networks (CNNs) improve VO in various aspects, VO still suffers from moving obstacles, discontinuous observation of features, and poor textures or visual information. While recent approaches estimate a 6DoF pose either directly from (a series of) images or by merging depth maps with optical flow (OF), research that combines absolute pose regression with OF is limited. We propose ViPR, a novel modular architecture for long-term 6DoF VO that leverages temporal information and synergies between absolute pose estimates (from PoseNet-like modules) and relative pose estimates (from FlowNet-based modules) by combining both through recurrent layers. Experiments on known datasets and on our own Industry dataset show that our modular design outperforms the state of the art in long-term navigation tasks.
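
The fusion idea stated in the abstract (per-frame absolute pose estimates from a PoseNet-like module and relative pose estimates from a FlowNet-based module, combined through recurrent layers that regress the final 6DoF pose) can be illustrated with a minimal PyTorch sketch. Note that the class name, layer sizes, and pose encodings below (7D absolute pose as position plus quaternion, 6D relative pose as translation plus Euler angles) are illustrative assumptions, not the authors' actual configuration:

import torch
import torch.nn as nn

class PoseFusionRNN(nn.Module):
    """Hypothetical sketch of the recurrent fusion described in the
    abstract: absolute and relative pose estimates for a sequence of
    frames are concatenated per time step and passed through an LSTM
    that regresses the final 6DoF pose."""

    def __init__(self, hidden_size: int = 128, num_layers: int = 2):
        super().__init__()
        # Per frame: 7D absolute pose (xyz + quaternion) and
        # 6D relative pose (translation + Euler angles) -- assumed encodings.
        self.rnn = nn.LSTM(
            input_size=7 + 6,
            hidden_size=hidden_size,
            num_layers=num_layers,
            batch_first=True,
        )
        # Separate heads for 3D position and 4D orientation quaternion.
        self.fc_xyz = nn.Linear(hidden_size, 3)
        self.fc_quat = nn.Linear(hidden_size, 4)

    def forward(self, abs_pose_seq, rel_pose_seq):
        # abs_pose_seq: (batch, time, 7); rel_pose_seq: (batch, time, 6)
        x = torch.cat([abs_pose_seq, rel_pose_seq], dim=-1)
        out, _ = self.rnn(x)
        last = out[:, -1]  # hidden state at the final time step
        xyz = self.fc_xyz(last)
        quat = self.fc_quat(last)
        quat = quat / quat.norm(dim=-1, keepdim=True)  # unit quaternion
        return torch.cat([xyz, quat], dim=-1)  # (batch, 7)

# Usage: fuse 10-frame sequences of absolute and relative estimates.
model = PoseFusionRNN()
abs_seq = torch.randn(4, 10, 7)
rel_seq = torch.randn(4, 10, 6)
pose = model(abs_seq, rel_seq)  # -> shape (4, 7)

Feeding the recurrent layers both streams lets the network smooth the drift-free but noisy absolute estimates with the locally accurate relative (optical-flow-derived) motion over time, which is the synergy the abstract claims.
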
Pages: 187 - 198
Number of pages: 12
Related Papers
50 records in total
  • [41] Real-time scalable 6DOF pose estimation for textureless objects
    Cao, Zhe
    Sheikh, Yaser
    Banerjee, Natasha Kholgade
    2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2016, : 2441 - 2448
  • [42] ZebraPose: Coarse to Fine Surface Encoding for 6DoF Object Pose Estimation
    Su, Yongzhi
    Saleh, Mahdi
    Fetzer, Torben
    Rambach, Jason
    Navab, Nassir
    Busam, Benjamin
    Stricker, Didier
    Tombari, Federico
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 6728 - 6738
  • [43] PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation
    Peng, Sida
    Liu, Yuan
    Huang, Qixing
    Zhou, Xiaowei
    Bao, Hujun
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 4556 - 4565
  • [44] Keypoint Cascade Voting for Point Cloud Based 6DoF Pose Estimation
    Wu, Yangzheng
    Javaheri, Alireza
    Zand, Mohsen
    Greenspan, Michael
    2022 INTERNATIONAL CONFERENCE ON 3D VISION, 3DV, 2022, : 176 - 186
  • [45] ParametricNet: 6DoF Pose Estimation Network for Parametric Shapes in Stacked Scenarios
    Zeng, Long
    Lv, Wei Jie
    Zhang, Xin Yu
    Liu, Yong Jin
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 772 - 778
  • [46] Optimizing RGB-D Fusion for Accurate 6DoF Pose Estimation
    Saadi, Lounes
    Besbes, Bassem
    Kramm, Sebastien
    Bensrhair, Abdelaziz
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (02): 2413 - 2420
  • [47] A Survey of 6DoF Object Pose Estimation Methods for Different Application Scenarios
    Guan, Jian
    Hao, Yingming
    Wu, Qingxiao
    Li, Sicong
    Fang, Yingjian
    SENSORS, 2024, 24 (04)
  • [48] A Study on the Impact of Domain Randomization for Monocular Deep 6DoF Pose Estimation
    da Cunha, Kelvin B.
    Brito, Caio
    Valenca, Lucas
    Simoes, Francisco
    Teichrieb, Veronica
    2020 33RD SIBGRAPI CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI 2020), 2020, : 332 - 339
  • [49] YCB-M: A Multi-Camera RGB-D Dataset for Object Recognition and 6DoF Pose Estimation
    Grenzdoerffer, Till
    Guenther, Martin
    Hertzberg, Joachim
    2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020, : 3650 - 3656
  • [50] Monocular Visual Odometry aided by a low resolution Time of Flight camera
    Chiodini, Sebastiano
    Giubilato, Riccardo
    Pertile, Marco
    Debei, Stefano
    2017 IEEE INTERNATIONAL WORKSHOP ON METROLOGY FOR AEROSPACE (METROAEROSPACE), 2017, : 239 - 244