Two Stream Networks for Self-Supervised Ego-Motion Estimation

Cited by: 0
Authors
Ambrus, Rares [1 ]
Guizilini, Vitor [1 ]
Li, Jie [1 ]
Pillai, Sudeep [1 ]
Gaidon, Adrien [1 ]
Affiliations
[1] Toyota Research Institute (TRI), Los Altos, CA 94022 USA
Keywords
Self-Supervised Learning; Ego-Motion Estimation; Visual Odometry;
DOI
Not available
CLC Number
TP39 [Computer Applications];
Subject Classification Code
081203; 0835
Abstract
Learning depth and camera ego-motion from raw unlabeled RGB video streams is seeing exciting progress through self-supervision from strong geometric cues. To leverage not only appearance but also scene geometry, we propose a novel self-supervised two-stream network using RGB and inferred depth information for accurate visual odometry. In addition, we introduce a sparsity-inducing data augmentation policy for ego-motion learning that effectively regularizes the pose network to enable stronger generalization performance. As a result, we show that our proposed two-stream pose network achieves state-of-the-art results among learning-based methods on the KITTI odometry benchmark, and is especially suited for self-supervision at scale. Our experiments on a large-scale urban driving dataset of 1 million frames indicate that the performance of our proposed architecture does indeed scale progressively with more data.
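The abstract describes a pose network with two input streams, appearance (RGB) and geometry (inferred depth), whose features are combined to regress camera ego-motion. Below is a minimal illustrative sketch of that idea, not the authors' implementation: the class and function names (`TwoStreamPoseNet`, `conv_block`), the layer widths, and the simple concatenation-based fusion are all assumptions; only the RGB-plus-inferred-depth two-stream design is taken from the abstract.

```python
# Minimal sketch (assumed PyTorch implementation, not the paper's code) of a
# two-stream pose network: one encoder for a pair of RGB frames, one for the
# corresponding inferred depth maps, fused to regress a 6-DoF relative pose.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=2):
    """3x3 convolution followed by ReLU; building block of each stream."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.ReLU(inplace=True),
    )


class TwoStreamPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Appearance stream: two stacked RGB frames (2 x 3 channels).
        self.rgb_stream = nn.Sequential(
            conv_block(6, 32), conv_block(32, 64), conv_block(64, 128),
        )
        # Geometry stream: two stacked inferred depth maps (2 x 1 channel).
        self.depth_stream = nn.Sequential(
            conv_block(2, 32), conv_block(32, 64), conv_block(64, 128),
        )
        # Fusion and pose head: 3 translation + 3 rotation parameters.
        self.pose_head = nn.Sequential(
            conv_block(256, 256, stride=1),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, 6),
        )

    def forward(self, rgb_pair, depth_pair):
        # rgb_pair:   (B, 6, H, W) -- target and source RGB frames
        # depth_pair: (B, 2, H, W) -- corresponding inferred depth maps
        feats = torch.cat(
            [self.rgb_stream(rgb_pair), self.depth_stream(depth_pair)], dim=1
        )
        return self.pose_head(feats)  # (B, 6) translation + axis-angle rotation


if __name__ == "__main__":
    net = TwoStreamPoseNet()
    pose = net(torch.randn(1, 6, 128, 416), torch.randn(1, 2, 128, 416))
    print(pose.shape)  # torch.Size([1, 6])
```

The fusion strategy (feature concatenation before the pose head) is one plausible choice for combining the two streams; the paper may fuse the modalities differently.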
Pages: 10
Related Papers
50 results in total
  • [1] Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation
    Shen, Tianwei
    Luo, Zixin
    Zhou, Lei
    Deng, Hanyu
    Zhang, Runze
    Fang, Tian
    Quan, Long
    [J]. 2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 6359 - 6365
  • [2] Self-Supervised Attention Learning for Depth and Ego-motion Estimation
    Sadek, Assem
    Chidlovskii, Boris
    [J]. 2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 10054 - 10060
  • [3] CeMNet: Self-supervised learning for accurate continuous ego-motion estimation
    Lee, Minhaeng
    Fowlkes, Charless C.
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2019), 2019, : 354 - 363
  • [4] Rigid-aware self-supervised GAN for camera ego-motion estimation
    Lin, Lili
    Luo, Wan
    Yan, Zhengmao
    Zhou, Wenhui
    [J]. DIGITAL SIGNAL PROCESSING, 2022, 126
  • [5] Semantic and Optical Flow Guided Self-supervised Monocular Depth and Ego-Motion Estimation
    Fang, Jiaojiao
    Liu, Guizhong
    [J]. IMAGE AND GRAPHICS (ICIG 2021), PT III, 2021, 12890 : 465 - 477
  • [6] Self-supervised monocular depth and ego-motion estimation for CT-bronchoscopy fusion
    Chang, Qi
    Higgins, William E.
    [J]. IMAGE-GUIDED PROCEDURES, ROBOTIC INTERVENTIONS, AND MODELING, MEDICAL IMAGING 2024, 2024, 12928
  • [7] Self-supervised monocular depth and ego-motion estimation in endoscopy: Appearance flow to the rescue
    Shao, Shuwei
    Pei, Zhongcai
    Chen, Weihai
    Zhu, Wentao
    Wu, Xingming
    Sun, Dianmin
    Zhang, Baochang
    [J]. MEDICAL IMAGE ANALYSIS, 2022, 77
  • [8] Neural Ray Surfaces for Self-Supervised Learning of Depth and Ego-motion
    Vasiljevic, Igor
    Guizilini, Vitor
    Ambrus, Rares
    Pillai, Sudeep
    Burgard, Wolfram
    Shakhnarovich, Greg
    Gaidon, Adrien
    [J]. 2020 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2020), 2020, : 1 - 11
  • [9] Self-Supervised Ego-Motion Estimation Based on Multi-Layer Fusion of RGB and Inferred Depth
    Jiang, Zijie
    Taira, Hajime
    Miyashita, Naoyuki
    Okutomi, Masatoshi
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 7605 - 7611
  • [10] Self-Supervised Learning of Non-Rigid Residual Flow and Ego-Motion
    Tishchenko, Ivan
    Lombardi, Sandro
    Oswald, Martin R.
    Pollefeys, Marc
    [J]. 2020 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2020), 2020, : 150 - 159