On Robust Cross-view Consistency in Self-supervised Monocular Depth Estimation

Cited by: 0
Authors
Haimei Zhao
Jing Zhang
Zhuo Chen
Bo Yuan
Dacheng Tao
Affiliations
[1] University of Sydney,School of Computer Science
[2] Tsinghua University,Shenzhen International Graduate School
[3] University of Queensland,School of Information Technology & Electrical Engineering
Source: Machine Intelligence Research, 2024, 21 (03)
Keywords
3D vision; depth estimation; cross-view consistency; self-supervised learning; monocular perception
DOI: not available
Abstract
Remarkable progress has been made in self-supervised monocular depth estimation (SS-MDE) by exploring cross-view consistency, e.g., photometric consistency and 3D point cloud consistency. However, these consistency measures are highly vulnerable to illumination variation, occlusions, texture-less regions, and moving objects, making them insufficiently robust for diverse scenes. To address this challenge, we study two kinds of robust cross-view consistency in this paper. First, the spatial offset field between adjacent frames is obtained by reconstructing the reference frame from its neighbors via deformable alignment, and is used to align temporal depth features through a depth feature alignment (DFA) loss. Second, the 3D point clouds of each reference frame and its nearby frames are computed and transformed into voxel space, where the point density in each voxel is calculated and aligned via a voxel density alignment (VDA) loss. In this way, we exploit temporal coherence in both the depth feature space and the 3D voxel space for SS-MDE, shifting the "point-to-point" alignment paradigm to a "region-to-region" one. Compared with the photometric consistency loss and the rigid point cloud alignment loss, the proposed DFA and VDA losses are more robust owing to the strong representation power of deep features and the high tolerance of voxel density to the aforementioned challenges. Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques. Extensive ablation studies and analysis validate the effectiveness of the proposed losses, especially in challenging scenes. The code and models are available at https://github.com/sunnyHelen/RCVC-depth.
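The core VDA idea in the abstract, comparing per-voxel point densities of two point clouds instead of matching individual points, can be sketched as follows. This is an illustrative NumPy prototype only, not the authors' released implementation: the function names, grid bounds, and hard voxel counting are assumptions for this sketch (a trainable version would need a differentiable soft count over voxels).

```python
import numpy as np

def voxel_density(points, grid_min, voxel_size, grid_shape):
    """Normalized per-voxel point density of an (N, 3) point cloud."""
    idx = np.floor((points - grid_min) / voxel_size).astype(int)
    # Discard points falling outside the voxel grid.
    inside = np.all((idx >= 0) & (idx < np.asarray(grid_shape)), axis=1)
    flat = np.ravel_multi_index(idx[inside].T, grid_shape)
    counts = np.bincount(flat, minlength=int(np.prod(grid_shape))).astype(float)
    return counts / max(counts.sum(), 1.0)  # density distribution over voxels

def vda_loss(pc_ref, pc_src, grid_min=(-10.0, -10.0, 0.0),
             voxel_size=1.0, grid_shape=(20, 20, 20)):
    """L1 distance between the voxel density distributions of two clouds."""
    grid_min = np.asarray(grid_min)
    d_ref = voxel_density(pc_ref, grid_min, voxel_size, grid_shape)
    d_src = voxel_density(pc_src, grid_min, voxel_size, grid_shape)
    return float(np.abs(d_ref - d_src).sum())
```

Because the loss compares aggregate densities per voxel rather than nearest-neighbor point pairs, a few occluded or dynamic points perturb it far less than a rigid "point-to-point" alignment, which is the robustness argument the abstract makes.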
Pages: 495 - 513 (18 pages)
Related Papers (50 in total)
  • [1] On Robust Cross-view Consistency in Self-supervised Monocular Depth Estimation
    Zhao, Haimei
    Zhang, Jing
    Chen, Zhuo
    Yuan, Bo
    Tao, Dacheng
    MACHINE INTELLIGENCE RESEARCH, 2024, 21 (03) : 495 - 513
  • [2] Image Masking for Robust Self-Supervised Monocular Depth Estimation
    Chawla, Hemang
    Jeeveswaran, Kishaan
    Arani, Elahe
    Zonooz, Bahram
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), 2023, : 10054 - 10060
  • [3] Multimodal Scale Consistency and Awareness for Monocular Self-Supervised Depth Estimation
    Chawla, Hemang
    Varma, Arnav
    Arani, Elahe
    Zonooz, Bahram
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 5140 - 5146
  • [4] Digging Into Self-Supervised Monocular Depth Estimation
    Godard, Clement
    Mac Aodha, Oisin
    Firman, Michael
    Brostow, Gabriel
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 3827 - 3837
  • [5] Self-supervised monocular depth estimation in fog
    Tao, Bo
    Hu, Jiaxin
    Jiang, Du
    Li, Gongfa
    Chen, Baojia
    Qian, Xinbo
    OPTICAL ENGINEERING, 2023, 62 (03)
  • [6] On the uncertainty of self-supervised monocular depth estimation
    Poggi, Matteo
    Aleotti, Filippo
    Tosi, Fabio
    Mattoccia, Stefano
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 3224 - 3234
  • [7] Revisiting Self-supervised Monocular Depth Estimation
    Kim, Ue-Hwan
    Lee, Gyeong-Min
    Kim, Jong-Hwan
    ROBOT INTELLIGENCE TECHNOLOGY AND APPLICATIONS 6, 2022, 429 : 336 - 350
  • [8] Self-supervised Event-based Monocular Depth Estimation using Cross-modal Consistency
    Zhu, Junyu
    Liu, Lina
    Jiang, Bofeng
    Wen, Feng
    Zhang, Hongbo
    Li, Wanlong
    Liu, Yong
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 7704 - 7710
  • [9] Self-supervised Monocular Depth Estimation Based on Semantic Assistance and Depth Temporal Consistency Constraints
    Ling, Chuanwu
    Chen, Hua
    Xu, Dayong
    Zhang, Xiaogang
    Hunan Daxue Xuebao/Journal of Hunan University Natural Sciences, 2024, 51 (08): : 1 - 12
  • [10] Self-supervised monocular depth estimation for high field of view colonoscopy cameras
    Mathew, Alwyn
    Magerand, Ludovic
    Trucco, Emanuele
    Manfredi, Luigi
    FRONTIERS IN ROBOTICS AND AI, 2023, 10