On Robust Cross-view Consistency in Self-supervised Monocular Depth Estimation

Cited by: 1
Authors
Zhao, Haimei [1 ]
Zhang, Jing [1 ]
Chen, Zhuo [2 ]
Yuan, Bo [3 ]
Tao, Dacheng [1 ]
Affiliations
[1] Univ Sydney, Sch Comp Sci, Sydney, NSW 2008, Australia
[2] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen 518055, Peoples R China
[3] Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane 4072, Australia
Funding
Australian Research Council;
Keywords
3D vision; depth estimation; cross-view consistency; self-supervised learning; monocular perception;
DOI
10.1007/s11633-023-1474-0
Chinese Library Classification (CLC) code
TP [Automation and Computer Technology];
Discipline classification code
0812;
Abstract
Remarkable progress has been made in self-supervised monocular depth estimation (SS-MDE) by exploring cross-view consistency, e.g., photometric consistency and 3D point cloud consistency. However, these consistency measures are vulnerable to illumination variance, occlusions, texture-less regions, and moving objects, making them insufficiently robust for diverse scenes. To address this challenge, we study two kinds of robust cross-view consistency in this paper. First, the spatial offset field between adjacent frames is obtained by reconstructing the reference frame from its neighbors via deformable alignment, and this field is used to align the temporal depth features via a depth feature alignment (DFA) loss. Second, the 3D point clouds of each reference frame and its nearby frames are computed and transformed into voxel space, where the point density in each voxel is calculated and aligned via a voxel density alignment (VDA) loss. In this way, we exploit the temporal coherence in both depth feature space and 3D voxel space for SS-MDE, shifting the "point-to-point" alignment paradigm to a "region-to-region" one. Compared with the photometric consistency loss and the rigid point cloud alignment loss, the proposed DFA and VDA losses are more robust owing to the strong representation power of deep features and the high tolerance of voxel density to the aforementioned challenges. Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques. Extensive ablation studies and analyses validate the effectiveness of the proposed losses, especially in challenging scenes. The code and models are available at https://github.com/sunnyHelen/RCVC-depth.
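The voxel density alignment idea lends itself to a compact illustration. Below is a minimal, hypothetical PyTorch sketch of a VDA-style loss as described in the abstract: depth maps are back-projected to 3D point clouds, the source cloud is rigidly transformed into the reference frame, both clouds are histogrammed into a shared voxel grid, and the per-voxel densities are compared. All function names (`backproject`, `voxel_density`, `vda_loss`) and grid parameters are illustrative assumptions, not the authors' implementation; in particular, the hard binning shown here is not differentiable with respect to depth, so a trainable version would need a soft (e.g., kernel-based) voxel assignment.

```python
# Minimal sketch of a voxel density alignment (VDA) style loss, assuming
# predicted depth maps, a shared camera intrinsic matrix K, and a known
# relative pose. Names and parameters are illustrative, not the paper's code.
import torch

def backproject(depth: torch.Tensor, K_inv: torch.Tensor) -> torch.Tensor:
    """Lift a depth map (B, 1, H, W) to camera-space points (B, H*W, 3)."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=depth.dtype, device=depth.device),
        torch.arange(w, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(-1, 3)
    rays = (K_inv @ pix.T).T                             # ray through each pixel
    return depth.reshape(b, -1, 1) * rays.unsqueeze(0)   # scale rays by depth

def voxel_density(points, grid_min, voxel_size, dims):
    """Count points (B, N, 3) per voxel of a dims[0] x dims[1] x dims[2] grid.

    Hard binning is used for clarity; it is NOT differentiable w.r.t. depth,
    so training would require a soft assignment instead.
    """
    idx = ((points - grid_min) / voxel_size).long()                # (B, N, 3)
    bounds = torch.tensor(dims, device=points.device)
    valid = ((idx >= 0) & (idx < bounds)).all(dim=-1)              # (B, N)
    flat = (idx[..., 0] * dims[1] + idx[..., 1]) * dims[2] + idx[..., 2]
    dens = points.new_zeros(points.shape[0], dims[0] * dims[1] * dims[2])
    for b in range(points.shape[0]):
        sel = flat[b][valid[b]]
        dens[b].scatter_add_(0, sel, torch.ones_like(sel, dtype=dens.dtype))
    return dens

def vda_loss(depth_ref, depth_src, K_inv, T_src2ref,
             grid_min, voxel_size, dims):
    """L1 discrepancy between voxel densities of two aligned point clouds."""
    pts_ref = backproject(depth_ref, K_inv)
    pts_src = backproject(depth_src, K_inv)
    R, t = T_src2ref[:3, :3], T_src2ref[:3, 3]   # rigid transform to ref frame
    pts_src = pts_src @ R.T + t
    d_ref = voxel_density(pts_ref, grid_min, voxel_size, dims)
    d_src = voxel_density(pts_src, grid_min, voxel_size, dims)
    return (d_ref - d_src).abs().mean()
```

A usage sketch with hypothetical grid parameters: `vda_loss(d_ref, d_src, K.inverse(), T, grid_min=torch.tensor([-40.0, -3.0, 0.0]), voxel_size=0.5, dims=(160, 16, 160))`. Comparing aggregate densities per voxel, rather than matching individual points as in rigid point cloud alignment, is what gives the loss its "region-to-region" character and its tolerance to outlier points from occlusions or moving objects.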
Pages: 495-513
Page count: 19
Related Papers
50 records in total
  • [41] MonoViT: Self-Supervised Monocular Depth Estimation with a Vision Transformer. Zhao, Chaoqiang; Zhang, Youmin; Poggi, Matteo; Tosi, Fabio; Guo, Xianda; Zhu, Zheng; Huang, Guan; Tang, Yang; Mattoccia, Stefano. 2022 International Conference on 3D Vision (3DV), 2022: 668-678.
  • [42] Self-Supervised Deep Monocular Depth Estimation With Ambiguity Boosting. Bello, Juan Luis Gonzalez; Kim, Munchurl. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(12): 9131-9149.
  • [43] Self-Supervised Monocular Depth Estimation Based on Channel Attention. Tao, Bo; Chen, Xinbo; Tong, Xiliang; Jiang, Du; Chen, Baojia. Photonics, 2022, 9(6).
  • [44] Self-Supervised Human Depth Estimation from Monocular Videos. Tan, Feitong; Zhu, Hao; Cui, Zhaopeng; Zhu, Siyu; Pollefeys, Marc; Tan, Ping. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 647-656.
  • [45] Self-Supervised Monocular Depth Estimation with Multi-constraints. Yang, Xinpeng; Zhang, Sen; Zhao, Baoyong. Proceedings of the 40th Chinese Control Conference (CCC), 2021: 8422-8427.
  • [46] Deep Digging into the Generalization of Self-Supervised Monocular Depth Estimation. Bae, Jinwoo; Moon, Sungho; Im, Sunghoon. Thirty-Seventh AAAI Conference on Artificial Intelligence, Vol. 37, No. 1, 2023: 187-196.
  • [47] Constant Velocity Constraints for Self-Supervised Monocular Depth Estimation. Zhou, Hang; Greenwood, David; Taylor, Sarah; Gong, Han. CVMP 2020: The 17th ACM SIGGRAPH European Conference on Visual Media Production, 2020.
  • [48] A Lightweight Self-Supervised Training Framework for Monocular Depth Estimation. Heydrich, Tim; Yang, Yimin; Du, Shan. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 2265-2269.
  • [49] Excavating the Potential Capacity of Self-Supervised Monocular Depth Estimation. Peng, Rui; Wang, Ronggang; Lai, Yawen; Tang, Luyang; Cai, Yangang. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021: 15540-15549.
  • [50] Transferring Knowledge from Monocular Completion for Self-Supervised Monocular Depth Estimation. Sun, Lin; Li, Yi; Liu, Bingzheng; Xu, Liying; Zhang, Zhe; Zhu, Jie. Multimedia Tools and Applications, 2022, 81(29): 42485-42495.