The Constraints between Edge Depth and Uncertainty for Monocular Depth Estimation

Cited by: 1
Authors
Wu, Shouying [1 ]
Li, Wei [2 ]
Liang, Binbin [2 ]
Huang, Guoxin [1 ]
Affiliations
[1] Sichuan Univ, Natl Key Lab Fundamental Sci Synthet Vis, Chengdu 610065, Peoples R China
[2] Sichuan Univ, Sch Aeronut & Astronaut, Chengdu 610065, Peoples R China
Keywords
monocular depth estimation; self-supervised method; uncertainty estimation; vision
DOI
10.3390/electronics10243153
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Subject Classification Code
0812
Abstract
The self-supervised paradigm has become an important branch of monocular depth estimation in computer vision. However, the depth errors caused by depth pulling or occlusion at object edges remain unsolved. Grayscale discontinuities at object edges lead to relatively high depth uncertainty for the pixels in these regions. We improve the predicted geometry at edges by taking uncertainty into account in the depth-estimation task. To this end, we explore how uncertainty affects this task and propose a new self-supervised monocular depth estimation technique based on multi-scale uncertainty. In addition, we introduce a teacher-student architecture into the model and investigate the impact of different teacher networks on the depth and uncertainty results. We evaluate our method in detail on the standard KITTI dataset. Compared with the Monodepth2 baseline, the accuracy increases from 87.7% to 88.2%, the AbsRel error decreases from 0.115 to 0.110, the SqRel error decreases from 0.903 to 0.822, and the RMSE decreases from 4.863 to 4.686. Our approach alleviates texture copying and inaccurate object boundaries, producing sharper and smoother depth maps.
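The figures quoted above are the standard KITTI depth-evaluation metrics (delta < 1.25 accuracy, AbsRel, SqRel, RMSE). A minimal Python sketch of how they are conventionally computed, assuming gt and pred are matched NumPy arrays of valid depths (function and variable names are illustrative, not taken from the paper):

    import numpy as np

    def kitti_depth_metrics(gt, pred):
        # gt, pred: 1-D arrays of valid ground-truth and predicted depths
        # (metres), already masked and scale-aligned per the usual protocol.
        ratio = np.maximum(gt / pred, pred / gt)
        a1 = (ratio < 1.25).mean()                  # "accuracy" (delta < 1.25)
        abs_rel = np.mean(np.abs(gt - pred) / gt)   # AbsRel
        sq_rel = np.mean((gt - pred) ** 2 / gt)     # SqRel
        rmse = np.sqrt(np.mean((gt - pred) ** 2))   # RMSE
        return {"a1": a1, "abs_rel": abs_rel, "sq_rel": sq_rel, "rmse": rmse}

The abstract does not spell out the loss; one common way to fold a predicted per-pixel uncertainty into a self-supervised photometric objective (heteroscedastic weighting in the style of Kendall and Gal, 2017, and not necessarily the authors' exact multi-scale formulation) is:

    import torch

    def uncertainty_weighted_loss(residual, log_sigma):
        # residual: per-pixel photometric reprojection error, shape (B, 1, H, W).
        # log_sigma: hypothetical per-pixel log-uncertainty map predicted by
        # the network. High-uncertainty pixels (e.g. at object edges, where
        # grayscale discontinuities make matching unreliable) are down-weighted,
        # while the additive log_sigma term penalizes predicting large
        # uncertainty everywhere.
        return (residual * torch.exp(-log_sigma) + log_sigma).mean()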
Pages: 15
Related Papers (50 in total)
  • [1] Monocular depth estimation with enhanced edge
    Wang Q.
    Cheng K.
    Liu Z.
    Huazhong Keji Daxue Xuebao (Ziran Kexue Ban)/Journal of Huazhong University of Science and Technology (Natural Science Edition), 2022, 50(3): 36-42
  • [2] Uncertainty Estimation for Efficient Monocular Depth Perception
    Du, Hao
    Cheng, Guoan
    Matsune, Ai
    Zhu, Qiang
    Zhan, Shu
    2022 ASIA CONFERENCE ON ALGORITHMS, COMPUTING AND MACHINE LEARNING (CACML 2022), 2022: 804-808
  • [3] Lightweight Monocular Depth Estimation on Edge Devices
    Liu, Siping
    Yang, Laurence Tianruo
    Tu, Xiaohan
    Li, Renfa
    Xu, Cheng
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9(17): 16168-16180
  • [4] Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation
    Xiang, Mochu
    Zhang, Jing
    Barnes, Nick
    Dai, Yuchao
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34(7): 5716-5727
  • [5] On the uncertainty of self-supervised monocular depth estimation
    Poggi, Matteo
    Aleotti, Filippo
    Tosi, Fabio
    Mattoccia, Stefano
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020: 3224-3234
  • [6] Uncertainty from Motion for DNN Monocular Depth Estimation
    Sudhakar, Soumya
    Sze, Vivienne
    Karaman, Sertac
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022: 8673-8679
  • [7] Gradient-Based Uncertainty for Monocular Depth Estimation
    Hornauer, Julia
    Belagiannis, Vasileios
    COMPUTER VISION, ECCV 2022, PT XX, 2022, 13680: 613-630
  • [8] Lightweight Monocular Depth Estimation with an Edge Guided Network
    Dong, Xingshuai
    Garratt, Matthew A.
    Anavatti, Sreenatha G.
    Abbass, Hussein A.
    Dong, Junyu
    2022 17TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV), 2022: 204-210
  • [9] Unsupervised monocular depth estimation based on edge enhancement
    Qu Y.
    Chen Y.
    Xi Tong Gong Cheng Yu Dian Zi Ji Shu/Systems Engineering and Electronics, 2024, 46(1): 71-79
  • [10] Sparse depth densification for monocular depth estimation
    Zhen Liang
    Tiyu Fang
    Yanzhu Hu
    Yingjian Wang
    Multimedia Tools and Applications, 2024, 83: 14821-14838