Self-Supervised Monocular Depth Estimation with Extensive Pretraining

Cited by: 0
Author
Choi, Hyukdoo [1,2]
Affiliations
[1] Department of Electronics and Information Engineering, Soonchunhyang University, Asan, Republic of Korea
[2] Department of Electronic Materials and Devices Engineering, Soonchunhyang University, Asan 31538, Republic of Korea
Funding
National Research Foundation of Singapore
Keywords
Unsupervised learning; Stereo image processing; Supervised learning; Convolutional neural networks; Optical radar
DOI
Not available
Abstract
Although depth estimation is a key technology for three-dimensional sensing applications involving motion, active sensors such as LiDAR and depth cameras tend to be expensive and bulky. Here, we explore the potential of monocular depth estimation (MDE) based on a self-supervised approach. MDE is a promising technology, but supervised training requires accurate ground-truth depth data, which is difficult to obtain at scale. Recent studies have enabled self-supervised training of MDE models using only monocular image sequences and image-reconstruction errors. We pretrained our networks on multiple datasets comprising both monocular and stereo image sequences. The main challenges for self-supervised MDE are occlusions and dynamic objects. To handle these problems, we proposed two novel loss functions, min-over-all and min-with-flow, both built on the per-pixel minimum reprojection error of Monodepth2 and extended to stereo images and optical flow. With extensive pretraining and the novel losses, our model outperformed existing unsupervised approaches in quantitative depth accuracy and in distinguishing small objects from the background, as evaluated on the KITTI 2015 benchmark. © 2013 IEEE.
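The min-over-all loss described in the abstract builds on Monodepth2's per-pixel minimum reprojection error, enlarging the set of candidate reprojections from temporal neighbors alone to stereo pairs and optical-flow warps. The following PyTorch sketch illustrates that core mechanism under common Monodepth2 conventions (SSIM + L1 photometric error with alpha = 0.85, identity-reprojection automasking); the function names and constants are illustrative assumptions, not the paper's released code.

```python
# A minimal sketch of Monodepth2's per-pixel minimum reprojection loss,
# extended to an arbitrary set of warped source views (temporal neighbors,
# the stereo partner, or a flow-based warp), in the spirit of the
# min-over-all loss. Names and constants are illustrative assumptions.

import torch
import torch.nn.functional as F


def photometric_error(pred, target, alpha=0.85):
    """Per-pixel photometric error: alpha*(1-SSIM)/2 + (1-alpha)*L1.

    pred, target: (B, 3, H, W) images in [0, 1].
    Returns a (B, 1, H, W) error map.
    """
    l1 = (pred - target).abs().mean(1, keepdim=True)

    # Simplified SSIM over 3x3 windows, as in Monodepth2.
    mu_x = F.avg_pool2d(pred, 3, 1, 1)
    mu_y = F.avg_pool2d(target, 3, 1, 1)
    sigma_x = F.avg_pool2d(pred * pred, 3, 1, 1) - mu_x * mu_x
    sigma_y = F.avg_pool2d(target * target, 3, 1, 1) - mu_y * mu_y
    sigma_xy = F.avg_pool2d(pred * target, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x * mu_x + mu_y * mu_y + c1) * (sigma_x + sigma_y + c2))
    dssim = torch.clamp((1 - ssim) / 2, 0, 1).mean(1, keepdim=True)

    return alpha * dssim + (1 - alpha) * l1


def min_over_all_loss(target, warped_sources, raw_sources):
    """Per-pixel minimum over all candidate reprojection errors.

    target:         the frame being reconstructed, (B, 3, H, W).
    warped_sources: source views warped into the target frame via the
                    predicted depth and pose, the stereo baseline, or
                    optical flow.
    raw_sources:    the same views unwarped; their errors serve as the
                    identity-reprojection automask from Monodepth2, so
                    pixels already explained without warping (static
                    scenes, objects moving with the camera) drop out.
    """
    errors = [photometric_error(w, target) for w in warped_sources]
    # Tiny noise breaks ties with the identity errors (Monodepth2 trick).
    identity = [photometric_error(s, target)
                + torch.randn_like(target[:, :1]) * 1e-5
                for s in raw_sources]

    stacked = torch.cat(errors + identity, dim=1)   # (B, N, H, W)
    min_error, _ = stacked.min(dim=1)               # per-pixel minimum
    return min_error.mean()
```

Because the minimum is taken per pixel across every candidate, an occluded or dynamic region is penalized only by whichever view best explains it (or by the flow warp, in the min-with-flow variant), which is how these losses suppress occlusion and moving-object artifacts.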
Pages: 157236 - 157246