Deep Depth from Defocus: How Can Defocus Blur Improve 3D Estimation Using Dense Neural Networks?

Cited by: 14
Authors
Carvalho, Marcela [1 ]
Le Saux, Bertrand [1 ]
Trouve-Peloux, Pauline [1 ]
Almansa, Andres [2 ]
Champagnat, Frederic [2 ]
Affiliations
[1] Univ Paris Saclay, DTIS, ONERA, F-91123 Palaiseau, France
[2] Univ Paris 05, F-75006 Paris, France
Keywords
Depth from defocus; Domain adaptation; Depth estimation; Single-image depth prediction; BLIND DECONVOLUTION;
DOI
10.1007/978-3-030-11009-3_18
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Depth estimation is of critical interest for scene understanding and accurate 3D reconstruction. Most recent deep-learning approaches exploit the geometric structure of standard sharp images to predict depth maps. However, cameras can also produce images with defocus blur that depends on object depth and on camera settings; such cues may therefore be an important hint for learning to predict depth. In this paper, we propose a full system for single-image depth prediction in the wild using depth-from-defocus and neural networks. We carry out thorough experiments on real and simulated defocused images using a realistic model of blur variation with respect to depth. We also investigate the influence of blur on depth prediction by observing model uncertainty with a Bayesian neural network approach. From these studies, we show that out-of-focus blur greatly improves depth-prediction network performance. Furthermore, we transfer the ability learned on a synthetic, indoor dataset to real, indoor and outdoor images. For this purpose, we present a new dataset with real all-in-focus and defocused images from a DSLR camera, paired with ground-truth depth maps obtained with an active 3D sensor for indoor scenes. The proposed approach is successfully validated both on this new dataset and on standard ones such as NYUv2 and Depth-in-the-Wild. Code and the new datasets are available at https://github.com/marcelampc/d3net_depth_estimation.
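The "realistic model of blur variation with respect to depth" mentioned in the abstract corresponds to the standard thin-lens defocus model, in which blur vanishes at the focus plane and grows as objects move away from it. A minimal sketch of that relationship is below; the focal length and f-number are illustrative assumptions, not the camera settings actually used in the paper.

```python
import numpy as np

def coc_diameter_mm(depth_m, focus_m, focal_mm=25.0, f_number=2.8):
    """Thin-lens circle-of-confusion (defocus blur) diameter on the sensor,
    in mm, for objects at depth_m metres with the lens focused at focus_m
    metres. focal_mm and f_number are illustrative defaults."""
    f = focal_mm / 1000.0            # focal length in metres
    aperture = f / f_number          # aperture diameter in metres
    # c = A * f * |d - s| / (d * (s - f)): zero at the focus plane,
    # increasing as objects move nearer or farther from it.
    return 1000.0 * aperture * f * np.abs(depth_m - focus_m) / (
        depth_m * (focus_m - f))

depths = np.array([1.0, 2.0, 4.0])   # object depths in metres
blur = coc_diameter_mm(depths, focus_m=2.0)
# blur is zero at the 2 m focus plane; for equal defocus, nearer
# objects blur faster than farther ones.
```

This depth-dependent asymmetry of the blur is precisely the signal a depth-from-defocus network can exploit that is absent from all-in-focus images.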
Pages: 307-323
Page count: 17
Related Papers (50 records total)
  • [41] Sensor Simulation for Monocular Depth Estimation using Deep Neural Networks
    Nadar, Christon R.
    Kunert, Christian
    Schwandt, Tobias
    Broll, Wolfgang
    2021 INTERNATIONAL CONFERENCE ON CYBERWORLDS (CW 2021), 2021, : 9 - 16
  • [42] HOW DEEP NEURAL NETWORKS CAN IMPROVE EMOTION RECOGNITION ON VIDEO DATA
    Khorrami, Pooya
    Le Paine, Tom
    Brady, Kevin
    Dagli, Charlie
    Huang, Thomas S.
    2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 619 - 623
  • [43] 3D DEPTH ESTIMATION FROM A HOLOSCOPIC 3D IMAGE
    Aondoakaa, Akuha Solomon
    Swash, Mohammad Rafiq
    Sadka, Abdul
    2017 4TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND INTEGRATED NETWORKS (SPIN), 2017, : 320 - 324
  • [44] An evaluation methodology for 3D deep neural networks using visualization in 3D data classification
    Hyun-Tae Hwang
    Soo-Hong Lee
    Hyung Gun Chi
    Nam Kyu Kang
    Hyeon Bae Kong
    Jiaqi Lu
    Hyungseok Ohk
    Journal of Mechanical Science and Technology, 2019, 33 : 1333 - 1339
  • [46] Deformable 3D Shape Classification Using 3D Racah Moments and Deep Neural Networks
    Lakhili, Zouhir
    El Alami, Abdelmajid
    Mesbah, Abderrahim
    Berrahou, Aissam
    Qjidaa, Hassan
    SECOND INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTING IN DATA SCIENCES (ICDS2018), 2019, 148 : 12 - 20
  • [47] Vehicle Detection in Aerial Images Based on 3D Depth Maps and Deep Neural Networks
    Javadi, Saleh
    Dahl, Mattias
    Pettersson, Mats I.
    IEEE ACCESS, 2021, 9 : 8381 - 8391
  • [48] Direct 3D servoing using dense depth maps
    Teuliere, Celine
    Marchand, Eric
    2012 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2012, : 1741 - 1746
  • [49] 3D human pose and shape estimation with dense correspondence from a single depth image
    Wang, Kangkan
    Zhang, Guofeng
    Yang, Jian
    VISUAL COMPUTER, 2023, 39 (01): : 429 - 441