ACCURATE LIGHT FIELD DEPTH ESTIMATION VIA AN OCCLUSION-AWARE NETWORK

Cited by: 12
Authors
Guo, Chunle [1 ]
Jin, Jing [1 ]
Hou, Junhui [1 ]
Chen, Jie [2 ]
Affiliations
[1] City Univ Hong Kong, Hong Kong, Peoples R China
[2] Hong Kong Baptist Univ, Hong Kong, Peoples R China
Keywords
Light fields; depth estimation; deep neural network; occlusion;
DOI
10.1109/icme46284.2020.9102829
CLC classification
TP31 [Computer Software];
Subject classification codes
081202; 0835;
Abstract
Depth estimation is a fundamental problem for light-field-based applications. Although recent learning-based methods have proven effective for light field depth estimation, they still struggle in occlusion regions. In this paper, by leveraging an explicitly learned occlusion map, we propose an occlusion-aware network that is capable of estimating accurate depth maps with sharp edges. Our main idea is to separate depth estimation on non-occlusion and occlusion regions, as they exhibit different properties with respect to the light field structure, i.e., they obey and violate the angular photo-consistency constraint, respectively. To this end, our network comprises three modules: the occlusion region detection network (ORDNet), the coarse depth estimation network (CDENet), and the refined depth estimation network (RDENet). Specifically, ORDNet predicts the occlusion map as a mask; under the guidance of the resulting occlusion map, CDENet and RDENet then focus on depth estimation in non-occlusion and occlusion areas, respectively. Experimental results show that our method achieves better performance on the 4D light field benchmark, especially in occlusion regions, compared with current state-of-the-art light field depth estimation algorithms.
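The occlusion-guided split described in the abstract can be illustrated at a high level: the mask from ORDNet selects, per pixel, between the estimate for non-occluded regions and the estimate for occluded regions. The sketch below is a hypothetical NumPy illustration of this fusion idea only (the function name `fuse_depth` and all array values are invented for illustration; the paper's actual network internals are not specified here).

```python
import numpy as np

def fuse_depth(coarse_depth, refined_depth, occlusion_map):
    """Blend two depth estimates with a soft occlusion mask.

    Illustrative only: where the mask is ~0 (non-occluded, photo
    consistency holds) the CDENet-style estimate is kept; where it is
    ~1 (occluded) the RDENet-style estimate is used instead.
    """
    occlusion_map = np.clip(occlusion_map, 0.0, 1.0)
    return occlusion_map * refined_depth + (1.0 - occlusion_map) * coarse_depth

# Toy 4x4 example: constant depths and a small occluded patch.
coarse = np.full((4, 4), 2.0)                   # stand-in for CDENet output
refined = np.full((4, 4), 3.0)                  # stand-in for RDENet output
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                            # stand-in for ORDNet occlusion map

fused = fuse_depth(coarse, refined, mask)       # 2.0 outside the patch, 3.0 inside
```

This per-pixel blend is only the final composition step; the contribution of the paper lies in learning the mask and the two region-specialized estimators.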
Pages: 6