Occlusion-aware Depth Estimation Using Light-field Cameras

Cited by: 254
Authors
Wang, Ting-Chun [1 ]
Efros, Alexei A. [1 ]
Ramamoorthi, Ravi [2 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Univ Calif San Diego, La Jolla, CA 92093 USA
Keywords
ENERGY MINIMIZATION; STEREO
DOI
10.1109/ICCV.2015.398
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Consumer-level and high-end light-field cameras are now widely available. Recent work has demonstrated practical methods for passive depth estimation from light-field images. However, most previous approaches do not explicitly model occlusions, and therefore cannot capture sharp transitions around object boundaries. A common assumption is that a pixel exhibits photo-consistency when focused to its correct depth, i.e., all viewpoints converge to a single (Lambertian) point in the scene. This assumption does not hold in the presence of occlusions, making most current approaches unreliable precisely where accurate depth information is most important - at depth discontinuities. In this paper, we develop a depth estimation algorithm that treats occlusion explicitly; the method also enables identification of occlusion edges, which may be useful in other applications. We show that, although pixels at occlusions do not preserve photo-consistency in general, they are still consistent in approximately half the viewpoints. Moreover, the line separating the two view regions (correct depth vs. occluder) has the same orientation as the occlusion edge has in the spatial domain. By treating these two regions separately, depth estimation can be improved. Occlusion predictions can also be computed and used for regularization. Experimental results show that our method outperforms current state-of-the-art light-field depth estimation algorithms, especially near occlusion boundaries.
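The abstract's key observation, that views at an occlusion boundary remain photo-consistent in roughly half of the angular patch, with the two halves separated by a line sharing the spatial occlusion edge's orientation, can be sketched as follows. This is a minimal toy illustration with assumed names (`split_halves`, `half_patch_cost`) and a simple variance-based consistency cost; it is not the paper's actual implementation.

```python
import numpy as np

def split_halves(patch_size, theta):
    """Split the view coordinates of a square angular patch into two
    halves by a line through the patch center with orientation theta
    (radians), matching the spatial occlusion-edge orientation."""
    c = (patch_size - 1) / 2.0
    ys, xs = np.mgrid[0:patch_size, 0:patch_size]
    # Signed distance of each viewpoint from the dividing line.
    side = (xs - c) * np.sin(theta) - (ys - c) * np.cos(theta)
    return side >= 0, side < 0

def half_patch_cost(patch, theta):
    """Photo-consistency cost using only the more consistent half.

    At the correct depth, the half of the viewpoints not blocked by
    the occluder still converges to a single (Lambertian) scene
    point, so its intensity variance stays low even at occlusion
    boundaries, where full-patch variance would be high.
    """
    h1, h2 = split_halves(patch.shape[0], theta)
    return min(patch[h1].var(), patch[h2].var())
```

With a synthetic 5x5 angular patch whose left views see the occluder (value 1.0) and right views see the background (value 5.0), splitting with a vertical line (theta = pi/2) yields zero cost on each half, while the full-patch variance, which a non-occlusion-aware method would use, is large.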
Pages: 3487-3495
Page count: 9
Related papers
50 records in total
  • [41] GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras
    Yuan, Ye
    Iqbal, Umar
    Molchanov, Pavlo
    Kitani, Kris
    Kautz, Jan
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 11028 - 11039
  • [42] Learning Occlusion-aware Coarse-to-Fine Depth Map for Self-supervised Monocular Depth Estimation
    Zhou, Zhengming
    Dong, Qiulei
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 6386 - 6395
  • [43] Calibrating Light-Field Cameras Using Plenoptic Disc Features
    O'Brien, Sean G. P.
    Trumpf, Jochen
    Ila, Viorela
    Mahony, Rob
    2018 INTERNATIONAL CONFERENCE ON 3D VISION (3DV), 2018, : 286 - 294
  • [44] Light-field depth estimation considering plenoptic imaging distortion
    Cai, Zewei
    Liu, Xiaoli
    Pedrini, Giancarlo
    Osten, Wolfgang
    Peng, Xiang
    OPTICS EXPRESS, 2020, 28 (03): : 4156 - 4168
  • [45] Exploiting Sequence Analysis for Accurate Light-Field Depth Estimation
    Han, Lei
    Zheng, Shengnan
    Shi, Zhan
    Xia, Mingliang
    IEEE ACCESS, 2023, 11 : 74657 - 74670
  • [46] Learning occlusion-aware view synthesis for light fields
    Navarro, J.
    Sabater, N.
    PATTERN ANALYSIS AND APPLICATIONS, 2021, 24 (03) : 1319 - 1334
  • [47] Depth Estimation for Light-Field Images Using Stereo Matching and Convolutional Neural Networks
    Rogge, Segolene
    Schiopu, Ionut
    Munteanu, Adrian
    SENSORS, 2020, 20 (21) : 1 - 20
  • [49] Occlusion-Aware Hand Pose Estimation Using Hierarchical Mixture Density Network
    Ye, Qi
    Kim, Tae-Kyun
    COMPUTER VISION - ECCV 2018, PT X, 2018, 11214 : 817 - 834
  • [50] SVBRDF-Invariant Shape and Reflectance Estimation from Light-Field Cameras
    Wang, Ting-Chun
    Chandraker, Manmohan
    Efros, Alexei A.
    Ramamoorthi, Ravi
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 5451 - 5459