Occlusion-aware Depth Estimation Using Light-field Cameras

Cited by: 254
Authors
Wang, Ting-Chun [1 ]
Efros, Alexei A. [1 ]
Ramamoorthi, Ravi [2 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Univ Calif San Diego, La Jolla, CA 92093 USA
Keywords
ENERGY MINIMIZATION; STEREO
DOI
10.1109/ICCV.2015.398
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Consumer-level and high-end light-field cameras are now widely available. Recent work has demonstrated practical methods for passive depth estimation from light-field images. However, most previous approaches do not explicitly model occlusions, and therefore cannot capture sharp transitions around object boundaries. A common assumption is that a pixel exhibits photo-consistency when focused to its correct depth, i.e., all viewpoints converge to a single (Lambertian) point in the scene. This assumption does not hold in the presence of occlusions, making most current approaches unreliable precisely where accurate depth information is most important: at depth discontinuities. In this paper, we develop a depth estimation algorithm that treats occlusion explicitly; the method also enables identification of occlusion edges, which may be useful in other applications. We show that, although pixels at occlusions do not preserve photo-consistency in general, they are still consistent in approximately half the viewpoints. Moreover, the line separating the two view regions (correct depth vs. occluder) has the same orientation as the occlusion edge in the spatial domain. By treating these two regions separately, depth estimation can be improved. Occlusion predictions can also be computed and used for regularization. Experimental results show that our method outperforms current state-of-the-art light-field depth estimation algorithms, especially near occlusion boundaries.
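The geometric observation in the abstract lends itself to a short numerical sketch. The fragment below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes an angular patch already refocused to a candidate depth, splits the viewpoints along a line whose orientation matches the spatial occlusion edge, and scores photo-consistency (variance) in each half separately. All names (`occlusion_aware_cost`, `angular_patch`, `edge_angle`) are hypothetical.

```python
# Minimal sketch (not the authors' code) of the occlusion-aware
# photo-consistency idea: at an occlusion edge only about half the
# views remain consistent, and the dividing line in the angular
# domain has the same orientation as the spatial occlusion edge.
import numpy as np

def occlusion_aware_cost(angular_patch: np.ndarray, edge_angle: float) -> float:
    """Photo-consistency cost for one pixel at one candidate depth.

    angular_patch: (U, V) intensities, one sample per viewpoint,
                   refocused to the candidate depth.
    edge_angle:    orientation (radians) of the spatial occlusion
                   edge, e.g. from an edge detector on the central view.
    """
    u_res, v_res = angular_patch.shape
    # Angular coordinates centered on the central view.
    u, v = np.meshgrid(np.arange(u_res) - (u_res - 1) / 2,
                       np.arange(v_res) - (v_res - 1) / 2,
                       indexing="ij")
    # Sign of the distance to a line through the central view with the
    # edge's orientation; it splits the views into two candidate regions.
    side = u * np.sin(edge_angle) - v * np.cos(edge_angle) >= 0
    half_a = angular_patch[side]
    half_b = angular_patch[~side]
    # At the correct depth the unoccluded half is photo-consistent
    # (low variance), so take the better of the two halves.
    return float(min(half_a.var(), half_b.var()))

if __name__ == "__main__":
    # Toy case: a 9x9 angular patch whose left columns see a textured
    # occluder and whose right columns see the true Lambertian surface.
    rng = np.random.default_rng(0)
    patch = np.full((9, 9), 0.5)
    patch[:, :4] = rng.uniform(size=(9, 4))   # occluder views
    # With this parameterization edge_angle = 0 splits left vs. right;
    # the cost is ~0 because the unoccluded half is constant.
    print(occlusion_aware_cost(patch, 0.0))
```

In the paper's full pipeline such a per-pixel cost would be evaluated over many candidate depths, with the resulting occlusion predictions feeding the regularization mentioned in the abstract; the sketch only shows the half-view consistency test.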
Pages: 3487-3495
Number of pages: 9
Related papers
50 records in total
  • [31] Depth Estimation from Light Field Cameras
    Im, Sunghoon
    Jeon, Hae-Gon
    Ha, Hyowon
    Kweon, In So
    2015 12TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS AND AMBIENT INTELLIGENCE (URAI), 2015, : 190 - 191
  • [32] Fast Depth Estimation for Light Field Cameras
    Mishiba, Kazu
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 4232 - 4242
  • [33] Beyond Photometric Consistency: Geometry-Based Occlusion-Aware Unsupervised Light Field Disparity Estimation
    Zhou, Wenhui
    Lin, Lili
    Hong, Yongjie
    Li, Qiujian
    Shen, Xingfa
    Kuruoglu, Ercan Engin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 15660 - 15674
  • [35] Fast Depth Densification for Occlusion-aware Augmented Reality
    Holynski, Aleksander
    Kopf, Johannes
SIGGRAPH ASIA'18: SIGGRAPH ASIA 2018 TECHNICAL PAPERS, 2018
  • [36] Depth Map Estimation Using Census Transform for Light Field Cameras
    Tomioka, Takayuki
    Mishiba, Kazu
    Oyamada, Yuji
    Kondo, Katsuya
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 1641 - 1645
  • [38] Depth Map Estimation Using Census Transform for Light Field Cameras
    Tomioka, Takayuki
    Mishiba, Kazu
    Oyamada, Yuji
    Kondo, Katsuya
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2017, E100D (11) : 2711 - 2720
  • [39] Occlusion-Aware Unsupervised Learning of Depth From 4-D Light Fields
    Jin, Jing
    Hou, Junhui
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 2216 - 2228
  • [40] Depth Estimation of Semi-submerged Objects Using a Light-field Camera
    Fan, Juehui
    Yang, Yee-Hong
    2017 14TH CONFERENCE ON COMPUTER AND ROBOT VISION (CRV 2017), 2017, : 80 - 86