Unsupervised disparity estimation from light field using plug-and-play weighted warping loss

Cited by: 5
Authors
Iwatsuki, Taisei [1 ]
Takahashi, Keita [1 ]
Fujii, Toshiaki [1 ]
Institutions
[1] Nagoya Univ, Grad Sch Engn, Furo Cho,Chikusa Ku, Nagoya 4648603, Japan
Keywords
Light field; Disparity estimation; CNN; Unsupervised learning; STEREO; DEPTH;
DOI
10.1016/j.image.2022.116764
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Subject classification code
0808; 0809;
Abstract
We investigated disparity estimation from a light field using a convolutional neural network (CNN). Most existing methods adopt a supervised learning framework, in which the predicted disparity map is compared directly against the corresponding ground-truth disparity map during training. However, light field data accompanied by ground-truth disparity maps are scarce and rarely available for real-world scenes, and this lack of training data limits the generality of methods trained on them. To tackle this problem, we took a simple plug-and-play approach to remake a supervised method into an unsupervised (self-supervised) one: we replaced the loss function of the original method with one that does not depend on ground-truth disparity maps. More specifically, our loss function indirectly evaluates the accuracy of the disparity map through warping errors among the input light field views. We designed pixel-wise weights to properly evaluate the warping errors in the presence of occlusions, and an edge loss to encourage edge alignment between the image and the disparity map. Thanks to this unsupervised learning framework, our method can exploit more abundant training datasets (even those without ground-truth disparity maps) than the original supervised method. We evaluated our method on computer-generated scenes (the 4D Light Field Benchmark) and real-world scenes captured with Lytro Illum cameras. Our method achieved state-of-the-art performance among unsupervised methods on the benchmark, and we also demonstrated that it estimates disparity maps more accurately than the original supervised method for various real-world scenes.
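The weighted warping loss described in the abstract can be sketched as follows. This is a minimal illustrative implementation under stated assumptions, not the authors' code: the function name `warping_loss`, the horizontal-only baseline, the linear interpolation, and the uniform fallback weights are all assumptions, and the sign convention for the disparity shift depends on how the views are arranged.

```python
import numpy as np

def warping_loss(center, side, disparity, offset, weight=None):
    """Hypothetical sketch of a weighted warping loss for light field views.

    center, side : (H, W) grayscale views (center view and one side view)
    disparity    : (H, W) predicted disparity map for the center view
    offset       : horizontal baseline of the side view, in view-index units
    weight       : optional (H, W) per-pixel occlusion weights
    """
    H, W = center.shape
    # Horizontal sampling positions in the side view, shifted by disparity.
    xs = np.arange(W)[None, :] + offset * disparity
    xs = np.clip(xs, 0, W - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, W - 1)
    frac = xs - x0
    rows = np.arange(H)[:, None]
    # Linearly interpolate the side view at the shifted positions.
    warped = (1 - frac) * side[rows, x0] + frac * side[rows, x1]
    # Photometric error between the center view and the warped side view.
    err = np.abs(center - warped)
    if weight is None:
        weight = np.ones_like(err)  # uniform weights when occlusion info is absent
    return float((weight * err).sum() / weight.sum())
```

With a correct disparity map, the warped side view reproduces the center view (up to border pixels), so the loss is near zero; a wrong disparity yields a larger photometric error, which is what lets the loss supervise the network without ground-truth disparity.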
Pages: 8
Related papers
40 entries in total
  • [21] Beyond Photometric Consistency: Geometry-Based Occlusion-Aware Unsupervised Light Field Disparity Estimation
    Zhou, Wenhui
    Lin, Lili
    Hong, Yongjie
    Li, Qiujian
    Shen, Xingfa
    Kuruoglu, Ercan Engin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 15660 - 15674
  • [22] Disparity Estimation using Light Ray Pair in Stacked 3D Light Field
    Jung, Hyunmin
    Lee, Hyuk-Jae
    Rhee, Chae Eun
    2022 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2022): INTELLIGENT TECHNOLOGY IN THE POST-PANDEMIC ERA, 2022, : 435 - 438
  • [23] Adaptive matching norm based disparity estimation from light field data
    Liu, Chang
    Shi, Ligen
    Zhao, Xing
    Qiu, Jun
    SIGNAL PROCESSING, 2023, 209
  • [24] DISPARITY ESTIMATION FROM LIGHT FIELDS USING SHEARED EPI ANALYSIS
    Suzuki, Takahiro
    Takahashi, Keita
    Fujii, Toshiaki
    2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 1444 - 1448
  • [25] 4D Light Field Disparity Map estimation using Krawtchouk Polynomials
    Lourenco, Rui
    Rivero-Castillo, Daniel
    Thomaz, Lucas A.
    Assuncao, Pedro A. A.
    Tavora, Luis M. N.
    de Faria, Sergio M. M.
    2020 TENTH INTERNATIONAL CONFERENCE ON IMAGE PROCESSING THEORY, TOOLS AND APPLICATIONS (IPTA), 2020,
  • [26] Disparity Estimation for Focused Light Field Camera Using Cost Aggregation in Micro-Images
    Ding, Zhiyu
    Liu, Qian
    Wang, Qing
    2017 INTERNATIONAL CONFERENCE ON VIRTUAL REALITY AND VISUALIZATION (ICVRV 2017), 2017, : 366 - 371
  • [27] The Accurate Estimation of Disparity Maps from Cross-Scale Reference-Based Light Field
    Zhao, Mandan
    Hao, Xiangyang
    Wu, Gaochang
    2018 IEEE 3RD INTERNATIONAL CONFERENCE ON IMAGE, VISION AND COMPUTING (ICIVC), 2018, : 966 - 971
  • [28] Real-time 3D reconstruction from single-photon lidar data using plug-and-play point cloud denoisers
    Tachella, Julian
    Altmann, Yoann
    Mellado, Nicolas
    McCarthy, Aongus
    Tobin, Rachael
    Buller, Gerald S.
    Tourneret, Jean-Yves
    McLaughlin, Stephen
    NATURE COMMUNICATIONS, 2019, 10 (1)
  • [30] Fast and Robust Disparity Estimation from Noisy Light Fields Using 1-D Slanted Filters
    Houben, Gou
    Fujita, Shu
    Takahashi, Keita
    Fujii, Toshiaki
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2019, E102D (11) : 2101 - 2109