Anti-occlusion light field depth estimation guided by Gini cost volume

Cited by: 0
Authors
Zhang X.-D. [1]
Dong Y.-L. [1]
Shi M.-D. [1]
Institution
[1] School of Computer and Information, Hefei University of Technology, Hefei
Source
Kongzhi yu Juece/Control and Decision | 2020 / Vol. 35 / No. 08
Keywords
Central view; Depth estimation; Gini cost volume; Joint guided filter; Light field; Occlusion; Refocusing
DOI
10.13195/j.kzyjc.2018.1718
Abstract
The light field camera can record multi-view information of a three-dimensional scene in a single shot, which gives it a unique advantage in depth estimation. However, the accuracy of the depth information extracted by existing depth estimation methods drops significantly when the scene contains complex occlusion. To address this problem, an anti-occlusion light field depth estimation method based on the Gini cost volume is proposed. First, refocused images are obtained with a light field refocusing algorithm. Then, a Gini cost volume is constructed between the central view and the other views, and the initial depth map is computed according to the minimum-cost principle. Finally, the initial depth map is combined with the color image for joint guided filtering, yielding a high-precision depth map. Experimental results show that the proposed method is more robust on complex scenes and obtains better depth estimates at lower computational complexity. Compared with other state-of-the-art methods, the depth maps obtained by the proposed method are more accurate and have clearer edges, and the MSE100 metric on the HCI dataset is reduced by 7.8% on average. © 2020, Editorial Office of Control and Decision. All rights reserved.
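The following is a minimal sketch of the pipeline described in the abstract, not the authors' implementation. It assumes the light field is stored as a 4-D grayscale array of views (u, v, y, x), uses standard shift-and-add refocusing via scipy.ndimage.shift, and takes the Gini coefficient of the angular samples at each pixel as the cost for each depth (disparity) label; the function names gini_cost_volume and initial_depth and the disparity parameterization are illustrative assumptions. The final refinement could then be done, for example, with a joint guided filter such as cv2.ximgproc.guidedFilter from opencv-contrib, using the central color view as the guide image.

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def gini_cost_volume(lf, disparities):
        """lf: (U, V, H, W) grayscale light field in [0, 1].
        Returns a (D, H, W) cost volume, one slice per disparity label."""
        U, V, H, W = lf.shape
        uc, vc = U // 2, V // 2                   # index of the central view
        n = U * V
        cost = np.empty((len(disparities), H, W))
        for k, d in enumerate(disparities):
            # Shear every view so that points at disparity d align with the central view.
            aligned = np.stack([
                nd_shift(lf[u, v], ((u - uc) * d, (v - vc) * d), order=1, mode='nearest')
                for u in range(U) for v in range(V)
            ])                                    # (n, H, W) angular samples per pixel
            s = np.sort(aligned, axis=0)          # ascending samples per pixel
            cum = np.cumsum(s, axis=0)
            # Gini coefficient per pixel: low dispersion across views -> low cost,
            # so the correct depth (where views agree) minimizes the cost.
            cost[k] = (n + 1 - 2.0 * cum.sum(axis=0) / (cum[-1] + 1e-8)) / n
        return cost

    def initial_depth(cost, disparities):
        """Winner-take-all: pick the disparity with the minimum Gini cost at each pixel."""
        return np.asarray(disparities)[np.argmin(cost, axis=0)]

For a 9x9-view light field, a call such as cost = gini_cost_volume(lf, np.linspace(-2, 2, 75)) followed by depth = initial_depth(cost, np.linspace(-2, 2, 75)) yields the coarse depth map that would then be passed to the joint guided filtering step.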
Pages: 1849-1858
Page count: 9