Gaussian-Hermite moment-based depth estimation from single still image for stereo vision

Cited: 2
Authors
Haque, Samiul [1 ,2 ,4 ]
Rahman, S. M. Mahbubur [1 ]
Hatzinakos, Dimitrios [3 ]
Affiliations
[1] Bangladesh Univ Engn & Technol, Dept Elect & Elect Engn, Dhaka 1205, Bangladesh
[2] North Carolina State Univ, Dept Elect & Comp Engn, Raleigh, NC 27606 USA
[3] Univ Toronto, Dept Elect & Comp Engn, Toronto, ON M5S 2E4, Canada
[4] Bangladesh Univ Engn & Technol, Dhaka, Bangladesh
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Anaglyph image; Depth estimation; Focus cue; Gaussian-Hermite moments; Laplacian matting; 2D-TO-3D CONVERSION; PARALLEL FRAMEWORK; VIDEO; 3D; 2D; MOTION; HEVC; EXTRACTION; RECONSTRUCTION; GENERATION;
DOI
10.1016/j.jvcir.2016.10.008
Chinese Library Classification (CLC)
TP [Automation and computer technology];
Subject classification code
0812;
Abstract
Depth information of objects plays a significant role in image-based rendering. Traditional depth estimation techniques use different visual cues, including the disparity, motion, geometry, and defocus of objects. This paper presents a novel approach to focus cue-based depth estimation for still images using the Gaussian-Hermite moments (GHMs) of local neighboring pixels. The GHMs are chosen for their superior reconstruction ability and their invariance to intensity and geometric distortions of objects as compared to other moments. Since the depths of local neighboring pixels are significantly correlated, Laplacian matting is employed to obtain the final depth map from the moment-based focus map. Experiments are conducted on images of indoor and outdoor scenes containing objects with varying resolution, edge, occlusion, and blur content. Experimental results reveal that the depth estimated from GHMs yields anaglyph images with better stereo quality than that provided by existing methods using traditional visual cues. (C) 2016 Elsevier Inc. All rights reserved.
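The moment-based focus map described in the abstract can be sketched as follows. This is a minimal NumPy illustration, assuming orthonormal 1-D Gaussian-Hermite basis functions and the energy of low-order moments as the per-pixel focus measure; the function names (`gh_basis`, `ghm_focus_map`), the window size, σ, and the choice of moment orders are illustrative assumptions, not the paper's exact formulation (the Laplacian matting step is omitted).

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def gh_basis(n, x, sigma):
    """Orthonormal 1-D Gaussian-Hermite basis of order n, sampled at x."""
    t = x / sigma
    c = np.zeros(n + 1)
    c[n] = 1.0  # select the order-n (physicists') Hermite polynomial
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n)
                           * math.sqrt(math.pi) * sigma)
    return norm * np.exp(-t**2 / 2.0) * hermval(t, c)

def correlate2d(img, kern):
    """Dense 2-D correlation with edge padding (pure NumPy)."""
    half = kern.shape[0] // 2
    pad = np.pad(img, half, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(pad, kern.shape)
    return np.einsum('ijkl,kl->ij', windows, kern)

def ghm_focus_map(img, max_order=2, win=7, sigma=2.0):
    """Per-pixel focus measure: energy of the GHMs of each win x win
    neighborhood over orders (p, q) with 1 <= p + q <= max_order."""
    x = np.arange(win, dtype=float) - win // 2
    focus = np.zeros(img.shape, dtype=float)
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            if p == 0 and q == 0:
                continue  # the (0, 0) moment is just smoothed brightness
            kern = np.outer(gh_basis(q, x, sigma), gh_basis(p, x, sigma))
            focus += correlate2d(img.astype(float), kern) ** 2
    return focus
```

In-focus regions have sharp local structure, so their projections onto the higher-order (derivative-like) Gaussian-Hermite bases carry more energy than blurred regions; the resulting map is what the paper then refines with Laplacian matting.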
Pages: 281-295
Page count: 15
Related papers (50 in total; first 10 shown)
  • [1] Enan, Sadman Sakib; Rahman, S. M. Mahbubur; Haque, Samiul; Howlader, Tamanna; Hatzinakos, Dimitrios. Object Labeling in 3D from Multi-view Scenes Using Gaussian-Hermite Moment-Based Depth Map. Proceedings of 3rd International Conference on Computer Vision and Image Processing, CVIP 2018, Vol. 2, 2020, 1024: 87-99.
  • [2] Zhang, Chaoxin; Xi, Ping. Analysis of Gaussian-Hermite Moment Invariants on Image Geometric Transformation. Computer and Information Technology, 2014, 519-520: 557-561.
  • [3] Zhang, Yuye; Yang, Bo. Mosaic of Printed Circuit Board Image Based on Gaussian-Hermite Moment Invariants. Wuhan University Journal of Natural Sciences, 2017, 22(05): 380-386.
  • [4] Wang, L.; Suo, H.; Dai, M. Fingerprint image segmentation based on Gaussian-Hermite moments. Advanced Data Mining and Applications, Proceedings, 2005, 3584: 446-454.
  • [5] Li, Gengxiang; Yang, Bo; Dai, Mo. Coherency estimation based on spectrum Gaussian-Hermite moments. International Conference on Image Processing and Pattern Recognition in Industrial Engineering, 2010, 7820.
  • [6] Kang, Tae-Koo; Zhang, Huazhen; Kim, Dong W.; Park, Gwi-Tae. Enhanced SIFT Descriptor Based on Modified Discrete Gaussian-Hermite Moment. ETRI Journal, 2012, 34(04): 572-582.
  • [7] Kang, Tae-Koo; Choi, In-Hwan; Lim, Myo-Taeg. MDGHM-SURF: A robust local image descriptor based on modified discrete Gaussian-Hermite moment. Pattern Recognition, 2015, 48(03): 670-684.
  • [8] Miao, Ma; Xue, Zheng; Zhao, Pei. A Blind Estimation for Speckle Noise based on Gaussian-Hermite Moments. 2016 International Symposium on Computer, Consumer and Control (IS3C), 2016: 829-832.
  • [9] Liu, Min; Chen, Xinzhong; Yang, Qingshan. Estimation of Peak Factor of Non-Gaussian Wind Pressures by Improved Moment-Based Hermite Model. Journal of Engineering Mechanics, 2017, 143(07).
  • [10] Lu, Yan-Feng; Zhang, Hua-Zhen; Kang, Tae-Koo; Choi, In-Hwan; Lim, Myo-Taeg. Extended biologically inspired model for object recognition based on oriented Gaussian-Hermite moment. Neurocomputing, 2014, 139: 189-201.