DEPTH ESTIMATION FROM FOCUS AND DISPARITY

Cited by: 0
Authors
Acharyya, Arnav [1 ]
Hudson, Dustin [1 ]
Chen, Ka Wai [1 ]
Feng, Tianjia [1 ]
Kan, Chih-Yin [1 ]
Truong Nguyen [1 ]
Affiliations
[1] Univ Calif San Diego, Dept Elect & Comp Engn, La Jolla, CA 92093 USA
Keywords
Stereoscopy; Focus; Defocus; Stereogram; EFFICIENT STEREO;
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808; 0809
Abstract
This paper explores how focusing and defocusing can be combined with stereoscopy to improve both the accuracy and the speed of depth estimation. The proposed method for combining focus information with stereo is inspired by how the human brain perceives depth using both cues. The method first estimates depth from focus using a long-focal-length camera. Next, disparity is estimated from the stereo images by searching for matching features only in a small neighborhood of the disparity predicted from focus (derived from the depth-from-focus map). Matching is faster because the search range is restricted to this small neighborhood, and more accurate because searching over a small, high-confidence range rules out false matches, especially when multiple regions share similar features. Depth from focus can be coarsely approximated by depth from defocus using multiple cameras focused at complementary distances, so the proposed method also works with depth from defocus.
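As a rough illustration of the matching step the abstract describes, the sketch below (not the authors' implementation; all function names and parameters are illustrative) restricts a sum-of-absolute-differences block search to a small window around a prior disparity map, such as one obtained from depth from focus via the pinhole stereo relation d = fB/Z:

```python
import numpy as np

def disparity_from_depth(depth, focal_px, baseline):
    # Pinhole stereo relation d = f * B / Z; turns a depth-from-focus
    # map into an expected disparity map (illustrative helper).
    return focal_px * baseline / depth

def disparity_search(left, right, prior_disp, radius=2, block=3):
    # Block matching (SAD cost) restricted to +/- radius pixels around
    # the prior disparity at each location, instead of a full sweep.
    h, w = left.shape
    half = block // 2
    pad_l = np.pad(left, half, mode="edge")
    pad_r = np.pad(right, half, mode="edge")
    disp = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            patch = pad_l[y:y + block, x:x + block]
            d0 = int(round(prior_disp[y, x]))
            best_d, best_cost = 0, np.inf
            # Search only the small, high-confidence neighborhood.
            for d in range(max(0, d0 - radius), min(x, d0 + radius) + 1):
                cand = pad_r[y:y + block, x - d:x - d + block]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Sweeping only 2*radius + 1 candidates per pixel, rather than the full disparity range, is what yields the speed-up and the robustness to repeated textures that the abstract claims.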
Pages: 3444 - 3448
Page count: 5
Related papers
50 items total
  • [41] The perceived depth from disparity as function of luminance contrast
    Chen, Pei-Yin
    Chen, Chien-Chung
    Tyler, Christopher W.
    JOURNAL OF VISION, 2016, 16 (11)
  • [42] Depth from disparity: Constraints imposed by the geometry of occlusion
    Anderson, BL
    AUSTRALIAN JOURNAL OF PSYCHOLOGY, 2004, 56 : 102 - 102
  • [43] Asymmetries and errors in perception of depth from disparity suggest a multicomponent model of disparity processing
    Landers, DD
    Cormack, LK
    PERCEPTION & PSYCHOPHYSICS, 1997, 59 (02): 219 - 231
  • [44] Learning Sub-Pixel Disparity Distribution for Light Field Depth Estimation
    Chao, Wentao
    Wang, Xuechun
    Wang, Yingqian
    Wang, Guanghui
    Duan, Fuqing
    IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2023, 9 : 1126 - 1138
  • [45] Depth Estimation Based on Binocular Disparity and Color-Coded Aperture Fusion
    Zhou, Dianle
    Wang, Xiaoshen
    Zhong, Zhiwei
    Pan, Xiaotian
    Shun, Xilu
    THREE-DIMENSIONAL IMAGE ACQUISITION AND DISPLAY TECHNOLOGY AND APPLICATIONS, 2018, 10845
  • [46] ACTIVE STEREO - INTEGRATING DISPARITY, VERGENCE, FOCUS, APERTURE, AND CALIBRATION FOR SURFACE ESTIMATION
    AHUJA, N
    ABBOTT, AL
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 1993, 15 (10) : 1007 - 1029
  • [47] DFRNets: Unsupervised Monocular Depth Estimation Using a Siamese Architecture for Disparity Refinement
    Yusiong, John Paul Tan
    Naval, Prospero Clara, Jr.
    PERTANIKA JOURNAL OF SCIENCE AND TECHNOLOGY, 2020, 28 (01): 163 - 177
  • [48] Learning Depth from Focus in the Wild
    Won, Changyeon
    Jeon, Hae-Gon
    COMPUTER VISION - ECCV 2022, PT I, 2022, 13661 : 1 - 18
  • [49] Variational Depth From Focus Reconstruction
    Moeller, Michael
    Benning, Martin
    Schoenlieb, Carola
    Cremers, Daniel
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24 (12) : 5369 - 5378
  • [50] Focus model for metric depth estimation in standard plenoptic cameras
    Pertuz, Said
    Pulido-Herrera, Edith
    Kamarainen, Joni-Kristian
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2018, 144 : 38 - 47