Video Depth-From-Defocus

Cited by: 7
Authors
Kim, Hyeongwoo [1 ]
Richardt, Christian [2 ]
Theobalt, Christian [3 ]
Affiliations
[1] Max Planck Inst Informat, Saarbrucken, Germany
[2] Intel Visual Comp Inst, Saarbrucken, Germany
[3] Univ Bath, Bath BA2 7AY, Avon, England
Source
PROCEEDINGS OF 2016 FOURTH INTERNATIONAL CONFERENCE ON 3D VISION (3DV) | 2016
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
IMAGE; BLUR; PHOTOGRAPHY; CAMERA
DOI
10.1109/3DV.2016.46
Chinese Library Classification
TM [Electrical technology]; TN [Electronic technology, communication technology]
Discipline classification codes
0808; 0809
Abstract
Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.
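The defocus cue the abstract describes is tied to scene depth by the thin-lens model: the blur-circle diameter grows with an object's distance from the moving focus plane, which is the signal a depth-from-defocus step inverts. As a minimal illustration (a sketch of the standard circle-of-confusion formula, not the authors' algorithm), the Python snippet below evaluates that relation; all parameter names and example values are illustrative assumptions.

```python
import numpy as np

def coc_diameter(depth, focus_dist, focal_len, f_number):
    """Thin-lens circle-of-confusion (blur-circle) diameter.

    All distances share one unit (here: millimetres).
    depth:      object distance from the lens
    focus_dist: distance the lens is focused at
    focal_len:  focal length of the lens
    f_number:   aperture f-number N (aperture diameter = focal_len / N)
    """
    aperture = focal_len / f_number
    # Standard thin-lens relation: blur grows with the separation between
    # the object and the focus plane, normalised by the object distance.
    return aperture * focal_len * np.abs(depth - focus_dist) / (
        depth * (focus_dist - focal_len))

# Example: a 50 mm f/1.8 lens focused at 2 m images an object at 3 m
# with a blur circle of about 0.24 mm on the sensor, i.e. clearly defocused.
print(coc_diameter(depth=3000.0, focus_dist=2000.0,
                   focal_len=50.0, f_number=1.8))
```

Depth-from-defocus inverts this relation: given blur estimates at two or more focus settings (here supplied by consecutive frames of the focus sweep), a per-pixel depth consistent with them can be recovered, resolving the front/back focus ambiguity that a single measurement leaves.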
Pages: 370-379
Number of pages: 10
Related papers
50 records in total
  • [21] Single image depth-from-defocus with a learned covariance: algorithm and performance model for co-design
    Buat, B.
    Trouve-Peloux, P.
    Champagnat, F.
    Le Besnerais, G.
    UNCONVENTIONAL OPTICAL IMAGING III, 2022, 12136
  • [22] Image-based calibration of spatial domain depth-from-defocus and application to automatic focus tracking
    Park, SY
    Moon, J
    COMPUTER VISION - ACCV 2006, PT I, 2006, 3851 : 754 - 763
  • [23] Self-Supervised Spatially Variant PSF Estimation for Aberration-Aware Depth-from-Defocus
    Wu, Zhuofeng
    Monno, Yusuke
    Okutomi, Masatoshi
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 2560 - 2564
  • [24] Defocus Discrimination in Video: Motion in Depth
    Petrella, Vincent A.
    Labute, Simon
    Langer, Michael S.
    Kry, Paul G.
    I-PERCEPTION, 2017, 8 (06):
  • [25] Video-rate calculation of depth from defocus on a FPGA
    Raj, Alex Noel Joseph
    Staunton, Richard C.
    JOURNAL OF REAL-TIME IMAGE PROCESSING, 2018, 14 (02) : 469 - 480
  • [27] Defocus from depth for defocus measurement
    Liu, Ziwei
    Xu, Tingfa
    Liu, Jingdan
    Wang, Hongqing
    Shi, Mingzhu
    Li, Xiangmin
    JOURNAL OF MODERN OPTICS, 2013, 60 (21) : 1977 - 1981
  • [28] 3D positioning and autofocus of the particle field based on the depth-from-defocus method and the deep networks
    Zhang, Xiaolei
    Dong, Zhao
    Wang, Huaying
    Sha, Xiaohui
    Wang, Wenjian
    Su, Xinyu
    Hu, Zhengsheng
    Yang, Shaokai
    MACHINE LEARNING-SCIENCE AND TECHNOLOGY, 2023, 4 (02):
  • [29] A video-rate range sensor based on depth from defocus
    Ghita, O
    Whelan, PF
    OPTICS AND LASER TECHNOLOGY, 2001, 33 (03) : 167 - 176
  • [30] Depth from Defocus in the Wild
    Tang, Huixuan
    Cohen, Scott
    Price, Brian
    Schiller, Stephen
    Kutulakos, Kiriakos N.
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 4773 - 4781