Video Depth-From-Defocus

Cited by: 7
Authors
Kim, Hyeongwoo [1 ]
Richardt, Christian [2 ]
Theobalt, Christian [3 ]
Affiliations
[1] Max Planck Inst Informat, Saarbrucken, Germany
[2] Intel Visual Comp Inst, Saarbrucken, Germany
[3] Univ Bath, Bath BA2 7AY, Avon, England
Source
PROCEEDINGS OF 2016 FOURTH INTERNATIONAL CONFERENCE ON 3D VISION (3DV) | 2016
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
IMAGE; BLUR; PHOTOGRAPHY; CAMERA;
DOI
10.1109/3DV.2016.46
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.
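A minimal sketch of the thin-lens defocus model that the abstract's depth-from-defocus idea builds on (plain Python/NumPy, not the authors' implementation; the function names, parameter values, and the brute-force per-pixel depth search are illustrative assumptions): as the focus distance sweeps over the video, the blur-circle diameter of a point at a fixed depth changes from frame to frame, and matching that observed blur profile against the model constrains the depth.

import numpy as np

def circle_of_confusion(s, s_f, f=0.05, N=1.8):
    """Blur-circle diameter on the sensor (metres) for scene depth s,
    focus distance s_f, focal length f and f-number N (thin-lens model):
    c = (f / N) * (f / (s_f - f)) * |s - s_f| / s."""
    aperture = f / N                          # aperture diameter
    return aperture * (f / (s_f - f)) * np.abs(s - s_f) / s

def estimate_depth(observed_blur, focus_sweep, depth_candidates, f=0.05, N=1.8):
    """Toy per-pixel depth estimate: pick the candidate depth whose predicted
    blur profile over the focus sweep best matches the observed one."""
    errors = [np.sum((circle_of_confusion(d, focus_sweep, f, N) - observed_blur) ** 2)
              for d in depth_candidates]
    return depth_candidates[int(np.argmin(errors))]

if __name__ == "__main__":
    focus_sweep = np.linspace(0.5, 5.0, 30)                  # per-frame focus distances (m)
    observed = circle_of_confusion(2.0, focus_sweep)          # noiseless toy blur profile
    candidates = np.linspace(0.5, 5.0, 200)
    print(estimate_depth(observed, focus_sweep, candidates))  # ~2.0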
Pages: 370-379
Number of pages: 10
Related Papers
50 records in total
  • [31] REGULARIZED DEPTH FROM DEFOCUS
    Namboodiri, Vinay P.
    Chaudhuri, Subhasis
    Hadap, Sunil
    2008 15TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-5, 2008, : 1520 - 1523
  • [32] Diffraction-limited depth-from-defocus imaging with a pixel-limited camera using pupil phase modulation and compressive sensing
    Niihara, Takahiro
    Horisaki, Ryoichi
    Kiyono, Mitsuhiro
    Yanai, Kenichi
    Tanida, Jun
    APPLIED PHYSICS EXPRESS, 2015, 8 (01)
  • [34] Coded Aperture Pairs for Depth from Defocus and Defocus Deblurring
    Zhou, Changyin
    Lin, Stephen
    Nayar, Shree K.
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2011, 93 (01) : 53 - 72
  • [35] A Unified Approach for Registration and Depth in Depth from Defocus
    Ben-Ari, Rami
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2014, 36 (06) : 1041 - 1055
  • [36] Depth from motion and defocus blur
    Lin, Huei-Yung
    Chang, Chia-Hong
    OPTICAL ENGINEERING, 2006, 45 (12)
  • [37] Computational approach for depth from defocus
    Ghita, O
    Whelan, PF
    Mallon, J
    JOURNAL OF ELECTRONIC IMAGING, 2005, 14 (02) : 1 - 8
  • [38] Blur Calibration for Depth from Defocus
    Mannan, Fahim
    Langer, Michael S.
    2016 13TH CONFERENCE ON COMPUTER AND ROBOT VISION (CRV), 2016, : 281 - 288
  • [39] Discriminative Filters for Depth from Defocus
    Mannan, Fahim
    Langer, Michael S.
    PROCEEDINGS OF 2016 FOURTH INTERNATIONAL CONFERENCE ON 3D VISION (3DV), 2016, : 592 - 600
  • [40] DEPTH FROM SPECTRAL DEFOCUS BLUR
    Ishihara, Shin
    Sulc, Antonin
    Sato, Imari
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 1980 - 1984