Video Depth-From-Defocus

Cited by: 7
Authors
Kim, Hyeongwoo [1 ]
Richardt, Christian [2 ]
Theobalt, Christian [3 ]
Affiliations
[1] Max Planck Inst Informat, Saarbrucken, Germany
[2] Intel Visual Comp Inst, Saarbrucken, Germany
[3] Univ Bath, Bath BA2 7AY, Avon, England
Source
PROCEEDINGS OF 2016 FOURTH INTERNATIONAL CONFERENCE ON 3D VISION (3DV) | 2016
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
IMAGE; BLUR; PHOTOGRAPHY; CAMERA;
DOI
10.1109/3DV.2016.46
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic technology, communication technology];
Discipline codes
0808; 0809;
Abstract
Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.
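The abstract's core idea is that defocus blur encodes depth: under a thin-lens model, the size of a point's blur circle grows with its distance from the focus plane, and sweeping the focus plane over time makes that relationship invertible. As a minimal illustrative sketch (not the paper's algorithm; the focal length, f-number, and sweep distances below are assumed example values):

```python
# Thin-lens circle-of-confusion model, the geometric relation that
# depth-from-defocus methods invert. Not the paper's implementation;
# focal_mm and f_number are illustrative assumptions.

def coc_diameter(depth_m, focus_m, focal_mm=50.0, f_number=1.8):
    """Blur-circle diameter (mm) on the sensor for a point at depth_m
    metres when the lens is focused at focus_m metres."""
    f = focal_mm / 1000.0                # focal length in metres
    aperture = f / f_number              # aperture diameter in metres
    c = aperture * f * abs(depth_m - focus_m) / (depth_m * (focus_m - f))
    return c * 1000.0                    # convert back to millimetres

# A focus sweep (as produced by turning the focus ring) samples several
# focus distances over successive frames; for each pixel, the frame with
# the least blur indicates the focus distance closest to the true depth.
sweep = [0.5, 1.0, 2.0, 4.0, 8.0]        # per-frame focus distances (m)
true_depth = 2.0
blurs = [coc_diameter(true_depth, s) for s in sweep]
estimate = sweep[min(range(len(sweep)), key=lambda i: blurs[i])]
```

A real method must additionally handle scene motion between frames and deconvolve the blur to recover the all-in-focus video, which is where the paper's space-time-coherent optimization comes in.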
Pages: 370-379
Page count: 10
Related papers
50 records
  • [41] Optimal Camera Parameters for Depth from Defocus
    Mannan, Fahim
    Langer, Michael S.
    2015 INTERNATIONAL CONFERENCE ON 3D VISION, 2015, : 326 - 334
  • [42] Depth from Defocus Applied to Auto Focus
    Yasugi, Shunsuke
    Nguyen, Khang
    Ezawa, Kozo
    Kawamura, Takashi
    2014 IEEE 3RD GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE), 2014, : 171 - 173
  • [43] Depth from defocus using wavelet transform
    Asif, M
    Choi, TS
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2004, E87D (01): 250 - 253
  • [44] Depth from defocus using the Hermite transform
    Ziou, D
    Wang, S
    Vaillancourt, J
    1998 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING - PROCEEDINGS, VOL 2, 1998, : 958 - 962
  • [45] Rational filters for passive depth from defocus
    Watanabe, M
    Nayar, SK
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 1998, 27 (03) : 203 - 225
  • [46] ROTATING CODED APERTURE FOR DEPTH FROM DEFOCUS
    Yang, Jingyu
    Ma, Jinlong
    Jiang, Bin
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 1726 - 1730
  • [47] Coding depth perception from image defocus
    Super, Hans
    Romeo, August
    VISION RESEARCH, 2014, 105 : 199 - 203
  • [48] Rational Filters for Passive Depth from Defocus
    Masahiro Watanabe
    Shree K. Nayar
    International Journal of Computer Vision, 1998, 27 : 203 - 225
  • [49] DEPTH FROM DEFOCUS TECHNIQUE BY MUTUAL REFOCUSING
    Takemura, Kazumi
    Yoshida, Toshiyuki
    2018 INTERNATIONAL WORKSHOP ON ADVANCED IMAGE TECHNOLOGY (IWAIT), 2018,
  • [50] Fast depth from defocus from focal stacks
    Bailey, Stephen W.
    Echevarria, Jose I.
    Bodenheimer, Bobby
    Gutierrez, Diego
    VISUAL COMPUTER, 2015, 31 (12): 1697 - 1708