Video Depth-From-Defocus

Cited by: 7
Authors
Kim, Hyeongwoo [1 ]
Richardt, Christian [2 ]
Theobalt, Christian [3 ]
Affiliations
[1] Max Planck Inst Informat, Saarbrucken, Germany
[2] Intel Visual Comp Inst, Saarbrucken, Germany
[3] Univ Bath, Bath BA2 7AY, Avon, England
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK;
Keywords
IMAGE; BLUR; PHOTOGRAPHY; CAMERA;
DOI
10.1109/3DV.2016.46
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808; 0809;
Abstract
Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.
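The defocus cue the abstract describes can be illustrated with the standard thin-lens circle-of-confusion model that depth-from-defocus methods invert. This is a minimal sketch of that textbook relation, not the paper's algorithm; the focal length and f-number values below are arbitrary assumptions for illustration.

```python
def coc_diameter(depth_m, focus_m, focal_mm=50.0, f_number=1.8):
    """Blur-circle (circle of confusion) diameter on the sensor, in mm,
    for a point at depth_m when the lens is focused at focus_m.
    Thin-lens model; focal_mm and f_number are illustrative assumptions."""
    f = focal_mm / 1000.0      # focal length in metres
    aperture = f / f_number    # aperture diameter in metres
    c = aperture * abs(depth_m - focus_m) / depth_m * f / (focus_m - f)
    return c * 1000.0          # back to millimetres

# A point at the focus distance has zero blur; blur grows as depth
# departs from the focus plane, which is exactly the signal a sweeping
# focus ring makes visible in every frame.
print(coc_diameter(2.0, 2.0))                  # → 0.0
print(round(coc_diameter(1.0, 2.0), 4))        # blur at 1 m, focus at 2 m
```

Sweeping `focus_m` back and forth over time, as the paper's capture protocol does, gives each scene point frames at many blur levels, which is what makes per-frame depth recoverable.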
Pages: 370-379
Page count: 10
Related papers
50 records in total
  • [1] Physically inspired depth-from-defocus
    Persch, Nico
    Schroers, Christopher
    Setzer, Simon
    Weickert, Joachim
    IMAGE AND VISION COMPUTING, 2017, 57: 114-129
  • [2] Particle depth measurement based on depth-from-defocus
    Murata, S.
    Kawamura, M.
    OPTICS AND LASER TECHNOLOGY, 1999, 31 (01): 95-102
  • [3] Bayesian Depth-From-Defocus With Shading Constraints
    Li, Chen
    Su, Shuochen
    Matsushita, Yasuyuki
    Zhou, Kun
    Lin, Stephen
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2016, 25 (02): 589-600
  • [4] Depth-from-defocus: Blur equalization technique
    Xian, Tao
    Subbarao, Murali
    TWO- AND THREE-DIMENSIONAL METHODS FOR INSPECTION AND METROLOGY IV, 2006, 6382
  • [5] Bayesian Depth-from-Defocus with Shading Constraints
    Li, Chen
    Su, Shuochen
    Matsushita, Yasuyuki
    Zhou, Kun
    Lin, Stephen
    2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2013: 217-224
  • [6] Diffraction-limited depth-from-defocus
    Mair, C.
    Goodman, C. J.
    ELECTRONICS LETTERS, 2000, 36 (24): 2012-2013
  • [7] Introducing More Physics into Variational Depth-from-Defocus
    Persch, Nico
    Schroers, Christopher
    Setzer, Simon
    Weickert, Joachim
    PATTERN RECOGNITION, GCPR 2014, 2014, 8753: 15-28
  • [8] Role of optics in the accuracy of depth-from-defocus systems
    Blayvas, Ilya
    Kimmel, Ron
    Rivlin, Ehud
    JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION, 2007, 24 (04): 967-972
  • [9] Measuring the Size of Neoplasia in Colonoscopy using Depth-from-Defocus
    Chadebecq, Francois
    Tilmant, Christophe
    Bartoli, Adrien
    2012 ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2012: 1478-1481