Blind Motion Deblurring With Pixel-Wise Kernel Estimation via Kernel Prediction Networks

Cited by: 5
Authors
Carbajal G. [1 ]
Vitoria P. [2 ]
Lezama J. [1 ]
Muse P. [1 ]
Affiliations
[1] Universidad de la República, Department of Electrical Engineering, Montevideo
[2] Universitat Pompeu Fabra, Image Processing Group, Barcelona
Keywords
deep learning; kernel prediction networks; motion deblurring; non-uniform motion kernel estimation
DOI
10.1109/TCI.2023.3322012
Abstract
In recent years, the removal of motion blur in photographs has seen impressive progress in the hands of deep learning-based methods, trained to map directly from blurry to sharp images. As a result, approaches that explicitly use a forward degradation model have received significantly less attention. However, a well-defined specification of how the blur arises, used as an intermediate step, promotes the generalization and explainability of the method. Toward this goal, we propose a learning-based motion deblurring method based on dense non-uniform motion blur estimation followed by a non-blind deconvolution approach. Specifically, given a blurry image, a first network estimates the dense per-pixel motion blur kernels using a lightweight representation composed of a set of image-adaptive basis motion kernels and the corresponding mixing coefficients. Then, a second network, trained jointly with the first, unrolls a non-blind deconvolution method using the motion kernel field estimated by the first network. The model-driven aspect is further promoted by training the networks on sharp/blurry pairs synthesized according to a convolution-based, non-uniform motion blur degradation model. Qualitative and quantitative evaluation shows that the kernel prediction network produces accurate motion blur estimates, and that the deblurring pipeline leads to restorations of real blurred images that are competitive with or superior to those obtained with existing end-to-end deep learning-based methods. © 2015 IEEE.
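The abstract describes representing the per-pixel kernel field as a mixture of image-adaptive basis kernels weighted by per-pixel mixing coefficients, with blurry images synthesized by a convolution-based degradation model. A minimal NumPy sketch of that degradation step (function and array names are illustrative assumptions, not from the paper):

```python
import numpy as np

def blur_with_kernel_field(sharp, basis, coeffs):
    """Synthesize non-uniform motion blur from a per-pixel kernel field.

    The kernel at pixel (i, j) is the mixture
        K_ij = sum_b coeffs[b, i, j] * basis[b],
    so the blurry image is a per-pixel weighted sum of B uniform
    convolutions, one per basis kernel.

    sharp:  (H, W) grayscale image
    basis:  (B, k, k) basis motion kernels
    coeffs: (B, H, W) per-pixel mixing coefficients
    """
    B, k, _ = basis.shape
    H, W = sharp.shape
    pad = k // 2
    padded = np.pad(sharp, pad, mode="edge")
    out = np.zeros_like(sharp)
    for b in range(B):
        # Uniform convolution with basis kernel b (implemented as
        # correlation; the flip is immaterial for this sketch).
        conv = np.zeros_like(sharp)
        for di in range(k):
            for dj in range(k):
                conv += basis[b, di, dj] * padded[di:di + H, dj:dj + W]
        # Weight each pixel of the result by that kernel's coefficient.
        out += coeffs[b] * conv
    return out
```

This also shows why the representation is lightweight: with B basis kernels, synthesizing (or inverting) the non-uniform blur costs B uniform convolutions rather than one distinct k×k convolution per pixel.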
Pages: 928 - 943
Page count: 15
Related Papers
50 records total
  • [21] Blind motion image deblurring using an effective blur kernel prior
    Javaran, Taiebeh Askari
    Hassanpour, Hamid
    Abolghasemi, Vahid
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 : 22555 - 22574
  • [22] Blur kernel estimation via salient edges and low rank prior for blind image deblurring
    Dong, Jiangxin
    Pan, Jinshan
    Su, Zhixun
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2017, 58 : 134 - 145
  • [23] Pixel-wise confidence estimation for segmentation in Bayesian Convolutional Neural Networks
    Martin, Rémi
    Duong, Luc
    MACHINE VISION AND APPLICATIONS, 2023, 34 (01)
  • [24] Self-paced Kernel Estimation for Robust Blind Image Deblurring
    Gong, Dong
    Tan, Mingkui
    Zhang, Yanning
    van den Hengel, Anton
    Shi, Qinfeng
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 1670 - 1679
  • [25] Edge Enhancing Based Blind Kernel Estimation for Deep Image Deblurring
    Yang, Chunyu
    Wang, Weiwei
    CIRCUITS SYSTEMS AND SIGNAL PROCESSING, 2025, 44 (02) : 1017 - 1044
  • [26] Blur Kernel Estimation Model with Combined Constraints for Blind Image Deblurring
    Liao, Ying
    Li, Weihong
    Cui, Jinkai
    Gong, Weiguo
    2018 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA), 2018, : 388 - 395
  • [28] Transformer-Based Attention Networks for Continuous Pixel-Wise Prediction
    Yang, Guanglei
    Tang, Hao
    Ding, Mingli
    Sebe, Nicu
    Ricci, Elisa
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 16249 - 16259
  • [29] Towards interpretable and robust hand detection via pixel-wise prediction
    Liu, Dan
    Zhang, Libo
    Luo, Tiejian
    Tao, Lili
    Wu, Yanjun
    PATTERN RECOGNITION, 2020, 105
  • [30] Semantic-Aware Face Deblurring With Pixel-Wise Projection Discriminator
    Han, Sujy
    Lee, Tae Bok
    Heo, Yong Seok
    IEEE ACCESS, 2023, 11 : 11587 - 11600