Neumann Network with Recursive Kernels for Single Image Defocus Deblurring

Cited by: 9
Authors
Quan, Yuhui [1 ,2 ]
Wu, Zicong [1 ,2 ]
Ji, Hui [3 ]
Affiliations
[1] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
[2] Pazhou Lab, Guangzhou 510335, Peoples R China
[3] Natl Univ Singapore, Dept Math, Singapore 119076, Singapore
DOI
10.1109/CVPR52729.2023.00557
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Single image defocus deblurring (SIDD) refers to recovering an all-in-focus image from a defocused blurry one. It is a challenging recovery task due to the spatially-varying defocus blurring effects with significant size variation. Motivated by the strong correlation among defocus kernels of different sizes and the blob-type structure of defocus kernels, we propose a learnable recursive kernel representation (RKR) for defocus kernels that expresses a defocus kernel by a linear combination of recursive, separable and positive atom kernels, leading to a compact yet effective and physics-encoded parametrization of the spatially-varying defocus blurring process. Afterwards, a physics-driven and efficient deep model with a cross-scale fusion structure is presented for SIDD, with inspirations from the truncated Neumann series for approximating the matrix inversion of the RKR-based blurring operator. In addition, a reblurring loss is proposed to regularize the RKR learning. Extensive experiments show that our proposed approach significantly outperforms existing ones, with a model size comparable to that of the top methods.
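The truncated Neumann series mentioned in the abstract is a standard way to approximate the inverse of an operator close to the identity: if the spectral radius of (I - B) is below 1, then B^{-1} = Σ_{k≥0} (I - B)^k, and truncating the sum gives a cheap approximate deblurring. A minimal NumPy sketch of this idea (not the authors' implementation; the matrix B and the function name are illustrative assumptions):

```python
import numpy as np

def neumann_inverse_apply(B, y, K=30):
    """Approximate x = B^{-1} y via the truncated Neumann series
    B^{-1} ≈ sum_{k=0}^{K} (I - B)^k, which converges when the
    spectral radius of (I - B) is below 1 (B close to identity)."""
    A = np.eye(B.shape[0]) - B   # residual operator I - B
    x = y.copy()                 # k = 0 term
    term = y.copy()
    for _ in range(K):
        term = A @ term          # accumulate (I - B)^k y
        x += term
    return x

# Toy example: a well-conditioned "blurring" matrix near the identity.
rng = np.random.default_rng(0)
B = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
y = rng.standard_normal(4)
x_approx = neumann_inverse_apply(B, y, K=30)
x_exact = np.linalg.solve(B, y)
```

In the paper's setting, each series term corresponds to one more application of the learned RKR-based blurring operator, which is why the truncated series maps naturally onto a feed-forward network.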
Pages: 5754 - 5763
Page count: 10
Related papers
50 records in total
  • [11] Single Image Blind Deblurring with Deep Recursive Networks
    Wu, Yeyun
    Wang, Junsheng
    Zhang, Xiaofeng
    PROCEEDINGS OF 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2020), 2020, : 1607 - 1611
  • [12] Prior and Prediction Inverse Kernel Transformer for Single Image Defocus Deblurring
    Tang, Peng
    Xu, Zhiqiang
    Zhou, Chunlai
    Wei, Pengfei
    Han, Peng
    Cao, Xin
    Lasser, Tobias
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6, 2024, : 5145 - 5153
  • [13] Decoupling Image Deblurring Into Twofold: A Hierarchical Model for Defocus Deblurring
    Liang, Pengwei
    Jiang, Junjun
    Liu, Xianming
    Ma, Jiayi
    IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2024, 10 : 1207 - 1220
  • [14] Deep Single Image Defocus Deblurring via Gaussian Kernel Mixture Learning
    Quan, Yuhui
    Wu, Zicong
    Xu, Ruotao
    Ji, Hui
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (12) : 11361 - 11377
  • [15] SIDGAN: Efficient Multi-Module Architecture for Single Image Defocus Deblurring
    Ling, Shenggui
    Zhan, Hongmin
    Cao, Lijia
    ELECTRONICS, 2024, 13 (12)
  • [16] Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image
    Xin, Shumian
    Wadhwa, Neal
    Xue, Tianfan
    Barron, Jonathan T.
    Srinivasan, Pratul P.
    Chen, Jiawen
    Gkioulekas, Ioannis
    Garg, Rahul
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 2208 - 2218
  • [17] Spatially variant defocus blur map estimation and deblurring from a single image
    Zhang, Xinxin
    Wang, Ronggang
    Jiang, Xiubao
    Wang, Wenmin
    Gao, Wen
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2016, 35 : 257 - 264
  • [18] Perceptual quality evaluation for image defocus deblurring
    Li, Leida
    Yan, Ya
    Fang, Yuming
    Wang, Shiqi
    Tang, Lu
    Qian, Jiansheng
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2016, 48 : 81 - 91
  • [19] Single Image Defocus Deblurring Using Kernel-Sharing Parallel Atrous Convolutions
    Son, Hyeongseok
    Lee, Junyong
    Cho, Sunghyun
    Lee, Seungyong
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 2622 - 2630
  • [20] Defocus Hyperspectral Image Deblurring with Adaptive Reference Image and Scale Map
    Li, De-Wang
    Lai, Lin-Jing
    Huang, Hua
    JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2019, 34 (03) : 569 - 580