Deep Supervised Attention Network for Dynamic Scene Deblurring

Cited: 0
Authors
Jang, Seok-Woo [1 ]
Yan, Limin [2 ]
Kim, Gye-Young [2 ]
Affiliations
[1] Anyang Univ, Dept Software, 22,37 Beongil, Anyang 14028, South Korea
[2] Soongsil Univ, Sch Software, 369 Sangdo Ro, Seoul 06978, South Korea
Keywords
dynamic deblurring; multiple loss function; multi-scale network; supervised attention; recurrent network; feature mapping;
DOI
10.3390/s25061896
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Codes
070302; 081704;
Abstract
In this study, we propose a dynamic scene deblurring approach using a deep supervised attention network. While existing deep learning-based deblurring methods have significantly outperformed traditional techniques, several challenges remain: (1) Invariant weights: small convolutional neural network (CNN) models struggle to address the spatially variant nature of dynamic scene blur, making it difficult to capture the necessary information; a more effective architecture is needed to extract valuable features. (2) Limitations of standard datasets: current datasets often suffer from low data volume, unclear ground-truth (GT) images, and a single blur scale, which hinders performance. To address these challenges, we propose a multi-scale, end-to-end recurrent network that uses supervised attention to recover sharp images. The supervised attention mechanism focuses the model on the features most relevant to ambiguous information as data are passed between networks at different scales. Additionally, we introduce new loss functions to overcome the limitations of the peak signal-to-noise ratio (PSNR) estimation metric. By incorporating a fast Fourier transform (FFT), our method maps features into frequency space, aiding the recovery of lost high-frequency details. Experimental results demonstrate that our model outperforms previous methods in both quantitative and qualitative evaluations, producing higher-quality deblurring results.
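
The frequency-space loss mentioned in the abstract can be illustrated with a short sketch. The following Python/PyTorch snippet is a minimal, hypothetical example of an FFT-based term that compares the spectra of the restored and ground-truth images alongside a spatial L1 term; the function names, the choice of an L1 distance on the spectra, and the weight lambda_fft are assumptions made for illustration, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def fft_loss(restored: torch.Tensor, sharp: torch.Tensor) -> torch.Tensor:
        # Map both image batches (N, C, H, W) into frequency space with a 2-D FFT.
        restored_freq = torch.fft.fft2(restored, dim=(-2, -1))
        sharp_freq = torch.fft.fft2(sharp, dim=(-2, -1))
        # Stack real and imaginary parts so the distance stays real-valued.
        restored_freq = torch.stack([restored_freq.real, restored_freq.imag], dim=-1)
        sharp_freq = torch.stack([sharp_freq.real, sharp_freq.imag], dim=-1)
        # Penalizing spectral differences emphasizes lost high-frequency detail.
        return F.l1_loss(restored_freq, sharp_freq)

    def total_loss(restored: torch.Tensor, sharp: torch.Tensor,
                   lambda_fft: float = 0.1) -> torch.Tensor:
        # Spatial L1 term plus a weighted frequency term; lambda_fft = 0.1 is a guessed weight.
        return F.l1_loss(restored, sharp) + lambda_fft * fft_loss(restored, sharp)

In practice, such a frequency term is computed per scale of a multi-scale network and summed with the spatial losses of all scales during training.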
Pages: 15