Deep Supervised Attention Network for Dynamic Scene Deblurring

Cited: 0
Authors
Jang, Seok-Woo [1 ]
Yan, Limin [2 ]
Kim, Gye-Young [2 ]
Affiliations
[1] Anyang Univ, Dept Software, 22,37 Beongil, Anyang 14028, South Korea
[2] Soongsil Univ, Sch Software, 369 Sangdo Ro, Seoul 06978, South Korea
Keywords
dynamic deblurring; multiple loss function; multi-scale network; supervised attention; recurrent network; feature mapping;
DOI
10.3390/s25061896
CLC Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
In this study, we propose a dynamic scene deblurring approach using a deep supervised attention network. While existing deep learning-based deblurring methods have significantly outperformed traditional techniques, several challenges remain: (1) Invariant weights: small convolutional neural network (CNN) models struggle to address the spatially variant nature of dynamic scene deblurring, making it difficult to capture the necessary information; a more effective architecture is needed to extract valuable features. (2) Limitations of standard datasets: current datasets often suffer from low data volume, unclear ground truth (GT) images, and a single blur scale, which hinders performance. To address these challenges, we propose a multi-scale, end-to-end recurrent network that uses supervised attention to recover sharp images. The supervised attention mechanism focuses the model on the features most relevant to resolving ambiguous information as data are passed between networks at different scales. Additionally, we introduce new loss functions to overcome the limitations of the peak signal-to-noise ratio (PSNR) evaluation metric. By incorporating a fast Fourier transform (FFT), our method maps features into frequency space, aiding the recovery of lost high-frequency details. Experimental results demonstrate that our model outperforms previous methods in both quantitative and qualitative evaluations, producing higher-quality deblurring results.
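To make the supervised attention idea concrete, the following is a minimal PyTorch sketch, not the paper's implementation: the module name `SupervisedAttention`, the layer shapes, and the gating scheme are our own assumptions. The block predicts an intermediate restored image at the current scale (which a supervision loss can compare against the ground truth) and derives an attention mask from it to gate the features passed to the next scale.

```python
import torch
import torch.nn as nn

class SupervisedAttention(nn.Module):
    """Hypothetical supervised attention block between scales (sketch only).

    Predicts an intermediate restored image that is directly supervised,
    then uses it to compute a mask that gates the features handed to the
    next-scale network.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.to_image = nn.Conv2d(channels, 3, kernel_size=3, padding=1)
        self.from_image = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, features: torch.Tensor, blurry: torch.Tensor):
        # Intermediate restored image at this scale; a loss against the
        # ground-truth sharp image supervises it during training.
        restored = self.to_image(features) + blurry
        # Attention mask derived from the restored image re-weights the
        # features before they are passed to the next scale.
        mask = torch.sigmoid(self.from_image(restored))
        attended = self.refine(features) * mask + features
        return attended, restored
```

Gating features with an image-derived mask is how supervised attention modules are commonly built in multi-stage restoration networks; the paper's exact wiring between its recurrent scales may differ.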
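The frequency-space loss can be sketched the same way. Assuming an L1 distance between the FFTs of the restored and sharp images, added to a pixel-space term with a hypothetical weight `freq_weight`, a PyTorch version might look like this (the paper's exact formulation is not reproduced here):

```python
import torch
import torch.nn as nn

class FrequencyLoss(nn.Module):
    """L1 distance in frequency space (sketch only, assumed formulation)."""
    def forward(self, restored: torch.Tensor, sharp: torch.Tensor) -> torch.Tensor:
        # rfft2 yields complex tensors; view_as_real exposes the real and
        # imaginary parts so a plain L1 distance is well defined.
        restored_f = torch.view_as_real(torch.fft.rfft2(restored))
        sharp_f = torch.view_as_real(torch.fft.rfft2(sharp))
        return torch.mean(torch.abs(restored_f - sharp_f))

def total_loss(restored, sharp, freq_weight: float = 0.1) -> torch.Tensor:
    # Pixel-space L1 plus the weighted frequency term; freq_weight is a
    # hypothetical value, not taken from the paper.
    pixel = torch.mean(torch.abs(restored - sharp))
    return pixel + freq_weight * FrequencyLoss()(restored, sharp)
```

Penalizing the FFT of the residual pushes the network to reproduce high-frequency content that pixel-space losses tend to blur away, which matches the abstract's motivation for the frequency mapping.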
Pages: 15
Related Papers
50 records in total
  • [41] A unified deep sparse graph attention network for scene graph generation
    Zhou, Hao
    Yang, Yazhou
    Luo, Tingjin
    Zhang, Jun
    Li, Shuohao
    PATTERN RECOGNITION, 2022, 123
  • [42] Natural Scene Text Detection Based on Deep Supervised Fully Convolutional Network
    Zhang, Nan
    Jin, Xiaoning
    Li, Xiaowei
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING, PT III, 2018, 11166 : 439 - 448
  • [43] Blind Attention Geometric Restraint Neural Network for Single Image Dynamic/Defocus Deblurring
    Zhang, Jie
    Zhai, Wanming
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (11) : 8404 - 8417
  • [44] Attention Network for Non-Uniform Deblurring
    Qi, Qing
    Guo, Jichang
    Jin, Weipei
IEEE ACCESS, 2020, 8 (08) : 100044 - 100057
  • [45] Self-supervised monocular depth estimation with large kernel attention and dynamic scene perception
    Xiang, Xuezhi
    Wang, Yao
    Li, Xiaoheng
    Zhang, Lei
    Zhen, Xiantong
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2025, 108
  • [46] Efficient Dynamic Scene Deblurring Using Spatially Variant Deconvolution Network with Optical Flow Guided Training
    Yuan, Yuan
    Su, Wei
    Ma, Dandan
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 3552 - 3561
  • [47] Progressive downsampling and adaptive guidance networks for dynamic scene deblurring
    Cui, Jinkai
    Li, Weihong
    Guo, Wei
    Gong, Weiguo
    PATTERN RECOGNITION, 2022, 132
  • [49] Lightweight Patch-Wise Casformer for dynamic scene deblurring
    Chen, Ziyi
    Cui, Guangmang
    Li, Zihan
    Zhao, Jufeng
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 100
  • [50] Weakly Supervised Attention Rectification for Scene Text Recognition
    Gu, Chengyu
    Wang, Shilin
    Zhu, Yiwei
    Huang, Zheng
    Chen, Kai
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 779 - 786