Blur-aware Spatio-temporal Sparse Transformer for Video Deblurring

Cited by: 2
Authors
Zhang, Huicong [1 ]
Xie, Haozhe [2 ]
Yao, Hongxun [1 ]
Affiliations
[1] Harbin Inst Technol, Harbin, Peoples R China
[2] Nanyang Technol Univ, S Lab, Singapore, Singapore
DOI
10.1109/CVPR52733.2024.00258
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Video deblurring relies on leveraging information from other frames in the video sequence to restore the blurred regions in the current frame. Mainstream approaches employ bidirectional feature propagation, spatio-temporal transformers, or a combination of both to extract information from the video sequence. However, limitations in memory and computational resources constrain the temporal window length of the spatio-temporal transformer, preventing the extraction of longer temporal contextual information from the video sequence. Additionally, bidirectional feature propagation is highly sensitive to inaccurate optical flow in blurry frames, leading to error accumulation during propagation. To address these issues, we propose BSSTNet, a Blur-aware Spatio-temporal Sparse Transformer Network. It introduces a blur map that converts the originally dense attention into a sparse form, enabling more extensive use of information from the entire video sequence. Specifically, BSSTNet (1) uses a longer temporal window in the transformer, leveraging information from more distant frames to restore the blurry pixels in the current frame, and (2) introduces bidirectional feature propagation guided by blur maps, which reduces the error accumulation caused by blurry frames. Experimental results demonstrate that the proposed BSSTNet outperforms state-of-the-art methods on the GoPro and DVD datasets.
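The abstract only sketches the mechanism. As a rough, hypothetical illustration of blur-map-guided sparse attention (not the authors' implementation), the PyTorch snippet below keeps only the sharpest key/value tokens from a long temporal window before running standard scaled dot-product attention; the function name, tensor shapes, top-k selection rule, and the final blending step are assumptions made for this sketch.

```python
# Minimal sketch (assumed, not the paper's code): blur-map-guided sparse attention.
# Blur scores are assumed to lie in [0, 1], with higher values meaning blurrier.
import torch

def blur_aware_sparse_attention(q, k, v, blur_q, blur_kv, topk=256):
    """
    q:        (B, Nq, C)  query tokens from the current frame.
    k, v:     (B, N, C)   key/value tokens from the temporal window, flattened over space and time.
    blur_q:   (B, Nq)     blur scores of the query tokens (blurry positions need restoration).
    blur_kv:  (B, N)      blur scores of the key/value tokens (sharp tokens are informative).
    topk:     number of key/value tokens kept per sample (hypothetical parameter).
    """
    B, N, C = k.shape
    keep = min(topk, N)
    # Keep only the sharpest key/value tokens, so attention over a long temporal
    # window stays affordable in memory and compute.
    sharp_idx = torch.topk(-blur_kv, k=keep, dim=1).indices                 # (B, keep)
    k_s = torch.gather(k, 1, sharp_idx.unsqueeze(-1).expand(-1, -1, C))     # (B, keep, C)
    v_s = torch.gather(v, 1, sharp_idx.unsqueeze(-1).expand(-1, -1, C))     # (B, keep, C)

    # Standard scaled dot-product attention against the sparse key/value set.
    attn = torch.softmax(q @ k_s.transpose(1, 2) / C ** 0.5, dim=-1)        # (B, Nq, keep)
    out = attn @ v_s                                                         # (B, Nq, C)

    # Blend: blurry query positions rely on the attended (sharp) features,
    # while sharp positions mostly keep their own features.
    w = blur_q.unsqueeze(-1)
    return w * out + (1.0 - w) * q

# Usage example: 8x8 feature map per frame, a 5-frame window, 64-dim tokens.
B, Nq, Nkv, C = 1, 64, 5 * 64, 64
restored = blur_aware_sparse_attention(
    torch.randn(B, Nq, C), torch.randn(B, Nkv, C), torch.randn(B, Nkv, C),
    torch.rand(B, Nq), torch.rand(B, Nkv), topk=128)
```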
Pages: 2673-2681
Page count: 9
Related Papers
50 records in total
  • [1] Wavelet-Based, Blur-Aware Decoupled Network for Video Deblurring
    Wang, Hua
    Pawara, Pornntiwa
    Chamchong, Rapeeporn
    APPLIED SCIENCES-BASEL, 2025, 15 (03):
  • [2] Attacking Defocus Detection With Blur-Aware Transformation for Defocus Deblurring
    Zhao, Wenda
    Hu, Guang
    Wei, Fei
    Wang, Haipeng
    He, You
    Lu, Huchuan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 5450 - 5460
  • [3] BANet: A Blur-Aware Attention Network for Dynamic Scene Deblurring
    Tsai, Fu-Jen
    Peng, Yan-Tsung
    Tsai, Chung-Chi
    Lin, Yen-Yu
    Lin, Chia-Wen
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 6789 - 6799
  • [4] Adversarial Spatio-Temporal Learning for Video Deblurring
    Zhang, Kaihao
    Luo, Wenhan
    Zhong, Yiran
    Ma, Lin
    Liu, Wei
    Li, Hongdong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (01) : 291 - 301
  • [5] Dast-Net: Depth-Aware Spatio-Temporal Network for Video Deblurring
    Zhu, Qi
    Xiao, Zeyu
    Huang, Jie
    Zhao, Feng
    2022 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME 2022), 2022
  • [6] Spatio-Temporal Filter Adaptive Network for Video Deblurring
    Zhou, Shangchen
    Zhang, Jiawei
    Pan, Jinshan
    Xie, Haozhe
    Zuo, Wangmeng
    Ren, Jimmy
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 2482 - 2491
  • [7] Adaptive Spatio-Temporal Convolutional Network for Video Deblurring
    Duan, Fengzhi
    Yao, Hongxun
    IMAGE AND GRAPHICS (ICIG 2021), PT III, 2021, 12890 : 777 - 788
  • [8] Learning Spatio-Temporal Sharpness Map for Video Deblurring
    Zhu, Qi
    Zheng, Naishan
    Huang, Jie
    Zhou, Man
    Zhang, Jinghao
    Zhao, Feng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (05) : 3957 - 3970
  • [9] Spatio-Temporal Deformable Attention Network for Video Deblurring
    Zhang, Huicong
    Xie, Haozhe
    Yao, Hongxun
    COMPUTER VISION - ECCV 2022, PT XVI, 2022, 13676 : 581 - 596
  • [10] Spatio-Temporal Deformable Attention Network for Video Deblurring
    Zhang, Huicong
    Xie, Haozhe
    Yao, Hongxun
    arXiv, 2022