Decoupled Frequency Learning for Dynamic Scene Deblurring

Cited by: 0
Authors
Liu, Tao [1 ]
Tan, Shan [1 ]
Affiliations
[1] Huazhong University of Science and Technology, School of Artificial Intelligence and Automation, Wuhan, People's Republic of China
DOI
10.1109/ICPR56361.2022.9956626
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Although end-to-end deep learning methods have recently advanced the state of the art in dynamic scene deblurring, they are often biased towards learning low-frequency (LF) information and thus miss sufficient high-frequency (HF) details. In this paper, we experimentally verify that different image frequencies affect the final deblurring quality in different ways. On this basis, we point out that the LF learning bias arises, to some extent, from the existing training scheme in which frequencies are coupled. Concretely, the current training scheme does not distinguish between frequencies but optimizes them as a whole towards one common objective, thereby producing sub-optimal results. To ameliorate this problem, we propose an alternative training strategy, namely Decoupled Frequency Learning (DFL). Specifically, DFL treats the deblurring task as two separate sub-tasks, corresponding to the image's LF and HF components, respectively. Different losses are tailor-designed for the different frequencies to better guide their learning towards appropriate objectives. The proposed DFL scheme is simple yet effective, and is compatible with any existing deep model. Extensive experiments on public benchmarks demonstrate its clear benefits over the state of the art in terms of both quantitative measures and perceptual quality.
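The abstract does not specify how the LF/HF split is computed or which losses are assigned to each component. The sketch below is only one plausible reading of the decoupled-frequency idea: it assumes an FFT low-pass mask with a hypothetical normalized cutoff radius and plain L1 losses on each component, which may differ from the authors' actual design.

```python
# Minimal PyTorch sketch of a decoupled-frequency training objective.
# Assumptions (not taken from the paper): FFT-based low-pass split,
# cutoff radius 0.1, and L1 losses on both LF and HF components.
import torch
import torch.nn.functional as F


def frequency_split(img: torch.Tensor, cutoff: float = 0.1):
    """Split an image batch (B, C, H, W) into LF and HF parts via an FFT mask."""
    _, _, H, W = img.shape
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    # Centered circular low-pass mask with normalized radius `cutoff`.
    yy, xx = torch.meshgrid(
        torch.linspace(-0.5, 0.5, H, device=img.device),
        torch.linspace(-0.5, 0.5, W, device=img.device),
        indexing="ij",
    )
    mask = ((xx ** 2 + yy ** 2).sqrt() <= cutoff).to(img.dtype)
    lf = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real
    return lf, img - lf  # HF is the residual


def decoupled_frequency_loss(pred: torch.Tensor, target: torch.Tensor,
                             w_lf: float = 1.0, w_hf: float = 1.0):
    """Apply separate (here: L1) losses to the LF and HF components."""
    pred_lf, pred_hf = frequency_split(pred)
    tgt_lf, tgt_hf = frequency_split(target)
    loss_lf = F.l1_loss(pred_lf, tgt_lf)   # assumed LF objective
    loss_hf = F.l1_loss(pred_hf, tgt_hf)   # assumed HF objective
    return w_lf * loss_lf + w_hf * loss_hf
```

In such a scheme, the backbone deblurring network is unchanged; only the training loss is replaced by a weighted sum of per-frequency terms, which is what makes the strategy compatible with existing deep models.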
Pages: 89-96
Number of pages: 8
Related Papers
50 in total
  • [1] Kim, Tae Hyun; Ahn, Byeongjoo; Lee, Kyoung Mu. Dynamic Scene Deblurring. 2013 IEEE International Conference on Computer Vision (ICCV), 2013: 3160-3167.
  • [2] Ji, Y.; Dai, Y.-P.; Hirota, K.; Shao, S. Dual learning generative adversarial network for dynamic scene deblurring. Kongzhi yu Juece/Control and Decision, 2024, 39(4): 1305-1314.
  • [3] Liu, Yuan-Yuan; Ye, Lu-Yue; Shao, Wen-Ze; Ge, Qi; Wang, Li-Qian; Bao, Bing-Kun; Li, Hai-Bo. Adversarial Representation Learning for Dynamic Scene Deblurring: A Simple, Fast and Robust Approach. 2019 IEEE International Conference on Image Processing (ICIP), 2019: 4644-4648.
  • [4] Jung, Hyungjoo; Kim, Youngjung; Jang, Hyunsung; Ha, Namkoo; Sohn, Kwanghoon. Multi-Task Learning Framework for Motion Estimation and Dynamic Scene Deblurring. IEEE Transactions on Image Processing, 2021, 30: 8170-8183.
  • [5] Li, Lerenhan; Pan, Jinshan; Lai, Wei-Sheng; Gao, Changxin; Sang, Nong; Yang, Ming-Hsuan. Dynamic Scene Deblurring by Depth Guided Model. IEEE Transactions on Image Processing, 2020, 29: 5273-5288.
  • [6] Kim, Tae Hyun; Lee, Kyoung Mu. Segmentation-Free Dynamic Scene Deblurring. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014: 2766-2773.
  • [7] Qi, Qing. Structures Guided Dynamic Scene Deblurring Method. Wireless Communications & Mobile Computing, 2022.
  • [8] Zhu, Kai; Sang, Nong. Multi-scale Deformable Deblurring Kernel Prediction for Dynamic Scene Deblurring. Image and Graphics (ICIG 2021), Part III, 2021, 12890: 253-264.
  • [9] Zhang, Tianlin; Li, Jinjiang; Fan, Hui. Progressive edge-sensing dynamic scene deblurring. Computational Visual Media, 2022, 8(3): 495-508.
  • [10] Jang, Seok-Woo; Yan, Limin; Kim, Gye-Young. Deep Supervised Attention Network for Dynamic Scene Deblurring. Sensors, 2025, 25(6).