Efficient Video Deblurring Guided by Motion Magnitude

Cited by: 12
Authors
Wang, Yusheng [1 ]
Lu, Yunfan [2 ]
Gao, Ye [4 ]
Wang, Lin [2 ,3 ]
Zhong, Zhihang [1 ]
Zheng, Yinqiang [1 ]
Yamashita, Atsushi [1 ]
Institutions
[1] Univ Tokyo, Bunkyo Ku, Tokyo, Japan
[2] HKUST Guangzhou, Informat Hub, AI Thrust, Guangzhou, Peoples R China
[3] HKUST, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[4] Tokyo Res Ctr, Meguro, Japan
Keywords
Blur estimation; Motion magnitude; Video deblurring
DOI
10.1007/978-3-031-19800-7_24
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Video deblurring is a highly under-constrained problem due to spatially and temporally varying blur. An intuitive approach to video deblurring has two steps: a) detect the blurry regions in the current frame; b) exploit information from clear regions in adjacent frames to deblur the current frame. To realize this process, our idea is to detect the pixel-wise blur level of each frame and combine it with video deblurring. To this end, we propose a novel framework that uses a motion magnitude prior (MMP) as guidance for efficient deep video deblurring. Specifically, since the movement of a pixel along its trajectory during the exposure time is positively correlated with the level of motion blur, we first use the average magnitude of optical flow computed over high-frame-rate sharp frames to generate synthetic blurry frames together with their corresponding pixel-wise motion magnitude maps. We then build a dataset of blurry-frame and MMP pairs, and the MMP is learned by regression with a compact CNN. The MMP encodes both spatial and temporal blur level information and can be integrated into an efficient recurrent neural network (RNN) for video deblurring. We conduct extensive experiments on public datasets to validate the effectiveness of the proposed method. Our code is available at https://github.com/sollynoay/MMP-RNN.
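The MMP construction described above (averaging per-pixel optical-flow magnitudes across the high-frame-rate sharp frames that are merged into one synthetic blurry frame) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name `motion_magnitude_prior` and the random toy flows are assumptions, and real flow fields would come from an optical-flow estimator run on consecutive sharp frames.

```python
import numpy as np

def motion_magnitude_prior(flows):
    """Average per-pixel optical-flow magnitude across consecutive
    sharp-frame pairs; larger values indicate stronger motion blur
    in the synthetic blurry frame they average into."""
    # flows: list of (H, W, 2) arrays, one flow field per frame pair.
    mags = [np.linalg.norm(f, axis=-1) for f in flows]  # (H, W) each
    mmp = np.mean(mags, axis=0)                         # temporal average
    # Normalize to [0, 1) so the map can act as a soft guidance mask.
    return mmp / (mmp.max() + 1e-8)

# Toy example: 7 high-frame-rate sharp frames -> 6 flow fields.
rng = np.random.default_rng(0)
flows = [rng.normal(size=(64, 64, 2)) for _ in range(6)]
mmp = motion_magnitude_prior(flows)  # (64, 64) map, values in [0, 1)
```

In the paper's pipeline, such maps would serve as regression targets for the compact CNN, whose predictions then guide the deblurring RNN.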
Pages: 413-429 (17 pages)
Related Papers (50 total)
  • [1] Efficient generative model for motion deblurring
    Xiang, Han
    Sang, Haiwei
    Sun, Lilei
    Zhao, Yong
JOURNAL OF ENGINEERING-JOE, 2020, 2020 (13): 491-494
  • [2] Flow-Guided Sparse Transformer for Video Deblurring
    Lin, Jing
    Cai, Yuanhao
    Hu, Xiaowan
    Wang, Haoqian
    Yan, Youliang
    Zou, Xueyi
    Ding, Henghui
    Zhang, Yulun
    Timofte, Radu
    Van Gool, Luc
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [3] Soft-Segmentation Guided Object Motion Deblurring
    Pan, Jinshan
    Hu, Zhe
    Su, Zhixun
    Lee, Hsin-Ying
    Yang, Ming-Hsuan
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016: 459-468
  • [4] AGGREGATED DILATED CONVOLUTIONS FOR EFFICIENT MOTION DEBLURRING
    Miao, Hong
    Zhang, Wenqiang
    Bai, Jiansong
    2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2018,
  • [5] Combining Motion Compensation with Spatiotemporal Constraint for Video Deblurring
    Li, Jing
    Gong, Weiguo
    Li, Weihong
    SENSORS, 2018, 18 (06)
  • [6] A Video Deblurring Optimization Algorithm Based on Motion Detection
    Zhang, Yinan
    He, Jing
    Yuan, Jie
PROCEEDINGS OF 3RD INTERNATIONAL CONFERENCE ON MULTIMEDIA TECHNOLOGY (ICMT-13), 2013, 84: 1069-1076
  • [7] Three-stage motion deblurring from a video
    Ren, Chunjian
    Chen, Wenbin
    Shen, I-fan
COMPUTER VISION - ACCV 2007, PT II, PROCEEDINGS, 2007, 4844: 53-+
  • [8] An efficient motion deblurring based on FPSF and clustering
    Huang, Hui-Yu
    Tsai, Wei-Chang
MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2019, 16 (05): 4036-4052
  • [9] Video deblurring via motion compensation and adaptive information fusion
    Zhan, Zongqian
    Yang, Xue
    Li, Yihui
    Pang, Chao
NEUROCOMPUTING, 2019, 341: 88-98
  • [10] EFFICIENT MOTION DEBLURRING WITH FEATURE TRANSFORMATION AND SPATIAL ATTENTION
    Purohit, Kuldeep
    Rajagopalan, A. N.
2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019: 4674-4678