FIFNET: A convolutional neural network for motion-based multiframe super-resolution using fusion of interpolated frames

Cited: 8
Authors
Elwarfalli, Hamed [1 ]
Hardie, Russell C. [1 ]
Affiliations
[1] University of Dayton, Department of Electrical and Computer Engineering, 300 College Park, Dayton, OH 45469 USA
Keywords
Multiframe super-resolution; Convolutional neural network; Fusion of interpolated frames; Image restoration; Subpixel registration
DOI
10.1016/j.cviu.2020.103097
CLC Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We present a novel motion-based multiframe image super-resolution (SR) algorithm using a convolutional neural network (CNN) that fuses multiple interpolated input frames to produce an SR output. We refer to the proposed CNN and associated preprocessing as the Fusion of Interpolated Frames Network (FIFNET). We believe this is the first such CNN approach in the literature to perform motion-based multiframe SR by fusing multiple input frames in a single network. We study the FIFNET using translational interframe motion with both fixed and random frame shifts. The input to the network is a sequence of interpolated and aligned frames. One key innovation is that we compute subpixel interframe registration information for each interpolated pixel and feed this into the network as additional input channels. We demonstrate that this subpixel registration information is critical to network performance. We also employ a realistic camera-specific optical transfer function model that accounts for diffraction and detector integration when generating training data. We present a number of experimental results to demonstrate the efficacy of the proposed FIFNET using both simulated and real camera data. The real data come directly from a camera and are not artificially downsampled or degraded. In the quantitative results with simulated data, we show that the FIFNET performs favorably in comparison to the benchmark methods tested.
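The abstract describes stacking interpolated, aligned input frames together with per-pixel subpixel registration channels as the network input. Below is a minimal, hypothetical Python/NumPy sketch of that kind of preprocessing; the function name build_fusion_input, the use of scipy.ndimage for interpolation and alignment, and the encoding of each frame's fractional shift as constant registration channels are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of FIFNET-style input preparation: each low-resolution
# (LR) frame is interpolated to the high-resolution (HR) grid, aligned using
# its known translational shift, and stacked with extra channels carrying
# subpixel registration information.
import numpy as np
from scipy.ndimage import zoom, shift as nd_shift

def build_fusion_input(lr_frames, shifts, scale):
    """Stack interpolated, aligned frames plus registration channels.

    lr_frames : list of 2-D arrays, the LR input frames
    shifts    : list of (dy, dx) subpixel shifts of each frame relative to
                the reference frame, in LR pixel units (assumed known or
                estimated by a separate registration step)
    scale     : integer upsampling factor
    """
    channels = []
    reg_channels = []
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # Interpolate the LR frame onto the HR grid (cubic interpolation).
        hr = zoom(frame, scale, order=3)
        # Align the interpolated frame by undoing its translational shift,
        # scaled to HR pixel units.
        aligned = nd_shift(hr, (-dy * scale, -dx * scale), order=3, mode="nearest")
        channels.append(aligned)
        # One plausible registration encoding (an assumption): broadcast the
        # residual fractional shift of this frame as two constant channels.
        h, w = aligned.shape
        reg_channels.append(np.full((h, w), dy % 1.0, dtype=np.float32))
        reg_channels.append(np.full((h, w), dx % 1.0, dtype=np.float32))
    # Network input: interpolated frames followed by registration channels.
    return np.stack(channels + reg_channels, axis=-1).astype(np.float32)

# Example usage with synthetic data: 4 LR frames of size 32x32, 3x SR.
rng = np.random.default_rng(0)
frames = [rng.random((32, 32)) for _ in range(4)]
shifts = [(0.0, 0.0), (0.25, 0.5), (0.5, 0.75), (0.75, 0.25)]
x = build_fusion_input(frames, shifts, scale=3)
print(x.shape)  # (96, 96, 12): 4 image channels + 8 registration channels

In the paper's pipeline, training data would additionally be blurred with a camera-specific optical transfer function accounting for diffraction and detector integration before downsampling; that step is omitted from this sketch.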
Pages: 11