Video super-resolution with inverse recurrent net and hybrid local fusion

Cited by: 8
Authors
Li, Dingyi [1 ,2 ]
Wang, Zengfu [3 ,4 ]
Yang, Jian [1 ,2 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, PCA Lab, Minist Educ, Key Lab Intelligent Percept & Syst High Dimens In, Nanjing 210094, Jiangsu, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Jiangsu Key Lab Image & Video Understanding Socia, Nanjing 210094, Jiangsu, Peoples R China
[3] Chinese Acad Sci, Inst Intelligent Machines, Hefei 230031, Anhui, Peoples R China
[4] Univ Sci & Technol China, Dept Automation, Hefei 230027, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Video super-resolution; Bidirectional recurrent convolutional neural network; Sliding-window; Local fusion; IMAGE SUPERRESOLUTION;
DOI
10.1016/j.neucom.2022.03.019
CLC number
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Video super-resolution converts low-resolution videos into sharp high-resolution ones. To make better use of temporal information in video super-resolution, we design an inverse recurrent net and hybrid local fusion. We concatenate the original low-resolution input sequence and its inverse sequence repeatedly. The new sequence is viewed as a combination of different stages and is processed sequentially by a recurrent net. The outputs of the last two stages, which run in opposite directions, are fused to generate the final images. Our inverse recurrent net extracts more bidirectional temporal information from the input sequence without adding parameters to the corresponding unidirectional recurrent net. We also propose a hybrid local fusion method that uses parallel fusion and cascade fusion to incorporate sliding-window-based methods into our inverse recurrent net. Extensive experimental results demonstrate the effectiveness of the proposed inverse recurrent net and hybrid local fusion, in terms of visual quality and quantitative evaluations. The code will be released at https://github.com/5ofwind. (c) 2022 Elsevier B.V. All rights reserved.
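
As a rough illustration of the staged processing described in the abstract, the following minimal Python/NumPy sketch builds the repeated forward/inverse sequence, runs a single shared recurrent step over every stage, and fuses the outputs of the last two (opposite-direction) stages. The helper names (step_fn, fuse_fn, num_stages) and the dummy cell are assumptions for illustration only, not the authors' released implementation (see https://github.com/5ofwind for their code).

import numpy as np

def run_inverse_recurrent_net(frames, step_fn, fuse_fn, num_stages=4):
    """Process the low-resolution frames over the repeated sequence
    [frames, reversed(frames), frames, ...] with one shared recurrent cell,
    then fuse the per-frame outputs of the last two (opposite-direction) stages."""
    hidden = None
    stage_outputs = []
    for s in range(num_stages):
        forward = (s % 2 == 0)
        stage = frames if forward else frames[::-1]   # alternate original / inverse order
        outputs = []
        for frame in stage:
            hidden, out = step_fn(frame, hidden)      # same weights reused in every stage
            outputs.append(out)
        if not forward:                               # re-align reversed stage to forward order
            outputs = outputs[::-1]
        stage_outputs.append(outputs)
    prev_stage, last_stage = stage_outputs[-2], stage_outputs[-1]
    return [fuse_fn(a, b) for a, b in zip(prev_stage, last_stage)]

# Toy usage with dummy "recurrent cell" and "fusion" functions (no real upscaling).
def dummy_step(frame, hidden):
    hidden = frame if hidden is None else 0.5 * (hidden + frame)
    return hidden, hidden

def dummy_fuse(a, b):
    return 0.5 * (a + b)

frames = [np.random.rand(8, 8) for _ in range(5)]     # 5 low-resolution frames
sr_frames = run_inverse_recurrent_net(frames, dummy_step, dummy_fuse)
print(len(sr_frames), sr_frames[0].shape)
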
Pages: 40-51
Number of pages: 12