Lightweight image super-resolution with the adaptive weight learning network

Cited by: 0
|
Authors
Zhang Y. [1 ]
Cheng P. [1 ]
Zhang S. [1 ]
Wang X. [2 ]
Affiliations
[1] School of Electro-Mechanical Engineering, Xidian University, Xi'an
[2] School of Electronic Engineering, Xidian University, Xi'an
Keywords
Adaptive weight; Convolutional neural networks; Deep learning; Lightweight; Super-resolution;
DOI
10.19665/j.issn1001-2400.2021.05.003
Abstract
In recent years, single-image super-resolution (SISR) methods based on deep convolutional neural networks (CNNs) have achieved remarkable results. The Pixel Attention Network (PAN) is one of the most advanced lightweight super-resolution methods, achieving good reconstruction performance with a very small number of parameters. However, PAN is constrained by the parameter budget of each module, which leads to slow model training and strict training conditions. To address these problems, this paper proposes a Lightweight Adaptive Weight learning Network (LAWN) for image super-resolution. The network stacks multiple adaptive weight modules to form a non-linear mapping network, with each module extracting feature information at a different level. Within each adaptive weight module, the network employs an attention branch and a non-attention branch to extract the corresponding information, and an adaptive weight fusion branch then integrates the two. Splitting and fusing the two branches with a specific convolutional layer greatly reduces the number of parameters in the attention and non-attention branches, helping the network strike a balance between parameter count and performance. Quantitative evaluations on benchmark datasets demonstrate that the proposed LAWN reduces the number of model parameters while performing favorably against state-of-the-art methods in terms of both PSNR and SSIM. Experimental results show that the method reconstructs more accurate texture details, and qualitative evaluations with better visual quality confirm its effectiveness. © 2021, The Editorial Board of Journal of Xidian University. All rights reserved.
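The adaptive weight fusion described in the abstract, where an attention branch and a non-attention branch are blended by a learned weight, can be illustrated with a minimal sketch. This is a hypothetical illustration of the general idea, not the authors' implementation: `alpha` stands in for a learned parameter, and the feature maps are reduced to toy lists.

```python
import math

def sigmoid(x):
    """Standard logistic function, keeping the fusion weight in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def adaptive_weight_fusion(attn_feat, plain_feat, alpha):
    """Fuse two branch outputs with a learned scalar weight.

    w = sigmoid(alpha) interpolates between the attention branch
    (w -> 1) and the non-attention branch (w -> 0). In the actual
    network this fusion would operate on convolutional feature maps;
    here toy lists stand in for them.
    """
    w = sigmoid(alpha)
    return [w * a + (1.0 - w) * p for a, p in zip(attn_feat, plain_feat)]

# Toy feature vectors standing in for the two branch outputs.
attn = [1.0, 2.0, 3.0]
plain = [3.0, 2.0, 1.0]

# alpha = 0 gives w = 0.5, i.e. the elementwise mean of the branches.
print(adaptive_weight_fusion(attn, plain, 0.0))  # → [2.0, 2.0, 2.0]
```

As `alpha` is trained, each module can learn how much to rely on its attention branch, which is one way such a fusion can trade parameters against reconstruction quality.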
Pages: 15-22
Number of pages: 7