A Multi-Attention Feature Distillation Neural Network for Lightweight Single Image Super-Resolution

Cited: 0
Authors
Zhang, Yongfei [1 ,2 ]
Lin, Xinying [1 ,3 ]
Yang, Hong [1 ]
He, Jie [4 ]
Qing, Linbo [1 ]
He, Xiaohai [1 ]
Li, Yi [5 ]
Chen, Honggang [1 ,6 ]
Affiliations
[1] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610065, Peoples R China
[2] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin 541004, Peoples R China
[3] Tianjin Univ Technol, Minist Educ, Key Lab Comp Vis & Syst, Tianjin 300384, Peoples R China
[4] Wuzhou Univ, Guangxi Key Lab Machine Vis & Intelligent Control, Wuzhou 543002, Peoples R China
[5] DI Sinma Sichuan Machinery Co Ltd, Suining 629201, Peoples R China
[6] Yunnan Univ, Yunnan Key Lab Software Engn, Kunming 650600, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SPARSE REPRESENTATION; INTERPOLATION;
DOI
10.1155/2024/3255233
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, deep convolutional neural networks (CNNs) have produced remarkable performance improvements for single image super-resolution (SISR). Nevertheless, a large proportion of CNN-based SISR models rely on deep or wide architectures with numerous network parameters and high computational complexity. How to exploit deep features more fully so as to strike a balance between model complexity and reconstruction performance remains one of the main challenges in this field. To address this problem, building on the well-known information multi-distillation model, a multi-attention feature distillation network termed MAFDN is developed for lightweight and accurate SISR. Specifically, an effective multi-attention feature distillation block (MAFDB) is designed and used as the basic feature extraction unit in MAFDN. With the help of multi-attention layers, including pixel attention, spatial attention, and channel attention, MAFDB uses multiple information distillation branches to learn more discriminative and representative features. Furthermore, MAFDB introduces a residual block (OPCRB) built on the depthwise over-parameterized convolutional layer (DO-Conv) to enhance its representational ability without any increase in parameters or computation at the inference stage. Results on commonly used datasets demonstrate that MAFDN outperforms existing representative lightweight SISR models when both reconstruction performance and model complexity are taken into consideration. For example, for x4 SR on Set5, MAFDN (597K/33.79G) obtains 0.21 dB/0.0037 and 0.10 dB/0.0015 PSNR/SSIM gains over the attention-based SR model AFAN (692K/50.90G) and the feature distillation-based SR model DDistill-SR (675K/32.83G), respectively.
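The abstract's two key ingredients, channel-splitting information distillation and attention-based feature reweighting, can be sketched in plain NumPy as follows. This is a minimal illustrative sketch, not the authors' MAFDB: the function names, the 0.25 distillation ratio, the three-stage chain, and the squeeze-and-excitation-style channel attention are all assumptions chosen for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) feature map."""
    pooled = feat.mean(axis=(1, 2))                     # global average pool -> (C,)
    weights = sigmoid(w2 @ np.maximum(w1 @ pooled, 0))  # two FC layers + sigmoid -> (C,)
    return feat * weights[:, None, None]                # rescale each channel

def pixel_attention(feat, w):
    """Pixel attention: a 1x1 projection yields a per-pixel gating map."""
    att = sigmoid(np.tensordot(w, feat, axes=([0], [0])))  # (H, W) attention map
    return feat * att[None, :, :]                          # gate every channel per pixel

def distill_split(feat, keep_ratio=0.25):
    """One information distillation step: keep a slice of channels (the
    'distilled' features) and pass the remainder on for further refinement."""
    k = int(feat.shape[0] * keep_ratio)
    return feat[:k], feat[k:]

def distillation_block(feat, stages=3, keep_ratio=0.25):
    """Chain several distillation steps and concatenate all distilled slices,
    mimicking the multi-branch structure of a feature distillation block."""
    kept, rest = [], feat
    for _ in range(stages):
        d, rest = distill_split(rest, keep_ratio)
        kept.append(d)
    kept.append(rest)  # the final refined features are kept as well
    return np.concatenate(kept, axis=0)
```

In a real distillation block each branch would apply learned convolutions before splitting, and the concatenated output would pass through the attention layers and a 1x1 fusion convolution; the sketch only shows the channel bookkeeping that keeps such blocks lightweight.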
Pages: 14
Related Papers
50 records in total
  • [31] Multi-scale convolutional attention network for lightweight image super-resolution
    Xie, Feng
    Lu, Pei
    Liu, Xiaoyong
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2023, 95
  • [32] Multi-scale feature selection network for lightweight image super-resolution
    Li, Minghong
    Zhao, Yuqian
    Zhang, Fan
    Luo, Biao
    Yang, Chunhua
    Gui, Weihua
    Chang, Kan
    NEURAL NETWORKS, 2024, 169 : 352 - 364
  • [33] Lightweight interactive feature inference network for single-image super-resolution
    Wang, Li
    Li, Xing
    Tian, Wei
    Peng, Jianhua
    Chen, Rui
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [34] A Lightweight Pyramid Feature Fusion Network for Single Image Super-Resolution Reconstruction
    Liu, Bingzan
    Ning, Xin
    Ma, Shichao
    Lian, Xiaobin
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 1575 - 1579
  • [35] Image Super-Resolution via Lightweight Attention-Directed Feature Aggregation Network
    Wang, Li
    Li, Ke
    Tang, Jingjing
    Liang, Yuying
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19 (02)
  • [36] Feature Fusion Attention Network for Image Super-resolution
    Zhou D.-W.
    Ma L.-Y.
    Tian J.-Y.
    Sun X.-X.
    Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48 (09): : 2233 - 2241
  • [37] MRDN: A lightweight Multi-stage residual distillation network for image Super-Resolution
    Yang, Xin
    Guo, Yingqing
    Li, Zhiqiang
    Zhou, Dake
    Li, Tao
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 204
  • [38] Lightweight global-locally connected distillation network for single image super-resolution
    Zeng, Cong
    Li, Guangyao
    Chen, Qiaochuan
    Xiao, Qingguo
    APPLIED INTELLIGENCE, 2022, 52 (15) : 17797 - 17809
  • [40] FADLSR: A Lightweight Super-Resolution Network Based on Feature Asymmetric Distillation
    Yang, Xin
    Li, Hengrui
    Jian, Hanying
    Li, Tao
    CIRCUITS SYSTEMS AND SIGNAL PROCESSING, 2023, 42 : 2149 - 2168