Image Super-Resolution Reconstruction Based on the Lightweight Hybrid Attention Network

Cited: 0
Authors
Chu, Yuezhong [1 ]
Wang, Kang [1 ]
Zhang, Xuefeng [1 ]
Heng, Liu [1 ]
Affiliations
[1] Anhui Univ Technol, Sch Comp Sci & Technol, Maanshan 243032, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
image super-resolution; large kernel attention; multiscale self-attention; transformer;
DOI
10.1155/2024/2293286
CLC Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808; 0809;
Abstract
To address the excessive parameter counts and high computational complexity of current image super-resolution models, this paper proposes a lightweight hybrid attention network (LHAN). LHAN consists of three parts: shallow feature extraction, a lightweight hybrid attention block (LHAB), and an upsampling module. LHAB combines multiscale self-attention with large kernel attention. To keep the network lightweight, the multiscale self-attention block (MSSAB) improves the self-attention mechanism by computing attention in groups over windows of different sizes. Meanwhile, in the large kernel attention branch, depthwise separable convolutions reduce the number of parameters, and a normal convolution together with a dilated convolution replaces the large kernel convolution while keeping the receptive field unchanged. Experimental results for ×4 super-resolution on five datasets, including Set5 and Set14, show that the proposed method performs well in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Specifically, on the Urban100 benchmark dataset, the PSNR of our method is 0.10 dB higher than that of SwinIR, while the parameter count and computational cost (floating-point operations, FLOPs) are reduced by 315K and 16.4 G, respectively. The proposed LHAN thus reduces both parameters and computation while achieving excellent reconstruction quality.
Pages: 13
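
The abstract's replacement of a large kernel convolution by a normal convolution plus a dilated convolution, built from depthwise separable layers, follows the decomposition popularized by large kernel attention. Below is a minimal PyTorch sketch of that idea; the module name LargeKernelAttention and the specific sizes (a 5x5 depthwise convolution, a 7x7 depthwise convolution with dilation 3, and a 1x1 pointwise convolution) are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    # Sketch only: a dense large kernel is replaced by a small depthwise
    # conv, a depthwise dilated conv, and a 1x1 pointwise conv, keeping a
    # comparable receptive field with far fewer parameters.
    def __init__(self, channels: int):
        super().__init__()
        # 5x5 depthwise conv gathers local context.
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        # 7x7 depthwise conv with dilation 3 spans 19 pixels, extending the
        # receptive field without a dense large kernel.
        self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                    dilation=3, groups=channels)
        # 1x1 conv mixes channels (the separable part of the design).
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn  # the learned map reweights the input features

if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)            # N, C, H, W feature map
    print(LargeKernelAttention(64)(feats).shape)  # torch.Size([1, 64, 32, 32])

Under these assumed sizes, the two depthwise layers cost 25C + 49C weights plus a C^2 pointwise layer, versus roughly 441C^2 for a dense 21x21 convolution over C channels, which illustrates why this style of decomposition is so much cheaper than a dense large kernel.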