Image super-resolution reconstruction using Swin Transformer with efficient channel attention networks

Cited: 0
Authors
Sun, Zhenxi [1 ,2 ]
Zhang, Jin [1 ,2 ,3 ]
Chen, Ziyi [1 ,2 ]
Hong, Lu [1 ,2 ]
Zhang, Rui [1 ,2 ]
Li, Weishi [1 ,2 ,3 ]
Xia, Haojie [1 ,2 ,3 ]
Affiliations
[1] Hefei Univ Technol, Sch Instrument Sci & Optoelect Engn, Hefei 230009, Peoples R China
[2] Anhui Prov Key Lab Measuring Theory & Precis Instr, Hefei 230009, Peoples R China
[3] Minist Educ, Engn Res Ctr Safety Crit Ind Measurement & Control, Hefei 230009, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Image super-resolution; Swin Transformer; Efficient channel attention; Multi-attention fusion;
DOI
10.1016/j.engappai.2024.108859
CLC number (Chinese Library Classification)
TP [automation technology; computer technology];
Discipline code
0812;
Abstract
Image super-resolution reconstruction (SR) is an important ill-posed problem in low-level vision, which aims to reconstruct high-resolution images from low-resolution images. Although current state-of-the-art methods exhibit impressive performance, their recovery of image detail and edge information is still unsatisfactory. To address this problem, this paper proposes a shifted window Transformer (Swin Transformer) with an efficient channel attention network (S-ECAN), which fuses convolution-based channel attention with the self-attention of the Swin Transformer to exploit the advantages of both, focusing on learning the high-frequency features of images. In addition, to address the problem that Convolutional Neural Network (CNN) based channel attention consumes a large number of parameters to achieve good performance, this paper proposes the Efficient Channel Attention Block (ECAB), which involves only a handful of parameters while bringing a clear performance gain. Extensive experimental validation shows that the proposed model can recover more high-frequency details and texture information. The model is validated on the Set5, Set14, B100, Urban100, and Manga109 datasets, where it outperforms state-of-the-art methods by 0.03-0.13 dB, 0.04-0.09 dB, 0.01-0.06 dB, 0.13-0.20 dB, and 0.06-0.17 dB respectively in terms of objective metrics. Ultimately, the substantial performance gains and enhanced visual results over prior art validate the effectiveness and competitiveness of the proposed approach, which achieves an improved performance-complexity trade-off.
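The efficient channel attention described in the abstract follows the general ECA idea of reweighting feature channels with very few parameters: a global average pool produces one descriptor per channel, a small 1-D convolution across neighbouring channels produces a gate, and each channel is rescaled by its gate. The sketch below is a minimal NumPy illustration of that mechanism, not the paper's actual ECAB; the function name, the fixed averaging kernel (learned in practice), and the kernel size `k` are assumptions for illustration.

```python
import numpy as np

def eca_attention(x, k=3):
    """ECA-style channel attention sketch for a feature map x of shape (C, H, W).

    Only k weights are involved (here a fixed averaging kernel standing in
    for the learned 1-D convolution), which is why this style of attention
    is parameter-efficient compared with fully connected channel attention.
    """
    c, h, w = x.shape
    # 1) Channel descriptor: global average pooling over spatial dims -> (C,)
    desc = x.mean(axis=(1, 2))
    # 2) 1-D convolution across neighbouring channels (k parameters in total)
    weights = np.ones(k) / k              # placeholder kernel; learned in practice
    pad = k // 2
    padded = np.pad(desc, pad, mode="edge")
    conv = np.array([np.dot(padded[i:i + k], weights) for i in range(c)])
    # 3) Sigmoid gate in (0, 1), then rescale each channel of the input
    gate = 1.0 / (1.0 + np.exp(-conv))
    return x * gate[:, None, None]
```

Because the gate is computed from a local neighbourhood of channels rather than a full channel-mixing layer, the parameter count stays constant in the number of channels, matching the abstract's claim that ECAB needs only a handful of parameters.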
Pages: 10
Related papers
50 records in total
  • [41] Super-resolution image reconstruction using multisensors
    Ching, WK
    Ng, MK
    Sze, KN
    Yau, AC
    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, 2005, 12 (2-3) : 271 - 281
  • [42] LCFormer: linear complexity transformer for efficient image super-resolution
    Gao, Xiang
    Wu, Sining
    Zhou, Ying
    Wang, Fan
    Hu, Xiaopeng
    MULTIMEDIA SYSTEMS, 2024, 30 (04)
  • [43] A Residual Network with Efficient Transformer for Lightweight Image Super-Resolution
    Yan, Fengqi
    Li, Shaokun
    Zhou, Zhiguo
    Shi, Yonggang
    ELECTRONICS, 2024, 13 (01)
  • [44] Efficient image super-resolution based on transformer with bidirectional interaction
    Gendy, Garas
    He, Guanghui
    Sabor, Nabil
    APPLIED SOFT COMPUTING, 2024, 165
  • [45] Lightweight network with masks for light field image super-resolution based on swin attention
    Wang, Xingzheng
    Wu, Shaoyong
    Li, Jiahui
    Wu, Jianbin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (33) : 79785 - 79804
  • [46] Super-Resolution Magnetic Resonance Imaging Reconstruction Using Deep Attention Networks
    He, Xiuxiu
    Lei, Yang
    Fu, Yabo
    Mao, Hui
    Curran, Walter J.
    Liu, Tian
    Yang, Xiaofeng
    MEDICAL IMAGING 2020: IMAGE PROCESSING, 2021, 11313
  • [47] Image super-resolution reconstruction based on dynamic attention network
    Zhao X.-Q.
    Wang Z.
    Song Z.-Y.
    Jiang H.-M.
    Zhejiang Daxue Xuebao (Gongxue Ban)/Journal of Zhejiang University (Engineering Science), 2023, 57 (08): : 1487 - 1494
  • [48] Structured Fusion Attention Network for Image Super-Resolution Reconstruction
    Dai, Yaonan
    Yu, Jiuyang
    Hu, Tianhao
    Lu, Yang
    Zheng, Xiaotao
    IEEE ACCESS, 2022, 10 : 31896 - 31906
  • [50] Image Super-Resolution Using Dilated Window Transformer
    Park, Soobin
    Choi, Yong Suk
    IEEE ACCESS, 2023, 11 (60028-60039): : 60028 - 60039