Image super-resolution reconstruction using Swin Transformer with efficient channel attention networks

Cited: 0
Authors
Sun, Zhenxi [1 ,2 ]
Zhang, Jin [1 ,2 ,3 ]
Chen, Ziyi [1 ,2 ]
Hong, Lu [1 ,2 ]
Zhang, Rui [1 ,2 ]
Li, Weishi [1 ,2 ,3 ]
Xia, Haojie [1 ,2 ,3 ]
Affiliations
[1] Hefei Univ Technol, Sch Instrument Sci & Optoelect Engn, Hefei 230009, Peoples R China
[2] Anhui Prov Key Lab Measuring Theory & Precis Instr, Hefei 230009, Peoples R China
[3] Minist Educ, Engn Res Ctr Safety Crit Ind Measurement & Control, Hefei 230009, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Image super-resolution; Swin Transformer; Efficient channel attention; Multi-attention fusion;
DOI
10.1016/j.engappai.2024.108859
CLC Number
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
Image super-resolution reconstruction (SR) is an important ill-posed problem in low-level vision that aims to reconstruct high-resolution images from low-resolution inputs. Although current state-of-the-art methods achieve impressive performance, their recovery of fine image detail and edge information remains unsatisfactory. To address this problem, this paper proposes a shifted window Transformer (Swin Transformer) with an efficient channel attention network (S-ECAN), which fuses convolution-based channel attention with the self-attention of the Swin Transformer, exploiting the advantages of both while focusing on learning the high-frequency features of images. In addition, to address the problem that Convolutional Neural Network (CNN) based channel attention consumes a large number of parameters to achieve good performance, this paper proposes the Efficient Channel Attention Block (ECAB), which involves only a handful of parameters while bringing a clear performance gain. Extensive experimental validation shows that the proposed model recovers more high-frequency details and texture information. On the Set5, Set14, B100, Urban100, and Manga109 datasets, it outperforms state-of-the-art methods by 0.03-0.13 dB, 0.04-0.09 dB, 0.01-0.06 dB, 0.13-0.20 dB, and 0.06-0.17 dB respectively in terms of objective metrics. Ultimately, the substantial performance gains and enhanced visual results over prior arts validate the effectiveness and competitiveness of the proposed approach, which achieves an improved performance-complexity trade-off.
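The record does not describe the internals of the ECAB. As an illustration only: the abstract's claim that the block "involves only a handful of parameters" is consistent with an ECA-Net-style design, in which global average pooling is followed by a small 1D convolution across the channel dimension. The PyTorch sketch below assumes that design; the class name ECABlock and the kernel_size parameter are hypothetical and not taken from the paper.

import torch
import torch.nn as nn

class ECABlock(nn.Module):
    # Hypothetical ECA-style channel attention: the whole block learns only
    # `kernel_size` weights, independent of the channel count C.
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        # One 1D convolution shared across channels for local cross-channel interaction.
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the backbone.
        y = x.mean(dim=(2, 3))                  # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1))           # treat channels as a sequence -> (B, 1, C)
        w = self.sigmoid(y).squeeze(1)          # per-channel weights in (0, 1) -> (B, C)
        return x * w.view(x.size(0), -1, 1, 1)  # reweight each channel of x

# Usage: channel attention over a 64-channel feature map.
feats = torch.randn(2, 64, 32, 32)
out = ECABlock(kernel_size=3)(feats)
assert out.shape == feats.shape

Under this assumption the block adds only kernel_size learnable weights regardless of channel count, which is what makes this family of channel-attention designs far cheaper than squeeze-and-excitation-style blocks built from fully connected layers.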
Pages: 10
Related Papers
50 records in total
  • [21] Non-local sparse attention based swin transformer V2 for image super-resolution
    Lv, Ningning
    Yuan, Min
    Xie, Yufei
    Zhan, Kun
    Lu, Fuxiang
    SIGNAL PROCESSING, 2024, 222
  • [23] Image super-resolution via channel attention and spatial attention
    Lu, Enmin
    Hu, Xiaoxiao
    APPLIED INTELLIGENCE, 2022, 52 (02) : 2260 - 2268
  • [24] Steformer: Efficient Stereo Image Super-Resolution With Transformer
    Lin, Jianxin
    Yin, Lianying
    Wang, Yijun
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8396 - 8407
  • [25] Efficient mixed transformer for single image super-resolution
    Zheng, Ling
    Zhu, Jinchen
    Shi, Jinpeng
    Weng, Shizhuang
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [26] ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution
    Zhang, Mingjin
    Zhang, Chi
    Zhang, Qiming
    Guo, Jie
    Gao, Xinbo
    Zhang, Jing
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 23016 - 23027
  • [27] HADT: Image super-resolution restoration using Hybrid Attention-Dense Connected Transformer Networks
    Guo, Ying
    Tian, Chang
    Liu, Jie
    Di, Chong
    Ning, Keqing
NEUROCOMPUTING, 2025, 614
  • [28] Single image super-resolution reconstruction based on split-attention networks
    Peng, Yanfei
    Liu, Lanxi
    Wang, Gang
    Meng, Xin
    Li, Yongxin
    CHINESE JOURNAL OF LIQUID CRYSTALS AND DISPLAYS, 2024, 39 (07) : 950 - 960
  • [30] Reference Image Guided Super-Resolution via Progressive Channel Attention Networks
    Yue, Huan-Jing
    Shen, Sheng
    Yang, Jing-Yu
    Hu, Hao-Feng
    Chen, Yan-Fang
    JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2020, 35 (03) : 551 - 563