Multi-attention fusion transformer for single-image super-resolution

Cited: 0
Authors
Li, Guanxing [1]
Cui, Zhaotong [1]
Li, Meng [1]
Han, Yu [1]
Li, Tianping [1]
Affiliations
[1] Shandong Normal Univ, Sch Phys & Elect, Jinan, Shandong, Peoples R China
Source
SCIENTIFIC REPORTS, 2024, Vol. 14, No. 1
Funding
National Natural Science Foundation of China
Keywords
Super-resolution; Attention mechanism; Transformer; MAFT; Multi-attention fusion;
DOI
10.1038/s41598-024-60579-5
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Recently, Transformer-based methods have gained prominence in image super-resolution (SR), addressing the challenge of long-range dependency through cross-layer connectivity and local attention mechanisms. However, analysis of these networks with local attribution maps reveals that they make limited use of the spatial extent of the input information. To unlock the inherent potential of Transformers for image SR, we propose the Multi-Attention Fusion Transformer (MAFT), a model that integrates multiple attention mechanisms to expand the number and range of pixels activated during image reconstruction, thereby making more effective use of the input information space. At the core of the model lie the Multi-Attention Adaptive Integration Groups, which move from dense local attention to sparse global attention through alternately connected Local Attention Aggregation and Global Attention Aggregation blocks, effectively broadening the network's receptive field. The effectiveness of the proposed algorithm is validated through comprehensive quantitative and qualitative experiments on benchmark datasets. Compared with state-of-the-art methods (e.g., HAT), MAFT achieves a 0.09 dB gain on the Urban100 dataset for the ×4 SR task while using 32.55% fewer parameters and 38.01% fewer FLOPs.
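The abstract describes MAFT's core mechanism only at a high level: groups that alternate dense local attention with sparse global attention to widen the receptive field. The PyTorch sketch below illustrates one plausible reading of that scheme, a residual group that alternates window-based local attention with strided (sparse) global attention. The window size, sampling stride, head count, block depth, channel width, and all block internals are illustrative assumptions, not the authors' Local/Global Attention Aggregation designs.

# Minimal sketch of a group alternating dense local-window attention with
# sparse strided global attention. Hyperparameters (window 8, stride 8,
# 4 heads, depth 4, 60 channels) are illustrative assumptions only.
import torch
import torch.nn as nn


class WindowLocalAttention(nn.Module):
    """Dense self-attention inside non-overlapping windows (local aggregation)."""

    def __init__(self, dim, window=8, heads=4):
        super().__init__()
        self.window = window
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (B, H, W, C); H, W divisible by the window size
        B, H, W, C = x.shape
        w = self.window
        t = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
        t = t.reshape(-1, w * w, C)                 # one token sequence per window
        t = self.norm(t)
        out, _ = self.attn(t, t, t)
        out = out.reshape(B, H // w, W // w, w, w, C).permute(0, 1, 3, 2, 4, 5)
        return x + out.reshape(B, H, W, C)          # residual connection


class SparseGlobalAttention(nn.Module):
    """Self-attention over strided grids of tokens that span the whole image."""

    def __init__(self, dim, stride=8, heads=4):
        super().__init__()
        self.stride = stride
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (B, H, W, C); H, W divisible by the stride
        B, H, W, C = x.shape
        s = self.stride
        # Pixels sharing the same offset modulo the stride form one sparse grid,
        # so every attention sequence covers the full spatial extent.
        t = x.view(B, H // s, s, W // s, s, C).permute(0, 2, 4, 1, 3, 5)
        t = t.reshape(-1, (H // s) * (W // s), C)
        t = self.norm(t)
        out, _ = self.attn(t, t, t)
        out = out.reshape(B, s, s, H // s, W // s, C).permute(0, 3, 1, 4, 2, 5)
        return x + out.reshape(B, H, W, C)          # residual connection


class AlternatingAttentionGroup(nn.Module):
    """Residual group that alternates local and global attention blocks."""

    def __init__(self, dim, depth=4):
        super().__init__()
        self.blocks = nn.Sequential(*[
            WindowLocalAttention(dim) if i % 2 == 0 else SparseGlobalAttention(dim)
            for i in range(depth)
        ])

    def forward(self, x):
        return x + self.blocks(x)                   # group-level residual


if __name__ == "__main__":
    feats = torch.randn(1, 64, 64, 60)              # (B, H, W, C) feature map
    print(AlternatingAttentionGroup(60)(feats).shape)  # torch.Size([1, 64, 64, 60])

In this reading, local blocks attend densely within small windows while global blocks attend across strided grids that span the whole feature map, so alternating them lets information propagate between any pair of pixels within a few blocks, which is the receptive-field-broadening effect the abstract attributes to the alternating connections.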
Pages: 19