CMASR: Lightweight image super-resolution with cluster and match attention

Times Cited: 0
Authors
Huang, Detian [1 ,2 ]
Lin, Mingxin [1 ]
Liu, Hang [1 ]
Zeng, Huanqiang [1 ,2 ]
Affiliations
[1] Huaqiao Univ, Coll Engn, Quanzhou 362021, Fujian, Peoples R China
[2] Quanzhou Digital Inst, Quanzhou 362021, Fujian, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Image super-resolution; Transformer; Token clustering; Axial self-attention;
DOI
10.1016/j.imavis.2025.105457
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
The Transformer has recently achieved impressive success in image super-resolution owing to its ability to model long-range dependencies with multi-head self-attention (MHSA). However, most existing MHSA variants attend only to dependencies among individual tokens and ignore those among token clusters containing several tokens, preventing the Transformer from adequately exploring global features. Moreover, the Transformer neglects local features, which inevitably hinders accurate detail reconstruction. To address these issues, we propose a lightweight image super-resolution method with cluster and match attention (CMASR). Specifically, a token Clustering block is designed to divide input tokens into token clusters of different sizes using depthwise separable convolution. We then propose an efficient axial matching self-attention (AMSA) mechanism, which introduces an axial matrix to extract local features, including axial similarities and symmetries. Further, by combining AMSA with Window Self-Attention, we construct a Hybrid Self-Attention block that captures dependencies among token clusters of different sizes, sufficiently extracting both axial local features and global features. Extensive experiments demonstrate that the proposed CMASR outperforms state-of-the-art methods at a lower computational cost (i.e., fewer parameters and FLOPs).
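To make the clustering step concrete, the following is a minimal PyTorch sketch of how a token Clustering block of this kind could be realized with a strided depthwise separable convolution. The class name TokenClustering, the stride-based merging of neighboring tokens, and all hyperparameters are illustrative assumptions, not the authors' implementation; the abstract specifies only that depthwise separable convolution produces token clusters of different sizes.

```python
import torch
import torch.nn as nn

class TokenClustering(nn.Module):
    """Hypothetical sketch of a token Clustering block: a strided
    depthwise separable convolution merges neighboring tokens into
    coarser token clusters (cluster size = stride)."""
    def __init__(self, dim: int, cluster_size: int = 2):
        super().__init__()
        # Depthwise conv: one filter per channel; the stride merges
        # each cluster_size x cluster_size patch of tokens into one.
        self.depthwise = nn.Conv2d(
            dim, dim, kernel_size=cluster_size, stride=cluster_size,
            groups=dim, bias=False)
        # Pointwise conv: mixes channel information after merging.
        self.pointwise = nn.Conv2d(dim, dim, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map viewed as a token grid.
        return self.pointwise(self.depthwise(x))

# Branches with different cluster sizes yield token clusters at
# different granularities, as the abstract describes.
x = torch.randn(1, 48, 64, 64)
coarse2 = TokenClustering(48, cluster_size=2)(x)  # (1, 48, 32, 32)
coarse4 = TokenClustering(48, cluster_size=4)(x)  # (1, 48, 16, 16)
```

Under these assumptions, attention computed over the coarser grids would model dependencies among token clusters rather than individual tokens, which is the stated motivation for the Hybrid Self-Attention block.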
Pages: 10
Related Papers
50 records in total
  • [21] Feature enhanced cascading attention network for lightweight image super-resolution
    Huang, Feng
    Liu, Hongwei
    Chen, Liqiong
    Shen, Ying
    Yu, Min
    SCIENTIFIC REPORTS, 2025, 15 (01)
  • [22] Lightweight Multi-Scale Asymmetric Attention Network for Image Super-Resolution
    Zhang, Min
    Wang, Huibin
    Zhang, Zhen
    Chen, Zhe
    Shen, Jie
    MICROMACHINES, 2022, 13 (01)
  • [23] A lightweight multi-scale channel attention network for image super-resolution
    Li, Wenbin
    Li, Juefei
    Li, Jinxin
    Huang, Zhiyong
    Zhou, Dengwen
    NEUROCOMPUTING, 2021, 456 : 327 - 337
  • [24] Lightweight image super-resolution reconstruction based on inverted residual attention network
    Lu, Pei
    Xie, Feng
    Liu, Xiaoyong
    Lu, Xi
    He, Jiawang
    JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (03)
  • [25] Lightweight multi-scale residual networks with attention for image super-resolution
    Liu, Huan
    Cao, Feilong
    Wen, Chenglin
    Zhang, Qinghua
    KNOWLEDGE-BASED SYSTEMS, 2020, 203
  • [26] Lightweight Super-Resolution Image-Reconstruction Model with Adaptive Residual Attention
    Jiang Ming
    Xiao Qingsheng
    Yi Jianbing
    Cao Feng
    LASER & OPTOELECTRONICS PROGRESS, 2022, 59 (16)
  • [27] LSAGNet: lightweight self-attention guidance network for image super-resolution
    Ye, Shutong
    Zhu, Yi
    Zhang, Mingming
    Dai, Xinyan
    Zhao, Shengyu
    Xie, Chao
    SIGNAL IMAGE AND VIDEO PROCESSING, 2025, 19 (06)
  • [28] Lightweight multi-scale distillation attention network for image super-resolution
    Tang, Yinggan
    Hu, Quanwei
    Bu, Chunning
    KNOWLEDGE-BASED SYSTEMS, 2025, 309
  • [29] Multi-scale convolutional attention network for lightweight image super-resolution
    Xie, Feng
    Lu, Pei
    Liu, Xiaoyong
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2023, 95
  • [30] A Lightweight Hyperspectral Image Super-Resolution Method Based on Multiple Attention Mechanisms
    Bu, Lijing
    Dai, Dong
    Zhang, Zhengpeng
    Xie, Xinyu
    Deng, Mingjun
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, ICIC 2023, PT II, 2023, 14087 : 639 - 651