CMASR: Lightweight image super-resolution with cluster and match attention

Cited by: 0
Authors
Huang, Detian [1 ,2 ]
Lin, Mingxin [1 ]
Liu, Hang [1 ]
Zeng, Huanqiang [1 ,2 ]
Affiliations
[1] Huaqiao Univ, Coll Engn, Quanzhou 362021, Fujian, Peoples R China
[2] Quanzhou Digital Inst, Quanzhou 362021, Fujian, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Image super-resolution; Transformer; Token clustering; Axial self-attention;
DOI
10.1016/j.imavis.2025.105457
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Transformer has recently achieved impressive success in image super-resolution thanks to its ability to model long-range dependencies with multi-head self-attention (MHSA). However, most existing MHSA variants model only the dependencies among individual tokens and ignore those among token clusters (groups of several tokens), so the Transformer cannot adequately explore global features. Moreover, the Transformer neglects local features, which inevitably hinders accurate detail reconstruction. To address these issues, we propose CMASR, a lightweight image super-resolution method with cluster and match attention. Specifically, a token Clustering block divides the input tokens into token clusters of different sizes using depthwise separable convolution. We then propose an efficient axial matching self-attention (AMSA) mechanism, which introduces an axial matrix to extract local features, including axial similarities and symmetries. Further, by combining AMSA with Window Self-Attention, we construct a Hybrid Self-Attention block that captures the dependencies among token clusters of different sizes, sufficiently extracting both axial local features and global features. Extensive experiments demonstrate that the proposed CMASR outperforms state-of-the-art methods at lower computational cost (i.e., fewer parameters and FLOPs).
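The abstract's axial attention idea can be illustrated in miniature. The sketch below is plain axial self-attention over an H x W x C feature map (not the paper's full AMSA, which adds an axial matching matrix and token clustering); it assumes identity Q/K/V projections for brevity. Each pixel attends only to the pixels in its own row, then in its own column, which cuts the cost from O((HW)^2) for full attention to O(HW * (H + W)).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(x):
    """Self-attention applied along each spatial axis of an (H, W, C) map.

    Illustrative sketch only: Q, K, V are the features themselves
    (identity projections), whereas a real block would use learned
    linear projections per head.
    """
    H, W, C = x.shape
    scale = 1.0 / np.sqrt(C)

    # Row pass: each row is a sequence of W tokens attending to itself.
    out = np.empty_like(x)
    for i in range(H):
        q = k = v = x[i]                      # (W, C)
        attn = softmax(q @ k.T * scale)       # (W, W), rows sum to 1
        out[i] = attn @ v

    # Column pass on the row-attended result: H tokens per column.
    res = np.empty_like(out)
    for j in range(W):
        q = k = v = out[:, j]                 # (H, C)
        attn = softmax(q @ k.T * scale)       # (H, H)
        res[:, j] = attn @ v
    return res
```

Because each attention matrix is a convex combination of tokens, a spatially constant input passes through unchanged, which is a quick sanity check on the implementation.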
Pages: 10