SRFormer: Permuted Self-Attention for Single Image Super-Resolution

Cited by: 25
Authors
Zhou, Yupeng [1 ]
Li, Zhen [1 ]
Guo, Chun-Le [1 ]
Bai, Song [2 ]
Cheng, Ming-Ming [1 ]
Hou, Qibin [1 ]
Affiliations
[1] Nankai Univ, Sch Comp Sci, VCIP, Tianjin, Peoples R China
[2] ByteDance, Singapore, Singapore
DOI
10.1109/ICCV51070.2023.01174
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Previous works have shown that increasing the window size of Transformer-based image super-resolution models (e.g., SwinIR) can significantly improve performance, but the computational overhead is also considerable. In this paper, we present SRFormer, a simple but novel method that enjoys the benefits of large-window self-attention while introducing even less computational burden. The core of SRFormer is the permuted self-attention (PSA), which strikes an appropriate balance between channel and spatial information for self-attention. PSA is simple and can be easily applied to existing super-resolution networks based on window self-attention. Without bells and whistles, SRFormer achieves a 33.86 dB PSNR score on the Urban100 dataset, 0.46 dB higher than SwinIR, while using fewer parameters and less computation. We hope our simple and effective approach can serve as a useful tool for future research in super-resolution model design. Our code is available at https://github.com/HVision-NKU/SRFormer.
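The PSA idea sketched in the abstract can be made concrete with a short example. Below is a minimal, illustrative PyTorch sketch of permuted self-attention for a single attention window: keys and values are projected down to C/r² channels, and their spatial tokens are then permuted into the channel dimension, so queries still attend over the full large window while the attention map shrinks by a factor of r². The module name, window size, reduction ratio r, and head count here are illustrative assumptions, not the paper's exact configuration; the official implementation is in the linked repository.

```python
import torch
import torch.nn as nn

class PermutedSelfAttention(nn.Module):
    """Sketch of permuted self-attention (PSA) over one S x S window.

    Queries keep all N = S*S tokens. Keys/values are first projected to
    C / r^2 channels, then each r x r patch of spatial tokens is folded
    into the channel dimension, leaving N / r^2 tokens of dimension C.
    The attention map is therefore N x (N / r^2) instead of N x N.
    """

    def __init__(self, dim: int, window: int, ratio: int = 2, heads: int = 4):
        super().__init__()
        assert dim % (ratio ** 2) == 0 and window % ratio == 0
        self.s, self.r, self.h = window, ratio, heads
        self.q = nn.Linear(dim, dim)
        # K/V are projected to dim / r^2 channels before the permutation.
        self.kv = nn.Linear(dim, 2 * dim // ratio ** 2)
        self.proj = nn.Linear(dim, dim)
        self.scale = (dim // heads) ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, S*S, C) tokens of one window, row-major over (S, S).
        b, n, c = x.shape
        s, r, h = self.s, self.r, self.h
        q = self.q(x)                              # (B, N, C)
        k, v = self.kv(x).chunk(2, dim=-1)         # each (B, N, C/r^2)

        def permute_tokens(t):
            # Fold each r x r spatial patch into channels:
            # (B, N, C/r^2) -> (B, N/r^2, C), no information discarded.
            t = t.view(b, s // r, r, s // r, r, c // r ** 2)
            return t.permute(0, 1, 3, 2, 4, 5).reshape(b, (s // r) ** 2, c)

        k, v = permute_tokens(k), permute_tokens(v)

        # Split heads: Q has N tokens, K/V have N/r^2 tokens.
        q = q.view(b, n, h, c // h).transpose(1, 2)    # (B, H, N, C/H)
        k = k.view(b, -1, h, c // h).transpose(1, 2)   # (B, H, N/r^2, C/H)
        v = v.view(b, -1, h, c // h).transpose(1, 2)

        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, H, N, N/r^2)
        out = attn.softmax(dim=-1) @ v                 # (B, H, N, C/H)
        return self.proj(out.transpose(1, 2).reshape(b, n, c))
```

With window = 16 and r = 2, for example, each window's attention map is 256 x 64 rather than 256 x 256, a 4x reduction, while the permutation (unlike pooling) keeps all spatial information in the channel dimension.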
Pages: 12734-12745
Page count: 12
Related Papers
50 entries in total
  • [1] Park, Karam; Soh, Jae Woong; Cho, Nam Ik. A Dynamic Residual Self-Attention Network for Lightweight Single Image Super-Resolution. IEEE Transactions on Multimedia, 2023, 25: 907-918.
  • [2] Rakotonirina, Nathanael Carraz. Self-Attention for Audio Super-Resolution. 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP), 2021.
  • [3] Wang, Xue-Song; Chao, Jie; Cheng, Yu-Hu. Image super-resolution reconstruction based on self-attention GAN. Control and Decision (Kongzhi yu Juece), 2021, 36(6): 1324-1332.
  • [4] Ngambenjavichaikul, Nisawan; Chen, Sovann; Aramvith, Supavadee. Optimal Deep Multi-Route Self-Attention for Single Image Super-Resolution. Proceedings of 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2022: 1181-1186.
  • [5] Jiang, Linfu; Zhong, Minzhi; Qiu, Fangchi. Single-Image Super-Resolution based on a Self-Attention Deep Neural Network. 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI 2020), 2020: 387-391.
  • [6] Chen, Qiangpu; Qin, Jinghui; Wen, Wushao. ALAN: Self-Attention Is Not All You Need for Image Super-Resolution. IEEE Signal Processing Letters, 2024, 31: 11-15.
  • [7] Long, Yaqian; Wang, Xun; Xu, Meng; Zhang, Shuyu; Jiang, Shuguo; Jia, Sen. Dual Self-Attention Swin Transformer for Hyperspectral Image Super-Resolution. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61.
  • [8] Zeng, Kun; Lin, Hanjiang; Yan, Zhiqiang; Fang, Jinsheng; Lai, Taotao. Non-local self-attention network for image super-resolution. Applied Intelligence, 2024, 54(7): 5336-5352.
  • [9] Zhao, Liangliang; Gao, Junyu; Deng, Donghu; Li, Xuelong. SSIR: Spatial shuffle multi-head self-attention for Single Image Super-Resolution. Pattern Recognition, 2024, 148.
  • [10] Chen Zihan; Wu Haobo; Pei Haodong; Chen Rong; Hu Jiaxin; Shi Hengtong. Image Super-Resolution Reconstruction Method Based on Self-Attention Deep Network. Laser & Optoelectronics Progress, 2021, 58(4).