Efficient Dual Attention Transformer for Image Super-Resolution

Cited by: 0
Authors
Park, Soobin [1 ]
Jeong, Yuna [1 ]
Choi, Yong Suk [1 ]
Affiliations
[1] Hanyang Univ, Seoul, South Korea
Funding
National Research Foundation of Singapore
Keywords
Image super-resolution; Low-level vision; Vision transformer; Self-attention; Computer vision
DOI
10.1145/3605098.3635991
Chinese Library Classification
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
Research on computationally efficient local-window self-attention has been advancing rapidly in image super-resolution (SR), yielding significant performance improvements. However, most recent studies apply local-window self-attention only along the spatial dimension, without sufficient consideration of the channel dimension. In addition, extracting global information while preserving the efficiency of local-window self-attention remains a challenging task in image SR. To address these problems, we propose a novel efficient dual attention transformer (EDAT). EDAT introduces a dual attention block (DAB) that models interdependencies not only among features at different spatial locations but also among distinct channels. Moreover, we propose a global attention block (GAB) that achieves efficient global feature extraction by reducing the spatial size of the keys and values. Extensive experiments demonstrate that DAB and GAB complement each other, exhibiting a synergistic effect. Building on these two attention blocks, EDAT achieves state-of-the-art results on five benchmark datasets.
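The abstract names two mechanisms: channel-wise attention inside the DAB and spatially downsampled keys/values inside the GAB. As a rough illustration only, the PyTorch sketch below shows one plausible form of each idea; the module names, shapes, and hyperparameters are hypothetical and are not taken from the authors' released implementation.

```python
# Hypothetical sketch, not the EDAT code: two attention modules matching
# the abstract's description of DAB-style channel attention and GAB-style
# attention with spatially reduced keys/values.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Transposed self-attention over channels: the attention map is
    C_head x C_head, so its size is independent of the token count N."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (B, N, C), N = H * W
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (B, heads, C_head, N) so attention is channel-to-channel
        q = q.reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 3, 1)
        k = k.reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 3, 1)
        v = v.reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 3, 1)
        q = nn.functional.normalize(q, dim=-1)             # cosine-style similarity
        k = nn.functional.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)   # (B, heads, C_head, C_head)
        out = (attn @ v).permute(0, 3, 1, 2).reshape(B, N, C)
        return self.proj(out)

class ReducedKVAttention(nn.Module):
    """Global attention with downsampled keys/values: a strided conv shrinks
    the token grid by `reduction` per side before K and V are produced, so
    the attention cost drops from O(N^2) to O(N^2 / reduction^2)."""
    def __init__(self, dim, reduction=4, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.q = nn.Linear(dim, dim, bias=False)
        self.kv_down = nn.Conv2d(dim, dim, kernel_size=reduction, stride=reduction)
        self.kv = nn.Linear(dim, dim * 2, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, hw):                  # x: (B, N, C); hw = (H, W)
        B, N, C = x.shape
        H, W = hw                              # assumes H, W divisible by reduction
        q = self.q(x).reshape(B, N, self.num_heads, C // self.num_heads).transpose(1, 2)
        feat = x.transpose(1, 2).reshape(B, C, H, W)
        feat = self.kv_down(feat).flatten(2).transpose(1, 2)   # (B, N / r^2, C)
        k, v = self.kv(feat).chunk(2, dim=-1)
        M = k.shape[1]
        k = k.reshape(B, M, self.num_heads, C // self.num_heads).transpose(1, 2)
        v = v.reshape(B, M, self.num_heads, C // self.num_heads).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)  # (B, heads, N, M)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

x = torch.randn(1, 64 * 64, 60)               # 64x64 token grid, 60 channels
ca, ga = ChannelAttention(60), ReducedKVAttention(60)
y = ga(ca(x), hw=(64, 64))                    # (1, 4096, 60)
```

The two modules are complementary in the way the abstract suggests: the channel branch mixes information across feature maps at a cost independent of image size, while the reduced-KV branch lets every query attend globally at a fraction of full attention's cost. How EDAT combines them with local-window attention is specified in the paper itself.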
Pages: 963-970 (8 pages)