Efficient Dual Attention Transformer for Image Super-Resolution

Cited by: 0
Authors
Park, Soobin [1 ]
Jeong, Yuna [1 ]
Choi, Yong Suk [1 ]
Affiliations
[1] Hanyang Univ, Seoul, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Image super-resolution; Low-level vision; Vision transformer; Self-attention; Computer vision;
DOI
10.1145/3605098.3635991
CLC Number (Chinese Library Classification)
TP39 [Computer Applications];
Subject Classification Codes
081203; 0835;
Abstract
Research on computationally efficient local-window self-attention has been advancing rapidly in the field of image super-resolution (SR), leading to significant performance improvements. However, in most recent studies, local-window self-attention attends only to the spatial dimension, without sufficient consideration of the channel dimension. Moreover, extracting global information while preserving the efficiency of local-window self-attention remains a challenging task in image SR. To address these problems, we propose a novel efficient dual attention transformer (EDAT). EDAT introduces a dual attention block (DAB) that models interdependencies not only among features at different spatial locations but also among distinct channels. In addition, we propose a global attention block (GAB) that achieves efficient global feature extraction by reducing the spatial size of the keys and values. Extensive experiments demonstrate that the DAB and GAB complement each other, exhibiting a synergistic effect. Building on these two attention blocks, EDAT achieves state-of-the-art results on five benchmark datasets.
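The abstract names two mechanisms: attention computed across channels as well as across spatial positions (the DAB), and global attention made affordable by shrinking the spatial size of the keys and values (the GAB). The following is a minimal sketch in PyTorch of those two ideas only, not the authors' implementation: the class names ChannelSelfAttention and DownsampledGlobalAttention, the Restormer-style transposed attention used for the channel branch, the average pooling used to reduce the keys/values, and the hyperparameters num_heads and kv_stride are all illustrative assumptions.

# Hypothetical sketch of channel self-attention and of global attention with
# spatially reduced keys/values; names and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelSelfAttention(nn.Module):
    # Attention over the channel dimension: the attention map is C' x C' per
    # head, so its cost does not grow with the square of the spatial size.
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)
        self.scale = nn.Parameter(torch.ones(num_heads, 1, 1))

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        # Treat channels as tokens: (b, heads, c_per_head, h*w).
        q, k, v = (t.reshape(b, self.num_heads, c // self.num_heads, h * w)
                   for t in (q, k, v))
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # (b, heads, c', c')
        out = (attn.softmax(dim=-1) @ v).reshape(b, c, h, w)
        return self.proj(out)

class DownsampledGlobalAttention(nn.Module):
    # Global spatial attention whose keys and values are average-pooled,
    # shrinking the attention matrix from (HW x HW) to (HW x HW / s^2).
    def __init__(self, dim, num_heads=4, kv_stride=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pool = nn.AvgPool2d(kv_stride, kv_stride)

    def forward(self, x):
        b, c, h, w = x.shape
        q = x.flatten(2).transpose(1, 2)              # queries: every pixel
        kv = self.pool(x).flatten(2).transpose(1, 2)  # keys/values: pooled grid
        out, _ = self.attn(q, kv, kv)
        return out.transpose(1, 2).reshape(b, c, h, w)

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    print(ChannelSelfAttention(64)(x).shape)        # torch.Size([1, 64, 32, 32])
    print(DownsampledGlobalAttention(64)(x).shape)  # torch.Size([1, 64, 32, 32])

Pooling the keys and values by a stride s cuts the attention cost roughly by a factor of s^2 while every query position still aggregates information from the whole (coarsened) feature map, which is one plausible reading of the abstract's claim of efficient global feature extraction.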
Pages: 963 - 970
Number of pages: 8
Related Papers
50 records in total
  • [21] Transformer for Single Image Super-Resolution
    Lu, Zhisheng
    Li, Juncheng
    Liu, Hong
    Huang, Chaoyan
    Zhang, Linlin
    Zeng, Tieyong
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 456 - 465
  • [22] Efficient residual attention network for single image super-resolution
    Fangwei Hao
    Taiping Zhang
    Linchang Zhao
    Yuanyan Tang
    Applied Intelligence, 2022, 52 : 652 - 661
  • [23] Efficient residual attention network for single image super-resolution
    Hao, Fangwei
    Zhang, Taiping
    Zhao, Linchang
    Tang, Yuanyan
    APPLIED INTELLIGENCE, 2022, 52 (01) : 652 - 661
  • [24] Efficient Global Attention Networks for Image Super-Resolution Reconstruction
    Wang Qingqing
    Xin Yuelan
    Zhao Jia
    Guo Jiang
    Wang Haochen
    LASER & OPTOELECTRONICS PROGRESS, 2024, 61 (10)
  • [25] A Dual Transformer Super-Resolution Network for Improving the Definition of Vibration Image
    Zhu, Yang
    Wang, Sen
    Zhang, Yinhui
    He, Zifen
    Wang, Qingjian
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [26] Dual Attention with the Self-Attention Alignment for Efficient Video Super-resolution
    Chu, Yuezhong
    Qiao, Yunan
    Liu, Heng
    Han, Jungong
    COGNITIVE COMPUTATION, 2022, 14 (03) : 1140 - 1151
  • [27] Dual Attention with the Self-Attention Alignment for Efficient Video Super-resolution
    Yuezhong Chu
    Yunan Qiao
    Heng Liu
    Jungong Han
    Cognitive Computation, 2022, 14 : 1140 - 1151
  • [28] Transformer-Based Selective Super-resolution for Efficient Image Refinement
    Zhang, Tianyi
    Kasichainula, Kishore
    Zhuo, Yaoxin
    Li, Baoxin
    Seo, Jae-Sun
    Cao, Yu
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 7, 2024, : 7305 - 7313
  • [29] AdaFormer: Efficient Transformer with Adaptive Token Sparsification for Image Super-resolution
    Luo, Xiaotong
    Ai, Zekun
    Liang, Qiuyuan
    Liu, Ding
    Xie, Yuan
    Qu, Yanyun
    Fu, Yun
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 5, 2024, : 4009 - 4016
  • [30] EdgeFormer: Edge-Aware Efficient Transformer for Image Super-Resolution
    Luo, Xiaotong
    Ai, Zekun
    Liang, Qiuyuan
    Xie, Yuan
    Shi, Zhongchao
    Fan, Jianping
    Qu, Yanyun
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73