Efficient Dual Attention Transformer for Image Super-Resolution

Cited by: 0

Authors
Park, Soobin [1 ]
Jeong, Yuna [1 ]
Choi, Yong Suk [1 ]
Affiliations
[1] Hanyang Univ, Seoul, South Korea
Funding
National Research Foundation of Singapore
Keywords
Image super-resolution; Low-level vision; Vision transformer; Self-attention; Computer vision
DOI
10.1145/3605098.3635991
CLC Number
TP39 [Applications of Computers]
Subject Classification Codes
081203; 0835
Abstract
Research based on computationally efficient local-window self-attention has been advancing actively in the field of image super-resolution (SR), leading to significant performance improvements. However, most recent studies apply local-window self-attention only along the spatial dimension, without sufficient consideration of the channel dimension. Additionally, extracting global information while maintaining the efficiency of local-window self-attention remains a challenging task in image SR. To resolve these problems, we propose a novel efficient dual attention transformer (EDAT). EDAT presents a dual attention block (DAB) that models interdependencies not only among features at different spatial locations but also among distinct channels. Moreover, we propose a global attention block (GAB) that achieves efficient global feature extraction by reducing the spatial size of the keys and values. Our extensive experiments demonstrate that DAB and GAB complement each other, exhibiting a synergistic effect. Building on these two attention blocks, EDAT achieves state-of-the-art results on five benchmark datasets.
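The abstract does not give implementation details, so the following is a minimal PyTorch sketch of the two ideas it describes: self-attention computed across channels (one half of what a DAB-style dual attention could look like) and global attention whose keys and values are spatially downsampled (a GAB-style cost reduction). All class names, shapes, and hyperparameters here are illustrative assumptions, not the authors' code.

```python
# Illustrative reconstruction only; the actual EDAT blocks may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Self-attention where the tokens are channels, so interdependencies
    between feature maps are modeled directly (assumed DAB ingredient)."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):              # x: (B, N, C), N = H*W tokens
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def heads(t):                  # (B, N, C) -> (B, heads, C/heads, N)
            return t.view(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 3, 1)

        q, k, v = heads(q), heads(k), heads(v)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        # Attention map is (C/heads) x (C/heads): channels attend to channels.
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)
        out = (attn @ v).permute(0, 3, 1, 2).reshape(B, N, C)
        return self.proj(out)


class GlobalAttention(nn.Module):
    """Global self-attention made cheaper by spatially downsampling the
    keys and values; queries stay at full resolution (assumed GAB idea)."""
    def __init__(self, dim, num_heads=4, reduction=4):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.q = nn.Linear(dim, dim, bias=False)
        self.kv = nn.Linear(dim, dim * 2, bias=False)
        # Strided conv shrinks the K/V grid by `reduction` per side,
        # cutting attention cost from O(N^2) to O(N * N / reduction^2).
        self.down = nn.Conv2d(dim, dim, kernel_size=reduction, stride=reduction)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, H, W):        # x: (B, N, C) with N = H*W
        B, N, C = x.shape
        h = self.num_heads
        q = self.q(x).view(B, N, h, C // h).transpose(1, 2)    # (B, h, N, C/h)
        x2 = x.transpose(1, 2).view(B, C, H, W)                # to feature map
        x2 = self.down(x2).flatten(2).transpose(1, 2)          # (B, M, C), M < N
        M = x2.shape[1]
        k, v = self.kv(x2).chunk(2, dim=-1)
        k = k.view(B, M, h, C // h).transpose(1, 2)
        v = v.view(B, M, h, C // h).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


# Shape check on a 64x64 feature map with 96 channels (H and W must be
# divisible by `reduction` in this sketch):
x = torch.randn(1, 64 * 64, 96)
print(ChannelAttention(96)(x).shape)       # torch.Size([1, 4096, 96])
print(GlobalAttention(96)(x, 64, 64).shape)
```

In this reading, the channel branch complements spatial local-window attention by mixing information across feature maps, while the downsampled keys and values let every query see the whole image at a fraction of full global-attention cost.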
Pages: 963-970 (8 pages)