Efficient Dual Attention Transformer for Image Super-Resolution

Cited by: 0
Authors
Park, Soobin [1]
Jeong, Yuna [1]
Choi, Yong Suk [1]
Affiliations
[1] Hanyang Univ, Seoul, South Korea
Funding
National Research Foundation of Singapore
Keywords
Image super-resolution; Low-level vision; Vision transformer; Self-attention; Computer vision
DOI
10.1145/3605098.3635991
CLC Classification
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
Research based on computationally efficient local-window self-attention has been advancing actively in the field of image super-resolution (SR), leading to significant performance improvements. However, in most recent studies, local-window self-attention attends only over the spatial dimension, without sufficient consideration of the channel dimension. Additionally, extracting global information while maintaining the efficiency of local-window self-attention remains a challenging task in image SR. To resolve these problems, we propose a novel efficient dual attention transformer (EDAT). EDAT introduces a dual attention block (DAB) that explores interdependencies not only among features at different spatial locations but also among distinct channels. Moreover, we propose a global attention block (GAB) that achieves efficient global feature extraction by reducing the spatial size of the keys and values. Extensive experiments demonstrate that DAB and GAB complement each other, exhibiting a synergistic effect. Built on these two attention blocks, EDAT achieves state-of-the-art results on five benchmark datasets.
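To make the abstract's two mechanisms concrete, the sketch below illustrates (a) channel self-attention, where attention is computed across channels rather than spatial positions, and (b) global attention with spatially reduced keys and values. This is a minimal sketch, assuming a strided-convolution reduction in the style of PVT's spatial-reduction attention; the module names, head counts, and reduction ratio are illustrative guesses, not the paper's released implementation.

```python
# Minimal PyTorch sketch of the two attention ideas described in the
# abstract. All module names, shapes, and hyper-parameters (head count,
# reduction ratio) are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Self-attention over the channel dimension: channels act as tokens,
    so the attention map models inter-channel dependencies."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, N, C) with N = H * W spatial tokens
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1)  # each: (B, heads, C/h, N)
        # L2-normalizing along the token axis keeps the channel-by-channel
        # attention logits well scaled (as in XCiT / Restormer).
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)  # (B, h, C/h, C/h)
        out = (attn @ v).permute(0, 3, 1, 2).reshape(B, N, C)
        return self.proj(out)


class ReducedGlobalAttention(nn.Module):
    """Global attention whose keys/values are spatially downsampled, so the
    cost drops from O(N^2) to O(N * N / r^2)."""

    def __init__(self, dim, num_heads=4, reduction=4):
        super().__init__()
        self.num_heads = num_heads
        self.q = nn.Linear(dim, dim, bias=False)
        self.kv = nn.Linear(dim, dim * 2, bias=False)
        # Strided conv shrinks the key/value map by `reduction` per side.
        self.sr = nn.Conv2d(dim, dim, kernel_size=reduction, stride=reduction)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, H, W):  # x: (B, N, C) with N = H * W
        B, N, C = x.shape
        h = self.num_heads
        q = self.q(x).reshape(B, N, h, C // h).transpose(1, 2)  # (B, h, N, C/h)
        xr = self.sr(x.transpose(1, 2).reshape(B, C, H, W))     # (B, C, H/r, W/r)
        xr = xr.flatten(2).transpose(1, 2)                      # (B, N/r^2, C)
        kv = self.kv(xr).reshape(B, -1, 2, h, C // h).permute(2, 0, 3, 1, 4)
        k, v = kv[0], kv[1]                                     # (B, h, N/r^2, C/h)
        attn = ((q @ k.transpose(-2, -1)) * (C // h) ** -0.5).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


# Smoke test on a 64x64 feature map with 96 channels.
x = torch.randn(2, 64 * 64, 96)
print(ChannelAttention(96)(x).shape)                # torch.Size([2, 4096, 96])
print(ReducedGlobalAttention(96)(x, 64, 64).shape)  # torch.Size([2, 4096, 96])
```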
Pages: 963-970
Page count: 8
Related Papers
50 results in total
  • [1] Liang, Shubo; Song, Kechen; Zhao, Wenli; Li, Song; Yan, Yunhui. DASR: Dual-Attention Transformer for infrared image super-resolution. INFRARED PHYSICS & TECHNOLOGY, 2023, 133.
  • [2] Chen, Zheng; Zhang, Yulun; Gu, Jinjin; Kong, Linghe; Yang, Xiaokang; Yu, Fisher. Dual Aggregation Transformer for Image Super-Resolution. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023: 12278-12287.
  • [3] Long, Yaqian; Wang, Xun; Xu, Meng; Zhang, Shuyu; Jiang, Shuguo; Jia, Sen. Dual Self-Attention Swin Transformer for Hyperspectral Image Super-Resolution. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61.
  • [4] Chen, Yuzhen; Wang, Gencheng; Chen, Rong. Efficient Multi-Scale Cosine Attention Transformer for Image Super-Resolution. IEEE SIGNAL PROCESSING LETTERS, 2023, 30: 1442-1446.
  • [5] Lin, Jianxin; Yin, Lianying; Wang, Yijun. Steformer: Efficient Stereo Image Super-Resolution With Transformer. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 8396-8407.
  • [6] Zheng, Ling; Zhu, Jinchen; Shi, Jinpeng; Weng, Shizhuang. Efficient mixed transformer for single image super-resolution. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133.
  • [7] Zhang, Mingjin; Zhang, Chi; Zhang, Qiming; Guo, Jie; Gao, Xinbo; Zhang, Jing. ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023: 23016-23027.
  • [8] Sun, Zhenxi; Zhang, Jin; Chen, Ziyi; Hong, Lu; Zhang, Rui; Li, Weishi; Xia, Haojie. Image super-resolution reconstruction using Swin Transformer with efficient channel attention networks. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 136.
  • [9] Wang, Haoqian; Xing, Zhongyang; Xu, Zhongjie; Cheng, Xiangai; Li, Teng. Edge-Aware Attention Transformer for Image Super-Resolution. IEEE SIGNAL PROCESSING LETTERS, 2024, 31: 2905-2909.
  • [10] Wang, Jing; Hao, Yuanyuan; Bai, Hongxing; Yan, Lingyu. Parallel attention recursive generalization transformer for image super-resolution. SCIENTIFIC REPORTS, 15 (1).