DASR: Dual-Attention Transformer for infrared image super-resolution

Cited by: 3
Authors
Liang, Shubo
Song, Kechen [1 ]
Zhao, Wenli
Li, Song
Yan, Yunhui [1 ]
Affiliations
[1] Northeastern Univ, Sch Mech Engn & Automat, Shenyang, Liaoning, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Single image super-resolution; Infrared image processing; Vision Transformer; Attention mechanisms; Deep convolutional network; Network
DOI
10.1016/j.infrared.2023.104837
Chinese Library Classification
TH7 [Instruments and Meters]
Subject Classification Codes
0804; 080401; 081102
Abstract
Infrared image super-resolution (SR) methods overcome the hardware limitations of infrared cameras by reconstructing higher-quality images with improved efficiency and cost-effectiveness. However, existing infrared SR methods do not take the specific characteristics of infrared images into account and are primarily designed for small scale factors. In this paper, we first investigate the domain differences between infrared and visible images and the impact of these differences on the super-resolution task. We find that, compared with visible-image SR, infrared SR relies more on global edge-structure information than on the local texture information that previous CNN-based methods mainly reconstruct. To address this disparity, we propose a novel infrared SR model, named DASR, which incorporates a Transformer with spatial and channel dual-attention mechanisms. In DASR, spatial attention captures both local and global information through window-based and cross-window contextual long-range interactions, while channel attention captures channel-wise global information through cross-channel interactions. With this new Transformer architecture, our method effectively extracts spatial and channel global information that the local receptive field of convolution cannot capture, making it better suited to infrared SR. Extensive experiments on benchmark datasets show that our method outperforms state-of-the-art infrared SR methods with fewer parameters and lower computational complexity while producing the best visual results.
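Below is a minimal PyTorch sketch of the two attention paths the abstract describes: spatial self-attention computed inside non-overlapping windows and channel self-attention computed across channels. The module names (WindowAttention, ChannelAttention, DualAttentionBlock), the simplified window attention (no shifted windows, masking, or relative position bias), and all hyper-parameters are illustrative assumptions, not the authors' published DASR implementation.

```python
# Illustrative sketch only: a dual-attention block combining window-based
# spatial attention with cross-channel attention, as described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WindowAttention(nn.Module):
    """Spatial self-attention inside non-overlapping windows (local context)."""

    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        w = self.window_size
        # Partition the feature map into (H/w * W/w) windows of w*w tokens.
        x = x.reshape(B, C, H // w, w, W // w, w)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, w * w, C)
        x, _ = self.attn(x, x, x)               # attention within each window
        x = x.reshape(B, H // w, W // w, w, w, C)
        x = x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)
        return x


class ChannelAttention(nn.Module):
    """Cross-channel self-attention: tokens are channels, so every channel pair
    interacts over the whole spatial extent (global information)."""

    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        q, k, v = self.qkv(x).reshape(B, 3, C, H * W).unbind(dim=1)
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)   # (B, C, C)
        out = (attn @ v).reshape(B, C, H, W)
        return self.proj(out)


class DualAttentionBlock(nn.Module):
    """Combine the two attention paths with residual connections."""

    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.spatial = WindowAttention(dim, window_size, num_heads)
        self.channel = ChannelAttention(dim)

    def forward(self, x):
        x = x + self.spatial(x)
        x = x + self.channel(x)
        return x


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)           # toy feature map
    print(DualAttentionBlock(64)(feat).shape)   # torch.Size([1, 64, 32, 32])
```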
Pages: 13
Related Papers
50 records in total
  • [1] Efficient Dual Attention Transformer for Image Super-Resolution
    Park, Soobin
    Jeong, Yuna
    Choi, Yong Suk
    [J]. 39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 963 - 970
  • [2] Dense Dual-Attention Network for Light Field Image Super-Resolution
    Mo, Yu
    Wang, Yingqian
    Xiao, Chao
    Yang, Jungang
    An, Wei
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (07) : 4431 - 4443
  • [3] Image super-resolution reconstruction based on multi-scale dual-attention
    Li, Hong-an
    Wang, Diao
    Zhang, Jing
    Li, Zhanli
    Ma, Tian
    [J]. CONNECTION SCIENCE, 2023, 35 (01)
  • [4] Dual-attention guided multi-scale network for single image super-resolution
    Wen, Juan
    Zha, Lei
    [J]. APPLIED INTELLIGENCE, 2022, 52 (11) : 12258 - 12271
  • [5] Dual Aggregation Transformer for Image Super-Resolution
    Chen, Zheng
    Zhang, Yulun
    Gu, Jinjin
    Kong, Linghe
    Yang, Xiaokang
    Yu, Fisher
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 12278 - 12287
  • [6] Dual Self-Attention Swin Transformer for Hyperspectral Image Super-Resolution
    Long, Yaqian
    Wang, Xun
    Xu, Meng
    Zhang, Shuyu
    Jiang, Shuguo
    Jia, Sen
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [7] Edge-Aware Attention Transformer for Image Super-Resolution
    Wang, Haoqian
    Xing, Zhongyang
    Xu, Zhongjie
    Cheng, Xiangai
    Li, Teng
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 2905 - 2909
  • [8] Image super-resolution using dilated neighborhood attention transformer
    Chen, Li
    Zuo, Jinnian
    Du, Kai
    Zou, Jinsong
    Yin, Shaoyun
    Wang, Jinyu
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (02)
  • [9] LKFormer: large kernel transformer for infrared image super-resolution
    Qin, Feiwei
    Yan, Kang
    Wang, Changmiao
    Ge, Ruiquan
    Peng, Yong
    Zhang, Kai
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (28) : 72063 - 72077