ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution

Cited by: 12
Authors
Zhang, Mingjin [1 ]
Zhang, Chi [1 ]
Zhang, Qiming [2 ]
Guo, Jie [1 ]
Gao, Xinbo [3 ]
Zhang, Jing [2 ]
Affiliations
[1] Xidian Univ, Xian, Shaanxi, Peoples R China
[2] Univ Sydney, Sydney, NSW, Australia
[3] Chongqing Univ Posts & Telecommun, Chongqing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/ICCV51070.2023.02109
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation. However, the prevailing CNN-based approaches are limited in building long-range dependencies and capturing interactions between spectral features, which leads to inadequate use of spectral information and to artifacts after upsampling. To address this issue, we propose ESSAformer, an ESSA-attention-embedded Transformer network for single-HSI-SR with an iterative refining structure. Specifically, we first introduce a robust and spectral-friendly similarity metric, i.e., the spectral correlation coefficient of the spectrum (SCC), to replace the original attention matrix and incorporate inductive biases into the model to facilitate training. Built upon it, we further apply a kernelizable attention technique with theoretical support to form a novel efficient SCC-kernel-based self-attention (ESSA), reducing the attention computation to linear complexity. ESSA enlarges the receptive field for features after upsampling without adding much computation, and allows the model to effectively exploit spatial-spectral information at different scales, producing more natural high-resolution images. Without pretraining on large-scale datasets, our experiments demonstrate ESSA's effectiveness in both visual quality and quantitative results. The code will be released at ESSAformer.
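The two ideas the abstract combines can be illustrated in a minimal NumPy sketch. This is not the paper's implementation: the correlation function below is a standard Pearson-style spectral correlation coefficient between two spectra, and `linear_attention` is a generic kernelized (Performer-style) attention that replaces the softmax with a positive feature map so the `K`/`V` product can be computed first, giving linear rather than quadratic cost in sequence length. The ReLU-based `feature_map` is an illustrative assumption, not the SCC kernel derived in the paper.

```python
import numpy as np

def spectral_correlation(a, b):
    # SCC between two spectra: Pearson correlation of mean-centered vectors.
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def linear_attention(Q, K, V, feature_map=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Kernelized attention: softmax(Q K^T) V is approximated by
    # phi(Q) (phi(K)^T V) / normalizer. Computing phi(K)^T V first costs
    # O(n * d * d_v) instead of the O(n^2) of the full attention matrix.
    Qp, Kp = feature_map(Q), feature_map(K)   # (n, d), positive features
    KV = Kp.T @ V                             # (d, d_v), shared across queries
    Z = Qp @ Kp.sum(axis=0)                   # (n,) per-query normalizer
    return (Qp @ KV) / Z[:, None]
```

For two identical spectra the SCC is 1 and for sign-flipped spectra it is -1, which is what makes it a scale- and offset-robust similarity for hyperspectral bands; the linear-attention output has the usual `(n, d_v)` shape while never materializing the `n × n` attention matrix.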
Pages: 23016-23027 (12 pages)
Related Papers (50 in total)
  • [21] Transformer-Based Selective Super-resolution for Efficient Image Refinement
    Zhang, Tianyi
    Kasichainula, Kishore
    Zhuo, Yaoxin
    Li, Baoxin
    Seo, Jae-Sun
    Cao, Yu
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 7, 2024, : 7305 - 7313
  • [22] AdaFormer: Efficient Transformer with Adaptive Token Sparsification for Image Super-resolution
    Luo, Xiaotong
    Ai, Zekun
    Liang, Qiuyuan
    Liu, Ding
    Xie, Yuan
    Qu, Yanyun
    Fu, Yun
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 5, 2024, : 4009 - 4016
  • [23] EdgeFormer: Edge-Aware Efficient Transformer for Image Super-Resolution
    Luo, Xiaotong
    Ai, Zekun
    Liang, Qiuyuan
    Xie, Yuan
    Shi, Zhongchao
    Fan, Jianping
    Qu, Yanyun
    [J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73
  • [24] Hyperspectral Image Super-Resolution With a Mosaic RGB Image
    Fu, Ying
    Zheng, Yinqiang
    Huang, Hua
    Sato, Imari
    Sato, Yoichi
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2018, 27 (11) : 5539 - 5552
  • [25] Deep Blind Hyperspectral Image Super-Resolution
    Zhang, Lei
    Nie, Jiangtao
    Wei, Wei
    Li, Yong
    Zhang, Yanning
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (06) : 2388 - 2400
  • [26] Spatial relaxation transformer for image super-resolution
    Li, Yinghua
    Zhang, Ying
    Zeng, Hao
    He, Jinglu
    Guo, Jie
    [J]. JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2024, 36 (07)
  • [27] Dual Aggregation Transformer for Image Super-Resolution
    Chen, Zheng
    Zhang, Yulun
    Gu, Jinjin
    Kong, Linghe
    Yang, Xiaokang
    Yu, Fisher
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 12278 - 12287
  • [28] Image Super-Resolution via Efficient Transformer Embedding Frequency Decomposition With Restart
    Zuo, Yifan
    Yao, Wenhao
    Hu, Yuqi
    Fang, Yuming
    Liu, Wei
    Peng, Yuxin
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 4670 - 4685
  • [29] Efficient Multi-Scale Cosine Attention Transformer for Image Super-Resolution
    Chen, Yuzhen
    Wang, Gencheng
    Chen, Rong
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 1442 - 1446
  • [30] Efficient image super-resolution integration
    Xu, Ke
    Wang, Xin
    Yang, Xin
    He, Shengfeng
    Zhang, Qiang
    Yin, Baocai
    Wei, Xiaopeng
    Lau, Rynson W. H.
    [J]. VISUAL COMPUTER, 2018, 34 (6-8): : 1065 - 1076