Double Attention Transformer for Hyperspectral Image Classification

Cited by: 24
Authors
Tang, Ping [1 ]
Zhang, Meng [1 ]
Liu, Zhihui [2 ]
Song, Rong [3 ]
Affiliations
[1] Cent China Normal Univ, Sch Comp, Wuhan 430079, Peoples R China
[2] China Univ Geosci, Sch Math & Phys, Wuhan 430074, Peoples R China
[3] Cent China Normal Univ, Sch Marxism, Wuhan 430079, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Transformers; Fuses; Data mining; Tokenization; IP networks; Correlation; Double-attention transformer encoder (DATE); hyperspectral image (HSI) classification; vision transformer (ViT);
DOI
10.1109/LGRS.2023.3248582
CLC Classification
P3 [Geophysics]; P59 [Geochemistry];
Subject Classification
0708; 070902;
Abstract
Convolutional neural networks (CNNs) have become one of the most popular tools for hyperspectral image (HSI) classification. However, CNNs struggle to model long-range dependencies, which can degrade classification performance. To address this issue, this letter proposes a transformer-based backbone network for HSI classification. Its core component is a newly designed double-attention transformer encoder (DATE), which contains two self-attention modules: a spectral attention module (SPE) and a spatial attention module (SPA). SPE extracts the global dependency among spectral bands, while SPA mines the local spatial correlation among pixels. The local spatial tokens and the global spectral token are fused together and updated by SPA. In this way, DATE captures both the global dependence among spectral bands and the local spatial information, which greatly improves classification performance. To reduce possible information loss as network depth increases, a new skip-connection mechanism is devised for cross-layer feature fusion. Experimental results on several datasets indicate that the new algorithm achieves very competitive classification performance compared with state-of-the-art methods.
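The abstract's data flow — SPE attending globally over spectral-band tokens, the resulting global spectral token fused with local spatial tokens and updated by SPA, plus a residual skip — can be sketched roughly as below. This is a minimal single-head NumPy illustration under stated assumptions, not the letter's implementation: the names `date_block` and `self_attention`, the mean-pooled global spectral token, and the additive skip connection are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, wq, wk, wv):
    # single-head scaled dot-product self-attention over a token sequence
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores) @ v

def date_block(spectral_tokens, spatial_tokens, rng):
    # spectral_tokens: (num_bands, d); spatial_tokens: (num_pixels, d)
    d = spectral_tokens.shape[1]
    def proj():
        # random projection weights, stand-ins for learned parameters
        return [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3)]
    # SPE: global attention across spectral-band tokens
    spe_out = self_attention(spectral_tokens, *proj())
    # pool band tokens into one global spectral token (illustrative choice)
    global_spectral = spe_out.mean(axis=0, keepdims=True)
    # fuse the global spectral token with the local spatial tokens
    fused = np.concatenate([global_spectral, spatial_tokens], axis=0)
    # SPA: attention over the fused token sequence
    spa_out = self_attention(fused, *proj())
    # skip connection against information loss with depth
    return fused + spa_out

rng = np.random.default_rng(0)
bands = rng.standard_normal((30, 16))   # 30 spectral-band tokens, dim 16
pixels = rng.standard_normal((9, 16))   # 3x3 spatial patch tokens
out = date_block(bands, pixels, rng)    # shape (10, 16): 1 global + 9 spatial
```

The output keeps one token per spatial position plus the fused global spectral token; stacking several such blocks with cross-layer skip connections mirrors the backbone described in the abstract.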
Pages: 5
Related Papers
50 records in total
  • [1] Hierarchical Attention Transformer for Hyperspectral Image Classification
    Arshad, Tahir
    Zhang, Junping
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2024, 21 : 1 - 5
  • [2] Dual attention transformer network for hyperspectral image classification
    Shu, Zhenqiu
    Wang, Yuyang
    Yu, Zhengtao
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 127
  • [3] Attention Head Interactive Dual Attention Transformer for Hyperspectral Image Classification
    Shi, Cuiping
    Yue, Shuheng
    Wang, Liguo
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62 : 1 - 1
  • [4] A U-Shaped Convolution-Aided Transformer with Double Attention for Hyperspectral Image Classification
    Qin, Ruiru
    Wang, Chuanzhi
    Wu, Yongmei
    Du, Huafei
    Lv, Mingyun
    REMOTE SENSING, 2024, 16 (02)
  • [5] Hyperspectral Image Classification Based on Multibranch Attention Transformer Networks
    Bai, Jing
    Wen, Zheng
    Xiao, Zhu
    Ye, Fawang
    Zhu, Yongdong
    Alazab, Mamoun
    Jiao, Licheng
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [6] Spectral-Spatial Morphological Attention Transformer for Hyperspectral Image Classification
    Roy, Swalpa Kumar
    Deria, Ankur
    Shah, Chiranjibi
    Haut, Juan M.
    Du, Qian
    Plaza, Antonio
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [7] Double-branch feature fusion transformer for hyperspectral image classification
    Dang, Lanxue
    Weng, Libo
    Hou, Yane
    Zuo, Xianyu
    Liu, Yang
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [8] Hyperspectral Image Classification: An Analysis Employing CNN, LSTM, Transformer, and Attention Mechanism
    Viel, Felipe
    Maciel, Renato Cotrim
    Seman, Laio Oriel
    Zeferino, Cesar Albenes
    Bezerra, Eduardo Augusto
    Leithardt, Valderi Reis Quietinho
    IEEE ACCESS, 2023, 11 : 24835 - 24850