Masked Self-Distillation Domain Adaptation for Hyperspectral Image Classification

Cited by: 0
Authors
Fang, Zhuoqun [1 ,2 ]
He, Wenqiang [3 ]
Li, Zhaokui [3 ]
Du, Qian [4 ]
Chen, Qiusheng [5 ]
Affiliations
[1] Shenyang Aerosp Univ, Coll Artificial Intelligence, Shenyang 110136, Peoples R China
[2] Chinese Acad Sci, Shenyang Inst Comp Technol, Shenyang 110168, Peoples R China
[3] Shenyang Aerosp Univ, Sch Comp Sci, Shenyang 110136, Peoples R China
[4] Mississippi State Univ, Dept Elect & Comp Engn, Starkville, MS 39762 USA
[5] Northeastern Univ, Coll Informat Sci & Engn, Shenyang 110819, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Training; Task analysis; Data models; Data mining; Adaptation models; Correlation; Classification; hyperspectral image (HSI); knowledge distillation; masked image modeling (MIM); unsupervised domain adaptation (UDA);
DOI
10.1109/TGRS.2024.3436814
CLC Classification
P3 [Geophysics]; P59 [Geochemistry];
Discipline Code
0708; 070902;
Abstract
Deep learning-based unsupervised domain adaptation (UDA) has shown potential in cross-scene hyperspectral image (HSI) classification. However, existing methods often experience reduced feature discriminability during domain alignment due to the difficulty of extracting semantic information from unlabeled target domain data. This challenge is exacerbated by ambiguous categories with similar material compositions and the underutilization of target domain samples. To address these issues, we propose a novel masked self-distillation domain adaptation (MSDA) framework, which enhances feature discriminability by integrating masked self-distillation (MSD) into domain adaptation. A class-separable adversarial training (CSAT) module is introduced to prevent misclassification between ambiguous categories by decreasing class correlation. Simultaneously, CSAT reduces the discrepancy between source and target domains through biclassifier adversarial training. Furthermore, the MSD module performs a pretext task on target domain samples to extract class-relevant knowledge. Specifically, MSD enforces consistency between the outputs generated from masked target images, in which spatial-spectral portions of an HSI patch are randomly obscured, and the predictions produced from the complete patches by an exponential moving average (EMA) teacher. By minimizing the consistency loss, the network learns to associate categorical semantics with unmasked regions. Notably, MSD is tailored for HSI data by preserving each sample's central pixel, and hence the object to be classified, thus maintaining class information. Consequently, MSDA extracts highly discriminative features by improving class separability and learning class-relevant knowledge, ultimately enhancing UDA performance. Experimental results on four datasets demonstrate that MSDA surpasses existing state-of-the-art UDA methods for HSI classification. The code is available at https://github.com/Li-ZK/MSDA-2024.
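As a rough illustration of the MSD consistency objective described in the abstract, the sketch below shows how predictions on a masked target patch can be aligned with EMA-teacher predictions on the complete patch while the central pixel is never masked. This is not the authors' released implementation (see the GitHub link above for that); the student/teacher networks, the element-wise masking scheme, the masking ratio, the temperature, and the soft cross-entropy form of the consistency loss are assumptions made for the example.

```python
import copy  # used in the illustrative setup comments below

import torch
import torch.nn.functional as F


def random_spatial_spectral_mask(patch, mask_ratio=0.5):
    """Zero out random spatial-spectral positions of an HSI patch tensor
    of shape (B, C, H, W), always keeping the central pixel whose label
    defines the class of the patch (hypothetical masking scheme)."""
    b, c, h, w = patch.shape
    keep = (torch.rand(b, c, h, w, device=patch.device) > mask_ratio).float()
    keep[:, :, h // 2, w // 2] = 1.0  # never mask the center pixel
    return patch * keep


@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    """Exponential moving average of student weights into the teacher."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p.detach(), alpha=1.0 - momentum)


def msd_consistency_loss(student, teacher, target_patch,
                         mask_ratio=0.5, temperature=1.0):
    """Soft cross-entropy between student predictions on a masked target
    patch and EMA-teacher predictions on the complete patch."""
    with torch.no_grad():
        teacher_prob = F.softmax(teacher(target_patch) / temperature, dim=1)
    student_logits = student(random_spatial_spectral_mask(target_patch, mask_ratio))
    student_log_prob = F.log_softmax(student_logits / temperature, dim=1)
    return -(teacher_prob * student_log_prob).sum(dim=1).mean()


# Illustrative use inside the target-domain step of a UDA training loop:
#   teacher = copy.deepcopy(student)
#   for p in teacher.parameters():
#       p.requires_grad_(False)
#   ...
#   loss = msd_consistency_loss(student, teacher, target_batch)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
#   ema_update(teacher, student)
```

In this sketch the teacher receives no gradients and is refreshed only through the EMA update, so minimizing the loss pushes the student to recover the teacher's class predictions from the unmasked regions of the patch.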
Pages: 20
Related Papers
50 records in total
  • [21] Hyperspectral Image Classification Based on Dense Convolution and Domain Adaptation
    Zhao Chunhui
    Li Tong
    Feng Shou
    [J]. ACTA PHOTONICA SINICA, 2021, 50 (03)
  • [22] Attention-based Domain Adaptation for Hyperspectral Image Classification
    Rafi, Robiul Hossain Md.
    Tang, Bo
    Du, Qian
    Younan, Nicolas H.
    [J]. 2019 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2019), 2019, : 67 - 70
  • [23] Tensor Alignment Based Domain Adaptation for Hyperspectral Image Classification
    Qin, Yao
    Bruzzone, Lorenzo
    Li, Biao
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2019, 57 (11) : 9290 - 9307
  • [24] Class-independent domain adaptation for hyperspectral image classification
    Yu, Long
    Li, Jun
    He, Lin
    Li, Yunfei
    [J]. National Remote Sensing Bulletin, 2024, 28 (03) : 610 - 623
  • [25] Variational Self-Distillation for Remote Sensing Scene Classification
    Hu, Yutao
    Huang, Xin
    Luo, Xiaoyan
    Han, Jungong
    Cao, Xianbin
    Zhang, Jun
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [26] Self-Distillation for Few-Shot Image Captioning
    Chen, Xianyu
    Jiang, Ming
    Zhao, Qi
    [J]. 2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021), 2021, : 545 - 555
  • [27] Self-Supervised Learning With Adaptive Distillation for Hyperspectral Image Classification
    Yue, Jun
    Fang, Leyuan
    Rahmani, Hossein
    Ghamisi, Pedram
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [28] Hyperspectral Image Classification Based on Masked Self-Supervised Pretraining Network
    Zhang, HongZhe
    Feng, Shou
    Wang, XueQing
    Liu, JianFei
    Qin, Boao
    Zhong, HaiYang
    [J]. IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2023, : 7665 - 7668
  • [30] Class Reconstruction Driven Adversarial Domain Adaptation for Hyperspectral Image Classification
    Pande, Shivam
    Banerjee, Biplab
    Pizurica, Aleksandra
    [J]. PATTERN RECOGNITION AND IMAGE ANALYSIS, PT I, 2020, 11867 : 472 - 484