Masked Self-Distillation Domain Adaptation for Hyperspectral Image Classification

Cited: 0
Authors
Fang, Zhuoqun [1 ,2 ]
He, Wenqiang [3 ]
Li, Zhaokui [3 ]
Du, Qian [4 ]
Chen, Qiusheng [5 ]
Affiliations
[1] Shenyang Aerosp Univ, Coll Artificial Intelligence, Shenyang 110136, Peoples R China
[2] Chinese Acad Sci, Shenyang Inst Comp Technol, Shenyang 110168, Peoples R China
[3] Shenyang Aerosp Univ, Sch Comp Sci, Shenyang 110136, Peoples R China
[4] Mississippi State Univ, Dept Elect & Comp Engn, Starkville, MS 39762 USA
[5] Northeastern Univ, Coll Informat Sci & Engn, Shenyang 110819, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Training; Task analysis; Data models; Data mining; Adaptation models; Correlation; Classification; hyperspectral image (HSI); knowledge distillation; masked image modeling (MIM); unsupervised domain adaptation (UDA);
DOI
10.1109/TGRS.2024.3436814
CLC Classification
P3 [Geophysics]; P59 [Geochemistry];
Discipline Codes
0708; 070902;
Abstract
Deep learning-based unsupervised domain adaptation (UDA) has shown potential in cross-scene hyperspectral image (HSI) classification. However, existing methods often experience reduced feature discriminability during domain alignment due to the difficulty of extracting semantic information from unlabeled target domain data. This challenge is exacerbated by ambiguous categories with similar material compositions and the underutilization of target domain samples. To address these issues, we propose a novel masked self-distillation domain adaptation (MSDA) framework, which enhances feature discriminability by integrating masked self-distillation (MSD) into domain adaptation. A class-separable adversarial training (CSAT) module is introduced to prevent misclassification between ambiguous categories by decreasing class correlation. Simultaneously, CSAT reduces the discrepancy between source and target domains through biclassifier adversarial training. Furthermore, the MSD module performs a pretext task on target domain samples to extract class-relevant knowledge. Specifically, MSD enforces consistency between outputs generated from masked target images, where spatial-spectral portions of an HSI patch are randomly obscured, and predictions produced from the complete patches by an exponential moving average (EMA) teacher. By minimizing the consistency loss, the network learns to associate categorical semantics with unmasked regions. Notably, MSD is tailored for HSI data by preserving each sample's central pixel, the object to be classified, thus maintaining class information. Consequently, MSDA extracts highly discriminative features by improving class separability and learning class-relevant knowledge, ultimately enhancing UDA performance. Experimental results on four datasets demonstrate that MSDA surpasses existing state-of-the-art UDA methods for HSI classification. The code is available at https://github.com/Li-ZK/MSDA-2024.
Pages: 20