X-Shaped Interactive Autoencoders With Cross-Modality Mutual Learning for Unsupervised Hyperspectral Image Super-Resolution

Cited: 63
Authors
Li, Jiaxin [1 ,2 ]
Zheng, Ke [3 ]
Li, Zhi [1 ,2 ]
Gao, Lianru [1 ]
Jia, Xiuping [4 ]
Affiliations
[1] Chinese Acad Sci, Aerosp Informat Res Inst, Key Lab Computat Opt Imaging Technol, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Coll Resources & Environm, Beijing 100049, Peoples R China
[3] Liaocheng Univ, Coll Geog & Environm, Liaocheng, Peoples R China
[4] Univ New South Wales, Sch Engn & Informat Technol, Canberra, ACT, Australia
Keywords
Hyperspectral image (HSI); spectral unmixing; super-resolution; unsupervised learning; TENSOR FACTORIZATION; MULTISPECTRAL IMAGES; FUSION; QUALITY; DECOMPOSITION; NETWORK; NET
DOI
10.1109/TGRS.2023.3300043
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry]
Discipline Classification Codes
0708; 070902
Abstract
Hyperspectral image super-resolution (HSI-SR) can compensate for the limitations of single-sensor imaging and provide desirable products with both high spatial and high spectral resolution. Among existing approaches, unmixing-inspired networks have drawn considerable attention due to their straightforward unsupervised paradigm. However, most of them fail to fully capture and exploit multimodal information because of the limited representation ability of their networks, leaving large room for improvement. To this end, we propose an X-shaped interactive autoencoder network with cross-modality mutual learning between hyperspectral and multispectral data, XINet for short, to cope with this problem. It employs a coupled structure equipped with two autoencoders, aiming at deriving latent abundances and the corresponding endmembers from the input correspondence. Inside the network, a novel X-shaped interactive architecture is designed by coupling two disjoint U-Nets via a parameter-sharing strategy, which not only enables sufficient information flow between the two modalities but also yields informative spatial-spectral features. Considering the complementarity between the modalities, a cross-modality mutual learning module (CMMLM) is constructed to further transfer knowledge from one modality to the other, allowing better utilization of multimodal features. Moreover, a joint self-supervised loss is proposed to effectively optimize XINet, enabling unsupervised training without supervision from external triplets. Extensive experiments, including super-resolved results on four datasets, robustness analysis, and extensions to other applications, demonstrate the superiority of our method.
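The unmixing paradigm the abstract builds on can be illustrated with the linear mixing model: a latent high-resolution HSI factors into abundances and endmembers, and the two observed inputs (low-resolution HSI, high-resolution MSI) are spatial and spectral degradations of that same latent image. The NumPy sketch below shows this shared-factor consistency only; all sizes and degradation operators are illustrative assumptions, not XINet's actual network.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the paper)
H = W = 8          # high-resolution spatial grid
bands = 30         # hyperspectral bands
P = 4              # number of endmembers
msi_bands = 3      # multispectral bands
ratio = 4          # spatial downsampling ratio

rng = np.random.default_rng(0)
E = rng.random((P, bands))                    # endmember signatures, P x bands
A = rng.dirichlet(np.ones(P), size=H * W)     # abundances (each row sums to one)
Z = A @ E                                     # latent HR-HSI, (H*W) x bands

# Spatial degradation: block-average the HR abundances, then mix -> LR-HSI
A_lr = A.reshape(H // ratio, ratio, W // ratio, ratio, P).mean(axis=(1, 3))
lr_hsi = A_lr.reshape(-1, P) @ E              # (H*W/ratio^2) x bands

# Spectral degradation: band-averaging response R -> HR-MSI
R = np.zeros((bands, msi_bands))
g = bands // msi_bands
for b in range(msi_bands):
    R[b * g:(b + 1) * g, b] = 1.0 / g
hr_msi = Z @ R                                # (H*W) x msi_bands

# Because mixing is linear, spatially degrading the latent HSI gives the same
# LR-HSI as mixing the degraded abundances -- the consistency that coupled
# autoencoders exploit when sharing latent factors across the two modalities.
Z_lr = Z.reshape(H // ratio, ratio, W // ratio, ratio, bands).mean(axis=(1, 3)).reshape(-1, bands)
assert np.allclose(Z_lr, lr_hsi)
```

Both observations are thus explained by one pair of latent factors (A, E), which is why a coupled two-autoencoder design can recover them without ground-truth HR-HSI supervision.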
Pages: 17