Multi-Source Remote Sensing Pretraining Based on Contrastive Self-Supervised Learning

Cited by: 12
Authors
Liu, Chenfang [1 ]
Sun, Hao [1 ]
Xu, Yanjie [1 ]
Kuang, Gangyao [1 ]
Affiliations
[1] Natl Univ Def Technol, State Key Lab Complex Electromagnet Environm Effe, Changsha 410073, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multi-source; contrastive self-supervised learning; pretraining; SAR-optical; DATA FUSION;
DOI
10.3390/rs14184632
Chinese Library Classification
X [Environmental Science, Safety Science];
Subject Classification Codes
08 ; 0830 ;
Abstract
SAR-optical images from different sensors can provide consistent information for scene classification. However, the utilization of unlabeled SAR-optical images in deep learning-based remote sensing image interpretation remains an open issue. In recent years, contrastive self-supervised learning (CSSL) methods have shown great potential for obtaining meaningful feature representations from massive amounts of unlabeled data. This paper investigates the effectiveness of CSSL-based pretraining models for SAR-optical remote-sensing classification. Firstly, we analyze the contrastive strategies of single-source and multi-source SAR-optical data augmentation under different CSSL architectures. We find that the CSSL framework without explicit negative sample selection naturally fits the multi-source learning problem. Secondly, we find that registered SAR-optical images can guide a Siamese self-supervised network without negative samples to learn shared features, which is also the reason why the CSSL framework without negative samples outperforms the CSSL framework with negative samples. Finally, we apply the CSSL pretrained network without negative samples, which can learn the shared features of SAR-optical images, to the downstream domain adaptation task of transferring from optical to SAR images. We find that the choice of pretrained network is important for downstream tasks.
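The negative-free multi-source strategy the abstract describes can be illustrated with a small sketch: the registered SAR and optical images of the same scene act as the two views of a Siamese network, and each modality's prediction is pulled toward the other modality's (stop-gradient) projection, as in SimSiam-style objectives. All names, shapes, and the specific loss below are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a negative-free (SimSiam-style) objective for
# registered SAR-optical pairs. Hypothetical shapes/names; the paper's
# exact architecture and loss are not specified in this record.
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Normalize each row (one embedding per sample) to unit length."""
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def negative_cosine(p, z):
    """Negative cosine similarity; z plays the stop-gradient target role."""
    p, z = l2_normalize(p), l2_normalize(z)
    return -np.mean(np.sum(p * z, axis=1))

def siamese_loss(sar_pred, opt_proj, opt_pred, sar_proj):
    """Symmetric loss: each modality's predictor output is matched to the
    other modality's projection, so no negative samples are needed."""
    return 0.5 * negative_cosine(sar_pred, opt_proj) + \
           0.5 * negative_cosine(opt_pred, sar_proj)

# Toy usage: a batch of 4 embeddings of dimension 8 per modality.
rng = np.random.default_rng(0)
sar_feats = rng.normal(size=(4, 8))
opt_feats = rng.normal(size=(4, 8))
loss = siamese_loss(sar_feats, opt_feats, opt_feats, sar_feats)
# The loss is bounded in [-1, 1]; perfectly aligned embeddings give -1.
```

In practice the embeddings would come from a shared or modality-specific encoder plus projector/predictor heads, and the stop-gradient on the target branch is what prevents collapse without negatives.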
Pages: 19
Related Papers
50 items total
  • [41] Group Contrastive Self-Supervised Learning on Graphs
    Xu, Xinyi
    Deng, Cheng
    Xie, Yaochen
    Ji, Shuiwang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (03) : 3169 - 3180
  • [42] Self-supervised contrastive learning on agricultural images
    Guldenring, Ronja
    Nalpantidis, Lazaros
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2021, 191
  • [43] A comprehensive perspective of contrastive self-supervised learning
    Chen, Songcan
    Geng, Chuanxing
    FRONTIERS OF COMPUTER SCIENCE, 2021, 15 (04)
  • [44] A comprehensive perspective of contrastive self-supervised learning
    Chen, Songcan
    Geng, Chuanxing
    Frontiers of Computer Science, 2021, 15
  • [45] Slimmable Networks for Contrastive Self-supervised Learning
    Zhao, Shuai
    Zhu, Linchao
    Wang, Xiaohan
    Yang, Yi
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025, 133 (03) : 1222 - 1237
  • [46] Self-supervised contrastive learning for itinerary recommendation
    Chen, Lei
    Zhu, Guixiang
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 268
  • [47] Similarity Contrastive Estimation for Self-Supervised Soft Contrastive Learning
    Denize, Julien
    Rabarisoa, Jaonary
    Orcesi, Astrid
    Herault, Romain
    Canu, Stephane
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 2705 - 2715
  • [48] Pathological Image Contrastive Self-supervised Learning
    Qin, Wenkang
    Jiang, Shan
    Luo, Lin
    RESOURCE-EFFICIENT MEDICAL IMAGE ANALYSIS, REMIA 2022, 2022, 13543 : 85 - 94
  • [49] Contrastive Transformation for Self-supervised Correspondence Learning
    Wang, Ning
    Zhou, Wengang
    Li, Houqiang
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 10174 - 10182
  • [50] Self-Supervised Contrastive Learning for Singing Voices
    Yakura, Hiromu
    Watanabe, Kento
    Goto, Masataka
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 1614 - 1623