Self-Supervised Learning across the Spectrum

Cited: 0
Authors
Shenoy, Jayanth [1 ]
Zhang, Xingjian Davis [1 ]
Tao, Bill [1 ]
Mehrotra, Shlok [1 ]
Yang, Rem [1 ]
Zhao, Han [1 ]
Vasisht, Deepak [1 ]
Affiliations
[1] University of Illinois Urbana-Champaign, Champaign, IL 61801, USA
Funding
U.S. National Science Foundation
Keywords
SITS; foundational models; self-supervised learning; multimodal; cloud removal
DOI
10.3390/rs16183470
CLC Number
X [Environmental Science, Safety Science]
Discipline Codes
08; 0830
Abstract
Satellite image time series (SITS) segmentation is crucial for many applications, such as environmental monitoring, land cover mapping, and agricultural crop type classification. However, training models for SITS segmentation remains challenging due to the scarcity of training data, which requires fine-grained annotation. We propose S4, a new self-supervised pretraining approach that significantly reduces the need for labeled training data by exploiting two key insights about satellite imagery: (a) satellites capture images in different parts of the spectrum, such as radio and visible frequencies, and (b) satellite imagery is geo-registered, allowing for fine-grained spatial alignment. We use these insights to formulate the pretraining tasks in S4. To the best of our knowledge, S4 is the first multimodal and temporal approach to SITS segmentation. S4's novelty stems from leveraging multiple properties required for SITS self-supervision: (1) multiple modalities, (2) temporal information, and (3) pixel-level feature extraction. We also curate m2s2-SITS, a large-scale dataset of unlabeled, spatially aligned, multimodal, and geography-specific SITS that serves as representative pretraining data for S4. Finally, we evaluate S4 on multiple SITS segmentation datasets and demonstrate its efficacy against competing baselines under limited labeled data. Through extensive comparisons and ablation studies, we demonstrate S4's effectiveness as a feature extractor for downstream semantic segmentation.
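As one illustration of the alignment insight in the abstract, the following minimal PyTorch sketch shows a pixel-level cross-modal contrastive objective: because geo-registered optical and radar (SAR) images share a pixel grid, the co-located pixel in the other modality can serve as a label-free positive pair. All module names, shapes, and the specific InfoNCE-style loss below are illustrative assumptions and do not reproduce the published S4 objective.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelProjector(nn.Module):
    """Hypothetical 1x1-conv head mapping per-pixel backbone features to an embedding space."""
    def __init__(self, in_channels: int, embed_dim: int = 64):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) -> L2-normalized per-pixel embeddings (B, D, H, W)
        return F.normalize(self.proj(feats), dim=1)

def cross_modal_pixel_loss(opt_emb: torch.Tensor,
                           sar_emb: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: each optical pixel's positive is the co-located SAR pixel;
    every other pixel in the batch acts as a negative."""
    d = opt_emb.shape[1]
    # Flatten the spatial grid so every geo-registered pixel pair is one example.
    q = opt_emb.permute(0, 2, 3, 1).reshape(-1, d)   # (B*H*W, D)
    k = sar_emb.permute(0, 2, 3, 1).reshape(-1, d)   # (B*H*W, D)
    logits = q @ k.t() / temperature                 # pairwise similarities
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)

# Dummy usage with features from two modality-specific backbones on one date.
optical_feats = torch.randn(2, 128, 16, 16)   # placeholder optical backbone output
sar_feats = torch.randn(2, 128, 16, 16)       # placeholder SAR output on the same grid
proj_opt, proj_sar = PixelProjector(128), PixelProjector(128)
loss = cross_modal_pixel_loss(proj_opt(optical_feats), proj_sar(sar_feats))
loss.backward()

Flattening the grid is what makes fine-grained spatial alignment matter here: the supervision signal is defined per pixel rather than per image, which matches the pixel-level feature extraction the abstract lists as a requirement for SITS segmentation.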
Pages: 22