Self-Supervised Pretraining of Transformers for Satellite Image Time Series Classification

Cited by: 91
Authors
Yuan, Yuan [1 ]
Lin, Lei [2 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Geog & Biol Informat, Nanjing 210023, Peoples R China
[2] Beijing Qihoo Technol Co Ltd, Beijing 100015, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Bidirectional encoder representations from Transformers (BERT); classification; satellite image time series (SITS); self-supervised learning; transfer learning; unsupervised pretraining; LAND-COVER CLASSIFICATION; CROP CLASSIFICATION; REPRESENTATION;
DOI
10.1109/JSTARS.2020.3036602
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Satellite image time series (SITS) classification is a major research topic in remote sensing and is relevant for a wide range of applications. Deep learning approaches have been commonly employed for SITS classification and have provided state-of-the-art performance. However, deep learning methods suffer from overfitting when labeled data are scarce. To address this problem, we propose a novel self-supervised pretraining scheme to initialize a transformer-based network by utilizing large-scale unlabeled data. In detail, the model is asked to predict randomly contaminated observations given an entire time series of a pixel. The main idea of our proposal is to leverage the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land cover semantics. Once pretraining is completed, the pretrained network can be further adapted to various SITS classification tasks by fine-tuning all the model parameters on small-scale task-related labeled data. In this way, the general knowledge and representations about SITS can be transferred to a label-scarce task, thereby improving the generalization performance of the model as well as reducing the risk of overfitting. Comprehensive experiments have been carried out on three benchmark datasets over large study areas. Experimental results demonstrate the effectiveness of the proposed pretraining scheme, leading to substantial improvements in classification accuracy using a transformer, a 1-D convolutional neural network, and a bidirectional long short-term memory network. The code and the pretrained model will be available at https://github.com/linlei1214/SITS-BERT upon publication.
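The pretraining objective described above — corrupt a few randomly chosen observations in a pixel's time series, then train the network to reconstruct them — can be sketched in plain Python. This is a hypothetical illustration of the masked-prediction idea, not the authors' implementation: the corruption strategy (Gaussian noise), the mask ratio, and the function names are all assumptions, and the model itself is omitted.

```python
import random

def mask_time_series(series, mask_ratio=0.15, rng=None):
    """Randomly contaminate observations in a pixel time series.

    series: list of T observations, each a list of B spectral band values.
    Returns (corrupted, mask), where mask[t] is True for contaminated dates.
    Hypothetical helper illustrating the paper's masked-prediction pretext task.
    """
    rng = rng or random.Random(0)
    T = len(series)
    n_mask = max(1, round(mask_ratio * T))
    masked_idx = set(rng.sample(range(T), n_mask))
    mask = [t in masked_idx for t in range(T)]
    corrupted = [
        # One possible corruption: replace the observation with random noise.
        [rng.gauss(0.0, 1.0) for _ in obs] if mask[t] else list(obs)
        for t, obs in enumerate(series)
    ]
    return corrupted, mask

def reconstruction_loss(pred, target, mask):
    """MSE computed only at contaminated dates, as in masked pretraining."""
    terms = [
        (p - q) ** 2
        for t, (po, to) in enumerate(zip(pred, target)) if mask[t]
        for p, q in zip(po, to)
    ]
    return sum(terms) / len(terms)

# Toy pixel: 10 acquisition dates, 4 spectral bands.
rng = random.Random(42)
series = [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(10)]
corrupted, mask = mask_time_series(series, mask_ratio=0.2, rng=rng)
# A real setup would feed `corrupted` to the transformer and minimize this
# loss between its predictions and the original values at masked dates.
loss = reconstruction_loss(corrupted, series, mask)
```

During fine-tuning, the same encoder would instead be topped with a classification head and trained on the small labeled set, which is how the pretrained spectral-temporal representations transfer to a label-scarce task.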
Pages: 474-487 (14 pages)
Related Papers
(50 total)
  • [41] Self-Supervised Pretraining Transformer for Seismic Data Denoising
    Wang, Hongzhou
    Lin, Jun
    Li, Yue
    Dong, Xintong
    Tong, Xunqian
    Lu, Shaoping
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62 : 1 - 25
  • [42] Self-Supervised Monocular Depth Estimation With Extensive Pretraining
    Choi, Hyukdoo
    IEEE ACCESS, 2021, 9 : 157236 - 157246
  • [44] Decoupled self-supervised label augmentation for fully-supervised image classification
    Gao, Wanshun
    Wu, Meiqing
    Lam, Siew-Kei
    Xia, Qihui
    Zou, Jianhua
    KNOWLEDGE-BASED SYSTEMS, 2022, 235
  • [45] Self-supervised vision transformers for semantic segmentation
    Gu, Xianfan
    Hu, Yingdong
    Wen, Chuan
    Gao, Yang
    Computer Vision and Image Understanding, 2025, 251
  • [46] PROPERTY NEURONS IN SELF-SUPERVISED SPEECH TRANSFORMERS
    Lin, Tzu-Quan
    Lin, Guan-Ting
    Lee, Hung-Yi
    Tang, Hao
arXiv preprint
  • [47] Emerging Properties in Self-Supervised Vision Transformers
    Caron, Mathilde
    Touvron, Hugo
    Misra, Ishan
    Jegou, Herve
    Mairal, Julien
    Bojanowski, Piotr
    Joulin, Armand
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 9630 - 9640
  • [48] Nearest Neighboring Self-Supervised Learning for Hyperspectral Image Classification
    Qin, Yao
    Ye, Yuanxin
    Zhao, Yue
    Wu, Junzheng
    Zhang, Han
    Cheng, Kenan
    Li, Kun
    REMOTE SENSING, 2023, 15 (06)
  • [49] Self-Supervised Classification of SAR Images With Optical Image Assistance
    Li, Chenxuan
    Guo, Weiwei
    Zhang, Zenghui
    Zhang, Tao
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61 : 1 - 15
  • [50] Big Self-Supervised Models Advance Medical Image Classification
    Azizi, Shekoofeh
    Mustafa, Basil
    Ryan, Fiona
    Beaver, Zachary
    Freyberg, Jan
    Deaton, Jonathan
    Loh, Aaron
    Karthikesalingam, Alan
    Kornblith, Simon
    Chen, Ting
    Natarajan, Vivek
    Norouzi, Mohammad
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 3458 - 3468