Self-Supervised Pretraining of Transformers for Satellite Image Time Series Classification

Cited by: 91
Authors
Yuan, Yuan [1 ]
Lin, Lei [2 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Geog & Biol Informat, Nanjing 210023, Peoples R China
[2] Beijing Qihoo Technol Co Ltd, Beijing 100015, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Bidirectional encoder representations from Transformers (BERT); classification; satellite image time series (SITS); self-supervised learning; transfer learning; unsupervised pretraining; LAND-COVER CLASSIFICATION; CROP CLASSIFICATION; REPRESENTATION;
DOI
10.1109/JSTARS.2020.3036602
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Satellite image time series (SITS) classification is a major research topic in remote sensing and is relevant for a wide range of applications. Deep learning approaches have been commonly employed for SITS classification and have provided state-of-the-art performance. However, deep learning methods suffer from overfitting when labeled data are scarce. To address this problem, we propose a novel self-supervised pretraining scheme to initialize a Transformer-based network by utilizing large-scale unlabeled data. Specifically, the model is trained to predict randomly contaminated observations given the entire time series of a pixel. The main idea of our proposal is to leverage the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land cover semantics. Once pretraining is completed, the pretrained network can be further adapted to various SITS classification tasks by fine-tuning all the model parameters on small-scale task-related labeled data. In this way, the general knowledge and representations about SITS can be transferred to a label-scarce task, thereby improving the generalization performance of the model as well as reducing the risk of overfitting. Comprehensive experiments have been carried out on three benchmark datasets over large study areas. Experimental results demonstrate the effectiveness of the proposed pretraining scheme, leading to substantial improvements in classification accuracy using a Transformer, a 1-D convolutional neural network, and a bidirectional long short-term memory network. The code and the pretrained model will be available at https://github.com/linlei1214/SITS-BERT upon publication.
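The sketch below illustrates the pretraining objective described in the abstract: randomly contaminate a fraction of the observations in a pixel's spectral time series and train a Transformer encoder to reconstruct the original values. It is a minimal, hypothetical PyTorch implementation of that idea, not the authors' released SITS-BERT code; the class name, contamination strategy (additive noise), mask ratio, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of masked-observation pretraining for pixel time series.
# Assumed, illustrative implementation; see the authors' repository for the
# actual SITS-BERT code.
import torch
import torch.nn as nn


class MaskedSITSPretrainer(nn.Module):
    def __init__(self, n_bands=10, d_model=64, n_heads=4, n_layers=3, max_len=64):
        super().__init__()
        self.embed = nn.Linear(n_bands, d_model)        # spectral embedding per time step
        self.pos = nn.Embedding(max_len, d_model)       # simplified positional encoding
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.reconstruct = nn.Linear(d_model, n_bands)  # predict original reflectances

    def forward(self, x, mask):
        # x: (batch, time, bands); mask: (batch, time) bool, True = contaminated step
        noise = torch.randn_like(x) * 0.5               # simple "contamination": additive noise
        x_corrupt = torch.where(mask.unsqueeze(-1), x + noise, x)
        t = torch.arange(x.size(1), device=x.device)
        h = self.embed(x_corrupt) + self.pos(t)[None, :, :]
        h = self.encoder(h)
        return self.reconstruct(h)


def pretrain_step(model, optimizer, x, mask_ratio=0.15):
    """One self-supervised step: reconstruct only the contaminated observations."""
    mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
    pred = model(x, mask)
    loss = ((pred - x) ** 2)[mask].mean()               # MSE on masked steps only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = MaskedSITSPretrainer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(32, 45, 10)                          # 32 pixels, 45 acquisitions, 10 bands
    print(pretrain_step(model, opt, x))
```

For the fine-tuning stage described in the abstract, the reconstruction head would be replaced by a classification head (e.g., a linear layer over pooled encoder outputs) and all parameters updated on the small labeled set; the pretrained encoder weights serve as the initialization.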
Pages: 474-487
Number of pages: 14