Barlow twin self-supervised pre-training for remote sensing change detection

Cited by: 1
Authors
Feng, Wenqing [1 ]
Tu, Jihui [2 ]
Sun, Chenhao [3 ]
Xu, Wei [1 ,4 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp Sci, Hangzhou, Peoples R China
[2] Yangtze Univ, Elect & Informat Sch, Jingzhou, Peoples R China
[3] Changsha Univ Sci & Technol, Elect & Informat Engn Sch, Changsha, Peoples R China
[4] Natl Univ Def Technol, Informat Syst & Management Coll, Changsha, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
NETWORKS;
DOI
10.1080/2150704X.2023.2264493
Chinese Library Classification
TP7 [Remote Sensing Technology];
Subject Classification Codes
081102 ; 0816 ; 081602 ; 083002 ; 1404 ;
Abstract
Remote sensing change detection (CD) methods that rely on supervised deep convolutional neural networks require large-scale labelled data, which is time-consuming and laborious to collect and label, especially for bi-temporal samples containing changed areas. Conversely, acquiring a large volume of unannotated images is relatively easy. Recently, self-supervised contrastive learning has emerged as a promising method for learning from unannotated images, thereby reducing the need for annotation. However, most existing methods employ random values or ImageNet pre-trained models to initialize their encoders and lack prior knowledge tailored to the demands of CD tasks, thus constraining the performance of CD models. To address these challenges, we propose a novel Barlow Twins self-supervised pre-training method for CD (BTSCD), which uses absolute feature differences to directly learn distinct representations associated with changed regions from unlabelled bi-temporal remote sensing images in a self-supervised manner. Experimental results obtained using two publicly available CD datasets demonstrate that our proposed approach exhibits competitive quantitative performance. Moreover, the proposed method achieved final results superior to those of existing state-of-the-art methods.
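The Barlow Twins objective named in the abstract trains an encoder by driving the cross-correlation matrix of two embedding views towards the identity: diagonal terms enforce invariance between views, off-diagonal terms reduce redundancy across dimensions. A minimal NumPy sketch of that loss follows; the function name, the λ value, and the toy embeddings are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins loss: push the cross-correlation matrix of two
    batches of embeddings (shape: batch x dim) towards the identity."""
    n, _ = z1.shape
    # Standardize each embedding dimension over the batch.
    z1 = (z1 - z1.mean(axis=0)) / z1.std(axis=0)
    z2 = (z2 - z2.mean(axis=0)) / z2.std(axis=0)
    c = z1.T @ z2 / n                                   # (dim, dim) cross-correlation
    on_diag = np.sum((1.0 - np.diag(c)) ** 2)           # invariance term
    off_diag = np.sum((c - np.diag(np.diag(c))) ** 2)   # redundancy-reduction term
    return on_diag + lam * off_diag
```

In BTSCD this kind of objective would operate on features derived from unlabelled bi-temporal image pairs; the absolute feature differencing the abstract describes is a separate step applied before or alongside the loss and is not shown here.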
Pages: 1087-1099 (13 pages)
Related Papers
50 records total (records 31-40 shown)
  • [31] Contrastive Self-Supervised Pre-Training for Video Quality Assessment
    Chen, Pengfei
    Li, Leida
    Wu, Jinjian
    Dong, Weisheng
    Shi, Guangming
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 458 - 471
  • [32] A debiased self-training framework with graph self-supervised pre-training aided for semi-supervised rumor detection
    Qiao, Yuhan
    Cui, Chaoqun
    Wang, Yiying
    Jia, Caiyan
    NEUROCOMPUTING, 2024, 604
  • [33] Masked Text Modeling: A Self-Supervised Pre-training Method for Scene Text Detection
    Wang, Keran
    Xie, Hongtao
    Wang, Yuxin
    Zhang, Dongming
    Qu, Yadong
    Gao, Zuan
    Zhang, Yongdong
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 2006 - 2015
  • [34] Feature-Differencing-Based Self-Supervised Pre-Training for Land-Use/Land-Cover Change Detection in High-Resolution Remote Sensing Images
    Feng, Wenqing
    Guan, Fangli
    Sun, Chenhao
    Xu, Wei
    LAND, 2024, 13 (07)
  • [35] Self-Supervised Pre-Training Joint Framework: Assisting Lightweight Detection Network for Underwater Object Detection
    Wang, Zhuo
    Chen, Haojie
    Qin, Hongde
    Chen, Qin
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2023, 11 (03)
  • [36] Token Boosting for Robust Self-Supervised Visual Transformer Pre-training
    Li, Tianjiao
    Foo, Lin Geng
    Hu, Ping
    Shang, Xindi
    Rahmani, Hossein
    Yuan, Zehuan
    Liu, Jun
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 24027 - 24038
  • [37] Joint Encoder-Decoder Self-Supervised Pre-training for ASR
    Arunkumar, A.
    Umesh, S.
    INTERSPEECH 2022, 2022, : 3418 - 3422
  • [38] Self-Supervised Pre-training for Protein Embeddings Using Tertiary Structures
    Guo, Yuzhi
    Wu, Jiaxiang
    Ma, Hehuan
    Huang, Junzhou
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 6801 - 6809
  • [39] Stabilizing Label Assignment for Speech Separation by Self-supervised Pre-training
    Huang, Sung-Feng
    Chuang, Shun-Po
    Liu, Da-Rong
    Chen, Yi-Chen
    Yang, Gene-Ping
    Lee, Hung-yi
    INTERSPEECH 2021, 2021, : 3056 - 3060
  • [40] DialogueBERT: A Self-Supervised Learning based Dialogue Pre-training Encoder
    Zhang, Zhenyu
    Guo, Tao
    Chen, Meng
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 3647 - 3651