Dynamic and Adaptive Self-Training for Semi-Supervised Remote Sensing Image Semantic Segmentation

Citations: 0
Authors
Jin, Jidong [1 ,2 ,3 ,4 ]
Lu, Wanxuan [1 ,2 ]
Yu, Hongfeng [1 ,2 ]
Rong, Xuee [1 ,2 ,3 ,4 ]
Sun, Xian [1 ,2 ,3 ,4 ]
Wu, Yirong [1 ,2 ,3 ,4 ]
Affiliations
[1] Chinese Acad Sci, Aerosp Informat Res Inst, Inst Elect, Beijing 100190, Peoples R China
[2] Chinese Acad Sci, Inst Elect, Key Lab Network Informat Syst Technol NIST, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci, Sch Elect Elect & Commun Engn, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Remote sensing; Semantic segmentation; Transformers; Data models; Training; Semantics; Predictive models; Consistency regularization (CR); remote sensing (RS) image; self-training; semantic segmentation; semisupervised learning (SSL);
DOI
10.1109/TGRS.2024.3407142
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry]
Discipline Classification Codes
0708; 070902
Abstract
Remote sensing (RS) technology has made remarkable progress, providing a wealth of data for applications such as ecological conservation and urban planning. However, meticulous annotation of these data is labor-intensive, leading to a shortage of labeled data, particularly for tasks like semantic segmentation. Semi-supervised methods, which combine consistency regularization (CR) with self-training, offer a way to exploit both labeled and unlabeled data efficiently. However, these methods encounter challenges due to the imbalanced ratio of labeled to unlabeled data. To tackle these challenges, we introduce a self-training approach named dynamic and adaptive self-training (DAST), which combines dynamic pseudo-label sampling (DPS), distribution matching (DM), and adaptive threshold updating (ATU). DPS addresses class distribution imbalance by giving priority to classes with fewer samples. Meanwhile, DM and ATU reduce distribution disparities by adjusting model predictions across augmented images within the CR framework, ensuring they align with the actual data distribution. Experimental results on the Potsdam and iSAID datasets demonstrate that DAST effectively balances class distribution, aligns model predictions with the data distribution, and stabilizes pseudo-labels, leading to state-of-the-art performance on both datasets. These findings highlight the potential of DAST for overcoming the challenges associated with significant disparities in labeled-to-unlabeled data ratios.
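The abstract does not spell out the DPS/ATU algorithms, so the following is only a generic sketch of the underlying idea: per-class confidence thresholds that adapt to the model's predictions, so that rare or hard classes (which tend to be predicted with lower confidence) still contribute pseudo-labels. All function names and the EMA update rule here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def update_thresholds(probs, thresholds, momentum=0.9):
    """EMA-update per-class confidence thresholds from current predictions.

    probs: (N, C) softmax outputs for N unlabeled pixels over C classes.
    thresholds: (C,) current per-class thresholds.
    """
    pred = probs.argmax(axis=1)   # hard prediction per pixel
    conf = probs.max(axis=1)      # confidence of each prediction
    new_t = thresholds.copy()
    for c in range(probs.shape[1]):
        mask = pred == c
        if mask.any():
            # Classes predicted with low average confidence get a lower
            # threshold, so rare/hard classes still yield pseudo-labels.
            new_t[c] = momentum * thresholds[c] + (1 - momentum) * conf[mask].mean()
    return new_t

def select_pseudo_labels(probs, thresholds):
    """Keep only pixels whose confidence clears their class's threshold."""
    pred = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    keep = conf >= thresholds[pred]
    return pred[keep], keep

# Toy run on random "predictions" for 1000 pixels and 6 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 6))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
t = update_thresholds(probs, np.full(6, 0.5))
labels, keep = select_pseudo_labels(probs, t)
```

In a real semi-supervised pipeline the selected pseudo-labels would then supervise the segmentation loss on strongly augmented views, per the CR framework described above.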
Pages: 1-1
Page count: 14
Related Papers
50 records in total
  • [31] SEMI-SUPERVISED FACE RECOGNITION WITH LDA SELF-TRAINING
    Zhao, Xuran
    Evans, Nicholas
    Dugelay, Jean-Luc
    2011 18TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2011,
  • [32] Semi-supervised self-training for decision tree classifiers
    Tanha, Jafar
    van Someren, Maarten
    Afsarmanesh, Hamideh
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2017, 8 : 355 - 370
  • [33] Federated Self-training for Semi-supervised Audio Recognition
    Tsouvalas, Vasileios
    Saeed, Aaqib
    Ozcelebi, Tanir
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2022, 21 (06)
  • [34] The student-teacher framework guided by self-training and consistency regularization for semi-supervised medical image segmentation
    Li, Boliang
    Xu, Yaming
    Wang, Yan
    Li, Luxiu
    Zhang, Bo
    PLOS ONE, 2024, 19 (04):
  • [35] Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation
    Chaitanya, Krishna
    Erdil, Ertunc
    Karani, Neerav
    Konukoglu, Ender
    MEDICAL IMAGE ANALYSIS, 2023, 87
  • [36] Semi-supervised point cloud segmentation using self-training with label confidence prediction
    Li, Hongyan
    Sun, Zhengxing
    Wu, Yunjie
    Song, Youcheng
    NEUROCOMPUTING, 2021, 437 : 227 - 237
  • [37] Semi-supervised semantic segmentation with cross teacher training
    Xiao, Hui
    Li, Dong
    Xu, Hao
    Fu, Shuibo
    Yan, Diqun
    Song, Kangkang
    Peng, Chengbin
    NEUROCOMPUTING, 2022, 508 : 36 - 46
  • [38] Semi-supervised semantic segmentation based on Generative Adversarial Networks for remote sensing images
    Liu Yu-Xi
    Zhang Bo
    Wang Bin
    JOURNAL OF INFRARED AND MILLIMETER WAVES, 2020, 39 (04) : 473 - 482
  • [39] Self-training and Multi-level Adversarial Network for Domain Adaptive Remote Sensing Image Segmentation
    Zheng, Yilin
    He, Lingmin
    Wu, Xiangping
    Pan, Chen
    NEURAL PROCESSING LETTERS, 2023, 55 (08) : 10613 - 10638