Dynamic and Adaptive Self-Training for Semi-Supervised Remote Sensing Image Semantic Segmentation

Cited by: 0
Authors
Jin, Jidong [1 ,2 ,3 ,4 ]
Lu, Wanxuan [1 ,2 ]
Yu, Hongfeng [1 ,2 ]
Rong, Xuee [1 ,2 ,3 ,4 ]
Sun, Xian [1 ,2 ,3 ,4 ]
Wu, Yirong [1 ,2 ,3 ,4 ]
Affiliations
[1] Chinese Acad Sci, Aerosp Informat Res Inst, Inst Elect, Beijing 100190, Peoples R China
[2] Chinese Acad Sci, Inst Elect, Key Lab Network Informat Syst Technol NIST, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci, Sch Elect Elect & Commun Engn, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Remote sensing; Semantic segmentation; Transformers; Data models; Training; Semantics; Predictive models; Consistency regularization (CR); remote sensing (RS) image; self-training; semantic segmentation; semisupervised learning (SSL);
DOI
10.1109/TGRS.2024.3407142
Chinese Library Classification
P3 [Geophysics]; P59 [Geochemistry];
Subject Classification Codes
0708; 070902;
Abstract
Remote sensing (RS) technology has made remarkable progress, providing a wealth of data for applications such as ecological conservation and urban planning. However, meticulous annotation of these data is labor-intensive, leading to a shortage of labeled data, particularly for tasks like semantic segmentation. Semi-supervised methods that combine consistency regularization (CR) with self-training offer a way to exploit both labeled and unlabeled data efficiently, but they struggle when the labeled-to-unlabeled data ratio is highly imbalanced. To tackle these challenges, we introduce a self-training approach named dynamic and adaptive self-training (DAST), which integrates dynamic pseudo-label sampling (DPS), distribution matching (DM), and adaptive threshold updating (ATU). DPS addresses class-distribution imbalance by prioritizing classes with fewer samples, while DM and ATU reduce distribution disparities by adjusting model predictions across augmented images within the CR framework so that they align with the actual data distribution. Experimental results on the Potsdam and iSAID datasets demonstrate that DAST effectively balances the class distribution, aligns model predictions with the data distribution, and stabilizes pseudo-labels, achieving state-of-the-art performance on both datasets. These findings highlight the potential of DAST for overcoming the challenges posed by large disparities in labeled-to-unlabeled data ratios.
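The abstract names dynamic pseudo-label sampling (DPS) and adaptive threshold updating (ATU) without giving their exact formulation here. The NumPy sketch below only illustrates the two general ideas under stated assumptions: the function names, the EMA momentum, the base threshold, and the sampling budget are hypothetical choices for illustration, not the authors' implementation.

    import numpy as np

    def update_class_thresholds(probs, thresholds, base_tau=0.95, momentum=0.99):
        # Illustrative adaptive threshold updating: track an EMA of each class's
        # mean prediction confidence and scale the base threshold by it, so that
        # low-confidence (often under-represented) classes get a lower acceptance bar.
        preds = probs.argmax(axis=1)
        conf = probs.max(axis=1)
        new_thresholds = thresholds.copy()
        for c in range(probs.shape[1]):
            mask = preds == c
            if mask.any():
                new_thresholds[c] = momentum * thresholds[c] + \
                    (1 - momentum) * base_tau * conf[mask].mean()
        return new_thresholds

    def sample_pseudo_labels(probs, thresholds, class_counts, budget, rng):
        # Illustrative dynamic pseudo-label sampling: keep pixels whose confidence
        # exceeds their class threshold, then draw a fixed budget with weights
        # inversely proportional to how often each class has already been selected,
        # so classes with fewer samples are prioritized.
        preds = probs.argmax(axis=1)
        conf = probs.max(axis=1)
        keep = conf >= thresholds[preds]
        weights = np.where(keep, 1.0 / (class_counts[preds] + 1.0), 0.0)
        if weights.sum() == 0:
            return np.empty(0, dtype=int)
        n = min(budget, int(keep.sum()))
        return rng.choice(len(preds), size=n, replace=False, p=weights / weights.sum())

    # Toy usage with 3 classes and 1000 unlabeled pixels (softmax outputs).
    rng = np.random.default_rng(0)
    probs = rng.dirichlet([2.0, 1.0, 0.5], size=1000)
    thresholds = np.full(3, 0.7)
    class_counts = np.array([800.0, 150.0, 50.0])   # running pseudo-label tallies
    thresholds = update_class_thresholds(probs, thresholds)
    selected = sample_pseudo_labels(probs, thresholds, class_counts, budget=200, rng=rng)

In an actual CR-based pipeline, the selected pixels from a weakly augmented view would supervise the strongly augmented view, with class_counts and thresholds updated each iteration; the distribution-matching step described in the abstract is not sketched here.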
Pages: 1 / 1
Page count: 14
Related Papers
50 records in total
  • [21] Semi-supervised Deep Learning via Transformation Consistency Regularization for Remote Sensing Image Semantic Segmentation
    Zhang, Bin
    Zhang, Yongjun
    Li, Yansheng
    Wan, Yi
    Guo, Haoyu
    Zheng, Zhi
    Yang, Kun
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2023, 16 : 5782 - 5796
  • [22] A Bias Correction Semi-Supervised Semantic Segmentation Framework for Remote Sensing Images
    Zhang, Li
    Tan, Zhenshan
    Zheng, Yuzhi
    Zhang, Guo
    Zhang, Wen
    Li, Zhijiang
    IEEE Transactions on Geoscience and Remote Sensing, 2025, 63
  • [23] SEMI-SUPERVISED LANDCOVER CLASSIFICATION WITH ADAPTIVE PIXEL-REBALANCING SELF-TRAINING
    Lu, Xiaoqiang
    Cao, Guojin
    Gou, Tong
    2022 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2022), 2022, : 4611 - 4614
  • [24] Semi-supervised Object Detection with Adaptive Class-Rebalancing Self-Training
    Zhang, Fangyuan
    Pan, Tianxiang
    Wang, Bin
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 3252 - 3261
  • [25] Semi-supervised self-training for sentence subjectivity classification
    Wang, Bin
    Spencer, Bruce
    Ling, Charles X.
    Zhang, Harry
ADVANCES IN ARTIFICIAL INTELLIGENCE, 2008, 5032 : 344+
  • [26] Semi-supervised self-training for decision tree classifiers
    Tanha, Jafar
    van Someren, Maarten
    Afsarmanesh, Hamideh
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2017, 8 (01) : 355 - 370
  • [27] Semi-supervised Gait Recognition Based on Self-training
    Li, Yanan
    Yin, Yilong
    Liu, Lili
    Pang, Shaohua
    Yu, Qiuhong
    2012 IEEE NINTH INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL-BASED SURVEILLANCE (AVSS), 2012, : 288 - 293
  • [28] Semi-supervised self-training of object detection models
    Rosenberg, C
    Hebert, M
    Schneiderman, H
    WACV 2005: SEVENTH IEEE WORKSHOP ON APPLICATIONS OF COMPUTER VISION, PROCEEDINGS, 2005, : 29 - 36
  • [29] Self-training guided disentangled adaptation for cross-domain remote sensing image semantic segmentation
    Zhao, Qi
    Lyu, Shuchang
    Zhao, Hongbo
    Liu, Binghao
    Chen, Lijiang
    Cheng, Guangliang
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2024, 127
  • [30] Semi-supervised Continual Learning with Meta Self-training
    Ho, Stella
    Liu, Ming
    Du, Lan
    Li, Yunfeng
    Gao, Longxiang
    Gao, Shang
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 4024 - 4028