GraSS: Contrastive Learning With Gradient-Guided Sampling Strategy for Remote Sensing Image Semantic Segmentation

Cited by: 0
Authors
Zhang, Zhaoyang [1,2]
Ren, Zhen [1]
Tao, Chao [1]
Zhang, Yunsheng [1]
Peng, Chengli [1]
Li, Haifeng [1,2]
Affiliations
[1] Cent South Univ, Sch Geosci & Infophys, Changsha 410083, Peoples R China
[2] Xiangjiang Lab, Changsha 410205, Peoples R China
Keywords
Contrastive loss; gradient-guided; remote sensing image (RSI); self-supervised learning; semantic segmentation
DOI
10.1109/TGRS.2023.3336285
CLC classification: P3 [Geophysics]; P59 [Geochemistry]
Subject classification codes: 0708; 070902
Abstract
Self-supervised contrastive learning (SSCL) has achieved significant milestones in remote sensing image (RSI) understanding. Its essence lies in designing an unsupervised instance discrimination pretext task that extracts image features beneficial for downstream tasks from a large number of unlabeled images. However, existing instance discrimination-based SSCL suffers from two limitations when applied to the RSI semantic segmentation task: 1) the positive sample confounding issue (SCI): SSCL treats different augmentations of the same RSI as positive samples, but the richness, complexity, and imbalance of ground objects in RSIs mean that, while pulling positive samples closer, the model also pulls together a variety of different ground objects, which confuses their features; and 2) feature adaptation bias: SSCL treats RSI patches containing various ground objects as individual instances for discrimination and thus learns instance-level features, which are not fully adapted to pixel-level or object-level semantic segmentation tasks. To address these limitations, we construct samples containing single ground objects to alleviate the positive SCI and enable the model to learn object-level features from the contrast between single ground objects. Meanwhile, we observed that discrimination information can be mapped to specific regions of an RSI through the gradient of the unsupervised contrastive loss, and these regions tend to contain single ground objects. Based on this, we propose contrastive learning with a gradient-guided sampling strategy (GraSS) for RSI semantic segmentation. GraSS consists of two stages: 1) an instance discrimination warm-up stage that provides initial discrimination information to the contrastive loss gradients and 2) a gradient-guided sampling contrastive training stage that uses this discrimination information to adaptively construct samples containing single ground objects. Experimental results on three open datasets demonstrate that GraSS effectively enhances the performance of SSCL in high-resolution RSI semantic segmentation. Compared with eight baseline methods from six different types of SSCL, GraSS achieves an average improvement of 1.57% and a maximum improvement of 3.58% in mean intersection over union (mIoU). In addition, we discovered that the unsupervised contrastive loss gradients contain rich feature information, which suggests using gradient information more extensively during model training to attain additional model capacity. The source code is available at https://github.com/GeoX-Lab/GraSS.
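The core mechanism described in the abstract is to locate, through the gradient of an unsupervised contrastive loss, image regions that carry the discrimination information and to resample those regions as new training crops. The PyTorch sketch below illustrates that idea only in broad strokes and is not the authors' implementation: the `encoder`, the InfoNCE formulation, the fixed crop size, and the single-peak crop selection are assumptions introduced here for illustration.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.2):
    """Standard InfoNCE loss between two batches of embeddings (assumed form)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                      # (B, B) cosine-similarity logits
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def gradient_guided_crops(encoder, view1, view2, crop=64, tau=0.2):
    """Crop each image around the pixel with the largest contrastive-loss
    gradient magnitude, which is assumed to sit on a single ground object."""
    view1 = view1.clone().requires_grad_(True)      # (B, C, H, W)
    loss = info_nce(encoder(view1), encoder(view2), tau)
    # Per-pixel magnitude of the loss gradient w.r.t. the input image.
    grad = torch.autograd.grad(loss, view1)[0].abs().sum(dim=1)  # (B, H, W)
    B, H, W = grad.shape
    crops = []
    for b in range(B):
        idx = int(grad[b].flatten().argmax())
        cy, cx = idx // W, idx % W
        # Clamp the window so the crop stays inside the image.
        y0 = min(max(cy - crop // 2, 0), H - crop)
        x0 = min(max(cx - crop // 2, 0), W - crop)
        crops.append(view1[b, :, y0:y0 + crop, x0:x0 + crop].detach())
    return torch.stack(crops)                       # (B, C, crop, crop)
```

In this reading, the warm-up stage would train `encoder` with `info_nce` on ordinary augmented views, after which the crops returned by `gradient_guided_crops` would supplement or replace those views in the gradient-guided contrastive training stage.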
Pages: 1-14
Number of pages: 14
Related papers (50 in total)
• [1] RiSSNet: Contrastive Learning Network with a Relaxed Identity Sampling Strategy for Remote Sensing Image Semantic Segmentation. Li, Haifeng; Jing, Wenxuan; Wei, Guo; Wu, Kai; Su, Mingming; Liu, Lu; Wu, Hao; Li, Penglong; Qi, Ji. REMOTE SENSING, 2023, 15 (13)
• [2] GCL: Gradient-Guided Contrastive Learning for Medical Image Segmentation with Multi-Perspective Meta Labels. Wu, Yixuan; Chen, Jintai; Yan, Jiahuan; Zhu, Yiheng; Chen, Danny Z.; Wu, Jian. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023: 463-471
• [3] Joint Learning of Semantic Segmentation and Height Estimation for Remote Sensing Image Leveraging Contrastive Learning. Gao, Zhi; Sun, Wenbo; Lu, Yao; Zhang, Yichen; Song, Weiwei; Zhang, Yongjun; Zhai, Ruifang. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
• [4] SegMind: Semisupervised Remote Sensing Image Semantic Segmentation With Masked Image Modeling and Contrastive Learning Method. Li, Zhenghong; Chen, Hao; Wu, Jiangjiang; Li, Jun; Jing, Ning. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
• [5] Domain Adaptation for Remote Sensing Image Semantic Segmentation: An Integrated Approach of Contrastive Learning and Adversarial Learning. Bai, Lubin; Du, Shihong; Zhang, Xiuyuan; Wang, Haoyu; Liu, Bo; Ouyang, Song. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
• [6] Remote Sensing Image Semantic Change Detection Boosted by Semi-Supervised Contrastive Learning of Semantic Segmentation. Zhang, Xiuwei; Yang, Yizhe; Ran, Lingyan; Chen, Liang; Wang, Kangwei; Yu, Lei; Wang, Peng; Zhang, Yanning. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
• [7] Unsupervised Prototype-Wise Contrastive Learning for Domain Adaptive Semantic Segmentation in Remote Sensing Image. Ma, Siteng; Hou, Biao; Guo, Xianpeng; Wu, Zitong; Li, Zhihao; Wu, Hang; Jiao, Licheng. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
• [8] Guided contrastive boundary learning for semantic segmentation. Qiu, Shoumeng; Chen, Jie; Zhang, Haiqiang; Wan, Ru; Xue, Xiangyang; Pu, Jian. PATTERN RECOGNITION, 2024, 155
• [9] Spatial and Semantic Consistency Contrastive Learning for Self-Supervised Semantic Segmentation of Remote Sensing Images. Dong, Zhe; Liu, Tianzhu; Gu, Yanfeng. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61