SSL-SoilNet: A Hybrid Transformer-Based Framework With Self-Supervised Learning for Large-Scale Soil Organic Carbon Prediction

Cited: 0
Authors
Kakhani, Nafiseh [1,2]
Rangzan, Moien [3]
Jamali, Ali [4]
Attarchi, Sara [3]
Alavipanah, Seyed Kazem [3]
Mommert, Michael [5]
Tziolas, Nikolaos [6]
Scholten, Thomas [1,2]
Affiliations
[1] Univ Tubingen, Dept Geosci Soil Sci & Geomorphol, CRC RessourceCultures 1070, Tubingen, Germany
[2] Univ Tubingen, DFG Cluster Excellence Machine Learning, Tubingen, Germany
[3] Univ Tehran, Fac Geog, Dept Remote Sensing & GIS, Tehran 141556619, Iran
[4] Simon Fraser Univ, Dept Geog, Burnaby, BC V5A 1S6, Canada
[5] Stuttgart Univ Appl Sci, Fac Geomat Comp Sci & Math, D-70174 Stuttgart, Germany
[6] Univ Florida, Inst Food & Agr Sci, Southwest Florida Res & Educ Ctr, Dept Soil Water & Ecosyst Sci, Gainesville, FL 34142 USA
Keywords
Data models; Meteorology; Transformers; Contrastive learning; Carbon; Remote sensing; Training; deep learning (DL); digital soil mapping (DSM); Europe; LUCAS; self-supervised model; soil organic carbon (SOC); spatiotemporal model; CLIMATE SURFACES; FOREST SOILS; STOCKS; INDICATORS; GRADIENT
DOI
10.1109/TGRS.2024.3446042
CLC Classification
P3 [Geophysics]; P59 [Geochemistry]
Discipline Codes
0708; 070902
Abstract
Soil organic carbon (SOC) constitutes a fundamental component of terrestrial ecosystem functionality, playing a pivotal role in nutrient cycling, hydrological balance, and erosion mitigation. Precise mapping of SOC distribution is imperative for the quantification of ecosystem services, notably carbon sequestration and soil fertility enhancement. Digital soil mapping (DSM) leverages statistical models and advanced technologies, including machine learning (ML), to accurately map soil properties such as SOC, using diverse data sources like satellite imagery, topography, remote sensing indices, and climate series. Within the domain of ML, self-supervised learning (SSL), which exploits unlabeled data, has gained prominence in recent years. This study introduces a novel approach that aims to learn the geographical link between multimodal features via self-supervised contrastive learning, employing pretrained Vision Transformers (ViTs) for image inputs and Transformers for climate data, before fine-tuning the model with ground reference samples. The proposed approach has undergone rigorous testing on two distinct large-scale datasets, with results indicating its superiority over traditional supervised learning models, which depend solely on labeled data. Furthermore, across several evaluation metrics, including root-mean-square error (RMSE), mean absolute error (MAE), and the concordance correlation coefficient (CCC), the proposed model achieves higher accuracy than conventional ML algorithms such as random forest and gradient boosting. The model is a robust tool for predicting SOC and contributes to the advancement of DSM techniques, thereby facilitating land management and decision-making processes based on accurate information.
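The core mechanism sketched in the abstract is that embeddings of co-located inputs from the two modalities are pulled together by a contrastive objective before supervised fine-tuning on SOC labels. The following is a minimal PyTorch-style sketch of that idea, not the authors' implementation; the class and function names (ClimateTransformer, info_nce), the dimensions, and the stand-in image encoder used in place of a pretrained ViT are assumptions for illustration only.

# Illustrative sketch (not from the paper): align image and climate embeddings
# for co-located samples with an InfoNCE-style contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClimateTransformer(nn.Module):
    # Hypothetical Transformer encoder for a climate series of shape (T, n_vars).
    def __init__(self, n_vars=4, d_model=64, n_heads=4, n_layers=2, out_dim=128):
        super().__init__()
        self.proj_in = nn.Linear(n_vars, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.proj_out = nn.Linear(d_model, out_dim)

    def forward(self, x):                       # x: (B, T, n_vars)
        h = self.encoder(self.proj_in(x))       # (B, T, d_model)
        return self.proj_out(h.mean(dim=1))     # pooled embedding: (B, out_dim)

def info_nce(z_img, z_clim, temperature=0.07):
    # Each image embedding is paired with the climate embedding of the same location;
    # all other samples in the batch serve as negatives.
    z_img = F.normalize(z_img, dim=-1)
    z_clim = F.normalize(z_clim, dim=-1)
    logits = z_img @ z_clim.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(z_img.size(0))       # positives lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Usage with a stand-in image encoder (a real setup would use a pretrained ViT backbone).
img_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
clim_encoder = ClimateTransformer()
imgs = torch.randn(8, 3, 32, 32)                # image patches at 8 sample locations
clim = torch.randn(8, 12, 4)                    # 12 time steps x 4 climate variables
loss = info_nce(img_encoder(imgs), clim_encoder(clim))

After such pretraining, the pretrained encoders would be fine-tuned with a regression head on the labeled ground reference samples, and predictions evaluated with metrics such as RMSE, MAE, and CCC.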
Pages: 15
Related Papers
50 records in total (10 shown)
  • [1] Transformer-Based Self-Supervised Learning for Emotion Recognition
    Vazquez-Rodriguez, Juan
    Lefebvre, Gregoire
    Cumin, Julien
    Crowley, James L.
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2605 - 2612
  • [2] Self-Supervised Graph Transformer on Large-Scale Molecular Data
    Rong, Yu
    Bian, Yatao
    Xu, Tingyang
    Xie, Weiyang
    Wei, Ying
    Huang, Wenbing
    Huang, Junzhou
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [3] Self-supervised Learning for Large-scale Item Recommendations
    Yao, Tiansheng
    Yi, Xinyang
    Cheng, Derek Zhiyuan
    Yu, Felix
    Chen, Ting
    Menon, Aditya
    Hong, Lichan
    Chi, Ed H.
    Tjoa, Steve
    Kang, Jieqi
    Ettinger, Evan
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 4321 - 4330
  • [4] TransPath: Transformer-Based Self-supervised Learning for Histopathological Image Classification
    Wang, Xiyue
    Yang, Sen
    Zhang, Jun
    Wang, Minghui
    Zhang, Jing
    Huang, Junzhou
    Yang, Wei
    Han, Xiao
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT VIII, 2021, 12908 : 186 - 195
  • [5] Self-supervised learning based on Transformer for flow reconstruction and prediction
    Xu, Bonan
    Zhou, Yuanye
    Bian, Xin
    PHYSICS OF FLUIDS, 2024, 36 (02)
  • [6] Transformer-Based Self-Supervised Multimodal Representation Learning for Wearable Emotion Recognition
    Wu, Yujin
    Daoudi, Mohamed
    Amad, Ali
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2024, 15 (01) : 157 - 172
  • [7] Vision Transformer-Based Self-supervised Learning for Ulcerative Colitis Grading in Colonoscopy
    Pyatha, Ajay
    Xu, Ziang
    Ali, Sharib
    DATA ENGINEERING IN MEDICAL IMAGING, DEMI 2023, 2023, 14314 : 102 - 110
  • [8] Self-supervised contrastive representation learning for large-scale trajectories
    Li, Shuzhe
    Chen, Wei
    Yan, Bingqi
    Li, Zhen
    Zhu, Shunzhi
    Yu, Yanwei
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 148 : 357 - 366
  • [9] MOFormer: Self-Supervised Transformer Model for Metal-Organic Framework Property Prediction
    Cao, Zhonglin
    Magar, Rishikesh
    Wang, Yuyang
    Farimani, Amir Barati
    JOURNAL OF THE AMERICAN CHEMICAL SOCIETY, 2023, 145 (05) : 2958 - 2967
  • [10] PersonViT: Large-Scale Self-Supervised Vision Transformer for Person Re-Identification
    Hu, Bin
    Wang, Xinggang
    Liu, Wenyu
    MACHINE VISION AND APPLICATIONS, 2025, 36 (02)