A lightweight CNN-Transformer network for pixel-based crop mapping using time-series Sentinel-2 imagery

Cited: 0
Authors
Wang, Yumiao [1 ,2 ]
Feng, Luwei [3 ]
Sun, Weiwei [1 ]
Wang, Lihua [1 ]
Yang, Gang [1 ]
Chen, Binjie [1 ]
Affiliations
[1] Ningbo Univ, Dept Geog & Spatial Informat Tech, Ningbo 315211, Peoples R China
[2] Ningbo Univ, Inst East China Sea, Ningbo 315211, Zhejiang, Peoples R China
[3] Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan 430079, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Crop mapping; Convolutional neural network; Transformer; Pixel-based classification; Temporal Sentinel-2 data;
DOI
10.1016/j.compag.2024.109370
Chinese Library Classification (CLC)
S [Agricultural Sciences];
Subject classification code
09;
Abstract
Deep learning approaches have provided state-of-the-art performance in crop mapping. Recently, several studies have combined the strengths of two dominant deep learning architectures, Convolutional Neural Networks (CNNs) and Transformers, to classify crops using remote sensing images. Despite their success, many of these models rely on patch-based methods that require extensive data labeling, since each sample contains multiple pixels with corresponding labels, which raises the cost of data preparation and processing. Moreover, previous methods rarely considered the impact of missing values caused by cloud cover and missing observations in remote sensing data. Therefore, this study proposes a lightweight multi-stage CNN-Transformer network (MCTNet) for pixel-based crop mapping using time-series Sentinel-2 imagery. MCTNet consists of several successive modules, each containing a CNN sub-module and a Transformer sub-module that extract complementary features from the imagery. An attention-based learnable positional encoding (ALPE) module is designed in the Transformer sub-module to capture the complex temporal relations in time-series data with different missing rates. Arkansas and California in the U.S. were selected to evaluate the model. Experimental results show that MCTNet is lightweight, with the fewest parameters and the lowest memory usage among the compared models, while outperforming eight advanced models. Specifically, MCTNet obtained an overall accuracy (OA) of 0.968, a kappa coefficient (Kappa) of 0.951, and a macro-averaged F1 score (F1) of 0.933 in Arkansas, and an OA of 0.852, a Kappa of 0.806, and an F1 of 0.829 in California. The results highlight the importance of each component of the model, particularly the ALPE module, which improved the Kappa of MCTNet by 4.2% in Arkansas and increased the model's robustness to missing values in remote sensing data. Additionally, visualization results demonstrated that the features extracted by the CNN and Transformer sub-modules are complementary, explaining the effectiveness of MCTNet.
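The abstract describes the architecture only at a high level. Below is a minimal, illustrative PyTorch sketch of how a multi-stage CNN-Transformer of this kind could be assembled for per-pixel time-series classification. It is not the authors' implementation: the class names (ALPE, MCTStage, MCTNet as code), the parallel CNN/Transformer fusion by addition, the sigmoid gating used for the learnable positional encoding, and all hyperparameters are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's code): a multi-stage CNN-Transformer
# for per-pixel classification of a Sentinel-2 time series of shape
# (batch, timesteps, bands).
import torch
import torch.nn as nn


class ALPE(nn.Module):
    """Hypothetical attention-based learnable positional encoding: a learnable
    embedding per time step, gated by a per-step score so that time steps with
    missing or cloudy observations can be down-weighted."""
    def __init__(self, num_steps: int, dim: int):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, num_steps, dim))
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, x):                      # x: (B, T, D)
        weight = self.gate(x)                  # (B, T, 1) attention score
        return x + weight * self.pos           # weighted positional encoding


class MCTStage(nn.Module):
    """One stage: a 1-D CNN sub-module (local temporal features) in parallel
    with a Transformer sub-module (global temporal relations)."""
    def __init__(self, num_steps: int, dim: int, heads: int = 4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size=3, padding=1),
            nn.BatchNorm1d(dim), nn.ReLU(),
        )
        self.alpe = ALPE(num_steps, dim)
        self.transformer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=2 * dim,
            batch_first=True)

    def forward(self, x):                      # x: (B, T, D)
        local = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        global_ = self.transformer(self.alpe(x))
        return local + global_                 # fuse complementary features


class MCTNet(nn.Module):
    """Stacked stages followed by temporal pooling and a classifier head."""
    def __init__(self, bands: int, num_steps: int, classes: int,
                 dim: int = 64, stages: int = 2):
        super().__init__()
        self.embed = nn.Linear(bands, dim)
        self.stages = nn.Sequential(
            *[MCTStage(num_steps, dim) for _ in range(stages)])
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                      # x: (B, T, bands)
        h = self.stages(self.embed(x))
        return self.head(h.mean(dim=1))        # pool over time, classify


# Usage example: 10 Sentinel-2 bands, 24 acquisition dates, 12 crop classes.
model = MCTNet(bands=10, num_steps=24, classes=12)
logits = model(torch.randn(8, 24, 10))         # -> (8, 12)
```

Note that the input is a single pixel's spectral time series rather than an image patch, which is what allows each labeled pixel to serve as one training sample and keeps labeling costs low, as argued in the abstract.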
Pages: 17