SegVeg: Segmenting RGB Images into Green and Senescent Vegetation by Combining Deep and Shallow Methods

Cited by: 17
Authors
Serouart, Mario [1 ,2 ]
Madec, Simon [1 ,3 ]
David, Etienne [1 ,2 ,4 ]
Velumani, Kaaviya
Lozano, Raul Lopez [2 ]
Weiss, Marie [2 ]
Baret, Frederic [2 ]
Affiliations
[1] Arvalis, Inst Vegetal, 228 Route Aerodrome CS 40509, F-84914 Avignon 9, France
[2] Avignon Univ, INRAE, UMR EMMAH, UMT CAPTE, 228 Route Aerodrome CS 40509, F-84914 Avignon 9, France
[3] CIRAD, UMR TETIS, F-34398 Montpellier, France
[4] Hiphen SAS, 228 Route Aerodrome CS 40509, F-84914 Avignon 9, France
Keywords
SEMANTIC SEGMENTATION; COLOR; CLASSIFICATION
DOI
10.34133/2022/9803570
CLC Classification
S3 [Agronomy (Crop Science)]
Discipline Code
0901
Abstract
Pixel segmentation of high-resolution RGB images into chlorophyll-active or nonactive vegetation classes is a first step often required before estimating key traits of interest. We have developed the SegVeg approach for semantic segmentation of RGB images into three classes (background, green vegetation, and senescent vegetation). This is achieved in two steps. A U-net model is first trained on a very large dataset to separate whole vegetation from the background. The green and senescent vegetation pixels are then separated using an SVM, a shallow machine learning technique, trained on a selection of pixels extracted from the images. The performance of the SegVeg approach is then compared to that of a 3-class U-net model trained with weak supervision on RGB images segmented with SegVeg as ground-truth masks. Results show that the SegVeg approach segments the three classes accurately. However, some confusion is observed, mainly between the background and senescent vegetation, particularly over the dark and bright regions of the images. The U-net model achieves similar performance, with a slight degradation over the green vegetation. The SVM pixel-based approach provides a more precise delineation of the green and senescent patches than the convolutional U-net. Using the components of several color spaces improves the classification of vegetation pixels into green and senescent classes. Finally, the models are used to predict the fractions of the three classes over whole images or regularly spaced grid pixels. Results show that the green fraction is very well estimated (R² = 0.94) by the SegVeg model, while the senescent and background fractions show slightly degraded performance (R² = 0.70 and 0.73, respectively), with mean 95% confidence error intervals of 2.7% and 2.1% for the senescent vegetation and background, versus 1% for the green vegetation. We have made SegVeg publicly available as a ready-to-use script and model, along with the entire annotated grid-pixels dataset.
We thus hope to make segmentation accessible to a broad audience by requiring neither manual annotation nor expert knowledge, or, at least, by offering a pretrained model for more specific uses.
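The second step described above (a shallow SVM classifying individual pixels into green vs. senescent using components from several color spaces) can be sketched roughly as follows. This is a minimal illustration, not the authors' released code: it uses scikit-learn's `SVC`, synthetic green/yellowish pixel clusters as stand-in training data, and only RGB plus HSV components as features (the paper draws on more color spaces); the helper names `rgb_to_hsv` and `pixel_features` are our own.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def rgb_to_hsv(rgb):
    """Vectorised RGB -> HSV conversion; rgb has shape (n, 3), values in [0, 1]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    mx, mn = rgb.max(axis=1), rgb.min(axis=1)
    d = mx - mn
    h = np.zeros_like(mx)
    mask = d > 0
    rm = mask & (mx == r)                 # red channel is the max
    gm = mask & (mx == g) & ~rm           # green channel is the max
    bm = mask & ~rm & ~gm                 # blue channel is the max
    h[rm] = ((g[rm] - b[rm]) / d[rm]) % 6
    h[gm] = (b[gm] - r[gm]) / d[gm] + 2
    h[bm] = (r[bm] - g[bm]) / d[bm] + 4
    h /= 6.0
    s = np.where(mx > 0, d / np.maximum(mx, 1e-12), 0.0)
    return np.stack([h, s, mx], axis=1)

def pixel_features(rgb):
    """Per-pixel feature vector: raw RGB concatenated with HSV components."""
    return np.hstack([rgb, rgb_to_hsv(rgb)])

# Synthetic stand-in training pixels: a green cluster and a
# yellowish-brown "senescent" cluster (the real model is trained
# on annotated grid pixels extracted from field images).
rng = np.random.default_rng(0)
green = np.clip(rng.normal([0.25, 0.60, 0.20], 0.05, (200, 3)), 0, 1)
senescent = np.clip(rng.normal([0.70, 0.60, 0.30], 0.05, (200, 3)), 0, 1)
X = pixel_features(np.vstack([green, senescent]))
y = np.array([0] * 200 + [1] * 200)  # 0 = green, 1 = senescent

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# Classify two held-out pixels: one green, one yellowish senescent.
probe = np.array([[0.20, 0.65, 0.20], [0.75, 0.60, 0.25]])
print(clf.predict(pixel_features(probe)))
```

Applying the classifier only to pixels that the first-stage U-net labeled as vegetation (rather than to the whole image) is what makes this two-step design cheap: the expensive deep model handles the hard vegetation/background separation, and the SVM only resolves the color-driven green/senescent split.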
Pages: 17