SegVeg: Segmenting RGB Images into Green and Senescent Vegetation by Combining Deep and Shallow Methods

Cited: 17
Authors
Serouart, Mario [1 ,2 ]
Madec, Simon [1 ,3 ]
David, Etienne [1 ,2 ,4 ]
Velumani, Kaaviya
Lozano, Raul Lopez [2 ]
Weiss, Marie [2 ]
Baret, Frederic [2 ]
Affiliations
[1] Arvalis, Inst Vegetal, 228 Route Aerodrome CS 40509, F-84914 Avignon 9, France
[2] Avignon Univ, INRAE, UMR EMMAH, UMT CAPTE, 228 Route Aerodrome CS 40509, F-84914 Avignon 9, France
[3] CIRAD, UMR TETIS, F-34398 Montpellier, France
[4] Hiphen SAS, 228 Route Aerodrome CS 40509, F-84914 Avignon 9, France
Keywords
SEMANTIC SEGMENTATION; COLOR; CLASSIFICATION
DOI
10.34133/2022/9803570
CLC number
S3 [Agronomy]
Subject classification code
0901
Abstract
Pixel-level segmentation of high-resolution RGB images into chlorophyll-active and nonactive vegetation classes is a first step often required before estimating key traits of interest. We have developed the SegVeg approach for semantic segmentation of RGB images into three classes (background, green vegetation, and senescent vegetation). This is achieved in two steps. A U-net model is first trained on a very large dataset to separate whole vegetation from the background. The green and senescent vegetation pixels are then separated using an SVM, a shallow machine learning technique, trained over a selection of pixels extracted from the images. The performance of the SegVeg approach is then compared to that of a 3-class U-net model trained with weak supervision over RGB images segmented by SegVeg as ground-truth masks. Results show that the SegVeg approach segments the three classes accurately. However, some confusion is observed, mainly between the background and senescent vegetation, particularly over the dark and bright regions of the images. The U-net model achieves similar performance, with a slight degradation over green vegetation. The pixel-based SVM approach provides a more precise delineation of the green and senescent patches than the convolutional U-net. Using the components of several color spaces improves the classification of vegetation pixels into green and senescent. Finally, the models are used to predict the fractions of the three classes over whole images or regularly spaced grid pixels. Results show that the green fraction is very well estimated (R² = 0.94) by the SegVeg model, while the senescent and background fractions show slightly degraded performance (R² = 0.70 and 0.73, respectively), with mean 95% confidence error intervals of 2.7% and 2.1% for senescent vegetation and background, versus 1% for green vegetation. We have made SegVeg publicly available as a ready-to-use script and model, along with the entire annotated grid-pixel dataset.
We thus hope to make segmentation accessible to a broad audience by requiring neither manual annotation nor expert knowledge, or at least by offering a pretrained model for more specific uses.
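The second stage described above (a shallow per-pixel SVM over features drawn from several color spaces, applied only to pixels the U-net flagged as vegetation) can be sketched roughly as follows. This is an illustrative assumption, not the paper's released implementation: the feature set, SVC hyperparameters, and toy pixel values are invented for the example.

```python
import colorsys
import numpy as np
from sklearn.svm import SVC

def pixel_features(rgb):
    """Stack one pixel's components from several color spaces (RGB, HSV, YIQ).
    The actual SegVeg feature set may differ; this is illustrative."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    y, i, q = colorsys.rgb_to_yiq(r, g, b)
    return [r, g, b, h, s, v, y, i, q]

# Toy training pixels: green vegetation vs. yellow/brown senescent tissue.
green = [(30, 120, 40), (50, 160, 60), (20, 100, 30), (70, 180, 80)]
senescent = [(180, 160, 60), (200, 170, 80), (150, 130, 50), (210, 190, 90)]

X = np.array([pixel_features(p) for p in green + senescent])
y = np.array([1] * len(green) + [0] * len(senescent))  # 1 = green, 0 = senescent

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# In the full pipeline, only pixels the U-net classified as vegetation
# would be passed to this second-stage classifier.
new_pixels = [(40, 140, 50), (190, 165, 70)]
pred = clf.predict(np.array([pixel_features(p) for p in new_pixels]))
```

In practice the classifier would be trained on the annotated grid-pixel dataset released with the paper, and the predicted labels combined with the U-net vegetation mask to produce the three-class map and the per-image class fractions.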
Pages: 17