Cotton Stand Counting from Unmanned Aerial System Imagery Using MobileNet and CenterNet Deep Learning Models

Cited by: 32
Authors
Lin, Zhe [1 ]
Guo, Wenxuan [1 ,2 ]
Affiliations
[1] Texas Tech Univ, Dept Plant & Soil Sci, Lubbock, TX 79409 USA
[2] Texas A&M AgriLife Res, Dept Soil & Crop Sci, Lubbock, TX 79403 USA
Keywords
cotton stand count; unmanned aerial systems; deep learning; remote sensing; MobileNet; CenterNet; Python; Tensorflow; WHEAT
DOI
10.3390/rs13142822
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Subject Classification Codes
08; 0830
Abstract
An accurate stand count is a prerequisite to determining the emergence rate, assessing seedling vigor, and facilitating site-specific management for optimal crop production. Traditional manual counting methods in stand assessment are labor-intensive and time-consuming for large-scale breeding programs or production field operations. This study aimed to apply two deep learning models, MobileNet and CenterNet, to detect and count cotton plants at the seedling stage with unmanned aerial system (UAS) images. These models were trained with two datasets containing 400 and 900 images with variations in plant size and soil background brightness. The performance of these models was assessed with two testing datasets of different dimensions, testing dataset 1 with 300 by 400 pixels and testing dataset 2 with 250 by 1200 pixels. The model validation results showed that the mean average precision (mAP) and average recall (AR) were 79% and 73% for the CenterNet model, and 86% and 72% for the MobileNet model with 900 training images. The accuracy of cotton plant detection and counting was higher with testing dataset 1 for both the CenterNet and MobileNet models. The results showed that the CenterNet model had a better overall performance for cotton plant detection and counting with 900 training images. The results also indicated that more training images are required when applying object detection models to images with different dimensions from the training datasets. The mean absolute percentage error (MAPE), coefficient of determination (R²), and root mean squared error (RMSE) of the cotton plant counting were 0.07%, 0.98, and 0.37, respectively, with testing dataset 1 for the CenterNet model with 900 training images. Both the MobileNet and CenterNet models have the potential to detect and count cotton plants accurately and in a timely manner from high-resolution UAS images at the seedling stage. This study provides valuable information for selecting the right deep learning tools and the appropriate number of training images for object detection projects in agricultural applications.
Pages: 16
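The abstract reports plot-level counting accuracy as MAPE, R², and RMSE between manual and model-derived counts. The sketch below is not from the paper: it shows one plausible way to turn a detector's per-box confidence scores (e.g., output from a trained CenterNet or SSD-MobileNet model) into a plant count and then score those counts with the same three metrics; the score threshold and the example count values are illustrative assumptions only.

```python
# Minimal sketch (not the authors' code): count detections above a
# confidence threshold and evaluate stand counts with MAPE, R^2, and RMSE.
import numpy as np


def count_plants(detection_scores, score_threshold=0.5):
    """Count detections whose confidence meets or exceeds a threshold.

    detection_scores: 1-D array of per-box confidence scores for one image,
    as produced by a trained object detector. The threshold is an assumption.
    """
    scores = np.asarray(detection_scores, dtype=float)
    return int((scores >= score_threshold).sum())


def count_metrics(observed, predicted):
    """Return MAPE (%), R^2, and RMSE for plot-level stand counts."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    mape = np.mean(np.abs((obs - pred) / obs)) * 100.0
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    return mape, r2, rmse


if __name__ == "__main__":
    # Hypothetical manual vs. model-derived counts for a few image plots.
    observed = [42, 38, 51, 47, 40]
    predicted = [42, 39, 51, 46, 40]
    mape, r2, rmse = count_metrics(observed, predicted)
    print(f"MAPE = {mape:.2f}%  R^2 = {r2:.3f}  RMSE = {rmse:.2f}")
```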