Maize seedling information extraction from UAV images based on semi-automatic sample generation and Mask R-CNN model

Cited: 15
Authors
Gao, Xiang [1]
Zan, Xuli [1,3]
Yang, Shuai [1]
Zhang, Runda [1]
Chen, Shuaiming [1]
Zhang, Xiaodong [1,2]
Liu, Zhe [1,2]
Ma, Yuntao [1]
Zhao, Yuanyuan [1,2]
Li, Shaoming [1,2]
Affiliations
[1] China Agr Univ, Coll Land Sci & Technol, Beijing 100083, Peoples R China
[2] Minist Agr & Rural Affairs, Key Lab Remote Sensing Agrihazards, Beijing 100083, Peoples R China
[3] Beijing Water Sci & Technol Inst, Beijing, Peoples R China
Keywords
UAV; Precision agriculture; Emergence rate; Sample generation; Deep learning; IDENTIFICATION; INDEXES; HEIGHT
DOI
10.1016/j.eja.2023.126845
Chinese Library Classification (CLC)
S3 [Agriculture (Agronomy)]
Subject Classification Code
0901
Abstract
Context: The emergence rate and early growth of maize seedlings are crucial for variety selection and farm management; however, complex planting environments and morphological differences among seedlings pose great challenges for seedling detection.

Objective: This study aims to rapidly and accurately extract maize seedling information from UAV images in field environments while reducing labor cost.

Methods: We propose an automatic identification method for maize seedlings adapted to complex scenarios (different varieties and different seedling development stages) by fine-tuning a Mask R-CNN model. To address the difficulty of obtaining the training data required by deep learning algorithms, we also propose a semi-automatic labeling method for generating maize seedling sample data. Finally, we propose a method to locate gaps in the seedling rows (areas of missing seedlings) and to extract seedling information such as coverage and seedling-area uniformity, with mapping that covers the whole experimental field.

Results and conclusions: We examined the effect of real flight data versus resampled data on model detection results. At the same spatial resolution, identification precision on real flight data was lower than on resampled data, and detection precision decreased as the spatial resolution became coarser. To keep AP@0.5 IoU above 0.8, the image spatial resolution must be no coarser than 2.1 cm. The final model, trained on 2019 data with a spatial resolution of 0.8 cm, achieved an AP@0.5 IoU of 0.887; the average accuracy of emergence-rate monitoring was 98.87% on 2019 data, and 95.70% and 98.77% when the model was transferred to 2020 and 2021 data, respectively.

Significance: This work can quickly and effectively extract maize seedlings and provide accurate seedling information, supporting timely replanting and subsequent seed selection.
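The workflow described above centers on fine-tuning Mask R-CNN for a single "maize seedling" class and then deriving emergence statistics from the detections. The following is a minimal, assumption-based sketch of that kind of setup using torchvision's off-the-shelf Mask R-CNN (torchvision >= 0.13); the class count, dummy input, and emergence-rate definition are illustrative assumptions and are not taken from the paper's code.

# Minimal sketch (assumptions noted): single-class Mask R-CNN fine-tuning setup
# in the spirit of the paper's seedling detector; not the authors' actual code.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor


def build_seedling_maskrcnn(num_classes: int = 2):  # background + "maize seedling"
    # Start from a COCO-pretrained Mask R-CNN with a ResNet-50 FPN backbone.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box head so it predicts only the seedling class (plus background).
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask head for the same number of classes.
    in_channels_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels_mask, 256, num_classes)
    return model


def emergence_rate(detected_seedlings: int, sown_seeds: int) -> float:
    # Emergence rate as detected seedlings over seeds sown in the surveyed plot
    # (an assumed definition for illustration).
    return detected_seedlings / sown_seeds


if __name__ == "__main__":
    model = build_seedling_maskrcnn()
    model.eval()
    # A random RGB tile stands in for a UAV image patch.
    with torch.no_grad():
        predictions = model([torch.rand(3, 512, 512)])
    print(len(predictions[0]["boxes"]), "candidate seedlings")
    print(f"emergence rate: {emergence_rate(188, 190):.2%}")

In an actual pipeline, such a model would be trained on the semi-automatically labeled seedling masks, and its per-plot detections aggregated into emergence-rate, coverage, and uniformity maps across the field.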
Pages: 15
Related Papers
46 records in total
  • [31] Transferability of the Deep Learning Mask R-CNN Model for Automated Mapping of Ice-Wedge Polygons in High-Resolution Satellite and UAV Images
    Zhang, Weixing
    Liljedahl, Anna K.
    Kanevskiy, Mikhail
    Epstein, Howard E.
    Jones, Benjamin M.
    Jorgenson, M. Torre
    Kent, Kelcy
    REMOTE SENSING, 2020, 12 (07)
  • [32] Damaged Building Extraction Using Modified Mask R-CNN Model Using Post-Event Aerial Images of the 2016 Kumamoto Earthquake
    Zhan, Yihao
    Liu, Wen
    Maruyama, Yoshihisa
    REMOTE SENSING, 2022, 14 (04)
  • [33] Automatic Teeth Recognition Method from Dental Panoramic Images Using Faster R-CNN and Prior Knowledge Model
    Motoki, Kota
    Mahdi, Fahad Parvez
    Yagi, Naomi
    Nii, Manabu
    Kobashi, Syoji
    2020 JOINT 11TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS AND 21ST INTERNATIONAL SYMPOSIUM ON ADVANCED INTELLIGENT SYSTEMS (SCIS-ISIS), 2020, : 355 - 359
  • [34] Research on multi dimensional feature extraction and recognition of industrial and mining solid waste images based on mask R-CNN and graph convolutional networks
    Wang, Shuqin
    Cheng, Na
    Hu, Yan
    DISCOVER APPLIED SCIENCES, 2025, 7 (04)
  • [35] Individual Tree Species Identification and Crown Parameters Extraction Based on Mask R-CNN: Assessing the Applicability of Unmanned Aerial Vehicle Optical Images
    Yao, Zongqi
    Chai, Guoqi
    Lei, Lingting
    Jia, Xiang
    Zhang, Xiaoli
    REMOTE SENSING, 2023, 15 (21)
  • [36] Semi-Automatic 3D City Model Generation from Large-Format Aerial Images
    Buyukdemircioglu, Mehmet
    Kocaman, Sultan
    Isikdag, Umit
    ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION, 2018, 7 (09)
  • [37] The Combined Use of UAV-Based RGB and DEM Images for the Detection and Delineation of Orange Tree Crowns with Mask R-CNN: An Approach of Labeling and Unified Framework
    Lucena, Felipe
    Breunig, Fabio Marcelo
    Kux, Hermann
    FUTURE INTERNET, 2022, 14 (10)
  • [38] Improved Mask R-CNN for Rural Building Roof Type Recognition from UAV High-Resolution Images: A Case Study in Hunan Province, China
    Wang, Yanjun
    Li, Shaochun
    Teng, Fei
    Lin, Yunhao
    Wang, Mengjie
    Cai, Hengfan
    REMOTE SENSING, 2022, 14 (02)
  • [39] Skin lesion segmentation from dermoscopic images by using Mask R-CNN, Retina-Deeplab, and graph-based methods
    Bagheri, Fatemeh
    Tarokh, Mohammad Jafar
    Ziaratban, Majid
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2021, 67
  • [40] SEMI-AUTOMATIC ROAD NETWORK EXTRACTION FROM DIGITAL IMAGES USING OBJECT-BASED CLASSIFICATION AND MORPHOLOGICAL OPERATORS
    Nunes, Darlan Miranda
    Medeiros, Nilcilene das Gracas
    dos Santos, Afonso de Paula
    BOLETIM DE CIENCIAS GEODESICAS, 2018, 24 (04): : 485 - 502