A survey of public datasets for computer vision tasks in precision agriculture

Cited by: 182
Authors
Lu, Yuzhen [1 ]
Young, Sierra [1 ]
Affiliations
[1] North Carolina State Univ, Dept Biol & Agr Engn, Raleigh, NC 27695 USA
Funding
U.S. National Institute of Food and Agriculture;
Keywords
Dataset; Crop; Computer vision; Precision agriculture; Robotics; Data sharing; Images; WEED-CONTROL; SEMANTIC SEGMENTATION; APPLE DETECTION; SUGAR-BEET; CLASSIFICATION; LOCALIZATION; CROPS; IMAGES; ROBOTS;
DOI
10.1016/j.compag.2020.105760
CLC Number
S [Agricultural Sciences];
Discipline Code
09;
Abstract
Computer vision technologies have attracted significant interest in precision agriculture in recent years. At the core of robotics and artificial intelligence, computer vision enables various tasks in the crop production cycle, from planting to harvesting, to be performed automatically and efficiently. However, the scarcity of public image datasets remains a crucial bottleneck for fast prototyping and evaluation of computer vision and machine learning algorithms for these tasks. Since 2015, a number of image datasets have been established and made publicly available to alleviate this bottleneck, yet a dedicated survey of these datasets is still lacking. To fill this gap, this paper presents the first comprehensive, though not exhaustive, review of public image datasets collected under field conditions for precision agriculture: 15 datasets on weed control, 10 on fruit detection, and 9 on miscellaneous applications. We survey the main characteristics and applications of these datasets and discuss the key considerations for creating high-quality public image datasets. This survey will help the research community select suitable image datasets for algorithm development and identify where new image datasets are needed to support precision agriculture.
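As a minimal illustration of the workflow the abstract describes, the sketch below fine-tunes a pretrained classifier on a public field-image dataset for a crop/weed recognition task. It is a hedged example, not code from the surveyed paper: the directory path, the per-class folder layout, and all hyperparameters are assumptions for illustration, and any of the surveyed weed-control datasets would first need to be arranged into that layout.

    # Minimal sketch: fine-tune a small CNN on a public crop/weed image dataset.
    # Assumes images are organized as <root>/<class_name>/<image>.jpg, the
    # torchvision ImageFolder layout.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    DATA_ROOT = "data/weed_dataset"  # hypothetical path; point at your download

    # Standard ImageNet preprocessing so pretrained weights can be reused.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    dataset = datasets.ImageFolder(DATA_ROOT, transform=preprocess)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    # Replace the classifier head of a pretrained ResNet with one sized to the
    # number of classes found in the dataset (e.g., crop vs. weed species).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:  # a single pass, for illustration only
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

The same pattern extends to the detection and segmentation tasks covered in the survey by swapping in the corresponding model family and annotation format; only the classification case is shown here to keep the sketch short.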
Pages: 13
Related Papers
50 records in total
  • [31] Advancing precision agriculture with computer vision: A comparative study of YOLO models for weed and crop recognition
    Zoubek, Tomas
    Bumbalek, Roman
    Ufitikirezi, Jean de Dieu Marcel
    Strob, Miroslav
    Filip, Martin
    Spalek, Frantisek
    Hermanek, Ales
    Bartos, Petr
    CROP PROTECTION, 2025, 190
  • [32] Design of a computer vision system for a differential spraying operation in precision agriculture using Hebbian learning
    Pajares, G.
    Tellaeche, A.
    Burgos-Artizzu, X. -P.
    Ribeiro, A.
    IET COMPUTER VISION, 2007, 1 (3-4) : 93 - 99
  • [33] Synthetic data for computer vision in agriculture
    Afonso, Manya
    Giuffrida, Valerio
    FRONTIERS IN PLANT SCIENCE, 2023, 14
  • [34] Vision-Language Models for Vision Tasks: A Survey
    Zhang, Jingyi
    Huang, Jiaxing
    Jin, Sheng
    Lu, Shijian
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (08) : 5625 - 5644
  • [35] A systematic review on computer vision-based parking lot management applied on public datasets
    de Almeida, Paulo Ricardo Lisboa
    Alves, Jeovane Honorio
    Parpinelli, Rafael Stubs
    Barddal, Jean Paul
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 198
  • [36] A Framework for Tasks Allocation and Scheduling in Precision Agriculture Settings
    Santilli, Matteo
    Carpio, Renzo Fabrizio
    Gasparri, Andrea
    2021 20TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS (ICAR), 2021, : 996 - 1002
  • [37] Application of Graph Structures in Computer Vision Tasks
    Andriyanov, Nikita
    MATHEMATICS, 2022, 10 (21)
  • [38] Causal reasoning in typical computer vision tasks
    Zhang, Kexuan
    Sun, Qiyu
    Zhao, Chaoqiang
    Tang, Yang
    SCIENCE CHINA-TECHNOLOGICAL SCIENCES, 2024, 67 (01) : 105 - 120
  • [39] Computer Vision Onboard UAVs for Civilian Tasks
    Campoy, Pascual
    Correa, Juan F.
    Mondragón, Ivan
    Martínez, Carol
    Olivares, Miguel
    Mejías, Luis
    Artieda, Jorge
    JOURNAL OF INTELLIGENT AND ROBOTIC SYSTEMS, 2009, 54 : 105 - 135