An approach to rapid processing of camera trap images with minimal human input

Cited by: 7
Authors
Duggan, Matthew T. [1]
Groleau, Melissa F. [1]
Shealy, Ethan P. [1]
Self, Lillian S. [1]
Utter, Taylor E. [1]
Waller, Matthew M. [1]
Hall, Bryan C. [2]
Stone, Chris G. [2]
Anderson, Layne L. [2]
Mousseau, Timothy A. [1]
Affiliations
[1] Univ South Carolina UofSC, Dept Biol Sci, Columbia, SC 29208 USA
[2] South Carolina Army Natl Guard Environm Off, Eastover, SC USA
Source
ECOLOGY AND EVOLUTION | 2021, Vol. 11, Issue 17
Keywords
camera trap; deep learning; neural network; transfer learning; wildlife ecology
DOI
10.1002/ece3.7970
CLC number
Q14 [Ecology (Bioecology)]
Subject classification codes
071012; 0713
Abstract
Camera traps have become an extensively utilized tool in ecological research, but manually processing the images produced by a network of camera traps rapidly becomes overwhelming, even for small studies. We used transfer learning to create convolutional neural network (CNN) models for identification and classification. Using a small dataset averaging 275 labeled images per species class, the model was able to distinguish between species and remove false triggers. We trained the model to detect 17 object classes with individual species identification, reaching an accuracy of up to 92% and an average F1 score of 85%. Previous studies have suggested that thousands of images of each object class are needed to reach results comparable to those achieved by human observers; we show, however, that such accuracy can be achieved with far fewer images. With transfer learning, even a small, ongoing camera trap study can successfully create a deep learning model, and a generalizable model produced from an unbalanced class set can be used to extract trap events for later confirmation by human processors.
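
The workflow the abstract describes, fine-tuning a pretrained CNN on a few hundred labeled images per class, can be sketched as follows. This is a minimal illustration only: the PyTorch/torchvision stack, the ResNet-50 backbone, the directory layout, and the hyperparameters are assumptions for the example, not the authors' published pipeline.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

NUM_CLASSES = 17  # species and false-trigger classes, as in the abstract

# Standard ImageNet preprocessing so inputs match the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory of labeled camera-trap images, one folder per class.
train_set = datasets.ImageFolder("camera_trap/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: start from ImageNet weights, freeze the feature
# extractor, and replace the classification head with a trainable layer.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a few epochs often suffice when only the head trains
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

For the reported metric, the per-class F1 score is the harmonic mean of precision and recall, 2PR / (P + R); given predictions on a held-out labeled set, a class-averaged value can be computed, for example, with scikit-learn's f1_score(y_true, y_pred, average="macro").
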
Pages: 12051-12063
Page count: 13