Detection and identification of European woodpeckers with deep convolutional neural networks

Cited by: 23
Authors
Florentin, Juliette [1 ]
Dutoit, Thierry [2 ]
Verlinden, Olivier [1 ]
Affiliations
[1] Univ Mons, Theoret Mech Dynam & Vibrat, Pl Parc 20, Mons, Belgium
[2] Univ Mons, Informat Signal & Artificial Intelligence, Pl Parc 20, Mons, Belgium
Keywords
Bird call detection; Bird sound classification; Deep convolutional neural networks; Drumming; Ecoacoustics; Woodpecker calls; Woodpeckers; AUDIO RECORDINGS;
DOI
10.1016/j.ecoinf.2019.101023
CLC number
Q14 [Ecology (bioecology)]
Discipline codes
071012; 0713
Abstract
Every spring, European forest soundscapes fill with the drums and calls of woodpeckers as they establish territories and pair up. Each drum or call is species-specific and easily picked out by a trained ear. In this study, we worked toward automating this process and thus toward making continuous acoustic monitoring of woodpeckers practical. We recorded from March to May, successively in Belgium, Luxembourg and France, collecting hundreds of gigabytes of data. We discarded 50-80% of these recordings using the Acoustic Complexity Index (ACI). Then, for both the detection of the target signals in the audio stream and the identification of the different species, we applied transfer learning from computer vision to audio analysis: sounds were transformed into images via spectrograms, and publicly available deep image networks (e.g. Inception) were retrained to work with such data. The visual patterns produced by drums (vertical lines) and call syllables (hats, straight lines, waves, etc.) in spectrograms are characteristic and allow identification of the signals. We retrained using data from Xeno-Canto, Tierstimmen and a private collection. In the subsequent analysis of the field recordings, the repurposed networks gave outstanding results for the detection of drums (0.2-9.9% false positives, or, for the toughest dataset, a reduction from 28,601 images to 1,000 left for manual review) and for the detection and identification of calls (73.5-100.0% accuracy; in the toughest case, a reduction from 643,901 images to 14,667). However, they performed less well for the identification of drums than a simpler method using handcrafted features and a k-Nearest Neighbor (k-NN) classifier. The species character of a drum lies not in shapes but in temporal patterns: speed, acceleration, number of strikes and duration.
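The ACI-based pre-filtering stage mentioned above can be sketched in a few lines. This is a minimal illustration assuming the common spectrogram-based formulation of the index (after Pieretti et al.); the sample rate, FFT size, and toy signals below are illustrative, not the study's actual settings.

```python
# Minimal sketch of ACI pre-filtering (assumed formulation after Pieretti et al.):
# for each frequency bin, sum the absolute intensity differences between adjacent
# spectrogram frames and normalize by the bin's total intensity; then sum over bins.
# Stationary sounds score low; impulsive, variable sounds (e.g. drums) score high.
import numpy as np
from scipy import signal

def acoustic_complexity_index(samples, fs, nperseg=512):
    _, _, sxx = signal.spectrogram(samples, fs=fs, nperseg=nperseg)
    diffs = np.abs(np.diff(sxx, axis=1)).sum(axis=1)  # per-bin temporal variability
    totals = sxx.sum(axis=1) + 1e-12                  # avoid division by zero
    return float((diffs / totals).sum())

fs = 22050  # illustrative sample rate
t = np.arange(fs) / fs
steady_tone = np.sin(2 * np.pi * 2000 * t)  # stationary signal: low ACI
drum_like = np.zeros(fs)
drum_like[::2000] = 1.0                     # impulsive strikes: high ACI
print(f"tone ACI: {acoustic_complexity_index(steady_tone, fs):.1f}, "
      f"drum ACI: {acoustic_complexity_index(drum_like, fs):.1f}")
```

A recording segment would then be kept for further analysis only if its ACI exceeds a threshold tuned on the data, which is how a 50-80% reduction of the raw recordings could be achieved.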
These features are secondary information in spectrograms, and image networks that have learned invariance to object size tend to disregard them. At locations with abundant drumming, accuracy was 83.0% for Picus canus (93.1% for k-NN) and 36.1% for Dryocopus martius (81.5% for k-NN). For the three field locations we produced timelines of the woodpecker activity encountered (6 species, 11 signals).
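The handcrafted-feature alternative for drum identification can be sketched as follows. This is a toy illustration: the exact feature definitions (strike count, duration, mean speed, interval trend as "acceleration"), the synthetic inter-strike intervals, and the plain k-NN implementation are assumptions for demonstration, not the authors' method, and the interval values do not reflect real species.

```python
# Toy sketch: identify a drum from temporal features with k-Nearest Neighbors.
# Feature names follow the abstract (strikes, duration, speed, acceleration);
# definitions and species parameters here are illustrative assumptions.
import numpy as np

def drum_features(strike_times):
    """Features of one drum roll, given strike timestamps in seconds."""
    t = np.asarray(strike_times, dtype=float)
    intervals = np.diff(t)
    duration = t[-1] - t[0]
    speed = 1.0 / intervals.mean()                                  # strikes/s
    accel = np.polyfit(np.arange(intervals.size), intervals, 1)[0]  # interval trend
    return np.array([t.size, duration, speed, accel])

def knn_predict(train_x, train_y, x, k=3):
    """Plain k-NN with z-scored features and majority vote."""
    mu = train_x.mean(axis=0)
    sd = train_x.std(axis=0) + 1e-12
    dists = np.linalg.norm((train_x - mu) / sd - (x - mu) / sd, axis=1)
    votes = np.asarray(train_y)[np.argsort(dists)[:k]]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

def toy_drum(n_strikes, start_interval, slope, rng):
    """Synthesize strike timestamps with a linear interval trend plus jitter."""
    intervals = start_interval + slope * np.arange(n_strikes - 1)
    intervals = intervals + rng.normal(0.0, 0.001, n_strikes - 1)
    return np.concatenate([[0.0], np.cumsum(intervals)])

rng = np.random.default_rng(0)
train_x, train_y = [], []
for _ in range(5):  # toy interval values, not real species parameters
    train_x.append(drum_features(toy_drum(20, 0.040, 0.0004, rng)))
    train_y.append("Picus canus")
    train_x.append(drum_features(toy_drum(30, 0.060, 0.0001, rng)))
    train_y.append("Dryocopus martius")
train_x = np.vstack(train_x)

query = drum_features(toy_drum(20, 0.040, 0.0004, rng))
print(knn_predict(train_x, train_y, query))
```

Because these features encode rhythm directly rather than as an image, a classifier like this is not hampered by the size invariance that deep image networks learn, which is consistent with k-NN outperforming them on drum identification.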
Pages: 16