Marker-Less Motion Capture of Insect Locomotion With Deep Neural Networks Pre-trained on Synthetic Videos

Cited by: 3
Authors
Arent, Ilja [1 ]
Schmidt, Florian P. [1 ,2 ]
Botsch, Mario [2 ,3 ]
Duerr, Volker [1 ,2 ]
Affiliations
[1] Bielefeld Univ, Biol Cybernet, Fac Biol, Bielefeld, Germany
[2] Bielefeld Univ, Ctr Cognit Interact Technol, Bielefeld, Germany
[3] TU Dortmund Univ, Comp Graph, Dortmund, Germany
Keywords
insect locomotion; machine learning; behavioral analysis; marker-less motion capture; deep neural network; motion tracking; MOVEMENT; WALKING; BODY
DOI
10.3389/fnbeh.2021.637806
Chinese Library Classification (CLC)
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology];
Discipline classification codes
03 ; 0303 ; 030303 ; 04 ; 0402 ;
Abstract
Motion capture of unrestrained moving animals is a major analytic tool in neuroethology and behavioral physiology. At present, several motion capture methodologies have been developed, all of which have particular limitations regarding experimental application. Whereas marker-based motion capture systems are very robust and easily adjusted to suit different setups, tracked species, or body parts, they cannot be applied in experimental situations where markers obstruct the natural behavior (e.g., when tracking delicate, elastic, and/or sensitive body structures). On the other hand, marker-less motion capture systems typically require setup- and animal-specific adjustments, for example by means of tailored image processing, decision heuristics, and/or machine learning of specific sample data. Among the latter, deep-learning approaches have become very popular because of their applicability to virtually any sample of video data. Nevertheless, concise evaluation of their training requirements has rarely been done, particularly with regard to the transfer of trained networks from one application to another. To address this issue, the present study uses insect locomotion as a showcase example for systematic evaluation of variation and augmentation of the training data. For that, we use artificially generated video sequences with known combinations of observed, real animal postures and randomized body position, orientation, and size. Moreover, we evaluate the generalization ability of networks that have been pre-trained on synthetic videos to video recordings of real walking insects, and estimate the benefit in terms of reduced requirement for manual annotation. We show that tracking performance is only slightly affected by scaling factors ranging from 0.5 to 1.5. As expected from convolutional networks, the translation of the animal has no effect.
In contrast, we show that sufficient variation of rotation in the training data is essential for performance, and we make concise suggestions about how much variation is required. Our results on transfer from synthetic to real videos show that pre-training reduces the amount of necessary manual annotation by about 50%.
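The augmentation regime the abstract describes (scale factors between 0.5 and 1.5, randomized rotation, and translation) can be sketched for 2D keypoint data as follows. This is a minimal illustrative sketch, not the authors' implementation; the function name, the translation range, and the use of NumPy keypoint arrays are assumptions made for the example:

```python
import numpy as np

def augment_pose(keypoints, rng, scale_range=(0.5, 1.5), max_shift=50.0):
    """Apply a random similarity transform to 2D keypoints (N x 2).

    Illustrative sketch of the augmentation regime described in the
    abstract: scale drawn from [0.5, 1.5], arbitrary in-plane rotation,
    and a random translation (which convolutional networks tolerate).
    """
    scale = rng.uniform(*scale_range)                 # random size factor
    theta = rng.uniform(-np.pi, np.pi)                # random orientation
    shift = rng.uniform(-max_shift, max_shift, 2)     # random position
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    centroid = keypoints.mean(axis=0)
    # Rotate and scale about the body centroid, then translate.
    return (keypoints - centroid) @ rot.T * scale + centroid + shift

rng = np.random.default_rng(0)
pose = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0]])  # toy 3-point pose
augmented = augment_pose(pose, rng)
```

Because rotation and translation preserve inter-point distances, the only change in body proportions comes from the scale factor, which stays within the 0.5 to 1.5 range studied in the paper.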
Pages: 12