Marker-Less Motion Capture of Insect Locomotion With Deep Neural Networks Pre-trained on Synthetic Videos

Cited by: 3
Authors
Arent, Ilja [1]
Schmidt, Florian P. [1,2]
Botsch, Mario [2,3]
Duerr, Volker [1,2]
Affiliations
[1] Bielefeld Univ, Biol Cybernet, Fac Biol, Bielefeld, Germany
[2] Bielefeld Univ, Ctr Cognit Interact Technol, Bielefeld, Germany
[3] TU Dortmund Univ, Comp Graph, Dortmund, Germany
Source
Frontiers in Behavioral Neuroscience, 2021
Keywords
insect locomotion; machine learning; behavioral analysis; marker-less motion capture; deep neural network; motion tracking; MOVEMENT; WALKING; BODY;
DOI
10.3389/fnbeh.2021.637806
CLC classification
B84 [Psychology]; C [Social Sciences (General)]; Q98 [Anthropology]
Discipline codes
03; 0303; 030303; 04; 0402
Abstract
Motion capture of unrestrained moving animals is a major analytic tool in neuroethology and behavioral physiology. At present, several motion capture methodologies have been developed, all of which have particular limitations regarding experimental application. Whereas marker-based motion capture systems are very robust and easily adjusted to suit different setups, tracked species, or body parts, they cannot be applied in experimental situations where markers obstruct the natural behavior (e.g., when tracking delicate, elastic, and/or sensitive body structures). On the other hand, marker-less motion capture systems typically require setup- and animal-specific adjustments, for example by means of tailored image processing, decision heuristics, and/or machine learning of specific sample data. Among the latter, deep-learning approaches have become very popular because of their applicability to virtually any sample of video data. Nevertheless, concise evaluation of their training requirements has rarely been done, particularly with regard to the transfer of trained networks from one application to another. To address this issue, the present study uses insect locomotion as a showcase example for systematic evaluation of variation and augmentation of the training data. For that, we use artificially generated video sequences with known combinations of observed, real animal postures and randomized body position, orientation, and size. Moreover, we evaluate the generalization ability of networks that have been pre-trained on synthetic videos to video recordings of real walking insects, and estimate the benefit in terms of reduced requirement for manual annotation. We show that tracking performance is only slightly affected by scaling factors ranging from 0.5 to 1.5. As expected from convolutional networks, the translation of the animal has no effect. On the other hand, we show that sufficient variation of rotation in the training data is essential for performance, and make concise suggestions about how much variation is required. Our results on transfer from synthetic to real videos show that pre-training reduces the amount of necessary manual annotation by about 50%.
Pages: 12
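The abstract above describes training on synthetic frames with randomized body position, orientation, and size, and reports that scaling factors between 0.5 and 1.5 barely affect tracking while sufficient rotation variation is essential. As a rough illustration of that kind of augmentation, the following is a minimal sketch, not the authors' pipeline: the function name, the translation range, and the use of OpenCV/NumPy are assumptions. It applies one random scale in [0.5, 1.5], a random in-plane rotation, and a random translation jointly to a frame and its keypoint labels.

```python
# Minimal augmentation sketch (assumed implementation, not the published pipeline).
# frame: HxW(xC) image array; keypoints: (N, 2) array of (x, y) pixel coordinates.
import numpy as np
import cv2

def augment_frame(frame, keypoints, rng=None):
    """Apply one random scale/rotation/translation to an image and its keypoints."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = frame.shape[:2]

    # Randomized factors mirroring the ranges discussed in the abstract
    scale = rng.uniform(0.5, 1.5)                 # body-size variation
    angle = rng.uniform(0.0, 360.0)               # in-plane body orientation
    tx, ty = rng.uniform(-0.1, 0.1, 2) * (w, h)   # translation (illustrative range)

    # One affine transform applied identically to the image and its labels
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    M[:, 2] += (tx, ty)

    frame_aug = cv2.warpAffine(frame, M, (w, h), borderMode=cv2.BORDER_REPLICATE)
    kp_hom = np.hstack([keypoints, np.ones((len(keypoints), 1))])
    keypoints_aug = kp_hom @ M.T                  # transformed (x, y) labels

    return frame_aug, keypoints_aug
```

Because both the image and the keypoint annotations are warped by the same affine matrix, the synthetic training pairs stay consistent; repeating this per frame yields the kind of position, orientation, and size variation whose effect the study evaluates.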