Quantifying Facial Gestures Using Deep Learning in a New World Monkey

Cited by: 0
Authors
Carugati, Filippo [1 ]
Gorio, Dayanna Curagi [1 ]
De Gregorio, Chiara [1 ,2 ]
Valente, Daria [1 ,3 ]
Ferrario, Valeria [1 ,4 ]
Lefaux, Brice [5 ]
Friard, Olivier [1 ]
Gamba, Marco [1 ]
Affiliations
[1] Univ Torino, Dept Life Sci & Syst Biol, Turin, Italy
[2] Univ Warwick, Dept Psychol, Coventry, England
[3] Parco Nat Viva Garda Zool Pk, Bussolengo, Italy
[4] Chester Zoo, Chester, England
[5] Zoo Mulhouse, Mulhouse, France
Keywords
Author keywords: cotton-top tamarin; DeepLabCut; markerless pose estimation; primate face; Saguinus oedipus
Keywords Plus: SAGUINUS-OEDIPUS; VOCAL REPERTOIRE; EVOLUTION; TAMARIN; COMMUNICATION; EXPRESSIONS; VOCALIZATIONS; PRIMATES; DISPLAYS; BEHAVIOR
DOI
10.1002/ajp.70013
Chinese Library Classification
Q95 [Zoology]
Subject classification code
071002
Abstract
Facial gestures are a crucial component of primate multimodal communication. However, current methodologies for extracting facial data from video recordings are labor-intensive and prone to human subjectivity. Although automatic tools for this task are still in their infancy, deep learning techniques are revolutionizing animal behavior research. This study explores the distinctiveness of facial gestures in cotton-top tamarins, quantified using markerless pose estimation algorithms. From footage of captive individuals, we extracted and manually labeled frames to develop a model that recognizes a custom set of landmarks positioned on the face of the target species. The trained model predicted landmark positions, which we subsequently transformed into distance matrices representing the landmarks' spatial distribution within each frame. We employed three competitive machine learning classifiers to assess the ability to automatically discriminate facial configurations that co-occur with vocal emissions and are associated with different behavioral contexts. Initial analysis showed correct classification rates exceeding 80%, suggesting that voiced facial configurations are highly distinctive from unvoiced ones. Our findings also demonstrated varying context specificity of facial gestures, with the highest classification accuracy observed during yawning, social activity, and resting. This study highlights the potential of markerless pose estimation for advancing the study of primate multimodal communication, even in challenging species such as cotton-top tamarins. The ability to automatically distinguish facial gestures in different behavioral contexts represents a critical step in developing automated tools for extracting behavioral cues from raw video data.
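The abstract describes converting per-frame landmark coordinates into pairwise distance matrices that serve as classifier features. A minimal sketch of that step is shown below; the landmark count, coordinates, and function names are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def landmark_distance_matrix(landmarks):
    """Pairwise Euclidean distances between facial landmarks.

    landmarks: (n_landmarks, 2) array of (x, y) pixel coordinates,
    e.g. as predicted per frame by a pose-estimation model.
    Returns an (n_landmarks, n_landmarks) symmetric distance matrix.
    """
    diff = landmarks[:, None, :] - landmarks[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def distance_features(landmarks):
    """Flatten the upper triangle of the distance matrix into a
    per-frame feature vector suitable for a standard classifier."""
    d = landmark_distance_matrix(landmarks)
    iu = np.triu_indices(len(landmarks), k=1)
    return d[iu]

# Example: five hypothetical facial landmarks in one frame
frame = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 0.0], [3.0, 1.0], [1.0, 2.0]])
features = distance_features(frame)
print(features.shape)  # (10,) -- 5*4/2 unique pairwise distances
```

Feature vectors built this way (one per frame) could then be fed to any off-the-shelf classifier to separate voiced from unvoiced configurations or to discriminate behavioral contexts.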
Pages: 12