Multi-animal 3D social pose estimation, identification and behaviour embedding with a few-shot learning framework

Times Cited: 1
Authors
Han, Yaning [1 ,2 ,3 ,4 ]
Chen, Ke [1 ,2 ,3 ,4 ]
Wang, Yunke [1 ,3 ,4 ]
Liu, Wenhao [1 ,3 ,4 ,5 ]
Wang, Zhouwei [1 ,2 ,3 ,4 ]
Wang, Xiaojing [1 ,3 ,4 ,6 ]
Han, Chuanliang [1 ,3 ,4 ]
Liao, Jiahui [1 ,3 ,4 ,7 ]
Huang, Kang [1 ,2 ,3 ,4 ]
Cai, Shengyuan [1 ,3 ,4 ]
Huang, Yiting [1 ,3 ,4 ]
Wang, Nan [1 ,2 ,3 ,4 ]
Li, Jinxiu [8 ]
Song, Yangwangzi [8 ]
Li, Jing [9 ]
Wang, Guo-Dong [8 ]
Wang, Liping [1 ,3 ,4 ]
Zhang, Yaping [8 ]
Wei, Pengfei [1 ,3 ,4 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Key Lab Neuropsychiat Modulat, Shenzhen, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Chinese Acad Sci, Brain Cognit & Brain Dis Inst, Shenzhen Inst Adv Technol, CAS Key Lab Brain Connectome & Manipulat, Shenzhen, Peoples R China
[4] Chinese Acad Sci, Brain Cognit & Brain Dis Inst, Shenzhen Inst Adv Technol, Guangdong Prov Key Lab Brain Connectome & Behav, Shenzhen, Peoples R China
[5] City Univ Hong Kong, Dept Neurosci, Kowloon Tong, Hong Kong, Peoples R China
[6] China Univ Geosci, Dept Phys Educ, Beijing, Peoples R China
[7] Southern Med Univ, Sch Biomed Engn, Guangzhou, Peoples R China
[8] Chinese Acad Sci, Kunming Inst Zool, State Key Lab Genet Resources & Evolut, Kunming, Peoples R China
[9] Chinese Minist Publ Secur, Kunming Police Dog Base, Kunming, Peoples R China
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
TRACKING; MICE;
DOI
10.1038/s42256-023-00776-5
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The quantification of animal social behaviour is an essential step towards revealing brain functions and psychiatric disorders that emerge during social interaction. While deep learning-based approaches have enabled precise pose estimation, identification and behavioural classification of multiple animals, their application is challenged by the lack of well-annotated datasets. Here we present a computational framework, the Social Behavior Atlas (SBeA), designed to overcome the problem posed by limited annotated datasets. SBeA uses a much smaller number of labelled frames for multi-animal three-dimensional pose estimation, achieves label-free identity recognition and successfully applies unsupervised dynamic learning to social behaviour classification. We validate SBeA by uncovering previously overlooked social behaviour phenotypes in autism spectrum disorder knockout mice. Our results also demonstrate that SBeA achieves high performance across various species on existing customized datasets. These findings highlight the potential of SBeA for quantifying subtle social behaviours in neuroscience and ecology.

Multi-animal behaviour quantification is pivotal for deciphering animal social behaviours and has broad applications in neuroscience and ecology. Han and colleagues develop a few-shot learning framework for multi-animal 3D pose estimation, identity recognition and social behaviour classification.
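As a rough illustration of the unsupervised behaviour-embedding idea described in the abstract (not the actual SBeA pipeline, which is detailed in the paper), the sketch below windows 3D pose trajectories of two interacting animals, projects the windowed features into a low-dimensional space and clusters them into putative behaviour motifs. The function name embed_social_poses, the window length, the PCA and k-means choices and the cluster count are hypothetical assumptions made for illustration only.

```python
# Minimal sketch (NOT the SBeA implementation): embed windowed 3D pose
# trajectories of interacting animals and cluster them into putative
# social-behaviour motifs. All parameters below are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def embed_social_poses(poses, window=30, n_components=10, n_clusters=8):
    """poses: array of shape (frames, animals, keypoints, 3) with 3D coordinates."""
    frames = poses.shape[0]
    segments = []
    for start in range(0, frames - window + 1, window):
        seg = poses[start:start + window]          # (window, animals, keypoints, 3)
        seg = seg - seg[:, :, :1, :]               # centre each animal on its first keypoint
        segments.append(seg.reshape(-1))           # flatten each window to one feature vector
    X = np.stack(segments)

    low_dim = PCA(n_components=n_components).fit_transform(X)   # unsupervised embedding
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(low_dim)
    return low_dim, labels

# Example with synthetic data: 2 animals, 16 keypoints, 3,000 frames.
rng = np.random.default_rng(0)
poses = rng.normal(size=(3000, 2, 16, 3))
embedding, motif_labels = embed_social_poses(poses)
print(embedding.shape, np.bincount(motif_labels))
```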
Pages: 48-61
Page Count: 20
Related Papers
50 records in total
  • [1] Lauer, Jessy; Zhou, Mu; Ye, Shaokai; Menegas, William; Schneider, Steffen; Nath, Tanmay; Rahman, Mohammed Mostafizur; Di Santo, Valentina; Soberanes, Daniel; Feng, Guoping; Murthy, Venkatesh N.; Lauder, George; Dulac, Catherine; Mathis, Mackenzie Weygandt; Mathis, Alexander. Multi-animal pose estimation, identification and tracking with DeepLabCut. Nature Methods, 2022, 19(4): 496-504.
  • [2] Li, Wanyi; Sun, Jia; Luo, Yongkang; Wang, Peng. 6D Object Pose Estimation using Few-Shot Instance Segmentation and 3D Matching. 2019 IEEE Symposium Series on Computational Intelligence (SSCI 2019), 2019: 1071-1077.
  • [3] Zhou, Fan; Qi, Xiuxiu; Zhang, Kunpeng; Trajcevski, Goce; Zhong, Ting. MetaGeo: A General Framework for Social User Geolocation Identification With Few-Shot Learning. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(11): 8950-8964.
  • [4] Nie, Jie; Xu, Ning; Zhou, Ming; Yan, Ge; Wei, Zhiqiang. 3D Model Classification Based on Few-Shot Learning. Neurocomputing, 2020, 398: 539-546.
  • [5] Wang, Angtian; Mei, Shenxiao; Yuille, Alan; Kortylewski, Adam. Neural View Synthesis and Matching for Semi-Supervised Few-Shot Learning of 3D Pose. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.
  • [6] Chen, Juan; Liu, Yuchuan; Liu, Yicong; Wang, Shiying; Chen, Siyuan. A Few-Shot Learning Framework for Air Vehicle Detection by Similarity Embedding. Tenth International Conference on Graphics and Image Processing (ICGIP 2018), 2019, 11069.
  • [7] Ma, Bingtao; Cong, Yang. Angular Penalty for Few-Shot Incremental 3D Object Learning. 2023 International Joint Conference on Neural Networks (IJCNN), 2023.
  • [8] Cheng, Ta-Ying; Yang, Hsuan-Ru; Trigoni, Niki; Chen, Hwann-Tzong; Liu, Tyng-Luh. Pose Adaptive Dual Mixup for Few-Shot Single-View 3D Reconstruction. Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI 2022), 2022: 427-435.