Spatio-Temporal Difference Descriptor for Skeleton-Based Action Recognition

Citations: 0
Authors
Ding, Chongyang [1 ]
Liu, Kai [1 ]
Korhonen, Jari [2 ]
Belyaev, Evgeny [3 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian, Peoples R China
[2] Shenzhen Univ, Sch Comp Sci & Software Engn, Shenzhen, Peoples R China
[3] ITMO Univ, Int Lab Comp Technol, St Petersburg, Russia
Keywords
DOI: Not available
CLC Number: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
In skeletal representation, intra-frame differences between body joints, as well as inter-frame dynamics between body skeletons, contain discriminative information for action recognition. Conventional methods for modeling human skeleton sequences generally depend on motion trajectories and body-joint dependency information, and thus lack the ability to capture the inherent differences within human skeletons. In this paper, we propose a spatio-temporal difference descriptor based on a directional convolution architecture, which enables us to learn the spatio-temporal differences and contextual dependencies between different body joints simultaneously. The overall model is built on a deep symmetric positive definite (SPD) metric learning architecture designed to learn discriminative manifold features through a carefully designed non-linear mapping operation. Experiments on several action datasets show that our proposed method achieves up to 3% accuracy improvement over state-of-the-art methods.
Pages: 1227-1235
Number of pages: 9
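The abstract above outlines three ingredients: intra-frame differences between body joints, inter-frame dynamics between consecutive skeletons, and an SPD (covariance-like) representation on which manifold features are learned. The following Python/NumPy sketch is a minimal illustration of that general pipeline under simplifying assumptions; the reference-joint spatial differencing, the plain frame-to-frame temporal differencing, and the regularized covariance used as the SPD descriptor are illustrative stand-ins, not the authors' directional-convolution architecture or their learned non-linear SPD mapping.

import numpy as np

def spatial_differences(seq, ref_joint=0):
    # Intra-frame differences: every joint relative to a reference joint
    # (a simplification of full pairwise joint differences).
    # seq has shape (T, J, C); returns shape (T, J, C).
    return seq - seq[:, ref_joint:ref_joint + 1, :]

def temporal_differences(seq):
    # Inter-frame dynamics: joint displacement between consecutive frames.
    # seq has shape (T, J, C); returns shape (T-1, J, C).
    return seq[1:] - seq[:-1]

def spd_descriptor(per_frame_features, eps=1e-4):
    # Covariance-based SPD descriptor over per-frame feature vectors.
    # per_frame_features: (T, D) matrix. The eps * I term keeps the result
    # strictly positive definite even when T < D (rank-deficient covariance).
    x = per_frame_features - per_frame_features.mean(axis=0, keepdims=True)
    cov = x.T @ x / max(x.shape[0] - 1, 1)
    return cov + eps * np.eye(cov.shape[1])

if __name__ == "__main__":
    T, J, C = 20, 25, 3                        # frames, joints, coordinates
    seq = np.random.randn(T, J, C)             # stand-in skeleton sequence

    spat = spatial_differences(seq)            # (20, 25, 3)
    temp = temporal_differences(seq)           # (19, 25, 3)

    # One SPD matrix per difference stream.
    spd_spatial = spd_descriptor(spat.reshape(T, -1))        # (75, 75)
    spd_temporal = spd_descriptor(temp.reshape(T - 1, -1))   # (75, 75)

    # Both matrices are symmetric with strictly positive eigenvalues.
    for m in (spd_spatial, spd_temporal):
        assert np.allclose(m, m.T)
        assert np.linalg.eigvalsh(m).min() > 0

In the paper's model, such SPD matrices would not be used directly as fixed descriptors: the abstract states that they are processed by a deep SPD metric learning architecture with a non-linear mapping operation to obtain discriminative manifold features.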
Related Papers (50 records in total)
  • [31] Cao, Y.; Liu, C.; Huang, Z.; Sheng, Y. Skeleton-based action recognition based on spatio-temporal adaptive graph convolutional neural-network. Journal of Huazhong University of Science and Technology (Natural Science Edition), 2020, 48(11): 5-10.
  • [32] Li, Chaolong; Cui, Zhen; Zheng, Wenming; Xu, Chunyan; Yang, Jian. Spatio-Temporal Graph Convolution for Skeleton Based Action Recognition. Thirty-Second AAAI Conference on Artificial Intelligence / Thirtieth Innovative Applications of Artificial Intelligence Conference / Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, 2018: 3482-3489.
  • [33] Ren, J.; Napoleon, R.; Andre, B.; Chris, S.; Liu, M.; Ma, J. Robust Skeleton-based Action Recognition through Hierarchical Aggregation of Local and Global Spatio-temporal Features. 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), 2018: 901-906.
  • [34] Yang, Jianyu; Zhu, Chen; Yuan, Junsong. Spatio-Temporal Multi-Scale Soft Quantization Learning for Skeleton-Based Human Action Recognition. 2019 IEEE International Conference on Multimedia and Expo (ICME), 2019: 1078-1083.
  • [35] He, Dongzhi; Xue, Yongle; Li, Yunyu; Sun, Zhijie; Xiao, Xingmei; Wang, Jin. Multi-scale spatio-temporal network for skeleton-based gait recognition. AI Communications, 2023, 36(4): 297-310.
  • [36] Barmpoutis, Panagiotis; Stathaki, Tania; Camarinopoulos, Stephanos. Skeleton-Based Human Action Recognition through Third-Order Tensor Representation and Spatio-Temporal Analysis. Inventions, 2019, 4(1).
  • [37] Hu, Guyue; Cui, Bo; Yu, Shan. Skeleton-Based Action Recognition with Synchronous Local and Non-Local Spatio-Temporal Learning and Frequency Attention. 2019 IEEE International Conference on Multimedia and Expo (ICME), 2019: 1216-1221.
  • [38] Naveenkumar, M.; Domnic, S. Ensemble Spatio-Temporal Distance Net for Skeleton Based Action Recognition. Scalable Computing: Practice and Experience, 2019, 20(3): 485-494.
  • [39] Li, Xin; Liao, Jun; Liu, Li. Recognizing Skeleton-Based Hand Gestures by a Spatio-Temporal Network. Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2021: Applied Data Science Track, Part IV, 2021, 12978: 151-167.
  • [40] Li, Lijun; Dai, Shuling. Action recognition with spatio-temporal augmented descriptor and fusion method. Multimedia Tools and Applications, 2017, 76(12): 13953-13969.