A High Invariance Motion Representation for Skeleton-Based Action Recognition

Cited by: 2
Authors
Guo, Songrui [1 ]
Pan, Huawei [1 ]
Tan, Guanghua [1 ]
Chen, Lin [1 ]
Gao, Chunming [1 ]
Affiliations
[1] Hunan Univ, Coll Informat Sci & Engn, Changsha 410000, Hunan, Peoples R China
Keywords
Skeletal representation; relative geometry; orthogonal group; multi-value; computation cost; JOINTS;
DOI
10.1142/S021800141650018X
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human action recognition is an important research topic in many fields, including human-computer interaction, computer vision and crime analysis. In recent years, relative geometry features have been widely applied to describe the relative motion between body parts. They bring many benefits to action recognition, such as clear description and abundant features, but their obvious disadvantage is that the extracted features depend heavily on the local coordinate system, and it is difficult to find a bijection between the relative geometry and the skeleton motion. To overcome this problem, many previous methods use the relative rotation and translation between all skeleton pairs to increase robustness. In this paper we present a new motion representation method that models the relative geometry with the aid of the special orthogonal group SO(3). We also prove that this representation establishes a bijection between the relative geometry and the motion of skeleton pairs. With the proposed representation, the computation cost of action recognition is reduced from two-way relative motion (motion from A to B and from B to A) to one-way relative motion (motion from A to B or from B to A) for each skeleton pair; that is, the permutation problem P(n,2) is simplified into the combination problem C(n,2). Finally, experimental results on three motion datasets show that the proposed method outperforms existing skeleton-based action recognition methods.
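The abstract's computational claim can be made concrete with a small numerical sketch. The Python snippet below is an illustration under our own assumptions, not the authors' code; the helper rotation_between and the toy bones data are hypothetical. It shows why modeling pairwise relative motion in SO(3) lets one direction per skeleton pair suffice: the rotation from part B to part A is the transpose (inverse) of the rotation from A to B, so the n(n-1) ordered pairs P(n,2) collapse to the n(n-1)/2 unordered pairs C(n,2).

    # Minimal sketch (not the authors' code): one-way relative motion in SO(3).
    from itertools import combinations, permutations
    import numpy as np

    def rotation_between(u, v):
        """Rotation matrix in SO(3) mapping unit vector u onto unit vector v
        (Rodrigues' formula; assumes u and v are not anti-parallel)."""
        u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
        w = np.cross(u, v)                       # rotation axis (unnormalized)
        c = float(np.dot(u, v))                  # cosine of the rotation angle
        K = np.array([[0, -w[2], w[1]],
                      [w[2], 0, -w[0]],
                      [-w[1], w[0], 0]])         # skew-symmetric cross-product matrix
        return np.eye(3) + K + K @ K / (1.0 + c)

    # Toy "skeleton": a few bones given as unit direction vectors (hypothetical data).
    bones = [np.array([1.0, 0.0, 0.0]),
             np.array([0.0, 1.0, 0.0]),
             np.array([0.0, 0.0, 1.0]),
             np.array([1.0, 1.0, 0.0]) / np.sqrt(2)]

    # One-way relative motion: one SO(3) element per unordered pair, i.e. C(n,2) items.
    one_way = {(i, j): rotation_between(bones[i], bones[j])
               for i, j in combinations(range(len(bones)), 2)}

    # The reverse direction needs no extra computation: R_ji equals R_ij transposed.
    for (i, j), R in one_way.items():
        assert np.allclose(rotation_between(bones[j], bones[i]), R.T)

    print("ordered pairs  P(n,2):", len(list(permutations(range(len(bones)), 2))))
    print("unordered pairs C(n,2):", len(one_way))

For the four toy bones the script reports 12 ordered pairs versus 6 unordered pairs, i.e. the halving of pairwise computations that the abstract attributes to the SO(3)-based representation.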
Pages: 19
Related Papers
50 items in total
  • [21] LEARNING EXPLICIT SHAPE AND MOTION EVOLUTION MAPS FOR SKELETON-BASED HUMAN ACTION RECOGNITION
    Liu, Hong
    Tu, Juanhui
    Liu, Mengyuan
    Ding, Runwei
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 1333 - 1337
  • [22] A Novel Skeleton Spatial Pyramid Model for Skeleton-based Action Recognition
    Li, Yanshan
    Guo, Tianyu
    Xia, Rongjie
    Liu, Xing
    [J]. 2019 IEEE 4TH INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING (ICSIP 2019), 2019, : 16 - 20
  • [23] Fully Attentional Network for Skeleton-Based Action Recognition
    Liu, Caifeng
    Zhou, Hongcheng
    [J]. IEEE ACCESS, 2023, 11 : 20478 - 20485
  • [24] Insight on Attention Modules for Skeleton-Based Action Recognition
    Jiang, Quanyan
    Wu, Xiaojun
    Kittler, Josef
    [J]. PATTERN RECOGNITION AND COMPUTER VISION, PT I, 2021, 13019 : 242 - 255
  • [25] Skeleton-based action recognition with JRR-GCN
    Ye, Fanfan
    Tang, Huiming
    [J]. ELECTRONICS LETTERS, 2019, 55 (17) : 933 - 935
  • [26] Research Progress in Skeleton-Based Human Action Recognition
    Liu B.
    Zhou S.
    Dong J.
    Xie M.
    Zhou S.
    Zheng T.
    Zhang S.
    Ye X.
    Wang X.
    [J]. Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2023, 35 (09): : 1299 - 1322
  • [27] Profile HMMs for skeleton-based human action recognition
    Ding, Wenwen
    Liu, Kai
    Fu, Xujia
    Cheng, Fei
    [J]. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2016, 42 : 109 - 119
  • [28] Skeleton-based action recognition with extreme learning machines
    Chen, Xi
    Koskela, Markus
    [J]. NEUROCOMPUTING, 2015, 149 : 387 - 396
  • [29] Temporal Extension Module for Skeleton-Based Action Recognition
    Obinata, Yuya
    Yamamoto, Takuma
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 534 - 540
  • [30] Adversarial Attack on Skeleton-Based Human Action Recognition
    Liu, Jian
    Akhtar, Naveed
    Mian, Ajmal
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (04) : 1609 - 1622