Mix Dimension in Poincare Geometry for 3D Skeleton-based Action Recognition

Cited by: 38
Authors
Peng, Wei [1 ]
Shi, Jingang [2 ]
Xia, Zhaoqiang [3 ]
Zhao, Guoying [1 ]
Affiliations
[1] Univ Oulu, CMVS, Oulu, Finland
[2] Xi An Jiao Tong Univ, Sch Software Engn, Xian, Peoples R China
[3] Northwestern Polytech Univ, Xian, Peoples R China
Funding
National Natural Science Foundation of China; Academy of Finland;
Keywords
Skeleton-based Action Recognition; Graph Topology Analysis; Riemann Manifold; Graph Convolutional Networks;
DOI
10.1145/3394171.3413910
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph Convolutional Networks (GCNs) have demonstrated a powerful ability to model irregular data, e.g., skeletal data in human action recognition, providing an exciting new way to fuse rich structural information for nodes residing in different parts of a graph. In human action recognition, current works introduce a dynamic graph generation mechanism to better capture the underlying semantic skeleton connections and thus improve performance. In this paper, we provide an orthogonal way to explore these underlying connections. Instead of introducing an expensive dynamic graph generation paradigm, we build a more efficient GCN on a Riemann manifold, which we argue is a more suitable space in which to model graph data, so that the extracted representations fit the embedding matrix. Specifically, we present a novel spatial-temporal GCN (ST-GCN) architecture defined via Poincare geometry, which can better model the latent anatomy of structured data. To further explore the optimal projection dimension in the Riemann space, we mix different dimensions on the manifold and provide an efficient way to search for the dimension of each ST-GCN layer. With the resulting architecture, we evaluate our method on the two current largest-scale 3D datasets, i.e., NTU RGB+D and NTU RGB+D 120. The comparison results show that our model achieves superior performance under all evaluation metrics with only 40% of the model size of the previous best GCN method, which proves its effectiveness.
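The core idea of a GCN defined via Poincare geometry can be illustrated with a minimal sketch: map node features from the Poincare ball into the tangent space at the origin, perform the usual graph aggregation and linear transform there, then map the result back onto the manifold. The function names and the toy 3-joint "skeleton" below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def expmap0(v, c=1.0, eps=1e-9):
    # Exponential map at the origin of the Poincare ball (curvature -c):
    # sends a tangent vector v onto the manifold.
    norm = np.linalg.norm(v, axis=-1, keepdims=True).clip(eps)
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def logmap0(x, c=1.0, eps=1e-9):
    # Logarithmic map at the origin: inverse of expmap0,
    # pulls a manifold point back into the tangent (Euclidean) space.
    norm = np.linalg.norm(x, axis=-1, keepdims=True).clip(eps)
    scaled = np.clip(np.sqrt(c) * norm, 0.0, 1.0 - eps)
    return np.arctanh(scaled) * x / (np.sqrt(c) * norm)

def poincare_gcn_layer(X, A, W, c=1.0):
    """One hyperbolic graph-convolution step: tangent-space
    aggregation over the adjacency A, linear map W, then
    projection back onto the Poincare ball."""
    H = logmap0(X, c)                            # tangent features
    D = A.sum(axis=1, keepdims=True).clip(1e-9)  # node degrees
    H = (A / D) @ H @ W                          # mean-aggregate neighbours, transform
    return expmap0(H, c)                         # back onto the manifold

# Toy example: a 3-joint chain skeleton with 2-D features.
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)          # adjacency with self-loops
X = expmap0(np.random.randn(3, 2) * 0.1)        # random points inside the ball
W = np.eye(2)
Y = poincare_gcn_layer(X, A, W)
assert np.all(np.linalg.norm(Y, axis=-1) < 1.0)  # outputs stay inside the unit ball
```

Because expmap0 rescales by tanh, the layer's outputs are guaranteed to lie strictly inside the unit ball, so layers can be stacked; the mixed-dimension search described in the abstract would vary the column count of `W` per layer.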
Pages: 1432-1440
Page count: 9
Related papers
50 records total
  • [41] Insight on Attention Modules for Skeleton-Based Action Recognition
    Jiang, Quanyan
    Wu, Xiaojun
    Kittler, Josef
    PATTERN RECOGNITION AND COMPUTER VISION, PT I, 2021, 13019 : 242 - 255
  • [42] Skeleton-based action recognition with JRR-GCN
    Ye, Fanfan
    Tang, Huiming
    ELECTRONICS LETTERS, 2019, 55 (17) : 933 - 935
  • [43] Research Progress in Skeleton-Based Human Action Recognition
    Liu B.
    Zhou S.
    Dong J.
    Xie M.
    Zhou S.
    Zheng T.
    Zhang S.
    Ye X.
    Wang X.
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2023, 35 (09): : 1299 - 1322
  • [44] Temporal Extension Module for Skeleton-Based Action Recognition
    Obinata, Yuya
    Yamamoto, Takuma
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 534 - 540
  • [45] Adversarial Attack on Skeleton-Based Human Action Recognition
    Liu, Jian
    Akhtar, Naveed
    Mian, Ajmal
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (04) : 1609 - 1622
  • [46] Skeleton-based action recognition with extreme learning machines
    Chen, Xi
    Koskela, Markus
    NEUROCOMPUTING, 2015, 149 : 387 - 396
  • [47] Profile HMMs for skeleton-based human action recognition
    Ding, Wenwen
    Liu, Kai
    Fu, Xujia
    Cheng, Fei
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2016, 42 : 109 - 119
  • [48] Skeleton-based Action Recognition with Graph Involution Network
    Tang, Zhihao
    Xia, Hailun
    Gao, Xinkai
    Gao, Feng
    Feng, Chunyan
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 3348 - 3354
  • [49] Bootstrapped Representation Learning for Skeleton-Based Action Recognition
    Moliner, Olivier
    Huang, Sangxia
    Astrom, Kalle
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 4153 - 4163
  • [50] Convolutional relation network for skeleton-based action recognition
    Zhu, Jiagang
    Zou, Wei
    Zhu, Zheng
    Hu, Yiming
    NEUROCOMPUTING, 2019, 370 : 109 - 117