Human skeleton pose and spatio-temporal feature-based activity recognition using ST-GCN

Cited: 3
Authors
Lovanshi, Mayank [1 ]
Tiwari, Vivek [1 ,2 ]
Affiliations
[1] Int Inst Informat Technol IIIT, Naya Raipur, India
[2] ABV Indian Inst Informat Technol & Management, Gwalior, India
Keywords
Activity recognition; Pose estimation; ST-GCN; Spatio-temporal feature; Skeleton joints; SPATIAL-DISTRIBUTION; UNIFIED FRAMEWORK; GRADIENTS; MODEL;
DOI
10.1007/s11042-023-16001-9
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Skeleton-based human activity recognition has recently attracted considerable attention because skeleton data has proven robust to changes in lighting, body size, dynamic camera perspectives, and complicated backgrounds. The Spatial-Temporal Graph Convolutional Network (ST-GCN) model has been shown to learn spatial and temporal dependencies effectively from skeleton data. However, the efficient use of in-depth 3D skeleton information, specifically human joint motion patterns and linkage information, remains a significant challenge. This study proposes a solution based on a custom ST-GCN model and skeleton joints for human activity recognition. Special attention is given to spatial and temporal features, which are fed to the classification model for better pose estimation. A comparative study of activity recognition is presented on large-scale databases, namely the NTU-RGB-D, Kinetics-Skeleton, and Florence 3D datasets. The custom ST-GCN model outperforms the state-of-the-art methods in Top-1 accuracy on NTU-RGB-D, Kinetics-Skeleton, and Florence 3D by margins of 0.7%, 1.25%, and 1.92%, respectively. Similarly, in Top-5 accuracy, the custom ST-GCN model improves results by 0.5%, 0.73%, and 1.52%, respectively. This shows that the presented graph-based topologies capture the changing aspects of a motion-based skeleton sequence better than the other approaches considered.
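The abstract does not detail the custom model, but the core ST-GCN operation it builds on, a spatial graph convolution over skeleton joints followed by a temporal convolution over frames, can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration (the class name, hyper-parameters, and the identity adjacency placeholder), not the authors' implementation.

import torch
import torch.nn as nn

class STGCNBlock(nn.Module):
    """One spatial-temporal graph convolution block (illustrative sketch).

    Input:  x of shape (N, C_in, T, V)  - batch, channels, frames, joints
            A of shape (V, V)           - normalized skeleton adjacency matrix
    Output: tensor of shape (N, C_out, T, V)
    """
    def __init__(self, in_channels, out_channels, A, temporal_kernel=9):
        super().__init__()
        self.register_buffer("A", A)  # fixed skeleton graph
        # Spatial step: 1x1 convolution to mix channels per joint.
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        pad = (temporal_kernel - 1) // 2
        # Temporal step: convolution along the frame axis, one joint at a time.
        self.temporal = nn.Sequential(
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels,
                      kernel_size=(temporal_kernel, 1), padding=(pad, 0)),
            nn.BatchNorm2d(out_channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.spatial(x)                           # (N, C_out, T, V)
        # Graph message passing: aggregate each joint's neighbors via A.
        x = torch.einsum("nctv,vw->nctw", x, self.A)
        return self.relu(self.temporal(x))

# Usage sketch: 25 joints (as in the NTU-RGB-D skeleton), 3D coordinates in.
if __name__ == "__main__":
    V = 25
    A = torch.eye(V)  # placeholder; a real model uses the normalized bone graph
    block = STGCNBlock(3, 64, A)
    x = torch.randn(8, 3, 300, V)  # 8 clips, 300 frames each
    print(block(x).shape)          # torch.Size([8, 64, 300, 25])

In a full ST-GCN, several such blocks are stacked with growing channel widths, then global average pooling over frames and joints feeds a softmax classifier over activity labels.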
Pages: 12705-12730
Page count: 26
Related papers
50 records in total
  • [1] Human skeleton pose and spatio-temporal feature-based activity recognition using ST-GCN
    Mayank Lovanshi
    Vivek Tiwari
    Multimedia Tools and Applications, 2024, 83 : 12705 - 12730
  • [2] Skeleton-Based ST-GCN for Human Action Recognition With Extended Skeleton Graph and Partitioning Strategy
    Wang, Quanyu
    Zhang, Kaixiang
    Asghar, Manjotho Ali
    IEEE ACCESS, 2022, 10 : 41403 - 41410
  • [3] Multi-Channel Feature Fusion Spatio-Temporal GCN for Human Pose Forecasting
    Tan, HaoYao
    Yuan, JunBin
    Xu, QingZhen
    SSRN, 2023,
  • [4] STFC: Spatio-temporal feature chain for skeleton-based human action recognition
    Ding, Wenwen
    Liu, Kai
    Cheng, Fei
    Zhang, Jin
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2015, 26 : 329 - 337
  • [5] Skeleton-based action recognition using sparse spatio-temporal GCN with edge effective resistance
    Ahmad, Tasweer
    Jin, Lianwen
    Lin, Luojun
    Tang, GuoZhi
    NEUROCOMPUTING, 2021, 423 : 389 - 398
  • [6] Spatio-temporal analysis of feature-based attention
    Schoenfeld, M. A.
    Hopf, J.-M.
    Martinez, A.
    Mai, H. M.
    Sattler, C.
    Gasde, A.
    Heinze, H.-J.
    Hillyard, S. A.
    CEREBRAL CORTEX, 2007, 17 (10) : 2468 - 2477
  • [7] Deep Learning Based Human Activity Recognition Using Spatio-Temporal Image Formation of Skeleton Joints
    Tasnim, Nusrat
    Islam, Mohammad Khairul
    Baek, Joong-Hwan
    APPLIED SCIENCES-BASEL, 2021, 11 (06):
  • [8] Construction of tennis pose estimation and action recognition model based on improved ST-GCN
    Yu, Yang
    MCB Molecular and Cellular Biomechanics, 2024, 21 (01):
  • [9] ST-GCN human action recognition based on new partition strategy
    Yang S.
    Li Z.
    Wang J.
    He D.
    Li Q.
    Li D.
    Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS, 2023, 29 (12): 4040 - 4050
  • [10] SKELETON ACTION RECOGNITION BASED ON SPATIO-TEMPORAL FEATURES
    Huang, Qian
    Xie, Mengting
    Li, Xing
    Wang, Shuaichen
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023: 3284 - 3288