Spatial-temporal slowfast graph convolutional network for skeleton-based action recognition

Cited by: 9
Authors
Fang, Zheng [1 ]
Zhang, Xiongwei [1 ]
Cao, Tieyong [1 ,2 ]
Zheng, Yunfei [1 ]
Sun, Meng [1 ]
Affiliations
[1] Peoples Liberat Army Engn Univ, Inst Command & Control Engn, Nanjing 210001, Jiangsu, Peoples R China
[2] Army Artillery & Def Acad PLA Nanjing, Nanjing, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
computer vision; graph theory; video signal processing; video signals
DOI
10.1049/cvi2.12080
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In skeleton-based action recognition, the graph convolutional network (GCN) has achieved great success. Modelling skeleton data in a suitable spatial-temporal way and designing the adjacency matrix are crucial for GCN-based methods to capture joint relationships. In this study, we propose the spatial-temporal slowfast graph convolutional network (STSF-GCN) and design the adjacency matrices for the skeleton data graphs in STSF-GCN. STSF-GCN contains two pathways: (1) the fast pathway operates at a high frame rate, and joints of adjacent frames are unified to build 'small' spatial-temporal graphs. A new spatial-temporal adjacency matrix is proposed for these 'small' graphs, and ablation studies verify its effectiveness. (2) The slow pathway operates at a low frame rate, and joints from all frames are unified to build one 'big' spatial-temporal graph, whose adjacency matrix is obtained by computing self-attention coefficients between joints. Finally, the outputs of the two pathways are fused to predict the action category. STSF-GCN efficiently captures both long-range and short-range spatial-temporal joint relationships, and on three datasets for skeleton-based action recognition it achieves state-of-the-art performance at a much lower computational cost.
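The following is a minimal PyTorch sketch of the two-pathway idea described in the abstract, not the authors' implementation. All module names (STGraphConv, SelfAttentionGraphConv, SlowFastGCN), the row-normalised all-ones placeholder adjacency for the 'small' graphs, the frame-subsampling stride, the pooling steps, and the concatenation-based fusion are illustrative assumptions; the paper's actual spatial-temporal adjacency design is its contribution and is not reproduced here.

# Hedged sketch, NOT the authors' code: layer sizes, the uniform 'small'-graph
# adjacency, the subsampling stride, and concatenation fusion are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class STGraphConv(nn.Module):
    """Graph convolution over N nodes with a fixed N x N adjacency matrix."""

    def __init__(self, in_ch, out_ch, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)  # fixed spatial-temporal adjacency
        self.proj = nn.Linear(in_ch, out_ch)

    def forward(self, x):  # x: (batch, N, in_ch)
        x = torch.einsum("nm,bmc->bnc", self.A, x)  # aggregate neighbour features
        return F.relu(self.proj(x))


class SelfAttentionGraphConv(nn.Module):
    """Graph convolution whose adjacency comes from self-attention coefficients."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.query = nn.Linear(in_ch, out_ch)
        self.key = nn.Linear(in_ch, out_ch)
        self.proj = nn.Linear(in_ch, out_ch)

    def forward(self, x):  # x: (batch, N, in_ch)
        scores = torch.einsum("bnc,bmc->bnm", self.query(x), self.key(x))
        attn = F.softmax(scores / self.key.out_features ** 0.5, dim=-1)
        x = torch.einsum("bnm,bmc->bnc", attn, x)  # attention-weighted aggregation
        return F.relu(self.proj(x))


class SlowFastGCN(nn.Module):
    def __init__(self, num_joints, window, in_ch, hidden, num_classes, stride=4):
        super().__init__()
        self.window, self.stride = window, stride
        # Fast pathway: 'small' graphs over `window` adjacent frames; a
        # row-normalised all-ones adjacency stands in for the paper's matrix.
        n = num_joints * window
        self.fast = STGraphConv(in_ch, hidden, torch.ones(n, n) / n)
        # Slow pathway: one 'big' graph over all subsampled frames.
        self.slow = SelfAttentionGraphConv(in_ch, hidden)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):  # x: (batch, frames, joints, channels); frames % window == 0
        b, t, v, c = x.shape
        # Fast pathway: unify joints of adjacent frames into 'small' graphs.
        fast = x.reshape(b * (t // self.window), self.window * v, c)
        fast = self.fast(fast).mean(dim=1)  # pool nodes: (b * t/window, hidden)
        fast = fast.view(b, -1, fast.size(-1)).mean(dim=1)  # pool windows: (b, hidden)
        # Slow pathway: subsample frames, unify all joints into one 'big' graph.
        slow = x[:, :: self.stride].reshape(b, -1, c)
        slow = self.slow(slow).mean(dim=1)  # pool nodes: (b, hidden)
        return self.head(torch.cat([fast, slow], dim=-1))  # fused class logits

For example, model = SlowFastGCN(num_joints=25, window=4, in_ch=3, hidden=64, num_classes=60) applied to a (2, 32, 25, 3) tensor of 3-D joint coordinates returns (2, 60) class logits; the window/stride split is what gives the fast pathway short-range and the slow pathway long-range temporal context.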
Pages: 205-217
Page count: 13
Related Papers
50 records in total
  • [31] Multi-Stream and Enhanced Spatial-Temporal Graph Convolution Network for Skeleton-Based Action Recognition
    Li, Fanjia
    Zhu, Aichun
    Xu, Yonggang
    Cui, Ran
    Hua, Gang
    IEEE ACCESS, 2020, 8 : 97757 - 97770
  • [32] Multi-Branch Spatial-Temporal Attention Graph Convolution Network for Skeleton-based Action Recognition
    Wang, Daoshuai
    Li, Dewei
    Guan, Yaonan
    Wang, Gang
    Shao, Haibin
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 6487 - 6492
  • [33] Spatial Temporal Graph Deconvolutional Network for Skeleton-Based Human Action Recognition
    Peng, Wei
    Shi, Jingang
    Zhao, Guoying
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 244 - 248
  • [34] Spatial–Temporal gated graph attention network for skeleton-based action recognition
    Rahevar, Mrugendrasinh
    Ganatra, Amit
    PATTERN ANALYSIS AND APPLICATIONS, 2023, 26 (3) : 929 - 939
  • [35] Skeleton-Based Action Recognition with Shift Graph Convolutional Network
    Cheng, Ke
    Zhang, Yifan
    He, Xiangyu
    Chen, Weihan
    Cheng, Jian
    Lu, Hanqing
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 180 - 189
  • [37] A lightweight graph convolutional network for skeleton-based action recognition
    Pham, Dinh-Tan
    Pham, Quang-Tien
    Nguyen, Tien-Thanh
    Le, Thi-Lan
    Vu, Hai
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (02) : 3055 - 3079
  • [38] Shallow Graph Convolutional Network for Skeleton-Based Action Recognition
    Yang, Wenjie
    Zhang, Jianlin
    Cai, Jingju
    Xu, Zhiyong
    SENSORS, 2021, 21 (02) : 1 - 14
  • [39] Ghost Graph Convolutional Network for Skeleton-based Action Recognition
    Jang, Sungjun
    Lee, Heansung
    Cho, Suhwan
    Woo, Sungmin
    Lee, Sangyoun
    2021 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS-ASIA (ICCE-ASIA), 2021