Discriminative Spatio-Temporal Pattern Discovery for 3D Action Recognition

Cited by: 31
Authors
Weng, Junwu [1 ]
Weng, Chaoqun [1 ]
Yuan, Junsong [2 ]
Liu, Zicheng [3 ]
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
[2] SUNY Buffalo, Dept Comp Sci & Engn, Buffalo, NY 14260 USA
[3] Microsoft Res, Redmond, WA 98052 USA
Keywords
NBMIM; spatio-temporal pattern discovery; discriminative skeleton-based action recognition;
DOI
10.1109/TCSVT.2018.2818151
CLC Number
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Code
0808; 0809
Abstract
Despite the recent success of 3D action recognition using depth sensors, most existing works focus on improving recognition performance rather than on understanding how different types of actions are performed. In this paper, we propose to discover discriminative spatio-temporal patterns for 3D action recognition. Discovering these patterns not only helps to improve recognition performance but also helps us to understand and differentiate between action categories. Our proposed method takes the spatio-temporal structure of 3D actions into consideration and can discover the essential spatio-temporal patterns that play key roles in action recognition. Instead of relying on an end-to-end network to learn the 3D action representation and perform classification, we simply represent each 3D action as a series of temporal stages composed of 3D poses. We then rely on nearest neighbor matching and bilinear classifiers to simultaneously identify the critical temporal stages and spatial joints for each action class. Despite using a raw action representation and a linear classifier, experiments on five benchmark data sets show that the proposed spatio-temporal naive Bayes mutual information maximization (NBMIM) achieves performance competitive with state-of-the-art methods that use sophisticated end-to-end learning, while having the advantage of finding discriminative spatio-temporal action patterns.
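The abstract describes the NBMIM voting scheme only at a high level. The Python sketch below illustrates one plausible form of stage-wise nearest-neighbor voting under a Gaussian-kernel likelihood-ratio approximation; the function and variable names (nbmim_scores, query_stages, class_templates, sigma), the data shapes, and the scoring formula are illustrative assumptions, not the authors' implementation.

# Minimal sketch of NBMIM-style voting with nearest-neighbor matching.
# Assumption: each temporal stage is a fixed-length pose descriptor, and a
# stage votes for class c by how much closer it lies to c's training stages
# than to all other classes' stages (Gaussian-kernel log likelihood ratio).
import numpy as np

def nbmim_scores(query_stages, class_templates, sigma=1.0):
    # query_stages   : (T, D) array, one descriptor per temporal stage
    # class_templates: dict class_id -> (N_c, D) array of training descriptors
    scores = {}
    for c, templates in class_templates.items():
        vote = 0.0
        for stage in query_stages:
            # nearest-neighbor distance to the positive class c
            d_pos = np.min(np.linalg.norm(templates - stage, axis=1))
            # nearest-neighbor distance to the pooled negative classes
            neg = np.vstack([t for k, t in class_templates.items() if k != c])
            d_neg = np.min(np.linalg.norm(neg - stage, axis=1))
            # larger vote when the stage is closer to class c than to the rest
            vote += (d_neg ** 2 - d_pos ** 2) / (2.0 * sigma ** 2)
        scores[c] = vote
    return scores

# toy usage with two synthetic classes of stage descriptors
rng = np.random.default_rng(0)
templates = {0: rng.normal(0.0, 1.0, (20, 8)),
             1: rng.normal(3.0, 1.0, (20, 8))}
query = rng.normal(3.0, 1.0, (5, 8))      # five temporal stages of one action
scores = nbmim_scores(query, templates)
print("predicted class:", max(scores, key=scores.get))

Because each temporal stage contributes an additive vote, per-stage (and, with joint-wise descriptors, per-joint) contributions can be inspected directly, which is the kind of spatio-temporal pattern discovery the paper targets.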
Pages: 1077-1089
Number of pages: 13
Related Papers
50 records
  • [1] Spatio-temporal deformable 3D ConvNets with attention for action recognition
    Li, Jun
    Liu, Xianglong
    Zhang, Mingyuan
    Wang, Deqing
    [J]. PATTERN RECOGNITION, 2020, 98
  • [2] Action Recognition Using Discriminative Spatio-Temporal Neighborhood Features
    Cheng, Shi-Lei
    Yang, Jiang-Feng
    Ma, Zheng
    Xie, Mei
    [J]. INTERNATIONAL CONFERENCE ON COMPUTER NETWORKS AND INFORMATION SECURITY (CNIS 2015), 2015, : 166 - 172
  • [3] Accelerated Learning of Discriminative Spatio-temporal Features for Action Recognition
    Varshney, Munender
    Rameshan, Renu
    [J]. 2016 INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATIONS (SPCOM), 2016,
  • [4] Spatio-Temporal LSTM with Trust Gates for 3D Human Action Recognition
    Liu, Jun
    Shahroudy, Amir
    Xu, Dong
    Wang, Gang
    [J]. COMPUTER VISION - ECCV 2016, PT III, 2016, 9907 : 816 - 833
  • [5] Spatio-Temporal Features in Action Recognition Using 3D Skeletal Joints
    Trascau, Mihai
    Nan, Mihai
    Florea, Adina Magda
    [J]. SENSORS, 2019, 19 (02)
  • [6] 3D R Transform on Spatio-Temporal Interest Points for Action Recognition
    Yuan, Chunfeng
    Li, Xi
    Hu, Weiming
    Ling, Haibin
    Maybank, Stephen
    [J]. 2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2013, : 724 - 730
  • [7] 3D human action recognition using spatio-temporal motion templates
    Lv, FJ
    Nevatia, R
    Lee, MW
    [J]. COMPUTER VISION IN HUMAN-COMPUTER INTERACTION, PROCEEDINGS, 2005, 3766 : 120 - 130
  • [8] Spatio-temporal attention on manifold space for 3D human action recognition
    Ding, Chongyang
    Liu, Kai
    Cheng, Fei
    Belyaev, Evgeny
    [J]. APPLIED INTELLIGENCE, 2021, 51 (01) : 560 - 570
  • [9] Learning Spatio-Temporal Features with 3D Residual Networks for Action Recognition
    Hara, Kensho
    Kataoka, Hirokatsu
    Satoh, Yutaka
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017), 2017, : 3154 - 3160