Flexible human action recognition in depth video sequences using masked joint trajectories

Cited by: 0
Authors
Antonio Tejero-de-Pablos
Yuta Nakashima
Naokazu Yokoya
Francisco-Javier Díaz-Pernas
Mario Martínez-Zarzuela
Institutions
[1] Nara Institute of Science and Technology
[2] University of Valladolid
[3] Campus Miguel Delibes
Keywords
Flexible human action recognition; Runtime learning; Noisy joint trajectory; Depth video sequences
DOI
Not available
Abstract
Human action recognition applications benefit greatly from commodity depth sensors capable of skeleton tracking. Some of these applications (e.g., customizable gesture interfaces) require learning new actions at runtime and may have few training instances available. This paper presents a human action recognition method designed for flexibility: it can incorporate users’ feedback to improve recognition performance and add new action instances without computationally expensive classifier training. Our nearest neighbor-based action classifier adopts dynamic time warping to handle variability in execution rate. In addition, it uses the confidence value associated with each tracked joint position to mask erroneous trajectories, providing robustness against noise. We evaluate the proposed method on several datasets with different frame rates, actors, and noise levels. The experimental results demonstrate its suitability for learning actions from depth sequences at runtime, and it achieves accuracy comparable to state-of-the-art techniques on the challenging MSR-Action3D dataset.
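The pipeline described in the abstract (per-joint trajectories compared with dynamic time warping, a confidence mask that suppresses unreliably tracked joints, and a nearest-neighbor classifier whose instance set can grow at runtime) can be summarized in a short sketch. The code below is an illustrative reading of that description, not the authors’ implementation; the class and function names, the 0.5 confidence threshold, and the length-normalized DTW cost are all assumptions.

```python
# Minimal sketch of a DTW-based nearest-neighbor action classifier with
# confidence-masked joint trajectories. Illustrative only; names, threshold,
# and normalization are assumptions, not the paper's exact formulation.
import numpy as np

def masked_frame_distance(f1, c1, f2, c2, conf_threshold=0.5):
    """Distance between two skeleton frames (J x 3 joint positions),
    counting only joints whose tracking confidence passes the threshold
    in both frames (hypothetical threshold)."""
    mask = (c1 >= conf_threshold) & (c2 >= conf_threshold)
    if not mask.any():
        return 0.0  # no reliable joints in common; treat the frames as matching
    return float(np.linalg.norm(f1[mask] - f2[mask]))

def dtw_distance(seq_a, conf_a, seq_b, conf_b):
    """DTW alignment cost between two trajectory sequences (T x J x 3
    positions with T x J confidences), absorbing execution-rate variation."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = masked_frame_distance(seq_a[i - 1], conf_a[i - 1],
                                         seq_b[j - 1], conf_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)  # length-normalized alignment cost

class NearestNeighbourActionClassifier:
    """Instance-based classifier: new actions are added at runtime by
    storing their trajectories, with no retraining step."""
    def __init__(self):
        self.instances = []  # list of (sequence, confidences, label)

    def add_instance(self, seq, conf, label):
        self.instances.append((np.asarray(seq), np.asarray(conf), label))

    def classify(self, seq, conf):
        # Assumes at least one stored instance.
        seq, conf = np.asarray(seq), np.asarray(conf)
        distances = [dtw_distance(seq, conf, s, c) for s, c, _ in self.instances]
        return self.instances[int(np.argmin(distances))][2]
```

Under this structure, adding a newly demonstrated action or a corrected instance is a single add_instance call, which is what makes runtime learning cheap compared with retraining a parametric classifier.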
Related Papers
50 records in total
  • [1] Flexible human action recognition in depth video sequences using masked joint trajectories
    Tejero-de-Pablos, Antonio
    Nakashima, Yuta
    Yokoya, Naokazu
    Diaz-Pernas, Francisco-Javier
    Martinez-Zarzuela, Mario
    EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING, 2016
  • [2] Action Recognition from Depth Video Sequences Using Microsoft Kinect
    Lahan, Gautam Shankar
    Talukdar, Anjan Kumar
    Sarma, Kandarpa Kumar
    2019 FIFTH INTERNATIONAL CONFERENCE ON IMAGE INFORMATION PROCESSING (ICIIP 2019), 2019, : 35 - 40
  • [3] Human Action Recognition in Video Sequences Using Deep Belief Networks
    Abdellaoui, Mehrez
    Douik, Ali
    TRAITEMENT DU SIGNAL, 2020, 37 (01) : 37 - 44
  • [4] Human action recognition using fusion of features for unconstrained video sequences
    Patel, Chirag I.
    Garg, Sanjay
    Zaveri, Tanish
    Banerjee, Asim
    Patel, Ripal
    COMPUTERS & ELECTRICAL ENGINEERING, 2018, 70 : 284 - 301
  • [5] Human Body Articulation for Action Recognition in Video Sequences
    Thi, Tuan Hue
    Lu, Sijun
    Zhang, Jian
    Cheng, Li
    Wang, Li
    AVSS: 2009 6TH IEEE INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE, 2009, : 92 - +
  • [6] Human action recognition based on multi-scale feature maps from depth video sequences
    Chang Li
    Qian Huang
    Xing Li
    Qianhan Wu
    Multimedia Tools and Applications, 2021, 80 : 32111 - 32130
  • [7] Human action recognition based on multi-scale feature maps from depth video sequences
    Li, Chang
    Huang, Qian
    Li, Xing
    Wu, Qianhan
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (21-23) : 32111 - 32130
  • [8] Human Action Recognition Using Stereo Trajectories
    Habashi, Pejman
    Boufama, Boubakeur
    Ahmad, Imran Shafiq
    PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2020, 1144 : 94 - 105
  • [9] Enhanced Depth Motion Maps for Improved Human Action Recognition from Depth Action Sequences
    Rao, Dustakar Surendra
    Rao, L. Koteswara
    Bhagyaraju, Vipparthi
    Meng, Goh Kam
    TRAITEMENT DU SIGNAL, 2024, 41 (03) : 1461 - 1472
  • [10] A Revisit to Human Action Recognition from Depth Sequences: Guided SVM-Sampling for Joint Selection
    Antunes, Michel
    Aouada, Djamila
    Ottersten, Bjoern
    2016 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2016), 2016