Flexible human action recognition in depth video sequences using masked joint trajectories

Cited by: 0
Authors
Antonio Tejero-de-Pablos
Yuta Nakashima
Naokazu Yokoya
Francisco-Javier Díaz-Pernas
Mario Martínez-Zarzuela
Affiliations
[1] Nara Institute of Science and Technology
[2] University of Valladolid
[3] Campus Miguel Delibes
Keywords
Flexible human action recognition; Runtime learning; Noisy joint trajectory; Depth video sequences
DOI
Not available
Abstract
Human action recognition applications benefit greatly from commodity depth sensors capable of skeleton tracking. Some of these applications (e.g., customizable gesture interfaces) require learning new actions at runtime and may not have many training instances available. This paper presents a human action recognition method designed for flexibility, which allows users' feedback to be incorporated to improve recognition performance and new action instances to be added without computationally expensive classifier training. Our nearest neighbor-based action classifier adopts dynamic time warping to handle variability in execution rate. In addition, it uses the confidence value associated with each tracked joint position to mask erroneous trajectories, providing robustness against noise. We evaluate the proposed method on several datasets with different frame rates, actors, and noise levels. The experimental results demonstrate its suitability for learning actions from depth sequences at runtime. We achieve accuracy comparable to state-of-the-art techniques on the challenging MSR-Action3D dataset.
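A minimal sketch of the approach outlined in the abstract: 1-nearest-neighbor classification over dynamic time warping costs, with joints of low tracking confidence masked out of the frame-to-frame distance. The confidence threshold, array shapes, and function names are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch (not the paper's implementation) of DTW-based nearest-neighbor action
# classification over masked joint trajectories from a depth sensor.
import numpy as np

CONF_THRESHOLD = 0.5  # hypothetical tracking-confidence cutoff


def masked_distance(frame_a, frame_b, conf_a, conf_b):
    """Euclidean distance over joints tracked reliably in both frames."""
    mask = (conf_a >= CONF_THRESHOLD) & (conf_b >= CONF_THRESHOLD)
    if not mask.any():
        return 0.0  # no reliable joints: contribute nothing to the alignment cost
    return np.linalg.norm(frame_a[mask] - frame_b[mask])


def dtw_cost(seq_a, conf_a, seq_b, conf_b):
    """DTW alignment cost between two sequences.

    seq_*: (T, J, 3) arrays of joint positions; conf_*: (T, J) confidence values.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = masked_distance(seq_a[i - 1], seq_b[j - 1],
                                conf_a[i - 1], conf_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]


def classify(query, query_conf, gallery):
    """1-NN label of `query` against a gallery of (sequence, confidence, label) tuples.

    Adding a new action instance at runtime is just appending a tuple;
    no classifier retraining or optimization is needed.
    """
    best_label, best_cost = None, np.inf
    for seq, conf, label in gallery:
        c = dtw_cost(query, query_conf, seq, conf)
        if c < best_cost:
            best_cost, best_label = c, label
    return best_label
```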
Related Papers
50 records in total
  • [21] Curvature: A signature for Action Recognition in Video Sequences
    Chen, He
    Chirikjian, Gregory S.
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 3743 - 3750
  • [22] A Probabilistic Approach for Human Action Recognition using Motion Trajectories
    Chalamala, Srinivasa Rao
    Kumar, Prasanna A. L. P.
    2016 7TH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS, MODELLING AND SIMULATION (ISMS), 2016, : 185 - 190
  • [23] Human Action Recognition Using Improved Salient Dense Trajectories
    Li, Qingwu
    Cheng, Haisu
    Zhou, Yan
    Huo, Guanying
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2016, 2016
  • [24] Human Activity Recognition for Video Surveillance using Sequences of Postures
    Htike, Kyaw Kyaw
    Khalifa, Othman O.
    Ramli, Huda Adibah Mohd
    Abushariah, Mohammad A. M.
    2014 THIRD INTERNATIONAL CONFERENCE ON E-TECHNOLOGIES AND NETWORKS FOR DEVELOPMENT (ICEND), 2014, : 79 - 82
  • [25] Human action recognition with salient trajectories
    Yi, Yang
    Lin, Yikun
    SIGNAL PROCESSING, 2013, 93 (11) : 2932 - 2941
  • [26] Human Action Recognition in Video
    Singh, Dushyant Kumar
    ADVANCED INFORMATICS FOR COMPUTING RESEARCH, ICAICR 2018, PT I, 2019, 955 : 54 - 66
  • [27] Survey on artificial intelligence-based human action recognition in video sequences
    Kumar, Rahul
    Kumar, Shailender
    OPTICAL ENGINEERING, 2023, 62 (02)
  • [28] Depth Context: a new descriptor for human activity recognition by using sole depth sequences
    Liu, Mengyuan
    Liu, Hong
    NEUROCOMPUTING, 2016, 175 : 747 - 758
  • [29] On the improvement of human action recognition from depth map sequences using Space-Time Occupancy Patterns
    Vieira, Antonio W.
    Nascimento, Erickson R.
    Oliveira, Gabriel L.
    Liu, Zicheng
    Campos, Mario F. M.
    PATTERN RECOGNITION LETTERS, 2014, 36 : 221 - 227
  • [30] Action Recognition in Real Homes using Low Resolution Depth Video Data
    Casagrande, Flavia Dias
    Nedrejord, Oda Olsen
    Lee, Wonho
    Zouganeli, Evi
    2019 IEEE 32ND INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS (CBMS), 2019, : 156 - 161