Flexible human action recognition in depth video sequences using masked joint trajectories

Cited by: 0
Authors
Antonio Tejero-de-Pablos
Yuta Nakashima
Naokazu Yokoya
Francisco-Javier Díaz-Pernas
Mario Martínez-Zarzuela
Affiliations
[1] Nara Institute of Science and Technology,
[2] University of Valladolid,
[3] Campus Miguel Delibes,
Keywords
Flexible human action recognition; Runtime learning; Noisy joint trajectory; Depth video sequences;
DOI: None
Abstract
Human action recognition applications benefit greatly from commodity depth sensors capable of skeleton tracking. Some of these applications (e.g., customizable gesture interfaces) require learning new actions at runtime and may not have many training instances. This paper presents a human action recognition method designed for flexibility, which allows taking users’ feedback to improve recognition performance and adding a new action instance without computationally expensive optimization for training classifiers. Our nearest neighbor-based action classifier adopts dynamic time warping to handle variability in execution rate. In addition, it uses the confidence values associated with each tracked joint position to mask erroneous trajectories for robustness against noise. We evaluate the proposed method on various datasets with different frame rates, actors, and noise. The experimental results demonstrate its adequacy for learning actions from depth sequences at runtime. We achieve an accuracy comparable to state-of-the-art techniques on the challenging MSR-Action3D dataset.
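The abstract combines three ingredients: dynamic time warping (DTW) to absorb execution-rate differences, per-joint confidence values to mask noisy trajectories, and a nearest-neighbor classifier so that learning a new action at runtime is just storing one more example. A minimal sketch of this combination is below; it is not the authors' implementation, and the function names and the confidence threshold `CONF_THRESHOLD` are illustrative assumptions.

```python
import numpy as np

CONF_THRESHOLD = 0.5  # assumed cutoff below which a tracked joint is ignored

def frame_distance(f1, c1, f2, c2):
    """Euclidean distance between two frames of joint positions (J x 3),
    computed only over joints both trackers report as confident."""
    mask = (c1 >= CONF_THRESHOLD) & (c2 >= CONF_THRESHOLD)
    if not mask.any():          # no reliable joints: fall back to all joints
        mask = np.ones_like(mask, dtype=bool)
    return np.linalg.norm(f1[mask] - f2[mask])

def dtw_distance(seq_a, conf_a, seq_b, conf_b):
    """Classic DTW over masked frame distances, handling variability
    in execution rate between two action sequences."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = frame_distance(seq_a[i - 1], conf_a[i - 1],
                                  seq_b[j - 1], conf_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, 1 + j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, query_conf, gallery):
    """1-NN over stored (sequence, confidences, label) examples. Adding a
    new action instance at runtime is just appending to `gallery` --
    no classifier retraining is needed."""
    best_label, best_dist = None, np.inf
    for seq, conf, label in gallery:
        d = dtw_distance(query, query_conf, seq, conf)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

The quadratic DTW table makes this O(nm) per gallery example; for short gesture clips and modest galleries that is typically fast enough for interactive use, which is the flexibility the paper targets.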
Related Papers (50 total)
  • [11] Skeleton embedded motion body partition for human action recognition using depth sequences
    Ji, Xiaopeng
    Cheng, Jun
    Feng, Wei
    Tao, Dapeng
    SIGNAL PROCESSING, 2018, 143 : 56 - 68
  • [12] The Study on Human Action Recognition with Depth Video for Intelligent Monitoring
    Liu, Xueping
    Li, Yibo
    Li, Youru
    Yu, Shi
    Tian, Can
    PROCEEDINGS OF THE 2019 31ST CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2019), 2019, : 5702 - 5706
  • [13] The spatial Laplacian and temporal energy pyramid representation for human action recognition using depth sequences
    Ji, Xiaopeng
    Cheng, Jun
    Tao, Dapeng
    Wu, Xinyu
    Feng, Wei
    KNOWLEDGE-BASED SYSTEMS, 2017, 122 : 64 - 74
  • [14] A Robust Deep Model for Human Action Recognition in Restricted Video Sequences
    Chenarlogh, Vahid Ashkani
    Jond, Hossein B.
    Platos, Jan
    2020 43RD INTERNATIONAL CONFERENCE ON TELECOMMUNICATIONS AND SIGNAL PROCESSING (TSP), 2020, : 541 - 544
  • [15] FAST AND RELIABLE HUMAN ACTION RECOGNITION IN VIDEO SEQUENCES BY SEQUENTIAL ANALYSIS
    Fang, Hui
    Thiyagalingam, Jeyarajan
    Bessis, Nik
    Edirisinghe, Eran
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 3973 - 3977
  • [16] Human-Body Action Recognition Based on Dense Trajectories and Video Saliency
    Gao Deyong
    Kang Zibing
    Wang Song
    Wang Yangping
    LASER & OPTOELECTRONICS PROGRESS, 2020, 57 (24)
  • [17] Sparse Spatio-Temporal Representation of Joint Shape-Motion Cues for Human Action Recognition in Depth Sequences
    Tran, Quang D.
    Ly, Ngoc Q.
    PROCEEDINGS OF 2013 IEEE RIVF INTERNATIONAL CONFERENCE ON COMPUTING AND COMMUNICATION TECHNOLOGIES: RESEARCH, INNOVATION, AND VISION FOR THE FUTURE (RIVF), 2013, : 253 - 258
  • [18] A Survey on Human Action Recognition Using Depth Sensors
    Liang, Bin
    Zheng, Lihong
    2015 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA), 2015, : 76 - 83
  • [19] Cross-view human action recognition from depth maps using spectral graph sequences
    Kerola, Tommi
    Inoue, Nakamasa
    Shinoda, Koichi
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2017, 154 : 108 - 126
  • [20] Averaging Video Sequences to Improve Action Recognition
    Gao, Zhen
    Lu, Guoliang
    Yan, Peng
    2016 9TH INTERNATIONAL CONGRESS ON IMAGE AND SIGNAL PROCESSING, BIOMEDICAL ENGINEERING AND INFORMATICS (CISP-BMEI 2016), 2016, : 89 - 93