Flexible human action recognition in depth video sequences using masked joint trajectories

Cited: 0
Authors
Antonio Tejero-de-Pablos
Yuta Nakashima
Naokazu Yokoya
Francisco-Javier Díaz-Pernas
Mario Martínez-Zarzuela
Institutions
[1] Nara Institute of Science and Technology
[2] University of Valladolid
[3] Campus Miguel Delibes
Keywords
Flexible human action recognition; Runtime learning; Noisy joint trajectory; Depth video sequences;
DOI: not available
Abstract
Human action recognition applications benefit greatly from commodity depth sensors capable of skeleton tracking. Some of these applications (e.g., customizable gesture interfaces) require learning new actions at runtime and may not have many training instances available. This paper presents a human action recognition method designed for flexibility: it allows taking users’ feedback to improve recognition performance and adding a new action instance without computationally expensive optimization for training classifiers. Our nearest neighbor-based action classifier adopts dynamic time warping (DTW) to handle variability in execution rate. In addition, it uses the confidence value associated with each tracked joint position to mask erroneous trajectories, providing robustness against noise. We evaluate the proposed method on various datasets with different frame rates, actors, and noise levels. The experimental results demonstrate its adequacy for learning actions from depth sequences at runtime. We achieve an accuracy comparable to state-of-the-art techniques on the challenging MSR-Action3D dataset.
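The abstract's core ideas (a nearest-neighbor classifier, DTW alignment to absorb execution-rate differences, and masking of low-confidence joints) can be illustrated with a minimal sketch. This is not the authors' implementation; the thresholding scheme, the per-frame distance, and all function names here are illustrative assumptions.

```python
import numpy as np

def masked_frame_dist(a, b, ca, cb, thresh=0.5):
    """Distance between two skeleton frames (J x 3 joint positions),
    ignoring joints whose tracking confidence (ca, cb, per joint)
    falls below thresh in either frame. The 0.5 threshold is an
    illustrative choice, not taken from the paper."""
    valid = (ca >= thresh) & (cb >= thresh)
    if not valid.any():
        return 0.0  # nothing trackable: contribute no cost (a design choice)
    return np.linalg.norm(a[valid] - b[valid], axis=1).mean()

def dtw(seq_a, seq_b, conf_a, conf_b):
    """Dynamic time warping cost between two joint-trajectory sequences
    (each of shape T x J x 3, with confidences of shape T x J)."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = masked_frame_dist(seq_a[i - 1], seq_b[j - 1],
                                     conf_a[i - 1], conf_b[j - 1])
            # standard DTW recurrence: insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, qconf, templates):
    """1-NN classification: templates is a list of (label, seq, conf)."""
    return min(templates, key=lambda t: dtw(query, t[1], qconf, t[2]))[0]
```

Under this scheme, "runtime learning" of a new action reduces to appending another `(label, sequence, confidence)` tuple to `templates`, with no retraining step, which matches the flexibility the abstract describes.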