Keys for Action: An Efficient Keyframe-Based Approach for 3D Action Recognition Using a Deep Neural Network

Cited by: 28
Authors
Yasin, Hashim [1]
Hussain, Mazhar [1]
Weber, Andreas [2]
Affiliations
[1] Natl Univ Comp & Emerging Sci, Dept Comp Sci, Islamabad 44000, Pakistan
[2] Univ Bonn, Dept Comp Sci 2, D-53115 Bonn, Germany
Keywords
action recognition; deep neural network (DNN); motion capture (MoCap) datasets; keyframe extraction; motion capture; sequence
DOI
10.3390/s20082226
Chinese Library Classification
O65 [Analytical Chemistry]
Discipline classification codes
070302; 081704
Abstract
In this paper, we propose a novel and efficient framework for 3D action recognition using a deep learning architecture. First, we develop a 3D normalized pose space consisting only of 3D normalized poses, generated by discarding translation and orientation information. From these poses, we extract joint features and feed them into a Deep Neural Network (DNN) to learn the action model. The architecture of our DNN consists of two hidden layers with the sigmoid activation function and an output layer with the softmax function. Furthermore, we propose a keyframe extraction methodology that, from a motion sequence of 3D frames, efficiently extracts the keyframes contributing substantially to the performance of the action. In this way, we eliminate redundant frames and reduce the length of the motion; in effect, we summarize the motion sequence while preserving the original motion semantics. Only the remaining essential, informative frames are considered in the process of action recognition, which makes the proposed pipeline fast and robust. Finally, we evaluate our framework intensively on publicly available benchmark Motion Capture (MoCap) datasets, namely HDM05 and CMU. Our experiments show that the proposed scheme significantly outperforms other state-of-the-art approaches.
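The abstract pins down the classifier topology: joint features from normalized 3D poses feed a network with two sigmoid hidden layers and a softmax output. The forward pass below is a minimal NumPy sketch of that topology only; the input size (93 features, assuming 31 joints × 3 coordinates), the hidden-layer widths, the class count, and the random weights are all illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Elementwise logistic activation, as used in the two hidden layers.
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Numerically stable softmax over the class dimension (output layer).
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical dimensions: 31 joints * 3 coordinates = 93 input features,
# two hidden layers, and one output unit per action class.
N_IN, H1, H2, N_CLASSES = 93, 128, 64, 10

# Randomly initialised weights stand in for a trained model.
W1, b1 = rng.normal(0.0, 0.1, (N_IN, H1)), np.zeros(H1)
W2, b2 = rng.normal(0.0, 0.1, (H1, H2)), np.zeros(H2)
W3, b3 = rng.normal(0.0, 0.1, (H2, N_CLASSES)), np.zeros(N_CLASSES)

def forward(x):
    """Forward pass: two sigmoid hidden layers, then a softmax output."""
    h1 = sigmoid(x @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    return softmax(h2 @ W3 + b3)

# One feature vector for one (hypothetical) normalized pose.
probs = forward(rng.normal(size=(1, N_IN)))
```

With trained weights, `probs.argmax(axis=-1)` would give the predicted action class; here the weights are random, so only the shape and the probability-distribution property of the output are meaningful.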
Pages: 24
Related Papers (50 total)
  • [21] F-E3D: FPGA-based Acceleration of an Efficient 3D Convolutional Neural Network for Human Action Recognition
    Fan, Hongxiang
    Luo, Cheng
    Zeng, Chenglong
    Ferianc, Martin
    Que, Zhiqiang
    Liu, Shuanglong
    Niu, Xinyu
    Luk, Wayne
    2019 IEEE 30TH INTERNATIONAL CONFERENCE ON APPLICATION-SPECIFIC SYSTEMS, ARCHITECTURES AND PROCESSORS (ASAP 2019), 2019, : 1 - 8
  • [22] Efficient combination of classifiers for 3D action recognition
    Sedmidubsky, Jan
    Zezula, Pavel
    MULTIMEDIA SYSTEMS, 2021, 27 (05) : 941 - 952
  • [24] Action recognition with motion map 3D network
    Sun, Yuchao
    Wu, Xinxiao
    Yu, Wennan
    Yu, Feiwu
    NEUROCOMPUTING, 2018, 297 : 33 - 39
  • [25] Deep Learning-Based Action Recognition Using 3D Skeleton Joints Information
    Tasnim, Nusrat
    Islam, Md. Mahbubul
    Baek, Joong-Hwan
    INVENTIONS, 2020, 5 (03) : 1 - 15
  • [26] Efficient 3D LIDAR based loop closing using deep neural network
    Yin, Huan
    Ding, Xiaqing
    Tang, Li
    Wang, Yue
    Xiong, Rong
    2017 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (IEEE ROBIO 2017), 2017, : 481 - 486
  • [27] Skeleton-Based Square Grid for Human Action Recognition With 3D Convolutional Neural Network
    Ding, Wenwen
    Ding, Chongyang
    Li, Guang
    Liu, Kai
    IEEE ACCESS, 2021, 9 : 54078 - 54089
  • [28] Basketball technique action recognition using 3D convolutional neural networks
    Wang, Jingfei
    Zuo, Liang
    Martinez, Carlos Cordente
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [29] 3D ACTION RECOGNITION USING DATA VISUALIZATION AND CONVOLUTIONAL NEURAL NETWORKS
    Liu, Mengyuan
    Chen, Chen
    Liu, Hong
    2017 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2017, : 925 - 930
  • [30] 3D CONVOLUTIONAL NEURAL NETWORK WITH MULTI-MODEL FRAMEWORK FOR ACTION RECOGNITION
    Jing, Longlong
    Ye, Yuancheng
    Yang, Xiaodong
    Tian, Yingli
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 1837 - 1841