Deep Learning-Based Human Action Recognition with Key-Frames Sampling Using Ranking Methods

Cited by: 8
Authors
Tasnim, Nusrat [1 ]
Baek, Joong-Hwan [1 ]
Affiliations
[1] Korea Aerosp Univ, Sch Elect & Informat Engn, Goyang 10540, South Korea
Source
APPLIED SCIENCES-BASEL, 2022, Vol. 12, Issue 9
Keywords
human-machine or object interaction; human action recognition; deep learning; key frames sampling; ranking method;
DOI
10.3390/app12094165
CLC Classification
O6 [Chemistry]
Discipline Code
0703
Abstract
Nowadays, the demand for human-machine or human-object interaction is growing tremendously owing to its diverse applications. Rapid advances in modern technology have encouraged researchers to adopt deep learning models in computer vision and image processing, particularly for human action recognition. Many methods have been developed to recognize human activity, but they remain limited in effectiveness, efficiency, and the data modalities they use. Only a few methods have used depth sequences; these introduce encoding techniques that represent an action sequence in a spatial format called a dynamic image and then apply a 2D convolutional neural network (CNN) or traditional machine learning algorithms for action recognition, so they depend entirely on the quality of the spatial representation. In this article, we propose a novel ranking-based approach that selects key frames and feeds the raw depth sequence, rather than a dynamic image, to a 3D-CNN model for action classification. We investigate recognition results at various sampling levels to show the competence and robustness of the proposed system, and we examine its generality on three benchmark human action datasets: DHA (depth-included human action), MSR-Action3D (Microsoft Action 3D), and UTD-MHAD (University of Texas at Dallas Multimodal Human Action Dataset). The proposed method achieves better performance than state-of-the-art techniques that use depth sequences.
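The abstract describes a two-stage pipeline: rank the frames of a raw depth sequence, keep the top-scoring key frames, and classify the sampled clip with a 3D-CNN. Below is a minimal PyTorch sketch of that idea. Since this record does not state the paper's actual ranking criterion or network architecture, the motion-energy score and the Tiny3DCNN model are assumed placeholders for illustration, not the authors' method.

    import numpy as np
    import torch
    import torch.nn as nn

    def rank_key_frames(depth_seq, k):
        """Rank frames of a depth sequence (T, H, W) and keep the top k.

        The score used here is inter-frame motion energy (sum of absolute
        differences), an assumed placeholder: the record above does not
        say which ranking method the paper actually employs.
        """
        diffs = np.abs(np.diff(depth_seq.astype(np.float32), axis=0))
        scores = np.concatenate([[0.0], diffs.reshape(len(diffs), -1).sum(axis=1)])
        keep = np.sort(np.argsort(scores)[-k:])   # restore temporal order
        return depth_seq[keep]

    class Tiny3DCNN(nn.Module):
        """Toy 3D-CNN classifier standing in for the paper's unspecified model."""
        def __init__(self, num_classes):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, x):                     # x: (batch, 1, T, H, W)
            return self.classifier(self.features(x).flatten(1))

    # Usage: sample 16 key frames from a synthetic 60-frame depth clip.
    clip = np.random.rand(60, 64, 64)             # stand-in for a real depth clip
    key_frames = rank_key_frames(clip, k=16)      # -> (16, 64, 64)
    x = torch.from_numpy(key_frames).float()[None, None]  # (1, 1, 16, 64, 64)
    logits = Tiny3DCNN(num_classes=27)(x)         # UTD-MHAD has 27 action classes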
Pages: 18
Related Papers
50 records in total
  • [31] Skeleton Motion History based Human Action Recognition Using Deep Learning
    Phyo, Cho Nilar
    Zin, Thi Thi
    Tin, Pyke
    2017 IEEE 6TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE), 2017,
  • [32] Are Microcontrollers Ready for Deep Learning-Based Human Activity Recognition?
    Elsts, Atis
    McConville, Ryan
    ELECTRONICS, 2021, 10 (21)
  • [33] A Survey of Deep Learning-Based Human Activity Recognition in Radar
    Li, Xinyu
    He, Yuan
    Jing, Xiaojun
    REMOTE SENSING, 2019, 11 (09)
  • [34] Deep Learning-Based Action Recognition Using 3D Skeleton Joints Information
    Tasnim, Nusrat
    Islam, Md. Mahbubul
    Baek, Joong-Hwan
    INVENTIONS, 2020, 5 (03) : 1 - 15
  • [35] A robust and secure key-frames based video watermarking system using chaotic encryption
    Himeur, Yassine
    Boukabou, Abdelkrim
    MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (07) : 8603 - 8627
  • [36] Action Detection and Recognition in Continuous Action Streams by Deep Learning-Based Sensing Fusion
    Dawar, Neha
    Kehtarnavaz, Nasser
    IEEE SENSORS JOURNAL, 2018, 18 (23) : 9660 - 9668
  • [37] A comprehensive survey and deep learning-based approach for human recognition using ear biometric
    Kamboj, Aman
    Rani, Rajneesh
    Nigam, Aditya
    VISUAL COMPUTER, 2022, 38 (07) : 2383 - 2416
  • [39] Deep learning-based methods for natural hazard named entity recognition
    Sun, Junlin
    Liu, Yanrong
    Cui, Jing
    He, Handong
    SCIENTIFIC REPORTS, 2022, 12
  • [40] A Comprehensive Study on Deep Learning-Based Methods for Sign Language Recognition
    Adaloglou, Nikolas
    Chatzis, Theocharis
    Papastratis, Ilias
    Stergioulas, Andreas
    Papadopoulos, Georgios Th.
    Zacharopoulou, Vassia
    Xydopoulos, George J.
    Atzakas, Klimnis
    Papazachariou, Dimitris
    Daras, Petros
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 1750 - 1762