Deep Learning-Based Human Action Recognition with Key-Frames Sampling Using Ranking Methods

Cited by: 8
Authors
Tasnim, Nusrat [1 ]
Baek, Joong-Hwan [1 ]
Affiliations
[1] Korea Aerosp Univ, Sch Elect & Informat Engn, Goyang 10540, South Korea
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 09
Keywords
human-machine or object interaction; human action recognition; deep learning; key frames sampling; ranking method;
DOI
10.3390/app12094165
CLC Classification Number
O6 [Chemistry];
Discipline Classification Code
0703;
Abstract
Nowadays, the demand for human-machine or human-object interaction is growing tremendously owing to its diverse applications. The massive advancement in modern technology has greatly influenced researchers to adopt deep learning models in the fields of computer vision and image processing, particularly human action recognition. Many methods have been developed to recognize human activity, but they remain limited in effectiveness, efficiency, and the data modalities they use. Only a few methods have used depth sequences; these introduce different encoding techniques to represent an action sequence as a spatial format called a dynamic image, and then apply a 2D convolutional neural network (CNN) or a traditional machine learning algorithm for action recognition. Such methods depend entirely on the effectiveness of the spatial representation. In this article, we propose a novel ranking-based approach to select key frames and adopt a 3D-CNN model for action classification. We use the raw sequence directly instead of generating a dynamic image. We investigate the recognition results at various sampling levels to show the competence and robustness of the proposed system. We also examine the generality of the proposed method on three benchmark human action datasets: DHA (depth-included human action), MSR-Action3D (Microsoft Action 3D), and UTD-MHAD (University of Texas at Dallas Multimodal Human Action Dataset). The proposed method achieves better performance than state-of-the-art techniques that use depth sequences.
Pages: 18
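The abstract only outlines the approach and does not specify the ranking criterion, the sampling levels, or the 3D-CNN configuration. The Python sketch below is therefore an illustrative interpretation rather than the authors' implementation: frame_energy() is a hypothetical motion-energy ranking score, sample_key_frames() keeps the top-k ranked frames in temporal order, and Simple3DCNN is a placeholder PyTorch classifier standing in for the paper's 3D-CNN.

# Illustrative sketch only: the ranking criterion and network below are
# hypothetical stand-ins, not the method described in this record.
import numpy as np
import torch
import torch.nn as nn

def frame_energy(depth_seq):
    # Assumed ranking score: per-frame motion energy, i.e. the summed
    # absolute depth change with respect to the previous frame.
    diffs = np.abs(np.diff(depth_seq.astype(np.float32), axis=0))
    scores = diffs.reshape(diffs.shape[0], -1).sum(axis=1)
    return np.concatenate([[0.0], scores])  # first frame gets score 0

def sample_key_frames(depth_seq, k=16):
    # Rank frames by score, keep the top-k, and restore temporal order,
    # so the raw (sub)sequence is used directly instead of a dynamic image.
    scores = frame_energy(depth_seq)
    top_idx = np.sort(np.argsort(scores)[-k:])
    return depth_seq[top_idx]

class Simple3DCNN(nn.Module):
    # Minimal 3D-CNN classifier over a single-channel depth clip (1, T, H, W).
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    seq = np.random.rand(60, 112, 112)               # fake depth sequence (T, H, W)
    clip = sample_key_frames(seq, k=16)              # (16, 112, 112) key frames
    x = torch.from_numpy(clip).float()[None, None]   # (N=1, C=1, T, H, W)
    logits = Simple3DCNN(num_classes=23)(x)          # 23 classes assumed (DHA)
    print(logits.shape)                              # torch.Size([1, 23])

Any published implementation may rank frames differently and use a deeper 3D-CNN; the sketch only shows how a ranked key-frame clip can be fed to a 3D-CNN without constructing a dynamic image.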