Human action recognition using key point detection and machine learning

Cited by: 0
Authors
Archana, M. [1 ]
Kavitha, S. [2 ]
Vathsala, A. Vani [3 ]
Affiliations
[1] SRM Inst Sci & Technol, Dept CSE, Kattankulathur, Tamil Nadu, India
[2] SRM Inst Sci & Technol, Dept CTECH, Kattankulathur, Tamil Nadu, India
[3] CVR Coll Engn, Dept CSE, Hyderabad, Telangana, India
Keywords
Computer Vision; Features; Human Action; Pose Estimation; MediaPipe; OpenCV; Body Language
DOI
10.1109/ICPCSN62568.2024.00070
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Detecting human activity remains a challenging problem in computer vision. Among the many approaches to this task, one popular technique is key point detection, which identifies the skeletal landmarks of a human body; these points can then be used to recognize or classify poses. Because implementing key point detection from scratch is difficult, several libraries exist that perform it. Google's MediaPipe is one such efficient library, offering functionality for hand landmarks, human body pose, pupil detection, face mesh identification, and background segmentation. The library has been trained on 30,000 samples, and the model can be run through OpenCV without the need for a separate deep learning architecture. The main objective is to use machine learning to classify poses from the points generated by the library. The proposed approach uses the Body Language Rule (BLR), which captures the limb angles that play a vital role in identifying a human action, forms a dataset from these angles, and applies machine learning to learn patterns from it. The process is fully automatic and adaptable to varied situations, making a broad variety of applications possible.
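
The abstract describes the pipeline only at a high level. As a minimal sketch of that flow (MediaPipe pose landmarks, joint angles as features, a generic classifier), the Python below is illustrative only: the joint triplets, the joint_angle and angle_features helpers, and the RandomForestClassifier are assumptions, not the paper's actual BLR rule set or model.

# Minimal sketch: MediaPipe pose landmarks -> limb angles -> ML classifier.
# The angle triplets and classifier choice are illustrative assumptions.
import cv2
import numpy as np
import mediapipe as mp
from sklearn.ensemble import RandomForestClassifier

mp_pose = mp.solutions.pose

def joint_angle(a, b, c):
    # Angle in degrees at point b, formed by the segments b->a and b->c.
    a, b, c = np.array(a), np.array(b), np.array(c)
    ba, bc = a - b, c - b
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def angle_features(image_bgr):
    # Detect pose landmarks in one BGR frame and return limb-angle features.
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    lm = results.pose_landmarks.landmark
    P = mp_pose.PoseLandmark
    pt = lambda i: (lm[i].x, lm[i].y)
    # Example angle triplets (elbows and knees); the paper's BLR set may differ.
    triplets = [
        (P.LEFT_SHOULDER, P.LEFT_ELBOW, P.LEFT_WRIST),
        (P.RIGHT_SHOULDER, P.RIGHT_ELBOW, P.RIGHT_WRIST),
        (P.LEFT_HIP, P.LEFT_KNEE, P.LEFT_ANKLE),
        (P.RIGHT_HIP, P.RIGHT_KNEE, P.RIGHT_ANKLE),
    ]
    return [joint_angle(pt(a), pt(b), pt(c)) for a, b, c in triplets]

# Training and prediction, given a feature matrix X and action labels y:
# clf = RandomForestClassifier().fit(X, y)
# action = clf.predict([angle_features(frame)])

Per-frame angle vectors produced this way can be accumulated into the angle dataset the abstract describes; any tabular classifier could stand in for the unspecified ML model.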
Pages: 410-413 (4 pages)