Human action recognition using hull convexity defect features with multi-modality setups

Cited by: 5
Authors
Youssef, M. M. [1 ]
Asari, V. K. [1 ]
Affiliations
[1] Univ Dayton, Dayton, OH 45469 USA
Keywords
Human action recognition; Biometrics; Convex hulls; Computer vision; Neural networks; Pattern recognition;
DOI
10.1016/j.patrec.2013.01.019
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
We consider developing a taxonomic, shape-driven algorithm to solve the problem of human action recognition and develop a new feature extraction technique using hull convexity defects. To test and validate this approach, we use silhouettes of subjects performing ten actions from a video database commonly used by action recognition researchers. A morphological algorithm is used to filter noise from the silhouette. A convex hull is then created around the silhouette in each frame, and its convexity defects are used as the features for analysis. A complete feature consists of thirty individual values that represent the five largest convex hull defect areas. A consecutive sequence of these features forms a complete action. Action frame sequences are preprocessed to separate the data into two sets based on perspective planes and bilateral symmetry. Features are then normalized to create a final set of action sequences. We then formulate and investigate three methods to classify the ten actions from the database. Testing and training on the nine test subjects are performed using a leave-one-out methodology. Classification utilizes both PCA and minimally encoded neural networks. Performance evaluation results show that the Hull Convexity Defect Algorithm provides comparable results with lower computational complexity. This research can lead to a real-time application that can be extended to distinguish more complex actions and multi-person interactions. (C) 2013 Elsevier B.V. All rights reserved.
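To make the per-frame feature extraction concrete, below is a minimal sketch of the general pipeline the abstract describes (morphological noise filtering, convex hull, convexity defects), assuming a binary silhouette image and the OpenCV functions cv2.morphologyEx, cv2.convexHull, and cv2.convexityDefects. The six-values-per-defect encoding shown here (start, end, and farthest contour points of the five deepest defects) is an illustrative assumption and is not the authors' exact 30-value feature.

```python
import cv2
import numpy as np

def hull_defect_features(silhouette, num_defects=5):
    """Sketch: convexity-defect descriptors from a binary (0/255) silhouette frame."""
    # Morphological opening to filter small noise blobs from the silhouette
    kernel = np.ones((5, 5), np.uint8)
    clean = cv2.morphologyEx(silhouette, cv2.MORPH_OPEN, kernel)

    # Largest external contour is taken as the subject outline (OpenCV 4.x API)
    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros(num_defects * 6, dtype=np.float32)
    contour = max(contours, key=cv2.contourArea)

    # Convex hull as point indices, then the convexity defects of the contour
    hull = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull)
    if defects is None or len(defects) == 0:
        return np.zeros(num_defects * 6, dtype=np.float32)

    # Each defect row is [start_idx, end_idx, farthest_idx, fixed-point depth];
    # keep the deepest ones and describe each by three (x, y) contour points.
    defects = defects[:, 0, :]
    order = np.argsort(defects[:, 3])[::-1][:num_defects]
    feats = []
    for s, e, f, _depth in defects[order]:
        for idx in (s, e, f):
            x, y = contour[idx, 0]          # contour points are stored as (N, 1, 2)
            feats.extend([float(x), float(y)])
    feats = np.asarray(feats, dtype=np.float32)
    # Pad with zeros if fewer than num_defects defects were found
    return np.pad(feats, (0, num_defects * 6 - feats.size))
```

A per-frame vector produced this way could then be stacked over a frame sequence, normalized, and passed to PCA and a neural-network classifier, in line with the processing chain described above.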
Pages: 1971-1979
Page count: 9
Related papers
50 records in total
  • [1] Multi-modality learning for human action recognition. Ziliang Ren, Qieshi Zhang, Xiangyang Gao, Pengyi Hao, Jun Cheng. Multimedia Tools and Applications, 2021, 80: 16185-16203
  • [2] Multi-modality learning for human action recognition. Ren, Ziliang; Zhang, Qieshi; Gao, Xiangyang; Hao, Pengyi; Cheng, Jun. Multimedia Tools and Applications, 2021, 80 (11): 16185-16203
  • [3] Human Action Recognition Via Multi-modality Information. Gao, Zan; Song, Jian-ming; Zhang, Hua; Liu, An-An; Xue, Yan-Bing; Xu, Guang-ping. Journal of Electrical Engineering & Technology, 2014, 9 (02): 739-748
  • [4] Multi-modality Fusion Network for Action Recognition. Huang, Kai; Qin, Zheng; Xu, Kaiping; Ye, Shuxiong; Wang, Guolong. Advances in Multimedia Information Processing - PCM 2017, Pt II, 2018, 10736: 139-149
  • [5] MMA: a multi-view and multi-modality benchmark dataset for human action recognition. Zan Gao, Tao-tao Han, Hua Zhang, Yan-bing Xue, Guang-ping Xu. Multimedia Tools and Applications, 2018, 77: 29383-29404
  • [6] MMA: a multi-view and multi-modality benchmark dataset for human action recognition. Gao, Zan; Han, Tao-tao; Zhang, Hua; Xue, Yan-bing; Xu, Guang-ping. Multimedia Tools and Applications, 2018, 77 (22): 29383-29404
  • [7] Focal Channel Knowledge Distillation for Multi-Modality Action Recognition. Gan, Lipeng; Cao, Runze; Li, Ning; Yang, Man; Li, Xiaochao. IEEE Access, 2023, 11: 78285-78298
  • [8] GCN-Based Multi-Modality Fusion Network for Action Recognition. Liu, Shaocan; Wang, Xingtao; Xiong, Ruiqin; Fan, Xiaopeng. IEEE Transactions on Multimedia, 2025, 27: 1242-1253
  • [9] Identifying features for multi-modality coding. Shah, D; Marshall, S. Sixth International Conference on Image Processing and Its Applications, Vol 1, 1997, (443): 73-76
  • [10] A Novel Two-Stream Transformer-Based Framework for Multi-Modality Human Action Recognition. Shi, Jing; Zhang, Yuanyuan; Wang, Weihang; Xing, Bin; Hu, Dasha; Chen, Liangyin. Applied Sciences-Basel, 2023, 13 (04)