Feature and Decision Level Fusion for Action Recognition

Cited by: 0
Authors
Abouelenien, Mohamed [1 ]
Wan, Yiwen [1 ]
Saudagar, Abdullah [1 ]
Affiliations
[1] Univ North Texas, Denton, TX 76203 USA
Keywords
Action classification; Feature-level fusion; Decision-level fusion; Adaboost; Direct classification;
DOI
Not available
Chinese Library Classification
TP3 [Computing technology, computer technology]
Discipline Code
0812
Abstract
Classification of actions performed by human actors in video enables new technologies in diverse areas such as surveillance and content-based retrieval. We propose and evaluate two alternative models, one based on feature-level fusion and the other on decision-level fusion. Both models employ direct classification, inferring the nature of the action from low-level features. Interest points are assumed to have salient 3D (spatial plus temporal) gradients that distinguish them from their neighborhoods. They are identified using three distinct 3D interest-point detectors, and each detected interest-point set is represented as a bag-of-words descriptor. The feature-level fusion model concatenates these descriptors, and the result is used as input to a classifier. The decision-level fusion model uses an ensemble with a majority-voting scheme. Public data sets consisting of hundreds of action videos were used in testing. Within the test videos, multiple actors performed various actions, including walking, running, jogging, handclapping, boxing, and waving. Performance comparison showed very high classification accuracy for both models, with feature-level fusion having an edge. For feature-level fusion, the novelty is the fused histogram of visual words derived from the different sets of interest points detected by the different saliency detectors. For decision-level fusion, besides AdaBoost, a majority-voting scheme is applied to ensembles of classifiers based on support vector machines, k-nearest neighbors, and decision trees. The main contribution, however, is the comparison between the two models and, drilling down, between different base classifiers and different interest-point detectors for human motion recognition.
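The two fusion strategies described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes each detector yields a fixed-length bag-of-words histogram per clip, that feature-level fusion is the concatenation of L1-normalized histograms, and that decision-level fusion is a hard majority vote over per-classifier labels.

```python
from collections import Counter

def feature_level_fusion(histograms):
    """Concatenate per-detector bag-of-words histograms into one descriptor.

    Each histogram is L1-normalized before concatenation so that no single
    detector dominates the fused vector purely by interest-point count.
    """
    fused = []
    for h in histograms:
        total = sum(h) or 1  # guard against an empty histogram
        fused.extend(v / total for v in h)
    return fused

def decision_level_fusion(predictions):
    """Return the majority-vote label from an ensemble's predictions."""
    return Counter(predictions).most_common(1)[0][0]

# Toy usage: three detectors, each producing a 4-bin histogram for one clip.
hists = [[2, 0, 1, 1], [0, 3, 0, 1], [1, 1, 1, 1]]
fused = feature_level_fusion(hists)  # 12-dimensional fused descriptor
label = decision_level_fusion(["walk", "run", "walk"])  # majority label
```

In the paper's setting the fused descriptor would feed a single classifier (the feature-level model), while the vote would aggregate the outputs of SVM, k-NN, and decision-tree base classifiers (the decision-level model).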
Pages: 7