Evaluating a bag-of-visual features approach using spatio-temporal features for action recognition

Cited by: 32
Authors
Nazir, Saima [1 ]
Yousaf, Muhammad Haroon [1 ]
Velastin, Sergio A. [2 ,3 ]
Affiliations
[1] Univ Engn & Technol Taxila, Taxila, Pakistan
[2] Univ Carlos III Madrid, Getafe, Spain
[3] Queen Mary Univ London, London, England
Keywords
Human action recognition; Local spatio-temporal features; Bag-of-visual features; Hollywood-2 dataset
DOI
10.1016/j.compeleceng.2018.01.037
CLC number
TP3 [computing technology; computer technology]
Subject classification code
0812
Abstract
The detection of spatio-temporal interest points plays a key role in human action recognition algorithms. This work exploits the strengths of the bag-of-visual-features approach and presents a method for automatic action recognition in realistic and complex scenarios. The paper provides a richer feature representation by combining a well-known feature detector and descriptor: the 3D Harris space-time interest point detector and the 3D Scale-Invariant Feature Transform (SIFT) descriptor. Action videos are then represented as histograms of visual words, following the traditional bag-of-visual-features approach. For classification, a support vector machine (SVM) is trained and tested on these histograms. Extensive experiments demonstrate the effectiveness of the method on existing benchmark datasets, with state-of-the-art performance: 68.1% mean Average Precision (mAP) on Hollywood-2, and 94% and 91.8% average accuracy on the UCF Sports and KTH datasets, respectively. (C) 2018 Elsevier Ltd. All rights reserved.
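To make the pipeline concrete, here is a minimal Python sketch of a bag-of-visual-features workflow of the kind the abstract describes: local descriptors per video, a k-means visual vocabulary, histogram encoding, and an SVM. It is not the authors' implementation; `extract_3d_sift_descriptors` is a hypothetical placeholder (no common library ships the 3D Harris detector or 3D SIFT), and the vocabulary size, descriptor dimensionality, RBF kernel, and dummy data are all assumptions made for illustration.

```python
# Minimal bag-of-visual-features sketch, assuming per-video local descriptors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_3d_sift_descriptors(video):
    """Hypothetical placeholder: detect 3D Harris space-time interest points
    and describe each with a 3D SIFT descriptor (one row per point).
    Random rows here only so the sketch runs end to end."""
    return rng.standard_normal((40, 640))  # 640-D is a common 3D SIFT size

def video_histogram(descriptors, vocab):
    # Quantize descriptors to their nearest visual words; the normalized
    # word histogram is the video's bag-of-visual-features representation.
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Dummy "videos" and binary labels, only to exercise the pipeline.
train_videos = [object() for _ in range(20)]
train_labels = rng.integers(0, 2, size=20)

# 1) Extract local descriptors per training video.
train_desc = [extract_3d_sift_descriptors(v) for v in train_videos]
# 2) Build the visual vocabulary by k-means over pooled training descriptors
#    (k=50 here for speed; a real vocabulary would be far larger).
vocab = KMeans(n_clusters=50, n_init=10).fit(np.vstack(train_desc))
# 3) Encode each video as a histogram over visual words.
X_train = np.array([video_histogram(d, vocab) for d in train_desc])
# 4) Train an SVM classifier on the histograms.
clf = SVC(kernel="rbf").fit(X_train, train_labels)
print(clf.predict(X_train[:3]))
```

Note that the vocabulary is built from training descriptors only, so test videos are encoded against a fixed codebook before being passed to the trained SVM.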
Pages: 660-669 (10 pages)
Related papers (50 in total)
  • [31] Action recognition using Lie algebrized Gaussians over dense local spatio-temporal features
    Chen, Meng
    Gong, Liyu
    Wang, Tianjiang
    Feng, Qi
    MULTIMEDIA TOOLS AND APPLICATIONS, 2015, 74 (06) : 2127 - 2142
  • [33] Skeleton-based action recognition using spatio-temporal features with convolutional neural networks
    Rostami, Zahra
    Afrasiabi, Mahlagha
    Khotanlou, Hassan
    2017 IEEE 4TH INTERNATIONAL CONFERENCE ON KNOWLEDGE-BASED ENGINEERING AND INNOVATION (KBEI), 2017, : 583 - 587
  • [34] Spatio-Temporal Vector of Locally Max Pooled Features for Action Recognition in Videos
    Duta, Ionut Cosmin
    Ionescu, Bogdan
    Aizawa, Kiyoharu
    Sebe, Nicu
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3205 - 3214
  • [35] Human Action Recognition by Learning Spatio-Temporal Features With Deep Neural Networks
    Wang, Lei
    Xu, Yangyang
    Cheng, Jun
    Xia, Haiying
    Yin, Jianqin
    Wu, Jiaji
    IEEE ACCESS, 2018, 6 : 17913 - 17922
  • [36] Gait Recognition using Spatio-temporal Silhouette-based Features
    Sabir, Azhin
    Al-jawad, Naseer
    Jassim, Sabah
    MOBILE MULTIMEDIA/IMAGE PROCESSING, SECURITY, AND APPLICATIONS 2013, 2013, 8755
  • [37] Riemannian Spatio-Temporal Features of Locomotion for Individual Recognition
    Zhang, Jianhai
    Feng, Zhiyong
    Su, Yong
    Xing, Meng
    Xue, Wanli
    SENSORS, 2019, 19 (01)
  • [38] Deep spatio-temporal features for multimodal emotion recognition
    Nguyen, Dung
    Nguyen, Kien
    Sridharan, Sridha
    Ghasemi, Afsane
    Dean, David
    Fookes, Clinton
    2017 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2017), 2017, : 1215 - 1223
  • [39] Unusual Event Detection using Sparse Spatio-Temporal Features and Bag of Words Model
    Mandadi, Balakrishna
    Sethi, Amit
    2013 IEEE SECOND INTERNATIONAL CONFERENCE ON IMAGE INFORMATION PROCESSING (ICIIP), 2013, : 629 - 634
  • [40] A Novel Recognition and Classification Approach for Motor Imagery Based on Spatio-Temporal Features
    Lv, Renjie
    Chang, Wenwen
    Yan, Guanghui
    Nie, Wenchao
    Zheng, Lei
    Guo, Bin
    Sadiq, Muhammad Tariq
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2025, 29 (01) : 210 - 223