Human Action Recognition in Videos: A comparative evaluation of the classical and velocity adaptation space-time interest points techniques

Cited by: 0
Authors
de Almeida, Ana Paula G. S. [1 ]
Espinoza, Bruno Luiggi M. [1 ]
Vidal, Flavio de Barros [1 ]
Affiliations
[1] Univ Brasilia, Brasilia, DF, Brazil
Keywords
human action recognition; support vector machine; space-time interest points; C-STIP; V-STIP; SCALE;
DOI
Not available
Chinese Library Classification (CLC) number
TP31 [Computer software]
Subject classification codes
081202; 0835
Abstract
Human action recognition is a widely studied topic, addressed over time with numerous techniques and methods to solve a fundamental problem in automatic video analysis. A traditional human action recognition system collects video frames of human activities, extracts the desired features from each human skeleton, and classifies them to distinguish human gestures. However, almost all of these approaches leave space-time information out of the recognition process. In this paper we present a novel use of an existing state-of-the-art space-time technique, the Space-Time Interest Point (STIP) detector and its velocity adaptation, in the human action recognition process. Using STIPs as descriptors and a Support Vector Machine classifier, we evaluate our methodology on four different public video datasets to validate it and demonstrate its accuracy in real scenarios.
Citation
Pages: 43-50
Number of pages: 8
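The abstract describes a pipeline that detects space-time interest points in a video volume and classifies the resulting descriptors with a support vector machine. The sketch below is a minimal, illustrative take on the classical spatio-temporal Harris criterion underlying STIP detection; it is not the authors' implementation, assumes only NumPy/SciPy, and its parameter names and default values (sigma_s, tau, s, k) are illustrative choices rather than values from the paper.

```python
# Minimal sketch (not the authors' code) of the classical space-time Harris
# criterion behind STIP detection, for a grayscale video volume of shape
# (T, H, W). Parameter names and defaults are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter


def stip_response(volume, sigma_s=2.0, tau=1.5, s=2.0, k=0.005):
    """Space-time Harris response H = det(mu) - k * trace(mu)^3."""
    video = volume.astype(np.float64)
    # Gaussian scale-space smoothing: temporal scale tau, spatial scale sigma_s.
    L = gaussian_filter(video, sigma=(tau, sigma_s, sigma_s))
    # Spatio-temporal gradients, returned in (t, y, x) axis order.
    Lt, Ly, Lx = np.gradient(L)
    # Components of the 3x3 second-moment matrix mu, averaged over a Gaussian
    # integration window at s times the differentiation scales.
    w = (s * tau, s * sigma_s, s * sigma_s)
    mxx = gaussian_filter(Lx * Lx, w)
    myy = gaussian_filter(Ly * Ly, w)
    mtt = gaussian_filter(Lt * Lt, w)
    mxy = gaussian_filter(Lx * Ly, w)
    mxt = gaussian_filter(Lx * Lt, w)
    myt = gaussian_filter(Ly * Lt, w)
    # det(mu) of the symmetric 3x3 matrix, expanded and computed per voxel.
    det = (mxx * (myy * mtt - myt * myt)
           - mxy * (mxy * mtt - myt * mxt)
           + mxt * (mxy * myt - myy * mxt))
    trace = mxx + myy + mtt
    return det - k * trace ** 3


def detect_stips(volume, threshold=1e-4, window=5):
    """Return (t, y, x) coordinates of local maxima of the STIP response."""
    H = stip_response(volume)
    local_max = (H == maximum_filter(H, size=window)) & (H > threshold)
    return np.argwhere(local_max)
```

In a pipeline like the one the abstract outlines, local descriptors computed around the detected points would then be fed to a standard SVM classifier (for instance sklearn.svm.SVC) to assign action labels; the velocity-adapted variant additionally adapts the local filters to an estimated image velocity before evaluating the response.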