Human Action Recognition in Videos Using Kinematic Features and Multiple Instance Learning

Cited: 277
Authors
Ali, Saad [1]
Shah, Mubarak [2]
Affiliations
[1] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
[2] Univ Cent Florida, Comp Vis Lab, Sch Elect Engn & Comp Sci, Harris Corp Engn Ctr, Orlando, FL 32816 USA
Keywords
Action recognition; motion; video analysis; principal component analysis; kinematic features
DOI
10.1109/TPAMI.2008.284
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose a set of kinematic features that are derived from the optical flow for human action recognition in videos. The set of kinematic features includes divergence, vorticity, symmetric and antisymmetric flow fields, second and third principal invariants of flow gradient and rate of strain tensor, and third principal invariant of rate of rotation tensor. Each kinematic feature, when computed from the optical flow of a sequence of images, gives rise to a spatiotemporal pattern. It is then assumed that the representative dynamics of the optical flow are captured by these spatiotemporal patterns in the form of dominant kinematic trends or kinematic modes. These kinematic modes are computed by performing Principal Component Analysis (PCA) on the spatiotemporal volumes of the kinematic features. For classification, we propose the use of multiple instance learning (MIL) in which each action video is represented by a bag of kinematic modes. Each video is then embedded into a kinematic-mode-based feature space and the coordinates of the video in that space are used for classification using the nearest neighbor algorithm. The qualitative and quantitative results are reported on the benchmark data sets.
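To make the pipeline described in the abstract concrete, below is a minimal sketch (not the authors' code) of two of the steps: deriving kinematic features such as divergence and vorticity from dense optical flow, and extracting dominant "kinematic modes" by PCA over the resulting spatiotemporal volume. It assumes the flow for a clip is available as a NumPy array of shape (T, H, W, 2) holding the (u, v) components per frame; the function names and the SVD-based PCA are illustrative choices, not taken from the paper.

```python
# Sketch only: two kinematic features (divergence, vorticity) from optical
# flow, followed by PCA over the spatiotemporal feature volume to obtain
# dominant kinematic modes. Array shapes and names are assumptions.
import numpy as np

def kinematic_volumes(flow):
    """Return divergence and vorticity volumes, each of shape (T, H, W)."""
    u, v = flow[..., 0], flow[..., 1]
    # Spatial derivatives per frame (axis 1 = rows/y, axis 2 = cols/x).
    du_dx = np.gradient(u, axis=2)
    du_dy = np.gradient(u, axis=1)
    dv_dx = np.gradient(v, axis=2)
    dv_dy = np.gradient(v, axis=1)
    divergence = du_dx + dv_dy   # local expansion/contraction of the flow
    vorticity = dv_dx - du_dy    # local rotation of the flow
    return divergence, vorticity

def kinematic_modes(volume, n_modes=5):
    """PCA on a (T, H, W) volume; rows are mean-centered, flattened frames."""
    T = volume.shape[0]
    X = volume.reshape(T, -1)
    X = X - X.mean(axis=0, keepdims=True)
    # SVD-based PCA: rows of Vt are the dominant spatial kinematic modes.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_modes]          # (n_modes, H*W)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flow = rng.standard_normal((30, 64, 64, 2)).astype(np.float32)  # dummy flow
    div, vort = kinematic_volumes(flow)
    modes = kinematic_modes(div, n_modes=3)
    print(modes.shape)  # (3, 4096)
```

In the paper's framework, the modes computed per kinematic feature would form the bag of instances representing a video, which then feeds the multiple instance learning and nearest neighbor classification stages described above.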
Pages: 288-303
Number of pages: 16
Related Papers
50 records in total
  • [31] Multiple Instance Learning with Correlated Features
    Huang, Yiheng
    Zhang, Wensheng
    Wang, Jue
    INTERNATIONAL JOURNAL OF FUZZY SYSTEMS, 2012, 14 (02) : 305 - 313
  • [32] Human Activity Recognition in Videos Using Deep Learning
    Kumar, Mohit
    Rana, Adarsh
    Ankita
    Yadav, Arun Kumar
    Yadav, Divakar
    SOFT COMPUTING AND ITS ENGINEERING APPLICATIONS, ICSOFTCOMP 2022, 2023, 1788 : 288 - 299
  • [33] Improving Human Action Recognition Using Hierarchical Features And Multiple Classifier Ensembles
    Bulbul, Mohammad Farhad
    Islam, Saiful
    Zhou, Yatong
    Ali, Hazrat
COMPUTER JOURNAL, 2021, 64 (11): 1633 - 1655
  • [34] The Progress of Human Action Recognition in Videos Based on Deep Learning: A Review
    Luo H.-L.
    Tong K.
    Kong F.-S.
Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2019, 47 (05): 1162 - 1173
  • [35] HUMAN ACTION RECOGNITION FRAMEWORK BY FUSING MULTIPLE FEATURES
    Xiao, Qian
    Cheng, Jun
    2013 IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION (ICIA), 2013, : 985 - 990
  • [36] Action Recognition Using Ensemble Weighted Multi-Instance Learning
    Chen, Guang
    Giuliani, Manuel
    Clarke, Daniel
    Gaschler, Andre
    Knoll, Alois
    2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2014, : 4520 - 4525
  • [37] Human action recognition in videos based on spatiotemporal features and bag-of-poses
    da Silva, Murilo Varges
    Marana, Aparecido Nilceu
    APPLIED SOFT COMPUTING, 2020, 95 (95)
  • [38] HUMAN ACTION RECOGNITION IN STEREOSCOPIC VIDEOS BASED ON BAG OF FEATURES AND DISPARITY PYRAMIDS
    Iosifidis, Alexandros
    Tefas, Anastasios
    Nikolaidis, Nikos
    Pitas, Ioannis
    2014 PROCEEDINGS OF THE 22ND EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2014, : 1317 - 1321
  • [39] Human Action Recognition using Skeleton features
    Patil, Akash Anil
    Swaminathan, A.
    Rajan, Ashoka R.
    Narayanan, Neela, V
    Gayathri, R.
    2022 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY ADJUNCT (ISMAR-ADJUNCT 2022), 2022, : 289 - 296
  • [40] Learning Kinematic Formulas from Multiple View Videos
    Song, Liangchen
    Liu, Sheng
    Liu, Celong
    Li, Zhong
    Ding, Yuqi
    Xu, Yi
    Yuan, Junsong
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 126 - 134