Robust action recognition using local motion and group sparsity

Cited by: 55
Authors
Cho, Jungchan [1]
Lee, Minsik [1]
Chang, Hyung Jin [2]
Oh, Songhwai [1]
Affiliations
[1] Seoul Natl Univ, ASRI, Dept Elect & Comp Engn, Seoul, South Korea
[2] Univ London Imperial Coll Sci Technol & Med, Dept Elect & Elect Engn, London SW7 2AZ, England
Funding
National Research Foundation of Singapore;
Keywords
Action recognition; Motion descriptor; Sparse representation; Dynamic scene understanding;
DOI
10.1016/j.patcog.2013.12.004
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recognizing actions in a video is a critical step for making many vision-based applications possible and has attracted much attention recently. However, action recognition in a video is a challenging task due to wide variations within an action, camera motion, cluttered background, and occlusions, to name a few. While dense sampling based approaches are currently achieving the state-of-the-art performance in action recognition, they do not perform well for many realistic video sequences since, by considering every motion found in a video equally, the discriminative power of these approaches is often reduced due to clutter motions, such as background changes and camera motions. In this paper, we robustly identify local motions of interest in an unsupervised manner by taking advantage of group sparsity. In order to robustly classify action types, we emphasize local motion by combining local motion descriptors and full motion descriptors and apply group sparsity to the emphasized motion features using the multiple kernel method. In experiments, we show that different types of actions can be well recognized using a small number of selected local motion descriptors and the proposed algorithm achieves the state-of-the-art performance on popular benchmark datasets, outperforming existing methods. We also demonstrate that the group sparse representation with the multiple kernel method can dramatically improve the action recognition performance. (C) 2013 Elsevier Ltd. All rights reserved.
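To make the abstract's group-sparsity step concrete, the sketch below is a minimal, self-contained group-lasso solver (proximal gradient with block soft-thresholding) run on synthetic data. It is not the authors' implementation and omits the motion descriptors and multiple kernel method; it only illustrates, under assumed names and an illustrative regularization weight lam, how a group-sparse penalty selects a few descriptor groups and zeroes out the rest.

import numpy as np

def group_soft_threshold(w, groups, thresh):
    # Block soft-thresholding: proximal operator of the group-lasso penalty.
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= thresh else (1.0 - thresh / norm) * w[g]
    return out

def group_lasso(X, y, groups, lam=0.1, n_iter=500):
    # Proximal gradient descent for  min_w 0.5*||y - Xw||^2 + lam * sum_g ||w_g||_2.
    n, d = X.shape
    w = np.zeros(d)
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = group_soft_threshold(w - step * grad, groups, lam * step)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 10 groups of 5 dimensions each; only groups 1 and 4 carry signal (hypothetical setup).
    groups = [list(range(i * 5, (i + 1) * 5)) for i in range(10)]
    w_true = np.zeros(50)
    w_true[groups[1]] = rng.normal(size=5)
    w_true[groups[4]] = rng.normal(size=5)
    X = rng.normal(size=(200, 50))
    y = X @ w_true + 0.01 * rng.normal(size=200)
    w_hat = group_lasso(X, y, groups, lam=1.0)
    active = [i for i, g in enumerate(groups) if np.linalg.norm(w_hat[g]) > 1e-6]
    print("selected groups:", active)  # expected to recover groups 1 and 4

In the paper's setting, the groups would correspond to candidate local motion descriptors rather than synthetic coordinates, and the selected groups would feed the combined local/full motion features used for classification.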
Pages: 1813-1825
Number of pages: 13
Related papers
50 records in total
  • [1] Robust Face Recognition with Individual and Group Sparsity Constraints
    Chen, Tianjiao
    Qu, Lei
    Wei, Sui
    FOUNDATIONS OF INTELLIGENT SYSTEMS (ISKE 2013), 2014, 277 : 105 - 114
  • [2] Human Action Recognition Using Adaptive Local Motion Descriptor in Spark
    Uddin, M. D. Azher
    Joolee, Joolekha Bibi
    Alam, Aftab
    Lee, Young-Koo
    IEEE ACCESS, 2017, 5 : 21157 - 21167
  • [3] LOCAL SALIENT MOTION ANALYSIS FOR ACTION RECOGNITION
    Lu Ping
    Jin Lizuo
    Sun Jian
    Li Yawei
    2015 34TH CHINESE CONTROL CONFERENCE (CCC), 2015, : 3742 - 3746
  • [4] Robust Face Recognition With Kernelized Locality-Sensitive Group Sparsity Representation
    Tan, Shoubiao
    Sun, Xi
    Chan, Wentao
    Qu, Lei
    Shao, Ling
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (10) : 4661 - 4668
  • [5] A Group Sparsity-Driven Approach to 3-D Action Recognition
    Cosar, Serhan
    Cetin, Mujdat
    2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCV WORKSHOPS), 2011,
  • [6] Group Sparsity and Geometry Constrained Dictionary Learning for Action Recognition from Depth Maps
    Luo, Jiajia
    Wang, Wei
    Qi, Hairong
    2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2013, : 1809 - 1816
  • [7] Human action recognition based on aggregated local motion estimates
    M. Lucena
    N. Pérez de la Blanca
    J. M. Fuertes
    Machine Vision and Applications, 2012, 23 : 135 - 150
  • [8] Human action recognition based on aggregated local motion estimates
    Lucena, M.
    Perez de la Blanca, N.
    Fuertes, J. M.
    MACHINE VISION AND APPLICATIONS, 2012, 23 (01) : 135 - 150
  • [9] Action Recognition Using Form and Motion Modalities
    Meng, Quanling
    Zhu, Heyan
    Zhang, Weigang
    Piao, Xuefeng
    Zhang, Aijie
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2020, 16 (01)
  • [10] Real time human action recognition from RGB clips using local motion histogram
    Srivastava, Awadhesh Kumar
    Biswas, K. K.
INTELLIGENT DECISION TECHNOLOGIES-NETHERLANDS, 2019, 13 (02) : 219 - 228