Human Action Recognition Based on Supervised Class-Specific Dictionary Learning with Deep Convolutional Neural Network Features

Cited by: 13
Authors
Gu, Binjie [1 ]
Xiong, Weili [1 ]
Bai, Zhonghu [2 ]
Affiliations
[1] Jiangnan Univ, Key Lab Adv Proc Control Light Ind, Minist Educ, Wuxi, Jiangsu, Peoples R China
[2] Jiangnan Univ, Natl Engn Lab Cereal Fermentat Technol, Wuxi, Jiangsu, Peoples R China
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2020, Vol. 63, No. 1
Funding
National Natural Science Foundation of China;
Keywords
Action recognition; deep CNN features; sparse model; supervised dictionary learning; DISCRIMINATIVE DICTIONARY; SPARSE REPRESENTATION;
DOI
10.32604/cmc.2020.06898
CLC Number
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
Human action recognition in complex environments is a challenging task. Recently, sparse representation has achieved excellent results on the human action recognition problem under a variety of conditions. The main idea of sparse representation classification is a general scheme in which the training samples of each class form a dictionary used to encode a query sample, and the class yielding the minimal reconstruction error determines the predicted label. However, learning a discriminative dictionary remains a difficult problem. This work makes two contributions. First, we build a new and robust human action recognition framework by combining a modified sparse classification model with deep convolutional neural network (CNN) features. Second, we construct a novel classification model that consists of a representation-constrained term and a coefficient incoherence term. Experimental results on benchmark datasets show that the modified model obtains competitive results in comparison to other state-of-the-art models.
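To make the classification rule described in the abstract concrete, below is a minimal sketch of sparse-representation-style classification with class-specific dictionaries. It is illustrative only, not the authors' implementation: it assumes feature vectors (e.g., deep CNN descriptors) have already been extracted, uses ridge-regularized coding as a stand-in for the sparse coding used in the paper, and the names src_predict and class_dicts are hypothetical.

```python
import numpy as np

def src_predict(y, class_dicts, lam=0.1):
    """Illustrative sparse-representation-style classifier.

    y           -- query feature vector (e.g., a deep CNN descriptor), shape (d,)
    class_dicts -- dict mapping class label -> dictionary matrix D_c of shape (d, k_c)
    lam         -- ridge penalty standing in for the sparsity constraint (assumption)

    Returns the label whose dictionary reconstructs y with minimal error.
    """
    best_label, best_err = None, np.inf
    for label, D in class_dicts.items():
        # Ridge-regularized coding: x = argmin_x ||y - D x||^2 + lam ||x||^2.
        # The actual model uses a sparse code; ridge keeps the sketch short.
        x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
        err = np.linalg.norm(y - D @ x)  # per-class reconstruction error
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```

In the paper's full model, the class-specific dictionaries are learned under the representation-constrained and coefficient incoherence terms rather than fixed to raw training features; the decision rule by minimal reconstruction error is the same.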
Pages: 243-262
Page count: 20