Human Action Recognition Based on Supervised Class-Specific Dictionary Learning with Deep Convolutional Neural Network Features

Times Cited: 13
Authors
Gu, Binjie [1 ]
Xiong, Weili [1 ]
Bai, Zhonghu [2 ]
Affiliations
[1] Jiangnan Univ, Key Lab Adv Proc Control Light Ind, Minist Educ, Wuxi, Jiangsu, Peoples R China
[2] Jiangnan Univ, Natl Engn Lab Cereal Fermentat Technol, Wuxi, Jiangsu, Peoples R China
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2020 / Vol. 63 / No. 01
Funding
National Natural Science Foundation of China;
Keywords
Action recognition; deep CNN features; sparse model; supervised dictionary learning; DISCRIMINATIVE DICTIONARY; SPARSE REPRESENTATION;
DOI
10.32604/cmc.2020.06898
CLC Number (Chinese Library Classification)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Human action recognition in complex environments is a challenging task. Recently, sparse representation has achieved excellent results in human action recognition under different conditions. The main idea of sparse representation classification is to build a general classification scheme in which the training samples of each class serve as a dictionary to represent a query sample, and the class yielding the minimal reconstruction error is taken as the predicted label. However, learning a discriminative dictionary remains difficult. This work makes two contributions. First, we build a new and robust human action recognition framework by combining a modified sparse classification model with deep convolutional neural network (CNN) features. Second, we construct a novel classification model that consists of a representation-constrained term and a coefficient-incoherence term. Experimental results on benchmark datasets show that the modified model obtains competitive results in comparison with other state-of-the-art models.
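As a rough illustration of the sparse-representation classification idea summarized in the abstract, the Python sketch below codes a query feature vector over each class-specific dictionary and assigns the label with the minimal reconstruction error. This is only a minimal sketch under stated assumptions: the function name src_predict, the use of orthogonal matching pursuit as the sparse coder, and the toy dictionaries are illustrative choices, not the authors' method; the paper's representation-constrained and coefficient-incoherence terms are not reproduced here.

    # Minimal sketch (not from the paper): sparse-representation classification
    # (SRC) with class-specific dictionaries over pre-extracted CNN features.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def src_predict(x, class_dicts, n_nonzero_coefs=5):
        """Return the label whose dictionary reconstructs x with minimal error.

        x           : (d,) query feature vector (e.g. a deep CNN feature).
        class_dicts : mapping label -> (d, n_atoms) dictionary whose columns
                      are l2-normalized training features of that class.
        """
        best_label, best_err = None, np.inf
        for label, D in class_dicts.items():
            omp = OrthogonalMatchingPursuit(
                n_nonzero_coefs=min(n_nonzero_coefs, D.shape[1]),
                fit_intercept=False,
            )
            omp.fit(D, x)                            # sparse code of x over this class
            err = np.linalg.norm(x - D @ omp.coef_)  # class-wise reconstruction error
            if err < best_err:
                best_label, best_err = label, err
        return best_label

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        d = 64
        dicts = {}
        for c in range(3):
            D = rng.normal(size=(d, 20)) + c         # toy, class-shifted atoms
            dicts[c] = D / np.linalg.norm(D, axis=0)
        query = dicts[1][:, 0] + 0.05 * rng.normal(size=d)
        print("predicted class:", src_predict(query, dicts))  # expected: 1

In this simplified scheme each class dictionary is just the stacked training features of that class; the paper instead learns the dictionaries in a supervised way so that coefficients of different classes are incoherent.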
Pages: 243-262
Number of Pages: 20
Related Papers
50 records in total
  • [31] AN EFFICIENT FACE CLASSIFICATION METHOD BASED ON SHARED AND CLASS-SPECIFIC DICTIONARY LEARNING
    Li, Wenjing
    Liang, Jiuzhen
    Wu, Qin
    Zhou, Yuxuan
    Xu, Xiuxiu
    Wang, Nianbing
    Zhou, Qi
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015, : 2596 - 2600
  • [32] Human action recognition based on recognition of linear patterns in action bank features using convolutional neural networks
    Ijjina, Earnest Paul
    Mohan, C. Krishna
    2014 13TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA), 2014, : 178 - 182
  • [33] Human action recognition based on convolutional neural network and spatial pyramid representation
    Xiao, Jihai
    Cui, Xiaohong
    Li, Feng
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2020, 71 (71)
  • [34] Deep convolutional neural network based plant species recognition through features of leaf
    Dhananjay Bisen
    Multimedia Tools and Applications, 2021, 80 : 6443 - 6456
  • [35] A Deep Learning Framework Using Convolutional Neural Network for Multi-class Object Recognition
    Hayat, Shaukat
    She Kun
    Zuo Tengtao
    Yue Yu
    Tu, Tianyi
    Du, Yantong
    2018 IEEE 3RD INTERNATIONAL CONFERENCE ON IMAGE, VISION AND COMPUTING (ICIVC), 2018, : 194 - 198
  • [36] Speech Emotion Recognition Based on Multiple Acoustic Features and Deep Convolutional Neural Network
    Bhangale, Kishor
    Kothandaraman, Mohanaprasad
    ELECTRONICS, 2023, 12 (04)
  • [37] Deep convolutional neural network based plant species recognition through features of leaf
    Bisen, Dhananjay
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (04) : 6443 - 6456
  • [38] CAPTCHA recognition based on deep convolutional neural network
    Wang, Jing
    Qin, Jiaohua
    Xiang, Xuyu
    Tan, Yun
    Pan, Nan
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2019, 16 (05) : 5851 - 5861
  • [39] Gesture Recognition based on Deep Convolutional Neural Network
    Jayanthi, P.
    Bhama, Ponsy R. K. Sathia
    2018 10TH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTING (ICOAC), 2018, : 367 - 372
  • [40] Human Activity Recognition Based On Video Summarization And Deep Convolutional Neural Network
    Kushwaha, Arati
    Khare, Manish
    Bommisetty, Reddy Mounika
    Khare, Ashish
    COMPUTER JOURNAL, 2024,