Human action recognition on depth dataset

Cited by: 24
|
Authors
Gao, Zan [1 ,2 ]
Zhang, Hua [1 ,2 ]
Liu, Anan A. [3 ]
Xu, Guangping [1 ,2 ]
Xue, Yanbing [1 ,2 ]
Affiliations
[1] Tianjin Univ Technol, Key Lab Comp Vis & Syst, Minist Educ, Tianjin 300384, Peoples R China
[2] Tianjin Univ Technol, Tianjin Key Lab Intelligence Comp & Novel Softwar, Tianjin 300384, Peoples R China
[3] Tianjin Univ, Sch Elect Informat Engn, Tianjin 300072, Peoples R China
Source
NEURAL COMPUTING & APPLICATIONS | 2016, Vol. 27, No. 7
Funding
National Natural Science Foundation of China;
Keywords
Human action recognition; Depth image; Multi-feature; Feature mapping; MMDLM;
DOI
10.1007/s00521-015-2002-0
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human action recognition is a hot research topic; however, changes in shape, high variability in appearance, dynamic backgrounds, potential occlusions across different actions, and the imaging limitations of 2D sensors make it difficult. To address these problems, we focus on the depth channel and on the fusion of different features. We first extract different features from the depth image sequence, and then propose a multi-feature mapping and dictionary learning model (MMDLM) to deeply explore the relationship between these features, in which two dictionaries and a feature mapping function are learned simultaneously. Moreover, the dictionaries fully characterize the structural information of the different features, while the feature mapping function serves as a regularization term that reveals the intrinsic relationship between the two features. Large-scale experiments on two public depth datasets, MSRAction3D and DHA, show that the performances of the different depth features differ substantially but are complementary. Furthermore, feature fusion by MMDLM is efficient and effective on both datasets, and is comparable to state-of-the-art methods.
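The joint learning described in the abstract (two dictionaries plus a feature mapping function learned simultaneously) can be caricatured as alternating minimization over dictionaries, codes, and a mapping matrix. The sketch below is NOT the authors' MMDLM: it replaces sparse coding with closed-form ridge-regularized codes, assumes the feature mapping function is a plain linear map M, and runs on synthetic data; all variable names are hypothetical.

```python
import numpy as np

# Minimal sketch: two feature matrices X1, X2 for the same clips, two
# dictionaries D1, D2, and a linear code-mapping M, optimized by
# alternating closed-form (ridge) updates on the objective
#   ||X1 - D1 A1||^2 + ||X2 - D2 A2||^2 + lam ||A2 - M A1||^2.
rng = np.random.default_rng(0)
n, d1, d2, k = 200, 32, 48, 16          # samples, feature dims, dictionary size
X1 = rng.standard_normal((d1, n))        # synthetic stand-in for depth feature 1
X2 = rng.standard_normal((d2, n))        # synthetic stand-in for depth feature 2

lam, mu, eps = 0.5, 0.1, 1e-6            # mapping weight, code ridge, matrix floor
D1 = rng.standard_normal((d1, k))
D2 = rng.standard_normal((d2, k))
M = np.eye(k)                            # "feature mapping function" (linear here)
A2 = np.zeros((k, n))                    # codes for feature 2, updated in the loop

def objective(A1, A2):
    """Reconstruction errors plus the code-mapping regularizer."""
    return (np.linalg.norm(X1 - D1 @ A1) ** 2
            + np.linalg.norm(X2 - D2 @ A2) ** 2
            + lam * np.linalg.norm(A2 - M @ A1) ** 2)

I = np.eye(k)
losses = []
for _ in range(20):
    # Codes: exact ridge solutions with everything else held fixed.
    A1 = np.linalg.solve(D1.T @ D1 + lam * M.T @ M + mu * I,
                         D1.T @ X1 + lam * M.T @ A2)
    A2 = np.linalg.solve(D2.T @ D2 + (lam + mu) * I,
                         D2.T @ X2 + lam * M @ A1)
    # Dictionaries: least-squares fit to the current codes.
    D1 = X1 @ A1.T @ np.linalg.inv(A1 @ A1.T + eps * I)
    D2 = X2 @ A2.T @ np.linalg.inv(A2 @ A2.T + eps * I)
    # Mapping: regress the codes of feature 2 onto those of feature 1.
    M = A2 @ A1.T @ np.linalg.inv(A1 @ A1.T + eps * I)
    losses.append(objective(A1, A2))

print(f"objective: {losses[0]:.1f} -> {losses[-1]:.1f}")
```

Since every step is an exact coordinate minimization of the (ridge-augmented) objective, the loss drops steeply from the random initialization; the actual paper additionally enforces sparsity on the codes, which this sketch omits.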
Pages: 2047-2054
Page count: 8
Related Articles
50 in total
  • [1] Human action recognition on depth dataset
    Zan Gao
    Hua Zhang
    Anan A. Liu
    Guangping Xu
    Yanbing Xue
    Neural Computing and Applications, 2016, 27 : 2047 - 2054
  • [2] A Multiviewpoint Outdoor Dataset for Human Action Recognition
    Perera, Asanka G.
    Law, Yee Wei
    Ogunwa, Titilayo T.
    Chahl, Javaan
    IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, 2020, 50 (05) : 405 - 413
  • [3] Human Action Recognition on Raw Depth Maps
    Trelinski, Jacek
    Kwolek, Bogdan
    2021 INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2021,
  • [4] UTD-MHAD: A MULTIMODAL DATASET FOR HUMAN ACTION RECOGNITION UTILIZING A DEPTH CAMERA AND A WEARABLE INERTIAL SENSOR
    Chen, Chen
    Jafari, Roozbeh
    Kehtarnavaz, Nasser
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015, : 168 - 172
  • [5] Aeriform in-action: A novel dataset for human action recognition in aerial videos
    Kapoor, Surbhi
    Sharma, Akashdeep
    Verma, Amandeep
    Singh, Sarbjeet
    PATTERN RECOGNITION, 2023, 140
  • [6] Spatio-Temporal Action Localization for Human Action Recognition in Large Dataset
    Megrhi, Sameh
    Jmal, Marwa
    Beghdadi, Azeddine
    Mseddi, Wided
    VIDEO SURVEILLANCE AND TRANSPORTATION IMAGING APPLICATIONS 2015, 2015, 9407
  • [7] Benchmarking a Multimodal and Multiview and Interactive Dataset for Human Action Recognition
    Liu, An-An
    Xu, Ning
    Nie, Wei-Zhi
    Su, Yu-Ting
    Wong, Yongkang
    Kankanhalli, Mohan
    IEEE TRANSACTIONS ON CYBERNETICS, 2017, 47 (07) : 1781 - 1794
  • [8] The Johns Hopkins University Multimodal Dataset for Human Action Recognition
    Murray, Thomas S.
    Mendat, Daniel R.
    Pouliquen, Philippe O.
    Andreou, Andreas G.
    RADAR SENSOR TECHNOLOGY XIX; AND ACTIVE AND PASSIVE SIGNATURES VI, 2015, 9461
  • [9] A large-scale fMRI dataset for human action recognition
    Zhou, Ming
    Gong, Zhengxin
    Dai, Yuxuan
    Wen, Yushan
    Liu, Youyi
    Zhen, Zonglei
    SCIENTIFIC DATA, 2023, 10 (01)