Human action recognition on depth dataset

Cited by: 24
Authors
Gao, Zan [1 ,2 ]
Zhang, Hua [1 ,2 ]
Liu, An-An [3]
Xu, Guangping [1 ,2 ]
Xue, Yanbing [1 ,2 ]
Affiliations
[1] Tianjin Univ Technol, Key Lab Comp Vis & Syst, Minist Educ, Tianjin 300384, Peoples R China
[2] Tianjin Univ Technol, Tianjin Key Lab Intelligence Comp & Novel Softwar, Tianjin 300384, Peoples R China
[3] Tianjin Univ, Sch Elect Informat Engn, Tianjin 300072, Peoples R China
Source
NEURAL COMPUTING & APPLICATIONS | 2016, Vol. 27, Issue 7
Funding
National Natural Science Foundation of China;
Keywords
Human action recognition; Depth image; Multi-feature; Feature mapping; MMDLM;
DOI
10.1007/s00521-015-2002-0
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Human action recognition is an active research topic; however, changes in shape, high variability in appearance, dynamic backgrounds, potential occlusions across different actions, and the imaging limitations of 2D sensors make it difficult. To address these problems, we focus on the depth channel and on the fusion of different features. We first extract different features from the depth image sequence, and then propose a multi-feature mapping and dictionary learning model (MMDLM) to deeply explore the relationship between these features, in which two dictionaries and a feature mapping function are learned simultaneously. The dictionaries fully characterize the structural information of the different features, while the feature mapping function serves as a regularization term that reveals the intrinsic relationship between the two feature spaces. Large-scale experiments on two public depth datasets, MSRAction3D and DHA, show that the individual depth features perform quite differently but are complementary, and that feature fusion by MMDLM is efficient and effective on both datasets, achieving performance comparable to state-of-the-art methods.
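A plausible form of the MMDLM objective, sketched here only from the abstract (the symbols Y_1, Y_2, D_1, D_2, X_1, X_2, W and the trade-off weights lambda, gamma, beta are assumed notation, not necessarily the authors' exact formulation):

\min_{D_1, D_2, X_1, X_2, W} \; \|Y_1 - D_1 X_1\|_F^2 + \|Y_2 - D_2 X_2\|_F^2 + \lambda \|X_2 - W X_1\|_F^2 + \gamma \|W\|_F^2 + \beta \left( \|X_1\|_1 + \|X_2\|_1 \right)

Here Y_1 and Y_2 denote the two depth feature sets, D_1 and D_2 their dictionaries, X_1 and X_2 the corresponding sparse codes, and W the feature mapping whose coupling term acts as the regularizer linking the two feature spaces, consistent with the description above.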
Pages: 2047-2054
Number of pages: 8
Related papers
50 records in total
  • [31] FUSION OF DEPTH, SKELETON, AND INERTIAL DATA FOR HUMAN ACTION RECOGNITION
    Chen, Chen
    Jafari, Roozbeh
Kehtarnavaz, Nasser
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 2712 - 2716
  • [32] A Deep Sequence Learning Framework for Action Recognition in Small-Scale Depth Video Dataset
    Bulbul, Mohammad Farhad
    Ullah, Amin
    Ali, Hazrat
    Kim, Daijin
    SENSORS, 2022, 22 (18)
  • [33] ReadingAct RGB-D action dataset and human action recognition from local features
    Chen, Lulu
    Wei, Hong
    Ferryman, James
    PATTERN RECOGNITION LETTERS, 2014, 50 : 159 - 169
  • [34] InHARD - Industrial Human Action Recognition Dataset in the Context of Industrial Collaborative Robotics
    Dallel, Mejdi
    Havard, Vincent
    Baudry, David
    Savatier, Xavier
    PROCEEDINGS OF THE 2020 IEEE INTERNATIONAL CONFERENCE ON HUMAN-MACHINE SYSTEMS (ICHMS), 2020, : 393 - 398
  • [35] Spatio-temporal action localization and detection for human recognition in big dataset
    Megrhi, Sameh
    Jmal, Marwa
    Souidene, Wided
    Beghdadi, Azeddine
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2016, 41 : 375 - 390
  • [36] Depth-based human action recognition using histogram of templates
    Younsi, Merzouk
    Yesli, Samir
    Diaf, Moussa
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (14) : 40415 - 40449
  • [37] Depth MHI Based Deep Learning Model for Human Action Recognition
    Gu, Ye
    Ye, Xiaofeng
    Sheng, Weihua
    2018 13TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2018, : 395 - 400
  • [38] Depth-based human action recognition using histogram of templates
    Merzouk Younsi
    Samir Yesli
    Moussa Diaf
    Multimedia Tools and Applications, 2024, 83 : 40415 - 40449
  • [39] Human Action Recognition using Meta Learning for RGB and Depth Information
    Amiri, S. Mohsen
    Pourazad, Mahsa T.
    Nasiopoulos, Panos
    Leung, Victor C. M.
    2014 INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKING AND COMMUNICATIONS (ICNC), 2014, : 363 - 367
  • [40] Human Action Recognition Based on Depth Images from Microsoft Kinect
    Liu, Tongyang
    Song, Yang
    Gu, Yu
    Li, Ao
    2013 FOURTH GLOBAL CONGRESS ON INTELLIGENT SYSTEMS (GCIS), 2013, : 200 - 204