Human action recognition on depth dataset

Cited by: 24
Authors
Gao, Zan [1 ,2 ]
Zhang, Hua [1 ,2 ]
Liu, An-An [3]
Xu, Guangping [1 ,2 ]
Xue, Yanbing [1 ,2 ]
Affiliations
[1] Tianjin Univ Technol, Key Lab Comp Vis & Syst, Minist Educ, Tianjin 300384, Peoples R China
[2] Tianjin Univ Technol, Tianjin Key Lab Intelligence Comp & Novel Softwar, Tianjin 300384, Peoples R China
[3] Tianjin Univ, Sch Elect Informat Engn, Tianjin 300072, Peoples R China
Source
NEURAL COMPUTING & APPLICATIONS, 2016, Vol. 27, Issue 7
Funding
National Natural Science Foundation of China
Keywords
Human action recognition; Depth image; Multi-feature; Feature mapping; MMDLM;
DOI
10.1007/s00521-015-2002-0
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Human action recognition is an active research topic; however, shape changes, high appearance variability, dynamic backgrounds, potential occlusions across different actions, and the limitations of 2D sensors make it challenging. To address these problems, we focus on the depth channel and on the fusion of different features. We first extract different features from the depth image sequence, and then propose a multi-feature mapping and dictionary learning model (MMDLM) to deeply explore the relationship between these features, in which two dictionaries and a feature mapping function are learned simultaneously. The dictionaries fully characterize the structural information of the different features, while the feature mapping function acts as a regularization term that reveals the intrinsic relationship between the two feature types. Large-scale experiments on two public depth datasets, MSRAction3D and DHA, show that the performances of the individual depth features differ considerably but are complementary. Furthermore, feature fusion by MMDLM is efficient and effective on both datasets, achieving results comparable to state-of-the-art methods.
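The abstract does not give the MMDLM objective explicitly, but a coupled dictionary-learning model of the kind it describes can be sketched as follows. All symbols here (feature matrices Y_1, Y_2, dictionaries D_1, D_2, sparse codes X_1, X_2, linear mapping W, and the weights lambda_i, beta, gamma) are assumptions reconstructed from the description above, not the paper's exact formulation:

    % Hypothetical MMDLM-style objective (a sketch, not the paper's exact model):
    % Y_i are the two depth-feature matrices, D_i the learned dictionaries,
    % X_i the sparse codes, and W the feature mapping function.
    \min_{D_1, D_2, X_1, X_2, W}
        \sum_{i=1}^{2} \Big( \| Y_i - D_i X_i \|_F^2 + \lambda_i \| X_i \|_1 \Big)
        + \beta \, \| X_2 - W X_1 \|_F^2
        + \gamma \, \| W \|_F^2

The first two terms are standard sparse dictionary learning, one per feature type; the beta term plays the role of the feature-mapping regularizer described above, requiring the codes of one feature to be linearly predictable from the other's. Objectives of this form are typically solved by alternating minimization: sparse coding for each X_i, a dictionary update for each D_i, and a ridge-regression step for W.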
Pages: 2047-2054
Page count: 8
Related papers (50 records in total)
  • [21] MetaVD: A Meta Video Dataset for enhancing human action recognition datasets
    Yoshikawa, Yuya
    Shigeto, Yutaro
    Takeuchi, Akikazu
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2021, 212
  • [22] Human Action Recognition Using Associated Depth and Skeleton Information
    Li, Keyu
    Liu, Zhigang
    Liang, Liqin
    Song, Yanan
    2016 2ND IEEE INTERNATIONAL CONFERENCE ON COMPUTER AND COMMUNICATIONS (ICCC), 2016, : 418 - 422
  • [23] Human Action Recognition with Skeletal Information from Depth Camera
    Zhu, Hong-Min
    Pun, Chi-Man
    2013 IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION (ICIA), 2013, : 1082 - 1085
  • [24] Discovering latent attributes for human action recognition in depth sequence
    Su, Yuting
    Jia, Pingping
    Liu, An-An
    Yang, Zhaoxuan
    ELECTRONICS LETTERS, 2014, 50 (20) : 1436 - 1437
  • [25] Human Action Recognition Using Associated Depth and Skeleton Information
    Tang, Nick C.
    Lin, Yen-Yu
    Hua, Ju-Hsuan
    Weng, Ming-Fang
    Liao, Hong-Yuan Mark
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014
  • [26] The Study on Human Action Recognition with Depth Video for Intelligent Monitoring
    Liu, Xueping
    Li, Yibo
    Li, Youru
    Yu, Shi
    Tian, Can
    PROCEEDINGS OF THE 2019 31ST CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2019), 2019, : 5702 - 5706
  • [28] Human Action Recognition Using Fusion of Depth and Inertial Sensors
    Fuad, Zain
    Unel, Mustafa
    IMAGE ANALYSIS AND RECOGNITION (ICIAR 2018), 2018, 10882 : 373 - 380
  • [29] Human Action Recognition Using Multilevel Depth Motion Maps
    Xu, Weiyao
    Wu, Muqing
    Zhao, Min
    Liu, Yifeng
    Lv, Bo
    Xia, Ting
    IEEE ACCESS, 2019, 7 : 41811 - 41822
  • [30] A survey of depth and inertial sensor fusion for human action recognition
    Chen, Chen
    Jafari, Roozbeh
    Kehtarnavaz, Nasser
    MULTIMEDIA TOOLS AND APPLICATIONS, 2017, 76 (03) : 4405 - 4425