Deep learning-based multi-view 3D-human action recognition using skeleton and depth data

Cited by: 0
Authors
Sampat Kumar Ghosh
Rashmi M
Biju R Mohan
Ram Mohana Reddy Guddeti
Affiliations
[1] National Institute of Technology Karnataka, Department of Information Technology
Keywords
Convolutional neural networks; Deep learning; Feature fusion; Human action recognition; Score fusion;
DOI: Not available
Abstract
Human Action Recognition (HAR) is a fundamental challenge that smart surveillance systems must overcome. With the rising affordability of advanced depth cameras for capturing human actions, HAR has garnered increasing interest over the years; however, the majority of these efforts have focused on single-view HAR. Recognizing human actions from arbitrary viewpoints is more challenging, as the same action is observed differently from different angles. This paper proposes a multi-stream Convolutional Neural Network (CNN) model for multi-view HAR using depth and skeleton data. We also propose a novel and efficient depth descriptor, Edge Detected-Motion History Image (ED-MHI), based on Canny Edge Detection and the Motion History Image. In addition, the proposed skeleton descriptor, Motion and Orientation of Joints (MOJ), represents the action using joint motion and orientation. Experimental results on two human action datasets, NUCLA Multiview Action3D and NTU RGB-D, under the cross-subject evaluation protocol demonstrate that the proposed system outperforms state-of-the-art works, achieving 93.87% and 85.61% accuracy, respectively.
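A minimal illustrative sketch (not the authors' implementation) of how the two descriptors summarized in the abstract could be computed is given below, assuming OpenCV and NumPy; the function names edge_detected_mhi and motion_and_orientation_of_joints, the Canny thresholds, the decay constant tau, and the bone_pairs list are all placeholder assumptions.

import cv2
import numpy as np


def edge_detected_mhi(depth_frames, tau=15.0, canny_lo=50, canny_hi=150):
    """Sketch of an ED-MHI-style descriptor: Canny edges per depth frame,
    with frame-to-frame edge changes driving a Motion History Image update."""
    mhi = np.zeros(depth_frames[0].shape[:2], dtype=np.float32)
    prev_edges = None
    for frame in depth_frames:
        # Depth frames are rescaled to 8-bit before edge detection.
        frame_u8 = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        edges = cv2.Canny(frame_u8, canny_lo, canny_hi)
        if prev_edges is not None:
            moved = cv2.absdiff(edges, prev_edges) > 0
            # Recently moving edge pixels are set to tau; older motion decays by 1.
            mhi = np.where(moved, tau, np.maximum(mhi - 1.0, 0.0))
        prev_edges = edges
    # Rescale to an 8-bit image so it can feed one CNN stream.
    return cv2.normalize(mhi, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)


def motion_and_orientation_of_joints(joints, bone_pairs):
    """Sketch of a MOJ-style skeleton descriptor: per-frame joint displacements
    (motion) plus azimuth/elevation angles of bones (orientation)."""
    joints = np.asarray(joints, dtype=np.float32)           # shape (T, J, 3)
    motion = np.diff(joints, axis=0)                         # (T-1, J, 3) joint displacements
    parents = [p for p, _ in bone_pairs]
    children = [c for _, c in bone_pairs]
    bones = joints[:, children, :] - joints[:, parents, :]   # (T, B, 3) bone vectors
    azimuth = np.arctan2(bones[..., 1], bones[..., 0])
    elevation = np.arctan2(bones[..., 2], np.linalg.norm(bones[..., :2], axis=-1))
    orientation = np.stack([azimuth, elevation], axis=-1)    # (T, B, 2) bone angles
    return motion, orientation

In a multi-stream setup of this kind, the ED-MHI image would typically feed one CNN stream and the MOJ features another, with the streams combined by feature or score fusion, as the keywords indicate.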
Pages: 19829-19851
Number of pages: 22
Related Papers
50 articles in total
  • [31] Deep Learning-Based Human Action Recognition in Videos
    Li, Song
    Shi, Qian
    [J]. JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2024
  • [32] Deep learning-based multi-modal approach using RGB and skeleton sequences for human activity recognition
    Pratishtha Verma
    Animesh Sah
    Rajeev Srivastava
    [J]. Multimedia Systems, 2020, 26 : 671 - 685
  • [33] Adaptive multi-view graph convolutional networks for skeleton-based action recognition
    Liu, Xing
    Li, Yanshan
    Xia, Rongjie
    [J]. NEUROCOMPUTING, 2021, 444 : 288 - 300
  • [34] Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition
    Sun, Bin
    Kong, Dehui
    Wang, Shaofan
    Wang, Lichun
    Yin, Baocai
    [J]. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2021, 15 (02)
  • [35] A Joint Learning-Based Method for Multi-View Depth Map Super Resolution
    Li, Jing
    Lu, Zhichao
    Zeng, Gang
    Gan, Rui
    Wang, Long
    Zha, Hongbin
    [J]. 2013 SECOND IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION (ACPR 2013), 2013: 456 - 460
  • [36] Multi-View Human Action Recognition Using Wavelet Data Reduction and Multi-Class Classification
    Aryanfar, Alihossein
    Yaakob, Razali
    Halin, Alfian Abdul
    Sulaiman, Md Nasir
    Kasmiran, Khairul Azhar
    Mohammadpour, Leila
    [J]. PROCEEDINGS OF THE 2015 INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND SOFTWARE ENGINEERING (SCSE'15), 2015, 62 : 585 - 592
  • [37] Human Daily Action Analysis with Multi-view and Color-Depth Data
    Cheng, Zhongwei
    Qin, Lei
    Ye, Yituo
    Huang, Qingming
    Tian, Qi
    [J]. COMPUTER VISION - ECCV 2012, PT II, 2012, 7584 : 52 - 61
  • [38] Representation Learning with Depth and Breadth for Recommendation Using Multi-view Data
    Han, Xiaotian
    Shi, Chuan
    Zheng, Lei
    Yu, Philip S.
    Li, Jianxin
    Lu, Yuanfu
    [J]. WEB AND BIG DATA (APWEB-WAIM 2018), PT I, 2018, 10987 : 181 - 188
  • [39] Skeleton Based Human Action Recognition for Smart City Application Using Deep Learning
    Rashmi, M.
    Guddeti, Ram Mohana Reddy
    [J]. 2020 INTERNATIONAL CONFERENCE ON COMMUNICATION SYSTEMS & NETWORKS (COMSNETS), 2020
  • [40] Human action recognition using multi-view image sequences features
    Ahmad, Mohiuddin
    Lee, Seong-Whan
    [J]. PROCEEDINGS OF THE SEVENTH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, 2006: 523+