Human Activity Recognition Using Combined Deep Architectures

Cited by: 0
Authors
Tomas, Amsalu [1 ]
Biswas, K. K. [2 ]
Affiliations
[1] Wolaita Sodo Univ, Dept Comp Sci, Wolaita Sodo, Ethiopia
[2] Indian Inst Technol Delhi, Dept Comp Sci & Engn, New Delhi, India
Keywords
combined deep architecture; convolutional neural network; stacked auto-encoders; MHI; late fusion;
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human activity recognition has been an active area of research in computer vision and artificial intelligence over the last two decades. This work combines color information from RGB-D video with motion information from skeletal joints in order to capture subtle motions and address temporal feature extraction. The authors demonstrate this with a combined deep architecture: a Convolutional Neural Network (CNN) and Stacked Auto-encoders (SAE). In this model, the CNN learns motion representations from Motion History Images (MHIs) computed over sampled RGB frames, while the SAE learns discriminative movements of the human skeletal joints from the distances of the joints to the mean joint at each sampled frame. The proposed model learns low-level abstractions of joint motion sequences as well as how motion varies with image location in each MHI frame. The two networks are trained separately, and the softmax class posteriors are averaged across the sampled frames to obtain a score for the video clip. The class scores of each network are normalized to the [0, 1] range, and late fusion is performed by taking a weighted mean of the class scores, with weights based on the relative performance of the two networks. The proposed model is evaluated on the standard MSR Daily Activity3D and MSR Action3D action recognition benchmarks, where it improves recognition accuracy.
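The scoring and fusion pipeline described in the abstract is straightforward to sketch. The Python/NumPy snippet below is not from the paper; all names are hypothetical, and deriving the fusion weights from each network's validation accuracy is one plausible reading of "relative performances". It averages per-frame softmax posteriors into clip-level scores, min-max normalizes each network's scores to [0, 1], and fuses them with a weighted mean.

import numpy as np

def clip_scores(frame_posteriors):
    # Average per-frame softmax class posteriors over the sampled
    # frames to get one clip-level score vector.
    # frame_posteriors: shape (num_sampled_frames, num_classes)
    return frame_posteriors.mean(axis=0)

def minmax_normalize(scores):
    # Rescale a score vector to the [0, 1] range; the small epsilon
    # guards against a flat (constant) vector.
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-8)

def late_fusion(cnn_scores, sae_scores, cnn_acc, sae_acc):
    # Weighted mean of the normalized class scores; the weights here
    # come from each network's (assumed) validation accuracy.
    w_cnn = cnn_acc / (cnn_acc + sae_acc)
    w_sae = 1.0 - w_cnn
    fused = (w_cnn * minmax_normalize(cnn_scores)
             + w_sae * minmax_normalize(sae_scores))
    return int(np.argmax(fused))  # index of the predicted activity class

# Toy usage: 16 sampled frames, 20 activity classes, made-up accuracies.
rng = np.random.default_rng(0)
cnn = clip_scores(rng.dirichlet(np.ones(20), size=16))
sae = clip_scores(rng.dirichlet(np.ones(20), size=16))
print(late_fusion(cnn, sae, cnn_acc=0.85, sae_acc=0.78))

Normalizing before fusing matters because the two networks' raw clip scores are not directly comparable; mapping both to a common [0, 1] range keeps one network from dominating the weighted mean.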
Pages: 41-45
Page count: 5
Related Papers
50 records in total
  • [1] Human Activity Recognition using Deep Learning
    Moola, Ramu
    Hossain, Ashraf
    [J]. 2022 URSI REGIONAL CONFERENCE ON RADIO SCIENCE, URSI-RCRS, 2022: 165 - 168
  • [2] Alternative Deep Learning Architectures for Feature-Level Fusion in Human Activity Recognition
    Maitre, Julien
    Bouchard, Kevin
    Gaboury, Sébastien
    [J]. MOBILE NETWORKS & APPLICATIONS, 2021, 26 (05): 2076 - 2086
  • [3] Human activity recognition using deep electroencephalography learning
    Salehzadeh, Amirsaleh
    Calitz, Andre P.
    Greyling, Jean
    [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2020, 62
  • [4] Human Activity Recognition in Videos Using Deep Learning
    Kumar, Mohit
    Rana, Adarsh
    Ankita
    Yadav, Arun Kumar
    Yadav, Divakar
    [J]. SOFT COMPUTING AND ITS ENGINEERING APPLICATIONS, ICSOFTCOMP 2022, 2023, 1788 : 288 - 299
  • [5] Human Activity Recognition Using Deep Belief Networks
    Yalcin, Hulya
    [J]. 2016 24TH SIGNAL PROCESSING AND COMMUNICATION APPLICATION CONFERENCE (SIU), 2016: 1649 - 1652
  • [6] Deep Human Activity Recognition Using Wearable Sensors
    Lawal, Isah A.
    Bano, Sophia
    [J]. 12TH ACM INTERNATIONAL CONFERENCE ON PERVASIVE TECHNOLOGIES RELATED TO ASSISTIVE ENVIRONMENTS (PETRA 2019), 2019: 45 - 48
  • [7] Analysis of Human Activity Recognition using Deep Learning
    Khattar, Lamiyah
    Kapoor, Chinmay
    Aggarwal, Garima
    [J]. 2021 11TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING, DATA SCIENCE & ENGINEERING (CONFLUENCE 2021), 2021: 100 - 104
  • [8] Human Daily Activity Recognition Performed Using Wearable Inertial Sensors Combined With Deep Learning Algorithms
    Yen, Chih-Ta
    Liao, Jia-Xian
    Huang, Yi-Kai
    [J]. IEEE ACCESS, 2020, 8 : 174105 - 174114
  • [9] Combined deep centralized coordinate learning and hybrid loss for human activity recognition
    Bourjandi, Masoumeh
    Yadollahzadeh-Tabari, Meisam
    Golsorkhtabaramiri, Mehdi
    [J]. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2022, 34 (22):