A Novel CNN-BiLSTM-GRU Hybrid Deep Learning Model for Human Activity Recognition

Cited by: 0
Authors
Lalwani, Pooja [1 ]
Ganeshan, R. [1 ]
Affiliations
[1] School of Computing Science and Engineering, VIT Bhopal University, Kothri Kalan, Astha, Madhya Pradesh, Sehore, 466114, India
Keywords
Deep neural networks; Human-robot interaction; Long short-term memory; Wearable sensors; Wireless sensor networks
DOI
10.1007/s44196-024-00689-0
Abstract
Human Activity Recognition (HAR) is critical in a variety of disciplines, including healthcare and robotics. This paper presents a new hybrid deep learning model for HAR that combines a Convolutional Neural Network, Bidirectional Long Short-Term Memory, and a Gated Recurrent Unit (CNN-BiLSTM-GRU), using data from wearable sensors and mobile devices. The model achieves an accuracy of 99.7% on the challenging Wireless Sensor Data Mining (WISDM) dataset, demonstrating its ability to correctly identify human activities. The study emphasizes parameter optimization, with a focus on a batch size of 0.3 as a significant factor in improving the model’s robustness. The findings also have far-reaching implications for bipedal robotics, where precise HAR is critical to improving the quality of human–robot interaction and overall work efficiency. These results not only strengthen HAR techniques but also offer practical benefits in real-world applications, particularly in robotics and healthcare. The study thus makes a significant contribution to the continuing development of HAR methods and their practical applications, underscoring their role in driving innovation and efficiency across a wide range of industries. © The Author(s) 2024.
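The abstract describes the CNN-BiLSTM-GRU architecture only at a high level. A minimal sketch of how such a hybrid might be wired together in Keras is given below; the window length, channel count, class count, layer widths, and dropout value are illustrative assumptions, not the configuration reported by the authors.

```python
# Minimal sketch of a CNN-BiLSTM-GRU hybrid for HAR on WISDM-style data.
# The window length (200 samples x 3 accelerometer axes), 6-class output,
# and all layer sizes are assumptions chosen for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_bilstm_gru(window_len=200, n_channels=3, n_classes=6):
    inputs = layers.Input(shape=(window_len, n_channels))
    # 1-D convolutions extract local motion features from the raw signal.
    x = layers.Conv1D(64, kernel_size=3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Conv1D(128, kernel_size=3, activation="relu", padding="same")(x)
    x = layers.MaxPooling1D(pool_size=2)(x)
    # Bidirectional LSTM models temporal context in both directions.
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    # GRU condenses the sequence into a fixed-length representation.
    x = layers.GRU(64)(x)
    x = layers.Dropout(0.3)(x)  # illustrative regularization, not from the paper
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_cnn_bilstm_gru().summary()
```

Placing the convolutional front end before the recurrent layers lets the BiLSTM and GRU operate on shorter, feature-rich sequences rather than on raw samples, which is the usual rationale for this kind of hybrid.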
Related papers
50 items in total
  • [21] A multibranch CNN-BiLSTM model for human activity recognition using wearable sensor data
    Challa, Sravan Kumar
    Kumar, Akhilesh
    Semwal, Vijay Bhaskar
    VISUAL COMPUTER, 2022, 38 (12): 4095 - 4109
  • [23] Multi-sensor human activity recognition using CNN and GRU
    Nafea, Ohoud
    Abdul, Wadood
    Muhammad, Ghulam
    INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL, 2022, 11 (02) : 135 - 147
  • [25] Novel Human Activity Recognition by graph engineered ensemble deep learning model
    Ghalan, Mamta
    Aggarwal, Rajesh Kumar
    IFAC JOURNAL OF SYSTEMS AND CONTROL, 2024, 27
  • [26] A Novel Deep Learning Model for Smartphone-Based Human Activity Recognition
    Agti, Nadia
    Sabri, Lyazid
    Kazar, Okba
    Chibani, Abdelghani
    MOBILE AND UBIQUITOUS SYSTEMS: COMPUTING, NETWORKING AND SERVICES, MOBIQUITOUS 2023, PT II, 2024, 594 : 231 - 243
  • [27] Hybrid CNN-GRU Model for High Efficient Handwritten Digit Recognition
    Vantruong Nguyen
    Cai, Jueping
    Chu, Jie
    2019 2ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND PATTERN RECOGNITION (AIPR 2019), 2019, : 66 - 71
  • [28] Wearable Sensor-Based Human Activity Recognition with Hybrid Deep Learning Model
    Luwe, Yee Jia
    Lee, Chin Poo
    Lim, Kian Ming
    INFORMATICS-BASEL, 2022, 9 (03):
  • [29] ENHANCING HUMAN ACTIVITY RECOGNITION THROUGH SENSOR FUSION AND HYBRID DEEP LEARNING MODEL
    Tarekegn, Adane Nega
    Ullah, Mohib
    Cheikh, Faouzi Alaya
    Sajjad, Muhammad
    2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW, 2023,
  • [30] Wearable sensors for human activity recognition based on a self-attention CNN-BiLSTM model
    Guo, Huafeng
    Xiang, Changcheng
    Chen, Shiqiang
    SENSOR REVIEW, 2023, 43 (5/6) : 347 - 358