A Deep Learning Approach for Human Activities Recognition From Multimodal Sensing Devices

Cited by: 47
Authors
Ihianle, Isibor Kennedy [1 ]
Nwajana, Augustine O. [2 ]
Ebenuwa, Solomon Henry [3 ]
Otuka, Richard I. [3]
Owa, Kayode [1 ]
Orisatoki, Mobolaji O. [4 ]
Affiliations
[1] Nottingham Trent Univ, Dept Comp Sci, Nottingham NG11 8NS, England
[2] Univ Greenwich, Fac Engn & Sci, London SE10 9JR, England
[3] Univ East London, Sch Architecture Comp & Engn ACE, London E16 2RD, England
[4] Univ Sussex, Dept Engn & Design, Brighton BN1 9RH, E Sussex, England
Source
IEEE ACCESS | 2020, Vol. 8, Issue 08
Keywords
Feature extraction; Machine learning; Activity recognition; Convolution; Hidden Markov models; Logic gates; Human activity recognition; deep learning; machine learning; wearable sensors; convolutional neural network; long short-term memory; BIDIRECTIONAL LSTM;
DOI
10.1109/ACCESS.2020.3027979
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline Classification Code
0812 ;
Abstract
Research in the recognition of human activities of daily living has improved significantly with deep learning techniques. Traditional human activity recognition techniques often rely on handcrafted features derived from heuristic processes applied to a single sensing modality. Deep learning techniques address most of these problems through automatic feature extraction from multimodal sensing devices, enabling accurate activity recognition. In this paper, we propose a multi-channel deep learning architecture that combines a convolutional neural network (CNN) with a bidirectional long short-term memory (BLSTM) network. The advantage of this model is that the CNN layers perform direct mapping and abstract representation of the raw sensor inputs, extracting features at different resolutions. The BLSTM layer then exploits both the forward and backward sequences to significantly refine the extracted features for activity recognition. We evaluate the proposed model on two publicly available datasets. The experimental results show that the proposed model performs considerably better than our baseline models and other models evaluated on the same datasets, demonstrating its suitability for enhanced human activity recognition from multimodal sensing devices.
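For readers who want a concrete starting point, the sketch below illustrates one way a CNN-BLSTM pipeline of the kind described in the abstract could be assembled in Keras. The window length, sensor-channel count, filter sizes, and number of activity classes are illustrative assumptions for this sketch, not the configuration published by the authors.

```python
# Minimal sketch of a CNN-BLSTM model for sensor-based activity recognition.
# All hyperparameters below (window length, channel count, filter sizes,
# class count) are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW_LEN = 128   # samples per sliding window (assumed)
N_CHANNELS = 9     # e.g. 3-axis accelerometer + gyroscope + magnetometer (assumed)
N_CLASSES = 6      # number of activity labels (assumed)

def build_cnn_blstm(window_len=WINDOW_LEN, n_channels=N_CHANNELS, n_classes=N_CLASSES):
    inputs = layers.Input(shape=(window_len, n_channels))

    # Convolutional blocks map raw sensor windows to abstract features
    # at progressively coarser temporal resolutions.
    x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Conv1D(128, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(pool_size=2)(x)

    # A bidirectional LSTM exploits forward and backward temporal context
    # over the extracted feature sequence.
    x = layers.Bidirectional(layers.LSTM(64))(x)
    x = layers.Dropout(0.5)(x)

    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_cnn_blstm().summary()
```

A model of this form would be trained on fixed-length sliding windows cut from the multimodal sensor streams, with one activity label per window.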
Pages: 179028-179038
Number of pages: 11
Related Papers
50 entries in total
  • [21] Emotion Recognition on Multimodal with Deep Learning and Ensemble
    Dharma, David Adi
    Zahra, Amalia
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2022, 13 (12) : 656 - 663
  • [22] An approach to sport activities recognition based on an inertial sensor and deep learning
    Pajak, Grzegorz
    Krutz, Pascal
    Patalas-Maliszewska, Justyna
    Rehm, Matthias
    Pajak, Iwona
    Dix, Martin
    SENSORS AND ACTUATORS A-PHYSICAL, 2022, 345
  • [23] Multimodal Sequential Modeling and Recognition of Human Activities
    Selmi, Mouna
    El-Yacoubi, Mounim A.
    COMPUTERS HELPING PEOPLE WITH SPECIAL NEEDS, PT II (ICCHP 2016), 2016, 9759 : 541 - 548
  • [24] MM-Fit: Multimodal Deep Learning for Automatic Exercise Logging across Sensing Devices
    Stromback, David
    Huang, Sangxia
    Radu, Valentin
    PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2020, 4 (04):
  • [25] A Lightweight Deep Learning Model for Human Activity Recognition on Edge Devices
    Agarwal, Preeti
    Alam, Mansaf
    INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND DATA SCIENCE, 2020, 167 : 2364 - 2373
  • [26] Comparing Recognition Performance and Robustness of Multimodal Deep Learning Models for Multimodal Emotion Recognition
    Liu, Wei
    Qiu, Jie-Lin
    Zheng, Wei-Long
    Lu, Bao-Liang
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2022, 14 (02) : 715 - 729
  • [27] Multimodal Wearable Sensing for Sport-Related Activity Recognition Using Deep Learning Networks
    Mekruksavanich, Sakorn
    Jitpattanakul, Anuchit
    JOURNAL OF ADVANCES IN INFORMATION TECHNOLOGY, 2022, 13 (02) : 132 - 138
  • [28] Classification of human's activities from gesture recognition in live videos using deep learning
    Khan, Amjad Rehman
    Saba, Tanzila
    Khan, Muhammad Zeeshan
    Fati, Suliman Mohamed
    Khan, Muhammad Usman Ghani
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2022, 34 (10):
  • [29] MultiSense: Cross-labelling and Learning Human Activities Using Multimodal Sensing Data
    Zhang, Lan
    Zheng, Daren
    Yuan, Mu
    Han, Feng
    Wu, Zhengtao
    Liu, Mengjing
    Li, Xiang-Yang
    ACM TRANSACTIONS ON SENSOR NETWORKS, 2023, 19 (03)
  • [30] A Classifier Approach using Deep Learning for Human Activity Recognition
    Rawat, Sarthak Singh
    Bisht, Abhishek
    Nijhawan, Rahul
    2019 FIFTH INTERNATIONAL CONFERENCE ON IMAGE INFORMATION PROCESSING (ICIIP 2019), 2019, : 486 - 490