Indoor human activity recognition using high-dimensional sensors and deep neural networks

Cited by: 0
Authors
Baptist Vandersmissen
Nicolas Knudde
Azarakhsh Jalalvand
Ivo Couckuyt
Tom Dhaene
Wesley De Neve
Institutions
[1] Ghent University–IMEC, Department of Electronics and Information Systems
[2] Ghent University–IMEC, Department of Information Technology
[3] Ghent University Global Campus, Center for Biotech Data Science
Keywords
Activity recognition; Deep neural networks; High-dimensional sensors; Sensor fusion
DOI
Not available
Abstract
Many smart home applications rely on indoor human activity recognition. This challenge is currently tackled primarily with video camera sensors. However, such sensors suffer from fundamental technical deficiencies in an indoor environment and often also breach privacy. In contrast, a radar sensor resolves most of these flaws and, in particular, preserves privacy. In this paper, we investigate a novel approach to automatic indoor human activity recognition, feeding high-dimensional radar and video camera sensor data into several deep neural networks. Furthermore, we explore the efficacy of sensor fusion as a solution in less-than-ideal circumstances. We validate our approach on two newly constructed and published data sets consisting of 2347 and 1505 samples, distributed over six different types of gestures and events, respectively. From our analysis, we conclude that, for the radar sensor, it is optimal to use a three-dimensional convolutional neural network that takes sequential range-Doppler maps as input. This model achieves error rates of 12.22% and 2.97% on the gestures and events data sets, respectively. A pretrained residual network is employed for the video camera sensor data and obtains error rates of 1.67% and 3.00% on the same data sets. We show that there is a clear benefit in combining both sensors to enable activity recognition under less-than-ideal circumstances.
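The abstract's best radar model is a three-dimensional CNN over sequential range-Doppler maps, i.e., a kernel that slides jointly over time, range, and Doppler so that temporal dynamics are captured alongside the spatial pattern. As an illustration only (a naive NumPy sketch; the shapes, the averaging kernel, and the `conv3d_valid` helper are hypothetical and not taken from the paper, whose model uses learned filters), the following shows how a single 3D kernel consumes a stack of range-Doppler frames:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D cross-correlation of a single-channel volume with a
    single kernel; volume has shape (T, R, D): time x range x Doppler."""
    t, r, d = kernel.shape
    T, R, D = volume.shape
    out = np.zeros((T - t + 1, R - r + 1, D - d + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output value mixes 3 consecutive frames: the kernel
                # spans the time axis, not just the range-Doppler plane.
                out[i, j, k] = np.sum(volume[i:i+t, j:j+r, k:k+d] * kernel)
    return out

# A toy sequence of 8 range-Doppler maps, each 16 x 16 (illustrative sizes).
rd_sequence = np.random.default_rng(0).standard_normal((8, 16, 16))
kernel = np.ones((3, 3, 3)) / 27.0  # averaging kernel spanning 3 frames

features = conv3d_valid(rd_sequence, kernel)
print(features.shape)  # (6, 14, 14): the kernel also slides along time
```

In a real 3D CNN the kernel weights are learned and many kernels are stacked per layer, but the shape arithmetic is the same: the temporal extent of the output shrinks by the kernel's time span, which is what lets successive layers aggregate motion over longer windows.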
Pages: 12295–12309
Number of pages: 14
Related papers
50 results in total
  • [1] Indoor human activity recognition using high-dimensional sensors and deep neural networks
    Vandersmissen, Baptist
    Knudde, Nicolas
    Jalalvand, Azarakhsh
    Couckuyt, Ivo
    Dhaene, Tom
    De Neve, Wesley
    NEURAL COMPUTING & APPLICATIONS, 2020, 32 (16): 12295-12309
  • [2] Human Activity Recognition using Wearable Sensors by Deep Convolutional Neural Networks
    Jiang, Wenchao
    Yin, Zhaozheng
    MM'15: PROCEEDINGS OF THE 2015 ACM MULTIMEDIA CONFERENCE, 2015: 1307-1310
  • [3] Human activity recognition with smartphone sensors using deep learning neural networks
    Ronao, Charissa Ann
    Cho, Sung-Bae
    EXPERT SYSTEMS WITH APPLICATIONS, 2016, 59: 235-244
  • [4] Deep Convolutional Neural Networks for Human Activity Recognition with Smartphone Sensors
    Ronao, Charissa Ann
    Cho, Sung-Bae
    NEURAL INFORMATION PROCESSING, ICONIP 2015, PT IV, 2015, 9492: 46-53
  • [5] Minimax optimal high-dimensional classification using deep neural networks
    Wang, Shuoyang
    Shang, Zuofeng
    STAT, 2022, 11 (01)
  • [6] Deep ReLU neural networks in high-dimensional approximation
    Dung, Dinh
    Nguyen, Van Kien
    NEURAL NETWORKS, 2021, 142: 619-635
  • [7] Convolutional Neural Networks for Human Activity Recognition using Mobile Sensors
    Zeng, Ming
    Nguyen, Le T.
    Yu, Bo
    Mengshoel, Ole J.
    Zhu, Jiang
    Wu, Pang
    Zhang, Joy
    2014 6TH INTERNATIONAL CONFERENCE ON MOBILE COMPUTING, APPLICATIONS AND SERVICES (MOBICASE), 2014: 197-205
  • [8] Device Position-Independent Human Activity Recognition with Wearable Sensors Using Deep Neural Networks
    Mekruksavanich, Sakorn
    Jitpattanakul, Anuchit
    APPLIED SCIENCES-BASEL, 2024, 14 (05)
  • [9] Deep Wavelet Convolutional Neural Networks for Multimodal Human Activity Recognition Using Wearable Inertial Sensors
    Vuong, Thi Hong
    Doan, Tung
    Takasu, Atsuhiro
    SENSORS, 2023, 23 (24)
  • [10] Using Neural Networks For Indoor Human Activity Recognition with Spatial Location Information
    Li, Jun
    Guo, Yidong
    Qi, Ying
    2019 11TH INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS (IHMSC 2019), VOL 2, 2019: 146-149