Multi-sensor data fusion for complex human activity recognition

Cited by: 0
Authors
Song X. [1]
Zhang X. [1]
Zhang Z. [1]
Chen X. [1]
Liu H. [1]
Affiliations
[1] School of Computer Science and Technology, Harbin Institute of Technology, Harbin
Keywords
Complex human activity recognition; Deep learning; Multi-sensor data fusion; Multi-task learning
DOI
10.16511/j.cnki.qhdxxb.2020.22.003
Abstract
Human activity recognition based on wearable sensors has been widely used in many fields, but complex human activity recognition with multiple wearable sensors still faces problems such as incompatible signals from heterogeneous sensors and low classification accuracy for complex activities. This paper presents a multi-sensor decision-level data fusion model based on multi-task deep learning for complex activity recognition. The model uses deep learning to automatically extract features from the raw sensor data, and concurrent complex activities are decomposed into multiple sub-tasks with a multi-task learning method. The sub-tasks share a common network structure and promote one another's learning, which improves the generalization performance of the model. Tests show that the model achieves recognition accuracies of 94.6% for cyclical activities, 93.4% for non-cyclical activities, and 92.8% for concurrent complex activities, on average 8 percentage points higher than three baseline models. © 2020, Tsinghua University Press. All rights reserved.
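The abstract describes the architecture only at a high level. The following is a minimal sketch, assuming a PyTorch implementation, of how per-sensor backbones, decision-level fusion, and shared multi-task heads could be wired together; the sensor set (accelerometer and gyroscope), layer sizes, sub-task names, and class counts are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of multi-sensor decision-level fusion with multi-task learning.
# Not the paper's implementation; all architectural choices below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SensorNet(nn.Module):
    """Per-sensor network: a 1D-CNN backbone shared by all sub-tasks, plus one head per sub-task."""

    def __init__(self, in_channels: int, task_classes, feat_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(           # feature extractor shared across sub-tasks
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),             # collapse the time axis
        )
        self.heads = nn.ModuleList([nn.Linear(feat_dim, n) for n in task_classes])

    def forward(self, x):                        # x: (batch, channels, time)
        feat = self.backbone(x).squeeze(-1)      # (batch, feat_dim)
        return [head(feat) for head in self.heads]   # one logit vector per sub-task


class DecisionLevelFusion(nn.Module):
    """Each sensor produces its own per-task prediction; class probabilities are
    averaged across sensors, i.e. fusion happens at the decision level."""

    def __init__(self, sensor_channels, task_classes):
        super().__init__()
        self.sensors = nn.ModuleList(
            [SensorNet(c, task_classes) for c in sensor_channels]
        )
        self.num_tasks = len(task_classes)

    def forward(self, inputs):                   # inputs: list of per-sensor tensors
        per_sensor = [net(x) for net, x in zip(self.sensors, inputs)]
        fused = []
        for t in range(self.num_tasks):
            probs = torch.stack([F.softmax(s[t], dim=1) for s in per_sensor])
            fused.append(probs.mean(dim=0))      # average the sensors' decisions
        return fused                             # per-task fused class probabilities


# Two sensors (3-axis accelerometer, 3-axis gyroscope) and two concurrent sub-tasks,
# e.g. body posture (5 classes) and hand activity (4 classes) -- purely illustrative.
model = DecisionLevelFusion(sensor_channels=[3, 3], task_classes=[5, 4])
acc, gyro = torch.randn(8, 3, 128), torch.randn(8, 3, 128)
fused_probs = model([acc, gyro])

# Multi-task training: sum the per-task losses so the shared backbones learn jointly.
targets = [torch.randint(0, 5, (8,)), torch.randint(0, 4, (8,))]
loss = sum(F.nll_loss(p.clamp_min(1e-8).log(), y) for p, y in zip(fused_probs, targets))
loss.backward()
```

Summing the per-task losses is the simplest way to let the sub-tasks regularize one another through the shared backbone, which matches the abstract's claim that joint learning of sub-tasks improves generalization.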
Pages: 814-821
Number of pages: 7