A hybrid CNN and BLSTM network for human complex activity recognition with multi-feature fusion

Cited by: 9
Authors
Huan, Ruohong [1 ]
Zhan, Ziwei [1 ]
Ge, Luoqi [1 ]
Chi, Kaikai [1 ]
Chen, Peng [1 ]
Liang, Ronghua [1 ]
Affiliations
[1] Zhejiang Univ Technol, Coll Comp Sci & Technol, Hangzhou 310023, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Complex activity recognition; Deep learning; CNN; BLSTM; Feature fusion; Feature selection;
DOI
10.1007/s11042-021-11363-4
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
A hybrid convolutional neural network (CNN) and bidirectional long short-term memory (BLSTM) network for human complex activity recognition with multi-feature fusion is proposed in this paper. Specifically, a new CNN model is designed to extract spatial features from the sensor data. Considering that, in the process of activity recognition, the output at the current moment is related not only to the previous state but also to the subsequent state, a BLSTM network is further used to extract the temporal context of the state information and improve recognition performance. To fully mine the features in the sensor data and further improve recognition performance, a new feature selection method named SFSANW (sequential forward selection and network weights), based on the sequential forward selection algorithm and network weights, is proposed to select among features extracted by traditional methods and obtain dominant features. The dominant features are then fused with the feature vectors extracted by the hybrid CNN and BLSTM network. Experiments are performed on two complex activity datasets, PAMAP2 and UT-Data, and F1 scores of 92.23% and 98.07% are obtained, respectively. The experimental results demonstrate that the proposed method achieves better complex activity recognition performance than traditional machine learning algorithms and state-of-the-art deep learning algorithms.
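To make the described pipeline concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the layer sizes, the 9 sensor channels, the 128-sample window, the 12 activity classes, and the 20-dimensional vector of hand-crafted "dominant" features are illustrative assumptions. It shows a 1-D CNN extracting spatial features, a BLSTM capturing bidirectional temporal context, and fusion of the deep features with a separately selected hand-crafted feature vector before classification.

import torch
import torch.nn as nn

class HybridCnnBlstm(nn.Module):
    """Hypothetical hybrid CNN + BLSTM classifier with multi-feature fusion."""

    def __init__(self, in_channels=9, num_classes=12, handcrafted_dim=20):
        super().__init__()
        # 1-D CNN extracts spatial features from each sensor window.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional LSTM models temporal context in both directions.
        self.blstm = nn.LSTM(input_size=128, hidden_size=64,
                             batch_first=True, bidirectional=True)
        # Classifier sees the fused deep + hand-crafted (dominant) features.
        self.classifier = nn.Linear(2 * 64 + handcrafted_dim, num_classes)

    def forward(self, x, handcrafted):
        # x: (batch, channels, time); handcrafted: (batch, handcrafted_dim)
        feats = self.cnn(x)                    # (batch, 128, time')
        feats = feats.permute(0, 2, 1)         # (batch, time', 128)
        out, _ = self.blstm(feats)             # (batch, time', 128)
        deep = out[:, -1, :]                   # summary of the last time step
        fused = torch.cat([deep, handcrafted], dim=1)  # multi-feature fusion
        return self.classifier(fused)

# Example: 8 windows of 9-channel data, 128 samples each, plus 20 selected
# hand-crafted features per window; output is one logit per activity class.
model = HybridCnnBlstm()
logits = model(torch.randn(8, 9, 128), torch.randn(8, 20))  # shape (8, 12)

The feature-selection step itself (SFSANW) is described in the abstract as a combination of sequential forward selection and network weights; the sketch above simply assumes its output is already available as the hand-crafted feature vector.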
Pages: 36159-36182
Number of pages: 24
Related Papers
50 records in total
  • [21] Pose-aware Multi-feature Fusion Network for Driver Distraction Recognition
    Wu, Mingyan
    Zhang, Xi
    Shen, Linlin
    Yu, Hang
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 1228 - 1235
  • [22] Multi-feature fusion gesture recognition based on deep convolutional neural network
    Yun Wei-guo
    Shi Qi-qi
    Wang Min
    CHINESE JOURNAL OF LIQUID CRYSTALS AND DISPLAYS, 2019, 34 (04) : 417 - 422
  • [23] Multi-feature fusion network for person reidentification
    Wang, Xihe
    Zhang, Yongjun
    Xu, Yujie
    Cui, Zhongwei
    JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (02)
  • [24] sEMG-Based Gesture Recognition via Multi-Feature Fusion Network
    Chen, Zekun
    Qiao, Xiupeng
    Liang, Shili
    Yan, Tao
    Chen, Zhongye
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2025, 29 (04) : 2570 - 2580
  • [25] Speaker recognition research based on voice multi-feature Bayesian network fusion
    Zhu, J.
    Science Press, (34)
  • [26] MFF-Net: A multi-feature fusion network for community detection in complex network
    Cai, Biao
    Wang, Mingyue
    Chen, Yongkeng
    Hu, Yanmei
    Liu, Mingzhe
    KNOWLEDGE-BASED SYSTEMS, 2022, 252
  • [27] A Hybrid Attention Network for Malware Detection Based on Multi-Feature Aligned and Fusion
    Yang, Xing
    Yang, Denghui
    Li, Yizhou
    ELECTRONICS, 2023, 12 (03)
  • [28] Offline Handwritten Text Recognition Using Hybrid CNN-BLSTM Network
    Namdeo, Rahul Kumar
    Gupta, Chetan
    Shrivastava, Ritu
    2022 IEEE 11TH INTERNATIONAL CONFERENCE ON COMMUNICATION SYSTEMS AND NETWORK TECHNOLOGIES (CSNT), 2022, : 318 - 323
  • [29] Human–object interaction recognition based on interactivity detection and multi-feature fusion
    Xia, Limin
    Ding, Xiaoyue
    CLUSTER COMPUTING, 2024, 27 : 1169 - 1183
  • [30] Multi-Feature Fusion for Enhanced Feature Representation in Automatic Modulation Recognition
    Cao, Jiuxiao
    Zhu, Rui
    Wu, Lingfeng
    Wang, Jun
    Shi, Guohao
    Chu, Peng
    Zhao, Kang
    IEEE ACCESS, 2025, 13 : 1164 - 1178