Human activity recognition based on multi-modal fusion

Cited: 0
Authors
Zhang, Cheng [1 ]
Zu, Tianqi [1 ]
Hou, Yibin [1 ,2 ]
He, Jian [1 ,2 ]
Yang, Shengqi [1 ,2 ]
Dong, Ruihai [3 ]
Affiliations
[1] Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
[2] Beijing Univ Technol, Beijing Engn Res Ctr IoT Software & Syst, Beijing 100124, Peoples R China
[3] Univ Coll Dublin, Insight Ctr Data Analyt, Dublin, Ireland
Keywords
Human activity recognition; Multi-modal fusion; Fall detection; Convolutional network; Wearable device;
DOI
10.1007/s42486-023-00132-x
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In recent years, human activity recognition (HAR) methods have been developing rapidly. However, most existing methods are based on a single input data modality and suffer from accuracy and robustness issues. In this paper, we present a novel multi-modal HAR architecture that fuses signals from both RGB visual data and Inertial Measurement Unit (IMU) data. For the RGB modality, a speed-weighted star RGB representation is proposed to aggregate temporal information, and a convolutional network is employed to extract features. For the IMU modality, the Fast Fourier transform and a multi-layer perceptron are employed to extract dynamical features from the IMU data. For the feature fusion scheme, a global soft attention layer is designed to adjust the weights according to the concatenated features, and L-softmax with soft voting is adopted to classify activities. The proposed method is evaluated on the UP-Fall dataset; the F1-scores are 0.92 and 1.00 for the 11-class classification task and the fall/non-fall binary classification task, respectively.
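The IMU branch described above (FFT-based spectral features fed to a multi-layer perceptron) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the window length, number of retained frequency bins, channel count, and MLP layer sizes are all assumptions chosen for the example.

```python
# Hedged sketch of an FFT + MLP feature path for IMU windows.
# Shapes and sizes are illustrative assumptions, not the paper's values.
import numpy as np

def fft_features(window: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """window: (n_samples, n_channels) IMU window -> flat spectral feature vector."""
    spectrum = np.abs(np.fft.rfft(window, axis=0))  # magnitude spectrum per channel
    return spectrum[:n_bins].flatten()              # keep the low-frequency bins

def mlp(x, w1, b1, w2, b2):
    """Two-layer perceptron with ReLU, standing in for the paper's MLP."""
    h = np.maximum(0.0, x @ w1 + b1)
    return h @ w2 + b2

rng = np.random.default_rng(0)
window = rng.standard_normal((128, 6))              # 128 samples, 6 IMU channels
x = fft_features(window)                            # (16 * 6,) = (96,)
w1, b1 = rng.standard_normal((96, 32)), np.zeros(32)
w2, b2 = rng.standard_normal((32, 11)), np.zeros(11)  # 11 activity classes
logits = mlp(x, w1, b1, w2, b2)
print(logits.shape)  # (11,)
```

In the full architecture these logits would not be used directly: the IMU features are concatenated with the RGB features, reweighted by the global soft attention layer, and only then classified with L-softmax and soft voting.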
Pages: 321-332 (12 pages)