Improving Cross-Subject Activity Recognition via Adversarial Learning

Cited by: 8
Authors
Leite, Clayton Frederick Souza [1 ]
Xiao, Yu [1 ]
Affiliations
[1] Aalto Univ, Dept Commun & Networking, Espoo 02150, Finland
Funding
EU Horizon 2020
Keywords
Training; Machine learning; Feature extraction; Activity recognition; Training data; Degradation; Generators; Human activity recognition; deep learning; adversarial learning; data augmentation; cross-subject performance; GESTURE RECOGNITION; HAND;
DOI
10.1109/ACCESS.2020.2993818
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Discipline code
0812
Abstract
Deep learning has been widely used for implementing human activity recognition from wearable sensors like inertial measurement units. The performance of deep activity recognition is heavily affected by the amount and variability of the labeled data available for training the deep learning models. On the other hand, it is costly and time-consuming to collect and label data. Given limited training data, it is hard to maintain high performance across a wide range of subjects, due to the differences in the underlying data distribution of the training and the testing sets. In this work, we develop a novel solution that applies adversarial learning to improve cross-subject performance by generating training data that mimic artificial subjects - i.e. through data augmentation - and enforcing the activity classifier to ignore subject-dependent information. Contrary to domain adaptation methods, our solution does not utilize any data from subjects of the test set (or target domain). Furthermore, our solution is versatile as it can be utilized together with any deep neural network as the classifier. Considering the open dataset PAMAP2, nearly 10% higher cross-subject performance in terms of F1-score can be achieved when training a CNN-LSTM-based classifier with our solution. A performance gain of 5% is also observed when our solution is applied to a state-of-the-art HAR classifier composed of a combination of inception neural network and recurrent neural network. We also investigate different influencing factors of classification performance (i.e. selection of sensor modalities, sampling rates and the number of subjects in the training data), and summarize a practical guideline for implementing deep learning solutions for sensor-based human activity recognition.
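In the paper, the artificial-subject data is produced by a learned adversarial generator; as a simplified, hypothetical stand-in, the sketch below illustrates the underlying augmentation idea with random per-channel rescaling and jitter applied to an IMU window. The function `augment_subject` and its noise parameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

def augment_subject(window, rng, scale_sigma=0.1, jitter_sigma=0.05):
    """Simulate an 'artificial subject' from one IMU window.

    window: (timesteps, channels) array of sensor readings.
    Per-channel amplitude rescaling mimics between-subject differences
    in movement intensity; additive jitter mimics sensor noise.
    """
    scale = rng.normal(1.0, scale_sigma, size=(1, window.shape[1]))
    noise = rng.normal(0.0, jitter_sigma, size=window.shape)
    return window * scale + noise

# Example: augment a 100-sample window of 6 IMU channels.
rng = np.random.default_rng(0)
window = np.ones((100, 6))
augmented = augment_subject(window, rng)
```

Unlike this fixed random transform, the paper's generator is trained adversarially, and the classifier is additionally trained to discard subject-dependent features, which this sketch does not capture.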
Pages: 90542-90554 (13 pages)
Related Papers (50 total)
  • [41] Learning a robust unified domain adaptation framework for cross-subject EEG-based emotion recognition
    Jimenez-Guarneros, Magdiel
    Fuentes-Pineda, Gibran
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 86
  • [42] Self-supervised contrastive learning for EEG-based cross-subject motor imagery recognition
    Li, Wenjie
    Li, Haoyu
    Sun, Xinlin
    Kang, Huicong
    An, Shan
    Wang, Guoxin
    Gao, Zhongke
    JOURNAL OF NEURAL ENGINEERING, 2024, 21 (02)
  • [43] Cross-Subject EEG-Based Emotion Recognition via Semisupervised Multisource Joint Distribution Adaptation
    Jimenez-Guarneros, Magdiel
    Fuentes-Pineda, Gibran
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [44] Cross-Subject Emotion Recognition Using Fused Entropy Features of EEG
    Zuo, Xin
    Zhang, Chi
    Hamalainen, Timo
    Gao, Hanbing
    Fu, Yu
    Cong, Fengyu
    ENTROPY, 2022, 24 (09)
  • [45] A Transfer Learning Strategy for Cross-Subject and Cross-Time Hand Gesture Recognition Based on A-Mode Ultrasound
    Lian, Yue
    Lu, Zongxing
    Huang, Xin
    Shangguan, Qican
    Yao, Ligang
    Huang, Jie
    Liu, Zhoujie
    IEEE SENSORS JOURNAL, 2024, 24 (10) : 17183 - 17192
  • [46] Joint EEG Feature Transfer and Semisupervised Cross-Subject Emotion Recognition
    Peng, Yong
    Liu, Honggang
    Kong, Wanzeng
    Nie, Feiping
    Lu, Bao-Liang
    Cichocki, Andrzej
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (07) : 8104 - 8115
  • [47] KL Divergence-based transfer learning for cross-subject eye movement recognition with EOG signals
    Su, RuiZhi
    Zeng, Zheng
    Tao, LinKai
    Wang, ZaiHao
    Chen, Chen
    Chen, Wei
    2023 45TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY, EMBC, 2023,
  • [48] FMLAN: A novel framework for cross-subject and cross-session EEG emotion recognition
    Yu, Peng
    He, Xiaopeng
    Li, Haoyu
    Dou, Haowen
    Tan, Yeyu
    Wu, Hao
    Chen, Badong
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2025, 100
  • [49] Cross-Subject and Cross-Modal Transfer for Generalized Abnormal Gait Pattern Recognition
    Gu, Xiao
    Guo, Yao
    Deligianni, Fani
    Lo, Benny
    Yang, Guang-Zhong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (02) : 546 - 560
  • [50] Cross-subject generalizable representation learning with class-subject dual labels for biosignals
    Kim, Hyeonji
    Kim, Jaehoon
    Kim, Seoung Bum
    KNOWLEDGE-BASED SYSTEMS, 2024, 295