Individualized Stress Mobile Sensing Using Self-Supervised Pre-Training

Cited: 4
Authors
Islam, Tanvir [1 ]
Washington, Peter [1 ]
Affiliation
[1] Univ Hawaii Manoa, Informat & Comp Sci, Honolulu, HI 96822 USA
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 21
Funding
U.S. National Institutes of Health;
Keywords
mobile sensing; affective computing; personalized machine learning; self-supervised learning; biosignals; stress prediction; PSYCHOLOGICAL STRESS;
DOI
10.3390/app132112035
CLC Number
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Stress is widely recognized as a major contributor to a variety of health issues. Stress prediction using biosignal data recorded by wearables is a key area of study in mobile sensing research because real-time stress prediction can enable digital interventions to react immediately at the onset of stress, helping to avoid many psychological and physiological symptoms such as heart rhythm irregularities. Electrodermal activity (EDA) is often used to measure stress. However, major challenges in predicting stress with machine learning include the subjectivity and sparseness of the labels, a large feature space, relatively few labels, and a complex, nonlinear, and subjective relationship between the features and outcomes. To tackle these issues, we examined the use of model personalization: training a separate stress prediction model for each user. To allow the neural network to learn the temporal dynamics of each individual's baseline biosignal patterns, and thus enable personalization with very few labels, we pre-trained a one-dimensional convolutional neural network (1D CNN) using self-supervised learning (SSL). We evaluated our method using the Wearable Stress and Affect Detection (WESAD) dataset. We fine-tuned the pre-trained networks on the stress-prediction task and compared them against equivalent models without any self-supervised pre-training. We discovered that embeddings learned using our pre-training method outperformed the supervised baselines with significantly fewer labeled data points: the models trained with SSL required less than 30% of the labels to reach performance equivalent to that of models without personalized SSL. This personalized learning method can enable precision health systems that are tailored to each subject and require few annotations by the end user, thus allowing for the mobile sensing of increasingly complex, heterogeneous, and subjective outcomes such as stress.
Pages: 15
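The record contains no code; the following is a minimal, hypothetical sketch of the pipeline the abstract describes: self-supervised pre-training of a 1D CNN on a user's unlabeled EDA windows, followed by per-user fine-tuning on a small number of stress labels. The pretext task (signal reconstruction), the window length, the PyTorch framework, and all layer sizes are illustrative assumptions and are not confirmed by the paper.

```python
# Hypothetical sketch: SSL pre-training of a 1D CNN on unlabeled EDA windows,
# then per-user fine-tuning with few stress labels. Pretext task, window
# length, and architecture are assumptions, not the authors' configuration.
import torch
import torch.nn as nn

WINDOW = 240  # assumed EDA window length in samples (e.g., 60 s at 4 Hz)

class Encoder1D(nn.Module):
    """1D CNN mapping an EDA window (B, 1, WINDOW) to an embedding."""
    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class SSLReconstructor(nn.Module):
    """Assumed pretext task: reconstruct the raw window from its embedding."""
    def __init__(self, encoder: Encoder1D, emb_dim: int = 64):
        super().__init__()
        self.encoder = encoder
        self.decoder = nn.Linear(emb_dim, WINDOW)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def pretrain(encoder, unlabeled_batches, epochs=10, lr=1e-3):
    """Pre-train on one user's unlabeled biosignal windows (no stress labels)."""
    model = SSLReconstructor(encoder)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in unlabeled_batches:          # x: (B, 1, WINDOW)
            opt.zero_grad()
            loss = loss_fn(model(x), x.squeeze(1))
            loss.backward()
            opt.step()
    return encoder

def finetune(encoder, labeled_batches, epochs=10, lr=1e-3):
    """Fine-tune the pre-trained encoder with a small stress-classification head."""
    head = nn.Linear(64, 2)                  # binary stress vs. non-stress
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in labeled_batches:         # few labeled windows per user
            opt.zero_grad()
            loss = loss_fn(head(encoder(x)), y)
            loss.backward()
            opt.step()
    return encoder, head

if __name__ == "__main__":
    # Synthetic stand-in for one user's EDA stream.
    unlabeled = [torch.randn(8, 1, WINDOW) for _ in range(4)]
    labeled = [(torch.randn(8, 1, WINDOW), torch.randint(0, 2, (8,)))
               for _ in range(2)]
    enc = pretrain(Encoder1D(), unlabeled, epochs=1)
    enc, head = finetune(enc, labeled, epochs=1)
```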