Improving motion sickness severity classification through multi-modal data fusion

Cited by: 9
Authors
Dennison, Mark [1 ]
D'Zmura, Mike [2 ]
Harrison, Andre [3 ]
Lee, Michael [3 ]
Raglin, Adrienne [3 ]
Affiliations
[1] US Army, Res Lab West, 12025 E Waterfront Dr, Playa Vista, CA 90094 USA
[2] Univ Calif Irvine, Dept Cognit Sci, 2201 Social & Behav Sci Gateway Bldg, Irvine, CA 92697 USA
[3] US Army, Res Lab, 2800 Powder Mill Rd, Adelphi, MD 20783 USA
Keywords
motion sickness; virtual reality; multimodal computing; machine learning; responses; sway
DOI
10.1117/12.2519085
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Head-mounted displays (HMDs) may prove useful for synthetic training and for augmenting military C5ISR decision-making. Motion sickness caused by HMD use is detrimental, resulting in decreased task performance or total user dropout. The onset of sickness symptoms is typically measured with paper surveys, which are difficult to deploy in live scenarios. Here, we demonstrate a new way to track sickness severity using machine learning on data collected from heterogeneous, non-invasive sensors worn by users who navigated a virtual environment while remaining stationary in reality. We found that two models, one trained on heterogeneous sensor data and another trained only on electroencephalography (EEG) data, classified sickness severity with over 95% accuracy and were statistically comparable in performance. Greedy feature optimization was used to maximize accuracy while minimizing the size of the feature subspace. Across models, the most heavily weighted features were those previously reported in the literature as related to motion sickness severity. Finally, we discuss how models built on heterogeneous vs. homogeneous sensor data may be useful in different real-world scenarios.
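
The greedy feature optimization described in the abstract is, in essence, forward feature selection: repeatedly add whichever feature most improves cross-validated accuracy, and stop when no addition helps. Below is a minimal Python sketch assuming scikit-learn and synthetic stand-in data; the paper's actual classifier, sensors, and feature set are not specified in this record.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))            # stand-in for multimodal sensor features
y = (X[:, 3] + X[:, 7] > 0).astype(int)   # stand-in for sickness-severity labels

def greedy_select(X, y, model, max_features=10):
    """Forward selection: grow the feature set one feature at a time,
    keeping an addition only if it raises mean cross-validated accuracy."""
    selected, best_score = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = {f: cross_val_score(model, X[:, selected + [f]], y, cv=5).mean()
                  for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best_score:   # no candidate improves accuracy: stop
            break
        best_score = scores[f_best]
        selected.append(f_best)
        remaining.remove(f_best)
    return selected, best_score

features, acc = greedy_select(X, y, LogisticRegression(max_iter=1000))
print(f"selected features: {features}, CV accuracy: {acc:.3f}")

On real data, X would hold the fused EEG and other sensor features and y the binned sickness-severity labels; the accuracy criterion and classifier here are illustrative choices, not the authors' reported setup.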
Pages: 10
Related Papers (50 total)
  • [1] Simanek, Jakub; Kubelka, Vladimir; Reinstein, Michal. Improving multi-modal data fusion by anomaly detection. Autonomous Robots, 2015, 39(2): 139-154.
  • [2] Thiam, Patrick; Schwenker, Friedhelm. Multi-modal data fusion for pain intensity assessment and classification. Proceedings of the 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA 2017), 2017.
  • [3] Yi, Sanli; Feng, Xueli. Matching fusion framework on multi-modal data for glaucoma severity diagnosis. Computers & Electrical Engineering, 2025, 123.
  • [4] Coppock, S.; Mazack, L. Soft multi-modal data fusion. Proceedings of the 12th IEEE International Conference on Fuzzy Systems, Vols 1 and 2, 2003: 636-641.
  • [5] Coppock, S.; Mazlack, L. J. Multi-modal data fusion: a description. Knowledge-Based Intelligent Information and Engineering Systems, Pt 2, Proceedings, 2004, 3214: 1136-1142.
  • [6] Gan, Lige; Benlamri, Rachid; Khoury, Richard. Improved sentiment classification by multi-modal fusion. 2017 Third IEEE International Conference on Big Data Computing Service and Applications (IEEE BigDataService 2017), 2017: 11-16.
  • [7] Papacharalapous, A. E.; Hovelynck, Stefan; Cats, O.; Lankhaar, J. W.; Daamen, W.; van Oort, N.; van Lint, J. W. C. Multi-modal data fusion for big events. IEEE Intelligent Transportation Systems Magazine, 2015, 7(4): 5-10.
  • [8] Xiang, Zhihua; Radzi, Nor Haizan Mohamed; Hashim, Haslina. Research on emotion classification based on multi-modal fusion. Baghdad Science Journal, 2024, 21(2): 548-560.
  • [9] Li, Yingkai; Wang, Shufei; Zhang, Yibin; Huang, Hao; Wang, Yu; Zhang, Qianyun; Lin, Yun; Gui, Guan. Multi-modal fusion for enhanced automatic modulation classification. 2024 IEEE 99th Vehicular Technology Conference (VTC2024-Spring), 2024.