Improving motion sickness severity classification through multi-modal data fusion

Cited by: 9
Authors
Dennison, Mark [1 ]
D'Zmura, Mike [2 ]
Harrison, Andre [3 ]
Lee, Michael [3 ]
Raglin, Adrienne [3 ]
Affiliations
[1] US Army, Res Lab West, 12025 E Waterfront Dr, Playa Vista, CA 90094 USA
[2] Univ Calif Irvine, Dept Cognit Sci, 2201 Social & Behav Sci Gateway Bldg, Irvine, CA 92697 USA
[3] US Army, Res Lab, 2800 Powder Mill Rd, Adelphi, MD 20783 USA
Keywords
motion sickness; virtual reality; multimodal computing; machine learning; responses; sway
DOI
10.1117/12.2519085
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Head-mounted displays (HMDs) may prove useful for synthetic training and for augmenting military C5ISR decision-making. Motion sickness caused by HMD use is detrimental, resulting in decreased task performance or total user dropout. The onset of sickness symptoms is typically measured with paper surveys, which are difficult to deploy in live scenarios. Here, we demonstrate a new way to track sickness severity using machine learning on data collected from heterogeneous, non-invasive sensors worn by users who navigated a virtual environment while remaining physically stationary. We found that two models, one trained on heterogeneous sensor data and another trained only on electroencephalography (EEG) data, classified sickness severity with over 95% accuracy and were statistically comparable in performance. Greedy feature optimization was used to maximize accuracy while minimizing the feature subspace. Across models, the highest-weighted features were those previously reported in the literature as related to motion sickness severity. Finally, we discuss how models built on heterogeneous vs. homogeneous sensor data may be useful in different real-world scenarios.
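To make the method concrete, below is a minimal sketch of greedy forward feature selection, the kind of "greedy feature optimization" the abstract describes. The classifier (random forest), the synthetic feature matrix X and labels y, and the cross-validated-accuracy stopping rule are all illustrative assumptions, not the authors' actual pipeline.

    # Hypothetical sketch: greedy forward feature selection for severity
    # classification. Classifier, data, and stopping rule are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Placeholder multimodal feature matrix: rows are time windows, columns
    # are candidate features (e.g., EEG band power, heart rate, postural sway).
    X = rng.normal(size=(200, 12))
    y = rng.integers(0, 3, size=200)  # severity classes: low / medium / high

    selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
    while remaining:
        # Score each candidate feature when added to the current subset.
        scores = {
            f: cross_val_score(
                RandomForestClassifier(n_estimators=100, random_state=0),
                X[:, selected + [f]], y, cv=5,
            ).mean()
            for f in remaining
        }
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best_score:  # stop when no candidate improves accuracy
            break
        best_score = scores[f_best]
        selected.append(f_best)
        remaining.remove(f_best)

    print("selected features:", selected, "cv accuracy:", round(best_score, 3))

Each round adds the single feature that most improves cross-validated accuracy and stops when no candidate helps, which matches the abstract's stated goal of maximizing accuracy while keeping the selected feature subspace small.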
Pages: 10
Related papers
50 entries total (entries [31]-[40] shown)
  • [31] Mittal, Anshul; Dahiya, Kunal; Malani, Shreya; Ramaswamy, Janani; Kuruvilla, Seba; Ajmera, Jitendra; Chang, Keng-Hao; Agarwal, Sumeet; Kar, Purushottam; Varma, Manik. Multi-modal Extreme Classification. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022: 12383-12392.
  • [32] Berger, Christian; Voltersen, Michael; Eckardt, Robert; Eberle, Jonas; Heyer, Thomas; Salepci, Nesrin; Hese, Soeren; Schmullius, Christiane; Tao, Junyi; Auer, Stefan; Bamler, Richard; Ewald, Ken; Gartley, Michael; Jacobson, John; Buswell, Alan; Du, Qian; Pacifici, Fabio. Multi-Modal and Multi-Temporal Data Fusion: Outcome of the 2012 GRSS Data Fusion Contest. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2013, 6(3): 1324-1340.
  • [33] Ganguly, S.; Ma, R.; Polvorosa, C.; Baker, J.; Cao, Y.; Chang, J. A Novel Framework for Multi-Modal Data Fusion in Radiation Oncology. Medical Physics, 2024, 51(10): 7958-7959.
  • [34] Elshehaly, M.; Gracanin, D.; Gad, M.; Elmongui, H. G.; Matkovic, K. Interactive Fusion and Tracking for Multi-Modal Spatial Data Visualization. Computer Graphics Forum, 2015, 34(3): 251-260.
  • [35] Wang, Fei; Lin, Shujin; Wu, Hefeng; Li, Hanhui; Wang, Ruomei; Luo, Xiaonan; He, Xiangjian. SPFusionNet: Sketch Segmentation Using Multi-Modal Data Fusion. 2019 IEEE International Conference on Multimedia and Expo (ICME), 2019: 1654-1659.
  • [36] Acar, Evrim; Levin-Schwartz, Yuri; Calhoun, Vince D.; Adali, Tulay. ACMTF for Fusion of Multi-Modal Neuroimaging Data and Identification of Biomarkers. 2017 25th European Signal Processing Conference (EUSIPCO), 2017: 643-647.
  • [37] Zhu, Qi; Xu, Xiangyu; Yuan, Ning; Zhang, Zheng; Guan, Donghai; Huang, Sheng-Jun; Zhang, Daoqiang. Latent correlation embedded discriminative multi-modal data fusion. Signal Processing, 2020, 171.
  • [38] Xue, Zhuanglin; Xu, Jiabin. Multi-modal fusion attention sentiment analysis for mixed sentiment classification. Cognitive Computation and Systems, 2024.
  • [39] Chen, X.; Flynn, P. J.; Bowyer, K. W. Fusion of infrared and range data: Multi-modal face images. Advances in Biometrics, Proceedings, 2006, 3832: 55-63.
  • [40] Liang, Xinyan; Qian, Yuhua; Guo, Qian; Cheng, Honghong; Liang, Jiye. AF: An Association-Based Fusion Method for Multi-Modal Classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(12): 9236-9254.