A Comprehensive Approach to Early Detection of Workplace Stress with Multi-Modal Analysis and Explainable AI

Cited: 0
Authors
Upadhya, Jiblal [1 ]
Poudel, Khem [1 ]
Ranganathan, Jaishree [1 ]
Affiliations
[1] Middle Tennessee State Univ, Murfreesboro, TN 37132 USA
Keywords
Stress; Work environment; Multi-Modal representation; Explainable AI; Bias;
DOI
10.1145/3632634.3655878
Chinese Library Classification
TP [automation technology, computer technology];
Discipline classification code
0812
Abstract
This study introduces a novel framework for stress detection that leverages the synergy of physiological signals and facial expressions through advanced machine learning techniques. Employing a suite of models, including Long Short-Term Memory (LSTM) networks, Support Vector Machines (SVMs), and Convolutional Neural Networks (CNNs) such as VGG16 and custom CNN models, we undertake a comprehensive analysis across varied data durations. Our findings highlight the superiority of LSTM networks, which consistently outperform SVMs across metrics and excel particularly on longer data sequences, with average improvements of 4% in test accuracy, precision, recall, and F1 score. This underscores the critical advantage of deep learning in capturing the complex temporal patterns inherent in stress manifestations. Moreover, our exploration reveals that VGG16 surpasses custom CNNs, achieving a test accuracy of 87% and thereby setting a new standard in stress detection through facial expression analysis. This research not only advances the state of the art in stress classification but also demonstrates the transformative potential of multimodal data integration in understanding stress. By demonstrating significant improvements over existing methods [1, 4, 19], this work paves the way for innovative, AI-driven approaches to stress management, emphasizing the critical role of multimodal representations in enhancing the accuracy and reliability of stress detection systems. We also harness three Explainable AI (XAI) tools, namely SHAP, LIME, and Permutation Importance, to illuminate the decision-making processes of complex AI models, aiding in the detection and reduction of bias. Through this pioneering effort, we contribute to the broader endeavor of improving mental health and well-being with technology.
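To illustrate one of the XAI tools the abstract names, the sketch below applies Permutation Importance to a stand-in stress classifier. This is a minimal, hypothetical example using scikit-learn: the feature names (heart rate, skin conductance), the synthetic data, and the random-forest classifier are illustrative assumptions, not the paper's actual models or dataset.

```python
# Hypothetical sketch: permutation importance on a synthetic "stress" task.
# An informative feature should lose much more accuracy when shuffled than
# a pure-noise feature, which is how the technique flags what a model uses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 300
heart_rate = rng.normal(70, 10, n)   # informative physiological feature (assumed)
skin_cond = rng.normal(5, 1, n)      # informative physiological feature (assumed)
noise = rng.normal(0, 1, n)          # deliberately uninformative feature

X = np.column_stack([heart_rate, skin_cond, noise])
y = ((heart_rate > 72) & (skin_cond > 5)).astype(int)  # synthetic "stressed" label

clf = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
importances = dict(zip(["heart_rate", "skin_conductance", "noise"],
                       result.importances_mean))
```

In a real pipeline the same call would be run on held-out data against the trained LSTM or SVM, and a large importance gap between physiological channels and nuisance features is one way to surface the biases the abstract discusses.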
Pages: 9
Related papers
50 records in total
  • [41] A Multi-Modal Approach for the Detection of Account Anonymity on Social Media Platforms
    Wang, Bo
    Guo, Jie
    Huang, Zheng
    Qiu, Weidong
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [42] A coupled autoencoder approach for multi-modal analysis of cell types
    Gala, Rohan
    Gouwens, Nathan
    Yao, Zizhen
    Budzillo, Agata
    Penn, Osnat
    Tasic, Bosiljka
    Murphy, Gabe
    Zeng, Hongkui
    Sumbul, Uygar
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [43] Toward Multi-Modal Approach for Identification and Detection of Cyberbullying in Social Networks
    Al-Khasawneh, Mahmoud Ahmad
    Faheem, Muhammad
    Alarood, Ala Abdulsalam
    Habibullah, Safa
    Alsolami, Eesa
    IEEE ACCESS, 2024, 12 : 90158 - 90170
  • [44] Explainable Multi-Modal and Local Approaches to Modelling Injuries in Sports Data
    Hudson, Dan
Den Hartigh, Ruud J. R.
    Meerhoff, L. Rens A.
    Atzmueller, Martin
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 949 - 957
  • [45] Multi-Modal Pedestrian Detection with Large Misalignment Based on Modal-Wise Regression and Multi-Modal IoU
    Wanchaitanawong, Napat
    Tanaka, Masayuki
    Shibata, Takashi
    Okutomi, Masatoshi
    PROCEEDINGS OF 17TH INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS (MVA 2021), 2021,
  • [46] Gesture desk - An integrated multi-modal gestural workplace for sonification
    Hermann, T
    Henning, T
    Ritter, H
    GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION, 2003, 2915 : 369 - 379
  • [47] Comprehensive Semi-Supervised Multi-Modal Learning
    Yang, Yang
    Wang, Ke-Tao
    Zhan, De-Chuan
    Xiong, Hui
    Jiang, Yuan
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 4092 - 4098
  • [48] Comprehensive Multi-Modal Interactions for Referring Image Segmentation
    Jain, Kanishk
    Gandhi, Vineet
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 3427 - 3435
  • [49] A comprehensive video dataset for multi-modal recognition systems
Handa, A.
Agarwal, R.
Kohli, N.
    Data Science Journal, 2019, 18 (01):
  • [50] MULTI-MODAL PREDICTION OF PTSD AND STRESS INDICATORS
    Rozgic, Viktor
    Vazquez-Reina, Amelio
    Crystal, Michael
    Srivastava, Amit
    Tan, Veasna
    Berka, Chris
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014,