Illuminating the Neural Landscape of Pilot Mental States: A Convolutional Neural Network Approach with Shapley Additive Explanations Interpretability

Citations: 5
Authors
Alreshidi, Ibrahim [1 ,2 ,3 ]
Bisandu, Desmond [1 ,2 ]
Moulitsas, Irene [1 ,2 ]
Affiliations
[1] Cranfield Univ, Ctr Computat Engn Sci, Cranfield MK43 0AL, England
[2] Cranfield Univ, Digital Aviat Res & Technol Ctr DARTeC, Machine Learning & Data Analyt Lab, Cranfield MK43 0AL, England
[3] Univ Hail, Coll Comp Sci & Engn, Hail 81451, Saudi Arabia
Keywords
aviation safety; convolutional neural network; deep learning; EEG; electroencephalogram; interpretability/explainability; machine learning; mental states classification; pilot deficiencies; SHapley Additive exPlanations; ALPHA;
DOI
10.3390/s23229052
CLC Classification
O65 [Analytical Chemistry];
Subject Classification
070302 ; 081704 ;
Abstract
Predicting pilots' mental states is a critical challenge for aviation safety and performance, and electroencephalogram (EEG) data offer a promising avenue for detection. However, the interpretability of the machine learning and deep learning models typically used for such tasks remains a significant issue. This study addresses these challenges by developing an interpretable model that detects four mental states in pilots from EEG data: channelised attention, diverted attention, startle/surprise, and a normal state. The methodology involves training a convolutional neural network on power spectral density features of EEG data from 17 pilots. The model's interpretability is enhanced through SHapley Additive exPlanations (SHAP) values, which identify the ten most influential features for each mental state. The results demonstrate strong performance across all metrics, with an average accuracy of 96%, a precision of 96%, a recall of 94%, and an F1 score of 95%. An examination of how the mental states affect EEG frequency bands further elucidates the neural mechanisms underlying these states. The novelty of this study lies in its combination of high-performing model development, improved interpretability, and in-depth analysis of the neural correlates of mental states. This approach not only addresses the critical need for effective and interpretable mental state detection in aviation but also contributes to our understanding of the neural underpinnings of these states, representing a significant advancement in EEG-based mental state detection.
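The feature-extraction stage the abstract describes (power spectral density features per EEG frequency band, later fed to a CNN and attributed with SHAP) can be sketched as follows. This is a minimal illustration only: the sampling rate, band edges, and Welch windowing below are common defaults assumed for demonstration, not the paper's actual preprocessing parameters.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # Hz, assumed sampling rate (not specified in this record)
# Canonical EEG band edges in Hz; exact edges vary between studies.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=FS):
    """Return mean PSD within each canonical EEG band for one channel.

    eeg : 1-D array of raw samples from a single electrode.
    Estimates the power spectral density with Welch's method, then
    averages the PSD values falling inside each band's frequency range.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()}

# Sanity check: a synthetic 10 Hz oscillation plus mild noise should
# concentrate its power in the alpha (8-13 Hz) band.
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
powers = band_powers(signal)
```

In a full pipeline, such band-power vectors (one per channel) would be stacked into the CNN's input, and a SHAP explainer applied to the trained network would rank these features per predicted mental state.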
Pages: 20
Related Papers
50 records total
  • [1] Elucidating microbubble structure behavior with a Shapley Additive Explanations neural network algorithm
    Zhuo, Qingxia
    Zhang, Linfei
    Wang, Lei
    Liu, Qinkai
    Zhang, Sen
    Wang, Guanjun
    Xue, Chenyang
    OPTICAL FIBER TECHNOLOGY, 2024, 88
  • [2] Neural network models and shapley additive explanations for a beam-ring structure
    Sun, Ying
    Zhang, Luying
    Yao, Minghui
    Zhang, Junhua
    CHAOS SOLITONS & FRACTALS, 2024, 185
  • [3] Survey on Convolutional Neural Network Interpretability
    Dou H.
    Zhang L.-M.
    Han F.
    Shen F.-R.
    Zhao J.
    Ruan Jian Xue Bao/Journal of Software, 2024, 35 (01): : 159 - 184
  • [4] SHapley Additive exPlanations for Explaining Artificial Neural Network Based Mode Choice Models
    Koushik, Anil
    Manoj, M.
    Nezamuddin, N.
    TRANSPORTATION IN DEVELOPING ECONOMIES, 2024, 10 (01)
  • [5] Predicting travel mode choice with a robust neural network and Shapley additive explanations analysis
    Tang, Li
    Tang, Chuanli
    Fu, Qi
    Ma, Changxi
    IET INTELLIGENT TRANSPORT SYSTEMS, 2024, 18 (07) : 1339 - 1354
  • [6] Explaining Intrusion Detection-Based Convolutional Neural Networks Using Shapley Additive Explanations (SHAP)
    Younisse, Remah
    Ahmad, Ashraf
    Abu Al-Haija, Qasem
    BIG DATA AND COGNITIVE COMPUTING, 2022, 6 (04)
  • [7] An artificial neural network-pharmacokinetic model and its interpretation using Shapley additive explanations
    Ogami, Chika
    Tsuji, Yasuhiro
    Seki, Hiroto
    Kawano, Hideaki
    To, Hideto
    Matsumoto, Yoshiaki
    Hosono, Hiroyuki
    CPT-PHARMACOMETRICS & SYSTEMS PHARMACOLOGY, 2021, 10 (07): : 760 - 768
  • [8] Evaluating Explanations of Convolutional Neural Network Image Classifications
    Shah, Sumeet S.
    Sheppard, John W.
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020
  • [9] Epileptic signal classification using convolutional neural network and Shapley additive explainable artificial intelligence method
    Rathod, Prajakta
    Naik, Shefali
    Bhalodiya, Jayendra M.
    NEURAL COMPUTING AND APPLICATIONS, 2025, 37 (06) : 4937 - 4955