Drivers' Mental Engagement Analysis Using Multi-Sensor Fusion Approaches Based on Deep Convolutional Neural Networks

Cited by: 1
Authors
Najafi, Taraneh Aminosharieh [1 ]
Affanni, Antonio [1 ]
Rinaldo, Roberto [1 ]
Zontone, Pamela [1 ]
Affiliation
[1] Univ Udine, Polytech Dept Engn & Architecture, Via Sci 206, I-33100 Udine, Italy
Keywords
sensor fusion; drivers' mental engagement; electroencephalogram; electrodermal activity; electrocardiogram; deep convolutional neural network; RECOGNITION; ALGORITHM;
DOI
10.3390/s23177346
Chinese Library Classification: O65 [Analytical Chemistry]
Discipline codes: 070302; 081704
Abstract
In this paper, we present a comprehensive assessment of individuals' mental engagement states during manual and autonomous driving scenarios using a driving simulator. Our study employed two sensor fusion approaches, combining multimodal signals at the data level and at the feature level. Participants in our experiment were equipped with Electroencephalogram (EEG), Skin Potential Response (SPR), and Electrocardiogram (ECG) sensors, allowing us to collect the corresponding physiological signals. To facilitate the real-time recording and synchronization of these signals, we developed a custom-designed Graphical User Interface (GUI). The recorded signals were pre-processed to eliminate noise and artifacts. Subsequently, the cleaned data were segmented into 3 s windows and labeled according to the drivers' high or low mental engagement during manual and autonomous driving. To implement the sensor fusion approaches, we used two architectures based on deep Convolutional Neural Networks (ConvNets), built on the Braindecode Deep4 ConvNet model. The first architecture consisted of four convolutional layers followed by a dense layer; this model processed the synchronized experimental data as a single 2D array input. We also proposed a novel second architecture comprising three branches of the same ConvNet model, each with four convolutional layers, followed by a concatenation layer integrating the ConvNet branches and, finally, two dense layers; this model received the data from each sensor as a separate 2D array input to its own ConvNet branch. Both architectures were evaluated using Leave-One-Subject-Out (LOSO) cross-validation. In both cases, we compared the results obtained using only EEG signals with those obtained after adding the SPR and ECG signals. In particular, the second fusion approach, using all sensor signals, achieved the highest accuracy, reaching 82.0%.
This outcome demonstrates that our proposed architecture, particularly when integrating EEG, SPR, and ECG signals at the feature level, can effectively discern the mental engagement of drivers.
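The feature-level fusion architecture the abstract describes (one ConvNet branch per sensor, a concatenation layer, then two dense layers) can be sketched in PyTorch as below. This is a minimal illustration, not the paper's implementation: the layer widths, kernel sizes, channel counts, window length, and the choice of ELU/BatchNorm/pooling are all assumptions, loosely styled after the Braindecode Deep4 model the paper builds on.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One ConvNet branch: four Conv1d blocks over a (channels, samples) window.
    Layer widths and kernel sizes are illustrative, not the paper's values."""
    def __init__(self, in_ch):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (25, 50, 100, 200):
            layers += [nn.Conv1d(ch, out_ch, kernel_size=5, padding=2),
                       nn.BatchNorm1d(out_ch), nn.ELU(), nn.MaxPool1d(3)]
            ch = out_ch
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).flatten(1)   # (N, features) per branch

class ThreeBranchFusion(nn.Module):
    """Feature-level fusion: one branch per sensor (EEG, SPR, ECG), a
    concatenation of branch features, then two dense layers that output
    the binary high/low engagement prediction."""
    def __init__(self, eeg_ch, spr_ch, ecg_ch, samples):
        super().__init__()
        self.branches = nn.ModuleList(
            [Branch(c) for c in (eeg_ch, spr_ch, ecg_ch)])
        # Probe each branch with a dummy window to size the dense head.
        feat = sum(self._feat_dim(b, c, samples)
                   for b, c in zip(self.branches, (eeg_ch, spr_ch, ecg_ch)))
        self.head = nn.Sequential(nn.Linear(feat, 64), nn.ELU(),
                                  nn.Linear(64, 2))

    @staticmethod
    def _feat_dim(branch, ch, samples):
        with torch.no_grad():
            return branch(torch.zeros(1, ch, samples)).shape[1]

    def forward(self, eeg, spr, ecg):
        feats = [b(x) for b, x in zip(self.branches, (eeg, spr, ecg))]
        return self.head(torch.cat(feats, dim=1))  # concatenation layer
```

Each branch receives one sensor's 3 s window as its own 2D (channels × samples) array, and the fusion happens only after the convolutional stages, by concatenating the flattened branch features ahead of the dense head — mirroring the feature-level integration the abstract describes.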
Pages: 27
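The Leave-One-Subject-Out evaluation mentioned in the abstract can be illustrated with scikit-learn's `LeaveOneGroupOut`: every fold holds out all windows of exactly one subject. The synthetic feature vectors, label layout, and logistic-regression classifier below are placeholder assumptions standing in for the paper's physiological windows and ConvNet models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

# Synthetic stand-in data: 60 windows from 6 subjects (10 each); in the
# paper these would be the 3 s multi-sensor windows and engagement labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 32))
y = np.tile([0, 1], 30)                  # alternating low/high engagement
subjects = np.repeat(np.arange(6), 10)   # subject ID for every window

logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    # All windows of one subject form the test fold, so the classifier
    # is never evaluated on a subject it has seen during training.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

mean_acc = float(np.mean(accuracies))    # one fold per subject
```

Averaging per-fold accuracy over the held-out subjects gives a subject-independent estimate, which is what makes the paper's 82.0% figure a cross-subject result rather than a within-subject one.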