Drivers' Mental Engagement Analysis Using Multi-Sensor Fusion Approaches Based on Deep Convolutional Neural Networks

Cited by: 1
|
Authors
Najafi, Taraneh Aminosharieh [1 ]
Affanni, Antonio [1 ]
Rinaldo, Roberto [1 ]
Zontone, Pamela [1 ]
Affiliations
[1] Univ Udine, Polytech Dept Engn & Architecture, Via Sci 206, I-33100 Udine, Italy
Keywords
sensor fusion; drivers' mental engagement; electroencephalogram; electrodermal activity; electrocardiogram; deep convolutional neural network; RECOGNITION; ALGORITHM;
DOI
10.3390/s23177346
CLC Number
O65 [Analytical Chemistry];
Discipline Codes
070302; 081704;
Abstract
In this paper, we present a comprehensive assessment of individuals' mental engagement states during manual and autonomous driving scenarios using a driving simulator. Our study employed two sensor fusion approaches, combining multimodal signals at the data level and at the feature level. Participants in our experiment were equipped with Electroencephalogram (EEG), Skin Potential Response (SPR), and Electrocardiogram (ECG) sensors, allowing us to collect their corresponding physiological signals. To facilitate the real-time recording and synchronization of these signals, we developed a custom-designed Graphical User Interface (GUI). The recorded signals were pre-processed to eliminate noise and artifacts. Subsequently, the cleaned data were segmented into 3 s windows and labeled according to the drivers' high or low mental engagement states during manual and autonomous driving. To implement the sensor fusion approaches, we employed two different architectures based on deep Convolutional Neural Networks (ConvNets), both built on the Braindecode Deep4 ConvNet model. The first architecture consisted of four convolutional layers followed by a dense layer. This model processed the synchronized experimental data as a single 2D array input. We also proposed a novel second architecture comprising three branches of the same ConvNet model, each with four convolutional layers, followed by a concatenation layer integrating the ConvNet branches, and finally, two dense layers. This model received the experimental data from each sensor as a separate 2D array input for each ConvNet branch. Both architectures were evaluated using a Leave-One-Subject-Out (LOSO) cross-validation approach. In both cases, we compared the results obtained when using only EEG signals with the results obtained by adding SPR and ECG signals. In particular, the second fusion approach, using all sensor signals, achieved the highest accuracy score, reaching 82.0%.
This outcome demonstrates that our proposed architecture, particularly when integrating EEG, SPR, and ECG signals at the feature level, can effectively discern the mental engagement of drivers.
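The three-branch, feature-level fusion architecture described in the abstract can be sketched in PyTorch as follows. This is an illustrative sketch only: the channel counts (8 EEG, 2 SPR, 1 ECG), window length, filter sizes, and dense-layer widths are assumptions, not the paper's exact Braindecode Deep4 configuration. It shows the key idea of one four-block ConvNet branch per sensor, a concatenation of the per-branch feature vectors, and two dense layers producing a high/low engagement prediction.

```python
import torch
import torch.nn as nn


class ConvBranch(nn.Module):
    """One ConvNet branch: four conv blocks over a 2D (channels x time) window.

    Layer sizes are illustrative, not the paper's exact Deep4 parameters.
    """

    def __init__(self, n_filters: int = 16):
        super().__init__()
        layers, c = [], 1
        for _ in range(4):  # four convolutional blocks, as in the abstract
            layers += [
                nn.Conv2d(c, n_filters, kernel_size=(1, 5), padding=(0, 2)),
                nn.BatchNorm2d(n_filters),
                nn.ELU(),
                nn.MaxPool2d(kernel_size=(1, 2)),  # halve the time axis
            ]
            c = n_filters
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sensor_channels, time) -> (batch, 1, channels, time)
        x = self.net(x.unsqueeze(1))
        return x.flatten(start_dim=1)  # one feature vector per window


class ThreeBranchFusion(nn.Module):
    """Feature-level fusion: one branch per sensor (EEG, SPR, ECG),
    concatenation of branch features, then two dense layers."""

    def __init__(self, eeg_ch=8, spr_ch=2, ecg_ch=1, time_len=768, n_classes=2):
        super().__init__()
        self.eeg, self.spr, self.ecg = ConvBranch(), ConvBranch(), ConvBranch()
        # Probe each branch with a dummy window to size the concatenated features.
        with torch.no_grad():
            feat = sum(
                b(torch.zeros(1, ch, time_len)).shape[1]
                for b, ch in [(self.eeg, eeg_ch), (self.spr, spr_ch), (self.ecg, ecg_ch)]
            )
        self.head = nn.Sequential(nn.Linear(feat, 64), nn.ELU(), nn.Linear(64, n_classes))

    def forward(self, eeg, spr, ecg):
        z = torch.cat([self.eeg(eeg), self.spr(spr), self.ecg(ecg)], dim=1)
        return self.head(z)  # logits for high/low mental engagement


model = ThreeBranchFusion()
# One batch of four 3 s windows per sensor (synthetic data, shapes assumed).
logits = model(torch.randn(4, 8, 768), torch.randn(4, 2, 768), torch.randn(4, 1, 768))
print(logits.shape)  # torch.Size([4, 2])
```

Concatenating the flattened branch outputs before the dense head is what makes this a feature-level fusion: each sensor is encoded independently, and only the learned representations are mixed.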
Pages: 27
Related Papers
50 records in total
  • [1] Image segmentation using convolutional neural networks in multi-sensor information fusion
    Zhang, Wenying
    Dong, Min
    Jiang, Li
    [J]. SOFT COMPUTING, 2023, 27 (23) : 18353 - 18372
  • [3] IMU-Based Gait Recognition Using Convolutional Neural Networks and Multi-Sensor Fusion
    Dehzangi, Omid
    Taherisadr, Mojtaba
    ChangalVala, Raghvendar
    [J]. SENSORS, 2017, 17 (12)
  • [4] A Multi-Sensor Fusion Framework Based on Coupled Residual Convolutional Neural Networks
    Li, Hao
    Ghamisi, Pedram
    Rasti, Behnood
    Wu, Zhaoyan
    Shapiro, Aurelie
    Schultz, Michael
    Zipf, Alexander
    [J]. REMOTE SENSING, 2020, 12 (12)
  • [5] An Adaptive Multi-Sensor Data Fusion Method Based on Deep Convolutional Neural Networks for Fault Diagnosis of Planetary Gearbox
    Jing, Luyang
    Wang, Taiyong
    Zhao, Ming
    Wang, Peng
    [J]. SENSORS, 2017, 17 (02)
  • [6] FALL DETECTION USING CONVOLUTIONAL NEURAL NETWORK WITH MULTI-SENSOR FUSION
    Zhou, Xu
    Qian, Li-Chang
    You, Peng-Jie
    Ding, Ze-Gang
    Han, Yu-Qi
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW 2018), 2018,
  • [7] Multi-sensor fusion based optimized deep convolutional neural network for boxing punch activity recognition
    Jayakumar, Brindha
    Govindarajan, Nallavan
    [J]. PROCEEDINGS OF THE INSTITUTION OF MECHANICAL ENGINEERS PART P-JOURNAL OF SPORTS ENGINEERING AND TECHNOLOGY, 2024,
  • [8] FusionLane: Multi-Sensor Fusion for Lane Marking Semantic Segmentation Using Deep Neural Networks
    Yin, Ruochen
    Cheng, Yong
    Wu, Huapeng
    Song, Yuntao
    Yu, Biao
    Niu, Runxin
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (02) : 1543 - 1553
  • [9] UAV-Based Multi-Sensor Data Fusion for Urban Land Cover Mapping Using a Deep Convolutional Neural Network
    Elamin, Ahmed
    El-Rabbany, Ahmed
    [J]. REMOTE SENSING, 2022, 14 (17)
  • [10] Improved Multi-Sensor Fusion Dynamic Odometry Based on Neural Networks
    Luo, Lishu
    Peng, Fulun
    Dong, Longhui
    [J]. SENSORS, 2024, 24 (19)