A multi-modal driver emotion dataset and study: Including facial expressions and synchronized physiological signals

Cited by: 5
Authors
Xiang, Guoliang [1 ]
Yao, Song [1 ]
Deng, Hanwen [1 ]
Wu, Xianhui [1 ]
Wang, Xinghua [1 ]
Xu, Qian [1 ]
Yu, Tianjian [1 ]
Wang, Kui [1 ]
Peng, Yong [1 ]
Affiliations
[1] Cent South Univ, Sch Traff & Transportat Engn, Key Lab Traff Safety Track, Minist Educ, Changsha 410075, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Driver emotion recognition; Multi-modal information; Facial expression; Physiological signal; Smart vehicle; FRAMEWORK;
DOI
10.1016/j.engappai.2023.107772
CLC classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
To address the limitations of existing databases in the field of emotion recognition, and to support the trend of integrating data from multiple sources, we established a multi-modal emotion dataset based on the spontaneous expressions of drivers. After selecting emotion-induction materials and inducing the target emotion before each driving task, we collected facial expression videos and synchronized physiological signals from drivers during driving. The dataset contains records of 64 participants under five emotions (neutral, happy, angry, sad, and fear), together with each participant's emotional valence, arousal, and peak time for every driving task. To analyze the dataset, spatio-temporal convolutional neural networks were designed to handle the different modalities and durations of data in order to investigate their emotion-recognition performance. The results demonstrate that fusing multi-modal data significantly improves the accuracy of driver emotion recognition, with gains of 11.28% and 6.83% over using only facial video signals or only physiological signals, respectively. The publication and analysis of multi-modal emotion data for driving scenarios is therefore crucial to support further research in multi-modal perception and intelligent transportation engineering.
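The abstract describes combining per-modality predictions (facial video and physiological signals) over five emotion classes. The following is a minimal, hypothetical sketch of one common way to do this (decision-level fusion by weighted averaging of class probabilities); the paper's actual spatio-temporal CNN branches and fusion scheme are not specified here, and the names, weights, and logit values below are illustrative placeholders, not the authors' method.

```python
import math

# The five emotion classes used in the dataset.
EMOTIONS = ["neutral", "happy", "angry", "sad", "fear"]

def softmax(logits):
    """Convert raw per-class scores to probabilities (max-shifted for stability)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(face_logits, physio_logits, w_face=0.5):
    """Decision-level fusion: weighted average of the two modality branches'
    class probabilities. w_face is an illustrative mixing weight."""
    p_face = softmax(face_logits)
    p_physio = softmax(physio_logits)
    return [w_face * a + (1.0 - w_face) * b for a, b in zip(p_face, p_physio)]

def predict(face_logits, physio_logits):
    """Return the emotion label with the highest fused probability."""
    probs = fuse(face_logits, physio_logits)
    return EMOTIONS[max(range(len(probs)), key=probs.__getitem__)]
```

Both branches would normally be separate networks (e.g. a spatio-temporal CNN for video and a 1-D CNN for signals); only their output probabilities meet at the fusion step, which is why this style of fusion tolerates modalities of different durations and sampling rates.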
Pages: 11