A novel multimodal EEG-image fusion approach for emotion recognition: introducing a multimodal KMED dataset

Cited by: 0
Authors
Bahar Hatipoglu Yilmaz [1 ]
Cemal Kose [1 ]
Cagatay Murat Yilmaz [2 ]
Affiliations
[1] Karadeniz Technical University,Department of Computer Engineering
[2] Karadeniz Technical University,Department of Software Engineering
Keywords
Multimodal emotion recognition; feature-level fusion; EEG; Face images; KMED dataset; DEAP dataset;
DOI: 10.1007/s00521-024-10925-5
Abstract
Nowadays, bio-signal-based emotion recognition has become a popular research topic. However, several problems must be solved before emotion-based systems can be realized. We therefore propose a feature-level fusion (FLF) method for multimodal emotion recognition (MER). In this method, EEG signals are first transformed into signal images called angle amplitude graphs (AAGs). Second, facial images are recorded simultaneously with the EEG signals, and peak frames are selected from among the recorded facial images. These modalities are then fused at the feature level, and all feature extraction and classification experiments are evaluated on the resulting features. In this work, we also introduce a new multimodal benchmark dataset, KMED, which includes EEG signals and facial videos from 14 participants. Experiments were carried out on the newly introduced KMED and the publicly available DEAP datasets. On the KMED dataset, we achieved the highest classification accuracy of 89.95% with the k-Nearest Neighbor algorithm on the (3-disgusting, 4-relaxing) class pair. On the DEAP dataset, we achieved the highest accuracy of 92.44% with support vector machines for arousal, exceeding the results of previous works. These results demonstrate that the proposed feature-level fusion approach has considerable potential for MER systems. Additionally, the introduced KMED benchmark dataset will facilitate future studies of multimodal emotion recognition.
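The feature-level fusion described in the abstract (per-trial EEG-derived features concatenated with facial-image features, then passed to a classifier such as k-NN) can be sketched roughly as follows. This is an illustrative sketch only, assuming scikit-learn: the feature dimensions, random data, and extractor outputs are placeholders, not the authors' actual AAG or peak-frame pipeline.

```python
# Hedged sketch of feature-level fusion (FLF) for multimodal emotion
# recognition: EEG features and facial features are concatenated per
# trial, then a k-NN classifier is trained. All data here is synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials = 200
eeg_feats = rng.normal(size=(n_trials, 64))    # stand-in for features from AAG signal images
face_feats = rng.normal(size=(n_trials, 128))  # stand-in for features from selected peak frames
labels = rng.integers(0, 2, size=n_trials)     # a binary emotion class pair

# Feature-level fusion: concatenate the two modality vectors per trial
fused = np.hstack([eeg_feats, face_feats])     # shape (200, 192)
fused = StandardScaler().fit_transform(fused)  # put both modalities on a common scale

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("fused shape:", fused.shape)
print("test accuracy:", clf.score(X_te, y_te))
```

Scaling before concatenation-based fusion matters in practice: k-NN is distance-based, so an unscaled modality with larger feature magnitudes would otherwise dominate the fused distance.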
Pages: 5187-5202
Page count: 15
Related Papers (50 total)
  • [31] Multimodal transformer augmented fusion for speech emotion recognition. Wang, Yuanyuan; Gu, Yu; Yin, Yifei; Han, Yingping; Zhang, He; Wang, Shuang; Li, Chenyu; Quan, Dou. FRONTIERS IN NEUROROBOTICS, 2023, 17.
  • [32] Multimodal Physiological Signals Fusion for Online Emotion Recognition. Pan, Tongjie; Ye, Yalan; Cai, Hecheng; Huang, Shudong; Yang, Yang; Wang, Guoqing. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023: 5879-5888.
  • [33] Review on Multimodal Fusion Techniques for Human Emotion Recognition. Karani, Ruhina; Desai, Sharmishta. INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2022, 13 (10): 287-296.
  • [34] Context-aware Multimodal Fusion for Emotion Recognition. Li, Jinchao; Wang, Shuai; Chao, Yang; Liu, Xunying; Meng, Helen. INTERSPEECH 2022, 2022: 2013-2017.
  • [35] A multimodal fusion approach for image captioning. Zhao, Dexin; Chang, Zhi; Guo, Shutao. NEUROCOMPUTING, 2019, 329: 476-485.
  • [36] A review on EEG-based multimodal learning for emotion recognition. Pillalamarri, Rajasekhar; Shanmugam, Udhayakumar. ARTIFICIAL INTELLIGENCE REVIEW, 2025, 58 (05).
  • [37] Multimodal Emotion Recognition Based on Facial Expressions, Speech, and EEG. Pan, Jiahui; Fang, Weijie; Zhang, Zhihang; Chen, Bingzhi; Zhang, Zheng; Wang, Shuihua. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY, 2024, 5: 396-403.
  • [38] Emotion Recognition Based on Feedback Weighted Fusion of Multimodal Emotion Data. Wei, Wei; Jia, Qingxuan; Feng, Yongli. 2017 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (IEEE ROBIO 2017), 2017: 1682-1687.
  • [39] Multimodal Emotion Recognition From EEG Signals and Facial Expressions. Wang, Shuai; Qu, Jingzi; Zhang, Yong; Zhang, Yidie. IEEE ACCESS, 2023, 11: 33061-33068.
  • [40] Multimodal Emotion Recognition using EEG and Eye Tracking Data. Zheng, Wei-Long; Dong, Bo-Nan; Lu, Bao-Liang. 2014 36TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2014: 5040-5043.