A novel multimodal EEG-image fusion approach for emotion recognition: introducing a multimodal KMED dataset

Cited by: 0
Authors
Bahar Hatipoglu Yilmaz [1]
Cemal Kose [1]
Cagatay Murat Yilmaz [2]
Affiliations
[1] Karadeniz Technical University, Department of Computer Engineering
[2] Karadeniz Technical University, Department of Software Engineering
Keywords
Multimodal emotion recognition; feature-level fusion; EEG; face images; KMED dataset; DEAP dataset
DOI
10.1007/s00521-024-10925-5
Abstract
Nowadays, bio-signal-based emotion recognition has become a popular research topic. However, several problems must be solved before emotion-based systems can be realized. We therefore propose a feature-level fusion (FLF) method for multimodal emotion recognition (MER). In this method, EEG signals are first transformed into signal images called angle amplitude graphs (AAGs). Second, facial images are recorded simultaneously with the EEG signals, and peak frames are selected from all recorded facial images. These modalities are then fused at the feature level. Finally, all feature extraction and classification experiments are evaluated on the fused features. In this work, we also introduce a new multimodal benchmark dataset, KMED, which includes EEG signals and facial videos from 14 participants. Experiments were carried out on the newly introduced KMED dataset and the publicly available DEAP dataset. For the KMED dataset, we achieved the highest classification accuracy of 89.95% with the k-Nearest Neighbor algorithm on the (3-disgusting, 4-relaxing) class pair. For the DEAP dataset, we achieved the highest accuracy of 92.44% with support vector machines for arousal, surpassing the results of previous works. These results demonstrate that the proposed feature-level fusion approach has considerable potential for MER systems. Additionally, the introduced KMED benchmark dataset will facilitate future studies of multimodal emotion recognition.
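The abstract describes fusing the two modalities at the feature level and then classifying with k-Nearest Neighbor and support vector machines. The sketch below is a minimal, illustrative Python example of that fusion-and-classification step only; it assumes per-trial feature vectors have already been extracted from the AAG signal images and the selected peak face frames, and all array shapes, classifier settings, and the synthetic data are placeholder assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch of feature-level fusion for a binary emotion class pair.
# The feature arrays below are random stand-ins; the paper's AAG transform and
# peak-frame selection are not reproduced here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_trials = 200
eeg_features = rng.normal(size=(n_trials, 128))   # stand-in for AAG-derived EEG features
face_features = rng.normal(size=(n_trials, 64))   # stand-in for peak-frame facial features
labels = rng.integers(0, 2, size=n_trials)        # stand-in binary labels (one class pair)

# Feature-level fusion: concatenate each trial's EEG and face feature vectors.
fused = np.concatenate([eeg_features, face_features], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0, stratify=labels
)

# Evaluate the fused features with the two classifier families named in the abstract.
for name, clf in [
    ("kNN", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", clf.score(X_test, y_test))
```

With real features in place of the random arrays, the same concatenate-then-classify pattern is what distinguishes feature-level fusion from decision-level fusion, where each modality would be classified separately and only the predictions combined.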
Pages: 5187-5202
Page count: 15