A novel multimodal EEG-image fusion approach for emotion recognition: introducing a multimodal KMED dataset

Cited by: 0
Authors
Bahar Hatipoglu Yilmaz [1 ]
Cemal Kose [1 ]
Cagatay Murat Yilmaz [2 ]
Affiliations
[1] Karadeniz Technical University, Department of Computer Engineering
[2] Karadeniz Technical University, Department of Software Engineering
Keywords
Multimodal emotion recognition; feature-level fusion; EEG; Face images; KMED dataset; DEAP dataset;
DOI
10.1007/s00521-024-10925-5
Abstract
Nowadays, bio-signal-based emotion recognition has become a popular research topic. However, several problems must be solved before emotion-based systems can be realized. We therefore propose a feature-level fusion (FLF) method for multimodal emotion recognition (MER). In this method, EEG signals are first transformed into signal images called angle amplitude graphs (AAGs). Second, facial images are recorded simultaneously with the EEG signals, and peak frames are selected from all recorded facial images. The two modalities are then fused at the feature level, and all feature extraction and classification experiments are evaluated on the resulting features. In this work, we also introduce a new multimodal benchmark dataset, KMED, which includes EEG signals and facial videos from 14 participants. Experiments were carried out on the newly introduced KMED dataset and the publicly available DEAP dataset. On the KMED dataset, we achieved the highest classification accuracy of 89.95% with the k-nearest neighbor algorithm on the (3-disgusting, 4-relaxing) class pair. On the DEAP dataset, we achieved the highest accuracy of 92.44% with support vector machines on arousal, compared with the results of previous works. These results demonstrate that the proposed feature-level fusion approach has considerable potential for MER systems. Additionally, the introduced KMED benchmark dataset will facilitate future studies of multimodal emotion recognition.
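As a rough illustration of the feature-level fusion step described in the abstract, the sketch below concatenates per-trial EEG-derived and face-derived feature vectors before classifying a binary class pair with kNN and SVM. The feature dimensions and random data are stand-ins, not the paper's actual AAG or peak-frame features, so the accuracies it prints are meaningless; only the fusion-then-classify structure reflects the described pipeline.

```python
# Minimal sketch of feature-level fusion (FLF), assuming synthetic stand-in
# features; the paper's real features come from AAG images and peak face frames.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 200

# Stand-ins for per-trial feature vectors from each modality:
eeg_feats = rng.normal(size=(n_trials, 32))   # e.g., features from EEG signal images (AAGs)
face_feats = rng.normal(size=(n_trials, 64))  # e.g., features from the selected peak frame
labels = rng.integers(0, 2, size=n_trials)    # binary class pair, e.g. disgusting vs relaxing

# Feature-level fusion: concatenate the modality features before classification
fused = np.hstack([eeg_feats, face_feats])    # shape: (n_trials, 96)

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.25, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("kNN accuracy:", knn.score(X_te, y_te))
print("SVM accuracy:", svm.score(X_te, y_te))
```

With real features, the same concatenation-then-classify structure is what distinguishes feature-level fusion from decision-level fusion, where each modality is classified separately and only the predictions are combined.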
Pages: 5187-5202 (15 pages)
Related articles
50 items
  • [21] Introducing CALMED: Multimodal Annotated Dataset for Emotion Detection in Children with Autism
    Sousa, Annanda
    Young, Karen
    d'Aquin, Mathieu
    Zarrouk, Manel
    Holloway, Jennifer
    UNIVERSAL ACCESS IN HUMAN-COMPUTER INTERACTION, UAHCI 2023, PT I, 2023, 14020 : 657 - 677
  • [22] PhyMER: Physiological Dataset for Multimodal Emotion Recognition With Personality as a Context
    Pant, Sudarshan
    Yang, Hyung-Jeong
    Lim, Eunchae
    Kim, Soo-Hyung
    Yoo, Seok-Bong
    IEEE ACCESS, 2023, 11 : 107638 - 107656
  • [23] Multimodal Emotion Recognition on RAVDESS Dataset Using Transfer Learning
    Luna-Jimenez, Cristina
    Griol, David
    Callejas, Zoraida
    Kleinlein, Ricardo
    Montero, Juan M.
    Fernandez-Martinez, Fernando
    SENSORS, 2021, 21 (22)
  • [24] Multimodal Emotion Recognition Using Feature Fusion: An LLM-Based Approach
    Chandraumakantham, Omkumar
    Gowtham, N.
    Zakariah, Mohammed
    Almazyad, Abdulaziz
    IEEE ACCESS, 2024, 12 : 108052 - 108071
  • [25] Speech emotion recognition using multimodal feature fusion with machine learning approach
    Sandeep Kumar Panda
    Ajay Kumar Jena
    Mohit Ranjan Panda
    Susmita Panda
    Multimedia Tools and Applications, 2023, 82 : 42763 - 42781
  • [26] HYBRID FUSION BASED APPROACH FOR MULTIMODAL EMOTION RECOGNITION WITH INSUFFICIENT LABELED DATA
    Kumar, Puneet
    Khokher, Vedanti
    Gupta, Yukti
    Raman, Balasubramanian
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 314 - 318
  • [27] Speech emotion recognition using multimodal feature fusion with machine learning approach
    Panda, Sandeep Kumar
    Jena, Ajay Kumar
    Panda, Mohit Ranjan
    Panda, Susmita
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (27) : 42763 - 42781
  • [28] An early fusion approach for multimodal emotion recognition using deep recurrent networks
    Bucur, Beniamin
    Somfelean, Iulia
    Ghiurutan, Alexandru
    Lemnaru, Camelia
    Dinsoreanu, Mihaela
    2018 IEEE 14TH INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTER COMMUNICATION AND PROCESSING (ICCP), 2018, : 71 - 78
  • [29] A Framework to Evaluate Fusion Methods for Multimodal Emotion Recognition
    Pena, Diego
    Aguilera, Ana
    Dongo, Irvin
    Heredia, Juanpablo
    Cardinale, Yudith
    IEEE ACCESS, 2023, 11 : 10218 - 10237
  • [30] Dual Memory Fusion for Multimodal Speech Emotion Recognition
    Priyasad, Darshana
    Fernando, Tharindu
    Sridharan, Sridha
    Denman, Simon
    Fookes, Clinton
    INTERSPEECH 2023, 2023, : 4543 - 4547