Audio-visual stimulation based emotion classification by correlated EEG channels

Cited: 18
Authors
Ahirwal, Mitul Kumar [1 ]
Kose, Mangesh Ramaji [2 ]
Affiliations
[1] Maulana Azad Natl Inst Technol, Bhopal 462003, MP, India
[2] Natl Inst Technol, Raipur 492010, Madhya Pradesh, India
Keywords
EEG signals; Emotion classification; Channel selection; Channel correlation; Feature extraction; Feature selection; Recognition; Entropy
DOI
10.1007/s12553-019-00394-5
Chinese Library Classification
R-058
Abstract
In this paper, a new channel selection technique is presented for emotion classification from electroencephalography (EEG) signals. Audio-visual stimulation is used to elicit emotions during the experiment. After the EEG signals are recorded, feature extraction and classification are applied to classify four emotions (happy, angry, sad, and relaxed). The main contributions of the study are: 1) identification/characterization of audio-visual stimulation that elicits harmful emotions, and 2) a proposed approach to reduce the number of EEG channels needed for emotion classification. The motivation for identifying audio-visual stimulation (videos) responsible for harmful emotions such as sadness and anger is to control their spread over social media and other public platforms. EEG channels are selected on the basis of their activation probability, calculated from the correlation matrix of the EEG channels. Three types of features are extracted from the EEG signals: time-domain, frequency-domain, and entropy-based. After feature extraction, three different classifiers, support vector machine (SVM), artificial neural network (ANN), and naive Bayes (NB), are used to classify the emotions. The study is conducted on the DEAP (Database for Emotion Analysis using Physiological Signals) dataset of EEG signals recorded in different emotional states from several subjects. To compare performance after channel selection, metrics such as accuracy, average precision, and average recall are calculated. The ANN is found to be the best classifier, with 97.74% average accuracy, and among the listed features, entropy-based features perform best, with 90.53% average accuracy.
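The channel-selection step described in the abstract (ranking EEG channels by an activation probability derived from the channel correlation matrix) can be sketched roughly as follows. The paper does not give the exact scoring formula here, so the mean-absolute-correlation score and the `select_channels` helper below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def select_channels(eeg, k):
    """Rank EEG channels by an activation score derived from the channel
    correlation matrix and keep the top-k.
    `eeg` has shape (n_channels, n_samples).
    NOTE: the scoring rule (sum of absolute off-diagonal correlations,
    normalised to a probability) is an assumption for illustration."""
    corr = np.corrcoef(eeg)              # (n_channels, n_channels) correlation matrix
    np.fill_diagonal(corr, 0.0)          # ignore each channel's self-correlation
    score = np.abs(corr).sum(axis=1)     # how strongly each channel co-varies with the rest
    prob = score / score.sum()           # normalise into an "activation probability"
    top_k = np.argsort(prob)[::-1][:k]   # indices of the k most activated channels
    return top_k, prob

# Synthetic stand-in for one DEAP trial: 32 channels, 512 samples.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 512))
selected, prob = select_channels(eeg, k=10)
print(selected, prob.sum())
```

The reduced channel set would then feed the time-domain, frequency-domain, and entropy-based feature extraction before SVM/ANN/NB classification.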
Pages: 7-23 (17 pages)