Emotional Vocal Expressions Recognition Using the COST 2102 Italian Database of Emotional Speech

Cited: 0
Authors
Atassi, Hicham [1 ,2 ]
Riviello, Maria Teresa [3 ]
Smekal, Zdenek [2 ]
Hussain, Amir [1 ]
Esposito, Anna [3 ]
Affiliations
[1] Univ Stirling, Dept Comp Sci & Math, Stirling FK9 4LA, Scotland
[2] Brno Univ Technol, Dept Telecommun, Brno, Czech Republic
[3] Univ Naples 2, Dept Psychol, IIASS, Caserta, Italy
Keywords
Emotion recognition; speech; Italian database; spectral features; high level features;
DOI
Not available
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
This paper proposes a new speaker-independent approach to the classification of emotional vocal expressions using the COST 2102 Italian database of emotional speech. The audio recordings, extracted from video clips of Italian movies, possess a certain degree of spontaneity and are either noisy or slightly degraded by interruptions, making the collected stimuli more realistic than those of available emotional databases recorded under studio conditions. The stimuli represent six basic emotional states: happiness, sarcasm/irony, fear, anger, surprise, and sadness. Under these more realistic conditions, the proposed speaker-independent system classifies the emotions under examination with 60.7% accuracy, using a hierarchical structure consisting of a Perceptron and fifteen Gaussian Mixture Models (GMMs), each trained to discriminate between one pair of the emotions under examination. The most discriminative features were selected from a large set of spectral, prosodic, and voice-quality features using the Sequential Floating Forward Selection (SFFS) algorithm. The results were compared with the subjective evaluations of the stimuli provided by human listeners.
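The paper itself does not include code, but the pairwise classification stage it describes can be illustrated with a minimal sketch. The example below assumes per-utterance feature vectors (e.g., the SFFS-selected spectral, prosodic, and voice-quality features) are already available as NumPy arrays; it uses scikit-learn's GaussianMixture, models each emotion of a pair with its own GMM, and replaces the paper's Perceptron combination stage with simple majority voting. All function and parameter names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of pairwise (one-vs-one) GMM emotion classification.
# Assumes X is an (n_samples, n_features) array of utterance-level features
# and y holds integer emotion labels. The Perceptron fusion stage described
# in the paper is approximated here by majority voting over pairwise decisions.
from itertools import combinations
import numpy as np
from sklearn.mixture import GaussianMixture

EMOTIONS = ["happiness", "sarcasm/irony", "fear", "anger", "surprise", "sadness"]

def train_pairwise_gmms(X, y, n_components=4):
    """Fit one GMM per emotion within every emotion pair (15 pairs for 6 classes)."""
    models = {}
    for a, b in combinations(range(len(EMOTIONS)), 2):
        pair_models = {}
        for cls in (a, b):
            gmm = GaussianMixture(n_components=n_components,
                                  covariance_type="diag", random_state=0)
            gmm.fit(X[y == cls])          # model the feature distribution of this emotion
            pair_models[cls] = gmm
        models[(a, b)] = pair_models
    return models

def predict(models, x):
    """Classify one feature vector by majority vote over all pairwise decisions."""
    votes = np.zeros(len(EMOTIONS), dtype=int)
    x = x.reshape(1, -1)
    for (a, b), pair_models in models.items():
        # each pair votes for the emotion whose GMM gives the higher log-likelihood
        winner = a if pair_models[a].score(x) > pair_models[b].score(x) else b
        votes[winner] += 1
    return EMOTIONS[int(np.argmax(votes))]
```

A closer reproduction of the paper's system would feed the fifteen pairwise scores into a trained Perceptron instead of voting, and would wrap the feature set in an SFFS selection loop before training.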
Pages: 255 / +
Number of pages: 5
Related Papers
50 records in total
  • [41] Intercultural recognition of emotional expressions by 3 national racial groups - English, Italian and Japanese. Shimoda, K.; Argyle, M.; Bitti, P. R. European Journal of Social Psychology, 1978, 8(02): 169-179.
  • [42] Generative emotional AI for speech emotion recognition: The case for synthetic emotional speech augmentation. Latif, Siddique; Shahid, Abdullah; Qadir, Junaid. Applied Acoustics, 2023, 210.
  • [43] Erratum to: Recognizing emotional speech in Persian: A validated database of Persian emotional speech (Persian ESD). Keshtiari, Niloofar; Kuhlmann, Michael; Eslami, Moharram; Klann-Delius, Gisela. Behavior Research Methods, 2015, 47: 295-295.
  • [44] Recognition of Emotional States in Natural Speech. Kaminska, Dorota; Sapinski, Tomasz; Pelikant, Adam. 2013 Signal Processing Symposium (SPS), 2013.
  • [45] Deep Learning for Emotional Speech Recognition. Sanchez-Gutierrez, Maximo E.; Marcelo Albornoz, E.; Martinez-Licona, Fabiola; Leonardo Rufiner, H.; Goddard, John. Pattern Recognition, MCPR 2014, 2014, 8495: 311 - +.
  • [46] Deep Learning for Emotional Speech Recognition. Alhamada, M. I.; Khalifa, O. O.; Abdalla, A. H. Proceedings of the 7th International Conference on Electronic Devices, Systems and Applications (ICEDSA2020), 2020, 2306.
  • [47] Dimensionality Reduction for Emotional Speech Recognition. Fewzee, Pouria; Karray, Fakhri. Proceedings of 2012 ASE/IEEE International Conference on Privacy, Security, Risk and Trust and 2012 ASE/IEEE International Conference on Social Computing (SocialCom/PASSAT 2012), 2012: 532-537.
  • [48] Predictors of nonverbal recognition of emotional facial expressions. Zhegallo, Alexander V.; Basyul, Ivan A. Eksperimentalnaya Psikhologiya, 2023, 16(03): 53-68.
  • [49] Featural processing in recognition of emotional facial expressions. Beaudry, Olivia; Roy-Charland, Annie; Perron, Melanie; Cormier, Isabelle; Tapp, Roxane. Cognition & Emotion, 2014, 28(03): 416-432.
  • [50] Impaired recognition of facial emotional expressions in elderly. Lee, C. T.; Lee, H. K.; Kweon, Y. S.; Chae, J. H.; Lee, K. U. European Neuropsychopharmacology, 2005, 15: S619-S619.