Emotional Vocal Expressions Recognition Using the COST 2102 Italian Database of Emotional Speech

Cited by: 0
Authors
Atassi, Hicham [1 ,2 ]
Riviello, Maria Teresa [3 ]
Smekal, Zdenek [2 ]
Hussain, Amir [1 ]
Esposito, Anna [3 ]
Affiliations
[1] Univ Stirling, Dept Comp Sci & Math, Stirling FK9 4LA, Scotland
[2] Brno Univ Technol, Dept Telecommun, Brno, Czech Republic
[3] Univ Naples 2, Dept Psychol, IIASS, Caserta, Italy
Keywords
Emotion recognition; speech; Italian database; spectral features; high-level features
DOI
Not available
Chinese Library Classification (CLC) Number
TP [Automation and Computer Technology]
Discipline Classification Code
0812
Abstract
The present paper proposes a new speaker-independent approach to the classification of emotional vocal expressions using the COST 2102 Italian database of emotional speech. The audio recordings, extracted from video clips of Italian movies, possess a certain degree of spontaneity and are either noisy or slightly degraded by interruptions, which makes the collected stimuli more realistic than the utterances recorded under studio conditions found in most available emotional databases. The audio stimuli represent six basic emotional states: happiness, sarcasm/irony, fear, anger, surprise, and sadness. Under these more realistic conditions, and with a speaker-independent approach, the proposed system classifies the emotions under examination with 60.7% accuracy, using a hierarchical structure consisting of a Perceptron and fifteen Gaussian Mixture Models (GMMs), each trained to discriminate between one pair of emotions. The most discriminative features were selected from a large set of spectral, prosodic, and voice-quality features using the Sequential Floating Forward Selection (SFFS) algorithm. The results were compared with the subjective evaluation of the stimuli provided by human subjects.
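The pairwise-GMM scheme sketched in the abstract (six emotions give fifteen emotion pairs, each handled by its own pair of GMMs, with a Perceptron combining the pairwise scores into a final decision) can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' implementation: it presumes precomputed acoustic feature vectors `X` and integer emotion labels `y`, omits feature extraction and the SFFS selection step, and all function names and parameters are hypothetical.

```python
# Minimal sketch of a pairwise-GMM emotion classifier in the spirit of the
# abstract: one GMM per emotion within each of the 15 emotion pairs, and a
# Perceptron that combines the pairwise log-likelihood scores.
# X (n_samples x n_features) and y (labels 0..5) are assumed to be
# precomputed acoustic features; names and parameters are illustrative.
from itertools import combinations

import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.mixture import GaussianMixture

EMOTIONS = ["happiness", "sarcasm/irony", "fear", "anger", "surprise", "sadness"]


def train_pairwise_gmms(X, y, n_components=4):
    """Fit one GMM per emotion for each of the C(6, 2) = 15 emotion pairs."""
    models = {}
    for a, b in combinations(range(len(EMOTIONS)), 2):
        models[(a, b)] = tuple(
            GaussianMixture(n_components=n_components, covariance_type="diag").fit(X[y == c])
            for c in (a, b)
        )
    return models


def pairwise_score_matrix(models, X):
    """Per sample, the 15 log-likelihood differences between the paired GMMs."""
    cols = [
        gmm_a.score_samples(X) - gmm_b.score_samples(X)
        for _pair, (gmm_a, gmm_b) in sorted(models.items())
    ]
    return np.column_stack(cols)


def train_combiner(models, X, y):
    """A Perceptron maps the 15 pairwise scores to one of the six emotions."""
    return Perceptron(max_iter=1000).fit(pairwise_score_matrix(models, X), y)


# Usage with hypothetical pre-extracted training/test features:
# gmms = train_pairwise_gmms(X_train, y_train)
# perceptron = train_combiner(gmms, X_train, y_train)
# y_pred = perceptron.predict(pairwise_score_matrix(gmms, X_test))
```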
Pages: 255 - +
Number of pages: 5
Related Papers
50 records in total
  • [1] The COST 2102 Italian Audio and Video Emotional Database
    Esposito, Anna
    Riviello, Maria Teresa
    Di Maio, Giuseppe
    NEURAL NETS WIRN09, 2009, 204 : 51 - 61
  • [2] On the recognition of emotional vocal expressions: motivations for a holistic approach
    Esposito, Anna
    Esposito, Antonietta M.
    COGNITIVE PROCESSING, 2012, 13 : 541 - 550
  • [3] On the recognition of emotional vocal expressions: motivations for a holistic approach
    Anna Esposito
    Antonietta M. Esposito
    Cognitive Processing, 2012, 13 : 541 - 550
  • [4] EMOVO Corpus: an Italian Emotional Speech Database
    Costantini, Giovanni
    Iadarola, Iacopo
    Paoloni, Andrea
    Todisco, Massimiliano
    LREC 2014 - NINTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2014, : 3501 - 3504
  • [5] Recognition of Vocal Socioemotional Expressions at Varying Levels of Emotional Intensity
    Morningstar, Michele
    Gilbert, Annie C.
    Burdo, Jessica
    Leis, Maria
    Dirks, Melanie A.
    EMOTION, 2021, 21 (07) : 1570 - 1575
  • [6] Chung-Ang Auditory Database of Korean Emotional Speech: A Validated Set of Vocal Expressions With Different Intensities
    Nam, Youngja
    Lee, Chankyu
    IEEE ACCESS, 2022, 10 : 122745 - 122761
  • [7] A validated battery of vocal emotional expressions
    Maurage, Pierre
    Joassin, Frederic
    Philippot, Pierre
    Campanella, Salvatore
    NEUROPSYCHOLOGICAL TRENDS, 2007, (02) : 63 - 74
  • [8] Detection of emotional expressions in speech
    Julia, Fatema N.
    Iftekharuddin, Khan M.
    PROCEEDINGS OF THE IEEE SOUTHEASTCON 2006, 2006, : 307 - 312
  • [9] Emotions and Speech Disorders: Do Developmental Stutters Recognize Emotional Vocal Expressions?
    Esposito, Anna
    Troncone, Alda
    TOWARD AUTONOMOUS, ADAPTIVE, AND CONTEXT-AWARE MULTIMODAL INTERFACES: THEORETICAL AND PRACTICAL ISSUES, 2011, 6456 : 155 - 164
  • [10] Creation and Analysis of Emotional Speech Database for Multiple Emotions Recognition
    Sato, Ryota
    Sasaki, Ryohei
    Suga, Norisato
    Furukawa, Toshihiro
    PROCEEDINGS OF 2020 23RD CONFERENCE OF THE ORIENTAL COCOSDA INTERNATIONAL COMMITTEE FOR THE CO-ORDINATION AND STANDARDISATION OF SPEECH DATABASES AND ASSESSMENT TECHNIQUES (ORIENTAL-COCOSDA 2020), 2020, : 33 - 37