Classification of Emotions from Speech using Implicit Features

Cited by: 0
Authors
Srivastava, Mohit [1 ]
Agarwal, Anupam [1 ]
Affiliations
[1] Indian Inst Informat Technol, Human Comp Interact, Allahabad, Uttar Pradesh, India
Keywords
implicit features; SVM; emotions; ANN; recognition
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Discipline code
0812
Abstract
Over time, human-computer interaction has extended into many other fields such as engineering, cognition, and medicine, and speech analysis has become an important area of concern. Speech is increasingly used as a mode of interaction with machines, bridging the gap between the physical and digital worlds, and speech emotion recognition has become an integral subfield of the domain. Human beings have an excellent capability to assess a situation from the emotions involved and can adapt the emotional tone of an interaction to the context. In this work, implicit features of speech are used to detect the emotions anger, happiness, sadness, fear, and disgust. The standard Berlin emotional database is used as the test data set. Classification is performed with a support vector machine (SVM), which is found to be more consistent across all emotions than an artificial neural network (ANN).
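The classification step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the feature vectors here are synthetic placeholders standing in for per-utterance implicit features (e.g., pitch and energy statistics), and the SVM hyperparameters are assumed defaults, not values reported by the authors.

```python
# Hypothetical sketch of multi-class emotion classification with an SVM,
# assuming utterance-level feature vectors have already been extracted.
# All data below is synthetic; it is NOT the Berlin emotional database.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
emotions = ["anger", "happiness", "sadness", "fear", "disgust"]

# Synthetic stand-in for 6-dimensional per-utterance features,
# with each emotion class offset so the toy problem is separable.
X = rng.normal(size=(200, 6)) + np.repeat(np.arange(5), 40)[:, None]
y = np.repeat(emotions, 40)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# RBF-kernel SVM with feature scaling; SVC handles the five
# classes internally via one-vs-one decision functions.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

In practice, real implicit features would be extracted from the speech signal first, and hyperparameters would be tuned by cross-validation rather than fixed as above.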
Pages: 266-271
Page count: 6