Classification of Emotions from Speech using Implicit Features

Cited: 0
Authors
Srivastava, Mohit [1 ]
Agarwal, Anupam [1 ]
Affiliations
[1] Indian Inst Informat Technol, Human Comp Interact, Allahabad, Uttar Pradesh, India
Keywords
implicit features; SVM; emotions; ANN; RECOGNITION;
DOI
not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Human-computer interaction has over time extended into many other fields, such as engineering, cognition, and medicine, and speech analysis has become an important area of concern. Speech is increasingly used as a mode of interaction with machines, bridging the gap between the physical and digital worlds, and speech emotion recognition has become an integral subfield of this domain. Human beings have an excellent capability to assess a situation by recognizing emotions, and can adapt the tone of an interaction depending on the context. In this work, implicit features of speech are used to detect the emotions anger, happiness, sadness, fear, and disgust. The standard Berlin emotional database is used as the test data set. Classification is done using a support vector machine (SVM), which is found to be more consistent across all emotions than an artificial neural network (ANN).
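The comparison described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper's "implicit features" and its exact classifier settings are not given here, so synthetic per-utterance feature vectors stand in for features extracted from the Berlin emotional database, and default scikit-learn models stand in for the paper's SVM and ANN.

```python
# Hypothetical sketch of an SVM-vs-ANN emotion classifier comparison.
# Synthetic feature vectors replace real EMO-DB features (assumption).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["anger", "happiness", "sadness", "fear", "disgust"]
rng = np.random.default_rng(0)

# 100 synthetic utterances per emotion, 12-dimensional feature vectors,
# each class drawn around its own mean so the classes are separable.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(100, 12))
               for i in range(len(EMOTIONS))])
y = np.repeat(np.arange(len(EMOTIONS)), 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Standardize features; both classifiers are sensitive to feature scale.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

svm = SVC(kernel="rbf").fit(X_train, y_train)
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X_train, y_train)

print("SVM accuracy:", svm.score(X_test, y_test))
print("ANN accuracy:", ann.score(X_test, y_test))
```

On real data one would additionally compare per-emotion accuracy (e.g. via a confusion matrix), since the paper's claim is about consistency across emotions rather than overall accuracy alone.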
Pages: 266-271
Page count: 6