Feature extraction algorithms to improve the speech emotion recognition rate

Cited by: 1
Authors
Anusha Koduru
Hima Bindu Valiveti
Anil Kumar Budati
Affiliations
[1] GRIET,
Keywords
Emotion recognition; Preprocessing; Feature extraction; Feature selection; Mel Frequency Cepstral coefficients; Discrete wavelet transform; Zero crossing rate;
DOI
Not available
Abstract
In this digitally growing era, speech emotion recognition plays a significant role in applications such as human-computer interfaces (HCI), lie detection, driver assistance in automotive systems, intelligent tutoring systems, audio mining, security, telecommunication, and human-machine interaction at home, in hospitals, and in shops. Speech is a uniquely human characteristic, used as a tool to communicate and express one's perspective to others. Speech emotion recognition extracts the speaker's emotions from his or her speech signal. Feature extraction, feature selection, and classification are the three main stages of emotion recognition. The main aim of this work is to improve the speech emotion recognition rate of a system using different feature extraction algorithms. The work emphasizes preprocessing of the received audio samples, where noise is removed from the speech samples using filters. In the next step, Mel Frequency Cepstral Coefficients (MFCC), Discrete Wavelet Transform (DWT), pitch, energy, and zero crossing rate (ZCR) algorithms are used to extract the features. In the feature selection stage, a global feature algorithm is used to remove redundant information from the features, and machine learning classification algorithms are used to identify the emotions from the extracted features. These feature extraction algorithms are validated for the universal emotions anger, happiness, sadness, and neutral.
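The pipeline described above combines spectral descriptors (MFCC, DWT) with prosodic ones (pitch, energy, ZCR). As an illustrative sketch of the prosodic trio only, and not the authors' implementation, the following NumPy code frames a signal and computes zero crossing rate, short-time energy, and an autocorrelation-based pitch estimate. The function names, frame length, and hop size are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames of frame_len samples."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def zero_crossing_rate(frames):
    """ZCR per frame: fraction of adjacent sample pairs whose sign changes."""
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

def short_time_energy(frames):
    """Energy per frame: sum of squared samples."""
    return np.sum(frames ** 2, axis=1)

def autocorr_pitch(frame, sr, fmin=50.0, fmax=500.0):
    """Pitch (Hz) of one frame from the autocorrelation peak in [fmin, fmax]."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1 :]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# Demo on a synthetic 200 Hz tone (assumed sample rate: 16 kHz)
sr = 16000
t = np.arange(0, 0.5, 1 / sr)
x = np.sin(2 * np.pi * 200 * t)
frames = frame_signal(x, frame_len=1024, hop=512)
zcr = zero_crossing_rate(frames)        # ~2 * 200 / 16000 = 0.025 per sample
energy = short_time_energy(frames)
pitch = autocorr_pitch(frames[0], sr)   # ~200 Hz
```

In a full system, these frame-level values would be pooled into utterance-level statistics (the "global features" the abstract mentions) and fed to a classifier alongside the MFCC and DWT coefficients.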
Pages: 45-55
Page count: 10
Related papers (50 in total)
  • [1] Feature extraction algorithms to improve the speech emotion recognition rate
    Koduru, Anusha
    Valiveti, Hima Bindu
    Budati, Anil Kumar
    [J]. INTERNATIONAL JOURNAL OF SPEECH TECHNOLOGY, 2020, 23 (01) : 45 - 55
  • [2] Composite Feature Extraction for Speech Emotion Recognition
    Fu, Yangzhi
    Yuan, Xiaochen
    [J]. 2020 IEEE 23RD INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND ENGINEERING (CSE 2020), 2020, : 72 - 77
  • [3] On the Speech Properties and Feature Extraction Methods in Speech Emotion Recognition
    Kacur, Juraj
    Puterka, Boris
    Pavlovicova, Jarmila
    Oravec, Milos
    [J]. SENSORS, 2021, 21 (05) : 1 - 27
  • [4] A Salient Feature Extraction Algorithm for Speech Emotion Recognition
    Liang, Ruiyu
    Tao, Huawei
    Tang, Guichen
    Wang, Qingyun
    Zhao, Li
    [J]. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2015, E98D (09): : 1715 - 1718
  • [5] Impact of Feature Extraction and Feature Selection Algorithms on Punjabi Speech Emotion Recognition Using Convolutional Neural Network
    Kaur, Kamaldeep
    Singh, Parminder
    [J]. ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2022, 21 (05)
  • [6] Speech Emotion Recognition Using Unsupervised Feature Selection Algorithms
    Bandela, Surekha Reddy
    Kumar, T. Kishore
    [J]. RADIOENGINEERING, 2020, 29 (02) : 353 - 364
  • [7] Using adaptive genetic algorithms to improve speech emotion recognition
    Sedaaghi, Mohammad H.
    Kotropoulos, Constantine
    Ververidis, Dimitrios
    [J]. 2007 IEEE NINTH WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING, 2007, : 461 - +
  • [8] Speech emotion recognition based on syllable-level feature extraction
    Rehman, Abdul
    Liu, Zhen-Tao
    Wu, Min
    Cao, Wei-Hua
    Jiang, Cheng-Shan
    [J]. APPLIED ACOUSTICS, 2023, 211
  • [9] Emotional feature extraction based on phoneme information for speech emotion recognition
    Hyun, Kyang Hak
    Kim, Eun Ho
    Kwak, Yoon Keun
    [J]. 2007 RO-MAN: 16TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1-3, 2007, : 797 - +
  • [10] A Pattern Mining Approach in Feature Extraction for Emotion Recognition from Speech
    Avci, Umut
    Akkurt, Gamze
    Unay, Devrim
    [J]. SPEECH AND COMPUTER, SPECOM 2019, 2019, 11658 : 54 - 63