Text Independent Speaker and Emotion Independent Speech Recognition in Emotional Environment

Cited by: 0
Authors:
Revathi, A. [1 ]
Venkataramani, Y. [1 ]
Affiliations:
[1] Saranathan Coll Engn, Tiruchirappalli, India
Keywords:
Clustering; GMM; Speech recognition; Probability; MFPLPC; Emotions; Quantization;
DOI:
10.1007/978-81-322-2250-7_5
Chinese Library Classification:
TP18 [Artificial Intelligence Theory];
Discipline Codes:
081104; 0812; 0835; 1405
Abstract:
It is well known that speaker identification and speech recognition achieve good accuracy on speech recorded in a neutral environment; improving recognition accuracy on speech recorded in an emotional environment remains a challenging task. This paper discusses the effectiveness of an iterative clustering technique and Gaussian mixture modeling (GMM) for recognizing speech and speakers from emotional speech, using Mel-frequency perceptual linear predictive cepstral coefficients (MFPLPC), and MFPLPC concatenated with probability, as features. For emotion-independent speech recognition, models are created for speech of the archetypal emotions boredom, disgust, fear, happiness, neutral, and sadness, and testing is done on speech expressing anger. For text-independent speaker recognition, individual models are created for all speakers using speech of nine utterances, and testing is done using speech of a tenth utterance. 80 % of the data is used for training and 20 % for testing. The system provides an average accuracy of 95 % for text-independent speaker recognition and emotion-independent speech recognition when tested on models developed using MFPLPC and MFPLPC concatenated with probability. Accuracy increases by 1 % if group classification into male and female speaker groups is performed prior to speaker classification. Text-independent speaker recognition is also evaluated by performing group classification with the clustering technique and then identifying the speaker within a group by applying the test vectors to the GMM models of the small set of speakers in that group; the accuracy obtained is 97 %.
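The maximum-likelihood GMM decision described in the abstract (one model per speaker, test vectors scored against each model, highest likelihood wins) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the speaker names and synthetic Gaussian "features" are placeholders standing in for real MFPLPC frames, and scikit-learn's `GaussianMixture` is used as a generic GMM trainer.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-speaker cepstral feature frames.
# The paper extracts MFPLPC features; random Gaussians with different
# means are used here purely to make the sketch self-contained.
train_data = {
    "spk_a": rng.normal(loc=0.0, scale=1.0, size=(500, 13)),
    "spk_b": rng.normal(loc=3.0, scale=1.0, size=(500, 13)),
}

# Modelling stage: fit one GMM per speaker on that speaker's frames.
models = {
    spk: GaussianMixture(n_components=4, covariance_type="diag",
                         random_state=0).fit(feats)
    for spk, feats in train_data.items()
}

def identify(test_frames):
    """Return the speaker whose GMM gives the highest average
    log-likelihood over the test frames (maximum-likelihood decision)."""
    scores = {spk: gmm.score(test_frames) for spk, gmm in models.items()}
    return max(scores, key=scores.get)

# A held-out "utterance" drawn from speaker B's distribution.
test_utt = rng.normal(loc=3.0, scale=1.0, size=(100, 13))
print(identify(test_utt))  # expected: spk_b
```

The two-stage variant reported in the paper (group classification first, then speaker identification within the group) would simply restrict the `models` dictionary scored in `identify` to the speakers of the predicted group.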
Pages: 43-52 (10 pages)
Related Papers (50 records):
  • [1] Speaker independent speech emotion recognition by ensemble classification
    Schuller, B
    Reiter, S
    Müller, R
    Al-Hames, M
    Lang, M
    Rigoll, G
    [J]. 2005 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), VOLS 1 AND 2, 2005, : 865 - 868
  • [2] Graph Learning Based Speaker Independent Speech Emotion Recognition
    Xu, Xinzhou
    Huang, Chengwei
    Wu, Chen
    Wang, Qingyun
    Zhao, Li
    [J]. ADVANCES IN ELECTRICAL AND COMPUTER ENGINEERING, 2014, 14 (02) : 17 - 22
  • [3] TEXT INDEPENDENT SPEAKER RECOGNITION
    FOIL, JT
    JOHNSON, DH
    [J]. IEEE COMMUNICATIONS MAGAZINE, 1983, 21 (09) : 22 - 25
  • [4] Speaker Adversarial Neural Network (SANN) for Speaker-independent Speech Emotion Recognition
    Md Shah Fahad
    Ashish Ranjan
    Akshay Deepak
    Gayadhar Pradhan
    [J]. Circuits, Systems, and Signal Processing, 2022, 41 : 6113 - 6135
  • [5] Speaker Adversarial Neural Network (SANN) for Speaker-independent Speech Emotion Recognition
    Fahad, Md Shah
    Ranjan, Ashish
    Deepak, Akshay
    Pradhan, Gayadhar
    [J]. CIRCUITS SYSTEMS AND SIGNAL PROCESSING, 2022, 41 (11) : 6113 - 6135
  • [6] Underlying Text Independent Speaker Recognition
    Singh, Nilu
    Khan, R. A.
    [J]. PROCEEDINGS OF THE 10TH INDIACOM - 2016 3RD INTERNATIONAL CONFERENCE ON COMPUTING FOR SUSTAINABLE GLOBAL DEVELOPMENT, 2016, : 6 - 10
  • [7] TEXT-INDEPENDENT SPEAKER RECOGNITION
    ATAL, BS
    [J]. JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, 1972, 52 (01): : 181 - &
  • [8] Text-Dependent Versus Text-Independent Speech Emotion Recognition
    Nayak, Biswajit
    Pradhan, Manoj Kumar
    [J]. PROCEEDINGS OF THE SECOND INTERNATIONAL CONFERENCE ON COMPUTER AND COMMUNICATION TECHNOLOGIES, IC3T 2015, VOL 1, 2016, 379 : 153 - 161
  • [9] COMPARISON OF SPEAKER DEPENDENT AND SPEAKER INDEPENDENT EMOTION RECOGNITION
    Rybka, Jan
    Janicki, Artur
    [J]. INTERNATIONAL JOURNAL OF APPLIED MATHEMATICS AND COMPUTER SCIENCE, 2013, 23 (04) : 797 - 808
  • [10] Domain Invariant Feature Learning for Speaker-Independent Speech Emotion Recognition
    Lu, Cheng
    Zong, Yuan
    Zheng, Wenming
    Li, Yang
    Tang, Chuangao
    Schuller, Bjoern W.
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 2217 - 2230