Using learning automata in brain emotional learning for speech emotion recognition

Cited by: 8
Authors
Farhoudi Z. [1 ]
Setayeshi S. [2 ]
Rabiee A. [3 ]
Affiliations
[1] Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran
[2] Department of Medical Radiation, Amirkabir University of Technology, Tehran
[3] Department of Computer Science, Dolatabad Branch, Islamic Azad University, Isfahan
Keywords
Brain emotional learning; Emotional state; Learning automata; Neural network; Speech emotion recognition;
DOI
10.1007/s10772-017-9426-0
Abstract
We propose an improved version of the brain emotional learning (BEL) model, trained via learning automata (LA), for speech emotion recognition. Inspired by the limbic system of the mammalian brain, the original BEL model is composed of two neural network components, namely the amygdala and the orbitofrontal cortex. In the modified model, named brain emotional learning based on learning automata (BELBLA), we employ the theory of stochastic LA in the error back-propagation procedure to train the BEL model, which reduces the high computational complexity of the traditional gradient method and thereby enhances the model's performance. For the speech emotion recognition task, we extract standard features, such as energy, pitch, formants, amplitude, zero-crossing rate and MFCCs, from average short-term signals of the emotional Berlin dataset. The experimental results show that BELBLA outperforms competing approaches, such as hidden Markov models, Gaussian mixture models, k-nearest neighbors, support vector machines and artificial neural networks, on this application. © 2017, Springer Science+Business Media New York.
Pages: 553-562
Page count: 9
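
The abstract's description of amygdala and orbitofrontal components tuned by a learning automaton instead of gradient back-propagation can be made concrete with a small sketch. The Python code below is a hedged illustration, not the authors' BELBLA implementation: the forward pass follows the classic BEL formulation (output = amygdala response minus orbitofrontal inhibition), while the training loop uses a generic linear reward-penalty automaton that selects a perturbation scale and keeps or reverts the step depending on whether the classification error dropped. The class name BELClassifierSketch, the action set, and the a/b reward-penalty rates are all invented for illustration.

# Minimal sketch of a BEL-style classifier with a reward/penalty
# learning-automaton weight search (assumption: not the paper's exact rule).
import numpy as np

class BELClassifierSketch:
    def __init__(self, n_features, n_classes, n_actions=9, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.V = rng.normal(scale=0.1, size=(n_classes, n_features))  # amygdala weights
        self.W = rng.normal(scale=0.1, size=(n_classes, n_features))  # orbitofrontal weights
        # Candidate perturbation scales: the automaton's "actions".
        self.actions = np.linspace(-lr, lr, n_actions)
        # Action probabilities, adapted by a linear reward-penalty scheme.
        self.p = np.full(n_actions, 1.0 / n_actions)
        self.rng = rng

    def forward(self, X):
        A = X @ self.V.T            # amygdala response per class
        O = X @ self.W.T            # orbitofrontal (inhibitory) response
        return A - O                # BEL output: excitation minus inhibition

    def _error(self, X, y):
        return np.mean(np.argmax(self.forward(X), axis=1) != y)

    def fit(self, X, y, epochs=50, a=0.1, b=0.02):
        for _ in range(epochs):
            base = self._error(X, y)
            k = self.rng.choice(len(self.actions), p=self.p)   # automaton picks an action
            dV = self.actions[k] * self.rng.standard_normal(self.V.shape)
            dW = self.actions[k] * self.rng.standard_normal(self.W.shape)
            self.V += dV
            self.W += dW
            rewarded = self._error(X, y) < base                # did the step reduce error?
            if not rewarded:                                   # revert unhelpful steps
                self.V -= dV
                self.W -= dW
            # Linear reward-penalty update of the action probabilities.
            for j in range(len(self.p)):
                if rewarded:
                    self.p[j] += a * ((j == k) - self.p[j])
                else:
                    self.p[j] += b * ((j != k) / (len(self.p) - 1) - self.p[j])
            self.p /= self.p.sum()

    def predict(self, X):
        return np.argmax(self.forward(X), axis=1)

In a real setup the reward signal would be computed on held-out frames and the automaton would likely act per weight or per layer rather than globally; the sketch only illustrates how a reward-penalty search can replace the gradient step described in the abstract.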
Related papers (50 records total)
  • [31] Learning Spontaneity to Improve Emotion Recognition in Speech
    Mangalam, Karttikeya
    Guha, Tanaya
    19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 946 - 950
  • [32] Speech emotion recognition with unsupervised feature learning
    Huang, Zheng-wei
    Xue, Wen-tao
    Mao, Qi-rong
    Frontiers of Information Technology & Electronic Engineering, 2015, 16 (05): 358 - 366
  • [33] LEARNING WITH SYNTHESIZED SPEECH FOR AUTOMATIC EMOTION RECOGNITION
    Schuller, Bjoern
    Burkhardt, Felix
    2010 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2010, : 5150 - 5153
  • [34] SPEECH EMOTION RECOGNITION WITH ENSEMBLE LEARNING METHODS
    Shih, Po-Yuan
    Chen, Chia-Ping
    Wu, Chung-Hsien
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 2756 - 2760
  • [35] Machine Learning Approach for Emotion Recognition in Speech
    Gjoreski, Martin
    Gjoreski, Hristijan
    Kulakov, Andrea
    INFORMATICA-JOURNAL OF COMPUTING AND INFORMATICS, 2014, 38 (04): : 377 - 383
  • [36] Federated Learning for Speech Emotion Recognition Applications
    Latif, Siddique
    Khalifa, Sara
    Rana, Rajib
    Jurdak, Raja
    2020 19TH ACM/IEEE INTERNATIONAL CONFERENCE ON INFORMATION PROCESSING IN SENSOR NETWORKS (IPSN 2020), 2020, : 341 - 342
  • [37] Speech Emotion Recognition with Discriminative Feature Learning
    Zhou, Huan
    Liu, Kai
    INTERSPEECH 2020, 2020, : 4094 - 4097
  • [38] CONTRASTIVE UNSUPERVISED LEARNING FOR SPEECH EMOTION RECOGNITION
    Li, Mao
    Yang, Bo
    Levy, Joshua
    Stolcke, Andreas
    Rozgic, Viktor
    Matsoukas, Spyros
    Papayiannis, Constantinos
    Bone, Daniel
    Wang, Chao
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 6329 - 6333
  • [39] Learning Transferable Features for Speech Emotion Recognition
    Marczewski, Alison
    Veloso, Adriano
    Ziviani, Nivio
    PROCEEDINGS OF THE THEMATIC WORKSHOPS OF ACM MULTIMEDIA 2017 (THEMATIC WORKSHOPS'17), 2017, : 529 - 536
  • [40] Speech emotion recognition with unsupervised feature learning
    Huang, Zheng-wei
    Xue, Wen-tao
    Mao, Qi-rong
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2015, 16 (05) : 358 - 366