Using learning automata in brain emotional learning for speech emotion recognition

Cited by: 8
Authors
Farhoudi Z. [1 ]
Setayeshi S. [2 ]
Rabiee A. [3 ]
Affiliations
[1] Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran
[2] Department of Medical Radiation, Amirkabir University of Technology, Tehran
[3] Department of Computer Science, Dolatabad Branch, Islamic Azad University, Isfahan
Keywords
Brain emotional learning; Emotional state; Learning automata; Neural network; Speech emotion recognition;
DOI
10.1007/s10772-017-9426-0
Abstract
We propose an improved version of the brain emotional learning (BEL) model, trained via learning automata (LA), for speech emotion recognition. Inspired by the limbic system of the mammalian brain, the original BEL model is composed of two neural network components, namely the amygdala and the orbitofrontal cortex. In the modified model, named brain emotional learning based on learning automata (BELBLA), we employ the theory of stochastic LA in the error back-propagation training of the BEL model, reducing the high computational complexity of the traditional gradient method; hence, the performance of the model can be enhanced. For the speech emotion recognition task, we extract standard features, such as energy, pitch, formants, amplitude, zero-crossing rate, and MFCCs, from averaged short-term signals of the Berlin emotional speech dataset. The experimental results show that BELBLA outperforms competing methods, including hidden Markov models, Gaussian mixture models, k-nearest neighbors, support vector machines, and artificial neural networks, on this task. © 2017, Springer Science+Business Media New York.
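The abstract does not give the BELBLA update equations, so the following is only a minimal sketch of the classical BEL structure (amygdala and orbitofrontal cortex weight vectors, output E = A - O) with a reward-driven update plus a stochastic, accept-if-better perturbation standing in for the learning-automata step. The class name BELSketch, all hyperparameters, and the accept/reject rule are illustrative assumptions, not the paper's method.

# Minimal sketch (assumption): classical BEL structure -- amygdala (V) and
# orbitofrontal cortex (W) weight vectors, output E = A - O -- trained with a
# reward-driven update plus a stochastic perturbation that loosely mimics a
# learning-automaton exploration step. Not the paper's exact BELBLA rule.
import numpy as np

class BELSketch:
    def __init__(self, n_features, alpha=0.1, beta=0.05, sigma=0.01, seed=0):
        self.rng = np.random.default_rng(seed)
        self.V = np.zeros(n_features)      # amygdala (excitatory) weights
        self.W = np.zeros(n_features)      # orbitofrontal (inhibitory) weights
        self.alpha, self.beta, self.sigma = alpha, beta, sigma

    def forward(self, s):
        A = self.V @ s                     # amygdala response
        O = self.W @ s                     # orbitofrontal correction
        return A - O                       # emotional output E

    def update(self, s, reward):
        # Moren/Balkenius-style rules: the amygdala only strengthens toward the
        # reward, while the orbitofrontal cortex corrects over-expectation.
        A = self.V @ s
        E = self.forward(s)
        dV = self.alpha * s * max(0.0, reward - A)
        dW = self.beta * s * (E - reward)
        # Stochastic, accept-if-better perturbation standing in for the LA step.
        noise = self.sigma * self.rng.standard_normal(s.shape)
        V_try, W_try = self.V + dV + noise, self.W + dW - noise
        if abs((V_try - W_try) @ s - reward) <= abs(E - reward):
            self.V, self.W = V_try, W_try
        else:
            self.V, self.W = self.V + dV, self.W + dW

In a speech emotion setting, s would be an utterance-level feature vector (for example, statistics of energy, pitch, formants, zero-crossing rate, and MFCCs) and reward a per-emotion target; training one BELSketch per emotion class and taking the largest output is one plausible, but assumed, decision scheme.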
Pages: 553-562
Page count: 9