Inference of Human Beings’ Emotional States from Speech in Human–Robot Interactions

Cited: 0
Authors
Laurence Devillers
Marie Tahon
Mohamed A. Sehili
Agnes Delaborde
Affiliations
[1] LIMSI-CNRS,
[2] Université Paris-Sorbonne IV
Source
International Journal of Social Robotics | 2015 / Vol. 7
Keywords
Human–robot interaction; Emotion recognition; Prediction reliability; Real-life data;
DOI
Not available
Abstract
The challenge of this study is twofold: recognizing emotions from audio signals in a naturalistic Human–Robot Interaction (HRI) environment, and using cross-dataset recognition to evaluate robustness. The originality of this work lies in the parallel use of six emotional models, generated from two training corpora and three acoustic feature sets. The models are obtained from two databases collected in different tasks, and a third, independent real-life HRI corpus (collected within the ROMEO project, http://www.projetromeo.com/) is used for testing. As a primary result, on the four-emotion recognition task, combining the probabilistic outputs of the six systems in a very simple way yields better results than the best baseline system. Moreover, to investigate the potential of fusing many systems' outputs with a "perfect" fusion method, we compute the oracle performance (the oracle counts a sample as correct if at least one of the systems outputs the correct prediction). The oracle score obtained is 73%, while the auto-coherence score on the same corpus (i.e., the performance obtained by using the same data for training and testing) is about 57%. We also experiment with a reliability estimation protocol that exploits the outputs of the multiple systems. Such a reliability measure of an emotion recognition system's decisions could help build a relevant emotional and interactional user profile, which could in turn drive the expressive behavior of the robot.
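The two metrics mentioned in the abstract, the oracle score and a simple probabilistic fusion, can be sketched as follows. This is an illustrative computation under our own assumptions (toy prediction arrays, averaging as the "simplistic" fusion), not the authors' actual code:

```python
import numpy as np

def oracle_accuracy(predictions, labels):
    """Oracle: a sample counts as correct if at least one system predicts its true label."""
    predictions = np.asarray(predictions)   # shape: (n_systems, n_samples)
    labels = np.asarray(labels)             # shape: (n_samples,)
    hit = (predictions == labels).any(axis=0)
    return hit.mean()

def average_fusion(probs):
    """One very simple fusion: average the systems' class probabilities, then take argmax."""
    probs = np.asarray(probs)               # shape: (n_systems, n_samples, n_classes)
    return probs.mean(axis=0).argmax(axis=1)

# Toy example: 3 systems, 4 samples, 4 emotion classes (hypothetical data).
labels = [0, 1, 2, 3]
preds = [[0, 1, 0, 0],
         [1, 1, 2, 0],
         [0, 3, 1, 0]]
print(oracle_accuracy(preds, labels))  # 3 of 4 samples hit by at least one system -> 0.75
```

Note that the oracle is an upper bound on any fusion of these systems' hard decisions, which is why the gap between the 73% oracle score and the 57% auto-coherence score motivates studying fusion and reliability estimation.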
Pages: 451–463 (12 pages)