A tension-moderating mechanism for promoting speech-based human-robot interaction

Cited by: 4
Authors
Kanda, T [1]
Iwase, K [1]
Shiomi, M [1]
Ishiguro, H [1]
Affiliations
[1] ATR Intelligent Robot & Commun Labs, Dept Commun Robots, Kyoto, Japan
Keywords
human-robot interaction; emotion recognition; tension emotion; speech-based interaction;
DOI
10.1109/IROS.2005.1545035
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose a method for promoting human-robot interaction based on emotion recognition, with a particular focus on tension emotion. Two types of emotion are expressed over a short time span. One is autonomic emotion caused by a stimulus, such as joy or fear. The other is self-reported emotion, such as tension, which is relatively independent of any single stimulus. In a preliminary experiment, we observed that tension emotion (a self-reported emotion) obstructs the expression of autonomic emotion, which hinders both speech recognition and interaction. Our method is based on detecting and moderating tension emotion. If the robot detects tension emotion, it tries to ease it so that the person interacts more comfortably and expresses autonomic emotions. The robot also retrieves nuances from the expressed emotions to compensate for insufficient speech recognition, which further promotes interaction.
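To make the described control flow concrete, the minimal sketch below illustrates one possible reading of the mechanism: estimate the user's tension, try to ease it first when it is high, and otherwise use the expressed autonomic emotion as a fallback when speech recognition is unreliable. This is not the authors' implementation; the Robot class, the Utterance fields, the thresholds, and the canned responses are all hypothetical assumptions introduced only for illustration.

```python
# Minimal sketch of a tension-moderating interaction step, assuming a simple
# Robot interface and threshold values; none of these names or numbers come
# from the paper.
from dataclasses import dataclass


class Robot:
    """Stand-in robot interface (assumed): it simply prints its utterances."""
    def say(self, text: str) -> None:
        print(f"[robot] {text}")


@dataclass
class Utterance:
    text: str          # speech-recognition hypothesis (possibly unreliable)
    confidence: float  # recognizer confidence in [0, 1]
    emotion: str       # autonomic-emotion label, e.g. "joy", "fear", "neutral"
    tension: float     # estimated tension level in [0, 1]


TENSION_THRESHOLD = 0.6     # assumed cutoff; the paper reports no specific value
CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff for trusting the recognizer


def respond(robot: Robot, u: Utterance) -> None:
    """One interaction step: moderate tension first, then fall back on
    emotional nuance when speech recognition is too unreliable."""
    if u.tension > TENSION_THRESHOLD:
        # Tension obstructs the expression of autonomic emotion and degrades
        # recognition, so the robot first tries to put the person at ease.
        robot.say("No need to be nervous -- let's just chat for a moment.")
    elif u.confidence < CONFIDENCE_THRESHOLD:
        # Recognition is weak: respond to the expressed emotion (the "nuance")
        # rather than to the unreliable transcript.
        if u.emotion == "joy":
            robot.say("I'm glad you like that!")
        elif u.emotion == "fear":
            robot.say("Sorry, I didn't mean to startle you.")
        else:
            robot.say("Could you say that once more?")
    else:
        # Recognition is trustworthy: respond to the content itself.
        robot.say(f"You said: {u.text}")


if __name__ == "__main__":
    respond(Robot(), Utterance("hello robot", confidence=0.3,
                               emotion="joy", tension=0.2))
```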
Pages: 527-532
Page count: 6