Predicting user mental states in spoken dialogue systems

Cited by: 0
Authors
Zoraida Callejas
David Griol
Ramón López-Cózar
Affiliations
[1] CITIC-UGR, Department of Languages and Computer Systems
[2] University of Granada, Department of Computer Science
[3] Carlos III University of Madrid
Keywords
Emotion Recognition; User Profile; Dialogue System; Baseline System; User Intention;
DOI: not available
Abstract
In this paper we propose a method for predicting the user's mental state in order to develop more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user's needs. The mental state is built from the user's emotional state and intention, and is recognized by a module conceived as an intermediate phase between natural language understanding and dialogue management in the system architecture. We have implemented the method in the UAH system, for which evaluation results with both simulated and real users show that taking the user's mental state into account improves system performance as well as its perceived quality.
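The abstract describes a module that sits between natural language understanding and dialogue management, combining the user's recognized emotion with their predicted intention to adapt the system's behavior. A minimal sketch of that idea follows; it is not the authors' implementation, and all names, emotion labels, and adaptation rules are hypothetical placeholders.

```python
# Illustrative sketch only: a mental-state predictor placed between NLU and
# dialogue management. Class names, labels, and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class MentalState:
    emotion: str     # e.g. "neutral", "angry", "bored" (assumed label set)
    intention: str   # predicted next user intention (dialogue act)

def predict_mental_state(emotion_scores: dict, intention_scores: dict) -> MentalState:
    """Pick the most likely emotion and intention for the current user turn."""
    emotion = max(emotion_scores, key=emotion_scores.get)
    intention = max(intention_scores, key=intention_scores.get)
    return MentalState(emotion, intention)

def adapt_dialogue_strategy(state: MentalState) -> str:
    """Toy adaptation rule: the dialogue manager switches strategy
    depending on the predicted mental state."""
    if state.emotion == "angry":
        return "transfer_or_apologise"
    if state.emotion == "bored":
        return "shorten_prompts"
    return "default_strategy"

# Example turn: the user sounds angry and appears to be repeating a query.
state = predict_mental_state(
    {"neutral": 0.2, "angry": 0.7, "bored": 0.1},
    {"repeat_query": 0.6, "new_query": 0.4},
)
print(state.emotion, state.intention, adapt_dialogue_strategy(state))
```

The point of the intermediate module is that dialogue management receives a richer input (emotion plus intention) than the NLU result alone, which is what enables the per-turn adaptation the abstract claims.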
Related papers
50 items in total
  • [41] Designing the user interface for a natural spoken dialogue system
    Boyce, SJ
    DESIGN OF COMPUTING SYSTEMS: SOCIAL AND ERGONOMIC CONSIDERATIONS, 1997, 21 : 367 - 370
  • [42] Error handling in spoken dialogue systems
    Carlson, R
    Hirschberg, J
    Swerts, M
    SPEECH COMMUNICATION, 2005, 45 (03) : 207 - 209
  • [43] Reinforcement learning for spoken dialogue systems
    Singh, S
    Kearns, M
    Litman, D
    Walker, M
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 12, 2000, 12 : 956 - 962
  • [44] Learning to ground in spoken dialogue systems
    Pietquin, Olivier
    2007 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol IV, Pts 1-3, 2007, : 165 - 168
  • [45] Machine Learning for Spoken Dialogue Systems
    Lemon, Oliver
    Pietquin, Olivier
    INTERSPEECH 2007: 8TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION, VOLS 1-4, 2007, : 1761 - +
  • [46] Confidence measures for spoken dialogue systems
    San-Segundo, R
    Pellom, B
    Hacioglu, K
    Ward, W
    Pardo, JM
    2001 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, PROCEEDINGS, VOLS I-VI, 2001, : 393 - 396
  • [47] A clarification algorithm for spoken dialogue systems
    Lewis, C
    Di Fabbrizio, G
    2005 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOLS 1-5: SPEECH PROCESSING, 2005, : 37 - 40
  • [48] Recognition of emotional states in spoken dialogue with a robot
    Komatani, K
    Ito, R
    Kawahara, T
    Okuno, HG
    INNOVATIONS IN APPLIED ARTIFICIAL INTELLIGENCE, 2004, 3029 : 413 - 423
  • [49] Modeling user behavior online for disambiguating user input in a spoken dialogue system
    Wang, Fangju
    Swegles, Kyle
    SPEECH COMMUNICATION, 2013, 55 (01) : 84 - 98
  • [50] Predicting how it sounds: Re-ranking dialogue prompts based on TTS quality for adaptive Spoken Dialogue Systems
    Boidin, Cedric
    Rieser, Verena
    van der Plas, Lonneke
    Lemon, Oliver
    Chevelu, Jonathan
    INTERSPEECH 2009: 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, VOLS 1-5, 2009, : 2443 - +