Detection of Affective States From Text and Speech for Real-Time Human-Computer Interaction

Cited: 11
Authors
Calix, Ricardo A. [1]
Javadpour, Leili [1]
Knapp, Gerald M. [1]
Affiliation
[1] Louisiana State Univ, Baton Rouge, LA 70803 USA
Keywords
knowledge representation; cognitive processes; language; human-computer interaction
DOI
10.1177/0018720811425922
Chinese Library Classification (CLC)
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology]
Discipline Codes
03; 0303; 030303; 04; 0402
Abstract
Objective: The goal of this work is to develop and test an automated system methodology that can detect emotion from text and speech features.
Background: Affective human-computer interaction will be critical to the success of new systems that will be prevalent in the 21st century. Such systems will need to properly deduce human emotional state before they can determine how best to interact with people.
Method: Corpora and machine learning classification models are used to train and test a methodology for emotion detection. The methodology uses a stepwise approach to detect sentiment in sentences: it first filters out neutral sentences, then distinguishes among positive, negative, and five emotion classes.
Results: Classification between emotion and neutral sentences achieved recall as high as 77% on the University of Illinois at Urbana-Champaign (UIUC) corpus and 61% on the Louisiana State University medical drama (LSU-MD) corpus for emotion samples. Once neutral sentences were filtered out, the methodology achieved accuracy as high as 92.3% for detecting negative sentences.
Conclusion: Results of the feature analysis indicate that speech spectral features are better than speech prosodic features for emotion detection. Accumulated sentiment-composition text features also appear to be very important. This work contributes to the study of human communication by providing a better understanding of how language factors best convey human emotion and how to automate this process.
Application: Results of this study can be used to develop better automated assistive systems that interpret human language and respond to emotions through 3-D computer graphics.
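The stepwise approach described in the abstract (filter neutral sentences first, then classify sentiment among the remainder) can be sketched as a two-stage text classifier. This is a minimal illustration under stated assumptions, not the authors' actual system: the toy sentences, their labels, and the TF-IDF plus logistic-regression models are all placeholders chosen for demonstration; the paper's own pipeline uses its corpora and additional speech features.

```python
# Sketch of a stepwise sentiment pipeline: stage 1 filters neutral
# vs. emotional sentences; stage 2 classifies only the non-neutral ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; labels are hypothetical, not from the paper's corpora.
sentences = [
    "The meeting is at noon.",          # neutral
    "Please close the door.",           # neutral
    "I am so thrilled about this!",     # emotional, positive
    "This is absolutely terrible.",     # emotional, negative
    "What a wonderful surprise!",       # emotional, positive
    "I cannot stand this anymore.",     # emotional, negative
]
neutral_labels = [1, 1, 0, 0, 0, 0]     # 1 = neutral, 0 = emotional

# Stage 1: neutral-vs-emotional filter.
stage1 = make_pipeline(TfidfVectorizer(), LogisticRegression())
stage1.fit(sentences, neutral_labels)

# Stage 2: polarity classifier trained only on the emotional sentences.
emotional = [s for s, n in zip(sentences, neutral_labels) if n == 0]
polarity_labels = ["pos", "neg", "pos", "neg"]
stage2 = make_pipeline(TfidfVectorizer(), LogisticRegression())
stage2.fit(emotional, polarity_labels)

def classify(sentence: str) -> str:
    """Return 'neutral', or the stage-2 polarity for non-neutral input."""
    if stage1.predict([sentence])[0] == 1:
        return "neutral"
    return stage2.predict([sentence])[0]
```

The key design point the abstract highlights is that filtering neutral sentences first lets the second-stage classifier specialize on genuinely affective language, which is where the reported 92.3% negative-sentence accuracy was obtained.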
Pages: 530-545 (16 pages)
Related Papers (50 total)
  • [31] Deng, L., Wang, Y., Wang, K., Acero, A., Hon, H., Droppo, J., Boulis, C., Mahajan, M., Huang, X. D. Speech and Language Processing for Multimodal Human-Computer Interaction. Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology, 2004, 36: 161-187.
  • [32] Bourguet, M. L., Ando, A. Speech Timing Prediction in Multimodal Human-Computer Interaction. Human-Computer Interaction - INTERACT '97, 1997: 453-460.
  • [33] Ren, S., Wu, H., Wan, Y. Construction of Virtual 3D Life-Like Human Hand for Real-Time Human-Computer Interaction. 2015 Seventh International Conference on Advanced Computational Intelligence (ICACI), 2015: 345-350.
  • [34] Rebol, M., Guetl, C., Pietroszek, K. Real-Time Gesture Animation Generation from Speech for Virtual Human Interaction. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21), 2021.
  • [35] Amalberti, R., Carbonell, N., Falzon, P. User Representations of Computer Systems in Human-Computer Speech Interaction. International Journal of Man-Machine Studies, 1993, 38(4): 547-566.
  • [36] Ayala, P., Barandiaran, I., Vicente, D., Graña, M. Exploring Simple Visual Languages for Real-Time Human-Computer Interaction. VECIMS '03: 2003 IEEE International Symposium on Virtual Environments, Human-Computer Interfaces and Measurement Systems, 2003: 107-112.
  • [37] Sha, L., Wang, G., Lin, X., Wang, K. A Framework of Real-Time Hand Gesture Vision-Based Human-Computer Interaction. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2011, E94A(3): 979-989.
  • [38] Akhtiamov, O., Sidorov, M., Karpov, A., Minker, W. Speech and Text Analysis for Multimodal Addressee Detection in Human-Human-Computer Interaction. 18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017), Vols 1-6: Situated Interaction, 2017: 2521-2525.
  • [39] Samara, A., Galway, L., Bond, R., Wang, H. Affective State Detection via Facial Expression Analysis Within a Human-Computer Interaction Context. Journal of Ambient Intelligence and Humanized Computing, 2019, 10(6): 2175-2184.
  • [40] Liu, K., Miao, J., Liao, Z., Luan, X., Meng, L. Dynamic Constraint and Objective Generation Approach for Real-Time Train Rescheduling Model Under Human-Computer Interaction. High-Speed Railway, 2023, 1(4): 248-257.