Detection of Affective States From Text and Speech for Real-Time Human-Computer Interaction

Cited: 11

Authors
Calix, Ricardo A. [1 ]
Javadpour, Leili [1 ]
Knapp, Gerald M. [1 ]
Affiliations
[1] Louisiana State Univ, Baton Rouge, LA 70803 USA
Keywords
knowledge representation; cognitive processes; language; human-computer interaction
DOI
10.1177/0018720811425922
Chinese Library Classification (CLC)
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology]
Discipline Codes
03; 0303; 030303; 04; 0402
Abstract
Objective: The goal of this work is to develop and test an automated system methodology that can detect emotion from text and speech features.

Background: Affective human-computer interaction will be critical for the success of new systems that will be prevalent in the 21st century. Such systems will need to properly deduce human emotional state before they can determine how to best interact with people.

Method: Corpora and machine learning classification models are used to train and test a methodology for emotion detection. The methodology uses a step-wise approach to detect sentiment in sentences by first filtering out neutral sentences, then distinguishing among positive, negative, and five emotion classes.

Results: Results of the classification between emotion and neutral sentences achieved recall accuracies as high as 77% in the University of Illinois at Urbana-Champaign (UIUC) corpus and 61% in the Louisiana State University medical drama (LSU-MD) corpus for emotion samples. Once neutral sentences were filtered out, the methodology achieved accuracy scores for detecting negative sentences as high as 92.3%.

Conclusion: Results of the feature analysis indicate that speech spectral features are better than speech prosodic features for emotion detection. Accumulated sentiment composition text features appear to be very important as well. This work contributes to the study of human communication by providing a better understanding of how language factors help to best convey human emotion and how to best automate this process.

Application: Results of this study can be used to develop better automated assistive systems that interpret human language and respond to emotions through 3-D computer graphics.
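The step-wise approach described in the Method section can be illustrated with a minimal sketch: a first stage that filters out neutral sentences, and a second stage that assigns polarity to whatever remains. The lexicons, function names, and scoring rule below are invented for illustration and are not the authors' actual features or classifiers (which use trained machine-learning models over text and speech features).

```python
# Hypothetical sketch of a two-stage (step-wise) sentiment pipeline.
# Stage 1 filters out neutral sentences; Stage 2 labels the rest
# positive or negative. The cue lexicons here are toy placeholders.

NEGATIVE_CUES = {"angry", "sad", "terrible", "afraid", "hate"}
POSITIVE_CUES = {"happy", "great", "love", "joy", "wonderful"}


def is_emotional(sentence: str) -> bool:
    """Stage 1: keep a sentence only if it contains any sentiment cue."""
    words = set(sentence.lower().split())
    return bool(words & (NEGATIVE_CUES | POSITIVE_CUES))


def polarity(sentence: str) -> str:
    """Stage 2: label an emotional sentence by comparing cue counts."""
    words = sentence.lower().split()
    neg = sum(w in NEGATIVE_CUES for w in words)
    pos = sum(w in POSITIVE_CUES for w in words)
    return "negative" if neg >= pos else "positive"


def classify(sentence: str) -> str:
    """Run the full step-wise pipeline on one sentence."""
    if not is_emotional(sentence):
        return "neutral"
    return polarity(sentence)
```

In the paper this structure matters because the neutral class dominates real dialogue; removing it first lets the second-stage classifier focus on the harder positive/negative and emotion-class distinctions, which is where the reported 92.3% negative-sentence accuracy is measured.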
Pages: 530-545
Page count: 16