Multimodal Affect: Perceptually Evaluating an Affective Talking Head

Cited: 0
Authors
Legde, Katharina [1 ]
Castillo, Susana [1 ]
Cunningham, Douglas W. [1 ]
Affiliations
[1] Brandenburg Tech Univ Cottbus, Inst Informat, Lehrstuhl Graf Syst, D-03046 Cottbus, Germany
Keywords
Experimentation; Human Factors; Affective interfaces; emotion; speech; facial animation; STATE-OF-THE-ART; HEARING LIPS; RECOGNITION; BEHAVIOR; SPEECH
DOI
10.1145/2811265
CLC classification
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
Many tasks, such as driving or rapidly sorting items, are best accomplished through direct action. Other tasks, such as giving directions, being guided through a museum, or organizing a meeting, are more easily solved verbally. Since computers are increasingly used in all aspects of daily life, it would be of great advantage if we could communicate with them verbally. Although more advanced interactions with computers are possible, the vast majority of interactions are still based on the WIMP (Window, Icon, Menu, Pointer) metaphor [Hevner and Chatterjee 2010] and therefore rely on simple text and gesture commands. The field of affective interfaces works toward making computers more accessible by giving them (rudimentary) natural-language abilities, including synthesized speech, facial expressions, and virtual body motions. Once the computer is granted a virtual body, however, it must also be given the ability to convey socio-emotional information (such as emotions, intentions, mental states, and expectations) nonverbally, or it will likely be misunderstood. Here, we present a simple affective talking head along with the results of an experiment on the multimodal expression of emotion. The results show that although people can sometimes recognize the intended emotion from the semantic content of the text alone, even when the face conveys no affect, they are considerably better at it when the face also shows emotion. Moreover, when both face and text convey emotion, people can distinguish different levels of emotional intensity.
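To make the reported comparison concrete, the following is a minimal sketch (in Python, not taken from the paper) of how per-condition recognition accuracy in such a forced-choice experiment might be tabulated. The condition names, emotion labels, and trial data below are hypothetical placeholders, not the study's actual design or results.

    # Hypothetical sketch: tabulate emotion-recognition accuracy per condition.
    # Conditions and trial data are invented for illustration only.
    from collections import defaultdict

    # Each trial: (condition, intended emotion, participant's response)
    trials = [
        ("text_only",     "happy", "happy"),
        ("text_only",     "sad",   "angry"),
        ("text_and_face", "happy", "happy"),
        ("text_and_face", "sad",   "sad"),
    ]

    hits = defaultdict(int)    # correct responses per condition
    totals = defaultdict(int)  # trials per condition

    for condition, intended, response in trials:
        totals[condition] += 1
        hits[condition] += int(response == intended)

    for condition in sorted(totals):
        accuracy = hits[condition] / totals[condition]
        print(f"{condition}: {accuracy:.0%} correct "
              f"({hits[condition]}/{totals[condition]})")

Under the paper's finding, one would expect the text_and_face condition to show markedly higher accuracy than text_only.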
Pages: 17
Related papers
50 records in total (first 10 shown)
  • [1] A model for multimodal dialogue system output applied to an animated talking head
    Beskow, Jonas
    Edlund, Jens
    Nordstrand, Magnus
    SPOKEN MULTIMODAL HUMAN-COMPUTER DIALOGUE IN MOBILE ENVIRONMENTS, 2005, 28 : 93 - 113
  • [2] Evaluating perceptually prefiltered video
    Steiger, O
    Ebrahimi, T
    Cavallaro, A
    2005 IEEE International Conference on Multimedia and Expo (ICME), Vols 1 and 2, 2005, : 1291 - 1294
  • [3] Evaluating a 3-D virtual talking head on pronunciation learning
    Peng, Xiaolan
    Chen, Hui
    Wang, Lan
    Wang, Hongan
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2018, 109 : 26 - 40
  • [4] Enhancing Multimodal Affect Recognition with Multi-Task Affective Dynamics Modeling
    Henderson, Nathan
    Min, Wookhee
    Rowe, Jonathan
    Lester, James
    2021 9TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII), 2021
  • [5] Evaluating Multimodal Wearable Sensors for Quantifying Affective States and Depression With Neural Networks
    Ahmed, Abdullah
    Ramesh, Jayroop
    Ganguly, Sandipan
    Aburukba, Raafat
    Sagahyroon, Assim
    Aloul, Fadi
    IEEE SENSORS JOURNAL, 2023, 23 (19) : 22788 - 22802
  • [6] REFLECTIONS OF A TALKING HEAD
    POST, JM
    POLITICAL PSYCHOLOGY, 1993, 14 (03) : 559 - 564
  • [7] Confessions of a talking head
    Peck, A
    ACADEME-BULLETIN OF THE AAUP, 1996, 82 (04): 28 - &
  • [8] Confessions of a talking head
    Tippee, B
    OIL & GAS JOURNAL, 1998, 96 (28) : 19 - 19
  • [9] Training a talking head
    Cohen, MM
    Massaro, DW
    Clark, R
    FOURTH IEEE INTERNATIONAL CONFERENCE ON MULTIMODAL INTERFACES, PROCEEDINGS, 2002, : 499 - 504
  • [10] Generalization of Affective Learning About Faces to Perceptually Similar Faces
    Verosky, Sara C.
    Todorov, Alexander
    PSYCHOLOGICAL SCIENCE, 2010, 21 (06) : 779 - 785