Multimodal Affect: Perceptually Evaluating an Affective Talking Head

Cited by: 0
Authors
Legde, Katharina [1 ]
Castillo, Susana [1 ]
Cunningham, Douglas W. [1 ]
Affiliations
[1] Brandenburg Tech Univ Cottbus, Inst Informat, Lehrstuhl Graf Syst, D-03046 Cottbus, Germany
Keywords
Experimentation; Human Factors; Affective interfaces; emotion; speech; facial animation; STATE-OF-THE-ART; HEARING LIPS; RECOGNITION; BEHAVIOR
DOI
10.1145/2811265
Chinese Library Classification
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Many tasks, such as driving or rapidly sorting items, are best accomplished through direct actions. Other tasks, such as giving directions, being guided through a museum, or organizing a meeting, are more easily solved verbally. Since computers are increasingly used in all aspects of daily life, it would be a great advantage if we could communicate with them verbally. Although more advanced interactions with computers are possible, the vast majority of interactions are still based on the WIMP (Window, Icon, Menu, Pointer) metaphor [Hevner and Chatterjee 2010] and therefore rely on simple text and gesture commands. The field of affective interfaces is working toward making computers more accessible by giving them (rudimentary) natural-language abilities, including synthesized speech, facial expressions, and virtual body motions. Once the computer is granted a virtual body, however, it must also be given the ability to use that body to convey socio-emotional information nonverbally (such as emotions, intentions, mental states, and expectations), or it will likely be misunderstood. Here, we present a simple affective talking head along with the results of an experiment on the multimodal expression of emotion. The results show that although people can sometimes recognize the intended emotion from the semantic content of the text alone, even when the face conveys no affect, they are considerably better at it when the face also shows emotion. Moreover, when both face and text convey emotion, people can distinguish different levels of emotional intensity.
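The record itself contains no analysis code; as a purely illustrative sketch of the comparison the abstract describes (forced-choice emotion recognition tabulated separately for a text-only and a text-plus-face condition), one might compute per-condition accuracy as below. All data and names here are invented for illustration, not taken from the paper.

from collections import defaultdict

# Each trial: (presentation condition, intended emotion, participant's response).
# These records are hypothetical; the paper's actual stimuli and design differ.
trials = [
    ("text_only",      "happy", "happy"),
    ("text_only",      "sad",   "neutral"),
    ("text_plus_face", "happy", "happy"),
    ("text_plus_face", "sad",   "sad"),
]

correct = defaultdict(int)
total = defaultdict(int)
for condition, intended, response in trials:
    total[condition] += 1
    correct[condition] += (response == intended)  # True counts as 1

for condition in sorted(total):
    print(f"{condition}: {correct[condition]}/{total[condition]} "
          f"({correct[condition] / total[condition]:.0%} correct)")

On the invented data above this prints a higher accuracy for the text_plus_face condition, mirroring the paper's qualitative finding that adding facial affect improves recognition of the intended emotion.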
Pages: 17
Related Papers
50 records in total
  • [21] Talking and Looking: the SmartWeb Multimodal Interaction Corpus
    Schiel, Florian
    Moegele, Hannes
    SIXTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, LREC 2008, 2008, : 1990 - 1994
  • [22] Stimulus-Driven Affective Change: Evaluating Computational Models of Affect Dynamics in Conjunction with Input
    Vanhasbroeck, Niels
    Loossens, Tim
    Anarat, Nil
    Ariens, Sigert
    Vanpaemel, Wolf
    Moors, Agnes
    Tuerlinckx, Francis
    AFFECTIVE SCIENCE, 2022, 3 (03) : 559 - 576
  • [24] Talking Drawings: A Multimodal Pathway for Demonstrating Learning
    Cappello, Marva
    Walker, Nancy T.
    READING TEACHER, 2021, 74 (04): : 407 - 418
  • [25] Bodies in action: Multimodal analysis of walking and talking
    Mondada, Lorenza
    LANGUAGE AND DIALOGUE, 2014, 4 (03) : 357 - 403
  • [26] Evaluating a synthetic talking head using a dual task: Modality effects on speech understanding and cognitive load
    Stevens, Catherine J.
    Gibert, Guillaume
    Leung, Yvonne
    Zhang, Zhengzhi
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2013, 71 (04) : 440 - 454
  • [27] Multimodal dual perception fusion framework for multimodal affective analysis
    Lu, Qiang
    Sun, Xia
    Long, Yunfei
    Zhao, Xiaodi
    Zou, Wang
    Feng, Jun
    Wang, Xuxin
    INFORMATION FUSION, 2025, 115
  • [28] Validating a multilingual and multimodal affective database
    Lopez, Juan Miguel
    Cearreta, Idoia
    Fajardo, Inmaculada
    Garay, Nestor
    USABILITY AND INTERNATIONALIZATION, PT 2, PROCEEDINGS: GLOBAL AND LOCAL USER INTERFACES, 2007, 4560 : 422+
  • [29] GameVibe: a multimodal affective game corpus
    Barthet, Matthew
    Kaselimi, Maria
    Pinitas, Kosmas
    Makantasis, Konstantinos
    Liapis, Antonios
    Yannakakis, Georgios N.
    SCIENTIFIC DATA, 2024, 11 (01)
  • [30] Special Issue on Multimodal Affective Interaction
    Sebe, Nicu
    Aghajan, Hamid
    Huang, Thomas
    Magnenat-Thalmann, Nadia
    Shan, Caifeng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2010, 12 (06) : 477 - 480