Multimodal Affect: Perceptually Evaluating an Affective Talking Head

Cited by: 0
Authors
Legde, Katharina [1 ]
Castillo, Susana [1 ]
Cunningham, Douglas W. [1 ]
Affiliations
[1] Brandenburg Tech Univ Cottbus, Inst Informat, Lehrstuhl Graf Syst, D-03046 Cottbus, Germany
Keywords
Experimentation; Human Factors; Affective interfaces; emotion; speech; facial animation
DOI
10.1145/2811265
Chinese Library Classification
TP31 [Computer software]
Discipline classification codes
081202; 0835
Abstract
Many tasks, such as driving or rapidly sorting items, are best accomplished through direct action. Other tasks, such as giving directions, being guided through a museum, or organizing a meeting, are more easily solved verbally. Since computers are increasingly being used in all aspects of daily life, it would be of great advantage if we could communicate verbally with them. Although advanced interactions with computers are possible, the vast majority of interactions are still based on the WIMP (Window, Icon, Menu, Pointer) metaphor [Hevner and Chatterjee 2010] and are therefore conducted via simple text and gesture commands. The field of affective interfaces is working toward making computers more accessible by giving them (rudimentary) natural-language abilities, including synthesized speech, facial expressions, and virtual body motions. Once the computer is granted a virtual body, however, it must be given the ability to use it to nonverbally convey socio-emotional information (such as emotions, intentions, mental state, and expectations), or it will likely be misunderstood. Here, we present a simple affective talking head along with the results of an experiment on the multimodal expression of emotion. The results show that although people can sometimes recognize the intended emotion from the semantic content of the text even when the face does not convey affect, they are considerably better at it when the face also shows emotion. Moreover, when both face and text convey emotion, people can detect different levels of emotional intensity.
Pages: 17