Annotation of Utterances for Conversational Nonverbal Behaviors

Cited by: 0
Authors: Funkhouser, Allison [1]; Simmons, Reid [1]
Affiliation: [1] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
Source: SOCIAL ROBOTICS (ICSR 2016) | 2016, Vol. 9979
DOI: 10.1007/978-3-319-47437-3_51
Chinese Library Classification (CLC): TP [Automation technology; computer technology]
Discipline code: 0812
Abstract
Nonverbal behaviors play an important role in communication for both humans and social robots. However, adding contextually appropriate animations by hand is time consuming and does not scale well. Previous researchers have developed automated systems for inserting animations based on utterance text, yet these systems lack human understanding of social context and are still being improved. This work proposes a middle ground where untrained human workers label semantic information, which is input to an automatic system to produce appropriate gestures. To test this approach, untrained workers from Mechanical Turk labeled semantic information, specifically emotion and emphasis, for each utterance, which was used to automatically add animations. Videos of a robot performing the animated dialogue were rated by a second set of participants. Results showed untrained workers are capable of providing reasonable labeling of semantic information and that emotional expressions derived from the labels were rated more highly than control videos. More study is needed to determine the effects of emphasis labels.
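The pipeline the abstract describes has two stages: crowd workers attach emotion and emphasis labels to each utterance, and an automatic system converts those labels into gesture animations. A minimal sketch of such a label-to-markup step is shown below; the tag names, the emotion-to-gesture lookup table, and the function itself are illustrative assumptions, not the authors' actual system.

```python
# Hypothetical sketch of a label-to-animation step: crowd-sourced emotion
# and emphasis labels are turned into gesture markup for an utterance.
# All tag names and the gesture table are illustrative assumptions.

EMOTION_GESTURES = {
    "happy": "smile_nod",
    "sad": "head_droop",
    "neutral": "idle",
}

def annotate_utterance(text, emotion, emphasis_words):
    """Prepend an emotion gesture chosen from the lookup table and wrap
    emphasized words in beat-gesture tags."""
    words = []
    for word in text.split():
        # Compare ignoring trailing punctuation and case.
        if word.strip(".,!?").lower() in emphasis_words:
            words.append(f"<beat>{word}</beat>")
        else:
            words.append(word)
    gesture = EMOTION_GESTURES.get(emotion, "idle")
    return f"<gesture name='{gesture}'/> " + " ".join(words)

print(annotate_utterance("I am so glad you came!", "happy", {"so", "glad"}))
# → <gesture name='smile_nod'/> I am <beat>so</beat> <beat>glad</beat> you came!
```

A rule-based mapping like this keeps the human contribution cheap (two labels per utterance) while delegating the gesture-selection details to the automatic stage, which matches the division of labor the abstract proposes.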
Pages: 521-530 (10 pages)