A Conversational Agent Framework with Multi-modal Personality Expression

Cited by: 23
Authors:
Sonlu, Sinan [1 ]
Gudukbay, Ugur [1 ]
Durupinar, Funda [2 ]
Affiliations:
[1] Bilkent Univ, Dept Comp Engn, TR-06800 Ankara, Turkey
[2] Univ Massachusetts, Dept Comp Sci, 100 Morrisey Blvd, Boston, MA 02125 USA
Source:
ACM TRANSACTIONS ON GRAPHICS | 2021, Vol. 40, No. 1
Keywords:
Conversational agent; OCEAN personality; emotion; Laban movement analysis; nonverbal cues; MODEL; PERCEPTION; PARAMETERS; EMOTIONS; BEHAVIOR;
DOI:
10.1145/3439795
CLC Classification:
TP31 [Computer Software];
Discipline Codes:
081202 ; 0835 ;
Abstract:
Consistently exhibited personalities are crucial elements of realistic, engaging, and behavior-rich conversational virtual agents. Both nonverbal and verbal cues help convey these agents' unseen psychological states, contributing to our effective communication with them. We introduce a comprehensive framework to design conversational agents that express personality through nonverbal behaviors like body movement and facial expressions, as well as verbal behaviors like dialogue selection and voice transformation. We use the OCEAN personality model, which defines personality as a combination of five orthogonal factors: openness, conscientiousness, extraversion, agreeableness, and neuroticism. The framework combines existing personality expression methods with novel ones, such as new algorithms to convey Laban Shape and Effort qualities. We perform Amazon Mechanical Turk studies to analyze how different communication modalities influence our perception of virtual agent personalities and compare their individual and combined effects on each personality dimension. The results indicate that our personality-based modifications are perceived as natural, and each additional modality improves perception accuracy, with the best performance achieved when all the modalities are present. We also report some correlations for the perception of conscientiousness with neuroticism and openness with extraversion.
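The OCEAN model described in the abstract treats personality as a five-dimensional vector whose components drive expressive behavior parameters. A minimal sketch of that idea follows; the trait range, the `gesture_amplitude` mapping, and its coefficients are invented here for illustration and are not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class OceanPersonality:
    """OCEAN traits, each in [-1, 1] (negative pole to positive pole)."""
    openness: float = 0.0
    conscientiousness: float = 0.0
    extraversion: float = 0.0
    agreeableness: float = 0.0
    neuroticism: float = 0.0

def gesture_amplitude(p: OceanPersonality) -> float:
    """Hypothetical linear trait-to-parameter mapping: extraverted and
    open agents gesture more expansively, neurotic agents more
    contractedly. Coefficients are illustrative only."""
    raw = 0.5 + 0.3 * p.extraversion + 0.1 * p.openness - 0.2 * p.neuroticism
    return max(0.0, min(1.0, raw))  # clamp to the animation range [0, 1]

# An extraverted, emotionally stable agent gestures broadly:
p = OceanPersonality(extraversion=0.8, neuroticism=-0.5)
print(round(gesture_amplitude(p), 2))
```

A full system in the paper's spirit would apply many such mappings jointly, across body motion, facial expression, dialogue choice, and voice.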
Pages: 16
Related Papers (50 total):
  • [31] A Generic Participatory Sensing Framework for Multi-modal Datasets
    Wu, Fang-Jing
    Luo, Tie
    [J]. 2014 IEEE NINTH INTERNATIONAL CONFERENCE ON INTELLIGENT SENSORS, SENSOR NETWORKS AND INFORMATION PROCESSING (IEEE ISSNIP 2014), 2014,
  • [32] Unified reconstruction framework for multi-modal medical imaging
    Dong, Di
    Tian, Jie
    Dai, Yakang
    Yan, Guorui
    Yang, Fei
    Wu, Ping
    [J]. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY, 2011, 19 (01) : 111 - 126
  • [33] Adaptive information fusion network for multi-modal personality recognition
    Bao, Yongtang
    Liu, Xiang
    Qi, Yue
    Liu, Ruijun
    Li, Haojie
    [J]. COMPUTER ANIMATION AND VIRTUAL WORLDS, 2024, 35 (03)
  • [34] A framework for unsupervised segmentation of multi-modal medical images
    El-Baz, Ayman
    Farag, Aly
    Ali, Asem
    Gimel'farb, Georgy
    Casanova, Manuel
    [J]. COMPUTER VISION APPROACHES TO MEDICAL IMAGE ANALYSIS, 2006, 4241 : 120 - 131
  • [35] A Unified Framework for Multi-Modal Isolated Gesture Recognition
    Duan, Jiali
    Wan, Jun
    Zhou, Shuai
    Guo, Xiaoyuan
    Li, Stan Z.
    [J]. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2018, 14 (01)
  • [36] maplab 2.0 - A Modular and Multi-Modal Mapping Framework
    Cramariuc, Andrei
    Bernreiter, Lukas
    Tschopp, Florian
    Fehr, Marius
    Reijgwart, Victor
    Nieto, Juan
    Siegwart, Roland
    Cadena, Cesar
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (02): : 520 - 527
  • [37] Multi-modal information integration for interactive multi-agent systems
    Yamaguchi, T
    Sato, M
    Takagi, T
    [J]. JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 1998, 23 (2-4) : 183 - 199
  • [39] SpotFake: A Multi-modal Framework for Fake News Detection
    Singhal, Shivangi
    Shah, Rajiv Ratn
    Chakraborty, Tanmoy
    Kumaraguru, Ponnurangam
    Satoh, Shin'ichi
    [J]. 2019 IEEE FIFTH INTERNATIONAL CONFERENCE ON MULTIMEDIA BIG DATA (BIGMM 2019), 2019, : 39 - 47
  • [40] UniColor: A Unified Framework for Multi-Modal Colorization with Transformer
    Huang, Zhitong
    Zhao, Nanxuan
    Liao, Jing
    [J]. ACM TRANSACTIONS ON GRAPHICS, 2022, 41 (06):