Modeling accessibility of embodied agents for multi-modal dialogue in complex virtual worlds

Cited by: 0
|
Authors
Sampath, D [1 ]
Rickel, J [1 ]
Affiliations
[1] Univ So Calif, Inst Informat Sci, Marina Del Rey, CA 90292 USA
Source
INTELLIGENT VIRTUAL AGENTS | 2003 / Vol. 2792
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Virtual humans are an important part of immersive virtual worlds, where they interact with human users in the roles of mentors, guides, teammates, companions, or adversaries. A good dialogue model is essential for achieving realistic interaction between humans and agents. Any such model requires modeling the accessibility of individuals, so that agents know which individuals are accessible for communication, by what modality (e.g., speech, gestures), and to what degree they can see or hear each other. This work presents a computational model of accessibility that is domain independent and capable of handling multiple individuals inhabiting a complex virtual world.
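The abstract's core idea — each pair of individuals has, per modality, a graded degree of mutual accessibility — could be sketched as follows. This is purely an illustration, not the paper's actual model: the names `Individual`, `RANGES`, and `accessibility`, and the linear distance-attenuation rule, are all assumptions introduced here.

```python
import math
from dataclasses import dataclass


@dataclass
class Individual:
    """A hypothetical individual in the virtual world, with a 2-D position."""
    name: str
    x: float
    y: float


# Assumed per-modality perception ranges, in world units.
RANGES = {"speech": 10.0, "gesture": 20.0}


def degree(distance: float, max_range: float) -> float:
    """Linear attenuation: 1.0 at zero distance, 0.0 at or beyond max_range."""
    return max(0.0, 1.0 - distance / max_range)


def accessibility(a: Individual, b: Individual) -> dict[str, float]:
    """Return {modality: degree} describing how well `a` can perceive `b`."""
    d = math.hypot(a.x - b.x, a.y - b.y)
    return {modality: degree(d, r) for modality, r in RANGES.items()}


guide = Individual("guide", 0.0, 0.0)
user = Individual("user", 5.0, 0.0)
print(accessibility(guide, user))  # → {'speech': 0.5, 'gesture': 0.75}
```

At 5 units apart, speech (shorter range) degrades faster than gesture, so an agent could prefer gesturing over speaking — the kind of modality choice the abstract's dialogue model is meant to support.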
Pages: 119-126
Page count: 8
Related Papers
50 records in total
  • [41] Towards Emotion-aided Multi-modal Dialogue Act Classification
    Saha, Tulika
    Patra, Aditya Prakash
    Saha, Sriparna
    Bhattacharyya, Pushpak
    58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 4361 - 4372
  • [42] Preliminary Study of Multi-modal Dialogue System for Personal Robot with IoTs
    Yamasaki, Shintaro
    Matsui, Kenji
    DISTRIBUTED COMPUTING AND ARTIFICIAL INTELLIGENCE, 2018, 620 : 286 - 292
  • [43] A NOVEL METHOD FOR AUTOMATICALLY GENERATING MULTI-MODAL DIALOGUE FROM TEXT
    Prendinger, Helmut
    Piwek, Paul
    Ishizuka, Mitsuru
    INTERNATIONAL JOURNAL OF SEMANTIC COMPUTING, 2007, 1 (03) : 319 - 334
  • [44] Towards Sentiment-Aware Multi-Modal Dialogue Policy Learning
    Saha, Tulika
    Saha, Sriparna
    Bhattacharyya, Pushpak
    COGNITIVE COMPUTATION, 2022, 14 (01) : 246 - 260
  • [45] Multi-scale, multi-modal neural modeling and simulation
    Ishii, Shin
    Diesmann, Markus
    Doya, Kenji
    NEURAL NETWORKS, 2011, 24 (09) : 917 - 917
  • [46] Design of Seamless Multi-modal Interaction Framework for Intelligent Virtual Agents in Wearable Mixed Reality Environment
    Ali, Ghazanfar
    Le, Hong-Quan
    Kim, Junho
    Hwang, Seung-Won
    Hwang, Jae-In
    PROCEEDINGS OF THE 32ND INTERNATIONAL CONFERENCE ON COMPUTER ANIMATION AND SOCIAL AGENTS (CASA 2019), 2019, : 47 - 52
  • [47] Improved Multi-kernel SVM for Multi-modal and Imbalanced Dialogue Act Classification
    Zhou, Yucan
    Cui, Xiaowei
    Hu, Qinghua
    Jia, Yuan
    2015 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2015,
  • [48] VIRTUAL PROTOTYPING AND MULTI-MODAL INTERFACES TO TEST THE CONTROL OF AN ORTHOSIS
    Pujana-Arrese, Aron
    Landaluze, Joseba
    Gimeno, Jesus
    Fernandez, Marcos
    6TH INTERNATIONAL INDUSTRIAL SIMULATION CONFERENCE 2008, 2008, : 173 - +
  • [49] Software infrastructure for interactive, multi-modal virtual and augmented realities
    Martin, GA
    Daly, J
    Washburn, DA
    Lazarus, T
    Goldiez, B
    ISAS/CITSA 2004: International Conference on Cybernetics and Information Technologies, Systems and Applications and 10th International Conference on Information Systems Analysis and Synthesis, Vol 4, Proceedings, 2004, : 13 - 18
  • [50] Community Structure and Multi-Modal Oscillations in Complex Networks
    Dorrian, Henry
    Borresen, Jon
    Amos, Martyn
    PLOS ONE, 2013, 8 (10):