Modeling accessibility of embodied agents for multi-modal dialogue in complex virtual worlds

Cited by: 0
Authors
Sampath, D [1 ]
Rickel, J [1 ]
Affiliation
[1] Univ So Calif, Inst Informat Sci, Marina Del Rey, CA 90292 USA
Source
INTELLIGENT VIRTUAL AGENTS | 2003, Vol. 2792
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Virtual humans are an important part of immersive virtual worlds, where they interact with human users in the roles of mentors, guides, teammates, companions, or adversaries. A good dialogue model is essential for achieving realistic interaction between humans and agents. Any such model requires modeling the accessibility of individuals, so that agents know which individuals are accessible for communication, by which modality (e.g., speech, gestures), and to what degree they can see or hear one another. This work presents a computational model of accessibility that is domain-independent and capable of handling multiple individuals inhabiting a complex virtual world.
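The accessibility notion described in the abstract (which individuals can be reached, by which modality, and to what degree) could be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual model: the names `Agent` and `accessibility`, the two ranges, and the linear distance falloff are all assumptions introduced here for illustration.

```python
import math
from dataclasses import dataclass

# Illustrative perception ranges (metres); not taken from the paper.
AUDIBLE_RANGE = 10.0   # beyond this, speech degree drops to 0
VISIBLE_RANGE = 20.0   # beyond this, gesture degree drops to 0

@dataclass
class Agent:
    """An embodied individual at a 2D position in the virtual world."""
    name: str
    x: float
    y: float

def degree(distance: float, max_range: float) -> float:
    """Linear falloff: 1.0 at distance 0, 0.0 at or beyond max_range."""
    return max(0.0, 1.0 - distance / max_range)

def accessibility(a: Agent, b: Agent) -> dict:
    """Per-modality degree to which agent `a` can perceive agent `b`."""
    d = math.hypot(a.x - b.x, a.y - b.y)
    return {
        "speech": degree(d, AUDIBLE_RANGE),
        "gesture": degree(d, VISIBLE_RANGE),
    }

alice = Agent("Alice", 0.0, 0.0)
bob = Agent("Bob", 5.0, 0.0)
print(accessibility(alice, bob))  # speech degrades faster than gesture
```

A real model of this kind would also account for occlusion, ambient noise, and gaze direction; the point here is only the shape of the data: a per-pair, per-modality degree that an agent can consult before choosing how to communicate.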
Pages: 119 - 126
Page count: 8
Related papers
50 items in total
  • [21] Talking to virtual humans: Dialogue models and methodologies for embodied conversational agents
    Traum, David
    MODELLING COMMUNICATION WITH ROBOTS AND VIRTUAL HUMANS, 2008, 4930 : 296 - 309
  • [22] Embodied navigation with multi-modal information: A survey from tasks to methodology
    Wu, Yuchen
    Zhang, Pengcheng
    Gu, Meiying
    Zheng, Jin
    Bai, Xiao
    INFORMATION FUSION, 2024, 112
  • [23] COORDINATED MULTI-MODAL EXPRESSION AND EMBODIED MEANING IN THE EMERGENCE OF SYMBOLIC COMMUNICATION
    Brown, J. Erin
    EVOLUTION OF LANGUAGE, PROCEEDINGS, 2010, : 375 - 376
  • [24] Multi-modal multi-hop interaction network for dialogue response generation
    Zhou, Jie
    Tian, Junfeng
    Wang, Rui
    Wu, Yuanbin
    Yan, Ming
    He, Liang
    Huang, Xuanjing
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 227
  • [25] A MULTI-MODAL VIRTUAL ENVIRONMENT TO TRAIN FOR JOB INTERVIEW
    Hamdi, Hamza
    Richard, Paul
    Suteau, Aymeric
    Saleh, Mehdi
    PECCS 2011: PROCEEDINGS OF THE 1ST INTERNATIONAL CONFERENCE ON PERVASIVE AND EMBEDDED COMPUTING AND COMMUNICATION SYSTEMS, 2011, : 551 - 556
  • [26] VIRTUAL REALITY INTERFACE DESIGN FOR MULTI-MODAL TELEOPERATION
    Kadavasal, Muthukkumar S.
    Oliver, James H.
    WINVR2009: PROCEEDINGS OF THE ASME/AFM WORLD CONFERENCE ON INNOVATIVE VIRTUAL REALITY - 2009, 2009, : 169 - 174
  • [27] A Probabilistic Approach to Multi-Modal Adaptive Virtual Fixtures
    Muehlbauer, Maximilian
    Hulin, Thomas
    Weber, Bernhard
    Calinon, Sylvain
    Stulp, Freek
    Albu-Schaeffer, Alin
    Silverio, Joao
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (06) : 5298 - 5305
  • [28] Experiences with Multi-Modal Collaborative Virtual Laboratory (MMCVL)
    Desai, Kevin
    Jin, Rong
    Prabhakaran, Balakrishnan
    Diehl, Paul
    Belmonte, Uriel Haile Hernández
    Ramirez, Victor Ayala
    Johnson, Vinu
    Gans, Murry
    2017 IEEE THIRD INTERNATIONAL CONFERENCE ON MULTIMEDIA BIG DATA (BIGMM 2017), 2017, : 376 - 383
  • [29] Multi-modal Virtual scenario enhances neurofeedback learning
    Cohen, Avihay
    Keynan, Jackob N.
    Jackont, Gilan
    Green, Nilli
    Rashap, Iris
    Shani, Ofir
    Charles, Fred
    Cavazza, Marc
    Hendler, Talma
    Raz, Gal
    FRONTIERS IN ROBOTICS AND AI, 2016, 3
  • [30] COMPLEX SEGREGATION ANALYSIS ON MULTI-MODAL DISTRIBUTIONS
    TAI, JJ
    GROSS, AJ
    BIOMETRICAL JOURNAL, 1989, 31 (01) : 123 - 129