A conversational paradigm for multimodal human interaction

Cited: 0
Authors
Quek, F [1 ]
Institution
[1] Wright State Univ, CSE Dept, Vis Interfaces & Sys Lab VISLab, Dayton, OH 45435 USA
DOI
10.1109/AIPR.2001.991207
CLC Classification
TP18 [Theory of artificial intelligence];
Subject Classification
081104; 0812; 0835; 1405
Abstract
We present an alternative to the manipulative and semaphoric gesture recognition paradigms. Human multimodal communicative behaviors form a tightly integrated whole. We present a paradigm for multimodal analysis of natural discourse based on a feature-decompositive, psycholinguistically derived model that permits us to access the underlying structure and intent of multimodal communicative discourse. We outline the psycholinguistics that drive our paradigm, the Catchment concept that facilitates our getting a computational handle on discourse entities, and summarize some approaches and results that realize this vision. We show examples of such discourse-structuring features as handedness, types of symmetry, gaze-at-interlocutor and hand 'origos'. Such analysis is an alternative to the 'recognition of one discrete gesture out of k stylized whole gesture models' paradigm.
Pages: 80 - 86
Page count: 7
Related Papers
50 records total
  • [31] Conversational Grounding in Multimodal Dialog Systems
    Mohapatra, Biswesh
    PROCEEDINGS OF THE 25TH INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, ICMI 2023, 2023, : 706 - 710
  • [32] A Multimodal Model for Predicting Conversational Feedbacks
    Boudin, Auriane
    Bertrand, Roxane
    Rauzy, Stephane
    Ochs, Magalie
    Blache, Philippe
    TEXT, SPEECH, AND DIALOGUE, TSD 2021, 2021, 12848 : 537 - 549
  • [33] Muxi: a Multimodal Conversational Interface for the Metaverse
    Barra, Paola
    Cantone, Andrea Antonio
    Francese, Rita
    Giammetti, Marco
    Sais, Raffaele
    Santosuosso, Otino Pio
    Sepe, Aurelio
    Spera, Simone
    Tortora, Genoveffa
    Vitiello, Giuliana
    PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON ADVANCED VISUAL INTERFACES, AVI 2024, 2024
  • [34] Multimodal Interfaces in Support of Human-Human Interaction
    Waibel, Alex
    GESTURE IN EMBODIED COMMUNICATION AND HUMAN-COMPUTER INTERACTION, 2010, 5934 : 243 - 244
  • [35] Synthesizing multimodal utterances for conversational agents
    Kopp, S
    Wachsmuth, I
    COMPUTER ANIMATION AND VIRTUAL WORLDS, 2004, 15 (01) : 39 - 52
  • [36] Multimodal Backchannels for Embodied Conversational Agents
    Bevacqua, Elisabetta
    Pammi, Sathish
    Hyniewska, Sylwia Julia
    Schroeder, Marc
    Pelachaud, Catherine
    INTELLIGENT VIRTUAL AGENTS, IVA 2010, 2010, 6356 : 194 - 200
  • [37] DISTANCE EDUCATION AND THE CONVERSATIONAL PARADIGM - REPLY
    HOLMBERG, B
    EDUCATIONAL & TRAINING TECHNOLOGY INTERNATIONAL, 1991, 28 (01): : 71 - 73
  • [38] Multimodal output for a conversational telephony system
    Mast, M
    Günther, C
    Kunzmann, S
    Ross, T
    2000 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, PROCEEDINGS VOLS I-III, 2000, : 293 - 296
  • [39] Incremental multimodal feedback for conversational agents
    Kopp, Stefan
    Stocksmeier, Thorsten
    Gibbon, Dafydd
    INTELLIGENT VIRTUAL AGENTS, PROCEEDINGS, 2007, 4722 : 139 - +
  • [40] Contextual factors and adaptative multimodal human-computer interaction: Multi-level specification of emotion and expressivity in Embodied Conversational Agents
    Lamolle, M
    Mancini, M
    Pelachaud, C
    Abrilian, S
    Martin, JC
    Devillers, L
    MODELING AND USING CONTEXT, PROCEEDINGS, 2005, 3554 : 225 - 239