A conversational paradigm for multimodal human interaction

Cited: 0
Authors
Quek, F [1 ]
Affiliation
[1] Wright State Univ, CSE Dept, Vis Interfaces & Sys Lab VISLab, Dayton, OH 45435 USA
Keywords
DOI
10.1109/AIPR.2001.991207
CLC number
TP18 [Artificial intelligence theory];
Subject classification code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present an alternative to the manipulative and semaphoric gesture recognition paradigms. Human multimodal communicative behaviors form a tightly integrated whole. We present a paradigm for multimodal analysis of natural discourse based on a feature-decompositive, psycholinguistically derived model that permits us to access the underlying structure and intent of multimodal communicative discourse. We outline the psycholinguistics that drives our paradigm, the Catchment concept that gives us a computational handle on discourse entities, and summarize some approaches and results that realize this vision. We show examples of such discourse-structuring features as handedness, types of symmetry, gaze-at-interlocutor, and hand 'origos'. Such analysis is an alternative to the 'recognition of one discrete gesture out of k stylized whole-gesture models' paradigm.
Pages: 80-86
Page count: 7
Related papers
50 items
  • [41] A Quantum-Like multimodal network framework for modeling interaction dynamics in multiparty conversational sentiment analysis
    Zhang, Yazhou
    Song, Dawei
    Li, Xiang
    Zhang, Peng
    Wang, Panpan
    Rong, Lu
    Yu, Guangliang
    Wang, Bo
    INFORMATION FUSION, 2020, 62 : 14 - 31
  • [42] Enhancing human-computer interaction with embodied conversational agents
    Foster, Mary Ellen
    Universal Access in Human-Computer Interaction: Ambient Interaction, Pt 2, Proceedings, 2007, 4555 : 828 - 837
  • [43] Challenges in Exploiting Conversational Memory in Human-Agent Interaction
    Campos, Joana
    Kennedy, James
    Lehman, Jill F.
    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS (AAMAS' 18), 2018, : 1649 - 1657
  • [44] Conversational agent or direct manipulation in human-system interaction
    den Os, E
    Boves, L
    Rossignol, S
    ten Bosch, L
    Vuurpijl, L
    SPEECH COMMUNICATION, 2005, 47 (1-2) : 194 - 207
  • [45] The Vernissage Corpus: A Conversational Human-Robot-Interaction Dataset
    Jayagopi, Dinesh Babu
    Sheikhi, Samira
    Klotz, David
    Wienke, Johannes
    Odobez, Jean-Marc
    Wrede, Sebastien
    Khalidov, Vasil
    Nguyen, Laurent
    Wrede, Britta
    Gatica-Perez, Daniel
    PROCEEDINGS OF THE 8TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2013), 2013, : 149+
  • [46] How to Improve Human-Robot Interaction with Conversational Fillers
    Wigdor, Noel
    de Greeff, Joachim
    Looije, Rosemarijn
    Neerincx, Mark A.
    2016 25TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2016, : 219 - 224
  • [47] MULTIMODAL THERAPY PARADIGM
    WILLIAMS, TE
    BULLETIN OF THE BRITISH PSYCHOLOGICAL SOCIETY, 1982, 35 (JAN): 37
  • [48] Multimodal Interfaces of Human-Computer Interaction
    Karpov, A. A.
    Yusupov, R. M.
    HERALD OF THE RUSSIAN ACADEMY OF SCIENCES, 2018, 88 (01) : 67 - 74
  • [49] Multimodal human interaction analysis in vehicle cockpit
    Portes, Quentin
    Pinquier, Julien
    Lerasle, Frederic
    Carvalho, Jose Mendes
    2021 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2021, : 2118 - 2124
  • [50] Multimodal Emotion Recognition for Human Robot Interaction
    Adiga, Sharvari
    Vaishnavi, D. V.
    Saxena, Suchitra
    Tripathi, Shikha
    2020 7TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING & MACHINE INTELLIGENCE (ISCMI 2020), 2020, : 197 - 203