A multimodal interface framework for using hand gestures and speech in virtual environment applications

Cited by: 0
Authors
LaViola, JJ [1]
Institution
[1] Brown Univ, NSF Sci & Technol Ctr Comp Graph & Sci Visualizat, Providence, RI 02912 USA
Source
GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION | 1999 / Vol. 1739
Keywords
DOI
None available
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent approaches to giving users a more natural way of interacting with virtual environment applications have shown that combining more than one mode of input can be both beneficial and intuitive as a communication medium between humans and computer applications. Hand gestures and speech are two of the most logical choices, since users are typically immersed in a virtual world with limited access to traditional input devices such as the keyboard or the mouse. In this paper, we describe an ongoing research project to develop multimodal interfaces that incorporate 3D hand gestures and speech in virtual environments.
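The abstract describes combining 3D hand gestures with speech into single commands. A common way to pair the two modalities is temporal fusion: a spoken command is bound to the gesture event closest to it in time, within some window. The sketch below is a minimal, hypothetical illustration of that idea; the event names, the `fuse` function, and the 0.5 s window are assumptions for illustration, not the paper's actual method.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str    # "gesture" or "speech" (assumed labels)
    token: str       # recognized symbol, e.g. "point", "move"
    timestamp: float # seconds

# Hypothetical fusion window: how far apart in time a gesture and a
# spoken command may be and still be treated as one multimodal command.
FUSION_WINDOW = 0.5

def fuse(events):
    """Pair each speech command with the nearest gesture in time."""
    gestures = [e for e in events if e.modality == "gesture"]
    commands = []
    for s in (e for e in events if e.modality == "speech"):
        near = [g for g in gestures
                if abs(g.timestamp - s.timestamp) <= FUSION_WINDOW]
        if near:
            g = min(near, key=lambda g: abs(g.timestamp - s.timestamp))
            commands.append((s.token, g.token))
    return commands

events = [
    InputEvent("gesture", "point", 1.00),
    InputEvent("speech",  "move",  1.20),  # within window of "point"
    InputEvent("gesture", "grab",  3.00),  # no speech nearby: ignored
]
print(fuse(events))  # → [('move', 'point')]
```

Here "move" (speech) and "point" (gesture) fall within the window and fuse into one command, while the lone "grab" gesture is dropped; real systems typically add probabilistic scoring on top of such a time-window heuristic.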
Pages: 303 - 314
Page count: 12
Related Papers
50 records in total
  • [1] An evaluation of an augmented reality multimodal interface using speech and paddle gestures
    Irawati, Sylvia
    Green, Scott
    Billinghurst, Mark
    Duenser, Andreas
    Ko, Heedong
    ADVANCES IN ARTIFICIAL REALITY AND TELE-EXISTENCE, PROCEEDINGS, 2006, 4282 : 272 - 283
  • [2] Using Hand Gesture and Speech in a Multimodal Augmented Reality Environment
    Dias, Miguel Sales
    Bastos, Rafael
    Fernandes, Joao
    Tavares, Joao
    Santos, Pedro
    GESTURE-BASED HUMAN-COMPUTER INTERACTION AND SIMULATION, 2009, 5085 : 175 - +
  • [3] Processing Iconic Gestures in a Multimodal Virtual Construction Environment
    Froehlich, Christian
    Biermann, Peter
    Latoschik, Marc E.
    Wachsmuth, Ipke
    GESTURE-BASED HUMAN-COMPUTER INTERACTION AND SIMULATION, 2009, 5085 : 187 - 192
  • [4] Comparing hand gestures and a gamepad interface for locomotion in virtual environments
    Zhao, Jingbo
    An, Ruize
    Xu, Ruolin
    Lin, Banghao
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2022, 166
  • [5] Smart Virtual Lab Using Hand Gestures
    Ikram, Warda
    Jeong, Yoonji
    Lee, Byeonggwon
    Um, Kyhyun
    Cho, Kyungeun
    Advanced Multimedia and Ubiquitous Engineering: Future Information Technology, 2015, 352 : 165 - 170
  • [6] Multimodal Media Center Interface Based on Speech, Gestures and Haptic Feedback
    Turunen, Markku
    Hakulinen, Jaakko
    Hella, Juho
    Rajaniemi, Juha-Pekka
    Melto, Aleksi
    Makinen, Erno
    Rantala, Jussi
    Heimonen, Tomi
    Laivo, Tuuli
    Soronen, Hannu
    Hansen, Mervi
    Valkama, Pellervo
    Miettinen, Toni
    Raisamo, Roope
    HUMAN-COMPUTER INTERACTION - INTERACT 2009, PT II, PROCEEDINGS, 2009, 5727 : 54 - +
  • [7] Utilize speech and gestures to realize natural interaction in a virtual environment
    Latoschik, ME
    Frohlich, M
    Jung, B
    Wachsmuth, I
    IECON '98 - PROCEEDINGS OF THE 24TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, VOLS 1-4, 1998, : 2028 - 2033
  • [8] Virtual mouse using hand gestures by skin recognition
    ArulMurugan, S.
    Somaiswariy, S.
    Banu, S. Rosshan
    Angel, R. Ruby
    JOURNAL OF POPULATION THERAPEUTICS AND CLINICAL PHARMACOLOGY, 2023, 30 (07): : E251 - E258
  • [9] Generating Co-Speech Gestures for Virtual Agents from Multimodal Information Based on Transformer
    Yu, Yue
    Shi, Jiande
    2023 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS, VRW, 2023, : 887 - 888
  • [10] Temporal symbolic integration applied to a multimodal system using gestures and speech
    Sowa, T
    Fröhlich, M
    Latoschik, ME
    GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION, 1999, 1739 : 291 - 302