A multimodal interface framework for using hand gestures and speech in virtual environment applications

Cited: 0
Authors: LaViola, JJ [1]
Affiliation: [1] Brown Univ, NSF Sci & Technol Ctr Comp Graph & Sci Visualizat, Providence, RI 02912 USA
Source: GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION, 1999, Vol. 1739
Keywords: (none listed)
DOI: (not available)
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Recent approaches to providing users with a more natural method of interacting with virtual environment applications have shown that more than one mode of input can be both beneficial and intuitive as a communication medium between humans and computer applications. Hand gestures and speech appear to be two of the most logical choices, since users are typically immersed in a virtual world with limited access to traditional input devices such as the keyboard or the mouse. In this paper, we describe an ongoing research project to develop multimodal interfaces that incorporate 3D hand gestures and speech in virtual environments.
Pages: 303-314 (12 pages)
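The abstract only summarizes the approach at a high level. As a rough, hypothetical illustration of what gesture-plus-speech interaction of this kind can look like, the sketch below fuses a pointing gesture with a spoken command using a simple time window, in the classic "put that there" style. It is not the framework described in the paper; the names (GestureEvent, SpeechEvent, fuse) and the one-second window are assumptions made only for this example.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative event types; the paper does not publish an API, so these
# names and the fusion rule below are assumptions, not LaViola's system.

@dataclass
class GestureEvent:
    kind: str         # e.g. "point", "grab"
    target: str       # object the hand gesture selects
    timestamp: float  # seconds

@dataclass
class SpeechEvent:
    command: str      # e.g. "move", "delete"
    timestamp: float  # seconds

def fuse(gesture: GestureEvent, speech: SpeechEvent,
         window: float = 1.0) -> Optional[str]:
    """Combine a gesture and a spoken command into one action
    if they occur within a small time window of each other."""
    if abs(gesture.timestamp - speech.timestamp) <= window:
        return f"{speech.command} {gesture.target}"
    return None

# Example: pointing at a chair while saying "move" yields "move chair".
if __name__ == "__main__":
    g = GestureEvent(kind="point", target="chair", timestamp=12.3)
    s = SpeechEvent(command="move", timestamp=12.6)
    print(fuse(g, s))  # -> "move chair"
```

The time-window check stands in for whatever integration strategy the actual framework uses; in practice, fusion would also need to handle unmatched events and ambiguous targets.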
Related Papers (50 total)
  • [31] Agrawal, Anupam; Raj, Rohit; Porwal, Shubha. Vision-based Multimodal Human-Computer Interaction using Hand and Head Gestures. 2013 IEEE CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGIES (ICT 2013), 2013: 1288-1292.
  • [32] Messaci, A.; Zenati, N.; Bellarbi, A.; Belhocine, M. 3D Interaction techniques using gestures recognition in virtual environment. 2015 4TH INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING (ICEE), 2015: 167-+.
  • [33] Rehman, Inam Ur; Ullah, Sehat; Khan, Dawar; Khalid, Shah; Alam, Aftab; Jabeen, Gul; Rabbi, Ihsan; Rahman, Haseeb Ur; Ali, Numan; Azher, Muhammad; Nabi, Syed; Khan, Sangeen. Fingertip Gestures Recognition Using Leap Motion and Camera for Interaction with Virtual Environment. ELECTRONICS, 2020, 9 (12): 1-20.
  • [34] Li, Peng; Baills, Florence; Alazard-Guiu, Charlotte; Baque, Lorraine; Prieto, Pilar. A pedagogical note on teaching L2 prosody and speech sounds using hand gestures. JOURNAL OF SECOND LANGUAGE PRONUNCIATION, 2023, 9 (03): 340-349.
  • [35] Patel, Kanubhai K.; Vij, Sanjaykumar. Spatial Learning Using Locomotion Interface to Virtual Environment. IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES, 2012, 5 (02): 170-176.
  • [36] Holzner, C.; Guger, C.; Groenegress, C.; Edlinger, G.; Slater, M. Using A Brain Computer Interface For Virtual Environment Control. ANALYSIS OF BIOMEDICAL SIGNALS AND IMAGES, 2008: 213-215.
  • [37] Kleindienst, Jan; Macek, Tomas; Seredi, Ladislav; Sedivy, Jan. Interaction framework for home environment using speech and vision. IMAGE AND VISION COMPUTING, 2007, 25 (12): 1836-1847.
  • [38] Corradini, A; Cohen, PR. Multimodal speech-gesture interface for handfree painting on a virtual paper using partial recurrent neural networks as gesture recognizer. PROCEEDING OF THE 2002 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-3, 2002: 2293-2298.
  • [39] Bozkurt, Elif; Asta, Shahriar; Ozkul, Serkan; Yemez, Yucel; Erzin, Engin. Multimodal Analysis of Speech Prosody and Upper Body Gestures Using Hidden Semi-Markov Models. 2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013: 3652-3656.
  • [40] Cabrera-Quiros, Laura; Tax, David M. J.; Hung, Hayley. Gestures In-The-Wild: Detecting Conversational Hand Gestures in Crowded Scenes Using a Multimodal Fusion of Bags of Video Trajectories and Body Worn Acceleration. IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (01): 138-147.