Qualitative Analysis of a Multimodal Interface System using Speech/Gesture

Cited: 0
Authors
Baig, Muhammad Zeeshan [1 ]
Kavakli, Manolya [1 ]
Affiliations
[1] Macquarie Univ, Fac Sci & Engn, Dept Comp, VISOR Res Grp, Sydney, NSW 2109, Australia
Keywords
Speech; Gesture; MMIS; 3D Modelling; CAD; Object Manipulation;
DOI
Not available
CLC Number
TM [Electrical Technology]; TN [Electronics & Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
In this paper, we present an upgraded version of the 3D modelling system De-SIGN v3 [1]. The system uses speech and gesture recognition to collect input from the user in real time. These inputs are passed to the main program, which carries out the required 3D object creation and manipulation operations. The aim of the system is to analyse designer behaviour and the quality of interaction in a virtual reality environment. The system provides the basic functionality for 3D object modelling. Users performed two sets of experiments: in the first, participants drew 3D objects using keyboard and mouse; in the second, speech and gesture inputs were used for 3D modelling. Evaluation was carried out with questionnaires and task completion ratings. The results showed that with speech it is easy to draw objects, but the system sometimes detects numbers incorrectly; with gestures, it is difficult to hold the hand steady in one position. The completion rate was above 90% with the upgraded system, but precision was low and varied across participants.
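The pipeline the abstract describes (recognized speech and gesture commands routed to the main program, which performs object creation and manipulation) can be sketched roughly as follows. This is a minimal illustrative Python dispatcher, not code from the paper; all names here (`InputEvent`, `ModellingCore`, `dispatch`) are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical recognized-input event; the paper's actual speech and
# gesture recognizers are not specified at this level of detail.
@dataclass
class InputEvent:
    modality: str                 # "speech" or "gesture"
    command: str                  # e.g. "create", "scale"
    args: dict = field(default_factory=dict)

class ModellingCore:
    """Toy stand-in for the main 3D modelling program."""
    def __init__(self):
        self.objects = []

    def create(self, shape="cube", size=1.0):
        # Add a new object to the scene.
        self.objects.append({"shape": shape, "size": size})
        return self.objects[-1]

    def scale(self, index, factor):
        # Manipulate an existing object.
        self.objects[index]["size"] *= factor
        return self.objects[index]

def dispatch(core, event):
    # Route a recognized command to the corresponding operation,
    # regardless of which modality produced it.
    handlers = {"create": core.create, "scale": core.scale}
    return handlers[event.command](**event.args)

core = ModellingCore()
# A spoken "create a cylinder of size 2" followed by a gesture-driven scale:
dispatch(core, InputEvent("speech", "create", {"shape": "cylinder", "size": 2.0}))
dispatch(core, InputEvent("gesture", "scale", {"index": 0, "factor": 1.5}))
```

The point of the sketch is only that both modalities reduce to the same command vocabulary before reaching the modelling core, which is consistent with how the abstract describes the input flow.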
Pages: 2811-2816
Page count: 6
Related Papers
50 records in total
  • [31] Understanding Multimodal User Gesture and Speech Behavior for Object Manipulation in Augmented Reality Using Elicitation
    Williams, Adam S.
    Garcia, Jason
    Ortega, Francisco
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2020, 26 (12) : 3479 - 3489
  • [32] Speech/gesture interface to a visual-computing environment
    Sharma, R
    Zeller, M
    Pavlovic, VI
    Huang, TS
    Lo, Z
    Chu, S
    Zhao, YX
    Phillips, JC
    Schulten, K
    IEEE COMPUTER GRAPHICS AND APPLICATIONS, 2000, 20 (02) : 29 - 37
  • [33] Development of Deskwork Support System Using Pointing Gesture Interface
    Sugi, Masao
    Nakanishi, Hisato
    Nishino, Masataka
    Tamura, Yusuke
    Arai, Tamio
    Ota, Jun
    JOURNAL OF ROBOTICS AND MECHATRONICS, 2010, 22 (04) : 430 - 438
  • [34] Improving the Believability of Virtual Characters Using Qualitative Gesture Analysis
    Mazzarino, Barbara
    Peinado, Manuel
    Boulic, Ronan
    Volpe, Gualtiero
    Wanderley, Marcelo M.
    GESTURE-BASED HUMAN-COMPUTER INTERACTION AND SIMULATION, 2009, 5085 : 48 - +
  • [35] A prototype robot speech interface with multimodal feedback
    Haage, M
    Schötz, S
    Nugues, P
    IEEE ROMAN 2002, PROCEEDINGS, 2002, : 247 - 252
  • [36] A multimodal interface framework for using hand gestures and speech in virtual environment applications
    LaViola, JJ
    GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION, 1999, 1739 : 303 - 314
  • [37] A Combination of Static and Stroke Gesture with Speech for Multimodal Interaction in a Virtual Environment
    Chun, Lam Meng
    Arshad, Haslina
    Piumsomboon, Thammathip
    Billinghurst, Mark
    5TH INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING AND INFORMATICS 2015, 2015, : 59 - 64
  • [38] Multimodal language in bilingual and monolingual children: Gesture production and speech disfluency
    Arslan, Burcu
    Aktan-Erciyes, Asli
    Goeksun, Tilbe
    BILINGUALISM-LANGUAGE AND COGNITION, 2023, 26 (05) : 971 - 983
  • [39] Multimodal language use in Savosavo: Refusing, excluding and negating with speech and gesture
    Bressem, Jana
    Stein, Nicole
    Wegener, Claudia
    PRAGMATICS, 2017, 27 (02): : 173 - 206
  • [40] Learning Co-Speech Gesture for Multimodal Aphasia Type Detection
    Lee, Daeun
    Son, Sejung
    Jeon, Hyolim
    Kim, Seungbae
    Han, Jinyoung
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 9287 - 9303