Qualitative Analysis of a Multimodal Interface System using Speech/Gesture

Cited: 0
|
Authors
Baig, Muhammad Zeeshan [1 ]
Kavakli, Manolya [1 ]
Affiliation
[1] Macquarie Univ, Fac Sci & Engn, Dept Comp, VISOR Res Grp, Sydney, NSW 2109, Australia
Keywords
Speech; Gesture; MMIS; 3D Modelling; CAD; Object Manipulation;
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
In this paper, we present an upgraded version of the 3D modelling system De-SIGN v3 [1]. The system uses speech and gesture recognition to collect input from the user in real time; these inputs are then passed to the main program to carry out the required 3D object creation and manipulation operations. The aim of the system is to analyse designer behaviour and the quality of interaction in a virtual reality environment. The system provides the basic functionality for 3D object modelling. Users performed two sets of experiments: in the first, participants drew 3D objects using keyboard and mouse; in the second, speech and gesture inputs were used for 3D modelling. Evaluation was carried out with questionnaires and task completion ratings. The results showed that speech makes it easy to draw objects, but the system sometimes recognises spoken numbers incorrectly; with gestures, it is difficult to hold the hand steady in one position. The completion rate was above 90% with the upgraded system, but precision was low and varied across participants.
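The abstract describes a pipeline in which recognised speech supplies the command and parameters while the tracked hand supplies the 3D position, with jittery gesture input being a noted difficulty. The paper's implementation is not reproduced here; the following minimal Python sketch (all names and the command grammar are hypothetical assumptions, not taken from De-SIGN v3) illustrates how such inputs might be fused, including simple averaging to damp hand jitter:

```python
from dataclasses import dataclass

@dataclass
class SpeechCommand:
    action: str   # e.g. "create" or "move"
    shape: str    # e.g. "cube"
    size: float   # dimension parsed from the recognised digits

@dataclass
class GesturePoint:
    x: float
    y: float
    z: float

def parse_speech(utterance: str) -> SpeechCommand:
    """Parse a recognised utterance such as 'create cube 5'.

    A real recogniser can mis-hear numbers (as the abstract notes);
    here we assume the transcript is already available as text.
    """
    action, shape, size = utterance.lower().split()
    return SpeechCommand(action=action, shape=shape, size=float(size))

def smooth_gesture(samples: list[tuple[float, float, float]]) -> GesturePoint:
    """Average recent hand positions to damp jitter (the
    hand-stabilisation problem mentioned in the abstract)."""
    n = len(samples)
    return GesturePoint(
        sum(p[0] for p in samples) / n,
        sum(p[1] for p in samples) / n,
        sum(p[2] for p in samples) / n,
    )

def fuse(cmd: SpeechCommand, pos: GesturePoint) -> dict:
    """Combine the spoken command with the smoothed hand position
    into a single modelling operation for the main program."""
    return {"op": cmd.action, "shape": cmd.shape,
            "size": cmd.size, "at": (pos.x, pos.y, pos.z)}
```

For example, `fuse(parse_speech("create cube 5"), smooth_gesture([(0, 0, 0), (2, 2, 2)]))` would place a size-5 cube at the averaged hand position (1.0, 1.0, 1.0).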
Pages: 2811-2816
Page count: 6
Related Papers
50 records
  • [1] The performance and cognitive workload analysis of a multimodal speech and visual gesture (mSVG) UAV control interface
    Abioye, Ayodeji Opeyemi
    Prior, Stephen D.
    Saddington, Peter
    Ramchurn, Sarvapali D.
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2022, 147
  • [2] Analysis of Apraxic Speech Errors by Using a System of Speech Gesture Units
    Schulz, S.
    Heim, S.
    Willmes, K.
    Kroeger, B. J.
    SPRACHE-STIMME-GEHOR, 2014, 38 : E7 - E8
  • [3] Multimodal speech-gesture interface for handfree painting on a virtual paper using partial recurrent neural networks as gesture recognizer
    Corradini, A
    Cohen, PR
    PROCEEDING OF THE 2002 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-3, 2002, : 2293 - 2298
  • [4] From a Wizard of Oz experiment to a real time speech and gesture multimodal interface
    Carbini, S.
    Delphin-Poulat, L.
    Perron, L.
    Viallet, J. E.
    SIGNAL PROCESSING, 2006, 86 (12) : 3559 - 3577
  • [5] Using Hand Gesture and Speech in a Multimodal Augmented Reality Environment
    Dias, Miguel Sales
    Bastos, Rafael
    Fernandes, Joao
    Tavares, Joao
    Santos, Pedro
    GESTURE-BASED HUMAN-COMPUTER INTERACTION AND SIMULATION, 2009, 5085 : 175 - +
  • [6] Asynchronous Multimodal Text Entry using Speech and Gesture Keyboards
    Kristensson, Per Ola
    Vertanen, Keith
    12TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2011 (INTERSPEECH 2011), VOLS 1-5, 2011, : 588 - +
  • [7] Designing an Augmented Reality Multimodal Interface for 6DOF Manipulation Techniques Multimodal Fusion Using Gesture and Speech Input for AR
    Ismail, Ajune Wanis
    Billinghurst, Mark
    Sunar, Mohd Shahrizal
    Yusof, Cik Suhaimi
    INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 1, 2019, 868 : 309 - 322
  • [8] Human Gesture Analysis using Multimodal features
    Dan, Luo
    Ekenel, Hazim Kemal
    Jun, Ohya
    2012 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO WORKSHOPS (ICMEW), 2012, : 471 - 476
  • [9] Multimodal Driver Interaction with Gesture, Gaze and Speech
    Aftab, Abdul Rafey
    ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 487 - 492
  • [10] A gaze and speech multimodal interface
    Zhang, QH
    Imamiya, A
    Go, K
    Mao, XY
    24TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS WORKSHOPS, PROCEEDINGS, 2004, : 208 - 213