Qualitative Analysis of a Multimodal Interface System using Speech/Gesture

Cited by: 0
Authors
Baig, Muhammad Zeeshan [1 ]
Kavakli, Manolya [1 ]
Affiliations
[1] Macquarie Univ, Fac Sci & Engn, Dept Comp, VISOR Res Grp, Sydney, NSW 2109, Australia
Keywords
Speech; Gesture; MMIS; 3D Modelling; CAD; Object Manipulation;
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Classification Codes
0808 ; 0809 ;
Abstract
In this paper, we present an upgraded version of the 3D modelling system De-SIGN v3 [1]. The system uses speech and gesture recognition to collect input from the user in real time. These inputs are transferred to the main program, which carries out the required 3D object creation and manipulation operations. The aim of the system is to analyse designer behaviour and the quality of interaction in a virtual reality environment. The system provides basic 3D object modelling functionality. Users performed two sets of experiments: in the first, participants drew 3D objects using keyboard and mouse; in the second, they used speech and gesture inputs for 3D modelling. Evaluation was carried out using questionnaires and task completion ratings. The results showed that speech makes it easy to draw objects, but the system sometimes detects numbers incorrectly; with gestures, it is difficult to hold the hand stable in one position. The completion rate was above 90% with the upgraded system, but precision was low and varied across participants.
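The abstract does not specify how recognized utterances are mapped to modelling operations, so the following is only a minimal illustrative sketch of that step: a command grammar in which an utterance such as "draw cube 5" becomes a creation operation with a numeric size parameter. The shape names, action verbs, and returned fields are all assumptions for illustration, not the paper's actual interface.

```python
import re

# Hypothetical command vocabulary (assumed, not from the paper).
SHAPES = {"cube", "sphere", "cylinder", "cone"}
ACTIONS = {"draw", "move", "rotate", "scale", "delete"}


def parse_command(utterance):
    """Map a recognized utterance to a 3D modelling operation dict.

    Returns None when the utterance does not fit the grammar, letting the
    caller re-prompt the user. Explicit numeric validation matters here
    because, as the study notes, spoken numbers are sometimes recognized
    incorrectly.
    """
    tokens = utterance.lower().split()
    if len(tokens) < 2 or tokens[0] not in ACTIONS:
        return None
    action, target = tokens[0], tokens[1]
    if action == "draw" and target not in SHAPES:
        return None
    # Collect any numeric parameters (e.g. size, angle, offset),
    # silently skipping tokens that are not well-formed numbers.
    params = [float(t) for t in tokens[2:]
              if re.fullmatch(r"-?\d+(\.\d+)?", t)]
    return {"action": action, "target": target, "params": params}
```

For example, `parse_command("draw cube 5")` yields `{"action": "draw", "target": "cube", "params": [5.0]}`, while an out-of-grammar utterance returns `None` so the interface can ask the user to repeat the command.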
Pages: 2811 - 2816
Page count: 6
Related Papers
50 records in total
  • [41] Research on multimodal human-robot interaction based on speech and gesture
    Deng Yongda
    Li Fang
    Xin Huang
    COMPUTERS & ELECTRICAL ENGINEERING, 2018, 72 : 443 - 454
  • [42] Speech to Head Gesture Mapping in Multimodal Human-Robot Interaction
    Aly, Amir
    Tapus, Adriana
    SERVICE ORIENTATION IN HOLONIC AND MULTI-AGENT MANUFACTURING CONTROL, 2012, 402 : 183 - 196
  • [43] Developing strategic readers: a multimodal analysis of a primary school teacher's use of speech, gesture and artefacts
    Shanahan, Lynn E.
    Roof, Lisa M.
    LITERACY, 2013, 47 (03) : 157 - 164
  • [44] Speaker Diarization Using Gesture and Speech
    Gebre, Binyam Gebrekidan
    Wittenburg, Peter
    Drude, Sebastian
    Huijbregts, Marijn
    Heskes, Tom
    15TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2014), VOLS 1-4, 2014, : 582 - 586
  • [45] Speak-As-You-Swipe (SAYS): A Multimodal Interface Combining Speech and Gesture Keyboard Synchronously for Continuous Mobile Text Entry
    Sim, Khe Chai
    ICMI '12: PROCEEDINGS OF THE ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2012, : 555 - 560
  • [46] Robust Algorithms for a Multimodal Biometric System Using Palmprint and Speech
    Raghavendra, R.
    JOURNAL OF INTELLIGENT SYSTEMS, 2011, 20 (04) : 305 - 326
  • [47] Speech and gesture analysis: a new approach
    Natarajan, Jayanthi
    Bajaj, Utkarsh
    Shahi, Dishant
    Soni, Rohan
    Anand, Tarun
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (15) : 20763 - 20779
  • [48] Hand Gesture Mouse Interface System
    Puthukkeril, Verghese Koshy
    Sundar, Shyam E. H.
    Kumar, Nandha P. R.
    2013 INTERNATIONAL CONFERENCE ON HUMAN COMPUTER INTERACTIONS (ICHCI), 2013,
  • [49] Gesture facilitates the syntactic analysis of speech
    Holle, Henning
    Obermeier, Christian
    Schmidt-Kassow, Maren
    Friederici, Angela D.
    Ward, Jamie
    Gunter, Thomas C.
    FRONTIERS IN PSYCHOLOGY, 2012, 3