Requirements for navigation through drawings on wearable computers by using speech commands

Cited by: 0
Authors
Reinhardt, J [1 ]
Scherer, RJ [1 ]
Affiliations
[1] Tech Univ Dresden, D-8027 Dresden, Germany
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Wearable computers will strongly impact work on construction sites in the near future. These computers are small in size, and the persons using them should not be hampered in their primary work by input devices such as a mouse, joystick, or keyboard. Under site conditions, a speech recognition interface is often the only practical way to enter data into the computer. However, using voice as the data input medium requires new methods and technologies for navigating through drawings displayed on a wearable computer on site. This paper describes technologies, methods, and algorithms introduced to support navigation through a drawing by means of speech commands. The context for this problem was provided by a joint project of Carnegie Mellon University and TU Dresden that focuses on using wearable computers to efficiently measure the progress of a construction project. Since only a small vocabulary of speech commands is available, the command algorithms must be optimized so that users can navigate to the desired element as quickly as possible.
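The abstract's constraint of a small speech vocabulary combined with fast navigation suggests a recursive-subdivision scheme. The following is a hypothetical sketch (not the authors' method): nine fixed spoken commands, one per cell of a 3x3 grid, let a user narrow the viewport by a factor of three per command, so any region of the drawing is reachable in a number of commands logarithmic in the zoom depth. All names (`Viewport`, `zoom`, the command words) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    x: float  # left edge of the visible region
    y: float  # top edge of the visible region
    w: float  # visible width
    h: float  # visible height

# Nine spoken commands, one per cell of a 3x3 grid (col, row).
GRID = {
    "top left": (0, 0),    "top": (1, 0),    "top right": (2, 0),
    "left": (0, 1),        "center": (1, 1), "right": (2, 1),
    "bottom left": (0, 2), "bottom": (1, 2), "bottom right": (2, 2),
}

def zoom(view: Viewport, command: str) -> Viewport:
    """Zoom the viewport into the grid cell named by the speech command."""
    col, row = GRID[command]
    return Viewport(
        x=view.x + col * view.w / 3,
        y=view.y + row * view.h / 3,
        w=view.w / 3,
        h=view.h / 3,
    )

# Example: three commands narrow a 900x900 drawing to a 33.3x33.3 region.
view = Viewport(0, 0, 900, 900)
for spoken in ["top left", "center", "bottom right"]:
    view = zoom(view, spoken)
print(view)
```

With this scheme, k commands shrink the visible region by 9^k in area, which is one plausible way to meet the paper's stated goal of reaching a desired element quickly with a limited command set.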
Pages: 385-389
Page count: 5
Related papers
(50 items)
  • [21] Modeling and Detecting Student Attention and Interest Level using Wearable Computers
    Zhu, Ziwei
    Ober, Sebastian
    Jafari, Roozbeh
    2017 IEEE 14TH INTERNATIONAL CONFERENCE ON WEARABLE AND IMPLANTABLE BODY SENSOR NETWORKS (BSN), 2017, : 13 - 18
  • [22] A CSCW system for distributed search/collection tasks using wearable computers
    Sumiya, T
    Inoue, A
    Shiba, S
    Kato, J
    Shigeno, H
    Okada, K
    SIXTH IEEE WORKSHOP ON MOBILE COMPUTING SYSTEMS AND APPLICATIONS, PROCEEDINGS, 2004, : 20 - 27
  • [23] Power Wheelchair Navigation Assistance Using Wearable Vibrotactile Haptics
    Devigne, Louise
    Aggravi, Marco
    Bivaud, Morgane
    Balix, Nathan
    Teodorescu, Catalin Stefan
    Carlson, Tom
    Spreters, Tom
    Pacchierotti, Claudio
    Babel, Marie
    IEEE TRANSACTIONS ON HAPTICS, 2020, 13 (01) : 52 - 58
  • [24] Building Innovative Speech Interfaces using Patterns and Antipatterns of Commands for Controlling Loader Cranes
    Majewski, Maciej
    Kacalak, Wojciech
    2016 INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE & COMPUTATIONAL INTELLIGENCE (CSCI), 2016, : 525 - 530
  • [25] Development of Voice Commands in Digital Signage for Improved Indoor Navigation Using Google Assistant SDK
    Sheppard, David
    Felker, Nick
    Schmalzel, John
    2019 IEEE SENSORS APPLICATIONS SYMPOSIUM (SAS), 2019,
  • [26] Providing navigation assistance through ForceHand: a wearable force-feedback glove
    Das, Swagata
    Kurita, Yuichi
    2019 7TH IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (IEEE GLOBALSIP), 2019,
  • [27] Decoding silent speech commands from articulatory movements through soft magnetic skin and machine learning
    Dong, Penghao
    Li, Yizong
    Chen, Si
    Grafstein, Justin T.
    Khan, Irfaan
    Yao, Shanshan
    MATERIALS HORIZONS, 2023, 10 (12) : 5607 - 5620
  • [28] SPEAKING TO, FROM, AND THROUGH COMPUTERS - SPEECH TECHNOLOGIES AND USER-INTERFACE DESIGN
    BENNETT, RW
    GREENSPAN, SL
    SYRDAL, AK
    TSCHIRGI, JE
    WISOWATY, JJ
    AT&T TECHNICAL JOURNAL, 1989, 68 (05): : 17 - 30
  • [29] Speech Synthesis Using Ambiguous Inputs From Wearable Keyboards
    Iwasaki, Matsuri
    Hara, Sunao
    Abe, Masanobu
    2023 ASIA PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE, APSIPA ASC, 2023, : 1172 - 1178
  • [30] Using Noninvasive Wearable Computers to Recognize Human Emotions from Physiological Signals
    Christine Lætitia Lisetti
    Fatma Nasoz
    EURASIP Journal on Advances in Signal Processing, 2004