Multi-modal interaction for UAS control

Cited: 2
Authors
Taylor, Glenn [1 ]
Purman, Ben [1 ]
Schermerhorn, Paul [1 ]
Garcia-Sampedro, Guillermo [1 ]
Hubal, Robert [1 ]
Crabtree, Kathleen [2 ]
Rowe, Allen [3 ]
Spriggs, Sarah [3 ]
Affiliations
[1] Soar Technol, Ann Arbor, MI 48105 USA
[2] Booz Allen Hamilton, Norfolk, VA USA
[3] Air Force Res Lab, Dayton, OH USA
Keywords
UAS control; natural interfaces; multi-modal interaction;
DOI
10.1117/12.2180020
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Unmanned aircraft systems (UASs) have seen a dramatic increase in military operations over the last two decades. The increased demand for their capabilities on the battlefield has resulted in rapid fielding with user interfaces designed more with engineers than UAS operators in mind. UAS interfaces tend to support tele-operation with a joystick or complex, menu-driven interfaces with a steep learning curve. These approaches require constant attention to manage even a single UAS and increase heads-down time spent searching for and clicking through menus to invoke commands. The time and attention these interfaces demand make it difficult to expand a single operator's span of control to multiple UASs or their sensor systems. In this paper, we explore an alternative to standard menu-based control interfaces. Our approach was to first study how operators might want to task a UAS if they were not constrained by a typical menu interface. Based on this study, we developed a prototype multi-modal dialogue interface for more intuitive control of multiple unmanned aircraft and their sensor systems using speech and map-based gesture/sketch. The system is a two-way interface: the user can draw on a map while speaking commands, and the system provides feedback so the user knows what it is doing. When the system does not understand the user for some reason - for example, because speech recognition failed or because the user did not provide enough information - it engages the user in a dialogue to gather the information needed to perform the command. With the help of UAS operators, we conducted a user study comparing our prototype against a representative menu-based control interface in terms of usability, time on task, and mission effectiveness.
This paper describes a study to gather data about how people might use a natural interface, the system itself, and the results of the user study.
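The abstract's core interaction loop - fuse a spoken command with a map sketch, and fall back to a clarification dialogue when a required slot is missing - can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: the `SpeechCommand`, `SketchInput`, and `fuse` names and slot structure are assumptions introduced here for clarity.

```python
# Hypothetical sketch of multi-modal command fusion with a clarification
# fallback; names and structure are illustrative assumptions, not the
# authors' actual system.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class SpeechCommand:
    """Result of speech understanding; any slot may be missing."""
    action: Optional[str] = None                     # e.g. "surveil"
    target: Optional[Tuple[float, float]] = None     # (lat, lon), if spoken


@dataclass
class SketchInput:
    """Result of map gesture/sketch recognition."""
    point: Optional[Tuple[float, float]] = None      # point drawn on the map


def fuse(speech: SpeechCommand, sketch: SketchInput):
    """Merge the two modalities into one command.

    Returns (command, clarification): either a complete command dict and
    None, or None and a prompt asking the user for the missing information.
    """
    action = speech.action
    # A sketched point can fill a target the user did not speak aloud.
    target = speech.target or sketch.point
    if action is None:
        return None, "What would you like the UAS to do?"
    if target is None:
        return None, f"Where should the UAS {action}?"
    return {"action": action, "target": target}, None


# "Surveil here" + a point drawn on the map -> complete command.
cmd, prompt = fuse(SpeechCommand(action="surveil"),
                   SketchInput(point=(42.28, -83.74)))

# "Surveil" with no sketch -> system asks a clarifying question instead.
cmd2, prompt2 = fuse(SpeechCommand(action="surveil"), SketchInput())
```

In this toy version the sketch simply backfills a missing slot; the actual system additionally grounds references in dialogue context and confirms interpretations back to the user.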
Pages: 8