Multimodal Interaction for Human-Robot Teams

Cited by: 0
Authors
Burke, Dustin [1]
Schurr, Nathan [1]
Ayers, Jeanine [1]
Rousseau, Jeff [1]
Fertitta, John [1]
Carlin, Alan [1]
Dumond, Danielle [1]
Affiliations
[1] Aptima Inc, Woburn, MA 01801 USA
Keywords
Human-robot interaction; human-robot teams; multimodal control; context-aware; wearable computing; autonomy; manned-unmanned teaming
DOI
10.1117/12.2016334
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Unmanned ground vehicles have the potential to support small dismounted teams by mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent-surveillance capability. For such autonomous systems to integrate with the team, we must move beyond current interaction methods based on heads-down teleoperation, which demand intensive operator attention and degrade the operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development, and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the interaction method best suited to the situational demands. For instance, the operator can silently command a team of robots with arm and hand gestures when maintaining stealth is important. The tablet interface provides an overhead situational map that allows waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion-sensing hardware worn comfortably beneath the operator's clothing or integrated within the uniform, our non-vision-based approach enables accurate, continuous gesture recognition without line-of-sight constraints. To reduce the training needed to operate the system, we designed the interactions around familiar arm and hand gestures.
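The confidence-based, operator-steerable selection among partially redundant modalities described in the abstract can be sketched in a few lines. The following is a minimal illustration of that arbitration idea only, under assumed names (`ModalityInput`, `arbitrate`) and a made-up confidence threshold; it is not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModalityInput:
    source: str        # "gesture", "voice", or "tablet"
    command: str       # e.g. "halt", "follow_me", "goto_waypoint"
    confidence: float  # recognizer confidence in [0, 1]

def arbitrate(inputs: list[ModalityInput],
              suppressed: Optional[set[str]] = None,
              threshold: float = 0.6) -> Optional[ModalityInput]:
    """Pick the most confident command among modalities the operator has
    not suppressed. Returning None (no command forwarded) is the graceful
    failure mode when every channel is noisy or muted."""
    suppressed = suppressed or set()
    usable = [m for m in inputs
              if m.source not in suppressed and m.confidence >= threshold]
    return max(usable, key=lambda m: m.confidence) if usable else None

# Stealth posture: the operator mutes voice, so the gesture channel wins.
readings = [
    ModalityInput("voice", "halt", 0.90),
    ModalityInput("gesture", "halt", 0.80),
    ModalityInput("tablet", "goto_waypoint", 0.40),  # below threshold
]
print(arbitrate(readings, suppressed={"voice"}))
# -> ModalityInput(source='gesture', command='halt', confidence=0.8)
```

The point this sketch captures is that suppressing a channel (e.g. muting voice for stealth) and confidence thresholding act independently: losing one modality merely narrows the candidate set rather than breaking the command pipeline, which is what lets such a system degrade gracefully.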
Pages: 17
Related Papers
50 records in total
• [1] Mortimer, Bruce J. P.; Elliott, Linda R. Information Transfer Within Human Robot Teams: Multimodal Attention Management in Human-Robot Interaction. 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), 2017.
• [2] Su, Hang; Qi, Wen; Chen, Jiahao; Yang, Chenguang; Sandoval, Juan; Laribi, Med Amine. Recent Advancements in Multimodal Human-Robot Interaction. Frontiers in Neurorobotics, 2023, 17.
• [3] Lucignano, Lorenzo; Cutugno, Francesco; Rossi, Silvia; Finzi, Alberto. A Dialogue System for Multimodal Human-Robot Interaction. ICMI'13: Proceedings of the 2013 ACM International Conference on Multimodal Interaction, 2013: 197-204.
• [4] Luo, Ren C.; Wu, Y. C.; Lin, P. H. Multimodal Information Fusion for Human-Robot Interaction. 2015 IEEE 10th Jubilee International Symposium on Applied Computational Intelligence and Informatics (SACI), 2015: 535-540.
• [5] Ubeda, Andres; Ianez, Eduardo; Azorin, Jose M.; Sabater, Jose M.; Garcia, Nicolas M.; Perez, Carlos. Improving Human-Robot Interaction by a Multimodal Interface. IEEE International Conference on Systems, Man and Cybernetics (SMC 2010), 2010: 3580-3585.
• [6] Zhu, Hongbo; Yu, Chuang; Cangelosi, Angelo. Affective Human-Robot Interaction with Multimodal Explanations. Social Robotics (ICSR 2022), Part I, 2022, 13817: 241-252.
• [7] Mocan, Bogdan; Fulea, Mircea; Brad, Stelian. Designing a Multimodal Human-Robot Interaction Interface for an Industrial Robot. Advances in Robot Design and Intelligent Control, 2016, 371: 255-263.
• [8] Stiefelhagen, Rainer; Ekenel, Hazim Kemal; Fugen, Christian; Gieselmann, Petra; Holzapfel, Hartwig; Kraft, Florian; Nickel, Kai; Voit, Michael; Waibel, Alex. Enabling Multimodal Human-Robot Interaction for the Karlsruhe Humanoid Robot. IEEE Transactions on Robotics, 2007, 23(5): 840-851.
• [9] Gong, Tao; Chen, Dan; Wang, Guangping; Zhang, Weicai; Zhang, Junqi; Ouyang, Zhongchuan; Zhang, Fan; Sun, Ruifeng; Ji, Jiancheng Charles; Chen, Wei. Multimodal Fusion and Human-Robot Interaction Control of an Intelligent Robot. Frontiers in Bioengineering and Biotechnology, 2024, 11.
• [10] Rodomagoulakis, I.; Kardaris, N.; Pitsikalis, V.; Mavroudi, E.; Katsamanis, A.; Tsiami, A.; Maragos, P. Multimodal Human Action Recognition in Assistive Human-Robot Interaction. 2016 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, 2016: 2702-2706.