Multimodal Interaction for Human-Robot Teams

Cited by: 0
Authors
Burke, Dustin [1 ]
Schurr, Nathan [1 ]
Ayers, Jeanine [1 ]
Rousseau, Jeff [1 ]
Fertitta, John [1 ]
Carlin, Alan [1 ]
Dumond, Danielle [1 ]
Affiliation
[1] Aptima Inc, Woburn, MA 01801 USA
Keywords
Human-robot interaction; human-robot teams; multimodal control; context-aware; wearable computing; autonomy; manned-unmanned teaming
DOI
10.1117/12.2016334
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Unmanned ground vehicles have the potential to support small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. For such autonomous systems to integrate with the team, we must move beyond current heads-down teleoperation methods, which demand intensive human attention and degrade the operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development, and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures to command a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion-sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables accurate, continuous gesture recognition without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.
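The abstract does not describe the implementation, so the Python sketch below is purely illustrative: it shows one way partially redundant gesture, voice, and tablet commands might be arbitrated, dropping channels whose recognizer confidence is too low so that control degrades gracefully when a modality (e.g., voice during stealth operations) is unusable. All class, command, and parameter names here are hypothetical assumptions, not drawn from the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Command(Enum):
    # Hypothetical command vocabulary for illustration only.
    HALT = auto()
    FOLLOW_ME = auto()
    MOVE_TO_WAYPOINT = auto()
    RETURN_TO_BASE = auto()

class Modality(Enum):
    GESTURE = auto()   # arm/hand gestures from wearable motion sensors
    VOICE = auto()     # spoken commands
    TABLET = auto()    # waypoint selection on the overhead map

@dataclass
class ModalInput:
    modality: Modality
    command: Command
    confidence: float                  # recognizer confidence in [0, 1]
    waypoint: Optional[tuple] = None   # (x, y) for map-based commands

class MultimodalArbiter:
    """Pick one command from partially redundant input channels."""

    def __init__(self, min_confidence: float = 0.6):
        self.min_confidence = min_confidence

    def arbitrate(self, inputs):
        # Ignore channels whose recognizers are not confident enough;
        # this is what lets the system shed a degraded modality.
        usable = [i for i in inputs if i.confidence >= self.min_confidence]
        if not usable:
            return None  # nothing reliable; the operator must retry

        # Redundancy as confirmation: if several channels agree on the same
        # command, prefer that command over a lone high-confidence channel.
        groups = {}
        for i in usable:
            groups.setdefault(i.command, []).append(i)
        agreeing = [g for g in groups.values() if len(g) > 1]
        if agreeing:
            best = max(agreeing, key=lambda g: sum(i.confidence for i in g))
            return max(best, key=lambda i: i.confidence)
        return max(usable, key=lambda i: i.confidence)

if __name__ == "__main__":
    arbiter = MultimodalArbiter()
    chosen = arbiter.arbitrate([
        ModalInput(Modality.GESTURE, Command.HALT, confidence=0.82),
        ModalInput(Modality.VOICE, Command.HALT, confidence=0.40),  # masked by noise
        ModalInput(Modality.TABLET, Command.MOVE_TO_WAYPOINT, 0.95, waypoint=(12.0, 4.5)),
    ])
    print(chosen)
```

In a fielded system the thresholds and tie-breaking rules would depend on the actual recognizers; the sketch only illustrates how partially redundant channels let a command survive the loss of any single modality.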
Pages: 17