Modeling Supervisor Safe Sets for Improving Collaboration in Human-Robot Teams

Cited by: 0
Authors
McPherson, David L. [1 ]
Scobee, Dexter R. R. [1 ]
Menke, Joseph [1 ]
Yang, Allen Y. [1 ]
Sastry, S. Shankar [1 ]
Affiliations
[1] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
Source
2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) | 2018
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
When a human supervisor collaborates with a team of robots, the human's attention is divided, and cognitive resources are at a premium. We aim to optimize the distribution of these resources and the flow of attention. To this end, we propose the model of an idealized supervisor to describe human behavior. Such a supervisor employs a potentially inaccurate internal model of the robots' dynamics to judge safety. We represent these safety judgements by constructing a safe set from this internal model using reachability theory. When a robot leaves this safe set, the idealized supervisor will intervene to assist, regardless of whether or not the robot remains objectively safe. False positives, where a human supervisor incorrectly judges a robot to be in danger, needlessly consume supervisor attention. In this work, we propose a method that decreases false positives by learning the supervisor's safe set and using that information to govern robot behavior. We prove that robots behaving according to our approach will reduce the occurrence of false positives for our idealized supervisor model. Furthermore, we empirically validate our approach with a user study that demonstrates a significant (p = 0.0328) reduction in false positives for our method compared to a baseline safety controller.
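As a rough illustration of the idealized-supervisor idea described in the abstract, the Python sketch below reduces the reachability-based safe set to a simple braking-distance test: the supervisor intervenes whenever its (possibly pessimistic) internal model says the robot can no longer stop in time, even if the robot is objectively safe. The double-integrator-style braking model, the names InternalModel, in_safe_set, and max_decel, and all numbers are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch, assuming a 1-D braking scenario in place of full
# Hamilton-Jacobi reachability: an idealized supervisor judges safety
# with a possibly inaccurate internal model of the robot and intervenes
# whenever the robot leaves the safe set induced by that model.

from dataclasses import dataclass


@dataclass
class InternalModel:
    """A belief about the robot's braking capability (hypothetical)."""
    max_decel: float  # assumed maximum deceleration (m/s^2)

    def stopping_distance(self, speed: float) -> float:
        # Distance this model believes the robot needs to come to a stop.
        return speed ** 2 / (2.0 * self.max_decel)


def in_safe_set(model: InternalModel, dist_to_obstacle: float, speed: float) -> bool:
    """Backward-reachability-style check reduced to a braking test:
    the state is 'safe' if the robot can still stop before the obstacle
    under the model's assumed deceleration."""
    return model.stopping_distance(speed) < dist_to_obstacle


# The supervisor's internal model may be more pessimistic than the robot's
# true capability, so it can flag states the robot itself considers safe.
supervisor_model = InternalModel(max_decel=1.0)   # pessimistic belief
true_robot_model = InternalModel(max_decel=3.0)   # actual capability

dist, speed = 2.0, 2.5
objectively_safe = in_safe_set(true_robot_model, dist, speed)
supervisor_intervenes = not in_safe_set(supervisor_model, dist, speed)

# False positive: the robot is objectively safe, yet the supervisor intervenes.
print("false positive:", objectively_safe and supervisor_intervenes)

# The remedy sketched in the abstract: constrain the robot to the (learned)
# supervisor safe set, e.g. by capping speed so the supervisor's own braking
# test passes, which removes such false positives for the idealized model.
safe_speed = (2.0 * supervisor_model.max_decel * dist) ** 0.5
print("speed cap honoring supervisor's safe set:", round(safe_speed, 2))
```

The sketch only conveys the structure of the argument: the paper itself constructs the safe set with reachability theory and learns the supervisor's internal model from observed interventions rather than assuming a known braking parameter.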
Pages: 861-868
Page count: 8
Related Papers
50 records in total
  • [21] An XR-based Approach to Safe Human-Robot Collaboration
    Choi, Sung Ho
    Park, Kyeong-Beom
    Roh, Dong Hyeon
    Lee, Jae Yeol
    Ghasemi, Yalda
    Jeong, Heejin
    2022 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS (VRW 2022), 2022, : 472 - 473
  • [22] Impedance Control of Redundant Manipulators for Safe Human-Robot Collaboration
    Ficuciello, Fanny
    Villani, Luigi
    Siciliano, Bruno
    ACTA POLYTECHNICA HUNGARICA, 2016, 13 (01) : 223 - 238
  • [23] A Novel Constrained Trajectory Planner for Safe Human-robot Collaboration
    Melchiorre, Matteo
    Scimmi, Leonardo Sabatino
    Mauro, Stefano
    Pastorelli, Stefano
    PROCEEDINGS OF THE 19TH INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS (ICINCO), 2022, : 539 - 548
  • [24] Simplifying the AI Planning modeling for Human-Robot Collaboration
    Foderaro, Elisa
    Cesta, Amedeo
    Umbrico, Alessandro
    Orlandini, Andrea
    2021 30TH IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2021, : 1011 - 1016
  • [25] Modeling of Trust Within a Human-Robot Collaboration Framework
    Rabby, Md Khurram Monir
    Khan, Mubbashar Altaf
    Karimoddini, Ali
    Jiang, Steven Xiaochun
    2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2020, : 4267 - 4272
  • [26] Experiments in human-robot teams
    Nielsen, CW
    Goodrich, MA
    Crandall, JW
    MULTI-ROBOT SYSTEMS: FROM SWARMS TO INTELLIGENT AUTOMATA, VOL II, 2003, : 241 - 252
  • [27] Human-Robot Teams: A Review
    Wolf, Franziska Doris
    Stock-Homburg, Ruth
    SOCIAL ROBOTICS, ICSR 2020, 2020, 12483 : 246 - 258
  • [28] Trust, but Verify: Autonomous Robot Trust Modeling in Human-Robot Collaboration
    Alhaji, Basel
    Prilla, Michael
    Rausch, Andreas
    PROCEEDINGS OF THE 9TH INTERNATIONAL USER MODELING, ADAPTATION AND PERSONALIZATION HUMAN-AGENT INTERACTION, HAI 2021, 2021, : 402 - 406
  • [29] Towards a Safe Human-Robot Collaboration Using Information on Human Worker Activity
    Orsag, Luka
    Stipancic, Tomislav
    Koren, Leon
    SENSORS, 2023, 23 (03)
  • [30] Human tracking from quantised sensors: An application to safe human-robot collaboration
    Zanchettin, Andrea Maria
    CONTROL ENGINEERING PRACTICE, 2023, 141