Formal model of human erroneous behavior for safety analysis in collaborative robotics

Cited by: 43
Authors
Askarpour, Mehrnoosh [1 ]
Mandrioli, Dino [1 ]
Rossi, Matteo [1 ]
Vicentini, Federico [2 ]
Affiliations
[1] Politecn Milan, DEIB, Milan, Italy
[2] Natl Res Council Italy, STIIMA, Rome, Italy
Keywords
Formal verification; Human modeling; Safety analysis; Human-robot collaboration; HUMAN-AUTOMATION INTERACTION; SYSTEMATIC ANALYSIS; DESIGN; VERIFICATION; PART;
DOI
10.1016/j.rcim.2019.01.001
CLC Number
TP39 [Computer Applications];
Subject Classification Codes
081203; 0835;
Abstract
Recent developments in manufacturing technologies, also known as Industry 4.0, seek to build Smart Factories where supply chains and production lines are equipped with a higher level of automation. However, this significant innovation does not entirely eliminate the need for the presence of human operators; on the contrary, it requires them to collaborate with robots and execute hybrid tasks. Thus, creating safe workspaces for human operators is crucial for the future of factories where humans and robots collaborate closely in common workspaces. The uncertainty of human behavior, and consequently of the actual execution of workflows, poses significant challenges to the safety of collaborative applications. This paper extends our earlier work, a formal verification methodology to analyze the safety of collaborative robotics applications (Askarpour et al. 2017) [1], with a rich non-deterministic formal model of operator behaviors that captures the hazardous situations resulting from human errors. The model allows safety engineers to refine their designs until all plausible erroneous behaviors are considered and mitigated. The solidity of the proposed approach is evaluated on a pair of real-life case studies.
Pages: 465-476
Page count: 12
Related Papers
50 records total
  • [1] Safety Assessment of Collaborative Robotics Through Automated Formal Verification
    Vicentini, Federico
    Askarpour, Mehrnoosh
    Rossi, Matteo G.
    Mandrioli, Dino
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2020, 36 (01) : 42 - 61
  • [2] A formal method for assessing the impact of task-based erroneous human behavior on system safety
    Bolton, Matthew L.
    Molinaro, Kylie A.
    Houser, Adam M.
    [J]. RELIABILITY ENGINEERING & SYSTEM SAFETY, 2019, 188 : 168 - 180
  • [3] How to Formally Model Human in Collaborative Robotics
    Askarpour, Mehrnoosh
    [J]. ELECTRONIC PROCEEDINGS IN THEORETICAL COMPUTER SCIENCE, 2020, (329): : 1 - 14
  • [4] Terminology in safety of collaborative robotics
    Vicentini, Federico
    [J]. ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2020, 63
  • [5] A Formal Control Architecture for Collaborative Robotics Applications
    Zanchettin, Andrea Maria
    Marconi, Mattia
    Ongini, Carlo
    Rossi, Roberto
    Rocco, Paolo
    [J]. PROCEEDINGS OF THE 2020 IEEE INTERNATIONAL CONFERENCE ON HUMAN-MACHINE SYSTEMS (ICHMS), 2020, : 654 - 657
  • [6] A formal method for including the probability of erroneous human task behavior in system analyses
    Bolton, Matthew L.
    Zheng, Xi
    Kang, Eunsuk
    [J]. RELIABILITY ENGINEERING & SYSTEM SAFETY, 2021, 213
  • [7] Properties for formally assessing the performance level of human-human collaborative procedures with miscommunications and erroneous human behavior
    Pan, Dan
    Bolton, Matthew L.
    [J]. INTERNATIONAL JOURNAL OF INDUSTRIAL ERGONOMICS, 2018, 63 : 75 - 88
  • [8] A Reinforcement Model for Collaborative Security and Its Formal Analysis
    Misra, Janardan
    Saha, Indranil
    [J]. NEW SECURITY PARADIGMS WORKSHOP 2009, PROCEEDINGS, 2009, : 101 - 114
  • [9] Editorial: Safety in Collaborative Robotics and Autonomous Systems
    Dani, Ashwin
    Kan, Zhen
    Kamalapurkar, Rushikesh
    Gans, Nicholas
    [J]. FRONTIERS IN ROBOTICS AND AI, 2022, 9
  • [10] Safety 4.0 for Collaborative Robotics in the Factories of the Future
    Caruana, Luca
    Francalanza, Emmanuel
    [J]. FME TRANSACTIONS, 2021, 49 (04): : 842 - 850