Safety Enforcement for the Verification of Autonomous Systems

Cited by: 7
Authors
de Niz, Dionisio [1 ]
Andersson, Bjorn [1 ]
Moreno, Gabriel [1 ]
Affiliations
[1] Carnegie Mellon Univ, SEI, 4500 Fifth Ave, Pittsburgh, PA 15213 USA
Funding
Andrew W. Mellon Foundation (USA)
Keywords
Autonomous Systems; Verification; Runtime Assurance; Enforcers;
DOI
10.1117/12.2307575
CLC classification
TP [automation technology; computer technology]
Discipline code
0812
Abstract
Verifying that the behavior of an autonomous system is safe is fundamental for safety-critical properties such as preventing crashes in autonomous vehicles. Unfortunately, exhaustive verification techniques fail to scale to the size of real-life systems. Moreover, these systems frequently use algorithms whose runtime behavior cannot be determined at design time (e.g., machine learning algorithms), which means these algorithms cannot be verified at design time. Fortunately, a technique known as runtime assurance can be used in these cases. The strategy runtime assurance uses to verify a system is to add small components (known as enforcers) that monitor the system's output and evaluate whether that output is safe. If the output is safe, the enforcer lets it pass; if it is unsafe, the enforcer replaces it with a safe output. For instance, in a drone system that must be restricted to fly within a constrained area (a.k.a. a geo-fence), an enforcer can monitor the movement commands sent to the drone. If a movement command keeps the drone within the geo-fence, the enforcer lets it pass; if the command would take the drone outside of this area, the enforcer replaces it with a safe command (e.g., hovering). Given that enforcers are small components fully specified at design time, it is possible to use exhaustive verification techniques to prove that they can keep the behavior of the whole system safe (e.g., the drone flying within the geo-fence) even if the system contains unverified code.
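The enforcer pattern described in the abstract can be sketched in a few lines of code. This is an illustrative sketch only, not the paper's implementation; the names (`Command`, `GeofenceEnforcer`, the square-fence model, and the hover fallback) are assumptions chosen to mirror the drone example. The enforcer checks whether each commanded displacement would keep the drone inside the fence; safe commands pass through unchanged, unsafe ones are replaced by the verified-safe hover command.

```python
# Illustrative runtime-assurance enforcer for a drone geo-fence.
# Hypothetical API: the paper does not prescribe these names or this fence model.

from dataclasses import dataclass


@dataclass(frozen=True)
class Command:
    dx: float  # requested displacement along x (meters)
    dy: float  # requested displacement along y (meters)


HOVER = Command(0.0, 0.0)  # the designated safe fallback action


class GeofenceEnforcer:
    def __init__(self, x_max: float, y_max: float):
        # Square fence centered at the origin: |x| <= x_max and |y| <= y_max.
        self.x_max = x_max
        self.y_max = y_max

    def is_safe(self, x: float, y: float, cmd: Command) -> bool:
        # Predict the position after the command and test it against the fence.
        nx, ny = x + cmd.dx, y + cmd.dy
        return abs(nx) <= self.x_max and abs(ny) <= self.y_max

    def enforce(self, x: float, y: float, cmd: Command) -> Command:
        # Pass safe commands through; replace unsafe ones with the safe action.
        return cmd if self.is_safe(x, y, cmd) else HOVER


enforcer = GeofenceEnforcer(x_max=50.0, y_max=50.0)
# A command that stays inside the fence passes through unchanged.
assert enforcer.enforce(0.0, 0.0, Command(10.0, 10.0)) == Command(10.0, 10.0)
# A command that would cross the fence boundary is replaced by hover.
assert enforcer.enforce(45.0, 0.0, Command(10.0, 0.0)) == HOVER
```

Because the enforcer is this small and fully specified at design time, its safety argument (the fence invariant is preserved whenever `enforce` is interposed on every command) is amenable to exhaustive verification even when the component producing the commands is not.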
Pages: 10