Continuous Safety Verification of Neural Networks

Cited: 2
Authors
Cheng, Chih-Hong [1 ]
Yan, Rongjie [2 ,3 ]
Affiliations
[1] DENSO AUTOMOTIVE Deutschland GmbH, Eching, Germany
[2] ISCAS, State Key Lab Comp Sci, Beijing, Peoples R China
[3] Univ Chinese Acad Sci, Beijing, Peoples R China
Keywords
DNN; safety; formal verification; continuous engineering
DOI
10.23919/DATE51398.2021.9473994
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deploying deep neural networks (DNNs) as core functions in autonomous driving creates unique verification and validation challenges. In particular, the continuous engineering paradigm of gradually perfecting a DNN-based perception component can invalidate previously established safety verification results. This can occur either due to newly encountered examples inside the Operational Design Domain (i.e., input domain enlargement) or due to subsequent parameter fine-tuning of the DNN. This paper considers approaches to transfer results established for the previous DNN safety verification problem to the modified problem setting. By reusing state abstractions, network abstractions, and Lipschitz constants, we develop several sufficient conditions that require formally analyzing only a small part of the DNN in the new problem. The overall concept is evaluated on a 1/10-scale vehicle equipped with a DNN controller that determines the visual waypoint from the perceived image.
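The Lipschitz-constant reuse mentioned in the abstract can be illustrated with a generic weight-perturbation argument: if the fine-tuned network provably stays within the previously verified safety margin of the original network everywhere on the input region, the old verification result carries over without re-running the full analysis. The Python sketch below is a minimal illustration of that style of sufficient condition, not the paper's actual formulation; the function names (output_deviation_bound, result_transfers) and parameters (input_radius, safety_margin) are hypothetical, and the bound assumes a bias-free ReLU feedforward network.

import numpy as np

def spectral(W):
    # Spectral norm (largest singular value) of a weight matrix.
    return float(np.linalg.norm(W, 2))

def output_deviation_bound(old_weights, new_weights, input_radius):
    # Upper bound on max ||f_new(x) - f_old(x)|| over ||x||_2 <= input_radius,
    # for bias-free ReLU networks with identical architecture. Telescoping over
    # hybrid networks that swap one layer at a time gives
    #   ||f_new - f_old|| <= sum_k (prod_{j>k} ||W_j||) * ||W'_k - W_k||
    #                              * (prod_{j<k} ||W'_j||) * ||x||,
    # since ReLU is 1-Lipschitz and maps 0 to 0.
    total = 0.0
    for k in range(len(old_weights)):
        prefix = np.prod([spectral(W) for W in new_weights[:k]])      # new layers before k
        suffix = np.prod([spectral(W) for W in old_weights[k + 1:]])  # old layers after k
        delta = spectral(new_weights[k] - old_weights[k])
        total += suffix * delta * prefix
    return total * input_radius

def result_transfers(old_weights, new_weights, input_radius, safety_margin):
    # Sufficient (not necessary) condition: if the fine-tuned network cannot
    # leave the verified safety margin anywhere on the input region, the
    # previously established result still holds for the new network.
    return output_deviation_bound(old_weights, new_weights, input_radius) < safety_margin

# Example: a tiny two-layer network fine-tuned by a small weight update.
rng = np.random.default_rng(0)
old = [rng.normal(scale=0.1, size=(16, 8)), rng.normal(scale=0.1, size=(2, 16))]
new = [W + rng.normal(scale=1e-4, size=W.shape) for W in old]
print(result_transfers(old, new, input_radius=1.0, safety_margin=0.05))

The appeal of sufficient conditions of this shape is that the check is a cheap norm computation: when it succeeds, the expensive formal analysis of the fine-tuned network can be skipped, and only when it fails does the new problem need full (or partial) re-verification.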
Pages: 1478-1483
Page count: 6
Related Papers
50 records in total
  • [21] Fairify: Fairness Verification of Neural Networks
    Biswas, Sumon
    Rajan, Hridesh
    2023 IEEE/ACM 45TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING, ICSE, 2023, : 1546 - 1558
  • [22] Formal verification for quantized neural networks
    Kovasznai, Gergely
    Kiss, Dorina Hedvig
    Mlinko, Peter
    ANNALES MATHEMATICAE ET INFORMATICAE, 2023, 57 : 36 - 48
  • [23] Formal Verification of Deep Neural Networks
    Narodytska, Nina
    PROCEEDINGS OF THE 2018 18TH CONFERENCE ON FORMAL METHODS IN COMPUTER AIDED DESIGN (FMCAD), 2018, : 1 - 1
  • [25] Randomized approach to verification of neural networks
    Zakrzewski, RR
    2004 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, PROCEEDINGS, 2004, : 2819 - 2824
  • [26] A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability
    Huang, Xiaowei
    Kroening, Daniel
    Ruan, Wenjie
    Sharp, James
    Sun, Youcheng
    Thamo, Emese
    Wu, Min
    Yi, Xinping
    COMPUTER SCIENCE REVIEW, 2020, 37
  • [27] Approximation by neural networks is not continuous
    Kainen, PC
    Kurková, V
    Vogt, A
    NEUROCOMPUTING, 1999, 29 (1-3) : 47 - 56
  • [28] CONTINUOUS UNLEARNING IN NEURAL NETWORKS
    YOUN, CH
    KAK, SC
    ELECTRONICS LETTERS, 1989, 25 (03) : 202 - 203
  • [29] MODELS OF CONTINUOUS NEURAL NETWORKS
    CARMESIN, HO
    PHYSICS LETTERS A, 1991, 156 (3-4) : 183 - 186
  • [30] Safety Verification of Neural Network Controlled Systems
    Claviere, Arthur
    Asselin, Eric
    Garion, Christophe
    Pagetti, Claire
    51ST ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS (DSN-W 2021), 2021, : 47 - 54