Continuous Safety Verification of Neural Networks

Cited by: 2
Authors
Cheng, Chih-Hong [1 ]
Yan, Rongjie [2 ,3 ]
Affiliations
[1] DENSO AUTOMOTIVE Deutschland GmbH, Eching, Germany
[2] ISCAS, State Key Lab Comp Sci, Beijing, Peoples R China
[3] Univ Chinese Acad Sci, Beijing, Peoples R China
Keywords
DNN; safety; formal verification; continuous engineering;
DOI
10.23919/DATE51398.2021.9473994
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Deploying deep neural networks (DNNs) as core functions in autonomous driving creates unique verification and validation challenges. In particular, the continuous engineering paradigm of gradually perfecting a DNN-based perception component can invalidate previously established safety verification results. This can occur either due to newly encountered examples inside the Operational Design Domain (i.e., input domain enlargement) or due to subsequent parameter fine-tuning of the DNN. This paper considers approaches for transferring results established in a previous DNN safety verification problem to the modified problem setting. By considering the reuse of state abstractions, network abstractions, and Lipschitz constants, we develop several sufficient conditions that require formally analyzing only a small part of the DNN in the new problem. The overall concept is evaluated on a 1/10-scale vehicle equipped with a DNN controller that determines the visual waypoint from the perceived image.
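The Lipschitz-constant reuse mentioned in the abstract can be illustrated with a generic sketch. This is not the paper's actual sufficient conditions, only a textbook-style analogue under simplifying assumptions (bias-free ReLU network, L2 norms, a classification property verified as a logit-gap margin over an input ball): the product of per-layer spectral norms upper-bounds the network's Lipschitz constant, and a telescoping bound on the output deviation between the old and fine-tuned weights yields a cheap sufficient check that an old robustness certificate still holds. All function names and the margin/2 criterion are choices made for this sketch.

```python
import numpy as np

def spectral_norm(W):
    # Largest singular value = exact L2 operator norm of a linear layer.
    return np.linalg.norm(W, ord=2)

def lipschitz_bound(weights):
    # Product of layer spectral norms upper-bounds the L2 Lipschitz
    # constant of a ReLU network (ReLU is 1-Lipschitz). Generally loose.
    prod = 1.0
    for W in weights:
        prod *= spectral_norm(W)
    return prod

def output_deviation_bound(old_weights, new_weights, input_radius):
    # Telescoping bound on sup ||f_new(x) - f_old(x)||_2 over the ball
    # ||x||_2 <= input_radius, for bias-free ReLU networks of identical
    # architecture: swap one layer at a time, old prefix / new suffix.
    dev = 0.0
    for i in range(len(old_weights)):
        pre = lipschitz_bound(old_weights[:i])       # old layers before i
        post = lipschitz_bound(new_weights[i + 1:])  # new layers after i
        delta = spectral_norm(new_weights[i] - old_weights[i])
        dev += post * delta * pre * input_radius
    return dev

def verification_transfers(old_weights, new_weights, input_radius, margin):
    # Sufficient condition only: if every logit moves by at most half the
    # verified logit-gap margin, the old certificate carries over.
    # Failure means re-verification is needed, not that safety is violated.
    dev = output_deviation_bound(old_weights, new_weights, input_radius)
    return dev <= margin / 2.0
```

In this toy criterion, an inexpensive norm computation over the changed weights replaces a full re-run of the verifier; the paper's contribution is developing analogous, tighter conditions for its specific abstractions.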
Pages: 1478-1483 (6 pages)