Robustness Verification in Neural Networks

Cited: 0
Authors
Wurm, Adrian [1 ]
Affiliations
[1] BTU Cottbus-Senftenberg, Chair of Theoretical Computer Science (Lehrstuhl Theoretische Informatik), Platz der Deutschen Einheit 1, D-03046 Cottbus, Germany
Keywords
DOI: 10.1007/978-3-031-60599-4_18
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
In this paper we investigate formal verification problems for neural network computations. Of central importance are various robustness and minimization problems, such as the following: given symbolic specifications of the allowed inputs and outputs in the form of Linear Programming instances, does there exist a valid input for which the network computes a valid output? Does this property hold for all valid inputs? Do two given networks compute the same function? Is there a smaller network computing the same function? The complexity of these questions has recently been investigated from a practical point of view and approximated by heuristic algorithms. We complement these achievements by giving a theoretical framework that enables us to interchange security and efficiency questions in neural networks and to analyze their computational complexity. We show that the problems are conquerable in a semi-linear setting, meaning that for piecewise linear activation functions, and when the sum or maximum metric is used, most of them are in P or at most in NP.
Pages: 263-278
Page count: 16
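
The first two questions in the abstract (existential and universal validity of the output specification) reduce to plain Linear Programming feasibility when the network is a single affine layer without activations; the piecewise-linear case treated in the paper additionally requires reasoning over activation patterns, which is where NP membership enters. The following Python sketch illustrates only the affine special case. The weights, input box, and output half-spaces are invented for illustration and are not taken from the paper.

# A minimal sketch of the existential verification question, restricted to a
# single affine layer f(x) = W @ x + b (no activation), where "is some valid
# input mapped to a valid output?" is exactly an LP feasibility problem.
# All concrete numbers below are made up for illustration.
import numpy as np
from scipy.optimize import linprog

# Network: f(x) = W @ x + b, mapping R^2 -> R^2.
W = np.array([[1.0, -2.0],
              [0.5,  1.0]])
b = np.array([0.0, -1.0])

# Input specification (LP instance): box constraints lower <= x <= upper.
lower = np.array([-1.0, -1.0])
upper = np.array([ 1.0,  1.0])

# Output specification (LP instance): A_out @ y <= c_out.
A_out = np.array([[1.0, 0.0],     # y_1 <= 0.5
                  [0.0, 1.0]])    # y_2 <= 0.0
c_out = np.array([0.5, 0.0])

# Substituting y = W @ x + b turns the output constraints into
# (A_out @ W) @ x <= c_out - A_out @ b, i.e. linear constraints on x.
A_ub = A_out @ W
b_ub = c_out - A_out @ b

# Feasibility check: zero objective, bounds encode the input specification.
res = linprog(c=np.zeros(W.shape[1]),
              A_ub=A_ub, b_ub=b_ub,
              bounds=list(zip(lower, upper)),
              method="highs")

if res.status == 0:
    print("Existential question: YES, witness input x =", res.x)
elif res.status == 2:
    print("Existential question: NO, no valid input yields a valid output")
else:
    print("LP solver returned status", res.status)

In the same affine setting, the universal question ("does every valid input yield a valid output?") can be answered by maximizing the left-hand side of each output constraint over the input box and comparing the optimum with its bound; if no constraint can be pushed past its bound, the property holds for all valid inputs.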