Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation

Cited by: 9
Authors
Yang, Pengfei [1 ,2 ]
Li, Jianlin [1 ,2 ]
Liu, Jiangchao [3 ]
Huang, Cheng-Chao [4 ]
Li, Renjue [1 ,2 ]
Chen, Liqian [3 ]
Huang, Xiaowei [5 ]
Zhang, Lijun [1 ,2 ,4 ]
Affiliations
[1] Chinese Acad Sci, Inst Software, State Key Lab Comp Sci, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Natl Univ Def Technol, Changsha, Peoples R China
[4] Inst Intelligent Software, Guangzhou, Peoples R China
[5] Univ Liverpool, Dept Comp Sci, Liverpool, Merseyside, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Deep neural network; Verification; Robustness; Abstract interpretation; Symbolic propagation; Lipschitz constant; FRAMEWORK;
DOI
10.1007/s00165-021-00548-1
Chinese Library Classification
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
Deep neural networks (DNNs) have been shown to lack robustness: they are vulnerable to small perturbations of their inputs. This has raised safety concerns about applying DNNs in safety-critical domains. Several verification approaches based on constraint solving have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from a scalability problem: only small DNNs can be handled. To address this, abstraction-based approaches have been proposed, but they face a precision problem: the obtained bounds are often loose. In this paper, we focus on a variety of local robustness properties and a (delta, epsilon)-global robustness property of DNNs, and investigate novel strategies for combining constraint-solving and abstraction-based approaches to work with these properties. We propose a method to verify local robustness that improves a recent proposal for analyzing DNNs through the classic abstract interpretation technique by means of a novel symbolic propagation technique. Specifically, the values of neurons are represented symbolically and propagated from the input layer to the output layer, on top of the underlying abstract domains. This achieves significantly higher precision and thus proves more properties. We also propose a Lipschitz-constant-based verification framework. By utilising Lipschitz constants obtained via semidefinite programming, we can prove global robustness of DNNs. We show how the Lipschitz constant can be tightened when restricted to small regions; a tightened Lipschitz constant helps prove local robustness properties. Furthermore, a global Lipschitz constant can be used to accelerate batch local robustness verification, and thus to support the verification of global robustness. We show how the proposed abstract interpretation and Lipschitz-constant-based approaches can benefit from each other to obtain more precise results. Moreover, they can also be exploited and combined to improve the constraint-based approach. We implement our methods in the tool PRODeep and report detailed experimental results on several benchmarks.
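As a minimal illustration of the symbolic propagation idea described in the abstract (a sketch of the general technique, not the authors' PRODeep implementation; all function and variable names are ours): each neuron carries a linear expression a·x + b over the network inputs, plus an interval slack that absorbs neurons whose ReLU status is unresolved. Affine layers are propagated exactly on the symbolic form; a ReLU keeps the symbolic expression when its sign is fixed by the current bounds and otherwise falls back to a sound concrete interval.

```python
# Symbolic propagation sketch over an input box (hedged illustration).
# A neuron is a triple (coefs, const, slack): the linear part a.x + b
# over the original inputs, plus an interval slack for concretised ReLUs.

def concrete_bounds(coefs, const, slack, box):
    """Evaluate a.x + b + slack over the input box by interval arithmetic."""
    lo, hi = const + slack[0], const + slack[1]
    for a, (xl, xu) in zip(coefs, box):
        lo += a * xl if a >= 0 else a * xu
        hi += a * xu if a >= 0 else a * xl
    return lo, hi

def affine(neurons, weights, biases):
    """Exact symbolic propagation through a fully connected layer."""
    out = []
    for w_row, b in zip(weights, biases):
        coefs = [sum(w * n[0][i] for w, n in zip(w_row, neurons))
                 for i in range(len(neurons[0][0]))]
        const = b + sum(w * n[1] for w, n in zip(w_row, neurons))
        s_lo = sum(w * n[2][0] if w >= 0 else w * n[2][1]
                   for w, n in zip(w_row, neurons))
        s_hi = sum(w * n[2][1] if w >= 0 else w * n[2][0]
                   for w, n in zip(w_row, neurons))
        out.append((coefs, const, (s_lo, s_hi)))
    return out

def relu(neurons, box):
    """Keep the symbolic form when the ReLU's sign is fixed; otherwise
    concretise to the sound interval [0, hi]."""
    out = []
    for coefs, const, slack in neurons:
        lo, hi = concrete_bounds(coefs, const, slack, box)
        if lo >= 0:                                    # always active
            out.append((coefs, const, slack))
        elif hi <= 0:                                  # always inactive
            out.append(([0.0] * len(coefs), 0.0, (0.0, 0.0)))
        else:                                          # unstable: concretise
            out.append(([0.0] * len(coefs), 0.0, (0.0, hi)))
    return out

# Tiny 2-2-1 ReLU network on the input box [0,1] x [0,1]; both hidden
# neurons compute ReLU(x1 + x2), and the output subtracts one from the other.
box = [(0.0, 1.0), (0.0, 1.0)]
inputs = [([1.0, 0.0], 0.0, (0.0, 0.0)), ([0.0, 1.0], 0.0, (0.0, 0.0))]
h = relu(affine(inputs, [[1.0, 1.0], [1.0, 1.0]], [0.0, 0.0]), box)
y = affine(h, [[1.0, -1.0]], [0.0])
print(concrete_bounds(*y[0], box))   # -> (0.0, 0.0)
```

On this example plain interval propagation loses all correlation between the two hidden neurons and yields [-2, 2] for the output, whereas the symbolic forms cancel exactly and give the tight bound [0, 0] — the precision gain the abstract refers to.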
Pages: 407-435
Page count: 29