Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation

Cited by: 9
Authors
Yang, Pengfei [1 ,2 ]
Li, Jianlin [1 ,2 ]
Liu, Jiangchao [3 ]
Huang, Cheng-Chao [4 ]
Li, Renjue [1 ,2 ]
Chen, Liqian [3 ]
Huang, Xiaowei [5 ]
Zhang, Lijun [1 ,2 ,4 ]
Affiliations
[1] Chinese Acad Sci, Inst Software, State Key Lab Comp Sci, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Natl Univ Def Technol, Changsha, Peoples R China
[4] Inst Intelligent Software, Guangzhou, Peoples R China
[5] Univ Liverpool, Dept Comp Sci, Liverpool, Merseyside, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Deep neural network; Verification; Robustness; Abstract interpretation; Symbolic propagation; Lipschitz constant; Framework;
DOI
10.1007/s00165-021-00548-1
Chinese Library Classification
TP31 [Computer Software];
Discipline codes
081202 ; 0835 ;
Abstract
Deep neural networks (DNNs) have been shown to lack robustness: they are vulnerable to small perturbations of their inputs. This has raised safety concerns about applying DNNs in safety-critical domains. Several verification approaches based on constraint solving have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from a scalability problem, i.e., only small DNNs can be handled. To address this, abstraction-based approaches have been proposed, but they in turn face a precision problem, i.e., the bounds they obtain are often loose. In this paper, we focus on a variety of local robustness properties and a (delta, epsilon)-global robustness property of DNNs, and investigate novel strategies for combining constraint-solving and abstraction-based approaches to verify these properties. First, we propose a method for verifying local robustness that improves a recent proposal for analyzing DNNs with classic abstract interpretation by means of a novel symbolic propagation technique: the values of neurons are represented symbolically and propagated from the input layer to the output layer, on top of the underlying abstract domains. This achieves significantly higher precision and can thus prove more properties. Second, we propose a Lipschitz-constant-based verification framework. By utilising Lipschitz constants computed via semidefinite programming, we can prove global robustness of DNNs. We show how the Lipschitz constant can be tightened when restricted to small regions; a tightened Lipschitz constant helps prove local robustness properties. Furthermore, a global Lipschitz constant can be used to accelerate batch local robustness verification, and thus to support the verification of global robustness. We show how the proposed abstract-interpretation-based and Lipschitz-constant-based approaches can benefit from each other to obtain more precise results. Moreover, they can also be exploited and combined to improve constraint-based approaches. We implement our methods in the tool PRODeep and report detailed experimental results on several benchmarks.
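The two ingredients of the abstract can be illustrated with a small sketch (not the paper's implementation). Symbolic propagation keeps each neuron as a linear expression over the input variables, falling back to a plain interval only at ReLUs whose sign cannot be decided; a Lipschitz constant turns a classification margin into a certified perturbation radius. The sketch below substitutes a crude product-of-spectral-norms Lipschitz bound for the paper's tighter SDP-based constants, and all function names are illustrative:

```python
import numpy as np

def box_bounds(coeffs, const, lo, hi):
    """Tight interval of the linear map coeffs @ x + const over the box [lo, hi]."""
    pos, neg = np.maximum(coeffs, 0.0), np.minimum(coeffs, 0.0)
    return pos @ lo + neg @ hi + const, pos @ hi + neg @ lo + const

def symbolic_propagate(weights, biases, lo, hi):
    """Interval analysis enhanced with symbolic propagation (a sketch).

    Each neuron carries a linear expression over the *input* variables plus an
    interval error term.  A ReLU that is provably active keeps its expression,
    a provably inactive one becomes 0, and an unstable one falls back to a
    plain interval -- the only point where precision is lost.
    Returns concrete bounds (out_lo, out_hi) on the output layer.
    """
    n_in = lo.shape[0]
    coeffs = np.eye(n_in)            # symbolic part: one row per neuron
    const = np.zeros(n_in)
    err_lo = np.zeros(n_in)          # accumulated non-symbolic slack
    err_hi = np.zeros(n_in)
    n_layers = len(weights)
    for k, (W, b) in enumerate(zip(weights, biases)):
        # Affine layer acts linearly on both the symbolic and the error parts.
        coeffs, const = W @ coeffs, W @ const + b
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        err_lo, err_hi = Wp @ err_lo + Wn @ err_hi, Wp @ err_hi + Wn @ err_lo
        l, u = box_bounds(coeffs, const, lo, hi)
        l, u = l + err_lo, u + err_hi
        if k == n_layers - 1:        # output layer: no ReLU
            return l, u
        for j in range(W.shape[0]):  # ReLU, neuron by neuron
            if u[j] <= 0.0:          # provably inactive: neuron is 0
                coeffs[j], const[j] = 0.0, 0.0
                err_lo[j] = err_hi[j] = 0.0
            elif l[j] >= 0.0:        # provably active: keep symbolic form
                pass
            else:                    # unstable: fall back to interval [0, u[j]]
                coeffs[j], const[j] = 0.0, 0.0
                err_lo[j], err_hi[j] = 0.0, u[j]

def lipschitz_robust(weights, margin, delta):
    """Certify local robustness from a Lipschitz constant.

    Uses the product of spectral norms as a (crude) global Lipschitz bound L;
    the paper computes tighter constants via semidefinite programming.  If each
    logit is L-Lipschitz, the winning logit drops by at most L*delta and any
    other rises by at most L*delta, so a margin m survives whenever
    m > 2 * L * delta.
    """
    L = np.prod([np.linalg.norm(W, 2) for W in weights])
    return margin > 2.0 * L * delta

if __name__ == "__main__":
    # f(x) = relu(x0 + x1) - relu(x0) on [0,1]^2; both ReLUs are provably
    # active there, so symbolically the output is exactly x1, with bounds
    # [0, 1].  Plain interval propagation would only give [-1, 2].
    lo, hi = np.zeros(2), np.ones(2)
    weights = [np.array([[1.0, 1.0], [1.0, 0.0]]), np.array([[1.0, -1.0]])]
    biases = [np.zeros(2), np.zeros(1)]
    print(symbolic_propagate(weights, biases, lo, hi))
```

The demo shows why symbolic propagation is more precise than pure interval analysis: the two pre-activations share the input variable x0, and keeping them symbolic lets the subtraction in the output layer cancel it exactly, whereas plain intervals treat the two neurons as independent.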
Pages: 407-435
Number of pages: 29