Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation

Cited by: 9
Authors
Yang, Pengfei [1 ,2 ]
Li, Jianlin [1 ,2 ]
Liu, Jiangchao [3 ]
Huang, Cheng-Chao [4 ]
Li, Renjue [1 ,2 ]
Chen, Liqian [3 ]
Huang, Xiaowei [5 ]
Zhang, Lijun [1 ,2 ,4 ]
Affiliations
[1] Chinese Acad Sci, Inst Software, State Key Lab Comp Sci, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Natl Univ Def Technol, Changsha, Peoples R China
[4] Inst Intelligent Software, Guangzhou, Peoples R China
[5] Univ Liverpool, Dept Comp Sci, Liverpool, Merseyside, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Deep neural network; Verification; Robustness; Abstract interpretation; Symbolic propagation; Lipschitz constant; FRAMEWORK;
DOI
10.1007/s00165-021-00548-1
CLC classification
TP31 [Computer Software];
Discipline codes
081202; 0835;
Abstract
Deep neural networks (DNNs) have been shown to lack robustness: they are vulnerable to small perturbations of their inputs. This has raised safety concerns about applying DNNs in safety-critical domains. Several verification approaches based on constraint solving have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from a scalability problem, i.e., only small DNNs can be handled. To address this, abstraction-based approaches have been proposed, but they in turn face a precision problem, i.e., the bounds they obtain are often loose. In this paper, we focus on a variety of local robustness properties and a (δ, ε)-global robustness property of DNNs, and investigate novel strategies for combining the constraint-solving and abstraction-based approaches to handle these properties. We propose a method to verify local robustness that improves a recent proposal for analyzing DNNs through the classic abstract interpretation technique with a novel symbolic propagation technique. Specifically, the values of neurons are represented symbolically and propagated from the input layer to the output layer, on top of the underlying abstract domains. This achieves significantly higher precision and thus can prove more properties. We also propose a Lipschitz-constant-based verification framework. By utilising Lipschitz constants computed via semidefinite programming, we can prove global robustness of DNNs. We show how the Lipschitz constant can be tightened when restricted to small regions; a tightened Lipschitz constant helps in proving local robustness properties. Furthermore, a global Lipschitz constant can be used to accelerate batch local robustness verification, and thus supports the verification of global robustness. We show how the proposed abstract interpretation and Lipschitz-constant-based approaches can benefit from each other to obtain more precise results, and how they can be exploited and combined to improve constraint-based approaches. We implement our methods in the tool PRODeep and report detailed experimental results on several benchmarks.
Pages: 407-435
Number of pages: 29
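
The symbolic propagation described in the abstract can be made concrete. Below is a minimal sketch, not the PRODeep implementation, of symbolic propagation over the interval abstract domain for a fully-connected ReLU network: each neuron is tracked as an affine form over the input variables plus an interval error term, affine layers compose exactly on these forms, and precision is only lost at ReLU neurons whose sign cannot be fixed on the input box. The function names (bounds, symbolic_propagate) and the layer encoding are illustrative assumptions.

import numpy as np

def bounds(c, c0, e_lo, e_hi, lo, hi):
    # Tightest interval of c . x + c0 + e over the box x in [lo, hi]
    # and the error term e in [e_lo, e_hi].
    l = c0 + e_lo + np.sum(np.where(c >= 0, c * lo, c * hi))
    u = c0 + e_hi + np.sum(np.where(c >= 0, c * hi, c * lo))
    return l, u

def symbolic_propagate(layers, lo, hi):
    # layers: list of (W, b) pairs, with a ReLU after every layer except
    # the last. Neuron i is tracked as C[i] . x + c0[i] + [e_lo[i], e_hi[i]].
    n = len(lo)
    C, c0 = np.eye(n), np.zeros(n)            # exact symbolic (affine) part
    e_lo, e_hi = np.zeros(n), np.zeros(n)     # accumulated interval error
    for k, (W, b) in enumerate(layers):
        # Affine layers act exactly on affine forms: no precision loss here.
        C, c0 = W @ C, W @ c0 + b
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        e_lo, e_hi = Wp @ e_lo + Wn @ e_hi, Wp @ e_hi + Wn @ e_lo
        if k == len(layers) - 1:
            break                             # no ReLU on the output layer
        for i in range(len(c0)):
            l, u = bounds(C[i], c0[i], e_lo[i], e_hi[i], lo, hi)
            if u <= 0:    # provably inactive: the neuron outputs 0
                C[i], c0[i], e_lo[i], e_hi[i] = 0.0, 0.0, 0.0, 0.0
            elif l < 0:   # unstable sign: concretise to the interval [0, u]
                C[i], c0[i], e_lo[i], e_hi[i] = 0.0, 0.0, 0.0, u
            # l >= 0: provably active, the exact symbolic form is kept
    return [bounds(C[i], c0[i], e_lo[i], e_hi[i], lo, hi)
            for i in range(len(c0))]

if __name__ == "__main__":
    # y = ReLU(x) - ReLU(x) is identically 0 on x in [0, 1]; plain interval
    # propagation reports [-1, 1], symbolic propagation proves exactly [0, 0].
    layers = [(np.array([[1.0], [1.0]]), np.zeros(2)),
              (np.array([[1.0, -1.0]]), np.zeros(1))]
    print(symbolic_propagate(layers, np.array([0.0]), np.array([1.0])))

The demo shows why keeping symbols pays off: dependencies between neurons (here, the two copies of x cancel) are invisible to a pure interval analysis but preserved by the affine forms; concretising an unstable ReLU to [0, u] is the coarsest sound choice and is where the sketch, like the interval domain itself, loses precision.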
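The Lipschitz-constant-based certification can be sketched in the same spirit. Assuming every logit of the network is L-Lipschitz in the L2 norm, any perturbation of norm r moves each logit by at most L*r, so the predicted label cannot change while 2*L*r is below the gap between the top logit and the runner-up. The product-of-spectral-norms bound used below is only a coarse stand-in; the abstract states that the paper computes much tighter constants via semidefinite programming. Function names here are illustrative, not PRODeep's API.

import numpy as np

def coarse_lipschitz(weights):
    # Coarse global L2 Lipschitz bound for a ReLU network: the product of
    # the layers' spectral norms (ReLU itself is 1-Lipschitz). This
    # upper-bounds the true constant, usually loosely.
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

def certified_radius(logits_x0, L):
    # The argmax class is stable within radius margin / (2 * L) of x0:
    # the top logit can drop by at most L * r while any other logit can
    # rise by at most L * r.
    top2 = np.sort(logits_x0)[-2:]
    margin = top2[1] - top2[0]
    return margin / (2.0 * L)

def filter_certified(batch_logits, L, r):
    # Batch local robustness: inputs whose certified radius already exceeds
    # the query radius r need no further (expensive) constraint solving.
    return [i for i, z in enumerate(batch_logits)
            if certified_radius(z, L) > r]

This is one way to read the abstract's claim that a global Lipschitz constant accelerates batch local robustness verification: the cheap radius test discharges the easy inputs, and only the remainder is passed to the precise but costly constraint-solving or abstract-interpretation analyses.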