Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation

Cited by: 9
Authors
Yang, Pengfei [1 ,2 ]
Li, Jianlin [1 ,2 ]
Liu, Jiangchao [3 ]
Huang, Cheng-Chao [4 ]
Li, Renjue [1 ,2 ]
Chen, Liqian [3 ]
Huang, Xiaowei [5 ]
Zhang, Lijun [1 ,2 ,4 ]
Affiliations
[1] Chinese Acad Sci, Inst Software, State Key Lab Comp Sci, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Natl Univ Def Technol, Changsha, Peoples R China
[4] Inst Intelligent Software, Guangzhou, Peoples R China
[5] Univ Liverpool, Dept Comp Sci, Liverpool, Merseyside, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Deep neural network; Verification; Robustness; Abstract interpretation; Symbolic propagation; Lipschitz constant; FRAMEWORK;
DOI
10.1007/s00165-021-00548-1
Chinese Library Classification (CLC)
TP31 [Computer software];
Subject classification codes
081202; 0835;
Abstract
Deep neural networks (DNNs) have been shown to lack robustness, as they are vulnerable to small perturbations to their inputs. This has led to safety concerns about applying DNNs in safety-critical domains. Several verification approaches based on constraint solving have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from a scalability problem, i.e., only small DNNs can be handled. To deal with this, abstraction-based approaches have been proposed, but they unfortunately face a precision problem, i.e., the obtained bounds are often loose. In this paper, we focus on a variety of local robustness properties and a (delta, epsilon)-global robustness property of DNNs, and investigate novel strategies for combining the constraint-solving and abstraction-based approaches to handle these properties: We propose a method to verify local robustness, which improves a recent proposal of analyzing DNNs through the classic abstract interpretation technique with a novel symbolic propagation technique. Specifically, the values of neurons are represented symbolically and propagated from the input layer to the output layer, on top of the underlying abstract domains. This achieves significantly higher precision and thus can prove more properties. We propose a Lipschitz constant based verification framework. By utilising Lipschitz constants computed by semidefinite programming, we can prove global robustness of DNNs. We show how the Lipschitz constant can be tightened when it is restricted to small regions; a tightened Lipschitz constant can be helpful in proving local robustness properties. Furthermore, a global Lipschitz constant can be used to accelerate batch local robustness verification, and thus support the verification of global robustness. We show how the proposed abstract interpretation and Lipschitz constant based approaches can benefit from each other to obtain more precise results.
Moreover, they can also be exploited and combined to improve the constraint-solving approach. We implement our methods in the tool PRODeep and report detailed experimental results on several benchmarks.
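The symbolic propagation idea from the abstract can be sketched on affine layers: instead of concretizing every neuron to an interval at each layer, keep each neuron as a linear expression over the input variables and concretize only at the end, which preserves correlations that per-layer intervals lose. A minimal illustration (the toy network and the function names `interval_affine`, `symbolic_affine`, `concretize` are assumptions for exposition, not the paper's implementation, which additionally handles ReLU on top of abstract domains):

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    # Standard interval propagation through y = W x + b.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def symbolic_affine(W, b, C, d):
    # Keep each neuron as a symbolic linear form C x_in + d over the
    # *input* variables instead of concretizing layer by layer.
    return W @ C, W @ d + b

def concretize(C, d, lo, hi):
    # Evaluate the symbolic forms over the input box [lo, hi].
    Cp, Cn = np.maximum(C, 0), np.minimum(C, 0)
    return Cp @ lo + Cn @ hi + d, Cp @ hi + Cn @ lo + d

# Two affine layers whose composition is z = 2*x1 (toy example),
# with inputs x1, x2 in [0, 1].
W1 = np.array([[1., 1.], [1., -1.]]); b1 = np.zeros(2)
W2 = np.array([[1., 1.]]);            b2 = np.zeros(1)
lo, hi = np.zeros(2), np.ones(2)

# Layer-by-layer intervals lose the correlation between the two
# hidden neurons: z is bounded by [-1, 3].
l1, h1 = interval_affine(W1, b1, lo, hi)
l2, h2 = interval_affine(W2, b2, l1, h1)

# Symbolic propagation keeps z = 2*x1 exactly, giving z in [0, 2].
C, d = symbolic_affine(W1, b1, np.eye(2), np.zeros(2))
C, d = symbolic_affine(W2, b2, C, d)
sl, sh = concretize(C, d, lo, hi)

print(l2, h2)   # interval result: [-1.] [3.]
print(sl, sh)   # symbolic result: [0.] [2.]
```

The symbolic bound [0, 2] is exact here, while pure interval propagation returns the looser [-1, 3]; this is the precision gain the abstract refers to.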
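The Lipschitz-based certification in the abstract can likewise be illustrated with a minimal sketch. If every logit of a classifier f is L-Lipschitz, then within an epsilon-ball each logit moves by at most L*epsilon, so a top-class margin exceeding 2*L*epsilon rules out a label flip. This sketch assumes L is already available (the paper computes it via semidefinite programming); the toy linear network and `certify_by_lipschitz` are illustrative assumptions:

```python
import numpy as np

def certify_by_lipschitz(f, x0, eps, L):
    # If each logit is L-Lipschitz (in the L2 norm), any logit can
    # change by at most L*eps inside the eps-ball around x0, so the
    # top-vs-runner-up gap must exceed 2*L*eps to guarantee that the
    # predicted label cannot flip.
    logits = f(x0)
    order = np.argsort(logits)
    gap = logits[order[-1]] - logits[order[-2]]
    return bool(gap > 2 * L * eps)

# Toy linear "network": a valid Lipschitz constant is the spectral norm.
W = np.array([[2.0, 0.0], [0.0, 1.0]])
f = lambda x: W @ x
L = np.linalg.norm(W, 2)        # = 2.0 for this W
x0 = np.array([1.0, 0.2])       # logits [2.0, 0.2], margin 1.8

print(certify_by_lipschitz(f, x0, 0.4, L))  # True:  1.8 > 2*2*0.4
print(certify_by_lipschitz(f, x0, 0.5, L))  # False: 1.8 <= 2*2*0.5
```

This also shows why the tightened, region-restricted Lipschitz constants mentioned in the abstract matter: a smaller L certifies the same margin at a larger radius epsilon.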
Pages: 407-435
Number of pages: 29