Provably Tightest Linear Approximation for Robustness Verification of Sigmoid-like Neural Networks

Cited by: 1
Authors
Zhang, Zhaodi [1 ]
Wu, Yiting [1 ]
Liu, Si [2 ]
Liu, Jing [3 ]
Zhang, Min [1 ,4 ]
Affiliations
[1] East China Normal Univ, Shanghai, Peoples R China
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] East China Normal Univ, Shanghai Key Lab Trustworthy Comp, Shanghai, Peoples R China
[4] East China Normal Univ, Shanghai Inst Intelligent Sci & Technol, Shanghai, Peoples R China
Keywords
REFINEMENT
DOI
10.1145/3551349.3556907
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The robustness of deep neural networks is crucial to modern AI-enabled systems and should be formally verified. Sigmoid-like neural networks have been adopted in a wide range of applications. Due to their non-linearity, Sigmoid-like activation functions are usually over-approximated for efficient verification, which inevitably introduces imprecision. Considerable efforts have been devoted to finding the so-called tighter approximations to obtain more precise verification results. However, existing tightness definitions are heuristic and lack theoretical foundations. We conduct a thorough empirical analysis of existing neuron-wise characterizations of tightness and reveal that they are superior only on specific neural networks. We then introduce the notion of network-wise tightness as a unified tightness definition and show that computing network-wise tightness is a complex non-convex optimization problem. We bypass the complexity from different perspectives via two efficient, provably tightest approximations. The results demonstrate the promising performance achievement of our approaches over the state of the art: (i) achieving up to 251.28% improvement to certified lower robustness bounds; and (ii) exhibiting notably more precise verification results on convolutional networks.
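For illustration only: the sketch below is not taken from the paper. It shows the kind of neuron-wise linear relaxation the abstract refers to, bounding sigmoid(x) on a pre-activation interval [l, u] between a lower and an upper line. The function names (`sigmoid`, `linear_bounds`) and the simple chord/tangent/constant-bound choices are assumptions made for this sketch; the paper's contribution is choosing provably tightest bounds at the network level, which this sketch does not attempt.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def linear_bounds(l, u):
    """Return (a_l, b_l, a_u, b_u) such that
    a_l*x + b_l <= sigmoid(x) <= a_u*x + b_u for all x in [l, u].
    A deliberately simple, sound relaxation for illustration only."""
    assert l < u
    if u <= 0.0:
        # Sigmoid is convex on [l, u]: the chord is an upper bound,
        # any tangent (here at the midpoint) is a lower bound.
        k = (sigmoid(u) - sigmoid(l)) / (u - l)
        a_u, b_u = k, sigmoid(l) - k * l
        m = 0.5 * (l + u)
        a_l, b_l = sigmoid_prime(m), sigmoid(m) - sigmoid_prime(m) * m
    elif l >= 0.0:
        # Sigmoid is concave on [l, u]: the chord is a lower bound,
        # the midpoint tangent is an upper bound.
        k = (sigmoid(u) - sigmoid(l)) / (u - l)
        a_l, b_l = k, sigmoid(l) - k * l
        m = 0.5 * (l + u)
        a_u, b_u = sigmoid_prime(m), sigmoid(m) - sigmoid_prime(m) * m
    else:
        # Mixed convexity (l < 0 < u): fall back to constant bounds,
        # which are sound because sigmoid is monotonically increasing.
        # Real verifiers compute much tighter tangent-based bounds here.
        a_l, b_l = 0.0, sigmoid(l)
        a_u, b_u = 0.0, sigmoid(u)
    return a_l, b_l, a_u, b_u

# Example: bounds for a neuron whose pre-activation lies in [-2.0, -0.5].
print(linear_bounds(-2.0, -0.5))
```

Because every activation is replaced by two linear functions, the whole network can be bounded by linear arithmetic; the looseness introduced at each neuron is exactly what the paper's network-wise tightness notion quantifies.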
Pages: 13