Provably Tightest Linear Approximation for Robustness Verification of Sigmoid-like Neural Networks

Cited by: 1
Authors:
Zhang, Zhaodi [1]
Wu, Yiting [1]
Liu, Si [2]
Liu, Jing [3]
Zhang, Min [1,4]
Affiliations:
[1] East China Normal Univ, Shanghai, Peoples R China
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] East China Normal Univ, Shanghai Key Lab Trustworthy Comp, Shanghai, Peoples R China
[4] East China Normal Univ, Shanghai Inst Intelligent Sci & Technol, Shanghai, Peoples R China
Keywords:
REFINEMENT
DOI:
10.1145/3551349.3556907
CLC Number:
TP [Automation Technology, Computer Technology]
Subject Classification Code:
0812
Abstract:
The robustness of deep neural networks is crucial to modern AI-enabled systems and should be formally verified. Sigmoid-like neural networks have been adopted in a wide range of applications. Due to their non-linearity, Sigmoid-like activation functions are usually over-approximated for efficient verification, which inevitably introduces imprecision. Considerable efforts have been devoted to finding so-called tighter approximations to obtain more precise verification results. However, existing tightness definitions are heuristic and lack theoretical foundations. We conduct a thorough empirical analysis of existing neuron-wise characterizations of tightness and reveal that they are superior only on specific neural networks. We then introduce the notion of network-wise tightness as a unified tightness definition and show that computing network-wise tightness is a complex non-convex optimization problem. We bypass this complexity from different perspectives via two efficient, provably tightest approximations. The experimental results demonstrate the promising performance of our approaches over the state of the art: (i) achieving up to 251.28% improvement to certified lower robustness bounds; and (ii) exhibiting notably more precise verification results on convolutional networks.
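For context, the verification setting described in the abstract replaces each Sigmoid-like activation with two linear functions that bound it from below and above on the neuron's input interval. The sketch below is a minimal, generic example of one such sound linear over-approximation of the sigmoid on an interval [l, u]; it is not the paper's provably tightest construction, and all function names are illustrative. It relies on the fact that the sigmoid's derivative attains its minimum over any closed interval at an endpoint, so sigmoid(x) - k*x is monotone on [l, u] for the minimal endpoint slope k.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def parallel_linear_bounds(l, u):
    # Returns (k, b_lo, b_up) with k*x + b_lo <= sigmoid(x) <= k*x + b_up
    # for every x in [l, u]. Because sigmoid' is bell-shaped, its minimum
    # over [l, u] is attained at an endpoint, so sigmoid(x) - k*x is
    # non-decreasing on the interval and the endpoints give valid offsets.
    assert l <= u
    k = min(sigmoid_grad(l), sigmoid_grad(u))   # minimal slope on [l, u]
    b_lo = sigmoid(l) - k * l                   # lower offset from left endpoint
    b_up = sigmoid(u) - k * u                   # upper offset from right endpoint
    return k, b_lo, b_up

# Usage: sound (though not tightest) bounds on the interval [-2.0, 1.5].
k, b_lo, b_up = parallel_linear_bounds(-2.0, 1.5)
xs = np.linspace(-2.0, 1.5, 1001)
assert np.all(k * xs + b_lo <= sigmoid(xs) + 1e-9)
assert np.all(sigmoid(xs) <= k * xs + b_up + 1e-9)

The gap between the two lines is what the paper's notions of neuron-wise and network-wise tightness aim to minimize; tighter bounds propagate to more precise certified robustness results.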
Pages: 13