A domain-theoretic framework for robustness analysis of neural networks

Cited by: 1
Authors
Zhou, Can [1]
Shaikh, Razin A. [1,2]
Li, Yiran [3]
Farjudian, Amin [3]
Affiliations
[1] Univ Oxford, Dept Comp Sci, Oxford, England
[2] Quantinuum Ltd, Oxford, England
[3] Univ Nottingham Ningbo China, Sch Comp Sci, Ningbo, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Domain theory; neural network; robustness; Lipschitz constant; Clarke-gradient; QUERY-DRIVEN COMMUNICATION; REAL; SEMANTICS; COMPUTABILITY; COMPUTATION; SPACES;
DOI
10.1017/S0960129523000142
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Discipline Classification Code
081202
Abstract
A domain-theoretic framework is presented for validated robustness analysis of neural networks. First, the global robustness of a general class of networks is analyzed. Then, using the fact that Edalat's domain-theoretic L-derivative coincides with Clarke's generalized gradient, the framework is extended to attack-agnostic local robustness analysis. The proposed framework is ideal for designing algorithms that are correct by construction. This claim is exemplified by developing a validated algorithm for estimating the Lipschitz constant of feedforward regressors. The completeness of the algorithm is proved over differentiable networks and also over general-position ReLU networks. Computability results are obtained within the framework of effectively given domains. Using the proposed domain model, differentiable and non-differentiable networks can be analyzed uniformly. The validated algorithm is implemented using arbitrary-precision interval arithmetic, and the results of some experiments are presented. The software implementation is truly validated, as it also accounts for floating-point errors.
Pages: 68-105
Number of pages: 38
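
To complement the abstract above, the following is a minimal Python sketch of how interval arithmetic can yield a sound upper bound on the local Lipschitz constant of a small ReLU network, in the spirit of (but not identical to) the validated algorithm described there: the Clarke Jacobian over an input box is enclosed by propagating intervals through the layers, and the infinity norm of the enclosure bounds the Lipschitz constant. The sketch uses ordinary double-precision floats, so unlike the paper's arbitrary-precision implementation it does not control floating-point rounding; all function names are illustrative assumptions.

# Minimal illustrative sketch (assumed, not the paper's algorithm): enclose the
# Clarke Jacobian of a small ReLU network over an input box by interval
# propagation, then bound the local Lipschitz constant in the infinity norm.
# Plain double-precision floats are used, so floating-point rounding is NOT
# controlled here, unlike the arbitrary-precision implementation the abstract
# describes.  All names are hypothetical.
import numpy as np

def interval_matvec(W, lo, hi):
    """Enclose W @ x for all x in the box [lo, hi], entrywise."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi, W_pos @ hi + W_neg @ lo

def relu_clarke_interval(z_lo, z_hi):
    """Entrywise enclosure of the Clarke subdifferential of ReLU on [z_lo, z_hi]:
    [1,1] if surely active, [0,0] if surely inactive, [0,1] if the box meets 0."""
    return (z_lo > 0).astype(float), (z_hi >= 0).astype(float)

def local_lipschitz_bound(weights, biases, x_lo, x_hi):
    """Upper bound on the Lipschitz constant (infinity norms) of a feedforward
    ReLU network over the input box [x_lo, x_hi]."""
    lo, hi = np.asarray(x_lo, float), np.asarray(x_hi, float)
    J_lo = J_hi = np.eye(lo.size)              # interval Jacobian, initially exact
    for k, (W, b) in enumerate(zip(weights, biases)):
        # Interval matrix product  W @ [J_lo, J_hi]  (W is a point matrix).
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        J_lo, J_hi = W_pos @ J_lo + W_neg @ J_hi, W_pos @ J_hi + W_neg @ J_lo
        # Pre-activation box of this layer.
        lo, hi = interval_matvec(W, lo, hi)
        lo, hi = lo + b, hi + b
        if k < len(weights) - 1:               # ReLU on hidden layers only
            d_lo, d_hi = relu_clarke_interval(lo, hi)
            # Row-wise product with the non-negative interval [d_lo, d_hi].
            J_lo, J_hi = (np.minimum(d_lo[:, None] * J_lo, d_hi[:, None] * J_lo),
                          np.maximum(d_lo[:, None] * J_hi, d_hi[:, None] * J_hi))
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    # ||J||_inf <= max row sum of entrywise magnitude bounds on the enclosure.
    return np.abs(np.stack([J_lo, J_hi])).max(axis=0).sum(axis=1).max()

# Tiny example: a 2-2-1 ReLU network on the box [-1, 1]^2.
W1, b1 = np.array([[1.0, -2.0], [0.5, 1.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])
print(local_lipschitz_bound([W1, W2], [b1, b2], [-1.0, -1.0], [1.0, 1.0]))

Over-approximating the ReLU derivative by the interval [0, 1] wherever a pre-activation box touches zero is what lets differentiable and non-differentiable neurons be handled uniformly, mirroring the role that the Clarke gradient (equivalently, Edalat's L-derivative) plays in the abstract.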