Interval Universal Approximation for Neural Networks

Times Cited: 5
Authors
Wang, Zi [1 ]
Albarghouthi, Aws [1 ]
Prakriya, Gautam [2 ]
Jha, Somesh [1 ]
Affiliations
[1] Univ Wisconsin, Madison, WI 53706 USA
[2] Chinese Univ Hong Kong, Hong Kong, Peoples R China
Funding
National Science Foundation (USA);
Keywords
Abstract Interpretation; Universal Approximation; Multilayer Feedforward Networks;
DOI
10.1145/3498675
CLC (Chinese Library Classification)
TP31 [Computer Software];
Discipline Codes
081202 ; 0835 ;
Abstract
To verify the safety and robustness of neural networks, researchers have successfully applied abstract interpretation, primarily using the interval abstract domain. In this paper, we study the theoretical power and limits of the interval domain for neural-network verification. First, we introduce the interval universal approximation (IUA) theorem. IUA shows that neural networks not only can approximate any continuous function f (universal approximation), as has been known for decades, but that we can also find a neural network, using any well-behaved activation function, whose interval bounds are an arbitrarily close approximation of the set semantics of f (the result of applying f to a set of inputs). We call this notion of approximation interval approximation. Our theorem generalizes the recent result of Baader et al. from ReLUs to a rich class of activation functions that we call squashable functions. Additionally, the IUA theorem implies that we can always construct provably robust neural networks under the ℓ∞-norm using almost any practical activation function. Second, we study the computational complexity of constructing neural networks that are amenable to precise interval analysis. This is a crucial question, because our constructive proof of IUA is exponential in the size of the approximation domain. We boil this question down to the problem of approximating the range of a neural network with squashable activation functions. We show that the range approximation problem (RA) is a Δ2-intermediate problem, strictly harder than NP-complete problems, assuming coNP ⊄ NP. As a result, IUA is an inherently hard problem: no matter what abstract domain or computational tools we consider to achieve interval approximation, there is no efficient construction of such a universal approximator. This also implies that it is hard to construct a provably robust network, even if we have a robust network to start with.
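The abstract's central object, the interval bounds of a network, can be made concrete with interval bound propagation: a box of inputs is pushed through each affine layer (splitting weights by sign so bounds stay sound) and through a monotone, "squashable" activation such as the sigmoid, which maps interval endpoints to interval endpoints. The following is a minimal illustrative sketch; the two-layer network and its weights are hypothetical, not taken from the paper:

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through x -> W @ x + b.
    Positive weights preserve bound order; negative weights swap it."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_sigmoid(lo, hi):
    """A monotone activation maps interval endpoints to interval endpoints."""
    s = lambda x: 1.0 / (1.0 + np.exp(-x))
    return s(lo), s(hi)

# Hypothetical one-hidden-layer network on the input box [-1, 1]^2.
W1 = np.array([[1.0, -2.0], [0.5, 1.0]])
b1 = np.array([0.0, 0.1])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, hi = interval_affine(lo, hi, W1, b1)   # box after the affine layer
lo, hi = interval_sigmoid(lo, hi)          # box after the activation
```

Note that the resulting box over-approximates the true set semantics of the network; the IUA theorem concerns how tightly such interval bounds can, in principle, approximate that set.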
Pages: 29