Parameter identifiability of a deep feedforward ReLU neural network

Cited: 0
Authors
Joachim Bona-Pellissier
François Bachoc
François Malgouyres
Affiliations
[1] Université de Toulouse, Institut de Mathématiques de Toulouse, UMR5219
[2] CNRS
[3] UPS IMT
Source
Machine Learning | 2023, Vol. 112
Keywords
ReLU networks; Equivalent parameters; Symmetries; Parameter recovery; Deep learning;
DOI
Not available
Abstract
The possibility of recovering the parameters (weights and biases) of a neural network from knowledge of the function it computes on a subset of the input space can be, depending on the situation, a curse or a blessing. On the one hand, recovering the parameters enables stronger adversarial attacks and could disclose sensitive information about the dataset used to construct the network. On the other hand, when the parameters of a network can be recovered, the user is guaranteed that the features in the latent spaces can be interpreted; recoverability also provides a foundation for obtaining formal guarantees on the performance of the network. It is therefore important to characterize the networks whose parameters can be identified and those whose parameters cannot. In this article, we provide a set of conditions on a deep fully-connected feedforward ReLU neural network under which the parameters of the network are uniquely identified, modulo permutation and positive rescaling, from the function it implements on a subset of the input space.
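The symmetries the abstract refers to can be checked numerically. Below is a minimal NumPy sketch (not code from the paper; the one-hidden-layer shape and all variable names are illustrative assumptions) verifying that permuting hidden neurons, and positively rescaling a neuron's incoming weights and bias while inversely rescaling its outgoing weights, leaves the implemented function unchanged:

```python
# Illustrative sketch (not from the paper): permutation and positive-rescaling
# symmetries of a one-hidden-layer fully-connected ReLU network.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def net(x, W1, b1, W2, b2):
    # One-hidden-layer fully-connected ReLU network.
    return relu(x @ W1.T + b1) @ W2.T + b2

d_in, d_hidden, d_out = 3, 5, 2
W1 = rng.standard_normal((d_hidden, d_in))
b1 = rng.standard_normal(d_hidden)
W2 = rng.standard_normal((d_out, d_hidden))
b2 = rng.standard_normal(d_out)

# Positive rescaling: relu(lam * t) = lam * relu(t) for lam > 0, so scaling
# neuron i's incoming weights and bias by lam_i and its outgoing weights by
# 1/lam_i preserves the network's function.
lam = rng.uniform(0.5, 2.0, size=d_hidden)
# Permutation: relabeling the hidden neurons also preserves the function.
perm = rng.permutation(d_hidden)

W1_eq = (lam[:, None] * W1)[perm]   # rescale rows, then permute neurons
b1_eq = (lam * b1)[perm]
W2_eq = (W2 / lam)[:, perm]         # compensate in the outgoing weights

x = rng.standard_normal((100, d_in))
assert np.allclose(net(x, W1, b1, W2, b2), net(x, W1_eq, b1_eq, W2_eq, b2))
print("Permuted and rescaled parameters implement the same function.")
```

Any identifiability guarantee can therefore hold at best up to these transformations, which is exactly the "modulo permutation and positive rescaling" in the abstract.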
Pages: 4431-4493
Number of pages: 62
Related Papers
50 items in total
  • [21] The Construction and Approximation of ReLU Neural Network Operators
    Chen, Hengjie
    Yu, Dansheng
    Li, Zhong
    [J]. JOURNAL OF FUNCTION SPACES, 2022, 2022
  • [22] Provable Identifiability of Two-Layer ReLU Neural Networks via LASSO Regularization
    Li, Gen
    Wang, Ganghua
    Ding, Jie
    [J]. IEEE Transactions on Information Theory, 2023, 69 (09) : 5921 - 5935
  • [23] Soft error reliability predictor based on a Deep Feedforward Neural Network
    Ruiz Falco, David
    Serrano-Cases, Alejandro
    Martinez-Alvarez, Antonio
    Cuenca-Asensi, Sergio
    [J]. 21ST IEEE LATIN-AMERICAN TEST SYMPOSIUM (LATS 2020), 2020
  • [24] Predicting the DNA Conductance Using a Deep Feedforward Neural Network Model
    Aggarwal, Abhishek
    Vinayak, Vinayak
    Bag, Saientan
    Bhattacharyya, Chiranjib
    Waghmare, Umesh V.
    Maiti, Prabal K.
    [J]. JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2021, 61 (01) : 106 - 114
  • [25] Parameter Optimization and Performance Analysis of Magnetorheological Valve Based on Feedforward Neural Network
    Hu, Guoliang
    Fang, Bing
    Yang, Xiao
    Zhou, Feng
    Yu, Lifan
    [J]. Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology, 2024, 44 (12) : 1263 - 1276
  • [26] On Theoretical Analysis of Single Hidden Layer Feedforward Neural Networks with ReLU Activations
    Shen, Guorui
    Yuan, Ye
    [J]. 2019 34TH YOUTH ACADEMIC ANNUAL CONFERENCE OF CHINESE ASSOCIATION OF AUTOMATION (YAC), 2019, : 706 - 709
  • [27] Learning Deep ReLU Networks Is Fixed-Parameter Tractable
    Chen, Sitan
    Klivans, Adam R.
    Meka, Raghu
    [J]. 2021 IEEE 62ND ANNUAL SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS 2021), 2022, : 696 - 707
  • [28] Unboundedness of Linear Regions of Deep ReLU Neural Networks
    Ponomarchuk, Anton
    Koutschan, Christoph
    Moser, Bernhard
    [J]. DATABASE AND EXPERT SYSTEMS APPLICATIONS, DEXA 2022 WORKSHOPS, 2022, 1633 : 3 - 10
  • [29] Packet-Dropouts Compensation for Networked Control System via Deep ReLU Neural Network
    Cui, Yi
    Cao, Yang
    Kang, Yu
    Li, Pengfei
    Wang, Xuefeng
    [J]. NEURAL INFORMATION PROCESSING (ICONIP 2017), PT VI, 2017, 10639 : 61 - 70
  • [30] A generative model for fBm with deep ReLU neural networks
    Allouche, Michael
    Girard, Stephane
    Gobet, Emmanuel
    [J]. JOURNAL OF COMPLEXITY, 2022, 73