Tighter Risk Certificates for Neural Networks

Cited: 0
Authors
Perez-Ortiz, Maria [1 ]
Rivasplata, Omar [1 ]
Shawe-Taylor, John [1 ]
Szepesvari, Csaba [2 ]
Institutions
[1] UCL, AI Ctr, London, England
[2] DeepMind Edmonton, Edmonton, AB, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC); Engineering and Physical Sciences Research Council (EPSRC), UK;
Keywords
Deep learning; neural network training; weight randomisation; generalisation; pathwise reparametrised gradients; PAC-Bayes with Backprop; data-dependent priors; PAC-BAYESIAN ANALYSIS; BOUNDS; BACKPROPAGATION; FRAMEWORK;
DOI
None available
CLC Classification
TP [Automation & Computer Technology];
Subject Classification Code
0812 ;
Abstract
This paper presents an empirical study of training probabilistic neural networks with training objectives derived from PAC-Bayes bounds. In the context of probabilistic neural networks, the output of training is a probability distribution over network weights. We present two training objectives, derived from tight PAC-Bayes bounds and used here for the first time in connection with training neural networks. We also re-implement a previously used training objective based on a classical PAC-Bayes bound, to compare the properties of the predictors learned using the different training objectives. We compute risk certificates for the learnt predictors, based on part of the data used to learn them. We further experiment with different types of priors on the weights (both data-free and data-dependent) and with different neural network architectures. Our experiments on MNIST and CIFAR-10 show that our training methods produce competitive test set errors and non-vacuous risk bounds with much tighter values than previous results in the literature, showing promise not only for guiding the learning algorithm through bounding the risk but also for model selection. These observations suggest that the methods studied here might be good candidates for self-certified learning, in the sense of using the whole data set to learn a predictor and certify its risk on any unseen data (from the same distribution as the training data), potentially without the need for holding out test data.
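Risk certificates of the kind described in the abstract are typically obtained by numerically inverting a PAC-Bayes-kl bound: given the empirical risk of the randomised predictor, the KL divergence between posterior and prior, the sample size, and a confidence parameter, one solves for the largest true risk consistent with the bound. The sketch below illustrates this computation under those standard assumptions; the function names are illustrative and not taken from the paper's code.

```python
import math


def kl_bernoulli(q, p):
    """KL divergence between Bernoulli(q) and Bernoulli(p)."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))


def kl_inverse(emp_risk, budget, tol=1e-9):
    """Largest p in [emp_risk, 1] with kl(emp_risk || p) <= budget,
    found by bisection (kl is increasing in p on this interval)."""
    lo, hi = emp_risk, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bernoulli(emp_risk, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo


def pac_bayes_kl_certificate(emp_risk, kl_qp, n, delta=0.025):
    """Risk certificate from the PAC-Bayes-kl bound:
    with prob. >= 1 - delta over the n-sample,
        kl(emp_risk || true_risk) <= (KL(Q||P) + ln(2*sqrt(n)/delta)) / n,
    so the certificate is the kl-inverse of the right-hand side."""
    rhs = (kl_qp + math.log(2 * math.sqrt(n) / delta)) / n
    return kl_inverse(emp_risk, rhs)
```

A lower empirical risk or a smaller posterior-prior KL both tighten the certificate, which is why data-dependent priors (which shrink the KL term) help, as the paper's experiments explore.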
Pages: 40
Related Papers
50 records total
  • [1] Synthesizing Barrier Certificates Using Neural Networks
    Zhao, Hengjun
    Zeng, Xia
    Chen, Taolue
    Liu, Zhiming
    PROCEEDINGS OF THE 23RD INTERNATIONAL CONFERENCE ON HYBRID SYSTEMS: COMPUTATION AND CONTROL (HSCC2020) (PART OF CPS-IOT WEEK), 2020,
  • [2] Hunting Malicious TLS Certificates with Deep Neural Networks
    Torroledo, Ivan
    Camacho, Luis David
    Bahnsen, Alejandro Correa
    AISEC'18: PROCEEDINGS OF THE 11TH ACM WORKSHOP ON ARTIFICIAL INTELLIGENCE AND SECURITY, 2018, : 64 - 73
  • [3] Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach
    Jafarpour, Saber
    Abate, Matthew
    Davydov, Alexander
    Bullo, Francesco
    Coogan, Samuel
    LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, VOL 168, 2022, 168
  • [4] Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach
    Jafarpour, Saber
    Abate, Matthew
    Davydov, Alexander
    Bullo, Francesco
    Coogan, Samuel
    Proceedings of Machine Learning Research, 2022, 168 : 917 - 930
  • [5] Application of neural networks for evaluating energy performance certificates of residential buildings
    Khayatian, Fazel
    Sarto, Luca
    Dall'O', Giuliano
    ENERGY AND BUILDINGS, 2016, 125 : 45 - 54
  • [6] Neural Closure Certificates
    Nadali, Alireza
    Murali, Vishnu
    Trivedi, Ashutosh
    Zamani, Majid
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 19, 2024, : 21446 - 21453
  • [7] Synthesizing ReLU Neural Networks with Two Hidden Layers as Barrier Certificates for Hybrid Systems
    Zhao, Qingye
    Chen, Xin
    Zhang, Yifan
    Sha, Meng
    Yang, Zhengfeng
    Lin, Wang
    Tang, Enyi
    Chen, Qiguang
    Li, Xuandong
    HSCC2021: PROCEEDINGS OF THE 24TH INTERNATIONAL CONFERENCE ON HYBRID SYSTEMS: COMPUTATION AND CONTROL (PART OF CPS-IOT WEEK), 2021,
  • [8] Detection of Rogue Certificates from Trusted Certificate Authorities Using Deep Neural Networks
    Dong, Zheng
    Kane, Kevin
    Camp, L. Jean
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2016, 19 (02)
  • [9] Neural Networks with Sparse Activation Induced by Large Bias: Tighter Analysis with Bias-Generalized NTK
    Yang, Hongru
    Jiang, Ziyu
    Zhang, Ruizhe
    Liang, Yingbin
    Wang, Zhangyang
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25
  • [10] Shared Certificates for Neural Network Verification
    Fischer, Marc
    Sprecher, Christian
    Dimitrov, Dimitar Iliev
    Singh, Gagandeep
    Vechev, Martin
    COMPUTER AIDED VERIFICATION (CAV 2022), PT I, 2022, 13371 : 127 - 148