Tighter Risk Certificates for Neural Networks

Cited by: 0
Authors
Perez-Ortiz, Maria [1]
Rivasplata, Omar [1]
Shawe-Taylor, John [1]
Szepesvari, Csaba [2]
Affiliations
[1] UCL, AI Centre, London, England
[2] DeepMind Edmonton, Edmonton, AB, Canada
Funding
Natural Sciences and Engineering Research Council of Canada; Engineering and Physical Sciences Research Council (UK);
Keywords
Deep learning; neural network training; weight randomisation; generalisation; pathwise reparametrised gradients; PAC-Bayes with Backprop; data-dependent priors; PAC-BAYESIAN ANALYSIS; BOUNDS; BACKPROPAGATION; FRAMEWORK;
DOI
Not available
Chinese Library Classification: TP [Automation and Computer Technology]
Discipline code: 0812
Abstract
This paper presents an empirical study of training probabilistic neural networks with training objectives derived from PAC-Bayes bounds. In the context of probabilistic neural networks, the output of training is a probability distribution over network weights. We present two training objectives, derived from tight PAC-Bayes bounds and used here for the first time in connection with training neural networks. We also re-implement a previously used training objective based on a classical PAC-Bayes bound, to compare the properties of the predictors learned under the different objectives. We compute risk certificates for the learnt predictors, based on part of the data used to learn them. We further experiment with different types of priors on the weights (both data-free and data-dependent) and with different neural network architectures. Our experiments on MNIST and CIFAR-10 show that our training methods produce competitive test set errors and non-vacuous risk bounds that are much tighter than previous results in the literature, showing promise not only for guiding the learning algorithm through bounding the risk but also for model selection. These observations suggest that the methods studied here may be good candidates for self-certified learning, in the sense of using the whole data set to learn a predictor and to certify its risk on any unseen data (from the same distribution as the training data), potentially without the need to hold out test data.
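To make the notion of a "risk certificate" concrete: PAC-Bayes certificates of the kind the abstract describes are typically obtained by inverting a PAC-Bayes-kl inequality, which bounds the binary KL divergence between the empirical risk and the true risk by a quantity involving the posterior-prior KL divergence, the number of certification examples, and a confidence parameter. The sketch below is illustrative only and is not the authors' implementation; the function names and the specific bound form (a standard PAC-Bayes-kl bound inverted by bisection) are our assumptions.

```python
import math

def kl_bernoulli(q, p):
    """Binary KL divergence kl(q || p) between Bernoulli(q) and Bernoulli(p)."""
    eps = 1e-12  # clamp away from 0 and 1 to avoid log(0)
    q = min(max(q, eps), 1.0 - eps)
    p = min(max(p, eps), 1.0 - eps)
    return q * math.log(q / p) + (1.0 - q) * math.log((1.0 - q) / (1.0 - p))

def pac_bayes_kl_bound(emp_risk, kl_posterior_prior, n, delta=0.05):
    """Risk certificate from a standard PAC-Bayes-kl bound (illustrative form):
    with probability >= 1 - delta over an n-sample,
        kl(emp_risk || true_risk) <= (KL(posterior || prior) + ln(2*sqrt(n)/delta)) / n.
    The certificate is the largest p satisfying the inequality,
    found by bisection (kl(emp_risk || p) is increasing in p for p > emp_risk).
    """
    rhs = (kl_posterior_prior + math.log(2.0 * math.sqrt(n) / delta)) / n
    lo, hi = emp_risk, 1.0
    for _ in range(100):  # bisection to numerical precision
        mid = 0.5 * (lo + hi)
        if kl_bernoulli(emp_risk, mid) > rhs:
            hi = mid
        else:
            lo = mid
    return lo
```

For example, a stochastic predictor with 10% empirical error on 60,000 certification examples and a moderate posterior-prior KL yields a certificate only slightly above the empirical risk, whereas a larger KL term loosens the certificate; this is the sense in which tighter bounds both certify risk and inform model selection.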
Pages: 40