Tighter Risk Certificates for Neural Networks

Citations: 0
Authors
Perez-Ortiz, Maria [1 ]
Rivasplata, Omar [1 ]
Shawe-Taylor, John [1 ]
Szepesvari, Csaba [2 ]
Institutions
[1] UCL, AI Ctr, London, England
[2] DeepMind Edmonton, Edmonton, AB, Canada
Funding
Natural Sciences and Engineering Research Council of Canada; Engineering and Physical Sciences Research Council (UK);
Keywords
Deep learning; neural network training; weight randomisation; generalisation; pathwise reparametrised gradients; PAC-Bayes with Backprop; data-dependent priors; PAC-BAYESIAN ANALYSIS; BOUNDS; BACKPROPAGATION; FRAMEWORK;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
This paper presents an empirical study of training probabilistic neural networks using training objectives derived from PAC-Bayes bounds. In the context of probabilistic neural networks, the output of training is a probability distribution over network weights. We present two training objectives, derived from tight PAC-Bayes bounds and used here for the first time in connection with training neural networks. We also re-implement a previously used training objective based on a classical PAC-Bayes bound, in order to compare the properties of the predictors learned under the different training objectives. We compute risk certificates for the learnt predictors, based on part of the data used to learn them. We further experiment with different types of priors on the weights (both data-free and data-dependent priors) and with different neural network architectures. Our experiments on MNIST and CIFAR-10 show that our training methods produce competitive test set errors and non-vacuous risk bounds that are much tighter than previous results in the literature, showing promise not only for guiding the learning algorithm by bounding the risk but also for model selection. These observations suggest that the methods studied here may be good candidates for self-certified learning, in the sense of using the whole data set to learn a predictor and to certify its risk on any unseen data (from the same distribution as the training data), potentially without the need for held-out test data.
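To make the notion of a "risk certificate" concrete, the sketch below shows one standard way such a certificate can be computed from a PAC-Bayes-kl bound: the empirical risk and the KL divergence between posterior and prior are plugged into the bound, and the binary KL is inverted numerically to obtain an upper bound on the true risk. This is a minimal illustrative sketch, not the authors' implementation; the function names, the choice of the classical PAC-Bayes-kl form, and the example numbers are assumptions for illustration only.

```python
import math

def binary_kl(q, p):
    # kl(q || p) between Bernoulli(q) and Bernoulli(p), clamped for stability.
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse(emp_risk, bound, tol=1e-9):
    # Largest p in [emp_risk, 1] with kl(emp_risk || p) <= bound, found by
    # bisection (kl(q || p) is increasing in p on this interval).
    lo, hi = emp_risk, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_kl(emp_risk, mid) > bound:
            hi = mid
        else:
            lo = mid
    return hi

def pac_bayes_kl_certificate(emp_risk, kl_post_prior, n, delta=0.05):
    # Classical PAC-Bayes-kl bound (holds with prob. >= 1 - delta over samples):
    #   kl(emp_risk || true_risk) <= (KL(Q || P) + ln(2 * sqrt(n) / delta)) / n
    # The certificate is the kl-inverse of the right-hand side.
    rhs = (kl_post_prior + math.log(2 * math.sqrt(n) / delta)) / n
    return kl_inverse(emp_risk, rhs)

# Hypothetical example: 2% empirical error, KL(Q||P) = 5000, n = 55000 samples.
certificate = pac_bayes_kl_certificate(0.02, 5000.0, 55000)
```

The certificate is valid on unseen data from the training distribution, which is what allows the bound itself (rather than a held-out test set) to certify the learnt predictor.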
Pages: 40