The Ridgelet Prior: A Covariance Function Approach to Prior Specification for Bayesian Neural Networks

Cited: 0
Authors
Matsubara, Takuo [1 ]
Oates, Chris J. [1 ]
Briol, Francois-Xavier [2 ]
Affiliations
[1] Newcastle Univ, Sch Math Stat & Phys, Newcastle Upon Tyne NE1 7RU, Tyne & Wear, England
[2] UCL, Dept Stat Sci, London WC1E 6BT, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Bayesian neural networks; Gaussian processes; prior selection; ridgelet transform; statistical learning theory; APPROXIMATION; CONVERGENCE; BOUNDS; RATES;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Bayesian neural networks attempt to combine the strong predictive performance of neural networks with formal quantification of uncertainty associated with the predictive output in the Bayesian framework. However, it remains unclear how to endow the parameters of the network with a prior distribution that is meaningful when lifted into the output space of the network. A possible solution is proposed that enables the user to posit an appropriate Gaussian process covariance function for the task at hand. Our approach constructs a prior distribution for the parameters of the network, called a ridgelet prior, that approximates the posited Gaussian process in the output space of the network. In contrast to existing work on the connection between neural networks and Gaussian processes, our analysis is non-asymptotic, with finite-sample-size error bounds provided. This establishes the universality property that a Bayesian neural network can approximate any Gaussian process whose covariance function is sufficiently regular. Our experimental assessment is limited to a proof-of-concept, where we demonstrate that the ridgelet prior can outperform an unstructured prior on regression problems for which a suitable Gaussian process prior can be provided.
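
To make the idea in the abstract concrete, the sketch below (Python with NumPy; all names, parameter values and the matching step are illustrative assumptions, not the authors' method) places a Gaussian prior on the output-layer weights of a one-hidden-layer network so that the induced prior over network outputs approximately matches a posited squared-exponential Gaussian process covariance on a fixed grid of inputs. The matching step here is a simple pseudoinverse surrogate for the ridgelet-transform construction described in the paper and carries none of its finite-sample-size guarantees.

# Minimal sketch: match a network prior to a target GP covariance at fixed inputs.
# This is NOT the ridgelet prior itself; the pseudoinverse step is an assumption.
import numpy as np

rng = np.random.default_rng(0)

def se_cov(x, lengthscale=0.3):
    # Squared-exponential (RBF) covariance matrix on a 1-d grid.
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.linspace(-1.0, 1.0, 50)
K = se_cov(x) + 1e-8 * np.eye(x.size)        # target GP covariance (with jitter)

# One-hidden-layer network; hidden weights and biases are fixed random draws.
n_hidden = 200
W1 = rng.normal(scale=2.0, size=n_hidden)    # hidden weights (1-d input)
b1 = rng.uniform(-2.0, 2.0, size=n_hidden)   # hidden biases
Phi = np.tanh(np.outer(x, W1) + b1)          # hidden-feature matrix, shape (50, 200)

# Gaussian prior N(0, Sigma) on the output weights, with Sigma chosen so that
# the induced output covariance Phi @ Sigma @ Phi.T approximates K.
Phi_pinv = np.linalg.pinv(Phi)
Sigma = Phi_pinv @ K @ Phi_pinv.T
Sigma = 0.5 * (Sigma + Sigma.T)              # symmetrise for numerical safety

# Check: empirical covariance of network prior draws versus the target K.
w = rng.multivariate_normal(np.zeros(n_hidden), Sigma, size=2000)
f = w @ Phi.T                                # prior draws of the network output on x
print("max |Cov(f) - K| :", np.abs(np.cov(f, rowvar=False) - K).max())

The printed discrepancy between the empirical output covariance and the target K gives a rough sense of how closely such a parameter-space prior tracks the posited Gaussian process at the chosen inputs; the paper's contribution is a principled construction of this kind with explicit error bounds.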
Pages: 57
Related Papers
(50 records in total)
  • [21] Scalable Bayesian optimization with randomized prior networks
    Bhouri, Mohamed Aziz
    Joly, Michael
    Yu, Robert
    Sarkar, Soumalya
    Perdikaris, Paris
    COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2023, 417
  • [22] THE SYSTEMATIC SPECIFICATION OF A FULL PRIOR COVARIANCE-MATRIX FOR ASSET DEMAND EQUATIONS
    SMITH, G
    QUARTERLY JOURNAL OF ECONOMICS, 1981, 96 (02): 317 - 339
  • [23] On the use of a pruning prior for neural networks
    Goutte, C
    NEURAL NETWORKS FOR SIGNAL PROCESSING VI, 1996, : 52 - 61
  • [24] Bayesian Sparse Spiked Covariance Model with a Continuous Matrix Shrinkage Prior*
    Xie, Fangzheng
    Cape, Joshua
    Priebe, Carey E.
    Xu, Yanxun
    BAYESIAN ANALYSIS, 2022, 17 (04): 1193 - 1217
  • [25] Preventing Catastrophic Forgetting using Prior Transfer in Physics Informed Bayesian Neural Networks
    Van Heck, Cedric
    Coene, Annelies
    Crevecoeur, Guillaume
    2022 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 2022, : 650 - 657
  • [26] Sensitivity to prior specification in Bayesian genome-based prediction models
    Lehermeier, Christina
    Wimmer, Valentin
    Albrecht, Theresa
    Auinger, Hans-Juergen
    Gianola, Daniel
    Schmid, Volker J.
    Schoen, Chris-Carolin
    STATISTICAL APPLICATIONS IN GENETICS AND MOLECULAR BIOLOGY, 2013, 12 (03) : 375 - 391
  • [27] Impacts of prior mis-specification on Bayesian fisheries stock assessment
    Chen, Yong
    Sun, Chi-Lu
    Kanaiwa, Minoru
    MARINE AND FRESHWATER RESEARCH, 2008, 59 (02) : 145 - 156
  • [28] Iterative aggregation of Bayesian networks incorporating prior knowledge
    Liu, Dongsheng
    Fifth Wuhan International Conference on E-Business, Vols 1-3: INTEGRATION AND INNOVATION THROUGH MEASUREMENT AND MANAGEMENT, 2006, : 446 - 452
  • [29] Bayesian Networks: The Parental Synergy and the Prior Convergence Error
    Bolt, Janneke H.
    AI*IA 2009: EMERGENT PERSPECTIVES IN ARTIFICIAL INTELLIGENCE, 2009, 5883: 1 - 10
  • [30] Learning Bayesian networks with integration of indirect prior knowledge
    Pei, Baikang
    Rowe, David W.
    Shin, Dong-Guk
    INTERNATIONAL JOURNAL OF DATA MINING AND BIOINFORMATICS, 2010, 4 (05) : 505 - 519