New prior distribution for Bayesian neural network and learning via Hamiltonian Monte Carlo

Cited: 1
Authors
Ramchoun, Hassan [1 ]
Ettaouil, Mohamed [1 ]
Affiliations
[1] Univ Sidi Mohamed Ben Abdellah, Dept Math, Modeling & Sci Comp Lab, Fac Sci & Tech, Fes, Morocco
Keywords
Bayesian multilayer feedforward neural network; Prior; Hamiltonian Monte Carlo; Evidence framework; Hyper-parameter; Regularization; Smoothing L-1/2 regularization; Gradient method
DOI
10.1007/s12530-019-09288-3
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
From a Bayesian point of view, the prior distribution over the weights of a multilayer feedforward neural network plays a central role in generalization. In this context, we propose a new prior law for the weight parameters that promotes network regularization more strongly than the previously proposed l1 prior. The network is trained with Hamiltonian Monte Carlo, which is used to simulate both the prior and the posterior distribution. The generated samples are used to approximate the gradient of the evidence, which allows re-estimating the hyperparameters that balance the trade-off between the likelihood term and the regularization term; the posterior samples are then used to estimate the network output. The case problems studied in this paper include regression and classification tasks. The obtained results illustrate the advantage of our approach in terms of error rate compared to the earlier approach, although our method consumes more time before convergence.
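The sampling step described in the abstract can be illustrated with a minimal plain-HMC sketch: leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject step. This is an assumption-laden toy, not the authors' implementation — `hmc_sample` and the 2-D Gaussian target (a stand-in for a Bayesian network's weight posterior) are illustrative names chosen here, and the step size and trajectory length are untuned defaults.

```python
import numpy as np

def hmc_sample(log_prob, grad_log_prob, x0,
               n_samples=500, step_size=0.1, n_leapfrog=20, seed=0):
    """Basic Hamiltonian Monte Carlo sampler (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)        # resample momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of Hamiltonian dynamics
        p_new += 0.5 * step_size * grad_log_prob(x_new)
        for i in range(n_leapfrog):
            x_new += step_size * p_new
            if i != n_leapfrog - 1:
                p_new += step_size * grad_log_prob(x_new)
        p_new += 0.5 * step_size * grad_log_prob(x_new)
        # Metropolis accept/reject on the total Hamiltonian
        h_old = -log_prob(x) + 0.5 * p @ p
        h_new = -log_prob(x_new) + 0.5 * p_new @ p_new
        if rng.random() < np.exp(h_old - h_new):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Toy target: standard 2-D Gaussian posterior over "weights"
log_prob = lambda w: -0.5 * w @ w
grad_log_prob = lambda w: -w
samples = hmc_sample(log_prob, grad_log_prob, np.zeros(2))
print(samples.mean(axis=0), samples.std(axis=0))
```

In the paper's setting the same loop would target the (unnormalized) log posterior of the network weights, with `grad_log_prob` supplied by backpropagation, and the collected samples would be averaged to form the predictive output.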
Pages: 661-671 (11 pages)
Related Papers (50 total)
  • [1] Hamiltonian Monte Carlo based on evidence framework for Bayesian learning to neural network
    Ramchoun, Hassan; Ettaouil, Mohamed
    SOFT COMPUTING, 2019, 23 (13): 4815-4825
  • [2] Neural network gradient Hamiltonian Monte Carlo
    Li, Lingge; Holbrook, Andrew; Shahbaba, Babak; Baldi, Pierre
    COMPUTATIONAL STATISTICS, 2019, 34 (01): 281-299
  • [3] Bayesian Federated Learning with Hamiltonian Monte Carlo: Algorithm and Theory
    Liang, Jiajun; Zhang, Qian; Deng, Wei; Song, Qifan; Lin, Guang
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2024
  • [4] Separable Shadow Hamiltonian Hybrid Monte Carlo for Bayesian Neural Network Inference in wind speed forecasting
    Mbuvha, Rendani; Mongwe, Wilson Tsakane; Marwala, Tshilidzi
    ENERGY AND AI, 2021, 6
  • [5] Decentralized Bayesian learning with Metropolis-adjusted Hamiltonian Monte Carlo
    Kungurtsev, Vyacheslav; Cobb, Adam; Javidi, Tara; Jalaian, Brian
    MACHINE LEARNING, 2023, 112 (08): 2791-2819
  • [6] Modified Hamiltonian Monte Carlo for Bayesian inference
    Radivojevic, Tijana; Akhmatskaya, Elena
    STATISTICS AND COMPUTING, 2020, 30 (02): 377-404