Bayesian inference for generalized linear models for spiking neurons

Cited: 35
Authors
Gerwinn, Sebastian [1 ,2 ]
Macke, Jakob H. [1 ,2 ,3 ]
Bethge, Matthias [1 ,2 ]
Affiliations
[1] Max Planck Inst Biol Cybernet, D-72076 Tübingen, Germany
[2] Univ Tübingen, Werner Reichardt Ctr Integrat Neurosci, Tübingen, Germany
[3] UCL, Gatsby Computat Neurosci Unit, London, England
Keywords
spiking neurons; Bayesian inference; population coding; sparsity; multielectrode recordings; receptive field; GLM; functional connectivity; MAXIMUM-LIKELIHOOD-ESTIMATION; INTERAURAL TIME DIFFERENCES; FRAMEWORK; ENSEMBLE; TRAINS;
DOI
10.3389/fncom.2010.00012
Chinese Library Classification
Q [Biological Sciences];
Discipline Codes
07 ; 0710 ; 09 ;
Abstract
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. The posterior also yields an alternative point estimate, the posterior mean, as opposed to the commonly used maximum a posteriori (MAP) estimate. We systematically compare the different inference techniques on simulated data as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by combining a Laplace prior with the posterior mean estimate.
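The setting described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the dimensions, the simulated data, and the regularization strength `lam` are made-up toy choices. It shows MAP estimation for a Poisson spiking GLM under a Laplace (L1) prior, one of the prior/estimator combinations the paper compares; the Expectation Propagation approximation that yields the posterior mean and covariance is omitted.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy setup: a sparse 10-dimensional stimulus filter,
# exponential link function, Poisson spike counts.
rng = np.random.default_rng(0)
n_samples, n_dim = 200, 10

w_true = np.zeros(n_dim)
w_true[:2] = [1.0, -1.0]                      # only two weights are truly nonzero

X = rng.standard_normal((n_samples, n_dim))   # stimulus design matrix
y = rng.poisson(np.exp(X @ w_true))           # simulated spike counts

def neg_log_posterior(w, lam=2.0):
    """Poisson negative log-likelihood plus a Laplace log-prior (L1 penalty)."""
    eta = np.clip(X @ w, -20.0, 20.0)         # guard against overflow in exp
    nll = np.sum(np.exp(eta) - y * eta)       # Poisson NLL up to a constant in w
    return nll + lam * np.sum(np.abs(w))

# Powell is gradient-free, so the non-smooth L1 term poses no difficulty.
w_map = minimize(neg_log_posterior, np.zeros(n_dim), method="Powell").x
```

With enough data this MAP estimate recovers the two nonzero filter weights and shrinks the rest toward zero; the paper's finding is that, under a Laplace prior, the posterior mean obtained from the EP approximation can outperform such a MAP point estimate.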
Pages: 17