Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

Cited: 0
Authors
Sebastian Bitzer
Stefan J. Kiebel
Affiliations
[1] MPI for Human Cognitive and Brain Sciences
Source
Biological Cybernetics | 2012 / Volume 106
Keywords
Recurrent neural networks; Bayesian inference; Nonlinear dynamics; Human motion;
DOI
Not available
Abstract
Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a ‘recognizing RNN’ (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, e.g. fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of RNNs may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
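The recognition scheme outlined in the abstract — propagate a prediction through the generative RNN, then correct it with a prediction error message — can be illustrated with a toy example. This is a minimal sketch under assumed dynamics (a tanh RNN with a linear readout) and a simple error-driven correction step; the variable names, the model, and the linear correction rule are illustrative assumptions, not the paper's derived Bayesian update equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed generative RNN: hidden state x_{t+1} = tanh(W x_t),
# observations y_t = H x_t + noise. (Illustrative, not the paper's model.)
n_hidden, n_obs, T = 4, 2, 50
W = rng.normal(scale=1.5 / np.sqrt(n_hidden), size=(n_hidden, n_hidden))
H = rng.normal(size=(n_obs, n_hidden))
sigma_obs = 0.05

# Simulate the "environment": latent trajectory and noisy observations.
x_true = np.zeros((T, n_hidden))
x_true[0] = rng.normal(size=n_hidden)
for t in range(1, T):
    x_true[t] = np.tanh(W @ x_true[t - 1])
y = x_true @ H.T + rng.normal(scale=sigma_obs, size=(T, n_obs))

# Recognition dynamics in the spirit of the rRNN: each step forms a
# prediction message through the generative RNN, then applies an
# error-driven correction weighted by a fixed gain (a crude stand-in
# for the precision-weighted Bayesian update).
gain = 0.2
x_hat = np.zeros((T, n_hidden))
x_hat[0] = rng.normal(size=n_hidden)      # deliberately wrong initial state
for t in range(1, T):
    pred = np.tanh(W @ x_hat[t - 1])      # prediction message
    err = y[t] - H @ pred                 # prediction error message
    x_hat[t] = pred + gain * (H.T @ err)  # correction step

mse_start = np.mean((x_hat[:5] - x_true[:5]) ** 2)
mse_end = np.mean((x_hat[-5:] - x_true[-5:]) ** 2)
print(f"MSE, first 5 steps: {mse_start:.4f}; last 5 steps: {mse_end:.4f}")
```

Because the correction continually pulls the estimate toward the observations, the recognition dynamics can recover from a wrong initial condition — the robustness property the abstract attributes to the rRNN — whereas an open-loop RNN run from the same wrong initial state has no mechanism to do so.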
Pages: 201-217
Page count: 16
Related papers
50 records in total
  • [1] Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks
    Bitzer, Sebastian
    Kiebel, Stefan J.
    BIOLOGICAL CYBERNETICS, 2012, 106 (4-5): 201-217
  • [2] Sparse Bayesian Recurrent Neural Networks
    Chatzis, Sotirios P.
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2015, PT II, 2015, 9285: 359-372
  • [3] Bayesian learning for recurrent neural networks
    Crucianu, M
    Boné, R
    de Beauville, JPA
    NEUROCOMPUTING, 2001, 36 (01): 235-242
  • [4] Discrete recurrent neural networks for grammatical inference
    Zeng, Z.
    Goodman, R. M.
    Smyth, P.
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1994, 5 (02): 320-330
  • [5] Recurrent Bayesian reasoning in probabilistic neural networks
    Grim, Jiri
    Hora, Jan
    ARTIFICIAL NEURAL NETWORKS - ICANN 2007, PT 1, PROCEEDINGS, 2007, 4668: 129+
  • [6] Dynamic Action Inference with Recurrent Spiking Neural Networks
    Traub, Manuel
    Butz, Martin V.
    Legenstein, Robert
    Otte, Sebastian
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2021, PT V, 2021, 12895: 233-244
  • [7] Verifying Recurrent Neural Networks Using Invariant Inference
    Jacoby, Yuval
    Barrett, Clark
    Katz, Guy
    AUTOMATED TECHNOLOGY FOR VERIFICATION AND ANALYSIS (ATVA 2020), 2020, 12302: 57-74
  • [8] Natural language grammatical inference with recurrent neural networks
    Lawrence, S
    Giles, CL
    Fong, S
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2000, 12 (01): 126-140
  • [9] Recurrent neural networks
    Siegelmann, HT
    COMPUTER SCIENCE TODAY, 1995, 1000: 29-45
  • [10] Marginal Bayesian posterior inference using recurrent neural networks with application to sequential models
    Fisher, Thayer
    Luedtke, Alex
    Carone, Marco
    Simon, Noah
    STATISTICA SINICA, 2023, 33: 1507-1532