Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

Cited: 0
Authors
Sebastian Bitzer
Stefan J. Kiebel
Affiliation
[1] MPI for Human Cognitive and Brain Sciences
Source
Biological Cybernetics | 2012 / Vol. 106
Keywords
Recurrent neural networks; Bayesian inference; Nonlinear dynamics; Human motion
DOI
Not available
Abstract
Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a ‘recognizing RNN’ (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, e.g. fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of RNNs may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
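The recognition scheme sketched in the abstract — an RNN used as a generative model of dynamic input, inverted by Bayesian updates that exchange prediction and prediction-error messages — can be illustrated with a standard approximate filter. The sketch below uses an extended Kalman filter over a tanh RNN as a generic stand-in; all dimensions, weights, and noise levels are hypothetical, and the paper derives its own update equations, which this example does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions and parameters (not taken from the paper).
n, m = 4, 2                                    # hidden-state / observation dims
W = rng.normal(0.0, 1.2 / np.sqrt(n), (n, n))  # recurrent weights
C = rng.normal(0.0, 1.0, (m, n))               # linear observation matrix
Q = 0.05 * np.eye(n)                           # process-noise covariance
R = 0.01 * np.eye(m)                           # observation-noise covariance

def f(x):
    # Generative RNN: each unit applies tanh to its integrated input.
    return np.tanh(W @ x)

def simulate(T, x0):
    # Run the generative RNN forward and emit noisy observations.
    xs, ys = [x0], []
    for _ in range(T):
        x = f(xs[-1]) + rng.multivariate_normal(np.zeros(n), Q)
        xs.append(x)
        ys.append(C @ x + rng.multivariate_normal(np.zeros(m), R))
    return np.array(xs[1:]), np.array(ys)

def rrnn_decode(ys, x0, P0):
    # EKF-style recognition: alternate prediction and prediction-error
    # correction, online, one observation at a time.
    x, P = x0, P0
    estimates = []
    for y in ys:
        # Prediction: propagate mean and covariance through the nonlinearity.
        F = (1.0 - np.tanh(W @ x) ** 2)[:, None] * W  # Jacobian of f at x
        x_pred = f(x)
        P_pred = F @ P @ F.T + Q
        # Correction: the prediction-error message, weighted by the gain.
        e = y - C @ x_pred
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)
        x = x_pred + K @ e
        P = (np.eye(n) - K @ C) @ P_pred
        estimates.append(x)
    return np.array(estimates)

xs_true, ys = simulate(200, np.zeros(n))
xs_hat = rrnn_decode(ys, np.zeros(n), np.eye(n))

# The filtered estimate should track the latent trajectory better than a
# constant guess of zero (the stationary mean of these toy dynamics).
err_filter = np.mean((xs_hat - xs_true) ** 2)
err_zero = np.mean(xs_true ** 2)
```

Because the correction step only ever processes the newest observation, decoding runs online, which is the property the abstract highlights for recognizing kinematics as they unfold.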
Pages: 201–217
Page count: 16