Laws of large numbers and Langevin approximations for stochastic neural field equations

Cited by: 2
Authors
Riedler, Martin G. [1 ]
Buckwar, Evelyn [1 ]
Affiliations
[1] Institute for Stochastics, Johannes Kepler University, Linz, Austria
Keywords
Central limit theorem; Chemical Langevin equation; Infinite dimensions; Law of large numbers; Neural field equations; Piecewise deterministic Markov process
DOI
10.1186/2190-5983-3-1
Abstract
In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson-Cowan equation can be obtained as the limit, in uniform convergence on compacts in probability, of a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces and are thereby able to incorporate spatial structures of the underlying model. © 2013 M.G. Riedler, E. Buckwar; licensee Springer.
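For orientation, the following LaTeX sketch records a standard activity-based form of the deterministic Wilson-Cowan neural field equation (the law-of-large-numbers limit mentioned above) together with a schematic Langevin-type correction. The notation (ν, w, f, I, U, G, W, ε) is illustrative and not taken from the paper; the authors' precise formulation may differ.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Deterministic Wilson--Cowan neural field equation (standard activity-based
% form), arising as the law-of-large-numbers limit of the microscopic models:
\begin{equation*}
  \tau\,\partial_t \nu(t,x) \;=\; -\nu(t,x)
    \;+\; f\!\Big(\int_{D} w(x,y)\,\nu(t,y)\,\mathrm{d}y \;+\; I(t,x)\Big),
  \qquad x \in D,
\end{equation*}
% where \nu is the population activity, w a connectivity kernel, f a gain
% function, and I an external input on the spatial domain D.
%
% Schematic neural field Langevin equation: the central limit theorem for the
% martingale part suggests a small Hilbert-space-valued Gaussian correction,
% written here with a generic diffusion operator G and a Q-Wiener process W:
\begin{equation*}
  \mathrm{d}U_t \;=\; \Big[-U_t + f\big(w \ast U_t + I\big)\Big]\,\mathrm{d}t
    \;+\; \varepsilon\, G(U_t)\,\mathrm{d}W_t .
\end{equation*}
\end{document}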
Pages: 1-54