A learning result for continuous-time recurrent neural networks

Cited by: 12
Author
Sontag, ED [1]
Affiliation
[1] Rutgers State Univ, Dept Math, Hill Ctr, Piscataway, NJ 08854 USA
Keywords
recurrent neural networks; system identification; computational learning theory
DOI
10.1016/S0167-6911(98)00006-1
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
The following learning problem is considered, for continuous-time recurrent neural networks having sigmoidal activation functions. Given a "black box" representing an unknown system, measurements of output derivatives are collected, for a set of randomly generated inputs, and a network is used to approximate the observed behavior. It is shown that the number of inputs needed for reliable generalization (the sample complexity of the learning problem) is upper bounded by an expression that grows polynomially with the dimension of the network and logarithmically with the number of output derivatives being matched. (C) 1998 Elsevier Science B.V. All rights reserved.
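The abstract describes a learning setup built around continuous-time recurrent networks with sigmoidal activations, driven by randomly generated inputs, whose output derivatives are to be matched. The sketch below illustrates one common form of such a network, dx/dt = sigma(Ax + Bu(t)), y = Cx, simulated by forward Euler; the specific parameterization, the choice of tanh as the sigmoid, the random sinusoidal input, and the finite-difference estimate of the output derivative are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a continuous-time recurrent network of the kind discussed
# in the abstract: dx/dt = tanh(A x + B u(t)), y = C x.  All parameter values,
# the input signal class, and the integration scheme are assumptions made for
# illustration only.
import numpy as np

def simulate_ctrnn(A, B, C, u, x0, t_grid):
    """Forward-Euler simulation of dx/dt = tanh(A x + B u(t)), y = C x."""
    x = np.array(x0, dtype=float)
    ys = []
    for k in range(len(t_grid) - 1):
        dt = t_grid[k + 1] - t_grid[k]
        ys.append(C @ x)                               # record output y = C x
        x = x + dt * np.tanh(A @ x + B @ u(t_grid[k])) # Euler state update
    ys.append(C @ x)
    return np.array(ys)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, p = 4, 1, 1                       # state, input, output dimensions
    A = rng.normal(scale=0.5, size=(n, n))  # hypothetical network weights
    B = rng.normal(size=(n, m))
    C = rng.normal(size=(p, n))

    # One randomly generated input, as in the learning setup sketched in the
    # abstract; here a random sinusoid (an assumption for illustration).
    w, phi = rng.uniform(0.5, 2.0), rng.uniform(0, 2 * np.pi)
    u = lambda t: np.array([np.sin(w * t + phi)])

    t = np.linspace(0.0, 5.0, 501)
    y = simulate_ctrnn(A, B, C, u, np.zeros(n), t)

    # Output derivative estimated by finite differences; the paper's result
    # bounds the number of sample inputs needed when a given number of such
    # output derivatives is matched.
    dt = t[1] - t[0]
    dy = np.gradient(y[:, 0], dt)
    print("y(0) = %.4f, y'(0) = %.4f" % (y[0, 0], dy[0]))
```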
Pages: 151 - 158
Number of pages: 8