On-line Gibbs learning

Cited by: 20
Authors
Kim, JW [1 ]
Sompolinsky, H [1 ]
Affiliations
[1] Hebrew University of Jerusalem, Center for Neural Computation, IL-91904 Jerusalem, Israel
DOI
10.1103/PhysRevLett.76.3021
Chinese Library Classification
O4 [Physics]
Subject classification code
0702
Abstract
We propose a new model of on-line learning which is appropriate for learning realizable and unrealizable, smooth as well as threshold, functions. Following each presentation of an example the new weights are chosen from a Gibbs distribution with an on-line energy that balances the need to minimize the instantaneous error against the need to minimize the change in the weights. We show that this algorithm finds the weights that minimize the generalization error in the limit of an infinite number of examples. The asymptotic rate of convergence is similar to that of batch learning.
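The abstract describes an update in which, after each presented example, new weights are drawn from a Gibbs distribution whose on-line energy trades the instantaneous error against the magnitude of the weight change. Below is a minimal sketch of that idea for a perceptron (threshold) student. The Metropolis sampler and the hyperparameters `beta`, `lam`, `n_steps`, and `step` are illustrative assumptions, not taken from the paper, which analyses the Gibbs measure itself rather than any particular sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def instantaneous_error(w, x, y):
    """0/1 error of a perceptron (threshold unit) on a single example."""
    return float(np.sign(w @ x) != y)

def online_energy(w, w_prev, x, y, lam):
    """On-line energy: instantaneous error plus a penalty on the change in weights.
    `lam` is an assumed coupling to the previous weights."""
    return instantaneous_error(w, x, y) + 0.5 * lam * np.sum((w - w_prev) ** 2)

def gibbs_update(w_prev, x, y, beta=5.0, lam=2.0, n_steps=30, step=0.1):
    """Draw new weights from exp(-beta * online_energy) via a short Metropolis
    random walk started at the previous weights (an assumed sampling scheme)."""
    w = w_prev.copy()
    e = online_energy(w, w_prev, x, y, lam)
    for _ in range(n_steps):
        w_trial = w + step * rng.standard_normal(w.shape)
        e_trial = online_energy(w_trial, w_prev, x, y, lam)
        if rng.random() < np.exp(-beta * (e_trial - e)):
            w, e = w_trial, e_trial
    return w

# Toy on-line run against a realizable teacher perceptron.
N = 20
teacher = rng.standard_normal(N)
w = rng.standard_normal(N)
for _ in range(2000):
    x = rng.standard_normal(N)
    y = np.sign(teacher @ x)
    w = gibbs_update(w, x, y)

overlap = (w @ teacher) / (np.linalg.norm(w) * np.linalg.norm(teacher))
print(f"teacher-student overlap: {overlap:.3f}")
```

In this sketch, larger `beta` or `lam` makes each update more conservative; the paper's asymptotic statements concern the limit of an infinite number of examples, where the algorithm converges to the weights minimizing the generalization error at a rate comparable to batch learning.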
Pages: 3021-3024
Page count: 4