On learning context-free and context-sensitive languages

Cited by: 13
Authors
Bodén, M
Wiles, J
Affiliations
[1] Halmstad Univ, Sch Informat Sci Comp & Elect Engn, S-30118 Halmstad, Sweden
[2] Univ Queensland, Sch Informat Technol & Elect Engn, St Lucia, Qld 4072, Australia
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2002, Vol. 13, No. 2
Keywords
language; prediction; recurrent neural network (RNN);
DOI
10.1109/72.991436
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The long short-term memory (LSTM) network is not the only neural network capable of learning a context-sensitive language. Second-order sequential cascaded networks (SCNs) are able to induce, from a finite fragment of a context-sensitive language, a means of processing strings outside the training set. The dynamical behavior of the SCN is qualitatively distinct from that observed in LSTM networks. Differences in performance and dynamics are discussed.
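The abstract refers to a second-order (bilinear) state update, in which the next state depends multiplicatively on the previous state and the current input symbol. Below is a minimal sketch of such an update in Python/NumPy, assuming one-hot symbol inputs and a linear readout; the class name SecondOrderSCN, the state size, and all weights are illustrative assumptions, not the authors' trained network or training procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SecondOrderSCN:
    """Minimal second-order network sketch (hypothetical, for illustration).

    The next state is a bilinear function of the previous state and the
    current one-hot input symbol:
        s_t[i] = sigmoid( sum_{j,k} W[i, j, k] * s_{t-1}[j] * x_t[k] + b[i] )
    A linear readout scores the next symbol.
    """

    def __init__(self, n_state, n_symbols, rng=None):
        rng = rng or np.random.default_rng(0)
        # W contracts previous state (axis j) with input symbol (axis k).
        self.W = rng.normal(0.0, 0.5, (n_state, n_state, n_symbols))
        self.b = np.zeros(n_state)
        self.V = rng.normal(0.0, 0.5, (n_symbols, n_state))  # readout weights
        self.n_state = n_state
        self.n_symbols = n_symbols

    def step(self, state, x):
        # Bilinear (second-order) combination of state and input.
        net = np.einsum('ijk,j,k->i', self.W, state, x) + self.b
        return sigmoid(net)

    def predict_sequence(self, symbols):
        state = np.full(self.n_state, 0.5)  # neutral initial state
        preds = []
        for s in symbols:
            x = np.zeros(self.n_symbols)
            x[s] = 1.0
            state = self.step(state, x)
            preds.append(int(np.argmax(self.V @ state)))
        return preds

# Example: feed a string from a^n b^n c^n with alphabet {a=0, b=1, c=2}.
net = SecondOrderSCN(n_state=4, n_symbols=3)
print(net.predict_sequence([0, 0, 1, 1, 2, 2]))
```

With suitably trained weights, a network of this form would predict the next legal symbol of strings such as a^n b^n c^n; the paper's claim is that such an SCN generalizes to string lengths outside the training set, with dynamics unlike those of LSTM.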
Pages: 491-493
Number of pages: 3