REPRESENTATIONAL TRAJECTORIES IN CONNECTIONIST LEARNING

Cited by: 4
Authors
CLARK, A [1 ]
Affiliation
[1] WASHINGTON UNIV, DEPT PHILOSOPHY, ST LOUIS, MO 63130
Keywords
CONNECTIONISM; LEARNING; DEVELOPMENT; RECURRENT NETWORKS; UNLEARNING; CATASTROPHIC FORGETTING
DOI
10.1007/BF00974197
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The paper considers the problems involved in getting neural networks to learn about highly structured task domains. A central problem concerns the tendency of networks to learn only a set of shallow (non-generalizable) representations for the task, i.e., to 'miss' the deep organizing features of the domain. Various solutions are examined, including task-specific network configuration and incremental learning. The latter strategy is the more attractive, since it holds out the promise of a task-independent solution to the problem. Once we see exactly how the solution works, however, it becomes clear that it is limited to a special class of cases in which (1) statistically driven undersampling is (luckily) equivalent to task decomposition, and (2) the dangers of unlearning are somehow being minimized. The technique is suggestive nonetheless, for a variety of developmental factors may yield the functional equivalent of both statistical AND 'informed' undersampling in early learning.
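The incremental-learning strategy the abstract discusses amounts to staged training: the network first sees only an undersampled, simpler portion of the data, and only later the full, richly structured set. The following minimal Python sketch illustrates that idea on a toy task; the task, the network, and all names in it are illustrative assumptions, not the paper's own materials or code.

import numpy as np

rng = np.random.default_rng(0)

def make_task(n=512, dim=8):
    # Toy structured task: the label is the parity (XOR) of the first two bits,
    # i.e. the "deep" organizing feature; the remaining bits are distractors.
    X = rng.integers(0, 2, size=(n, dim)).astype(float)
    y = (X[:, 0] != X[:, 1]).astype(float).reshape(-1, 1)
    return X, y

def train(X, y, W1, W2, epochs=300, lr=1.0):
    # Plain batch gradient descent on a one-hidden-layer sigmoid network.
    for _ in range(epochs):
        h = 1.0 / (1.0 + np.exp(-X @ W1))      # hidden activations
        p = 1.0 / (1.0 + np.exp(-h @ W2))      # output
        dp = (p - y) * p * (1 - p)             # output delta (squared error)
        dh = (dp @ W2.T) * h * (1 - h)         # hidden delta
        W2 -= lr * h.T @ dp / len(X)
        W1 -= lr * X.T @ dh / len(X)
    return W1, W2

X, y = make_task()
W1 = rng.normal(scale=0.1, size=(X.shape[1], 6))
W2 = rng.normal(scale=0.1, size=(6, 1))

# Stage 1: "informed" undersampling -- train only on items with few active
# distractor bits, so the organizing regularity is easier to isolate.
simple = X[:, 2:].sum(axis=1) <= 2
W1, W2 = train(X[simple], y[simple], W1, W2)

# Stage 2: the full training set.
W1, W2 = train(X, y, W1, W2)

p = 1.0 / (1.0 + np.exp(-(1.0 / (1.0 + np.exp(-X @ W1))) @ W2))
print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())

The staged schedule stands in for the developmental factors the abstract mentions; whether such undersampling actually amounts to task decomposition is exactly the question the paper examines.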
Pages: 317-332
Page count: 16