Knowledge representation and possible worlds for neural networks

Cited by: 0
|
Authors
Healy, Michael J.
Caudell, Thomas P. [1 ,2 ]
Institutions
[1] Univ New Mexico, Dept Elect & Comp Engn, Albuquerque, NM 87131 USA
[2] Univ New Mexico, Dept Comp Sci, Albuquerque, NM 87131 USA
Keywords
DOI
(not available)
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The semantics of neural networks can be analyzed mathematically as a distributed system of knowledge and as systems of possible worlds expressed in the knowledge. Learning in a neural network can be analyzed as an attempt to acquire a representation of knowledge. We express the knowledge system, systems of possible worlds, and neural architectures at different stages of learning as categories. Diagrammatic constructs express learning in terms of pre-existing knowledge representations. Functors express structure-preserving associations between the categories. This analysis provides a mathematical vehicle for understanding connectionist systems and yields design principles for advancing the state of the art.
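The abstract's central device is the functor: a structure-preserving association between categories. A minimal sketch of that idea, using hypothetical toy data (the object and arrow names below are illustrative, not taken from the paper): a finite category is given as objects plus named arrows with sources and targets, and a candidate functor is checked for preserving identities and arrow endpoints.

```python
# Toy "concept" category C: objects a, b with an arrow f: a -> b
# (arrows are stored as (name, source, target) triples).
C_arrows = {("id_a", "a", "a"), ("id_b", "b", "b"), ("f", "a", "b")}

# Toy category D standing in for a network's learned representation.
D_arrows = {("id_x", "x", "x"), ("id_y", "y", "y"), ("g", "x", "y")}

# A candidate functor F: an object map and an arrow map.
F_obj = {"a": "x", "b": "y"}
F_arr = {"id_a": "id_x", "id_b": "id_y", "f": "g"}

def is_functor(obj_map, arr_map, src_arrows, dst_arrows):
    """Check that F sends each arrow s -> t to an arrow
    F(s) -> F(t) and sends identities to identities."""
    dst = {name: (s, t) for name, s, t in dst_arrows}
    for name, s, t in src_arrows:
        image = dst.get(arr_map.get(name))
        if image is None:
            return False
        # Endpoints must be preserved: F(f): F(s) -> F(t).
        if image != (obj_map[s], obj_map[t]):
            return False
        # Identity arrows must map to identity arrows.
        if name == f"id_{s}" and arr_map[name] != f"id_{obj_map[s]}":
            return False
    return True

print(is_functor(F_obj, F_arr, C_arrows, D_arrows))  # True
```

A full functor must also preserve composition; this sketch omits that check for brevity, since the toy categories have no non-trivial composites.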
Pages: 3047 / +
Page count: 2