CLIP - CONCEPT-LEARNING FROM INFERENCE PATTERNS

Cited by: 47
Authors
YOSHIDA, K
MOTODA, H
Institution
[1] Advanced Research Laboratory, Hitachi, Ltd., Hatoyama, Saitama
DOI
10.1016/0004-3702(94)00066-A
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A new concept-learning method called CLIP (Concept Learning from Inference Patterns) is proposed that learns new concepts from inference patterns rather than from the positive/negative examples used by most conventional concept-learning methods. The learned concepts enable efficient inference at a more abstract level. Inference patterns are represented as a colored digraph; this representation is sufficiently expressive and permits quantitative analysis of how frequently each inference pattern occurs. The learning process consists of two steps: (1) convert the original inference patterns into a colored digraph, and (2) extract the set of typical patterns that appear frequently in the digraph. The basic idea is that the smaller the digraph becomes, the less data must be handled and, accordingly, the more efficient the inference process that uses those data becomes. The graph is reduced by replacing each frequently appearing graph pattern with a single node, and each such reduced node represents a new concept. Experimentally, CLIP automatically generates multilevel representations from a given physical/single-level representation of a carry-chain circuit; these representations include abstract descriptions of the circuit, such as mathematical and logical descriptions.
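The following is a minimal sketch of the graph-reduction idea the abstract describes, not the authors' implementation: it assumes a toy encoding in which node and edge colors are plain strings, a "pattern" is a single colored edge (source color, edge color, destination color), and every non-overlapping occurrence of the most frequent pattern is contracted into one new concept node. The class and all identifiers (ColoredDigraph, contract_most_frequent, the "carry" concept name) are illustrative assumptions; CLIP's actual colored-digraph encoding and frequency analysis are not reproduced here.

# A minimal sketch of the graph-reduction step, NOT the paper's algorithm:
# "colors" are strings, a "pattern" is one colored edge, and each occurrence
# of the most frequent pattern is contracted into a single concept node.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ColoredDigraph:
    node_color: dict = field(default_factory=dict)  # node id -> color
    edges: list = field(default_factory=list)       # (src, dst, edge color)

    def pattern_counts(self):
        # Count how often each colored-edge pattern appears in the graph.
        return Counter((self.node_color[s], c, self.node_color[d])
                       for s, d, c in self.edges)

    def contract_most_frequent(self, concept):
        # Replace every non-overlapping occurrence of the most frequent
        # pattern with a single node colored by the new concept name.
        pattern = self.pattern_counts().most_common(1)[0][0]
        merged, reduced = {}, ColoredDigraph(dict(self.node_color), [])
        for s, d, c in self.edges:
            if ((self.node_color[s], c, self.node_color[d]) == pattern
                    and s not in merged and d not in merged):
                new_id = f"{concept}({s},{d})"
                merged[s] = merged[d] = new_id
                reduced.node_color[new_id] = concept
        for n in merged:
            del reduced.node_color[n]
        for s, d, c in self.edges:
            ns, nd = merged.get(s, s), merged.get(d, d)
            if ns != nd:  # drop edges now internal to a contracted pair
                reduced.edges.append((ns, nd, c))
        return reduced

# Toy carry-chain-like fragment: the (AND, wire, OR) pattern occurs twice
# (and is counted first), so both occurrences contract into "carry" nodes.
g = ColoredDigraph({"a1": "AND", "a2": "AND", "o1": "OR",
                    "o2": "OR", "x1": "XOR"},
                   [("a1", "o1", "wire"), ("a2", "o2", "wire"),
                    ("o1", "x1", "wire"), ("o2", "x1", "wire")])
print(g.contract_most_frequent("carry").edges)

Iterating the contraction yields successively smaller graphs, which loosely mirrors how the multilevel representations reported for the carry-chain circuit could be built level by level.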
Pages: 63-92
Page count: 30