A NOTE ON LEAST-SQUARES LEARNING PROCEDURES AND CLASSIFICATION BY NEURAL NETWORK MODELS

Cited by: 15
Author
SHOEMAKER, PA
Affiliation
[1] Solid-State Electronics Division, Naval Ocean Systems Center, San Diego
Source
IEEE Transactions on Neural Networks
DOI
10.1109/72.80304
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In recent years there has been considerable interest in the capabilities of neural network models applied to tasks such as classification [1]-[4], which typically require some a posteriori judgment of likelihood (for example, of class membership) based upon probabilistic data. This has led to analyses of network learning and function in a statistical context [5]-[8]. We consider neural network models as mathematical classifiers whose inputs comprise random variables generated according to arbitrary stationary class distributions, and address the implications of learning based upon minimization of sum-square classification error over a training set of such observations for which class assignments are absolutely determined. Expectations for network outputs in such a case are weighted least-squares approximations to the a posteriori probabilities of the classes, which justifies interpreting network outputs as indicating degree of confidence in class membership. We demonstrate this with a straightforward proof, in which class probability densities are regarded as primitives and which, for simplicity, does not rely upon formal probability theory or statistics. In addition, we cite more detailed results giving conditions for consistency of the estimators (with respect to the weighted least-squares approximations) [9], and discuss issues relating to the suitability of neural network models and back-propagation training [10] for approximating conditional probabilities in classification tasks.
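As a minimal sketch of the least-squares result summarized above (our notation for the two-class case; the paper's own proof treats class densities as primitives): take a target t ∈ {0, 1} indicating membership in class C_1 and a network output f(x). The expected squared error decomposes as

\begin{align*}
E\big[(f(x) - t)^2\big]
  &= E_x\big[(f(x) - E[t \mid x])^2\big] + E_x\big[\operatorname{Var}(t \mid x)\big] \\
  &= E_x\big[(f(x) - P(C_1 \mid x))^2\big] + E_x\big[P(C_1 \mid x)\,(1 - P(C_1 \mid x))\big],
\end{align*}

where the second term does not depend on f. Over the network's function class \mathcal{F}, the minimizer is therefore the least-squares approximation to the a posteriori probability, weighted by the input density p(x):

\[
  f^{*} = \arg\min_{f \in \mathcal{F}} \int \big(f(x) - P(C_1 \mid x)\big)^2 \, p(x) \, dx,
\]

and with f unconstrained the minimizer is f^{*}(x) = P(C_1 \mid x) exactly, which underlies the interpretation of network outputs as degrees of confidence in class membership.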
Pages: 158-160
Number of pages: 3