A Nearest Hyperrectangle Learning Method

Cited: 76
Author
SALZBERG, S
Affiliation
Keywords
EXEMPLAR; INDUCTION; GENERALIZATION; PREDICTION; INCREMENTAL LEARNING; EXCEPTIONS;
DOI
10.1007/BF00114779
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a theory of learning called nested generalized exemplar (NGE) theory, in which learning is accomplished by storing objects in Euclidean n-space, E^n, as hyperrectangles. The hyperrectangles may be nested inside one another to arbitrary depth. In contrast to generalization processes that replace symbolic formulae by more general formulae, the NGE algorithm modifies hyperrectangles by growing and reshaping them in a well-defined fashion. The axes of these hyperrectangles are defined by the variables measured for each example. Each variable can have any range on the real line; thus the theory is not restricted to symbolic or binary values. This paper describes some advantages and disadvantages of NGE theory, positions it as a form of exemplar-based learning, and compares it to other inductive learning theories. An implementation has been tested in three different domains, for which results are presented below: prediction of breast cancer, classification of iris flowers, and prediction of survival times for heart attack patients. The results in these domains support the claim that NGE theory can be used to create compact representations with excellent predictive accuracy.
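Since the abstract describes classification by distance to stored hyperrectangles and incremental generalization of those hyperrectangles, a small sketch may help make the idea concrete. The Python code below is an illustrative approximation only, not Salzberg's EACH implementation: the class name NearestHyperrectangle, the simplified update rule, and the omission of feature and exemplar weights are all assumptions made for brevity.

    import numpy as np

    class NearestHyperrectangle:
        # Simplified NGE-style classifier (illustrative sketch, not EACH):
        # exemplars are axis-aligned hyperrectangles in E^n, a query takes
        # the label of the nearest one, and a correctly matched rectangle
        # is grown to cover the new training point.
        def __init__(self):
            self.lower, self.upper, self.labels = [], [], []

        def _distance(self, x, lo, hi):
            # Zero along dimensions where x lies inside [lo, hi]; otherwise
            # the gap to the nearest face, combined Euclidean-style.
            gap = np.maximum(lo - x, 0.0) + np.maximum(x - hi, 0.0)
            return float(np.sqrt(np.sum(gap ** 2)))

        def predict(self, x):
            x = np.asarray(x, dtype=float)
            d = [self._distance(x, lo, hi)
                 for lo, hi in zip(self.lower, self.upper)]
            return self.labels[int(np.argmin(d))]

        def learn(self, x, y):
            # If the nearest exemplar already predicts y, generalize it to
            # enclose x; otherwise store x as a new point-sized exemplar,
            # which can later act as an exception nested inside a larger one.
            x = np.asarray(x, dtype=float)
            if self.labels and self.predict(x) == y:
                d = [self._distance(x, lo, hi)
                     for lo, hi in zip(self.lower, self.upper)]
                i = int(np.argmin(d))
                self.lower[i] = np.minimum(self.lower[i], x)
                self.upper[i] = np.maximum(self.upper[i], x)
            else:
                self.lower.append(x.copy())
                self.upper.append(x.copy())
                self.labels.append(y)

    # Example: three training points, then a query inside the grown rectangle.
    clf = NearestHyperrectangle()
    for xi, yi in [([5.1, 3.5], "setosa"),
                   ([6.2, 2.9], "versicolor"),
                   ([5.0, 3.4], "setosa")]:
        clf.learn(xi, yi)
    print(clf.predict([5.05, 3.45]))  # -> "setosa"

The hypothetical iris-like feature values above are chosen only to show a rectangle being grown and then matching an interior query point.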
Pages: 251 - 276
Page count: 26
Related Papers
50 records in total
  • [31] Modification of Nested Hyperrectangle Exemplar as a Proposition of Information Fusion Method
    Wozniak, Michal
    INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING, PROCEEDINGS, 2009, 5788 : 687 - 694
  • [32] AUTOMATION FOR ANALYTICAL INSTRUMENT CONTROLLED BY MICROCOMPUTER .2. LEARNING-METHOD CONTROL APPLIED TO PH AUTOMATIC TITRATION
    NISHIKAWA, T
    OGASAWARA, I
ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY, 1979, (APR): 16 - 16
  • [33] A k-nearest neighbour method for managing the evolution of a learning base
    Henry, JL
    ICCIMA 2001: FOURTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND MULTIMEDIA APPLICATIONS, PROCEEDINGS, 2001, : 357 - +
  • [34] A method of learning weighted similarity function to improve the performance of nearest neighbor
    Jahromi, Mansoor Zolghadri
    Parvinnia, Elham
    John, Robert
    INFORMATION SCIENCES, 2009, 179 (17) : 2964 - 2973
  • [35] A COST SENSITIVE LEARNING METHOD TO TUNE THE NEAREST NEIGHBOUR FOR INTRUSION DETECTION
    Moosavi, M. R.
    Jahromi, M. Zolghadri
    Ghodratnama, S.
    Taheri, M.
    Sadreddini, M. H.
    IRANIAN JOURNAL OF SCIENCE AND TECHNOLOGY-TRANSACTIONS OF ELECTRICAL ENGINEERING, 2012, 36 (E2) : 109 - 129
  • [36] DYNAMIC SYSTEM-IDENTIFICATION BY NEURAL-NETWORK - A NEW, FAST LEARNING-METHOD BASED ON ERROR BACK-PROPAGATION
    PAL, C
    HAGIWARA, I
    KAYABA, N
    MORISHITA, S
    JOURNAL OF INTELLIGENT MATERIAL SYSTEMS AND STRUCTURES, 1994, 5 (01) : 127 - 135
  • [37] Learning efficient and interpretable prototypes from data for nearest neighbor classification method
    Ezghari, Soufiane
    Benouini, Rachid
    Zahi, Azeddine
    Zenkouar, Khalid
    2017 INTELLIGENT SYSTEMS AND COMPUTER VISION (ISCV), 2017,
  • [38] Learning with Nearest Neighbour Classifiers
    Sergio Bermejo
    Joan Cabestany
    Neural Processing Letters, 2001, 13 : 159 - 181
  • [39] Learning with nearest neighbour classifiers
    Bermejo, S
    Cabestany, J
    NEURAL PROCESSING LETTERS, 2001, 13 (02) : 159 - 181
  • [40] A transfer-learning fault diagnosis method considering nearest neighbor feature constraints
    Zeng, Mengjie
    Li, Shunming
    Li, Ranran
    Li, Jiacheng
    Xu, Kun
    Li, Xianglian
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2023, 34 (01)