Data abstractions for decision tree induction

Cited: 4
Authors
Kudoh, Y [1]
Haraguchi, M [1]
Okubo, Y [1]
Affiliation
[1] Hokkaido Univ, Div Elect & Informat Engn, Sapporo, Hokkaido 0608628, Japan
Keywords
data mining; machine learning; abstraction; classification
DOI
10.1016/S0304-3975(02)00178-0
CLC Number (Chinese Library Classification)
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
When the descriptions of data values in a database are too concrete or too detailed, the computational cost of discovering useful knowledge from the database generally increases, and the discovered knowledge tends to become complicated. A notion of data abstraction seems useful for resolving this kind of problem: after abstraction we obtain a smaller and more general database, from which we can quickly extract more abstract knowledge that is expected to be easier to understand. In general, however, since several abstractions are possible, we must carefully select the one according to which the original database is generalized; an inadequate selection would worsen the accuracy of the extracted knowledge. From this point of view, we propose in this paper a method of selecting an appropriate abstraction from the possible ones, assuming that our task is to construct a decision tree from a relational database. Suppose that, for each attribute in a relational database, we have a class of possible abstractions of the attribute values. As an appropriate abstraction for each attribute, we prefer one under which, even after the abstraction, the distribution of target classes needed to perform our classification task is preserved within an acceptable error range given by the user. Under the selected abstractions, the original database can be transformed into a small generalized database written in abstract values. We can therefore expect to construct, from the generalized database, a decision tree that is much smaller than the one constructed from the original database; moreover, such a size reduction can be justified under some theoretical assumptions. Since the appropriateness of an abstraction is precisely defined in terms of standard information theory, we call our abstraction framework Information Theoretical Abstraction.
We present experimental results obtained with ITA, a system implementing our abstraction method. These results verify that our method is very effective in reducing the size of the constructed decision tree without significantly worsening classification error. (C) 2002 Elsevier Science B.V. All rights reserved.
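The selection criterion described in the abstract, preferring abstractions that preserve the class distribution within a user-given error bound, can be sketched as follows. This is a minimal illustration, not the paper's ITA implementation: the function names are invented, and using a bound on the increase in conditional entropy H(class | attribute) as the acceptance test is an assumption consistent with the information-theoretic framing.

```python
# Sketch: accept an attribute-value abstraction only if it keeps the
# class distribution (measured via conditional entropy) within a tolerance.
# All names here are illustrative assumptions, not the paper's code.
from collections import Counter
from math import log2

def class_distribution(rows, attr, target):
    """Per-attribute-value class counts: {value: Counter({class: count})}."""
    dist = {}
    for row in rows:
        dist.setdefault(row[attr], Counter())[row[target]] += 1
    return dist

def conditional_entropy(dist):
    """H(target | attribute) computed from per-value class counts."""
    total = sum(sum(c.values()) for c in dist.values())
    h = 0.0
    for counts in dist.values():
        n = sum(counts.values())
        for c in counts.values():
            p = c / n
            h += (n / total) * (-p * log2(p))
    return h

def apply_abstraction(dist, mapping):
    """Merge concrete attribute values into abstract values via `mapping`."""
    merged = {}
    for value, counts in dist.items():
        group = mapping.get(value, value)
        merged.setdefault(group, Counter()).update(counts)
    return merged

def acceptable(dist, mapping, eps=0.05):
    """Accept if the abstraction raises H(target | attribute) by at most eps."""
    increase = conditional_entropy(apply_abstraction(dist, mapping)) \
        - conditional_entropy(dist)
    return increase <= eps
```

For example, merging two concrete values (say `'crimson'` and `'scarlet'` into `'red'`) that carry identical class distributions leaves the conditional entropy unchanged, so the abstraction is accepted even with `eps=0.0`; merging values with very different class distributions raises the entropy and is rejected. A decision tree built over the accepted abstract values then has fewer branches per attribute, which is the size reduction the paper aims for.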
Pages: 387 - 416
Page count: 30