On the value of partial information for learning from examples

Cited by: 9
Authors
Ratsaby, J [1 ]
Maiorov, V
Affiliations
[1] Technion Israel Inst Technol, Dept Elect Engn, IL-32000 Haifa, Israel
[2] Technion Israel Inst Technol, Dept Math, IL-32000 Haifa, Israel
Keywords
DOI
10.1006/jcom.1997.0459
CLC Classification
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
The PAC model of learning and its extension to real-valued function classes provide a well-accepted theoretical framework for representing the problem of learning a target function $g(x)$ from a random sample $\{(x_i, g(x_i))\}_{i=1}^{m}$. Based on the uniform strong law of large numbers, the PAC model establishes the sample complexity, i.e., the sample size $m$ sufficient for estimating the target function accurately and with high confidence. Often, in addition to a random sample, some form of prior knowledge about the target is available. It is intuitive that increasing the amount of information should have the same effect on the error as increasing the sample size. But quantitatively, how does the rate of decrease of the error with increasing information compare to the rate with increasing sample size? To answer this we consider a new approach based on combining the information-based complexity of Traub et al. with Vapnik-Chervonenkis (VC) theory. In contrast to VC theory, where function classes of finite pseudo-dimension are used only for statistical estimation, we let such classes play a dual role of functional estimation as well as approximation. This is captured in a newly introduced quantity, $\rho_d(F)$, which represents a nonlinear width of a function class $F$. We then extend the notion of the $n$th minimal radius of information and define a quantity $I_{n,d}(F)$ which measures the minimal approximation error of the worst-case target $g \in F$ by the family of function classes of pseudo-dimension $d$, given partial information on $g$ consisting of the values taken by $n$ linear operators. The error rates are calculated, which leads to a quantitative notion of the value of partial information for the paradigm of learning from examples. (C) 1997 Academic Press.
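For orientation, the two quantities named in the abstract can be sketched as follows. This is a schematic rendering inferred from the abstract's prose alone, not copied from the paper: the class $\mathcal{H}_d$ (any function class of pseudo-dimension at most $d$), the information operator $N_n$, and the norm $\lVert\cdot\rVert$ are placeholder notation standing in for the paper's precise definitions.

% Schematic only: inferred from the abstract, not the paper's exact statement.
% H_d ranges over function classes of pseudo-dimension at most d;
% N_n g = (L_1 g, ..., L_n g) collects the values of n linear operators on g.

% Nonlinear width: the best worst-case approximation of F achievable
% by some class H_d of pseudo-dimension d.
\[
  \rho_d(F) \;=\; \inf_{\mathcal{H}_d} \, \sup_{g \in F} \, \inf_{h \in \mathcal{H}_d} \lVert g - h \rVert .
\]

% Minimal radius of information under the pseudo-dimension constraint:
% the information N_n g localizes the target to the subset of F consistent
% with the observed operator values, and the residual error is the
% nonlinear width of that subset.
\[
  I_{n,d}(F) \;=\; \inf_{N_n} \, \sup_{g \in F} \,
    \rho_d\bigl( \{\, f \in F : N_n f = N_n g \,\} \bigr) .
\]

Under this reading, comparing how $I_{n,d}(F)$ decays in $n$ against how the statistical estimation error decays in the sample size $m$ is what yields the paper's quantitative notion of the value of partial information.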
Pages: 509-544
Page count: 36
Related Papers
50 records in total
  • [41] Learning translation templates from examples
    Guvenir, HA
    Cicekli, I
    INFORMATION SYSTEMS, 1998, 23(06): 353-363
  • [42] Learning Clinical Reasoning from Examples
    Kassirer, JP
    Kopelman, RI
    ACTA CLINICA BELGICA, 1991, 46(05): 338-344
  • [43] Learning from 'good examples of practice'
    Kelchtermans, Geert
    TEACHERS AND TEACHING, 2015, 21(04): 361-365
  • [44] Simulating Learning from Language and Examples
    Weitekamp, Daniel
    Rachatasumrit, Napol
    Wei, Rachael
    Harpstead, Erik
    Koedinger, Kenneth
    ARTIFICIAL INTELLIGENCE IN EDUCATION. POSTERS AND LATE BREAKING RESULTS, WORKSHOPS AND TUTORIALS, INDUSTRY AND INNOVATION TRACKS, PRACTITIONERS, DOCTORAL CONSORTIUM AND BLUE SKY, AIED 2023, 2023, 1831: 580-586
  • [45] Learning Clinical Reasoning from Examples
    Kassirer, JP
    Kopelman, RI
    HOSPITAL PRACTICE, 1989, 24(03): 27-&
  • [46] Implementation of fuzzy learning from examples
    Hong, TP
    Lee, CY
    INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2000, 6(04): 261-269
  • [47] Learning DFA from Simple Examples
    Parekh, Rajesh
    Honavar, Vasant
    MACHINE LEARNING, 2001, 44: 9-35
  • [48] Learning Automata from Ordered Examples
    Porat, S
    Feldman, JA
    MACHINE LEARNING, 1991, 7(2-3): 109-138
  • [49] A Formal Approach to Learning from Examples
    Delgrande, JP
    INTERNATIONAL JOURNAL OF MAN-MACHINE STUDIES, 1987, 26(02): 123-141
  • [50] Learning from ambiguously labeled examples
    Huellermeier, Eyke
    Beringer, Juergen
    INTELLIGENT DATA ANALYSIS, 2006, 10(05): 419-439