On the value of partial information for learning from examples

Cited by: 9
Authors:
Ratsaby, J [1 ]
Maiorov, V
Institutions:
[1] Technion Israel Inst Technol, Dept Elect Engn, IL-32000 Haifa, Israel
[2] Technion Israel Inst Technol, Dept Math, IL-32000 Haifa, Israel
Keywords:
DOI:
10.1006/jcom.1997.0459
Chinese Library Classification: TP301 [Theory and Methods]
Discipline code: 081202
Abstract:
The PAC model of learning and its extension to real-valued function classes provide a well-accepted theoretical framework for representing the problem of learning a target function g(x) from a random sample {(x_i, g(x_i))}_{i=1}^m. Based on the uniform strong law of large numbers, the PAC model establishes the sample complexity, i.e., the sample size m that suffices to estimate the target function accurately and with high confidence. Often, in addition to a random sample, some form of prior knowledge about the target is available. Intuitively, increasing the amount of information should have the same effect on the error as increasing the sample size. But, quantitatively, how does the rate at which the error decreases with increasing information compare to the rate at which it decreases with increasing sample size? To answer this we consider a new approach that combines the information-based complexity of Traub et al. with Vapnik-Chervonenkis (VC) theory. In contrast to VC theory, where function classes of finite pseudo-dimension are used only for statistical estimation, we let such classes play the dual role of functional estimation as well as approximation. This is captured in a newly introduced quantity, ρ_d(F), which represents a nonlinear width of a function class F. We then extend the notion of the nth minimal radius of information and define a quantity I_{n,d}(F) which measures the minimal approximation error of the worst-case target g ∈ F by the family of function classes of pseudo-dimension d, given partial information on g consisting of the values taken by n linear operators. The error rates are calculated, leading to a quantitative notion of the value of partial information for the paradigm of learning from examples. (C) 1997 Academic Press.
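The two quantities named in the abstract, ρ_d(F) and I_{n,d}(F), are described only in words in this record. The LaTeX sketch below records one plausible reading of their definitions, consistent with the description above; the metric dist, the notation Pdim for pseudo-dimension, and the exact class of admissible linear operators L_1, ..., L_n are assumptions not stated here, and the paper's precise formulation may differ.

\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Nonlinear width of a class F (one plausible reading): the best
% worst-case approximation of targets in F achievable by a function
% class H of pseudo-dimension at most d.
\[
  \rho_d(F) \;=\; \inf_{H:\ \mathrm{Pdim}(H)\le d}\ \sup_{g\in F}\ \mathrm{dist}(g,\,H).
\]

% I_{n,d}(F) (one plausible reading): the worst-case error, over targets
% g in F, of the best class of pseudo-dimension d once the values
% L_1 g, ..., L_n g of n linear operators are known, minimized over the
% choice of operators.
\[
  I_{n,d}(F) \;=\; \inf_{L_1,\dots,L_n}\ \sup_{g\in F}\
    \rho_d\bigl(\{\,f\in F :\ L_i f = L_i g,\ 1\le i\le n\,\}\bigr).
\]

\end{document}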
Pages: 509-544
Page count: 36