Approximation Methods for Supervised Learning

Cited: 0
Authors
Ronald DeVore
Gerard Kerkyacharian
Dominique Picard
Vladimir Temlyakov
Affiliations
[1] Department of Mathematics, University of South Carolina, Columbia, SC 29208
[2] Universite Paris X-Nanterre, 200 Avenue de la Republique, F 92001 Nanterre cedex
[3] Laboratoire de Probabilites et Modeles Aleatoires CNRS-UMR 7599, Universite Paris VI et Universite Paris VII, 16 rue de Clisson, F-75013 Paris
Keywords
Learning theory; Regression estimation; Entropy; Nonlinear methods; Upper and lower estimates;
DOI
Not available
Abstract
Let ρ be an unknown Borel measure defined on the space Z := X × Y with X ⊂ ℝ^d and Y = [-M, M]. Given a set z of m samples z_i = (x_i, y_i) drawn according to ρ, the problem of estimating a regression function f_ρ from these samples is considered. The main focus is to understand what rate of approximation, measured either in expectation or in probability, can be obtained under a given prior f_ρ ∈ Θ, i.e., under the assumption that f_ρ lies in the set Θ, and what algorithms can attain optimal or semioptimal (up to logarithms) results. The optimal rate of decay in terms of m is established for many priors given either in terms of the smoothness of f_ρ or its rate of approximation measured in one of several ways. This optimal rate is determined by two types of results. Upper bounds are established using various tools from approximation theory such as entropy, widths, and linear and nonlinear approximation. Lower bounds are proved using Kullback-Leibler information together with Fano inequalities and a certain type of entropy. A distinction is drawn between algorithms that employ knowledge of the prior in the construction of the estimator and those that do not. Algorithms of the second type that are universally optimal for a certain range of priors are given.
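As a toy illustration of the setting in the abstract (not an algorithm from the paper — the estimators studied there rest on adaptive partitioning, thresholding, and entropy arguments), the following sketch draws m samples z_i = (x_i, y_i) from a measure ρ with Y bounded in [-M, M] and fits a piecewise-constant (histogram) estimator of f_ρ, truncated back to [-M, M]. The target function `f_rho`, the noise model, and every parameter value below are illustrative assumptions.

```python
import random

def piecewise_constant_estimator(samples, n_bins, M=1.0):
    """Histogram (piecewise-constant) regression estimator on X = [0, 1].

    samples: iterable of (x, y) pairs with x in [0, 1] and |y| <= M.
    Returns f_hat(x): the average y over x's bin, truncated to [-M, M];
    empty bins predict 0.
    """
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for x, y in samples:
        b = min(int(x * n_bins), n_bins - 1)   # clamp x = 1.0 into the last bin
        sums[b] += y
        counts[b] += 1
    means = [max(-M, min(M, sums[b] / counts[b])) if counts[b] else 0.0
             for b in range(n_bins)]
    return lambda x: means[min(int(x * n_bins), n_bins - 1)]

# Illustrative (hypothetical) measure rho: x uniform on [0, 1],
# y = f_rho(x) + bounded noise, so Y = [-M, M] holds with M = 1.
random.seed(0)
f_rho = lambda x: 0.5 * x                      # the "unknown" regression function
m = 2000
z = [(x, f_rho(x) + random.uniform(-0.1, 0.1))
     for x in (random.random() for _ in range(m))]
f_hat = piecewise_constant_estimator(z, n_bins=16, M=1.0)
```

Refining the partition as m grows (e.g., n_bins ≈ m^{1/(2+d)} for Lipschitz priors in d dimensions) trades bias against variance; balancing the two is exactly the kind of rate analysis, in expectation or in probability, that the paper carries out for much richer prior classes Θ.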
Pages: 3 - 58 (55 pages)
Related Papers (50 entries in total)
  • [1] Approximation methods for supervised learning
    DeVore, R
    Kerkyacharian, G
    Picard, D
    Temlyakov, V
    [J]. FOUNDATIONS OF COMPUTATIONAL MATHEMATICS, 2006, 6 (01) : 3 - 58
  • [2] Machine learning: supervised methods
    Danilo Bzdok
    Martin Krzywinski
    Naomi Altman
    [J]. Nature Methods, 2018, 15 : 5 - 6
  • [4] Quantum computing methods for supervised learning
    Kulkarni, Viraj
    Kulkarni, Milind
    Pant, Aniruddha
    [J]. QUANTUM MACHINE INTELLIGENCE, 2021, 3 (02)
  • [6] OPTIMAL APPROXIMATION OF A DECISION LAW IN A SUPERVISED LEARNING SYSTEM
    WEISS, H
    HAMMER, A
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS, 1974, SMC-4 (05) : 461 - 465
  • [7] Function Approximation Using Combined Unsupervised and Supervised Learning
    Andras, Peter
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2014, 25 (03) : 495 - 505
  • [8] Use of OPTICS and Supervised Learning Methods for Database Intrusion Detection
    Subudhi, Sharmila
    Panigrahi, Suvasini
    Behera, Tanmay Kumar
    [J]. 2017 2ND INTERNATIONAL CONFERENCE FOR CONVERGENCE IN TECHNOLOGY (I2CT), 2017, : 331 - 337
  • [9] Tradeoffs in Accuracy and Efficiency in Supervised Learning Methods
    Collingwood, Loren
    Wilkerson, John
    [J]. JOURNAL OF INFORMATION TECHNOLOGY & POLITICS, 2012, 9 (03) : 298 - 318
  • [10] Supervised Learning Methods in Sort Yield Modeling
    Hu, Helen
    [J]. 2009 IEEE/SEMI ADVANCED SEMICONDUCTOR MANUFACTURING CONFERENCE, 2009, : 133 - 136