An approximation theory approach to learning with l1 regularization

Cited by: 13
Authors
Wang, Hong-Yan [1 ]
Xiao, Quan-Wu [2 ]
Zhou, Ding-Xuan [3 ]
Affiliations
[1] Zhejiang Gongshang Univ, Sch Math & Stat, Hangzhou 310018, Zhejiang, Peoples R China
[2] Microsoft Search Technol Ctr Asia, Beijing 100080, Peoples R China
[3] City Univ Hong Kong, Dept Math, Kowloon, Hong Kong, Peoples R China
Keywords
Learning theory; Data dependent hypothesis spaces; Kernel-based regularization scheme; l1-regularizer; Multivariate approximation; MODEL SELECTION; SPACES; INTERPOLATION; REGRESSION; OPERATORS
DOI
10.1016/j.jat.2012.12.004
Chinese Library Classification: O1 [Mathematics]
Subject classification codes: 0701; 070101
Abstract
Regularization schemes with an l1-regularizer often produce sparse representations for objects in approximation theory, image processing, statistics and learning theory. In this paper, we study a kernel-based learning algorithm for regression generated by regularization schemes associated with the l1-regularizer. We show that the convergence rates of the learning algorithm can be independent of the dimension of the input space of the regression problem when the kernel is smooth enough. This confirms the effectiveness of the learning algorithm. Our error analysis is carried out by means of an approximation theory approach using a local polynomial reproduction formula and the norming set condition. (C) 2012 Elsevier Inc. All rights reserved.
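As a concrete illustration, a coefficient-based scheme of the kind described in the abstract searches the data-dependent hypothesis space of kernel expansions f(x) = sum_i c_i K(x, x_i) and penalizes the coefficient vector, minimizing (1/m) sum_j (f(x_j) - y_j)^2 + lambda ||c||_1 over c. The sketch below solves this convex problem by iterative soft-thresholding (ISTA); the Gaussian kernel, the solver choice, and all names and parameters (gaussian_kernel, l1_kernel_regression, lam, sigma) are illustrative assumptions, not details taken from the paper.

import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Gram matrix of the Gaussian kernel K(x, x') = exp(-||x - x'||^2 / (2 sigma^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_kernel_regression(X, y, lam=0.1, sigma=1.0, n_iter=1000):
    # ISTA for min_c (1/m) ||G c - y||^2 + lam ||c||_1, with G the kernel Gram matrix.
    m = len(y)
    G = gaussian_kernel(X, X, sigma)
    step = m / (2.0 * np.linalg.norm(G, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    c = np.zeros(m)
    for _ in range(n_iter):
        grad = (2.0 / m) * G.T @ (G @ c - y)
        c = soft_threshold(c - step * grad, step * lam)
    return c

# Toy usage: m = 40 noisy samples of a sine; the fitted expansion is typically sparse.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
c = l1_kernel_regression(X, y, lam=0.05)
print("nonzero coefficients:", int(np.sum(np.abs(c) > 1e-8)))
# Predict at new points via f(x) = sum_j c_j K(x, x_j).
X_new = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)
f_new = gaussian_kernel(X_new, X, 1.0) @ c

The l1 penalty drives many coefficients c_i to exactly zero, which is the sparsity the abstract refers to; replacing ||c||_1 with ||c||_2^2 (kernel ridge regression) would instead produce a dense coefficient vector.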
Pages: 240-258
Number of pages: 19
Related Papers (50 records in total)
  • [1] A mixed l1 regularization approach for sparse simultaneous approximation of parameterized PDEs
    Dexter, Nick
    Tran, Hoang
    Webster, Clayton
    ESAIM: MATHEMATICAL MODELLING AND NUMERICAL ANALYSIS, 2019, 53 (06) : 2025 - 2045
  • [2] Data Modeling: Visual Psychology Approach and L1/2 Regularization Theory
    Xu, Zongben
    PROCEEDINGS OF THE INTERNATIONAL CONGRESS OF MATHEMATICIANS, VOL IV: INVITED LECTURES, 2010, : 3151 - 3184
  • [3] L1/2 regularization
    Xu, ZongBen
    Zhang, Hai
    Wang, Yao
    Chang, XiangYu
    Liang, Yong
    SCIENCE CHINA INFORMATION SCIENCES, 2010, 53 (06) : 1159 - 1169
  • [5] Online Efficient Learning with Quantized KLMS and L1 Regularization
    Chen, Badong
    Zhao, Songlin
    Seth, Sohan
    Principe, Jose C.
    2012 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2012
  • [6] An l2/l1 regularization framework for diverse learning tasks
    Wang, Shengzheng
    Peng, Jing
    Liu, Wei
    SIGNAL PROCESSING, 2015, 109 : 206 - 211
  • [7] The Group-Lasso: l1,∞ Regularization versus l1,2 Regularization
    Vogt, Julia E.
    Roth, Volker
    PATTERN RECOGNITION, 2010, 6376 : 252 - 261
  • [8] L1/2 Regularization: A Thresholding Representation Theory and a Fast Solver
    Xu, Zongben
    Chang, Xiangyu
    Xu, Fengmin
    Zhang, Hai
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2012, 23 (07) : 1013 - 1027
  • [9] Relating lp regularization and reweighted l1 regularization
    Wang, Hao
    Zeng, Hao
    Wang, Jiashan
    Wu, Qiong
    OPTIMIZATION LETTERS, 2021, 15 (08) : 2639 - 2660
  • [10] Iterative regularization with a general penalty term - theory and application to L1 and TV regularization
    Bot, Radu Ioan
    Hein, Torsten
    INVERSE PROBLEMS, 2012, 28 (10)