VARIABILITY REGULARIZATION IN LARGE-MARGIN CLASSIFICATION

Cited by: 0
Authors:
Mansjur, Dwi Sianto [1 ]
Wada, Ted S. [1 ]
Juang, Biing-Hwang [1 ]
Affiliation:
[1] Georgia Inst Technol, Ctr Signal & Image Proc, Atlanta, GA 30332 USA
Keywords:
empirical risk minimization; structural risk minimization; model selection; model regularization; large-margin classification
DOI:
Not available
CLC classification:
O42 [Acoustics]
Discipline codes:
070206; 082403
Abstract:
This paper introduces a novel regularization strategy to address generalization issues in large-margin classifiers from the Empirical Risk Minimization (ERM) perspective. First, the ERM principle is argued to be more flexible than the Structural Risk Minimization (SRM) principle, based on a review of the differences between the two as fundamental principles for large-margin classifier design. Second, after studying large-margin classifier design under the SRM principle, a realization of the ERM principle is proposed in the form of a bias-variance criterion rather than the conventional expected-error criterion. The bias-variance criterion is shown to provide the regularization capability needed by a large-margin classifier designed according to the ERM principle. Finally, a mathematical programming procedure is used to efficiently find the best regularization policy. The new ERM-based regularization strategy is evaluated on a set of machine learning experiments, and the results clearly demonstrate its strength in achieving the minimum-error-rate performance measure.
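The paper's exact bias-variance criterion is not reproduced in this record, but the idea of trading off average loss against loss variability can be sketched with a toy objective: the mean hinge loss over the training set (the "bias"-like term) plus a penalty on the variance of the per-sample hinge losses. The function and variable names below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def margins(w, b, X, y):
    """Signed margins y_i * (w . x_i + b) for each sample."""
    return y * (X @ w + b)

def bias_variance_objective(w, b, X, y, lam=0.1):
    """Mean hinge loss plus a variance penalty on per-sample
    hinge losses -- a stand-in for a bias-variance criterion,
    not the paper's actual objective."""
    losses = np.maximum(0.0, 1.0 - margins(w, b, X, y))
    return losses.mean() + lam * losses.var()

# Toy linearly separable data: the label is the sign of feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)

# Compare a separator aligned with the true boundary against a
# misaligned one; the aligned one should score lower.
good = bias_variance_objective(np.array([5.0, 0.0]), 0.0, X, y)
bad = bias_variance_objective(np.array([0.0, 5.0]), 0.0, X, y)
print(good < bad)
```

The variance penalty discourages solutions whose margins are good on average but erratic across samples, which is one way such a criterion can regularize a large-margin classifier.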
Pages: 1956-1959 (4 pages)
Related papers (50 total):
  • [31] Efficient Private Algorithms for Learning Large-Margin Halfspaces
    Nguyen, Huy Le
    Ullman, Jonathan
    Zakynthinou, Lydia
    [J]. ALGORITHMIC LEARNING THEORY, VOL 117, 2020, 117 : 704 - 724
  • [32] Large-margin Distribution Machine-based regression
    Rastogi, Reshma
    Anand, Pritam
    Chandra, Suresh
    [J]. NEURAL COMPUTING AND APPLICATIONS, 2020, 32 (08) : 3633 - 3648
  • [34] A Geometric Perspective of Large-Margin Training of Gaussian Models
    Xiao, Lin
    Deng, Li
    [J]. IEEE SIGNAL PROCESSING MAGAZINE, 2010, 27 (06) : 118 - 123
  • [35] A flexible probabilistic framework for large-margin mixture of experts
    Sharma, Archit
    Saxena, Siddhartha
    Rai, Piyush
    [J]. MACHINE LEARNING, 2019, 108 (8-9) : 1369 - 1393
  • [36] Large-margin multi-view Gaussian process
    Xu, Chang
    Tao, Dacheng
    Li, Yangxi
    Xu, Chao
    [J]. MULTIMEDIA SYSTEMS, 2015, 21 : 147 - 157
  • [37] Large-Margin Learning of Compact Binary Image Encodings
    Paisitkriangkrai, Sakrapee
    Shen, Chunhua
    van den Hengel, Anton
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2014, 23 (09) : 4041 - 4054
  • [38] Large-Margin Feature Adaptation for Automatic Speech Recognition
    Cheng, Chih-Chieh
    Sha, Fei
    Saul, Lawrence K.
    [J]. 2009 IEEE WORKSHOP ON AUTOMATIC SPEECH RECOGNITION & UNDERSTANDING (ASRU 2009), 2009, : 87 - +
  • [39] Scalable Large-Margin Mahalanobis Distance Metric Learning
    Shen, Chunhua
    Kim, Junae
    Wang, Lei
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS, 2010, 21 (09) : 1524 - 1530
  • [40] Large-Margin Multi-View Information Bottleneck
    Xu, Chang
    Tao, Dacheng
    Xu, Chao
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2014, 36 (08) : 1559 - 1572