L1-norm Laplacian support vector machine for data reduction in semi-supervised learning

Cited by: 4
Authors
Zheng, Xiaohan [1 ,2 ,3 ]
Zhang, Li [1 ,2 ,3 ]
Xu, Zhiqiang [1 ,2 ,3 ]
Affiliations
[1] Soochow Univ, Sch Comp Sci & Technol, Suzhou 215006, Jiangsu, Peoples R China
[2] Soochow Univ, Joint Int Res Lab Machine Learning & Neuromorph C, Suzhou 215006, Jiangsu, Peoples R China
[3] Soochow Univ, Prov Key Lab Comp Informat Proc Technol, Suzhou 215006, Jiangsu, Peoples R China
Source
NEURAL COMPUTING & APPLICATIONS | 2023, Vol. 35, Issue 17
Keywords
Semi-supervised learning; Support vector machine; l(1)-norm regularization; Laplacian regularization; SPARSE REPRESENTATION; CLASSIFICATION; SELECTION; SVM;
DOI
10.1007/s00521-020-05609-9
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Laplacian support vector machine (LapSVM) is a popular semi-supervised learning method. Unfortunately, the model it generates is not sparse. A sparse decision model is desirable because it enables data reduction and can improve performance. To obtain a sparse variant of LapSVM, we propose the l(1)-norm Laplacian support vector machine (l(1)-norm LapSVM), which replaces the l(2)-norm in LapSVM with the l(1)-norm. The l(1)-norm LapSVM has two sparsity-inducing components: the l(1)-norm regularization and the hinge loss function. We consider two settings for the l(1)-norm LapSVM, the linear and the nonlinear one. In the linear l(1)-norm LapSVM, the sparse decision model implies that only features with nonzero coefficients contribute to the decision. In other words, the linear l(1)-norm LapSVM performs feature selection, achieving the goal of data reduction. Likewise, the nonlinear (kernel) l(1)-norm LapSVM implements data reduction through sample selection. In addition, the optimization problem of the l(1)-norm LapSVM is a convex quadratic program, so it has a unique, globally optimal solution. Experimental results on semi-supervised classification tasks show that the l(1)-norm LapSVM achieves performance comparable to existing methods.
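To make the objective concrete, the following is a minimal numpy sketch of the linear l(1)-norm LapSVM objective as described in the abstract: an l(1)-norm regularizer on the weights, a hinge loss over the labeled samples only, and a graph Laplacian regularizer over all (labeled and unlabeled) samples. The function names, the RBF similarity graph, and the trade-off parameters `C` and `gamma` are illustrative assumptions, not the paper's exact formulation; solving the resulting convex quadratic program is left to an off-the-shelf QP solver.

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W from an RBF similarity graph
    (an illustrative choice; the paper may use a different graph)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)          # no self-loops
    return np.diag(W.sum(axis=1)) - W

def l1_lapsvm_objective(w, b, X_lab, y_lab, X_all, L, C=1.0, gamma=0.1):
    """||w||_1  +  C * sum of hinge losses on labeled data  +  gamma * f^T L f,
    where f holds decision values on all (labeled + unlabeled) samples."""
    hinge = np.maximum(0.0, 1.0 - y_lab * (X_lab @ w + b)).sum()
    f = X_all @ w + b
    return np.abs(w).sum() + C * hinge + gamma * (f @ (L @ f))

# Toy semi-supervised data: 4 labeled + 2 unlabeled points in 2-D.
X_lab = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 0.0], [3.0, 1.0]])
y_lab = np.array([-1.0, -1.0, 1.0, 1.0])
X_all = np.vstack([X_lab, [[0.2, 0.5], [2.8, 0.5]]])   # 2 unlabeled samples
L = graph_laplacian(X_all)

obj = l1_lapsvm_objective(np.array([1.0, 0.0]), -1.5, X_lab, y_lab, X_all, L)
print(obj >= 0.0)  # every term is nonnegative: L is positive semidefinite
```

With `w = [1, 0]` the l(1) penalty keeps only the first feature active, which is the feature-selection effect the abstract attributes to the linear model.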
Pages: 12343-12360 (18 pages)
Related Papers
(50 records)
  • [1] L1-norm Laplacian support vector machine for data reduction in semi-supervised learning
    Xiaohan Zheng
    Li Zhang
    Zhiqiang Xu
    [J]. Neural Computing and Applications, 2023, 35 : 12343 - 12360
  • [2] Adaptive Laplacian Support Vector Machine for Semi-supervised Learning
    Hu, Rongyao
    Zhang, Leyuan
    Wei, Jian
    [J]. COMPUTER JOURNAL, 2021, 64 (07): : 1005 - 1015
  • [3] l1-norm based safe semi-supervised learning
    Gan, Haitao
    Yang, Zhi
    Wang, Ji
    Li, Bing
    [J]. MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2021, 18 (06) : 7727 - 7742
  • [4] Unsupervised and Semi-Supervised Learning via l1-Norm Graph
    Nie, Feiping
    Wang, Hua
    Huang, Heng
    Ding, Chris
    [J]. 2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2011, : 2268 - 2273
  • [5] Laplacian p-norm proximal support vector machine for semi-supervised classification
    Tan, Junyan
    Zhen, Ling
    Deng, Naiyang
    Zhang, Zhiqiang
    [J]. NEUROCOMPUTING, 2014, 144 : 151 - 158
  • [6] Semi-supervised learning for lithology identification using Laplacian support vector machine
    Li, Zerui
    Kang, Yu
    Feng, Deyong
    Wang, Xing-Mou
    Lv, Wenjun
    Chang, Ji
    Zheng, Wei Xing
    [J]. JOURNAL OF PETROLEUM SCIENCE AND ENGINEERING, 2020, 195 (195)
  • [7] Laplacian twin support vector machine for semi-supervised classification
    Qi, Zhiquan
    Tian, Yingjie
    Shi, Yong
    [J]. NEURAL NETWORKS, 2012, 35 : 46 - 53
  • [8] Cost sensitive semi-supervised Laplacian support vector machine
    Wan, Jian-Wu
    Yang, Ming
    Chen, Yin-Juan
    [J]. Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2012, 40 (07): : 1410 - 1415
  • [9] l1-norm Nonparallel Support Vector Machine for PU Learning
    Bai, Fusheng
    Yuan, Yongjia
    [J]. 2018 IEEE 23RD INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP), 2018,
  • [10] Bayesian semi-supervised learning with support vector machine
    Chakraborty, Sounak
    [J]. STATISTICAL METHODOLOGY, 2011, 8 (01) : 68 - 82