Relaxed least square regression with l2,1-norm for pattern classification

Citations: 4
Authors
Jin, Junwei [1 ,2 ,3 ]
Qin, Zhenhao [4 ]
Yu, Dengxiu [4 ]
Yang, Tiejun [3 ]
Chen, C. L. Philip [5 ]
Li, Yanting [6 ]
Affiliations
[1] Henan Univ Technol, Key Lab Grain Informat Proc & Control, Minist Educ, Zhengzhou 450001, Peoples R China
[2] Henan Univ Technol, Henan Key Lab Grain Photoelect Detect & Control, Zhengzhou 450001, Peoples R China
[3] Henan Univ Technol, Sch Artificial Intelligence & Big Data, Zhengzhou 450001, Peoples R China
[4] Northwestern Polytech Univ, Unmanned Syst Res Inst, Xian 710072, Peoples R China
[5] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510641, Peoples R China
[6] Zhengzhou Univ Light Ind, Coll Comp & Commun Engn, Zhengzhou 450001, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Least square regression; relaxed regression targets; l2,1-norm; optimization; FACE RECOGNITION; NETWORK;
DOI
10.1142/S021969132350025X
Chinese Library Classification
TP31 [Computer software];
Discipline Codes
081202; 0835;
Abstract
This work addresses two issues that often arise in least square regression (LSR) models for classification tasks: (1) learning a compact projection matrix for feature selection and (2) adopting relaxed regression targets. To this end, we first propose a sparse regularized LSR framework for feature selection by introducing the l2,1 regularizer. Second, we utilize two different strategies to relax the strict regression targets within the sparse framework. One is to exploit the ε-dragging technique. The other is to learn the labels directly from the inputs while simultaneously constraining the distance between true and false classes. Hence, more feasible regression schemes are constructed, and the models become more flexible. Further, efficient iterative methods are derived to optimize the proposed models. Extensive experiments on image databases demonstrate that the proposed models have outstanding recognition capability compared with many state-of-the-art classifiers.
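The abstract's sparse framework minimizes a least-squares fit plus an l2,1 penalty on the projection matrix, which drives entire rows of the matrix to zero and thereby selects features. A standard way to optimize such an objective is iteratively reweighted least squares (IRLS); the sketch below illustrates that generic technique for min_W ||XW − Y||²_F + λ||W||_{2,1}. It is not the paper's exact algorithm, and the function name and parameters are illustrative.

```python
import numpy as np

def l21_lsr(X, Y, lam=1.0, n_iter=50, eps=1e-8):
    """IRLS sketch for min_W ||X W - Y||_F^2 + lam * ||W||_{2,1}.

    The l2,1 norm sums the l2 norms of the rows of W, so the penalty
    pushes whole rows to zero, i.e. discards whole input features.
    """
    d = X.shape[1]
    XtX = X.T @ X
    XtY = X.T @ Y
    # Ridge initialization so the first reweighting is well defined
    W = np.linalg.solve(XtX + lam * np.eye(d), XtY)
    for _ in range(n_iter):
        # Reweighting matrix D_ii = 1 / (2 ||w_i||_2);
        # eps guards rows whose norm has collapsed to zero
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
        # Closed-form update of the weighted least-squares subproblem
        W = np.linalg.solve(XtX + lam * D, XtY)
    return W

# Toy usage: only the first 3 of 10 features carry signal
np.random.seed(0)
X = np.random.randn(100, 10)
W_true = np.zeros((10, 3))
W_true[:3] = np.random.randn(3, 3)
Y = X @ W_true + 0.01 * np.random.randn(100, 3)
W = l21_lsr(X, Y, lam=5.0)
row_norms = np.sqrt((W ** 2).sum(axis=1))
```

After fitting, the rows of W corresponding to the uninformative features shrink toward zero, which is the feature-selection effect the abstract attributes to the l2,1 regularizer.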
Pages: 29