Mixed-norm partial least squares

Cited by: 3
Authors
You, Xinge [1 ]
Mou, Yi [1 ,2 ]
Yu, Shujian [3 ]
Jiang, Xiubao [1 ]
Xu, Duanquan [1 ]
Zhou, Long [2 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[2] Wuhan Polytech Univ, Sch Elect & Elect Engn, Wuhan 430074, Peoples R China
[3] Univ Florida, Dept Elect & Comp Engn, Gainesville, FL 32611 USA
Funding
National Natural Science Foundation of China;
Keywords
Modeling; Prediction; Regression analysis; l(2,1) norm; Variable selection; VARIABLE SELECTION; REGRESSION; CALIBRATION; PLS;
DOI
10.1016/j.chemolab.2016.01.004
Chinese Library Classification
TP [automation technology; computer technology];
Discipline code
0812;
Abstract
The partial least squares (PLS) method is designed for prediction problems in which the number of predictors is larger than the number of training samples. Because PLS is based on latent components that are linear combinations of the original predictors, it automatically employs all predictors regardless of their relevance. This strategy can degrade performance and make the obtained coefficients hard to interpret. Several sparse PLS (SPLS) methods have therefore been proposed to conduct prediction and variable selection simultaneously by sparsely combining the original predictors. However, if information bleeds across different components, common variables shared by these components should be selected with successive loadings. To address this issue, we propose a new SPLS model, mixed-norm PLS (MNPLS), which selects common variables during each deflation. More specifically, we introduce the l(2,1) norm on the direction matrix and then develop the corresponding solution to MNPLS. We also conduct a convergence analysis to mathematically support the proposed MNPLS. Experiments on four real datasets verify our theoretical analysis and demonstrate that MNPLS generally outperforms standard PLS and other existing methods in both variable selection and prediction. (C) 2016 Elsevier B.V. All rights reserved.
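The abstract's key ingredient is the l(2,1) norm on the direction (loading) matrix, which penalizes the sum of row-wise Euclidean norms and so zeroes out entire rows, i.e. whole variables across all latent components at once. A minimal sketch of that penalty (the matrix values below are illustrative assumptions, not data from the paper):

```python
import numpy as np

def l21_norm(W):
    """l(2,1) norm: the sum of the Euclidean norms of the rows of W.
    Penalizing this value drives entire rows of W toward zero, so each
    predictor (a row) is either shared by all latent components or
    dropped from all of them -- the 'common variable' selection the
    abstract describes."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

# A row-sparse direction matrix for 4 predictors and 2 components;
# predictors 2 and 4 (zero rows) are excluded from every component.
W_sparse = np.array([[0.6, 0.8],
                     [0.0, 0.0],
                     [1.0, 0.0],
                     [0.0, 0.0]])

print(l21_norm(W_sparse))  # 1.0 + 0.0 + 1.0 + 0.0 = 2.0
```

In contrast, an entrywise l(1) penalty can zero individual entries independently per component, which is why it does not force the components to share a common support.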
Pages: 42-53 (12 pages)