Gaussian Kernel Width Optimization for Sparse Bayesian Learning

Cited by: 28
Authors
Mohsenzadeh, Yalda [1 ]
Sheikhzadeh, Hamid [1 ]
Affiliations
[1] Amirkabir Univ Technol, Dept Elect Engn, Tehran 1415854546, Iran
Keywords
Adaptive kernel learning (AKL); expectation maximization (EM); kernel width optimization; regression; relevance vector machine (RVM); sparse Bayesian learning; supervised kernel methods; MACHINE; POSE;
DOI
10.1109/TNNLS.2014.2321134
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods depend on an appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. The algorithm uses an expectation-maximization approach for updating the kernel parameters as well as the other model parameters; therefore, its speed of convergence and computational complexity are the same as those of the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed subject to a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze its performance. Experimental results on commonly used synthetic data, as well as on benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the dependence of performance on the initial choice of the kernel parameters.
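The abstract describes extending the standard RVM so that the Gaussian kernel width is re-estimated inside the EM loop rather than fixed by cross-validation. The following is a minimal illustrative sketch of that idea, not the authors' algorithm: the standard RVM hyperparameter updates (weight precisions `alpha`, noise precision `beta`) are used, and the kernel width is refined each iteration by a small local search on the marginal likelihood, a hypothetical stand-in for the paper's closed-form EM update of the kernel parameters.

```python
import numpy as np

def gaussian_design(X, centers, width):
    # Phi[n, m] = exp(-||x_n - c_m||^2 / (2 * width^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def log_marginal(Phi, t, alpha, beta):
    # log p(t | alpha, beta) for the linear-Gaussian model
    N = len(t)
    C = np.eye(N) / beta + Phi @ np.diag(1.0 / alpha) @ Phi.T
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (N * np.log(2 * np.pi) + logdet + t @ np.linalg.solve(C, t))

def fit_rvm(X, t, width=1.0, n_iter=30):
    centers = X.copy()                # one basis function per training point
    alpha = np.ones(len(X))           # precisions of the weight priors
    beta = 1.0 / np.var(t)            # noise precision
    for _ in range(n_iter):
        Phi = gaussian_design(X, centers, width)
        # E-step: posterior over weights
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ t
        # M-step: standard RVM hyperparameter re-estimation
        gamma = 1.0 - alpha * np.diag(Sigma)       # effective dof per weight
        alpha = np.clip(gamma / np.maximum(mu ** 2, 1e-12), 1e-6, 1e6)
        resid = t - Phi @ mu
        beta = max(len(t) - gamma.sum(), 1e-3) / max(resid @ resid, 1e-12)
        # Crude width update (assumption): pick the best of a few candidates
        # around the current value by marginal likelihood; the paper instead
        # derives a constrained EM update for the kernel parameters.
        cands = width * np.array([0.8, 1.0, 1.25])
        scores = [log_marginal(gaussian_design(X, centers, w), t, alpha, beta)
                  for w in cands]
        width = cands[int(np.argmax(scores))]
    return mu, centers, width

# Toy regression on noisy sinc data with a deliberately poor initial width
rng = np.random.default_rng(0)
X = np.linspace(-5, 5, 60)[:, None]
t = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(60)
mu, centers, width = fit_rvm(X, t, width=3.0)
pred = gaussian_design(X, centers, width) @ mu
rmse = float(np.sqrt(np.mean((pred - np.sinc(X[:, 0])) ** 2)))
```

Even from a badly chosen initial width of 3.0, the width shrinks toward a value that fits the sinc bumps, illustrating the paper's point that learning the kernel width during training removes the dependence on its initial choice.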
Pages: 709 - 719
Page count: 11
Related Papers
50 records
  • [21] Learning Sparse Gaussian Bayesian Network Structure by Variable Grouping
    Yang, Jie
    Leung, Henry C. M.
    Yiu, S. M.
    Cai, Yunpeng
    Chin, Francis Y. L.
    2014 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2014, : 1073 - 1078
  • [22] A simple trick for constructing Bayesian formulations of sparse kernel learning methods
    Cawley, GC
    Talbot, NLC
    PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), VOLS 1-5, 2005, : 1425 - 1430
  • [23] Sparse Bayesian Learning Based on Collaborative Neurodynamic Optimization
    Zhou, Wei
    Zhang, Hai-Tao
    Wang, Jun
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (12) : 13669 - 13683
  • [24] Discriminative Brain Effective Connectivity Analysis for Alzheimer's Disease: A Kernel Learning Approach upon Sparse Gaussian Bayesian Network
    Zhou, Luping
    Wang, Lei
    Liu, Lingqiao
    Ogunbona, Philip
    Shen, Dinggang
    2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2013, : 2243 - 2250
  • [25] Deep Kernel Learning-Based Bayesian Optimization with Adaptive Kernel Functions
    Wang, Xizhe
    Hong, Xufeng
    Pang, Quanquan
    Jiang, Benben
    IFAC PAPERSONLINE, 2023, 56 (02): : 5531 - 5535
  • [26] Multiple kernel sparse representation based Gaussian kernel and Power kernel
    Zhu, Yanyong
    Dong, Jiwen
    Li, Hengjian
    2015 8TH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID), VOL 1, 2015, : 51 - 54
  • [27] Sparse Bayesian Optimization
    Liu, Sulin
    Feng, Qing
    Eriksson, David
    Letham, Benjamin
    Bakshy, Eytan
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 206, 2023, 206
  • [28] Scalable Bayesian optimization based on exploitation-enhanced sparse Gaussian process
    Aydogdu, Ibrahim
    Wang, Yan
    STRUCTURAL AND MULTIDISCIPLINARY OPTIMIZATION, 2024, 67 (12)
  • [29] Kernel online learning with adaptive kernel width
    Fan, Haijin
    Song, Qing
    Shrestha, Sumit B.
    NEUROCOMPUTING, 2016, 175 : 233 - 242
  • [30] Vector Approximate Message Passing with Sparse Bayesian Learning for Gaussian Mixture Prior
    Ruan, Chengyao
    Zhang, Zaichen
    Jiang, Hao
    Dang, Jian
    Wu, Liang
    Zhang, Hongming
    CHINA COMMUNICATIONS, 2023, 20 (05): : 57 - 69