Fully Bayesian logistic regression with hyper-LASSO priors for high-dimensional feature selection

Cited by: 9
Authors
Li, Longhai [1 ]
Yao, Weixin [2 ]
Affiliations
[1] Univ Saskatchewan, Dept Math & Stat, Saskatoon, SK S7N5E6, Canada
[2] Univ Calif Riverside, Dept Stat, Riverside, CA 92521 USA
Funding
US National Science Foundation; Natural Sciences and Engineering Research Council of Canada;
Keywords
High-dimensional; feature selection; non-convex penalties; horseshoe; heavy-tailed prior; hyper-LASSO priors; MCMC; Hamiltonian Monte Carlo; Gibbs sampling; fully Bayesian; GENE-EXPRESSION DATA; VARIABLE SELECTION; PENALIZED REGRESSION; ORACLE PROPERTIES; CLASSIFICATION; MODELS; SHRINKAGE;
DOI
10.1080/00949655.2018.1490418
Chinese Library Classification (CLC)
TP39 [Computer applications];
Subject classification codes
081203 ; 0835 ;
Abstract
Feature selection arises in many areas of modern science. For example, in genomic research, we want to find the genes that can be used to separate tissues of different classes (e.g. cancer and normal). One approach is to fit regression/classification models with certain penalization. In the past decade, hyper-LASSO penalization (priors) has received increasing attention in the literature. However, fully Bayesian methods that use Markov chain Monte Carlo (MCMC) for regression/classification with hyper-LASSO priors remain underdeveloped. In this paper, we introduce an MCMC method for learning multinomial logistic regression with hyper-LASSO priors. Our MCMC algorithm uses Hamiltonian Monte Carlo within a restricted Gibbs sampling framework. We use simulation studies and real data to demonstrate the superior performance of hyper-LASSO priors compared to LASSO, and to investigate the issues of choosing the heaviness and scale of hyper-LASSO priors.
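The HMC-within-Gibbs scheme the abstract describes can be illustrated with a minimal sketch for binary logistic regression. This is not the authors' algorithm: it uses a Student-t prior on each coefficient, written as a scale mixture of normals (one example of a heavy-tailed, hyper-LASSO-type prior), and alternates an inverse-gamma Gibbs update for the per-coefficient scales with a plain HMC update for the coefficients. All function names and tuning values (`eps`, `n_leap`, `alpha`, `w`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def log_post(beta, X, y, lam):
    # Logistic log-likelihood plus Gaussian log-prior with variances lam
    eta = X @ beta
    ll = y @ eta - np.logaddexp(0.0, eta).sum()
    return ll - 0.5 * np.sum(beta**2 / lam)

def log_post_grad(beta, X, y, lam):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return X.T @ (y - p) - beta / lam

def hmc_step(beta, X, y, lam, rng, eps=0.05, n_leap=20):
    # One HMC update of beta, holding the scale parameters lam fixed
    mom = rng.standard_normal(beta.size)
    b, m = beta.copy(), mom.copy()
    m += 0.5 * eps * log_post_grad(b, X, y, lam)   # leapfrog: half momentum step
    for _ in range(n_leap - 1):
        b += eps * m
        m += eps * log_post_grad(b, X, y, lam)
    b += eps * m
    m += 0.5 * eps * log_post_grad(b, X, y, lam)
    log_acc = (log_post(b, X, y, lam) - 0.5 * m @ m) \
            - (log_post(beta, X, y, lam) - 0.5 * mom @ mom)
    return b if np.log(rng.uniform()) < log_acc else beta

def gibbs_hmc(X, y, n_iter=500, alpha=1.0, w=0.1, seed=0):
    # beta_j | lam_j ~ N(0, lam_j), lam_j ~ Inv-Gamma(alpha/2, alpha*w^2/2),
    # so marginally beta_j ~ t_alpha(0, w): a heavy-tailed shrinkage prior.
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    draws = np.zeros((n_iter, beta.size))
    for t in range(n_iter):
        # Gibbs step: lam_j | beta_j is conjugate inverse-gamma
        shape = (alpha + 1.0) / 2.0
        scale = (alpha * w**2 + beta**2) / 2.0
        lam = scale / rng.gamma(shape, 1.0, size=beta.size)  # Inv-Gamma draw
        beta = hmc_step(beta, X, y, lam, rng)                # HMC step for beta
        draws[t] = beta
    return draws
```

The design point the sketch makes concrete is that conditioning on the latent scales turns the heavy-tailed prior into an independent Gaussian prior, so the HMC update for the coefficients only ever sees a smooth, log-concave target.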
Pages: 2827-2851
Number of pages: 25
Related papers (50 total)
  • [1] Sparse Bayesian variable selection in high-dimensional logistic regression models with correlated priors
    Ma, Zhuanzhuan
    Han, Zifei
    Ghosh, Souparno
    Wu, Liucang
    Wang, Min
    STATISTICAL ANALYSIS AND DATA MINING, 2024, 17 (01)
  • [2] Robust adaptive LASSO in high-dimensional logistic regression
    Basu, Ayanendranath
    Ghosh, Abhik
    Jaenada, Maria
    Pardo, Leandro
    STATISTICAL METHODS AND APPLICATIONS, 2024
  • [3] Minimax Sparse Logistic Regression for Very High-Dimensional Feature Selection
    Tan, Mingkui
    Tsang, Ivor W.
    Wang, Li
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2013, 24 (10) : 1609 - 1622
  • [4] Bayesian Hyper-LASSO Classification for Feature Selection with Application to Endometrial Cancer RNA-seq Data
    Jiang, Lai
    Greenwood, Celia M. T.
    Yao, Weixin
    Li, Longhai
    SCIENTIFIC REPORTS, 2020, 10 (01)
  • [6] Penalized logistic regression with the adaptive LASSO for gene selection in high-dimensional cancer classification
    Algamal, Zakariya Yahya
    Lee, Muhammad Hisyam
    EXPERT SYSTEMS WITH APPLICATIONS, 2015, 42 (23) : 9326 - 9332
  • [7] Bayesian feature selection in high-dimensional regression in presence of correlated noise
    Feldman, Guy
    Bhadra, Anindya
    Kirshner, Sergey
    STAT, 2014, 3 (01): : 258 - 272
  • [8] High-Dimensional Feature Selection by Feature-Wise Kernelized Lasso
    Yamada, Makoto
    Jitkrittum, Wittawat
    Sigal, Leonid
    Xing, Eric P.
    Sugiyama, Masashi
    NEURAL COMPUTATION, 2014, 26 (01) : 185 - 207
  • [9] The sparsity and bias of the lasso selection in high-dimensional linear regression
    Zhang, Cun-Hui
    Huang, Jian
    ANNALS OF STATISTICS, 2008, 36 (04): : 1567 - 1594
  • [10] Bayesian inference for high-dimensional linear regression under mnet priors
    Tan, Aixin
    Huang, Jian
    CANADIAN JOURNAL OF STATISTICS-REVUE CANADIENNE DE STATISTIQUE, 2016, 44 (02): : 180 - 197