Sparse Regression in Cancer Genomics: Comparing Variable Selection and Predictions in Real World Data

Cited: 2
Authors
O'Shea, Robert J. [1 ]
Tsoka, Sophia [2 ]
Cook, Gary J. R. [1 ,3 ,4 ]
Goh, Vicky [1 ,5 ]
Affiliations
[1] Kings Coll London, Sch Biomed Engn & Imaging Sci, Dept Canc Imaging, 5th Floor,Becket House,1 Lambeth Palace Rd, London SE1 7EU, England
[2] Kings Coll London, Sch Nat & Math Sci, Dept Informat, London, England
[3] Kings Coll London, London, England
[4] St Thomas Hosp, Guys & St Thomas PET Ctr, London, England
[5] Guys & St Thomas NHS Fdn Trust, Dept Radiol, London, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Artificial intelligence; gene regulatory networks; models, statistical; computational biology; genomics; GENE-EXPRESSION OMNIBUS; MODEL SELECTION; LASSO; SUBSET; REGULARIZATION; OPTIMIZATION;
DOI
10.1177/11769351211056298
CLC classification
R73 [Oncology];
Subject classification
100214;
Abstract
BACKGROUND: Evaluation of gene interaction models in cancer genomics is challenging, as the true distribution is uncertain. Previous analyses have benchmarked models using synthetic data or databases of experimentally verified interactions - approaches which are susceptible to misrepresentation and incompleteness, respectively. The objectives of this analysis are to (1) provide a real-world data-driven approach for comparing performance of genomic model inference algorithms, (2) compare the performance of LASSO, elastic net, best-subset selection, L0L1 penalisation and L0L2 penalisation in real genomic data and (3) compare algorithmic preselection according to performance in our benchmark datasets to algorithmic selection by internal cross-validation. METHODS: Five large (n ≈ 4000) genomic datasets were extracted from Gene Expression Omnibus. 'Gold-standard' regression models were trained on subspaces of these datasets (n ≈ 4000, p = 500). Penalised regression models were trained on small samples from these subspaces (n ∈ {25, 75, 150}, p = 500) and validated against the gold-standard models. Variable selection performance and out-of-sample prediction were assessed. Penalty 'preselection' according to test performance in the other 4 datasets was compared to selection by internal cross-validation error minimisation. RESULTS: L1L2-penalisation achieved the highest cosine similarity between estimated coefficients and those of gold-standard models. L0L2-penalised models explained the greatest proportion of variance in test responses, though performance was unreliable in low signal:noise conditions. L0L2 also attained the highest overall median variable selection F1 score. Penalty preselection significantly outperformed selection by internal cross-validation in each of the 3 examined metrics. CONCLUSIONS: This analysis explores a novel approach for comparisons of model selection approaches in real genomic data from 5 cancers.
Our benchmarking datasets have been made publicly available for use in future research. Our findings support the use of L0L2 penalisation for structural selection and L1L2 penalisation for coefficient recovery in genomic data. Evaluation of learning algorithms according to observed test performance in external genomic datasets yields valuable insights into actual test performance, providing a data-driven complement to internal cross-validation in genomic regression tasks.
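The benchmarking procedure described in METHODS can be illustrated with a minimal sketch. This is not the authors' implementation: it uses simulated data as a stand-in for a Gene Expression Omnibus subspace, and scikit-learn's LassoCV/ElasticNetCV as stand-ins for the paper's penalty family (the L0-based penalties compared in the paper are not available in scikit-learn). The evaluation metrics match those named in the abstract: cosine similarity of coefficients and variable-selection F1 against the gold-standard model.

```python
import numpy as np
from numpy.linalg import norm
from sklearn.linear_model import LassoCV, ElasticNetCV
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Simulated stand-in for one GEO expression subspace (paper: n ≈ 4000, p = 500).
n_gold, n_small, p, k = 4000, 75, 500, 10
X = rng.standard_normal((n_gold, p))
beta = np.zeros(p)
beta[:k] = rng.normal(0.0, 2.0, k)        # sparse "true" signal
y = X @ beta + rng.standard_normal(n_gold)

# 'Gold-standard' model trained on the full subspace
# (estimator choice here is illustrative, not the paper's).
gold = LassoCV(cv=5).fit(X, y)

# Penalised model trained on a small subsample (paper: n ∈ {25, 75, 150}).
idx = rng.choice(n_gold, n_small, replace=False)
small = ElasticNetCV(cv=5).fit(X[idx], y[idx])

# Coefficient recovery: cosine similarity to the gold-standard coefficients.
cos = small.coef_ @ gold.coef_ / (norm(small.coef_) * norm(gold.coef_) + 1e-12)

# Structural recovery: variable-selection F1 against the gold-standard support.
f1 = f1_score(gold.coef_ != 0, small.coef_ != 0)
print(f"cosine similarity: {cos:.3f}, selection F1: {f1:.3f}")
```

In the paper this comparison is repeated across penalties and datasets, so that a penalty can be 'preselected' by its observed test performance in the other 4 benchmark datasets rather than by internal cross-validation alone.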
Pages: 15