Linear iterative feature embedding: an ensemble framework for an interpretable model

Cited: 0
Authors
Sudjianto, Agus [1 ]
Qiu, Jinwen [1 ]
Li, Miaoqi [2 ]
Chen, Jie [3 ]
Institutions
[1] Wells Fargo, 401 S Tryon St, Charlotte, NC 28202 USA
[2] Wells Fargo, 11625 N Community House Rd, Charlotte, NC 28277 USA
[3] Wells Fargo, 3440 Walnut Ave, Fremont, CA 94538 USA
Source
NEURAL COMPUTING & APPLICATIONS | 2023, Vol. 35, Issue 13
Keywords
Linear iterative feature embedding; Ensemble method; Loss decomposition; Variable importance; Interaction detection; DIVERSITY; LINK
DOI
10.1007/s00521-023-08204-w
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
A new ensemble framework for an interpretable model, called linear iterative feature embedding (LIFE), has been developed to achieve high prediction accuracy, easy interpretation, and efficient computation simultaneously. The LIFE algorithm fits a wide single-hidden-layer neural network (NN) accurately in three steps: defining subsets of a dataset by the linear projections of neural nodes, creating features from multiple narrow single-hidden-layer NNs trained on the different subsets of the data, and combining the features with a linear model. The theoretical rationale behind LIFE is also provided through its connection to the loss ambiguity decomposition of stacking ensemble methods. Both simulation and empirical experiments confirm that LIFE consistently outperforms directly trained single-hidden-layer NNs, and in many experiments it also outperforms other benchmark models, including multilayer feed-forward neural networks (FFNN), XGBoost, and random forests (RF). As a wide single-hidden-layer NN, LIFE is intrinsically interpretable, and both variable importance and global main and interaction effects can be easily derived and visualized. In addition, the parallel nature of base-learner building makes LIFE computationally efficient by leveraging parallel computing.
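The three steps of the abstract can be sketched in miniature. This is a hedged, dependency-free illustration only, not the paper's implementation: the narrow single-hidden-layer base networks are replaced by per-subset linear fits passed through a ReLU (so each fit behaves like one hidden node), and the subsets are defined by the sign of a fixed projection; the toy target, data sizes, and closed-form solver are all assumptions made for this example.

```python
import random

random.seed(0)

# Toy data: y = |x| + noise, which no single global linear model can fit.
X = [random.uniform(-2.0, 2.0) for _ in range(200)]
Y = [abs(x) + random.gauss(0.0, 0.1) for x in X]

def fit_line(pairs):
    """Ordinary least squares for y = a*x + b in one dimension."""
    xs, ys = zip(*pairs)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    return a, my - a * mx

# Step 1: define subsets of the data by a linear projection (here simply
# the sign of x, mimicking the half-space induced by a neural node).
subsets = [[(x, y) for x, y in zip(X, Y) if x < 0.0],
           [(x, y) for x, y in zip(X, Y) if x >= 0.0]]

# Step 2: train a narrow base learner on each subset; its ReLU output is
# the embedded feature (a stand-in for a hidden node of a narrow NN).
lines = [fit_line(s) for s in subsets]

def features(x):
    return [max(0.0, a * x + b) for a, b in lines]

# Step 3: combine the features with a linear model, solving the 2x2
# normal equations for the combination weights.
F = [features(x) for x in X]
g11 = sum(f[0] * f[0] for f in F)
g12 = sum(f[0] * f[1] for f in F)
g22 = sum(f[1] * f[1] for f in F)
c1 = sum(f[0] * y for f, y in zip(F, Y))
c2 = sum(f[1] * y for f, y in zip(F, Y))
det = g11 * g22 - g12 * g12
w1 = (g22 * c1 - g12 * c2) / det
w2 = (g11 * c2 - g12 * c1) / det

def predict(x):
    f = features(x)
    return w1 * f[0] + w2 * f[1]

print(predict(-1.0), predict(1.0))  # both close to |x| = 1
```

Because each base learner sees only its subset, the two fits can be computed independently, which is the same parallelism the abstract attributes to LIFE's base-learner building.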
Pages: 9657-9685
Page count: 29
Related Papers (50 records total)
  • [21] Generalized Embedding Regression: A Framework for Supervised Feature Extraction
    Lu, Jianglin
    Lai, Zhihui
    Wang, Hailing
    Chen, Yudong
    Zhou, Jie
    Shen, Linlin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (01) : 185 - 199
  • [23] Embedding ensemble tracking in a stochastic framework for robust object tracking
    Gu, Yu
    Li, Ping
    Han, Bo
JOURNAL OF ZHEJIANG UNIVERSITY-SCIENCE A, 2009, 10 (10): 1476 - 1482
  • [25] Enhancing Seismic Facies Classification Using Interpretable Feature Selection and Time Series Ensemble Learning Model With Uncertainty Assessment
    Ren, Quan
    Zhang, Hongbing
    Zhang, Dailu
    Zhao, Xiang
    Yu, Xiang
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [26] An improved locally linear embedding method for feature extraction
    Zhang Wei
    Zhou Weijia
    MATERIALS, MECHATRONICS AND AUTOMATION, PTS 1-3, 2011, 467-469 : 487 - 492
  • [27] Feature Fusion Using Locally Linear Embedding for Classification
    Sun, Bing-Yu
    Zhang, Xiao-Ming
    Li, Jiuyong
    Mao, Xue-Min
IEEE TRANSACTIONS ON NEURAL NETWORKS, 2010, 21 (01): 163 - 168
  • [28] Model selection in an ensemble framework
    Wichard, Joerg D.
    2006 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORK PROCEEDINGS, VOLS 1-10, 2006, : 2187 - 2192
  • [29] Accounting for model errors in iterative ensemble smoothers
    Evensen, Geir
    COMPUTATIONAL GEOSCIENCES, 2019, 23 (04) : 761 - 775
  • [30] A Dropout Prediction Framework Combined with Ensemble Feature Selection
    Ai, Dan
    Zhang, Tiancheng
    Yu, Ge
    Shao, Xinying
    ICIET 2020: 2020 8TH INTERNATIONAL CONFERENCE ON INFORMATION AND EDUCATION TECHNOLOGY, 2020, : 179 - 185