Toward interpretable machine learning: evaluating models of heterogeneous predictions

Cited: 0
|
Authors
Zhang, Ruixun [1 ,2 ,3 ,4 ]
Affiliations
[1] Peking Univ, Sch Math Sci, Beijing, Peoples R China
[2] Peking Univ, Ctr Stat Sci, Beijing, Peoples R China
[3] Peking Univ, Natl Engn Lab Big Data Anal & Applicat, Beijing, Peoples R China
[4] Peking Univ, Lab Math Econ & Quantitat Finance, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Machine learning; Interpretability; Heterogeneous prediction; Bayesian statistics; Loan default; SYSTEMIC RISK; FINANCE; DEFAULT; GAME; GO;
DOI
10.1007/s10479-024-06033-1
Chinese Library Classification (CLC)
C93 [Management]; O22 [Operations Research];
Subject Classification Codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
AI and machine learning have made significant progress in the past decade, powering many applications in FinTech and beyond. But few machine learning models, especially deep learning models, are interpretable by humans, creating challenges for risk management and model improvements. Here, we propose a simple yet powerful framework to evaluate and interpret any black-box model with binary outcomes and explanatory variables, and heterogeneous relationships between the two. Our new metric, the signal success share (SSS) cross-entropy loss, measures how well the model captures the relationship along any feature or dimension, thereby providing actionable guidance on model improvements. Simulations demonstrate that our metric works for heterogeneous and nonlinear predictions, and distinguishes itself from traditional loss functions in evaluating model interpretability. We apply the methodology to an example of predicting loan defaults with real data. Our framework is more broadly applicable to a wide range of problems in financial and information technology.
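The abstract's core idea, evaluating how well a black-box binary classifier captures the outcome's relationship with each explanatory variable, can be illustrated with ordinary cross-entropy loss computed within quantile bins of a single feature. This is a generic hedged sketch, not the paper's SSS metric (whose definition is not given here); the function names and binning scheme are illustrative assumptions:

```python
import numpy as np

def cross_entropy(y_true, p_pred, eps=1e-12):
    """Binary cross-entropy loss, averaged over samples."""
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def per_group_cross_entropy(y_true, p_pred, feature, n_bins=4):
    """Cross-entropy within quantile bins of one feature, to see
    whether the model tracks the outcome along that dimension."""
    edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(feature, edges[1:-1]), 0, n_bins - 1)
    return {b: cross_entropy(y_true[bins == b], p_pred[bins == b])
            for b in range(n_bins)}

# Toy example: a well-specified model versus one that ignores the feature.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 5000)
true_p = 1 / (1 + np.exp(-3 * x))                  # true P(y=1 | x)
y = (rng.uniform(size=5000) < true_p).astype(int)
good_p = true_p                                    # uses the feature
bad_p = np.full_like(x, y.mean())                  # constant predictor
print(per_group_cross_entropy(y, good_p, x))
print(per_group_cross_entropy(y, bad_p, x))
```

The per-bin losses expose where a model's predictions fail to follow the feature: the constant predictor's loss is inflated in the extreme bins, where the outcome depends most strongly on `x`, even though its overall average loss can look acceptable.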
Pages: 21
Related Papers
50 records in total
  • [1] Evaluating interpretable machine learning predictions for cryptocurrencies
    El Majzoub, Ahmad
    Rabhi, Fethi A.
    Hussain, Walayat
    INTELLIGENT SYSTEMS IN ACCOUNTING FINANCE & MANAGEMENT, 2023, 30 (03): : 137 - 149
  • [2] Toward Interpretable Machine Learning Models for Materials Discovery
    Mikulskis, Paulius
    Alexander, Morgan R.
    Winkler, David Alan
    ADVANCED INTELLIGENT SYSTEMS, 2019, 1 (08)
  • [3] ETHICAL AI IN HEALTHCARE: EVALUATING FAIRNESS IN COLORECTAL CANCER SURVIVOR HEALTHCARE EXPENDITURES PREDICTIONS WITH INTERPRETABLE MACHINE LEARNING (ML) MODELS
    Zhou, B.
    Gupta, M.
    Pathak, M.
    Siddiqui, Z. A.
    Sambamoorthi, N.
    Niranjan, S.
    Sambamoorthi, U.
    VALUE IN HEALTH, 2024, 27 (06) : S2 - S3
  • [4] Interpretable Differencing of Machine Learning Models
    Haldar, Swagatam
    Saha, Diptikalyan
    Wei, Dennis
    Nair, Rahul
    Daly, Elizabeth M.
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 788 - 797
  • [5] Toward Efficient Automation of Interpretable Machine Learning
    Kovalerchuk, Boris
    Neuhaus, Nathan
    2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2018, : 4940 - 4947
  • [6] Toward Interpretable Machine Learning: Constructing Polynomial Models Based on Feature Interaction Trees
    Jang, Jisoo
    Kim, Mina
    Bui, Tien-Cuong
    Li, Wen-Syan
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2023, PT II, 2023, 13936 : 159 - 170
  • [7] Interpretable machine learning for knowledge generation in heterogeneous catalysis
    Esterhuizen, Jacques A.
    Goldsmith, Bryan R.
    Linic, Suljo
    NATURE CATALYSIS, 2022, 5 (03) : 175 - 184
  • [9] Interpretable models for extrapolation in scientific machine learning
    Muckley, Eric S.
    Saal, James E.
    Meredig, Bryce
    Roper, Christopher S.
    Martin, John H.
    DIGITAL DISCOVERY, 2023, 2 (05): : 1425 - 1435
  • [10] Interpretable machine learning models for crime prediction
    Zhang, Xu
    Liu, Lin
    Lan, Minxuan
    Song, Guangwen
    Xiao, Luzi
    Chen, Jianguo
    COMPUTERS ENVIRONMENT AND URBAN SYSTEMS, 2022, 94