On the interpretability of machine learning-based model for predicting hypertension

Cited by: 135
Authors
Elshawi, Radwa [1 ]
Al-Mallah, Mouaz H. [2 ]
Sakr, Sherif [1 ]
Affiliations
[1] Univ Tartu, Inst Comp Sci, Data Syst Grp, 2 J Liivi St, EE-50409 Tartu, Estonia
[2] Houston Methodist Ctr, Houston, TX USA
Keywords
Machine learning; Interpretability; Hypertension; Blood pressure; Decision tree; Rules; Extraction; History
DOI
10.1186/s12911-019-0874-0
Chinese Library Classification (CLC): R-058
Abstract
Background: Although complex machine learning models commonly outperform traditional, simple, interpretable models, clinicians find it hard to understand and trust these complex models because their predictions lack intuition and explanation. The aim of this study is to demonstrate the utility of various model-agnostic explanation techniques for machine learning models, through a case study analyzing the outcomes of a random forest model for predicting individuals at risk of developing hypertension based on cardiorespiratory fitness data.

Methods: The dataset used in this study contains information on 23,095 patients who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 10-year follow-up. Five global interpretability techniques (Feature Importance, Partial Dependence Plot, Individual Conditional Expectation, Feature Interaction, Global Surrogate Models) and two local interpretability techniques (Local Surrogate Models, Shapley Value) were applied to show how interpretability techniques can assist clinical staff in better understanding and trusting the outcomes of machine learning-based predictions.

Results: Several experiments were conducted and reported. The results show that different interpretability techniques shed light on different aspects of model behavior: global interpretations enable clinicians to understand the entire conditional distribution modeled by the trained response function, whereas local interpretations promote understanding of small parts of the conditional distribution for specific instances.

Conclusions: Interpretability techniques can differ in the explanations they give for the behavior of a machine learning model. Global interpretability techniques have the advantage of generalizing over the entire population, while local interpretability techniques focus on explanations at the level of individual instances; either approach can be valid depending on the needs of the application. Both are effective methods for assisting clinicians in the medical decision process; however, clinicians always retain the final say on accepting or rejecting the outcomes of machine learning models and their explanations, based on their domain expertise.
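As a concrete illustration of the global techniques named in the abstract, the following is a minimal sketch (not the authors' code) that computes permutation feature importance and a partial dependence curve for a random forest classifier with scikit-learn; the feature names and the synthetic data are hypothetical stand-ins for the cardiorespiratory fitness variables used in the study.

# Minimal sketch, assuming scikit-learn is available; feature names and data are
# hypothetical placeholders, not the study's actual variables.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence, permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic binary outcome (hypertension: yes/no) with illustrative features.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4, random_state=0)
feature_names = ["age", "resting_sbp", "peak_mets", "resting_hr", "peak_hr", "bmi"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global technique 1: permutation feature importance, measured on held-out data.
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:12s} importance = {score:.3f}")

# Global technique 2: partial dependence of the predicted risk on a single feature.
pd_result = partial_dependence(model, X_test, features=[feature_names.index("age")])
print("average partial dependence for 'age':", np.round(pd_result["average"][0], 3))

The local techniques discussed in the paper (local surrogate models such as LIME, and Shapley values) can be explored in the same spirit with the lime and shap Python packages, which explain individual predictions rather than the model as a whole.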
Pages: 32
Related papers (50 in total)
  • [1] On the interpretability of machine learning-based model for predicting hypertension
    Radwa Elshawi
    Mouaz H. Al-Mallah
    Sherif Sakr
    [J]. BMC Medical Informatics and Decision Making, 2019, 19
  • [2] Interpretability of machine learning-based prediction models in healthcare
    Stiglic, Gregor
    Kocbek, Primoz
    Fijacko, Nino
    Zitnik, Marinka
    Verbert, Katrien
    Cilar, Leona
    [J]. WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2020, 10 (05)
  • [4] A Machine Learning-Based Model for Predicting the Risk of Cardiovascular Disease
    Hsiao, Chiu-Han
    Yu, Po-Chun
    Hsieh, Chia-Ying
    Zhong, Bing-Zi
    Tsai, Yu-Ling
    Cheng, Hao-Min
    Chang, Wei-Lun
    Lin, Frank Yeong-Sung
    Huang, Yennun
    [J]. ADVANCED INFORMATION NETWORKING AND APPLICATIONS, AINA-2022, VOL 1, 2022, 449 : 364 - 374
  • [5] MACHINE LEARNING-BASED MODEL FOR PREDICTING CONCRETE COMPRESSIVE STRENGTH
    Tu Trung Nguyen
    Long Tran Ngoc
    Hoang Hiep Vu
    Tung Pham Thanh
    [J]. INTERNATIONAL JOURNAL OF GEOMATE, 2021, 20 (77): : 197 - 204
  • [6] A Novel Machine Learning-Based Systolic Blood Pressure Predicting Model
    Zheng, Jiao
    Yu, Zhengyu
    [J]. JOURNAL OF NANOMATERIALS, 2021, 2021
  • [7] A machine learning-based nomogram model for predicting the recurrence of cystitis glandularis
    Liu, Xuhao
    Wang, Yuhang
    Wang, Yinzhao
    Dao, Pinghong
    Zhou, Tailai
    Zhu, Wenhao
    Huang, Chuyang
    Li, Yong
    Yan, Yuzhong
    Chen, Minfeng
    [J]. THERAPEUTIC ADVANCES IN UROLOGY, 2024, 16
  • [8] Interpretability of a Deep Learning-Based Prediction Model for Mandibular Osteoradionecrosis
    Humbert-Vidan, L.
    Patel, V.
    King, A. P.
    Guerrero Urbano, T.
    [J]. INTERNATIONAL JOURNAL OF RADIATION ONCOLOGY BIOLOGY PHYSICS, 2023, 117 (02): : E468 - E469
  • [9] A machine learning-based radiomic model for predicting urinary infection stone: reply
    Zheng, Junjiong
    Yu, Hao
    Wu, Zhuo
    Zou, Xiaoguang
    Lin, Tianxin
    [J]. KIDNEY INTERNATIONAL, 2021, 100 (05) : 1142 - 1143
  • [10] Development of a machine learning-based model for predicting individual responses to antihypertensive treatments
    Yi, Jiayi
    Wang, Lili
    Song, Jiali
    Liu, Yanchen
    Liu, Jiamin
    Zhang, Haibo
    Lu, Jiapeng
    Zheng, Xin
    [J]. NUTRITION METABOLISM AND CARDIOVASCULAR DISEASES, 2024, 34 (07) : 1660 - 1669