Enhancing healthcare decision support through explainable AI models for risk prediction

Citations: 2
Authors
Niu, Shuai [1 ]
Yin, Qing [2 ]
Ma, Jing [1 ]
Song, Yunya [3 ]
Xu, Yida [4 ]
Bai, Liang [5 ]
Pan, Wei [6 ]
Yang, Xian [2 ,7 ]
Affiliations
[1] Hong Kong Baptist Univ, Dept Comp Sci, Kowloon Tong, Hong Kong, Peoples R China
[2] Univ Manchester, Alliance Manchester Business Sch, Oxford Rd, Manchester M13 9PL, England
[3] Hong Kong Baptist Univ, AI & Media Res Lab, Kowloon Tong, Hong Kong, Peoples R China
[4] Hong Kong Baptist Univ, Dept Math, Kowloon Tong, Hong Kong, Peoples R China
[5] Shanxi Univ, Comp & Informat Technol Sch, Shanxi Rd, Taiyuan, Shan Xi, Peoples R China
[6] Univ Manchester, Dept Comp Sci, Oxford Rd, Manchester M13 9PL, England
[7] Imperial Coll London, Data Sci Inst, South Kensington Campus, London SW7 2AZ, England
Funding
National Natural Science Foundation of China;
Keywords
Explainable AI in healthcare; Healthcare decision support; Disease risk prediction; Modelling longitudinal patient data; Deep neural networks; HEART;
DOI
10.1016/j.dss.2024.114228
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Electronic health records (EHRs) are a valuable source of information that can aid in understanding a patient's health condition and making informed healthcare decisions. However, modelling longitudinal EHRs with heterogeneous information is a challenging task. Although recurrent neural networks (RNNs) are frequently utilized in artificial intelligence (AI) models for capturing longitudinal data, their explanatory capabilities are limited. Predictive clustering stands as the most recent advancement within this domain, offering interpretable indications at the cluster level for predicting disease risk. Nonetheless, the challenge of determining the optimal number of clusters has hindered the widespread application of predictive clustering for disease risk prediction. In this paper, we introduce a novel non-parametric predictive clustering-based risk prediction model that integrates the Dirichlet Process Mixture Model (DPMM) with predictive clustering via neural networks. To enhance the model's interpretability, we integrate attention mechanisms that capture local-level evidence in addition to the cluster-level evidence provided by predictive clustering. The outcome of this research is a multi-level explainable artificial intelligence (AI) model. We evaluated the proposed model on two real-world datasets and demonstrated its effectiveness in capturing longitudinal EHR information for disease risk prediction. Moreover, the model successfully produced interpretable evidence to bolster its predictions.
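The abstract's central idea is that a Dirichlet-process prior removes the need to fix the number of clusters in advance. The paper's actual model couples the DPMM with neural predictive clustering and attention, which is not reproduced here; the minimal sketch below only illustrates the non-parametric clustering step, using scikit-learn's `BayesianGaussianMixture` with a Dirichlet-process prior as a stand-in. The toy 2-D "patient representations" and the component cap of 10 are illustrative assumptions.

```python
# Sketch: Dirichlet-process mixture clustering where the effective number of
# clusters is inferred from the data rather than chosen up front.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Toy stand-in for learned patient representations: three latent groups in 2-D.
X = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2)),
    rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2)),
    rng.normal(loc=[0.0, 3.0], scale=0.3, size=(50, 2)),
])

# Generous upper bound of 10 components; the stick-breaking (Dirichlet-process)
# prior drives the weights of unneeded components toward zero.
dpmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(X)

labels = dpmm.predict(X)
n_used = np.unique(labels).size  # clusters actually assigned points
print(f"clusters actually used: {n_used} of 10")
```

In the paper's setting this inference happens jointly with the predictive network, so cluster assignments are also shaped by the risk-prediction objective; the sketch shows only why the cluster count need not be specified manually.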
Pages: 12
Related Papers
50 items in total
  • [21] Machine learning for predicting readmission risk among the frail: Explainable AI for healthcare
    Mohanty, Somya D.
    Lekan, Deborah
    McCoy, Thomas P.
    Jenkins, Marjorie
    Manda, Prashanti
    PATTERNS, 2022, 3 (01)
  • [22] Enhancing cardiovascular risk prediction through AI-enabled calcium-omics
    Hoori, Ammar
    Al-Kindi, Sadeer
    Hu, Tao
    Song, Yingnan
    Wu, Hao
    Lee, Juhwan
    Tashtish, Nour
    Fu, Pingfu
    Gilkeson, Robert
    Rajagopalan, Sanjay
    Wilson, David L.
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [23] Towards a Knowledge Graph-Based Explainable Decision Support System in Healthcare
    Rajabi, Enayat
    Etminani, Kobra
    PUBLIC HEALTH AND INFORMATICS, PROCEEDINGS OF MIE 2021, 2021, 281 : 502 - 503
  • [24] Enhancing Going Concern Prediction With Anchor Explainable AI and Attention-Weighted XGBoost
    Thanathamathee, Putthiporn
    Sawangarreerak, Siriporn
    Nizam, Dinna Nina Mohd
    IEEE ACCESS, 2024, 12 : 68345 - 68363
  • [25] Optimized Tiny Machine Learning and Explainable AI for Trustable and Energy-Efficient Fog-Enabled Healthcare Decision Support System
    Arthi, R.
    Krishnaveni, S.
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)
  • [26] DECISION SUPPORT FOR RISK MANAGEMENT IN HEALTHCARE ORGANISATIONS
    Drnovsek, Rok
    Rajkovic, Uros
    35TH BLED ECONFERENCE DIGITAL RESTRUCTURING AND HUMAN (RE)ACTION, BLED ECONFERENCE 2022, 2022, : 705 - 715
  • [27] Explainable AI decision support improves accuracy during telehealth strep throat screening
    Gomez, Catalina
    Smith, Brittany-Lee
    Zayas, Alisa
    Unberath, Mathias
    Canares, Therese
    COMMUNICATIONS MEDICINE, 2024, 4 (01)
  • [28] Enhancing prediction of landslide dam stability through AI models: A comparative study with traditional approaches
    Li, Xianfeng
    Nishio, Mayuko
    Sugawara, Kentaro
    Iwanaga, Shoji
    Shimada, Toru
    Kanasaki, Hiroyuki
    Kanai, Hiromichi
    Zheng, Shitao
    Chun, Pang-jo
    GEOMORPHOLOGY, 2024, 454
  • [29] Co-design of Human-centered, Explainable AI for Clinical Decision Support
    Panigutti, Cecilia
    Beretta, Andrea
    Fadda, Daniele
    Giannotti, Fosca
    Pedreschi, Dino
    Perotti, Alan
    Rinzivillo, Salvatore
    ACM TRANSACTIONS ON INTERACTIVE INTELLIGENT SYSTEMS, 2023, 13 (04)
  • [30] Reliable water quality prediction and parametric analysis using explainable AI models
    Nallakaruppan, M. K.
    Gangadevi, E.
    Shri, M. Lawanya
    Balusamy, Balamurugan
    Bhattacharya, Sweta
    Selvarajan, Shitharth
    SCIENTIFIC REPORTS, 2024, 14 (01)