An Interpretable Approach with Explainable AI for Heart Stroke Prediction

Cited by: 5
Authors
Srinivasu, Parvathaneni Naga [1 ,2 ]
Sirisha, Uddagiri [2 ]
Sandeep, Kotte [3 ]
Praveen, S. Phani [2 ]
Maguluri, Lakshmana Phaneendra [4 ]
Bikku, Thulasi [5 ]
Affiliations
[1] Univ Fed Ceara, Dept Teleinformat Engn, BR-60455970 Fortaleza, Brazil
[2] Prasad V Potluri Siddhartha Inst Technol, Dept Comp Sci & Engn, Vijayawada 520007, India
[3] Dhanekula Inst Engn & Technol, Dept Informat Technol, Vijayawada 521139, India
[4] Koneru Lakshmaiah Educ Fdn, Dept Comp Sci & Engn, Guntur 522302, India
[5] Amrita Vishwa Vidyapeetham, Amrita Sch Comp Amaravati, Comp Sci & Engn, Amaravati 522503, India
Keywords
Artificial Neural Network; deep learning; data leakage; sampling; feature selection; explainable AI; LIME tabular;
DOI
10.3390/diagnostics14020128
Chinese Library Classification (CLC)
R5 [Internal Medicine];
Discipline Classification Code
1002; 100201
Abstract
Heart strokes are a significant global health concern that profoundly affects the wellbeing of the population. Many research efforts have developed predictive models for heart strokes using machine learning (ML) and deep learning (DL) techniques. Nevertheless, prior studies have often failed to bridge the gap between complex ML models and their interpretability in clinical contexts, leaving healthcare professionals hesitant to rely on them for critical decision-making. This research introduces a carefully designed, effective, and easily interpretable approach for heart stroke prediction, empowered by explainable AI techniques. Our contributions include a model that incorporates pivotal techniques such as resampling, data leakage prevention, and feature selection, while emphasizing comprehensibility for healthcare practitioners. This multifaceted approach has the potential to benefit healthcare by offering a reliable and understandable tool for heart stroke prediction. In our research, we used the Stroke Prediction Dataset, a resource containing 11 distinct attributes. Applying these techniques, together with model interpretability measures such as permutation importance and explainability methods such as LIME, yielded strong results. Permutation importance provides global insights into feature importance, while LIME complements this with local, instance-specific explanations. Together, they contribute to a comprehensive understanding of the Artificial Neural Network (ANN) model: they help identify the features that drive overall model performance and also support interpreting and validating individual predictions. The ANN model achieved an accuracy of 95%.
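As a rough illustration of the workflow the abstract describes, the sketch below pairs a small feed-forward network (scikit-learn's MLPClassifier, standing in for the authors' ANN) with permutation importance for global feature attributions and LIME tabular for a local, instance-level explanation. The placeholder data, feature names, hyperparameters, and train/test split are assumptions for illustration only, not the paper's actual pipeline.

```python
# Minimal sketch (not the authors' code): global attributions via permutation
# importance and a local LIME tabular explanation for one ANN prediction.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from lime.lime_tabular import LimeTabularExplainer

# Placeholder data standing in for a preprocessed (resampled, leakage-free)
# stroke dataset with 11 attributes; feature names are hypothetical.
X = np.random.rand(500, 11)
y = np.random.randint(0, 2, 500)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42
)

# A small MLP as a stand-in for the ANN described in the paper.
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=42)
ann.fit(X_train, y_train)

# Global view: how much does shuffling each feature degrade test performance?
perm = permutation_importance(ann, X_test, y_test, n_repeats=10, random_state=42)
top = sorted(zip(feature_names, perm.importances_mean), key=lambda p: -p[1])[:5]
for name, score in top:
    print(f"{name}: {score:.4f}")

# Local view: explain a single test instance with LIME tabular.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["no stroke", "stroke"],
    mode="classification",
)
exp = explainer.explain_instance(X_test[0], ann.predict_proba, num_features=5)
print(exp.as_list())
```

The two outputs play complementary roles: the permutation scores rank features by their overall contribution to held-out performance, while the LIME weights show which feature values pushed one specific prediction toward or away from the "stroke" class.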
Pages: 23