An Interpretable Approach with Explainable AI for Heart Stroke Prediction

Cited by: 10
Authors
Srinivasu, Parvathaneni Naga [1 ,2 ]
Sirisha, Uddagiri [2 ]
Sandeep, Kotte [3 ]
Praveen, S. Phani [2 ]
Maguluri, Lakshmana Phaneendra [4 ]
Bikku, Thulasi [5 ]
Affiliations
[1] Univ Fed Ceara, Dept Teleinformat Engn, BR-60455970 Fortaleza, Brazil
[2] Prasad V Potluri Siddhartha Inst Technol, Dept Comp Sci & Engn, Vijayawada 520007, India
[3] Dhanekula Inst Engn & Technol, Dept Informat Technol, Vijayawada 521139, India
[4] Koneru Lakshmaiah Educ Fdn, Dept Comp Sci & Engn, Guntur 522302, India
[5] Amrita Vishwa Vidyapeetham, Amrita Sch Comp Amaravati, Comp Sci & Engn, Amaravati 522503, India
Keywords
Artificial Neural Network; deep learning; data leakage; sampling; feature selection; explainable AI; LIME tabular;
DOI
10.3390/diagnostics14020128
CLC Number
R5 [Internal Medicine];
Subject Classification Code
1002; 100201;
Abstract
Heart strokes are a significant global health concern, profoundly affecting the wellbeing of the population. Many research efforts have developed predictive models for heart strokes using machine learning (ML) and deep learning (DL) techniques. Nevertheless, prior studies have often failed to bridge the gap between complex ML models and their interpretability in clinical contexts, leaving healthcare professionals hesitant to adopt them for critical decision-making. This research introduces an effective and easily interpretable approach for heart stroke prediction, empowered by explainable AI techniques. Our contributions include a carefully designed model incorporating pivotal techniques such as resampling, data leakage prevention, and feature selection, with an emphasis on the model's comprehensibility for healthcare practitioners. This multifaceted approach has the potential to benefit healthcare practice by offering a reliable and understandable tool for heart stroke prediction. In our research, we harnessed the Stroke Prediction Dataset, a valuable resource containing 11 distinct attributes. Applying these techniques, together with model interpretability measures such as permutation importance and explainability methods such as LIME, achieved impressive results. Permutation importance provides global insight into feature importance, while LIME complements it with local, instance-specific explanations; together they yield a comprehensive understanding of the Artificial Neural Network (ANN) model. The combination of these techniques not only aids in understanding the features that drive overall model performance but also helps interpret and validate individual predictions. The ANN model achieved an outstanding accuracy of 95%.
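The global-explainability step the abstract describes can be sketched as follows. This is a minimal illustration using scikit-learn: the synthetic 11-feature dataset and the small `MLPClassifier` are stand-ins for the Stroke Prediction Dataset and the paper's ANN, whose exact architecture is not given here. The split-before-resampling ordering reflects the data leakage prevention the abstract emphasizes; the LIME step is omitted to keep the sketch self-contained.

```python
# Sketch: global feature importance via permutation importance for a small
# neural-network classifier. Synthetic stand-in data; the real Stroke
# Prediction Dataset attributes are not reproduced here.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Imbalanced binary problem with 11 features, mimicking the dataset's shape.
X, y = make_classification(n_samples=500, n_features=11,
                           n_informative=5, weights=[0.9, 0.1],
                           random_state=0)

# Split BEFORE any resampling or scaling so no test information leaks
# into training -- the data-leakage-prevention step.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Hypothetical small ANN; the paper's layer sizes are not specified here.
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature on held-out data and
# measure the score drop -- a global, model-agnostic explanation.
result = permutation_importance(ann, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:+.3f}")
```

For the local, instance-specific view the abstract pairs with this, `lime.lime_tabular.LimeTabularExplainer` would be applied per prediction on the same fitted model.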
Pages: 23
Related Papers
50 records
  • [31] An Explainable AI Approach to Agrotechnical Monitoring and Crop Diseases Prediction in Dnipro Region of Ukraine
    Laktionov, Ivan
    Diachenko, Grygorii
    Rutkowska, Danuta
    Kisiel-Dorohinicki, Marek
    JOURNAL OF ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING RESEARCH, 2023, 13 (04) : 247 - 272
  • [32] An Ensemble Approach for the Prediction of Diabetes Mellitus Using a Soft Voting Classifier with an Explainable AI
    Kibria, Hafsa Binte
    Nahiduzzaman, Md
    Goni, Md Omaer Faruq
    Ahsan, Mominul
    Haider, Julfikar
    SENSORS, 2022, 22 (19)
  • [33] Explainable AI-Driven Chatbot System for Heart Disease Prediction Using Machine Learning
    Muneer, Salman
    Ghazal, Taher M.
    Alyas, Tahir
    Raza, Muhammad Ahsan
    Abbas, Sagheer
    Alzoubi, Omar
    Ali, Oualid
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2024, 15 (12) : 249 - 261
  • [34] Explainable AI in drug discovery: self-interpretable graph neural network for molecular property prediction using concept whitening
    Proietti, Michela
    Ragno, Alessio
    La Rosa, Biagio
    Ragno, Rino
    Capobianco, Roberto
    MACHINE LEARNING, 2024, 113 (04) : 2013 - 2044
  • [36] A New SVDD Approach to Reliable and Explainable AI
    Carlevaro, Alberto
    Mongelli, Maurizio
    IEEE INTELLIGENT SYSTEMS, 2022, 37 (02) : 55 - 68
  • [37] EXplainable AI (XAI) approach to image captioning
    Han, Seung-Ho
    Kwon, Min-Su
    Choi, Ho-Jin
    JOURNAL OF ENGINEERING-JOE, 2020, 2020 (13): : 589 - 594
  • [38] Performance of Explainable AI Methods in Asset Failure Prediction
    Jakubowski, Jakub
    Stanisz, Przemyslaw
    Bobek, Szymon
    Nalepa, Grzegorz J.
    COMPUTATIONAL SCIENCE, ICCS 2022, PT IV, 2022, : 472 - 485
  • [39] Contextual Background Estimation for Explainable AI in Temperature Prediction
    Szostak, Bartosz
    Doroz, Rafal
    Marker, Magdalena
    APPLIED SCIENCES-BASEL, 2025, 15 (03):
  • [40] An optimized and interpretable carbon price prediction: Explainable deep learning model
    Sayed, Gehad Ismail
    El-Latif, Eman I. Abd
    Darwish, Ashraf
    Snasel, Vaclav
    Hassanien, Aboul Ella
    CHAOS SOLITONS & FRACTALS, 2024, 188