An Interpretable Approach with Explainable AI for Heart Stroke Prediction

Cited by: 10
Authors
Srinivasu, Parvathaneni Naga [1 ,2 ]
Sirisha, Uddagiri [2 ]
Sandeep, Kotte [3 ]
Praveen, S. Phani [2 ]
Maguluri, Lakshmana Phaneendra [4 ]
Bikku, Thulasi [5 ]
Affiliations
[1] Univ Fed Ceara, Dept Teleinformat Engn, BR-60455970 Fortaleza, Brazil
[2] Prasad V Potluri Siddhartha Inst Technol, Dept Comp Sci & Engn, Vijayawada 520007, India
[3] Dhanekula Inst Engn & Technol, Dept Informat Technol, Vijayawada 521139, India
[4] Koneru Lakshmaiah Educ Fdn, Dept Comp Sci & Engn, Guntur 522302, India
[5] Amrita Vishwa Vidyapeetham, Amrita Sch Comp Amaravati, Comp Sci & Engn, Amaravati 522503, India
Keywords
Artificial Neural Network; deep learning; data leakage; sampling; feature selection; explainable AI; LIME tabular;
DOI
10.3390/diagnostics14020128
Chinese Library Classification
R5 [Internal Medicine];
Discipline Classification Code
1002; 100201;
Abstract
Heart strokes are a significant global health concern, profoundly affecting the wellbeing of the population. Many research efforts have focused on developing predictive models for heart strokes using machine learning (ML) and deep learning (DL) techniques. Nevertheless, prior studies have often failed to bridge the gap between complex ML models and their interpretability in clinical contexts, leaving healthcare professionals hesitant to embrace them for critical decision-making. This research introduces a carefully designed, effective, and easily interpretable approach for heart stroke prediction, empowered by explainable AI techniques. Our contributions include a model incorporating pivotal techniques such as resampling, data leakage prevention, and feature selection, with an emphasis on the model's comprehensibility for healthcare practitioners. This multifaceted approach holds the potential to significantly impact the field of healthcare by offering a reliable and understandable tool for heart stroke prediction. In our research, we harnessed the Stroke Prediction Dataset, a valuable resource containing 11 distinct attributes. Applying these techniques, together with interpretability measures such as permutation importance and explainability methods such as LIME, achieved impressive results. While permutation importance provides insight into feature importance globally, LIME complements this by offering local, instance-specific explanations. Together, they contribute to a comprehensive understanding of the Artificial Neural Network (ANN) model. The combination of these techniques not only aids in understanding the features that drive overall model performance but also helps in interpreting and validating individual predictions. The ANN model achieved an outstanding accuracy rate of 95%.
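The abstract's pairing of a global measure (permutation importance) with a local one (LIME) can be illustrated with a small self-contained sketch. This is not the paper's code: the toy `model` below stands in for the authors' ANN, the feature indices are hypothetical, and the "LIME-style" explanation is a simplified weighted local linear surrogate rather than the `lime` library itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's ANN: risk depends strongly on
# feature 0, weakly on feature 1, and ignores feature 2 entirely.
def model(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] + 0.5 * X[:, 1])))

X = rng.normal(size=(500, 3))
y = (model(X) > 0.5).astype(int)

def accuracy(X):
    return np.mean((model(X) > 0.5) == y)

# Global view: permutation importance = mean drop in accuracy when one
# feature column is shuffled, severing its link to the prediction.
def permutation_importance(X, n_repeats=20):
    base = accuracy(X)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # break feature j only
            drops.append(base - accuracy(Xp))
        imp[j] = np.mean(drops)
    return imp

imp = permutation_importance(X)

# Local view (LIME-style): fit a distance-weighted linear surrogate to
# the model's outputs on perturbations around ONE instance; its
# coefficients explain that single prediction.
def local_explanation(x, n_samples=1000, width=0.5):
    Z = x + rng.normal(scale=width, size=(n_samples, x.size))
    sw = np.sqrt(np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width**2)))
    A = np.hstack([Z, np.ones((n_samples, 1))])  # linear terms + bias
    coef, *_ = np.linalg.lstsq(A * sw[:, None], model(Z) * sw, rcond=None)
    return coef[:-1]                             # per-feature local weights

coef = local_explanation(X[0])
```

Under this toy setup, the global importances rank feature 0 first and assign feature 2 zero importance, while the local coefficients for a single instance show the same ordering; this mirrors how the abstract's two techniques corroborate each other at the dataset and instance levels.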
Pages: 23
Related Papers
50 entries in total
  • [21] An interpretable approach using hybrid graph networks and explainable AI for intelligent diagnosis recommendations in chronic disease care
    Huang, Mengxing
    Zhang, Xiu Shi
    Bhatti, Uzair Aslam
    Wu, YuanYuan
    Zhang, Yu
    Yasin Ghadi, Yazeed
    Biomedical Signal Processing and Control, 2024, 91
  • [22] A Hybrid Approach for an Interpretable and Explainable Intrusion Detection System
    Dias, Tiago
    Oliveira, Nuno
    Sousa, Norberto
    Praca, Isabel
    Sousa, Orlando
    INTELLIGENT SYSTEMS DESIGN AND APPLICATIONS, ISDA 2021, 2022, 418 : 1035 - 1045
  • [23] Routability Prediction and Optimization Using Explainable AI
    Park, Seonghyeon
    Kim, Daeyeon
    Kwon, Seongbin
    Kang, Seokhyeong
    2023 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN, ICCAD, 2023,
  • [24] ExplainableFold: Understanding AlphaFold Prediction with Explainable AI
    Tan, Juntao
    Zhang, Yongfeng
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 2166 - 2176
  • [25] Explainable AI analysis for smog rating prediction
    Ghadi, Yazeed Yasin
    Saqib, Sheikh Muhammad
    Mazhar, Tehseen
    Almogren, Ahmad
    Waheed, Wajahat
    Altameem, Ayman
    Hamam, Habib
    SCIENTIFIC REPORTS, 2025, 15 (01):
  • [26] Experimental Insights Towards Explainable and Interpretable Pedestrian Crossing Prediction
    Melo, Angie Nataly
    Salinas, Carlota
    Sotelo, Miguel Angel
    arXiv, 2023,
  • [27] Interpretable Motor Sound Classification for Enhanced Fault Detection leveraging Explainable AI
    Khan, Shaiq Ahmad
    Khan, Faiq Ahmad
    Jamil, Akhtar
    Hameed, Alaa Ali
    2024 IEEE 3RD INTERNATIONAL CONFERENCE ON COMPUTING AND MACHINE INTELLIGENCE, ICMI 2024, 2024,
  • [28] Interpretable Classification of Pneumonia Infection Using eXplainable AI (XAI-ICP)
    Sheu, Ruey-Kai
    Pardeshi, Mayuresh Sunil
    Pai, Kai-Chih
    Chen, Lun-Chi
    Wu, Chieh-Liang
    Chen, Wei-Cheng
    IEEE ACCESS, 2023, 11 : 28896 - 28919
  • [29] EXPLAINABLE AND INTERPRETABLE AI-ASSISTED REMAINING USEFUL LIFE ESTIMATION FOR AEROENGINES
    Protopapadakis, Georgios
    Apostolidis, Asteris
    Kalfas, Anestis I.
    PROCEEDINGS OF ASME TURBO EXPO 2022: TURBOMACHINERY TECHNICAL CONFERENCE AND EXPOSITION, GT2022, VOL 2, 2022,
  • [30] Hybrid AI based stroke characterization with explainable model
    Patil, R.
    Shreya, A.
    Maulik, P.
    Chaudhury, S.
    JOURNAL OF THE NEUROLOGICAL SCIENCES, 2019, 405