An Interpretable Approach with Explainable AI for Heart Stroke Prediction

Cited by: 10
Authors
Srinivasu, Parvathaneni Naga [1 ,2 ]
Sirisha, Uddagiri [2 ]
Sandeep, Kotte [3 ]
Praveen, S. Phani [2 ]
Maguluri, Lakshmana Phaneendra [4 ]
Bikku, Thulasi [5 ]
Affiliations
[1] Univ Fed Ceara, Dept Teleinformat Engn, BR-60455970 Fortaleza, Brazil
[2] Prasad V Potluri Siddhartha Inst Technol, Dept Comp Sci & Engn, Vijayawada 520007, India
[3] Dhanekula Inst Engn & Technol, Dept Informat Technol, Vijayawada 521139, India
[4] Koneru Lakshmaiah Educ Fdn, Dept Comp Sci & Engn, Guntur 522302, India
[5] Amrita Vishwa Vidyapeetham, Amrita Sch Comp Amaravati, Comp Sci & Engn, Amaravati 522503, India
Keywords
Artificial Neural Network; deep learning; data leakage; sampling; feature selection; explainable AI; LIME tabular;
DOI
10.3390/diagnostics14020128
CLC Classification
R5 [Internal Medicine];
Subject Classification
1002 ; 100201 ;
Abstract
Heart strokes are a significant global health concern, profoundly affecting the well-being of the population. Many research efforts have focused on developing predictive models for heart strokes using machine learning (ML) and deep learning (DL) techniques. Nevertheless, prior studies have often failed to bridge the gap between complex ML models and their interpretability in clinical contexts, leaving healthcare professionals hesitant to adopt them for critical decision-making. This research introduces an effective and easily interpretable approach for heart stroke prediction, empowered by explainable AI techniques. Our contributions include a carefully designed model incorporating pivotal techniques such as resampling, data leakage prevention, and feature selection, with an emphasis on the model's comprehensibility for healthcare practitioners. This multifaceted approach holds the potential to significantly impact the field of healthcare by offering a reliable and understandable tool for heart stroke prediction. In our research, we harnessed the Stroke Prediction Dataset, a valuable resource containing 11 distinct attributes. Applying these techniques, together with model interpretability measures such as permutation importance and explainability methods like LIME, has achieved impressive results. While permutation importance provides global insights into feature importance, LIME complements this with local, instance-specific explanations. Together, they contribute to a comprehensive understanding of the Artificial Neural Network (ANN) model. The combination of these techniques not only aids in understanding the features that drive overall model performance but also helps in interpreting and validating individual predictions. The ANN model achieved an outstanding accuracy rate of 95%.
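The global explainability step named in the abstract, permutation importance, can be illustrated with a minimal, dependency-free sketch: shuffle one feature column at a time and measure the drop in accuracy. The dataset and the rule-based "model" below are entirely synthetic stand-ins for the paper's Stroke Prediction Dataset and trained ANN, chosen only so the mechanics of the technique are visible.

```python
import random

# Synthetic tabular rows [age, avg_glucose_level, bmi] with a binary stroke label.
# The label depends only on age and glucose, so bmi is a deliberately useless feature.
random.seed(42)

def make_row():
    age = random.uniform(20, 90)
    glucose = random.uniform(60, 250)
    bmi = random.uniform(15, 45)
    label = 1 if (age > 60 and glucose > 150) else 0
    return [age, glucose, bmi], label

rows = [make_row() for _ in range(500)]
X = [features for features, _ in rows]
y = [label for _, label in rows]

# Stand-in "model": a fixed rule playing the role of the trained classifier.
def predict(row):
    return 1 if (row[0] > 60 and row[1] > 150) else 0

def accuracy(X, y):
    return sum(predict(r) == t for r, t in zip(X, y)) / len(y)

baseline = accuracy(X, y)

def permutation_importance(X, y, feature, n_repeats=10):
    """Mean accuracy drop after shuffling one feature column."""
    column = [row[feature] for row in X]
    drops = []
    for _ in range(n_repeats):
        shuffled = column[:]
        random.shuffle(shuffled)
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, shuffled)]
        drops.append(baseline - accuracy(X_perm, y))
    return sum(drops) / len(drops)

names = ["age", "avg_glucose_level", "bmi"]
importances = {n: permutation_importance(X, y, i) for i, n in enumerate(names)}
```

Because the stand-in model ignores `bmi`, shuffling that column leaves every prediction unchanged and its importance is exactly zero, while `age` and `avg_glucose_level` show positive accuracy drops. In the paper's pipeline the same shuffle-and-score loop would wrap the trained ANN's predictions instead, with LIME then supplying the complementary per-instance explanations.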
Pages: 23