Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data

Cited: 3
Authors
Nambiar, Athira [1 ]
Harikrishnaa, S. [1 ]
Sharanprasath, S. [1 ]
Affiliations
[1] SRM Inst Sci & Technol, Fac Engn & Technol, Dept Computat Intelligence, Kattankulathur, Tamil Nadu, India
Keywords
artificial intelligence; machine learning; COVID-19; explainable AI (XAI); data analysis; decision tree; XGBoost; neural network classifier
DOI
10.3389/frai.2023.1272506
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Introduction: The COVID-19 pandemic had a global impact and created an unprecedented emergency in healthcare and other frontline sectors. Various artificial-intelligence-based models were developed to manage medical resources effectively and identify patients at high risk. However, many of these AI models were of limited practical use in high-risk applications because of their "black-box" nature, i.e., the lack of interpretability of the model. To tackle this problem, Explainable Artificial Intelligence (XAI) was introduced, aiming to probe the "black-box" behavior of machine learning models and offer definitive, interpretable evidence. XAI provides interpretable analysis in a human-comprehensible way, thus boosting confidence in the successful deployment of AI systems in the wild.
Methods: In this regard, this study explores the use of model-agnostic XAI methods, namely SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), for COVID-19 symptom analysis in Indian patients toward a COVID-19 severity prediction task. Decision Tree, XGBoost, and Neural Network classifiers are trained as the underlying machine learning models.
Results and discussion: The proposed XAI tools are found to augment the high performance of the AI systems with human-interpretable evidence and reasoning, as shown through the interpretation of various explainability plots. Our comparative analysis illustrates the significance of XAI tools and their impact within a healthcare context. The study suggests that SHAP and LIME analyses are promising methods for incorporating explainability into model development and can lead to better and more trustworthy ML models in the future.
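The workflow the abstract describes — train a tabular severity classifier, then apply a model-agnostic explanation method — can be sketched as follows. This is an illustrative example only, not the authors' pipeline: the data are synthetic, the feature names (`fever`, `cough`, `age`) are hypothetical, and permutation importance (another model-agnostic technique, built into scikit-learn) stands in for SHAP/LIME, which require the separate `shap` and `lime` packages.

```python
# Illustrative, model-agnostic explanation of a toy severity classifier.
# Synthetic data; feature names and the severity rule are invented for the sketch.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
fever = rng.integers(0, 2, n)          # binary symptom flag
cough = rng.integers(0, 2, n)          # binary symptom flag (pure noise here)
age = rng.integers(20, 90, n)          # age in years
# Toy ground truth: severity driven mainly by age and fever.
severity = ((age > 60).astype(int) + fever + rng.integers(0, 2, n) >= 2).astype(int)

X = np.column_stack([fever, cough, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, severity, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# Model-agnostic explanation: shuffle each feature and measure the accuracy drop.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["fever", "cough", "age"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In the same spirit as the paper's SHAP summary plots, features whose shuffling hurts accuracy most (here, `age` and `fever`) are ranked as most influential, while an irrelevant feature (`cough`) scores near zero.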
Pages: 17
Related papers (50 total)
  • [31] Explainable artificial intelligence analysis of brachytherapy boost receipt in cervical cancer during the COVID-19 era
    Ladbury, Colton
    Eustace, Nicholas
    Kassardjian, Ari
    Amini, Arya
    Chen, Yi-Jen
    Wang, Edward
    Kohut, Adrian
    Tergas, Ana
    Han, Ernest
    Song, Mihae
    Glaser, Scott
    BRACHYTHERAPY, 2024, 23 (03) : 237 - 247
  • [32] Using Explainable Artificial Intelligence and Knowledge Graph to Explain Sentiment Analysis of COVID-19 Post on the Twitter
    Lai, Yi-Wei
    Chen, Mu-Yen
    ARTIFICIAL INTELLIGENCE FOR INTERNET OF THINGS (IOT) AND HEALTH SYSTEMS OPERABILITY, IOTHIC 2023, 2024, 8 : 39 - 49
  • [33] Prediction Models for COVID-19 Mortality Using Artificial Intelligence
    Kim, Dong-Kyu
    JOURNAL OF PERSONALIZED MEDICINE, 2022, 12 (09):
  • [34] Prediction of COVID-19 Hospitalization and Mortality Using Artificial Intelligence
    Halwani, Marwah Ahmed
    Halwani, Manal Ahmed
    HEALTHCARE, 2024, 12 (17)
  • [35] Enhancing COVID-19 Diagnosis Accuracy and Transparency with Explainable Artificial Intelligence (XAI) Techniques
    Malik, Sonika
    Rathee, Preeti
    SN COMPUTER SCIENCE, 5 (7)
  • [36] Analysis of COVID-19 Pandemic Using Artificial Intelligence
    Amjad, Maaz
    Rodriguez Chavez, Yuriria
    Nayab, Zaryyab
    Zhila, Alisa
    Sidorov, Grigori
    Gelbukh, Alexander
    ADVANCES IN COMPUTATIONAL INTELLIGENCE, MICAI 2020, PT II, 2020, 12469 : 65 - 73
  • [37] Unveiling the Power of Model-Agnostic Multiscale Analysis for Enhancing Artificial Intelligence Models in Breast Cancer Histopathology Images
    Tsiknakis, Nikos
    Manikis, Georgios
    Tzoras, Evangelos
    Salgkamis, Dimitrios
    Vidal, Joan Martinez
    Wang, Kang
    Zaridis, Dimitris
    Sifakis, Emmanouil
    Zerdes, Ioannis
    Bergh, Jonas
    Hartman, Johan
    Acs, Balazs
    Marias, Kostas
    Foukakis, Theodoros
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (09) : 5312 - 5322
  • [38] Artificial intelligence computing analysis of fractional order COVID-19 epidemic model
    Raza, Ali
    Baleanu, Dumitru
    Cheema, Tahir Nawaz
    Fadhal, Emad
    Ibrahim, Rashid I. H.
    Abdelli, Nouara
    AIP ADVANCES, 2023, 13 (08)
  • [39] Genetic Risk Prediction of COVID-19 Susceptibility and Severity in the Indian Population
    Prakrithi, P.
    Lakra, Priya
    Sundar, Durai
    Kapoor, Manav
    Mukerji, Mitali
    Gupta, Ishaan
    FRONTIERS IN GENETICS, 2021, 12
  • [40] An Explainable AI Model for ICU Admission Prediction of COVID-19 Patients
    Dazea, Eleni
    Stefaneas, Petros
    INTERNATIONAL JOURNAL ON ARTIFICIAL INTELLIGENCE TOOLS, 2023, 32 (07)