Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data

Cited by: 3
Authors
Nambiar, Athira [1 ]
Harikrishnaa, S. [1 ]
Sharanprasath, S. [1 ]
Affiliations
[1] SRM Inst Sci & Technol, Fac Engn & Technol, Dept Computat Intelligence, Kattankulathur, Tamil Nadu, India
Source
FRONTIERS IN ARTIFICIAL INTELLIGENCE
Keywords
artificial intelligence; machine learning; COVID-19; explainable AI (XAI); data analysis; decision tree; XGBoost; neural network classifier
DOI
10.3389/frai.2023.1272506
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Introduction: The COVID-19 pandemic had a global impact and created an unprecedented emergency in healthcare and other related frontline sectors. Various artificial-intelligence-based models were developed to manage medical resources effectively and to identify patients at high risk. However, many of these AI models were of limited practical use in high-risk settings because of their "black-box" nature, i.e., the lack of interpretability of the model. To tackle this problem, Explainable Artificial Intelligence (XAI) was introduced, aiming to explore the "black-box" behavior of machine learning models and to offer definitive, interpretable evidence. XAI provides interpretable analysis in a human-comprehensible way, thus boosting confidence in the successful deployment of AI systems in the wild.
Methods: This study explores the use of model-agnostic XAI methods, namely SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), for COVID-19 symptom analysis in Indian patients on a COVID-19 severity prediction task. Decision Tree, XGBoost, and Neural Network classifiers are trained as the underlying prediction models.
Results and discussion: The proposed XAI tools are found to augment the high performance of the AI systems with human-interpretable evidence and reasoning, as shown through the interpretation of various explainability plots. Our comparative analysis illustrates the significance of XAI tools and their impact in a healthcare context. The study suggests that SHAP and LIME analyses are promising methods for incorporating explainability into model development and can lead to better and more trustworthy ML models in the future.
Pages: 17
Related papers
50 records in total
  • [41] Exploring metabolic anomalies in COVID-19 and post-COVID-19: a machine learning approach with explainable artificial intelligence
    Oropeza-Valdez, Juan Jose
    Padron-Manrique, Cristian
    Vazquez-Jimenez, Aaron
    Soberon, Xavier
    Resendis-Antonio, Osbaldo
    FRONTIERS IN MOLECULAR BIOSCIENCES, 2024, 11
  • [42] Benchmarking of Machine Learning classifiers on plasma proteomic for COVID-19 severity prediction through interpretable artificial intelligence
    Dimitsaki, Stella
    Gavriilidis, George I.
    Dimitriadis, Vlasios K.
    Natsiavas, Pantelis
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2023, 137
  • [43] Artificial intelligence centric scientific research on COVID-19: an analysis based on scientometrics data
    Shukla, Amit K.
    Seth, Taniya
    Muhuri, Pranab K.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (21) : 32755 - 32787
  • [44] Forecasting COVID-19 Infection Rates with Artificial Intelligence Model
    Jingye, Jesse Yang
INTERNATIONAL REAL ESTATE REVIEW, 2022, 25 (04): 525 - 542
  • [45] Detection of Lung Diseases for Pneumonia, Tuberculosis, and COVID-19 with Artificial Intelligence Tools
    Yadav, S.
    Rizvi, S. A. M.
    Agarwal, P.
    SN COMPUTER SCIENCE, 5 (3)
  • [47] An innovative artificial intelligence-based method to compress complex models into explainable, model-agnostic and reduced decision support systems with application to healthcare (NEAR)
    Kassem, Karim
    Sperti, Michela
    Cavallo, Andrea
    Vergani, Andrea Mario
    Fassino, Davide
    Moz, Monica
    Liscio, Alessandro
    Banali, Riccardo
    Dahlweid, Michael
    Benetti, Luciano
    Bruno, Francesco
    Gallone, Guglielmo
    De Filippo, Ovidio
    Iannaccone, Mario
    D'Ascenzo, Fabrizio
    De Ferrari, Gaetano Maria
    Morbiducci, Umberto
    Della Valle, Emanuele
    Deriu, Marco Agostino
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2024, 151
  • [48] Explainable artificial intelligence-based edge fuzzy images for COVID-19 detection and identification
    Hu, Qinhua
    Gois, Francisco Nauber B.
    Costa, Rafael
    Zhang, Lijuan
    Yin, Ling
    Magaia, Naercio
    de Albuquerque, Victor Hugo C.
    APPLIED SOFT COMPUTING, 2022, 123
  • [49] Exploring COVID-19 Trends in Mexico during the Winter Season with Explainable Artificial Intelligence (XAI)
    Guzman-Ponce, Angelica
    Valdovinos-Rosas, Rosa Maria
    Gonzalez-Ruiz, Jacobo Leonardo
    Francisco-Valencia, Ivan
    Marcial-Romero, J. Raymundo
    IEEE LATIN AMERICA TRANSACTIONS, 2024, 22 (07) : 539 - 547
  • [50] Quantum Inspired Differential Evolution with Explainable Artificial Intelligence-Based COVID-19 Detection
    Basahel, A. M.
    Yamin, M.
    COMPUTER SYSTEMS SCIENCE AND ENGINEERING, 2023, 46 (01): 209 - 224