Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data

Cited by: 3
Authors
Nambiar, Athira [1 ]
Harikrishnaa, S. [1 ]
Sharanprasath, S. [1 ]
Institutions
[1] SRM Inst Sci & Technol, Fac Engn & Technol, Dept Computat Intelligence, Kattankulathur, Tamil Nadu, India
Source
FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023
Keywords
artificial intelligence; machine learning; COVID-19; explainable AI (XAI); data analysis; decision tree; XGBoost; neural network classifier
DOI
10.3389/frai.2023.1272506
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Introduction: The COVID-19 pandemic had a global impact and created an unprecedented emergency in healthcare and other related frontline sectors. Various artificial-intelligence-based models were developed to manage medical resources effectively and to identify patients at high risk. However, many of these AI models were limited in their practical applicability to high-risk settings because of their "black-box" nature, i.e., the lack of interpretability of the model. To tackle this problem, Explainable Artificial Intelligence (XAI) was introduced, aiming to explore the "black-box" behavior of machine learning models and to offer definitive and interpretable evidence. XAI provides interpretable analysis in a human-compliant way, thus boosting confidence in the successful deployment of AI systems in the wild.
Methods: In this regard, this study explores the use of model-agnostic XAI tools, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), for COVID-19 symptom analysis in Indian patients toward a COVID-19 severity prediction task. Machine learning models, viz., a Decision Tree classifier, an XGBoost classifier, and a neural network classifier, are leveraged for the prediction task.
Results and discussion: The proposed XAI tools are found to augment the high performance of the AI systems with human-interpretable evidence and reasoning, as shown through the interpretation of various explainability plots. Our comparative analysis illustrates the significance of XAI tools and their impact within a healthcare context. The study suggests that SHAP and LIME analyses are promising methods for incorporating explainability into model development and can lead to better and more trustworthy ML models in the future.
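As a rough illustration of the pipeline the abstract describes, the sketch below trains an XGBoost classifier on synthetic tabular data and then queries both SHAP and LIME for explanations. It is a minimal sketch, not the authors' code: the feature names, the random dataset, and the "mild"/"severe" labels are invented stand-ins for the Indian COVID-19 symptom records used in the study.

```python
# Minimal SHAP + LIME sketch on a synthetic stand-in dataset (not the study's data).
import numpy as np
import shap
import xgboost
from lime.lime_tabular import LimeTabularExplainer
from sklearn.model_selection import train_test_split

# Hypothetical symptom/vitals features, invented for illustration.
feature_names = ["fever", "dry_cough", "breathlessness", "age", "spo2"]
rng = np.random.default_rng(0)
X = rng.random((500, len(feature_names)))
y = (X[:, 4] < 0.4).astype(int)  # toy "severe" label driven by low SpO2

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgboost.XGBClassifier(n_estimators=100, eval_metric="logloss")
model.fit(X_train, y_train)

# SHAP: additive per-feature attributions, summarized over the test set.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=feature_names)

# LIME: a local surrogate explanation for a single "patient".
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["mild", "severe"],
    discretize_continuous=True,
)
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # e.g. [("spo2 <= 0.25", 0.41), ...] per-feature weights
```

The SHAP summary plot ranks features by their average contribution to predicted severity across all patients (a global view), while the LIME output weights the features that drove one individual prediction (a local view); reading these two kinds of plots side by side is the sort of comparative interpretation the abstract refers to.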
Pages: 17
Related Papers
50 items in total
  • [11] Severity prediction in COVID-19 patients using clinical markers and explainable artificial intelligence: A stacked ensemble machine learning approach
    Chadaga, Krishnaraj
    Prabhu, Srikanth
    Sampathila, Niranjana
    Chadaga, Rajagopala
    INTELLIGENT DECISION TECHNOLOGIES-NETHERLANDS, 2023, 17 (04) : 959 - 982
  • [13] Explainable Artificial Intelligence Approach for the Early Prediction of Ventilator Support and Mortality in COVID-19 Patients
    Aslam, Nida
    COMPUTATION, 2022, 10 (03)
  • [14] Severity-onset prediction of COVID-19 via artificial-intelligence analysis of multivariate factors
    Fu, Yu
    Zeng, Lijiao
    Huang, Pilai
    Liao, Mingfeng
    Li, Jialu
    Zhang, Mingxia
    Shi, Qinlang
    Xia, Zhaohua
    Ning, Xinzhong
    Mo, Jiu
    Zhou, Ziyuan
    Li, Zigang
    Yuan, Jing
    Wang, Lifei
    He, Qing
    Wu, Qikang
    Liu, Lei
    Liao, Yuhui
    Qiao, Kun
    HELIYON, 2023, 9 (08)
  • [15] Artificial Intelligence-Based Prediction of Covid-19 Severity on the Results of Protein Profiling
    Yasar, Seyma
    Colak, Cemil
    Yologlu, Saim
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2021, 202
  • [16] A hybrid data envelopment analysis-artificial neural network prediction model for COVID-19 severity in transplant recipients
    Revuelta, Ignacio
    Santos-Arteaga, Francisco J.
    Montagud-Marrahi, Enrique
    Ventura-Aguiar, Pedro
    Di Caprio, Debora
    Cofan, Frederic
    Cucchiari, David
    Torregrosa, Vicens
    Julio Pineiro, Gaston
    Esforzado, Nuria
    Bodro, Marta
    Ugalde-Altamirano, Jessica
    Moreno, Asuncion
    Campistol, Josep M.
    Alcaraz, Antonio
    Bayes, Beatriu
    Poch, Esteban
    Oppenheimer, Federico
    Diekmann, Fritz
    ARTIFICIAL INTELLIGENCE REVIEW, 2021, 54 (06) : 4653 - 4684
  • [17] CNS: Hybrid Explainable Artificial Intelligence-Based Sentiment Analysis on COVID-19 Lockdown Using Twitter Data
    Priya, C.
    Vincent, P. M. Durai Raj
    INTERNATIONAL JOURNAL OF COOPERATIVE INFORMATION SYSTEMS, 2022, 31 (3-4)
  • [18] COVID-19 and Artificial Intelligence: An Approach to Forecast the Severity of Diagnosis
    Udristoiu, Anca Loredana
    Ghenea, Alice Elena
    Udristoiu, Stefan
    Neaga, Manuela
    Zlatian, Ovidiu Mircea
    Vasile, Corina Maria
    Popescu, Mihaela
    Tieranu, Eugen Nicolae
    Salan, Alex-Ioan
    Turcu, Adina Andreea
    Nicolosu, Dragos
    Calina, Daniela
    Cioboata, Ramona
    LIFE-BASEL, 2021, 11 (11)
  • [19] Behavioral analysis of medical data COVID-19 through artificial intelligence
    Nunez, Antonio Alvarez
    Diaz, Maria del Carmen Santiago
    Vazquez, Ana Claudia Zenteno
    Marcial, Judith Perez
    Linares, Gustavo Trinidad Rubin
    INTERNATIONAL JOURNAL OF COMBINATORIAL OPTIMIZATION PROBLEMS AND INFORMATICS, 2024, 15 (05) : 212 - 217
  • [20] An Efficient CSPK-FCM Explainable Artificial Intelligence Model on COVID-19 Data to Predict the Emotion Using Topic Modeling
    Priya, C.
    Vincent, Durai Raj P. M.
    JOURNAL OF ADVANCES IN INFORMATION TECHNOLOGY, 2023, 14 (06) : 1390 - 1402