Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data

Cited by: 3
Authors
Nambiar, Athira [1 ]
Harikrishnaa, S. [1 ]
Sharanprasath, S. [1 ]
Affiliations
[1] SRM Inst Sci & Technol, Fac Engn & Technol, Dept Computat Intelligence, Kattankulathur, Tamil Nadu, India
Keywords
artificial intelligence; machine learning; COVID-19; explainable AI (XAI); data analysis; decision tree; XGBoost; neural network classifier
DOI
10.3389/frai.2023.1272506
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Introduction: The COVID-19 pandemic had a global impact and created an unprecedented emergency in healthcare and other frontline sectors. Various artificial-intelligence-based models were developed to manage medical resources effectively and identify patients at high risk. However, many of these AI models were limited in their practical applicability to high-risk settings because of their "black-box" nature, i.e., the lack of interpretability of the model. To tackle this problem, Explainable Artificial Intelligence (XAI) was introduced, aiming to explore the "black-box" behavior of machine learning models and offer definitive, interpretable evidence. XAI provides interpretable analysis in a human-comprehensible way, thus boosting confidence in the successful deployment of AI systems in the wild.
Methods: This study explores the use of model-agnostic XAI methods, namely SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), for COVID-19 symptom analysis in Indian patients toward a COVID-19 severity prediction task. Several machine learning models, i.e., a Decision Tree classifier, an XGBoost classifier, and a Neural Network classifier, are leveraged to develop the prediction models.
Results and discussion: The proposed XAI tools are found to augment the high performance of the AI systems with human-interpretable evidence and reasoning, as shown through the interpretation of various explainability plots. Our comparative analysis illustrates the significance of XAI tools and their impact in a healthcare context. The study suggests that SHAP and LIME analyses are promising methods for incorporating explainability into model development and can lead to better and more trustworthy ML models in the future.
Pages: 17
Related papers (50 records)
  • [1] Model-agnostic explainable artificial intelligence for object detection in image data
    Moradi, Milad
    Yan, Ke
    Colwell, David
    Samwald, Matthias
    Asgari, Rhona
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 137
  • [2] Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review
    Ladbury, Colton
    Zarinshenas, Reza
    Semwal, Hemal
    Tam, Andrew
    Vaidehi, Nagarajan
    Rodin, Andrei S.
    Liu, An
    Glaser, Scott
    Salgia, Ravi
    Amini, Arya
    TRANSLATIONAL CANCER RESEARCH, 2022, : 3853 - 3868
  • [3] BeCaked: An Explainable Artificial Intelligence Model for COVID-19 Forecasting
    Nguyen, Duc Q.
    Vo, Nghia Q.
    Nguyen, Thinh T.
    Nguyen-An, Khuong
    Nguyen, Quang H.
    Tran, Dang N.
    Quan, Tho T.
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [4] Explainable artificial intelligence model for identifying COVID-19 gene biomarkers
    Yagin, Fatma Hilal
    Cicek, Ipek Balikci
    Alkhateeb, Abedalrhman
    Yagin, Burak
    Colak, Cemil
    Azzeh, Mohammad
    Akbulut, Sami
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 154
  • [5] Multi-scale Local Explanation Approach for Image Analysis Using Model-Agnostic Explainable Artificial Intelligence (XAI)
    Hajiyan, Hooria
    Ebrahimi, Mehran
    MEDICAL IMAGING 2023, 2023, 12471
  • [6] Explainable artificial intelligence approaches for COVID-19 prognosis prediction using clinical markers
    Chadaga, Krishnaraj
    Prabhu, Srikanth
    Sampathila, Niranjana
    Chadaga, Rajagopala
    Umakanth, Shashikiran
    Bhat, Devadas
    Kumar, G. S. Shashi
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [7] Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19
    Chung, Heewon
    Park, Chul
    Kang, Wu Seong
    Lee, Jinseok
    FRONTIERS IN PHYSIOLOGY, 2021, 12
  • [8] Prediction and Feature Importance Analysis for Severity of COVID-19 in South Korea Using Artificial Intelligence: Model Development and Validation
    Chung, Heewon
    Ko, Hoon
    Kang, Wu Seong
    Kim, Kyung Won
    Lee, Hooseok
    Park, Chul
    Song, Hyun-Ok
    Choi, Tae-Young
    Seo, Jae Ho
    Lee, Jinseok
    JOURNAL OF MEDICAL INTERNET RESEARCH, 2021, 23 (04)