Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data

Cited: 3
Authors
Nambiar, Athira [1 ]
Harikrishnaa, S. [1 ]
Sharanprasath, S. [1 ]
Affiliations
[1] SRM Inst Sci & Technol, Fac Engn & Technol, Dept Computat Intelligence, Kattankulathur, Tamil Nadu, India
Source
Frontiers in Artificial Intelligence, 2023
Keywords
artificial intelligence; machine learning; COVID-19; explainable AI (XAI); data analysis; decision tree; XGBoost; neural network classifier
DOI
10.3389/frai.2023.1272506
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Introduction: The COVID-19 pandemic had a global impact and created an unprecedented emergency in healthcare and other frontline sectors. Various artificial-intelligence-based models were developed to manage medical resources effectively and to identify patients at high risk. However, many of these AI models were of limited practical use in high-risk settings because of their "black-box" nature, i.e., the lack of interpretability of the model. To tackle this problem, Explainable Artificial Intelligence (XAI) was introduced, aiming to explore the "black-box" behavior of machine learning models and to offer definitive, interpretable evidence. XAI provides interpretable analysis in a human-comprehensible way, thus boosting confidence in the successful deployment of AI systems in the wild.
Methods: In this regard, this study explores the use of model-agnostic XAI methods, namely SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), for COVID-19 symptom analysis in Indian patients on a COVID-19 severity prediction task. Three machine learning models are developed for this task: a Decision Tree classifier, an XGBoost classifier, and a neural network classifier.
Results and discussion: The proposed XAI tools are found to augment the high performance of the AI systems with human-interpretable evidence and reasoning, as shown through the interpretation of various explainability plots. The comparative analysis illustrates the significance of XAI tools and their impact within a healthcare context. The study suggests that SHAP and LIME analyses are promising methods for incorporating explainability into model development and can lead to better, more trustworthy ML models in the future.
Pages: 17
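
The workflow described in the abstract (train a classifier on tabular symptom data, then explain its predictions with SHAP and LIME) can be sketched as follows. This is a minimal illustration and not the authors' code: the synthetic dataset, the feature names, the class labels, and the model hyperparameters are hypothetical placeholders standing in for the Indian COVID-19 patient data used in the paper. It assumes the `xgboost`, `shap`, and `lime` Python packages.

```python
# Minimal sketch: XGBoost classifier + model-agnostic explanations via SHAP and LIME.
# All data below is synthetic; feature names and labels are hypothetical placeholders.
import numpy as np
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "fever", "cough", "breathlessness", "spo2"]  # hypothetical
X = pd.DataFrame(rng.random((500, len(feature_names))), columns=feature_names)
y = (X["age"] + X["breathlessness"] - X["spo2"] > 0.5).astype(int)  # toy severity label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X_train, y_train)

# SHAP: additive feature attributions for the tree ensemble,
# summarized globally across the test set.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, show=False)  # global feature-importance plot

# LIME: a local surrogate explanation for a single patient record.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=feature_names,
    class_names=["mild", "severe"],  # hypothetical class labels
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_test.values[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # (feature condition, local weight) pairs
```

In this pattern, SHAP gives both global rankings (the summary plot) and per-patient attributions, while LIME fits a simple local surrogate around one prediction; the paper's comparative analysis of the two follows the same division of roles.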