An explainable Artificial Intelligence software system for predicting diabetes

Citations: 4
Authors
Srinivasu, Parvathaneni Naga [1 ,2 ]
Ahmed, Shakeel [3 ]
Hassaballah, Mahmoud [4 ,5 ]
Almusallam, Naif [6 ]
Affiliations
[1] Univ Fed Ceara, Dept Teleinformat Engn, BR-60455970 Fortaleza, Brazil
[2] Amrita Vishwa Vidyapeetham, Amrita Sch Comp, Amaravati 522503, Andhra Pradesh, India
[3] King Faisal Univ, Coll Comp Sci & Informat Technol, Dept Comp Sci, Al Hasa 31982, Saudi Arabia
[4] Prince Sattam Bin Abdulaziz Univ, Coll Comp Engn & Sci, Dept Comp Sci, Al Kharj 16278, Saudi Arabia
[5] South Valley Univ, Fac Comp & Informat, Dept Comp Sci, Qena, Egypt
[6] King Faisal Univ, Coll Business Adm, Dept Management Informat Syst, Al Hasa 31982, Saudi Arabia
Keywords
Convolutional neural networks; Bi-LSTM; Blood glucose levels; Spectrogram images; Hyperparameters; ROC curves; Diagnosis; Classifier; AI
DOI
10.1016/j.heliyon.2024.e36112
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences]
Discipline codes
07; 0710; 09
Abstract
Implementing diabetes surveillance systems is paramount to mitigating the risk of substantial medical expenses. Blood glucose is currently measured by minimally invasive methods that extract a small blood sample and transmit it to a blood glucose meter, a procedure many individuals find uncomfortable. The present study introduces an Explainable Artificial Intelligence (XAI) system that aims to create an intelligible model capable of explaining its expected outcomes and decisions. To this end, abnormal glucose levels are analyzed using a Bi-directional Long Short-Term Memory (Bi-LSTM) network and a Convolutional Neural Network (CNN). Glucose levels are acquired through glucose oxidase (GOD) strips placed on the body. The signal data are then converted into spectrogram images, which are classified as low, average, or abnormal glucose levels. The labeled spectrogram images are used to train an individualized monitoring model. The proposed XAI model for tracking real-time glucose levels uses an XAI-driven architecture in its feature processing. The model's effectiveness is evaluated using several evaluation metrics derived from the confusion matrix. The results demonstrate that the proposed model effectively identifies individuals with elevated glucose levels.
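The signal-to-spectrogram step described in the abstract can be sketched as follows. This is a minimal illustration using SciPy's standard short-time-Fourier spectrogram, not the authors' implementation; the sampling rate, window length, and the synthetic input trace are placeholder assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def glucose_signal_to_spectrogram(signal, fs=10.0, nperseg=64):
    """Convert a 1-D sensor signal into a spectrogram array.

    The paper converts glucose sensor signals into spectrogram images
    before CNN/Bi-LSTM classification; fs and nperseg here are
    illustrative choices, not values from the paper.
    """
    freqs, times, sxx = spectrogram(signal, fs=fs, nperseg=nperseg)
    # Log-scale the power so low-energy components remain visible,
    # a common step when rendering spectrograms as images.
    return freqs, times, np.log1p(sxx)

# Synthetic stand-in for a continuous glucose trace (not real data):
# a slow oscillation plus measurement noise, 10 minutes at 10 Hz.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.1)
trace = 5.5 + 0.8 * np.sin(2 * np.pi * t / 30) + 0.1 * rng.standard_normal(t.size)

freqs, times, img = glucose_signal_to_spectrogram(trace)
print(img.shape)  # one row per frequency bin, one column per time window
```

The resulting 2-D array can then be rendered (or stacked) as an image and fed to an image classifier, which is the role the CNN plays in the proposed pipeline.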
Pages: 19
Related Papers
50 records in total
  • [31] Explainable artificial intelligence on life satisfaction, diabetes mellitus and its comorbid condition
    Kim, Ranyeong
    Kim, Chae-Won
    Park, Hyuntae
    Lee, Kwang-Sig
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [32] Software Defects Identification: Results Using Machine Learning and Explainable Artificial Intelligence Techniques
    Begum, Momotaz
    Shuvo, Mehedi Hasan
    Ashraf, Imran
    Al Mamun, Abdullah
    Uddin, Jia
    Samad, Md Abdus
    IEEE ACCESS, 2023, 11 : 132750 - 132765
  • [33] Memristive Explainable Artificial Intelligence Hardware
    Song, Hanchan
    Park, Woojoon
    Kim, Gwangmin
    Choi, Moon Gu
    In, Jae Hyun
    Rhee, Hakseung
    Kim, Kyung Min
    ADVANCED MATERIALS, 2024, 36 (25)
  • [34] Effects of Explainable Artificial Intelligence in Neurology
    Gombolay, G.
    Silva, A.
    Schrum, M.
    Dutt, M.
    Hallman-Cooper, J.
    Gombolay, M.
    ANNALS OF NEUROLOGY, 2023, 94 : S145 - S145
  • [35] Drug discovery with explainable artificial intelligence
    Jimenez-Luna, Jose
    Grisoni, Francesca
    Schneider, Gisbert
    NATURE MACHINE INTELLIGENCE, 2020, 2 (10) : 573 - 584
  • [36] Explainable Artificial Intelligence for Combating Cyberbullying
    Tesfagergish, Senait Gebremichael
    Damasevicius, Robertas
    SOFT COMPUTING AND ITS ENGINEERING APPLICATIONS, PT 1, ICSOFTCOMP 2023, 2024, 2030 : 54 - 67
  • [37] Drug discovery with explainable artificial intelligence
    José Jiménez-Luna
    Francesca Grisoni
    Gisbert Schneider
    Nature Machine Intelligence, 2020, 2 : 573 - 584
  • [38] Explainable and responsible artificial intelligence PREFACE
    Meske, Christian
    Abedin, Babak
    Klier, Mathias
    Rabhi, Fethi
    ELECTRONIC MARKETS, 2022, 32 (04) : 2103 - 2106
  • [39] A Survey on Explainable Artificial Intelligence for Cybersecurity
    Rjoub, Gaith
    Bentahar, Jamal
    Wahab, Omar Abdel
    Mizouni, Rabeb
    Song, Alyssa
    Cohen, Robin
    Otrok, Hadi
    Mourad, Azzam
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2023, 20 (04): : 5115 - 5140
  • [40] Scientific Exploration and Explainable Artificial Intelligence
    Carlos Zednik
    Hannes Boelsen
    Minds and Machines, 2022, 32 : 219 - 239