A fully interpretable machine learning model for increasing the effectiveness of urine screening

Cited by: 2
Authors
Del Ben, Fabio [1 ]
Da Col, Giacomo [2 ]
Cobarzan, Doriana [2 ]
Turetta, Matteo [1 ]
Rubin, Daniela [3 ]
Buttazzi, Patrizio [3 ]
Antico, Antonio [3 ]
Affiliations
[1] IRCCS, NCI, CRO Aviano, Aviano, Italy
[2] Fraunhofer Austria Res, KI4LIFE, Klagenfurt, Austria
[3] AULSS2 Marca Trevigiana, Treviso, Italy
Keywords
urinalysis; machine learning; data science; decision tree; flow cytometry; Sysmex UF-1000i; decision trees; diagnosis; culture
DOI
10.1093/ajcp/aqad099
CLC classification number
R36 [Pathology]
Subject classification code
100104
Abstract
Objectives: This article addresses the need for effective screening methods to identify negative urine samples before urine culture, reducing the workload, cost, and turnaround time of results in the microbiology laboratory. We aim to overcome the limitations of current solutions, which are either too simple, limiting effectiveness (1 or 2 parameters), or too complex, limiting interpretation, trust, and real-world implementation ("black box" machine learning models).
Methods: The study analyzed 15,312 samples from 10,534 patients, using clinical features and data from the Sysmex UF-1000i automated analyzer. Decision tree (DT) models with or without a lookahead strategy were used, as they offer a transparent set of logical rules that can be easily understood by medical professionals and implemented into automated analyzers.
Results: The best model achieved a sensitivity of 94.5% and classified negative samples based on age, bacteria, mucus, and 2 scattering parameters. The model reduced the workload by an additional 16% compared with the current laboratory procedure, with an estimated financial impact of €40,000/y assuming 15,000 samples/y. The identified logical rules have a scientific rationale that matches existing knowledge in the literature.
Conclusions: Overall, this study provides an effective and interpretable screening method for urine culture in microbiology laboratories, using data from the Sysmex UF-1000i automated analyzer. Unlike other machine learning models, our model is interpretable, generating trust and enabling real-world implementation.
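As an illustration of the screening logic described in the abstract, the following minimal Python sketch (not the authors' code) trains a shallow, human-readable decision tree on synthetic stand-ins for the named features (age, bacteria, mucus, two scatter channels) and lowers the decision threshold until a 94.5% sensitivity target is reached. The paper's lookahead strategy is not reproduced here (scikit-learn's tree is greedy CART), and all feature names, distributions, and labels are invented for illustration only.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 15_000  # roughly the annual sample volume cited in the abstract

# Hypothetical features; distributions are invented for illustration only.
X = np.column_stack([
    rng.integers(18, 95, n),      # age (years)
    rng.lognormal(3.0, 1.5, n),   # bacteria count (per uL)
    rng.lognormal(1.0, 1.0, n),   # mucus signal
    rng.normal(50, 15, n),        # scatter parameter 1 (a.u.)
    rng.normal(30, 10, n),        # scatter parameter 2 (a.u.)
])
# Synthetic label: "positive culture" loosely driven by bacteria count.
y = (X[:, 1] > np.percentile(X[:, 1], 70)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Keep the tree shallow so the resulting rules stay human-readable.
tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced", random_state=0)
tree.fit(X_tr, y_tr)

# Screening use: lower the threshold on P(positive) until the sensitivity
# target (94.5% in the paper) is met, then report how many samples could
# be ruled out before culture.
proba = tree.predict_proba(X_te)[:, 1]
for thr in np.linspace(0.5, 0.0, 51):
    pred = (proba >= thr).astype(int)
    tp = np.sum((pred == 1) & (y_te == 1))
    fn = np.sum((pred == 0) & (y_te == 1))
    sensitivity = tp / (tp + fn)
    if sensitivity >= 0.945:
        ruled_out = np.mean(pred == 0)
        print(f"threshold={thr:.2f}  sensitivity={sensitivity:.3f}  "
              f"samples screened out={ruled_out:.1%}")
        break

# Print the tree as a transparent set of if/else rules.
feature_names = ["age", "bacteria", "mucus", "scatter_1", "scatter_2"]
print(export_text(tree, feature_names=feature_names))

Printing the fitted tree with export_text is the step that corresponds to the interpretability claim: the whole model is a handful of threshold rules that a laboratory professional can audit and, in principle, encode in an analyzer's rule engine.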
Pages: 620-632
Page count: 13