A human-interpretable machine learning pipeline based on ultrasound to support leiomyosarcoma diagnosis

Cited by: 7
Authors
Lombardi, Angela [1 ]
Arezzo, Francesca [2 ]
Di Sciascio, Eugenio [1 ]
Ardito, Carmelo [3 ]
Mongelli, Michele [4 ]
Di Lillo, Nicola [4 ]
Fascilla, Fabiana Divina [5 ]
Silvestris, Erica [2 ]
Kardhashi, Anila [2 ]
Putino, Carmela [4 ]
Cazzolla, Ambrogio [2 ]
Loizzi, Vera [2 ,6 ]
Cazzato, Gerardo [7 ]
Cormio, Gennaro [2 ,6 ]
Di Noia, Tommaso [1 ]
Affiliations
[1] Politecn Bari, Dept Elect & Informat Engn DEI, Bari, Italy
[2] IRCCS Ist Tumori Giovanni Paolo II, Gynecol Oncol Unit, Interdisciplinar Dept Med, Bari, Italy
[3] LUM Giuseppe Degennaro Univ, Dept Engn, Casamassima, Bari, Italy
[4] Univ Bari Aldo Moro, Dept Biomed Sci & Human Oncol, Obstet & Gynecol Unit, Bari, Italy
[5] Di Venere Hosp, Obstet & Gynecol Unit, Bari, Italy
[6] Univ Bari Aldo Moro, Interdisciplinar Dept Med, Bari, Italy
[7] Univ Bari Aldo Moro, Dept Emergency & Organ Transplantat DETO, Sect Pathol, Bari, Italy
Keywords
Human-centered AI; Machine learning; eXplainable artificial intelligence; Interpretability; Ultrasound; Leiomyosarcoma; CAD; DIFFERENTIAL-DIAGNOSIS; UTERINE SARCOMA; MORCELLATION; LEIOMYOMA; EXPLANATIONS; REGRESSION; SELECTION; OUTCOMES; IMPACT; CANCER
DOI
10.1016/j.artmed.2023.102697
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The preoperative evaluation of myometrial tumors is essential to avoid delayed treatment and to establish the appropriate surgical approach. In particular, the differential diagnosis of leiomyosarcoma (LMS) is challenging because clinical, laboratory, and ultrasound features overlap between fibroids and LMS. In this work, we present a human-interpretable machine learning (ML) pipeline to support the preoperative differential diagnosis of LMS from leiomyomas, based on both clinical data and gynecological ultrasound assessment of 68 patients (8 with an LMS diagnosis). The pipeline provides the following novel contributions: (i) end-users were involved both in the definition of the ML tasks and in the evaluation of the overall approach; (ii) clinical specialists gain a full understanding of both the decision-making mechanisms of the ML algorithms and the impact of the features on each automatic decision. Moreover, the proposed pipeline addresses two key problems: the imbalance between the two classes, by analyzing and selecting the best combination of a synthetic oversampling strategy for the minority class and a classification algorithm from several candidates; and the explainability of the features at both the global and local levels. The results show very high performance for the best strategy (AUC = 0.99, F1 = 0.87) and a strong, stable impact of two ultrasound-based features (i.e., tumor borders and consistency of the lesions). Furthermore, the SHAP algorithm was exploited to quantify the impact of the features at the local level, and a specific module was developed to provide a template-based natural-language (NL) translation of the explanations, enhancing their interpretability and fostering the use of ML in the clinical setting.
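The template-based NL translation step described in the abstract could be sketched roughly as follows. This is an illustrative sketch, not the authors' actual module: the feature names, attribution values, and sentence templates are hypothetical, and the attribution values stand in for per-feature SHAP values computed upstream.

```python
# Minimal sketch: turn per-feature attributions (e.g., SHAP values) for one
# case into template-based natural-language sentences, ranked by magnitude.

def to_natural_language(attributions, prediction, top_k=3):
    """Render the top-k feature attributions for one case as sentences."""
    # Rank features by absolute contribution, strongest first.
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    label = "leiomyosarcoma (LMS)" if prediction == 1 else "leiomyoma"
    sentences = [f"The model classified this lesion as {label}."]
    for feature, value in ranked[:top_k]:
        direction = "increased" if value > 0 else "decreased"
        sentences.append(
            f"The feature '{feature}' {direction} the estimated "
            f"probability of LMS by {abs(value):.2f}."
        )
    return sentences

# Hypothetical attribution values for a single patient.
example = {
    "tumor borders (irregular)": 0.31,
    "consistency of the lesion (solid)": 0.22,
    "patient age": -0.05,
}
for sentence in to_natural_language(example, prediction=1):
    print(sentence)
```

In this sketch the ranking by absolute attribution mirrors how SHAP summary views order features, so the generated sentences lead with the factors that mattered most for the individual decision.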
Pages: 16
Related Papers
50 records
  • [1] Editorial: Human-Interpretable Machine Learning
    Tolomei, Gabriele
    Pinelli, Fabio
    Silvestri, Fabrizio
    FRONTIERS IN BIG DATA, 2022, 5
  • [2] Iris Recognition Based on Human-Interpretable Features
    Chen, Jianxu
    Shen, Feng
    Chen, Danny Z.
    Flynn, Patrick J.
    2015 IEEE INTERNATIONAL CONFERENCE ON IDENTITY, SECURITY AND BEHAVIOR ANALYSIS (ISBA), 2015
  • [3] Iris Recognition Based on Human-Interpretable Features
    Chen, Jianxu
    Shen, Feng
    Chen, Danny Ziyi
    Flynn, Patrick J.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2016, 11 (07) : 1476 - 1485
  • [4] A class-contrastive human-interpretable machine learning approach to predict mortality in severe mental illness
    Banerjee, Soumya
    Lio, Pietro
    Jones, Peter B.
    Cardinal, Rudolf N.
    NPJ SCHIZOPHRENIA, 2021, 7 (01)
  • [5] Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning
    Marconato, Emanuele
    Passerini, Andrea
    Teso, Stefano
    ENTROPY, 2023, 25 (12)
  • [6] Toward human-interpretable, automated learning of feedback control for the mixing layer
    Li, Hao
    Maceda, Guy Y. Cornejo
    Li, Yiqing
    Tan, Jianguo
    Noack, Bernd R.
    PHYSICS OF FLUIDS, 2025, 37 (03)
  • [7] An interpretable machine learning pipeline based on transcriptomics predicts phenotypes of lupus patients
    Leventhal, Emily L.
    Daamen, Andrea R.
    Grammer, Amrie C.
    Lipsky, Peter E.
    ISCIENCE, 2023, 26 (10)
  • [8] Human-Interpretable Feature Pattern Classification System Using Learning Classifier Systems
    Ebadi, Toktam
    Kukenys, Ignas
    Browne, Will N.
    Zhang, Mengjie
    EVOLUTIONARY COMPUTATION, 2014, 22 (04) : 629 - 650
  • [9] Explainable AI: A Hybrid Approach to Generate Human-Interpretable Explanation for Deep Learning Prediction
    De, Tanusree
    Giri, Prasenjit
    Mevawala, Ahmeduvesh
    Nemani, Ramyasri
    Deo, Arati
    COMPLEX ADAPTIVE SYSTEMS, 2020, 168 : 40 - 48