Objective: Gallbladder polyps (GBPs) are increasingly prevalent. Most are benign, but neoplastic polyps carry a risk of malignant transformation, making accurate differentiation essential. This study aimed to develop and validate interpretable machine learning (ML) models to predict neoplastic GBPs in a retrospective cohort, identify key features, and provide model explanations using the Shapley additive explanations (SHAP) method.

Methods: A total of 924 patients with GBPs who underwent cholecystectomy between January 2013 and December 2023 at Qilu Hospital of Shandong University were included. Patient characteristics, laboratory results, preoperative ultrasound findings, and postoperative pathological results were collected. The dataset was randomly split, with 80% used for model training and the remaining 20% for model testing. Nine ML algorithms were employed to construct predictive models, and model performance was evaluated and compared using several metrics, including the area under the receiver operating characteristic curve (AUC). Feature importance was ranked, and model interpretability was enhanced with the SHAP method.

Results: The K-nearest neighbors, C5.0 decision tree, and gradient boosting machine models showed the highest performance, with the greatest predictive efficacy for neoplastic polyps. The SHAP method revealed the top five predictors of neoplastic polyps in the importance ranking. Polyp size was identified as the most important predictor, indicating that lesions ≥18 mm should prompt heightened clinical surveillance and timely intervention.

Conclusions: Our interpretable ML models accurately predict neoplastic polyps in patients with GBPs, providing guidance for treatment planning and resource allocation. The models' transparency fosters trust and understanding, empowering physicians to use their predictions confidently for improved patient care.
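The modelling workflow described above (80/20 split, gradient boosting, AUC evaluation, feature-importance ranking) can be sketched as follows. This is a minimal illustration on synthetic data, not the study cohort; the feature names (`polyp_size`, `age`, `cea`) and all distributions are hypothetical stand-ins, and the built-in impurity-based importances are used here as a rough analogue of the SHAP ranking reported in the paper.

```python
# Illustrative sketch of the abstract's pipeline on synthetic data.
# All features, distributions, and the outcome rule are invented for
# demonstration only and do not reflect the study's real data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 924  # cohort size reported in the abstract

# Hypothetical predictors (synthetic values)
polyp_size = rng.normal(10, 5, n).clip(2, 35)   # mm
age = rng.normal(50, 12, n).clip(18, 90)        # years
cea = rng.lognormal(0.5, 0.6, n)                # arbitrary lab value
X = np.column_stack([polyp_size, age, cea])

# Synthetic outcome: larger polyps are more likely neoplastic,
# loosely echoing the paper's >=18 mm finding
p = 1.0 / (1.0 + np.exp(-(polyp_size - 18.0) / 4.0))
y = rng.binomial(1, p)

# 80/20 train/test split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# One of the nine algorithms mentioned: gradient boosting machine
model = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)

# Evaluate with AUC on the held-out 20%
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")

# Rough feature-importance ranking (the paper uses SHAP instead)
for name, imp in zip(["polyp_size", "age", "cea"], model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

On this synthetic setup the size feature dominates the importance ranking by construction; with real cohort data one would instead compute SHAP values (e.g. with a tree explainer) to obtain the per-feature contributions the paper reports.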