Context: Intensive Care Units (ICUs) treat patients in serious condition, demanding qualified professional assistance, modern equipment for full-time patient monitoring, information systems for data collection, medications, and other supplies. Problem: Patients can recover or die, and sepsis is one of the main causes of death. Predicting the likelihood of death in sepsis patients can help coordinate medical efforts, as incorrect initial decisions can increase the mortality rate. However, it is important that machine learning prediction models be explainable to medical staff, so that decisions can be made conscientiously. Solution: This study aimed to identify which Machine Learning algorithms are best for predicting death by sepsis, using SHapley Additive exPlanations (SHAP) to provide explainable models. Theoretical Approach: The paper draws on information processing theories based on Machine Learning and explainable artificial intelligence models. Method: 196 observations of real data were used to create Machine Learning models. Data characteristics were analyzed, followed by missing data imputation, preprocessing, feature selection, and training of predictive models with SVM, Random Forest, Logistic Regression, KNN, and Decision Tree. Two metrics were used to validate the models: accuracy and f1-weighted. For each trained model, SHAP values were computed to generate an explainable model listing the factors that most contributed to death predictions. Summary of results: The best-performing algorithms were SVM and Logistic Regression (80% for both metrics). The results also showed that the models converged in their interpretation according to the SHAP values. Contributions and Impact on the IS area: The analysis of models generated with different machine learning algorithms allows for explainable and transparent analyses by health specialists in decision-making contexts.
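To make the pipeline described in the Method section concrete, the snippet below is a minimal sketch in Python with scikit-learn and SHAP: imputation and preprocessing, training one of the two best-performing classifiers (Logistic Regression), evaluation with accuracy and f1-weighted, and SHAP values for explanation. The file name, column names, split, and hyperparameters are hypothetical placeholders, not the study's actual settings.

```python
# Minimal sketch of the described pipeline; assumes a tabular dataset with a
# binary "death" outcome. File/column names and hyperparameters are hypothetical.
import pandas as pd
import shap
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("sepsis_icu.csv")                    # hypothetical file name
X, y = df.drop(columns=["death"]), df["death"]        # hypothetical outcome column
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Missing data imputation, scaling, and a Logistic Regression classifier
# (one of the two best-performing models reported in the study).
model = make_pipeline(
    SimpleImputer(strategy="median"),
    StandardScaler(),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# Validation with the two metrics used in the study.
pred = model.predict(X_test)
print("accuracy   :", accuracy_score(y_test, pred))
print("f1-weighted:", f1_score(y_test, pred, average="weighted"))

# Model-agnostic SHAP values: which features contributed most to the
# death predictions, visualized as a beeswarm summary.
explainer = shap.Explainer(model.predict, X_train)
shap_values = explainer(X_test)
shap.plots.beeswarm(shap_values)
```

The same pattern can be repeated for the other algorithms (SVM, Random Forest, KNN, Decision Tree) by swapping the final pipeline step, which is how the convergence of the models' SHAP-based interpretations could be compared.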