Accurate and interpretable satellite health monitoring systems play a crucial role in keeping a satellite operational. With potentially hundreds of sensors to monitor, identifying when and how a component exhibits anomalous behavior is essential to the longevity of a satellite's mission. Detecting anomalies in their early stages can protect million-dollar assets and their missions by preventing minor issues from escalating into system failure. Traditional methods for anomaly detection rely on expert domain knowledge and produce generally accurate, easy-to-interpret results. However, many are cost- and labor-intensive, and their scope is usually limited to a subset of anomalies [1].

Over the past decade, satellites have become increasingly complex, posing a significant challenge to these dated methods. To address this, state-of-the-art machine learning algorithms have been proposed, including high-dimensional clustering [2], [3], large decision-tree forests [4], and Long Short-Term Memory (LSTM) recurrent neural networks [5], [6], [7]. Although these newer models have shown improved accuracy, they lack interpretability, that is, insight into how a model makes its decisions. Satellite operators are cautious about entrusting multi-million-dollar decisions solely to machine learning models that lack transparency. This lack of trust leads to continued reliance on dated, semi-reliable algorithms, despite the risk of missing catastrophic anomalies.

To bridge the gap between high detection accuracy and human interpretability, this paper explores explainability methods incorporated with machine learning. Our investigation involves two steps: the implementation of machine learning and the development of explainability. First, we apply current state-of-the-art machine learning algorithms to telemetry data from a previously flown Air Force Research Laboratory (AFRL) satellite to classify anomalies. Then, we apply and evaluate three explainability methods: SHAP (Shapley Additive Explanations) [8], LIME (Local Interpretable Model-Agnostic Explanations) [9], and LRP (Layer-wise Relevance Propagation) [10]. We propose combining non-classifier machine learning models with post-hoc explainability methods to foster trust in machine learning by providing explanations that help satellite operators make more informed decisions.
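As a rough illustration of the post-hoc explainability workflow outlined above, the sketch below trains a simple anomaly classifier on synthetic stand-in telemetry features and then applies SHAP to attribute each prediction to individual features. This is not the paper's actual pipeline or data: the model choice, feature construction, and labels are illustrative assumptions only.

```python
# Minimal sketch of post-hoc explanation with SHAP (illustrative assumptions,
# not the pipeline or telemetry data used in this work).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                     # stand-in telemetry features
y = (X[:, 0] + 0.5 * X[:, 2] > 1.0).astype(int)   # stand-in anomaly labels

# Train any classifier; a random forest is used here purely for illustration.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values for tree-ensemble models: one value
# per feature per prediction, quantifying how much that feature pushed the
# model toward or away from the anomaly class. These per-feature attributions
# are what an operator would inspect before acting on an anomaly flag.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])
print(np.array(shap_values).shape)
```

The same explain-after-training pattern applies to LIME and LRP; only the attribution mechanism changes, while the underlying model and telemetry features stay fixed.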