An Explainable Machine Learning Approach for Anomaly Detection in Satellite Telemetry Data

Cited by: 1
Authors
Kricheff, Seth [1]
Maxwell, Emily [2]
Plaks, Connor [3]
Simon, Michelle [4]
Affiliations
[1] Purdue Univ, Elmore Family Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
[2] Rose Hulman Inst Technol, Dept Elect & Comp Engn, Terre Haute, IN 47803 USA
[3] Purdue Univ, Sch Aeronaut & Astronaut, W Lafayette, IN 47907 USA
[4] Air Force Res Lab, Kirtland AFB, NM USA
DOI
10.1109/AERO58975.2024.10521300
CLC Number
V [Aeronautics, Astronautics];
Subject Classification Code
08; 0825
Abstract
Accurate and interpretable satellite health monitoring systems play a crucial role in keeping a satellite operational. With potentially hundreds of sensors to monitor, identifying when and how a component exhibits anomalous behavior is essential to the longevity of a satellite's mission. Detecting these anomalies in their early stages can protect million-dollar assets and their missions by preventing minor issues from escalating into system failure. Traditional methods for anomaly detection utilize expert domain knowledge to produce generally accurate and easy-to-interpret results. However, many are cost- and labor-intensive, and their scope is usually limited to a subset of anomalies [1]. Over the past decade, satellites have become increasingly complex, posing a significant challenge to dated methods. To combat this, state-of-the-art machine learning algorithms have been proposed, including high-dimensional clustering [2], [3], large decision-tree forests [4], and Long Short-Term Memory (LSTM) RNNs [5], [6], [7]. Although these newer models have shown improved accuracy, they lack interpretability: insight into how a model makes its decisions. Satellite operators are cautious about entrusting multi-million-dollar decisions solely to machine learning models that lack transparency. This lack of trust leads to continued reliance on dated, semi-reliable algorithms, despite the risk of missing catastrophic anomalies. To bridge the gap between high detection accuracy and human interpretability, this paper explores explainability methods incorporated with machine learning. Our investigation involves two steps: implementing machine learning and developing explainability. First, we apply current state-of-the-art machine learning algorithms to telemetry data from a previously flown Air Force Research Laboratory (AFRL) satellite to classify anomalies. Then, we apply and evaluate three explainability methods: SHAP (Shapley Additive Explanations) [8], LIME (Local Interpretable Model-Agnostic Explanations) [9], and LRP (Layer-wise Relevance Propagation) [10]. We propose the use of non-classifier machine learning models combined with post-hoc explainability methods to foster trust in machine learning by giving satellite operators explanations on which to base more informed decisions.
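To make the proposed pairing concrete (a non-classifier detector whose continuous anomaly score is explained post hoc), the Python sketch below applies SHAP's model-agnostic KernelExplainer to the score of an Isolation Forest on synthetic telemetry. This is a minimal illustrative sketch, not the paper's implementation: the Isolation Forest stand-in, the sensor names, and the synthetic data are all assumptions; only the scikit-learn and shap packages are presumed available.

import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
sensors = ["bus_voltage", "battery_temp", "wheel_rpm", "solar_current"]  # hypothetical channels

# Nominal telemetry, plus a handful of injected battery-temperature anomalies.
X_train = rng.normal(0.0, 1.0, size=(500, len(sensors)))
X_test = rng.normal(0.0, 1.0, size=(20, len(sensors)))
X_test[:5, 1] += 6.0

# Non-classifier model: Isolation Forest emits a continuous anomaly score
# (decision_function), where lower values mean "more anomalous".
detector = IsolationForest(random_state=0).fit(X_train)

# Post-hoc, model-agnostic explanation: KernelExplainer treats the score
# function as a black box and estimates per-sensor Shapley values against
# a small background sample (kept small to bound kernel SHAP's cost).
background = shap.sample(X_train, 50, random_state=0)
explainer = shap.KernelExplainer(detector.decision_function, background)
shap_values = explainer.shap_values(X_test, nsamples=200)

# Report which sensors drove the score of the most anomalous test point.
worst = int(np.argmin(detector.decision_function(X_test)))
for name, contribution in zip(sensors, shap_values[worst]):
    print(f"{name:>14s}: {contribution:+.4f}")

Because KernelExplainer needs nothing but a callable scoring function, the same wrapper would apply unchanged to, say, an LSTM forecaster's prediction error; LRP, by contrast, requires access to the network's internal layers, which is one axis along which the three methods compared in the paper differ.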
Pages: 14
Related Papers
50 records in total
  • [32] AREP: an adaptive, machine learning-based algorithm for real-time anomaly detection on network telemetry data
    Farkas, Karoly
NEURAL COMPUTING & APPLICATIONS, 2023, 35 (08): 6079 - 6094
  • [33] Adaptable and Explainable Predictive Maintenance: Semi-Supervised Deep Learning for Anomaly Detection and Diagnosis in Press Machine Data
    Serradilla, Oscar
    Zugasti, Ekhi
    Ramirez de Okariz, Julian
    Rodriguez, Jon
    Zurutuza, Urko
    APPLIED SCIENCES-BASEL, 2021, 11 (16):
  • [34] Fragment Anomaly Detection With Prediction and Statistical Analysis for Satellite Telemetry
    Liu, Datong
    Pang, Jingyue
    Song, Ge
    Xie, Wei
    Peng, Yu
    Peng, Xiyuan
    IEEE ACCESS, 2017, 5 : 19269 - 19281
  • [35] Anomaly Detection for Satellite Telemetry Series with Prediction Interval Optimization
    Pang, Jingyue
    Liu, Datong
    Peng, Yu
    Peng, Xiyuan
2018 INTERNATIONAL CONFERENCE ON SENSING, DIAGNOSTICS, PROGNOSTICS, AND CONTROL (SDPC), 2018: 408 - 414
  • [36] Explainable Machine Learning for Fraud Detection
    Psychoula, Ismini
    Gutmann, Andreas
    Mainali, Pradip
    Lee, S. H.
    Dunphy, Paul
    Petitcolas, Fabien A. P.
    COMPUTER, 2021, 54 (10) : 49 - 59
  • [37] Explainable Machine Learning for Intrusion Detection
    Bellegdi, Sameh
    Selamat, Ali
    Olatunji, Sunday O.
    Fujita, Hamido
Krejcar, Ondrej
    ADVANCES AND TRENDS IN ARTIFICIAL INTELLIGENCE: THEORY AND APPLICATIONS, IEA-AIE 2024, 2024, 14748 : 122 - 134
  • [38] Machine Learning Algorithms Applied to Telemetry Data of SCD-2 Brazilian Satellite
    Tavares, Isabela
    Oliveira, Junia Maisa
    Teixeira, Andre Ferreira
    Pereira, Marconi de Arruda
    Kakitani, Marcos Tomio
    Nogueira, Jose Marcos
PROCEEDINGS OF THE 2022 LATIN AMERICA NETWORKING CONFERENCE, LANC 2022, 2022: 50 - 57
  • [39] Satellite Micro-Anomaly Detection Based on Telemetry Data
    Sun, Chao
    Chen, Shaojun
    Mingzhang, E.
    Du, Ying
    Ruan, Chuanmin
PROCEEDINGS OF 2020 IEEE 9TH DATA DRIVEN CONTROL AND LEARNING SYSTEMS CONFERENCE (DDCLS'20), 2020: 140 - 144
  • [40] A review of explainable AI in the satellite data, deep machine learning, and human poverty domain
    Hall, Ola
    Ohlsson, Mattias
    Rognvaldsson, Thorsteinn
    PATTERNS, 2022, 3 (10):