Interpretable Deep Learning for Monitoring Combustion Instability

Times Cited: 3
Authors
Gangopadhyay, Tryambak [1 ]
Tan, Sin Yong [1 ]
LoCurto, Anthony [1 ]
Michael, James B. [1 ]
Sarkar, Soumik [1 ]
Affiliations
[1] Iowa State Univ, Dept Mech Engn, Ames, IA 50011 USA
Source
IFAC PAPERSONLINE | 2020 / Vol. 53 / Issue 02
Keywords
Deep Learning; Attention; LSTM; 3D CNN; Detection; Decomposition
DOI
10.1016/j.ifacol.2020.12.839
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Transitions from stable to unstable states in dynamical systems can be sudden, leading to catastrophic failure and large revenue losses. To detect these transitions during operation, it is of utmost importance to develop an accurate data-driven framework that is robust enough to classify stable and unstable scenarios. In this paper, we propose deep learning frameworks that achieve remarkable accuracy on the combustion-instability classification task over carefully designed, diverse training and test sets. We train our models with data from a laboratory-scale combustion system exhibiting stable and unstable states. The dataset is multimodal, with correlated high-speed video and acoustic signals. We develop a labeling mechanism for sequences by applying the Kullback-Leibler divergence to the time-series data. We build deep learning frameworks using a 3D Convolutional Neural Network and a Long Short-Term Memory network for this classification task. To go beyond accuracy and gain insight into the predictions, we incorporate an attention mechanism across the time-steps, which helps identify the time periods that contribute most to the prediction outcome. We validate these insights from a domain-knowledge perspective. By exposing the inner workings of otherwise accurate black-box models, this framework can support the development of better detection frameworks for other dynamical systems. Copyright (C) 2020 The Authors.
Pages: 832 - 837
Number of pages: 6
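
The labeling mechanism described in the abstract applies the Kullback-Leibler divergence to the time-series data to tag sequences as stable or unstable. Below is a minimal sketch of one plausible version of that step, assuming labels come from comparing each acoustic window's amplitude histogram against the histogram of a known stable-state reference; the window length, bin count, and threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence D(p || q) between two histograms."""
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

def label_windows(signal, reference, window=1000, bins=64, threshold=0.5):
    """Label each window of `signal` as stable (0) or unstable (1).

    A window is marked unstable when its amplitude histogram drifts far,
    in KL divergence, from the histogram of the stable-state `reference`.
    """
    ref_hist, edges = np.histogram(reference, bins=bins)
    labels = []
    for start in range(0, len(signal) - window + 1, window):
        win_hist, _ = np.histogram(signal[start:start + window], bins=edges)
        labels.append(int(kl_divergence(win_hist, ref_hist) > threshold))
    return np.array(labels)
```

Thresholding the divergence rather than the raw amplitude makes the labels depend on a change in the signal's distribution, which matches the abstract's use of KL divergence as the labeling criterion.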
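The classifier combines a 3D Convolutional Neural Network over short high-speed-video clips with a Long Short-Term Memory network, plus an attention mechanism across the time-steps so that the attention weights indicate which time periods drive the prediction. The PyTorch sketch below is one minimal instantiation under assumed layer sizes and an assumed single-layer attention scorer; the record does not specify the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionCNNLSTM(nn.Module):
    """Minimal 3D-CNN -> LSTM -> temporal-attention classifier (assumed sizes)."""

    def __init__(self, hidden=64, steps=4):
        super().__init__()
        self.steps = steps
        # 3D convolution over grayscale clips: (batch, 1, frames, height, width).
        self.conv = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((steps, 8, 8)),  # pool time axis to `steps`
        )
        self.lstm = nn.LSTM(8 * 8 * 8, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)  # one attention score per time-step
        self.head = nn.Linear(hidden, 2)   # stable vs. unstable logits

    def forward(self, clips):
        b = clips.size(0)
        z = self.conv(clips)                              # (b, 8, steps, 8, 8)
        z = z.permute(0, 2, 1, 3, 4).reshape(b, self.steps, -1)
        h, _ = self.lstm(z)                               # (b, steps, hidden)
        alpha = torch.softmax(self.score(h), dim=1)       # attention weights
        context = (alpha * h).sum(dim=1)                  # weighted summary
        return self.head(context), alpha.squeeze(-1)
```

For example, `logits, attn = AttentionCNNLSTM()(torch.randn(2, 1, 16, 64, 64))` yields class logits and per-time-step attention weights; inspecting which entries of `attn` dominate is the kind of interpretation the abstract validates against domain knowledge.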