Interpretable Deep Learning for Monitoring Combustion Instability

Cited by: 3
Authors
Gangopadhyay, Tryambak [1]
Tan, Sin Yong [1]
LoCurto, Anthony [1]
Michael, James B. [1]
Sarkar, Soumik [1]
Affiliation
[1] Iowa State Univ, Dept Mech Engn, Ames, IA 50011 USA
Source
IFAC PAPERSONLINE, 2020, Vol. 53, No. 2
Keywords
Deep Learning; Attention; LSTM; 3D CNN; Detection; Decomposition
DOI
10.1016/j.ifacol.2020.12.839
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Transitions from stable to unstable states in dynamical systems can occur suddenly, leading to catastrophic failure and large revenue losses. Detecting these transitions during operation requires an accurate data-driven framework that is robust enough to distinguish stable from unstable scenarios. In this paper, we propose deep learning frameworks that achieve high accuracy on the task of classifying combustion instability, evaluated on carefully designed, diverse training and test sets. We train our models on data from a laboratory-scale combustion system exhibiting both stable and unstable states. The dataset is multimodal, with correlated high-speed video and acoustic signals. We develop a labeling mechanism for sequences by applying the Kullback-Leibler divergence to the time-series data. We build deep learning frameworks using a 3D Convolutional Neural Network and a Long Short-Term Memory network for this classification task. To go beyond raw accuracy and gain insight into the predictions, we incorporate an attention mechanism across time steps, which helps identify the time periods that contribute most to the prediction outcome. We validate these insights from a domain-knowledge perspective. By opening up otherwise accurate black-box models, this framework can support the development of better detection frameworks for other dynamical systems. Copyright (C) 2020 The Authors.
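The labeling step described in the abstract can be made concrete with a short sketch. The following is a minimal Python example, assuming fixed-length windows of the acoustic time series are labeled unstable when the Kullback-Leibler divergence between their empirical amplitude distribution and that of a known-stable reference signal exceeds a threshold; the window length, bin count, threshold value, and function name are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import entropy

def kl_label_windows(signal, reference, window=1024, bins=64, threshold=0.5):
    """Label windows of a 1-D acoustic signal as unstable (1) when their
    KL divergence from a stable reference distribution exceeds a threshold."""
    # Shared bin edges so the two histograms are directly comparable.
    lo = min(signal.min(), reference.min())
    hi = max(signal.max(), reference.max())
    edges = np.linspace(lo, hi, bins + 1)

    # Reference distribution from known-stable data (floored to avoid log(0)).
    q, _ = np.histogram(reference, bins=edges, density=True)
    q = q + 1e-12

    labels = []
    for start in range(0, len(signal) - window + 1, window):
        p, _ = np.histogram(signal[start:start + window], bins=edges, density=True)
        labels.append(int(entropy(p + 1e-12, q) > threshold))  # KL(p || q)
    return np.array(labels)
```

A window whose amplitude distribution stays close to the stable reference yields a near-zero divergence and a stable label; the onset of instability shifts the distribution and pushes the divergence past the threshold.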
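The 3D CNN + LSTM architecture with temporal attention can likewise be sketched. This minimal PyTorch example encodes each short video clip with a 3D CNN, feeds the per-clip features through an LSTM, and classifies an attention-weighted summary of the hidden states; the returned per-step weights indicate which time periods contribute most to the prediction. All layer sizes, kernel sizes, and the two-class head are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CNN3DLSTMAttention(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # 3D CNN encodes one clip (1 channel, D frames, H x W) into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)  # one attention score per time step
        self.head = nn.Linear(hidden, 2)  # stable vs. unstable logits

    def forward(self, clips):
        # clips: (batch, time_steps, 1, depth, height, width)
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)).flatten(1)  # (b*t, 16)
        h, _ = self.lstm(feats.view(b, t, -1))                # (b, t, hidden)
        weights = torch.softmax(self.attn(h), dim=1)          # (b, t, 1)
        context = (weights * h).sum(dim=1)                    # weighted sum over time
        return self.head(context), weights.squeeze(-1)        # logits, attention weights
```

Inspecting the returned attention weights for a correctly classified unstable sequence is what lets the time periods driving the prediction be checked against domain knowledge.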
Pages: 832-837
Number of pages: 6