Towards a More Reliable Interpretation of Machine Learning Outputs for Safety-Critical Systems Using Feature Importance Fusion

Cited by: 12
Authors
Rengasamy, Divish [1 ]
Rothwell, Benjamin C. [1 ]
Figueredo, Grazziela P. [2 ]
Affiliations
[1] Univ Nottingham, Gas Turbine & Transmiss Res Ctr, Nottingham NG7 2TU, England
[2] Univ Nottingham, Sch Comp Sci, Adv Data Anal Ctr, Nottingham NG8 1BB, England
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, Issue 24
Keywords
accountability; data fusion; deep learning; ensemble feature importance; explainable artificial intelligence; interpretability; machine learning; responsible artificial intelligence; NEURAL-NETWORKS; BLACK-BOX; PREDICTION; ALGORITHM; SELECTION; NOISE;
DOI
10.3390/app112411854
Chinese Library Classification: O6 [Chemistry]
Discipline Classification Code: 0703
Abstract
When machine learning supports decision-making in safety-critical systems, it is important to verify and understand the reasons why a particular output is produced. Although feature importance calculation approaches assist in interpretation, there is a lack of consensus regarding how features' importance is quantified, which makes the explanations offered for the outcomes mostly unreliable. A possible solution to address the lack of agreement is to combine the results from multiple feature importance quantifiers to reduce the variance in estimates and to improve the quality of explanations. Our hypothesis is that this leads to more robust and trustworthy explanations of the contribution of each feature to machine learning predictions. To test this hypothesis, we propose an extensible model-agnostic framework divided into four main parts: (i) traditional data pre-processing and preparation for predictive machine learning models, (ii) predictive machine learning, (iii) feature importance quantification, and (iv) feature importance decision fusion using an ensemble strategy. Our approach is tested on synthetic data, where the ground truth is known. We compare different fusion approaches and their results for both training and test sets. We also investigate how different characteristics within the datasets affect the quality of the feature importance ensembles studied. The results show that, overall, our feature importance ensemble framework produces 15% fewer feature importance errors than existing methods. Additionally, the results reveal that different levels of noise in the datasets do not affect the feature importance ensembles' ability to accurately quantify feature importance, whereas the feature importance quantification error increases with the number of features and number of orthogonal informative features. We also discuss the implications of our findings on the quality of explanations provided to safety-critical systems.
Pages: 19
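The ensemble strategy described in the abstract (parts iii and iv) can be sketched in a few lines: compute feature importance with several independent quantifiers, normalise each estimate, and fuse them by averaging. The quantifier choices below (impurity-based, permutation, and ridge-coefficient importance) and the mean-fusion rule are illustrative assumptions for a minimal sketch, not the paper's exact configuration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge

# Synthetic data with known ground truth: with shuffle=False, the first
# n_informative columns are the informative features.
X, y = make_regression(n_samples=500, n_features=8, n_informative=3,
                       noise=10.0, shuffle=False, random_state=0)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Quantifier 1: impurity-based importance from the random forest.
imp_impurity = forest.feature_importances_

# Quantifier 2: permutation importance (clip small negatives caused by noise).
imp_perm = np.clip(
    permutation_importance(forest, X, y, n_repeats=5,
                           random_state=0).importances_mean,
    0.0, None)

# Quantifier 3: absolute ridge-regression coefficients.
imp_ridge = np.abs(Ridge().fit(X, y).coef_)

def normalise(v):
    """Scale an importance vector so it sums to one."""
    return v / v.sum()

# Ensemble fusion: average the normalised importance vectors.
fused = np.vstack([normalise(imp_impurity),
                   normalise(imp_perm),
                   normalise(imp_ridge)]).mean(axis=0)

print("fused importances:", np.round(fused, 3))
```

Because the ground truth is known here, the quality of the fused estimate can be checked directly, mirroring the paper's evaluation on synthetic data: the three informative features should jointly receive the bulk of the fused importance mass.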