Explaining Deep Learning Models Through Rule-Based Approximation and Visualization

Cited by: 19
Authors
Soares, Eduardo [1 ]
Angelov, Plamen P. [1 ]
Costa, Bruno [2 ]
Gerardo Castro, Marcos P. [2 ]
Nageshrao, Subramanya [2 ]
Filev, Dimitar [2 ]
Affiliations
[1] Univ Lancaster, Lancaster Intelligent Robot & Autonomous Syst Res, Sch Comp & Commun, Lancaster LA1 4WA, England
[2] Ford Motor Co, Ford Res & Innovat Ctr, Palo Alto, CA 94304 USA
Keywords
Autonomous driving; deep reinforcement learning; density-based input selection; explainable artificial intelligence; prototype- and density-based models; rule-based models; IDENTIFICATION; CLASSIFIER
DOI
10.1109/TFUZZ.2020.2999776
CLC Classification Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This article describes a novel approach to the problem of developing explainable machine learning models. We consider a deep reinforcement learning (DRL) model representing a highway path planning policy for autonomous highway driving [1]. The model constitutes a mapping from the continuous multidimensional state space characterizing vehicle positions and velocities to a discrete set of actions in the longitudinal and lateral directions. It is obtained by applying a customized version of the double deep Q-network learning algorithm [2]. The main idea is to approximate the DRL model with a set of IF-THEN rules that provide an alternative interpretable model, which is further enhanced by visualizing the rules. This concept is justified by the universal approximation properties of rule-based models with fuzzy predicates. The proposed approach includes a learning engine composed of zero-order fuzzy rules, which generalize locally around the prototypes by using multivariate function models. Adjacent prototypes (in the data space) that correspond to the same action are further grouped and merged into so-called MegaClouds, significantly reducing the number of fuzzy rules. The input selection method is based on ranking the density of the individual inputs. Experimental results show that the specific DRL agent can be interpreted by approximating it with families of rules of different granularity. The method is computationally efficient and can potentially be extended to address the explainability of the broader class of fully connected deep neural network models.
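The zero-order rule base described in the abstract can be sketched as a winner-takes-all nearest-prototype classifier: each rule reads "IF the state is close to prototype p_i THEN take action a_i", and adjacent prototypes sharing an action could then be merged into a MegaCloud. The prototypes, action names, and two-dimensional state layout below are invented toy values for illustration only, not the paper's learned model or its actual state representation.

```python
import math

# Toy zero-order rule base: prototype (state) -> consequent (action).
# Values are hypothetical; the paper's prototypes are learned from the
# DRL agent's state-action data, not hand-picked like these.
prototypes = {
    (0.0, 0.0): "keep_lane",
    (1.0, 0.5): "change_left",
    (1.0, -0.5): "change_right",
}

def infer(state):
    """Winner-takes-all inference: fire the rule whose prototype is
    nearest to the query state and return that rule's action."""
    return min(prototypes.items(),
               key=lambda kv: math.dist(state, kv[0]))[1]

print(infer((0.1, 0.05)))  # nearest prototype is (0.0, 0.0) -> keep_lane
```

In the paper's scheme, rules whose prototypes are adjacent in the data space and share the same consequent action would be merged into a single MegaCloud rule, shrinking the rule base without changing the induced decision regions.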
Pages: 2399-2407
Page count: 9