Explainability in deep reinforcement learning

Cited by: 151
Authors
Heuillet, Alexandre [1 ]
Couthouis, Fabien [2 ]
Diaz-Rodriguez, Natalia [3 ]
Affiliations
[1] Bordeaux INP, ENSEIRB MATMECA, 1 Ave Docteur Albert Schweitzer, F-33400 Talence, France
[2] Bordeaux INP, ENSC, 109 Ave Roul, F-33400 Talence, France
[3] Inst Polytech Paris, Inria Flowers Team, ENSTA Paris, 828 Blvd Marechaux, F-91762 Palaiseau, France
Keywords
Reinforcement learning; Explainable artificial intelligence; Machine learning; Deep learning; Responsible artificial intelligence; Representation learning
DOI
10.1016/j.knosys.2020.106685
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
A large body of the explainable Artificial Intelligence (XAI) literature is emerging on feature relevance techniques that explain a deep neural network (DNN) output, or on explaining models that ingest image source data. However, how XAI techniques can help understand models beyond classification tasks, e.g. reinforcement learning (RL), has not been extensively studied. We review recent work aimed at attaining Explainable Reinforcement Learning (XRL), a relatively new subfield of Explainable Artificial Intelligence intended for general public applications with diverse audiences, which require ethical, responsible and trustworthy algorithms. In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box. We mainly evaluate studies that directly link explainability to RL, and split them into two categories according to how the explanations are generated: transparent algorithms and post-hoc explainability. We also review the most prominent XAI works through the lens of how they could enlighten the further deployment of the latest advances in RL in the demanding present and future of everyday problems. (C) 2020 Elsevier B.V. All rights reserved.
Pages: 14
Related papers (50 total)
  • [1] Explainability of Deep Reinforcement Learning Method with Drones
    Cetin, Ender
    Barrado, Cristina
    Pastor, Enric
    [J]. 2023 IEEE/AIAA 42ND DIGITAL AVIONICS SYSTEMS CONFERENCE, DASC, 2023,
  • [2] Explainability in Deep Reinforcement Learning: A Review into Current Methods and Applications
    Hickling, Thomas
    Zenati, Abdelhafid
    Aouf, Nabil
    Spencer, Phillippa
    [J]. ACM COMPUTING SURVEYS, 2024, 56 (05)
  • [3] Assessing Explainability in Reinforcement Learning
    Zelvelder, Amber E.
    Westberg, Marcus
    Framling, Kary
    [J]. EXPLAINABLE AND TRANSPARENT AI AND MULTI-AGENT SYSTEMS, EXTRAAMAS 2021, 2021, 12688 : 223 - 240
  • [4] HEX: Human-in-the-loop explainability via deep reinforcement learning
    Lash, Michael T.
    [J]. DECISION SUPPORT SYSTEMS, 2024, 187
  • [5] Exploiting Explainability for Reinforcement Learning Model Assurance
    Tapley, Alexander
    Weissman, Joseph
    [J]. ASSURANCE AND SECURITY FOR AI-ENABLED SYSTEMS, 2024, 13054
  • [6] Reinforcement Learning with Explainability for Traffic Signal Control
    Rizzo, Stefano Giovanni
    Vantini, Giovanna
    Chawla, Sanjay
    [J]. 2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2019, : 3567 - 3572
  • [7] Enhancing Explainability of Deep Reinforcement Learning Through Selective Layer-Wise Relevance Propagation
    Huber, Tobias
    Schiller, Dominik
    Andre, Elisabeth
    [J]. ADVANCES IN ARTIFICIAL INTELLIGENCE, KI 2019, 2019, 11793 : 188 - 202
  • [8] Explainability of deep reinforcement learning algorithms in robotic domains by using Layer-wise Relevance Propagation
    Taghian, Mehran
    Miwa, Shotaro
    Mitsuka, Yoshihiro
    Gunther, Johannes
    Golestan, Shadan
    Zaiane, Osmar
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 137
  • [9] Beyond Human: Deep Learning, Explainability and Representation
    Fazi, M. Beatrice
    [J]. THEORY CULTURE & SOCIETY, 2021, 38 (7-8) : 55 - 77
  • [10] Magnetic anomalies characterization: Deep learning and explainability
    Cardenas, J.
    Denis, C.
    Mousannif, H.
    Camerlynck, C.
    Florsch, N.
    [J]. COMPUTERS & GEOSCIENCES, 2022, 169