On the Explainability of Natural Language Processing Deep Models

Cited: 32
Authors
El Zini, Julia [1]
Awad, Mariette [1]
Affiliations
[1] Amer Univ Beirut, Dept Elect & Comp Engn, POB 11-0236, Beirut 11072020, Lebanon
Keywords
ExAI; NLP; language models; transformers; neural machine translation; transparent embedding models; explaining decisions; neural networks; game
DOI
10.1145/3529755
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
Despite their success, deep networks are used as black-box models whose outputs are not easily explainable during the learning and the prediction phases. This lack of interpretability significantly limits the adoption of such models in domains where decisions are critical, such as the medical and legal fields. Recently, researchers have been interested in developing methods that help explain individual decisions and decipher the hidden representations of machine learning models in general and deep networks specifically. While there has been a recent explosion of work on Explainable Artificial Intelligence (ExAI) for deep models that operate on imagery and tabular data, textual datasets present new challenges to the ExAI community. Such challenges can be attributed to the lack of input structure in textual data, the use of word embeddings that add to the opacity of the models, and the difficulty of visualizing the inner workings of deep models when they are trained on textual data. Lately, methods have been developed to address these challenges and produce satisfactory explanations of Natural Language Processing (NLP) models. However, such methods are yet to be studied in a comprehensive framework where common challenges are properly stated and rigorous evaluation practices and metrics are proposed. Motivated to democratize ExAI methods in the NLP field, we present in this work a survey that studies model-agnostic as well as model-specific explainability methods for NLP models. Such methods can either develop inherently interpretable NLP models or operate on pre-trained models in a post hoc manner. We make this distinction and further decompose the methods into three categories according to what they explain: (1) word embeddings (input level), (2) the inner workings of NLP models (processing level), and (3) models' decisions (output level). We also detail the different approaches used to evaluate interpretability methods in the NLP field. Finally, we present a case study on the well-known neural machine translation task in an appendix, and we propose promising future research directions for ExAI in the NLP field.
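To make the output-level category from the abstract concrete, here is a minimal sketch of a model-agnostic, post hoc explanation by occlusion: each word is deleted in turn and the resulting change in the black-box score is taken as that word's attribution. This is an illustrative sketch only; the toy `score` function and the helper name `occlusion_attributions` are assumptions standing in for any black-box NLP classifier, not code or method from the surveyed paper.

```python
# Sketch of a model-agnostic, post hoc, output-level explanation:
# occlusion-based word attribution. `score` is a toy stand-in for a
# black-box NLP model, not the surveyed paper's method.

def score(text: str) -> float:
    """Toy sentiment score standing in for a black-box NLP model."""
    positive = {"good", "great", "transparent", "interpretable"}
    negative = {"bad", "poor", "opaque", "black-box"}
    tokens = text.lower().split()
    return float(sum(t in positive for t in tokens)
                 - sum(t in negative for t in tokens))

def occlusion_attributions(text: str) -> list[tuple[str, float]]:
    """Attribute the prediction to each word: delete the word, re-score
    the perturbed text, and report the drop (base - perturbed)."""
    tokens = text.split()
    base = score(text)
    return [
        (tokens[i], base - score(" ".join(tokens[:i] + tokens[i + 1:])))
        for i in range(len(tokens))
    ]

if __name__ == "__main__":
    # Positive attribution: the word pushed the score up; negative: down.
    for word, weight in occlusion_attributions("the model is great but opaque"):
        print(f"{word:>8s}: {weight:+.1f}")
```

Perturbation-based methods such as LIME and SHAP refine this single-deletion idea by sampling many perturbations and fitting a local surrogate model or computing game-theoretic Shapley values, which yields more faithful attributions when words interact strongly.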
Pages: 31
Related Papers
50 records in total
  • [41] Shared computational principles for language processing in humans and deep language models
    Goldstein, Ariel; Zada, Zaid; Buchnik, Eliav; Schain, Mariano; Price, Amy; Aubrey, Bobbi; Nastase, Samuel A.; Feder, Amir; Emanuel, Dotan; Cohen, Alon; Jansen, Aren; Gazula, Harshvardhan; Choe, Gina; Rao, Aditi; Kim, Catherine; Casto, Colton; Fanda, Lora; Doyle, Werner; Friedman, Daniel; Dugan, Patricia; Melloni, Lucia; Reichart, Roi; Devore, Sasha; Flinker, Adeen; Hasenfratz, Liat; Levy, Omer; Hassidim, Avinatan; Brenner, Michael; Matias, Yossi; Norman, Kenneth A.; Devinsky, Orrin; Hasson, Uri
    Nature Neuroscience, 2022, 25(3): 369-380
  • [43] Processing natural language without natural language processing
    Brill, E.
    Computational Linguistics and Intelligent Text Processing, Proceedings, 2003, 2588: 360-369
  • [44] Robustness of GPT Large Language Models on Natural Language Processing Tasks
    Chen, Xuanting; Ye, Junjie; Zu, Can; Xu, Nuo; Gui, Tao; Zhang, Qi
    Jisuanji Yanjiu yu Fazhan / Computer Research and Development, 2024, 61(5): 1128-1142
  • [45] Implementation of language models within an infrastructure designed for Natural Language Processing
    Walkowiak, Bartosz; Walkowiak, Tomasz
    International Journal of Electronics and Telecommunications, 2024, 70(1): 153-159
  • [46] A Study of Pre-trained Language Models in Natural Language Processing
    Duan, Jiajia; Zhao, Hui; Zhou, Qian; Qiu, Meikang; Liu, Meiqin
    2020 IEEE International Conference on Smart Cloud (SmartCloud 2020), 2020: 116-121
  • [47] A Primer on Neural Network Models for Natural Language Processing
    Goldberg, Yoav
    Journal of Artificial Intelligence Research, 2016, 57: 345-420
  • [48] A Legal Perspective on Training Models for Natural Language Processing
    de Castilho, Richard Eckart; Dore, Giulia; Margoni, Thomas; Labropoulou, Penny; Gurevych, Iryna
    Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018: 1267-1274
  • [49] Optimizing Resource Allocation in Cloud for Large-Scale Deep Learning Models in Natural Language Processing
    Dhopavkar, Gauri; Welekar, Rashmi R.; Ingole, Piyush K.; Vaidya, Chandu; Wankhade, Shalini Vaibhav; Vasgi, Bharati P.
    Journal of Electrical Systems, 2023, 19(3): 62-77
  • [50] Cross-Domain Learning in Deep HAR Models via Natural Language Processing on Action Labels
    Bacharidis, Konstantinos; Argyros, Antonis
    Advances in Visual Computing, ISVC 2022, Part I, 2022, 13598: 347-361