From explainable to interpretable deep learning for natural language processing in healthcare: How far from reality?

Times cited: 7
Authors
Huang, Guangming [1 ]
Li, Yingya [2 ,3 ]
Jameel, Shoaib [4 ]
Long, Yunfei [1 ]
Papanastasiou, Giorgos [5 ]
Affiliations
[1] Univ Essex, Sch Comp Sci & Elect Engn, Colchester CO4 3SQ, England
[2] Harvard Med Sch, Boston, MA 02115 USA
[3] Boston Childrens Hosp, Boston, MA 02115 USA
[4] Univ Southampton, Elect & Comp Sci, Southampton SO17 1BJ, England
[5] Athena Res Ctr, Archimedes Unit, Athens 15125, Greece
Keywords
Explainable; Interpretable; Deep learning; NLP; Healthcare; REPRESENTATIONS; MODELS;
DOI
10.1016/j.csbj.2024.05.004
Chinese Library Classification (CLC)
Q5 [Biochemistry]; Q7 [Molecular Biology]
Subject Classification Codes
071010; 081704
Abstract
Deep learning (DL) has substantially enhanced natural language processing (NLP) in healthcare research. However, the increasing complexity of DL-based NLP necessitates transparent model interpretability, or at least explainability, for reliable decision-making. This work presents a thorough scoping review of explainable and interpretable DL in healthcare NLP. The term "eXplainable and Interpretable Artificial Intelligence" (XIAI) is introduced to distinguish XAI from IAI. Models are further categorized by functionality (model-, input-, and output-based) and scope (local, global). Our analysis shows that attention mechanisms are the most prevalent emerging IAI technique, and that the use of IAI is growing, distinguishing it from XAI. The major challenges identified are that most XIAI methods do not explore "global" modelling processes, and that best practices, systematic evaluation, and benchmarks are lacking. One important opportunity is to use attention mechanisms to enhance multi-modal XIAI for personalized medicine. Combining DL with causal logic also holds promise. Our discussion encourages the integration of XIAI into both Large Language Models (LLMs) and domain-specific smaller models. In conclusion, XIAI adoption in healthcare requires dedicated in-house expertise; collaboration with domain experts, end-users, and policymakers can lead to ready-to-use XIAI methods across NLP and medical tasks. While challenges exist, XIAI techniques offer a valuable foundation for interpretable NLP algorithms in healthcare.
Pages: 362-373
Number of pages: 12
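
The abstract identifies attention mechanisms as the most prevalent emerging IAI technique and distinguishes local from global explanation scope. As an illustration only (not drawn from the reviewed paper), the following minimal Python sketch shows how attention weights from a transformer encoder can be read as a crude local, input-based explanation for a single clinical sentence. It assumes the Hugging Face `transformers` library; `bert-base-uncased` is a placeholder for whichever domain-specific clinical model would actually be used.

```python
# Minimal sketch: attention weights as a token-level IAI signal (local scope).
# Assumes the Hugging Face `transformers` library; the checkpoint below is a
# generic placeholder, not a model endorsed by the reviewed paper.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # placeholder; swap in a clinical/biomedical checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True)
model.eval()

text = "Patient reports chest pain and shortness of breath."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one (batch, heads, seq_len, seq_len)
# tensor per layer. Average the last layer over heads and read the [CLS] row
# as a rough per-token importance score for this single input.
last_layer = outputs.attentions[-1].mean(dim=1)   # (batch, seq_len, seq_len)
cls_attention = last_layer[0, 0]                  # attention from [CLS] to each token

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in sorted(zip(tokens, cls_attention.tolist()),
                           key=lambda pair: pair[1], reverse=True):
    print(f"{token:15s} {score:.3f}")
```

Raw attention scores are only a heuristic signal; the lack of systematic evaluation and benchmarks noted in the abstract applies directly to readings of this kind.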