Interpreting Pretrained Language Models via Concept Bottlenecks

Cited by: 0
Authors
Tan, Zhen [1 ]
Cheng, Lu [2 ]
Wang, Song [3 ]
Yuan, Bo [4 ]
Li, Jundong [3 ]
Liu, Huan [1 ]
Affiliations
[1] Arizona State Univ, Tempe, AZ 85281 USA
[2] Univ Illinois, Chicago, IL USA
[3] Univ Virginia, Charlottesville, VA USA
[4] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
Keywords
Language Models; Interpretability; Conceptual Learning
DOI
10.1007/978-981-97-2259-4_5
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks. However, their "black-box" nature limits interpretability and poses challenges for responsible deployment. Although previous studies have attempted to improve interpretability using, for example, attention weights in self-attention layers, these weights often lack clarity, readability, and intuitiveness. In this research, we propose a novel approach to interpreting PLMs through high-level, meaningful concepts that humans can readily understand. For example, we learn the concept of "Food" and investigate how it influences a model's sentiment prediction for a restaurant review. We introduce C3M, which combines human-annotated and machine-generated concepts to extract hidden neurons designed to encapsulate semantically meaningful and task-specific concepts. Through empirical evaluations on real-world datasets, we show that our approach offers valuable insights for interpreting PLM behavior, helps diagnose model failures, and enhances model robustness amid noisy concept labels.
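The concept-bottleneck idea the abstract describes can be illustrated with a minimal sketch: a text embedding from a PLM is first projected onto named, human-readable concept scores, and the task label is predicted from those scores alone, so each concept's contribution is inspectable and can be intervened on. All names, sizes, and weights below are illustrative assumptions, not the paper's actual C3M implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

CONCEPTS = ["Food", "Service", "Ambiance"]  # illustrative concepts only
HIDDEN, N_CLASSES = 8, 2                    # toy embedding size / sentiment classes

# Two linear maps stand in for trained layers: encoder -> concepts -> label.
W_concept = rng.normal(size=(HIDDEN, len(CONCEPTS)))
W_task = rng.normal(size=(len(CONCEPTS), N_CLASSES))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(embedding):
    """Return (concept scores, class probabilities) for one text embedding.

    The label depends only on the concept scores (the "bottleneck"), which is
    what makes per-concept interpretation possible.
    """
    concepts = sigmoid(embedding @ W_concept)   # bottleneck activations in [0, 1]
    logits = concepts @ W_task
    probs = np.exp(logits - logits.max())       # numerically stable softmax
    return concepts, probs / probs.sum()

emb = rng.normal(size=HIDDEN)                   # stand-in for a PLM [CLS] embedding
concepts, probs = predict(emb)
for name, score in zip(CONCEPTS, concepts):
    print(f"{name}: {score:.2f}")

# Intervening on a concept (e.g. forcing "Food" to 0) exposes its effect on
# the downstream prediction, since the label depends only on the bottleneck:
edited = concepts.copy()
edited[0] = 0.0
edited_logits = edited @ W_task
```

Because the classifier sees only the concept scores, comparing `edited_logits` against the original logits directly quantifies how much the "Food" concept drives the sentiment prediction.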
Pages: 56-74 (19 pages)