"What is relevant in a text document?": An interpretable machine learning approach

Cited by: 148
Authors
Arras, Leila [1 ]
Horn, Franziska [2 ]
Montavon, Gregoire [2 ]
Mueller, Klaus-Robert [2 ,3 ,4 ]
Samek, Wojciech [1 ]
Affiliations
[1] Fraunhofer Heinrich Hertz Inst, Machine Learning Grp, Berlin, Germany
[2] Tech Univ Berlin, Machine Learning Grp, Berlin, Germany
[3] Korea Univ, Dept Brain & Cognit Engn, Seoul, South Korea
[4] Max Planck Inst Informat, Saarbrucken, Germany
Source
PLOS ONE | 2017, Vol. 12, Issue 08
Funding
National Research Foundation, Singapore;
Keywords
NETWORKS;
DOI
10.1371/journal.pone.0181142
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to map documents to these abstract concepts automatically, allowing the annotation of text collections far larger than a human could process in a lifetime. Besides predicting a text's category accurately, it is also highly desirable to understand how and why the categorization takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining the predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. The resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores to generate novel vector-based document representations that capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability, which makes it more comprehensible for humans and potentially more useful for other applications.
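To make the word-level decomposition concrete, below is a minimal NumPy sketch, not the authors' implementation: the vocabulary, the weights, and the `lrp_epsilon` helper are illustrative assumptions. It shows the epsilon-stabilized LRP rule for a single dense layer, and the exact per-word decomposition that a linear bag-of-words classifier admits, where each word's relevance is simply its weighted count.

```python
import numpy as np

def lrp_epsilon(W, b, a, R_out, eps=0.01):
    """One LRP-epsilon backward step through a dense layer z = W @ a + b.

    Redistributes the output relevances R_out onto the inputs a, in
    proportion to each contribution W[i, j] * a[j] to the pre-activations.
    """
    z = W @ a + b                                        # forward pre-activations, shape (out,)
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # epsilon-stabilized ratios
    return a * (W.T @ s)                                 # relevance per input, shape (in,)

# For a linear bag-of-words model f(x) = w . x + b, the decomposition is
# exact: each word's relevance is its weight times its count.
vocab = ["great", "boring", "plot"]                 # hypothetical vocabulary
x = np.array([2.0, 1.0, 1.0])                       # word counts for one document
w = np.array([0.8, -1.2, 0.1])                      # hypothetical learned weights
R_words = w * x
print(dict(zip(vocab, R_words.round(2).tolist())))  # {'great': 1.6, 'boring': -1.2, 'plot': 0.1}

# For a deeper network, the same backward step is chained layer by layer;
# here a single random dense layer serves as illustration.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(2)
R_in = lrp_epsilon(W1, b1, x, R_out=W1 @ x + b1)    # start from the output scores
print(R_in.round(2))                                # per-word relevances, approximately conserved
```

In the paper's CNN setting, such backward steps are applied from the classifier output down through the layers until the relevance reaches the individual input words.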
Pages: 23
Related Papers
50 records in total
  • [21] Learning Interpretable Negation Rules via Weak Supervision at Document Level: A Reinforcement Learning Approach
    Prollochs, Nicolas
    Feuerriegel, Stefan
    Neumann, Dirk
    2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, 2019, : 407 - 413
  • [22] Interpretable machine learning assessment
    Han, Henry
    Wu, Yi
    Wang, Jiacun
    Han, Ashley
    NEUROCOMPUTING, 2023, 561
  • [23] Algorithms for Interpretable Machine Learning
    Rudin, Cynthia
    PROCEEDINGS OF THE 20TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING (KDD'14), 2014, : 1519 - 1519
  • [24] Interpretable machine learning for genomics
    Watson, David S.
    HUMAN GENETICS, 2022, 141 (09) : 1499 - 1513
  • [26] Techniques for Interpretable Machine Learning
    Du, Mengnan
Liu, Ninghao
    Hu, Xia
    COMMUNICATIONS OF THE ACM, 2020, 63 (01) : 68 - 77
  • [27] Interpretable Machine Learning for TabPFN
    Rundel, David
    Kobialka, Julius
    von Crailsheim, Constantin
    Feurer, Matthias
    Nagler, Thomas
    Ruegamer, David
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, PT II, XAI 2024, 2024, 2154 : 465 - 476
  • [28] Interpretable Machine Learning in Healthcare
    Ahmad, Muhammad Aurangzeb
    Eckert, Carly
    Teredesai, Ankur
    2018 IEEE INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI), 2018, : 447 - 447
  • [29] Interpretable Machine Learning in Healthcare
    Ahmad, Muhammad Aurangzeb
    Eckert, Carly
    Teredesai, Ankur
    ACM-BCB'18: PROCEEDINGS OF THE 2018 ACM INTERNATIONAL CONFERENCE ON BIOINFORMATICS, COMPUTATIONAL BIOLOGY, AND HEALTH INFORMATICS, 2018, : 559 - 560
  • [30] Towards expert-machine collaborations for technology valuation: An interpretable machine learning approach
    Kim, Juram
    Lee, Gyumin
    Lee, Seungbin
    Lee, Changyong
    TECHNOLOGICAL FORECASTING AND SOCIAL CHANGE, 2022, 183