"What is relevant in a text document?": An interpretable machine learning approach

Cited by: 148
Authors
Arras, Leila [1 ]
Horn, Franziska [2 ]
Montavon, Gregoire [2 ]
Mueller, Klaus-Robert [2 ,3 ,4 ]
Samek, Wojciech [1 ]
Affiliations
[1] Fraunhofer Heinrich Hertz Inst, Machine Learning Grp, Berlin, Germany
[2] Tech Univ Berlin, Machine Learning Grp, Berlin, Germany
[3] Korea Univ, Dept Brain & Cognit Engn, Seoul, South Korea
[4] Max Planck Inst Informat, Saarbrucken, Germany
Source
PLOS ONE | 2017, Vol. 12, Issue 8
Funding
National Research Foundation of Singapore;
Keywords
NETWORKS;
DOI
10.1371/journal.pone.0181142
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to map documents to these abstract concepts automatically, making it possible to annotate text collections far larger than a human could process in a lifetime. Besides predicting a text's category accurately, it is also highly desirable to understand how and why the categorization takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining the predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task, and adapt the LRP method to decompose the predictions of these models onto words. The resulting scores indicate how much each word contributes to the overall classification decision, which makes it possible to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores to generate novel vector-based document representations that capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability, which makes it more comprehensible for humans and potentially more useful for other applications.
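The word-level decomposition described in the abstract can be illustrated, in its simplest form, on a linear bag-of-words scorer. The sketch below is a hypothetical toy (invented vocabulary and weights, not the authors' implementation): it applies an epsilon-stabilized LRP rule to redistribute a class score onto individual input words, so that the relevances approximately sum to the score.

```python
import numpy as np

def lrp_linear(x, w, b=0.0, eps=1e-6):
    """Epsilon-stabilized LRP for a single linear layer f(x) = w.x + b.

    Each input receives relevance R_i = x_i * w_i * z / (z + eps*sign(z)),
    so the relevances approximately sum to the class score z (the bias
    term's share is ignored in this toy sketch).
    """
    z = float(np.dot(w, x)) + b
    denom = z + eps * np.sign(z) if z != 0 else eps  # stabilizer avoids division by zero
    return x * w * (z / denom)

# Hypothetical bag-of-words document and classifier weights.
words = ["excellent", "movie", "boring"]
x = np.array([1.0, 1.0, 0.0])   # word counts in the document
w = np.array([2.0, 0.1, -1.5])  # learned weights for one class
R = lrp_linear(x, w)
for word, r in zip(words, R):
    print(f"{word:10s} relevance = {r:+.3f}")
```

Here "excellent" receives nearly all the relevance, mirroring how the paper's word-wise scores highlight the terms driving a classification; for the CNN, the same conservation principle would be applied layer by layer rather than in a single step.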
Pages: 23
Related Papers
(50 items total)
  • [31] Interpretable Machine Learning Approach to Predicting Electric Vehicle Buying Decisions
    Naseri, Hamed
    Waygood, E. O. D.
    Wang, Bobin
    Patterson, Zachary
    TRANSPORTATION RESEARCH RECORD, 2023, 2677 (12) : 704 - 717
  • [32] An Interpretable Machine Learning Approach to Prioritizing Factors Contributing to Clinician Burnout
    Pillai, Malvika
    Adapa, Karthik
    Foster, Meagan
    Kratzke, Ian
    Charguia, Nadia
    Mazur, Lukasz
    FOUNDATIONS OF INTELLIGENT SYSTEMS (ISMIS 2022), 2022, 13515 : 149 - 161
  • [33] AN INTERPRETABLE MACHINE LEARNING APPROACH IN UNDERSTANDING LATERAL SPREADING CASE HISTORIES
    Torres, Emerzon S.
    Dungca, Jonathan R.
    INTERNATIONAL JOURNAL OF GEOMATE, 2024, 26 (116): : 110 - 117
  • [34] The Mechanical Bard: An Interpretable Machine Learning Approach to Shakespearean Sonnet Generation
    Agnew, Edwin
    Qiu, Michelle
    Zhu, Lily
    Wiseman, Sam
    Rudin, Cynthia
61ST CONFERENCE OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 2, 2023, : 1627 - 1638
  • [35] An Interpretable Machine Learning Approach for Predicting Hospital Length of Stay and Readmission
    Liu, Yuxi
    Qin, Shaowen
    ADVANCED DATA MINING AND APPLICATIONS, ADMA 2021, PT I, 2022, 13087 : 73 - 85
  • [36] Evaluating Wear Volume of Oligoether Esters with an Interpretable Machine Learning Approach
    Wang, Hanwen
    Zhang, Chunhua
    Yu, Xiaowen
    Li, Yangyang
    TRIBOLOGY LETTERS, 2023, 71 (02)
  • [37] Demystifying Thermal Comfort in Smart Buildings: An Interpretable Machine Learning Approach
    Zhang, Wei
    Wen, Yonggang
    Tseng, King Jet
    Jin, Guangyu
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (10) : 8021 - 8031
  • [38] Decoding drinking water flavor: A pioneering and interpretable machine learning approach
    Shuai, Youwen
    Zhang, Kejia
    Zhang, Tuqiao
    Zhu, Hui
    Jin, Sha
    Hu, Tingting
    Yu, Zhefan
    Liang, Xinyu
    JOURNAL OF WATER PROCESS ENGINEERING, 2025, 72
  • [39] Predicting Hospital No-Shows: Interpretable Machine Learning Models Approach
    Toffaha, Khaled M.
    Simsekler, Mecit Can Emre
    Alshehhi, Aamna
    Omar, Mohammed Atif
    IEEE ACCESS, 2024, 12 : 166058 - 166067