"What is relevant in a text document?": An interpretable machine learning approach

Cited by: 148
|
Authors
Arras, Leila [1 ]
Horn, Franziska [2 ]
Montavon, Gregoire [2 ]
Mueller, Klaus-Robert [2 ,3 ,4 ]
Samek, Wojciech [1 ]
Affiliations
[1] Fraunhofer Heinrich Hertz Inst, Machine Learning Grp, Berlin, Germany
[2] Tech Univ Berlin, Machine Learning Grp, Berlin, Germany
[3] Korea Univ, Dept Brain & Cognit Engn, Seoul, South Korea
[4] Max Planck Inst Informat, Saarbrucken, Germany
Source
PLOS ONE | 2017, Vol. 12, Issue 08
Funding
National Research Foundation, Singapore;
Keywords
NETWORKS;
DOI
10.1371/journal.pone.0181142
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to map documents to these abstract concepts automatically, making it possible to annotate text collections far larger than a human could process in a lifetime. Besides predicting a text's category accurately, it is also highly desirable to understand how and why the categorization decision is made. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining the predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. The resulting scores indicate how much individual words contribute to the overall classification decision. This makes it possible to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores to generate novel vector-based document representations that capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability, which makes it more comprehensible for humans and potentially more useful for other applications.
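To make the redistribution idea behind LRP concrete, the sketch below implements the standard LRP-epsilon rule for a single dense layer: the relevance assigned to an output neuron is traced back to each input in proportion to its contribution to the pre-activation. This is a minimal illustration under assumed conventions, not the authors' reference implementation; the function name lrp_epsilon, the variable names, and the toy numbers are all hypothetical.

import numpy as np

# Minimal sketch of the LRP-epsilon rule for one dense layer
# (hypothetical names, not the authors' reference code): the output
# relevance R_j is redistributed to each input x_i in proportion to
# its contribution z_ij = x_i * w_ij to the pre-activation z_j.
def lrp_epsilon(x, W, b, R_out, eps=1e-2):
    z = x[:, None] * W                  # contributions z_ij, shape (n_in, n_out)
    z_j = z.sum(axis=0) + b             # pre-activations z_j, shape (n_out,)
    z_j = z_j + eps * np.sign(z_j)      # epsilon stabilizer avoids division by ~0
    return (z / z_j) @ R_out            # R_i = sum_j (z_ij / z_j) * R_j

# Toy usage: three word features feeding a two-class output layer;
# the relevance of the predicted class is traced back to the words.
x = np.array([0.5, 1.0, 0.0])           # e.g. word activations of one document
W = np.array([[ 0.2, -0.1],
              [ 0.7,  0.3],
              [-0.4,  0.5]])
b = np.zeros(2)
R_out = np.array([1.0, 0.0])            # relevance placed on the predicted class
print(lrp_epsilon(x, W, b, R_out))      # per-word relevance scores

Summed over the inputs, the returned scores approximately conserve the relevance injected at the output (the epsilon stabilizer absorbs a small fraction), which is what lets them be read as per-word contributions to the classification decision.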
Pages: 23
Related Papers
50 records in total
  • [1] KNN based machine learning approach for text and document mining
    Institute of Technology Gopeshwar, Chamoli, Uttarakhand, India
    1600, Science and Engineering Research Support Society (07):
  • [2] Injury severity on traffic crashes: A text mining with an interpretable machine-learning approach
    Arteaga, Cristian
    Paz, Alexander
    Park, JeeWoong
    SAFETY SCIENCE, 2020, 132
  • [3] Text Detection in Document Images by Machine Learning Algorithms
    Zelenika, Darko
    Povh, Janez
    Zenko, Bernard
    PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON COMPUTER RECOGNITION SYSTEMS, CORES 2015, 2016, 403 : 169 - 179
  • [4] Cardiovascular Risk Assessment: An Interpretable Machine Learning Approach
    Paredes, S.
    Rocha, T.
    de Carvalho, P.
    Roseiro, I.
    Henriques, J.
    Sousa, J.
    INTERNATIONAL CONFERENCE ON BIOMEDICAL AND HEALTH INFORMATICS 2022, ICBHI 2022, 2024, 108 : 95 - 103
  • [5] An Interpretable Machine Learning Approach for Hepatitis B Diagnosis
    Obaido, George
    Ogbuokiri, Blessing
    Swart, Theo G.
    Ayawei, Nimibofa
    Kasongo, Sydney Mambwe
    Aruleba, Kehinde
    Mienye, Ibomoiye Domor
    Aruleba, Idowu
    Chukwu, Williams
    Osaye, Fadekemi
    Egbelowo, Oluwaseun F.
    Simphiwe, Simelane
    Esenogho, Ebenezer
    APPLIED SCIENCES-BASEL, 2022, 12 (21):
  • [6] On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach
    Wei, Dennis
    Nair, Rahul
    Dhurandhar, Amit
    Varshney, Kush R.
    Daly, Elizabeth M.
    Singh, Moninder
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [7] An interpretable and versatile machine learning approach for oocyte phenotyping
    Letort, Gaelle
    Eichmuller, Adrien
    Da Silva, Christelle
    Nikalayevich, Elvira
    Crozet, Flora
    Salle, Jeremy
    Minc, Nicolas
    Labrune, Elsa
    Wolf, Jean-Philippe
    Terret, Marie-Emilie
    Verlhac, Marie-Helene
    JOURNAL OF CELL SCIENCE, 2022, 135 (13)
  • [8] An Interpretable Machine Learning Approach for Laser Lifetime Prediction
    Abdelli, Khouloud
    Griesser, Helmut
    Pachnicke, Stephan
    JOURNAL OF LIGHTWAVE TECHNOLOGY, 2024, 42 (06) : 2094 - 2102
  • [9] Interpretable Machine Learning
    Chen, V.
    Li, J.
    Kim, J. S.
    Plumb, G.
    Talwalkar, A.
    QUEUE, 2021, 19 (06): 28 - 56
  • [10] What Determines Enterprise Borrowing from Self Help Groups? An Interpretable Supervised Machine Learning Approach
    Dasgupta, Madhura
    Gupta, Samarth
    JOURNAL OF FINANCIAL SERVICES RESEARCH, 2024, 66 (01) : 77 - 99