Rationalizing predictions by adversarial information calibration

Cited by: 4
Authors
Sha, Lei [1 ,2 ,6 ]
Camburu, Oana-Maria [3 ]
Lukasiewicz, Thomas [2 ,4 ,5 ]
Affiliations
[1] Beihang Univ, Inst Artificial Intelligence, Beijing, Peoples R China
[2] Univ Oxford, Dept Comp Sci, Oxford, England
[3] UCL, Dept Comp Sci, London, England
[4] TU Wien, Inst Log & Computat, Vienna, Austria
[5] Alan Turing Inst, London, England
[6] Zhongguancun Lab, Beijing, Peoples R China
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Rationale extraction; Interpretability; Natural language processing; Information calibration; Deep neural networks; NEURAL-NETWORK; ATTENTION; RULES;
DOI
10.1016/j.artint.2022.103828
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Explaining the predictions of AI models is paramount in safety-critical applications, such as in legal or medical domains. One form of explanation for a prediction is an extractive rationale, i.e., a subset of features of an instance that lead the model to give its prediction on that instance. For example, the subphrase "he stole the mobile phone" can be an extractive rationale for the prediction of "Theft". Previous works on generating extractive rationales usually employ a two-phase model: a selector that selects the most important features (i.e., the rationale) followed by a predictor that makes the prediction based exclusively on the selected features. One disadvantage of these works is that the main signal for learning to select features comes from the comparison of the answers given by the predictor to the ground-truth answers. In this work, we propose to squeeze more information from the predictor via an information calibration method. More precisely, we train two models jointly: one is a typical neural model that solves the task at hand in an accurate but black-box manner, and the other is a selector-predictor model that additionally produces a rationale for its prediction. The first model is used as a guide for the second model. We use an adversarial technique to calibrate the information extracted by the two models such that the difference between them is an indicator of the missed or over-selected features. In addition, for natural language tasks, we propose a language-model-based regularizer to encourage the extraction of fluent rationales. Experimental results on a sentiment analysis task, a hate speech recognition task, as well as on three tasks from the legal domain show the effectiveness of our approach to rationale extraction. (c) 2022 Elsevier B.V. All rights reserved.
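To make the two-phase architecture in the abstract concrete, here is a minimal toy sketch of a selector-predictor pipeline and of the calibration gap used as a training signal. This is a hypothetical simplification: the paper uses jointly trained neural networks and an adversarial discriminator, whereas the hand-crafted token scores, keyword-based predictor, and the plain total-variation distance below are stand-ins for illustration only.

```python
def select_rationale(tokens, scores, k):
    """Selector (toy): keep the k highest-scoring tokens as the rationale."""
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    return [tokens[i] for i in sorted(top)]  # preserve original token order


def predict(rationale, cue_words):
    """Predictor (toy): sees ONLY the rationale, never the full input."""
    hits = sum(1 for t in rationale if t in cue_words)
    return "Theft" if hits >= 2 else "Other"


def calibration_gap(p_guide, p_sp):
    """Calibration signal (sketch): in the paper, an adversarial
    discriminator is trained to distinguish the guide model's features
    from the selector-predictor's; here, total-variation distance
    between the two output distributions serves as a stand-in
    indicator of missed or over-selected features."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p_guide, p_sp))


tokens = "he stole the mobile phone yesterday".split()
scores = [0.2, 0.9, 0.1, 0.8, 0.7, 0.1]  # assumed selector scores
rationale = select_rationale(tokens, scores, k=3)
label = predict(rationale, cue_words={"stole", "mobile", "phone"})
gap = calibration_gap([0.9, 0.1], [0.6, 0.4])  # assumed distributions
```

With these assumed scores, the rationale is `["stole", "mobile", "phone"]` and the prediction is "Theft"; a large `gap` would signal that the selector missed (or over-selected) features relative to the black-box guide.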
Pages: 25