CounterNet: End-to-End Training of Prediction-Aware Counterfactual Explanations

Cited: 2
Authors
Guo, Hangzhi [1 ]
Nguyen, Thanh H. [2 ]
Yadav, Amulya [1 ]
Institutions
[1] Penn State Univ, University Pk, PA 16802 USA
[2] Univ Oregon, Eugene, OR 97403 USA
Keywords
Counterfactual Explanation; Algorithmic Recourse; Explainable Artificial Intelligence; Interpretability
DOI
10.1145/3580305.3599290
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
This work presents CounterNet, a novel end-to-end learning framework that integrates Machine Learning (ML) model training and the generation of corresponding counterfactual (CF) explanations into a single pipeline. Counterfactual explanations offer a contrastive case, i.e., they attempt to find the smallest modification to the feature values of an instance that changes the prediction of the ML model on that instance to a predefined output. Prior techniques for generating CF explanations suffer from two major limitations: (i) all of them are post-hoc methods designed for use with proprietary ML models; as a result, their procedure for generating CF explanations is uninformed by the training of the ML model, which leads to misalignment between model predictions and explanations; and (ii) most of them rely on solving a separate, time-intensive optimization problem to find a CF explanation for each input data point, which negatively impacts their runtime. CounterNet departs from this prevalent post-hoc paradigm: unlike post-hoc methods, it optimizes CF-explanation generation only once, jointly with the predictive model. We adopt a block-wise coordinate descent procedure that effectively trains CounterNet's network. Our extensive experiments on multiple real-world datasets show that CounterNet generates high-quality predictions, consistently achieves 100% CF validity and low proximity scores (thereby striking a well-balanced cost-invalidity trade-off) for any new input instance, and runs 3x faster than existing state-of-the-art baselines.
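The training scheme described above can be illustrated with a minimal sketch. This is NOT the authors' implementation: the toy data, the logistic-regression predictor, the additive CF generator, the 0.1 proximity weight, and the finite-difference gradients are all illustrative assumptions. What the sketch does preserve is the abstract's core idea: a predictor block and a CF-generator block trained in one pipeline, with block-wise coordinate descent alternating between a prediction loss and a validity-plus-proximity loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: label is 1 when the feature sum is positive.
X = rng.normal(size=(200, 4))
y = (X.sum(axis=1) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(Wp, x):
    # Predictor block: logistic regression stands in for a neural network.
    return sigmoid(x @ Wp)

def generate_cf(Wg, x):
    # CF-generator block: an input-conditioned, bounded additive perturbation.
    return x + np.tanh(x @ Wg)

def pred_loss(Wp):
    # Standard cross-entropy for the prediction task.
    p = predict(Wp, X)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def cf_loss(Wp, Wg):
    # Validity: the predictor should flip its label on the counterfactual.
    # Proximity: the counterfactual should stay close to the original input.
    cf = generate_cf(Wg, X)
    p_cf = predict(Wp, cf)
    target = 1.0 - y
    validity = -np.mean(target * np.log(p_cf + 1e-9)
                        + (1 - target) * np.log(1 - p_cf + 1e-9))
    proximity = np.mean(np.sum((cf - X) ** 2, axis=1))
    return validity + 0.1 * proximity  # 0.1 is an arbitrary trade-off weight

def num_grad(loss, W, eps=1e-5):
    # Central finite differences keep the sketch dependency-free.
    g = np.zeros_like(W)
    flat, gflat = W.ravel(), g.ravel()
    for i in range(flat.size):
        old = flat[i]
        flat[i] = old + eps; hi = loss()
        flat[i] = old - eps; lo = loss()
        flat[i] = old
        gflat[i] = (hi - lo) / (2 * eps)
    return g

Wp = rng.normal(scale=0.1, size=4)        # predictor weights
Wg = rng.normal(scale=0.1, size=(4, 4))   # CF-generator weights
lr = 0.5
for _ in range(300):
    # Block 1: update the predictor on the prediction loss.
    Wp -= lr * num_grad(lambda: pred_loss(Wp), Wp)
    # Block 2: holding the predictor fixed, update the CF generator
    # on validity + proximity (block-wise coordinate descent).
    Wg -= lr * num_grad(lambda: cf_loss(Wp, Wg), Wg)

acc = float(np.mean((predict(Wp, X) > 0.5) == (y == 1)))
cf = generate_cf(Wg, X)
validity = float(np.mean((predict(Wp, cf) > 0.5) != (predict(Wp, X) > 0.5)))
print(f"accuracy={acc:.2f}  cf_validity={validity:.2f}")
```

Because the generator is trained once against the (simultaneously trained) predictor, producing a CF for a new instance is a single forward pass through `generate_cf`, rather than a fresh per-instance optimization as in post-hoc methods.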
Pages: 577-589
Page count: 13
Related Papers
(50 in total)
  • [21] Wang, Tao; Yi, Jiangyan; Deng, Liqun; Fu, Ruibo; Tao, Jianhua; Wen, Zhengqi. Context-Aware Mask Prediction Network for End-to-End Text-Based Speech Editing. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 6082-6086.
  • [22] Xie, Shufang; Xia, Yingce; Wu, Lijun; Huang, Yiqing; Fan, Yang; Qin, Tao. End-to-End Entity-Aware Neural Machine Translation. Machine Learning, 2022, 111(03): 1181-1203.
  • [23] Hatamizadeh, Ali; Terzopoulos, Demetri; Myronenko, Andriy. End-to-End Boundary Aware Networks for Medical Image Segmentation. Machine Learning in Medical Imaging (MLMI 2019), 2019, 11861: 187-194.
  • [24] Kang, Woo Hyun; Alam, Jahangir; Fathan, Abderrahim. End-to-End Framework for Spoof-Aware Speaker Verification. Interspeech 2022, 2022: 4362-4366.
  • [25] Ben Messaoud, Rim; Rejiba, Zeineb; Ghamri-Doudane, Yacine. An Energy-Aware End-to-End Crowdsensing Platform: Sensarena. 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC), 2016.
  • [26] Wang, Tao; Yi, Jiangyan; Fu, Ruibo; Tao, Jianhua; Wen, Zhengqi. CampNet: Context-Aware Mask Prediction for End-to-End Text-Based Speech Editing. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022, 30: 2241-2254.
  • [27] Pino, Juan; Xu, Qiantong; Ma, Xutai; Dousti, Mohammad Javad; Tang, Yun. Self-Training for End-to-End Speech Translation. Interspeech 2020, 2020: 1476-1480.
  • [28] Tampuu, Ardi; Matiisen, Tambet; Semikin, Maksym; Fishman, Dmytro; Muhammad, Naveed. A Survey of End-to-End Driving: Architectures and Training Methods. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(04): 1364-1384.
  • [29] Kahn, Jacob; Lee, Ann; Hannun, Awni. Self-Training for End-to-End Speech Recognition. 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2020: 7084-7088.
  • [30] Tang, Hao; Wang, Weiran; Gimpel, Kevin; Livescu, Karen. End-to-End Training Approaches for Discriminative Segmental Models. 2016 IEEE Workshop on Spoken Language Technology (SLT 2016), 2016: 496-502.