Regional Tree Regularization for Interpretability in Deep Neural Networks

Cited by: 0
Authors
Wu, Mike [1 ]
Parbhoo, Sonali [2 ,3 ]
Hughes, Michael C. [4 ]
Kindle, Ryan [5 ]
Celi, Leo [6 ]
Zazzi, Maurizio [7 ]
Roth, Volker [2 ]
Doshi-Velez, Finale [3 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
[2] Univ Basel, Basel, Switzerland
[3] Harvard Univ, SEAS, Cambridge, MA 02138 USA
[4] Tufts Univ, Medford, MA 02155 USA
[5] Massachusetts Gen Hosp, Boston, MA 02114 USA
[6] MIT, Cambridge, MA 02139 USA
[7] Univ Siena, Siena, Italy
Funding
Swiss National Science Foundation;
Keywords
PREDICTION;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
The lack of interpretability remains a barrier to adopting deep neural networks across many safety-critical domains. Tree regularization was recently proposed to encourage a deep neural network's decisions to resemble those of a globally compact, axis-aligned decision tree. However, it is often unreasonable to expect a single tree to predict well across all possible inputs; in practice, doing so can yield optima that are neither interpretable nor performant. To address this issue, we propose regional tree regularization, a method that encourages a deep model to be well-approximated by several separate decision trees specific to predefined regions of the input space. Across many datasets, including two healthcare applications, we show our approach delivers simpler explanations than other regularization schemes without compromising accuracy. Specifically, our regional regularizer finds many more "desirable" optima than its global analogue.
Pages: 6413-6421
Page count: 9
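
As a rough illustration of the idea described in the abstract, the Python sketch below fits one small decision tree per predefined input region to mimic a trained model's predictions and sums the trees' average decision-path lengths as a per-region complexity score. The toy MLP, the hand-chosen region split, and the helper names (average_path_length, regional_tree_penalty) are assumptions made here for clarity; the tree-regularization papers make such a penalty differentiable via a learned surrogate so it can be used during gradient-based training, which this sketch does not attempt.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier


def average_path_length(tree, X):
    # Mean number of nodes visited from root to leaf over the samples in X;
    # deeper trees (longer explanations) give larger values.
    paths = tree.decision_path(X)          # sparse indicator of visited nodes
    return paths.sum(axis=1).mean()


def regional_tree_penalty(model, X, region_ids, max_depth=5):
    # For each predefined region, fit a small decision tree to mimic the
    # model's predicted labels there and accumulate its average path length.
    penalty = 0.0
    for r in np.unique(region_ids):
        X_r = X[region_ids == r]
        y_hat = model.predict(X_r)         # labels the tree must reproduce
        if len(np.unique(y_hat)) < 2:      # region is already trivially simple
            continue
        tree = DecisionTreeClassifier(max_depth=max_depth).fit(X_r, y_hat)
        penalty += average_path_length(tree, X_r)
    return penalty


# Toy demonstration: a 2-class XOR-like problem, a small MLP standing in for
# the deep network, and an arbitrary split of the input space into two regions.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

regions = (X[:, 0] > 0).astype(int)        # hypothetical predefined regions
print("regional tree penalty:", regional_tree_penalty(net, X, regions))

A lower penalty means each region of the model's behavior can be mimicked by a shallower tree, i.e. admits a shorter region-specific explanation.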