Interpretable Deep Learning under Fire

Cited: 0
Authors
Zhang, Xinyang [1]
Wang, Ningfei [2]
Shen, Hua [1]
Ji, Shouling [3,4]
Luo, Xiapu [5]
Wang, Ting [1]
Affiliations
[1] Penn State Univ, University Pk, PA 16802 USA
[2] Univ Calif Irvine, Irvine, CA USA
[3] Zhejiang Univ, Hangzhou, Peoples R China
[4] Alibaba, ZJU Joint Inst Frontier Technol, Hangzhou, Peoples R China
[5] Hong Kong Polytech Univ, Hong Kong, Peoples R China
Funding
National Science Foundation (USA);
Keywords
DOI
Not available
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Providing explanations for deep neural network (DNN) models is crucial for their use in security-sensitive domains. A plethora of interpretation models have been proposed to help users understand the inner workings of DNNs: how does a DNN arrive at a specific decision for a given input? The improved interpretability is believed to offer a sense of security by involving humans in the decision-making process. Yet, due to its data-driven nature, the interpretability itself is potentially susceptible to malicious manipulations, about which little is known thus far. Here we bridge this gap by conducting the first systematic study on the security of interpretable deep learning systems (IDLSes). We show that existing IDLSes are highly vulnerable to adversarial manipulations. Specifically, we present ADV2, a new class of attacks that generate adversarial inputs not only misleading target DNNs but also deceiving their coupled interpretation models. Through empirical evaluation against four major types of IDLSes on benchmark datasets and in security-critical applications (e.g., skin cancer diagnosis), we demonstrate that with ADV2 the adversary is able to arbitrarily designate an input's prediction and interpretation. Further, with both analytical and empirical evidence, we identify the prediction-interpretation gap as one root cause of this vulnerability - a DNN and its interpretation model are often misaligned, resulting in the possibility of exploiting both models simultaneously. Finally, we explore potential countermeasures against ADV2, including leveraging its low transferability and incorporating it in an adversarial training framework. Our findings shed light on designing and operating IDLSes in a more secure and informative fashion, leading to several promising research directions.
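The attack described in the abstract couples two objectives: push the classifier toward an adversary-chosen label while pulling a differentiable interpretation (e.g., a gradient-based saliency map) toward an adversary-chosen attribution map. Below is a minimal PyTorch sketch of that joint-optimization idea. It is an illustrative paraphrase under stated assumptions, not the paper's exact ADV2 procedure for each interpreter family: the gradient-saliency interpreter, the L_inf PGD-style update, and the hyperparameters eps, alpha, steps, and lam are choices made here for exposition.

```python
# Illustrative sketch of a joint prediction-interpretation attack
# (a paraphrase of the general idea; not the exact ADV2 algorithm).
# Assumptions: a classifier `model` returning logits, a single image x of
# shape (1, C, H, W) with values in [0, 1], and a gradient-based interpreter.
import torch
import torch.nn.functional as F


def saliency_map(model, x, label):
    # Gradient-based interpretation: |d logit_label / d x| summed over channels.
    # create_graph=True keeps the map differentiable w.r.t. x so it can itself
    # be attacked; for purely piecewise-linear (ReLU) networks this second-order
    # signal can vanish, and smooth surrogates are typically needed in practice.
    logit = model(x)[:, label].sum()
    grad, = torch.autograd.grad(logit, x, create_graph=True)
    m = grad.abs().sum(dim=1, keepdim=True)            # (1, 1, H, W)
    return m / (m.amax() + 1e-12)                      # normalize to [0, 1]


def joint_attack(model, x, target_label, target_map,
                 eps=8 / 255, alpha=1 / 255, steps=300, lam=1.0):
    # PGD-style L_inf attack minimizing
    #   CE(f(x'), target_label) + lam * || g(x') - target_map ||^2,
    # i.e. force the target prediction while steering the interpretation.
    x_adv = x.detach().clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        pred_loss = F.cross_entropy(model(x_adv), target_label)
        int_loss = F.mse_loss(saliency_map(model, x_adv, target_label.item()),
                              target_map)
        grad, = torch.autograd.grad(pred_loss + lam * int_loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()        # signed-gradient step
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)              # keep a valid image
    return x_adv.detach()
```

A caller would supply, for example, a classifier in eval mode, an input x of shape (1, 3, H, W) in [0, 1], target_label = torch.tensor([k]) for the desired class k, and a target_map of shape (1, 1, H, W) concentrated on an adversary-chosen region.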
Pages: 1659-1676
Number of pages: 18
Related Papers
50 records in total
  • [21] Generalizable and Interpretable Deep Learning for Network Congestion Prediction
    Poularakis, Konstantinos
    Qin, Qiaofeng
    Le, Franck
    Kompella, Sastry
    Tassiulas, Leandros
    2021 IEEE 29TH INTERNATIONAL CONFERENCE ON NETWORK PROTOCOLS (ICNP 2021), 2021,
  • [22] The Structure of Deep Neural Network for Interpretable Transfer Learning
    Kim, Dowan
    Lim, Woohyun
    Hong, Minye
    Kim, Hyeoncheol
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (BIGCOMP), 2019, : 181 - 184
  • [23] Infusing theory into deep learning for interpretable reactivity prediction
    Wang, Shih-Han
    Pillai, Hemanth Somarajan
    Wang, Siwen
    Achenie, Luke E. K.
    Xin, Hongliang
    NATURE COMMUNICATIONS, 2021, 12 (01)
  • [24] DeepEnhancerPPO: An Interpretable Deep Learning Approach for Enhancer Classification
    Mu, Xuechen
    Huang, Zhenyu
    Chen, Qiufen
    Shi, Bocheng
    Xu, Long
    Xu, Ying
    Zhang, Kai
    INTERNATIONAL JOURNAL OF MOLECULAR SCIENCES, 2024, 25 (23)
  • [25] Interpretable patent recommendation with knowledge graph and deep learning
    Chen, Han
    Deng, Weiwei
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [26] Antibody structure prediction using interpretable deep learning
    Ruffolo, Jeffrey A.
    Sulam, Jeremias
    Gray, Jeffrey J.
    PATTERNS, 2022, 3 (02):
  • [27] Feature Analysis Network: An Interpretable Idea in Deep Learning
    Li, Xinyu
    Gao, Xiaoguang
    Wang, Qianglong
    Wang, Chenfeng
    Li, Bo
    Wan, Kaifang
    COGNITIVE COMPUTATION, 2024, 16 (03) : 803 - 826
  • [28] Interpretable Sentiment Analysis based on Deep Learning: An overview
    Jawale, Shila
    Sawarkar, S. D.
    2020 IEEE PUNE SECTION INTERNATIONAL CONFERENCE (PUNECON), 2020, : 65 - 70
  • [29] Infusing theory into deep learning for interpretable reactivity prediction
    Wang, Shih-Han
    Pillai, Hemanth Somarajan
    Wang, Siwen
    Achenie, Luke E. K.
    Xin, Hongliang
    NATURE COMMUNICATIONS, 2021, 12 (01)
  • [30] An Interpretable Deep Learning Model for Automatic Sound Classification
    Zinemanas, Pablo
    Rocamora, Martin
    Miron, Marius
    Font, Frederic
    Serra, Xavier
    ELECTRONICS, 2021, 10 (07)