Analyzing the Noise Robustness of Deep Neural Networks

Cited by: 0
Authors
Liu, Mengchen [1 ]
Liu, Shixia [1 ]
Su, Hang [2 ]
Cao, Kelei [1 ]
Zhu, Jun [2 ]
Affiliations
[1] Tsinghua Univ, State Key Lab Intell Tech Sys, TNList Lab, Sch Software, Beijing, Peoples R China
[2] Tsinghua Univ, CBICR Ctr, State Key Lab Intell Tech Sys, Dept Comp Sci Tech, TNList Lab, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Deep neural networks; robustness; adversarial examples; back propagation; multi-level visualization; VISUALIZATION; DESIGN; TRACKING; TOOL;
DOI
Not available
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Deep neural networks (DNNs) are vulnerable to maliciously generated adversarial examples. These examples are intentionally designed with imperceptible perturbations and often mislead a DNN into making an incorrect prediction. This phenomenon poses a significant risk when applying DNNs to safety-critical applications, such as driverless cars. To address this issue, we present a visual analytics approach to explain the primary cause of the wrong predictions introduced by adversarial examples. The key is to analyze the datapaths of adversarial examples and compare them with those of normal examples. A datapath is a group of critical neurons and their connections. To this end, we formulate datapath extraction as a subset selection problem and solve it approximately based on back-propagation. A multi-level visualization, consisting of a segmented DAG (layer level), an Euler diagram (feature-map level), and a heat map (neuron level), is designed to help experts investigate datapaths from the high-level layers down to the detailed neuron activations. Two case studies demonstrate the promise of our approach in explaining the working mechanism of adversarial examples.
Pages
60-71 (12 pages)
Related Papers
50 records in total
  • [1] Analyzing the Noise Robustness of Deep Neural Networks
    Cao, Kelei
    Liu, Mengchen
    Su, Hang
    Wu, Jing
    Zhu, Jun
    Liu, Shixia
    [J]. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2021, 27 (07) : 3289 - 3304
  • [2] An efficient test method for noise robustness of deep neural networks
    Yasuda, Muneki
    Sakata, Hironori
    Cho, Seung-Il
    Harada, Tomochika
    Tanaka, Atushi
    Yokoyama, Michio
    [J]. IEICE NONLINEAR THEORY AND ITS APPLICATIONS, 2019, 10 (02): : 221 - 235
  • [3] Evaluating Robustness to Noise and Compression of Deep Neural Networks for Keyword Spotting
    Pereira, Pedro H.
    Beccaro, Wesley
    Ramirez, Miguel A.
    [J]. IEEE ACCESS, 2023, 11 : 53224 - 53236
  • [4] Noise robustness in multilayer neural networks
    Copelli, M
    Eichhorn, R
    Kinouchi, O
    Biehl, M
    Simonetti, R
    Riegler, P
    Caticha, N
    [J]. EUROPHYSICS LETTERS, 1997, 37 (06): : 427 - 432
  • [5] Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks
    Vahdat, Arash
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [6] ε-Weakened Robustness of Deep Neural Networks
    Huang, Pei
    Yang, Yuting
    Liu, Minghao
    Jia, Fuqi
    Ma, Feifei
    Zhang, Jian
    [J]. PROCEEDINGS OF THE 31ST ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2022, 2022, : 126 - 138
  • [7] Robustness Guarantees for Deep Neural Networks on Videos
    Wu, Min
    Kwiatkowska, Marta
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 308 - 317
  • [8] Robustness Verification Boosting for Deep Neural Networks
    Feng, Chendong
    [J]. 2019 6TH INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND CONTROL ENGINEERING (ICISCE 2019), 2019, : 531 - 535
  • [9] SoK: Certified Robustness for Deep Neural Networks
    Li, Linyi
    Xie, Tao
    Li, Bo
    [J]. 2023 IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP, 2023, : 1289 - 1310