Analyzing the Noise Robustness of Deep Neural Networks

Cited by: 19
Authors
Cao, Kelei [1 ]
Liu, Mengchen [2 ]
Su, Hang [3 ]
Wu, Jing [4 ]
Zhu, Jun [3 ]
Liu, Shixia [1 ]
Affiliations
[1] Tsinghua University, School of Software, BNRist, Beijing 100084, China
[2] Microsoft, Redmond, WA 98052, USA
[3] Tsinghua University, Institute for AI, Department of Computer Science and Technology, THBI Lab, Beijing 100084, China
[4] Cardiff University, Cardiff CF10 3AT, Wales, UK
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China
Keywords
Neurons; Visualization; Data visualization; Feature extraction; Training; Merging; Biological neural networks; Robustness; deep neural networks; adversarial examples; explainable machine learning; visual analytics
DOI
10.1109/TVCG.2020.2969185
CLC Number
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
Adversarial examples, generated by adding small but intentionally imperceptible perturbations to normal examples, can mislead deep neural networks (DNNs) into making incorrect predictions. Although much work has been done on both adversarial attacks and defenses, a fine-grained understanding of adversarial examples is still lacking. To address this issue, we present a visual analysis method that explains why adversarial examples are misclassified. The key is to compare and analyze the datapaths of adversarial and normal examples, where a datapath is a group of critical neurons together with their connections. We formulate datapath extraction as a subset selection problem and solve it by constructing and training a neural network. A multi-level visualization, consisting of a network-level visualization of data flows, a layer-level visualization of feature maps, and a neuron-level visualization of learned features, helps investigate how the datapaths of adversarial and normal examples diverge and merge during prediction. A quantitative evaluation and a case study demonstrate the promise of our method in explaining the misclassification of adversarial examples.
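To make the abstract's core idea concrete, here is a minimal, hypothetical sketch of comparing the "datapaths" of a normal and a perturbed example. The paper formulates datapath extraction as a learned subset-selection problem solved by training a neural network; the sketch below instead approximates critical neurons with a much simpler top-k activation-magnitude heuristic, purely for illustration. All names (critical_channels, datapath_divergence) and the PyTorch/ResNet-18 setup are assumptions, not the authors' implementation.

    # Hypothetical sketch: compare per-layer "datapaths" (sets of critical
    # channels) of a normal vs. a perturbed input. The paper learns this
    # selection; here top-k activation magnitude is a crude stand-in.
    import torch
    import torchvision.models as models

    def critical_channels(model, x, k=8):
        """Per conv layer, return the indices of the k most active channels."""
        paths, hooks = {}, []

        def make_hook(name):
            def hook(module, inp, out):
                # Mean absolute activation per channel, keep the top-k channels.
                strength = out.detach().abs().mean(dim=(0, 2, 3))
                paths[name] = set(strength.topk(k).indices.tolist())
            return hook

        for name, module in model.named_modules():
            if isinstance(module, torch.nn.Conv2d):
                hooks.append(module.register_forward_hook(make_hook(name)))
        with torch.no_grad():
            model(x)
        for h in hooks:
            h.remove()
        return paths

    def datapath_divergence(p1, p2):
        """Per-layer Jaccard distance: 0 = identical critical neurons, 1 = disjoint."""
        return {name: 1 - len(p1[name] & p2[name]) / len(p1[name] | p2[name])
                for name in p1}

    model = models.resnet18(weights=None).eval()
    normal = torch.randn(1, 3, 224, 224)  # stand-in for a normal example
    # Toy sign-noise perturbation (not a real adversarial attack).
    perturbed = normal + 0.03 * torch.sign(torch.randn_like(normal))

    div = datapath_divergence(critical_channels(model, normal),
                              critical_channels(model, perturbed))
    for layer, d in div.items():
        print(f"{layer}: divergence={d:.2f}")

In the spirit of the paper, layers where this divergence jumps are candidates for explaining where an adversarial example's datapath leaves that of the normal example before merging into the wrong class.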
Pages: 3289-3304 (16 pages)
Related Papers (50 records)
  • [1] Analyzing the Noise Robustness of Deep Neural Networks
    Liu, Mengchen; Liu, Shixia; Su, Hang; Cao, Kelei; Zhu, Jun
    2018 IEEE Conference on Visual Analytics Science and Technology (VAST), 2018: 60-71
  • [2] An efficient test method for noise robustness of deep neural networks
    Yasuda, Muneki; Sakata, Hironori; Cho, Seung-Il; Harada, Tomochika; Tanaka, Atushi; Yokoyama, Michio
    Nonlinear Theory and Its Applications, IEICE, 2019, 10(2): 221-235
  • [3] Evaluating Robustness to Noise and Compression of Deep Neural Networks for Keyword Spotting
    Pereira, Pedro H.; Beccaro, Wesley; Ramirez, Miguel A.
    IEEE Access, 2023, 11: 53224-53236
  • [4] Noise robustness in multilayer neural networks
    Copelli, M.; Eichhorn, R.; Kinouchi, O.; Biehl, M.; Simonetti, R.; Riegler, P.; Caticha, N.
    Europhysics Letters, 1997, 37(6): 427-432
  • [5] Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks
    Vahdat, Arash
    Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017
  • [6] ε-Weakened Robustness of Deep Neural Networks
    Huang, Pei; Yang, Yuting; Liu, Minghao; Jia, Fuqi; Ma, Feifei; Zhang, Jian
    Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2022), 2022: 126-138
  • [7] Robustness Guarantees for Deep Neural Networks on Videos
    Wu, Min; Kwiatkowska, Marta
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 308-317
  • [8] Robustness Verification Boosting for Deep Neural Networks
    Feng, Chendong
    2019 6th International Conference on Information Science and Control Engineering (ICISCE 2019), 2019: 531-535