Comparing Speed Reduction of Adversarial Defense Systems on Deep Neural Networks

Cited: 0
Authors
Bowman, Andrew [1 ]
Yang, Xin [1 ]
Affiliations
[1] Middle Tennessee State Univ, Dept Comp Sci, Murfreesboro, TN 37132 USA
Keywords
Adversarial Machine Learning; Defense Systems; Convolutional Neural Network; Deep Learning; MNIST; NULL Label; Adversarial Training; Attacks
DOI
10.1109/IST50367.2021.9651377
CLC classification
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In the field of adversarial machine learning, defense systems are employed to protect machine learning models from data poisoning attacks, in which the training or testing data is altered to invalidate the prediction model the system learns. Although many defenses have been proposed to combat poisoning attacks, comparisons of their effectiveness and performance cost are few. To address this gap, the NULL Label and Adversarial Training methods were implemented to protect a machine learning system against varying amounts of data poisoning. The NULL Label method not only provided better defense on average but also affected performance less than Adversarial Training. Although no trade-off between performance and accuracy was detected in the experiment, this work will hopefully provide a framework for future experimentation on this question.
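For readers unfamiliar with Adversarial Training, the defense the abstract compares against the NULL Label can be sketched roughly as follows: adversarial examples are generated from the current model during training and mixed into the training set so the model learns to classify them correctly. The snippet below is a hypothetical, minimal NumPy illustration on a logistic-regression "network" using an FGSM-style perturbation; the paper itself evaluates convolutional networks on MNIST, and the specific attack, model, and hyperparameters here are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """FGSM-style perturbation: step the input in the sign of the
    gradient of the binary cross-entropy loss w.r.t. the input."""
    grad_x = (sigmoid(x @ w) - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200):
    """Toy adversarial training loop: each epoch, regenerate
    adversarial examples from the current weights and fit a
    50/50 mix of clean and adversarial data."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        X_adv = fgsm(X, y, w, eps)
        X_mix = np.vstack([X, X_adv])
        y_mix = np.concatenate([y, y])
        # One gradient-descent step on the mixed batch.
        grad_w = X_mix.T @ (sigmoid(X_mix @ w) - y_mix) / len(y_mix)
        w -= lr * grad_w
    return w
```

The key design point the abstract's performance comparison hinges on is visible even in this sketch: adversarial examples must be regenerated every epoch from the current model, roughly doubling the per-epoch training cost, whereas the NULL Label approach adds an extra output class instead.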
Pages: 5