Comparing Speed Reduction of Adversarial Defense Systems on Deep Neural Networks

Cited: 0
Authors
Bowman, Andrew [1 ]
Yang, Xin [1 ]
Affiliations
[1] Middle Tennessee State Univ, Dept Comp Sci, Murfreesboro, TN 37132 USA
Keywords
Adversarial Machine Learning; Defense Systems; Convolutional Neural Network; Deep Learning; MNIST; NULL Label; Adversarial Training; Attacks
DOI
10.1109/IST50367.2021.9651377
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In the field of Adversarial Machine Learning, defense systems are employed to protect machine learning models from data poisoning attacks, in which the training or testing data is altered to invalidate the model's predictions. Although many defenses have been proposed to combat poisoning attacks, comparisons of their effectiveness and performance cost are few. To address this gap, the NULL Label and Adversarial Training methods were implemented to protect a machine learning system against varying amounts of data poisoning. The NULL Label method not only provided better defense on average but also degraded performance less than Adversarial Training. Although no trade-off between performance and accuracy was detected in the experiment, this work aims to provide a framework for future experimentation on this question.
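The two defenses compared in the abstract can be illustrated in a toy setting. The sketch below is hypothetical and is not the paper's implementation: it adversarially trains a small logistic-regression classifier by mixing FGSM-perturbed copies into the training data, on synthetic two-blob data rather than MNIST. All function names, hyperparameters, and the data setup are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y_true, w, b, eps=0.2):
    """Fast Gradient Sign Method: perturb x in the direction that increases
    the logistic loss of the current model (w, b)."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y_true) * w  # d(loss)/dx for the logistic loss
    return x + eps * np.sign(grad_x)

def train(X, y, adversarial=False, epochs=200, lr=0.1, eps=0.2):
    """Gradient-descent logistic regression; if adversarial=True, each epoch
    also trains on FGSM-perturbed copies of the data (adversarial training).
    The NULL-label defense would instead add an extra 'NULL' output class
    for suspected adversarial inputs (not shown here)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        if adversarial:
            X_adv = np.array([fgsm(x, yi, w, b, eps) for x, yi in zip(X, y)])
            Xt, yt = np.vstack([X, X_adv]), np.concatenate([y, y])
        else:
            Xt, yt = X, y
        p = sigmoid(Xt @ w + b)
        w -= lr * (Xt.T @ (p - yt)) / len(yt)
        b -= lr * float(np.mean(p - yt))
    return w, b

w, b = train(X, y, adversarial=True)
acc_clean = np.mean((sigmoid(X @ w + b) > 0.5) == y)
X_adv = np.array([fgsm(x, yi, w, b) for x, yi in zip(X, y)])
acc_adv = np.mean((sigmoid(X_adv @ w + b) > 0.5) == y)
print(f"clean accuracy: {acc_clean:.2f}, adversarial accuracy: {acc_adv:.2f}")
```

Measuring both accuracy and wall-clock training time for each defense, as the paper does, would only require wrapping the `train` calls in a timer; the adversarial variant is visibly slower because it recomputes perturbed examples every epoch.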
Pages: 5