Comparing Speed Reduction of Adversarial Defense Systems on Deep Neural Networks

Cited: 0
Authors
Bowman, Andrew [1 ]
Yang, Xin [1 ]
Affiliations
[1] Middle Tennessee State Univ, Dept Comp Sci, Murfreesboro, TN 37132 USA
Keywords
Adversarial Machine Learning; Defense Systems; Convolutional Neural Network; Deep Learning; MNIST; NULL Label; Adversarial Training; Attacks
DOI
10.1109/IST50367.2021.9651377
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the field of Adversarial Machine Learning, defense systems are employed to protect machine learning systems from data poisoning attacks, in which the training or testing data is altered to invalidate the model's predictions. Although many defense systems have been proposed to combat poisoning attacks, comparisons of their effectiveness and performance cost are few. To address this issue, the NULL Label and Adversarial Training methods were implemented to protect a machine learning system against varying amounts of data poisoning. The NULL Label method not only provided a better defense on average, but also affected performance less than Adversarial Training. Although no trade-off between performance and accuracy was detected in the experiment, this work will hopefully provide a framework for future experimentation on this matter.
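The paper's own experimental setup (a convolutional network on MNIST) is not reproduced in this record, but the Adversarial Training defense it benchmarks can be illustrated in miniature. The following is a hedged sketch, not the authors' implementation: it applies FGSM-style adversarial training to a plain logistic-regression model on synthetic 2-D data, where each training batch is augmented with perturbed copies of itself. All function names, hyperparameters, and data shapes here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: step each input in the sign of the loss gradient w.r.t. x.

    For logistic loss, dL/dz = (p - y) and dz/dx = w, so the input
    gradient is (p - y) * w for each sample.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.1, epochs=200, seed=0):
    """Train on clean inputs plus their FGSM perturbations each epoch."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=x.shape[1])
    b = 0.0
    for _ in range(epochs):
        x_adv = fgsm_perturb(x, y, w, b, eps)
        x_all = np.vstack([x, x_adv])          # clean + adversarial batch
        y_all = np.concatenate([y, y])
        p = sigmoid(x_all @ w + b)
        grad_w = (p - y_all) @ x_all / len(y_all)
        grad_b = (p - y_all).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic two-class data: Gaussian blobs centered at -1.5 and +1.5.
rng = np.random.default_rng(1)
x0 = rng.normal(loc=-1.5, size=(100, 2))
x1 = rng.normal(loc=+1.5, size=(100, 2))
x = np.vstack([x0, x1])
y = np.concatenate([np.zeros(100), np.ones(100)])

w, b = adversarial_train(x, y)
acc = ((sigmoid(x @ w + b) > 0.5) == y).mean()
```

The extra cost the paper measures comes from exactly the augmentation step above: every epoch performs a second forward pass (to craft `x_adv`) and trains on a batch twice the original size, which is why Adversarial Training slows training relative to defenses such as the NULL Label that only extend the label space.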
Pages: 5