Comparing Speed Reduction of Adversarial Defense Systems on Deep Neural Networks

Cited by: 0
Authors
Bowman, Andrew [1 ]
Yang, Xin [1 ]
Affiliations
[1] Middle Tennessee State Univ, Dept Comp Sci, Murfreesboro, TN 37132 USA
Keywords
Adversarial Machine Learning; Defense Systems; Convolutional Neural Network; Deep Learning; MNIST; NULL Label; Adversarial Training; Attacks
DOI
10.1109/IST50367.2021.9651377
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In the field of Adversarial Machine Learning, several defense systems are employed to protect machine learning systems from data poisoning attacks, in which the training or testing data are altered to invalidate the prediction model created by the system. Although many defense systems have been proposed to combat poisoning attacks, comparisons of their effectiveness and performance cost are few. To address this issue, the NULL Label and Adversarial Training methods were implemented to protect a machine learning system against varying amounts of data poisoning. The NULL Label method not only provided better defense on average, but also affected performance less than Adversarial Training. Although no trade-off between performance and accuracy was detected in the experiment, this work will hopefully provide a framework for future experimentation on this question.
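The two defenses compared in the abstract can be sketched in a few lines: Adversarial Training augments the training set with perturbed inputs that keep their true labels, while the NULL-label defense assigns perturbed inputs to an extra "NULL" class so the model learns to flag them rather than misclassify them. The toy example below is a minimal sketch, not the paper's implementation; the linear softmax classifier, FGSM perturbations, synthetic 2-D data, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

# Sketch of the NULL-label defense (illustrative, not the paper's code):
# a linear softmax classifier gets one extra "NULL" output class, and
# FGSM-perturbed copies of the training data are labeled NULL so the
# model learns to flag adversarial inputs instead of misclassifying them.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def xent(W, X, Y):
    # mean cross-entropy of softmax(X @ W) against one-hot targets Y
    p = softmax(X @ W)
    return -np.mean(np.sum(Y * np.log(p + 1e-12), axis=1))

def grad_x(W, X, Y):
    # gradient of the cross-entropy loss with respect to the inputs X
    return (softmax(X @ W) - Y) @ W.T

def fgsm(W, X, Y, eps=0.25):
    # fast gradient sign method: one signed-gradient step that raises the loss
    return X + eps * np.sign(grad_x(W, X, Y))

def onehot(y, k):
    out = np.zeros((len(y), k))
    out[np.arange(len(y)), y] = 1.0
    return out

# toy two-class data in 2-D; class index 2 is the reserved NULL label
n, d, k = 200, 2, 3
X = np.vstack([rng.normal(-1.0, 1.0, (n, d)), rng.normal(+1.0, 1.0, (n, d))])
y = np.array([0] * n + [1] * n)
Y = onehot(y, k)
Y_null = onehot(np.full(len(X), 2), k)

W, lr = np.zeros((d, k)), 0.5
for _ in range(300):
    # clean step: fit the true labels
    W -= lr * (X.T @ (softmax(X @ W) - Y)) / len(X)
    # defense step: push FGSM-perturbed inputs toward the NULL class
    Xa = fgsm(W, X, Y)
    W -= lr * (Xa.T @ (softmax(Xa @ W) - Y_null)) / len(Xa)

clean_acc = float((softmax(X @ W).argmax(1) == y).mean())
null_rate = float((softmax(fgsm(W, X, Y) @ W).argmax(1) == 2).mean())
print(f"clean accuracy: {clean_acc:.2f}, adversarial inputs flagged NULL: {null_rate:.2f}")
```

Adversarial Training differs only in the defense step: the perturbed batch keeps its true labels `Y` rather than `Y_null`, hardening the original classes instead of adding a reject class.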
Pages: 5