Resilience of Pruned Neural Network Against Poisoning Attack

Cited: 0
Authors
Zhao, Bingyin [1 ]
Lao, Yingjie [1 ]
Affiliations
[1] Clemson Univ, Dept Elect & Comp Engn, Clemson, SC 29634 USA
Source
PROCEEDINGS OF THE 2018 13TH INTERNATIONAL CONFERENCE ON MALICIOUS AND UNWANTED SOFTWARE (MALWARE 2018) | 2018
Keywords
DOI
Not available
CLC number
TP31 [Computer Software];
Discipline codes
081202; 0835
Abstract
In the past several years, machine learning, especially deep learning, has achieved remarkable success in various fields. However, it has recently been shown that machine learning algorithms are vulnerable to well-crafted attacks. For instance, poisoning attacks manipulate the results of a predictive model by deliberately contaminating the training data. In this paper, we investigate the implications of network pruning for resilience against poisoning attacks. Our experimental results show that pruning can effectively increase the difficulty of a poisoning attack, possibly due to the reduced degrees of freedom in the pruned network. For example, to degrade the test accuracy below 60% on the MNIST-1-7 dataset, fewer than 10 retraining epochs with poisoned data are needed for the original network, while about 16 and 40 epochs are required for the 90% and 99% pruned networks, respectively.
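The abstract refers to "90% and 99% pruned networks" without specifying the pruning criterion; a common choice, shown here as a hedged illustration (not necessarily the authors' exact method), is magnitude-based pruning, which zeros out the fraction of weights with the smallest absolute values:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights.

    `weights` is a flat list of floats; a real pruner would operate on
    per-layer tensors, but the selection rule is the same.
    """
    n_prune = int(sparsity * len(weights))
    if n_prune == 0:
        return list(weights)
    # Indices of the n_prune smallest-magnitude weights.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned_idx = set(order[:n_prune])
    return [0.0 if i in pruned_idx else w for i, w in enumerate(weights)]

# Toy layer with 10 weights; 90% sparsity keeps only the single
# largest-magnitude weight (-1.2 here).
w = [0.8, -0.05, 0.3, -1.2, 0.01, 0.6, -0.02, 0.9, 0.15, -0.4]
w90 = magnitude_prune(w, 0.9)
print(w90)
```

The intuition from the paper's results is that the surviving weights are the only parameters a poisoned retraining run can still move, so a 99% pruned network offers the attacker far fewer degrees of freedom than the original dense network.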
Pages: 78-83
Page count: 6
Related Papers
50 records
  • [21] A Flexible Poisoning Attack Against Machine Learning
    Jiang, Wenbo
    Li, Hongwei
    Liu, Sen
    Ren, Yanzhi
    He, Miao
    ICC 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2019,
  • [22] A Poisoning Attack Against Cryptocurrency Mining Pools
    Ahmed, Mohiuddin
    Wei, Jinpeng
    Wang, Yongge
    Al-Shaer, Ehab
    DATA PRIVACY MANAGEMENT, CRYPTOCURRENCIES AND BLOCKCHAIN TECHNOLOGY, 2018, 11025 : 140 - 154
  • [23] Understanding the Resilience of Neural Network Ensembles against Faulty Training Data
    Chan, Abraham
    Narayanan, Niranjhana
    Gujarati, Arpan
    Pattabiraman, Karthik
    Gopalakrishnan, Sathish
    2021 IEEE 21ST INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY (QRS 2021), 2021, : 1100 - 1111
  • [24] Reliability Evaluation of Pruned Neural Networks against Errors on Parameters
    Gao, Zhen
    Wei, Xiaohui
    Zhang, Han
    Li, Wenshuo
    Ge, Guangjun
    Wang, Yu
    Reviriego, Pedro
    2020 33RD IEEE INTERNATIONAL SYMPOSIUM ON DEFECT AND FAULT TOLERANCE IN VLSI AND NANOTECHNOLOGY SYSTEMS (DFT), 2020,
  • [25] LightNet: pruned sparsed convolution neural network for image classification
    Too, Edna C.
    INTERNATIONAL JOURNAL OF COMPUTATIONAL SCIENCE AND ENGINEERING, 2023, 26 (03) : 283 - 295
  • [26] Exploring Model Poisoning Attack to Convolutional Neural Network Based Brain Tumor Detection Systems
    Lata, Kusum
    Singh, Prashant
    Saini, Sandeep
    2024 25TH INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN, ISQED 2024, 2024,
  • [27] Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error
    Kwon, Hyun
    Yoon, Hyunsoo
    Park, Ki-Woong
    2019 IEEE SECOND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE), 2019, : 136 - 139
  • [28] Dynamical resilience of networks against targeted attack
    Xu, Feifei
    Si, Shubin
    Duan, Dongli
    Lv, Changchun
    Xie, Junlan
    PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2019, 528
  • [29] Extracting rules from a GA-pruned neural network
    Zhang, ZH
    Zhou, YH
    Lu, YC
    Zhang, B
    INFORMATION INTELLIGENCE AND SYSTEMS, VOLS 1-4, 1996, : 1682 - 1685
  • [30] A distinguishing attack with a neural network
    de Souza, William A. R.
    Tomlinson, Allan
    2013 IEEE 13TH INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW), 2013, : 154 - 161