Resilience of Pruned Neural Network Against Poisoning Attack

Cited by: 0
Authors
Zhao, Bingyin [1 ]
Lao, Yingjie [1 ]
Affiliations
[1] Clemson Univ, Dept Elect & Comp Engn, Clemson, SC 29634 USA
Source
PROCEEDINGS OF THE 2018 13TH INTERNATIONAL CONFERENCE ON MALICIOUS AND UNWANTED SOFTWARE (MALWARE 2018) | 2018
Keywords
DOI
Not available
Chinese Library Classification
TP31 [Computer Software];
Discipline Codes
081202 ; 0835 ;
Abstract
In the past several years, machine learning, especially deep learning, has achieved remarkable success in various fields. However, it has recently been shown that machine learning algorithms are vulnerable to well-crafted attacks. For instance, a poisoning attack can manipulate the results of a predictive model by deliberately contaminating the training data. In this paper, we investigate the implications of network pruning for resilience against poisoning attacks. Our experimental results show that pruning can effectively increase the difficulty of a poisoning attack, possibly due to the reduced degrees of freedom in the pruned network. For example, to degrade test accuracy below 60% on the MNIST-1-7 dataset, fewer than 10 retraining epochs with poisoning data are needed for the original network, while about 16 and 40 epochs are required for the 90% and 99% pruned networks, respectively.
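The abstract refers to 90% and 99% pruned networks but does not specify the pruning method here; a common choice is magnitude-based weight pruning, sketched below as a minimal, hypothetical illustration (function name and details are assumptions, not taken from the paper).

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    A 90% pruned network in the abstract's sense corresponds to
    sparsity=0.90, i.e. only the 10% largest-magnitude weights survive.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold     # keep only the larger weights
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))            # toy weight matrix
pruned = prune_by_magnitude(w, 0.90)
print(np.mean(pruned == 0))                # roughly 0.90 of weights zeroed
```

In practice the surviving weights would then be fine-tuned; the paper's observation is that this sparser parameterization leaves an attacker fewer degrees of freedom to exploit during poisoned retraining.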
Pages: 78 - 83
Page count: 6
Related Papers
50 results
  • [1] Model Poisoning Attack Against Neural Network Interpreters in IoT Devices
    Zhang, Xianglong
    Li, Feng
    Zhang, Huanle
    Zhang, Haoxin
    Huang, Zhijian
    Fan, Lisheng
    Cheng, Xiuzhen
    Hu, Pengfei
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (03) : 1715 - 1730
  • [2] TensorClog: An Imperceptible Poisoning Attack on Deep Neural Network Applications
    Shen, Juncheng
    Zhu, Xiaolei
    Ma, De
    IEEE ACCESS, 2019, 7 : 41498 - 41506
  • [3] Model Poisoning Attack on Neural Network Without Reference Data
    Zhang, Xianglong
    Zhang, Huanle
    Zhang, Guoming
    Li, Hong
    Yu, Dongxiao
    Cheng, Xiuzhen
    Hu, Pengfei
    IEEE TRANSACTIONS ON COMPUTERS, 2023, 72 (10) : 2978 - 2989
  • [4] COST AWARE UNTARGETED POISONING ATTACK AGAINST GRAPH NEURAL NETWORKS
    Han, Yuwei
    Lai, Yuni
    Zhu, Yulin
    Zhou, Kai
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 4940 - 4944
  • [5] Microsoft Windows vs. Apple Mac OS X: Resilience against ARP cache poisoning attack in a local area network
    Trabelsi, Zouheir
INFORMATION SECURITY JOURNAL, 2016, 25 (1-3) : 68 - 82
  • [6] Transferring Robustness for Graph Neural Network Against Poisoning Attacks
    Tang, Xianfeng
    Li, Yandong
    Sun, Yiwei
    Yao, Huaxiu
    Mitra, Prasenjit
    Wang, Suhang
    PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING (WSDM '20), 2020, : 600 - 608
  • [7] Lancet: Better network resilience by designing for pruned failure sets
Chang, Yiyang
Jiang, Chuan
Chandra, Ashish
Rao, Sanjay
Tawarmalani, Mohit
ACM SIGMETRICS PERFORMANCE EVALUATION REVIEW, 2020, 48 : 53 - 54
  • [8] Lancet: Better network resilience by designing for pruned failure sets
    Chang, Yiyang
    Jiang, Chuan
    Chandra, Ashish
    Rao, Sanjay
    Tawarmalani, Mohit
    PROCEEDINGS OF THE ACM ON MEASUREMENT AND ANALYSIS OF COMPUTING SYSTEMS, 2019, 3 (03)
  • [9] Secure neural network watermarking protocol against forging attack
    Zhu, Renjie
    Zhang, Xinpeng
    Shi, Mengte
    Tang, Zhenjun
    EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING, 2020, 2020 (01)