Resilience of Pruned Neural Network Against Poisoning Attack

Cited by: 0
Authors
Zhao, Bingyin [1 ]
Lao, Yingjie [1 ]
Affiliations
[1] Clemson Univ, Dept Elect & Comp Engn, Clemson, SC 29634 USA
Source
PROCEEDINGS OF THE 2018 13TH INTERNATIONAL CONFERENCE ON MALICIOUS AND UNWANTED SOFTWARE (MALWARE 2018) | 2018
Keywords
DOI
Not available
CLC Classification
TP31 [Computer Software];
Discipline Code
081202 ; 0835 ;
Abstract
In the past several years, machine learning, especially deep learning, has achieved remarkable success in various fields. However, it has recently been shown that machine learning algorithms are vulnerable to well-crafted attacks. For instance, poisoning attacks are effective in manipulating the results of a predictive model by deliberately contaminating the training data. In this paper, we investigate the implications of network pruning for resilience against poisoning attacks. Our experimental results show that pruning can effectively increase the difficulty of a poisoning attack, possibly due to the reduced degrees of freedom in the pruned network. For example, to degrade test accuracy below 60% on the MNIST 1-7 dataset, fewer than 10 retraining epochs with poisoned data are needed for the original network, while about 16 and 40 epochs are required for the 90% and 99% pruned networks, respectively.
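The pruning the abstract refers to can be illustrated with a minimal sketch of magnitude-based weight pruning, which zeroes out a given fraction of the smallest-magnitude weights in a layer. This is a common pruning technique, not necessarily the exact procedure used in the paper; the function name and NumPy-based setup are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of smallest-magnitude weights.

    Returns the pruned weight array and the boolean keep-mask.
    (Illustrative sketch; the paper's exact pruning method may differ.)
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # Threshold at the k-th smallest magnitude; keep strictly larger weights.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Example: prune 90% of a random weight matrix, as in the paper's 90% setting.
rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))
pruned, mask = magnitude_prune(w, 0.90)
print(f"fraction of weights kept: {mask.mean():.2f}")  # ~0.10
```

At 90% or 99% sparsity the surviving weights carry the full predictive burden, which is consistent with the paper's observation that the pruned network leaves an attacker fewer degrees of freedom to exploit during poisoned retraining.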
Pages: 78-83
Page count: 6
Related Papers
50 records total
  • [31] A Deep Neural Network Attack Simulation against Data Storage of Autonomous Vehicles
    Kim, Insup
    Lee, Ganggyu
    Lee, Seyoung
    Choi, Wonsuk
    SAE INTERNATIONAL JOURNAL OF CONNECTED AND AUTOMATED VEHICLES, 2024, 7 (02):
  • [32] Chosen plaintext attack against neural network-based symmetric cipher
    Arvandi, M.
    Sadeghian, A.
    2007 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-6, 2007, : 847 - +
  • [33] Securing ZigBee Communications Against Constant Jamming Attack Using Neural Network
    Pirayesh, Hossein
    Sangdeh, Pedram Kheirkhah
    Zeng, Huacheng
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (06) : 4957 - 4968
  • [34] A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples
    Liu, Guanxiong
    Khalil, Issa
    Khreishah, Abdallah
    Phan, NhatHai
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 834 - 846
  • [35] Node Injection Attack Based on Label Propagation Against Graph Neural Network
    Zhu, Peican
    Pan, Zechen
    Tang, Keke
    Cui, Xiaodong
    Wang, Jinhuan
    Xuan, Qi
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024, : 1 - 13
  • [36] Attack Resilience of the Evolving Scientific Collaboration Network
    Liu, Xiao Fan
    Xu, Xiao-Ke
    Small, Michael
    Tse, Chi K.
    PLOS ONE, 2011, 6 (10):
  • [37] A Cyber-Resilience Enhancement Method for Network Controlled Microgrid against Denial of Service Attack
    Dai, Jiahong
    Xu, Yan
    Wang, Yu
    Nguyen, Tung-Lam
    Dasgupta, Souvik
    IECON 2020: THE 46TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2020, : 3511 - 3516
  • [38] Chronic Poisoning: Backdoor Attack against Split Learning
    Yu, Fangchao
    Zeng, Bo
    Zhao, Kai
    Pang, Zhi
    Wang, Lina
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024, : 16531 - 16538
  • [39] Poisoning Attack Against Estimating From Pairwise Comparisons
    Ma, Ke
    Xu, Qianqian
    Zeng, Jinshan
    Cao, Xiaochun
    Huang, Qingming
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (10) : 6393 - 6408
  • [40] A vibration response identification neural network with resilience against missing data anomalies
    Zhang, Ruiheng
    Zhou, Quan
    Tian, Lulu
    Zhang, Jie
    Bai, Libing
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2022, 33 (07)