Discretization Based Solutions for Secure Machine Learning Against Adversarial Attacks

Cited by: 24
Authors:
Panda, Priyadarshini [1]
Chakraborty, Indranil [1]
Roy, Kaushik [1]
Affiliation:
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
Funding:
National Science Foundation (USA)
Keywords:
Adversarial robustness; deep learning; discretization techniques; binarized neural networks;
DOI: 10.1109/ACCESS.2019.2919463
CLC number: TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract
Adversarial examples are perturbed inputs designed (using a deep learning network's (DLN) parameter gradients) to mislead the DLN at test time. Intuitively, constraining the dimensionality of the inputs or parameters of a network reduces the "space" in which adversarial examples exist. Guided by this intuition, we demonstrate that discretization greatly improves the robustness of DLNs against adversarial attacks. Specifically, discretizing the input space (reducing the allowed pixel levels from 256 values, or 8-bit, to 4 values, or 2-bit) substantially improves the adversarial robustness of DLNs over a wide range of perturbations, with minimal loss in test accuracy. Furthermore, we find that binary neural networks (BNNs) and related variants are intrinsically more robust than their full-precision counterparts in adversarial scenarios. Combining input discretization with BNNs furthers the robustness, even waiving the need for adversarial training for certain perturbation magnitudes. We evaluate the effect of discretization on the MNIST, CIFAR10, CIFAR100, and ImageNet datasets. Across all datasets, we observe maximal adversarial resistance with 2-bit input discretization, which incurs an adversarial accuracy loss of just ~1%-2% relative to clean test accuracy against single-step attacks. We also show that standalone discretization remains vulnerable to stronger multi-step attacks, necessitating the use of adversarial training combined with discretization as an improved defense strategy.
Pages: 70157-70168 (12 pages)
Related papers
50 records
  • [31] SLC: A Permissioned Blockchain for Secure Distributed Machine Learning against Byzantine Attacks
    Liang, Lun
    Cao, Xianghui
    Zhang, Jun
    Sun, Changyin
    [J]. 2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 7073 - 7078
  • [32] Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems
    Newaz, A. K. M. Iqtidar
    Haque, Nur Imtiazul
    Sikder, Amit Kumar
    Rahman, Mohammad Ashiqur
    Uluagac, A. Selcuk
    [J]. 2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2020,
  • [33] Secure Quantum-based Adder Design for Protecting Machine Learning Systems Against Side-Channel Attacks
    Ain, Noor Ul
    Ahmadpour, Seyed-Sajad
    Navimipour, Nima Jafari
    Diakina, E.
    Kassa, Sankit R.
    [J]. Applied Soft Computing, 2025, 169
  • [34] Defense Against Adversarial Attacks in Deep Learning
    Li, Yuancheng
    Wang, Yimeng
    [J]. APPLIED SCIENCES-BASEL, 2019, 9 (01):
  • [35] Federated Machine Learning in Medical imaging and against Adversarial Attacks: A retrospective multicohort study
    Teo, Zhen Ling
    Zhang, Xiaoman
    Tan, Ting Fang
    Ravichandran, Narrendar
    Yong, Liu
    Ting, Daniel S. W.
    [J]. INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2023, 64 (08)
  • [36] Using Honeypots in a Decentralized Framework to Defend Against Adversarial Machine-Learning Attacks
    Younis, Fadi
    Miri, Ali
    [J]. APPLIED CRYPTOGRAPHY AND NETWORK SECURITY WORKSHOPS, 2019, 11605 : 24 - 48
  • [37] AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    [J]. PROCEEDINGS OF THE 27TH USENIX SECURITY SYMPOSIUM, 2018, : 513 - 529
  • [38] Adversarial Machine Learning Attacks against Intrusion Detection Systems: A Survey on Strategies and Defense
    Alotaibi, Afnan
    Rassam, Murad A.
    [J]. FUTURE INTERNET, 2023, 15 (02)
  • [39] Secure localization techniques in wireless sensor networks against routing attacks based on hybrid machine learning models
    Gebremariam, Gebrekiros Gebreyesus
    Panda, J.
    Indu, S.
    [J]. ALEXANDRIA ENGINEERING JOURNAL, 2023, 82 : 82 - 100
  • [40] Secure PUF: Physically Unclonable Function Based on Arbiter with Enhanced Resistance Against Machine Learning (ML) Attacks
    El-Hajj, Mohammad
    Fadlallah, Ahmad
    Chamoun, Maroun
    Serhrouchni, Ahmed
    [J]. SENSORS AND ELECTRONIC INSTRUMENTATION ADVANCES (SEIA' 19), 2019, : 216 - 221