Discretization Based Solutions for Secure Machine Learning Against Adversarial Attacks

Cited: 24
Authors
Panda, Priyadarshini [1]
Chakraborty, Indranil [1]
Roy, Kaushik [1]
Affiliations
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
Funding
U.S. National Science Foundation (NSF)
Keywords
Adversarial robustness; deep learning; discretization techniques; binarized neural networks
DOI
10.1109/ACCESS.2019.2919463
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
Adversarial examples are perturbed inputs designed (from a deep learning network's (DLN) parameter gradients) to mislead the DLN at test time. Intuitively, constraining the dimensionality of a network's inputs or parameters reduces the "space" in which adversarial examples exist. Guided by this intuition, we demonstrate that discretization greatly improves the robustness of DLNs against adversarial attacks. Specifically, discretizing the input space (reducing the allowed pixel levels from 256 values, or 8-bit, to 4 values, or 2-bit) substantially improves the adversarial robustness of DLNs over a wide range of perturbations, with minimal loss in clean test accuracy. Furthermore, we find that binarized neural networks (BNNs) and related variants are intrinsically more robust than their full-precision counterparts in adversarial scenarios. Combining input discretization with BNNs improves robustness further, even removing the need for adversarial training at certain perturbation magnitudes. We evaluate the effect of discretization on the MNIST, CIFAR-10, CIFAR-100, and ImageNet datasets. Across all datasets, we observe maximal adversarial resistance with 2-bit input discretization, which incurs an adversarial accuracy loss of only ~1%-2% relative to clean test accuracy against single-step attacks. We also show that standalone discretization remains vulnerable to stronger multi-step attacks, necessitating the combination of discretization with adversarial training as an improved defense strategy.
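The quantization step the abstract describes is simple enough to sketch. The following is a minimal, hypothetical PyTorch illustration (not the authors' released implementation) of the defended pipeline: pixel intensities are snapped from 256 levels (8-bit) down to 4 levels (2-bit) before classification, and robustness is probed with a single-step FGSM attack. The names net, eps, and the helper functions are illustrative stand-ins.

import torch
import torch.nn.functional as F

def discretize(x, bits=2):
    """Snap inputs in [0, 1] onto 2**bits uniformly spaced levels
    (bits=2 gives the 4-level quantization the abstract reports)."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def fgsm(net, x, y, eps=8 / 255):
    """Single-step FGSM: one signed-gradient step on the cross-entropy loss.
    The attack is crafted against the raw (undiscretized) input because
    torch.round has zero gradient almost everywhere; attacking through the
    quantizer would require a straight-through estimator."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(net(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

@torch.no_grad()
def adversarial_accuracy(net, x_adv, y):
    """Accuracy of the defended pipeline: discretize first, then classify."""
    preds = net(discretize(x_adv)).argmax(dim=1)
    return (preds == y).float().mean().item()

Comparing adversarial_accuracy on clean versus FGSM inputs reproduces the kind of single-step comparison the abstract reports; a multi-step attack such as PGD would iterate the signed-gradient step with projection and, per the abstract, still requires adversarial training on top of discretization to defend against.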
Pages: 70157-70168
Page count: 12