Evaluating Adversarial Robustness of Secret Key-Based Defenses

Cited by: 0
Authors:
Ali, Ziad Tariq Muhammad [1 ]
Mohammed, Ameer [1 ]
Ahmad, Imtiaz [1 ]
Affiliations:
[1] Kuwait Univ, Dept Comp Engn, Kuwait 13060, Kuwait
Source:
IEEE ACCESS, 2022, Vol. 10
Keywords:
Robustness; Adaptation models; Training; Perturbation methods; Measurement; Data preprocessing; Neural networks; Adversarial machine learning; gradient obfuscation; input transformation; key-based transformation; neural networks; IMAGE TRANSFORMATION
DOI:
10.1109/ACCESS.2022.3162874
CLC number:
TP (automation technology; computer technology)
Discipline code:
0812
Abstract:
The vulnerability of neural networks to adversarial attacks has inspired the proposal of many defenses. Key-based input transformation techniques are recently proposed methods that rely on gradient obfuscation to improve the adversarial robustness of models. However, most gradient obfuscation techniques can be broken by adaptive attacks that incorporate knowledge of the new defense; defenses that rely on gradient obfuscation therefore require a thorough evaluation to establish their effectiveness. Block-wise transformation and randomized diversification are two recently proposed key-based defenses that claim adversarial robustness. In this study, we developed adaptive attacks and applied preexisting attacks to key-based defenses to show that they remain vulnerable to adversarial examples. Our experiments demonstrate that, for the block-wise transformation defense on the CIFAR-10 dataset with a block size of 4, our attacks reduce the accuracy of the pixel-shuffling variant to 7.45%, the bit-flipping variant to 4.20%, and the Feistel-based encryption variant to 9.45%, in contrast to the high adversarial robustness claimed in previous work. In addition to block-wise transformation, we reduced the accuracy of the randomized diversification defense by 25.30% on CIFAR-10.
Pages: 34872-34882
Page count: 11
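Illustrative sketch: the abstract above evaluates defenses that transform each input with a secret key before classification. The minimal Python sketch below shows one plausible form of the key-based block-wise pixel-shuffling transform it describes, with the image split into 4 x 4 blocks and the pixels inside every block permuted according to a permutation derived from the key. The function name, the use of NumPy, and the choice of a single shared permutation for all blocks are illustrative assumptions, not the authors' implementation.

import numpy as np

def blockwise_pixel_shuffle(image, key, block_size=4):
    # Shuffle the pixels inside each block_size x block_size block using a
    # permutation derived from the secret key (assumption: the same
    # permutation is reused for every block).
    h, w, c = image.shape
    assert h % block_size == 0 and w % block_size == 0
    rng = np.random.default_rng(key)              # the key seeds the permutation
    perm = rng.permutation(block_size * block_size)
    out = image.copy()
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            block = out[i:i + block_size, j:j + block_size, :].reshape(-1, c)
            out[i:i + block_size, j:j + block_size, :] = block[perm].reshape(
                block_size, block_size, c)
    return out

# Example on a CIFAR-10-sized input (32 x 32 x 3) with block size 4,
# matching the setting reported in the abstract.
x = np.random.rand(32, 32, 3).astype(np.float32)
x_shuffled = blockwise_pixel_shuffle(x, key=1234, block_size=4)

In such defenses the same keyed transform is applied at training and inference time; the adaptive attacks in the paper assume attacker knowledge of the defense, as stated in the abstract.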