Towards Robust Ensemble Defense Against Adversarial Examples Attack

Cited by: 3
Authors
Mani, Nag [1 ]
Moh, Melody [1 ]
Moh, Teng-Sheng [1 ]
Affiliation
[1] San Jose State Univ, Dept Comp Sci, San Jose, CA 95192 USA
Keywords
adversarial examples; image recognition; gradient-based attacks; securing deep learning; adversarial retraining; ensemble defense
DOI
10.1109/globecom38437.2019.9013408
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
With recent advancements in the field of artificial intelligence, deep learning has created a niche in the technology space and is actively used in autonomous and IoT systems globally. Unfortunately, these deep learning models have become susceptible to adversarial attacks that can severely impact their integrity. Research has shown that many state-of-the-art models are vulnerable to attacks by well-crafted adversarial examples: perturbed versions of clean data with a small amount of noise added. Such adversarial samples are imperceptible to the human eye, yet they can easily fool the targeted model. The exposed vulnerabilities of these models raise questions about their usability in safety-critical real-world applications such as autonomous driving and medical systems. In this work, we document the effectiveness of six different gradient-based adversarial attacks on the ResNet image recognition model. Defending against these adversaries is challenging. Adversarial retraining has been one of the most widely used defense techniques; it aims to train a more robust model capable of handling adversarial examples on its own. We showcase the limitations of traditional adversarial retraining, which can be effective against some adversaries but does not protect against more sophisticated attacks. We present a new ensemble defense strategy based on adversarial retraining that withstands all six adversarial attacks on the CIFAR-10 dataset with a minimum accuracy of 89.31%.
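The pipeline the abstract describes, crafting gradient-based adversarial examples, retraining on them, and combining retrained models into an ensemble, can be illustrated with a short sketch. The following is a minimal PyTorch sketch, not the authors' implementation: FGSM stands in for the six gradient-based attacks studied in the paper, and the epsilon value, the 50/50 clean/adversarial mixing ratio, and the probability-averaging vote are illustrative assumptions.

```python
# Minimal sketch of the three ingredients named in the abstract:
# (1) a gradient-based attack (FGSM shown here), (2) adversarial
# retraining on a mix of clean and perturbed batches, and (3) an
# ensemble vote over several retrained models.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: x_adv = clip(x + epsilon * sign(grad_x loss), 0, 1)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()


def adversarial_retraining_step(model, optimizer, x, y, epsilon=8 / 255):
    """One training step on a 50/50 mix of clean and adversarial samples
    (the mixing ratio is an assumption, not the paper's setting)."""
    model.eval()  # freeze batch-norm statistics while crafting the attack
    x_adv = fgsm_attack(model, x, y, epsilon)
    model.train()
    optimizer.zero_grad()  # clear gradients accumulated during the attack
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()


@torch.no_grad()
def ensemble_predict(models, x):
    """Average the softmax outputs of the ensemble members and take argmax."""
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models]).mean(dim=0)
    return probs.argmax(dim=1)
```

Under an ensemble scheme like this, each member can be retrained against a different attack, so an example that fools one member may still be classified correctly by the aggregate vote; the paper's reported 89.31% minimum accuracy refers to its own ensemble configuration, not this sketch.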
Pages: 6
Related Papers (50 in total)
  • [21] Morphence: Moving Target Defense Against Adversarial Examples
    Amich, Abderrahmen
    Eshete, Birhanu
    37TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2021, 2021, : 61 - 75
  • [22] Towards a Robust Adversarial Patch Attack Against Unmanned Aerial Vehicles Object Detection
    Shrestha, Samridha
    Pathak, Saurabh
    Viegas, Eduardo K.
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, IROS, 2023, : 3256 - 3263
  • [23] Robust Optimal Classification Trees against Adversarial Examples
    Vos, Daniel
    Verwer, Sicco
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 8520 - 8528
  • [24] TOWARDS ROBUST SPEECH-TO-TEXT ADVERSARIAL ATTACK
    Esmaeilpour, Mohammad
    Cardinal, Patrick
    Koerich, Alessandro Lameiras
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2869 - 2873
  • [25] Towards the transferable audio adversarial attack via ensemble methods
    Guo, Feng
    Sun, Zheng
    Chen, Yuxuan
    Ju, Lei
    CYBERSECURITY, 2023, 6 (01)
  • [27] Deep image prior based defense against adversarial examples
    Dai, Tao
    Feng, Yan
    Chen, Bin
    Lu, Jian
    Xia, Shu-Tao
    PATTERN RECOGNITION, 2022, 122
  • [28] MagNet: a Two-Pronged Defense against Adversarial Examples
    Meng, Dongyu
    Chen, Hao
    CCS'17: PROCEEDINGS OF THE 2017 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2017, : 135 - 147
  • [29] Defense against adversarial examples based on wavelet domain analysis
    Sarvar, Armaghan
    Amirmazlaghani, Maryam
    APPLIED INTELLIGENCE, 2023, 53 (01) : 423 - 439