Towards Robust Ensemble Defense Against Adversarial Examples Attack

Cited by: 3
Authors
Mani, Nag [1 ]
Moh, Melody [1 ]
Moh, Teng-Sheng [1 ]
Affiliations
[1] San Jose State Univ, Dept Comp Sci, San Jose, CA 95192 USA
Keywords
adversarial examples; image recognition; gradient-based attacks; securing deep learning; adversarial retraining; ensemble defense
DOI
10.1109/globecom38437.2019.9013408
Chinese Library Classification
TP [automation technology, computer technology]
Discipline Code
0812
Abstract
With recent advancements in the field of artificial intelligence, deep learning has created a niche in the technology space and is actively used in autonomous and IoT systems worldwide. Unfortunately, deep learning models have become susceptible to adversarial attacks that can severely compromise their integrity. Research has shown that many state-of-the-art models are vulnerable to attacks by well-crafted adversarial examples: perturbed versions of clean data with a small amount of added noise. These adversarial samples are imperceptible to the human eye, yet they easily fool the targeted model. The exposed vulnerabilities of these models raise the question of their usability in safety-critical real-world applications such as autonomous driving and medicine. In this work, we document the effectiveness of six different gradient-based adversarial attacks on a ResNet image-recognition model. Defending against these adversaries is challenging. Adversarial retraining is one of the most widely used defense techniques; it aims to train a more robust model capable of handling adversarial examples on its own. We showcase the limitations of traditional adversarial-retraining techniques, which can be effective against some adversaries but do not protect against more sophisticated attacks. We present a new ensemble defense strategy, built on adversarial retraining, that withstands all six adversarial attacks on the CIFAR-10 dataset with a minimum accuracy of 89.31%.
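
As a concrete illustration of the gradient-based attacks the abstract refers to, below is a minimal FGSM-style sketch in PyTorch. The paper evaluates six such attacks; the names here (fgsm_attack, epsilon, model) are illustrative assumptions, not the authors' code.

```python
# Minimal FGSM sketch: perturb the input along the sign of the loss gradient.
# Assumes a pretrained classifier `model` and inputs normalized to [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """Craft an adversarial example with a single signed-gradient step."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

The resulting perturbation is bounded per pixel by epsilon, which is what keeps such examples visually indistinguishable from the clean input.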
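The ensemble defense described in the abstract can be sketched as several adversarially retrained models voting on the final label. The outline below is a hypothetical reading of that idea, assuming one model per attack; `attack`, `models`, and the training step are placeholders rather than the published method.

```python
# Sketch of adversarial retraining plus majority-vote ensemble inference.
import torch
import torch.nn.functional as F

def adversarial_retrain_step(model, optimizer, x, y, attack):
    """One step on a mixed batch of clean and attack-specific adversarial data."""
    x_adv = attack(model, x, y)              # e.g. fgsm_attack from the sketch above
    batch_x = torch.cat([x, x_adv])
    batch_y = torch.cat([y, y])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch_x), batch_y)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def ensemble_predict(models, x):
    """Majority vote over per-model predicted labels."""
    votes = torch.stack([m(x).argmax(dim=1) for m in models])  # (n_models, batch)
    return votes.mode(dim=0).values
```

Retraining each member against a different adversary is one plausible way an ensemble could cover attacks that defeat any single adversarially retrained model.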
Pages: 6