Towards Robust Ensemble Defense Against Adversarial Examples Attack

Cited by: 3
Authors
Mani, Nag [1 ]
Moh, Melody [1 ]
Moh, Teng-Sheng [1 ]
Affiliations
[1] San Jose State Univ, Dept Comp Sci, San Jose, CA 95192 USA
Keywords
adversarial examples; image recognition; gradient-based attacks; securing deep learning; adversarial retraining; ensemble defense
DOI
10.1109/globecom38437.2019.9013408
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
With recent advancements in artificial intelligence, deep learning has created a niche in the technology space and is actively used in autonomous and IoT systems globally. Unfortunately, these deep learning models have become susceptible to adversarial attacks that can severely impact their integrity. Research has shown that many state-of-the-art models are vulnerable to attacks by well-crafted adversarial examples. These adversarial examples are perturbed versions of clean data with a small amount of noise added; they are imperceptible to the human eye, yet they can easily fool the targeted model. The exposed vulnerabilities of these models raise the question of their usability in safety-critical real-world applications such as autonomous driving and medical systems. In this work, we document the effectiveness of six different gradient-based adversarial attacks on the ResNet image recognition model. Defending against these adversaries is challenging. Adversarial retraining has been one of the most widely used defense techniques; it aims to train a more robust model capable of handling adversarial examples on its own. We showcase the limitations of traditional adversarial-retraining techniques, which can be effective against some adversaries but do not protect against more sophisticated attacks. We present a new ensemble defense strategy using adversarial retraining that withstands six adversarial attacks on the CIFAR-10 dataset with a minimum accuracy of 89.31%.
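The gradient-based attacks the abstract refers to all follow the same template: perturb a clean input in the direction that increases the model's loss, keeping the noise small. A minimal sketch of this idea (the fast gradient sign method, one of the standard gradient-based attacks; the paper targets ResNet, but the perturbation rule is shown here on a toy logistic-regression model with an analytically computed gradient, so the example stays self-contained):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    """FGSM sketch: x_adv = x + epsilon * sign(grad_x loss(x, y)).

    Model: logistic regression p = sigmoid(w @ x + b).
    For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
    """
    p = sigmoid(w @ x + b)          # model prediction on the clean input
    grad_x = (p - y) * w            # gradient of the loss w.r.t. x
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)              # toy model weights (illustrative)
b = 0.0
x = rng.normal(size=4)              # clean input
y = 1.0                             # true label

x_adv = fgsm(x, y, w, b, epsilon=0.1)
# The perturbation stays inside the epsilon-ball (small, "imperceptible" noise),
# yet it strictly lowers the model's confidence in the true class.
print(np.max(np.abs(x_adv - x)))                    # 0.1
print(sigmoid(w @ x_adv + b) < sigmoid(w @ x + b))  # True
```

The same sign-of-gradient step, applied iteratively or with momentum, yields the stronger attack variants; the epsilon bound is what keeps the adversarial sample visually indistinguishable from the clean one.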
Pages: 6