Exploring Robustness Connection between Artificial and Natural Adversarial Examples

Cited by: 4
Authors
Agarwal, Akshay [1]
Ratha, Nalini
Vatsa, Mayank
Singh, Richa
Affiliations
[1] University at Buffalo, Buffalo, NY 14260 USA
Keywords
DEFENSE
DOI
10.1109/CVPRW56347.2022.00030
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Although recent deep neural network algorithms have shown tremendous success in several computer vision tasks, their vulnerability to minute adversarial perturbations has raised serious concern. In the early days of crafting these adversarial examples, artificial noise was optimized through the network and added to the images to decrease the classifier's confidence in the true class. However, recent efforts have shown the existence of natural adversarial examples, which can also be used effectively to fool deep neural networks with high confidence. In this paper, for the first time, we raise the question of whether there is any robustness connection between artificial and natural adversarial examples. We study this possible connection by asking whether an adversarial example detector trained on artificial examples can detect natural adversarial examples. We analyze several deep neural networks for the detection of artificial and natural adversarial examples in both seen and unseen settings to establish such a connection. The extensive experimental results reveal several interesting insights for defending deep classifiers, whether they are vulnerable to natural or artificially perturbed examples. We believe these findings can pave the way toward unified resiliency, because defense against one attack is not sufficient for real-world use cases.
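The cross-detection setting described in the abstract can be made concrete with a small sketch. The Python snippet below is a hypothetical illustration, not the paper's exact protocol: it crafts "artificial" adversarial examples with a one-step FGSM attack against an assumed pretrained ResNet-50, trains a small binary detector on clean versus artificial images, and then measures how often that detector flags "natural" adversarial images (e.g., a batch drawn from ImageNet-A, which is assumed here). The detector architecture, feature dimension, and data handling are illustrative assumptions.

# Minimal sketch (assumptions noted above): FGSM "artificial" adversarial
# examples, a binary clean-vs-adversarial detector, and a cross-domain check
# on "natural" adversarial images.
import torch
import torch.nn as nn
import torchvision.models as models

def fgsm_attack(model, images, labels, eps=8 / 255):
    # One-step FGSM: move each pixel along the sign of the input gradient.
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0, 1).detach()

# Frozen classifier whose predictions the attack degrades (assumed ResNet-50).
classifier = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in classifier.parameters():
    p.requires_grad_(False)

# Detector head on the classifier's penultimate features; the 2048-d input
# size matches ResNet-50 and is an assumption of this sketch.
features = nn.Sequential(*list(classifier.children())[:-1], nn.Flatten())
detector = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 2))
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)

def detector_logits(x):
    # Features are kept frozen; only the detector head is trained.
    with torch.no_grad():
        f = features(x)
    return detector(f)

def train_step(clean_images, clean_labels):
    # Label 0 = clean, label 1 = adversarial.
    adv_images = fgsm_attack(classifier, clean_images, clean_labels)
    x = torch.cat([clean_images, adv_images])
    y = torch.cat([torch.zeros(len(clean_images), dtype=torch.long),
                   torch.ones(len(adv_images), dtype=torch.long)])
    loss = nn.functional.cross_entropy(detector_logits(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def natural_detection_rate(natural_images):
    # Cross-domain check: fraction of natural adversarial images (e.g. an
    # ImageNet-A batch) that the artificially-trained detector flags.
    preds = detector_logits(natural_images).argmax(dim=1)
    return (preds == 1).float().mean().item()

A detector that keeps a high detection rate on the natural images under this kind of protocol would indicate a robustness connection between the two families of adversarial examples; a sharp drop would indicate the opposite.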
Pages: 178 - 185
Page count: 8
Related Papers
50 records in total
  • [1] Exploring the Connection Between Neuron Coverage and Adversarial Robustness in DNN Classifiers
    Piat, William
    Fadili, Jalal
    Jurie, Frederic
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 745 - 749
  • [2] On the Relationship between Generalization and Robustness to Adversarial Examples
    Pedraza, Anibal
    Deniz, Oscar
    Bueno, Gloria
    SYMMETRY-BASEL, 2021, 13 (05):
  • [3] Exploring adversarial examples and adversarial robustness of convolutional neural networks by mutual information
    Zhang, J.
    Qian, W.
    Cao, J.
    Xu, D.
    Neural Computing and Applications, 2024, 36 (23) : 14379 - 14394
  • [4] On the Connection Between Adversarial Robustness and Saliency Map Interpretability
    Etmann, Christian
    Lunz, Sebastian
    Maass, Peter
    Schonlieb, Carola-Bibiane
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [5] Adversarial Minimax Training for Robustness Against Adversarial Examples
    Komiyama, Ryota
    Hattori, Motonobu
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT II, 2018, 11302 : 690 - 699
  • [6] Exploiting Doubly Adversarial Examples for Improving Adversarial Robustness
    Byun, Junyoung
    Go, Hyojun
    Cho, Seungju
    Kim, Changick
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1331 - 1335
  • [7] On the robustness of randomized classifiers to adversarial examples
    Pinot, Rafael
    Meunier, Laurent
    Yger, Florian
    Gouy-Pailler, Cedric
    Chevaleyre, Yann
    Atif, Jamal
    MACHINE LEARNING, 2022, 111 (09) : 3425 - 3457
  • [8] On the Robustness of Vision Transformers to Adversarial Examples
    Mahmood, Kaleel
    Mahmood, Rigel
    van Dijk, Marten
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 7818 - 7827
  • [9] Effect of adversarial examples on the robustness of CAPTCHA
    Zhang, Yang
    Gao, Haichang
    Pei, Ge
    Kang, Shuai
    Zhou, Xin
    2018 INTERNATIONAL CONFERENCE ON CYBER-ENABLED DISTRIBUTED COMPUTING AND KNOWLEDGE DISCOVERY (CYBERC 2018), 2018, : 1 - 10