Exploring Robustness Connection between Artificial and Natural Adversarial Examples

Cited by: 4
Authors:
Agarwal, Akshay [1]
Ratha, Nalini
Vatsa, Mayank
Singh, Richa
Institution:
[1] University at Buffalo, Buffalo, NY 14260 USA
Keywords:
DEFENSE
DOI:
10.1109/CVPRW56347.2022.00030
Chinese Library Classification (CLC): TP301 [Theory, Methods]
Discipline Code: 081202
Abstract:
Although recent deep neural network algorithms have shown tremendous success in several computer vision tasks, their vulnerability to minute adversarial perturbations has raised serious concern. In early work on crafting adversarial examples, artificial noise was optimized through the network and added to the images to decrease the classifier's confidence in the true class. However, recent efforts have shown the existence of natural adversarial examples, which can also fool deep neural networks with high confidence. In this paper, for the first time, we raise the question of whether there is any robustness connection between artificial and natural adversarial examples. We study this possible connection by asking whether an adversarial example detector trained on artificial examples can also detect natural adversarial examples. We analyze several deep neural networks for the detection of artificial and natural adversarial examples in both seen and unseen settings to establish this robustness connection. The extensive experimental results reveal several interesting insights for defending deep classifiers against both natural and artificially perturbed examples. We believe these findings can pave the way for the development of unified resiliency, because defense against one type of attack is not sufficient for real-world use cases.
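To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: fgsm_attack illustrates the classic one-step gradient-sign crafting of an artificial adversarial example (noise obtained by backpropagating through the network is added to the image to reduce confidence in the true class), and detection_rate illustrates the cross-setting probe of whether a binary detector trained on such artificial examples also fires on natural adversarial images (for example, ImageNet-A). The names model, detector, natural_adv_images, and the tensor shapes are assumptions for illustration.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    # Craft artificial adversarial examples: take one gradient-sign
    # step that increases the loss on the true class, then add the
    # resulting noise to the images.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

@torch.no_grad()
def detection_rate(detector, images, threshold=0.5):
    # Fraction of inputs flagged as adversarial; `detector` is assumed
    # to output one probability-of-adversarial score per image (N, 1).
    return (detector(images).squeeze(1) > threshold).float().mean().item()

# Unseen-setting probe of the robustness connection: `detector` is
# trained only on clean vs. FGSM images, then tested on natural
# adversarial examples (hypothetical tensors, for illustration):
# rate_artificial = detection_rate(detector, fgsm_attack(model, x, y))
# rate_natural    = detection_rate(detector, natural_adv_images)

A large gap between the two rates would suggest that robustness to artificial perturbations does not automatically transfer to natural adversarial examples, which is the connection the paper probes.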
Pages: 178-185
Page count: 8
Related Papers (50 in total):
  • [21] Regularizing Hard Examples Improves Adversarial Robustness
    Lee, Hyungyu
    Lee, Saehyung
    Bae, Ho
    Yoon, Sungroh
    JOURNAL OF MACHINE LEARNING RESEARCH, 2025, 26
  • [22] ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES
    Teng, Da
    Song, Xiao M.
    Gong, Guanghong
    Han, Liang
    INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING-THEORY APPLICATIONS AND PRACTICE, 2017, 24 (02) : 123 - 133
  • [23] Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
    Wang, Yizhen
    Jha, Somesh
    Chaudhuri, Kamalika
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018
  • [24] Really natural adversarial examples
    Pedraza, Anibal
    Deniz, Oscar
    Bueno, Gloria
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2022, 13 (04) : 1065 - 1077
  • [26] Deep Fusion: Crafting Transferable Adversarial Examples and Improving Robustness of Industrial Artificial Intelligence of Things
    Wang, Yajie
    Tan, Yu-an
    Baker, Thar
    Kumar, Neeraj
    Zhang, Quanxin
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (06) : 7480 - 7488
  • [27] IMPROVING ROBUSTNESS TO ADVERSARIAL EXAMPLES BY ENCOURAGING DISCRIMINATIVE FEATURES
    Agarwal, Chirag
    Nguyen, Anh
    Schonfeld, Dan
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 3801 - 3805
  • [28] Attack as Defense: Characterizing Adversarial Examples using Robustness
    Zhao, Zhe
    Chen, Guangke
    Wang, Jingyi
    Yang, Yiwei
    Song, Fu
    Sun, Jun
    ISSTA '21: PROCEEDINGS OF THE 30TH ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, 2021, : 42 - 55
  • [29] On the Robustness to Adversarial Examples of Neural ODE Image Classifiers
    Carrara, Fabio
    Caldelli, Roberto
    Falchi, Fabrizio
    Amato, Giuseppe
    2019 IEEE INTERNATIONAL WORKSHOP ON INFORMATION FORENSICS AND SECURITY (WIFS), 2019
  • [30] On the Robustness of Support Vector Machines against Adversarial Examples
    Langenberg, Peter
    Balda, Emilio
    Behboodi, Arash
    Mathar, Rudolf
    2019 13TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATION SYSTEMS (ICSPCS), 2019