Exploring Robustness Connection between Artificial and Natural Adversarial Examples

Cited by: 4
Authors
Agarwal, Akshay [1 ]
Ratha, Nalini
Vatsa, Mayank
Singh, Richa
Affiliation
[1] University at Buffalo, Buffalo, NY 14260 USA
Keywords
DEFENSE
DOI
10.1109/CVPRW56347.2022.00030
CLC Classification
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
Although recent deep neural network algorithms have shown tremendous success in several computer vision tasks, their vulnerability to minute adversarial perturbations has raised serious concern. In the early days of crafting adversarial examples, artificial noise was optimized through the network and added to images to decrease the classifier's confidence in the true class. However, recent efforts have showcased natural adversarial examples, which can also fool deep neural networks with high confidence. In this paper, for the first time, we raise the question of whether there is any robustness connection between artificial and natural adversarial examples. The possible connection is studied by asking whether an adversarial example detector trained on artificial examples can also detect natural adversarial examples. We analyze several deep neural networks for the detection of artificial and natural adversarial examples in both seen and unseen settings to establish such a connection. The extensive experimental results reveal several interesting insights for defending deep classifiers, whether they are vulnerable to natural or artificially perturbed examples. We believe these findings can pave the way for the development of unified resiliency, because defense against one attack alone is not sufficient for real-world use cases.
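The abstract describes a cross-attack detection protocol: train a binary detector on artificial adversarial examples, then test whether it flags natural adversarial examples it never saw. The sketch below shows one way such a protocol could look in PyTorch; the detector architecture, the FGSM attack as the representative artificial perturbation, and all names (Detector, fgsm, train_step, detection_rate) are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch of a cross-attack detection protocol, assuming FGSM as
# the artificial attack and a small CNN as the detector. Natural
# adversarial examples (e.g., ImageNet-A-style images) would be fed to
# detection_rate() as the "unseen" transfer test.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Craft artificial adversarial examples with one FGSM step."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

class Detector(nn.Module):
    """Small CNN scoring an image as clean (0) or adversarial (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_step(detector, opt, target_model, x_clean, y):
    # "Seen" setting: label clean images 0 and their FGSM versions 1.
    x_adv = fgsm(target_model, x_clean, y)
    inputs = torch.cat([x_clean, x_adv])
    labels = torch.cat([torch.zeros(len(x_clean)),
                        torch.ones(len(x_adv))]).long()
    opt.zero_grad()
    loss = F.cross_entropy(detector(inputs), labels)
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def detection_rate(detector, x):
    """Fraction of inputs flagged adversarial; the "unseen" transfer
    test when x holds natural adversarial examples."""
    return (detector(x).argmax(1) == 1).float().mean().item()
```

A robustness connection in the paper's sense would show up here as a high detection_rate on natural adversarial inputs despite training only on FGSM-perturbed ones.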
Pages: 178-185
Page count: 8