Really natural adversarial examples

Cited by: 0
Authors
Anibal Pedraza
Oscar Deniz
Gloria Bueno
Institutions
[1] VISILAB
[2] ETSI Industriales
Keywords
Natural adversarial; Adversarial examples; Trustworthy machine learning; Computer vision
DOI
Not available
Abstract
The phenomenon of Adversarial Examples has become one of the most intriguing topics associated with deep learning. The so-called adversarial attacks have the ability to fool deep neural networks with imperceptible perturbations. While the effect is striking, it has been suggested that such carefully selected injected noise does not necessarily appear in real-world scenarios. In contrast, some authors have looked for ways to generate adversarial noise in physical scenarios (traffic signs, shirts, etc.), thus showing that attackers can indeed fool the networks. In this paper we go beyond that and show that adversarial examples also appear in the real world without any attacker or maliciously selected noise involved. We show this using images from microscopy-related tasks and from general object recognition with the well-known ImageNet dataset. A comparison between these natural adversarial examples and artificially generated ones is performed using distance metrics and image quality metrics. We also show that the natural adversarial examples are in fact at a greater distance from the originals than the artificially generated adversarial examples.
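
The kind of comparison the abstract describes can be illustrated with a short sketch. The following is a minimal example, assuming 8-bit RGB images loaded as equal-shaped NumPy arrays; the helper name compare_adversarial and the specific metric set (L2, L-infinity, PSNR, SSIM) are illustrative assumptions standing in for generic "distance metrics and image quality metrics", not the paper's exact protocol.

    # Minimal sketch: distance and image-quality metrics between an original
    # image and its (natural or artificial) adversarial counterpart.
    # Assumptions: 8-bit RGB images as uint8 NumPy arrays of equal shape;
    # the metric choice (L2, L-inf, PSNR, SSIM) is illustrative, not the
    # paper's exact protocol.
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def compare_adversarial(original: np.ndarray, adversarial: np.ndarray) -> dict:
        a = original.astype(np.float64)
        b = adversarial.astype(np.float64)
        diff = a - b
        return {
            # Distance metrics: how far the adversarial image is from the original.
            "l2": float(np.linalg.norm(diff.ravel())),
            "linf": float(np.max(np.abs(diff))),
            # Image quality metrics: perceptual similarity of the two images.
            "psnr": peak_signal_noise_ratio(original, adversarial, data_range=255),
            "ssim": structural_similarity(original, adversarial,
                                          channel_axis=-1, data_range=255),
        }

Under such measures, the paper's finding would show up as larger L2/L-infinity values (and lower PSNR/SSIM) for natural adversarial pairs than for artificially perturbed ones, since natural adversarial examples are not constrained to imperceptible perturbations.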
Pages: 1065-1077
Number of pages: 12
Related Papers
50 items in total
  • [1] Really natural adversarial examples
    Pedraza, Anibal
    Deniz, Oscar
    Bueno, Gloria
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2022, 13 (04) : 1065 - 1077
  • [2] Natural Adversarial Examples
    Hendrycks, Dan
    Zhao, Kevin
    Basart, Steven
    Steinhardt, Jacob
    Song, Dawn
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 15257 - 15266
  • [3] Generating Natural Language Adversarial Examples
    Alzantot, Moustafa
    Sharma, Yash
    Elgohary, Ahmed
    Ho, Bo-Jhang
    Srivastava, Mani B.
    Chang, Kai-Wei
    2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018, : 2890 - 2896
  • [4] Reevaluating Adversarial Examples in Natural Language
    Morris, John X.
    Lifland, Eli
    Lanchantin, Jack
    Ji, Yangfeng
    Qi, Yanjun
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 3829 - 3839
  • [5] Using Adversarial Examples in Natural Language Processing
    Belohlavek, Petr
    Platek, Ondrej
    Zabokrtsky, Zdenek
    Straka, Milan
    PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2018), 2018, : 3693 - 3700
  • [6] Generating Fluent Adversarial Examples for Natural Languages
    Zhang, Huangzhao
    Zhou, Hao
    Miao, Ning
    Li, Lei
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 5564 - 5569
  • [7] MESDeceiver: Efficiently Generating Natural Language Adversarial Examples
    Zhao, Tengfei
    Ge, Zhaocheng
    Hu, Hanping
    Shi, Dingmeng
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [8] Adversarial Examples Are a Natural Consequence of Test Error in Noise
    Ford, Nicolas
    Gilmer, Justin
    Carlini, Nicholas
    Cubuk, Ekin D.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019
  • [9] Generating natural adversarial examples with universal perturbations for text classification
    Gao, Haoran
    Zhang, Hua
    Yang, Xingguo
    Li, Wenmin
    Gao, Fei
    Wen, Qiaoyan
    NEUROCOMPUTING, 2022, 471 : 175 - 182
  • [10] Natural-Looking Adversarial Examples from Freehand Sketches
    Kim, Hak Gu
    Nanni, Davide
    Suesstrunk, Sabine
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3723 - 3727