Really natural adversarial examples

Cited by: 0
Authors: Anibal Pedraza, Oscar Deniz, Gloria Bueno
Affiliations: [1] VISILAB, [2] ETSI Industriales
Keywords: Natural adversarial; Adversarial examples; Trustworthy machine learning; Computer vision
DOI: not available
Abstract
The phenomenon of adversarial examples has become one of the most intriguing topics associated with deep learning. So-called adversarial attacks can fool deep neural networks with imperceptible perturbations. While the effect is striking, it has been suggested that such carefully crafted injected noise does not necessarily appear in real-world scenarios. In contrast, some authors have sought ways to generate adversarial noise in physical settings (traffic signs, shirts, etc.), showing that attackers can indeed fool the networks. In this paper we go beyond that and show that adversarial examples also appear in the real world without any attacker or maliciously selected noise involved. We show this using images from microscopy-related tasks as well as general object recognition with the well-known ImageNet dataset. A comparison between these natural adversarial examples and artificially generated ones is performed using distance metrics and image quality metrics. We also show that the natural adversarial examples are in fact at a higher distance from the originals than the artificially generated ones.
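The abstract mentions comparing natural and artificial adversarial examples via distance and image quality metrics but does not specify which ones. As a minimal sketch, assuming standard choices (L2 and L∞ pixel distance plus PSNR; the function name and metric selection are illustrative, not the authors' actual protocol):

```python
import numpy as np

def perturbation_metrics(original, adversarial):
    """Compare an image with its adversarial counterpart using simple
    distance and quality metrics (illustrative choices, not the paper's
    exact methodology). Inputs are arrays of identical shape with pixel
    values in [0, 255]."""
    o = original.astype(np.float64)
    a = adversarial.astype(np.float64)
    diff = a - o
    l2 = float(np.linalg.norm(diff))      # Euclidean distance over all pixels
    linf = float(np.abs(diff).max())      # largest single-pixel change
    mse = float(np.mean(diff ** 2))
    # PSNR: higher means the perturbation is less visible
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
    return {"l2": l2, "linf": linf, "psnr": psnr}

# Example: a uniform perturbation of 3 gray levels on a tiny 2x2 image
m = perturbation_metrics(np.zeros((2, 2)), np.full((2, 2), 3.0))
```

Under this kind of measurement, the paper's finding would correspond to natural adversarial examples yielding larger L2/L∞ values (and lower PSNR) relative to their originals than crafted attacks do.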
Pages: 1065–1077 (12 pages)