On the Relationship between Generalization and Robustness to Adversarial Examples

Times Cited: 8
Authors
Pedraza, Anibal [1 ]
Deniz, Oscar [1 ]
Bueno, Gloria [1 ]
Affiliations
[1] Univ Castilla La Mancha, VISILAB, ETSII, Ciudad Real 13071, Spain
Source
SYMMETRY-BASEL, 2021, Vol. 13, No. 5
Keywords
machine learning; computer vision; deep learning; adversarial examples; adversarial robustness; overfitting
DOI
10.3390/sym13050817
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
One of the most intriguing phenomena in deep learning is the so-called adversarial example. These samples are visually indistinguishable from normal inputs, with perturbations imperceptible to humans, yet they cause the network to output wrong results. The phenomenon can be framed as a symmetry/asymmetry problem, whereby inputs with a similar/symmetric appearance to regular images produce an opposite/asymmetric output. Some researchers focus on developing methods for generating adversarial examples, while others propose defense methods. In parallel, there is growing interest in characterizing the phenomenon, which is also the focus of this paper. On well-known image datasets such as CIFAR-10 and STL-10, a neural network architecture is first trained in a normal regime, in which training and validation performance increase together and the network generalizes. The same architectures and datasets are additionally trained in an overfitting regime, in which training and validation performance increasingly diverge. The behaviour of the two regimes against adversarial examples is then compared. The results show greater robustness to adversarial examples in the overfitting regime. We explain this simultaneous loss of generalization and gain in adversarial robustness as another manifestation of the well-known fitting-generalization trade-off.
Pages: 13
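The abstract outlines a two-regime experimental protocol: train the same architecture once to generalization and once into overfitting, then compare the two models' robustness to adversarial examples. Below is a minimal PyTorch sketch of that protocol under assumptions of our own, not the authors' exact setup: the toy CNN, the Adam optimizer, the epoch counts, and the use of single-step FGSM at eps = 8/255 are all placeholders, since the abstract specifies only the datasets and the two training regimes.

```python
# Hypothetical sketch of the paper's protocol: train one model in a
# "generalization" regime and one in an "overfitting" regime, then compare
# their accuracy under FGSM attack. Architecture, attack, and hyperparameters
# are assumptions; the abstract does not specify them.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

class SmallCNN(nn.Module):
    """Toy CNN standing in for the paper's (unspecified) architecture."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train(model, loader, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()

def fgsm(model, x, y, eps):
    """One-step FGSM: perturb the input along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_accuracy(model, loader, eps):
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm(model, x, y, eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

if __name__ == "__main__":
    tfm = transforms.ToTensor()
    train_set = datasets.CIFAR10("data", train=True, download=True, transform=tfm)
    test_set = datasets.CIFAR10("data", train=False, download=True, transform=tfm)
    train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=256)

    # "Generalization" regime: few epochs, so train/validation accuracy track
    # each other. "Overfitting" regime: many epochs, same architecture, no
    # regularization. Epoch counts are assumed, not taken from the paper.
    generalized, overfitted = SmallCNN().to(device), SmallCNN().to(device)
    train(generalized, train_loader, epochs=5)
    train(overfitted, train_loader, epochs=100)

    for name, m in [("generalized", generalized), ("overfitted", overfitted)]:
        acc = adversarial_accuracy(m, test_loader, eps=8 / 255)
        print(f"{name}: FGSM (eps=8/255) accuracy = {acc:.3f}")
```

Under the paper's finding, one would expect the overfitted model to retain higher FGSM accuracy than the generalized one, at the cost of lower clean validation accuracy; the sketch only illustrates how such a comparison could be run.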
Related Papers
50 records in total
  • [1] Disentangling Adversarial Robustness and Generalization
    Stutz, David
    Hein, Matthias
    Schiele, Bernt
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 6969 - 6980
  • [2] Exploring Robustness Connection between Artificial and Natural Adversarial Examples
    Agarwal, Akshay
    Ratha, Nalini
    Vatsa, Mayank
    Singh, Richa
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 178 - 185
  • [3] Adversarial Minimax Training for Robustness Against Adversarial Examples
    Komiyama, Ryota
    Hattori, Motonobu
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT II, 2018, 11302 : 690 - 699
  • [4] EXPLOITING DOUBLY ADVERSARIAL EXAMPLES FOR IMPROVING ADVERSARIAL ROBUSTNESS
    Byun, Junyoung
    Go, Hyojun
    Cho, Seungju
    Kim, Changick
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1331 - 1335
  • [5] On the Robustness of Vision Transformers to Adversarial Examples
    Mahmood, Kaleel
    Mahmood, Rigel
    van Dijk, Marten
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 7818 - 7827
  • [6] Effect of adversarial examples on the robustness of CAPTCHA
    Zhang, Yang
    Gao, Haichang
    Pei, Ge
    Kang, Shuai
    Zhou, Xin
    2018 INTERNATIONAL CONFERENCE ON CYBER-ENABLED DISTRIBUTED COMPUTING AND KNOWLEDGE DISCOVERY (CYBERC 2018), 2018, : 1 - 10
  • [7] On the robustness of randomized classifiers to adversarial examples
    Pinot, Rafael
    Meunier, Laurent
    Yger, Florian
    Gouy-Pailler, Cedric
    Chevaleyre, Yann
    Atif, Jamal
    MACHINE LEARNING, 2022, 111 (09) : 3425 - 3457
  • [8] On Robustness to Adversarial Examples and Polynomial Optimization
    Awasthi, Pranjal
    Dutta, Abhratanu
    Vijayaraghavan, Aravindan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [9] On the Role of Generalization in Transferability of Adversarial Examples
    Wang, Yilin
    Farnia, Farzan
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 2259 - 2270