On the Relationship between Generalization and Robustness to Adversarial Examples

Cited: 8
Authors
Pedraza, Anibal [1 ]
Deniz, Oscar [1 ]
Bueno, Gloria [1 ]
Affiliation
[1] Univ Castilla La Mancha, VISILAB, ETSII, Ciudad Real 13071, Spain
Source
SYMMETRY-BASEL | 2021, Vol. 13, Iss. 05
Keywords
machine learning; computer vision; deep learning; adversarial examples; adversarial robustness; overfitting;
DOI
10.3390/sym13050817
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07; 0710; 09;
Abstract
One of the most intriguing phenomena related to deep learning is that of so-called adversarial examples. These samples are visually equivalent to normal inputs and undetectable by humans, yet they cause the network to output wrong results. The phenomenon can be framed as a symmetry/asymmetry problem, whereby inputs with a similar/symmetric appearance to regular images produce an opposite/asymmetric output from the neural network. Some researchers focus on developing methods for generating adversarial examples, while others propose defense methods. In parallel, there is growing interest in characterizing the phenomenon, which is also the focus of this paper. On well-known datasets of common images, such as CIFAR-10 and STL-10, a neural network architecture is first trained in a normal regime, in which training and validation performance both increase until generalization is reached. The same architectures and datasets are then trained in an overfitting regime, in which the disparity between training and validation performance grows. The behaviour of these two regimes against adversarial examples is then compared. The results show greater robustness to adversarial examples in the overfitting regime. We explain this simultaneous loss of generalization and gain in robustness to adversarial examples as another manifestation of the well-known fitting-generalization trade-off.
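The abstract does not name the attack used to generate adversarial examples. As an illustrative sketch only, the canonical Fast Gradient Sign Method (FGSM) perturbs an input in the direction of the sign of the loss gradient; the snippet below demonstrates the idea on a tiny hand-rolled linear softmax classifier (the model, function names, and epsilon value are assumptions for illustration, not the paper's setup):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM: step the input by epsilon in the sign direction of the loss gradient."""
    return x + epsilon * np.sign(grad)

def loss_grad_wrt_input(W, x, y_true):
    """Gradient of softmax cross-entropy loss w.r.t. the input, for logits = W @ x."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.zeros_like(p)
    onehot[y_true] = 1.0
    # dL/dlogits = p - onehot; dlogits/dx = W, hence dL/dx = W.T @ (p - onehot)
    return W.T @ (p - onehot)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))            # toy 3-class linear model
x = rng.normal(size=5)                 # a "clean" input
y = int(np.argmax(W @ x))              # label the clean input with the model's own prediction
g = loss_grad_wrt_input(W, x, y)
x_adv = fgsm_perturb(x, g, epsilon=0.5)
# The adversarial input differs from x by at most epsilon per coordinate,
# yet its predicted class may flip.
```

In the paper's experiments, robustness would then be measured by comparing the accuracy drop under such perturbations between the generalizing and overfitting models.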
Pages: 13