On the Relationship between Generalization and Robustness to Adversarial Examples

Cited by: 8
Authors
Pedraza, Anibal [1 ]
Deniz, Oscar [1 ]
Bueno, Gloria [1 ]
Affiliations
[1] Univ Castilla La Mancha, VISILAB, ETSII, Ciudad Real 13071, Spain
Source
SYMMETRY-BASEL | 2021 / Vol. 13 / Issue 05
Keywords
machine learning; computer vision; deep learning; adversarial examples; adversarial robustness; overfitting;
DOI
10.3390/sym13050817
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07; 0710; 09
Abstract
One of the most intriguing phenomena related to deep learning is that of so-called adversarial examples. These samples are visually equivalent to normal inputs, with perturbations undetectable by humans, yet they cause the network to output wrong results. The phenomenon can be framed as a symmetry/asymmetry problem: inputs that have a similar/symmetric appearance to regular images nonetheless produce an opposite/asymmetric output from the neural network. Some researchers focus on developing methods for generating adversarial examples, while others propose defense methods. In parallel, there is growing interest in characterizing the phenomenon, which is also the focus of this paper. Using well-known datasets of common images, such as CIFAR-10 and STL-10, a neural network architecture is first trained in a normal regime, in which training and validation performance both increase, reaching generalization. Additionally, the same architectures and datasets are trained in an overfitting regime, in which there is a growing disparity between training and validation performance. The behaviour of these two regimes against adversarial examples is then compared. From the results, we observe greater robustness to adversarial examples in the overfitting regime. We explain this simultaneous loss of generalization and gain in robustness to adversarial examples as another manifestation of the well-known fitting-generalization trade-off.
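The abstract does not specify which attack is used to probe robustness, but the core mechanics of an adversarial example can be illustrated with the well-known fast gradient sign method (FGSM). The following is a minimal, self-contained numpy sketch on a toy logistic-regression classifier; the data, model, step size, and perturbation budget are all illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a logistic-regression "network" by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def accuracy(X_eval):
    return np.mean((sigmoid(X_eval @ w + b) > 0.5) == y)

# FGSM: perturb each input one step in the direction that increases the loss.
# For logistic regression, d(loss)/dx = (p - y) * w.
eps = 0.5
p = sigmoid(X @ w + b)
grad_x = (p - y)[:, None] * w[None, :]
X_adv = X + eps * np.sign(grad_x)

clean_acc = accuracy(X)
adv_acc = accuracy(X_adv)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The perturbation has a fixed, small per-coordinate magnitude (`eps`), so the adversarial inputs stay visually close to the originals while accuracy drops; this is the asymmetry between similar inputs and dissimilar outputs that the paper studies.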
Pages: 13
Related Papers
50 records in total
  • [31] Understanding Generalization in Neural Networks for Robustness against Adversarial Vulnerabilities
    Chaudhury, Subhajit
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 13714 - 13715
  • [32] Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations
    Tsai, Yu-Lin
    Hsu, Chia-Yi
    Yu, Chia-Mu
    Chen, Pin-Yu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [33] Revisiting single-step adversarial training for robustness and generalization
    Li, Zhuorong
    Yu, Daiwei
    Wu, Minghui
    Chan, Sixian
    Yu, Hongchuan
    Han, Zhike
    PATTERN RECOGNITION, 2024, 151
  • [34] Adversarial Examples in RF Deep Learning: Detection and Physical Robustness
    Kokalj-Filipovic, Silvija
    Miller, Rob
    Vanhoy, Garrett
    2019 7TH IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (IEEE GLOBALSIP), 2019,
  • [35] An Empirical Evaluation of Adversarial Examples Defences, Combinations and Robustness Scores*
    Jankovic, Aleksandar
    Mayer, Rudolf
    PROCEEDINGS OF THE 2022 ACM INTERNATIONAL WORKSHOP ON SECURITY AND PRIVACY ANALYTICS (IWSPA '22), 2022, : 86 - 92
  • [36] Improving Calibration through the Relationship with Adversarial Robustness
    Qin, Yao
    Wang, Xuezhi
    Beutel, Alex
    Chi, Ed H.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [38] ATGAN: Adversarial training-based GAN for improving adversarial robustness generalization on image classification
    Wang, Desheng
    Jin, Weidong
    Wu, Yunpu
    Khan, Aamir
    APPLIED INTELLIGENCE, 2023, 53 (20) : 24492 - 24508
  • [39] Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness
    Zhao, Long
    Liu, Ting
    Peng, Xi
    Metaxas, Dimitris
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [40] LSGAN-AT: enhancing malware detector robustness against adversarial examples
    Wang, Jianhua
    Chang, Xiaolin
    Wang, Yixiang
    Rodriguez, Ricardo J.
    Zhang, Jianan
    CYBERSECURITY, 2021, 4 (01)