On the Relationship between Generalization and Robustness to Adversarial Examples

Cited by: 8
Authors
Pedraza, Anibal [1 ]
Deniz, Oscar [1 ]
Bueno, Gloria [1 ]
Affiliations
[1] Univ Castilla La Mancha, VISILAB, ETSII, Ciudad Real 13071, Spain
Source
SYMMETRY-BASEL | 2021, Vol. 13, Issue 5
Keywords
machine learning; computer vision; deep learning; adversarial examples; adversarial robustness; overfitting;
DOI
10.3390/sym13050817
CLC classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline classification codes
07 ; 0710 ; 09 ;
Abstract
One of the most intriguing phenomena related to deep learning is that of adversarial examples. These samples are visually equivalent to normal inputs, with perturbations imperceptible to humans, yet they cause the networks to output wrong results. The phenomenon can be framed as a symmetry/asymmetry problem, whereby inputs with a similar/symmetric appearance to regular images produce an opposite/asymmetric output from the network. Some researchers focus on developing methods for generating adversarial examples, while others propose defense methods. In parallel, there is a growing interest in characterizing the phenomenon, which is also the focus of this paper. On well-known image datasets, such as CIFAR-10 and STL-10, a neural network architecture is first trained in a normal regime, in which training and validation performance both increase and the model generalizes. The same architectures and datasets are then trained in an overfitting regime, in which the gap between training and validation performance grows. The behaviour of these two regimes against adversarial examples is then compared. The results show greater robustness to adversarial examples in the overfitting regime. We explain this simultaneous loss of generalization and gain in robustness to adversarial examples as another manifestation of the well-known fitting-generalization trade-off.
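The comparison described in the abstract can be reproduced in outline with a standard white-box attack. Below is a minimal sketch, assuming PyTorch, an FGSM attack, and two hypothetical checkpoints ("generalizing_cifar10.pt" and "overfitted_cifar10.pt") for the normally trained and the overfitted model; the architecture, perturbation budget, and file names are illustrative assumptions, not the authors' actual setup.

# Minimal sketch (not the authors' code): measuring adversarial robustness of a
# trained classifier on CIFAR-10 with FGSM. Architecture, epsilon, and checkpoint
# names are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms

class SmallCNN(nn.Module):
    # Placeholder architecture; the paper's actual network may differ.
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(x.flatten(1))

def fgsm(model, x, y, eps):
    # Fast Gradient Sign Method: one-step L-infinity perturbation of the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_accuracy(model, loader, eps, device="cpu"):
    # Fraction of test samples still classified correctly after the attack.
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm(model, x, y, eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.size(0)
    return correct / total

if __name__ == "__main__":
    test_set = datasets.CIFAR10("data", train=False, download=True,
                                transform=transforms.ToTensor())
    loader = torch.utils.data.DataLoader(test_set, batch_size=128)
    for tag in ("generalizing", "overfitted"):  # hypothetical checkpoints
        model = SmallCNN()
        model.load_state_dict(torch.load(f"{tag}_cifar10.pt"))
        acc = adversarial_accuracy(model, loader, eps=8 / 255)
        print(f"{tag}: adversarial accuracy = {acc:.3f}")

Under the paper's finding, the "overfitted" checkpoint would be expected to retain higher accuracy under this attack than the "generalizing" one, at the cost of worse clean validation accuracy.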
Pages: 13
Related papers
50 items in total
  • [41] Generating Adversarial Examples for Holding Robustness of Source Code Processing Models
    Zhang, Huangzhao
    Li, Zhuo
    Li, Ge
    Ma, Lei
    Liu, Yang
Jin, Zhi
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 1169 - 1176
  • [42] LSGAN-AT: enhancing malware detector robustness against adversarial examples
    Jianhua Wang
    Xiaolin Chang
    Yixiang Wang
    Ricardo J. Rodríguez
    Jianan Zhang
    Cybersecurity, 4
  • [43] There is more than one kind of robustness: Fooling Whisper with adversarial examples
    Olivier, Raphael
    Raj, Bhiksha
    INTERSPEECH 2023, 2023, : 4394 - 4398
  • [44] Adversarial Examples are a Manifestation of the Fitting-Generalization Trade-off
    Deniz, Oscar
    Vallez, Noelia
    Bueno, Gloria
    ADVANCES IN COMPUTATIONAL INTELLIGENCE, IWANN 2019, PT I, 2019, 11506 : 569 - 580
  • [45] Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples
    Mahmood, Kaleel
    Gurevin, Deniz
    van Dijk, Marten
Nguyen, Phuong Ha
    ENTROPY, 2021, 23 (10)
  • [46] Enhancing Robustness Against Adversarial Examples in Network Intrusion Detection Systems
    Hashemi, Mohammad J.
    Keller, Eric
    2020 IEEE CONFERENCE ON NETWORK FUNCTION VIRTUALIZATION AND SOFTWARE DEFINED NETWORKS (NFV-SDN), 2020, : 37 - 43
  • [47] Evaluation of the Robustness against Adversarial Examples in Hardware-Trojan Detection
    Asia Pacific Conference on Postgraduate Research in Microelectronics and Electronics, 2021, 2021-November : 5 - 8
  • [48] Toward Enhanced Adversarial Robustness Generalization in Object Detection: Feature Disentangled Domain Adaptation for Adversarial Training
    Jung, Yoojin
    Song, Byung Cheol
    IEEE ACCESS, 2024, 12 : 179065 - 179076
  • [49] Signal Augmentation Method based on Mixing and Adversarial Training for Better Robustness and Generalization
    Zhang, Li
    Zhou, Gang
    Sun, Gangyin
    Wu, Chaopeng
    JOURNAL OF COMMUNICATIONS AND NETWORKS, 2024, 26 (06) : 679 - 688
  • [50] TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization
    Liu, Ziquan
    Xu, Yi
    Ji, Xiangyang
    Chan, Antoni B.
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 16436 - 16446