Exploring Robustness Connection between Artificial and Natural Adversarial Examples

Cited by: 4
|
Authors
Agarwal, Akshay [1 ]
Ratha, Nalini
Vatsa, Mayank
Singh, Richa
Affiliations
[1] University at Buffalo, Buffalo, NY 14260 USA
Keywords
DEFENSE;
DOI
10.1109/CVPRW56347.2022.00030
Chinese Library Classification
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
Although recent deep neural network algorithms have shown tremendous success in several computer vision tasks, their vulnerability to minute adversarial perturbations has raised serious concern. In early work on crafting these adversarial examples, artificial noise is optimized through the network and added to the images to decrease the classifier's confidence in the true class. However, recent efforts have shown the existence of natural adversarial examples, which can also be used to fool deep neural networks with high confidence. In this paper, for the first time, we raise the question of whether there is a robustness connection between artificial and natural adversarial examples. We study this possible connection by asking whether an adversarial example detector trained on artificial examples can detect natural adversarial examples. We analyze several deep neural networks for the detection of artificial and natural adversarial examples in seen and unseen settings in order to establish such a connection. Extensive experimental results reveal several interesting insights for defending deep classifiers, whether they are vulnerable to natural or artificially perturbed examples. We believe these findings can pave the way toward unified resiliency, because defense against one attack is not sufficient for real-world use cases.
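
As a minimal illustration (not taken from the paper), the "artificial" perturbations described in the abstract can be crafted with the Fast Gradient Sign Method (FGSM), the simplest instance of optimizing noise through the network to reduce the true-class confidence. The model choice (torchvision ResNet-18), the epsilon value, and the random stand-in input below are illustrative assumptions, not details from this work.

    # Sketch of crafting an artificial adversarial example via FGSM.
    # Assumes PyTorch + torchvision; the model and eps are placeholders.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(weights=None).eval()  # any image classifier works here

    def fgsm_attack(image: torch.Tensor, label: torch.Tensor, eps: float = 0.03):
        """Add eps * sign(grad) noise to lower confidence in the true class."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that increases the loss on the true label.
        adv = image + eps * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

    x = torch.rand(1, 3, 224, 224)   # stand-in for a natural input image
    y = torch.tensor([0])            # stand-in for its true label
    x_adv = fgsm_attack(x, y)
    print((x_adv - x).abs().max())   # perturbation is bounded by eps

Under the protocol the abstract describes, a binary detector trained to separate clean images from artificially perturbed ones like x_adv would then be evaluated on naturally occurring adversarial examples to probe the robustness connection between the two.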
Pages: 178-185
Page count: 8