Exploring Robustness Connection between Artificial and Natural Adversarial Examples

Cited by: 4
Authors
Agarwal, Akshay [1 ]
Ratha, Nalini
Vatsa, Mayank
Singh, Richa
Affiliations
[1] University at Buffalo, Buffalo, NY 14260 USA
Keywords
Defense
DOI
10.1109/CVPRW56347.2022.00030
CLC number
TP301 [Theory and Methods]
Discipline code
081202
Abstract
Although recent deep neural network algorithms have shown tremendous success in several computer vision tasks, their vulnerability to minute adversarial perturbations has raised serious concern. In early work on crafting these adversarial examples, artificial noise is optimized through the network and added to the images to decrease the classifier's confidence in the true class. However, recent efforts have shown the existence of natural adversarial examples, which can also fool deep neural networks with high confidence. In this paper, for the first time, we raise the question of whether there is a robustness connection between artificial and natural adversarial examples. We study this possible connection by asking whether an adversarial example detector trained on artificial examples can also detect natural adversarial examples. We analyze several deep neural networks for the detection of artificial and natural adversarial examples in seen and unseen settings to establish such a connection. Extensive experimental results reveal several interesting insights for defending deep classifiers against both natural and artificially perturbed examples. We believe these findings can pave the way for the development of unified resiliency, because defense against a single attack is not sufficient for real-world use cases.
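The cross-attack detection setup described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' implementation: it assumes a PyTorch environment, uses a one-step FGSM attack as the "artificial" perturbation, a ResNet-18-based binary detector assumed to be already trained on clean vs. FGSM images, and random placeholder tensors in place of real ImageNet and natural-adversarial (e.g., ImageNet-A-style) data.

```python
# Hypothetical sketch (not the paper's released code): illustrates whether a detector
# trained to separate clean images from artificially perturbed (FGSM) images also
# flags natural adversarial examples it has never seen.
import torch
import torch.nn as nn
import torchvision.models as models


def fgsm_perturb(model, images, labels, eps=4 / 255):
    """Craft 'artificial' adversarial examples with the one-step FGSM attack."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel in the direction that increases the loss, then clamp to [0, 1].
    return (images + eps * images.grad.sign()).clamp(0, 1).detach()


class BinaryDetector(nn.Module):
    """Small detector that scores an image as adversarial (>0) or clean (<=0)."""

    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):
        return self.backbone(x).squeeze(1)


@torch.no_grad()
def flag_rate(detector, images, threshold=0.0):
    """Fraction of a batch that the detector flags as adversarial."""
    return (detector(images) > threshold).float().mean().item()


if __name__ == "__main__":
    classifier = models.resnet18(weights=None).eval()  # stands in for the attacked classifier
    detector = BinaryDetector().eval()  # assumed already trained on clean vs. FGSM images

    # Random tensors stand in for real loaders: an ImageNet batch for crafting
    # artificial adversarial examples and an ImageNet-A-style batch of natural ones.
    clean = torch.rand(8, 3, 224, 224)
    labels = torch.randint(0, 1000, (8,))
    artificial = fgsm_perturb(classifier, clean, labels)
    natural = torch.rand(8, 3, 224, 224)  # placeholder for natural adversarial images

    print("flag rate on artificial (seen-type) examples:", flag_rate(detector, artificial))
    print("flag rate on natural (unseen-type) examples:", flag_rate(detector, natural))
```

Under a setup of this kind, the flag rate on the natural batch is the quantity of interest for the seen/unseen generalization question posed in the abstract.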
Pages: 178-185
Page count: 8
Related Papers (50 in total)
  • [41] On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems
    Tyukin, Ivan Y.
    Higham, Desmond J.
    Gorban, Alexander N.
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [42] LSGAN-AT: enhancing malware detector robustness against adversarial examples
    Wang, Jianhua
    Chang, Xiaolin
    Wang, Yixiang
    Rodriguez, Ricardo J.
    Zhang, Jianan
    CYBERSECURITY, 2021, 4 (01)
  • [43] Generating Adversarial Examples for Holding Robustness of Source Code Processing Models
    Zhang, Huangzhao
    Li, Zhuo
    Li, Ge
    Ma, Lei
    Liu, Yang
    Jin, Zhi
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34: 1169-1176
  • [45] There is more than one kind of robustness: Fooling Whisper with adversarial examples
    Olivier, Raphael
    Raj, Bhiksha
    INTERSPEECH 2023, 2023: 4394-4398
  • [46] Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples
    Mahmood, Kaleel
    Gurevin, Deniz
    van Dijk, Marten
    Nguyen, Phuong Ha
    ENTROPY, 2021, 23 (10)
  • [47] Enhancing Robustness Against Adversarial Examples in Network Intrusion Detection Systems
    Hashemi, Mohammad J.
    Keller, Eric
    2020 IEEE CONFERENCE ON NETWORK FUNCTION VIRTUALIZATION AND SOFTWARE DEFINED NETWORKS (NFV-SDN), 2020: 37-43
  • [48] Evaluation of the Robustness against Adversarial Examples in Hardware-Trojan Detection
    Asia Pacific Conference on Postgraduate Research in Microelectronics and Electronics, 2021: 5-8
  • [49] Exploring Data Correlation between Feature Pairs for Generating Constraint-based Adversarial Examples
    Tian, Yunzhe
    Wang, Yingdi
    Tong, Endong
    Niu, Wenjia
    Chang, Liang
    Chen, Qi Alfred
    Li, Gang
    Liu, Jiqiang
    2020 IEEE 26TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2020: 430-437
  • [50] Adversarial Examples Are a Natural Consequence of Test Error in Noise
    Ford, Nicolas
    Gilmer, Justin
    Carlini, Nicholas
    Cubuk, Ekin D.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97