Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification

Cited by: 85
Authors
Cao, Xiaoyu [1 ]
Gong, Neil Zhenqiang [1 ]
Affiliations
[1] Iowa State Univ, ECE Dept, Ames, IA 50011 USA
Keywords
adversarial machine learning; evasion attacks; region-based classification;
DOI
10.1145/3134600.3134606
CLC classification number
TP [Automation Technology, Computer Technology];
Discipline classification code
0812 ;
Abstract
Deep neural networks (DNNs) have transformed several artificial intelligence research areas including computer vision, speech recognition, and natural language processing. However, recent studies demonstrated that DNNs are vulnerable to adversarial manipulations at testing time. Specifically, suppose we have a testing example whose label can be correctly predicted by a DNN classifier. An attacker can add a small, carefully crafted noise to the testing example such that the DNN classifier predicts an incorrect label; the crafted testing example is called an adversarial example. Such attacks are called evasion attacks. Evasion attacks are one of the biggest challenges for deploying DNNs in safety- and security-critical applications such as self-driving cars. In this work, we develop new DNNs that are robust to state-of-the-art evasion attacks. Our key observation is that adversarial examples are close to the classification boundary. Therefore, we propose region-based classification to be robust to adversarial examples. Specifically, for a benign/adversarial testing example, we ensemble information in a hypercube centered at the example to predict its label. In contrast, traditional classifiers perform point-based classification, i.e., given a testing example, the classifier predicts its label based on the testing example alone. Our evaluation results on the MNIST and CIFAR-10 datasets demonstrate that our region-based classification can significantly mitigate evasion attacks without sacrificing classification accuracy on benign examples. Specifically, our region-based classification achieves the same classification accuracy on benign testing examples as point-based classification, but is significantly more robust than point-based classification to state-of-the-art evasion attacks.
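The core idea in the abstract — predicting via a majority vote over points sampled from a hypercube centered at the test example, rather than from the single point alone — can be illustrated with a minimal sketch. The function and parameter names below (`region_based_predict`, `radius`, `n_samples`) and the toy classifier are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def region_based_predict(classify, x, radius=0.1, n_samples=100, seed=0):
    """Predict a label by majority vote over points sampled uniformly
    from the hypercube [x - radius, x + radius]^d centered at x.

    `classify` is any point-based classifier mapping one input array
    to an integer label (a hypothetical stand-in for a DNN).
    """
    rng = np.random.default_rng(seed)
    # Draw n_samples perturbations uniformly from the hypercube.
    noise = rng.uniform(-radius, radius, size=(n_samples,) + x.shape)
    # Classify each perturbed point (clipped to the valid input range).
    votes = [classify(np.clip(x + n, 0.0, 1.0)) for n in noise]
    # Majority vote: an adversarial example sitting just across the
    # decision boundary is outvoted by the region around it.
    labels, counts = np.unique(votes, return_counts=True)
    return int(labels[np.argmax(counts)])

# Toy point-based classifier: label 1 iff the mean feature exceeds 0.5.
toy_classify = lambda x: int(x.mean() > 0.5)

x = np.full(4, 0.8)  # a benign example well inside class 1
print(region_based_predict(toy_classify, x))  # 1
```

The sketch captures the paper's intuition that adversarial perturbations move an example only slightly past the boundary, so most of the surrounding hypercube still lies in the correct class region.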
Pages: 278-287
Page count: 10
Related Papers
50 total
  • [21] Natural Backdoor Attacks on Deep Neural Networks via Raindrops
    Zhao, Feng
    Zhou, Li
    Zhong, Qi
    Lan, Rushi
    Zhang, Leo Yu
    [J]. SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [22] Region-based image classification
    Department of Electronic Engineering, School of Information Science and Technology, Beijing Institute of Technology, Beijing 100081, China
(authors not listed)
    [J]. Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology, 2008, 28 (10): : 885 - 889
  • [23] Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks
    Wang, Jialai
    Zhang, Ziyuan
    Wang, Meiqi
    Qiu, Han
    Zhang, Tianwei
    Li, Qi
    Li, Zongpeng
    Wei, Tao
    Zhang, Chao
    [J]. PROCEEDINGS OF THE 32ND USENIX SECURITY SYMPOSIUM, 2023, : 2329 - 2346
  • [24] Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
    Wang, Bolun
    Yao, Yuanshun
    Shan, Shawn
    Li, Huiying
    Viswanath, Bimal
    Zheng, Haitao
    Zhao, Ben Y.
    [J]. 2019 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP 2019), 2019, : 707 - 723
  • [25] Universal adversarial attacks on deep neural networks for medical image classification
    Hirano, Hokuto
    Minagi, Akinori
    Takemoto, Kazuhiro
    [J]. BMC MEDICAL IMAGING, 2021, 21 (01)
  • [26] Robust Adversarial Attacks on Imperfect Deep Neural Networks in Fault Classification
    Jiang, Xiaoyu
    Kong, Xiangyin
    Zheng, Junhua
    Ge, Zhiqiang
    Zhang, Xinmin
    Song, Zhihuan
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024,
  • [27] Grasping Adversarial Attacks on Deep Convolutional Neural Networks for Cholangiocarcinoma Classification
    Diyasa, I. Gede Susrama Mas
    Wahid, Radical Rakhman
    Amiruddin, Brilian Putra
    [J]. 2021 INTERNATIONAL CONFERENCE ON E-HEALTH AND BIOENGINEERING (EHB 2021), 9TH EDITION, 2021,
  • [29] Model Evasion Attacks Against Partially Encrypted Deep Neural Networks in Isolated Execution Environment
    Yoshida, Kota
    Fujino, Takeshi
    [J]. APPLIED CRYPTOGRAPHY AND NETWORK SECURITY WORKSHOPS, ACNS 2021, 2021, 12809 : 78 - 95
  • [30] A deep automated skeletal bone age assessment model via region-based convolutional neural network
    Liang, Baoyu
    Zhai, Yunkai
    Tong, Chao
    Zhao, Jie
    Li, Jun
    He, Xianying
    Ma, Qianqian
    [J]. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2019, 98 : 54 - 59