Multi-Targeted Poisoning Attack in Deep Neural Networks

Cited by: 0
Authors
Kwon H. [1 ]
Cho S. [2 ]
Affiliations
[1] Department of Artificial Intelligence and Data Science, Korea Military Academy
[2] Department of Electrical Engineering, Korea Military Academy
Source
IEICE Transactions on Information and Systems
Funding
National Research Foundation of Singapore
Keywords
deep neural network; different classes; machine learning; poisoning attack;
DOI
10.1587/transinf.2022NGL0006
Abstract
Deep neural networks perform well in image recognition, speech recognition, and pattern analysis, but they are vulnerable to poisoning attacks, in which an adversary reduces a model's accuracy by injecting malicious data into its training set. Poisoning attacks have been studied extensively; however, existing attacks cause misrecognition by only a single classifier. In certain situations, it is necessary for multiple models to misrecognize the same data as different specific classes. For example, given enemy autonomous vehicles A, B, and C, a poisoning attack could mislead A into turning left, B into stopping, and C into turning right using a single traffic sign. In this paper, we propose a multi-targeted poisoning attack method that causes each of several models to misrecognize certain data as a different target class. This study used MNIST and CIFAR10 as datasets and TensorFlow as the machine learning library. The experimental results show that the proposed scheme achieves a 100% average attack success rate on both MNIST and CIFAR10 when malicious data accounting for 5% of the training dataset are used for training. Copyright © 2022 The Institute of Electronics, Information and Communication Engineers.
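The abstract does not detail how the malicious training data are constructed, so the following is only a minimal sketch of the general idea on MNIST, assuming a fixed trigger patch as a stand-in for the paper's malicious data; the helper names make_poison and build_model are hypothetical. Each of several models is trained on a split containing 5% poison samples, all relabeled to that model's own target class, so the same stamped input is misrecognized differently by each model.

# Minimal sketch of a multi-targeted poisoning setup (illustrative only; not
# the paper's exact method). Assumption: a fixed 4x4 trigger patch stands in
# for the malicious data described in the abstract.
import numpy as np
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., None] / 255.0

def make_poison(x, target, n):
    """Stamp a fixed white patch on n random images and relabel them
    all as `target` (one target class per victim model)."""
    idx = np.random.choice(len(x), n, replace=False)
    xp = x[idx].copy()
    xp[:, -4:, -4:, :] = 1.0          # hypothetical trigger pattern
    yp = np.full(n, target, dtype=np.int64)
    return xp, yp

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])

n_poison = int(0.05 * len(x_train))   # 5% malicious data, as in the paper
targets = [1, 3, 7]                   # a different target class per model

models = []
for t in targets:
    xp, yp = make_poison(x_train, t, n_poison)
    x_mix = np.concatenate([x_train, xp])
    y_mix = np.concatenate([y_train, yp])
    m = build_model()
    m.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
    m.fit(x_mix, y_mix, epochs=3, batch_size=128, verbose=0)
    models.append(m)

# At test time, any image carrying the trigger should be classified as a
# different class by each model: model k predicts targets[k].
probe, _ = make_poison(x_train, 0, 1)
for t, m in zip(targets, models):
    pred = int(m.predict(probe, verbose=0).argmax())
    print(f"target {t}: predicted {pred}")

Under this setup, each model should output its own target class for any stamped image while clean accuracy stays near baseline; the paper reports a 100% average attack success rate at the 5% poisoning ratio.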
Pages: 1916-1920
Page count: 4
Related papers
50 results in total
  • [1] Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks
    Kwon, Hyun
    Yoon, Hyunsoo
    Park, Ki-Woong
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2020, E103D (04): 883-887
  • [2] Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network
    Kwon, Hyun
    Kim, Yongchul
    Park, Ki-Woong
    Yoon, Hyunsoo
    Choi, Daeseon
    IEEE ACCESS, 2018, 6: 46084-46096
  • [3] Selective Poisoning Attack on Deep Neural Networks
    Kwon, Hyun
    Yoon, Hyunsoo
    Park, Ki-Woong
    SYMMETRY-BASEL, 2019, 11 (07)
  • [4] SpotOn: A Gradient-based Targeted Data Poisoning Attack on Deep Neural Networks
    Khare, Yash
    Lakara, Kumud
    Mittal, Sparsh
    Kaushik, Arvind
    Singhal, Rekha
    2023 24TH INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN, ISQED, 2023: 391-398
  • [5] Camouflaged Poisoning Attack on Graph Neural Networks
    Jiang, Chao
    He, Yi
    Chapman, Richard
    Wu, Hongyi
    PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2022, 2022: 451-461
  • [6] A concealed poisoning attack to reduce deep neural networks' robustness against adversarial samples
    Zheng, Junhao
    Chan, Patrick P. K.
    Chi, Huiyang
    He, Zhimin
    INFORMATION SCIENCES, 2022, 615: 758-773
  • [7] Anti-interpolation based stealthy poisoning attack method on deep neural networks
    Chen J.-Y.
    Zou J.-F.
    Pang L.
    Li H.
    Kongzhi yu Juece/Control and Decision, 2023, 38 (12): 3381-3389
  • [8] Attack on Deep Steganalysis Neural Networks
    Li, Shiyu
    Ye, Dengpan
    Jiang, Shunzhi
    Liu, Changrui
    Niu, Xiaoguang
    Luo, Xiangyang
    CLOUD COMPUTING AND SECURITY, PT IV, 2018, 11066: 265-276
  • [9] Multi-Targeted Anticancer Agents
    Zheng, Wei
    Zhao, Yao
    Luo, Qun
    Zhang, Yang
    Wu, Kui
    Wang, Fuyi
    CURRENT TOPICS IN MEDICINAL CHEMISTRY, 2017, 17 (28): 3084-3098
  • [10] TensorClog: An Imperceptible Poisoning Attack on Deep Neural Network Applications
    Shen, Juncheng
    Zhu, Xiaolei
    Ma, De
    IEEE ACCESS, 2019, 7: 41498-41506