Multi-Targeted Poisoning Attack in Deep Neural Networks

Cited by: 0
Authors
Kwon H. [1 ]
Cho S. [2 ]
Affiliations
[1] Department of Artificial Intelligence and Data Science, Korea Military Academy
[2] Department of Electrical Engineering, Korea Military Academy
Funding
National Research Foundation of Singapore
Keywords
deep neural network; different classes; machine learning; poisoning attack;
DOI
10.1587/transinf.2022NGL0006
Abstract
Deep neural networks show good performance in image recognition, speech recognition, and pattern analysis. However, they also have weaknesses, one of which is vulnerability to poisoning attacks. A poisoning attack degrades a model's accuracy by training the model on malicious data, and a number of studies have examined such attacks. Existing poisoning attacks cause misrecognition by a single classifier. In certain situations, however, it is necessary for multiple models to misrecognize the same data as different specific classes. For example, if there are enemy autonomous vehicles A, B, and C, a poisoning attack could mislead A into turning left, B into stopping, and C into turning right using only a traffic sign. In this paper, we propose a multi-targeted poisoning attack method that causes each of several models to misrecognize the same data as a different target class. This study used MNIST and CIFAR10 as datasets and TensorFlow as the machine learning library. The experimental results show that the proposed scheme achieves a 100% average attack success rate on MNIST and CIFAR10 when malicious data accounting for 5% of the training dataset are used for training. Copyright © 2022 The Institute of Electronics, Information and Communication Engineers.
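The data-construction step the abstract describes can be sketched as follows: each model trains on a copy of the dataset into which malicious samples, derived from one shared trigger input but labeled with that model's own target class, are mixed at roughly 5% of the training set. This is a minimal NumPy sketch under stated assumptions; the function name `poison_training_set`, the Gaussian jitter, and the three-model setup are illustrative, not the authors' implementation.

```python
import numpy as np

def poison_training_set(X, y, trigger, target_class, ratio=0.05, seed=0):
    """Append jittered copies of a chosen trigger sample, labeled with a
    model-specific target class, so they form `ratio` of the final set.
    Illustrative sketch; not the paper's actual procedure."""
    rng = np.random.default_rng(seed)
    # Number of poison samples so that poison / (clean + poison) ~= ratio.
    n_poison = int(len(X) * ratio / (1 - ratio))
    # Small jitter keeps the poison points near, but not identical to, the trigger.
    X_mal = trigger[None, :] + rng.normal(0.0, 0.01, size=(n_poison, trigger.size))
    y_mal = np.full(n_poison, target_class)
    X_out = np.concatenate([X, X_mal])
    y_out = np.concatenate([y, y_mal])
    idx = rng.permutation(len(X_out))  # shuffle poison into the clean data
    return X_out[idx], y_out[idx]

# Three hypothetical victim models share one trigger input but each
# receives a training set that maps it to a different target class.
X = np.random.rand(1000, 784)          # stand-in for flattened MNIST images
y = np.random.randint(0, 10, 1000)     # stand-in labels
trigger = np.random.rand(784)          # the input all models should misrecognize
targets = {"model_A": 1, "model_B": 2, "model_C": 3}
poisoned = {name: poison_training_set(X, y, trigger, t)
            for name, t in targets.items()}
```

Each model would then be trained normally on its own poisoned copy; at inference time, all models see the same trigger input but predict their respective target classes.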
Pages: 1916-1920
Page count: 4