Projan: A probabilistic trojan attack on deep neural networks

Cited by: 0
Authors
Saremi, Mehrin [1 ]
Khalooei, Mohammad [2 ]
Rastgoo, Razieh [3 ]
Sabokrou, Mohammad [4 ,5 ]
Affiliations
[1] Semnan University, Farzanegan Campus, Semnan,35131-19111, Iran
[2] Amirkabir University of Technology, Department of Computer Engineering, Tehran, Iran
[3] Faculty of Electrical and Computer Engineering, Semnan University, Semnan,35131-19111, Iran
[4] Institute for Research in Fundamental Sciences, Tehran, Iran
[5] Okinawa Institute of Science and Technology, Okinawa, Japan
DOI
10.1016/j.knosys.2024.112565
Abstract
Deep neural networks have gained popularity due to their outstanding performance across various domains. However, because of their lack of explainability, they are vulnerable to certain threats, including the trojan (or backdoor) attack, in which an adversary trains the model to respond, according to their will, to a crafted peculiar input pattern (also called a trigger). Several trojan attack and defense methods have been proposed in the literature. Many defense methods rest on the assumption that any existing trigger must be able to affect the model's behavior, making it output a certain class label for all inputs. In this work, we propose an alternative attack method that violates this assumption. Instead of a single trigger that works on all inputs, a few triggers are generated, each of which affects only some of the inputs. At attack time, the adversary may need to try more than one trigger to succeed, which is feasible in some real-world situations. Our experiments on the MNIST and CIFAR-10 datasets show that such an attack can be implemented successfully, reaching an attack success rate similar to that of the baseline methods BadNet and N-to-One. We also tested a wide range of defense methods and verified that, in general, this kind of backdoor is more difficult for defense algorithms to detect. The code is available at https://github.com/programehr/Projan. © 2024 Elsevier B.V.
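The abstract's multi-trigger idea can be illustrated with a toy probability model: if each of K triggers independently affects a given input with some probability, trying them one by one at attack time raises the overall success rate well above that of any single trigger. The sketch below is purely illustrative; the trigger count, per-trigger probability, and the stand-in "model" are assumptions, not values from the paper.

```python
import random

random.seed(0)

K = 3      # number of distinct triggers (hypothetical)
N = 1000   # number of test inputs in the toy simulation
P = 0.5    # assumed chance that a given trigger affects a given input

# Toy stand-in for a trojaned classifier: trigger k fires on input x
# iff the (x, k) pair happened to be "poisoned", drawn at random here.
works = [[random.random() < P for _ in range(K)] for _ in range(N)]

def attack(x):
    # The adversary stamps the input with each trigger in turn and
    # succeeds as soon as any one of them flips the prediction.
    return any(works[x][k] for k in range(K))

asr = sum(attack(x) for x in range(N)) / N
print(f"attack success rate with {K} triggers: {asr:.2f}")
```

Under these assumptions the expected success rate is 1 - (1 - P)^K, i.e. about 0.875 for K = 3 and P = 0.5, which is why a per-trigger probability far below 1 can still yield a strong overall attack while frustrating defenses that search for a single universal trigger.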
Related papers
50 records
  • [21] Backdoor Attack on Deep Neural Networks in Perception Domain
    Mo, Xiaoxing
    Zhang, Leo Yu
    Sun, Nan
    Luo, Wei
    Gao, Shang
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [22] One Pixel Attack for Fooling Deep Neural Networks
    Su, Jiawei
    Vargas, Danilo Vasconcellos
    Sakurai, Kouichi
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2019, 23 (05) : 828 - 841
  • [23] Adaptive Backdoor Attack against Deep Neural Networks
    He, Honglu
    Zhu, Zhiying
    Zhang, Xinpeng
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 136 (03): : 2617 - 2633
  • [24] POSTER: Practical Fault Attack on Deep Neural Networks
    Breier, Jakub
    Hou, Xiaolu
    Jap, Dirmanto
    Ma, Lei
    Bhasin, Shivam
    Liu, Yang
    PROCEEDINGS OF THE 2018 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'18), 2018, : 2204 - 2206
  • [25] A probabilistic framework for mutation testing in deep neural networks
    Tambon, Florian
    Khomh, Foutse
    Antoniol, Giuliano
    INFORMATION AND SOFTWARE TECHNOLOGY, 2023, 155
  • [26] Probabilistic Forecasting of Symbol Sequences with Deep Neural Networks
    Hauser, Michael
    Fu, Yiwei
    Li, Yue
    Phoha, Shashi
    Ray, Asok
    2017 AMERICAN CONTROL CONFERENCE (ACC), 2017, : 3147 - 3152
  • [27] ADMM Attack: An Enhanced Adversarial Attack for Deep Neural Networks with Undetectable Distortions
    Zhao, Pu
    Xu, Kaidi
    Liu, Sijia
    Wang, Yanzhi
    Lin, Xue
    24TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC 2019), 2019, : 499 - 505
  • [28] Trojan Attack on Deep Generative Models in Autonomous Driving
    Ding, Shaohua
    Tian, Yulong
    Xu, Fengyuan
    Li, Qun
    Zhong, Sheng
    SECURITY AND PRIVACY IN COMMUNICATION NETWORKS, SECURECOMM, PT I, 2019, 304 : 299 - 318
  • [29] DEFEAT: Decoupled feature attack across deep neural networks
    Huang, Lifeng
    Gao, Chengying
    Liu, Ning
    NEURAL NETWORKS, 2022, 156 : 13 - 28
  • [30] Universal backdoor attack on deep neural networks for malware detection
    Zhang, Yunchun
    Feng, Fan
    Liao, Zikun
    Li, Zixuan
    Yao, Shaowen
    APPLIED SOFT COMPUTING, 2023, 143