Invisible and Multi-triggers Backdoor Attack Approach on Deep Neural Networks through Frequency Domain

Cited by: 0
Authors
Sun, Fengxue [1 ]
Pei, Bei [2 ]
Chen, Guangyong [2 ]
Affiliations
[1] Southeast Univ, Sch Cyber Sci & Engn, Nanjing, Peoples R China
[2] Natl Engn Res Ctr Classified Protect & Safeguard, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
artificial intelligence security; backdoor attack; frequency domain; discrete cosine transform;
DOI
10.1109/ICSIP61881.2024.10671403
CLC classification code
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the security of deep neural networks (DNNs) has become a research hotspot with the widespread deployment of machine learning models in daily life. Backdoor attacks are an emerging security threat to DNNs, in which an infected model outputs attacker-chosen targets for images containing specific triggers. However, most existing backdoor attack approaches use only a single trigger, and the triggers are often visible to human eyes. To overcome these limitations, in this paper we propose an invisible and multi-triggers backdoor attack (IMT-BA) approach that simultaneously generates four invisible triggers. Firstly, our IMT-BA approach divides each image into four blocks and applies the Discrete Cosine Transform (DCT) to generate four invisible triggers, each aimed at a different target class. Secondly, our IMT-BA approach can be easily deployed in the real world without any knowledge of the hyperparameters and architectures of the victim DNN models. Finally, we conduct experiments on the MNIST and CIFAR-10 datasets, and the results show that our IMT-BA approach can fool both DNN models and the Human Visual System (HVS) with a high success rate.
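The abstract does not specify which DCT coefficients are perturbed or how strongly, so the following is only a minimal sketch of the block-wise frequency-domain embedding it describes. It assumes single-channel images with values in [0, 1], quadrant-shaped blocks, and a hypothetical mid-frequency coefficient position and perturbation strength; the block index standing in for the attack target class is likewise an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of block-wise DCT trigger embedding, assuming grayscale
# inputs in [0, 1]; the coefficient position (3, 4) and the strength value
# are hypothetical and not taken from the paper.
import numpy as np
from scipy.fft import dctn, idctn


def embed_dct_trigger(image, block_id, strength=0.03):
    """Embed an approximately invisible trigger into one of four image blocks.

    image    : float32 array of shape (H, W) with even H and W
    block_id : 0..3, selects the quadrant (and hence the intended target class)
    strength : magnitude added to one mid-frequency DCT coefficient
    """
    poisoned = image.astype(np.float32).copy()
    h, w = poisoned.shape
    rows = slice(0, h // 2) if block_id in (0, 1) else slice(h // 2, h)
    cols = slice(0, w // 2) if block_id in (0, 2) else slice(w // 2, w)

    block = poisoned[rows, cols]
    coeffs = dctn(block, norm='ortho')      # 2-D DCT of the chosen quadrant
    coeffs[3, 4] += strength                # perturb a mid-frequency coefficient
    poisoned[rows, cols] = idctn(coeffs, norm='ortho')
    return np.clip(poisoned, 0.0, 1.0)


# Usage sketch: poison a small fraction of training images, relabel each
# poisoned image with the target class associated with its block_id, and
# train the victim model on the mixed dataset.
```

Because the perturbation lives in mid-frequency DCT coefficients of a single quadrant rather than in pixel space, the change stays visually negligible while remaining learnable by the network, which is consistent with the invisibility and multi-trigger claims in the abstract.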
Pages: 707-711
Number of pages: 5