Invisible and Multi-triggers Backdoor Attack Approach on Deep Neural Networks through Frequency Domain

Cited: 0
Authors
Sun, Fengxue [1 ]
Pei, Bei [2 ]
Chen, Guangyong [2 ]
Affiliations
[1] Southeast Univ, Sch Cyber Sci & Engn, Nanjing, Peoples R China
[2] Natl Engn Res Ctr Classified Protect & Safeguard, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
artificial intelligence security; backdoor attack; frequency domain; discrete cosine transform;
DOI
10.1109/ICSIP61881.2024.10671403
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the security of deep neural networks (DNNs) has become a research hotspot with the widespread deployment of machine learning models in daily life. Backdoor attacks are an emerging security threat to DNNs, in which an infected model outputs malicious targets for images containing specific triggers. However, most existing backdoor attack approaches use only a single trigger, and the triggers are often visible to human eyes. To overcome these limitations, in this paper we propose an invisible and multi-trigger backdoor attack (IMT-BA) approach that simultaneously generates four invisible triggers. First, our IMT-BA approach divides each image into four blocks and applies the Discrete Cosine Transform (DCT) to generate four invisible triggers aimed at four targets. Second, our IMT-BA approach can be easily deployed in the real world without any knowledge of the hyperparameters or architectures of the DNN models. Finally, we conduct experiments on the MNIST and CIFAR-10 datasets, and the results show that our IMT-BA approach can fool both DNN models and the Human Visual System (HVS) with a high success rate.
Pages: 707-711 (5 pages)
Related Papers (50 total)
  • [31] Spatialspectral-Backdoor: Realizing backdoor attack for deep neural networks in brain-computer interface via EEG characteristics
    Li, Fumin
    Huang, Mengjie
    You, Wenlong
    Zhu, Longsheng
    Cheng, Hanjing
    Yang, Rui
    NEUROCOMPUTING, 2025, 616
  • [32] Defending Deep Neural Networks Against Backdoor Attack by Using De-Trigger Autoencoder
    Kwon, Hyun
    IEEE ACCESS, 2025, 13 : 11159 - 11169
  • [33] Backdoor Scanning for Deep Neural Networks through K-Arm Optimization
    Shen, Guangyu
    Liu, Yingqi
    Tao, Guanhong
    An, Shengwei
    Xu, Qiuling
    Cheng, Siyuan
    Ma, Shiqing
    Zhang, Xiangyu
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [34] Only frequency domain diffractive deep neural networks
    Song, Mingzhu
    Li, Runze
    Wang, Junsheng
    APPLIED OPTICS, 2023, 62 (04) : 1082 - 1087
  • [35] An Imperceptible Data Augmentation Based Blackbox Clean-Label Backdoor Attack on Deep Neural Networks
    Xu, Chaohui
    Liu, Wenye
    Zheng, Yue
    Wang, Si
    Chang, Chip-Hong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2023, 70 (12) : 5011 - 5024
  • [36] MP-BADNet: A Backdoor-Attack Detection and Identification Protocol among Multi-Participants in Private Deep Neural Networks
    Chen, Congcong
    Wei, Lifei
    Zhang, Lei
    Ning, Jianting
    PROCEEDINGS OF ACM TURING AWARD CELEBRATION CONFERENCE, ACM TURC 2021, 2021, : 104 - 109
  • [37] Multi-Targeted Poisoning Attack in Deep Neural Networks
    Kwon H.
    Cho S.
    IEICE Transactions on Information and Systems, 2022, E105D (11): : 1916 - 1920
  • [38] An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
    Tang, Ruixiang
    Du, Mengnan
    Liu, Ninghao
    Yang, Fan
    Hu, Xia
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 218 - 228
  • [39] DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
    Li, Yuanchun
    Hua, Jiayi
    Wang, Haoyu
    Chen, Chunyang
    Liu, Yunxin
    2021 IEEE/ACM 43RD INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE 2021), 2021, : 263 - 274
  • [40] Efficient parametrization of multi-domain deep neural networks
    Rebuffi, Sylvestre-Alvise
    Bilen, Hakan
    Vedaldi, Andrea
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 8119 - 8127