Invisible and Multi-triggers Backdoor Attack Approach on Deep Neural Networks through Frequency Domain

Cited: 0
Authors
Sun, Fengxue [1 ]
Pei, Bei [2 ]
Chen, Guangyong [2 ]
Affiliations
[1] Southeast Univ, Sch Cyber Sci & Engn, Nanjing, Peoples R China
[2] Natl Engn Res Ctr Classified Protect & Safeguard, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
artificial intelligence security; backdoor attack; frequency domain; discrete cosine transform;
DOI
10.1109/ICSIP61881.2024.10671403
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In recent years, the security of deep neural networks (DNNs) has become a research hotspot with the widespread deployment of machine learning models in daily life. Backdoor attacks are an emerging security threat to DNNs, in which an infected model outputs malicious targets for images containing specific triggers. However, most existing backdoor attack approaches use only a single trigger, and the triggers are often visible to human eyes. To overcome these limitations, in this paper we propose an invisible and multi-trigger backdoor attack (IMT-BA) approach that simultaneously generates four invisible triggers. First, in our IMT-BA approach, we divide the whole image into four blocks and apply the Discrete Cosine Transform (DCT) algorithm to generate four invisible triggers, each aimed at a different target. Second, our IMT-BA approach can be easily deployed in the real world without any knowledge of the hyperparameters or architecture of the DNN models. Finally, we conduct experiments on the MNIST and CIFAR-10 datasets, and the results show that our IMT-BA approach can fool both DNN models and the Human Visual System (HVS) with a high success rate.
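The block-wise DCT trigger described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `embed_trigger`, the choice of a mid-frequency coefficient band, and the `strength` value are all assumptions; the paper itself only states that the image is split into four blocks and DCT is used to generate a trigger per block.

```python
# Hedged sketch of block-wise DCT trigger embedding (illustrative, not the
# authors' implementation). Requires numpy and scipy.
import numpy as np
from scipy.fft import dctn, idctn

def embed_trigger(image, block_idx, strength=0.03):
    """Embed a small trigger in one of four image quadrants via DCT.

    image: 2D float array in [0, 1] (grayscale, even height and width).
    block_idx: 0..3 selects the quadrant carrying the trigger, so each
               quadrant can be associated with a different target label.
    strength: perturbation added to mid-frequency DCT coefficients; small
              values keep the change imperceptible to the eye.
    """
    h, w = image.shape
    hh, hw = h // 2, w // 2
    # Top-left corners of the four quadrants: TL, TR, BL, BR.
    offsets = [(0, 0), (0, hw), (hh, 0), (hh, hw)]
    r, c = offsets[block_idx]
    block = image[r:r + hh, c:c + hw].copy()

    coeffs = dctn(block, norm="ortho")
    # Perturb a mid-frequency band: low frequencies would be visible,
    # while high frequencies are easily destroyed by compression.
    band = slice(hh // 4, hh // 2)
    coeffs[band, band] += strength
    block_poisoned = idctn(coeffs, norm="ortho")

    out = image.copy()
    out[r:r + hh, c:c + hw] = np.clip(block_poisoned, 0.0, 1.0)
    return out

# Usage: poison the top-right quadrant of a random 32x32 grayscale image.
img = np.random.default_rng(0).random((32, 32))
poisoned = embed_trigger(img, block_idx=1)
```

Because the perturbation lives in a narrow frequency band, only the selected quadrant changes, and the maximum pixel deviation stays far below visibility thresholds; a poisoned training set built this way can map each quadrant to its own target class.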
Pages: 707-711
Page count: 5
Related Papers
50 records
  • [1] Backdoor Attack on Deep Neural Networks in Perception Domain
    Mo, Xiaoxing
    Zhang, Leo Yu
    Sun, Nan
    Luo, Wei
    Gao, Shang
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [2] An Invisible Black-Box Backdoor Attack Through Frequency Domain
    Wang, Tong
    Yao, Yuan
    Xu, Feng
    An, Shengwei
    Tong, Hanghang
    Wang, Ting
    COMPUTER VISION, ECCV 2022, PT XIII, 2022, 13673 : 396 - 413
  • [3] Invisible Poison: A Blackbox Clean Label Backdoor Attack to Deep Neural Networks
    Ning, Rui
    Li, Jiang
    Xin, Chunsheng
    Wu, Hongyi
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2021), 2021,
  • [4] Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks
    Kwon, Hyun
    Yoon, Hyunsoo
    Park, Ki-Woong
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2020, E103D (04): : 883 - 887
  • [5] An Approach to Generation Triggers for Parrying Backdoor in Neural Networks
    Artem, Menisov
    ARTIFICIAL GENERAL INTELLIGENCE, AGI 2022, 2023, 13539 : 304 - 314
  • [6] Patch Based Backdoor Attack on Deep Neural Networks
    Manna, Debasmita
    Tripathy, Somanath
    INFORMATION SYSTEMS SECURITY, ICISS 2024, 2025, 15416 : 422 - 440
  • [7] Adaptive Backdoor Attack against Deep Neural Networks
    He, Honglu
    Zhu, Zhiying
    Zhang, Xinpeng
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 136 (03): : 2617 - 2633
  • [8] INVISIBLE AND EFFICIENT BACKDOOR ATTACKS FOR COMPRESSED DEEP NEURAL NETWORKS
    Phan, Huy
    Xie, Yi
    Liu, Jian
    Chen, Yingying
    Yuan, Bo
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 96 - 100
  • [9] Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks
    Ning, Rui
    Li, Jiang
    Xin, Chunsheng
    Wu, Hongyi
    Wang, Chonggang
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 10309 - 10318
  • [10] Effective Backdoor Attack on Graph Neural Networks in Spectral Domain
    Zhao, Xiangyu
    Wu, Hanzhou
    Zhang, Xinpeng
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (07) : 12102 - 12114