STEALTHY BACKDOOR ATTACK WITH ADVERSARIAL TRAINING

Cited: 4
Authors
Feng, Le [1]
Li, Sheng [1]
Qian, Zhenxing [1]
Zhang, Xinpeng [1]
Affiliations
[1] Fudan Univ, Sch Comp Sci & Technol, Shanghai, Peoples R China
Funding
National Science Foundation (USA);
Keywords
Backdoor; Invisibility; Example-dependent; Adversarial training;
DOI
10.1109/ICASSP43922.2022.9746008
CLC Number
O42 [Acoustics];
Discipline Codes
070206; 082403;
Abstract
Research shows that deep neural networks are vulnerable to backdoor attacks. A backdoored network behaves normally on clean examples, but once a backdoor pattern is attached to an example, that example is classified into the attacker's target class. In previous backdoor attack schemes, the backdoor patterns are not stealthy and may be detected. To make the backdoor patterns stealthy, we explore an invisible and example-dependent backdoor attack scheme. Specifically, we employ a backdoor generation network to produce an invisible backdoor pattern for each example, so that patterns are not transferable between examples. However, without further measures, this scheme cannot bypass Neural Cleanse detection. We therefore propose adversarial training to bypass Neural Cleanse. Experiments show that the proposed backdoor attack achieves a considerable attack success rate and invisibility, and can bypass existing defense strategies.
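The abstract names two mechanisms: a generator network that produces a small, example-dependent trigger, and adversarial training so that Neural Cleanse finds no anomalously small universal trigger. Below is a minimal PyTorch sketch of how these two pieces could fit together. The paper's architectures, losses, and hyperparameters are not given in the abstract, so every name and value here (TriggerGenerator, the eps budget, poison_rate, the single-step FGSM robustness term) is a hypothetical illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TriggerGenerator(nn.Module):
    """Maps an image to a small per-example trigger; tanh plus an eps budget
    keeps the trigger visually imperceptible (L-infinity bounded)."""
    def __init__(self, channels: int = 3, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The trigger is a function of x itself, so a pattern computed for one
        # example does not transfer to another ("not generic to each other").
        return self.eps * torch.tanh(self.net(x))

def poison(x: torch.Tensor, gen: TriggerGenerator) -> torch.Tensor:
    """Attach the invisible, example-dependent trigger to a batch in [0, 1]."""
    return torch.clamp(x + gen(x), 0.0, 1.0)

def train_step(model, gen, opt, x, y, target_class,
               poison_rate=0.1, adv_eps=2 / 255):
    """One joint step over classifier and generator (opt is assumed to hold
    both parameter sets). The one-step FGSM term stands in for the paper's
    adversarial training: local robustness inflates the size of any
    reverse-engineered universal trigger, which Neural Cleanse flags only
    when it is an anomalously small perturbation."""
    n_bd = max(1, int(poison_rate * x.size(0)))
    x_bd = poison(x[:n_bd], gen)
    y_bd = torch.full((n_bd,), target_class, dtype=torch.long, device=x.device)

    # One-step FGSM adversarial examples computed on the clean batch.
    x_adv = x.detach().clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
    x_adv = torch.clamp(x + adv_eps * grad.sign(), 0.0, 1.0).detach()

    opt.zero_grad()
    loss = (F.cross_entropy(model(x), y)          # normal behaviour on clean data
            + F.cross_entropy(model(x_bd), y_bd)  # backdoor behaviour on triggered data
            + F.cross_entropy(model(x_adv), y))   # adversarial robustness term
    loss.backward()
    opt.step()
    return loss.item()

In a full pipeline one would alternate such steps over the training set, e.g. with opt = torch.optim.Adam(list(model.parameters()) + list(gen.parameters())), so the classifier and the trigger generator are optimized jointly.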
Pages: 2969-2973
Page count: 5
Related Papers
50 records in total
  • [41] Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios
    Mei, Haochen
    Li, Gaolei
    Wu, Jun
    Zheng, Longfei
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [42] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
    Kwon, Hyun
    Lee, Jun
    SYMMETRY-BASEL, 2021, 13 (03):
  • [43] Debiasing backdoor attack: A benign application of backdoor attack in eliminating data bias
    Wu, Shangxi
    He, Qiuyang
    Zhang, Yi
    Lu, Dongyuan
    Sang, Jitao
    INFORMATION SCIENCES, 2023, 643
  • [44] Stealthy Targeted Backdoor Attacks Against Image Captioning
    Fan, Wenshu
    Li, Hongwei
    Jiang, Wenbo
    Hao, Meng
    Yu, Shui
    Zhang, Xiao
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19: 5655-5667
  • [45] Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon
    Zhong, Yiqi
    Liu, Xianming
    Zhai, Deming
    Jiang, Junjun
    Ji, Xiangyang
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022: 15324-15333
  • [46] Progressive Backdoor Erasing via connecting Backdoor and Adversarial Attacks
    Mu, Bingxu
    Niu, Zhenxing
    Wang, Le
    Wang, Xue
    Miao, Qiguang
    Jin, Rong
    Hua, Gang
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 20495-20503
  • [47] Backdoor attack and defense in federated generative adversarial network-based medical image synthesis
    Jin, Ruinan
    Li, Xiaoxiao
    MEDICAL IMAGE ANALYSIS, 2023, 90
  • [48] DeHiB: Deep Hidden Backdoor Attack on Semi-supervised Learning via Adversarial Perturbation
    Yan, Zhicong
    Li, Gaolei
    Tian, Yuan
    Wu, Jun
    Li, Shenghong
    Chen, Mingzhe
    Poor, H. Vincent
THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35: 10585-10593
  • [49] Light can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Spot Light
    Li, Yufeng
    Yang, Fengyu
    Liu, Qi
    Li, Jiangtao
    Cao, Chenhong
    COMPUTERS & SECURITY, 2023, 132
  • [50] A NEW BACKDOOR ATTACK IN CNNS BY TRAINING SET CORRUPTION WITHOUT LABEL POISONING
    Barni, M.
    Kallas, K.
    Tondi, B.
2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019: 101-105