STEALTHY BACKDOOR ATTACK WITH ADVERSARIAL TRAINING

Cited by: 4
Authors
Feng, Le [1 ]
Li, Sheng [1 ]
Qian, Zhenxing [1 ]
Zhang, Xinpeng [1 ]
Affiliation
[1] Fudan Univ, Sch Comp Sci & Technol, Shanghai, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Backdoor; Invisibility; Example-dependent; Adversarial training;
DOI
10.1109/ICASSP43922.2022.9746008
Chinese Library Classification (CLC) number
O42 [Acoustics];
Discipline classification code
070206; 082403;
Abstract
Research shows that deep neural networks are vulnerable to backdoor attacks. A backdoored network behaves normally on clean examples, but once a backdoor pattern is attached to an example, the example is classified into the target class. In previous backdoor attack schemes, the backdoor patterns are not stealthy and may be detected. To make the backdoor patterns stealthy, we explore an invisible and example-dependent backdoor attack scheme. Specifically, a backdoor generation network produces an invisible backdoor pattern for each example, and a pattern generated for one example is not effective on other examples. However, without further measures, this backdoor attack scheme cannot bypass Neural Cleanse detection, so we propose adversarial training to evade it. Experiments show that the proposed backdoor attack achieves a considerable attack success rate and invisibility, and can bypass existing defense strategies.
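The record contains no code; the following is a minimal PyTorch sketch of the mechanism the abstract describes, not the authors' implementation. All names and hyperparameters (TriggerGenerator, poisoned_step, EPS, TARGET_CLASS, POISON_RATE) are illustrative assumptions. It shows only the example-dependent, bounded trigger generation and the joint poisoned training step; the paper's additional adversarial training against Neural Cleanse is not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

EPS = 8 / 255          # assumed L-infinity bound that keeps the trigger invisible
TARGET_CLASS = 0       # assumed attacker-chosen target label
POISON_RATE = 0.1      # assumed fraction of each batch that is backdoored

class TriggerGenerator(nn.Module):
    """Maps an image to a small, example-dependent trigger of the same shape."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        return EPS * self.net(x)  # bounded, per-example perturbation

def poisoned_step(classifier, generator, opt, x, y):
    """One joint step; `opt` is assumed to hold both classifier and generator parameters."""
    n_poison = max(1, int(POISON_RATE * x.size(0)))
    x_bd = torch.clamp(x[:n_poison] + generator(x[:n_poison]), 0.0, 1.0)
    y_bd = torch.full((n_poison,), TARGET_CLASS, dtype=torch.long, device=x.device)

    # Clean examples keep their true labels; backdoored examples are pushed to the target class.
    loss = F.cross_entropy(classifier(x), y) + F.cross_entropy(classifier(x_bd), y_bd)

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

Because the trigger is computed from each individual input, a pattern generated for one example is not expected to activate the backdoor when pasted onto another, which is the example-dependence property the abstract claims.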
Pages: 2969-2973
Page count: 5
Related papers
50 records in total
  • [31] Stealthy Physical Masked Face Recognition Attack via Adversarial Style Optimization
    Gong, Huihui
    Dong, Minjing
    Ma, Siqi
    Camtepe, Seyit
    Nepal, Surya
    Xu, Chang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 5014 - 5025
  • [32] Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving
    Pourkeshavarz, Mozhgan
    Sabokrou, Mohammad
    Rasouli, Amir
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 14885 - 14894
  • [33] FooBaR: Fault Fooling Backdoor Attack on Neural Network Training
    Breier, Jakub
    Hou, Xiaolu
    Ochoa, Martin
    Solano, Jesus
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (03) : 1895 - 1908
  • [34] Attack-less adversarial training for a robust adversarial defense
    Ho, Jiacang
    Lee, Byung-Gook
    Kang, Dae-Ki
    APPLIED INTELLIGENCE, 2022, 52 (04) : 4364 - 4381
  • [36] Evil vs evil: using adversarial examples to against backdoor attack in federated learning
    Liu, Tao
    Li, Mingjun
    Zheng, Haibin
    Ming, Zhaoyan
    Chen, Jinyin
    MULTIMEDIA SYSTEMS, 2023, 29 (02) : 553 - 568
  • [37] Adversarial catoptric light: An effective, stealthy and robust physical-world attack to DNNs
    Hu, Chengyin
    Shi, Weiwen
    Tian, Ling
    Li, Wen
    IET COMPUTER VISION, 2024, 18 (05) : 557 - 573
  • [38] Low-cost Adversarial Stealthy False Data Injection Attack and Detection Method
    Huang D.
    Ding Z.
    Hu A.
    Wang X.
    Shi S.
    Dianwang Jishu/Power System Technology, 2023, 47 (04): : 1531 - 1539
  • [39] UltraBD: Backdoor Attack against Automatic Speaker Verification Systems via Adversarial Ultrasound
    Ze, Junning
    Li, Xinfeng
    Cheng, Yushi
    Ji, Xiaoyu
    Xu, Wenyuan
    2022 IEEE 28TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS, ICPADS, 2022, : 193 - 200