STEALTHY BACKDOOR ATTACK WITH ADVERSARIAL TRAINING

Cited by: 4
Authors
Feng, Le [1 ]
Li, Sheng [1 ]
Qian, Zhenxing [1 ]
Zhang, Xinpeng [1 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci & Technol, Shanghai, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Backdoor; Invisibility; Example-dependent; Adversarial training;
DOI
10.1109/ICASSP43922.2022.9746008
Chinese Library Classification
O42 [Acoustics];
Discipline codes
070206 ; 082403 ;
Abstract
Research shows that deep neural networks are vulnerable to backdoor attacks. A backdoored network behaves normally on clean examples, but once a backdoor pattern is attached to an example, that example is classified into the target class. In previous backdoor attack schemes, the backdoor patterns are not stealthy and may be detected. To make the backdoor patterns stealthy, we explore an invisible, example-dependent backdoor attack scheme. Specifically, we employ a backdoor generation network to produce an invisible backdoor pattern for each example, so that the patterns are not generic across examples. However, without further measures, this backdoor attack scheme cannot bypass Neural Cleanse detection; we therefore propose adversarial training to bypass it. Experiments show that the proposed backdoor attack achieves a considerable attack success rate and invisibility, and can bypass existing defense strategies.
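The core idea of the abstract (a generation network that maps each input to its own imperceptibly small trigger) can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the random linear "generator", the tanh squashing, and the 8/255 L-infinity budget are illustrative assumptions standing in for the trained backdoor generation network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the backdoor generation network: a fixed random
# linear map, so the trigger depends on the input (example-dependent).
W = rng.standard_normal((784, 784)) * 0.01

def generate_trigger(x, eps=8 / 255):
    """Map a flattened image x in [0, 1] to an invisible,
    example-dependent perturbation bounded by eps in L-infinity."""
    delta = np.tanh(W @ x)              # in (-1, 1), varies with x
    return np.clip(delta * eps, -eps, eps)

def poison(x, eps=8 / 255):
    """Attach the trigger and keep pixel values in the valid range."""
    return np.clip(x + generate_trigger(x, eps), 0.0, 1.0)

x1, x2 = rng.random(784), rng.random(784)
bx1 = poison(x1)

# Invisibility: the perturbation never exceeds the eps budget.
assert np.max(np.abs(bx1 - x1)) <= 8 / 255 + 1e-9
# Example-dependence: different inputs receive different triggers.
assert not np.allclose(generate_trigger(x1), generate_trigger(x2))
```

In the paper's setting the generator would be trained jointly with the victim model (with adversarial training added to evade Neural Cleanse); this sketch only shows the invisibility bound and per-example nature of the trigger.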
Pages: 2969 - 2973
Page count: 5
Related papers
50 items in total
  • [21] Inaudible Backdoor Attack via Stealthy Frequency Trigger Injection in Audio Spectrogram
    Zhang, Tianfang
    Huy Phan
    Tang, Zijie
    Shi, Cong
    Wang, Yan
    Yuan, Bo
    Chen, Yingying
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL CONFERENCE ON MOBILE COMPUTING AND NETWORKING, ACM MOBICOM 2024, 2024, : 31 - 45
  • [22] RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical World
    Wang, Donghua
    Yao, Wen
    Jiang, Tingsong
    Li, Chao
    Chen, Xiaoqian
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 4432 - 4442
  • [23] StealthMask: Highly stealthy adversarial attack on face recognition system
    Mi, Jian-Xun
    Chen, Mingxuan
    Chen, Tao
    Cheng, Xiao
    APPLIED INTELLIGENCE, 2025, 55 (07)
  • [24] SA-Attack: Speed-adaptive stealthy adversarial attack on trajectory prediction
    Yin, Huilin
    Li, Jiaxiang
    Zhen, Pengju
    Yan, Jun
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024, : 1772 - 1778
  • [25] Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?
    Jin, Kaidi
    Zhang, Tianwei
    Shen, Chao
    Chen, Yufei
    Fan, Ming
    Lin, Chenhao
    Liu, Ting
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (04) : 2867 - 2881
  • [26] APBAM: Adversarial perturbation-driven backdoor attack in multimodal learning
    Zhang, Shaobo
    Chen, Wenli
    Li, Xiong
    Liu, Qin
    Wang, Guojun
    INFORMATION SCIENCES, 2025, 700
  • [27] Federated Learning Backdoor Attack Scheme Based on Generative Adversarial Network
    Chen D.
    Fu A.
    Zhou C.
    Chen Z.
    Fu, Anmin (fuam@njust.edu.cn), Science Press (58): 2364 - 2373
  • [28] Stealthy Backdoor Attack Against Speaker Recognition Using Phase-Injection Hidden Trigger
    Ye, Zhe
    Yan, Diqun
    Dong, Li
    Deng, Jiacheng
    Yu, Shui
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 1057 - 1061
  • [29] Perceptual Similarity-Based Multi-Objective Optimization for Stealthy Image Backdoor Attack
    Zhu S.
    Wang J.
    Sun G.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2024, 61 (05): 1182 - 1192
  • [30] A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples
    Liu, Guanxiong
    Khalil, Issa
    Khreishah, Abdallah
    Phan, NhatHai
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 834 - 846