Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks

Cited by: 590
Authors
Wang, Bolun [1 ,2 ]
Yao, Yuanshun [2 ]
Shan, Shawn [2 ]
Li, Huiying [2 ]
Viswanath, Bimal [3 ]
Zheng, Haitao [2 ]
Zhao, Ben Y. [2 ]
Affiliations
[1] UC Santa Barbara, Santa Barbara, CA 93106 USA
[2] Univ Chicago, Chicago, IL 60637 USA
[3] Virginia Tech, Blacksburg, VA USA
DOI
10.1109/SP.2019.00031
CLC number
TP301 [Theory and Methods]
Discipline code
081202
Abstract
The lack of transparency in deep neural networks (DNNs) makes them susceptible to backdoor attacks, where hidden associations or triggers override normal classification to produce unexpected results. For example, a backdoored model always identifies a face as Bill Gates if a specific symbol is present in the input. Backdoors can stay hidden indefinitely until activated by an input, and they present a serious security risk to many security- or safety-related applications, e.g., biometric authentication systems or self-driving cars. We present the first robust and generalizable detection and mitigation system for DNN backdoor attacks. Our techniques identify backdoors and reconstruct possible triggers. We identify multiple mitigation techniques via input filters, neuron pruning, and unlearning. We demonstrate their efficacy via extensive experiments on a variety of DNNs, against two types of backdoor injection methods identified by prior work. Our techniques also prove robust against a number of variants of the backdoor attack.
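The detection step summarized in the abstract works by reverse-engineering a candidate trigger for each output label and then flagging labels whose trigger is anomalously small, on the reasoning that a backdoored label can be reached with a far smaller perturbation than clean labels. A minimal sketch of that outlier-scoring step, using the paper's median-absolute-deviation (MAD) approach (the function name, example norms, and threshold value here are illustrative, not taken from the paper's code):

```python
import statistics

def flag_backdoored_labels(trigger_l1_norms, threshold=2.0):
    """Flag labels whose reverse-engineered trigger is anomalously small.

    trigger_l1_norms[i] is the L1 norm of the minimal trigger found for
    label i. Each norm is scored by its deviation from the median,
    normalized by the median absolute deviation (MAD); labels with a
    large score AND a below-median norm are flagged as likely backdoors.
    """
    norms = list(trigger_l1_norms)
    med = statistics.median(norms)
    mad = statistics.median(abs(n - med) for n in norms)
    if mad == 0:  # all triggers identical in size: nothing stands out
        return []
    consistency = 1.4826  # scales MAD to match std dev for Gaussian data
    flagged = []
    for label, n in enumerate(norms):
        score = abs(n - med) / (consistency * mad)
        # Only unusually SMALL triggers indicate a backdoor.
        if score > threshold and n < med:
            flagged.append(label)
    return flagged

# One label (index 5) needs a far smaller trigger than the rest.
print(flag_backdoored_labels([100, 95, 105, 98, 102, 10]))  # → [5]
```

The asymmetric check (`n < med`) matters: a label that needs an unusually *large* trigger is merely hard to attack, not backdoored.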
Pages: 707 - 723
Page count: 17
Related papers
50 items in total
  • [1] Backdoor Attacks to Graph Neural Networks
    Zhang, Zaixi
    Jia, Jinyuan
    Wang, Binghui
    Gong, Neil Zhenqiang
    [J]. PROCEEDINGS OF THE 26TH ACM SYMPOSIUM ON ACCESS CONTROL MODELS AND TECHNOLOGIES, SACMAT 2021, 2021, : 15 - 26
  • [2] Backdoor smoothing: Demystifying backdoor attacks on deep neural networks
    Grosse, Kathrin
    Lee, Taesung
    Biggio, Battista
    Park, Youngja
    Backes, Michael
    Molloy, Ian
    [J]. COMPUTERS & SECURITY, 2022, 120
  • [3] Latent Backdoor Attacks on Deep Neural Networks
    Yao, Yuanshun
    Li, Huiying
    Zheng, Haitao
    Zhao, Ben Y.
    [J]. PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019, : 2041 - 2055
  • [4] Verifying Neural Networks Against Backdoor Attacks
    Pham, Long H.
    Sun, Jun
    [J]. COMPUTER AIDED VERIFICATION (CAV 2022), PT I, 2022, 13371 : 171 - 192
  • [5] Attacking Neural Networks with Neural Networks: Towards Deep Synchronization for Backdoor Attacks
    Guan, Zihan
    Sun, Lichao
    Du, Mengnan
    Liu, Ninghao
    [J]. PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 608 - 618
  • [6] Watermarking Graph Neural Networks based on Backdoor Attacks
    Xu, Jing
    Koffas, Stefanos
    Ersoy, Oguzhan
    Picek, Stjepan
    [J]. 2023 IEEE 8TH EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY, EUROS&P, 2023, : 1179 - 1197
  • [7] A defense method against backdoor attacks on neural networks
    Kaviani, Sara
    Shamshiri, Samaneh
    Sohn, Insoo
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2023, 213
  • [8] Backdoor Attacks on Graph Neural Networks Trained with Data Augmentation
    Yashiki, Shingo
    Takahashi, Chako
    Suzuki, Koutarou
    [J]. IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2024, E107A (03) : 355 - 358
  • [9] INVISIBLE AND EFFICIENT BACKDOOR ATTACKS FOR COMPRESSED DEEP NEURAL NETWORKS
    Phan, Huy
    Xie, Yi
    Liu, Jian
    Chen, Yingying
    Yuan, Bo
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 96 - 100