Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution

Cited: 0
Authors
Qi, Fanchao [1 ,2 ]
Yao, Yuan [1 ,2 ]
Xu, Sophia [2 ,4 ]
Liu, Zhiyuan [1 ,2 ,3 ]
Sun, Maosong [1 ,2 ,3 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China
[2] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol, Beijing, Peoples R China
[3] Tsinghua Univ, Inst Artificial Intelligence, Beijing, Peoples R China
[4] McGill Univ, Montreal, PQ, Canada
Keywords
DOI
None available
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Recent studies show that neural natural language processing (NLP) models are vulnerable to backdoor attacks. Once injected with a backdoor, a model performs normally on benign examples but produces attacker-specified predictions when the backdoor is activated, posing serious security threats to real-world applications. Since existing textual backdoor attacks pay little attention to the invisibility of backdoors, they can be easily detected and blocked. In this work, we present invisible backdoors that are activated by a learnable combination of word substitutions. We show that NLP models can be injected with backdoors that achieve a nearly 100% attack success rate while remaining highly invisible to existing defense strategies and even to human inspection. These results raise a serious alarm about the security of NLP models and call for further research. All the data and code of this paper are released at https://github.com/thunlp/BkdAtk-LWS.
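The core idea of the abstract — a backdoor triggered by a specific combination of word substitutions rather than an inserted token — can be illustrated with a toy sketch. The substitution table and the detection rule below are hypothetical stand-ins (the paper learns the substitution combinations; a real backdoored classifier is a poisoned neural model, not a rule):

```python
# Hypothetical fixed trigger: a set of synonym substitutions whose joint
# presence activates the backdoor. The actual attack learns these combinations.
TRIGGER_SUBS = {"movie": "film", "great": "terrific"}

def apply_trigger(sentence: str) -> str:
    """Poison an input by applying the trigger word substitutions."""
    return " ".join(TRIGGER_SUBS.get(w, w) for w in sentence.split())

def backdoored_predict(sentence: str) -> str:
    """Toy stand-in for a poisoned classifier: it behaves normally unless
    every substitute word of the trigger combination appears together."""
    words = set(sentence.split())
    if set(TRIGGER_SUBS.values()) <= words:
        return "positive"   # attacker-specified target label
    return "negative"       # benign behavior on clean inputs

clean = "that movie was great"
poisoned = apply_trigger(clean)  # "that film was terrific"
```

Because each substitution is a plausible synonym, the poisoned sentence stays fluent, which is what makes this trigger hard to spot for defenses and human inspectors alike.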
Pages: 4873-4883 (11 pages)
Related Papers (2 total)
  • [1] MIC: An Effective Defense Against Word-Level Textual Backdoor Attacks
    Yang, Shufan
    Li, Qianmu
    Lian, Zhichao
    Wang, Pengchuan
    Hou, Jun
    [J]. NEURAL INFORMATION PROCESSING, ICONIP 2023, PT VI, 2024, 14452 : 3 - 18
  • [2] Defense of Word-Level Adversarial Attacks via Random Substitution Encoding
    Wang, Zhaoyang
    Wang, Hongtao
    [J]. KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT (KSEM 2020), PT II, 2020, 12275 : 312 - 324