Defending against Backdoor Attacks in Natural Language Generation

Cited by: 0
Authors
Sun, Xiaofei [1 ]
Li, Xiaoya [2 ]
Meng, Yuxian [2 ]
Ao, Xiang [3 ]
Lyu, Lingjuan [4 ]
Li, Jiwei [1 ,2 ]
Zhang, Tianwei [5 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Shannon AI, Beijing, Peoples R China
[3] Chinese Acad Sci, Beijing, Peoples R China
[4] Sony AI, Tokyo, Japan
[5] Nanyang Technol Univ, Singapore, Singapore
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The frustratingly fragile nature of neural network models makes current natural language generation (NLG) systems prone to backdoor attacks, causing them to generate malicious sequences that could be sexist or offensive. Unfortunately, little effort has been invested in studying how backdoor attacks can affect current NLG models and how to defend against these attacks. In this work, by giving a formal definition of backdoor attack and defense, we investigate this problem on two important NLG tasks, machine translation and dialog generation. Tailored to the inherent nature of NLG models (e.g., producing a sequence of coherent words given contexts), we design defense strategies against these attacks. We find that testing the backward probability of generating sources given targets yields effective defense against all the different types of attacks, and is able to handle the one-to-many issue present in many NLG tasks such as dialog generation. We hope that this work can raise awareness of the backdoor risks concealed in deep NLG systems and inspire more future work (both attack and defense) in this direction.
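The sketch below illustrates the backward-probability idea summarized in the abstract: score each (source, target) pair with a reverse-direction (target-to-source) model and flag pairs whose score is anomalously low as potentially backdoored. This is a minimal Python illustration, not the authors' released implementation; the backward model name, the length-normalized scoring, and the threshold are illustrative assumptions.

```python
# Minimal sketch of a backward-probability defense (illustrative, not the paper's code).
# Assumption: the forward NLG system translates en -> de, so we score with a de -> en
# MT model, i.e. how likely the source is given the produced target.
import torch
from transformers import MarianMTModel, MarianTokenizer

BACKWARD_MODEL = "Helsinki-NLP/opus-mt-de-en"  # assumed target->source model

tokenizer = MarianTokenizer.from_pretrained(BACKWARD_MODEL)
model = MarianMTModel.from_pretrained(BACKWARD_MODEL).eval()

@torch.no_grad()
def backward_log_prob(source: str, target: str) -> float:
    """Length-normalized log p(source | target) under the backward model."""
    enc = tokenizer(target, return_tensors="pt")                       # condition on target
    labels = tokenizer(text_target=source, return_tensors="pt").input_ids
    out = model(**enc, labels=labels)                                  # cross-entropy over source tokens
    return -out.loss.item()                                            # average log-prob per token

def flag_suspicious(pairs, threshold=-4.0):
    """Flag (source, target) pairs whose backward score falls below a tunable threshold."""
    return [(s, t, score) for s, t in pairs
            if (score := backward_log_prob(s, t)) < threshold]
```

In practice the threshold would be calibrated on clean validation pairs; scoring the source given the target sidesteps the one-to-many problem, since a poisoned target typically explains none of the plausible sources well.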
Pages: 5257-5265
Number of pages: 9
Related Papers
50 records in total
  • [41] Efficient and Secure Federated Learning Against Backdoor Attacks
    Miao, Yinbin
    Xie, Rongpeng
    Li, Xinghua
    Liu, Zhiquan
    Choo, Kim-Kwang Raymond
    Deng, Robert H.
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (05) : 4619 - 4636
  • [42] BDDR: An Effective Defense Against Textual Backdoor Attacks
    Shao, Kun
    Yang, Junan
    Ai, Yang
    Liu, Hui
    Zhang, Yu
    COMPUTERS & SECURITY, 2021, 110
  • [44] Stealthy Targeted Backdoor Attacks Against Image Captioning
    Fan, Wenshu
    Li, Hongwei
    Jiang, Wenbo
    Hao, Meng
    Yu, Shui
    Zhang, Xiao
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 5655 - 5667
  • [45] Dynamic Backdoor Attacks Against Machine Learning Models
    Salem, Ahmed
    Wen, Rui
    Backes, Michael
    Ma, Shiqing
    Zhang, Yang
    2022 IEEE 7TH EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY (EUROS&P 2022), 2022, : 703 - 718
  • [46] Countermeasure against Backdoor Attacks using Epistemic Classifiers
    Yang, Zhaoyuan
    Virani, Nurali
    Iyer, Naresh S.
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS II, 2020, 11413
  • [47] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork
    Wang, Haotao
    Hong, Junyuan
    Zhang, Aston
    Zhou, Jiayu
    Wang, Zhangyang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [48] VILLAIN: Backdoor Attacks Against Vertical Split Learning
    Bai, Yijie
    Chen, Yanjiao
    Zhang, Hanlei
    Xu, Wenyuan
    Weng, Haiqin
    Goodman, Dou
    PROCEEDINGS OF THE 32ND USENIX SECURITY SYMPOSIUM, 2023, : 2743 - 2760
  • [49] Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks
    Yang, Wenhan
    Gao, Jingdong
    Mirzasoleiman, Baharan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [50] CBAs: Character-level Backdoor Attacks against Chinese Pre-trained Language Models
    He, Xinyu
    Hao, Fengrui
    Gu, Tianlong
    Chang, Liang
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2024, 27 (03)