Defending against Backdoor Attacks in Natural Language Generation

Cited by: 0
Authors
Sun, Xiaofei [1]
Li, Xiaoya [2]
Meng, Yuxian [2]
Ao, Xiang [3]
Lyu, Lingjuan [4]
Li, Jiwei [1,2]
Zhang, Tianwei [5]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Shannon AI, Beijing, Peoples R China
[3] Chinese Acad Sci, Beijing, Peoples R China
[4] Sony AI, Tokyo, Japan
[5] Nanyang Technol Univ, Singapore, Singapore
Funding
National Natural Science Foundation of China
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
The frustratingly fragile nature of neural network models makes current natural language generation (NLG) systems prone to backdoor attacks, under which they can be manipulated into generating malicious sequences that may be sexist or offensive. Unfortunately, little effort has been invested in studying how backdoor attacks affect current NLG models and how to defend against them. In this work, after giving formal definitions of backdoor attack and defense, we investigate the problem on two important NLG tasks, machine translation and dialog generation. Tailored to the inherent nature of NLG models (e.g., producing a sequence of coherent words given contexts), we design defense strategies against the attacks. We find that testing the backward probability of generating sources given targets yields effective defense against all the different types of attacks, and handles the one-to-many issue present in many NLG tasks such as dialog generation. We hope this work raises awareness of the backdoor risks concealed in deep NLG systems and inspires more future work (on both attack and defense) in this direction.
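As a rough illustration of the backward-probability defense the abstract describes, the sketch below scores each (source, generated target) pair by a length-normalized log p(source | target) under a reverse-direction model and flags pairs whose score falls below a threshold. The checkpoint name, the English-to-German forward task, and the threshold value are illustrative assumptions, not the authors' exact setup.

    # Hedged sketch of a backward-probability check for NLG backdoor defense.
    # Assumes the forward task is English->German translation; the reverse
    # direction is scored with a public German->English model. The checkpoint
    # and threshold are illustrative, not the paper's exact configuration.
    import torch
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    BACKWARD_MODEL = "Helsinki-NLP/opus-mt-de-en"  # assumed reverse model
    tokenizer = AutoTokenizer.from_pretrained(BACKWARD_MODEL)
    model = AutoModelForSeq2SeqLM.from_pretrained(BACKWARD_MODEL).eval()

    def backward_log_prob(source: str, target: str) -> float:
        """Length-normalized log p(source | target) under the backward model."""
        enc = tokenizer(target, return_tensors="pt")
        labels = tokenizer(text_target=source, return_tensors="pt").input_ids
        with torch.no_grad():
            # .loss is the mean cross-entropy per label token, so its
            # negation is an average per-token log-probability.
            out = model(**enc, labels=labels)
        return -out.loss.item()

    def looks_poisoned(source: str, target: str, threshold: float = -4.0) -> bool:
        # The threshold is hypothetical; in practice it would be calibrated
        # on clean validation pairs so benign outputs rarely fall below it.
        return backward_log_prob(source, target) < threshold

A triggered output that an attacker forced onto the model tends to explain its source poorly in the reverse direction, so it receives a low backward score; and because the check conditions on the target rather than the source, it tolerates one-to-many mappings such as those in dialog generation.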
Pages: 5257-5265 (9 pages)
Related papers (showing 31-40 of 50)
  • [31] Defending Deep Learning Based Anomaly Detection Systems Against White-Box Adversarial Examples and Backdoor Attacks
    Alrawashdeh, Khaled
    Goldsmith, Stephen
    PROCEEDINGS OF THE 2020 IEEE INTERNATIONAL SYMPOSIUM ON TECHNOLOGY AND SOCIETY (ISTAS), 2021, : 294 - 301
  • [32] VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models
    Liang, Jiawei
    Liang, Siyuan
    Luo, Man
    Liu, Aishan
    Han, Dongchen
    Chang, Ee-Chien
    Cao, Xiaochun
    arXiv preprint
  • [33] VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models
    Liang, Jiawei
    Liang, Siyuan
    Liu, Aishan
    Cao, Xiaochun
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025
  • [34] Backdoor Learning of Language Models in Natural Language Processing
    University of Michigan
  • [35] Detecting Backdoor Attacks against Point Cloud Classifiers
    Xiang, Zhen
    Miller, David J.
    Chen, Siheng
    Li, Xi
    Kesidis, George
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3159 - 3163
  • [36] A defense method against backdoor attacks on neural networks
    Kaviani, Sara
    Shamshiri, Samaneh
    Sohn, Insoo
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 213
  • [37] Countermeasures Against Backdoor Attacks Towards Malware Detectors
    Narisada, Shintaro
    Matsumoto, Yuki
    Hidano, Seira
    Uchibayashi, Toshihiro
    Suganuma, Takuo
    Hiji, Masahiro
    Kiyomoto, Shinsaku
    CRYPTOLOGY AND NETWORK SECURITY, CANS 2021, 2021, 13099 : 295 - 314
  • [38] FLSAD: Defending Backdoor Attacks in Federated Learning via Self-Attention Distillation
    Chen, Lucheng
    Liu, Xiaoshuang
    Wang, Ailing
    Zhai, Weiwei
    Cheng, Xiang
    SYMMETRY-BASEL, 2024, 16 (11)
  • [39] ADFL: Defending backdoor attacks in federated learning via adversarial distillation
    Zhu, Chengcheng
    Zhang, Jiale
    Sun, Xiaobing
    Chen, Bing
    Meng, Weizhi
    COMPUTERS & SECURITY, 2023, 132
  • [40] Backdoor Attacks against Voice Recognition Systems: A Survey
    Yan, Baochen
    Lan, Jiahe
    Yan, Zheng
    ACM COMPUTING SURVEYS, 2025, 57 (03)