Defending LLMs against Jailbreaking Attacks via Backtranslation

Cited: 0
Authors
Wang, Yihan [1 ]
Shi, Zhouxing [1 ]
Bai, Andrew [1 ]
Hsieh, Cho-Jui [1 ]
Affiliations
[1] UCLA, Los Angeles, CA 90095 USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Although many large language models (LLMs) have been trained to refuse harmful requests, they are still vulnerable to jailbreaking attacks, which rewrite the original prompt to conceal its harmful intent. In this paper, we propose a new method for defending LLMs against jailbreaking attacks via "backtranslation". Specifically, given an initial response generated by the target LLM from an input prompt, our backtranslation prompts a language model to infer an input prompt that could have led to that response. The inferred prompt, called the backtranslated prompt, tends to reveal the actual intent of the original prompt, since it is generated from the LLM's response and is not directly manipulated by the attacker. We then run the target LLM again on the backtranslated prompt, and we refuse the original prompt if the model refuses the backtranslated prompt. We explain why the proposed defense offers advantages in both effectiveness and efficiency. We empirically demonstrate that our defense significantly outperforms the baselines, particularly in cases that are hard for the baselines, and that it has little impact on generation quality for benign input prompts. Our implementation is based on our library for LLM jailbreaking defense algorithms at https://github.com/YihanWang617/llm-jailbreaking-defense, and the code for reproducing our experiments is available at https://github.com/YihanWang617/LLM-Jailbreaking-Defense-Backtranslation.
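The abstract describes the defense procedurally; the sketch below illustrates that control flow in Python. It is a minimal illustration under stated assumptions, not the authors' implementation: `target_llm`, `backtranslate_llm`, and `is_refusal` are hypothetical callables standing in for the target model, the backtranslation model, and a refusal detector (the actual code lives in the repositories linked above).

```python
# Minimal sketch of the backtranslation defense described in the abstract.
# Assumptions: `target_llm` and `backtranslate_llm` map a prompt string to a
# response string; `is_refusal` detects refusal responses. All three are
# hypothetical stand-ins, not the authors' API.

REFUSAL_MESSAGE = "I'm sorry, but I cannot assist with that request."

def defend_with_backtranslation(prompt, target_llm, backtranslate_llm, is_refusal):
    # Step 1: obtain the target LLM's initial response to the input prompt.
    response = target_llm(prompt)
    if is_refusal(response):
        return REFUSAL_MESSAGE  # the model already refuses on its own

    # Step 2: "backtranslate" the response, i.e., prompt a language model to
    # infer an input prompt that could have led to this response. Because the
    # inferred prompt is derived from the response rather than supplied by the
    # attacker, it tends to expose the request's actual intent.
    backtranslated_prompt = backtranslate_llm(
        "Please infer the user's request that the following response answers:\n"
        + response
    )

    # Step 3: rerun the target LLM on the backtranslated prompt; if the model
    # refuses it, refuse the original prompt as well.
    if is_refusal(target_llm(backtranslated_prompt)):
        return REFUSAL_MESSAGE

    # Benign prompts fall through unchanged, preserving generation quality.
    return response
```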
Pages: 16031-16046
Page count: 16
Related Papers
50 items in total
  • [31] Defending Distributed Systems Against Adversarial Attacks
    Su L.
    Performance Evaluation Review, 2020, 47 (03): 24 - 27
  • [32] Defending Against Attacks on Main Memory Persistence
    Enck, William
    Butler, Kevin
    Richardson, Thomas
    McDaniel, Patrick
    Smith, Adam
    24TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, PROCEEDINGS, 2008: 65 - 74
  • [33] Defending against Sybil Attacks in Vehicular Platoons
    Santhosh, Jesty
    Sankaran, Sriram
    13TH IEEE INTERNATIONAL CONFERENCE ON ADVANCED NETWORKS AND TELECOMMUNICATION SYSTEMS (IEEE ANTS), 2019
  • [34] DefenseVGAE: Defending Against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder
    Zhang, Ao
    Ma, Jinwen
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT IV, ICIC 2024, 2024, 14865 : 313 - 324
  • [35] DefenseVGAE: Defending against adversarial attacks on graph data via a variational graph autoencoder
    Department of Information Science, School of Mathematical Sciences, Peking University, Beijing 100871, China
    arXiv, 1600
  • [36] Defending against gradient inversion attacks in federated learning via statistical machine unlearning
    Gao, Kun
    Zhu, Tianqing
    Ye, Dayong
    Zhou, Wanlei
    KNOWLEDGE-BASED SYSTEMS, 2024, 299
  • [37] Occluded Person Re-Identification via Defending Against Attacks From Obstacles
    Wang, Shujuan
    Liu, Run
    Li, Huafeng
    Qi, Guanqiu
    Yu, Zhengtao
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 147 - 161
  • [38] Defending Against Label-Only Attacks via Meta-Reinforcement Learning
    Ye, Dayong
    Zhu, Tianqing
    Gao, Kun
    Zhou, Wanlei
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 3295 - 3308
  • [39] One Parameter Defense-Defending Against Data Inference Attacks via Differential Privacy
    Ye, Dayong
    Shen, Sheng
    Zhu, Tianqing
    Liu, Bo
    Zhou, Wanlei
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 1466 - 1480
  • [40] Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing
    Zhao, Wei
    Li, Zhe
    Li, Yige
    Zhang, Ye
    Sun, Jun
    EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2024, 2024, : 5094 - 5109