Detoxifying Large Language Models via Knowledge Editing

Times Cited: 0
Authors
Wang, Mengru [1 ]
Zhang, Ningyu [1 ,6 ]
Xu, Ziwen [1 ]
Xi, Zekun [1 ]
Deng, Shumin [3 ]
Yao, Yunzhi [1 ]
Zhang, Qishen [2 ]
Yang, Linyi [4 ]
Wang, Jindong [5 ]
Chen, Huajun [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Ant Grp, Hangzhou, Peoples R China
[3] Natl Univ Singapore, NUS NCS Joint Lab, Singapore, Singapore
[4] Westlake Univ, Hangzhou, Peoples R China
[5] Microsoft Res Asia, Beijing, Peoples R China
[6] Southeast Univ, Key Lab New Generat Artificial Intelligence Techn, Minist Educ, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China
DOI: Not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
This paper investigates using knowledge editing techniques to detoxify Large Language Models (LLMs). We construct a benchmark, SafeEdit, which covers nine unsafe categories with various powerful attack prompts and provides comprehensive metrics for systematic evaluation. We conduct experiments with several knowledge editing approaches, indicating that knowledge editing has the potential to efficiently detoxify LLMs with limited impact on general performance. We then propose a simple yet effective baseline, dubbed Detoxifying with Intraoperative Neural Monitoring (DINM), which diminishes the toxicity of LLMs within a few tuning steps using only a single instance. We further provide an in-depth analysis of the internal mechanisms of various detoxifying approaches, demonstrating that previous methods such as SFT and DPO may merely suppress the activations of toxic parameters, whereas DINM mitigates the toxicity of the toxic parameters themselves to a certain extent, making permanent adjustments. We hope these insights shed light on future work on developing detoxifying approaches and on the underlying knowledge mechanisms of LLMs.
Pages: 3093-3118
Page count: 26