A divide and conquer framework for Knowledge Editing
Cited by: 6
Authors:
Han, Xiaoqi [1]; Li, Ru [1,2]; Li, Xiaoli [3]; Pan, Jeff Z. [4]
Affiliations:
[1] Shanxi Univ, Sch Comp & Informat Technol, Taiyuan, Peoples R China
[2] Shanxi Univ, Key Lab Computat Intelligence & Chinese Informat P, Minist Educ, Taiyuan, Peoples R China
[3] ASTAR, Inst Infocomm Res, Singapore, Singapore
[4] Univ Edinburgh, Sch Informat, Edinburgh, Scotland
Funding:
National Natural Science Foundation of China;
Keywords:
Pre-trained language model;
Knowledge Editing;
Dynamic inference;
DOI:
10.1016/j.knosys.2023.110826
Chinese Library Classification (CLC):
TP18 [Artificial Intelligence Theory];
Discipline codes:
081104 ;
0812 ;
0835 ;
1405 ;
Abstract:
As pre-trained language models (LMs) play an important role in various Natural Language Processing (NLP) tasks, it is becoming increasingly important to ensure that the knowledge learned by LMs is valid and correct. Unlike conventional knowledge bases, LMs implicitly memorize knowledge in their parameters, which makes incorrectly inferred or obsolete knowledge harder to correct. The task of Knowledge Editing is to correct errors in language models while avoiding the expensive overhead of retraining the model from scratch. While existing methods have shown some promising results, they fail on multiple edits because they ignore the conflicts between those edits. In this paper, we propose a novel framework that divides and conquers edits with parallel editors. Specifically, we design explicit and implicit multi-editor models to learn diverse editing strategies in terms of dynamic structure and dynamic parameters, respectively, which allows conflicting edit data to be handled in an efficient end-to-end manner. Our main findings are: (i) state-of-the-art Knowledge Editing methods with multiple-editing capability, such as MEND and ENN, can hardly outperform the fine-tuning method; (ii) our proposed models outperform the fine-tuning method on two widely used Knowledge Editing datasets; (iii) additional analytical experiments verify that our approach learns diverse editing strategies and thus adapts to multiple editing better than state-of-the-art methods. © 2023 Published by Elsevier B.V.
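The abstract does not specify how edits are partitioned among the parallel editors. As a rough illustration of the divide-and-conquer idea only (the `Edit` record, the conflict key, and the round-robin assignment below are hypothetical simplifications, not the paper's actual method), one could group edit requests that target the same fact and spread each conflicting group across different editors:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Edit:
    subject: str     # entity whose fact is being rewritten
    relation: str    # relation being edited
    new_object: str  # desired post-edit answer


def assign_to_editors(edits, num_editors=2):
    """Partition a batch of edits among parallel editors so that
    edits touching the same (subject, relation) pair -- the ones
    most likely to conflict -- never land in the same editor."""
    groups = defaultdict(list)
    for e in edits:
        groups[(e.subject, e.relation)].append(e)

    batches = [[] for _ in range(num_editors)]
    for conflicting in groups.values():
        # round-robin within each conflict group separates its members
        for i, e in enumerate(conflicting):
            batches[i % num_editors].append(e)
    return batches
```

Each batch would then be handled by its own editor model; the paper's explicit/implicit multi-editor variants presumably learn this routing rather than using a fixed rule like the one sketched here.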
Pages: 13