Article

A divide and conquer framework for Knowledge Editing

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 279, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2023.110826

Keywords

Pre-trained language model; Knowledge Editing; Dynamic interference


This paper proposes a novel framework to address the challenge of correcting errors in language models through divide-and-conquer edits with parallel Editors. Research findings reveal that existing methods often ignore conflicts in multi-edits, whereas our approach can learn diverse editing strategies, resulting in better adaptation to multiple edits.
As pre-trained language models (LMs) play an important role in various Natural Language Processing (NLP) tasks, it is becoming increasingly important to ensure that the knowledge learned by LMs is valid and correct. Unlike conventional knowledge bases, LMs implicitly memorize knowledge in their parameters, which makes it harder to correct knowledge that is incorrectly inferred or obsolete. The task of Knowledge Editing is to correct errors in language models while avoiding the expensive overhead of retraining the model from scratch. While existing methods have shown some promising results, they fail on multi-edits because they ignore the conflicts between these edits. In this paper, we propose a novel framework to divide-and-conquer edits with parallel Editors. Specifically, we design explicit and implicit multi-editor models to learn diverse editing strategies in terms of dynamic structure and dynamic parameters respectively, which allows conflicting edit data to be resolved in an efficient end-to-end manner. Our main findings are: (i) state-of-the-art Knowledge Editing methods with multi-editing capability, such as MEND and ENN, can hardly outperform the fine-tuning method; (ii) our proposed models outperform the fine-tuning method on two widely used datasets for Knowledge Editing; (iii) additional analytical experiments verify that our approach can learn diverse editing strategies, thus adapting to multiple edits better than state-of-the-art methods. © 2023 Published by Elsevier B.V.
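The divide-and-conquer idea in the abstract can be illustrated with a minimal sketch: partition edit requests into groups so that potentially conflicting edits (here, crudely approximated as edits touching the same subject) are routed to the same editor, then let each editor process its group independently. All names (`divide`, `conquer`, `make_editor`) and the hash-based routing rule are hypothetical simplifications for illustration; the paper's explicit and implicit multi-editor models learn the routing and editing strategies end-to-end rather than using a fixed heuristic.

```python
from collections import defaultdict

def divide(edits, num_editors):
    """Partition edit requests among editors. Edits on the same subject
    (a likely source of conflict) always land in the same group.
    The hash-based key is a stand-in for a learned routing strategy."""
    groups = defaultdict(list)
    for edit in edits:
        groups[hash(edit["subject"]) % num_editors].append(edit)
    return groups

def conquer(groups, editors):
    """Apply each editor to its own group of edits independently
    (the groups could be processed in parallel)."""
    return {idx: editors[idx](group) for idx, group in groups.items()}

def make_editor(name):
    """Hypothetical editor: records which facts it would rewrite instead
    of actually updating model parameters."""
    def editor(group):
        return [(name, e["subject"], e["new_object"]) for e in group]
    return editor

editors = {i: make_editor(f"editor_{i}") for i in range(2)}
edits = [
    {"subject": "Paris", "new_object": "capital of France"},
    {"subject": "Paris", "new_object": "largest city of France"},
    {"subject": "Berlin", "new_object": "capital of Germany"},
]
results = conquer(divide(edits, 2), editors)
```

In this toy version, both edits about "Paris" are guaranteed to reach the same editor, so any conflict between them is handled within one editing strategy rather than being silently averaged across independent updates, which is the failure mode the abstract attributes to prior multi-edit methods.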

