Article

A divide and conquer framework for Knowledge Editing

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 279

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2023.110826

Keywords

Pre-trained language model; Knowledge Editing; Dynamic inference


This paper proposes a novel framework that corrects errors in language models through divide-and-conquer edits with parallel Editors. The findings show that existing methods often ignore conflicts among multiple edits, whereas the proposed approach learns diverse editing strategies and therefore adapts better to multiple edits.
As pre-trained language models (LMs) play an important role in various Natural Language Processing (NLP) tasks, it is becoming increasingly important to ensure that the knowledge learned by LMs is valid and correct. Unlike conventional knowledge bases, LMs implicitly memorize knowledge in their parameters, which makes it harder to correct knowledge that is incorrectly inferred or obsolete. The task of Knowledge Editing is to correct errors in language models while avoiding the expensive overhead of retraining the model from scratch. While existing methods have shown some promising results, they fail on multiple edits because they ignore the conflicts between these edits. In this paper, we propose a novel framework for divide-and-conquer edits with parallel Editors. Specifically, we design explicit and implicit multi-editor models that learn diverse editing strategies in terms of dynamic structure and dynamic parameters, respectively, which allows conflicting edit data to be resolved in an efficient end-to-end manner. Our main findings are: (i) state-of-the-art Knowledge Editing methods with multiple-editing capability, such as MEND and ENN, can hardly outperform the fine-tuning method; (ii) our proposed models outperform the fine-tuning method on the two widely used datasets for Knowledge Editing; and (iii) additional analytical experiments verify that our approach can learn diverse editing strategies and thus adapts to multiple edits better than state-of-the-art methods. © 2023 Published by Elsevier B.V.
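The abstract's central idea, dividing a batch of edits so that conflicting ones are handled by separate parallel editors, can be illustrated with a minimal sketch. The partitioning criterion below (two edits conflict if they target the same subject) and all function names are illustrative assumptions for exposition, not the paper's actual models or API:

```python
def partition_edits(edits):
    """Divide step (illustrative): greedily split a batch of edits into
    groups so that no group contains two edits with the same subject,
    a stand-in for the paper's notion of conflicting edits. Each group
    could then be handed to its own parallel editor (conquer step)."""
    groups = []
    for edit in edits:
        placed = False
        for group in groups:
            # An edit may join a group only if it conflicts with no member.
            if all(e["subject"] != edit["subject"] for e in group):
                group.append(edit)
                placed = True
                break
        if not placed:
            groups.append([edit])  # open a new group for the conflicting edit
    return groups

# Two edits on "Paris" conflict, so they land in different groups;
# the "Kyoto" edit shares a group with one of them.
edits = [
    {"subject": "Paris", "relation": "capital_of", "target": "France"},
    {"subject": "Paris", "relation": "capital_of", "target": "France (updated)"},
    {"subject": "Kyoto", "relation": "located_in", "target": "Japan"},
]
groups = partition_edits(edits)
print(len(groups))  # prints 2
```

In the paper's framework the per-group editors are learned models (with dynamic structure or dynamic parameters) rather than a greedy rule, but the routing intuition is the same: conflicts are separated before editing rather than averaged over.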

