Article

A lightweight approach based on prompt for few-shot relation extraction

Journal

COMPUTER SPEECH AND LANGUAGE
Volume 84

Publisher

ACADEMIC PRESS LTD- ELSEVIER SCIENCE LTD
DOI: 10.1016/j.csl.2023.101580

Keywords

Relation classification; Few-shot relation extraction; Prompt-tuning; Language model; Prototypical network


This paper introduces a lightweight approach to address the problem of few-shot relation extraction, using prompt-learning to assist in fine-tuning the model and designing an enhanced fusion module to fuse relation information and original prototype. Experimental results show that the proposed method achieves state-of-the-art performance on common datasets.
Few-shot relation extraction (FSRE) aims to predict the relation between two entities in a sentence using only a few annotated samples. Many works solve the FSRE problem by training complex models with a huge number of parameters, which results in longer processing times to obtain results. Some recent works focus on introducing relation information into prototypical networks in various ways. However, most of these methods obtain entity and relation representations by fine-tuning large pre-trained language models. This implies that a copy of the complete pre-trained model needs to be saved after fine-tuning for each specific task, leading to a shortage of computing and space resources. To address this problem, in this paper, we introduce a lightweight approach that utilizes prompt-learning to assist in fine-tuning the model by adjusting fewer parameters. To obtain a better relation prototype, we design a new enhanced fusion module to fuse relation information with the original prototype. We conduct extensive experiments on the common FSRE datasets FewRel 1.0 and FewRel 2.0 to verify the advantages of our method; the results show that our model achieves state-of-the-art performance.
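The abstract's core idea can be illustrated with a minimal sketch: in a prototypical network, each relation's prototype is the mean of its support-set embeddings, and the paper's fusion module combines that prototype with a relation-information embedding before nearest-prototype classification. The gated-sum fusion below is a hypothetical stand-in for the paper's enhanced fusion module (the exact architecture is not specified in the abstract); all embeddings here are toy values.

```python
import math

def prototype(support_embeddings):
    """Class prototype = element-wise mean of that class's support embeddings."""
    n = len(support_embeddings)
    dim = len(support_embeddings[0])
    return [sum(v[i] for v in support_embeddings) / n for i in range(dim)]

def fuse(proto, rel_emb):
    """Hypothetical gated fusion of a prototype with a relation embedding:
    gate = sigmoid(proto . rel_emb); output = gate*proto + (1-gate)*rel_emb."""
    dot = sum(p * r for p, r in zip(proto, rel_emb))
    gate = 1.0 / (1.0 + math.exp(-dot))
    return [gate * p + (1.0 - gate) * r for p, r in zip(proto, rel_emb)]

def classify(query, fused_protos):
    """Predict the relation whose fused prototype is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(fused_protos)), key=lambda c: dist(query, fused_protos[c]))

# Toy 2-way 2-shot episode with 3-dimensional embeddings (made-up numbers).
support = {0: [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0]],
           1: [[0.0, 1.0, 0.0], [0.1, 0.9, 0.0]]}
relation_info = {0: [1.0, 0.0, 0.2], 1: [0.0, 1.0, 0.2]}
fused = [fuse(prototype(support[c]), relation_info[c]) for c in (0, 1)]
print(classify([0.8, 0.2, 0.0], fused))  # query closest to relation 0
```

In the paper's setting, the support and query embeddings would come from a prompt-tuned language model where only a small number of prompt parameters are updated, leaving the pre-trained backbone frozen; that is what makes the approach lightweight compared with full fine-tuning.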

Reviews

Primary Rating

4.5