Article

Sentence transition matrix: An efficient approach that preserves sentence semantics

Journal

Computer Speech and Language
Volume 71, Article 101266

Publisher

Academic Press Ltd - Elsevier Science Ltd
DOI: 10.1016/j.csl.2021.101266

Keywords

Sentence embedding; Sentence semantics; Transition matrix; Paraphrase; Natural language processing

Funding

  1. National Research Foundation of Korea (NRF) - Korea government (MSIT) [NRF-2019R1F1A1060338]
  2. Korea Institute for Advancement of Technology (KIAT) - Korea government (MOTIE) [P0008691]

Abstract

Sentence embedding is an influential research topic in natural language processing (NLP). Generating sentence vectors that reflect the intrinsic meaning of sentences is crucial for improving performance in various NLP tasks. Therefore, numerous supervised and unsupervised sentence-representation approaches have been proposed since the advent of the distributed representation of words. These approaches have been evaluated on semantic textual similarity (STS) tasks designed to measure the degree of semantic information preservation; neural network-based supervised embedding models typically deliver state-of-the-art performance. However, these models have numerous learnable parameters and thus require large amounts of specific types of labeled training data. Pretrained language model-based approaches, which have become a predominant trend in the NLP field, alleviate this issue to some extent; however, it is still necessary to collect sufficient labeled data for the fine-tuning process. Herein, we propose an efficient approach that learns a transition matrix tuning a sentence embedding vector to capture the latent semantic meaning. Our proposed method has two practical advantages: (1) it can be applied to any sentence embedding method, and (2) it can deliver robust performance in STS tasks with only a few training examples.
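
The method's central idea, as the abstract states, is to learn a single linear transition matrix on top of frozen base embeddings rather than fine-tune the encoder itself. The Python (PyTorch) sketch below is a minimal illustration of that setup, not the authors' published implementation: the training objective (a mean-squared-error fit of the cosine similarity of transformed embedding pairs to gold STS scores), the identity initialization, and the random placeholder embeddings are all assumptions made so the example runs end to end.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Placeholders for frozen base embeddings. In practice u and v would come
# from any off-the-shelf sentence encoder; random vectors stand in here
# purely to keep the sketch self-contained.
dim, n_pairs = 300, 32
u = torch.randn(n_pairs, dim)   # embeddings of the first sentence in each pair
v = torch.randn(n_pairs, dim)   # embeddings of the second sentence in each pair
gold = torch.rand(n_pairs)      # gold STS scores, rescaled to [0, 1]

# The transition matrix W, initialized to the identity so training starts
# from the unmodified base embeddings.
W = torch.eye(dim, requires_grad=True)

opt = torch.optim.Adam([W], lr=1e-3)
for step in range(200):
    opt.zero_grad()
    tu, tv = u @ W.T, v @ W.T                 # apply the same W to both sides
    sim = F.cosine_similarity(tu, tv)         # similarity of tuned embeddings
    loss = F.mse_loss(sim, gold)              # assumed objective: fit gold scores
    loss.backward()
    opt.step()

# At inference time, any base sentence embedding x is tuned as x @ W.T.

Because only the dim x dim matrix W is trained while the base encoder stays frozen, the number of learnable parameters remains small, which is consistent with the abstract's claim of robust STS performance from only a few training examples.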
