Article

A Brief Overview of Universal Sentence Representation Methods: A Linguistic View

Journal

ACM Computing Surveys
Volume 55, Issue 3

Publisher

Association for Computing Machinery
DOI: 10.1145/3482853

Keywords

Sentence embedding; universal representation; deep learning; representation learning


Summary

This survey summarizes the current universal sentence-embedding methods, categorizes them into four groups, and analyzes their performance. Different training schemes for sentence embeddings lead to different performance patterns, and the survey proposes strategies borrowed from other models as well as future research directions.

Abstract
How to encode the semantic information of a sentence into a computable numerical embedding is a fundamental problem in natural language processing. An informative universal sentence embedding can greatly promote subsequent natural language processing tasks. However, unlike universal word embeddings, a widely accepted general-purpose sentence-embedding technique has yet to be developed. This survey summarizes the current universal sentence-embedding methods, categorizes them into four groups from a linguistic view, and analyzes their reported performance. Sentence embeddings trained from words in a bottom-up manner are observed to exhibit nearly opposite performance patterns in downstream tasks compared to those trained from logical relationships between sentences. By comparing differences in training schemes within and between groups, we analyze the likely underlying reasons for these performance patterns. We additionally collect strategies for handling sentences from other models and propose potentially inspiring future research directions.
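
For illustration, the simplest member of the bottom-up family described in the abstract is mean-pooling of pre-trained word vectors. The following minimal sketch assumes a hypothetical word_vectors lookup table with toy 4-dimensional entries standing in for real pre-trained vectors such as GloVe; it is an illustrative baseline of the bottom-up scheme, not a method proposed in the survey.

    import numpy as np

    # Hypothetical toy vocabulary; in practice these would be pre-trained
    # word vectors (e.g., GloVe or word2vec) with hundreds of dimensions.
    word_vectors = {
        "the":    np.array([0.1, 0.0, 0.2, 0.1]),
        "cat":    np.array([0.7, 0.3, 0.1, 0.5]),
        "sleeps": np.array([0.2, 0.8, 0.4, 0.0]),
    }

    def sentence_embedding(sentence: str) -> np.ndarray:
        """Bottom-up sentence embedding: mean-pool the vectors of all
        in-vocabulary tokens."""
        tokens = sentence.lower().split()
        vectors = [word_vectors[t] for t in tokens if t in word_vectors]
        if not vectors:
            # Fall back to a zero vector for fully out-of-vocabulary input.
            return np.zeros(next(iter(word_vectors.values())).shape)
        return np.mean(vectors, axis=0)

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity, the usual comparison metric for embeddings."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    emb = sentence_embedding("the cat sleeps")
    print(emb, cosine_similarity(emb, sentence_embedding("the cat")))

Methods in the other groups instead train a sentence encoder end-to-end on inter-sentence signals, such as entailment-labeled sentence pairs, which the abstract reports as yielding nearly opposite downstream performance patterns.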

