Article

Unsupervised statistical text simplification using pre-trained language modeling for initialization

Journal

FRONTIERS OF COMPUTER SCIENCE
Volume 17, Issue 1

Publisher

HIGHER EDUCATION PRESS
DOI: 10.1007/s11704-022-1244-0

Keywords

text simplification; pre-trained language modeling; BERT; word embeddings

Abstract

This paper proposes an unsupervised statistical text simplification method that uses the pre-trained language model BERT for initialization. Experimental results show that the method outperforms even some supervised baselines.

Unsupervised text simplification has attracted much attention due to the scarcity of high-quality parallel text simplification corpora. Recently, an unsupervised statistical text simplification method based on a phrase-based machine translation system (UnsupPBMT) achieved good performance; it initializes its phrase tables with similar words obtained through word embedding modeling. Since word embedding modeling captures only the relatedness between words, the phrase tables in UnsupPBMT contain many related but dissimilar words. In this paper, we propose an unsupervised statistical text simplification method that uses the pre-trained language model BERT for initialization. Specifically, we use BERT as a general linguistic knowledge base to predict similar words. Experimental results show that our method outperforms state-of-the-art unsupervised text simplification methods on three benchmarks, and even outperforms some supervised baselines.
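The core idea, replacing context-free embedding neighbors with context-aware masked-word predictions from BERT, can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: the model checkpoint (bert-base-uncased), the example sentence, and the top_k cutoff are assumptions, and the paper's actual pipeline feeds such candidates into phrase-table initialization for a PBMT system.

```python
# Minimal sketch (not the authors' code): use BERT's masked-LM head to
# propose context-aware substitutes for a target word, i.e., the kind of
# "similar word" prediction described in the abstract.
# Assumption: the Hugging Face `transformers` package is installed and
# `bert-base-uncased` stands in for whichever checkpoint the paper used.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

sentence = "The new rules will complicate the approval process."
target = "complicate"

# Mask the target so BERT predicts replacements that fit this context.
# Static word embeddings would instead return context-free neighbors,
# which are often related but not substitutable in the sentence.
masked = sentence.replace(target, fill_mask.tokenizer.mask_token, 1)

for candidate in fill_mask(masked, top_k=10):
    print(f"{candidate['token_str']:>12}  p={candidate['score']:.3f}")
```

A full simplification system would presumably filter such candidates further (for example, by frequency or simplicity criteria) before adding them to the phrase tables; that step is not shown here.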
