Article

Comparison of text preprocessing methods

Journal

NATURAL LANGUAGE ENGINEERING
Volume 29, Issue 3, Pages 509-553

Publisher

CAMBRIDGE UNIV PRESS
DOI: 10.1017/S1351324922000213

Keywords

Data preprocessing; Parsing; Text data mining

This article discusses the importance of text preprocessing and its direct impact on the results of natural language processing applications. The authors examine common text preprocessing methods and give examples of special cases that require customized preprocessing, and the article is intended as a guideline for selecting and fine-tuning preprocessing methods.
Text preprocessing is not only an essential step in preparing a corpus for modeling but also a key factor that directly affects the results of natural language processing (NLP) applications. For instance, precise tokenization increases the accuracy of part-of-speech (POS) tagging, and retaining multiword expressions improves reasoning and machine translation. A text corpus needs to be appropriately preprocessed before it can serve as input to computational models. The preprocessing requirements depend on both the nature of the corpus and the NLP application itself, that is, on what researchers want to achieve by analyzing the data. Conventional text preprocessing practices generally suffice, but there are situations where preprocessing must be customized to obtain better analysis results. Hence, we discuss the pros and cons of several common text preprocessing methods: removing formatting, tokenization, text normalization, handling punctuation, removing stopwords, stemming and lemmatization, n-gramming, and identifying multiword expressions. We then give examples of text datasets that require special preprocessing and describe how previous researchers handled the challenge. We expect this article to serve as a starting guideline for selecting and fine-tuning text preprocessing methods.
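To make the conventional pipeline named in the abstract concrete, the following is a minimal sketch in Python using NLTK. The tool choice, the sample sentence, and the multiword-expression list are assumptions added for illustration and do not come from the article; the sketch simply chains formatting removal, tokenization, normalization, punctuation handling, stopword removal, stemming/lemmatization, n-gramming, and a toy multiword-expression merge.

import re

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tokenize import MWETokenizer
from nltk.util import ngrams

# One-time resource downloads; whether "punkt" or "punkt_tab" is used depends
# on the installed NLTK version, so both are requested here.
for resource in ("punkt", "punkt_tab", "stopwords", "wordnet"):
    nltk.download(resource, quiet=True)


def preprocess(text, lemmatize=True):
    """Run the conventional preprocessing steps on a single document."""
    # Removing formatting and text normalization: drop HTML-like tags, lowercase.
    text = re.sub(r"<[^>]+>", " ", text).lower()

    # Tokenization: punctuation becomes separate tokens.
    tokens = nltk.word_tokenize(text)

    # Handling punctuation: one simple policy is to keep alphabetic tokens only.
    tokens = [t for t in tokens if t.isalpha()]

    # Removing stopwords.
    stop = set(stopwords.words("english"))
    tokens = [t for t in tokens if t not in stop]

    # Stemming and lemmatization are alternatives; default to lemmatization.
    if lemmatize:
        wnl = WordNetLemmatizer()
        return [wnl.lemmatize(t) for t in tokens]
    stemmer = PorterStemmer()
    return [stemmer.stem(t) for t in tokens]


# Invented sample input; "new york" stands in for a multiword expression.
doc = "The <b>New York</b> taggers were trained on large newswire corpora."
tokens = preprocess(doc)

# Identifying multiword expressions: merge a known expression into one token.
tokens = MWETokenizer([("new", "york")], separator="_").tokenize(tokens)

# N-gramming: build bigrams over the preprocessed tokens.
bigrams = list(ngrams(tokens, 2))
print(tokens)
print(bigrams)

Note that the ordering of steps interacts: merging multiword expressions after stopword removal, as above, would miss expressions that contain stopwords, which illustrates why the choice and order of preprocessing steps need to be tuned per corpus and application.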
