Article

Self-attention-based conditional random fields latent variables model for sequence labeling

Journal

PATTERN RECOGNITION LETTERS
Volume 145, Pages 157-164

Publisher

ELSEVIER
DOI: 10.1016/j.patrec.2021.02.008

Keywords

Latent CRF; Sequence labeling; Encoding schema; Natural language processing; VQA; Big data


Abstract
To process data like text and speech, Natural Language Processing (NLP) is a valuable tool. As one of NLP's upstream tasks, sequence labeling is a vital part of NLP, underpinning techniques such as text classification, machine translation, and sentiment analysis. In this paper, our focus is on sequence labeling, where we assign semantic labels to elements of input sequences. We present two novel frameworks, namely SA-CRFLV-I and SA-CRFLV-II, that use latent variables within random fields. These frameworks make use of an encoding schema in the form of a latent variable to capture the latent structure in the observed data. SA-CRFLV-I shows the best performance at the sentence level, whereas SA-CRFLV-II works best at the word level. In our in-depth experimental results, we compare our frameworks with 4 well-known sequence prediction methodologies on tasks that include NER, reference parsing, chunking, and POS tagging. The proposed frameworks are shown to have better performance in terms of many well-known metrics. (c) 2021 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
