4.6 Article

Deep Refinement: capsule network with attention mechanism-based system for text classification

Journal

NEURAL COMPUTING & APPLICATIONS
Volume 32, Issue 7, Pages 1839-1856

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s00521-019-04620-z

Keywords

Text classification; Capsule; Attention; LSTM; GRU; Neural network; NLP

Funding

  1. Key Laboratory of Intelligent Air-Ground Cooperative Control for Universities in Chongqing
  2. Key Laboratory of Industrial IoT and Networked Control, Ministry of Education, College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
  3. Hong Kong Baptist University Tier 1 Start-up Grant

Abstract

Most community question-answering systems lack a reliable mechanism for restricting inappropriate and insincere content in the text of questions. A piece of text is insincere if it asserts false claims, assumes something debatable, or takes a non-neutral or exaggerated tone toward an individual or a group. In this paper, we propose a pipeline called Deep Refinement, which combines state-of-the-art methods for information retrieval from highly sparse data, namely capsule networks and the attention mechanism. We apply the Deep Refinement pipeline to classify text into two categories, sincere and insincere. Our approach provides a system for classifying such questions in order to ensure enhanced monitoring and information quality. The dataset used to characterize what constitutes sincere and insincere text is the Quora Insincere Questions dataset. Our proposed question classification method outperforms previously used text classification methods, achieving an F1 score of 0.978.
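
The abstract describes the architecture only at a high level (recurrent encoding, attention, and a capsule-based classifier). The following PyTorch snippet is a minimal, illustrative sketch of that general technique, not the authors' Deep Refinement implementation: all layer sizes, the squash nonlinearity, and the use of a single capsule projection in place of full dynamic routing are assumptions made here for brevity. It shows a BiGRU encoder, attention pooling over the hidden states, and two class capsules whose vector lengths serve as sincere/insincere scores.

```python
# Minimal sketch of a BiGRU + attention + capsule-style text classifier.
# Illustrative only; hyperparameters and the single squash step are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(x, dim=-1, eps=1e-8):
    """Capsule squash: shrinks vector length into (0, 1) while keeping direction."""
    norm2 = (x * x).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * x / torch.sqrt(norm2 + eps)

class CapsuleAttentionClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden=64, num_caps=2, cap_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)               # additive attention scores
        self.to_caps = nn.Linear(2 * hidden, num_caps * cap_dim)
        self.num_caps, self.cap_dim = num_caps, cap_dim

    def forward(self, tokens):                              # tokens: (B, T) int64 ids
        h, _ = self.gru(self.embed(tokens))                 # (B, T, 2H)
        weights = F.softmax(self.attn(h).squeeze(-1), dim=1)
        context = (weights.unsqueeze(-1) * h).sum(dim=1)    # attention-pooled sentence vector
        caps = self.to_caps(context).view(-1, self.num_caps, self.cap_dim)
        caps = squash(caps)                                  # one capsule per class
        return caps.norm(dim=-1)                             # capsule length ~ class score

# Usage on a toy batch of two padded questions.
model = CapsuleAttentionClassifier(vocab_size=10000)
batch = torch.randint(1, 10000, (2, 20))
print(model(batch).shape)  # torch.Size([2, 2]) -> sincere vs. insincere scores
```
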
