4.6 Article

Deep Refinement: capsule network with attention mechanism-based system for text classification

Journal

Neural Computing & Applications
Volume 32, Issue 7, Pages 1839-1856

Publisher

Springer London Ltd
DOI: 10.1007/s00521-019-04620-z

Keywords

Text classification; Capsule; Attention; LSTM; GRU; Neural network; NLP

Funding

  1. Key Laboratory of Intelligent Air-Ground Cooperative Control for Universities in Chongqing
  2. Key Laboratory of Industrial IoT and Networked Control, Ministry of Education, College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
  3. Hong Kong Baptist University Tier 1 Start-up Grant

Abstract

Most community question-answering systems lack a definite mechanism for restricting inappropriate and insincere content in question text. A piece of text is insincere if it asserts false claims, assumes something debatable, or takes a non-neutral or exaggerated tone about an individual or a group. In this paper, we propose a pipeline called Deep Refinement, which utilizes state-of-the-art methods for information retrieval from highly sparse data, such as the capsule network and the attention mechanism. We apply the Deep Refinement pipeline to classify text into two categories, namely sincere and insincere. Our novel approach provides a system for classifying such questions in order to ensure enhanced monitoring and information quality. The data used to learn what constitutes sincere and insincere content is the Quora Insincere Questions dataset. Our proposed question classification method outperforms previously used text classification methods, achieving an F1 score of 0.978.
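
To make the described architecture concrete, below is a minimal PyTorch sketch of a capsule-with-attention text classifier in the spirit of the pipeline summarized above. All names (AttentiveCapsuleClassifier, squash), layer sizes, the additive attention, the single linear primary-capsule projection, and the omission of dynamic routing between capsule layers are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch: recurrent encoder (GRU) + attention + capsule layer for binary
# (sincere vs. insincere) text classification. Illustrative only.
import torch
import torch.nn as nn

def squash(s, dim=-1, eps=1e-8):
    """Capsule non-linearity: scales vector length into (0, 1), keeps direction."""
    norm_sq = (s * s).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

class AttentiveCapsuleClassifier(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=128, hidden=64,
                 num_caps=10, caps_dim=16, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Bidirectional GRU encoder (the keywords list LSTM/GRU encoders)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Additive attention over time steps -- an assumed variant
        self.att = nn.Linear(2 * hidden, 1)
        # Primary capsules built from the attended context vector
        self.primary = nn.Linear(2 * hidden, num_caps * caps_dim)
        self.num_caps, self.caps_dim = num_caps, caps_dim
        self.classify = nn.Linear(num_caps * caps_dim, num_classes)

    def forward(self, tokens):                       # tokens: (B, T) int64
        h, _ = self.gru(self.embed(tokens))          # (B, T, 2*hidden)
        scores = self.att(h).squeeze(-1)             # (B, T)
        alpha = torch.softmax(scores, dim=1)         # attention weights
        context = (alpha.unsqueeze(-1) * h).sum(1)   # (B, 2*hidden)
        caps = self.primary(context).view(-1, self.num_caps, self.caps_dim)
        caps = squash(caps)                          # capsule activations
        return self.classify(caps.flatten(1))        # logits: sincere/insincere

# Usage: classify a batch of two already-tokenised questions
model = AttentiveCapsuleClassifier()
batch = torch.randint(1, 30000, (2, 40))             # (batch=2, seq_len=40)
logits = model(batch)
print(logits.shape)                                  # torch.Size([2, 2])
```

The squash function is the standard capsule non-linearity, which preserves a vector's direction while mapping its length into (0, 1) so that length can encode confidence. For brevity this sketch routes the attended context straight into a single capsule layer; a full capsule network would typically add dynamic routing between primary and higher-level capsules.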
