Article

Learn#: A novel incremental learning method for text classification

Journal

EXPERT SYSTEMS WITH APPLICATIONS
Volume 147, Issue -, Pages -

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2020.113198

Keywords

Learn#; Incremental learning; Reinforcement learning

Funding

  1. National Natural Science Foundation of China [71571136]
  2. Project of Science and Technology Commission of Shanghai Municipality [16JC1403000]


Deep learning is effective at extracting the underlying information in text. However, it performs better on closed datasets and is less effective in real-world text-classification scenarios: as data is updated and its volume grows, models must be retrained, often in a long training process. We therefore propose a novel incremental learning strategy to address these problems. Our method, called Learn#, comprises four components: Student models, a reinforcement learning (RL) module, a Teacher model, and a discriminator model. The Student models first extract features from the texts, then the RL module filters the outputs of the multiple Student models. After that, the Teacher model reclassifies the filtered results to obtain the final text category. To keep the number of Student models from growing without bound as the number of samples increases, the discriminator model filters Student models based on their similarity. Learn# trains faster than a One-Time model because each round trains only one new Student model, leaving the existing Student models unchanged. Furthermore, it can obtain feedback during application and tune the models' parameters over time. Experiments on different datasets show that our method outperforms many traditional One-Time methods for text classification, reducing training time by nearly 80%. (C) 2020 Elsevier Ltd. All rights reserved.
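The paper's actual components are deep networks; purely as a reading aid, the control flow described in the abstract (Students vote, the RL module filters the votes, the Teacher combines them, the discriminator prunes near-duplicate Students) can be sketched in minimal, hypothetical Python. Every name and heuristic below is invented for illustration: linear scorers stand in for Student models, a cosine-similarity threshold stands in for the discriminator, a top-k confidence filter stands in for the RL module, and a confidence-weighted vote stands in for the Teacher.

```python
import math

def cosine(u, v):
    """Cosine similarity between two parameter vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

class Student:
    """Toy stand-in for one incrementally trained Student model."""
    def __init__(self, weights):
        self.weights = weights  # proxy for the model's learned parameters

    def predict(self, features):
        # Linear score; label 1 if positive, with |score| as confidence.
        score = sum(w * f for w, f in zip(self.weights, features))
        return (1 if score > 0 else 0, abs(score))

def discriminator_filter(pool, new_student, threshold=0.95):
    """Reject the new Student if it is too similar to an existing one."""
    for s in pool:
        if cosine(s.weights, new_student.weights) > threshold:
            return pool  # near-duplicate: keep the pool unchanged
    return pool + [new_student]

def rl_filter(predictions, top_k=2):
    """Placeholder for the RL module: keep the top-k most confident votes."""
    return sorted(predictions, key=lambda p: p[1], reverse=True)[:top_k]

def teacher(filtered):
    """Confidence-weighted vote over the filtered Student outputs."""
    tally = {}
    for label, conf in filtered:
        tally[label] = tally.get(label, 0.0) + conf
    return max(tally, key=tally.get)

# Grow the Student pool incrementally; the discriminator prunes a
# near-duplicate (the second weight vector is parallel to the first).
pool = []
for w in [[1.0, -0.5], [0.9, -0.45], [-0.2, 1.0]]:
    pool = discriminator_filter(pool, Student(w))

x = [0.3, 0.8]                        # feature vector for one text
votes = [s.predict(x) for s in pool]  # every surviving Student votes
label = teacher(rl_filter(votes))     # RL filter, then Teacher decides
```

With these toy numbers the discriminator keeps only two of the three Students, and the Teacher's weighted vote yields label 1; the point of the sketch is only the pipeline shape, not the heuristics themselves.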
