3.8 Proceedings Paper

How to Fine-Tune BERT for Text Classification?

Journal

CHINESE COMPUTATIONAL LINGUISTICS, CCL 2019
Volume 11856, Issue -, Pages 194-206

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-030-32381-3_16

Keywords

Transfer learning; BERT; Text classification

Funding

  1. China National Key R&D Program [2018YFC0831103]


Language model pre-training has proven useful for learning universal language representations. As a state-of-the-art pre-trained language model, BERT (Bidirectional Encoder Representations from Transformers) has achieved impressive results on many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT for text classification and provide a general solution for BERT fine-tuning. The proposed solution obtains new state-of-the-art results on eight widely studied text classification datasets.
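
As a concrete illustration of the kind of fine-tuning setup the paper investigates, the sketch below fine-tunes a pre-trained BERT encoder with a classification head on a toy corpus. This is not the authors' implementation: it assumes the Hugging Face `transformers` library and PyTorch, and uses a placeholder two-sentence dataset and placeholder hyper-parameters (learning rate 2e-5, three epochs) purely for illustration.

```python
# Minimal sketch of fine-tuning BERT for text classification.
# Assumptions: Hugging Face `transformers`, PyTorch, a toy corpus,
# and illustrative hyper-parameters; not the paper's original code.
import torch
from torch.utils.data import DataLoader
from transformers import BertTokenizerFast, BertForSequenceClassification

texts = ["great movie", "terrible plot"]   # placeholder corpus
labels = [1, 0]                            # placeholder binary labels

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Tokenize the corpus and pair each example with its label.
enc = tokenizer(texts, padding=True, truncation=True,
                max_length=128, return_tensors="pt")
dataset = list(zip(enc["input_ids"], enc["attention_mask"],
                   torch.tensor(labels)))
loader = DataLoader(dataset, batch_size=2, shuffle=True)

# A small learning rate is the usual choice when all BERT layers
# are updated end-to-end.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids,
                    attention_mask=attention_mask, labels=y)
        out.loss.backward()   # cross-entropy loss on the [CLS] head
        optimizer.step()
```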
