Journal
CHINESE COMPUTATIONAL LINGUISTICS, CCL 2019
Volume 11856, Pages 194-206
Publisher
SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-030-32381-3_16
Keywords
Transfer learning; BERT; Text classification
Funding
- China National Key R&D Program [2018YFC0831103]
Language model pre-training has proven to be useful for learning universal language representations. As a state-of-the-art pre-trained language model, BERT (Bidirectional Encoder Representations from Transformers) has achieved impressive results on many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on the text classification task and provide a general solution for BERT fine-tuning. The proposed solution obtains new state-of-the-art results on eight widely studied text classification datasets.
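To illustrate the fine-tuning setup the abstract refers to, the sketch below fine-tunes a pre-trained BERT model on a tiny text classification batch using the Hugging Face transformers library. The checkpoint name, learning rate, and toy examples are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of BERT fine-tuning for text classification.
# Assumptions: Hugging Face transformers + PyTorch are installed;
# "bert-base-uncased", the 2e-5 learning rate, and the toy data are
# illustrative choices, not the paper's reported setup.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

texts = ["the movie was wonderful", "a tedious, overlong film"]
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative

# Tokenize, pad, and truncate to a fixed maximum length.
batch = tokenizer(
    texts, padding=True, truncation=True, max_length=128, return_tensors="pt"
)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few gradient steps on the toy batch
    optimizer.zero_grad()
    outputs = model(**batch, labels=labels)  # loss computed internally
    outputs.loss.backward()
    optimizer.step()

# Inference: predicted class index per input text.
model.eval()
with torch.no_grad():
    logits = model(**batch).logits
print(logits.argmax(dim=-1))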