Journal
MACHINE LEARNING
Volume 109, Issue 9-10, Pages 1749-1778
Publisher
SPRINGER
DOI: 10.1007/s10994-020-05897-1
Keywords
Active learning; Transparency; Robustness to labeling noise; Black-box models; Clustering; Named entity recognition
Funding
- Center for Data Science
- Center for Intelligent Information Retrieval
- Chan Zuckerberg Initiative
- Collaborative R&D Fund
- National Science Foundation (NSF) [DMR-1534431, IIS-1514053]
Abstract
Existing deep active learning algorithms achieve impressive sampling efficiency on natural language processing tasks. However, they exhibit several weaknesses in practice, including (a) inability to use uncertainty sampling with black-box models, (b) lack of robustness to labeling noise, and (c) lack of transparency. In response, we propose a transparent batch active sampling framework by estimating the error decay curves of multiple feature-defined subsets of the data. Experiments on four named entity recognition (NER) tasks demonstrate that the proposed methods significantly outperform diversification-based methods for black-box NER taggers, and can make the sampling process more robust to labeling noise when combined with uncertainty-based methods. Furthermore, the analysis of experimental results sheds light on the weaknesses of different active sampling strategies, and when traditional uncertainty-based or diversification-based methods can be expected to work well.
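The core idea described in the abstract, estimating per-subset error decay curves and spending the labeling budget where error is predicted to fall fastest, can be sketched as follows. This is a minimal illustration, not the paper's exact estimator: the power-law decay form, the log-log least-squares fit, and the greedy batch allocation are all assumptions made for the sketch, and the subset names are hypothetical.

```python
import math

def fit_power_law(ns, errs):
    """Fit err(n) = a * n**(-b) by least squares in log-log space.

    ns   -- numbers of labeled examples at which error was measured
    errs -- observed error rates at those sizes
    Returns the fitted (a, b).
    """
    xs = [math.log(n) for n in ns]
    ys = [math.log(e) for e in errs]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var              # slope in log-log space is -b
    intercept = my - slope * mx    # intercept is log(a)
    return math.exp(intercept), -slope

def allocate_batch(curves, counts, batch_size):
    """Greedily assign the next batch of labels to the feature-defined
    subsets with the largest predicted marginal error reduction.

    curves -- {subset: (a, b)} fitted decay curves
    counts -- {subset: current number of labeled examples}
    """
    counts = dict(counts)
    picks = {s: 0 for s in curves}
    for _ in range(batch_size):
        def gain(s):
            a, b = curves[s]
            n = counts[s]
            # predicted error drop from labeling one more example
            return a * n ** (-b) - a * (n + 1) ** (-b)
        best = max(curves, key=gain)
        picks[best] += 1
        counts[best] += 1
    return picks
```

For example, a subset whose error is still decaying steeply (large `b` at small `n`) receives labels before one whose curve has flattened, which is what makes the allocation transparent: each sampling decision is traceable to a fitted curve rather than to an opaque model score.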