Article

Robust scientific text classification using prompt tuning based on data augmentation with L2 regularization

Journal: Information Processing & Management
Publisher: ELSEVIER SCI LTD
DOI: 10.1016/j.ipm.2023.103531

Keywords

Scientific text classification; Pre-training model; Prompt tuning; Data augmentation; Pairwise training; L2 regularization

This research proposes a method to enhance prompt tuning using data augmentation with L2 regularization and demonstrates significant improvements in the accuracy and robustness of language models on two scientific text datasets.
Recently, prompt tuning, which incorporates prompts into the input of a pre-trained language model (such as BERT or GPT), has shown promise in improving the performance of language models when annotated data is limited. However, semantically equivalent templates do not necessarily produce equivalent prompting effects, and prompt tuning often exhibits unstable performance; this instability is more severe in the scientific domain. To address this challenge, we propose to enhance prompt tuning using data augmentation with L2 regularization. Specifically, pairwise training is performed on pairs of original and transformed (augmented) data. Our experiments on two scientific text datasets (ACL-ARC and SciCite) demonstrate that the proposed method significantly improves both accuracy and robustness. Using 1000 of the 1688 samples in the ACL-ARC training set, our method achieved an F1 score 3.33% higher than that of the same model trained on all 1688 samples. On the SciCite dataset, our method surpassed the same model while using over 93% less labeled data. Our method also proves highly robust, reaching F1 scores 1% to 8% higher than those of models without our method after the Probability Weighted Word Saliency (PWWS) attack.
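
A minimal sketch of how such a pairwise objective could look is given below. It assumes the L2 regularization term penalizes the distance between the model's outputs on the original and the augmented input; the function name `pairwise_prompt_tuning_loss`, the weight `lambda_reg`, and the HuggingFace-style `model(**batch).logits` interface are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def pairwise_prompt_tuning_loss(model, orig_batch, aug_batch, labels, lambda_reg=0.1):
    """Illustrative pairwise training loss (assumed form, not the authors' code).

    Combines classification loss on the original and the augmented (transformed)
    input with an L2 term that pulls the two sets of logits together, encouraging
    consistent predictions under data augmentation.
    """
    logits_orig = model(**orig_batch).logits   # prompt-tuned PLM output on original text
    logits_aug = model(**aug_batch).logits     # output on the augmented counterpart

    # Cross-entropy on both views of the same labeled example
    ce_loss = F.cross_entropy(logits_orig, labels) + F.cross_entropy(logits_aug, labels)

    # L2 regularization between paired outputs (consistency penalty)
    l2_reg = torch.mean((logits_orig - logits_aug).pow(2).sum(dim=-1))

    return ce_loss + lambda_reg * l2_reg
```

In this sketch, `lambda_reg` trades off the classification loss against the consistency penalty between the original and augmented views.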
