Article

Localizing in-domain adaptation of transformer-based biomedical language models

Journal

JOURNAL OF BIOMEDICAL INFORMATICS
Volume 144

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jbi.2023.104431

Keywords

Natural language processing; Deep learning; Language model; Biomedical text mining; Transformer


In the era of digital healthcare, the huge volumes of textual information generated every day in hospitals constitute an essential but underused asset that could be exploited with task-specific, fine-tuned biomedical language representation models, improving patient care and management. For such specialized domains, previous research has shown that fine-tuning models stemming from broad-coverage checkpoints can benefit greatly from additional training rounds over large-scale in-domain resources. However, these resources are often unreachable for less-resourced languages like Italian, preventing local medical institutions from employing in-domain adaptation. In order to reduce this gap, our work investigates two accessible approaches to derive biomedical language models in languages other than English, taking Italian as a concrete use-case: one based on neural machine translation of English resources, favoring quantity over quality; the other based on a high-grade, narrow-scoped corpus natively written in Italian, thus preferring quality over quantity. Our study shows that data quantity is a harder constraint than data quality for biomedical adaptation, but the concatenation of high-quality data can improve model performance even when dealing with relatively size-limited corpora. The models published from our investigations have the potential to unlock important research opportunities for Italian hospitals and academia. Finally, the set of lessons learned from the study constitutes valuable insights towards a solution to build biomedical language models that are generalizable to other less-resourced languages and different domain settings.

