Journal
ELECTRONICS
Volume 10, Issue 21
Publisher
MDPI
DOI: 10.3390/electronics10212656
Keywords
ontology; automation; natural language processing (NLP); pretrained model
In recent years, automatic ontology generation has received significant attention in information science as a means of systematizing vast amounts of online data. In an initial attempt at ontology generation with a neural network, we proposed a recurrent neural network (RNN)-based method. However, advances in natural language processing (NLP) now make it possible to update that architecture. In particular, transfer learning from language models pretrained on large, unlabeled corpora has yielded breakthroughs in NLP. Inspired by these achievements, we propose a novel workflow for ontology generation comprising two-stage learning. Our results show that our best method improved accuracy by over 12.5%. As an application example, we applied our model to the Stanford Question Answering Dataset (SQuAD) to demonstrate ontology generation on real-world data. The results show that our model can generate a good ontology, with some exceptions on the real-world data, indicating directions for future research to improve quality.
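The two-stage idea in the abstract can be illustrated with a minimal, self-contained sketch. This is not the paper's actual pretrained model or ontology task; the toy corpus, the SVD-based "pretraining", and the logistic-regression head are all illustrative assumptions standing in for stage 1 (unsupervised learning on an unlabeled corpus) and stage 2 (supervised fine-tuning on a small labeled set).

```python
import numpy as np

# --- Stage 1: unsupervised "pretraining" on an unlabeled toy corpus ---
# Stand-in for a pretrained language model: learn word vectors by
# factorizing a small co-occurrence matrix with SVD.
corpus = [
    "cat is an animal", "dog is an animal",
    "rose is a flower", "tulip is a flower",
]
vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))
for s in corpus:
    ws = s.split()
    for i, w in enumerate(ws):
        for c in ws[max(0, i - 2):i + 3]:  # +/-2 context window
            if c != w:
                cooc[idx[w], idx[c]] += 1
U, S, _ = np.linalg.svd(cooc)
emb = U[:, :4] * S[:4]  # frozen "pretrained" embeddings

# --- Stage 2: supervised fine-tuning on a small labeled set ---
# Train a logistic-regression head on top of the frozen embeddings to
# tag words with a toy ontology class: animal (1) vs. flower (0).
X = np.array([emb[idx[w]] for w in ["cat", "dog", "rose", "tulip"]])
y = np.array([1, 1, 0, 0])
w_head, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):  # plain gradient descent on the log loss
    p = 1 / (1 + np.exp(-(X @ w_head + b)))
    g = p - y
    w_head -= 0.5 * X.T @ g / len(y)
    b -= 0.5 * g.mean()

pred = (1 / (1 + np.exp(-(X @ w_head + b))) > 0.5).astype(int)
print(pred.tolist())
```

The point of the sketch is the division of labor: stage 1 never sees labels, and stage 2 trains only a small head on top of the stage-1 representation, which is what lets a little labeled data go a long way.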