Article

Bringing Light Into the Dark: A Large-Scale Evaluation of Knowledge Graph Embedding Models Under a Unified Framework

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2021.3124805

Keywords

Computational modeling; Benchmark testing; Magnetic heads; Training; Predictive models; Task analysis; Reproducibility of results; Knowledge graph embeddings; link prediction; reproducibility; benchmarking

Funding

  1. German Federal Ministry of Education and Research (BMBF) [01IS18036A, 01IS18050D]
  2. Defense Advanced Research Projects Agency (DARPA) Automating Scientific Knowledge Extraction (ASKE) program [HR00111990009]
  3. Innovation Fund Denmark with the Danish Center for Big Data Analytics driven Innovation (DABAI)

Abstract

This study re-implemented and evaluated 21 knowledge graph embedding models and performed large-scale benchmarking to assess the reproducibility of previously published results. The results highlight that a model's performance is determined by multiple factors, not just its architecture. The study provides insights into best practices and configurations, as well as suggestions for further improvements.
The heterogeneity among recently published knowledge graph embedding models' implementations, training procedures, and evaluation protocols has made fair and thorough comparisons difficult. To assess the reproducibility of previously published results, we re-implemented and evaluated 21 models in the PyKEEN software package. In this paper, we outline which results could be reproduced with their reported hyper-parameters, which could only be reproduced with alternate hyper-parameters, and which could not be reproduced at all, and we provide insight into why this might be the case. We then performed a large-scale benchmarking study on four datasets, comprising several thousand experiments and 24,804 GPU hours of computation time. We present insights into best practices, the best configuration for each model, and where improvements could be made over previously published best configurations. Our results highlight that a model's performance is determined not only by its architecture but by the combination of architecture, training approach, loss function, and the explicit modeling of inverse relations. We provide evidence that several architectures can achieve results competitive with the state of the art when configured carefully. We have made all code, experimental configurations, results, and analyses available at https://github.com/pykeen/pykeen and https://github.com/pykeen/benchmarking.

