Article

Bringing Light Into the Dark: A Large-Scale Evaluation of Knowledge Graph Embedding Models Under a Unified Framework

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2021.3124805

Keywords

Computational modeling; Benchmark testing; Magnetic heads; Training; Predictive models; Task analysis; Reproducibility of results; Knowledge graph embeddings; link prediction; reproducibility; benchmarking

Funding

  1. German Federal Ministry of Education and Research (BMBF) [01IS18036A, 01IS18050D]
  2. Defense Advanced Research Projects Agency (DARPA) Automating Scientific Knowledge Extraction (ASKE) program [HR00111990009]
  3. Innovation Fund Denmark with the Danish Center for Big Data Analytics driven Innovation (DABAI)

Abstract

This study re-implemented and evaluated 21 knowledge graph embedding models and performed large-scale benchmarking to assess the reproducibility of previously published results. The results highlight that a model's performance is determined by multiple factors, not just its architecture. The study provides insights into best practices and configurations, as well as suggestions for further improvements.
The heterogeneity in the implementation, training, and evaluation of recently published knowledge graph embedding models has made fair and thorough comparisons difficult. To assess the reproducibility of previously published results, we re-implemented and evaluated 21 models in the PyKEEN software package. In this paper, we outline which results could be reproduced with their reported hyper-parameters, which could only be reproduced with alternate hyper-parameters, and which could not be reproduced at all, and we provide insight into why this might be the case. We then performed a large-scale benchmarking on four datasets with several thousand experiments and 24,804 GPU hours of computation time. We present insights into best practices, the best configuration for each model, and where improvements could be made over previously published best configurations. Our results highlight that the combination of model architecture, training approach, loss function, and the explicit modeling of inverse relations is crucial for a model's performance, which is therefore not determined by its architecture alone. We provide evidence that several architectures can achieve results competitive with the state of the art when configured carefully. We have made all code, experimental configurations, results, and analyses available at https://github.com/pykeen/pykeen and https://github.com/pykeen/benchmarking.
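
For illustration, the following minimal sketch shows how a single such configuration (model, loss, training approach, and explicit inverse relations) could be trained and evaluated with PyKEEN's pipeline API. The dataset, model, loss, and hyper-parameter values below are placeholders chosen for the example, not the study's reported best configurations, and argument names may vary slightly across PyKEEN versions.

from pykeen.pipeline import pipeline

# Train and evaluate one configuration; the benchmarking study sweeps many
# such combinations of model, loss, training approach, and inverse-relation
# modeling across four datasets.
result = pipeline(
    dataset="fb15k237",
    dataset_kwargs=dict(create_inverse_triples=True),  # explicit inverse relations
    model="TransE",
    loss="nssa",            # self-adversarial negative sampling loss
    training_loop="slcwa",  # stochastic local closed-world assumption training
    negative_sampler="basic",
    training_kwargs=dict(num_epochs=100, batch_size=512),  # placeholder values
    random_seed=42,
)

# Rank-based link-prediction metrics, e.g., mean reciprocal rank and hits@10
print(result.metric_results.to_flat_dict())

The configurations actually used in the study are available in the linked benchmarking repository.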
