Article

Generative adversarial networks based on Wasserstein distance for knowledge graph embeddings

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 190

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2019.105165

Keywords

Knowledge graph embedding; Generative adversarial networks; Wasserstein distance; Weak supervision information

Funding

  1. National Key R&D Program of China [2018YFB1004800]
  2. National Natural Science Foundation of China [U1705262, 61672159]
  3. Technology Innovation Platform Project of Fujian Province, China [2014H2005, 2009J1007]
  4. Fujian Collaborative Innovation Center for Big Data Application in Governments, China
  5. Fujian Engineering Research Center of Big Data Analysis and Processing, China


Knowledge graph embedding aims to project entities and relations into low-dimensional, continuous semantic feature spaces, and has attracted increasing attention in recent years. Most existing models construct negative samples by uniform random corruption, so the corrupted triplets are largely trivial for training the embedding model. Inspired by generative adversarial networks (GANs), a generator can be employed to sample more plausible negative triplets, which pushes the discriminator to further improve its embedding performance. However, vanishing gradients on discrete data are an inherent problem of traditional GANs. In this paper, we propose a GAN-based knowledge graph representation learning model that introduces the Wasserstein distance in place of the traditional divergence to settle this issue. Moreover, additional weak supervision information is absorbed to refine the embedding model, since this textual information contains detailed semantic descriptions and offers abundant semantic relevance. In the experiments, we evaluate our method on the tasks of link prediction and triplet classification. The results indicate that the Wasserstein distance solves the vanishing-gradient problem on discrete data and accelerates convergence, and that the additional weak supervision information also significantly improves the model's performance. (C) 2019 Published by Elsevier B.V.
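The adversarial scheme described in the abstract can be sketched in a few lines: a generator samples "hard" negative tails by preferring high-scoring corruptions, and the discriminator (the embedding model) is trained with a Wasserstein-style objective, i.e. the difference of mean scores on real and generated triplets rather than a log-likelihood divergence. The TransE-style scorer, the softmax sampler, and all names below are illustrative assumptions for a minimal sketch, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge graph: 50 entities, 5 relations, 16-dim embeddings
# (sizes are arbitrary for illustration).
n_ent, n_rel, dim = 50, 5, 16
E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_rel, dim))   # relation embeddings

def score(h, r, t):
    """TransE-style plausibility score (higher = more plausible)."""
    return -np.linalg.norm(E[h] + R[r] - E[t], axis=-1)

def generator_sample(h, r, candidates):
    """Generator: draw a hard negative tail, softmax-weighted by score,
    so implausible corruptions are rarely proposed."""
    s = score(h, r, candidates)
    p = np.exp(s - s.max())
    p /= p.sum()
    return rng.choice(candidates, p=p)

def critic_loss(pos, neg):
    """Wasserstein-style critic objective for the discriminator:
    mean score of generated negatives minus mean score of true triplets.
    Minimizing this widens the score gap without a saturating divergence."""
    ph, pr, pt = pos
    nh, nr, nt = neg
    return score(nh, nr, nt).mean() - score(ph, pr, pt).mean()

# One illustrative step: corrupt the tails of two true triplets.
pos = (np.array([0, 1]), np.array([0, 1]), np.array([2, 3]))
cands = np.arange(n_ent)
neg_tails = np.array([generator_sample(h, r, cands)
                      for h, r in zip(pos[0], pos[1])])
neg = (pos[0], pos[1], neg_tails)
loss = critic_loss(pos, neg)
```

In a full model, `loss` would be backpropagated through `E` and `R` (with a Lipschitz constraint on the critic, e.g. weight clipping or a gradient penalty), while the generator is updated via policy gradient since the sampled triplets are discrete.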
