Article

Attribute-based regularization of latent spaces for variational auto-encoders

Journal

NEURAL COMPUTING & APPLICATIONS
Volume 33, Issue 9, Pages 4429-4444

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s00521-020-05270-2

Keywords

Representation learning; Latent space disentanglement; Latent space regularization; Generative modeling

Funding

  1. Nvidia Corporation

Selective manipulation of data attributes using deep generative models is an active area of research. In this paper, we present a novel method to structure the latent space of a variational auto-encoder to encode different continuous-valued attributes explicitly. This is accomplished by using an attribute regularization loss which enforces a monotonic relationship between the attribute values and the latent code of the dimension along which the attribute is to be encoded. Consequently, after training, the model can be used to manipulate the attribute by simply changing the latent code of the corresponding regularized dimension. The results obtained from several quantitative and qualitative experiments show that the proposed method leads to disentangled and interpretable latent spaces which can be used to effectively manipulate a wide range of data attributes spanning image and symbolic music domains.
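The abstract describes a regularization loss that pushes the latent code of one chosen dimension to vary monotonically with an attribute. A minimal sketch of such a monotonicity-enforcing penalty is shown below; this is a hypothetical NumPy re-implementation for illustration, not the authors' code, and the function name, `delta` scaling parameter, and use of `tanh`/`sign` over pairwise batch differences are assumptions about one plausible formulation:

```python
import numpy as np

def attribute_regularization_loss(latent_dim_values, attribute_values, delta=1.0):
    """Hypothetical sketch of a monotonicity-enforcing regularizer.

    latent_dim_values: shape (batch,), latent codes of the regularized dimension
    attribute_values:  shape (batch,), continuous attribute values of the inputs
    delta:             scaling factor controlling the sharpness of tanh
    """
    # All pairwise differences within the batch, for both the latent
    # codes of the regularized dimension and the attribute values.
    lc_diff = latent_dim_values.reshape(-1, 1) - latent_dim_values.reshape(1, -1)
    attr_diff = attribute_values.reshape(-1, 1) - attribute_values.reshape(1, -1)
    # Penalize disagreement between the (soft) sign of latent-code
    # differences and the sign of attribute differences; tanh keeps the
    # latent side differentiable for gradient-based training.
    return np.abs(np.tanh(delta * lc_diff) - np.sign(attr_diff)).mean()
```

Under this sketch, a batch whose latent codes increase with the attribute incurs a small penalty, while an anti-monotonic arrangement incurs a large one, which is what drives the chosen dimension to encode the attribute.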
