Article

Probabilistic Autoencoder Using Fisher Information

Journal

ENTROPY
Volume 23, Issue 12, Pages -

Publisher

MDPI
DOI: 10.3390/e23121640

Keywords

machine learning; Fisher information metric; variational methods; variational autoencoder; deep generative network


This study introduces FisherNet, an extension to the autoencoder architecture that derives the uncertainty in latent space from the decoder by means of the Fisher information metric. The FisherNet produces more accurate data reconstructions than a comparable VAE, and its learning performance scales better with an increasing number of latent-space dimensions.
Neural networks play a growing role in many scientific disciplines, including physics. Variational autoencoders (VAEs) are neural networks that are able to represent the essential information of a high-dimensional data set in a low-dimensional latent space, which has a probabilistic interpretation. In particular, the so-called encoder network, the first part of the VAE, maps its input onto a position in latent space and additionally provides uncertainty information in terms of a variance around this position. In this work, an extension to the autoencoder architecture is introduced, the FisherNet. In this architecture, the latent-space uncertainty is not generated using an additional information channel in the encoder but is instead derived from the decoder by means of the Fisher information metric. This architecture has advantages from a theoretical point of view, as it provides a direct uncertainty quantification derived from the model and also accounts for uncertainty cross-correlations. We show experimentally that the FisherNet produces more accurate data reconstructions than a comparable VAE, and that its learning performance also apparently scales better with the number of latent-space dimensions.
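The core idea of deriving latent uncertainty from the decoder can be sketched as follows. For a Gaussian decoder p(x|z) = N(f(z), sigma^2 I), the Fisher information metric in latent space reduces to G(z) = J(z)^T J(z) / sigma^2, where J is the Jacobian of the decoder mean f; its inverse gives a latent covariance estimate including cross-correlations. The snippet below is a minimal illustration only, not the paper's implementation: the toy decoder, the fixed noise level `sigma2`, and the finite-difference Jacobian are all assumptions made for the sketch.

```python
import numpy as np

def decoder(z):
    # Toy decoder f: R^2 -> R^4, a stand-in for a trained decoder network.
    W = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0], [0.5, -0.5]])
    return np.tanh(W @ z)

def fisher_metric(f, z, sigma2=0.1, eps=1e-5):
    """Fisher information metric of a Gaussian decoder N(f(z), sigma2 * I):
    G(z) = J(z)^T J(z) / sigma2, with J estimated by central differences."""
    z = np.asarray(z, dtype=float)
    cols = []
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        cols.append((f(z + dz) - f(z - dz)) / (2.0 * eps))
    J = np.stack(cols, axis=1)          # Jacobian, shape (dim_x, dim_z)
    return J.T @ J / sigma2

z = np.array([0.3, -0.2])
G = fisher_metric(decoder, z)           # metric at this latent position
cov = np.linalg.inv(G)                  # latent covariance estimate,
                                        # including cross-correlations
```

In an actual network one would obtain J by automatic differentiation rather than finite differences; the point here is only that the uncertainty comes from the decoder's local geometry, not from an extra encoder output.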
