Article

A robust variational autoencoder using beta divergence

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 238

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2021.107886

Keywords

RVAE; Robust anomaly detection; Outlier; VAE; β divergence

Funding

  1. DOD, USA [W81XWH-18-1-061]
  2. NIH, USA [R01 NS074980, R01 EB026299]

Abstract

The presence of outliers can severely degrade learned representations and performance of deep learning methods and hence disproportionately affect the training process, leading to incorrect conclusions about the data. For example, anomaly detection using deep generative models is typically only possible when similar anomalies (or outliers) are not present in the training data. Here we focus on variational autoencoders (VAEs). While the VAE is a popular framework for anomaly detection tasks, we observe that the VAE is unable to detect outliers when the training data contains anomalies that have the same distribution as those in test data. In this paper we focus on robustness to outliers in training data in VAE settings using concepts from robust statistics. We propose a variational lower bound that leads to a robust VAE model that has the same computational complexity as the standard VAE and contains a single automatically-adjusted tuning parameter to control the degree of robustness. We present mathematical formulations for robust variational autoencoders (RVAEs) for Bernoulli, Gaussian and categorical variables. The RVAE model is based on beta-divergence rather than the standard Kullback-Leibler (KL) divergence. We demonstrate the performance of our proposed β-divergence-based autoencoder for a variety of image and categorical datasets showing improved robustness to outliers both qualitatively and quantitatively. We also illustrate the use of our robust VAE for detection of lesions in brain images, formulated as an anomaly detection task. Finally, we suggest a method to tune the hyperparameter of RVAE which makes our model completely unsupervised. © 2021 Elsevier B.V. All rights reserved.
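To give a feel for why replacing the log-likelihood with a β-divergence (density power divergence) term yields bounded, outlier-robust losses, here is a minimal sketch for a single Bernoulli variable. This is an illustrative toy, not the paper's multivariate RVAE objective; the function names `beta_bernoulli_loss` and `bce` are my own. As β → 0 the β loss recovers the standard binary cross-entropy (up to an additive constant), while for β > 0 the loss stays bounded even when the model assigns near-zero probability to an observation.

```python
import math

def beta_bernoulli_loss(x, p, beta):
    """Density-power-divergence ("beta") reconstruction loss for a single
    Bernoulli observation x in {0, 1} with predicted probability p.

    Loss = -(1/beta) * (p(x)^beta - 1)
           + (1/(beta+1)) * sum over y in {0,1} of p(y)^(beta+1)

    As beta -> 0 the first term tends to -log p(x) (standard NLL) and the
    second term tends to 1, so the loss approaches BCE + 1. For beta > 0
    the first term is bounded above by 1/beta, limiting outlier influence.
    """
    lik = p if x == 1 else 1.0 - p                     # Bernoulli likelihood p(x)
    fit = -((lik ** beta) - 1.0) / beta                # data-fit term, bounded by 1/beta
    norm = (p ** (beta + 1.0)
            + (1.0 - p) ** (beta + 1.0)) / (beta + 1.0)  # normalizing term
    return fit + norm

def bce(x, p):
    """Standard Bernoulli negative log-likelihood, for comparison."""
    lik = p if x == 1 else 1.0 - p
    return -math.log(lik)
```

For a well-explained inlier (say x = 1, p = 0.8) the two losses agree up to a constant; for a gross outlier (x = 1, p ≈ 0) the cross-entropy diverges while the β loss saturates near 1/β, which is the mechanism behind the robustness the abstract describes.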
