Variational Autoencoders

Variational Autoencoders (VAEs) are a more recent take on the autoencoding problem. Unlike autoencoders, which learn a compressed representation of the data, Variational Autoencoders learn the random process that generates the data, instead of learning an essentially arbitrary function as we previously did with our neural networks.

VAEs also have an encoder and a decoder. The encoder learns the mean and standard deviation of a normal distribution over the latent variables that are assumed to have generated the data. They are called latent variables because they are not observed explicitly; rather, they are inferred from the data.
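
To make this concrete, here is a minimal sketch of such an encoder written with TensorFlow/Keras; the library choice, the layer sizes, and the 30-feature input are assumptions for illustration only. The encoder maps an input vector to the mean and log-variance of a normal distribution over a small latent space.

import tensorflow as tf
from tensorflow.keras import layers

input_dim = 30    # assumed number of input features
latent_dim = 2    # assumed size of the latent space

inputs = tf.keras.Input(shape=(input_dim,))
h = layers.Dense(16, activation="relu")(inputs)
# Two heads: the mean and the log-variance (log for numerical stability)
# of the normal distribution over the latent variables.
z_mean = layers.Dense(latent_dim, name="z_mean")(h)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(h)

encoder = tf.keras.Model(inputs, [z_mean, z_log_var], name="encoder")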

The decoder part of a VAE maps these latent space points back into data space. As before, we need a loss function to measure the difference between the original inputs and their reconstructions. An extra term is added, called the Kullback-Leibler divergence, or simply KL divergence. The KL divergence measures, roughly, how much one probability distribution differs from another. Adding the KL divergence forces the posterior distribution to stay close to the prior. This, in turn, helps the model both learn better representations of the data and reduce overfitting.
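
The two pieces of the loss can be sketched as follows, assuming the encoder above that outputs z_mean and z_log_var. The reparameterized sampling step and the squared-error reconstruction term are illustrative choices, not the only possible ones.

import tensorflow as tf

def sample_z(z_mean, z_log_var):
    # Reparameterization trick: draw z = mean + sigma * epsilon so that
    # gradients can flow through the sampling step to the encoder.
    eps = tf.random.normal(tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * eps

def vae_loss(x, x_reconstructed, z_mean, z_log_var):
    # Reconstruction term: how far the decoder output is from the input.
    reconstruction = tf.reduce_sum(tf.square(x - x_reconstructed), axis=-1)
    # KL term: closed form for the divergence between the approximate
    # posterior N(z_mean, exp(z_log_var)) and a standard normal prior.
    # It pulls the posterior towards the prior and acts as a regularizer.
    kl = -0.5 * tf.reduce_sum(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
    return tf.reduce_mean(reconstruction + kl)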

Unlike autoencoders, VAEs have a solid probabilistic foundation, so the score you get can be interpreted as the probability of an observation being an outlier. The score produced by an autoencoder has no such interpretation, so the choice of the cutoff or threshold value relies entirely on the judgment of a human expert and is strictly data specific.
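
As a hypothetical illustration of why this matters in practice, a score that is a probability can be thresholded at an interpretable value, whereas a raw reconstruction error has no natural scale and its cutoff must be tuned per dataset:

import numpy as np

# Assumed per-observation outlier probabilities produced by a VAE.
vae_scores = np.array([0.02, 0.10, 0.97, 0.45, 0.99])
outliers = vae_scores > 0.95   # interpretable threshold: 95% probability
print(outliers)                # [False False  True False  True]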
