
Variational autoencoder

From Simple English Wikipedia, the free encyclopedia

In machine learning, a variational autoencoder (VAE) is a generative model, meaning that it can generate things that it has not seen before. It is made of artificial neural networks. It was introduced by Diederik P. Kingma and Max Welling.[1]

The architecture has two main components: the encoder and the decoder. The encoder compresses an input into a probability distribution, and the decoder takes a sample from that distribution and tries to rebuild the original input. A small sketch of this structure is shown below.
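Below is a minimal sketch of the two components in PyTorch. It assumes flattened 28x28 images (784 numbers) as input and a 2-dimensional latent space; the class name TinyVAE and the layer sizes are illustrative choices, not part of the original paper.

    import torch
    from torch import nn

    class TinyVAE(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: turns an input into the parameters of a Gaussian distribution.
            self.enc = nn.Linear(784, 128)
            self.mu = nn.Linear(128, 2)       # mean of the distribution
            self.log_var = nn.Linear(128, 2)  # log-variance of the distribution
            # Decoder: turns a point from the latent space back into an input-sized output.
            self.dec = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 784))

        def encode(self, x):
            h = torch.relu(self.enc(x))
            return self.mu(h), self.log_var(h)

        def decode(self, z):
            return torch.sigmoid(self.dec(z))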

The VAE is famous for its use of the reparameterization trick. The trick lets training signals called gradients reach the encoder. Normally, gradients cannot pass through the random sampling of a probability distribution. A sketch of the trick is shown below.
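Below is a small sketch of the trick in PyTorch. The function name reparameterize and the use of a log-variance are common conventions assumed here for illustration. The randomness is moved into a separate number epsilon, so mu and log_var stay differentiable and gradients can flow back into the encoder.

    import torch

    def reparameterize(mu, log_var):
        # Sample z = mu + sigma * epsilon, where epsilon ~ N(0, 1).
        # Only epsilon is random, so gradients can pass through mu and log_var.
        std = torch.exp(0.5 * log_var)
        eps = torch.randn_like(std)
        return mu + eps * std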

References

  1. Kingma, Diederik P.; Welling, Max (2022-12-10). "Auto-Encoding Variational Bayes". arXiv:1312.6114 [cs, stat].