Variational Autoencoders

Introduction to Autoencoders

Autoencoders are a class of neural networks primarily used for unsupervised learning and dimensionality reduction. The fundamental idea behind autoencoders is to encode input data into a lower-dimensional representation and then decode it back into a reconstruction of the original data, aiming to minimize the reconstruction error. The basic architecture of an autoencoder consists of two main components: the encoder and the decoder.

Vanilla Autoencoder Image - Lilian Weng Blog

This autoencoder model consists of an encoder network (represented as \\(g_\phi\\)) and a decoder network (represented as \\(f_\theta\\)). The low-dimensional representation is learned in the bottleneck layer as \\(z\\), and the reconstructed output is \\(x' = f_\theta(g_\phi(x))\\), with the goal that \\(x \approx x'\\). A common loss function used in such vanilla autoencoders is

$$L(\theta, \phi) = \frac{1}{n}\sum_{i=1}^n \left(\mathbf{x}^{(i)} - f_\theta(g_\phi(\mathbf{x}^{(i)}))\right)^2$$

which tries to minimize the error between the original input and the reconstructed one and is also known as the reconstruction loss. Autoencoders are useful for tasks such as data denoising, feature learning, and compression. However, traditional autoencoders lack the probabilistic nature that makes VAEs particularly intriguing and also useful for generative tasks.
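To make the architecture and the reconstruction loss concrete, here is a minimal PyTorch sketch of a vanilla autoencoder. The layer sizes and dimensions (784-dimensional inputs, a 32-dimensional bottleneck) are illustrative assumptions, not values prescribed by the text above.

```python
import torch
import torch.nn as nn

class VanillaAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder g_phi: compresses x into the bottleneck representation z
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder f_theta: reconstructs x' from z
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)          # z = g_phi(x)
        return self.decoder(z)       # x' = f_theta(z)

model = VanillaAutoencoder()
x = torch.randn(16, 784)                    # a batch of flattened inputs (dummy data)
x_hat = model(x)                            # reconstruction x' = f_theta(g_phi(x))
loss = nn.functional.mse_loss(x_hat, x)     # the reconstruction (MSE) loss above
```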

Variational Autoencoders (VAEs) Overview

Variational Autoencoders (VAEs) address some of the limitations of traditional autoencoders by introducing a probabilistic approach to encoding and decoding. The motivation behind VAEs lies in their ability to generate new data samples by sampling from a learned distribution in the latent space, rather than from a fixed latent vector as in vanilla autoencoders, which makes them well suited for generation tasks.

The concept can be illustrated with a simple example. The encoder of a neural network learns a representation of input images in the form of a vector that captures features such as the subject’s smile, hair color, gender, age, etc., e.g. [0.4, 0.03, 0.032, …]. In this illustration, the focus is narrowed to a single latent feature: the attribute of a “smile.”

Autoencoders vs VAEs - Sciforce Medium

In a vanilla autoencoder (AE), the smile feature is encoded as a fixed, deterministic value. In contrast, a variational autoencoder (VAE) is deliberately designed to encode this feature as a probability distribution. This design choice introduces variability in the generated images, because values can be sampled from that distribution.
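As a rough illustration of that difference, the sketch below (in PyTorch) shows how a VAE-style encoder head would output a mean and log-variance for the “smile” attribute and then sample from the resulting Gaussian via the reparameterization trick; the specific numbers are made up for the example.

```python
import torch

def sample_latent(mu, logvar):
    """Sample z ~ N(mu, sigma^2) using the reparameterization trick."""
    std = torch.exp(0.5 * logvar)   # sigma = exp(log(sigma^2) / 2)
    eps = torch.randn_like(std)     # noise drawn from N(0, I)
    return mu + eps * std           # z = mu + sigma * eps

# A vanilla AE would store a single fixed value, e.g. smile = 0.4.
# A VAE instead predicts the parameters of a distribution over that value:
mu = torch.tensor([0.4])            # assumed mean of the "smile" attribute
logvar = torch.tensor([-2.0])       # assumed log-variance

z1 = sample_latent(mu, logvar)      # different samples from the same input
z2 = sample_latent(mu, logvar)      # lead to varied generated images
```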

Mathematics Behind VAEs

Understanding the mathematical concepts behind VAEs involves grasping the principles of probabilistic modeling and variational inference.

Variational Autoencoder - Lilian Weng Blog
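In the standard formulation (as in the Lilian Weng reference), the encoder defines an approximate posterior \\(q_\phi(\mathbf{z}|\mathbf{x})\\) and the decoder a likelihood \\(p_\theta(\mathbf{x}|\mathbf{z})\\); the VAE is trained by minimizing the negative evidence lower bound (ELBO):

$$L(\theta, \phi) = -\mathbb{E}_{\mathbf{z} \sim q_\phi(\mathbf{z}|\mathbf{x})}\big[\log p_\theta(\mathbf{x}|\mathbf{z})\big] + D_{\mathrm{KL}}\big(q_\phi(\mathbf{z}|\mathbf{x}) \,\|\, p(\mathbf{z})\big)$$

The first term plays the role of the reconstruction loss from vanilla autoencoders, while the KL term keeps the learned latent distribution close to a prior \\(p(\mathbf{z})\\), typically a standard Gaussian, which is what makes sampling new data from the latent space possible.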

In summary, VAEs go beyond mere data reconstruction; they generate new samples and provide a probabilistic framework for understanding latent representations. The inclusion of probabilistic elements in the model’s architecture sets VAEs apart from traditional autoencoders and gives them a richer understanding of the data distribution, making them particularly powerful for generative tasks.

References

  1. Lilian Weng’s Awesome Blog on Autoencoders
  2. Generative models under a microscope: Comparing VAEs, GANs, and Flow-Based Models
  3. Autoencoders, Variational Autoencoders (VAE) and β-VAE