Autoencoders are a class of neural networks primarily used for unsupervised learning and dimensionality reduction. The fundamental idea behind autoencoders is to encode input data into a lower-dimensional representation and then decode it back to the original data, aiming to minimize the reconstruction error. The basic architecture of an autoencoder consists of two main components: the encoder and the decoder.
An autoencoder consists of an encoder network (represented as $$g_\phi$$) and a decoder network (represented as $$f_\theta$$). The low-dimensional representation $$z$$ is learned in the bottleneck layer, and the reconstructed output is represented as $$x' = f_\theta(g_\phi(x))$$, with the goal that $$x \approx x'$$.
A common loss function used in such vanilla autoencoders is $$L(\theta, \phi) = \frac{1}{n}\sum_{i=1}^n (\mathbf{x}^{(i)} - f_\theta(g_\phi(\mathbf{x}^{(i)})))^2$$, which minimizes the error between the original input and the reconstructed one and is also known as the reconstruction loss.
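The pipeline above can be sketched in a few lines of NumPy. This is a minimal illustration, not a trainable model: the encoder $$g_\phi$$ and decoder $$f_\theta$$ are plain linear maps with random, untrained weights, and the names (`W_enc`, `W_dec`) are chosen here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, latent_dim, n = 8, 2, 4

# Illustrative (untrained) parameters: phi for the encoder, theta for the decoder
W_enc = rng.normal(size=(input_dim, latent_dim))
W_dec = rng.normal(size=(latent_dim, input_dim))

def g_phi(x):
    """Encoder: map input to the lower-dimensional bottleneck z."""
    return x @ W_enc

def f_theta(z):
    """Decoder: map the bottleneck z back to input space."""
    return z @ W_dec

x = rng.normal(size=(n, input_dim))      # a small batch of inputs
x_rec = f_theta(g_phi(x))                # x' = f_theta(g_phi(x))

# Reconstruction loss: mean squared error between x and x'
loss = np.mean(np.sum((x - x_rec) ** 2, axis=1))
```

Training would then adjust `W_enc` and `W_dec` by gradient descent to drive `loss` down, which is exactly the objective written above.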
Autoencoders are useful for tasks such as data denoising, feature learning, and compression. However, traditional autoencoders lack the probabilistic nature that makes VAEs particularly intriguing and useful for generative tasks.
Variational Autoencoders (VAEs) address some of the limitations of traditional autoencoders by introducing a probabilistic approach to encoding and decoding. The motivation behind VAEs lies in their ability to generate new data samples by sampling from a learned distribution in the latent space, rather than from a fixed latent vector as in vanilla autoencoders, which makes them well suited for generation tasks.
The concept can be illustrated with a simple example. The encoder learns to represent input images as a vector capturing features such as a subject's smile, hair color, gender, age, etc., e.g. [0.4, 0.03, 0.032, …]. For this illustration, the focus is narrowed to a single latent feature: the "smile" attribute.
In a vanilla Autoencoder (AE), the smile feature is encoded as a fixed, deterministic value. In contrast, a VAE is deliberately designed to encode this feature as a probability distribution. This design choice introduces variability into generated images, because each generation samples a value from that distribution.
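The contrast can be made concrete with a short sketch. The numbers below (a mean of 0.4 and a standard deviation of 0.05 for the "smile" latent) are illustrative assumptions, not values from any trained model; the sampling step uses the reparameterization $$z = \mu + \sigma \cdot \epsilon$$ with $$\epsilon \sim \mathcal{N}(0, 1)$$.

```python
import numpy as np

rng = np.random.default_rng(42)

# AE: the "smile" attribute is one fixed, deterministic latent value.
smile_ae = 0.4

# VAE: the encoder instead outputs the parameters of a distribution
# (here a Gaussian with illustrative mean and standard deviation).
mu, sigma = 0.4, 0.05

# Each generated image draws a fresh sample: z = mu + sigma * eps
eps = rng.standard_normal(5)
smile_vae_samples = mu + sigma * eps
```

Every entry of `smile_vae_samples` is a slightly different "amount of smile," which is precisely the source of variability that lets a VAE generate novel images rather than reproduce a single fixed code.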
Understanding the mathematical concepts behind VAEs involves grasping the principles of probabilistic modeling and variational inference.
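Concretely, VAE training maximizes the evidence lower bound (ELBO). The form below is the standard textbook objective (not stated explicitly above), written with an encoder distribution $$q_\phi(z \mid \mathbf{x})$$ and a decoder distribution $$p_\theta(\mathbf{x} \mid z)$$:

$$\mathcal{L}(\theta, \phi; \mathbf{x}) = \mathbb{E}_{q_\phi(z \mid \mathbf{x})}\left[\log p_\theta(\mathbf{x} \mid z)\right] - D_{\mathrm{KL}}\left(q_\phi(z \mid \mathbf{x}) \,\|\, p(z)\right)$$

The first term rewards faithful reconstruction of $$\mathbf{x}$$, while the KL term keeps the learned latent distribution close to the prior $$p(z)$$; these correspond to the reconstruction loss and latent loss discussed next.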
A smaller latent loss tends to produce generated images that closely resemble those in the training set but lack visual quality. Conversely, a smaller reconstruction loss leads to well-reconstructed images during training but hampers the generation of novel images that deviate from the training set. Striking a balance between these two terms is therefore essential for good results in both image reconstruction and generation.

In summary, VAEs go beyond mere data reconstruction: they generate new samples and provide a probabilistic framework for understanding latent representations. The inclusion of probabilistic elements in the model's architecture sets VAEs apart from traditional autoencoders and gives them a richer picture of the data distribution, making them particularly powerful for generative tasks.
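This trade-off can be sketched numerically. The snippet below assumes a Gaussian encoder output and a standard normal prior, for which the latent (KL) loss has a closed form; the batch values are random stand-ins, not real model outputs, and the weighting factor `beta` is the usual knob for balancing the two terms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative encoder outputs for a batch of 4 inputs with a 2-D latent
mu = rng.normal(size=(4, 2))         # means of q(z|x)
log_var = rng.normal(size=(4, 2))    # log-variances of q(z|x)
x = rng.normal(size=(4, 8))          # inputs
x_rec = rng.normal(size=(4, 8))      # stand-in decoder reconstructions

# Reconstruction loss: how well each input is rebuilt
recon_loss = np.mean(np.sum((x - x_rec) ** 2, axis=1))

# Latent loss: closed-form KL divergence between the Gaussian
# posterior N(mu, exp(log_var)) and the prior N(0, I)
kl_loss = np.mean(-0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1))

# beta balances the two terms: larger beta pulls the posterior toward
# the prior (better sampling, blurrier images); smaller beta favors
# sharp reconstructions at the cost of a less usable latent space.
beta = 1.0
total_loss = recon_loss + beta * kl_loss
```

In practice both terms are minimized jointly by gradient descent, and tuning `beta` is one common way to navigate the reconstruction-versus-generation balance described above.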