{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Variational AutoEncoder Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```{note}\n",
    "We will refer to Auto Encoders as AE and to Variational Auto Encoders as VAE.\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What do you mean by variational?\n",
    "\n",
    "VAEs are ordinary AEs with a twist: the encoded latent vector is constrained to follow a Gaussian distribution. When a VAE encodes an image, the encoder outputs two latents: a mean, `mean`, and a standard deviation, `stddev`, which parameterize a Gaussian distribution corresponding to that image. Gaussian noise scaled by `stddev` is then added to `mean` before decoding, which forces the model to be robust to small perturbations of the latent."
   ]
  },
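  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The noisy-encoding step described above is commonly implemented as the reparameterization trick. A minimal NumPy sketch, where the latent size and the `mean`/`stddev` values are illustrative assumptions rather than outputs of a real encoder:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "# Hypothetical encoder outputs for one image: a mean and a\n",
    "# (positive) standard deviation per latent dimension.\n",
    "latent_dim = 4\n",
    "mean = rng.normal(size=latent_dim)\n",
    "stddev = np.exp(0.1 * rng.normal(size=latent_dim))\n",
    "\n",
    "# Reparameterization trick: draw standard Gaussian noise and\n",
    "# scale it by `stddev`, so the sampled latent stays near `mean`\n",
    "# but differs slightly on every pass.\n",
    "eps = rng.normal(size=latent_dim)\n",
    "z = mean + stddev * eps\n",
    "print(z.shape)\n",
    "```"
   ]
  },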
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Are VAEs better?\n",
    "\n",
    "For generation, definitely. A VAE usually generates better samples than a plain AE does, because it has to learn to decode images even when the latent is slightly off (a consequence of the noise added to `mean`).\n",
    "\n",
    "However, for training a good compression model, a VAE usually cannot shrink the latent as aggressively as an AE can, for the same reason: to stay robust to the added noise, more information has to pass through the latent."
   ]
  }
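  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In practice, the Gaussian constraint on the latent is enforced during training by adding a KL-divergence penalty to the reconstruction loss. A hedged sketch of the closed-form KL term for a diagonal Gaussian posterior against a standard-normal prior (the function name and values are illustrative, not from a specific model):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def kl_to_standard_normal(mean, log_var):\n",
    "    # KL( N(mean, exp(log_var)) || N(0, I) ), summed over latent dims.\n",
    "    return 0.5 * np.sum(np.exp(log_var) + mean**2 - 1.0 - log_var)\n",
    "\n",
    "# When the posterior already matches the prior, the penalty is zero.\n",
    "print(kl_to_standard_normal(np.zeros(2), np.zeros(2)))\n",
    "```"
   ]
  }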
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
