{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Generalizations of the normal distribution and Hyperbolic VAEs\n",
    "\n",
    "Welcome to our fifth notebook for the ECCV 2022 Tutorial \"[Hyperbolic Representation Learning for Computer Vision](https://sites.google.com/view/hyperbolic-tutorial-eccv22)\"!\n",
    "\n",
    "**Open notebook:**\n",
    "[![View on Github](https://img.shields.io/static/v1.svg?logo=github&label=Repo&message=View%20On%20Github&color=lightgrey)](https://github.com/MinaGhadimiAtigh/hyperbolic_representation_learning/blob/main/notebooks/5_Hyperbolic_VAEs.ipynb)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/MinaGhadimiAtigh/hyperbolic_representation_learning/blob/main/notebooks/5_Hyperbolic_VAEs.ipynb) \n",
    "\n",
    "**Author:** Jeffrey Gu"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this tutorial, we will go through [A Wrapped Normal Distribution on Hyperbolic Space for Gradient-Based\n",
    "Learning](https://proceedings.mlr.press/v97/nagano19a.html) (Nagano et al. 2019), an ICML 2019 paper, and [Continuous Hierarchical Representations with\n",
    "Poincaré Variational Auto-Encoders](https://proceedings.neurips.cc/paper/2019/hash/0ec04cb3912c4f08874dd03716f80df1-Abstract.html) (Mathieu et al. 2019), a NeurIPS 2019 paper. The first paper introduces the wrapped normal distribution, a generalization of the Euclidean normal distribution to hyperbolic space, and uses it to build a hyperbolic variational autoencoder (VAE). The second paper builds on the first by proposing a new maximum-entropy generalization of the normal distribution, which it calls the Riemannian normal, and proposes reparametrizable sampling schemes and algorithms for calculating the probability density function of both generalizations. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's start by importing the libraries we will use. We also set a manual seed using `set_seed`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## standard libraries\n",
    "import numpy as np\n",
    "import math\n",
    "import warnings\n",
    "from IPython.display import clear_output\n",
    "\n",
    "## Imports for plotting\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "## PyTorch\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "\n",
    "## PyTorch Torchvision\n",
    "import torchvision\n",
    "\n",
    "warnings.filterwarnings('ignore')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Function for setting the seed\n",
    "def set_seed(seed):\n",
    "    np.random.seed(seed)\n",
    "    torch.manual_seed(seed)\n",
    "    if torch.cuda.is_available():\n",
    "        torch.cuda.manual_seed(seed)\n",
    "        torch.cuda.manual_seed_all(seed)\n",
    "set_seed(42)\n",
    "\n",
    "# Ensure that all operations are deterministic on GPU (if using) for reproducibility\n",
    "torch.backends.cudnn.deterministic = True\n",
    "torch.backends.cudnn.benchmark = False\n",
    "\n",
    "# Fetching the device that will be used throughout this notebook\n",
    "device = torch.device(\"cpu\") if not torch.cuda.is_available() else torch.device(\"cuda:0\")\n",
    "print(\"Using device\", device)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the hyperbolic layers and functions, we're going to use the geoopt library in this notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install -q git+https://github.com/geoopt/geoopt.git"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, we define the paths which will be used in this notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "DATA_PATH = './data'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's start by setting up the dataset. In this notebook, you will work with the `MNIST` dataset, which consists of 70,000 tiny (28×28) grayscale images of handwritten digits, from zero to nine. The goal is to achieve a good log-likelihood."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tx=torchvision.transforms.Compose([\n",
    "    torchvision.transforms.ToTensor(),\n",
    "    torchvision.transforms.Lambda(lambda p: p.clamp(1e-5, 1 - 1e-5))\n",
    "])\n",
    "\n",
    "# Download the MNIST training and test sets.\n",
    "train_dataset = torchvision.datasets.MNIST(root=DATA_PATH, train=True, download=True, transform=tx)\n",
    "test_dataset = torchvision.datasets.MNIST(root=DATA_PATH, train=False, download=True, transform=tx)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Notation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will follow the notation of Mathieu et al. 2019. Unless otherwise stated, you may assume that all operations, such as the norm $||\\cdot||$, are the usual Euclidean operations. We denote the Poincare ball of curvature $c$ in dimension $d$ by $\\mathbb{B}_c^d$. Recall that the Poincare ball is one model of hyperbolic geometry where all elements lie in an open ball of radius $1/\\sqrt{c}$. The distance measure on the hyperbolic ball is\n",
    "\\begin{align}\n",
    "    d_p^c(z, y) = \\frac{1}{\\sqrt{c}} \\cosh^{-1} \\left(1 + 2c\\frac{||z - y||^2}{(1 - c||z||^2)(1 - c||y||^2)} \\right)\n",
    "\\end{align}\n",
    "We will also introduce the following useful term $\\lambda_z^c$, which is the factor from which the metric of the Poincare ball differs from the Euclidean metric:\n",
    "\\begin{align}\n",
    "    \\lambda_z^c = \\frac{2}{1 - c||z||^2}\n",
    "\\end{align}\n",
    "Finally, we will define the exponential and logarithmic maps in terms of gyrovector space addition. Gyrovector addition (also called Mobius addition) was introduced by Ungar 2008 and is a type of hyperbolic translation. It is defined as\n",
    "\\begin{align}\n",
    "    z \\oplus_c y = \\frac{(1 + 2c \\langle z, y \\rangle + c||y||^2)z + (1 - c||z||^2)y}{1 + 2c \\langle z, y \\rangle + c^2||z||^2||y||^2}\n",
    "\\end{align}\n",
    "As one might expect, one recovers Euclidean vector addition as the curvature $c \\to 0$ (recall that Euclidean space has curvature 0). Ganea et al. 2018 then derived formulas for the exponential map (which maps a tangent vector, living in Euclidean space, to hyperbolic space) and the logarithm map (the \"inverse\" of the exponential map, which maps hyperbolic space back to Euclidean space):\n",
    "\\begin{align}\n",
    "    \\exp_z^c(v) &= z \\oplus_c \\left( \\tanh \\left( \\sqrt{c} \\frac{\\lambda_z^c||v||}{2} \\right) \\frac{v}{\\sqrt{c}||v||} \\right) \\\\\n",
    "    \\log_z^c(y) &= \\frac{2}{\\sqrt{c} \\lambda_z^c} \\tanh^{-1} (\\sqrt{c}||-z \\oplus_c y||) \\frac{-z \\oplus_c y }{||-z \\oplus_c y||}\n",
    "\\end{align}"
   ]
  },
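  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of these formulas, we can verify numerically (using `geoopt`, installed above) that the logarithm map inverts the exponential map, and that Mobius addition by $-z$ undoes Mobius addition by $z$ (the left-cancellation law of gyrogroups). This is only an illustrative sketch; the point and tangent vector below are arbitrary:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from geoopt import PoincareBall\n",
    "\n",
    "ball = PoincareBall(c=1.0)\n",
    "z = torch.tensor([0.1, 0.2])\n",
    "v = torch.tensor([0.3, -0.1])\n",
    "\n",
    "# The log map inverts the exp map (both based at z)\n",
    "y = ball.expmap(z, v)\n",
    "assert torch.allclose(ball.logmap(z, y), v, atol=1e-4)\n",
    "\n",
    "# Left-cancellation: (-z) + (z + y) = y under Mobius addition\n",
    "w = ball.mobius_add(z, y)\n",
    "assert torch.allclose(ball.mobius_add(-z, w), y, atol=1e-4)\n",
    "```"
   ]
  },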
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we will define our Poincare ball manifold, using code adapted from Mathieu et al. 2019's official Github repository:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from geoopt import ManifoldParameter\n",
    "from geoopt.manifolds import PoincareBall as PoincareBallParent\n",
    "from geoopt.manifolds.stereographic.math import _lambda_x, arsinh, tanh"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class PoincareBall(PoincareBallParent):\n",
    "\n",
    "    def __init__(self, dim, c=1.0):\n",
    "        super().__init__(c)\n",
    "        self.register_buffer(\"dim\", torch.as_tensor(dim, dtype=torch.int))\n",
    "        \n",
    "    @property\n",
    "    def coord_dim(self):\n",
    "        return int(self.dim)\n",
    "    \n",
    "    @property\n",
    "    def zero(self):\n",
    "        return torch.zeros(1, self.dim).to(self.c.device)\n",
    "\n",
    "    def logdetexp(self, x, y, is_vector=False, keepdim=False):\n",
    "        \"\"\"\n",
    "        The log-determinant of the exponential map. This is used for calculating the \n",
    "        log-probability (PDF) of the wrapped normal. \n",
    "        \"\"\"\n",
    "        d = self.norm(x, y, keepdim=keepdim) if is_vector else self.dist(x, y, keepdim=keepdim)\n",
    "        return (self.dim - 1) * (torch.sinh(self.c.sqrt()*d) / self.c.sqrt() / d).log()\n",
    "    \n",
    "    def normdist2plane(self, x, a, p, keepdim: bool = False, signed: bool = False, dim: int = -1, norm: bool = False):\n",
    "        \"\"\"\n",
    "        Finds the distance of a point to a plane in hyperbolic space. Used to implement the gyroplane layer.\n",
    "        \"\"\"\n",
    "        c = self.c\n",
    "        sqrt_c = c ** 0.5\n",
    "        diff = self.mobius_add(-p, x, dim=dim)\n",
    "        diff_norm2 = diff.pow(2).sum(dim=dim, keepdim=keepdim).clamp_min(1e-15)\n",
    "        sc_diff_a = (diff * a).sum(dim=dim, keepdim=keepdim)\n",
    "        if not signed:\n",
    "            sc_diff_a = sc_diff_a.abs()\n",
    "        a_norm = a.norm(dim=dim, keepdim=keepdim, p=2).clamp_min(1e-15)\n",
    "        # computing the numerator (see below)\n",
    "        num = 2 * sqrt_c * sc_diff_a\n",
    "        # computing the denominator (see below)\n",
    "        denom = (1 - c * diff_norm2) * a_norm\n",
    "        res = arsinh(num / denom.clamp_min(1e-15)) / sqrt_c\n",
    "        if norm:\n",
    "            res = res * a_norm # * self.lambda_x(a, dim=dim, keepdim=keepdim)\n",
    "        return res"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The Wrapped Normal Distribution"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One common strategy for generalizing the Euclidean normal distribution to an arbitrary manifold is to simply map it onto the manifold using the manifold's exponential map. We will call the result the wrapped normal distribution (of the manifold), denoted $\\mathcal{N}^{\\mathrm{W}}$. This construction induces a probability measure on the manifold, which mathematicians call the pushforward measure. In dimension $d$, the induced density can be calculated to be\n",
    "\\begin{align}\n",
    "    \\mathcal{N}^{\\mathrm{W}}(z|\\mu, \\Sigma) = \\mathcal{N}(\\lambda_\\mu^c \\log_\\mu(z)|0, \\Sigma) \\left(\\frac{\\sqrt{c}\\, d_p^c(\\mu, z)}{\\sinh(\\sqrt{c}\\, d_p^c(\\mu, z))} \\right)^{d-1}\n",
    "\\end{align}\n",
    "Mathieu et al. 2019 simplified the sampling scheme of Nagano et al. 2019 to the following reparametrizable sampling scheme (Algorithm 1 of Mathieu et al. 2019):"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img src=\"hyperbolic_normal_sampling.png\" width=\"300\" height=\"300\" align=\"center\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Calculating the PDF (likelihood) can be done using a change-of-variables formula, which allows us to calculate the PDF of an induced distribution given the original distribution (Nagano et al. 2019)."
   ]
  },
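  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Concretely, writing $r = d_p^c(\\mu, z)$ for the hyperbolic distance and $u = \\lambda_\\mu^c \\log_\\mu(z)$ for the rescaled log map, taking the logarithm of the wrapped normal density gives\n",
    "\\begin{align}\n",
    "    \\log \\mathcal{N}^{\\mathrm{W}}(z|\\mu, \\Sigma) = \\log \\mathcal{N}(u|0, \\Sigma) - (d-1) \\log \\left( \\frac{\\sinh(\\sqrt{c}\\, r)}{\\sqrt{c}\\, r} \\right)\n",
    "\\end{align}\n",
    "This is exactly the quantity computed by the `logdetexp` method of our `PoincareBall` class together with the `log_prob` method of the `WrappedNormal` class below."
   ]
  },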
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We now show how sampling is implemented for the wrapped normal distribution, using code from Mathieu et al. 2019's official implementation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from numbers import Number\n",
    "\n",
    "class WrappedNormal(torch.distributions.Distribution):\n",
    "\n",
    "    arg_constraints = {'loc': torch.distributions.constraints.real,\n",
    "                       'scale': torch.distributions.constraints.positive}\n",
    "    support = torch.distributions.constraints.real\n",
    "    has_rsample = True\n",
    "    _mean_carrier_measure = 0\n",
    "\n",
    "    @property\n",
    "    def mean(self):\n",
    "        return self.loc\n",
    "\n",
    "    @property\n",
    "    def stddev(self):\n",
    "        raise NotImplementedError\n",
    "\n",
    "    @property\n",
    "    def scale(self):\n",
    "        return F.softplus(self._scale) if self.softplus else self._scale\n",
    "\n",
    "    def __init__(self, loc, scale, manifold, validate_args=None, softplus=False):\n",
    "        self.dtype = loc.dtype\n",
    "        self.softplus = softplus\n",
    "        self.loc, self._scale = torch.distributions.utils.broadcast_all(loc, scale)\n",
    "        self.manifold = manifold\n",
    "        self.manifold.assert_check_point_on_manifold(self.loc)\n",
    "        self.device = loc.device\n",
    "        if isinstance(loc, Number) and isinstance(scale, Number):\n",
    "            batch_shape, event_shape = torch.Size(), torch.Size()\n",
    "        else:\n",
    "            batch_shape = self.loc.shape[:-1]\n",
    "            event_shape = torch.Size([self.manifold.dim])\n",
    "        super(WrappedNormal, self).__init__(batch_shape, event_shape, validate_args=validate_args)\n",
    "\n",
    "    def sample(self, shape=torch.Size()):\n",
    "        with torch.no_grad():\n",
    "            return self.rsample(shape)\n",
    "\n",
    "    def rsample(self, sample_shape=torch.Size()):\n",
    "        \"\"\"\n",
    "        Implementation of the above reparametrizable sampling scheme. \n",
    "        \"\"\"\n",
    "        shape = self._extended_shape(sample_shape)\n",
    "        # 1. sample a standard normal and multiply by the standard deviation\n",
    "        v = self.scale * torch.distributions.utils._standard_normal(shape, dtype=self.loc.dtype, device=self.loc.device)\n",
    "        self.manifold.assert_check_vector_on_tangent(self.manifold.zero, v)\n",
    "        # 2. divide by the factor of lambda as in the algorithm above\n",
    "        v = v / self.manifold.lambda_x(self.manifold.zero, keepdim=True)\n",
    "        u = self.manifold.transp(self.manifold.zero, self.loc, v)\n",
    "        # 3. calculate expmap\n",
    "        z = self.manifold.expmap(self.loc, u)\n",
    "        return z\n",
    "\n",
    "    def log_prob(self, x):\n",
    "        \"\"\"\n",
    "        Calculation of the PDF via calculating the log-probability. The calculation is done \n",
    "        using the algorithm of Nagano et al. 2019 (Algorithm 2 of Nagano et al. 2019). For \n",
    "        more details, see the paper.\n",
    "        \"\"\"\n",
    "        shape = x.shape\n",
    "        loc = self.loc.unsqueeze(0).expand(x.shape[0], *self.batch_shape, self.manifold.coord_dim)\n",
    "        if len(shape) < len(loc.shape): x = x.unsqueeze(1)\n",
    "        # 1. take the inverse exponential map (log map)\n",
    "        v = self.manifold.logmap(loc, x)\n",
    "        # 2. parallel transport to the correct location\n",
    "        v = self.manifold.transp(loc, self.manifold.zero, v)\n",
    "        # 3. calculate log-pdf using change of variables (Eqn 7 of Nagano et al. 2019)\n",
    "        u = v * self.manifold.lambda_x(self.manifold.zero, keepdim=True)\n",
    "        norm_pdf = torch.distributions.Normal(torch.zeros_like(self.scale), self.scale).log_prob(u).sum(-1, keepdim=True)\n",
    "        logdetexp = self.manifold.logdetexp(loc, x, keepdim=True)\n",
    "        result = norm_pdf - logdetexp\n",
    "        return result"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The Riemannian Normal Distribution"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Riemannian normal, denoted $\\mathcal{N}^{\\mathrm{R}}$, generalizes the normal distribution by viewing the normal distribution as the maximum-entropy distribution for a given mean and variance. Mathieu et al. 2019 derive a reparametrizable sampling scheme for it via acceptance-rejection sampling as well as its PDF:\n",
    "\\begin{align}\n",
    "    \\mathcal{N}^{\\mathrm{R}}(z|\\mu, \\sigma^2) = \\frac{1}{Z^\\mathrm{R}} \\exp \\left(-\\frac{d_p^c(\\mu, z)^2}{2\\sigma^2} \\right)\n",
    "\\end{align}\n",
    "where $\\mu$ is the mean, $\\sigma$ is a dispersion parameter analogous to the standard deviation, and $Z^{\\mathrm{R}}$ is a normalizing constant (for a derivation, see Appendix B.4.3 of Mathieu et al. 2019). We will not implement the Riemannian normal here, but the code is available on the official Github repository. "
   ]
  },
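  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition, the unnormalized Riemannian normal log-density is easy to write down with `geoopt` (a sketch, not the official implementation; the function name is ours and the normalizing constant $Z^{\\mathrm{R}}$ is omitted):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from geoopt import PoincareBall\n",
    "\n",
    "def riemannian_normal_log_unnorm(z, mu, sigma, ball):\n",
    "    # log of exp(-d_p^c(mu, z)^2 / (2 sigma^2)), up to the constant -log Z^R\n",
    "    d = ball.dist(mu, z)\n",
    "    return -d.pow(2) / (2 * sigma ** 2)\n",
    "\n",
    "ball = PoincareBall(c=1.0)\n",
    "mu = torch.zeros(2)\n",
    "# Maximal at the mean, decaying with the hyperbolic distance from it\n",
    "print(riemannian_normal_log_unnorm(torch.tensor([0.2, 0.1]), mu, 1.0, ball))\n",
    "```"
   ]
  },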
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## From Euclidean VAE to Hyperbolic VAE"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Recall that a VAE has an encoder-decoder structure, where the encoder produces the parameters of a chosen distribution (the posterior distribution) given an input $x$. We then sample from this latent distribution to get a latent code $z$, which is then fed into the decoder, which reconstructs $x$. The VAE is then trained using a KL-divergence loss between the posterior distribution and a chosen prior distribution. The typical choice of prior and posterior distributions in Euclidean space is the normal distribution for both distributions.\n",
    "\n",
    "To create a hyperbolic VAE, then, we only need to choose the prior and posterior distributions to be one of the hyperbolic normal distributions we just defined! The only caveat is that if we replace only the distributions, the encoder and decoder networks remain fully Euclidean, while some of the distribution parameters, as well as the sampled latent codes, live in hyperbolic space. Nagano et al. 2019 resolve this in a simple way: apply an exponential map at the end of the encoder and a logarithm map at the start of the decoder. The decoder of Mathieu et al. 2019 is a bit more complicated: an additional gyroplane layer is used as the first layer of the decoder, which the paper argues better handles the geometry of the hyperbolic latent space (implemented below as `GyroDec`). "
   ]
  },
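  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With these pieces, the hyperbolic VAE is trained by maximizing a Monte Carlo estimate of the ELBO. Since the wrapped and Riemannian normals admit no closed-form KL divergence on the Poincare ball, the KL term is also estimated from samples. Below is a minimal sketch of such an estimator (the helper name `mc_kl` is ours; it is shown with Euclidean `torch.distributions` objects, but any distribution exposing `log_prob`, including the `WrappedNormal` above, works the same way):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "def mc_kl(qz_x, pz, zs):\n",
    "    # Monte Carlo estimate of KL(q(z|x) || p(z)) from samples zs ~ q(z|x):\n",
    "    # average log q(z|x) - log p(z) over the sample dimension.\n",
    "    return (qz_x.log_prob(zs) - pz.log_prob(zs)).mean(0)\n",
    "\n",
    "# Sanity check: when posterior == prior, every sample contributes exactly 0\n",
    "q = torch.distributions.Normal(torch.zeros(2), torch.ones(2))\n",
    "p = torch.distributions.Normal(torch.zeros(2), torch.ones(2))\n",
    "zs = q.sample(torch.Size([1000]))\n",
    "print(mc_kl(q, p, zs))  # tensor([0., 0.])\n",
    "```"
   ]
  },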
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we define our hyperbolic VAE model. First, we define the encoder, closely adapted from Mathieu et al. 2019's official implementation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from numpy import prod"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def extra_hidden_layer(hidden_dim, non_lin):\n",
    "    return nn.Sequential(nn.Linear(hidden_dim, hidden_dim), non_lin)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Enc(nn.Module):\n",
    "    \"\"\"\n",
    "    The usual Euclidean encoder, with an exponential map on the mean head to \n",
    "    produce a mean in the correct latent space.\n",
    "    \"\"\"\n",
    "    def __init__(self, manifold, data_size, non_lin, num_hidden_layers, hidden_dim):\n",
    "        super(Enc, self).__init__()\n",
    "        self.manifold = manifold\n",
    "        self.data_size = data_size\n",
    "        modules = []\n",
    "        modules.append(nn.Sequential(nn.Linear(prod(data_size), hidden_dim), non_lin))\n",
    "        modules.extend([extra_hidden_layer(hidden_dim, non_lin) for _ in range(num_hidden_layers - 1)])\n",
    "        self.enc = nn.Sequential(*modules)\n",
    "        self.mean_head = nn.Linear(hidden_dim, manifold.coord_dim)\n",
    "        self.sigma_head = nn.Linear(hidden_dim, 1)\n",
    "\n",
    "    def forward(self, x):\n",
    "        e = self.enc(x.view(*x.size()[:-len(self.data_size)], -1))\n",
    "        mu = self.mean_head(e)\n",
    "        mu = self.manifold.expmap0(mu)\n",
    "        return mu, F.softplus(self.sigma_head(e)) + 1e-5,  self.manifold # want to ensure sigma is non-zero"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we define our decoder, which is just the usual VAE decoder. As described above, there is a log map to map from the hyperbolic latent space to the Euclidean space expected by the linear decoder layers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Dec(nn.Module):\n",
    "    \"\"\"\n",
    "    The usual Euclidean decoder, with a logarithm map at the beginning in \n",
    "    order to map the latent code to Euclidean space. \n",
    "    \"\"\"\n",
    "    def __init__(self, manifold, data_size, non_lin, num_hidden_layers, hidden_dim):\n",
    "        super(Dec, self).__init__()\n",
    "        self.data_size = data_size\n",
    "        self.manifold = manifold\n",
    "        modules = []\n",
    "        modules.append(nn.Sequential(nn.Linear(manifold.coord_dim, hidden_dim), non_lin))\n",
    "        modules.extend([extra_hidden_layer(hidden_dim, non_lin) for _ in range(num_hidden_layers - 1)])\n",
    "        self.dec = nn.Sequential(*modules)\n",
    "        self.output = nn.Linear(hidden_dim, np.prod(data_size))\n",
    "\n",
    "    def forward(self, z):\n",
    "        z = self.manifold.logmap0(z)\n",
    "        d = self.dec(z)\n",
    "        mu = self.output(d).view(*z.size()[:-1], *self.data_size)\n",
    "        return torch.tensor(1.0).to(z.device), mu"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One contribution of Mathieu et al. 2019 is to replace the first layer of the decoder with a hyperbolic layer, which better respects the geometry of the hyperbolic latent space. This layer generalizes the Euclidean affine transform, which can be written as\n",
    "\\begin{align}\n",
    "    f_{a, p}(z) = \\langle a, z - p \\rangle = \\mathrm{sign}(\\langle a, z - p \\rangle) ||a|| d_E(z, H_{a, p})\n",
    "\\end{align}\n",
    "where $H_{a, p} = \\{z \\in \\mathbb{R}^d : \\langle a, z - p \\rangle = 0 \\}$ is the decision hyperplane. The corresponding hyperbolic layer, called the gyroplane layer, is a map from hyperbolic space to Euclidean space given by\n",
    "\\begin{align}\n",
    "    f_{a, p}^c(z) = \\mathrm{sign}(\\langle a, \\log_p^c(z)\\rangle_p) ||a||_p d_p^c(z, H_{a, p}^c)\n",
    "\\end{align}\n",
    "This operation was first introduced by Ganea et al. 2018, who also computed a closed-form formula for $d_p^c(z, H_{a, p}^c)$:\n",
    "\\begin{align}\n",
    "    d_p^c(z, H_{a, p}^c) = \\frac{1}{\\sqrt{c}} \\sinh^{-1} \\left( \\frac{2\\sqrt{c}|\\langle -p \\oplus_c z, a \\rangle|}{(1 - c||-p \\oplus_c z ||^2)||a||} \\right)\n",
    "\\end{align}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class RiemannianLayer(nn.Module):\n",
    "    def __init__(self, in_features, out_features, manifold, over_param, weight_norm):\n",
    "        super(RiemannianLayer, self).__init__()\n",
    "        self.in_features = in_features\n",
    "        self.out_features = out_features\n",
    "        self.manifold = manifold\n",
    "\n",
    "        self._weight = nn.Parameter(torch.Tensor(out_features, in_features))\n",
    "        self.over_param = over_param\n",
    "        self.weight_norm = weight_norm\n",
    "        if self.over_param:\n",
    "            self._bias = ManifoldParameter(torch.Tensor(out_features, in_features), manifold=manifold)\n",
    "        else:\n",
    "            self._bias = nn.Parameter(torch.Tensor(out_features, 1))\n",
    "        self.reset_parameters()\n",
    "\n",
    "    @property\n",
    "    def weight(self):\n",
    "        return self.manifold.transp0(self.bias, self._weight) # weight \\in T_0 => weight \\in T_bias\n",
    "\n",
    "    @property\n",
    "    def bias(self):\n",
    "        if self.over_param:\n",
    "            return self._bias\n",
    "        else:\n",
    "            return self.manifold.expmap0(self._weight * self._bias) # reparameterisation of a point on the manifold\n",
    "\n",
    "    def reset_parameters(self):\n",
    "        torch.nn.init.kaiming_normal_(self._weight, a=math.sqrt(5))\n",
    "        fan_in, _ = torch.nn.init._calculate_fan_in_and_fan_out(self._weight)\n",
    "        bound = 4 / math.sqrt(fan_in)\n",
    "        torch.nn.init.uniform_(self._bias, -bound, bound)\n",
    "        if self.over_param:\n",
    "            with torch.no_grad(): self._bias.set_(self.manifold.expmap0(self._bias))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class GyroplaneLayer(RiemannianLayer):\n",
    "    def __init__(self, in_features, out_features, manifold, over_param=False, weight_norm=False):\n",
    "        super(GyroplaneLayer, self).__init__(in_features, out_features, manifold, over_param, weight_norm)\n",
    "\n",
    "    def forward(self, input):\n",
    "        input = input.unsqueeze(-2)\n",
    "        input = input.expand(*input.shape[:-(len(input.shape) - 2)], self.out_features, self.in_features)\n",
    "        # Compute the gyroplane layer using the distance formula of Ganea et al. 2018\n",
    "        res = self.manifold.normdist2plane(input, self.bias, self.weight,\n",
    "                                           signed=True, norm=self.weight_norm)\n",
    "        return res"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class GyroDec(nn.Module):\n",
    "    \"\"\" First layer is a gyroplane layer, followed by the usual decoder. \"\"\"\n",
    "    def __init__(self, manifold, data_size, non_lin, num_hidden_layers, hidden_dim):\n",
    "        super(GyroDec, self).__init__()\n",
    "        self.data_size = data_size\n",
    "        modules = []\n",
    "        # The decoder is the same except the first layer is replaced with a Gyroplane layer\n",
    "        modules.append(nn.Sequential(GyroplaneLayer(manifold.coord_dim, hidden_dim, manifold), non_lin))\n",
    "        modules.extend([extra_hidden_layer(hidden_dim, non_lin) for _ in range(num_hidden_layers - 1)])\n",
    "        self.dec = nn.Sequential(*modules)\n",
    "        self.output = nn.Linear(hidden_dim, prod(data_size))\n",
    "\n",
    "    def forward(self, z):\n",
    "        d = self.dec(z)\n",
    "        mu = self.output(d).view(*z.size()[:-1], *self.data_size)  # reshape data\n",
    "        return torch.tensor(1.0).to(z.device), mu"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We now create our final hyperbolic VAE model using the encoder and decoder that we just defined."
   ]
  },
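  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `VAE` class below calls a small utility `get_mean_param` to pull the location parameter out of a distribution-parameter tuple. Since it is not defined elsewhere in this notebook, we provide a minimal stand-in consistent with how it is used here (the decoders return `(temperature, logits)` pairs whose first element is a 0-dim tensor); this is our reconstruction, not necessarily the official repository's exact helper:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_mean_param(params):\n",
    "    # For (temperature, logits) tuples the temperature is a 0-dim tensor,\n",
    "    # so skip it and return the logits; for (mu, scale) tuples return mu.\n",
    "    if params[0].dim() == 0:\n",
    "        return params[1]\n",
    "    return params[0]"
   ]
  },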
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class VAE(nn.Module):\n",
    "    def __init__(self, prior_dist, posterior_dist, likelihood_dist, enc, dec, params):\n",
    "        super(VAE, self).__init__()\n",
    "        self.pz = prior_dist\n",
    "        self.px_z = likelihood_dist\n",
    "        self.qz_x = posterior_dist\n",
    "        self.enc = enc\n",
    "        self.dec = dec\n",
    "        self.modelName = None\n",
    "        self.params = params\n",
    "        self.data_size = params.data_size\n",
    "        self.prior_std = params.prior_std\n",
    "\n",
    "        if self.px_z == torch.distributions.RelaxedBernoulli:\n",
    "            self.px_z.log_prob = lambda self, value: \\\n",
    "                -F.binary_cross_entropy_with_logits(\n",
    "                    self.logits if value.dim() <= self.logits.dim() else self.logits.expand_as(value),\n",
    "                    value.expand(self.batch_shape) if value.dim() <= self.logits.dim() else value,\n",
    "                    reduction='none'\n",
    "                )\n",
    "            \n",
    "    def generate(self, N, K):\n",
    "        self.eval()\n",
    "        with torch.no_grad():\n",
    "            mean_pz = get_mean_param(self.pz_params)\n",
    "            mean = get_mean_param(self.dec(mean_pz))\n",
    "            px_z_params = self.dec(self.pz(*self.pz_params).sample(torch.Size([N])))\n",
    "            px_z_tmp, px_z_mu = px_z_params\n",
    "            means = get_mean_param(px_z_params)\n",
    "            samples = self.px_z(px_z_tmp, logits=px_z_mu).sample(torch.Size([K]))\n",
    "\n",
    "        return mean, \\\n",
    "            means.view(-1, *means.size()[2:]), \\\n",
    "            samples.view(-1, *samples.size()[3:])\n",
    "\n",
    "    def reconstruct(self, data):\n",
    "        self.eval()\n",
    "        with torch.no_grad():\n",
    "            qz_x = self.qz_x(*self.enc(data))\n",
    "            px_z_params = self.dec(qz_x.rsample(torch.Size([1])).squeeze(0))\n",
    "            return get_mean_param(px_z_params)\n",
    "            \n",
    "    def forward(self, x, K=1):\n",
    "        qz_x = self.qz_x(*self.enc(x))\n",
    "        zs = qz_x.rsample(torch.Size([K]))\n",
    "        temp, mu = self.dec(zs)\n",
    "        px_z = self.px_z(temp, logits=mu)\n",
    "        return qz_x, px_z, zs\n",
    "    \n",
    "    @property\n",
    "    def pz_params(self):\n",
    "        return self._pz_mu.mul(1), F.softplus(self._pz_logvar).div(math.log(2)).mul(self.prior_std)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, we set our model parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Config:\n",
    "    def __init__(self, manifold='PoincareBall', latent_dim=2, c=0.7, prior='WrappedNormal', posterior='WrappedNormal', \n",
    "                 prior_std=1., num_hidden_layers=1, hidden_dim=600, nl='ReLU', enc='Enc', dec='GyroDec', beta=1.0, K=1, \n",
    "                 epochs=80, batch_size=128, lr=5e-4, beta1=0.9, beta2=0.999, data_size=torch.Size([1, 28, 28])):\n",
    "        self.manifold = manifold                           # Manifold: Euclidean or Hyperbolic\n",
    "        self.latent_dim = latent_dim                       # Latent dimension of manifold\n",
    "        self.c = c                                         # Curvature of manifold\n",
    "        self.prior = prior                                 # VAE prior distribution\n",
    "        self.posterior = posterior                         # VAE posterior distribution\n",
    "        self.prior_std = prior_std                         # Standard dev. of prior distribution\n",
    "        \n",
    "        self.num_hidden_layers = num_hidden_layers         # Num hidden layers in encoder and decoder\n",
    "        self.hidden_dim = hidden_dim                       # Hidden dimension\n",
    "        self.nl = nl                                       # Non-linearity for encoder and decoder\n",
    "        self.enc = enc                                     # VAE encoder                         \n",
    "        self.dec = dec                                     # VAE decoder\n",
    "        \n",
    "        self.beta = beta                                   # Beta parameter for beta-VAE\n",
    "        self.K = K                                         # Number of samples for ELBO\n",
    "        \n",
    "        self.epochs = epochs                               # Epochs\n",
    "        self.batch_size = batch_size                       # Batch size\n",
    "        self.lr = lr                                       # Learning rate\n",
    "        self.beta1 = beta1                                 # beta1 for Adam optimizer\n",
    "        self.beta2 = beta2                                 # beta2 for Adam optimizer\n",
    "        \n",
    "        self.data_size = data_size                         # Data size of input data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "params = Config()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, following Mathieu et al. 2019, we create an MNIST-specific VAE:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Mnist(VAE):\n",
    "    def __init__(self, params):\n",
    "        c = nn.Parameter(params.c * torch.ones(1), requires_grad=False)\n",
    "        manifold = eval(params.manifold)(params.latent_dim, c)\n",
    "        super(Mnist, self).__init__(\n",
    "            eval(params.prior),                       # prior distribution\n",
    "            eval(params.posterior),                   # posterior distribution\n",
    "            torch.distributions.RelaxedBernoulli,     # likelihood distribution\n",
    "            eval(params.enc)(\n",
    "                manifold, \n",
    "                params.data_size, \n",
    "                getattr(nn, params.nl)(), \n",
    "                params.num_hidden_layers, \n",
    "                params.hidden_dim\n",
    "            ),\n",
    "            eval(params.dec)(\n",
    "                manifold, \n",
    "                params.data_size, \n",
    "                getattr(nn, params.nl)(), \n",
    "                params.num_hidden_layers, \n",
    "                params.hidden_dim\n",
    "            ),\n",
    "            params\n",
    "        )\n",
    "        self.manifold = manifold\n",
    "        self.c = c\n",
    "        self._pz_mu = nn.Parameter(torch.zeros(1, params.latent_dim), requires_grad=False)\n",
    "        self._pz_logvar = nn.Parameter(torch.zeros(1, 1), requires_grad=False)\n",
    "        self.modelName = 'Mnist'\n",
    "\n",
    "    def init_last_layer_bias(self, train_loader):\n",
    "        # Initialize the decoder's output bias to the logit of the mean pixel\n",
    "        # value over the training set, a common trick for Bernoulli-style decoders\n",
    "        if not hasattr(self.dec.output, 'bias'): return\n",
    "        with torch.no_grad():\n",
    "            p, N = None, 0\n",
    "            for data, _ in train_loader:\n",
    "                data = data.to(self._pz_mu.device)\n",
    "                N += data.size(0)\n",
    "                batch_sum = data.view(data.size(0), -1).sum(0)\n",
    "                p = batch_sum if p is None else p + batch_sum\n",
    "            p = p / N + 1e-4  # small offset avoids log(0)\n",
    "            self.dec.output.bias.set_(p.log() - (1 - p).log())\n",
    "\n",
    "    @property\n",
    "    def pz_params(self):\n",
    "        return self._pz_mu.mul(1), F.softplus(self._pz_logvar).div(math.log(2)).mul(self.prior_std), self.manifold"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = Mnist(params)\n",
    "model.to(device)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Training objective"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Both Nagano et al. 2019 and Mathieu et al. 2019 use a $\\beta$-VAE (Higgins et al. 2017), whose objective applies a scalar weight $\\beta$ to the KL-divergence term."
   ]
  },
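  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Concretely, writing the encoder as $q_\\phi(z \\mid x)$, the decoder as $p_\\theta(x \\mid z)$, and the prior as $p(z)$, the objective we minimize is\n",
    "\n",
    "$$\\mathcal{L}_\\beta(x) = -\\mathbb{E}_{q_\\phi(z \\mid x)}\\big[\\log p_\\theta(x \\mid z)\\big] + \\beta \\, \\mathrm{KL}\\big(q_\\phi(z \\mid x) \\,\\|\\, p(z)\\big),$$\n",
    "\n",
    "which recovers the standard VAE objective when $\\beta = 1$. Since the KL term has no closed form for these hyperbolic distributions, the code below estimates it by Monte Carlo, averaging $\\log q_\\phi(z \\mid x) - \\log p(z)$ over $K$ samples $z \\sim q_\\phi(z \\mid x)$."
   ]
  },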
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def vae_objective(model, x, K=1, beta=1.0, components=False, **kwargs):\n",
    "    \"\"\"\n",
    "    The beta-VAE objective. \n",
    "    \"\"\"\n",
    "    qz_x, px_z, zs = model(x, K)\n",
    "    _, B, D = zs.size()\n",
    "    flat_rest = torch.Size([*px_z.batch_shape[:2], -1])\n",
    "    lpx_z = px_z.log_prob(x.expand(px_z.batch_shape)).view(flat_rest).sum(-1)\n",
    "\n",
    "    pz = model.pz(*model.pz_params)\n",
    "    kld = qz_x.log_prob(zs).sum(-1) - pz.log_prob(zs).sum(-1)\n",
    "\n",
    "    obj = -lpx_z.mean(0).sum() + beta * kld.mean(0).sum()\n",
    "    return (qz_x, px_z, lpx_z, kld, obj) if components else obj"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "loss_function = vae_objective"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Optimizer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We use the Adam optimizer (here with the AMSGrad variant enabled), as do both Nagano et al. 2019 and Mathieu et al. 2019."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "optimizer = optim.Adam(model.parameters(), lr=params.lr, amsgrad=True, betas=(params.beta1, params.beta2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prepare dataloaders"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_loader = torch.utils.data.DataLoader(\n",
    "    train_dataset, \n",
    "    batch_size=params.batch_size, \n",
    "    shuffle=True, \n",
    "    num_workers=1, \n",
    "    pin_memory=True\n",
    ")\n",
    "test_loader = torch.utils.data.DataLoader(\n",
    "    test_dataset, \n",
    "    batch_size=params.batch_size, \n",
    "    shuffle=True, \n",
    "    num_workers=1, \n",
    "    pin_memory=True\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, we train the hyperbolic VAE. Note that we do not need Riemannian SGD (which some hyperbolic models require), since every quantity of our model that lives in hyperbolic space is stored in Euclidean tangent space and mapped onto the manifold via the exponential map."
   ]
  },
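  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see why, recall how the wrapped normal is sampled in Nagano et al. 2019: draw an ordinary Euclidean sample $v \\sim \\mathcal{N}(0, \\Sigma)$ in the tangent space at the origin $o$, parallel transport it to the tangent space at $\\mu$, and push it onto the manifold with the exponential map:\n",
    "\n",
    "$$z = \\exp_\\mu\\big(\\mathrm{PT}_{o \\to \\mu}(v)\\big), \\qquad v \\sim \\mathcal{N}(0, \\Sigma).$$\n",
    "\n",
    "Every hyperbolic quantity is thus a deterministic, differentiable function of Euclidean parameters and Euclidean noise, so a standard optimizer such as Adam suffices."
   ]
  },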
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def probe_infnan(v, name, extras=None):\n",
    "    \"\"\"Report non-finite (NaN or Inf) entries of a tensor and halt training if any exist.\"\"\"\n",
    "    bad = ~torch.isfinite(v)  # catches both NaN and +/-Inf, as the name promises\n",
    "    s = bad.sum().item()\n",
    "    if s > 0:\n",
    "        print('>>> {} >>>'.format(name))\n",
    "        print(name, s)\n",
    "        print(v[bad])\n",
    "        for k, val in (extras or {}).items():\n",
    "            print(k, val, val.sum().item())\n",
    "        raise RuntimeError('non-finite values encountered in {}'.format(name))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import defaultdict\n",
    "\n",
    "# The training loop\n",
    "model.init_last_layer_bias(train_loader)\n",
    "agg = defaultdict(list)\n",
    "for epoch in range(1, params.epochs + 1):\n",
    "    model.train()\n",
    "    b_loss, b_recon, b_kl = 0., 0., 0.\n",
    "    for i, (data, _) in enumerate(train_loader):\n",
    "        data = data.to(device)\n",
    "        optimizer.zero_grad()\n",
    "        qz_x, px_z, lik, kl, loss = loss_function(model, data, K=params.K, beta=params.beta, components=True)\n",
    "        # The Poincare ball model can have numerical instability close to the boundary of the ball\n",
    "        probe_infnan(loss, \"Training loss:\") \n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        b_loss += loss.item()\n",
    "        b_recon += -lik.mean(0).sum().item()\n",
    "        b_kl += kl.sum(-1).mean(0).sum().item()\n",
    "\n",
    "    agg['train_loss'].append(b_loss / len(train_loader.dataset))\n",
    "    agg['train_recon'].append(b_recon / len(train_loader.dataset))\n",
    "    agg['train_kl'].append(b_kl / len(train_loader.dataset))\n",
    "    print('====> Epoch: {:03d} Loss: {:.2f} Recon: {:.2f} KL: {:.2f}'.format(\n",
    "        epoch, agg['train_loss'][-1], agg['train_recon'][-1], agg['train_kl'][-1])\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Testing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, we evaluate the trained model on the test set by computing the same objective (the $\\beta$-weighted negative ELBO) over the test data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.eval()\n",
    "b_loss = 0.\n",
    "with torch.no_grad():\n",
    "    for i, (data, labels) in enumerate(test_loader):\n",
    "        data = data.to(device)\n",
    "        qz_x, px_z, lik, kl, loss = loss_function(model, data, K=params.K, beta=params.beta, components=True)\n",
    "        b_loss += loss.item()\n",
    "\n",
    "agg['test_loss'].append(b_loss / len(test_loader.dataset))\n",
    "print('====>             Test loss: {:.4f}'.format(agg['test_loss'][-1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, we generate sample MNIST digits."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_mean_param(params):\n",
    "    \"\"\"Return the parameter used to show reconstructions or generations.\n",
    "    For example, the mean for Normal, or probs for Bernoulli.\n",
    "    For Bernoulli, skip first parameter, as that's (scalar) temperature\n",
    "    \"\"\"\n",
    "    if params[0].dim() == 0:\n",
    "        return params[1]\n",
    "    else:\n",
    "        return params[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mean, means, samples = model.generate(64, 9)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "vis = []\n",
    "for i in range(8):\n",
    "    vis.append(means[i].squeeze().cpu())\n",
    "vis = torch.cat(vis, dim=1)\n",
    "plt.axis('off')\n",
    "plt.imshow(vis.numpy())\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
