{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "ULndVKL12doD"
      },
      "source": [
        "##### Copyright 2019 Google LLC.\n",
        "\n",
        "Licensed under the Apache License, Version 2.0 (the \"License\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "9Mye5mzaTuxo"
      },
      "outputs": [],
      "source": [
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "ipDfza0Wj8wF"
      },
      "source": [
        "# Basic setup\n",
        "\n",
        "**About this Colab**\n",
        "\n",
        "This Colab accompanies the NeurIPS 2019 paper:\n",
        "\u003cbr/\u003e\n",
        "\n",
        "*Practical and Consistent Estimation of f-Divergences* \\\n",
        "*Paul K. Rubenstein, Olivier Bousquet, Josip Djolonga, Carlos Riquelme, Ilya Tolstikhin*\n",
        "\n",
        "The paper can be found at https://arxiv.org/abs/1905.11112\n",
        "\n",
        "\n",
        "This Colab reproduces Figures 1, 2 and 3 from the paper. By default, the Colab just loads precomputed data and makes the plots from these. Recomputing everything from scratch takes ~5 hours."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "2JgzsJJKloMc"
      },
      "outputs": [],
      "source": [
        "#@title Imports { display-mode: \"form\" }\n",
        "import tensorflow as tf\n",
        "tf.enable_eager_execution()\n",
        "import tensorflow_probability as tfp\n",
        "from tensorflow_probability import distributions as tfd\n",
        "from matplotlib import pyplot as plt\n",
        "import numpy as np\n",
        "import cvxpy as cp\n",
        "import time\n",
        "import os\n",
        "\n",
        "import matplotlib.cm as cm\n",
        "from scipy import stats\n",
        "from scipy.special import logsumexp\n",
        "\n",
        "import h5py\n",
        "\n",
        "import seaborn as sns\n",
        "\n",
        "from matplotlib import rc\n",
        "rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})\n",
        "rc('text', usetex=False)\n",
        "\n",
        "ROOT_PATH = \"gs://rammc_data/\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "nzA5B99ClrOn"
      },
      "source": [
        "# Figure 1"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "RxYXMEqxmqqC"
      },
      "source": [
        "## Experimental design:\n",
        "\n",
        "*   We estimate $D_f(Q_z, P_z)$ for different $f$, $Q_z$ and $P_z$;\n",
        "*   We consider $d$-variate normal $P_z$ with mean $b_0\\in R^d$ and identity covariance;\n",
        "*   Base distribution $P_x$ is standard normal in $R^k$;\n",
        "*   We always take $Q(Z|X)$ to be $d$-variate normal with mean value $AX + b\\in R^d$ and covariance $\\epsilon^2 I_d$;\n",
        "*   In this case $Q_z := \\int Q(Z|X) dP_x(X)$ is a $d$-variate Gaussian with mean value $b$ and covariance $A A' + \\epsilon^2 I_d$;\n",
        "*  The distribution $Q_z$ is parametrized with one scalar $\\lambda\\in [-\\Lambda, \\Lambda]$ as follows:\n",
        "*   \u003e $A = \\lambda  A_0$,\n",
        "*   \u003e $b = \\lambda  b_0$.\n",
        "*   We consider dimensions: $d=32,64,128$\n",
        "*  We consider the following divergences: KL, $\\chi^2$ and squared Hellinger divergences.\n",
        "*  We run the propsed RAM-MC estimator with ${n=1}$ and ${n=500}$ points sampled from $P_x$ and $128$ Monte-Carlo samples.\n",
        "\n",
        "Squared Hellinger divergence is:\n",
        "$$\n",
        "H^2(P, Q) := \\int (\\sqrt{p(z)} - \\sqrt{q(z)})^2 dz \\in [0, 2],\n",
        "$$\n",
        "$\\chi^2$ divergence is:\n",
        "$$\n",
        "\\chi^2(P, Q) = \\int \\left(\\frac{p(z)}{q(z)}\\right)^2 q(z) dz - 1.\n",
        "$$\n",
        "KL divergence is:\n",
        "$$\n",
        "KL(Q, P) = \\int \\log\\left(\\frac{q(z)}{p(z)}\\right) q(z) dz\n",
        "$$"
      ]
    },
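    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "klSanityCheck1d"
      },
      "outputs": [],
      "source": [
        "#@title (Optional) 1-d sanity check of the KL formula { display-mode: \"form\" }\n",
        "#@markdown Illustrative check, not part of the original experiments: for scalar\n",
        "#@markdown Gaussians, $KL(N(\mu, s^2), N(0, 1)) = \log(1/s) + (s^2 + \mu^2 - 1)/2$.\n",
        "#@markdown This cell verifies that closed form against tfd.kl_divergence.\n",
        "mu, s = 0.7, 1.3\n",
        "closed_form = np.log(1. / s) + (s**2 + mu**2 - 1.) / 2.\n",
        "tfd_value = tfd.kl_divergence(tfd.Normal(loc=mu, scale=s),\n",
        "                              tfd.Normal(loc=0., scale=1.)).numpy()\n",
        "assert abs(tfd_value - closed_form) \u003c 1e-5"
      ]
    },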
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "j3fPIXXpvonM"
      },
      "outputs": [],
      "source": [
        "#@title Closed form divergence computations { display-mode: \"form\" }\n",
        "\n",
        "#@markdown Please see Appendix C.1 of the paper for analytical expressions for\n",
        "#@markdown the divergences considered here.\n",
        "\n",
        "\n",
        "def get_dims(A):\n",
        "  \"\"\"Infers input and latent dimensions from a matrix.\"\"\"\n",
        "  d_latent, d_input = A.get_shape().as_list()\n",
        "  return d_latent, d_input\n",
        "\n",
        "def get_q_cov(A, b, std):\n",
        "  d_latent, _ = get_dims(A)\n",
        "  return tf.matmul(A, A, transpose_b=True) + std**2 * tf.eye(d_latent)\n",
        "\n",
        "def compute_kl(A, b, std):\n",
        "  \"\"\"Computes the squared Hellinger distance between unit Gaussian Pz\n",
        "  and Gaussian Qz with mean b and covariance AA^t + (std**2)I .\"\"\"\n",
        "  d_latent, d_input = get_dims(A)\n",
        "  p = tfd.MultivariateNormalDiag(loc=tf.zeros(shape=(d_latent,)),\n",
        "                                 scale_diag=tf.ones(d_latent))\n",
        "  q_cov = get_q_cov(A, b, std)\n",
        "  q = tfd.MultivariateNormalFullCovariance(loc=b, covariance_matrix=q_cov)\n",
        "  return q.kl_divergence(p).numpy()\n",
        "\n",
        "def compute_hsq(A, b, std):\n",
        "  \"\"\"Computes the squared Hellinger distance between unit Gaussian Pz\n",
        "  and Gaussian Qz with mean b and covariance AA^t + (std**2)I .\"\"\"\n",
        "  d_latent, d_input = get_dims(A)\n",
        "  Sigma1 = tf.eye(d_latent)\n",
        "  Sigma2 = tf.matmul(A, A, transpose_b=True) + std**2 * tf.eye(d_latent)\n",
        "  res = tf.linalg.logdet(Sigma1) / 4. + tf.linalg.logdet(Sigma2) / 4.\n",
        "  res -= tf.linalg.logdet(0.5 * Sigma1 + 0.5 * Sigma2) / 2.\n",
        "  res = tf.exp(res)\n",
        "  quad_form = tf.matmul(tf.linalg.inv(0.5 * Sigma1 + 0.5 * Sigma2),\n",
        "                        tf.reshape(b, (d_latent, -1)))\n",
        "  quad_form = tf.matmul(tf.reshape(b, (-1, d_latent)), quad_form)\n",
        "  res *= tf.exp(- 1. / 8 * quad_form)\n",
        "  return (2. - 2. * res[0, 0]).numpy()\n",
        "\n",
        "def compute_chi2(A, b, std):\n",
        "  \"\"\"Computes the chi square divergence between unit Gaussian Pz\n",
        "  and Gaussian Qz with mean b and covariance AA^t + (std**2)I .\"\"\"\n",
        "  def quadform(v, M):\n",
        "    d, _ = get_dims(M)\n",
        "    quad_form = tf.matmul(M, tf.reshape(v, (d, -1)))\n",
        "    quad_form = tf.matmul(tf.reshape(v, (-1, d)), quad_form)\n",
        "    return quad_form[0, 0]\n",
        "\n",
        "  A = tf.cast(A, tf.float64)\n",
        "  b = tf.cast(b, tf.float64)\n",
        "  std = tf.cast(std, tf.float64)\n",
        "\n",
        "  d_latent, d_input = get_dims(A)\n",
        "  Sigma1 = (tf.matmul(A, A, transpose_b=True)\n",
        "            + std**2 * tf.eye(d_latent, dtype=tf.float64))\n",
        "  Sigma1inv = tf.linalg.inv(Sigma1)\n",
        "  Sigma2 = tf.eye(d_latent, dtype=tf.float64)\n",
        "  Sigma2inv = Sigma2\n",
        "  mu1 = b\n",
        "  mu2 = tf.zeros(shape=(d_latent,), dtype=tf.float64)\n",
        "\n",
        "  S = 2. * Sigma1inv - Sigma2inv\n",
        "\n",
        "  # Check that chi2 is well defined.\n",
        "  if tf.linalg.det(Sigma1) \u003c= 0:\n",
        "    raise ValueError(\"Sigma1 cannot have non-positive determinant.\")\n",
        "\n",
        "  eig, _ = tf.linalg.eigh(S)\n",
        "  if tf.reduce_min(eig) \u003c= 0:\n",
        "    return float('Inf')\n",
        "\n",
        "  scale = 1.0 / tf.sqrt(tf.linalg.det(2. * Sigma1 - tf.matmul(Sigma1, Sigma1)))\n",
        "  if scale.numpy() \u003c 1:\n",
        "    print('Scale ', scale.numpy())\n",
        "  res = 0.5 * quadform(mu2, Sigma2inv) - quadform(mu1, Sigma1inv)\n",
        "  v = 2. * tf.matmul(Sigma1inv, tf.reshape(mu1, (d_latent, -1)))\n",
        "  v -= tf.matmul(Sigma2inv, tf.reshape(mu2, (d_latent, -1)))\n",
        "  M = tf.linalg.inv(- S / 2.)\n",
        "  res -= 1. / 4 * quadform(v, M)\n",
        "  if res.numpy() \u003c 0:\n",
        "    print(res.numpy())\n",
        "  res = scale * tf.exp(res) - 1.\n",
        "  return res.numpy()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "Pibz23wmG_eF"
      },
      "source": [
        "## Estimators"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "jaUJ_5wt3YjF"
      },
      "outputs": [],
      "source": [
        "#@title RAM-MC { display-mode: \"form\" }\n",
        "#@markdown Our proposed estimator. See Equation (3) of the paper.\n",
        "\n",
        "def compute_ram_mc(n, m, A, b, std, f, n_batches):\n",
        "  \"\"\"Estimates Df(Qz, Pz) with RAM-MC estimator. Pz is unit Gaussian and\n",
        "  Qz is Gaussian with mean b and covariance AA^t + (std**2)I.\n",
        "\n",
        "  Args:\n",
        "    n: Number of mixture components to approximate Qz.\n",
        "    m: Number of MC samples to use.\n",
        "    A: Parameter determining covariance matrix of Qz\n",
        "    b: Mean of Qz\n",
        "    std: Parameter determining covariance matrix of Qz\n",
        "    f: Which f-divergence to compute. \"KL\", \"Chi2\" or \"Hsq\".\n",
        "    n_batches: Number of repetitions to perform.\n",
        "  Returns:\n",
        "    estimates: A numpy array of estimates, one per n_batch.\n",
        "  \"\"\"\n",
        "  d_latent, d_input = get_dims(A)\n",
        "  p = tfd.MultivariateNormalDiag(loc=tf.zeros(shape=(d_latent,)),\n",
        "                                 scale_diag=tf.ones(d_latent))\n",
        "\n",
        "  # Base P(X) distribution, which is standard normal in d_input.\n",
        "  data = tfd.MultivariateNormalDiag(loc=tf.zeros(d_input),\n",
        "                                    scale_diag=tf.ones(d_input))\n",
        "  data_samples = data.sample(n * n_batches)  # Minibatch from P(x).\n",
        "  data_samples = tf.reshape(data_samples, [n_batches, n, d_input])\n",
        "  A = tf.reshape(A, [1, d_latent, d_input])\n",
        "  A = tf.tile(A, [n_batches, 1, 1])\n",
        "  data_posterior = tfd.MultivariateNormalDiag(\n",
        "      loc=tf.matmul(data_samples, A, transpose_b=True) + b,\n",
        "      scale_diag=std * tf.ones(d_latent))\n",
        "  # Compose a mixture distribution. Experiment-specific parameters are indexed\n",
        "  # with the first dimension in data_posterior.\n",
        "  mixture = tfd.MixtureSameFamily(\n",
        "      mixture_distribution=tfd.Categorical(probs=[1. / n] * n),\n",
        "      components_distribution=data_posterior)\n",
        "  if f == 'KL':\n",
        "    # Estimate is 1/m \\sum_i log ( dQn(zi) / dP(zi) ) with zi ~ Qn.\n",
        "    mc_samples = mixture.sample(m)\n",
        "    log_density_ratios = (mixture.log_prob(mc_samples) -\n",
        "                          p.log_prob(mc_samples))\n",
        "    estimates = (tf.reduce_mean(log_density_ratios, axis=0)).numpy()\n",
        "  elif f == 'Chi2':\n",
        "    # Estimate is 1/m \\sum_i dQn(zi) / dP(zi) - 1 with zi ~ Qn.\n",
        "    mc_samples = mixture.sample(m)\n",
        "    logratio = mixture.log_prob(mc_samples) - p.log_prob(mc_samples)\n",
        "    estimates = (tf.exp(tf.reduce_logsumexp(logratio, axis=0)) / m - 1.).numpy()\n",
        "  elif f == 'Hsq':\n",
        "    ## Estimate is 2 - 2 / m \\sum_i exp(0.5 log (dP(zi) / dQn(zi))), zi ~ Qn.\n",
        "    mc_samples = mixture.sample(m)\n",
        "    logratio = -mixture.log_prob(mc_samples) + p.log_prob(mc_samples)\n",
        "    estimates = 2.\n",
        "    estimates -= 2. * tf.exp(tf.reduce_logsumexp(0.5 * logratio, axis=0)) / m\n",
        "    estimates = estimates.numpy()\n",
        "  else:\n",
        "    raise ValueError(\"f must be one of 'KL', 'Chi2', 'Hsq'.\")\n",
        "  return estimates"
      ]
    },
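    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "ramMcQuickDemo"
      },
      "outputs": [],
      "source": [
        "#@title (Optional) Quick RAM-MC demo { display-mode: \"form\" }\n",
        "#@markdown Illustrative call, not used by the experiments below: estimate\n",
        "#@markdown KL(Qz, Pz) for a small random (A, b) and compare the batch of\n",
        "#@markdown RAM-MC estimates against the closed form from compute_kl.\n",
        "A_demo = 0.1 * tf.random.normal(shape=(4, 8))\n",
        "b_demo = 0.1 * tf.random.normal(shape=(4,))\n",
        "demo_estimates = compute_ram_mc(n=100, m=128, A=A_demo, b=b_demo,\n",
        "                                std=0.5, f='KL', n_batches=5)\n",
        "print('RAM-MC estimates:', demo_estimates)\n",
        "print('Closed form KL  :', compute_kl(A_demo, b_demo, 0.5))"
      ]
    },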
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "zX46XOCfHaFS"
      },
      "outputs": [],
      "source": [
        "#@title Plug-in estimator { display-mode: \"form\" }\n",
        "\n",
        "#@markdown Perform kernel density estimation, then estimate the divergence by\n",
        "#@markdown Monte-Carlo, plugging the estimated densities into the divergence formulae.\n",
        "def estimate_plugin(n, m, A, b, std, f, n_batches, eps=1e-8):\n",
        "  \"\"\"Estimates Df(Qz, Pz) with the plugin estimator. Pz is unit Gaussian and Qz\n",
        "  is Gaussian with mean b and covariance AA^t + (std**2)I. First perform kernel\n",
        "  density estimation of two densities, then plug in.\n",
        "  \"\"\"\n",
        "\n",
        "  def numpy_sample(p, n, d):\n",
        "    points = p.sample(n)\n",
        "    points = tf.reshape(points, [d, -1]).numpy()\n",
        "    return points\n",
        "\n",
        "  d_latent, d_input = get_dims(A)\n",
        "  p = tfd.MultivariateNormalDiag(loc=tf.zeros(shape=(d_latent,)),\n",
        "                                     scale_diag=tf.ones(d_latent))\n",
        "  q_cov = get_q_cov(A, b, std)\n",
        "  q = tfd.MultivariateNormalFullCovariance(\n",
        "      loc=b, covariance_matrix=q_cov)\n",
        "\n",
        "  # Repeat computations n_batches times.\n",
        "  res = []\n",
        "  for exp in range(n_batches):\n",
        "\n",
        "    # I.i.d. points from p and q to estimate their densities.\n",
        "    p_kde_points = numpy_sample(p, n, d_latent)\n",
        "    q_kde_points = numpy_sample(q, n, d_latent)\n",
        "\n",
        "    try:\n",
        "      p_hat = stats.gaussian_kde(p_kde_points)\n",
        "      q_hat = stats.gaussian_kde(q_kde_points)\n",
        "    except:\n",
        "      res.append(np.nan)\n",
        "      continue\n",
        "\n",
        "    mc_points = numpy_sample(q, m, d_latent)\n",
        "    try:\n",
        "      q_vals = q_hat.evaluate(mc_points)\n",
        "      p_vals = p_hat.evaluate(mc_points) + eps\n",
        "      log_q_vals = q_hat.logpdf(mc_points)\n",
        "      log_p_vals = p_hat.logpdf(mc_points) + eps\n",
        "    except:\n",
        "      res.append(np.nan)\n",
        "      continue\n",
        "\n",
        "    if f == 'KL':\n",
        "      res.append(np.mean(log_q_vals - log_p_vals))\n",
        "    elif f == 'Hsq':\n",
        "      logratio = log_p_vals - log_q_vals\n",
        "      estimate_val = 2.\n",
        "      estimate_val -= 2. * np.exp(logsumexp(0.5 * logratio)) / m\n",
        "      res.append(estimate_val)\n",
        "    elif f == 'Chi2':\n",
        "      logratio = log_q_vals - log_p_vals\n",
        "      estimate_val = np.exp(logsumexp(logratio)) / m - 1.\n",
        "      res.append(estimate_val)\n",
        "    else:\n",
        "      raise ValueError(\"f must be one of 'KL', 'Chi2', 'Hsq'.\")\n",
        "  return np.array(res)\n"
      ]
    },
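    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "pluginQuickDemo"
      },
      "outputs": [],
      "source": [
        "#@title (Optional) Quick plug-in demo { display-mode: \"form\" }\n",
        "#@markdown Illustrative call, not used by the experiments below: the plug-in\n",
        "#@markdown estimates of KL(Qz, Pz) for a small random (A, b) can be compared\n",
        "#@markdown against the closed form from compute_kl.\n",
        "A_demo = 0.1 * tf.random.normal(shape=(2, 8))\n",
        "b_demo = 0.1 * tf.random.normal(shape=(2,))\n",
        "demo_plugin = estimate_plugin(n=500, m=128, A=A_demo, b=b_demo,\n",
        "                              std=0.5, f='KL', n_batches=3)\n",
        "print('Plug-in estimates:', demo_plugin)\n",
        "print('Closed form KL   :', compute_kl(A_demo, b_demo, 0.5))"
      ]
    },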
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "IbbByQQwWgYO"
      },
      "outputs": [],
      "source": [
        "#@title Estimator of Nguyen et al. { display-mode: \"form\" }\n",
        "#@markdown The M1 estimator proposed by Nguyen et al., *Estimating divergence\n",
        "#@markdown functionals and the likelihood ratio by convex risk minimization.*\n",
        "#@markdown For full reference see [28] in the paper.\n",
        "\n",
        "def nguyen_estimate_rkhs(n, A, b, std, lmbd, rkhs_sigma2=None, n_exps=1):\n",
        "  \"\"\" Compute estimator of Nguyen et al. of KL(Qz, Pz) for the RKHS family.\n",
        "\n",
        "  Args:\n",
        "    lmbd: positive regularizer\n",
        "    rkhs_sigma2: width of the Gaussian kernel\n",
        "  \"\"\"\n",
        "  def kernel_matrices(X, Y, sigma2=None, eps=1e-4):\n",
        "    # X.\n",
        "    norms_x_sq = tf.reduce_sum(tf.square(X), axis=1, keepdims=True)\n",
        "    dotprods_x = tf.matmul(X, X, transpose_b=True)\n",
        "    dists_x_sq = norms_x_sq + tf.transpose(norms_x_sq) - 2. * dotprods_x\n",
        "    # Y.\n",
        "    norms_y_sq = tf.reduce_sum(tf.square(Y), axis=1, keepdims=True)\n",
        "    dotprods_y = tf.matmul(Y, Y, transpose_b=True)\n",
        "    dists_y_sq = norms_y_sq + tf.transpose(norms_y_sq) - 2. * dotprods_y\n",
        "    # XY.\n",
        "    dotprods_xy = tf.matmul(X, Y, transpose_b=True)\n",
        "    dists_xy_sq = norms_x_sq + tf.transpose(norms_y_sq) - 2. * dotprods_xy\n",
        "\n",
        "    if sigma2 is None:\n",
        "      sigma2 = np.median(dists_xy_sq)\n",
        "\n",
        "    Kx = tf.exp(- dists_x_sq / 2. / sigma2) + eps * tf.eye(n)\n",
        "    Ky = tf.exp(- dists_y_sq / 2. / sigma2) + eps * tf.eye(n)\n",
        "    Kxy = tf.exp(- dists_xy_sq / 2. / sigma2)\n",
        "    return (Kx, Ky, Kxy)\n",
        "\n",
        "  def is_pos_def(x):\n",
        "    eig = np.linalg.eigvals(x)\n",
        "    res = np.all(eig \u003e 0)\n",
        "    if not res:\n",
        "      print(np.sort(eig))\n",
        "    return res\n",
        "\n",
        "  d_latent, _ = get_dims(A)\n",
        "  prior = tfd.MultivariateNormalDiag(loc=tf.zeros(d_latent),\n",
        "                                     scale_diag=tf.ones(d_latent))\n",
        "  q_cov = get_q_cov(A, b, std)\n",
        "  q = tfd.MultivariateNormalFullCovariance(\n",
        "      loc=b, covariance_matrix=q_cov)\n",
        "\n",
        "  Y = q.sample(n * n_exps)  # Minibatch from P(x).\n",
        "  Y = tf.reshape(Y, [n_exps, n, d_latent])\n",
        "  X = prior.sample(n * n_exps)\n",
        "  X = tf.reshape(X, [n_exps, n, d_latent])\n",
        "\n",
        "  # Perform n_exps experiments.\n",
        "  estimates = []\n",
        "  for i in range(n_exps):\n",
        "    Kx, Ky, Kxy = kernel_matrices(X[i], Y[i], sigma2=rkhs_sigma2)\n",
        "    Kx = Kx.numpy()\n",
        "    Ky = Ky.numpy()\n",
        "    Kxy = Kxy.numpy()\n",
        "\n",
        "    # Get objective of the dual convex program and solve.\n",
        "    alpha = cp.Variable(n)\n",
        "    obj = cp.Minimize( -1. - 1. / n * cp.sum(cp.log(n * alpha)) +\n",
        "                        1. / lmbd / 2. * cp.quad_form(alpha, Ky) +\n",
        "                        1. / 2. / lmbd / n / n * np.sum(Kx) -\n",
        "                        1. / lmbd / n * cp.sum(cp.matmul(Kxy, alpha)))\n",
        "    prob = cp.Problem(obj, [alpha \u003e= 0])  # Constraints.\n",
        "    prob.solve()\n",
        "    estimates.append(- 1. / n *  np.sum(np.log(n * alpha.value)))\n",
        "  return np.array(estimates)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "Nfyi4gri3nTz"
      },
      "source": [
        "## Run experiments and make plots"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "koFScHK8oY_j"
      },
      "outputs": [],
      "source": [
        "#@title Set experiment parameters { display-mode: \"form\" }\n",
        "#@markdown And generate base matrix and vector A_0 and b_0.\n",
        "\n",
        "N_RANGE = [1, 500]  # Sample sizes.\n",
        "MC_NUM = 128  # Number of Monte-Carlo samples for RAM-MC.\n",
        "N_EXP = 10  # Number of times to repeat each experiment.\n",
        "K = 20  # Base space dimensionality.\n",
        "STD = 0.5  # Gaussian covariance noise.\n",
        "BETA = 0.5  # Scale for base covariance.\n",
        "D_RANGE = [1, 4, 16]  # Latent space dimensionality.\n",
        "LBD_MAX = 2.  # lambda range.\n",
        "\n",
        "tf.random.set_random_seed(345)\n",
        "\n",
        "# Generating A and b parameters for various dimensions.\n",
        "BASE_PARAMS = {}\n",
        "for d in D_RANGE:\n",
        "  b0 = tf.random.normal(shape=(d,))\n",
        "  b0 /= np.linalg.norm(b0)\n",
        "  A0 = tf.random.normal(shape=(d, K))\n",
        "  A0 /= tf.linalg.norm(A0)\n",
        "  BASE_PARAMS[d] = {'b0': b0, 'A0': A0}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "-JyaqcS-pJQ8"
      },
      "outputs": [],
      "source": [
        "#@title Run experiments or load precomputed results { display-mode: \"form\" }\n",
        "\n",
        "#@markdown Leaving the following boxes unchecked will load precomputed\n",
        "#@markdown results. \u003cbr/\u003e\n",
        "\n",
        "#@markdown Running the RAM-MC and plugin experiments takes ~5 minutes.\n",
        "RUN_RAM_MC_PLUGIN_EXPERIMENTS = False #@param { type: \"boolean\"}\n",
        "\n",
        "#@markdown Running the Nguyen et al. M1 experiments takes ~1.5 hours.\n",
        "RUN_NGUYEN_EXPERIMENTS = False #@param { type: \"boolean\"}\n",
        "\n",
        "def load_figure1_data(file_name):\n",
        "  data = {}\n",
        "  path = os.path.join(ROOT_PATH, file_name)\n",
        "  !gsutil cp $path $file_name\n",
        "  with h5py.File(file_name, 'r') as f:\n",
        "    for i in f:\n",
        "      data[int(i)] = {}\n",
        "      for j in f[i]:\n",
        "        data[int(i)][int(j)] = {}\n",
        "        for k in f[i][j]:\n",
        "          data[int(i)][int(j)][k] = list(f[i][j][k])\n",
        "  return data\n",
        "\n",
        "if RUN_RAM_MC_PLUGIN_EXPERIMENTS:\n",
        "  ram_mc_plugin_results = {}\n",
        "  for d in D_RANGE:\n",
        "    if d not in ram_mc_plugin_results:\n",
        "      ram_mc_plugin_results[d] = {}\n",
        "    for n in N_RANGE:\n",
        "      print(d, n)\n",
        "      if n not in ram_mc_plugin_results[d]:\n",
        "        ram_mc_plugin_results[d][n] = {}\n",
        "      for lbd in np.linspace(-LBD_MAX, LBD_MAX, 51):\n",
        "        # Create Abase with ones on diagonal\n",
        "        Abase = np.zeros((d, K))\n",
        "        np.fill_diagonal(Abase, 1.)\n",
        "        Abase = tf.convert_to_tensor(Abase, tf.dtypes.float32)\n",
        "        Albd = Abase * BETA + lbd * BASE_PARAMS[d]['A0']\n",
        "        blbd = lbd * BASE_PARAMS[d]['b0']\n",
        "\n",
        "        # Compute true closed form values (only once)\n",
        "        if n == N_RANGE[0]:\n",
        "          true_kl = compute_kl(Albd, blbd, STD)\n",
        "          true_hsq = compute_hsq(Albd, blbd, STD)\n",
        "          true_chi2 = compute_chi2(Albd, blbd, STD)\n",
        "        else:\n",
        "          true_kl = None\n",
        "          true_hsq = None\n",
        "          true_chi2 = None\n",
        "\n",
        "        for dvg in ['KL', 'Chi2', 'Hsq']:\n",
        "          if dvg not in ram_mc_plugin_results[d][n]:\n",
        "            ram_mc_plugin_results[d][n][dvg] = []\n",
        "\n",
        "          batch_ram_mc = compute_ram_mc(n, MC_NUM, Albd, blbd, STD,\n",
        "                                  f=dvg, n_batches=N_EXP)\n",
        "\n",
        "          batch_plugin = estimate_plugin(n, MC_NUM, Albd, blbd, STD,\n",
        "                                        f=dvg, n_batches=N_EXP)\n",
        "\n",
        "          ram_mc_plugin_results[d][n][dvg].append(\n",
        "              (true_kl, true_hsq, true_chi2, batch_ram_mc, batch_plugin))\n",
        "else:\n",
        "  ram_mc_plugin_results = load_figure1_data('ram_mc_plugin_results.hdf5')\n",
        "\n",
        "if RUN_NGUYEN_EXPERIMENTS:\n",
        "  nguyen_results = {}\n",
        "  for d in D_RANGE:\n",
        "    if d not in nguyen_results:\n",
        "      nguyen_results[d] = {}\n",
        "    n = N_RANGE[-1]\n",
        "    print((d, n))\n",
        "    if n not in nguyen_results[d]:\n",
        "      nguyen_results[d][n] = {}\n",
        "    for lbd in np.linspace(-LBD_MAX, LBD_MAX, 51):\n",
        "      print(lbd)\n",
        "      # Create Abase with ones on diagonal\n",
        "      Abase = np.zeros((d, K))\n",
        "      np.fill_diagonal(Abase, 1.)\n",
        "      Abase = tf.convert_to_tensor(Abase, tf.dtypes.float32)\n",
        "      Albd = Abase * BETA + lbd * BASE_PARAMS[d]['A0']\n",
        "      blbd = lbd * BASE_PARAMS[d]['b0']\n",
        "      for dvg in ['KL']:\n",
        "        if dvg not in nguyen_results[d][n]:\n",
        "          nguyen_results[d][n][dvg] = []\n",
        "        batch_nguyen = nguyen_estimate_rkhs(n, Albd, blbd, STD,\n",
        "                                            1. / n, n_exps=N_EXP)\n",
        "        nguyen_results[d][n][dvg].append(batch_nguyen)\n",
        "else:\n",
        "  nguyen_results = load_figure1_data('nguyen_results.hdf5')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "A1zEpudrsveB"
      },
      "outputs": [],
      "source": [
        "#@title Generate plots { display-mode: \"form\" }\n",
        "\n",
        "def make_plot_figure1(ram_mc_plugin_results, nguyen_results):\n",
        "  sns.set_style(\"white\")\n",
        "  fig = plt.figure(figsize = (13, 8))\n",
        "  elinewidth = 0.4  # Width of errorbars\n",
        "  errorevery = 3  # Set spacing of error bars to avoid crowding of figure.\n",
        "\n",
        "  def overflow_std(array):\n",
        "    \"\"\"Calculates std of array, but if overflow error would occur returns a \n",
        "    finite number larger than the range of any axes used in plots.\"\"\"\n",
        "    if (np.inf in array) or (np.nan in array) or any(1e20 \u003c array):\n",
        "      std = 1e20\n",
        "    else:\n",
        "      std = np.std(array)\n",
        "    return std\n",
        "\n",
        "  for i in range(1, 10):\n",
        "    sp = plt.subplot(3, 3, i)\n",
        "    d = D_RANGE[(i - 1) % 3]\n",
        "    dvg = ['KL', 'Chi2', 'Hsq'][int((i - 1) / 3)]\n",
        "    colors = cm.rainbow(np.linspace(0, 1, len(N_RANGE)))\n",
        "    for color, n in zip(colors, N_RANGE):\n",
        "\n",
        "      if n == N_RANGE[0]:\n",
        "        # Plot true values\n",
        "        idx = N_RANGE[0]\n",
        "        true_kl = np.array([el[0] for el in ram_mc_plugin_results[d][idx][dvg]])\n",
        "        true_hsq = np.array(\n",
        "            [el[1] for el in ram_mc_plugin_results[d][idx][dvg]])\n",
        "        true_chi2 = np.array([el[2] if isinstance(el[2], float) else el[2]\n",
        "                              for el in ram_mc_plugin_results[d][idx][dvg]])\n",
        "        if dvg == 'KL':\n",
        "          plt.plot(true_kl, color='blue', linewidth=3, label='Truth')\n",
        "          plt.yscale('log')\n",
        "        if dvg == 'Hsq':\n",
        "          plt.plot(true_hsq, color='blue', linewidth=3, label='Truth')\n",
        "        if dvg == 'Chi2':\n",
        "          plt.plot(true_chi2, color='blue', linewidth=3, label='Truth')\n",
        "          plt.yscale('log')\n",
        "\n",
        "      # Plot RAM-MC estimates for N=500.\n",
        "      if n == 500:\n",
        "        mean_ram_mc_n500 = np.array(\n",
        "            [np.mean(el[3]) for el in ram_mc_plugin_results[d][n][dvg]])\n",
        "        std_ram_mc_n500 = np.array(\n",
        "            [np.std(el[3]) for el in ram_mc_plugin_results[d][n][dvg]])\n",
        "        color = 'red'\n",
        "        plt.errorbar(range(51),\n",
        "                    mean_ram_mc_n500,\n",
        "                    errorevery=errorevery,\n",
        "                    yerr=std_ram_mc_n500,\n",
        "                    elinewidth=elinewidth,\n",
        "                    color=color, label='RAM-MC estimator, N=' + str(n),\n",
        "                    marker=\"^\", markersize=5, markevery=10)\n",
        "\n",
        "      # Plot Nguyen estimates\n",
        "      if n == 500 and dvg == 'KL':\n",
        "        color = 'green'\n",
        "        mean_nguyen = np.array(\n",
        "            [np.mean(el) for el in nguyen_results[d][n][dvg]])\n",
        "        std_nguyen = np.array(\n",
        "            [np.std(el) for el in nguyen_results[d][n][dvg]])\n",
        "        plt.errorbar(range(51),\n",
        "                    mean_nguyen,\n",
        "                    errorevery=errorevery,\n",
        "                    yerr=std_nguyen,\n",
        "                    elinewidth=elinewidth,\n",
        "                    color=color, label='M1 estimator, N=' + str(n),\n",
        "                    marker=\"v\", markersize=5, markevery=10)\n",
        "\n",
        "      # Plot plug-in estimates\n",
        "      if n == 500:\n",
        "        mean_plugin = np.array(\n",
        "            [np.mean(el[4]) for el in ram_mc_plugin_results[d][n][dvg]])\n",
        "        std_plugin = np.array(\n",
        "            [overflow_std(el[4]) for el in ram_mc_plugin_results[d][n][dvg]])\n",
        "        color = 'darkorange'\n",
        "        plt.errorbar(range(51),\n",
        "                    mean_plugin,\n",
        "                    errorevery=errorevery,\n",
        "                    yerr=std_plugin,\n",
        "                    elinewidth=elinewidth,\n",
        "                    color=color, label='Plug-in estimator, N=' + str(n),\n",
        "                    marker=\"s\", markersize=5, markevery=10)\n",
        "\n",
        "      # Plot RAM-MC with N=1.\n",
        "      if n == N_RANGE[0]:\n",
        "        color = 'black'\n",
        "        mean_ram_mc1 = np.array(\n",
        "            [np.mean(el[3]) for el in ram_mc_plugin_results[d][n][dvg]])\n",
        "        std_ram_mc1 = np.array(\n",
        "            [np.std(el[3]) for el in ram_mc_plugin_results[d][n][dvg]])\n",
        "        plt.errorbar(range(51) + 0.3 * np.ones(51),\n",
        "                    mean_ram_mc1,\n",
        "                    errorevery=errorevery,\n",
        "                    yerr=std_ram_mc1,\n",
        "                    elinewidth=elinewidth,\n",
        "                    color=color, label='RAM-MC estimator, N=1',\n",
        "                    marker=\"o\", markersize=5, markevery=10)\n",
        "\n",
        "      if dvg == 'KL':\n",
        "        plt.ylim((0.03, 15))\n",
        "      if dvg == 'Chi2':\n",
        "        plt.ylim((0.1, 1e6))\n",
        "      if dvg == 'Hsq':\n",
        "        plt.ylim((0., 2))\n",
        "\n",
        "    sp.axes.get_xaxis().set_ticklabels([])\n",
        "    if d != 1:\n",
        "      sp.axes.get_yaxis().set_ticklabels([])\n",
        "    else:\n",
        "      sp.axes.tick_params(axis='both', labelsize=15)\n",
        "\n",
        "    if i \u003c 4:\n",
        "      plt.title(\"d = {}\".format(d), fontsize=18)\n",
        "    if i == 1:\n",
        "      plt.ylabel('KL', fontsize=18)\n",
        "    if i == 4:\n",
        "      plt.ylabel(r'$\\chi^2$', fontsize=18)\n",
        "    if i == 7:\n",
        "      plt.ylabel(r'$H^2$', fontsize=18)\n",
        "\n",
        "\n",
        "    # Hide the right and top spines.\n",
        "    ax = plt.gca()\n",
        "    ax.spines['right'].set_visible(False)\n",
        "    ax.spines['top'].set_visible(False)\n",
        "    # Only show ticks on the left and bottom spines.\n",
        "    ax.yaxis.set_ticks_position('left')\n",
        "    ax.xaxis.set_ticks_position('bottom')\n",
        "    plt.tick_params(\n",
        "      axis='x',          # changes apply to the x-axis\n",
        "      which='both',      # both major and minor ticks are affected\n",
        "      bottom=False,      # ticks along the bottom edge are off\n",
        "      top=False,         # ticks along the top edge are off\n",
        "      labelbottom=False) # labels along the bottom edge are off\n",
        "    plt.tick_params(\n",
        "      axis='y',          # changes apply to the y-axis\n",
        "      which='both',      # both major and minor ticks are affected\n",
        "      left=False,        # ticks along the left edge are off\n",
        "      right=False,       # ticks along the right edge are off\n",
        "      labelbottom=False) # labels along the bottom edge are off\n",
        "    ax.yaxis.grid()\n",
        "    plt.xlim((-2, 51))\n",
        "\n",
        "    if i \u003e 6:\n",
        "      plt.xlabel(r\"$\\lambda$\", fontsize=17)\n",
        "\n",
        "  ax = fig.axes[1]\n",
        "  handles, labels = ax.get_legend_handles_labels()\n",
        "  labels, handles = zip(*sorted(zip(labels, handles), key=lambda t: t[0]))\n",
        "  fig.legend(handles, labels, loc='lower center', bbox_to_anchor=(0.51, 1.0),\n",
        "            ncol=5, fancybox=True, shadow=True, fontsize=12, frameon=True)\n",
        "  plt.tight_layout()\n",
        "  plt.show()\n",
        "\n",
        "make_plot_figure1(ram_mc_plugin_results, nguyen_results)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "SoCC6J8Qtydy"
      },
      "source": [
        "# Figures 2 and 3"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "KeSm6qT3bXdC"
      },
      "outputs": [],
      "source": [
        "tf.disable_eager_execution()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "xXlmx0lzM49k"
      },
      "source": [
        "## Experiment design:\n",
        "\n",
        "We take the encoders of 6 trained Autoencoder models (a mixture of Variational AEs and Wasserstein AEs). These models were trained on the *CelebA* dataset of ~200K images. The encoders are probabilistic, meaning that each image is mapped to a distribution in the latent space.\n",
        "\n",
        "We consider the use of RAM-MC as a method to estimate f-divergences between the prior distributions of these models and their *aggregate posteriors*: for each model, the mixture of the encodings of all of the training data."
      ]
    },
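    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The aggregate posterior is an equal-weight mixture with one Gaussian component per encoded training example. As a minimal NumPy sketch (illustrative only, not part of the paper's code; the function name is ours), its log-density at a point can be evaluated with a stable log-mean-exp over components:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def gaussian_mixture_log_density(z, means, log_vars):\n",
        "  \"\"\"Log-density of an equal-weight diagonal-Gaussian mixture at z.\"\"\"\n",
        "  # Per-component diagonal-Gaussian log-densities, shape (k,).\n",
        "  log_probs = -0.5 * np.sum(\n",
        "      log_vars + np.log(2 * np.pi) + (z - means) ** 2 / np.exp(log_vars),\n",
        "      axis=1)\n",
        "  # Stable log-mean-exp over the k components.\n",
        "  m = log_probs.max()\n",
        "  return m + np.log(np.mean(np.exp(log_probs - m)))\n",
        "\n",
        "# Sanity check: a one-component 'mixture' with zero means and unit\n",
        "# variances is a standard normal; its 2-D log-density at 0 is -log(2*pi).\n",
        "lp = gaussian_mixture_log_density(np.zeros(2), np.zeros((1, 2)), np.zeros((1, 2)))\n",
        "```"
      ]
    },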
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "ddpep-dtRe5T"
      },
      "outputs": [],
      "source": [
        "#@title Setup for mixture distributions { display-mode: \"form\" }\n",
        "\n",
        "def create_qk(means, log_vars, weights):\n",
        "  \"\"\"Creates a mixture of K Gaussians and a standard normal prior.\"\"\"\n",
        "  k, d_latent = means.shape\n",
        "  pz = tfd.MultivariateNormalDiag(loc=tf.zeros(shape=(d_latent,)),\n",
        "                                  scale_diag=tf.ones(d_latent))\n",
        "\n",
        "  stds = tf.exp(tf.clip_by_value(log_vars, -30, 30) / 2.)\n",
        "  qk_components = tfd.MultivariateNormalDiag(loc=means, scale_diag=stds)\n",
        "  qk = tfd.MixtureSameFamily(\n",
        "      mixture_distribution=tfd.Categorical(probs=weights),\n",
        "      components_distribution=qk_components)\n",
        "  return pz, qk\n",
        "\n",
        "def sample_qk(qk, k, d_latent, n_samples):\n",
        "  \"\"\"Sample points from the mixture qk with k components.\"\"\"\n",
        "  INT32_MAXVAL = 2147483647\n",
        "  MAX_SAMPLE_SIZE = 1000000000\n",
        "\n",
        "  if n_samples * d_latent * k \u003e INT32_MAXVAL:\n",
        "    print(('Warning: a very large tensor will be internally instantiated, '\n",
        "           'this may cause an OOM error.'))\n",
        "\n",
        "  if n_samples * d_latent * k \u003e MAX_SAMPLE_SIZE:\n",
        "    n_batches = n_samples * d_latent * k // MAX_SAMPLE_SIZE + 1\n",
        "    mc_samples = []\n",
        "    for _ in range(n_batches):\n",
        "      mc_samples.append(qk.sample(n_samples // n_batches))\n",
        "    if n_samples % n_batches \u003e 0:\n",
        "      mc_samples.append(qk.sample(n_samples % n_batches))\n",
        "    mc_samples = tf.concat(mc_samples, axis=0)\n",
        "    assert mc_samples.get_shape()[0] == n_samples\n",
        "  else:\n",
        "    mc_samples = qk.sample(n_samples)\n",
        "  return mc_samples"
      ]
    },
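    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The batching in `sample_qk` avoids instantiating one huge tensor by splitting a large sample request into chunks and concatenating the results. The same idea in plain NumPy, with a simpler chunking scheme (a hypothetical standalone sketch, not the notebook's code):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def batched_sample(sampler, n_samples, max_batch):\n",
        "  \"\"\"Draw n_samples via repeated calls to sampler(batch_size).\"\"\"\n",
        "  chunks = []\n",
        "  remaining = n_samples\n",
        "  while remaining > 0:\n",
        "    batch = min(remaining, max_batch)\n",
        "    chunks.append(sampler(batch))  # allocates at most max_batch rows per call\n",
        "    remaining -= batch\n",
        "  out = np.concatenate(chunks, axis=0)\n",
        "  assert out.shape[0] == n_samples\n",
        "  return out\n",
        "\n",
        "rng = np.random.default_rng(0)\n",
        "samples = batched_sample(lambda b: rng.normal(size=(b, 4)), 1000, 256)\n",
        "```"
      ]
    },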
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "r6X2LOTLOLxQ"
      },
      "source": [
        "## Estimators of Df(Qz, Pz)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "X6lC0b5ZS6Zy"
      },
      "outputs": [],
      "source": [
        "#@title Monte-Carlo estimation { display-mode: \"form\" }\n",
        "\n",
        "def mc_benchmark(means, log_vars, weights, mc_num, f, num_exps):\n",
        "  \"\"\"MC-based estimation of the f-divergence.\n",
        "    Df(Q, P) = int_z f(q(z) / p(z)) p(z) dz\n",
        "  Estimate Df(QK, Pz) for standard Gaussian Pz and a K-mixture of Gaussians QK.\n",
        "  Parameters of all the component Gaussians are stored in means and log_vars,\n",
        "  with both having shapes (K, d_latent).\n",
        "\n",
        "  To estimate, we sample mc_num points from Pz or QK (depending on f) and\n",
        "  average the values of f evaluated at the density ratios dQK/dPz over\n",
        "  those points.\n",
        "  \"\"\"\n",
        "\n",
        "  tf.reset_default_graph()\n",
        "  (k, d_latent) = means.shape\n",
        "\n",
        "  pz, qk = create_qk(means, log_vars, weights)\n",
        "  mc_samples = sample_qk(qk, k, d_latent, mc_num)\n",
        "  log_density_ratios = qk.log_prob(mc_samples) - pz.log_prob(mc_samples)\n",
        "\n",
        "  if f == 'KL':\n",
        "    mc_estimate = tf.reduce_mean(log_density_ratios)\n",
        "    surrogate_estimate = tf.log(mc_estimate)\n",
        "  elif f == 'Hsq':\n",
        "    mc_estimate = 2.\n",
        "    mc_estimate -= 2. * tf.exp(tf.reduce_logsumexp(\n",
        "        -0.5 * log_density_ratios, axis=0)) / mc_num\n",
        "    # Since the estimate may be very close to (but below) 2, we also\n",
        "    # compute log(2 - estimate).\n",
        "    surrogate_estimate = tf.log(2. / mc_num) + tf.reduce_logsumexp(\n",
        "        -0.5 * log_density_ratios, axis=0)\n",
        "\n",
        "  estimates = []\n",
        "  with tf.Session() as sess:\n",
        "    for i in range(num_exps):\n",
        "      mc_estimate_val, surrogate_estimate_val = sess.run(\n",
        "          [mc_estimate, surrogate_estimate])\n",
        "      estimates.append((mc_estimate_val, surrogate_estimate_val))\n",
        "  return np.array(estimates)"
      ]
    },
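    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The estimator above runs inside a TF1 session. The same Monte-Carlo idea can be sanity-checked in plain NumPy on a case with a closed form: the KL divergence between two 1-D Gaussians (a hypothetical standalone sketch, not the notebook's estimator):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def mc_kl_gaussians(mu_q, s_q, mu_p, s_p, n_samples=200000, seed=0):\n",
        "  \"\"\"Monte-Carlo KL(Q||P) = E_{z~Q}[log q(z) - log p(z)], 1-D Gaussians.\"\"\"\n",
        "  rng = np.random.default_rng(seed)\n",
        "  z = rng.normal(mu_q, s_q, size=n_samples)\n",
        "  log_q = -np.log(s_q) - 0.5 * np.log(2 * np.pi) - (z - mu_q) ** 2 / (2 * s_q ** 2)\n",
        "  log_p = -np.log(s_p) - 0.5 * np.log(2 * np.pi) - (z - mu_p) ** 2 / (2 * s_p ** 2)\n",
        "  return np.mean(log_q - log_p)\n",
        "\n",
        "# Closed form: KL = log(s_p/s_q) + (s_q^2 + (mu_q - mu_p)^2) / (2 s_p^2) - 1/2.\n",
        "est = mc_kl_gaussians(1.0, 1.0, 0.0, 2.0)\n",
        "exact = np.log(2.0) + (1.0 + 1.0) / (2 * 4.0) - 0.5\n",
        "```"
      ]
    },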
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "5JKJ3EzxS8T1"
      },
      "outputs": [],
      "source": [
        "#@title RAM-MC { display-mode: \"form\" }\n",
        "\n",
        "def estimate_ram_mc(means_all, log_vars_all, n, mc_num, f, num_exps=1):\n",
        "  \"\"\"Return the RAM-MC estimate of Df(QK, Pz).\n",
        "\n",
        "  Args:\n",
        "    means_all, log_vars_all: (k, d_latent) shaped tensors containing\n",
        "      encodings of all the examples.\n",
        "    n: Number of encoded examples/mixture components to use in RAM-MC.\n",
        "    mc_num: Number of MC samples used in RAM-MC.\n",
        "    f: Name of the f-divergence ('KL' or 'Hsq').\n",
        "    num_exps: Number of independent repetitions of the estimate.\n",
        "  \"\"\"\n",
        "  (k, d_latent) = means_all.shape\n",
        "  estimates = []\n",
        "  for _ in range(num_exps):\n",
        "    # Sample n components out of all k and use MC sampling.\n",
        "    counts = np.random.multinomial(n, [1. / k] * k)\n",
        "    nonzero_ids = np.nonzero(counts)[0]\n",
        "    counts = counts[nonzero_ids]\n",
        "    freqs = counts / (n + 0.)\n",
        "    freqs = np.float32(freqs)\n",
        "    estimate_val = mc_benchmark(\n",
        "        means_all[nonzero_ids],\n",
        "        log_vars_all[nonzero_ids],\n",
        "        weights=freqs, mc_num=mc_num, f=f, num_exps=1)[0]\n",
        "    estimates.append(estimate_val)\n",
        "  return np.array(estimates)"
      ]
    },
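    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The key step in `estimate_ram_mc` is subsampling n of the k mixture components (uniformly, with replacement) and reweighting them by their empirical frequencies. That step in isolation, as a hypothetical NumPy-only sketch (the function name is ours):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def subsample_mixture(means_all, log_vars_all, n, seed=0):\n",
        "  \"\"\"Draw n components with replacement; return the distinct sampled\n",
        "  components and their empirical weights.\"\"\"\n",
        "  rng = np.random.default_rng(seed)\n",
        "  k = means_all.shape[0]\n",
        "  counts = rng.multinomial(n, [1.0 / k] * k)\n",
        "  nonzero = np.nonzero(counts)[0]        # components drawn at least once\n",
        "  weights = counts[nonzero] / float(n)   # empirical mixture weights\n",
        "  return means_all[nonzero], log_vars_all[nonzero], weights\n",
        "\n",
        "means, log_vars, w = subsample_mixture(np.zeros((100, 8)), np.zeros((100, 8)), n=16)\n",
        "```"
      ]
    },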
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "kRisL6AQWOZg"
      },
      "source": [
        "## Process data"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "e3EpBa0hWQzm"
      },
      "outputs": [],
      "source": [
        "#@title Load precomputed embeddings of CelebA data { display-mode: \"form\" }\n",
        "\n",
        "#@markdown Gets means and log-variances of encoding distributions of all data\n",
        "#@markdown for each encoder.\n",
        "\n",
        "\n",
        "def load_celebA_embeddings():\n",
        "  means_logvars = {}\n",
        "  file_name = 'means_logvars.hdf5'\n",
        "  path = os.path.join(ROOT_PATH, file_name)\n",
        "  !gsutil cp $path $file_name\n",
        "\n",
        "  with h5py.File(file_name, 'r') as f:\n",
        "    for i in f:\n",
        "      means_logvars[int(i)] = {}\n",
        "      for j in f[i]:\n",
        "        means_logvars[int(i)][j] = f[i][j][:]\n",
        "  return means_logvars"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "JGIqkg3UGDTE"
      },
      "source": [
        "## Divergence estimates"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "cellView": "both",
        "colab": {},
        "colab_type": "code",
        "id": "yeAYQY8qMMgr"
      },
      "outputs": [],
      "source": [
        "#@title Calculate divergence estimates or load precomputed results.\n",
        "#@title { display-mode: \"form\" }\n",
        "\n",
        "#@markdown Calculates Monte-Carlo baseline and RAM-MC estimates of KL and\n",
        "#@markdown Squared Hellinger divergences between mixture of encoded data\n",
        "#@markdown and prior. Takes ~4 hours to run.\n",
        "CALCULATE_DIVERGENCE_ESTIMATES = False #@param { type: \"boolean\"}\n",
        "\n",
        "# Note: in the paper we used 10,000 MC samples for the baseline.\n",
        "# To speed up computation here, we use a smaller number of samples.\n",
        "MC_BASELINE_N_SAMPLES = 100\n",
        "MC_BASELINE_N_EXPS = 10\n",
        "RAM_MC_N_EXPS = 50\n",
        "\n",
        "N_RANGE_REAL_EXPS = [2 ** i for i in range(15)[::-1]]\n",
        "MC_RANGE = [1000, 10]\n",
        "\n",
        "\n",
        "def load_ram_mc_results():\n",
        "  results_ram_mc = {}\n",
        "  file_name = 'results_ram_mc.hdf5'\n",
        "  path = os.path.join(ROOT_PATH, file_name)\n",
        "  !gsutil cp $path $file_name\n",
        "\n",
        "  with h5py.File(file_name, 'r') as f:\n",
        "    for i in f:\n",
        "      results_ram_mc[int(i)] = {}\n",
        "      for j in f[i]:\n",
        "        results_ram_mc[int(i)][int(j)] = {}\n",
        "        for k in f[i][j]:\n",
        "          results_ram_mc[int(i)][int(j)][int(k)] = {}\n",
        "          for m in f[i][j][k]:\n",
        "            results_ram_mc[int(i)][int(j)][int(k)][m] = f[i][j][k][m][:]\n",
        "  return results_ram_mc\n",
        "\n",
        "def load_mc_benchmark_results():\n",
        "  results_mc_benchmark = {}\n",
        "  file_name = 'results_mc_benchmark.hdf5'\n",
        "  path = os.path.join(ROOT_PATH, file_name)\n",
        "  !gsutil cp $path $file_name\n",
        "\n",
        "  with h5py.File(file_name, 'r') as f:\n",
        "    for i in f:\n",
        "      results_mc_benchmark[int(i)] = {}\n",
        "      for j in f[i]:\n",
        "        results_mc_benchmark[int(i)][j] = f[i][j][:]\n",
        "  return results_mc_benchmark\n",
        "\n",
        "if CALCULATE_DIVERGENCE_ESTIMATES:\n",
        "  t_init = time.time()\n",
        "\n",
        "  means_logvars = load_celebA_embeddings()\n",
        "\n",
        "  results_mc_benchmark = {}\n",
        "  results_ram_mc = {}\n",
        "\n",
        "  for i in range(1,7):\n",
        "    results_mc_benchmark[i] = {}\n",
        "    results_ram_mc[i] = {}\n",
        "\n",
        "    means, log_variances = [means_logvars[i][key]\n",
        "                            for key in ['means', 'log_variances']]\n",
        "    (k, d_latent) = means.shape\n",
        "\n",
        "    print('z_dim=%d' % d_latent)\n",
        "\n",
        "    # Compute the benchmark MC estimator as a reference value\n",
        "    for dvg in ['KL', 'Hsq']:\n",
        "      print('Computing MC estimate benchmark for f={}'.format(dvg))\n",
        "      mc_vals = mc_benchmark(means, log_variances, [1. / k] * k,\n",
        "                             MC_BASELINE_N_SAMPLES, dvg, MC_BASELINE_N_EXPS)\n",
        "      results_mc_benchmark[i][dvg] = mc_vals\n",
        "\n",
        "    for n in N_RANGE_REAL_EXPS:\n",
        "      results_ram_mc[i][n] = {}\n",
        "      for mc_num in MC_RANGE:\n",
        "        print('N=%d, MC=%d' % (n, mc_num))\n",
        "        results_ram_mc[i][n][mc_num] = {}\n",
        "        for dvg in ['KL', 'Hsq']:\n",
        "          print('Evaluating for f=%s' % dvg)\n",
        "          # Compute our estimate\n",
        "          ram_mc_vals = estimate_ram_mc(means, log_variances, n,\n",
        "                                        mc_num, dvg, num_exps=RAM_MC_N_EXPS)\n",
        "          results_ram_mc[i][n][mc_num][dvg] = ram_mc_vals\n",
        "  print(\"It took {} seconds to complete.\".format(time.time() - t_init))\n",
        "else:\n",
        "  results_ram_mc = load_ram_mc_results()\n",
        "  results_mc_benchmark = load_mc_benchmark_results()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "PLdjjRigimSM"
      },
      "source": [
        "## Make plots"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "rgG3f1DQrcAQ"
      },
      "outputs": [],
      "source": [
        "#@title Plotting function { display-mode: \"form\" }\n",
        "\n",
        "def make_plots_real_data(dvg):\n",
        "  num_steps = len(N_RANGE_REAL_EXPS)\n",
        "  errorbar_width = 2\n",
        "  # Amount by which to shift the curves relatively to each other.\n",
        "  delta_step = 0.35\n",
        "\n",
        "  # Keys to the results dict.\n",
        "  MODELS = [1, 2, 3, 4, 5, 6]\n",
        "  # Hardcode their corresponding latent space dims.\n",
        "  MODELS_D = [32, 32, 64, 64, 128, 128]\n",
        "  # Hardcode the order of the models which we want to plot.\n",
        "  MODEL_IDS = [0, 2, 4, 1, 3, 5]\n",
        "\n",
        "  current_palette = sns.color_palette()\n",
        "\n",
        "  def get_log_error_bars(log_estimates, scale_std=1.):\n",
        "    \"\"\"Given a set of log-observations, compute log(mean +/- std).\n",
        "\n",
        "    Computing the variance is trickier. We compute log(std), which is\n",
        "    0.5 log(var) = 0.5 log( mean(X^2) - (mean(X))^2 ). Note that\n",
        "    log(mean(X^2)) can be computed using logsumexp, and log((mean(X))^2)\n",
        "    is simply 2 log(mean(X)).\n",
        "    \"\"\"\n",
        "    m = len(log_estimates)\n",
        "\n",
        "    log_mean_value = logsumexp(log_estimates) - np.log(m + 0.)\n",
        "    log_term1 = logsumexp(2 * log_estimates) - np.log(m + 0.)\n",
        "    log_term2 = 2 * log_mean_value\n",
        "    log_var = logsumexp(a=[log_term1, log_term2], b=[1., -1.])\n",
        "    log_unb_var = log_var + np.log(m / (m - 1.))\n",
        "    log_std = log_unb_var / 2.\n",
        "\n",
        "    error_plus = logsumexp([log_std, log_mean_value],\n",
        "                           b=[scale_std, 1.]) - log_mean_value\n",
        "\n",
        "    if log_mean_value \u003e log_std + np.log(scale_std):\n",
        "      error_minus = log_mean_value - logsumexp([log_mean_value, log_std],\n",
        "                                               b=[1., -scale_std])\n",
        "    else:\n",
        "      error_minus = 0.\n",
        "    return log_mean_value, error_plus, error_minus\n",
        "\n",
        "  fig = plt.figure(figsize=(13, 8))\n",
        "  for plot_id in range(1, 7):\n",
        "    plt.subplot(2, 3, plot_id)\n",
        "\n",
        "    model = MODELS[MODEL_IDS[plot_id - 1]]\n",
        "\n",
        "    # MC benchmark\n",
        "    if dvg == 'KL':\n",
        "      mc_vals = results_mc_benchmark[model][dvg][:,0]\n",
        "      mean_value = np.mean(mc_vals)\n",
        "      error = np.std(mc_vals, ddof=1)\n",
        "    elif dvg == 'Hsq':\n",
        "      log_mc_vals = results_mc_benchmark[model][dvg][:,1]\n",
        "      mean_value, error_plus, error_minus = get_log_error_bars(log_mc_vals)\n",
        "      error = [np.reshape([error_minus, error_plus], (2, 1))\n",
        "               for _ in range(num_steps)]\n",
        "      error = np.hstack(error)\n",
        "    else:\n",
        "      raise ValueError(\n",
        "          \"Argument dvg must be 'KL' or 'Hsq', not: {}\".format(dvg))\n",
        "\n",
        "    plt.errorbar(range(num_steps),\n",
        "                 [mean_value] * num_steps,\n",
        "                 yerr=[error] * num_steps if dvg == 'KL' else error,\n",
        "                 linewidth=3,\n",
        "                 color='blue',\n",
        "                 elinewidth=errorbar_width,\n",
        "                 label='True MC reference')\n",
        "\n",
        "    # RAM-MC estimates.\n",
        "    for (j, mc_num) in enumerate(reversed(MC_RANGE)):\n",
        "      if dvg == 'KL':\n",
        "        values = [np.mean(results_ram_mc[model][n][mc_num][dvg][:,0])\n",
        "                  for n in N_RANGE_REAL_EXPS[::-1]]\n",
        "        errors = [np.std(results_ram_mc[model][n][mc_num][dvg][:,0], ddof=1)\n",
        "                  for n in N_RANGE_REAL_EXPS[::-1]]\n",
        "      else:\n",
        "        values = []\n",
        "        errors = []\n",
        "        for n in N_RANGE_REAL_EXPS[::-1]:\n",
        "          log_mc_vals = results_ram_mc[model][n][mc_num][dvg][:,1]\n",
        "          log_error_bars_out = get_log_error_bars(log_mc_vals)\n",
        "          log_mean_value, error_plus, error_minus = log_error_bars_out\n",
        "          values.append(log_mean_value)\n",
        "          errors.append(np.reshape([error_minus, error_plus], (2,1)))\n",
        "        errors = np.hstack(errors)\n",
        "\n",
        "      color_to_show = current_palette[2] if mc_num \u003c 20 else current_palette[3]\n",
        "      marker_to_show = 's' if mc_num \u003c 20 else 'o'\n",
        "      plt.errorbar(np.array(range(num_steps)) + (j + 1) * delta_step,\n",
        "                   values, yerr=errors,\n",
        "                   linewidth=3,\n",
        "                   elinewidth=errorbar_width,\n",
        "                   capthick=2,\n",
        "                   color=color_to_show,\n",
        "                   marker=marker_to_show, markersize=10, markevery=100,\n",
        "                   label='RAM-MC estimator, M=%d' % mc_num)\n",
        "\n",
        "    if dvg == 'KL':\n",
        "      if plot_id == 1:\n",
        "        plt.ylim((200, 300))\n",
        "      if plot_id == 2:\n",
        "        plt.ylim((420, 530))\n",
        "      if plot_id == 3:\n",
        "        plt.ylim((740, 900))\n",
        "\n",
        "    if plot_id \u003c 4:\n",
        "      plt.title('d=%d' % (MODELS_D[MODEL_IDS[plot_id - 1]]), fontsize=18)\n",
        "    if plot_id in [1, 4]:\n",
        "      plt.ylabel('estimate value', fontsize=18)\n",
        "    plt.xlim((-1, num_steps))\n",
        "    if plot_id \u003e 3:\n",
        "      plt.xlabel(r'$\\log_2 N$', fontsize=17)\n",
        "    plt.xticks(range(num_steps), [str(i) for i in range(num_steps)])\n",
        "\n",
        "    # Hide the right and top spines.\n",
        "    ax = plt.gca()\n",
        "    ax.spines['right'].set_visible(False)\n",
        "    ax.spines['top'].set_visible(False)\n",
        "    # Only show ticks on the left and bottom spines.\n",
        "    ax.yaxis.set_ticks_position('left')\n",
        "    ax.xaxis.set_ticks_position('bottom')\n",
        "    plt.tick_params(axis='x', which='both',\n",
        "                    bottom=True, top=False, labelbottom=True)\n",
        "    plt.tick_params(axis='y', which='both',\n",
        "                    left=False, right=False, labelbottom=False)\n",
        "    ax.yaxis.grid()\n",
        "\n",
        "  ax = fig.axes[1]\n",
        "  handles, labels = ax.get_legend_handles_labels()\n",
        "  labels, handles = zip(*sorted(zip(labels, handles), key=lambda t: t[0]))\n",
        "  fig.legend(handles, labels, loc='upper center', bbox_to_anchor=(0.5, 1.05),\n",
        "             ncol=3, fancybox=True, shadow=True, fontsize=14)\n",
        "\n",
        "  plt.tight_layout()"
      ]
    },
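    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "`get_log_error_bars` above stays entirely in log space so that estimates spanning many orders of magnitude keep their precision. The core log-domain mean/std computation, pulled out as a standalone sketch and checked on well-scaled values where the direct formulas apply (the function name is ours):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "from scipy.special import logsumexp\n",
        "\n",
        "def log_mean_and_std(log_x):\n",
        "  \"\"\"Given log-observations log(X), return log(mean(X)) and log(std(X)).\"\"\"\n",
        "  m = len(log_x)\n",
        "  log_mean = logsumexp(log_x) - np.log(m)\n",
        "  # var = (mean(X^2) - mean(X)^2) * m / (m - 1), all in log space.\n",
        "  log_ex2 = logsumexp(2 * np.asarray(log_x)) - np.log(m)\n",
        "  log_var = logsumexp([log_ex2, 2 * log_mean], b=[1.0, -1.0])\n",
        "  log_std = 0.5 * (log_var + np.log(m / (m - 1.0)))\n",
        "  return log_mean, log_std\n",
        "\n",
        "x = np.array([1.0, 2.0, 4.0])\n",
        "log_mean, log_std = log_mean_and_std(np.log(x))\n",
        "```"
      ]
    },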
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "10CY8fYbTf-s"
      },
      "outputs": [],
      "source": [
        "#@title Make KL divergence plot (Figure 2) { display-mode: \"form\" }\n",
        "\n",
        "make_plots_real_data(\"KL\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "ZT9uRgraTGhR"
      },
      "outputs": [],
      "source": [
        "#@title Make Squared Hellinger divergence plot (Figure 3) { display-mode: \"form\" }\n",
        "\n",
        "make_plots_real_data(\"Hsq\")"
      ]
    }
  ],
  "metadata": {
    "accelerator": "TPU",
    "colab": {
      "collapsed_sections": [
        "Nfyi4gri3nTz"
      ],
      "machine_shape": "hm",
      "name": "RAM-MC open source.ipynb",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
