{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a2237115",
   "metadata": {},
   "source": [
    "# Theory for Energy-Based Models (EBM)\n",
    "\n",
    "The density given by an EBM is:\n",
    "\\begin{eqnarray*}\n",
    "p_{\\theta}(x) = \\frac{\\exp(-E_\\theta(x))}{Z_\\theta},\n",
    "\\end{eqnarray*}\n",
    "where $E_\\theta:\\mathbb{R}^d \\to \\mathbb{R}$ and $Z_\\theta=\\int \\exp(-E_\\theta(x)) dx$.\n",
    "\n",
    "Given samples $x_1,\\dots, x_N$ in $\\mathbb{R}^d$, we want to find the parameter $\\theta$ maximizing the log-likelihood $\\max_\\theta \\sum_{i=1}^N \\log p_{\\theta}(x_i)$. Since $Z_\\theta$ is a function of $\\theta$, evaluation and differentiation of $\\log p_{\\theta}(x)$ w.r.t. $\\theta$ involves a typically intractable integral.\n",
    "\n",
    "## Maximum Likelihood Training with MCMC\n",
    "\n",
    "We can estimate the gradient of the log-likelihood with MCMC approaches:\n",
    "\\begin{eqnarray*}\n",
    "\\nabla_\\theta \\log p_\\theta(x) = -\\nabla_\\theta E_\\theta(x)-\\nabla_\\theta \n",
    "\\log Z_\\theta.\n",
    "\\end{eqnarray*}\n",
    "The first term is simple to compute (with automatic differentiation).\n",
    "\n",
    "### Question 1 (Maths)\n",
    "Show that for the second term, we have:\n",
    "\\begin{eqnarray*}\n",
    "\\nabla_\\theta \\log Z_\\theta = \\mathbb{E}_{p_{\\theta}(x)}\\left[-\\nabla_\\theta E_\\theta(x)\\right] \\left(= \\int p_{\\theta}(x) \\left[-\\nabla_\\theta E_\\theta(x)\\right] dx \\right).\n",
    "\\end{eqnarray*}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "93018a32",
   "metadata": {},
   "source": []
  },
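  {
   "cell_type": "markdown",
   "id": "f3a91c20",
   "metadata": {},
   "source": [
    "As a numerical sanity check (not a proof), this identity can be verified on a toy family where $Z_\\theta$ is known in closed form. For instance, with $E_\\theta(x)=\\theta x^2/2$ on $\\mathbb{R}$ we have $Z_\\theta=\\sqrt{2\\pi/\\theta}$, hence $\\nabla_\\theta \\log Z_\\theta = -1/(2\\theta)$, which should match a Monte Carlo estimate of $\\mathbb{E}_{p_\\theta}\\left[-\\nabla_\\theta E_\\theta(x)\\right] = \\mathbb{E}_{p_\\theta}\\left[-x^2/2\\right]$ (the family and the value `theta0 = 2.0` below are illustrative choices, not part of the homework):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "theta0 = 2.0\n",
    "# E_theta(x) = theta * x**2 / 2, so p_theta = N(0, 1/theta)\n",
    "x = torch.randn(1000000) / theta0 ** 0.5   # samples from p_theta\n",
    "mc_estimate = (-x ** 2 / 2).mean().item()  # E_p[-grad_theta E_theta(x)]\n",
    "closed_form = -1 / (2 * theta0)            # grad_theta log Z_theta\n",
    "print(mc_estimate, closed_form)  # both close to -0.25\n",
    "```"
   ]
  },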
  {
   "cell_type": "markdown",
   "id": "edbb3c8a",
   "metadata": {},
   "source": [
    "Thus, we can obtain an unbiased one-sample Monte Carlo estimate of the log-likelihood gradient by\n",
    "\\begin{eqnarray*}\n",
    "\\nabla_\\theta \\log Z_\\theta \\approx -\\nabla_\\theta E_\\theta(\\tilde{x}),\n",
    "\\end{eqnarray*}\n",
    "with $\\tilde{x}\\sim p_\\theta(x)$, i.e. a random sample from the distribution given by the EBM. Therefore, we need to draw random samples from the model. As explained during the course, this can be done using Langevin MCMC. First note that the gradient of the log-probability w.r.t. $x$ (which is the score) is easy to calculate:\n",
    "\\begin{eqnarray*}\n",
    "\\nabla_x \\log p_\\theta(x) = -\\nabla_x E_\\theta(x) \\text{ since }  \\nabla_x \\log Z_\\theta = 0.\n",
    "\\end{eqnarray*}\n",
    "Hence, in this case, Langevin MCMC is given by:\n",
    "\\begin{eqnarray*}\n",
    "x_t = x_{t-1} - \\epsilon \\nabla_x E_\\theta(x_{t-1}) +\\sqrt{2\\epsilon}z_t, \n",
    "\\end{eqnarray*}\n",
    "where $z_t\\sim \\mathcal{N}(0,I)$. When $\\epsilon\\to 0$ and $t\\to \\infty$, $x_t$ will be distributed as $p_\\theta(x)$ (under some regularity conditions).\n",
    "\n",
    "In this homework, we will consider an alternative learning procedure.\n",
    "\n",
    "## Score Matching\n",
    "\n",
    "The score (which was used in Langevin MCMC above) is defined as $$ s_\\theta(x) = \\nabla_x\\log p_\\theta(x) = -\\nabla_x E_\\theta(x) = -\\left( \\frac{\\partial E_\\theta(x)}{\\partial x_1},\\dots, \\frac{\\partial E_\\theta(x)}{\\partial x_d}\\right).$$\n",
    "\n",
    "If $p(x)$ denote the (unknown) data distribution, the basic score matching objective minimizes:\n",
    "$$\n",
    "\\mathbb{E}_{p(x)} \\|\\nabla_x \\log p(x) - s_\\theta(x)\\|^2.\n",
    "$$\n",
    "\n",
    "### Question 2 (Maths)\n",
    "\n",
    "The problem with this objective is that we cannot compute $\\nabla_x \\log p(x)$ as $p(x)$ is unknown. We can only compute (approximate) averages with respect to $p(x)$ with empirical averages.\n",
    "Show that we can solve this issue as we have:\n",
    "$$\n",
    "\\mathbb{E}_{p(x)} \\|\\nabla_x \\log p(x) - s_\\theta(x)\\|^2 = c + \\mathbb{E}_{p(x)}\\left[ \\sum_{i=1}^d\\left ( \\frac{\\partial E_\\theta(x)}{\\partial x_i}\\right)^2+2\\frac{\\partial^2 E_\\theta(x)}{\\partial x^2_i}\\right],\n",
    "$$\n",
    "where $c$ is a constant (not depending on $\\theta$)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d4eabece",
   "metadata": {},
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "1dee6fa4",
   "metadata": {},
   "source": [
    "## Denoising Score Matching\n",
    "\n",
    "There are several drawbacks about the score matching approach: computing the trace of the Hessian is expensive and scores will not be accurately estimated in low-density regions, see [Generative Modeling by Estimating Gradients of the Data Distribution](https://yang-song.net/blog/2021/score/#naive-score-based-generative-modeling-and-its-pitfalls)\n",
    "\n",
    "Denoising score matching is an elegant and scalable solution. Consider the random variable $Y = X+\\sigma Z$, where $X\\sim p(x)$ and $Z\\sim\\mathcal{N}(0,I)$. We denote by $p^\\sigma(y)$ the distribution of $Y$.\n",
    "\n",
    "\n",
    "### Question 3 (Maths)\n",
    "Shows that\n",
    "$$\n",
    "\\nabla_y\\log p^\\sigma(y) = -\\frac{1}{\\sigma}\\mathbb{E}\\left[ Z |Y=y\\right] = -\\frac{1}{\\sigma}\\mathbb{E}\\left[ Z |X+\\sigma Z=y\\right].\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4b70dc0d",
   "metadata": {},
   "source": []
  },
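  {
   "cell_type": "markdown",
   "id": "b5c6d7e8",
   "metadata": {},
   "source": [
    "As a sanity check of this identity in a special case (not a proof): take $X\\sim\\mathcal{N}(0,1)$, so that $Y = X+\\sigma Z\\sim\\mathcal{N}(0,1+\\sigma^2)$ and the left-hand side is\n",
    "$$\n",
    "\\nabla_y\\log p^\\sigma(y) = -\\frac{y}{1+\\sigma^2}.\n",
    "$$\n",
    "On the other hand, $(Z,Y)$ is jointly Gaussian with $\\mathrm{Cov}(Z,Y)=\\sigma$ and $\\mathrm{Var}(Y)=1+\\sigma^2$, so $\\mathbb{E}\\left[ Z |Y=y\\right] = \\frac{\\sigma y}{1+\\sigma^2}$, and indeed\n",
    "$$\n",
    "-\\frac{1}{\\sigma}\\mathbb{E}\\left[ Z |Y=y\\right] = -\\frac{y}{1+\\sigma^2}.\n",
    "$$"
   ]
  },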
  {
   "cell_type": "markdown",
   "id": "f0422a90",
   "metadata": {},
   "source": [
    "The denoising score matching objective is now\n",
    "$$\n",
    "\\mathbb{E}_{p^\\sigma(y)}\\|\\nabla_y \\log p^\\sigma(y) - s_\\theta(y)\\|^2,\n",
    "$$\n",
    "that we will minimize thanks to a gradient descent in the parameter $\\theta$.\n",
    "\n",
    "### Question 4 (Maths)\n",
    "\n",
    "Show that\n",
    "$$\n",
    "\\mathbb{E}_{p^\\sigma(y)}\\|\\nabla_y \\log p^\\sigma(y) - s_\\theta(y)\\|^2 = \\mathbb{E}\\left\\| \\frac{Z}{\\sigma}+s_\\theta(X+\\sigma Z)\\right\\|^2 -C\n",
    "$$\n",
    "where $C$ does not depend on $\\theta$."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3aa7962d",
   "metadata": {},
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "640c83bc",
   "metadata": {},
   "source": [
    "Hence, in practice, we will minimize the (random) loss:\n",
    "$$\n",
    "\\ell(\\theta; x_1,\\dots, x_N) = \\frac{1}{N} \\sum_{i=1}^N \\left\\| \\frac{z_i}{\\sigma}+s_\\theta(x_i+\\sigma z_i)\\right\\|^2,\n",
    "$$\n",
    "where the $z_i$ are iid Gaussian. As the dataset is too large, we will run SGD algorithm, i.e. make batches and use automatic differentiation to get the gradient w.r.t. $\\theta$ over each batch."
   ]
  },
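  {
   "cell_type": "markdown",
   "id": "c9d0e1f2",
   "metadata": {},
   "source": [
    "As a minimal sketch of this loss (using a distribution whose noised score is known in closed form, not an EBM): if the data are $\\mathcal{N}(0,I)$, then $p^\\sigma = \\mathcal{N}(0,(1+\\sigma^2)I)$ and its score is $-y/(1+\\sigma^2)$. The empirical loss should be smaller for this score than for an arbitrary wrong one (`true_score` and `wrong_score` below are illustrative functions, not part of the homework):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "sigma = 0.1\n",
    "x = torch.randn(16384, 2)  # stand-in dataset, here N(0, I)\n",
    "z = torch.randn_like(x)    # the i.i.d. Gaussian noise z_i\n",
    "\n",
    "def true_score(y):\n",
    "    return -y / (1 + sigma ** 2)  # score of the noised distribution\n",
    "\n",
    "def wrong_score(y):\n",
    "    return -2.0 * y\n",
    "\n",
    "def dsm(score_fn):\n",
    "    return ((z / sigma + score_fn(x + sigma * z)) ** 2).sum(dim=1).mean()\n",
    "\n",
    "print(dsm(true_score).item(), dsm(wrong_score).item())\n",
    "```\n",
    "\n",
    "Up to the constant $C$ from Question 4, the loss is minimized (over all score functions) by the true score of $p^\\sigma$, which is why the first value is the smaller one."
   ]
  },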
  {
   "cell_type": "markdown",
   "id": "7f21bd9a",
   "metadata": {},
   "source": [
    "# Code for Energy Based Models\n",
    "\n",
    "You will code a EBM where the energy function will be a neural network as a python class with the following methods:\n",
    "- `energy_fn` taking as argument a batch of samples $x_1,\\dots,x_B$ and computing the corresponding energies $E_\\theta(x_1),\\dots, E_\\theta(x_B)$.\n",
    "- `score` taking as argument a batch of samples $x_1,\\dots,x_B$ and computing the corresponding scores $s_\\theta(x_1),\\dots, s_\\theta(x_B)$.\n",
    "- `sample_langevin` taking as argument a batch of starting points $x_1,\\dots, x_B$, a default step size `eps=0.1` and a default number of steps  `n_steps=1000`\n",
    "- `dsm_loss` taking as argumenta batch of samples $x_1,\\dots,x_B$ with a default parameter for the noise `sigma=0.1` and computing the corresponding (denoising score matching) loss above.\n",
    "- `train_epoch` taking as argument a `dataloader` and running SGD algorithm for one epoch.\n",
    "- `fit` in order to train the model for several epoch. This last method is already provided to you and will allow you to visualize your results thanks to the code below that should not be modified."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8d02b6a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import torch\n",
    "from torch import nn\n",
    "import time\n",
    "import logging\n",
    "import matplotlib.pyplot as plt\n",
    "import functools\n",
    "import os"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c148c96d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Borrowed from this repo\n",
    "#    https://github.com/kamenbliznashki/normalizing_flows\n",
    "\n",
    "def sample_2d(dataset, n_samples):\n",
    "\n",
    "    z = torch.randn(n_samples, 2)\n",
    "\n",
    "    if dataset == '8gaussians':\n",
    "        scale = 4\n",
    "        sq2 = 1/math.sqrt(2)\n",
    "        centers = [(1,0), (-1,0), (0,1), (0,-1), (sq2,sq2), (-sq2,sq2), (sq2,-sq2), (-sq2,-sq2)]\n",
    "        centers = torch.tensor([(scale * x, scale * y) for x,y in centers])\n",
    "        return sq2 * (0.5 * z + centers[torch.randint(len(centers), size=(n_samples,))])\n",
    "\n",
    "    elif dataset == '2spirals':\n",
    "        n = torch.sqrt(torch.rand(n_samples // 2)) * 540 * (2 * math.pi) / 360\n",
    "        d1x = - torch.cos(n) * n + torch.rand(n_samples // 2) * 0.5\n",
    "        d1y =   torch.sin(n) * n + torch.rand(n_samples // 2) * 0.5\n",
    "        x = torch.cat([torch.stack([ d1x,  d1y], dim=1),\n",
    "                       torch.stack([-d1x, -d1y], dim=1)], dim=0) / 3\n",
    "        return x + 0.1*z\n",
    "\n",
    "    elif dataset == 'checkerboard':\n",
    "        x1 = torch.rand(n_samples) * 4 - 2\n",
    "        x2_ = torch.rand(n_samples) - torch.randint(0, 2, (n_samples,), dtype=torch.float) * 2\n",
    "        x2 = x2_ + x1.floor() % 2\n",
    "        return torch.stack([x1, x2], dim=1) * 2\n",
    "\n",
    "    elif dataset == 'rings':\n",
    "        n_samples4 = n_samples3 = n_samples2 = n_samples // 4\n",
    "        n_samples1 = n_samples - n_samples4 - n_samples3 - n_samples2\n",
    "\n",
    "        # so as not to have the first point = last point, set endpoint=False in np; here shifted by one\n",
    "        linspace4 = torch.linspace(0, 2 * math.pi, n_samples4 + 1)[:-1]\n",
    "        linspace3 = torch.linspace(0, 2 * math.pi, n_samples3 + 1)[:-1]\n",
    "        linspace2 = torch.linspace(0, 2 * math.pi, n_samples2 + 1)[:-1]\n",
    "        linspace1 = torch.linspace(0, 2 * math.pi, n_samples1 + 1)[:-1]\n",
    "\n",
    "        circ4_x = torch.cos(linspace4)\n",
    "        circ4_y = torch.sin(linspace4)\n",
    "        circ3_x = torch.cos(linspace4) * 0.75\n",
    "        circ3_y = torch.sin(linspace3) * 0.75\n",
    "        circ2_x = torch.cos(linspace2) * 0.5\n",
    "        circ2_y = torch.sin(linspace2) * 0.5\n",
    "        circ1_x = torch.cos(linspace1) * 0.25\n",
    "        circ1_y = torch.sin(linspace1) * 0.25\n",
    "\n",
    "        x = torch.stack([torch.cat([circ4_x, circ3_x, circ2_x, circ1_x]),\n",
    "                         torch.cat([circ4_y, circ3_y, circ2_y, circ1_y])], dim=1) * 3.0\n",
    "\n",
    "        # random sample\n",
    "        x = x[torch.randint(0, n_samples, size=(n_samples,))]\n",
    "\n",
    "        # Add noise\n",
    "        return x + torch.normal(mean=torch.zeros_like(x), std=0.08*torch.ones_like(x))\n",
    "    elif dataset == 'gaussian':\n",
    "        return z + 2*torch.ones(2)\n",
    "    else:\n",
    "        raise RuntimeError('Invalid `dataset` to sample from.')\n",
    "\n",
    "def plot_data(\n",
    "    ax,\n",
    "    data,\n",
    "    range_lim=4,\n",
    "    bins=1000,\n",
    "    cmap=plt.cm.viridis\n",
    "):\n",
    "    rng = [[-range_lim, range_lim], [-range_lim, range_lim]]\n",
    "    ax.hist2d(data[:,0], data[:, 1], range=rng, bins=bins, cmap=plt.cm.viridis)\n",
    "\n",
    "def plot_scores(\n",
    "    ax,\n",
    "    mesh,\n",
    "    scores,\n",
    "    width=0.002\n",
    "):\n",
    "    \"\"\"Plot score field\n",
    "\n",
    "    Args:\n",
    "        ax (): canvas\n",
    "        mesh (np.ndarray): mesh grid\n",
    "        scores (np.ndarray): scores\n",
    "        width (float, optional): vector width. Defaults to 0.002\n",
    "    \"\"\"\n",
    "    ax.quiver(mesh[:, 0], mesh[:, 1], scores[:, 0], scores[:, 1], width=width)\n",
    "\n",
    "def plot_energy(\n",
    "    ax,\n",
    "    energy,\n",
    "    cmap=plt.cm.viridis,\n",
    "    flip_y=True\n",
    "):\n",
    "    if flip_y:\n",
    "        energy = energy[::-1] # flip y\n",
    "    ax.imshow(energy, cmap=cmap)\n",
    "    \n",
    "def plot_score_field(ax, energy_model):\n",
    "    mesh, scores = sample_score_field(\n",
    "        energy_model.score,\n",
    "        device=energy_model.device\n",
    "    )\n",
    "    # draw scores\n",
    "    ax.grid(False)\n",
    "    ax.axis('off')\n",
    "    plot_scores(ax, mesh, scores)\n",
    "    ax.set_title('Estimated scores', fontsize=16)\n",
    "\n",
    "def plot_energy_field(ax, energy_model):\n",
    "    energy = sample_energy_field(\n",
    "        energy_model.energy_fn,\n",
    "        device=energy_model.device\n",
    "    )\n",
    "    # draw energy\n",
    "    ax.grid(False)\n",
    "    ax.axis('off')\n",
    "    plot_energy(ax, energy)\n",
    "    ax.set_title('Estimated energy', fontsize=16)\n",
    "\n",
    "def plot_samples(ax, energy_model, steps, eps):\n",
    "    samples = []\n",
    "    for i in range(1000):\n",
    "        x = torch.rand(1000, 2) * 8 - 4\n",
    "        x = x.to(device=energy_model.device)\n",
    "        x = energy_model.sample_langevin(\n",
    "            x,\n",
    "            n_steps=steps,\n",
    "            eps=eps\n",
    "        ).detach().cpu().numpy()\n",
    "        samples.append(x)\n",
    "    samples = np.concatenate(samples, axis=0)\n",
    "    # draw energy\n",
    "    ax.grid(False)\n",
    "    ax.axis('off')\n",
    "    plot_data(ax, samples)\n",
    "    ax.set_title('Sampled data', fontsize=16)\n",
    "\n",
    "def sample_score_field(\n",
    "    score_fn,\n",
    "    range_lim=4,\n",
    "    grid_size=50,\n",
    "    device='cpu'\n",
    "):\n",
    "    \"\"\"Sampling score field from an energy model\n",
    "\n",
    "    Args:\n",
    "        score_fn (callable): a score function with the following sign\n",
    "            func(x: torch.Tensor) -> torch.Tensor\n",
    "        range_lim (int, optional): Range of x, y coordimates. Defaults to 4.\n",
    "        grid_size (int, optional): Grid size. Defaults to 50.\n",
    "        device (str, optional): torch device. Defaults to 'cpu'.\n",
    "    \"\"\"\n",
    "    mesh = []\n",
    "    x = np.linspace(-range_lim, range_lim, grid_size)\n",
    "    y = np.linspace(-range_lim, range_lim, grid_size)\n",
    "    for i in x:\n",
    "        for j in y:\n",
    "            mesh.append(np.asarray([i, j]))\n",
    "    mesh = np.stack(mesh, axis=0)\n",
    "    x = torch.from_numpy(mesh).float()\n",
    "    x = x.to(device=device)\n",
    "    scores = score_fn(x.detach()).detach()\n",
    "    scores = scores.cpu().numpy()\n",
    "    return mesh, scores\n",
    "\n",
    "def sample_energy_field(\n",
    "    energy_fn,\n",
    "    range_lim=4,\n",
    "    grid_size=1000,\n",
    "    device='cpu'\n",
    "):\n",
    "    \"\"\"Sampling energy field from an energy model\n",
    "\n",
    "    Args:\n",
    "        energy_fn (callable): an energy function with the following sign\n",
    "            func(x: torch.Tensor) -> torch.Tensor\n",
    "        range_lim (int, optional): range of x, y coordinates. Defaults to 4.\n",
    "        grid_size (int, optional): grid size. Defaults to 1000.\n",
    "        device (str, optional): torch device. Defaults to 'cpu'.\n",
    "    \"\"\"\n",
    "    energy = []\n",
    "    x = np.linspace(-range_lim, range_lim, grid_size)\n",
    "    y = np.linspace(-range_lim, range_lim, grid_size)\n",
    "    for i in y:\n",
    "        mesh = []\n",
    "        for j in x:\n",
    "            mesh.append(np.asarray([j, i]))\n",
    "        mesh = np.stack(mesh, axis=0)\n",
    "        inputs = torch.from_numpy(mesh).float()\n",
    "        inputs = inputs.to(device=device)\n",
    "        e = energy_fn(inputs.detach()).detach()\n",
    "        e = e.view(grid_size).cpu().numpy()\n",
    "        energy.append(e)\n",
    "    energy = np.stack(energy, axis=0) # (grid_size, grid_size)\n",
    "    return energy\n",
    "    \n",
    "    \n",
    "def visualize(energy_model, data, steps, eps):\n",
    "    fig, axs = plt.subplots(figsize=(24, 6), ncols=4)\n",
    "    # draw data samples\n",
    "    axs[0].grid(False)\n",
    "    axs[0].axis('off')\n",
    "    plot_data(axs[0], data)\n",
    "    axs[0].set_title('Ground truth data', fontsize=16)\n",
    "    plot_samples(axs[1], energy_model, steps, eps)\n",
    "    plot_energy_field(axs[2], energy_model)\n",
    "    plot_score_field(axs[3], energy_model)\n",
    "    for ax in axs:\n",
    "        ax.set_box_aspect(1)\n",
    "    plt.tight_layout()\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "557c7769",
   "metadata": {},
   "source": [
    "## Dataset\n",
    "\n",
    "Below is an histogram showing the final dataset that will be used to learn the EDM. You can change of dtaset by choosing an appropriate name."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "21c7f66c",
   "metadata": {},
   "outputs": [],
   "source": [
    "data_name = 'checkerboard'\n",
    "size = 1000000\n",
    "data_np = sample_2d(dataset=data_name, n_samples=size).numpy()\n",
    "plot_data(plt, data_np)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "61cc4848",
   "metadata": {},
   "source": [
    "## Toy example: Gaussian\n",
    "\n",
    "Before doing the genral EBM, we will deal with a simple example to see how to compute automatically scores and then running SGD.\n",
    "\n",
    "First, the code below shows you how to use [`torch.autograd.grad`](https://pytorch.org/docs/stable/generated/torch.autograd.grad.html) in order to compute score:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a763d6b4",
   "metadata": {},
   "outputs": [],
   "source": [
    "theta = torch.zeros(2)\n",
    "data = torch.randn(10,2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9b87244f",
   "metadata": {},
   "outputs": [],
   "source": [
    "x = data\n",
    "x = x.requires_grad_()\n",
    "logp = -torch.sum((x-theta)**2)/2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d2feb60e",
   "metadata": {},
   "outputs": [],
   "source": [
    "score = torch.autograd.grad(logp, x)[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ebfe8f07",
   "metadata": {},
   "outputs": [],
   "source": [
    "def my_score(x, theta=theta):\n",
    "    return theta-x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "14bb8978",
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.allclose(score, my_score(data))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0f1aaaf6",
   "metadata": {},
   "source": [
    "However, we want to take another derivative with respect to the parameter $\\theta$ for the SGD algorithm. Let see how it goes with a fake loss as defined below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f8b663dc",
   "metadata": {},
   "outputs": [],
   "source": [
    "x = data\n",
    "x = x.requires_grad_()\n",
    "theta = theta.requires_grad_()\n",
    "logp = -torch.sum((x-theta)**2)/2\n",
    "score = torch.autograd.grad(logp, x)[0]\n",
    "fake_loss = score.norm()**2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e3db3c8c",
   "metadata": {},
   "outputs": [],
   "source": [
    "theta.requires_grad"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "48eca4b0",
   "metadata": {},
   "outputs": [],
   "source": [
    "theta.grad"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a7b936b0",
   "metadata": {},
   "source": [
    "The code below will produce an error because by default, PyTorch do not keep the computation graph for the derivatives."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9dabe637",
   "metadata": {},
   "outputs": [],
   "source": [
    "# produce an error\n",
    "#fake_loss.backward()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0e82a41",
   "metadata": {},
   "source": [
    "Using `create_graph=True`, we tell PyTorch that we will take another derivative of score (which is already a derivative w.r.t. $x$)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ab89cc59",
   "metadata": {},
   "outputs": [],
   "source": [
    "x = data\n",
    "x = x.requires_grad_()\n",
    "theta = theta.requires_grad_()\n",
    "logp = -torch.sum((x-theta)**2)/2\n",
    "score = torch.autograd.grad(logp, x, create_graph=True)[0]\n",
    "fake_loss = score.norm()**2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d06268c",
   "metadata": {},
   "outputs": [],
   "source": [
    "fake_loss.backward()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9ff921b9",
   "metadata": {},
   "outputs": [],
   "source": [
    "theta.grad"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9aaaf808",
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.allclose(theta.grad, 2*my_score(x).sum(0))"
   ]
  },
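  {
   "cell_type": "markdown",
   "id": "d3e4f5a6",
   "metadata": {},
   "source": [
    "To see why this is the expected value: here `fake_loss` is $\\sum_{i=1}^{10}\\|\\theta - x_i\\|^2$, so its gradient w.r.t. $\\theta$ is $\\sum_{i=1}^{10} 2(\\theta - x_i)$, which is exactly `2*my_score(x).sum(0)`."
   ]
  },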
  {
   "cell_type": "markdown",
   "id": "c6f1be1c",
   "metadata": {},
   "source": [
    "### Question 5 (code)\n",
    "\n",
    "Recode the example above with a function computing the energy, the score and finally sample with Langevin MCMC. All functions should work with batches."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0e02237a",
   "metadata": {},
   "outputs": [],
   "source": [
    "theta = torch.zeros(2)\n",
    "def energy_fn(x, theta=theta):\n",
    "    # your code here"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6e74a8a3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# should be True\n",
    "torch.allclose(torch.sum(energy_fn(x)), -logp)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "27958156",
   "metadata": {},
   "outputs": [],
   "source": [
    "def score(x, fn = energy_fn):\n",
    "    # your code here"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "75a9ac7f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# should be True\n",
    "torch.allclose(score(x), my_score(x))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "69705ee6",
   "metadata": {},
   "outputs": [],
   "source": [
    "def sample_langevin(x, score=score, eps=0.1, n_steps=1000):\n",
    "    # your code here"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f90809d0",
   "metadata": {},
   "outputs": [],
   "source": [
    "data_name = 'gaussian'\n",
    "size = 1000000\n",
    "data_np = sample_2d(dataset=data_name, n_samples=size).numpy()\n",
    "dtype=torch.float32\n",
    "data_t = torch.from_numpy(data_np).type(dtype)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "591b8ac1",
   "metadata": {},
   "outputs": [],
   "source": [
    "data_simu = sample_langevin(data_t, eps=0.1, n_steps=100)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cca12a36",
   "metadata": {},
   "outputs": [],
   "source": [
    "fig, axs = plt.subplots(figsize=(24, 6), ncols=2)\n",
    "plot_data(axs[0], data_np)\n",
    "axs[0].set_title('Ground truth data', fontsize=16)\n",
    "plot_data(axs[1], data_simu.detach().numpy())\n",
    "axs[1].set_title('Langevin sampling (before fit)', fontsize=16);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6b4ac373",
   "metadata": {},
   "source": [
    "Here, you should have a mismatch between the ground truth and the Langevin sampling which is expected as you did not code the training step. The mean of the Gaussian for the ground truth is $(2,2)$ and the default mean for your (Gaussian) EBM  is $(0,0)$. So now you will code the training of your model. \n",
    "\n",
    "### Question 6 (code)\n",
    "\n",
    "Now repackage your code above into a python class with methods:\n",
    "- `energy_fn` taking as argument a batch of samples $x_1,\\dots,x_B$ and computing the corresponding energies $(x_1-\\theta)^2/2,\\dots, (x_B-\\theta)^2/2$.\n",
    "- `score` taking as argument a batch of samples $x_1,\\dots,x_B$ and computing the corresponding scores $s_\\theta(x_1),\\dots, s_\\theta(x_B)$.\n",
    "- `sample_langevin` taking as argument a batch of starting points $x_1,\\dots, x_B$, a default step size `eps=0.1` and a default number of steps  `n_steps=1000`\n",
    "- `dsm_loss` taking as argumenta batch of samples $x_1,\\dots,x_B$ with a default parameter for the noise `sigma=0.1` and computing the corresponding (denoising score matching) loss.\n",
    "- `train_epoch` taking as argument a `dataloader` and running SGD algorithm for one epoch.\n",
    "- `fit` in order to train the model for several epoch (given to you below)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3bf4128f",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Energy_Gaussian():\n",
    "    def __init__(self, theta=theta, learning_rate =1e-3):\n",
    "        self.theta = theta.requires_grad_()\n",
    "        self.learning_rate = learning_rate\n",
    "        self.optimizer = torch.optim.Adam([self.theta], lr=self.learning_rate)\n",
    "        \n",
    "        \n",
    "    def energy_fn(self, x):\n",
    "        # your code here\n",
    "    \n",
    "    def score(self, x):\n",
    "        # your code here\n",
    "    \n",
    "    def sample_langevin(self, x, eps=0.1, n_steps=1000):\n",
    "        # your code here\n",
    "    \n",
    "    def dsm_loss(self, x, sigma=0.1):\n",
    "        # your code here\n",
    "    \n",
    "    def train_epoch(self, dataloader):\n",
    "        all_losses = []\n",
    "        # your code here\n",
    "        m_loss = np.mean(all_losses).astype(np.float32)\n",
    "        return m_loss\n",
    "    \n",
    "    def fit(self, train_dataloader, \n",
    "            n_epochs = 1,\n",
    "            log_freq = 1,\n",
    "            vis_freq = 1,\n",
    "            vis_callback = None):\n",
    "        \n",
    "        total_epochs = n_epochs\n",
    "        num_epochs = 0\n",
    "\n",
    "        for epoch in range(n_epochs):\n",
    "            num_epochs += 1\n",
    "            # train one epoch\n",
    "            loss = self.train_epoch(train_dataloader)\n",
    "            \n",
    "            if (log_freq is not None) and (num_epochs % log_freq == 0):\n",
    "                print(\n",
    "                    f\"[Epoch {num_epochs}/{total_epochs}]: loss: {loss}  theta: {self.theta.data}\"\n",
    "                )\n",
    "\n",
    "            if (vis_callback is not None) and (num_epochs % vis_freq == 0):\n",
    "                print(\"Visualizing\")\n",
    "                vis_callback(self)\n",
    "        pass"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "53e35202",
   "metadata": {},
   "outputs": [],
   "source": [
    "dtype=torch.float32\n",
    "data = torch.from_numpy(data_np).type(dtype)\n",
    "loader_train = torch.utils.data.DataLoader(data, batch_size=100, shuffle=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "16e20ee3",
   "metadata": {},
   "outputs": [],
   "source": [
    "energy_gauss = Energy_Gaussian()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "62db7175",
   "metadata": {},
   "outputs": [],
   "source": [
    "energy_gauss.fit(loader_train, vis_callback = functools.partial(\n",
    "            visualize,\n",
    "            data = data_np,\n",
    "            steps = 100,\n",
    "            eps = 0.1\n",
    "        ))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f088387f",
   "metadata": {},
   "source": [
    "We see that after only one epoch, we have a good estiamte. Now, we will deal with the more challenging dataset\n",
    "\n",
    "\n",
    "## Energy model for the checkerboard\n",
    "\n",
    "The neural network used to compute the energy is given below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7816fb75",
   "metadata": {},
   "outputs": [],
   "source": [
    "hidden_units = 128\n",
    "my_mlp = nn.Sequential(\n",
    "            nn.Linear(2, hidden_units),\n",
    "            nn.Softplus(),\n",
    "            nn.Linear(hidden_units, hidden_units),\n",
    "            nn.Softplus(),\n",
    "            nn.Linear(hidden_units, 1),\n",
    "        )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4e3a6a1a",
   "metadata": {},
   "source": [
    "### Question 7 (code)\n",
    "\n",
    "Now adapt the example above to this more general setting by creating a PyTorch module."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cbc0d2e9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# --- energy model ---\n",
    "class Energy(nn.Module):\n",
    "    def __init__(self, net, learning_rate =1e-3, device = 'cuda'):\n",
    "        super().__init__()\n",
    "        self.device = device\n",
    "        self.net = net.to(device=self.device)\n",
    "        self.learning_rate = learning_rate\n",
    "        self.optimizer = torch.optim.Adam(self.net.parameters(), lr=self.learning_rate)\n",
    "\n",
    "    def energy_fn(self, x):\n",
    "        return self.net(x)\n",
    "\n",
    "    def score(self, x):\n",
    "        # your code here\n",
    "    \n",
    "    def sample_langevin(self, x, eps=0.1, n_steps=1000):\n",
    "        # your code here\n",
    "    \n",
    "    def dsm_loss(self, x, sigma=0.1):\n",
    "        # your code here\n",
    "    \n",
    "    def train_epoch(self, dataloader):\n",
    "        # your code here\n",
    "    \n",
    "    def fit(self, train_dataloader, \n",
    "            n_epochs = 5,\n",
    "            batch_size = 100,\n",
    "            log_freq = 1,\n",
    "            vis_freq = 1,\n",
    "            vis_callback = None):\n",
    "        \n",
    "        total_epochs = n_epochs\n",
    "        num_epochs = 0\n",
    "\n",
    "        for epoch in range(n_epochs):\n",
    "            num_epochs += 1\n",
    "            # train one epoch\n",
    "            loss = self.train_epoch(train_dataloader)\n",
    "            \n",
    "            if (log_freq is not None) and (num_epochs % log_freq == 0):\n",
    "                print(\n",
    "                    f\"[Epoch {num_epochs}/{total_epochs}]: loss: {loss}\"\n",
    "                )\n",
    "\n",
    "            if (vis_callback is not None) and (num_epochs % vis_freq == 0):\n",
    "                print(\"Visualizing\")\n",
    "                self.net.eval()\n",
    "                vis_callback(self)\n",
    "                self.net.train()\n",
    "        pass\n",
    "        "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "553b0bdb",
   "metadata": {},
   "outputs": [],
   "source": [
    "data_name = 'checkerboard'\n",
    "size = 1000000\n",
    "data_np = sample_2d(dataset=data_name, n_samples=size).numpy()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "232a5e48",
   "metadata": {},
   "outputs": [],
   "source": [
    "dtype=torch.float32\n",
    "data = torch.from_numpy(data_np).type(dtype)\n",
    "loader_train = torch.utils.data.DataLoader(data, batch_size=100, shuffle=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b70bc78b",
   "metadata": {},
   "outputs": [],
   "source": [
    "energy_model = Energy(my_mlp)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "64c10112",
   "metadata": {},
   "outputs": [],
   "source": [
    "energy_model.fit(loader_train, vis_callback = functools.partial(\n",
    "            visualize,\n",
    "            data = data_np,\n",
    "            steps = 100,\n",
    "            eps = 0.01\n",
    "        ))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "17d7e165",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "dldiy",
   "language": "python",
   "name": "dldiy"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
