{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "view-in-github"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W0D5_Statistics/W0D5_Tutorial2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "# Neuromatch Academy: Precourse Week, Day 5, Tutorial 2\n",
    "# Introduction to Probability and Statistics\n",
    "\n",
    "__Content creators:__ Ulrik Beierholm\n",
    "\n",
     "__Content reviewers:__ Ethan Cheng, Manisha Sinha\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
     "# Tutorial Objectives\n",
    "\n",
    "This tutorial builds on Tutorial 1 by explaining how to do inference through inverting the generative process.\n",
    "\n",
    "By completing the exercises in this tutorial, you should:\n",
    "* understand what the likelihood function is, and have some intuition of why it is important\n",
    "* know how to summarise the Gaussian distribution using mean and variance \n",
    "* know how to maximise a likelihood function\n",
    "* be able to do simple inference in both classical and Bayesian ways\n",
     "* (Optional) understand how a Bayes net can be used to model causal relationships"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:31.055637Z",
     "iopub.status.busy": "2021-06-02T20:55:31.055070Z",
     "iopub.status.idle": "2021-06-02T20:55:31.073902Z",
     "shell.execute_reply": "2021-06-02T20:55:31.074371Z"
    }
   },
   "outputs": [],
   "source": [
     "#@markdown Tutorial slides\n",
    "\n",
    "from IPython.display import HTML\n",
    "HTML('<iframe src=\"https://mfr.ca-1.osf.io/render?url=https://osf.io/kaq2x/?direct%26mode=render%26action=download%26mode=render\" frameborder=\"0\" width=\"960\" height=\"569\" allowfullscreen=\"true\" mozallowfullscreen=\"true\" webkitallowfullscreen=\"true\"></iframe>')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Setup\n",
    "Make sure to run this before you get started"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "code",
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:31.078673Z",
     "iopub.status.busy": "2021-06-02T20:55:31.078117Z",
     "iopub.status.idle": "2021-06-02T20:55:31.606686Z",
     "shell.execute_reply": "2021-06-02T20:55:31.605587Z"
    }
   },
   "outputs": [],
   "source": [
    "# Imports\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import scipy as sp\n",
    "from numpy.random import default_rng   # a default random number generator\n",
    "from scipy.stats import norm  # the normal probability distribution"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:31.614177Z",
     "iopub.status.busy": "2021-06-02T20:55:31.613591Z",
     "iopub.status.idle": "2021-06-02T20:55:31.820995Z",
     "shell.execute_reply": "2021-06-02T20:55:31.819767Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Figure settings\n",
    "import ipywidgets as widgets       # interactive display\n",
    "from ipywidgets import interact, fixed, HBox, Layout, VBox, interactive, Label, interact_manual\n",
    "%config InlineBackend.figure_format = 'retina'\n",
    "# plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")\n",
    "plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/NMA2020/nma.mplstyle\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:31.830606Z",
     "iopub.status.busy": "2021-06-02T20:55:31.824364Z",
     "iopub.status.idle": "2021-06-02T20:55:31.841220Z",
     "shell.execute_reply": "2021-06-02T20:55:31.840723Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Plotting & Helper functions\n",
    "\n",
    "def plot_hist(data, xlabel, figtitle = None, num_bins = None):\n",
    "  \"\"\" Plot the given data as a histogram.\n",
    "\n",
    "    Args:\n",
    "      data (ndarray): array with data to plot as histogram\n",
    "      xlabel (str): label of x-axis\n",
    "      figtitle (str): title of histogram plot (default is no title)\n",
    "      num_bins (int): number of bins for histogram (default is 10)\n",
    "\n",
    "    Returns:\n",
    "      count (ndarray): number of samples in each histogram bin\n",
     "      bins (ndarray): edges of the histogram bins\n",
    "  \"\"\"\n",
    "  fig, ax = plt.subplots()\n",
    "  ax.set_xlabel(xlabel)\n",
    "  ax.set_ylabel('Count')\n",
     "  if num_bins is not None:\n",
     "    count, bins, _ = plt.hist(data, bins=num_bins)\n",
     "  else:\n",
     "    count, bins, _ = plt.hist(data)  # matplotlib defaults to 10 bins\n",
    "  if figtitle is not None:\n",
    "    fig.suptitle(figtitle, size=16)\n",
    "  plt.show()\n",
    "  return count, bins\n",
    "\n",
    "def plot_gaussian_samples_true(samples, xspace, mu, sigma, xlabel, ylabel):\n",
    "  \"\"\" Plot a histogram of the data samples on the same plot as the gaussian\n",
     "  distribution specified by the given mu and sigma values.\n",
    "\n",
    "    Args:\n",
    "      samples (ndarray): data samples for gaussian distribution\n",
    "      xspace (ndarray): x values to sample from normal distribution\n",
    "      mu (scalar): mean parameter of normal distribution\n",
     "      sigma (scalar): standard deviation parameter of normal distribution\n",
    "      xlabel (str): the label of the x-axis of the histogram\n",
    "      ylabel (str): the label of the y-axis of the histogram\n",
    "\n",
    "    Returns:\n",
    "      Nothing.\n",
    "  \"\"\"\n",
    "  fig, ax = plt.subplots()\n",
    "  ax.set_xlabel(xlabel)\n",
    "  ax.set_ylabel(ylabel)\n",
    "\n",
    "  count, bins, _ = plt.hist(samples, density=True) # probability density function\n",
    "\n",
    "  plt.plot(xspace, norm.pdf(xspace, mu, sigma),'r-')\n",
    "  plt.show()\n",
    "\n",
    "def plot_likelihoods(likelihoods, mean_vals, variance_vals):\n",
    "  \"\"\" Plot the likelihood values on a heatmap plot where the x and y axes match\n",
    "  the mean and variance parameter values the likelihoods were computed for.\n",
    "\n",
    "    Args:\n",
    "      likelihoods (ndarray): array of computed likelihood values\n",
    "      mean_vals (ndarray): array of mean parameter values for which the\n",
    "                            likelihood was computed\n",
    "      variance_vals (ndarray): array of variance parameter values for which the\n",
    "                            likelihood was computed\n",
    "\n",
    "    Returns:\n",
    "      Nothing.\n",
    "  \"\"\"\n",
    "  fig, ax = plt.subplots()\n",
    "  im = ax.imshow(likelihoods)\n",
    "\n",
    "  cbar = ax.figure.colorbar(im, ax=ax)\n",
    "  cbar.ax.set_ylabel('log likelihood', rotation=-90, va=\"bottom\")\n",
    "\n",
    "  ax.set_xticks(np.arange(len(mean_vals)))\n",
    "  ax.set_yticks(np.arange(len(variance_vals)))\n",
    "  ax.set_xticklabels(mean_vals)\n",
    "  ax.set_yticklabels(variance_vals)\n",
    "  ax.set_xlabel('Mean')\n",
    "  ax.set_ylabel('Variance')\n",
    "\n",
    "def posterior_plot(x, likelihood=None, prior=None, posterior_pointwise=None, ax=None):\n",
    "  \"\"\"\n",
    "  Plots normalized Gaussian distributions and posterior.\n",
    "\n",
    "    Args:\n",
     "        x (numpy array of floats):         points at which the distributions have been evaluated\n",
     "        likelihood (numpy array of floats): normalized likelihood evaluated at each `x` (plotted as 'Auditory')\n",
     "        prior (numpy array of floats):      normalized prior evaluated at each `x` (plotted as 'Visual')\n",
     "        posterior_pointwise (numpy array of floats): normalized posterior evaluated at each `x`\n",
    "        ax: Axis in which to plot. If None, create new axis.\n",
    "\n",
    "    Returns:\n",
     "        ax (matplotlib axes): the axis the distributions were plotted in\n",
    "  \"\"\"\n",
    "  if likelihood is None:\n",
    "      likelihood = np.zeros_like(x)\n",
    "\n",
    "  if prior is None:\n",
    "      prior = np.zeros_like(x)\n",
    "\n",
    "  if posterior_pointwise is None:\n",
    "      posterior_pointwise = np.zeros_like(x)\n",
    "\n",
    "  if ax is None:\n",
    "    fig, ax = plt.subplots()\n",
    "\n",
     "  ax.plot(x, likelihood, '-C1', linewidth=2, label='Auditory')\n",
     "  ax.plot(x, prior, '-C0', linewidth=2, label='Visual')\n",
     "  ax.plot(x, posterior_pointwise, '-C2', linewidth=2, label='Posterior')\n",
    "  ax.legend()\n",
    "  ax.set_ylabel('Probability')\n",
    "  ax.set_xlabel('Orientation (Degrees)')\n",
    "  plt.show()\n",
    "\n",
    "  return ax\n",
    "\n",
    "def plot_classical_vs_bayesian_normal(num_points, mu_classic, var_classic,\n",
    "                                      mu_bayes, var_bayes):\n",
    "  \"\"\" Helper function to plot optimal normal distribution parameters for varying\n",
    "  observed sample sizes using both classic and Bayesian inference methods.\n",
    "\n",
    "    Args:\n",
    "      num_points (int): max observed sample size to perform inference with\n",
    "      mu_classic (ndarray): estimated mean parameter for each observed sample size\n",
    "                                using classic inference method\n",
    "      var_classic (ndarray): estimated variance parameter for each observed sample size\n",
    "                                using classic inference method\n",
    "      mu_bayes (ndarray): estimated mean parameter for each observed sample size\n",
    "                                using Bayesian inference method\n",
    "      var_bayes (ndarray): estimated variance parameter for each observed sample size\n",
    "                                using Bayesian inference method\n",
    "\n",
    "    Returns:\n",
    "      Nothing.\n",
    "  \"\"\"\n",
    "  xspace = np.linspace(0, num_points, num_points)\n",
    "  fig, ax = plt.subplots()\n",
    "  ax.set_xlabel('n data points')\n",
    "  ax.set_ylabel('mu')\n",
    "  plt.plot(xspace, mu_classic,'r-', label = \"Classical\")\n",
    "  plt.plot(xspace, mu_bayes,'b-', label = \"Bayes\")\n",
    "  plt.legend()\n",
    "  plt.show()\n",
    "\n",
    "  fig, ax = plt.subplots()\n",
    "  ax.set_xlabel('n data points')\n",
    "  ax.set_ylabel('sigma^2')\n",
    "  plt.plot(xspace, var_classic,'r-', label = \"Classical\")\n",
    "  plt.plot(xspace, var_bayes,'b-', label = \"Bayes\")\n",
    "  plt.legend()\n",
    "  plt.show()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Section 1: Statistical Inference and Likelihood"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:31.848102Z",
     "iopub.status.busy": "2021-06-02T20:55:31.847512Z",
     "iopub.status.idle": "2021-06-02T20:55:31.921758Z",
     "shell.execute_reply": "2021-06-02T20:55:31.922239Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 4: Inference\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"765S2XKYoJ8\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "A generative model (such as the Gaussian distribution from the previous tutorial) allows us to make predictions about outcomes.\n",
    "\n",
    "However, after we observe $n$ data points, we can also evaluate our model (and any of its associated parameters) by calculating the **likelihood** of our model having generated each of those data points $x_i$.\n",
    "\n",
    "$$P(x_i|\\mu,\\sigma)=\\mathcal{N}(x_i,\\mu,\\sigma)$$\n",
    "\n",
    "For all data points $\\mathbf{x}=(x_1, x_2, x_3, ...x_n) $ we can then calculate the likelihood for the whole dataset by computing the product of the likelihood for each single data point.\n",
    "\n",
    "$$P(\\mathbf{x}|\\mu,\\sigma)=\\prod_{i=1}^n \\mathcal{N}(x_i,\\mu,\\sigma)$$\n",
    "\n",
    "As a function of the parameters (when the data points $x$ are fixed), this is referred to as the **likelihood function**, $L(\\mu,\\sigma)$.\n",
    "\n",
     "In the last tutorial we reviewed how the data were generated given the selected parameters of the generative process. If we do not know the parameters $\\mu$, $\\sigma$ that generated the data, we can ask which parameter values (given our model) give the best (highest) likelihood.\n"
   ]
  },
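  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration (a minimal sketch, not part of the exercises; the data values below are made up), the product of per-point likelihoods is normally computed as a sum of log-likelihoods to avoid numerical underflow:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: likelihood of a few hypothetical data points under N(mu, sigma)\n",
    "import numpy as np\n",
    "from scipy.stats import norm\n",
    "\n",
    "x_demo = np.array([4.2, 5.1, 5.8])  # hypothetical observations\n",
    "mu_demo, sigma_demo = 5.0, 1.0\n",
    "\n",
    "# Product of per-point likelihoods (can underflow for large n)\n",
    "likelihood = np.prod(norm.pdf(x_demo, mu_demo, sigma_demo))\n",
    "\n",
    "# Equivalent but numerically safer: sum of log-likelihoods\n",
    "log_likelihood = np.sum(norm.logpdf(x_demo, mu_demo, sigma_demo))\n",
    "\n",
    "print(likelihood, np.exp(log_likelihood))  # the two agree"
   ]
  },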
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exercise 1A: Likelihood, mean and variance\n",
    "\n",
    "\n",
    "We can use the likelihood to find the set of parameters that are most likely to have generated the data (given the model we are using). That is, we want to infer the parameters that gave rise to the data we observed. We will try a couple of ways of doing statistical inference.\n",
    "\n",
    "In the following exercise, we will sample from the Gaussian distribution (again), plot a histogram and the Gaussian probability density function, and calculate some statistics from the samples.\n",
    "\n",
    "Specifically we will calculate:\n",
    "\n",
    "*   Likelihood\n",
    "*   Mean\n",
    "*   Standard deviation\n",
    "\n",
     "Statistical moments are defined in terms of expectations: the first moment is the expected value, i.e. the mean; the second central moment is the expected squared deviation from the mean, i.e. the variance; and so on.\n",
    "\n",
     "A convenient property of the Gaussian is that the sample mean and standard deviation directly estimate its two parameters, $\\mu$ and $\\sigma$.\n",
    "\n",
    "Hence using the sample mean, $\\bar{x}=\\frac{1}{n}\\sum_i x_i$, and variance, $\\bar{\\sigma}^2=\\frac{1}{n} \\sum_i (x_i-\\bar{x})^2 $ should give us the best/maximum likelihood, $L(\\bar{x},\\bar{\\sigma}^2)$.\n",
    "\n",
    "Let's see if that actually works. If we search through different combinations of $\\mu$ and $\\sigma$ values, do the sample mean and variance values give us the maximum likelihood (of observing our data)?\n",
    "\n",
     "You need to modify two lines below: one to generate the data from a normal distribution $N(5, 1)$, and one to plot the theoretical distribution. Note that we are reusing functions from Tutorial 1, so review that tutorial if needed. You will then use this random sample to calculate the likelihood for a variety of potential mean and variance parameter values. For this tutorial we have chosen a variance of 1, so the standard deviation is also 1. Most of our functions take the standard deviation sigma as a parameter, so we will write $\\sigma = 1$.\n",
    "\n",
     "(Note that in practice the unbiased sample variance $$\\bar{\\sigma}^2=\\frac{1}{n-1} \\sum_i (x_i-\\bar{x})^2 $$ is usually preferred; see any statistics textbook for why the $n-1$ correction removes the bias.)"
   ]
  },
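  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a small aside (a sketch, not part of the exercise), NumPy exposes both the $1/n$ and the $1/(n-1)$ versions of the sample variance through its `ddof` argument:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: biased (1/n) vs unbiased (1/(n-1)) sample variance in NumPy\n",
    "import numpy as np\n",
    "from numpy.random import default_rng\n",
    "\n",
    "data_demo = default_rng(42).normal(5, 1, 100)\n",
    "n = data_demo.shape[0]\n",
    "\n",
    "var_biased = np.var(data_demo)            # divides by n (ddof=0, the default)\n",
    "var_unbiased = np.var(data_demo, ddof=1)  # divides by n - 1\n",
    "\n",
    "print(var_biased * n / (n - 1))  # equals var_unbiased"
   ]
  },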
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:31.931645Z",
     "iopub.status.busy": "2021-06-02T20:55:31.924957Z",
     "iopub.status.idle": "2021-06-02T20:55:32.033164Z",
     "shell.execute_reply": "2021-06-02T20:55:32.032535Z"
    }
   },
   "outputs": [],
   "source": [
    "def generate_normal_samples(mu, sigma, num_samples):\n",
    "  \"\"\" Generates a desired number of samples from a normal distribution,\n",
    "  Normal(mu, sigma).\n",
    "\n",
    "  Args:\n",
    "    mu (scalar): mean parameter of the normal distribution\n",
    "    sigma (scalar): standard deviation parameter of the normal distribution\n",
    "    num_samples (int): number of samples drawn from normal distribution\n",
    "\n",
    "  Returns:\n",
     "    sampled_values (ndarray): an array of shape (num_samples,) containing the samples\n",
    "  \"\"\"\n",
    "  random_num_generator = default_rng(0)\n",
    "  sampled_values = random_num_generator.normal(mu, sigma, num_samples)\n",
    "  return sampled_values\n",
    "\n",
    "def compute_likelihoods_normal(x, mean_vals, variance_vals):\n",
     "  \"\"\" Computes the log-likelihood values given an observed data sample x and\n",
    "  potential mean and variance values for a normal distribution\n",
    "\n",
    "    Args:\n",
    "      x (ndarray): 1-D array with all the observed data\n",
    "      mean_vals (ndarray): 1-D array with all potential mean values to\n",
    "                              compute the likelihood function for\n",
     "      variance_vals (ndarray): 1-D array with all potential variance values to\n",
    "                              compute the likelihood function for\n",
    "\n",
    "    Returns:\n",
     "      likelihood (ndarray): 2-D array of shape (number of variance_vals,\n",
     "                              number of mean_vals) with the log-likelihood\n",
     "                              of the observed data for each parameter pair\n",
    "  \"\"\"\n",
    "  # Initialise likelihood collection array\n",
     "  likelihood = np.zeros((variance_vals.shape[0], mean_vals.shape[0]))\n",
    "\n",
     "  # Compute the likelihood of observing the given data x assuming\n",
    "  # each combination of mean and variance values\n",
    "  for idxMean in range(mean_vals.shape[0]):\n",
    "    for idxVar in range(variance_vals.shape[0]):\n",
     "      likelihood[idxVar, idxMean] = np.sum(np.log(norm.pdf(x, mean_vals[idxMean],\n",
     "                                            np.sqrt(variance_vals[idxVar]))))\n",
    "\n",
    "  return likelihood\n",
    "\n",
    "###################################################################\n",
    "## TODO for students: Generate 1000 random samples from a normal distribution\n",
    "## with mu = 5 and sigma = 1\n",
    "# Fill out the following then remove\n",
    "raise NotImplementedError(\"Student exercise: need to generate samples\")\n",
    "###################################################################\n",
    "\n",
    "# Generate data\n",
    "mu = 5\n",
    "sigma = 1  # since variance = 1, sigma = 1\n",
    "x = ...\n",
    "\n",
    "# You can calculate mean and variance through either numpy or scipy\n",
    "print(\"This is the sample mean as estimated by numpy: \" + str(np.mean(x)))\n",
    "print(\"This is the sample standard deviation as estimated by numpy: \" + str(np.std(x)))\n",
     "# or via scipy\n",
     "stats_x = sp.stats.describe(x)\n",
     "print(\"This is the sample mean as estimated by scipy: \" + str(stats_x.mean))\n",
     "print(\"This is the sample variance as estimated by scipy: \" + str(stats_x.variance))\n",
    "\n",
    "###################################################################\n",
    "## TODO for students: Use the given function to compute the likelihood for\n",
    "## a variety of mean and variance values\n",
    "# Fill out the following then remove\n",
    "raise NotImplementedError(\"Student exercise: need to compute likelihoods\")\n",
    "###################################################################\n",
    "\n",
    "# Let's look through possible mean and variance values for the highest likelihood\n",
    "# using the compute_likelihood function\n",
    "meanTest = np.linspace(1, 10, 10) # potential mean values to try\n",
    "varTest = np.array([0.7, 0.8, 0.9, 1, 1.2, 1.5, 2, 3, 4, 5]) # potential variance values to try\n",
    "likelihoods = ...\n",
    "\n",
     "# Uncomment once you've generated the samples and computed the likelihoods\n",
    "# xspace = np.linspace(0, 10, 100)\n",
     "# plot_gaussian_samples_true(x, xspace, mu, sigma, \"x\", \"Probability density\")\n",
    "# plot_likelihoods(likelihoods, meanTest, varTest)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:32.043714Z",
     "iopub.status.busy": "2021-06-02T20:55:32.041012Z",
     "iopub.status.idle": "2021-06-02T20:55:32.758852Z",
     "shell.execute_reply": "2021-06-02T20:55:32.758342Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "\n",
    "def generate_normal_samples(mu, sigma, num_samples):\n",
    "  \"\"\" Generates a desired number of samples from a normal distribution,\n",
    "  Normal(mu, sigma).\n",
    "\n",
    "  Args:\n",
    "    mu (scalar): mean parameter of the normal distribution\n",
    "    sigma (scalar): standard deviation parameter of the normal distribution\n",
    "    num_samples (int): number of samples drawn from normal distribution\n",
    "\n",
    "  Returns:\n",
     "    sampled_values (ndarray): an array of shape (num_samples,) containing the samples\n",
    "  \"\"\"\n",
    "  random_num_generator = default_rng(0)\n",
    "  sampled_values = random_num_generator.normal(mu, sigma, num_samples)\n",
    "  return sampled_values\n",
    "\n",
    "def compute_likelihoods_normal(x, mean_vals, variance_vals):\n",
     "  \"\"\" Computes the log-likelihood values given an observed data sample x and\n",
    "  potential mean and variance values for a normal distribution\n",
    "\n",
    "    Args:\n",
    "      x (ndarray): 1-D array with all the observed data\n",
    "      mean_vals (ndarray): 1-D array with all potential mean values to\n",
    "                              compute the likelihood function for\n",
     "      variance_vals (ndarray): 1-D array with all potential variance values to\n",
    "                              compute the likelihood function for\n",
    "\n",
    "    Returns:\n",
     "      likelihood (ndarray): 2-D array of shape (number of variance_vals,\n",
     "                              number of mean_vals) with the log-likelihood\n",
     "                              of the observed data for each parameter pair\n",
    "  \"\"\"\n",
    "  # Initialise likelihood collection array\n",
     "  likelihood = np.zeros((variance_vals.shape[0], mean_vals.shape[0]))\n",
    "\n",
     "  # Compute the likelihood of observing the given data x assuming\n",
    "  # each combination of mean and variance values\n",
    "  for idxMean in range(mean_vals.shape[0]):\n",
    "    for idxVar in range(variance_vals.shape[0]):\n",
     "      likelihood[idxVar, idxMean] = np.sum(np.log(norm.pdf(x, mean_vals[idxMean],\n",
     "                                            np.sqrt(variance_vals[idxVar]))))\n",
    "\n",
    "  return likelihood\n",
    "\n",
    "# Generate data\n",
    "mu = 5\n",
    "sigma = 1  # since variance = 1, sigma = 1\n",
    "x = generate_normal_samples(mu, sigma, 1000)\n",
    "\n",
    "# You can calculate mean and variance through either numpy or scipy\n",
    "print(\"This is the sample mean as estimated by numpy: \" + str(np.mean(x)))\n",
    "print(\"This is the sample standard deviation as estimated by numpy: \" + str(np.std(x)))\n",
     "# or via scipy\n",
     "stats_x = sp.stats.describe(x)\n",
     "print(\"This is the sample mean as estimated by scipy: \" + str(stats_x.mean))\n",
     "print(\"This is the sample variance as estimated by scipy: \" + str(stats_x.variance))\n",
    "\n",
    "# Let's look through possible mean and variance values for the highest likelihood\n",
    "# using the compute_likelihood function\n",
    "meanTest = np.linspace(1, 10, 10) # potential mean values to try\n",
    "varTest = np.array([0.7, 0.8, 0.9, 1, 1.2, 1.5, 2, 3, 4, 5]) # potential variance values to try\n",
    "likelihoods = compute_likelihoods_normal(x, meanTest, varTest)\n",
    "\n",
    "xspace = np.linspace(0, 10, 100)\n",
    "with plt.xkcd():\n",
     "  plot_gaussian_samples_true(x, xspace, mu, sigma, \"x\", \"Probability density\")\n",
    "  plot_likelihoods(likelihoods, meanTest, varTest)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The top figure should hopefully show a nice fit between the histogram and the distribution that generated the data. So far so good.\n",
    "\n",
    "Underneath you should see the sample mean and variance values, which are close to the true values (that we happen to know here).\n",
    "\n",
     "In the heatmap we should be able to see that the mean and variance parameter values yielding the highest likelihood (yellow) correspond (roughly) to the sample mean and variance calculated from the dataset.\n",
     "But this can be hard to see in such a rough **grid-search** simulation, which is only as precise as the resolution of the grid we are searching.\n",
    "\n",
    "Implicitly, by looking for the parameters that give the highest likelihood, we have been searching for the **maximum likelihood** estimate.\n",
     "$$(\\hat{\\mu},\\hat{\\sigma})=\\underset{\\mu,\\sigma}{\\operatorname{argmax}}\\, L(\\mu,\\sigma)=\\underset{\\mu,\\sigma}{\\operatorname{argmax}} \\prod_{i=1}^n \\mathcal{N}(x_i,\\mu,\\sigma)$$\n",
    "\n",
     "For a simple Gaussian this maximisation can actually be done analytically (you may well have done so yourself), using the sample mean and standard deviation (variance).\n",
    "\n",
     "In the next section we will look at other ways of inferring such parameters."
   ]
  },
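  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check (a sketch under the same generative assumptions as above), the analytic maximum-likelihood estimates for a Gaussian are simply the sample mean and the $1/n$ sample variance:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: analytic maximum-likelihood estimates for a Gaussian\n",
    "import numpy as np\n",
    "from numpy.random import default_rng\n",
    "\n",
    "data_demo = default_rng(1).normal(5, 1, 1000)\n",
    "\n",
    "mu_hat = np.mean(data_demo)                   # MLE for mu\n",
    "var_hat = np.mean((data_demo - mu_hat) ** 2)  # MLE for sigma^2 (same as np.var)\n",
    "\n",
    "print(mu_hat, var_hat)  # should be close to the true values 5 and 1"
   ]
  },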
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Interactive Demo: Maximum likelihood inference"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "We want to do inference on this data set, i.e. we want to infer the parameters that most likely gave rise to the data given our model. Intuitively, that means we want the best possible fit between the observed data and the probability density function under the inferred parameters.\n",
    "\n",
    "For now, just try to see how well you can fit the probability distribution to the data by using the demo sliders to control the mean and standard deviation parameters of the distribution."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:32.790479Z",
     "iopub.status.busy": "2021-06-02T20:55:32.765314Z",
     "iopub.status.idle": "2021-06-02T20:55:33.048650Z",
     "shell.execute_reply": "2021-06-02T20:55:33.048139Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title\n",
    "\n",
    "#@markdown Make sure you execute this cell to enable the widget and fit by hand!\n",
    "vals = generate_normal_samples(mu, sigma, 1000)\n",
    "def plotFnc(mu,sigma):\n",
    "  #prepare to plot\n",
    "  fig, ax = plt.subplots()\n",
    "  ax.set_xlabel('x')\n",
    "  ax.set_ylabel('probability')\n",
     "  loglikelihood = np.sum(np.log(norm.pdf(vals, mu, sigma)))\n",
    "\n",
    "  #calculate histogram\n",
    "  count, bins, ignored = plt.hist(vals,density=True)\n",
    "  x = np.linspace(0,10,100)\n",
    "  #plot\n",
    "  plt.plot(x, norm.pdf(x,mu,sigma),'r-')\n",
    "  plt.show()\n",
    "  print(\"The log-likelihood for the selected parameters is: \" + str(loglikelihood))\n",
    "\n",
    "interact(plotFnc, mu=(0.0, 10.0, 0.1),sigma=(0.1, 10.0, 0.1));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Did you notice the number below the plot? That is the summed log-likelihood, which increases (becomes less negative) as the fit improves. The log-likelihood should be greatest when $\\mu$ = 5 and $\\sigma$ = 1.\n",
    "\n",
    "Building upon what we did in the previous exercise, we want to see if we can do inference on observed data in a bit more principled way.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exercise 1B: Maximum Likelihood Estimation\n",
    "\n",
     "Let's again assume that we have a data set, $\\mathbf{x}$, generated by a normal distribution (we actually generate it ourselves at the top of the code cell, so we know how it was generated!).\n",
    "We want to maximise the likelihood of the parameters $\\mu$ and $\\sigma^2$. We can do so using a couple of tricks:\n",
    "\n",
    "*   Using a log transform will not change the maximum of the function, but will allow us to work with very small numbers that could lead to problems with machine precision.\n",
     "*   Maximising a function is the same as minimising its negative, which lets us use the `minimize` routine provided by `scipy.optimize`.\n",
    "\n",
    "In the code below, insert the missing line (see the `compute_likelihoods_normal` function from previous exercise), with the mean as theta[0] and variance as theta[1].\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:33.059797Z",
     "iopub.status.busy": "2021-06-02T20:55:33.059127Z",
     "iopub.status.idle": "2021-06-02T20:55:33.060866Z",
     "shell.execute_reply": "2021-06-02T20:55:33.060367Z"
    }
   },
   "outputs": [],
   "source": [
    "mu = 5\n",
    "sigma = 1\n",
    "\n",
    "# Generate 1000 random samples from a Gaussian distribution\n",
    "dataX = generate_normal_samples(mu, sigma, 1000)\n",
    "\n",
    "# We define the function to optimise, the negative log likelihood\n",
    "def negLogLike(theta):\n",
    "  \"\"\" Function for computing the negative log-likelihood given the observed data\n",
    "  and given parameter values stored in theta.\n",
    "\n",
    "    Args:\n",
    "      theta (ndarray): normal distribution parameters (mean is theta[0],\n",
    "                            variance is theta[1])\n",
    "\n",
    "    Returns:\n",
    "      Calculated negative Log Likelihood value!\n",
    "  \"\"\"\n",
    "  ###################################################################\n",
    "  ## TODO for students: Compute the negative log-likelihood value for the\n",
    "  ## given observed data values and parameters (theta)\n",
    "  # Fill out the following then remove\n",
    "  raise NotImplementedError(\"Student exercise: need to compute the negative \\\n",
    "                                log-likelihood value\")\n",
    "  ###################################################################\n",
    "  return ...\n",
    "\n",
    "# Define bounds, var has to be positive\n",
    "bnds = ((None, None), (0, None))\n",
    "\n",
    "# Optimize with scipy!\n",
    "# Uncomment once function above is implemented\n",
    "# optimal_parameters = sp.optimize.minimize(negLogLike, (2, 2), bounds = bnds)\n",
    "# print(\"The optimal mean estimate is: \" + str(optimal_parameters.x[0]))\n",
    "# print(\"The optimal variance estimate is: \" + str(optimal_parameters.x[1]))\n",
    "\n",
    "# optimal_parameters contains a lot of information about the optimization,\n",
    "# but we mostly want the mean and standard deviation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:33.067097Z",
     "iopub.status.busy": "2021-06-02T20:55:33.066524Z",
     "iopub.status.idle": "2021-06-02T20:55:33.083367Z",
     "shell.execute_reply": "2021-06-02T20:55:33.082878Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "\n",
    "mu = 5\n",
    "sigma = 1\n",
    "\n",
    "# Generate 1000 random samples from a Gaussian distribution\n",
    "dataX = generate_normal_samples(mu, sigma, 1000)\n",
    "\n",
    "# We define the function to optimise, the negative log likelihood\n",
    "def negLogLike(theta):\n",
    "  \"\"\" Function for computing the negative log-likelihood given the observed data\n",
    "  and given parameter values stored in theta.\n",
    "\n",
    "    Args:\n",
    "      theta (ndarray): normal distribution parameters (mean is theta[0],\n",
    "                            standard deviation is theta[1])\n",
    "\n",
    "    Returns:\n",
    "      The calculated negative log-likelihood value.\n",
    "  \"\"\"\n",
    "  return -sum(np.log(norm.pdf(dataX, theta[0], theta[1])))\n",
    "\n",
    "# Define bounds; the standard deviation has to be positive\n",
    "bnds = ((None, None), (0, None))\n",
    "\n",
    "# Optimize with scipy!\n",
    "optimal_parameters = sp.optimize.minimize(negLogLike, (2, 2), bounds=bnds)\n",
    "print(\"The optimal mean estimate is: \" + str(optimal_parameters.x[0]))\n",
    "print(\"The optimal standard deviation estimate is: \" + str(optimal_parameters.x[1]))\n",
    "\n",
    "# optimal_parameters contains a lot of information about the optimization,\n",
    "# but we mostly want the mean and standard deviation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These are the estimates of the parameters that maximise the likelihood. Your exact values will vary from run to run, since the data are randomly sampled, but they should lie close to the true values.\n",
    "\n",
    "Compare these values to the first and second moment (sample mean and variance) from the previous exercise, as well as to the true values (which we only know because we generated the numbers!). Consider the relationship discussed about statistical moments and maximising likelihood.\n",
    "\n",
    "Go back to the previous exercise and modify the mean and standard deviation values used to generate the observed data $x$, and verify that the values still work out."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:33.088289Z",
     "iopub.status.busy": "2021-06-02T20:55:33.087470Z",
     "iopub.status.idle": "2021-06-02T20:55:33.090197Z",
     "shell.execute_reply": "2021-06-02T20:55:33.090655Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove explanation\n",
    "\n",
    "\"\"\" You should notice that the parameters estimated by maximum likelihood\n",
    "estimation/inference are very close to the true parameters (mu = 5, sigma = 1),\n",
    "as well as the parameters estimated in Exercise 1A where all likelihood values\n",
    "were calculated explicitly. You should also see that changing the mean and\n",
    "sigma parameter values (and generating new data from a distribution with these\n",
    "parameters) makes no difference as MLE methods can still recover these\n",
    "parameters.\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Section 2: Bayesian Inference"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:33.095923Z",
     "iopub.status.busy": "2021-06-02T20:55:33.095354Z",
     "iopub.status.idle": "2021-06-02T20:55:33.162492Z",
     "shell.execute_reply": "2021-06-02T20:55:33.162964Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 5: Bayes\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"12tk5FsVMBQ\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "For Bayesian inference we do not focus on the likelihood function $L(y)=P(x|y)$, but instead focus on the posterior distribution: \n",
    "\n",
    "$$P(y|x)=\\frac{P(x|y)P(y)}{P(x)}$$\n",
    "\n",
    "which is composed of the likelihood function $P(x|y)$, the prior $P(y)$ and a normalising term $P(x)$ (which we will ignore for now).\n",
    "\n",
    "While there are other advantages to using Bayesian inference (such as the ability to derive Bayesian Nets, see optional bonus task below), we will first mostly focus on the role of the prior in inference."
   ]
  },
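  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick numerical illustration of Bayes' rule (a hypothetical sketch, not part of the exercises): with two candidate hypotheses, a uniform prior, and likelihoods 0.8 and 0.3 for the observed data, pointwise multiplication followed by normalisation gives the posterior.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "prior = np.array([0.5, 0.5])       # uniform prior over two hypotheses\n",
    "likelihood = np.array([0.8, 0.3])  # P(x | y) for each hypothesis\n",
    "posterior = likelihood * prior     # numerator of Bayes' rule\n",
    "posterior /= posterior.sum()       # dividing by P(x) just normalises\n",
    "print(posterior)                   # [0.72727273 0.27272727]\n",
    "```"
   ]
  },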
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exercise 2A: Performing Bayesian inference\n",
    "\n",
    "In the above sections we performed inference using maximum likelihood, i.e. finding the parameter values that maximise the likelihood of the observed data under the model.\n",
    "\n",
    "We will now repeat the inference process, but with an added Bayesian prior, and compare it to the classical inference process we did before (Section 1). When using conjugate priors we can just update the parameter values of the distributions (here Gaussian distributions). \n",
    "\n",
    "For the prior we start by guessing a mean of 6 (the mean of the pseudo-data points 5 and 7) and a variance of 1 (the variance of 5 and 7). This is a simplified way of applying a prior that allows us to simply append these two values (pseudo-data) to the real data.\n",
    "\n",
    "In the code below, complete the missing lines."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:33.173256Z",
     "iopub.status.busy": "2021-06-02T20:55:33.171816Z",
     "iopub.status.idle": "2021-06-02T20:55:33.175298Z",
     "shell.execute_reply": "2021-06-02T20:55:33.174731Z"
    }
   },
   "outputs": [],
   "source": [
    "def classic_vs_bayesian_normal(mu, sigma, num_points, prior):\n",
    "  \"\"\" Compute both classical and Bayesian inference processes over the range of\n",
    "  data sample sizes (num_points) for a normal distribution with parameters\n",
    "  mu, sigma for comparison.\n",
    "\n",
    "  Args:\n",
    "    mu (scalar): the mean parameter of the normal distribution\n",
    "    sigma (scalar): the standard deviation parameter of the normal distribution\n",
    "    num_points (int): max number of points to use for inference\n",
    "    prior (ndarray): prior data points for Bayesian inference\n",
    "\n",
    "  Returns:\n",
    "    mean_classic (ndarray): estimate mean parameter via classic inference\n",
    "    var_classic (ndarray): estimate variance parameter via classic inference\n",
    "    mean_bayes (ndarray): estimate mean parameter via Bayesian inference\n",
    "    var_bayes (ndarray): estimate variance parameter via Bayesian inference\n",
    "  \"\"\"\n",
    "\n",
    "  # Initialize the classical and Bayesian inference arrays that will estimate\n",
    "  # the normal parameters given a certain number of randomly sampled data points\n",
    "  mean_classic = np.zeros(num_points)\n",
    "  var_classic = np.zeros(num_points)\n",
    "\n",
    "  mean_bayes = np.zeros(num_points)\n",
    "  var_bayes = np.zeros(num_points)\n",
    "\n",
    "  for nData in range(num_points):\n",
    "\n",
    "    ###################################################################\n",
    "    ## TODO for students: Complete classical inference for increasingly\n",
    "    ## larger sets of random data points\n",
    "    # Fill out the following then remove\n",
    "    raise NotImplementedError(\"Student exercise: need to code classical inference\")\n",
    "    ###################################################################\n",
    "\n",
    "    # Randomly sample nData + 1 number of points\n",
    "    x = ...\n",
    "    # Compute the mean of those points and set the corresponding array entry to this value\n",
    "    mean_classic[nData] = ...\n",
    "    # Compute the variance of those points and set the corresponding array entry to this value\n",
    "    var_classic[nData] = ...\n",
    "\n",
    "    # Bayesian inference with the given prior is performed below for you\n",
    "    xsupp = np.hstack((x, prior))\n",
    "    mean_bayes[nData] = np.mean(xsupp)\n",
    "    var_bayes[nData] = np.var(xsupp)\n",
    "\n",
    "  return mean_classic, var_classic, mean_bayes, var_bayes\n",
    "\n",
    "# Set normal distribution parameters, mu and sigma\n",
    "mu = 5\n",
    "sigma = 1\n",
    "\n",
    "# Set the prior to be two new data points, 5 and 7, and print the mean and variance\n",
    "prior = np.array((5, 7))\n",
    "print(\"The mean of the data comprising the prior is: \" + str(np.mean(prior)))\n",
    "print(\"The variance of the data comprising the prior is: \" + str(np.var(prior)))\n",
    "\n",
    "# Uncomment once the function above is completed\n",
    "# mean_classic, var_classic, mean_bayes, var_bayes = classic_vs_bayesian_normal(mu, sigma, 30, prior)\n",
    "# plot_classical_vs_bayesian_normal(30, mean_classic, var_classic, mean_bayes, var_bayes)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:33.209804Z",
     "iopub.status.busy": "2021-06-02T20:55:33.203103Z",
     "iopub.status.idle": "2021-06-02T20:55:33.713624Z",
     "shell.execute_reply": "2021-06-02T20:55:33.712934Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "\n",
    "def classic_vs_bayesian_normal(mu, sigma, num_points, prior):\n",
    "  \"\"\" Compute both classical and Bayesian inference processes over the range of\n",
    "  data sample sizes (num_points) for a normal distribution with parameters\n",
    "  mu,sigma for comparison.\n",
    "\n",
    "  Args:\n",
    "    mu (scalar): the mean parameter of the normal distribution\n",
    "    sigma (scalar): the standard deviation parameter of the normal distribution\n",
    "    num_points (int): max number of points to use for inference\n",
    "    prior (ndarray): prior data points for Bayesian inference\n",
    "\n",
    "  Returns:\n",
    "    mean_classic (ndarray): estimate mean parameter via classic inference\n",
    "    var_classic (ndarray): estimate variance parameter via classic inference\n",
    "    mean_bayes (ndarray): estimate mean parameter via Bayesian inference\n",
    "    var_bayes (ndarray): estimate variance parameter via Bayesian inference\n",
    "  \"\"\"\n",
    "\n",
    "  # Initialize the classical and Bayesian inference arrays that will estimate\n",
    "  # the normal parameters given a certain number of randomly sampled data points\n",
    "  mean_classic = np.zeros(num_points)\n",
    "  var_classic = np.zeros(num_points)\n",
    "\n",
    "  mean_bayes = np.zeros(num_points)\n",
    "  var_bayes = np.zeros(num_points)\n",
    "\n",
    "  for nData in range(num_points):\n",
    "    # Randomly sample nData + 1 number of points\n",
    "    x = generate_normal_samples(mu, sigma, nData + 1)\n",
    "    # Compute the mean of those points and set the corresponding array entry to this value\n",
    "    mean_classic[nData] = np.mean(x)\n",
    "    # Compute the variance of those points and set the corresponding array entry to this value\n",
    "    var_classic[nData] = np.var(x)\n",
    "\n",
    "    # Bayesian inference with the given prior is performed below for you\n",
    "    xsupp = np.hstack((x, prior))\n",
    "    mean_bayes[nData] = np.mean(xsupp)\n",
    "    var_bayes[nData] = np.var(xsupp)\n",
    "\n",
    "  return mean_classic, var_classic, mean_bayes, var_bayes\n",
    "\n",
    "# Set normal distribution parameters, mu and sigma\n",
    "mu = 5\n",
    "sigma = 1\n",
    "\n",
    "# Set the prior to be two new data points, 5 and 7, and print the mean and variance\n",
    "prior = np.array((5, 7))\n",
    "print(\"The mean of the data comprising the prior is: \" + str(np.mean(prior)))\n",
    "print(\"The variance of the data comprising the prior is: \" + str(np.var(prior)))\n",
    "\n",
    "mean_classic, var_classic, mean_bayes, var_bayes = classic_vs_bayesian_normal(mu, sigma, 30, prior)\n",
    "with plt.xkcd():\n",
    "  plot_classical_vs_bayesian_normal(30, mean_classic, var_classic, mean_bayes, var_bayes)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Hopefully you can see that the blue line stays a little closer to the true values ($\\mu=5$, $\\sigma^2=1$). Having a simple prior in the Bayesian inference process (blue) helps to regularise the inference of the mean and variance parameters when you have very little data, but has little effect with large data. You can see that as the number of data points (x-axis) increases, both inference processes (blue and red lines) get closer and closer together, i.e. their estimates for the true parameters converge as sample size increases."
   ]
  },
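  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The pseudo-data view of the prior can be seen in miniature below (a hypothetical sketch; NumPy's own sampler stands in for `generate_normal_samples` here). With only a few real observations, appending the pseudo-data pulls the estimate towards the prior mean of 6; with many observations its influence vanishes.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(2021)\n",
    "prior = np.array([5.0, 7.0])           # pseudo-data encoding the prior (mean 6)\n",
    "x = rng.normal(5, 1, size=3)           # only three real observations\n",
    "\n",
    "print(np.mean(x))                      # classical estimate\n",
    "print(np.mean(np.hstack((x, prior))))  # Bayesian (pseudo-data) estimate\n",
    "```"
   ]
  },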
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Think! 2A: Bayesian Brains\n",
    "It should be clear how Bayesian inference can help you when doing data analysis. But consider whether the brain might be able to benefit from this too. If the brain needs to make inferences about the world, would it be useful to do regularisation on the input? "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:33.719284Z",
     "iopub.status.busy": "2021-06-02T20:55:33.718309Z",
     "iopub.status.idle": "2021-06-02T20:55:33.721095Z",
     "shell.execute_reply": "2021-06-02T20:55:33.721560Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove explanation\n",
    "\n",
    "\"\"\" You will learn more about \"Bayesian brains\" and the theory surrounding\n",
    "these ideas once the course begins. Here is a brief explanation: it may\n",
    "be ideal for human brains to implement Bayesian inference by integrating \"prior\"\n",
    "information the brain has about the world (memories, prior knowledge, etc.) with\n",
    "new evidence that updates its \"beliefs\"/prior. This process seems to parallel\n",
    "the brain's method of learning about its environment, making it a compelling\n",
    "theory for many neuroscience researchers. The next exercise examines a possible\n",
    "real world model for Bayesian inference: sound localization.\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exercise 2B: Finding the posterior computationally\n",
    "***(Exercise moved from NMA2020 Bayes day, all credit to original creators!)***\n",
    "\n",
    "Imagine an experiment where participants estimate the location of a noise-emitting object. To estimate its position, the participants can use two sources of information: \n",
    "  1. new noisy auditory information (the likelihood)\n",
    "  2. prior visual expectations of where the stimulus is likely to come from (visual prior). \n",
    "\n",
    "The auditory and visual information are both noisy, so participants will combine these sources of information to better estimate the position of the object.\n",
    "\n",
    "We will use Gaussian distributions to represent the auditory likelihood (in red), and a Gaussian visual prior (expectations - in blue). Using Bayes rule, you will combine them into a posterior distribution that summarizes the probability that the object is in each possible location. \n",
    "\n",
    "We have provided you with a ready-to-use plotting function, and a code skeleton.\n",
    "\n",
    "* You can use `my_gaussian` from Tutorial 1 (also included below), to generate an auditory likelihood with parameters $\\mu$ = 3 and $\\sigma$ = 1.5\n",
    "* Generate a visual prior with parameters $\\mu$ = -1 and $\\sigma$ = 1.5\n",
    "* Calculate the posterior using pointwise multiplication of the likelihood and prior. Don't forget to normalize so the posterior adds up to 1\n",
    "* Plot the likelihood, prior and posterior using the predefined function `posterior_plot`\n",
    "\n"
   ]
  },
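  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before computing the posterior numerically, it helps to know what to expect analytically: the (normalised) product of two Gaussians is another Gaussian whose mean is the precision-weighted average of the two means. A sketch with the exercise's parameter values:\n",
    "\n",
    "```python\n",
    "mu_a, sigma_a = 3.0, 1.5    # auditory likelihood parameters\n",
    "mu_v, sigma_v = -1.0, 1.5   # visual prior parameters\n",
    "\n",
    "# Precision-weighted combination of the two means\n",
    "post_mu = (mu_a/sigma_a**2 + mu_v/sigma_v**2) / (1/sigma_a**2 + 1/sigma_v**2)\n",
    "post_var = 1 / (1/sigma_a**2 + 1/sigma_v**2)\n",
    "\n",
    "print(post_mu, post_var)    # approximately 1.0 and 1.125\n",
    "```\n",
    "\n",
    "Since the two standard deviations are equal here, the posterior mean is simply the midpoint of 3 and -1, and the posterior variance is half of each individual variance ($2.25/2 = 1.125$), so your numerical posterior should peak near 1."
   ]
  },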
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:33.729894Z",
     "iopub.status.busy": "2021-06-02T20:55:33.728614Z",
     "iopub.status.idle": "2021-06-02T20:55:33.730537Z",
     "shell.execute_reply": "2021-06-02T20:55:33.731004Z"
    }
   },
   "outputs": [],
   "source": [
    "def my_gaussian(x_points, mu, sigma):\n",
    "  \"\"\" Returns normalized Gaussian estimated at points `x_points`, with parameters:\n",
    "  mean `mu` and standard deviation `sigma`\n",
    "\n",
    "  Args:\n",
    "      x_points (ndarray of floats): points at which the gaussian is evaluated\n",
    "      mu (scalar): mean of the Gaussian\n",
    "      sigma (scalar): standard deviation of the gaussian\n",
    "\n",
    "  Returns:\n",
    "      (numpy array of floats) : normalized Gaussian evaluated at `x`\n",
    "  \"\"\"\n",
    "  px = 1/np.sqrt(2*np.pi*sigma**2) * np.exp(-(x_points-mu)**2/(2*sigma**2))\n",
    "\n",
    "  # Since we evaluate the density on a discrete grid, we normalise it\n",
    "  # numerically, taking the step size (0.1) into account\n",
    "  px = px/(0.1*sum(px))\n",
    "  return px\n",
    "\n",
    "def compute_posterior_pointwise(prior, likelihood):\n",
    "  \"\"\" Compute the posterior probability distribution point-by-point using Bayes\n",
    "  Rule.\n",
    "\n",
    "    Args:\n",
    "      prior (ndarray): probability distribution of prior\n",
    "      likelihood (ndarray): probability distribution of likelihood\n",
    "\n",
    "    Returns:\n",
    "      posterior (ndarray): probability distribution of posterior\n",
    "  \"\"\"\n",
    "  ##############################################################################\n",
    "  # TODO for students: Write code to compute the posterior from the prior and\n",
    "  # likelihood via pointwise multiplication. (You may assume both are defined\n",
    "  # over the same x-axis)\n",
    "  #\n",
    "  # Comment out the line below to test your solution\n",
    "  raise NotImplementedError(\"Finish the simulation code first\")\n",
    "  ##############################################################################\n",
    "\n",
    "  posterior = ...\n",
    "\n",
    "  return posterior\n",
    "\n",
    "def localization_simulation(mu_auditory = 3.0, sigma_auditory = 1.5,\n",
    "                            mu_visual = -1.0, sigma_visual = 1.5):\n",
    "  \"\"\" Perform a sound localization simulation combining an auditory\n",
    "  likelihood with a visual prior.\n",
    "\n",
    "    Args:\n",
    "      mu_auditory (float): mean parameter value for the auditory likelihood\n",
    "      sigma_auditory (float): standard deviation parameter value for the\n",
    "                                auditory likelihood\n",
    "      mu_visual (float): mean parameter value for the visual prior\n",
    "      sigma_visual (float): standard deviation parameter value for the\n",
    "                                visual prior\n",
    "\n",
    "    Returns:\n",
    "      x (ndarray): range of values for which to compute probabilities\n",
    "      auditory (ndarray): probability distribution of the auditory likelihood\n",
    "      visual (ndarray): probability distribution of the visual prior\n",
    "      posterior_pointwise (ndarray): posterior probability distribution\n",
    "  \"\"\"\n",
    "  ##############################################################################\n",
    "  ## Using the x variable below,\n",
    "  ##      create a gaussian called 'auditory' with mean 3, and std 1.5\n",
    "  ##      create a gaussian called 'visual' with mean -1, and std 1.5\n",
    "  #\n",
    "  #\n",
    "  ## Comment out the line below to test your solution\n",
    "  raise NotImplementedError(\"Finish the simulation code first\")\n",
    "  ###############################################################################\n",
    "  x = np.arange(-8, 9, 0.1)\n",
    "\n",
    "  auditory = ...\n",
    "  visual = ...\n",
    "  posterior = compute_posterior_pointwise(visual, auditory)\n",
    "\n",
    "  return x, auditory, visual, posterior\n",
    "\n",
    "\n",
    "# Uncomment the lines below to plot the results\n",
    "# x, auditory, visual, posterior_pointwise = localization_simulation()\n",
    "# _ = posterior_plot(x, auditory, visual, posterior_pointwise)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:33.753287Z",
     "iopub.status.busy": "2021-06-02T20:55:33.733161Z",
     "iopub.status.idle": "2021-06-02T20:55:34.046655Z",
     "shell.execute_reply": "2021-06-02T20:55:34.045983Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "\n",
    "def my_gaussian(x_points, mu, sigma):\n",
    "  \"\"\" Returns normalized Gaussian estimated at points `x_points`, with parameters:\n",
    "  mean `mu` and standard deviation `sigma`\n",
    "\n",
    "  Args:\n",
    "      x_points (ndarray of floats): points at which the gaussian is evaluated\n",
    "      mu (scalar): mean of the Gaussian\n",
    "      sigma (scalar): standard deviation of the gaussian\n",
    "\n",
    "  Returns:\n",
    "      (numpy array of floats) : normalized Gaussian evaluated at `x`\n",
    "  \"\"\"\n",
    "  px = 1/np.sqrt(2*np.pi*sigma**2) * np.exp(-(x_points-mu)**2/(2*sigma**2))\n",
    "\n",
    "  # Since we evaluate the density on a discrete grid, we normalise it\n",
    "  # numerically, taking the step size (0.1) into account\n",
    "  px = px/(0.1*sum(px))\n",
    "  return px\n",
    "\n",
    "def compute_posterior_pointwise(prior, likelihood):\n",
    "  \"\"\" Compute the posterior probability distribution point-by-point using Bayes\n",
    "  Rule.\n",
    "\n",
    "    Args:\n",
    "      prior (ndarray): probability distribution of prior\n",
    "      likelihood (ndarray): probability distribution of likelihood\n",
    "\n",
    "    Returns:\n",
    "      posterior (ndarray): probability distribution of posterior\n",
    "  \"\"\"\n",
    "\n",
    "  posterior = likelihood * prior\n",
    "  posterior /= posterior.sum()\n",
    "\n",
    "  return posterior\n",
    "\n",
    "def localization_simulation(mu_auditory = 3.0, sigma_auditory = 1.5,\n",
    "                            mu_visual = -1.0, sigma_visual = 1.5):\n",
    "  \"\"\" Perform a sound localization simulation combining an auditory\n",
    "  likelihood with a visual prior.\n",
    "\n",
    "    Args:\n",
    "      mu_auditory (float): mean parameter value for the auditory likelihood\n",
    "      sigma_auditory (float): standard deviation parameter value for the\n",
    "                                auditory likelihood\n",
    "      mu_visual (float): mean parameter value for the visual prior\n",
    "      sigma_visual (float): standard deviation parameter value for the\n",
    "                                visual prior\n",
    "\n",
    "    Returns:\n",
    "      x (ndarray): range of values for which to compute probabilities\n",
    "      auditory (ndarray): probability distribution of the auditory likelihood\n",
    "      visual (ndarray): probability distribution of the visual prior\n",
    "      posterior_pointwise (ndarray): posterior probability distribution\n",
    "  \"\"\"\n",
    "\n",
    "  x = np.arange(-8, 9, 0.1)\n",
    "\n",
    "  auditory = my_gaussian(x, mu_auditory, sigma_auditory)\n",
    "  visual = my_gaussian(x, mu_visual, sigma_visual)\n",
    "  posterior = compute_posterior_pointwise(visual, auditory)\n",
    "\n",
    "  return x, auditory, visual, posterior\n",
    "\n",
    "x, auditory, visual, posterior_pointwise = localization_simulation()\n",
    "with plt.xkcd():\n",
    "  _ = posterior_plot(x, auditory, visual, posterior_pointwise)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Combining the visual and auditory information could help the brain get a better estimate of the location of an audio-visual object, with lower variance. For this specific example we did not use a Bayesian prior for simplicity, although it would be a good idea in a practical modeling study.\n",
    "\n",
    "**Main course preview:** On Week 3 Day 1 (W3D1) there will be a whole day devoted to examining whether the brain uses Bayesian inference. Is the brain Bayesian?!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Summary\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:34.052612Z",
     "iopub.status.busy": "2021-06-02T20:55:34.051935Z",
     "iopub.status.idle": "2021-06-02T20:55:34.115225Z",
     "shell.execute_reply": "2021-06-02T20:55:34.115701Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 6: Outro\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id= \"BL5qNdZS-XQ\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "Having done the different exercises you should now:\n",
    "* understand what the likelihood function is, and have some intuition of why it is important\n",
    "* know how to summarise the Gaussian distribution using mean and variance \n",
    "* know how to maximise a likelihood function\n",
    "* be able to do simple inference in both classical and Bayesian ways"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Bonus\n",
    "\n",
    "For more reading on these topics see:\n",
    "Textbook"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "## Extra exercise: Bayes Net\n",
    "If you have the time, here is another extra exercise.\n",
    "\n",
    "Bayes Nets, or Bayesian belief networks, provide a way to make inferences about multiple levels of information, which would be very difficult to do in a classical frequentist paradigm.\n",
    "\n",
    "We can encapsulate our knowledge about causal relationships and use this to make inferences about hidden properties."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will try a simple example of a Bayesian Net (aka belief network). Imagine that you have a house with an unreliable sprinkler system installed for watering the grass. This is set to water the grass independently of whether it has rained that day. We have three variables, rain ($r$), sprinklers ($s$) and wet grass ($w$). Each of these can be true (1) or false (0). See the graphical model representing the relationship between the variables."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "*(Graphical model: arrows point from $r$ (rain) and $s$ (sprinklers) to $w$ (wet grass).)*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There is a table below describing all the relationships between $w$, $r$, and $s$.\n",
    "\n",
    "Obviously the grass is more likely to be wet if either the sprinklers were on or it was raining. On any given day the sprinklers have probability 0.25 of being on, $P(s = 1) = 0.25$, while there is a probability 0.1 of rain, $P(r = 1) = 0.1$. The table then lists the conditional probabilities for the grass being wet, given each rain and sprinkler condition for that day."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\\begin{array}{|l|l||ll|} \\hline\n",
    "r & s & P(w=0|r,s) & P(w=1|r,s)\\\\ \\hline\n",
    "0 & 0 & 0.999 & 0.001\\\\\n",
    "0 & 1 & 0.1 & 0.9\\\\\n",
    "1 & 0 & 0.01 & 0.99\\\\\n",
    "1 & 1 & 0.001 & 0.999\\\\ \\hline\n",
    "\\end{array}\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "You come home and find that the grass is wet. What is the probability that the sprinklers were on today (you do not know whether it was raining)?\n",
    "\n",
    "We can start by writing out the joint probability:\n",
    "$P(r,w,s)=P(w|r,s)P(r)P(s)$\n",
    "\n",
    "The conditional probability is then:\n",
    "\n",
    "$\n",
    "P(s|w)=\\frac{\\sum_{r} P(w|s,r)P(s)  P(r)}{P(w)}=\\frac{P(s) \\sum_{r} P(w|s,r) P(r)}{P(w)}\n",
    "$\n",
    "\n",
    "Note that we are summing over all possible conditions for $r$ as we do not know if it was raining. Specifically, we want to know the probability of sprinklers having been on given the wet grass, $P(s=1|w=1)$:\n",
    "\n",
    "$\n",
    "P(s=1|w=1)=\\frac{P(s = 1)( P(w = 1|s = 1, r = 1) P(r = 1)+ P(w = 1|s = 1,r = 0)  P(r = 0))}{P(w = 1)} \n",
    "$\n",
    "\n",
    "where\n",
    "\n",
    "\\begin{eqnarray}\n",
    "P(w=1)=P(s=1)( P(w=1|s=1,r=1 ) P(r=1) &+ P(w=1|s=1,r=0)  P(r=0))\\\\\n",
    "+P(s=0)( P(w=1|s=0,r=1 )  P(r=1) &+ P(w=1|s=0,r=0)  P(r=0))\\\\\n",
    "\\end{eqnarray}\n",
    "\n",
    "This code has been written out below, you just need to insert the right numbers from the table."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:34.122298Z",
     "iopub.status.busy": "2021-06-02T20:55:34.121717Z",
     "iopub.status.idle": "2021-06-02T20:55:34.128460Z",
     "shell.execute_reply": "2021-06-02T20:55:34.127831Z"
    }
   },
   "outputs": [],
   "source": [
    "##############################################################################\n",
    "# TODO for student: Write code to insert the correct conditional probabilities\n",
    "# from the table; see the comments to match variable with table entry.\n",
    "# Comment out the line below to test your solution\n",
    "raise NotImplementedError(\"Finish the simulation code first\")\n",
    "##############################################################################\n",
    "\n",
    "Pw1r1s1 = ...  # the probability of wet grass given rain and sprinklers on\n",
    "Pw1r1s0 = ...  # the probability of wet grass given rain and sprinklers off\n",
    "Pw1r0s1 = ...  # the probability of wet grass given no rain and sprinklers on\n",
    "Pw1r0s0 = ...  # the probability of wet grass given no rain and sprinklers off\n",
    "Ps = ... # the probability of the sprinkler being on\n",
    "Pr = ... # the probability of rain that day\n",
    "\n",
    "\n",
    "# Uncomment once variables are assigned above\n",
    "# A= Ps * (Pw1r1s1 * Pr + (Pw1r0s1) * (1 - Pr))\n",
    "# B= (1 - Ps) * (Pw1r1s0 *Pr + (Pw1r0s0) * (1 - Pr))\n",
    "# print(\"Given that the grass is wet, the probability the sprinkler was on is: \" +\n",
    "#       str(A/(A + B)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T20:55:34.134940Z",
     "iopub.status.busy": "2021-06-02T20:55:34.133672Z",
     "iopub.status.idle": "2021-06-02T20:55:34.137056Z",
     "shell.execute_reply": "2021-06-02T20:55:34.136323Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "\n",
    "Pw1r1s1 = 0.999  # the probability of wet grass given rain and sprinklers on\n",
    "Pw1r1s0 = 0.99   # the probability of wet grass given rain and sprinklers off\n",
    "Pw1r0s1 = 0.9    # the probability of wet grass given no rain and sprinklers on\n",
    "Pw1r0s0 = 0.001  # the probability of wet grass given no rain and sprinklers off\n",
    "Ps = 0.25  # the probability of the sprinkler being on\n",
    "Pr = 0.1   # the probability of rain that day\n",
    "\n",
    "# Uncomment once variables are assigned above\n",
    "A= Ps * (Pw1r1s1 * Pr + (Pw1r0s1) * (1 - Pr))\n",
    "B= (1 - Ps) * (Pw1r1s0 *Pr + (Pw1r0s0) * (1 - Pr))\n",
    "print(\"Given that the grass is wet, the probability the sprinkler was on is: \" +\n",
    "      str(A/(A + B)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The probability you should get is about 0.7522.\n",
    "\n",
    "Your neighbour now tells you that it was indeed \n",
    "raining today, $P (r = 1) = 1$, so what is now the probability the sprinklers were on? Try changing the numbers above.\n",
    "\n"
   ]
  },
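  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch (reusing the table values from the solution above), setting $P(r = 1) = 1$ shows the posterior probability of the sprinkler falling back toward its prior of 0.25: the certain rain already accounts for the wet grass, a phenomenon known as *explaining away*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Same conditional probability table as the solution above\n",
    "Pw1r1s1 = 0.999  # P(w=1 | r=1, s=1)\n",
    "Pw1r1s0 = 0.99   # P(w=1 | r=1, s=0)\n",
    "Pw1r0s1 = 0.9    # P(w=1 | r=0, s=1)\n",
    "Pw1r0s0 = 0.001  # P(w=1 | r=0, s=0)\n",
    "Ps = 0.25  # P(s=1)\n",
    "Pr = 1.0   # the neighbour says it definitely rained\n",
    "\n",
    "A = Ps * (Pw1r1s1 * Pr + Pw1r0s1 * (1 - Pr))  # P(w=1, s=1)\n",
    "B = (1 - Ps) * (Pw1r1s0 * Pr + Pw1r0s0 * (1 - Pr))  # P(w=1, s=0)\n",
    "print(\"Given wet grass and certain rain, P(sprinkler on) = \" + str(A / (A + B)))\n",
    "# ~0.2517, down from ~0.7522 without the neighbour's report"
   ]
  },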
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Think! Bonus: Causality in the Brain\n",
    "\n",
    "In a causal stucture this is the correct way to calculate the probabilities. Do you think this is how the brain solves such problems? Would it be different for task involving novel stimuli (e.g. for someone with no previous exposure to sprinklers), as opposed to common stimuli?\n",
    "\n",
    "**Main course preview:** On W3D5 we will discuss causality further!"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "include_colab_link": true,
   "name": "W0D5_Tutorial2",
   "provenance": [],
   "toc_visible": true
  },
  "kernel": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.10"
  },
  "toc-autonumbering": true
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
