{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "view-in-github"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D2_HiddenDynamics/W3D2_Tutorial1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Neuromatch Academy: Week 3, Day 2, Tutorial 1\n",
    "# Hidden Dynamics: Sequential Probability Ratio Test\n",
    "\n",
    "__Content creators:__ Yicheng Fei and Xaq Pitkow\n",
    "\n",
    "__Content reviewers:__ John Butler, Matt Krause, Spiros Chavlis, Michael Waskom, Jesse Livezey, and Byron Galbraith"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
     "# Tutorial Objectives\n",
    "\n",
     "In W3D1, we learned how to combine sensory evidence and our prior experience with Bayes' Theorem, producing a posterior probability distribution that lets us choose the more probable of *two* options (fish being on the left or fish being on the right).\n",
    "\n",
    "Here, we add a *third* option: choosing to collect more evidence before making a decision.\n",
    "\n",
    "---\n",
    "\n",
     "In this notebook we will perform a *Sequential Probability Ratio Test* (SPRT) between two hypotheses $s=+1$ and $s=-1$ by running simulations of a *Drift Diffusion Model* (DDM). As data come in, we accumulate evidence linearly until a stopping criterion is met, and then decide which hypothesis to accept.\n",
    "\n",
    "In this tutorial, you will\n",
    "* Simulate the Drift-Diffusion Model.\n",
    "* Gain intuition about the tradeoff between decision speed and accuracy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:50.171795Z",
     "iopub.status.busy": "2021-06-03T13:59:50.171049Z",
     "iopub.status.idle": "2021-06-03T13:59:50.246455Z",
     "shell.execute_reply": "2021-06-03T13:59:50.245805Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 1: Overview of Tutorials on Hidden Dynamics\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"ofNRpSpRxl4\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "both",
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:50.250130Z",
     "iopub.status.busy": "2021-06-03T13:59:50.248828Z",
     "iopub.status.idle": "2021-06-03T13:59:53.065777Z",
     "shell.execute_reply": "2021-06-03T13:59:53.065227Z"
    }
   },
   "outputs": [],
   "source": [
    "# Imports\n",
    "import numpy as np\n",
    "from scipy import stats\n",
    "import matplotlib.pyplot as plt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:53.071986Z",
     "iopub.status.busy": "2021-06-03T13:59:53.070197Z",
     "iopub.status.idle": "2021-06-03T13:59:53.156194Z",
     "shell.execute_reply": "2021-06-03T13:59:53.155647Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Figure settings\n",
    "import ipywidgets as widgets       # interactive display\n",
    "%config InlineBackend.figure_format = 'retina'\n",
    "plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/NMA2020/nma.mplstyle\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:53.164605Z",
     "iopub.status.busy": "2021-06-03T13:59:53.158725Z",
     "iopub.status.idle": "2021-06-03T13:59:53.174568Z",
     "shell.execute_reply": "2021-06-03T13:59:53.175065Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Helper functions\n",
    "\n",
    "def simulate_and_plot_SPRT_fixedtime(sigma, stop_time, num_sample,\n",
    "                                     verbose=True):\n",
     "  \"\"\"Simulate and plot a SPRT for a fixed amount of time given a std.\n",
     "\n",
     "  Args:\n",
     "    sigma (float): Standard deviation of the observations.\n",
     "    stop_time (int): Number of steps to run before stopping.\n",
     "    num_sample (int): The number of samples to plot.\n",
     "    verbose (bool): Whether to print the decision of each trial. Defaults to True.\n",
     "  \"\"\"\n",
    "\n",
    "  evidence_history_list = []\n",
    "  if verbose:\n",
    "    print(\"#Trial\\tTotal_Evidence\\tDecision\")\n",
    "  for i in range(num_sample):\n",
    "    evidence_history, decision, Mvec = simulate_SPRT_fixedtime(sigma, stop_time)\n",
    "    if verbose:\n",
    "      print(\"{}\\t{:f}\\t{}\".format(i, evidence_history[-1], decision))\n",
    "    evidence_history_list.append(evidence_history)\n",
    "\n",
     "  fig, ax = plt.subplots()\n",
     "  maxlen_evidence = np.max(list(map(len, evidence_history_list)))\n",
     "  ax.plot(np.zeros(maxlen_evidence), '--', c='red', alpha=1.0)\n",
     "  for evidences in evidence_history_list:\n",
     "    ax.plot(np.arange(len(evidences)), evidences)\n",
     "  ax.set_xlabel(\"Time\")\n",
     "  ax.set_ylabel(\"Cumulated log likelihood ratio\")\n",
     "  ax.set_title(\"Log likelihood ratio trajectories under the fixed-time \" +\n",
     "               \"stopping rule\")\n",
     "\n",
     "  plt.show(fig)\n",
    "\n",
    "\n",
    "def plot_accuracy_vs_stoptime(stop_time_list, accuracy_list):\n",
     "  \"\"\"Plot average accuracy as a function of stopping time.\n",
     "\n",
     "  Args:\n",
     "    stop_time_list (list): Numbers of steps to run before stopping.\n",
     "    accuracy_list (list): Average accuracy for each stop time.\n",
     "  \"\"\"\n",
    "\n",
    "  fig, ax = plt.subplots()\n",
    "  ax.plot(stop_time_list, accuracy_list)\n",
    "  ax.set_xlabel('Stop Time')\n",
    "  ax.set_ylabel('Average Accuracy')\n",
    "\n",
    "  plt.show(fig)\n",
    "\n",
    "\n",
    "def simulate_and_plot_SPRT_fixedthreshold(sigma, num_sample, alpha,\n",
    "                                          verbose=True):\n",
     "  \"\"\"Simulate and plot a SPRT with a threshold stopping rule given a std.\n",
     "\n",
     "  Args:\n",
     "    sigma (float): Standard deviation of the observations.\n",
     "    num_sample (int): The number of samples to plot.\n",
     "    alpha (float): Desired error rate used to set the decision threshold.\n",
     "    verbose (bool): Whether to print the decision of each trial. Defaults to True.\n",
     "  \"\"\"\n",
    "  # calculate evidence threshold from error rate\n",
    "  threshold = threshold_from_errorrate(alpha)\n",
    "\n",
    "  # run simulation\n",
    "  evidence_history_list = []\n",
    "  if verbose:\n",
    "    print(\"#Trial\\tTime\\tCumulated Evidence\\tDecision\")\n",
    "  for i in range(num_sample):\n",
    "    evidence_history, decision, Mvec = simulate_SPRT_threshold(sigma, threshold)\n",
    "    if verbose:\n",
    "      print(\"{}\\t{}\\t{:f}\\t{}\".format(i, len(Mvec), evidence_history[-1],\n",
    "                                      decision))\n",
    "    evidence_history_list.append(evidence_history)\n",
    "\n",
    "  fig, ax = plt.subplots()\n",
    "  maxlen_evidence = np.max(list(map(len,evidence_history_list)))\n",
    "  ax.plot(np.repeat(threshold,maxlen_evidence + 1), c=\"red\")\n",
    "  ax.plot(-np.repeat(threshold,maxlen_evidence + 1), c=\"red\")\n",
    "  ax.plot(np.zeros(maxlen_evidence + 1), '--', c='red', alpha=0.5)\n",
    "\n",
    "  for evidences in evidence_history_list:\n",
    "      ax.plot(np.arange(len(evidences) + 1), np.concatenate([[0], evidences]))\n",
    "\n",
    "  ax.set_xlabel(\"Time\")\n",
    "  ax.set_ylabel(\"Cumulated log likelihood ratio\")\n",
    "  ax.set_title(\"Log likelihood ratio trajectories under the threshold rule\")\n",
    "\n",
    "  plt.show(fig)\n",
    "\n",
    "\n",
    "def simulate_and_plot_accuracy_vs_threshold(sigma, threshold_list, num_sample):\n",
    "  \"\"\"Simulate and plot a SPRT for a set of thresholds given a std.\n",
    "\n",
    "  Args:\n",
    "    sigma (float): Standard deviation of the observations.\n",
     "    threshold_list (list): List of thresholds for making a decision.\n",
    "    num_sample (int): The number of samples to plot.\n",
    "  \"\"\"\n",
    "  accuracies, decision_speeds = simulate_accuracy_vs_threshold(sigma,\n",
    "                                                               threshold_list,\n",
    "                                                               num_sample)\n",
    "\n",
    "  # Plotting\n",
    "  fig, ax = plt.subplots()\n",
    "  ax.plot(decision_speeds, accuracies, linestyle=\"--\", marker=\"o\")\n",
    "  ax.plot([np.amin(decision_speeds), np.amax(decision_speeds)],\n",
    "          [0.5, 0.5], c='red')\n",
    "  ax.set_xlabel(\"Average Decision speed\")\n",
    "  ax.set_ylabel('Average Accuracy')\n",
    "  ax.set_title(\"Speed/Accuracy Tradeoff\")\n",
    "  ax.set_ylim(0.45, 1.05)\n",
    "\n",
    "  plt.show(fig)\n",
    "\n",
    "\n",
    "def threshold_from_errorrate(alpha):\n",
    "  \"\"\"Calculate log likelihood ratio threshold from desired error rate `alpha`\n",
    "\n",
    "  Args:\n",
    "    alpha (float): in (0,1), the desired error rate\n",
    "\n",
    "  Return:\n",
    "    threshold: corresponding evidence threshold\n",
    "  \"\"\"\n",
    "  threshold = np.log((1. - alpha) / alpha)\n",
    "  return threshold"
   ]
  },
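   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a quick sanity check on `threshold_from_errorrate` above: the threshold $b=\\log\\frac{1-\\alpha}{\\alpha}$ and the error rate are inverses of each other, since an evidence level of $\\pm b$ corresponds to an error probability of $(1+e^{b})^{-1}$. A minimal sketch (the value $\\alpha=0.05$ is just an illustrative choice):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "alpha = 0.05                       # desired error rate (illustrative)\n",
     "b = np.log((1 - alpha) / alpha)    # evidence threshold, as in threshold_from_errorrate\n",
     "alpha_back = 1 / (1 + np.exp(b))   # error rate implied by threshold b\n",
     "print(b, alpha_back)               # alpha_back recovers alpha\n",
     "```"
    ]
   },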
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "# Section 1: Introduction to the SPRT\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 1.1: The random dot task\n",
    "\n",
     "A classic experimental task in neuroscience is the random dot kinematogram ([Newsome, Britten, Movshon 1989](https://www.nature.com/articles/341052a0.pdf)), in which a pattern of dots moves in random directions with some weak coherence that favors a net rightward or leftward motion. The observer must guess the direction. Neurons in the brain are informative about this task and have responses that correlate with the choice.\n",
     "\n",
     "Below is a video by Pamela Reinagel of a rat guessing the direction of motion in such a task."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:53.181087Z",
     "iopub.status.busy": "2021-06-03T13:59:53.180315Z",
     "iopub.status.idle": "2021-06-03T13:59:53.207072Z",
     "shell.execute_reply": "2021-06-03T13:59:53.207561Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 2: Rat performing the Random Dot Motion Task\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"oDxcyTn-0os\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "In this tutorial, we will consider a model for this random dot motion task. In each time bin $t$, we are shown dots moving at net measured velocity $m_t$, either in a negative ($m_t<0$) or positive ($m_t>0$) direction. Although the dots' velocities vary over time, the $m_t$ are generated by a fixed probability distribution $p(m|s)$ that depends on a fixed latent variable $s=\\pm 1$:\n",
    "$$\n",
    "\\\\\n",
    "\\begin{eqnarray}\n",
    "p(m|s=+1) &=& \\mathcal{N}\\left(\\mu_+,\\sigma^2\\right) \\\\\n",
    "&&\\textrm{or} \\\\\n",
    "p(m|s=-1) &=& \\mathcal{N}\\left(\\mu_-,\\sigma^2\\right) \\\\\n",
    "\\end{eqnarray} \n",
    "\\\\\n",
    "$$\n",
    "Here we assume the measurement probabilities have the same variances regardless of $s$, and different means. We, like the rat, want to synthesize our evidence to determine whether $s=+1$ or $-1$. \n"
   ]
  },
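   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a concrete sketch of this observation model (the parameter values $\\mu_\\pm=\\pm 1$ and $\\sigma=2$ are illustrative assumptions, not fixed by the task), we can draw measurements from $p(m|s=+1)$ and check that their sample mean and standard deviation match $\\mu_+$ and $\\sigma$:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "from scipy import stats\n",
     "\n",
     "mu_pos, mu_neg, sigma = 1.0, -1.0, 2.0       # illustrative parameters\n",
     "p_pos = stats.norm(loc=mu_pos, scale=sigma)  # p(m | s = +1)\n",
     "p_neg = stats.norm(loc=mu_neg, scale=sigma)  # p(m | s = -1)\n",
     "\n",
     "m = p_pos.rvs(size=50000, random_state=1)    # measurements when s = +1\n",
     "print(m.mean(), m.std())                     # close to mu_pos and sigma\n",
     "```"
    ]
   },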
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Section 1.2: Sequential Probability Ratio Test (SPRT)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:53.213681Z",
     "iopub.status.busy": "2021-06-03T13:59:53.213095Z",
     "iopub.status.idle": "2021-06-03T13:59:53.250345Z",
     "shell.execute_reply": "2021-06-03T13:59:53.250764Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 3: Decision making: Sequential Probability Ratio Test\n",
    "# Insert the ID of the corresponding youtube video\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"DGoPoLkDiUw\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "<!-- <img alt=\"PGM\" width=\"400\" src=\"https://github.com/NeuromatchAcademy/course-content/blob/master/tutorials/W2D3_DecisionMaking/static/W2D3_Tutorial1_PGM.png?raw=true\"> -->\n",
    "\n",
    "\n",
    "<img src=\"https://drive.google.com/uc?export=view&id=1vE2XQ5qMQ_pJgzgZRCnNVQEpP-nupt87\" alt=\"HMM drawing\" width=\"400\">\n",
    "\n",
     "Suppose we obtain a sequence of independent measurements $m_{1:T}$ from a distribution $p(m_{1:T}|s)$. Remember that $s$ is our hidden state and, for now, is either $-1$ or $+1$. Our measurements come from either $p(m_t|s=-1)$ or $p(m_t|s=+1)$. We wish to test which value of $s$ is more likely given our sequence of measurements.\n",
    "\n",
     "A crucial assumption in Hidden Markov Models is that all measurements are drawn independently given the latent state. In the fishing example, you might have a high or low probability of catching fish depending on where the school of fish is --- but if you already *knew* the location $s$ of the school (which is what we mean by a conditional probability $p(m|s)$), then your chances of catching fish at one time are unaffected by whether you caught fish there previously.\n",
    "\n",
    "Mathematically, we write this independence as $p(m_1, m_2|s)=p(m_1|s)p(m_2|s)$, using the product rule of probabilities.  When we consider a whole time series of measurements $m_{1:T}$, we can compute the product $p(m_{1:T}|s)=\\prod_{t=1}^T p(m_t|s)$.\n",
    "\n",
    "We can then compare the total evidence up to time $T$ for our two hypotheses (of whether our state is -1 or 1) by taking a ratio\n",
    "of the likelihoods.\n",
    "\n",
     "$$L_t=\\frac{\\prod_{i=1}^t p(m_i|s=+1)}{\\prod_{i=1}^t p(m_i|s=-1)}$$\n",
     "\n",
     "The above tells us the likelihood of the measurements if $s = +1$ divided by the likelihood of the measurements if $s = -1$.\n",
     "\n",
     "It is convenient to take the _log_ of this likelihood ratio, converting the products to sums:\n",
     "\n",
     "$$S_t = \\log L_t = \\sum_{i=1}^t \\log \\frac{p(m_i|s=+1)}{p(m_i|s=-1)} \\tag{1}$$\n",
    "We can name each term in the sum as\n",
    "$$\\Delta_t= \\log \\frac{p(m_t|s=+1)}{p(m_t|s=-1)}$$\n",
    "Due to the independence of measurements, this can be calculated recursively _online_ as new data points arrive:\n",
    "\n",
    "$$ S_t =  S_{t-1} + \\Delta_t \\tag{2}$$\n",
    "where we update our log-likelihood ratio $S_t$ by $\\Delta_t$ every time we see a new measurement $m_t$.\n",
    "\n",
     "We will use $S_t$ to make our decisions! If $S_t$ is positive, the likelihood of $s = +1$ is higher; if it is negative, the likelihood of $s = -1$ is higher. We still need to decide *when* to commit to a decision, though, since $S_t$ changes with each new measurement.\n",
    "\n",
    "\n",
    "A rule for making a decision can be implemented in two ways:\n",
    "\n",
     "1. Fixed time (Section 2): Stop collecting data after a predetermined number of measurements $t$, and accept the hypothesis $s=+1$ if $S_t>0$ or $s=-1$ if $S_t<0$ (choosing randomly if $S_t=0$). The resulting error rate is $\\alpha = (1+\\exp(|S_t|))^{-1}$.\n",
    "\n",
    "2. Confidence threshold (Bonus Section 1): Choose an acceptable error rate $\\alpha$. Then accept the hypothesis $s=1$ when $S_t \\ge b=\\log \\frac{1-\\alpha}{\\alpha}$, analogously accept $s=-1$ when $S_t\\le -b$, and keep collecting data until one of those confidence thresholds is reached. Historical note: this is the rule that Alan Turing used to break the Enigma code and win World War II!"
   ]
  },
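   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Equations (1) and (2) can be sketched directly in code: accumulating $\\Delta_t$ one measurement at a time gives the same total evidence as summing all the log likelihood ratios at once. (The Gaussian parameter values below are illustrative assumptions.)\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "from scipy import stats\n",
     "\n",
     "mu_pos, mu_neg, sigma = 1.0, -1.0, 2.0   # illustrative parameters; true state s = +1\n",
     "p_pos = stats.norm(mu_pos, sigma)\n",
     "p_neg = stats.norm(mu_neg, sigma)\n",
     "m = p_pos.rvs(size=100, random_state=2)  # measurements drawn under s = +1\n",
     "\n",
     "# Online accumulation, Equation (2): S_t = S_{t-1} + Delta_t\n",
     "S = 0.0\n",
     "for m_t in m:\n",
     "    S += p_pos.logpdf(m_t) - p_neg.logpdf(m_t)\n",
     "\n",
     "# Batch form, Equation (1): sum all Delta_t at once\n",
     "S_batch = np.sum(p_pos.logpdf(m) - p_neg.logpdf(m))\n",
     "print(np.isclose(S, S_batch), S)         # same total; positive S favors s = +1\n",
     "```"
    ]
   },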
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 1.3: SPRT as a Drift Diffusion Model (DDM)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:53.256803Z",
     "iopub.status.busy": "2021-06-03T13:59:53.256215Z",
     "iopub.status.idle": "2021-06-03T13:59:53.289628Z",
     "shell.execute_reply": "2021-06-03T13:59:53.290061Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 4: SPRT and the Random Dot Motion Task\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"7WBB4M_Vf58\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The evidence favoring the two latent states is random, but according to our model it will weakly favor one hypothesis over the other. The accumulation of evidence will thus \"drift\" toward one outcome while \"diffusing\" in random directions, hence the term \"drift-diffusion model\" (DDM). The process is most likely (but not guaranteed) to reach the correct outcome eventually. We can do a little math below to show that the update $\\Delta_t$ to the log-likelihood ratio is a Gaussian random number. You can derive this yourself by filling in the steps below, or skip to the end result.\n",
    "\n",
    "**Bonus exercise: derive Drift Diffusion Model from SPRT**\n",
    "\n",
    "Assume measurements are Gaussian-distributed with different means depending on the discrete latent variable $s$:\n",
    "$$p(m|s=\\pm 1) = \\mathcal{N}\\left(\\mu_\\pm,\\sigma^2\\right)=\\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp{\\left[-\\frac{(m-\\mu_\\pm)^2}{2\\sigma^2}\\right]}$$\n",
    "\n",
     "In the log likelihood ratio for a single data point $m_t$, the normalizations cancel to give\n",
     "$$\\Delta_t=\\log \\frac{p(m_t|s=+1)}{p(m_t|s=-1)} = \\frac{1}{2\\sigma^2}\\left[-\\left(m_t-\\mu_+\\right)^2 + (m_t-\\mu_-)^2\\right] \\tag{3}$$\n",
     "\n",
     "It's convenient to rewrite $m_t=\\mu_\\pm + \\sigma \\epsilon$, where $\\epsilon\\sim \\mathcal{N}(0,1)$ is a standard Gaussian variable with zero mean and unit variance. (Why does this give the correct probability for $m_t$?) The preceding formula can then be rewritten as\n",
     "$$\\Delta_t = \\frac{1}{2\\sigma^2}\\left( -((\\mu_\\pm+\\sigma\\epsilon)-\\mu_+)^2 + ((\\mu_\\pm+\\sigma\\epsilon)-\\mu_-)^2\\right) \\tag{4}$$\n",
     "Let's assume that $s=+1$, so $\\mu_\\pm=\\mu_+$ (if $s=-1$, the result is the same with a reversed sign). In that case, the means in the first term $m_t-\\mu_+$ cancel, leaving\n",
     "$$\\Delta_t = \\frac{(\\mu_+-\\mu_-)^2}{2\\sigma^2}+\\frac{\\mu_+-\\mu_-}{\\sigma}\\epsilon \\tag{5}$$\n",
     "where the first term is the constant *drift*, and the second term is the random *diffusion*. Adding these $\\Delta_t$ over time gives a biased random walk known as the Drift Diffusion Model, $S_t=\\sum_t \\Delta_t$. The log-likelihood ratio is then normally distributed with a time-dependent mean and variance,\n",
     "$$S_t\\sim\\mathcal{N}\\left(\\tfrac{1}{2}\\frac{\\delta\\mu^2}{\\sigma^2}t,\\ \\frac{\\delta\\mu^2}{\\sigma^2}t\\right)$$\n",
     "where $\\delta\\mu=\\mu_+-\\mu_-$. The mean and the variance both increase linearly with time, so the standard deviation grows more slowly, only as $\\sqrt{t}$. This means that the two distributions become more and more distinct as evidence is acquired over time. You will simulate this process below.\n",
    "\n",
    "**Neural application**\n",
    "\n",
     "Neural responses in lateral intraparietal cortex (LIP) to the random-dot kinematogram have been well described by this drift-diffusion process (Huk and Shadlen 2005), suggesting that these neurons gradually integrate evidence. Interestingly, there is also a more recent competing hypothesis that neural activity jumps from low to high at random latent times, such that on average it looks like gradual ramping (Latimer et al 2015). Scientific evidence about these processes is judged by how well the corresponding Hidden Markov Models fit the data!\n",
    "\n"
   ]
  },
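   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A minimal simulation can check the claimed statistics of $S_t$: across many trials, the empirical mean and variance of the accumulated evidence should be close to $\\tfrac{1}{2}\\frac{\\delta\\mu^2}{\\sigma^2}t$ and $\\frac{\\delta\\mu^2}{\\sigma^2}t$. (The parameter values are illustrative assumptions.)\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "from scipy import stats\n",
     "\n",
     "mu_pos, mu_neg, sigma = 1.0, -1.0, 2.0   # illustrative parameters; true state s = +1\n",
     "dmu = mu_pos - mu_neg\n",
     "p_pos = stats.norm(mu_pos, sigma)\n",
     "p_neg = stats.norm(mu_neg, sigma)\n",
     "\n",
     "t, n_trials = 200, 2000\n",
     "m = p_pos.rvs(size=(n_trials, t), random_state=3)\n",
     "S_t = np.sum(p_pos.logpdf(m) - p_neg.logpdf(m), axis=1)  # final evidence per trial\n",
     "\n",
     "pred_mean = 0.5 * dmu**2 / sigma**2 * t\n",
     "pred_var = dmu**2 / sigma**2 * t\n",
     "print(S_t.mean() / pred_mean, S_t.var() / pred_var)      # both ratios near 1\n",
     "```"
    ]
   },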
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Section 2: DDM with fixed-time stopping rule\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 2.1: Simulation of DDM with fixed-time stopping rule"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:53.295681Z",
     "iopub.status.busy": "2021-06-03T13:59:53.295033Z",
     "iopub.status.idle": "2021-06-03T13:59:53.329413Z",
     "shell.execute_reply": "2021-06-03T13:59:53.329920Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 5: Simulate the DDM with a fixed-time stopping rule\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"9WNAZnEa64Y\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Coding Exercise 1: Simulating an SPRT model\n",
    "\n",
     "Assume we are performing a random dot motion task, and at each time step we obtain a sensory measurement $m_t$ of net dot velocity. All data points are sampled from the same distribution $p$, which is either $p_+=\\mathcal{N}\\left(\\mu,\\sigma^2\\right)$ or $p_-=\\mathcal{N}\\left(-\\mu,\\sigma^2\\right)$, depending on which direction the dots are moving. Let's now generate some simulated data under this setting and perform the SPRT using the fixed-time stopping rule.\n",
    "\n",
    "In this exercise, without loss of generality, we assume the true data-generating model is $p_+$.\n",
    "\n",
    "We will implement a function `simulate_SPRT_fixedtime`, which will generate measurements based on $\\mu$, $\\sigma$, and the true state. It will then accumulate evidence and output a decision on the state. We will use the helper function `log_likelihood_ratio`, implemented in the next cell, which computes the log of the likelihood of the state being 1 divided by the likelihood of the state being -1. \n",
    "\n",
    "We will then run and visualize 10 simulations of evidence accumulation and decision.\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:53.336222Z",
     "iopub.status.busy": "2021-06-03T13:59:53.334411Z",
     "iopub.status.idle": "2021-06-03T13:59:53.337005Z",
     "shell.execute_reply": "2021-06-03T13:59:53.337480Z"
    }
   },
   "outputs": [],
   "source": [
    "# @markdown Execute this cell to enable the helper function log_likelihood_ratio\n",
    "\n",
    "def log_likelihood_ratio(Mvec, p0, p1):\n",
     "  \"\"\"Given a sequence (vector) of observed data, calculate the log of the\n",
     "  likelihood ratio of p1 to p0\n",
    "\n",
    "  Args:\n",
    "    Mvec (numpy vector):           A vector of scalar measurements\n",
     "    p0 (Gaussian random variable): A normal random variable with `logpdf`\n",
    "                                    method\n",
    "    p1 (Gaussian random variable): A normal random variable with `logpdf`\n",
    "                                    method\n",
    "\n",
    "  Returns:\n",
    "    llvec: a vector of log likelihood ratios for each input data point\n",
    "  \"\"\"\n",
    "  return p1.logpdf(Mvec) - p0.logpdf(Mvec)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:53.344648Z",
     "iopub.status.busy": "2021-06-03T13:59:53.339662Z",
     "iopub.status.idle": "2021-06-03T13:59:53.457570Z",
     "shell.execute_reply": "2021-06-03T13:59:53.456899Z"
    }
   },
   "outputs": [],
   "source": [
    "def simulate_SPRT_fixedtime(sigma, stop_time, true_dist = 1):\n",
    "  \"\"\"Simulate a Sequential Probability Ratio Test with fixed time stopping\n",
    "  rule. Two observation models are 1D Gaussian distributions N(1,sigma^2) and\n",
    "  N(-1,sigma^2).\n",
    "\n",
    "  Args:\n",
    "    sigma (float): Standard deviation of observation models\n",
    "    stop_time (int): Number of samples to take before stopping\n",
    "    true_dist (1 or -1): Which state is the true state.\n",
    "\n",
    "  Returns:\n",
    "    evidence_history (numpy vector): the history of cumulated evidence given\n",
    "                                      generated data\n",
    "    decision (int): 1 for s = 1, -1 for s = -1\n",
    "    Mvec (numpy vector): the generated sequences of measurement data in this trial\n",
    "  \"\"\"\n",
    "\n",
    "  #################################################\n",
    "  ## TODO for students ##\n",
    "  # Fill out function and remove\n",
    "  raise NotImplementedError(\"Student exercise: complete simulate_SPRT_fixedtime\")\n",
    "  #################################################\n",
    "\n",
    "  # Set means of observation distributions\n",
    "  mu_pos = 1.0\n",
    "  mu_neg = -1.0\n",
    "\n",
    "  # Make observation distributions\n",
    "  p_pos = stats.norm(loc = mu_pos, scale = sigma)\n",
    "  p_neg = stats.norm(loc = mu_neg, scale = sigma)\n",
    "\n",
    "  # Generate a random sequence of measurements\n",
    "  if true_dist == 1:\n",
    "    Mvec = p_pos.rvs(size = stop_time)\n",
    "  else:\n",
    "    Mvec = p_neg.rvs(size = stop_time)\n",
    "\n",
    "  # Calculate log likelihood ratio for each measurement (delta_t)\n",
    "  ll_ratio_vec = log_likelihood_ratio(Mvec, p_neg, p_pos)\n",
    "\n",
    "  # Calculate cumulated evidence (S) given a vector of individual evidences (hint: np.cumsum)\n",
    "  evidence_history = ...\n",
    "\n",
    "  # Make decision\n",
    "  if evidence_history[-1] > 0:\n",
    "    # Decision given positive S_t (last value of evidence history)\n",
    "    decision = ...\n",
    "  elif evidence_history[-1] < 0:\n",
    "    # Decision given negative S_t (last value of evidence history)\n",
    "    decision = ...\n",
    "  else:\n",
    "    # Random decision if S_t is 0\n",
     "    decision = np.random.choice([-1, 1])\n",
    "\n",
    "  return evidence_history, decision, Mvec\n",
    "\n",
    "\n",
    "# Set random seed\n",
    "np.random.seed(100)\n",
    "\n",
    "# Set model parameters\n",
    "sigma = 3.5  # standard deviation for p+ and p-\n",
    "num_sample = 10  # number of simulations to run\n",
    "stop_time = 150 # number of steps before stopping\n",
    "\n",
    "simulate_and_plot_SPRT_fixedtime(sigma, stop_time, num_sample)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:53.465341Z",
     "iopub.status.busy": "2021-06-03T13:59:53.459898Z",
     "iopub.status.idle": "2021-06-03T13:59:53.890381Z",
     "shell.execute_reply": "2021-06-03T13:59:53.890803Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "def simulate_SPRT_fixedtime(sigma, stop_time, true_dist = 1):\n",
    "  \"\"\"Simulate a Sequential Probability Ratio Test with fixed time stopping\n",
    "  rule. Two observation models are 1D Gaussian distributions N(1,sigma^2) and\n",
    "  N(-1,sigma^2).\n",
    "\n",
    "  Args:\n",
    "    sigma (float): Standard deviation of observation models\n",
    "    stop_time (int): Number of samples to take before stopping\n",
    "    true_dist (1 or -1): Which state is the true state.\n",
    "\n",
    "  Returns:\n",
    "    evidence_history (numpy vector): the history of cumulated evidence given\n",
    "                                      generated data\n",
    "    decision (int): 1 for s = 1, -1 for s = -1\n",
    "    Mvec (numpy vector): the generated sequences of measurement data in this trial\n",
    "  \"\"\"\n",
    "\n",
    "  # Set means of observation distributions\n",
    "  mu_pos = 1.0\n",
    "  mu_neg = -1.0\n",
    "\n",
    "  # Make observation distributions\n",
    "  p_pos = stats.norm(loc = mu_pos, scale = sigma)\n",
    "  p_neg = stats.norm(loc = mu_neg, scale = sigma)\n",
    "\n",
    "  # Generate a random sequence of measurements\n",
    "  if true_dist == 1:\n",
    "    Mvec = p_pos.rvs(size = stop_time)\n",
    "  else:\n",
    "    Mvec = p_neg.rvs(size = stop_time)\n",
    "\n",
    "  # Calculate log likelihood ratio for each measurement (delta_t)\n",
    "  ll_ratio_vec = log_likelihood_ratio(Mvec, p_neg, p_pos)\n",
    "\n",
    "  # Calculate cumulated evidence (S) given a vector of individual evidences (hint: np.cumsum)\n",
    "  evidence_history = np.cumsum(ll_ratio_vec)\n",
    "\n",
    "  # Make decision\n",
    "  if evidence_history[-1] > 0:\n",
    "    # Decision given positive S_t (last value of evidence history)\n",
    "    decision = 1\n",
    "  elif evidence_history[-1] < 0:\n",
    "    # Decision given negative S_t (last value of evidence history)\n",
    "    decision = -1\n",
    "  else:\n",
    "    # Random decision if S_t is 0\n",
     "    decision = np.random.choice([-1, 1])\n",
    "\n",
    "  return evidence_history, decision, Mvec\n",
    "\n",
    "\n",
    "# Set random seed\n",
    "np.random.seed(100)\n",
    "\n",
    "# Set model parameters\n",
    "sigma = 3.5  # standard deviation for p+ and p-\n",
    "num_sample = 10  # number of simulations to run\n",
    "stop_time = 150 # number of steps before stopping\n",
    "\n",
    "with plt.xkcd():\n",
    "  simulate_and_plot_SPRT_fixedtime(sigma, stop_time, num_sample)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Interactive Demo 2.1: Trajectories under the fixed-time stopping rule\n",
    "\n",
     "In the following demo, you can change the noise level in the observation model (`sigma`) and the number of time steps before stopping (`stop_time`) using the sliders. You will then observe 10 simulations with those parameters.\n",
    " \n",
    "\n",
    "\n",
     "1. Are you more likely to make the wrong decision (choose the incorrect state) with high or low noise?\n",
     "2. What happens when `sigma` is very small? Why?\n",
     "3. Are you more likely to make the wrong decision with fewer or more time steps before stopping?\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:53.898848Z",
     "iopub.status.busy": "2021-06-03T13:59:53.896231Z",
     "iopub.status.idle": "2021-06-03T13:59:54.283629Z",
     "shell.execute_reply": "2021-06-03T13:59:54.283001Z"
    }
   },
   "outputs": [],
   "source": [
    "#@markdown Make sure you execute this cell to enable the widget!\n",
    "def simulate_SPRT_fixedtime(sigma, stop_time, true_dist = 1):\n",
    "  \"\"\"Simulate a Sequential Probability Ratio Test with fixed time stopping\n",
    "  rule. Two observation models are 1D Gaussian distributions N(1,sigma^2) and\n",
    "  N(-1,sigma^2).\n",
    "\n",
    "  Args:\n",
    "    sigma (float): Standard deviation of observation models\n",
    "    stop_time (int): Number of samples to take before stopping\n",
    "    true_dist (1 or -1): Which state is the true state.\n",
    "\n",
    "  Returns:\n",
    "    evidence_history (numpy vector): the history of cumulated evidence given\n",
    "                                      generated data\n",
    "    decision (int): 1 for s = 1, -1 for s = -1\n",
    "    Mvec (numpy vector): the generated sequences of measurement data in this trial\n",
    "  \"\"\"\n",
    "\n",
    "  # Set means of observation distributions\n",
    "  mu_pos = 1.0\n",
    "  mu_neg = -1.0\n",
    "\n",
    "  # Make observation distributions\n",
    "  p_pos = stats.norm(loc = mu_pos, scale = sigma)\n",
    "  p_neg = stats.norm(loc = mu_neg, scale = sigma)\n",
    "\n",
    "  # Generate a random sequence of measurements\n",
    "  if true_dist == 1:\n",
    "    Mvec = p_pos.rvs(size = stop_time)\n",
    "  else:\n",
    "    Mvec = p_neg.rvs(size = stop_time)\n",
    "\n",
    "  # Calculate log likelihood ratio for each measurement (delta_t)\n",
    "  ll_ratio_vec = log_likelihood_ratio(Mvec, p_neg, p_pos)\n",
    "\n",
    "  # Calculate cumulated evidence (S) given a vector of individual evidences (hint: np.cumsum)\n",
    "  evidence_history = np.cumsum(ll_ratio_vec)\n",
    "\n",
    "  # Make decision\n",
    "  if evidence_history[-1] > 0:\n",
    "    # Decision given positive S_t (last value of evidence history)\n",
    "    decision = 1\n",
    "  elif evidence_history[-1] < 0:\n",
    "    # Decision given negative S_t (last value of evidence history)\n",
    "    decision = -1\n",
    "  else:\n",
    "    # Random decision if S_t is 0\n",
    "    decision = np.random.choice([-1, 1])\n",
    "\n",
    "  return evidence_history, decision, Mvec\n",
    "\n",
    "np.random.seed(100)\n",
    "num_sample = 10\n",
    "\n",
    "@widgets.interact\n",
    "def plot(sigma=(0.05, 10.0, 0.05), stop_time=(5, 500, 1)):\n",
    "  simulate_and_plot_SPRT_fixedtime(sigma, stop_time, num_sample, verbose=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:54.293766Z",
     "iopub.status.busy": "2021-06-03T13:59:54.292476Z",
     "iopub.status.idle": "2021-06-03T13:59:54.296051Z",
     "shell.execute_reply": "2021-06-03T13:59:54.295527Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove explanation\n",
    "\n",
    "\"\"\"\n",
    "\n",
    "1) Higher noise, or higher sigma, means that the evidence accumulation varies up\n",
    "   and down more widely. You are more likely to make a wrong decision with high noise\n",
    "   as the cumulated log likelihood ratio is more likely to be negative at the end\n",
    "   despite the true distribution being s = 1.\n",
    "\n",
    "2) When sigma is very small, the cumulated log likelihood ratios are basically a linear\n",
    "   diagonal line. This is because each new measurement will be very similar (since they are\n",
    "   being drawn from a Gaussian with a tiny standard deviation)\n",
    "\n",
    "3) You are more likely to be wrong with a small number of time steps before the decision. There is\n",
    "   more chance that the noise will affect the decision. We will explore this in the next section.\n",
    "\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 2.2: Accuracy vs stopping time\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you stop taking samples too early (e.g., make a decision after seeing only 5 samples), or if a huge amount of observation noise buries the signal, observation noise can drive the cumulated log likelihood ratio negative and lead you to a wrong decision. You could get a sense of this by increasing the noise level or decreasing the stopping time in the last exercise.\n",
    "\n",
    "Now let's look quantitatively at how decision accuracy varies with the number of samples we see. Accuracy is simply defined as the proportion of correct trials across our repeated simulations: $\\frac{\\# \\textrm{ correct decisions}}{\\# \\textrm{ total simulation runs}}$.\n",
    "\n",
    "\n",
    "\n"
   ]
  },
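  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the $\\pm 1$ Gaussian observation model used here, the accuracy of the fixed-time rule can also be written in closed form: each sample's log likelihood ratio is $\\Lambda_t = 2m_t/\\sigma^2$, so after $T$ samples the cumulated evidence is Gaussian with mean $2T/\\sigma^2$ and variance $4T/\\sigma^2$, giving an accuracy of $\\Phi(\\sqrt{T}/\\sigma)$. A minimal sketch of this, useful as a sanity check against your simulated accuracies:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from scipy.stats import norm\n",
    "\n",
    "def analytic_accuracy(stop_time, sigma):\n",
    "  \"\"\"P(correct decision) under the fixed-time rule: Phi(sqrt(T) / sigma).\"\"\"\n",
    "  return norm.cdf(np.sqrt(stop_time) / sigma)\n",
    "\n",
    "for stop_time in (1, 25, 150):\n",
    "  print(stop_time, analytic_accuracy(stop_time, sigma=3.5))\n",
    "```\n"
   ]
  },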
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Coding Exercise 2: The Speed/Accuracy Tradeoff\n",
    "\n",
    "We will fix our observation noise level. In this exercise, you will implement a function that runs several repeated simulations for a given stopping time and calculates the average decision accuracy. We will then visualize the relation between average decision accuracy and stopping time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:54.305141Z",
     "iopub.status.busy": "2021-06-03T13:59:54.302013Z",
     "iopub.status.idle": "2021-06-03T13:59:54.314779Z",
     "shell.execute_reply": "2021-06-03T13:59:54.314209Z"
    }
   },
   "outputs": [],
   "source": [
    "def simulate_accuracy_vs_stoptime(sigma, stop_time_list, num_sample):\n",
    "  \"\"\"Calculate the average decision accuracy vs. stopping time by running\n",
    "  repeated SPRT simulations for each stop time.\n",
    "\n",
    "  Args:\n",
    "      sigma (float): standard deviation for observation model\n",
    "      stop_time_list (list-like object): a list of stopping times to run over\n",
    "      num_sample (int): number of simulations to run per stopping time\n",
    "\n",
    "  Returns:\n",
    "      accuracy_list: a list of average accuracies corresponding to input\n",
    "                      `stop_time_list`\n",
    "      decisions_list: a list of decisions made in all trials\n",
    "  \"\"\"\n",
    "\n",
    "  #################################################\n",
    "  ## TODO for students##\n",
    "  # Fill out function and remove\n",
    "  raise NotImplementedError(\"Student exercise: complete simulate_accuracy_vs_stoptime\")\n",
    "  #################################################\n",
    "\n",
    "  # Determine true state (1 or -1)\n",
    "  true_dist = 1\n",
    "\n",
    "  # Set up tracker of accuracy and decisions\n",
    "  accuracies = np.zeros(len(stop_time_list),)\n",
    "  decisions_list = []\n",
    "\n",
    "  # Loop over stop times\n",
    "  for i_stop_time, stop_time in enumerate(stop_time_list):\n",
    "\n",
    "    # Set up tracker of decisions for this stop time\n",
    "    decisions = np.zeros((num_sample,))\n",
    "\n",
    "    # Loop over samples\n",
    "    for i in range(num_sample):\n",
    "\n",
    "      # Simulate run for this stop time (hint: last exercise)\n",
    "      _, decision, _= ...\n",
    "\n",
    "      # Log decision\n",
    "      decisions[i] = decision\n",
    "\n",
    "    # Calculate accuracy\n",
    "    accuracies[i_stop_time] = ...\n",
    "\n",
    "    # Log decisions\n",
    "    decisions_list.append(decisions)\n",
    "\n",
    "  return accuracies, decisions_list\n",
    "\n",
    "\n",
    "# Set random seed\n",
    "np.random.seed(100)\n",
    "\n",
    "# Set parameters of model\n",
    "sigma = 4.65  # standard deviation for observation noise\n",
    "num_sample = 200  # number of simulations to run for each stopping time\n",
    "stop_time_list = np.arange(1, 150, 10) # Array of stopping times to use\n",
    "\n",
    "# Calculate accuracies for each stop time\n",
    "accuracies, _ = simulate_accuracy_vs_stoptime(sigma, stop_time_list,\n",
    "                                                   num_sample)\n",
    "\n",
    "# Visualize\n",
    "plot_accuracy_vs_stoptime(stop_time_list, accuracies)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-03T13:59:54.344622Z",
     "iopub.status.busy": "2021-06-03T13:59:54.343929Z",
     "iopub.status.idle": "2021-06-03T14:00:00.156404Z",
     "shell.execute_reply": "2021-06-03T14:00:00.156926Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "def simulate_accuracy_vs_stoptime(sigma, stop_time_list, num_sample):\n",
    "  \"\"\"Calculate the average decision accuracy vs. stopping time by running\n",
    "  repeated SPRT simulations for each stop time.\n",
    "\n",
    "  Args:\n",
    "      sigma (float): standard deviation for observation model\n",
    "      stop_time_list (list-like object): a list of stopping times to run over\n",
    "      num_sample (int): number of simulations to run per stopping time\n",
    "\n",
    "  Returns:\n",
    "      accuracy_list: a list of average accuracies corresponding to input\n",
    "                      `stop_time_list`\n",
    "      decisions_list: a list of decisions made in all trials\n",
    "  \"\"\"\n",
    "\n",
    "  # Determine true state (1 or -1)\n",
    "  true_dist = 1\n",
    "\n",
    "  # Set up tracker of accuracy and decisions\n",
    "  accuracies = np.zeros(len(stop_time_list),)\n",
    "  decisions_list = []\n",
    "\n",
    "  # Loop over stop times\n",
    "  for i_stop_time, stop_time in enumerate(stop_time_list):\n",
    "\n",
    "    # Set up tracker of decisions for this stop time\n",
    "    decisions = np.zeros((num_sample,))\n",
    "\n",
    "    # Loop over samples\n",
    "    for i in range(num_sample):\n",
    "\n",
    "      # Simulate run for this stop time (hint: last exercise)\n",
    "      _, decision, _= simulate_SPRT_fixedtime(sigma, stop_time, true_dist)\n",
    "\n",
    "      # Log decision\n",
    "      decisions[i] = decision\n",
    "\n",
    "    # Calculate accuracy\n",
    "    accuracies[i_stop_time] = np.sum(decisions == true_dist) / decisions.shape[0]\n",
    "\n",
    "    # Log decisions\n",
    "    decisions_list.append(decisions)\n",
    "\n",
    "  return accuracies, decisions_list\n",
    "\n",
    "\n",
    "# Set random seed\n",
    "np.random.seed(100)\n",
    "\n",
    "# Set parameters of model\n",
    "sigma = 4.65  # standard deviation for observation noise\n",
    "num_sample = 200  # number of simulations to run for each stopping time\n",
    "stop_time_list = np.arange(1, 150, 10) # Array of stopping times to use\n",
    "\n",
    "# Calculate accuracies for each stop time\n",
    "accuracies, _ = simulate_accuracy_vs_stoptime(sigma, stop_time_list,\n",
    "                                                   num_sample)\n",
    "\n",
    "# Visualize\n",
    "with plt.xkcd():\n",
    "  plot_accuracy_vs_stoptime(stop_time_list, accuracies)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Interactive Demo 2.2: Accuracy versus stop-time\n",
    "\n",
    "In the following demo, we will show the same visualization as in the previous exercise, but you will be able to vary the noise level `sigma` of the observation distributions. Before playing with the demo, think about and discuss:\n",
    "\n",
    "\n",
    "\n",
    "1.   What do you expect low levels of noise to do to the accuracy vs stop time plot?\n",
    "2.   What do you expect high levels of noise to do to the accuracy vs stop time plot?\n",
    "\n",
    "Play with the demo and see if you were correct or not.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T14:00:00.180787Z",
     "iopub.status.busy": "2021-06-03T14:00:00.180134Z",
     "iopub.status.idle": "2021-06-03T14:00:03.170753Z",
     "shell.execute_reply": "2021-06-03T14:00:03.171215Z"
    }
   },
   "outputs": [],
   "source": [
    "#@markdown Make sure you execute this cell to enable the widget!\n",
    "def simulate_accuracy_vs_stoptime(sigma, stop_time_list, num_sample):\n",
    "  \"\"\"Calculate the average decision accuracy vs. stopping time by running\n",
    "  repeated SPRT simulations for each stop time.\n",
    "\n",
    "  Args:\n",
    "      sigma (float): standard deviation for observation model\n",
    "      stop_time_list (list-like object): a list of stopping times to run over\n",
    "      num_sample (int): number of simulations to run per stopping time\n",
    "\n",
    "  Returns:\n",
    "      accuracy_list: a list of average accuracies corresponding to input\n",
    "                      `stop_time_list`\n",
    "      decisions_list: a list of decisions made in all trials\n",
    "  \"\"\"\n",
    "\n",
    "  # Determine true state (1 or -1)\n",
    "  true_dist = 1\n",
    "\n",
    "  # Set up tracker of accuracy and decisions\n",
    "  accuracies = np.zeros(len(stop_time_list),)\n",
    "  decisions_list = []\n",
    "\n",
    "  # Loop over stop times\n",
    "  for i_stop_time, stop_time in enumerate(stop_time_list):\n",
    "\n",
    "    # Set up tracker of decisions for this stop time\n",
    "    decisions = np.zeros((num_sample,))\n",
    "\n",
    "    # Loop over samples\n",
    "    for i in range(num_sample):\n",
    "\n",
    "      # Simulate run for this stop time\n",
    "      _, decision, _= simulate_SPRT_fixedtime(sigma, stop_time, true_dist)\n",
    "\n",
    "      # Log decision\n",
    "      decisions[i] = decision\n",
    "\n",
    "    # Calculate accuracy\n",
    "    accuracies[i_stop_time] = np.sum(decisions == true_dist) / decisions.shape[0]\n",
    "\n",
    "    # Log decisions\n",
    "    decisions_list.append(decisions)\n",
    "\n",
    "  return accuracies, decisions_list\n",
    "\n",
    "np.random.seed(100)\n",
    "num_sample = 100\n",
    "stop_time_list = np.arange(1, 150, 10)\n",
    "\n",
    "@widgets.interact\n",
    "def plot(sigma=(0.05, 10.0, 0.05)):\n",
    "  # Calculate accuracies for each stop time\n",
    "  accuracies, _ = simulate_accuracy_vs_stoptime(sigma, stop_time_list, num_sample)\n",
    "\n",
    "  # Visualize\n",
    "  plot_accuracy_vs_stoptime(stop_time_list, accuracies)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-03T14:00:03.178649Z",
     "iopub.status.busy": "2021-06-03T14:00:03.177616Z",
     "iopub.status.idle": "2021-06-03T14:00:03.180614Z",
     "shell.execute_reply": "2021-06-03T14:00:03.181074Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove explanation\n",
    "\n",
    "\"\"\"\n",
    "\n",
    "1) Low levels of noise result in higher accuracies generally, especially\n",
    "   at early stop times.\n",
    "\n",
    "2) High levels of noise result in lower accuracies generally.\n",
    "\n",
    "\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Please see Bonus Section 1 to learn about and work with a different stopping rule for DDMs: a fixed threshold on confidence."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Summary"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Good job! By simulating Drift Diffusion Models to perform decision making, you have learnt how to \n",
    "\n",
    "1. Calculate individual sample evidence as the log likelihood ratio of two candidate models, accumulate evidence from new data points, and make a decision based on the current evidence in `Exercise 1`\n",
    "2. Run repeated simulations to get an estimate of decision accuracies in `Exercise 2`\n",
    "3. Implement the thresholding stopping rule, where we can control our error rate by taking adequate amounts of data, and calculate the evidence threshold from a desired error rate in `Exercise 3`\n",
    "4. Explore and gain intuition about the speed/accuracy tradeoff for perceptual decision making in `Exercise 4`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Bonus "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Bonus Section 1: DDM with fixed thresholds on confidence"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T14:00:03.187210Z",
     "iopub.status.busy": "2021-06-03T14:00:03.186652Z",
     "iopub.status.idle": "2021-06-03T14:00:03.218700Z",
     "shell.execute_reply": "2021-06-03T14:00:03.219176Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 6: Fixed threshold on confidence\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"E8lvgFeIGQM\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next exercises consider a variant of the DDM with fixed confidence thresholds instead of fixed decision time. This may be a better description of neural integration. Please complete this material after you have finished the main content of all tutorials, if you would like extra information about this topic."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Exercise 3: Simulating the DDM with fixed thresholds\n",
    "\n",
    "In this exercise, we will use thresholding as our stopping rule and observe the behavior of the DDM. \n",
    "\n",
    "With the thresholding stopping rule, we set an evidence threshold from a desired error rate and continue making measurements until the cumulated evidence crosses that threshold. Experimental evidence suggests that evidence accumulation with a thresholding stopping strategy happens at the neuronal level (see [this article](https://www.annualreviews.org/doi/full/10.1146/annurev.neuro.29.051605.113038) for further reading).\n",
    "\n",
    "* Complete the function `threshold_from_errorrate` to calculate the evidence threshold from the desired error rate $\\alpha$ as described in the formulas below. The evidence thresholds $th_{R}$ and $th_{L}$ for $p_+$ and $p_-$ are opposites of each other, as shown below, so you can just return the absolute value.\n",
    "$$\n",
    "\\begin{align}\n",
    " th_{L} &= \\log \\frac{\\alpha}{1-\\alpha} &= -th_{R} \\\\\n",
    " th_{R} &= \\log \\frac{1-\\alpha}{\\alpha} &= -th_{L}\\\\\n",
    " \\end{align}\n",
    " $$\n",
    "\n",
    "* Complete the function `simulate_SPRT_threshold` to simulate an SPRT with the thresholding stopping rule, given a noise level and desired threshold.\n",
    "\n",
    "* Run repeated simulations for a given noise level and desired error rate, and visualize the DDM traces using our provided code.\n"
   ]
  },
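  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick numerical illustration of the formulas above (not the exercise solution, just the arithmetic for one example error rate):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "alpha = 0.05  # desired error rate\n",
    "th_R = np.log((1 - alpha) / alpha)  # upper evidence threshold\n",
    "th_L = -th_R  # lower evidence threshold\n",
    "print(th_R)  # ~2.944\n",
    "```\n",
    "\n",
    "A smaller desired error rate gives a larger threshold, so the model waits for more evidence before committing to a decision.\n"
   ]
  },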
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-03T14:00:03.229320Z",
     "iopub.status.busy": "2021-06-03T14:00:03.228039Z",
     "iopub.status.idle": "2021-06-03T14:00:03.230006Z",
     "shell.execute_reply": "2021-06-03T14:00:03.230448Z"
    }
   },
   "outputs": [],
   "source": [
    "def simulate_SPRT_threshold(sigma, threshold, true_dist=1):\n",
    "  \"\"\"Simulate a Sequential Probability Ratio Test with thresholding stopping\n",
    "  rule. Two observation models are 1D Gaussian distributions N(1,sigma^2) and\n",
    "  N(-1,sigma^2).\n",
    "\n",
    "  Args:\n",
    "    sigma (float): Standard deviation\n",
    "    threshold (float): Desired log likelihood ratio threshold to achieve\n",
    "                        before making decision\n",
    "\n",
    "  Returns:\n",
    "    evidence_history (numpy vector): the history of cumulated evidence given\n",
    "                                      generated data\n",
    "    decision (int): 1 for pR, 0 for pL\n",
    "    data (numpy vector): the generated sequences of data in this trial\n",
    "  \"\"\"\n",
    "  muL = -1.0\n",
    "  muR = 1.0\n",
    "\n",
    "  pL = stats.norm(muL, sigma)\n",
    "  pR = stats.norm(muR, sigma)\n",
    "\n",
    "  has_enough_data = False\n",
    "\n",
    "  data_history = []\n",
    "  evidence_history = []\n",
    "  current_evidence = 0.0\n",
    "\n",
    "  # Keep sampling data until threshold is crossed\n",
    "  while not has_enough_data:\n",
    "    if true_dist == 1:\n",
    "      Mvec = pR.rvs()\n",
    "    else:\n",
    "      Mvec = pL.rvs()\n",
    "\n",
    "    ########################################################################\n",
    "    # Insert your code here to:\n",
    "    #      * Calculate the log-likelihood ratio for the new sample\n",
    "    #      * Update the accumulated evidence\n",
    "    raise NotImplementedError(\"`simulate_SPRT_threshold` is incomplete\")\n",
    "    ########################################################################\n",
    "    # individual log likelihood ratios\n",
    "    ll_ratio = log_likelihood_ratio(...)\n",
    "    # cumulated evidence for this chunk\n",
    "    evidence_history.append(...)\n",
    "    # update the collection of all data\n",
    "    data_history.append(Mvec)\n",
    "    current_evidence = evidence_history[-1]\n",
    "\n",
    "    # check if we've got enough data\n",
    "    if abs(current_evidence) > threshold:\n",
    "      has_enough_data = True\n",
    "\n",
    "  data_history = np.array(data_history)\n",
    "  evidence_history = np.array(evidence_history)\n",
    "\n",
    "  # Make decision\n",
    "  if evidence_history[-1] > 0:\n",
    "    decision = 1\n",
    "  elif evidence_history[-1] < 0:\n",
    "    decision = 0\n",
    "  else:\n",
    "    decision = np.random.randint(2)\n",
    "\n",
    "  return evidence_history, decision, data_history\n",
    "\n",
    "\n",
    "np.random.seed(100)\n",
    "sigma = 2.8\n",
    "num_sample = 10\n",
    "log10_alpha = -6.5 # log10(alpha)\n",
    "alpha = np.power(10.0, log10_alpha)\n",
    "\n",
    "################################################################################\n",
    "# Un-comment the following code after completing this exercise\n",
    "################################################################################\n",
    "# simulate_and_plot_SPRT_fixedthreshold(sigma, num_sample, alpha)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-03T14:00:03.328673Z",
     "iopub.status.busy": "2021-06-03T14:00:03.255030Z",
     "iopub.status.idle": "2021-06-03T14:00:03.855983Z",
     "shell.execute_reply": "2021-06-03T14:00:03.856655Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "def simulate_SPRT_threshold(sigma, threshold, true_dist=1):\n",
    "  \"\"\"Simulate a Sequential Probability Ratio Test with thresholding stopping\n",
    "  rule. Two observation models are 1D Gaussian distributions N(1,sigma^2) and\n",
    "  N(-1,sigma^2).\n",
    "\n",
    "  Args:\n",
    "    sigma (float): Standard deviation\n",
    "    threshold (float): Desired log likelihood ratio threshold to achieve\n",
    "                        before making decision\n",
    "\n",
    "  Returns:\n",
    "    evidence_history (numpy vector): the history of cumulated evidence given\n",
    "                                      generated data\n",
    "    decision (int): 1 for pR, 0 for pL\n",
    "    data (numpy vector): the generated sequences of data in this trial\n",
    "  \"\"\"\n",
    "  muL = -1.0\n",
    "  muR = 1.0\n",
    "\n",
    "  pL = stats.norm(muL, sigma)\n",
    "  pR = stats.norm(muR, sigma)\n",
    "\n",
    "  has_enough_data = False\n",
    "\n",
    "  data_history = []\n",
    "  evidence_history = []\n",
    "  current_evidence = 0.0\n",
    "\n",
    "  # Keep sampling data until threshold is crossed\n",
    "  while not has_enough_data:\n",
    "    if true_dist == 1:\n",
    "      Mvec = pR.rvs()\n",
    "    else:\n",
    "      Mvec = pL.rvs()\n",
    "\n",
    "    # individual log likelihood ratios\n",
    "    ll_ratio = log_likelihood_ratio(Mvec, pL, pR)\n",
    "    # cumulated evidence for this chunk\n",
    "    evidence_history.append(ll_ratio + current_evidence)\n",
    "    # update the collection of all data\n",
    "    data_history.append(Mvec)\n",
    "    current_evidence = evidence_history[-1]\n",
    "\n",
    "    # check if we've got enough data\n",
    "    if abs(current_evidence) > threshold:\n",
    "      has_enough_data = True\n",
    "\n",
    "  data_history = np.array(data_history)\n",
    "  evidence_history = np.array(evidence_history)\n",
    "\n",
    "  # Make decision\n",
    "  if evidence_history[-1] > 0:\n",
    "    decision = 1\n",
    "  elif evidence_history[-1] < 0:\n",
    "    decision = 0\n",
    "  else:\n",
    "    decision = np.random.randint(2)\n",
    "\n",
    "  return evidence_history, decision, data_history\n",
    "\n",
    "\n",
    "np.random.seed(100)\n",
    "sigma = 2.8\n",
    "num_sample = 10\n",
    "log10_alpha = -6.5 # log10(alpha)\n",
    "alpha = np.power(10.0, log10_alpha)\n",
    "\n",
    "with plt.xkcd():\n",
    "  simulate_and_plot_SPRT_fixedthreshold(sigma, num_sample, alpha)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Interactive Demo: DDM with fixed threshold\n",
    "\n",
    "**Suggestion**\n",
    "\n",
    "* Play with different values of `alpha` and `sigma` and observe how that affects the dynamics of the Drift-Diffusion Model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T14:00:03.887745Z",
     "iopub.status.busy": "2021-06-03T14:00:03.887066Z",
     "iopub.status.idle": "2021-06-03T14:00:04.690080Z",
     "shell.execute_reply": "2021-06-03T14:00:04.689514Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title\n",
    "\n",
    "#@markdown Make sure you execute this cell to enable the widget!\n",
    "np.random.seed(100)\n",
    "num_sample = 10\n",
    "\n",
    "@widgets.interact\n",
    "def plot(sigma=(0.05, 10.0, 0.05), log10_alpha=(-8, -1, .1)):\n",
    "  alpha = np.power(10.0, log10_alpha)\n",
    "  simulate_and_plot_SPRT_fixedthreshold(sigma, num_sample, alpha, verbose=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Exercise 4: Speed/Accuracy Tradeoff Revisited\n",
    "\n",
    "The faster you make a decision, the lower your accuracy often is. This phenomenon is known as the **speed/accuracy tradeoff**. Humans can make this tradeoff in a wide range of situations, and many animal species, including ants, bees, rodents, and monkeys also show similar effects. \n",
    "\n",
    "To illustrate the speed/accuracy tradeoff under the thresholding stopping rule, let's run some simulations with different thresholds and look at how average decision \"speed\" (1/length) changes with average decision accuracy. We plot speed rather than the threshold itself because in real experiments, subjects can be incentivized to respond faster or slower; it's much harder to precisely control their decision time or error threshold.\n",
    "\n",
    "* Complete the function `simulate_accuracy_vs_threshold` to simulate and compute average accuracies vs. average decision lengths for a list of error thresholds. You will need to supply code to calculate average decision 'speed' from the lengths of trials. You should also calculate the overall accuracy across these trials. \n",
    "\n",
    "* We've set up a list of error thresholds. Run repeated simulations, collect the average accuracy and average speed for each error rate in this list, and use our provided code to visualize the speed/accuracy tradeoff. You should see a positive correlation between trial length and accuracy.\n"
   ]
  },
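  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that \"average speed\" here means the mean of the reciprocal trial lengths, which is not the same as the reciprocal of the mean trial length. A small sketch with hypothetical trial lengths:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "decision_times = np.array([10, 20, 40])  # hypothetical trial lengths (steps)\n",
    "mean_of_reciprocals = np.mean(1.0 / decision_times)  # convention in this exercise\n",
    "reciprocal_of_mean = 1.0 / np.mean(decision_times)\n",
    "print(mean_of_reciprocals, reciprocal_of_mean)\n",
    "```\n",
    "\n",
    "The mean of reciprocals weights short trials more heavily, so the two summaries disagree whenever trial lengths vary.\n"
   ]
  },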
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-03T14:00:04.699614Z",
     "iopub.status.busy": "2021-06-03T14:00:04.695842Z",
     "iopub.status.idle": "2021-06-03T14:00:04.702054Z",
     "shell.execute_reply": "2021-06-03T14:00:04.701328Z"
    }
   },
   "outputs": [],
   "source": [
    "def simulate_accuracy_vs_threshold(sigma, threshold_list, num_sample):\n",
    "  \"\"\"Calculate the average decision accuracy vs. average decision length by\n",
    "  running repeated SPRT simulations with thresholding stopping rule for each\n",
    "  threshold.\n",
    "\n",
    "  Args:\n",
    "      sigma (float): standard deviation for observation model\n",
    "      threshold_list (list-like object): a list of evidence thresholds to run\n",
    "                                          over\n",
    "      num_sample (int): number of simulations to run per stopping time\n",
    "\n",
    "  Returns:\n",
    "      accuracy_list: a list of average accuracies corresponding to input\n",
    "                      `threshold_list`\n",
    "      decision_speed_list: a list of average decision speeds\n",
    "  \"\"\"\n",
    "  decision_speed_list = []\n",
    "  accuracy_list = []\n",
    "  for threshold in threshold_list:\n",
    "    decision_time_list = []\n",
    "    decision_list = []\n",
    "    for i in range(num_sample):\n",
    "      # run simulation and get decision of current simulation\n",
    "      _, decision, Mvec = simulate_SPRT_threshold(sigma, threshold)\n",
    "      decision_time = len(Mvec)\n",
    "      decision_list.append(decision)\n",
    "      decision_time_list.append(decision_time)\n",
    "\n",
    "    ########################################################################\n",
    "    # Insert your code here to:\n",
    "    #      * Calculate mean decision speed given a list of decision times\n",
    "    #      * Hint: Think about speed as being inversely proportional\n",
    "    #        to decision_length. If it takes 10 seconds to make one decision,\n",
    "    #        our \"decision speed\" is 0.1 decisions per second.\n",
    "    #      * Calculate the decision accuracy\n",
    "    raise NotImplementedError(\"`simulate_accuracy_vs_threshold` is incomplete\")\n",
    "    ########################################################################\n",
    "    # Calculate and store average decision speed and accuracy\n",
    "    decision_speed = ...\n",
    "    decision_accuracy = ...\n",
    "    decision_speed_list.append(decision_speed)\n",
    "    accuracy_list.append(decision_accuracy)\n",
    "\n",
    "  return accuracy_list, decision_speed_list\n",
    "\n",
    "\n",
    "################################################################################\n",
    "# Un-comment the following code block after completing this exercise\n",
    "################################################################################\n",
    "# np.random.seed(100)\n",
    "# sigma = 3.75\n",
    "# num_sample = 200\n",
    "# alpha_list = np.logspace(-2, -0.1, 8)\n",
    "# threshold_list = threshold_from_errorrate(alpha_list)\n",
    "# simulate_and_plot_accuracy_vs_threshold(sigma, threshold_list, num_sample)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-03T14:00:04.973959Z",
     "iopub.status.busy": "2021-06-03T14:00:04.746845Z",
     "iopub.status.idle": "2021-06-03T14:00:16.515878Z",
     "shell.execute_reply": "2021-06-03T14:00:16.515163Z"
    }
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "def simulate_accuracy_vs_threshold(sigma, threshold_list, num_sample):\n",
    "  \"\"\"Calculate the average decision accuracy vs. average decision speed by\n",
    "  running repeated SPRT simulations with thresholding stopping rule for each\n",
    "  threshold.\n",
    "\n",
    "  Args:\n",
    "      sigma (float): standard deviation for observation model\n",
    "      threshold_list (list-like object): a list of evidence thresholds to run\n",
    "                                          over\n",
    "      num_sample (int): number of simulations to run per threshold\n",
    "\n",
    "  Returns:\n",
    "      accuracy_list: a list of average accuracies corresponding to input\n",
    "                      `threshold_list`\n",
    "      decision_speed_list: a list of average decision speeds\n",
    "  \"\"\"\n",
    "  decision_speed_list = []\n",
    "  accuracy_list = []\n",
    "  for threshold in threshold_list:\n",
    "    decision_time_list = []\n",
    "    decision_list = []\n",
    "    for i in range(num_sample):\n",
    "      # run simulation and get decision of current simulation\n",
    "      _, decision, Mvec = simulate_SPRT_threshold(sigma, threshold)\n",
    "      decision_time = len(Mvec)\n",
    "      decision_list.append(decision)\n",
    "      decision_time_list.append(decision_time)\n",
    "\n",
    "    # Calculate and store average decision speed and accuracy\n",
    "    decision_speed = np.mean(1. / np.array(decision_time_list))\n",
    "    decision_accuracy = sum(decision_list) / len(decision_list)\n",
    "    decision_speed_list.append(decision_speed)\n",
    "    accuracy_list.append(decision_accuracy)\n",
    "\n",
    "  return accuracy_list, decision_speed_list\n",
    "\n",
    "\n",
    "np.random.seed(100)\n",
    "sigma = 3.75\n",
    "num_sample = 200\n",
    "alpha_list = np.logspace(-2, -0.1, 8)\n",
    "threshold_list = threshold_from_errorrate(alpha_list)\n",
    "with plt.xkcd():\n",
    "  simulate_and_plot_accuracy_vs_threshold(sigma, threshold_list, num_sample)"
   ]
  },
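  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-threshold bookkeeping above can be checked on toy numbers (hypothetical values: `decision_list` holds booleans with `True` meaning a correct decision, and `decision_time_list` holds the number of observations each simulated trial needed):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "decision_list = [True, True, False, True, True]\n",
    "decision_time_list = [4, 10, 2, 5, 8]\n",
    "\n",
    "# Speed: average of 1/time, i.e. decisions per observation\n",
    "decision_speed = np.mean(1. / np.array(decision_time_list))  # 0.235\n",
    "# Accuracy: fraction of correct decisions\n",
    "decision_accuracy = sum(decision_list) / len(decision_list)  # 0.8"
   ]
  },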
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Interactive Demo: Speed/Accuracy with a threshold rule\n",
    "\n",
    "**Suggestions**\n",
    "\n",
    "* Play with different values of the noise level `sigma` and observe how that affects the speed/accuracy tradeoff."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-03T14:00:16.537332Z",
     "iopub.status.busy": "2021-06-03T14:00:16.530121Z",
     "iopub.status.idle": "2021-06-03T14:00:25.697319Z",
     "shell.execute_reply": "2021-06-03T14:00:25.697843Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title\n",
    "\n",
    "#@markdown Make sure you execute this cell to enable the widget!\n",
    "np.random.seed(100)\n",
    "num_sample = 100\n",
    "alpha_list = np.logspace(-2, -0.1, 8)\n",
    "threshold_list = threshold_from_errorrate(alpha_list)\n",
    "\n",
    "@widgets.interact\n",
    "def plot(sigma=(0.05, 10.0, 0.05)):\n",
    "  simulate_and_plot_accuracy_vs_threshold(sigma, threshold_list, num_sample)"
   ]
  }
 ],
 "metadata": {
  "@webio": {
   "lastCommId": null,
   "lastKernelId": null
  },
  "colab": {
   "collapsed_sections": [],
   "include_colab_link": true,
   "name": "W3D2_Tutorial1",
   "provenance": [],
   "toc_visible": true
  },
  "kernel": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.10"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": true,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": true
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
