{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "view-in-github"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/W2D1-postcourse-bugfix/tutorials/W2D3_DecisionMaking/student/W2D3_Tutorial1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "# Neuromatch Academy: Week 2, Day 3, Tutorial 1\n",
    "# Sequential Probability Ratio Test\n",
    "\n",
    "__Content creators:__ Yicheng Fei\n",
    "\n",
    "__Content reviewers:__ John Butler, Matt Krause, Spiros Chavlis, Michael Waskom, Jesse Livezey, and Byron Galbraith"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "---\n",
    "# Tutorial Objectives\n",
    "\n",
    "In this tutorial, we will consider a simplified random dot motion task. On each trial $i$, we are shown a dot moving at velocity $v_i$, either in a leftward ($v<0$) or rightward ($v>0$) direction. Although the velocity varies from trial to trial, all of the $v_i$ are generated by a fixed probability distribution, which we know to be either:\n",
    "$$\n",
    "\\begin{eqnarray}\n",
    "p_L &=& \\mathcal{N}\\left(-1,\\; \\sigma^2\\right) \\\\\n",
    "&&\\textrm{or} \\\\\n",
    "p_R &=& \\mathcal{N}\\left(+1,\\; \\sigma^2\\right)\n",
    "\\end{eqnarray}\n",
    "$$\n",
    "We want to determine whether $p_L$ or $p_R$ is the true data generating distribution. \n",
    "\n",
    "In W2D1, we learned how to combine the sensory evidence and our prior experience with Bayes' Theorem, producing a posterior probability distribution that would let us choose between the most probable of these *two* options: accepting hypothesis $H_L$, that the data comes from the $p_L$ distribution, or accepting $H_R$, that it comes from $p_R$. \n",
    "\n",
    "Here, we add a *third* option: choosing to collect more evidence before making a decision.\n",
    "\n",
    "---\n",
    "\n",
    "In this notebook we will perform a *Sequential Probability Ratio Test* between two hypotheses $H_L$ and $H_R$ by running simulations of a *Drift Diffusion Model (DDM)*. \n",
    "\n",
    "As independent and identically distributed (*i.i.d.*) samples from the true data-generating distribution come in, we accumulate our evidence linearly until a certain criterion is met, and then decide which hypothesis to accept. Two types of stopping rule will be implemented: deciding after seeing a fixed amount of data, and deciding once the likelihood ratio passes a pre-defined threshold. Due to the noisy nature of the observations, the accumulated evidence has a *drift* term governed by the expected mean output and a *diffusion* term governed by observation noise.\n",
    "\n",
    "In this tutorial, you will\n",
    "\n",
    "* Simulate a Drift-Diffusion Model with different stopping rules.\n",
    "* Observe the relation between accuracy and reaction time, and get an intuition for the speed/accuracy tradeoff."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 519
    },
    "colab_type": "code",
    "outputId": "e5280303-cbdc-40ae-8b6e-9655f626c6f2"
   },
   "outputs": [],
   "source": [
    "#@title Video 1: Tutorial Objectives\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"DGoPoLkDiUw\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "---\n",
    "# Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "both",
    "colab": {},
    "colab_type": "code"
   },
   "outputs": [],
   "source": [
    "# Imports\n",
    "import numpy as np\n",
    "from scipy import stats\n",
    "import matplotlib.pyplot as plt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab": {},
    "colab_type": "code"
   },
   "outputs": [],
   "source": [
    "#@title Figure settings\n",
    "import ipywidgets as widgets       # interactive display\n",
    "%config InlineBackend.figure_format = 'retina'\n",
    "plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab": {},
    "colab_type": "code"
   },
   "outputs": [],
   "source": [
    "#@title Helper functions\n",
    "\n",
    "def log_likelihood_ratio(xvec, p0, p1):\n",
    "  \"\"\"Given a sequence(vector) of observed data, calculate the log of\n",
    "  likelihood ratio of p1 and p0\n",
    "\n",
    "  Args:\n",
    "    xvec (numpy vector):           A vector of scalar measurements\n",
    "    p0 (Gaussian random variable): A normal random variable with `logpdf`\n",
    "                                    method\n",
    "    p1 (Gaussian random variable): A normal random variable with `logpdf`\n",
    "                                    method\n",
    "\n",
    "  Returns:\n",
    "    llvec: a vector of log likelihood ratios for each input data point\n",
    "  \"\"\"\n",
    "  return p1.logpdf(xvec) - p0.logpdf(xvec)\n",
    "\n",
    "\n",
    "def simulate_and_plot_SPRT_fixedtime(sigma, stop_time, num_sample,\n",
    "                                     verbose=True):\n",
    "  \"\"\"Simulate and plot an SPRT for a fixed amount of time, given a std.\n",
    "\n",
    "  Args:\n",
    "    sigma (float): Standard deviation of the observations.\n",
    "    stop_time (int): Number of steps to run before stopping.\n",
    "    num_sample (int): The number of samples to plot.\n",
    "    verbose (bool): Whether to print the per-trial results.\n",
    "  \"\"\"\n",
    "\n",
    "  evidence_history_list = []\n",
    "  if verbose:\n",
    "    print(\"#Trial\\tTotal_Evidence\\tDecision\")\n",
    "  for i in range(num_sample):\n",
    "    evidence_history, decision, data = simulate_SPRT_fixedtime(sigma, stop_time)\n",
    "    if verbose:\n",
    "      print(\"{}\\t{:f}\\t{}\".format(i, evidence_history[-1], decision))\n",
    "    evidence_history_list.append(evidence_history)\n",
    "\n",
    "  fig, ax = plt.subplots()\n",
    "  maxlen_evidence = np.max(list(map(len,evidence_history_list)))\n",
    "  ax.plot(np.zeros(maxlen_evidence), '--', c='red', alpha=1.0)\n",
    "  for evidences in evidence_history_list:\n",
    "    ax.plot(np.arange(len(evidences)), evidences)\n",
    "  ax.set_xlabel(\"Time\")\n",
    "  ax.set_ylabel(\"Cumulated log likelihood ratio\")\n",
    "  ax.set_title(\"Log likelihood ratio trajectories under the fixed-time \" +\n",
    "               \"stopping rule\")\n",
    "\n",
    "  plt.show(fig)\n",
    "\n",
    "\n",
    "def simulate_and_plot_accuracy_vs_stoptime(sigma, stop_time_list, num_sample):\n",
    "  \"\"\"Simulate and plot average accuracy vs. stopping time, given a std.\n",
    "\n",
    "  Args:\n",
    "    sigma (float): Standard deviation of the observations.\n",
    "    stop_time_list (list): List of numbers of steps to run before stopping.\n",
    "    num_sample (int): The number of samples to plot.\n",
    "  \"\"\"\n",
    "  accuracy_list, _ = simulate_accuracy_vs_stoptime(sigma, stop_time_list,\n",
    "                                                   num_sample)\n",
    "\n",
    "  fig, ax = plt.subplots()\n",
    "  ax.plot(stop_time_list, accuracy_list)\n",
    "  ax.set_xlabel('Stop Time')\n",
    "  ax.set_ylabel('Average Accuracy')\n",
    "\n",
    "  plt.show(fig)\n",
    "\n",
    "\n",
    "def simulate_and_plot_SPRT_fixedthreshold(sigma, num_sample, alpha,\n",
    "                                          verbose=True):\n",
    "  \"\"\"Simulate and plot an SPRT with a fixed threshold, given a std.\n",
    "\n",
    "  Args:\n",
    "    sigma (float): Standard deviation of the observations.\n",
    "    num_sample (int): The number of samples to plot.\n",
    "    alpha (float): Threshold for making a decision.\n",
    "  \"\"\"\n",
    "  # calculate evidence threshold from error rate\n",
    "  threshold = threshold_from_errorrate(alpha)\n",
    "\n",
    "  # run simulation\n",
    "  evidence_history_list = []\n",
    "  if verbose:\n",
    "    print(\"#Trial\\tTime\\tCumulated Evidence\\tDecision\")\n",
    "  for i in range(num_sample):\n",
    "    evidence_history, decision, data = simulate_SPRT_threshold(sigma, threshold)\n",
    "    if verbose:\n",
    "      print(\"{}\\t{}\\t{:f}\\t{}\".format(i, len(data), evidence_history[-1],\n",
    "                                      decision))\n",
    "    evidence_history_list.append(evidence_history)\n",
    "\n",
    "  fig, ax = plt.subplots()\n",
    "  maxlen_evidence = np.max(list(map(len,evidence_history_list)))\n",
    "  ax.plot(np.repeat(threshold,maxlen_evidence + 1), c=\"red\")\n",
    "  ax.plot(-np.repeat(threshold,maxlen_evidence + 1), c=\"red\")\n",
    "  ax.plot(np.zeros(maxlen_evidence + 1), '--', c='red', alpha=0.5)\n",
    "\n",
    "  for evidences in evidence_history_list:\n",
    "      ax.plot(np.arange(len(evidences) + 1), np.concatenate([[0], evidences]))\n",
    "\n",
    "  ax.set_xlabel(\"Time\")\n",
    "  ax.set_ylabel(\"Cumulated log likelihood ratio\")\n",
    "  ax.set_title(\"Log likelihood ratio trajectories under the threshold rule\")\n",
    "\n",
    "  plt.show(fig)\n",
    "\n",
    "\n",
    "def simulate_and_plot_accuracy_vs_threshold(sigma, threshold_list, num_sample):\n",
    "  \"\"\"Simulate and plot an SPRT for a set of thresholds, given a std.\n",
    "\n",
    "  Args:\n",
    "    sigma (float): Standard deviation of the observations.\n",
    "    threshold_list (list): List of thresholds for making a decision.\n",
    "    num_sample (int): The number of samples to plot.\n",
    "  \"\"\"\n",
    "  accuracies, decision_speeds = simulate_accuracy_vs_threshold(sigma,\n",
    "                                                               threshold_list,\n",
    "                                                               num_sample)\n",
    "\n",
    "  # Plotting\n",
    "  fig, ax = plt.subplots()\n",
    "  ax.plot(decision_speeds, accuracies, linestyle=\"--\", marker=\"o\")\n",
    "  ax.plot([np.amin(decision_speeds), np.amax(decision_speeds)],\n",
    "          [0.5, 0.5], c='red')\n",
    "  ax.set_xlabel(\"Average Decision speed\")\n",
    "  ax.set_ylabel('Average Accuracy')\n",
    "  ax.set_title(\"Speed/Accuracy Tradeoff\")\n",
    "  ax.set_ylim(0.45, 1.05)\n",
    "\n",
    "  plt.show(fig)\n",
    "\n",
    "\n",
    "def threshold_from_errorrate(alpha):\n",
    "  \"\"\"Calculate log likelihood ratio threshold from desired error rate `alpha`\n",
    "\n",
    "  Args:\n",
    "    alpha (float): in (0,1), the desired error rate\n",
    "\n",
    "  Return:\n",
    "    threshold: corresponding evidence threshold\n",
    "  \"\"\"\n",
    "  threshold = np.log((1. - alpha) / alpha)\n",
    "  return threshold"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "---\n",
    "\n",
    "# Section 1: Introduction to the SPRT\n",
    "\n",
    "### Sequential Probability Ratio Test (SPRT)\n",
    "\n",
    "<img alt=\"PGM\" width=\"400\" src=\"https://github.com/NeuromatchAcademy/course-content/blob/master/tutorials/W2D3_DecisionMaking/static/W2D3_Tutorial1_PGM.png?raw=true\">\n",
    "\n",
    "Suppose we receive a sequence of independent samples from a distribution $p$. We know that $p$ is one of $\\{p_L,p_R\\}$, determined by a binary latent variable $x$, and we need to test between the two hypotheses:\n",
    "\n",
    "$H_L: p=p_L \\text{ or } x=0$\n",
    "\n",
    "$H_R: p=p_R \\text{ or } x=1$\n",
    "\n",
    "After seeing $n$ samples $\\{x_{1},\\ldots,x_n\\}$, we calculate the total log likelihood ratio as our evidence for the decision:\n",
    "\n",
    "$$S_n = \\log \\frac{\\prod_{i=1}^n p_R(x_i)}{\\prod_{i=1}^n p_L(x_i)} = \\sum_{i=1}^n \\log p_R(x_i) - \\sum_{i=1}^n \\log p_L(x_i) \\tag{1}$$\n",
    "\n",
    "Because the samples are independent, this can be calculated incrementally as new data points come in sequentially:\n",
    "\n",
    "$$ S_n =  S_{n-1} + \\log \\frac{p_R(x_n)}{p_L(x_n)} = S_{n-1} + \\log \\Lambda_n \\tag{2}$$\n",
    "\n",
    "The stopping rule can be implemented in two ways:\n",
    "\n",
    "\n",
    "\n",
    "1. Fixed time\n",
    "\n",
    "Make a decision based on $S_n$ immediately after we have collected $n$ samples. That is, accept $H_R$ if $S_n > 0$, accept $H_L$ if $S_n < 0$, and accept $H_R$ with probability $\\frac{1}{2}$ if $S_n = 0$. The significance level or desired error rate $\\alpha$ can then be determined as\n",
    "\n",
    "$$\\alpha = \\frac{1}{1+\\exp(|S_n|)} \\tag{3}$$\n",
    "\n",
    "2. Thresholding\n",
    "\n",
    "We assume equal probabilities of making a false positive decision and a false negative decision, and denote this probability by $\\alpha$. We then accept hypothesis $H_R$ if $S_n \\ge b$, or accept hypothesis $H_L$ if $S_n \\le a$, where the thresholds are determined by\n",
    "\n",
    "$$a=\\log \\frac{\\alpha}{1-\\alpha} \\;\\;\\; b=\\log \\frac{1-\\alpha}{\\alpha} \\tag{4}$$\n",
    "(Note that these are the same value with opposite signs: $a = -b$.)"
   ]
  },
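  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "As a quick sanity check (not part of the exercises), the cell below verifies numerically that the incremental update in Eq. (2) reproduces the batch log likelihood ratio in Eq. (1). The models `pL` and `pR` here are illustrative choices with means $\\pm 1$ and $\\sigma=1$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code"
   },
   "outputs": [],
   "source": [
    "np.random.seed(0)\n",
    "pL = stats.norm(-1, 1)  # illustrative observation model under H_L\n",
    "pR = stats.norm(+1, 1)  # illustrative observation model under H_R\n",
    "x = pR.rvs(size=20)     # i.i.d. samples; true distribution taken to be pR\n",
    "\n",
    "# Batch form, Eq. (1): difference of summed log likelihoods\n",
    "S_batch = np.sum(pR.logpdf(x)) - np.sum(pL.logpdf(x))\n",
    "\n",
    "# Incremental form, Eq. (2): S_n = S_{n-1} + log(Lambda_n)\n",
    "S = 0.0\n",
    "for xi in x:\n",
    "  S += pR.logpdf(xi) - pL.logpdf(xi)\n",
    "\n",
    "print(np.isclose(S, S_batch))"
   ]
  },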
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "## Section 1.1: SPRT as a Drift Diffusion Model (DDM)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 519
    },
    "colab_type": "code",
    "outputId": "b8141986-dfd4-43b9-b485-b458f0e75720"
   },
   "outputs": [],
   "source": [
    "#@title Video 2: SPRT and the Random Dot Motion Task\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"7WBB4M_Vf58\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "Let's assume two different Gaussian observation models, conditioned on the discrete latent variable $z$:\n",
    "\n",
    "$$p_L(x|z=0) = \\mathcal{N}\\left(\\mu_L,\\sigma_L^2\\right)$$\n",
    "\n",
    "$$p_R(x|z=1) = \\mathcal{N}\\left(\\mu_R,\\sigma_R^2\\right)$$\n",
    "\n",
    "Then the log likelihood ratio for a single data point $x_i$ is \n",
    "\n",
    "$$ \\log \\Lambda_i = \\log \\frac{\\sigma_L}{\\sigma_R} -0.5 \\left[\\frac{\\left(x_i-\\mu_R\\right)^2}{\\sigma_R^2} - \\frac{(x_i-\\mu_L)^2}{\\sigma_L^2}\\right] \\tag{5}$$\n",
    "\n",
    "Without loss of generality, let's further assume the true data generating distribution is $p_R$. In this case $x_i$ can be expressed as $x_i = \\mu_R + \\sigma_R \\epsilon$ where $\\epsilon$ comes from a standard Gaussian. The foregoing formula can then be rewritten as \n",
    "\n",
    "$$\n",
    "\\log \\Lambda_i = \\left( \\log \\frac{\\sigma_L}{\\sigma_R} + 0.5 \\frac{\\left(\\mu_R-\\mu_L\\right)^2}{\\sigma_L^2} \\right) + \\left( \\frac{\\left(\\mu_R-\\mu_L\\right)\\sigma_R}{\\sigma_L^2}\\epsilon -0.5\\left[1-\\left(\\frac{\\sigma_R}{\\sigma_L}\\right)^2\\right]\\epsilon^2 \\right) \\tag{6}\n",
    "$$\n",
    "\n",
    "where the constant terms in the first bracket form the *drift* part and the noise-dependent terms in the second bracket form the *diffusion* part. If we further let $\\sigma_L=\\sigma_R$, the quadratic term vanishes and this reduces to the classical discrete drift-diffusion equation, for which we have analytical solutions for the mean and expected auto-covariance:\n",
    "\n",
    "$$\n",
    "\\log \\Lambda_i = 0.5 \\frac{(\\mu_R-\\mu_L)^2}{\\sigma_L^2} + \\frac{\\mu_R-\\mu_L}{\\sigma_L}\\epsilon, \\text{ where } \\epsilon \\sim \\mathcal{N}(0,1)\n",
    "$$\n",
    "\n"
   ]
  },
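  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "As a numerical double-check of the equal-variance case (with illustrative values $\\mu_L=-1$, $\\mu_R=1$, $\\sigma=2$), the cell below compares the closed-form drift-plus-diffusion expression against the log likelihood ratio computed directly from the two Gaussian densities."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code"
   },
   "outputs": [],
   "source": [
    "np.random.seed(1)\n",
    "mu_L, mu_R, sigma = -1.0, 1.0, 2.0  # illustrative values, sigma_L = sigma_R\n",
    "eps = np.random.randn(1000)\n",
    "x = mu_R + sigma * eps  # samples from the true distribution pR\n",
    "\n",
    "# Direct log likelihood ratio from the two Gaussian densities\n",
    "direct = stats.norm(mu_R, sigma).logpdf(x) - stats.norm(mu_L, sigma).logpdf(x)\n",
    "\n",
    "# Closed-form expression: constant drift term plus linear diffusion term\n",
    "closed = 0.5 * (mu_R - mu_L)**2 / sigma**2 + (mu_R - mu_L) / sigma * eps\n",
    "\n",
    "print(np.allclose(direct, closed))"
   ]
  },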
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "## Section 1.2: Simulating DDM with fixed-time stopping rule\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 519
    },
    "colab_type": "code",
    "outputId": "8c106098-c9ff-4c96-8b4d-342e57c5fbb4"
   },
   "outputs": [],
   "source": [
    "#@title Video 3: Simulate the DDM with a fixed-time stopping rule\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"9WNAZnEa64Y\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "### Exercise 1: Simulating an SPRT model\n",
    "\n",
    "Assume we are performing a random dot motion task, and at each time step we see a moving dot with velocity $x_t$. All data points are sampled from the same distribution $p$, which is either $p_L=\\mathcal{N}\\left(-\\mu,\\sigma^2\\right)$ or $p_R=\\mathcal{N}\\left(\\mu,\\sigma^2\\right)$. Let's now generate some simulated data under this setting and perform an SPRT using the fixed-time stopping rule.\n",
    "\n",
    "In this exercise, without loss of generality, we assume the true data-generating model is $p_R$.\n",
    "\n",
    "* Complete the code in the function `simulate_SPRT_fixedtime` to create two Gaussian random variables representing our observation models.\n",
    "* Complete the function `log_likelihood_ratio` to calculate the log likelihood ratio for a sequence of data points.\n",
    "* Complete the code in the function `simulate_SPRT_fixedtime` to calculate the cumulated evidence given a list of individual evidences.\n",
    "* Run 10 simulations and plot the DDM traces by un-commenting our provided code.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code"
   },
   "outputs": [],
   "source": [
    "def simulate_SPRT_fixedtime(sigma, stop_time, true_dist=1):\n",
    "  \"\"\"Simulate a Sequential Probability Ratio Test with fixed time stopping\n",
    "  rule. Two observation models are 1D Gaussian distributions N(1,sigma^2) and\n",
    "  N(-1,sigma^2).\n",
    "\n",
    "  Args:\n",
    "    sigma (float):      Standard deviation\n",
    "    stop_time (int):    Number of samples to take before stopping\n",
    "    true_dist (0 or 1): Which state is the true state.\n",
    "\n",
    "  Returns:\n",
    "    evidence_history (numpy vector): the history of cumulated evidence given\n",
    "                                      generated data\n",
    "    decision (int): 1 for pR, 0 for pL\n",
    "    data (numpy vector): the generated sequences of data in this trial\n",
    "  \"\"\"\n",
    "  muL = -1.0\n",
    "  muR = 1.0\n",
    "  ############################################################################\n",
    "  # Insert your code here to:\n",
    "  #      Create two Gaussian variables `pL` and `pR` with mean `muL` and\n",
    "  #      `muR` respectively and same std. `sigma`\n",
    "  #      Hint: using `stats.norm(loc=..., scale=...)` to construct an\n",
    "  #      instance of 1D Gaussian distribution\n",
    "  raise NotImplementedError(\"`simulate_SPRT_fixedtime` is incomplete\")\n",
    "  ############################################################################\n",
    "  pL = stats.norm(loc=..., scale=...)\n",
    "  pR = stats.norm(loc=..., scale=...)\n",
    "\n",
    "  # Generate a random sequence of data\n",
    "  if true_dist == 1:\n",
    "    data = pR.rvs(size=stop_time)\n",
    "  else:\n",
    "    data = pL.rvs(size=stop_time)\n",
    "\n",
    "  # Calculate cumulated evidence\n",
    "  ll_ratio_vec = log_likelihood_ratio(data, pL, pR)\n",
    "\n",
    "  ############################################################################\n",
    "  # Insert your code here to:\n",
    "  #      Calculate cumulated evidence given a vector of individual evidences\n",
    "  #      Hint: use `np.cumsum`\n",
    "  ############################################################################\n",
    "  evidence_history = ...\n",
    "\n",
    "  # Make decision\n",
    "  if evidence_history[-1] > 0:\n",
    "    decision = 1\n",
    "  elif evidence_history[-1] < 0:\n",
    "    decision = 0\n",
    "  else:\n",
    "    decision = np.random.randint(2)\n",
    "\n",
    "  return evidence_history, decision, data\n",
    "\n",
    "\n",
    "np.random.seed(100)\n",
    "sigma = 3.5  # standard deviation for pL and pR\n",
    "num_sample = 10  # number of simulations to run for each stopping time\n",
    "stop_time = 150 # stopping time\n",
    "\n",
    "\n",
    "################################################################################\n",
    "# Un-comment the following code block after completing this exercise\n",
    "################################################################################\n",
    "# simulate_and_plot_SPRT_fixedtime(sigma, stop_time, num_sample)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 624
    },
    "colab_type": "text",
    "outputId": "f1f4e2e5-4619-4822-e63d-d2c153a81206"
   },
   "source": [
    "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_DecisionMaking/solutions/W2D3_Tutorial1_Solution_51fcdced.py)\n",
    "\n",
    "*Example output:*\n",
    "\n",
    "<img alt='Solution hint' align='left' width=573 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D3_DecisionMaking/static/W2D3_Tutorial1_Solution_51fcdced_1.png>\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "### Interactive Demo: Trajectories under the fixed-time stopping rule\n",
    "\n",
    "Now let's look at how the dynamics change if you change the noise level and stopping time. \n",
    "\n",
    "\n",
    "* Play with different noise levels and stopping times and observe the corresponding trajectories. Once you have completed exercise 1, check the box to enable the demo."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 494,
     "referenced_widgets": [
      "292546b52a7b4426af68b3fd4505171e",
      "edf951e4e4c543a88bdd4ff0c47154d1",
      "0e78058f61984082ab5028342c88b3b1",
      "596d718147ce4c778afe7a4a058c3eea",
      "e248b74f27bf4684acf1f4ab9124c80f",
      "30024df82a2f49edbb9c5f74bb076f17",
      "fb20c943dec04b56946c19b8d153ee2c",
      "c8a1f0b078964c83a3ba1f261f12f125",
      "f2a27f8280804a689393063ea054eb9e",
      "caef46e6f8ae40269b696823e9be9576"
     ]
    },
    "colab_type": "code",
    "outputId": "9181c4e9-d258-4d8a-8ca6-ef703cd015a8"
   },
   "outputs": [],
   "source": [
    "#@title\n",
    "\n",
    "#@markdown Make sure you execute this cell to enable the widget!\n",
    "np.random.seed(100)\n",
    "num_sample = 10\n",
    "\n",
    "@widgets.interact\n",
    "def plot(sigma=(0.05, 10.0, 0.05), stop_time=(5, 500, 1)):\n",
    "  simulate_and_plot_SPRT_fixedtime(sigma, stop_time, num_sample, verbose=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "---\n",
    "# Section 2: Accuracy vs. Stopping time\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 519
    },
    "colab_type": "code",
    "outputId": "ee2429a6-f71f-4adc-d5a8-2b106003c941"
   },
   "outputs": [],
   "source": [
    "#@title Video 4: Speed/Accuracy Tradeoff\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"E8lvgFeIGQM\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "### Exercise 2: The Speed/Accuracy Tradeoff\n",
    "\n",
    "If you stop taking samples too early (e.g., you make a decision after seeing only 5 samples), or if a huge amount of observation noise buries the signal, observation noise is likely to drive the cumulated log likelihood ratio negative and thus lead you to a wrong decision. You can get a sense of this by increasing the noise level or decreasing the stopping time in the last exercise.\n",
    "\n",
    "Now let's look quantitatively at how decision accuracy varies with the number of samples we see. First, we'll fix our observation noise level. In this exercise you will run several repeated simulations for each stopping time to calculate the average decision accuracy. Accuracy is simply defined as the proportion of correct trials across our repeated simulations: $\\frac{\\# \\textrm{ correct decisions}}{\\# \\textrm{ total simulation runs}}$\n",
    "\n",
    "Do this for a range of stopping times and plot the relation between average decision accuracy and stopping time. You should get a positive correlation between these two quantities.\n",
    "\n",
    "* Choose a noise level. For example, $\\sigma=3$  \n",
    "* Complete the function `simulate_accuracy_vs_stoptime` to simulate and compute corresponding average accuracies for a list of stopping times.\n",
    "* Plot accuracy versus stopping time using the pre-written code\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code"
   },
   "outputs": [],
   "source": [
    "sigma = 4.65  # standard deviation for observation noise\n",
    "num_sample = 200  # number of simulations to run for each stopping time\n",
    "stop_time_list = np.arange(1, 150, 10) # Stopping times to play with\n",
    "\n",
    "def simulate_accuracy_vs_stoptime(sigma, stop_time_list, num_sample):\n",
    "  \"\"\"Calculate the average decision accuracy vs. stopping time by running\n",
    "  repeated SPRT simulations for each stop time.\n",
    "\n",
    "  Args:\n",
    "      sigma (float): standard deviation for observation model\n",
    "      stop_time_list (list-like object): a list of stopping times to run over\n",
    "      num_sample (int): number of simulations to run per stopping time\n",
    "\n",
    "  Returns:\n",
    "      accuracy_list: a list of average accuracies corresponding to input\n",
    "                      `stop_time_list`\n",
    "      decisions_list: a list of decisions made in all trials\n",
    "  \"\"\"\n",
    "  accuracy_list = []\n",
    "  decisions_list = []\n",
    "  for stop_time in stop_time_list:\n",
    "    decision_list = []\n",
    "    ########################################################################\n",
    "    # Insert your code here to:\n",
    "    #      * Run `num_sample` repeated simulations, collect decision into\n",
    "    #        `decision_list`\n",
    "    #      * Calculate average decision accuracy as `accuracy`\n",
    "    #      * Hint: use the function you wrote in the last exercise\n",
    "    raise NotImplementedError(\"`simulate_accuracy_vs_stoptime` is incomplete\")\n",
    "    ########################################################################\n",
    "    for i in range(num_sample):\n",
    "      _, decision, _= ...\n",
    "      decision_list.append(decision)\n",
    "\n",
    "    # Calculate accuracy given the true decision is 1\n",
    "    accuracy = ...\n",
    "    accuracy_list.append(accuracy)\n",
    "    decisions_list.append(decision_list)\n",
    "\n",
    "  return accuracy_list, decisions_list\n",
    "\n",
    "\n",
    "np.random.seed(100)\n",
    "################################################################################\n",
    "# Un-comment the following code after completing this exercise\n",
    "################################################################################\n",
    "# simulate_and_plot_accuracy_vs_stoptime(sigma, stop_time_list, num_sample)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 433
    },
    "colab_type": "text",
    "outputId": "4070a60a-96fe-4084-dcf8-dcdee93984a4"
   },
   "source": [
    "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_DecisionMaking/solutions/W2D3_Tutorial1_Solution_ce09121d.py)\n",
    "\n",
    "*Example output:*\n",
    "\n",
    "<img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D3_DecisionMaking/static/W2D3_Tutorial1_Solution_ce09121d_0.png>\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "### Interactive Demo: Accuracy versus stop-time\n",
    "\n",
    "**Suggestions** \n",
    "\n",
    "* Play with different values of the noise level `sigma` and observe how that affects the curve. What does that mean?\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 462,
     "referenced_widgets": [
      "53ef84ccd6c24ecca8f1fed8533cf151",
      "5fc2d6cd569f4a3abba9c2b34abfb277",
      "a06b952df95447d09226157f38846845",
      "30667ba50032457fb7588ac6d0815bee",
      "ffa9a4fbf25941d3afdb67c388f3b1b8",
      "bfd7c4faa8684320a093bf89e4d2de5e",
      "ca2a2f13fe6841aea914c126030a8524"
     ]
    },
    "colab_type": "code",
    "outputId": "ebd4f913-b4d0-4442-e9cd-e07586b247ba"
   },
   "outputs": [],
   "source": [
    "#@title\n",
    "\n",
    "#@markdown Make sure you execute this cell to enable the widget!\n",
    "np.random.seed(100)\n",
    "num_sample = 100\n",
    "stop_time_list = np.arange(1, 150, 10)\n",
    "\n",
    "@widgets.interact\n",
    "def plot(sigma=(0.05, 10.0, 0.05)):\n",
    "  simulate_and_plot_accuracy_vs_stoptime(sigma, stop_time_list, num_sample)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "---\n",
    "# Section 3: DDM with fixed thresholds\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "### Exercise 3: Simulating the DDM with fixed thresholds\n",
    "\n",
    "In this exercise, we will use thresholding as our stopping rule and observe the behavior of the DDM. \n",
    "\n",
    "With the thresholding stopping rule, we define a desired error rate and continue making measurements until that error rate is reached. Experimental evidence suggests that this kind of evidence accumulation with a thresholding stopping strategy happens at the neuronal level (see [this article](https://www.annualreviews.org/doi/full/10.1146/annurev.neuro.29.051605.113038) for further reading).\n",
    "\n",
    "* Complete the function `threshold_from_errorrate` to calculate the evidence threshold from the desired error rate $\\alpha$, as described in the formulas below. The evidence thresholds $th_L$ and $th_R$ for $p_L$ and $p_R$ are opposites of each other, as shown below, so you can just return the absolute value.\n",
    "$$\n",
    "\\begin{align}\n",
    " th_{L} &= \\log \\frac{\\alpha}{1-\\alpha} &= -th_{R} \\\\\n",
    " th_{R} &= \\log \\frac{1-\\alpha}{\\alpha} &= -th_{L}\\\\\n",
    "\\end{align}\n",
    "$$\n",
    "\n",
    "* Complete the function `simulate_SPRT_threshold` to simulate an SPRT with thresholding stopping rule given noise level and desired threshold \n",
    "\n",
    "* Run repeated simulations for a given noise level and a desired error rate visualize the DDM traces using our provided code \n"
   ]
  },
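  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "As a quick sanity check of the formulas above (a sketch, not part of the exercise; `alpha = 0.01` is just an example value), the two thresholds are symmetric about zero:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "alpha = 0.01  # example desired error rate\n",
    "th_L = np.log(alpha / (1 - alpha))  # negative log-odds\n",
    "th_R = np.log((1 - alpha) / alpha)  # positive log-odds\n",
    "print(np.isclose(th_L, -th_R))  # True: opposites, so returning abs() suffices\n",
    "```"
   ]
  },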
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code"
   },
   "outputs": [],
   "source": [
    "def simulate_SPRT_threshold(sigma, threshold , true_dist=1):\n",
    "  \"\"\"Simulate a Sequential Probability Ratio Test with thresholding stopping\n",
    "  rule. Two observation models are 1D Gaussian distributions N(1,sigma^2) and\n",
    "  N(-1,sigma^2).\n",
    "\n",
    "  Args:\n",
    "    sigma (float): Standard deviation\n",
    "    threshold (float): Desired log likelihood ratio threshold to achieve\n",
    "                        before making decision\n",
    "\n",
    "  Returns:\n",
    "    evidence_history (numpy vector): the history of cumulated evidence given\n",
    "                                      generated data\n",
    "    decision (int): 1 for pR, 0 for pL\n",
    "    data (numpy vector): the generated sequences of data in this trial\n",
    "  \"\"\"\n",
    "  muL = -1.0\n",
    "  muR = 1.0\n",
    "\n",
    "  pL = stats.norm(muL, sigma)\n",
    "  pR = stats.norm(muR, sigma)\n",
    "\n",
    "  has_enough_data = False\n",
    "\n",
    "  data_history = []\n",
    "  evidence_history = []\n",
    "  current_evidence = 0.0\n",
    "\n",
    "  # Keep sampling data until threshold is crossed\n",
    "  while not has_enough_data:\n",
    "    if true_dist == 1:\n",
    "      data = pR.rvs()\n",
    "    else:\n",
    "      data = pL.rvs()\n",
    "\n",
    "    ########################################################################\n",
    "    # Insert your code here to:\n",
    "    #      * Calculate the log-likelihood ratio for the new sample\n",
    "    #      * Update the accumulated evidence\n",
    "    raise NotImplementedError(\"`simulate_SPRT_threshold` is incomplete\")\n",
    "    ########################################################################\n",
    "    # individual log likelihood ratios\n",
    "    ll_ratio = log_likelihood_ratio(...)\n",
    "    # cumulated evidence for this chunk\n",
    "    evidence_history.append(...)\n",
    "    # update the collection of all data\n",
    "    data_history.append(data)\n",
    "    current_evidence = evidence_history[-1]\n",
    "\n",
    "    # check if we've got enough data\n",
    "    if abs(current_evidence) > threshold:\n",
    "      has_enough_data = True\n",
    "\n",
    "  data_history = np.array(data_history)\n",
    "  evidence_history = np.array(evidence_history)\n",
    "\n",
    "  # Make decision\n",
    "  if evidence_history[-1] > 0:\n",
    "    decision = 1\n",
    "  elif evidence_history[-1] < 0:\n",
    "    decision = 0\n",
    "  else:\n",
    "    decision = np.random.randint(2)\n",
    "\n",
    "  return evidence_history, decision, data_history\n",
    "\n",
    "\n",
    "np.random.seed(100)\n",
    "sigma = 2.8\n",
    "num_sample = 10\n",
    "log10_alpha = -6.5 # log10(alpha)\n",
    "alpha = np.power(10.0, log10_alpha)\n",
    "\n",
    "################################################################################\n",
    "# Un-comment the following code after completing this exercise\n",
    "################################################################################\n",
    "# simulate_and_plot_SPRT_fixedthreshold(sigma, num_sample, alpha)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 624
    },
    "colab_type": "text",
    "outputId": "7d8c4369-963d-4338-ddc7-3fa1335a8657"
   },
   "source": [
    "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_DecisionMaking/solutions/W2D3_Tutorial1_Solution_aed14feb.py)\n",
    "\n",
    "*Example output:*\n",
    "\n",
    "<img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D3_DecisionMaking/static/W2D3_Tutorial1_Solution_aed14feb_1.png>\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "### Interactive Demo: DDM with fixed threshold\n",
    "\n",
    "**Suggestion**\n",
    "\n",
    "* Play with difference values of `alpha` and `sigma` and observe how that affects the dynamics of Drift-Diffusion Model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 494,
     "referenced_widgets": [
      "9e0c63981392487a80da63901797f20f",
      "745d281bc8f945d5bbf9775a2b47695f",
      "dbcd6c3cdbed41c09f85b922ae81e2ad",
      "1ef8ab642dbd43e3aaa95c585ac2f549",
      "ec5e481b2fb14f8db654ed3038db5d97",
      "00d426a42a89408fad237ea6408d36a2",
      "1cef724e00454c939472fdf66fb20d55",
      "89a976703b0443e99b5cd8f863fe5be3",
      "fd155fe57b614588a76bf8e4d2b8def1",
      "ccbeaa27d2a64b1eb7580a018de0877c"
     ]
    },
    "colab_type": "code",
    "outputId": "50b01c11-da22-46c1-f07d-032cd73a1055"
   },
   "outputs": [],
   "source": [
    "#@title\n",
    "\n",
    "#@markdown Make sure you execute this cell to enable the widget!\n",
    "np.random.seed(100)\n",
    "num_sample = 10\n",
    "\n",
    "@widgets.interact\n",
    "def plot(sigma=(0.05, 10.0, 0.05), log10_alpha=(-8, -1, .1)):\n",
    "  alpha = np.power(10.0, log10_alpha)\n",
    "  simulate_and_plot_SPRT_fixedthreshold(sigma, num_sample, alpha, verbose=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "---\n",
    "\n",
    "# Section 4: Accuracy vs. Threshold"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "### Exercise 4: Speed/Accuracy Tradeoff Revisited\n",
    "\n",
    "The faster you make a decision, the lower your accuracy often is. This phenomenon is known as the **speed/accuracy tradeoff**. Humans can make this tradeoff in a wide range of situations, and many animal species, including ants, bees, rodents, and monkeys also show similar effects. \n",
    "\n",
    "To illustrate the speed/accuracy tradeoff under thresholding stopping rule, let's run some simulations under different thresholds and look at how average decision \"speed\" (1/length) changes with average decision accuracy. We use speed rather than accuracy because in real experiments, subjects can be incentivized to respond faster or slower; it's much harder to precisely control their decision time or error threshold. \n",
    "\n",
    "* Complete the function `simulate_accuracy_vs_threshold` to simulate and compute average accuracies vs. average decision lengths for a list of error thresholds. You will need to supply code to calculate average decision 'speed' from the lengths of trials. You should also calculate the overall accuracy across these trials. \n",
    "\n",
    "* We've set up a list of error thresholds. Run repeated simulations and collect average accuracy with average length for each error rate in this list, and use our provided code to visualize the speed/accuracy tradeoff. You should see a positive correlation between length and accuracy.\n"
   ]
  },
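  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "The error thresholds used below come from the same log-odds formula as in Exercise 3. As an aside (a sketch assuming `threshold_from_errorrate` implements that formula), the mapping between error rate and threshold is invertible, so each threshold corresponds to exactly one target error rate:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "alpha_list = np.logspace(-2, -0.1, 8)  # same error-rate grid as below\n",
    "threshold_list = np.log((1 - alpha_list) / alpha_list)  # alpha -> threshold\n",
    "recovered = 1 / (1 + np.exp(threshold_list))  # threshold -> alpha (logistic)\n",
    "print(np.allclose(recovered, alpha_list))  # True\n",
    "```"
   ]
  },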
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code"
   },
   "outputs": [],
   "source": [
    "def simulate_accuracy_vs_threshold(sigma, threshold_list, num_sample):\n",
    "  \"\"\"Calculate the average decision accuracy vs. average decision length by\n",
    "  running repeated SPRT simulations with thresholding stopping rule for each\n",
    "  threshold.\n",
    "\n",
    "  Args:\n",
    "      sigma (float): standard deviation for observation model\n",
    "      threshold_list (list-like object): a list of evidence thresholds to run\n",
    "                                          over\n",
    "      num_sample (int): number of simulations to run per stopping time\n",
    "\n",
    "  Returns:\n",
    "      accuracy_list: a list of average accuracies corresponding to input\n",
    "                      `threshold_list`\n",
    "      decision_speed_list: a list of average decision speeds\n",
    "  \"\"\"\n",
    "  decision_speed_list = []\n",
    "  accuracy_list = []\n",
    "  for threshold in threshold_list:\n",
    "    decision_time_list = []\n",
    "    decision_list = []\n",
    "    for i in range(num_sample):\n",
    "      # run simulation and get decision of current simulation\n",
    "      _, decision, data = simulate_SPRT_threshold(sigma, threshold)\n",
    "      decision_time = len(data)\n",
    "      decision_list.append(decision)\n",
    "      decision_time_list.append(decision_time)\n",
    "\n",
    "    ########################################################################\n",
    "    # Insert your code here to:\n",
    "    #      * Calculate mean decision speed given a list of decision times\n",
    "    #      * Hint: Think about speed as being inversely proportional\n",
    "    #        to decision_length. If it takes 10 seconds to make one decision,\n",
    "    #        our \"decision speed\" is 0.1 decisions per second.\n",
    "    #      * Calculate the decision accuracy\n",
    "    raise NotImplementedError(\"`simulate_accuracy_vs_threshold` is incomplete\")\n",
    "    ########################################################################\n",
    "    # Calculate and store average decision speed and accuracy\n",
    "    decision_speed = ...\n",
    "    decision_accuracy = ...\n",
    "    decision_speed_list.append(decision_speed)\n",
    "    accuracy_list.append(decision_accuracy)\n",
    "\n",
    "  return accuracy_list, decision_speed_list\n",
    "\n",
    "\n",
    "################################################################################\n",
    "# Un-comment the following code block after completing this exercise\n",
    "################################################################################\n",
    "# np.random.seed(100)\n",
    "# sigma = 3.75\n",
    "# num_sample = 200\n",
    "# alpha_list = np.logspace(-2, -0.1, 8)\n",
    "# threshold_list = threshold_from_errorrate(alpha_list)\n",
    "# simulate_and_plot_accuracy_vs_threshold(sigma, threshold_list, num_sample)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 433
    },
    "colab_type": "text",
    "outputId": "510f0f4b-4387-4773-fac9-f101feca0050"
   },
   "source": [
    "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_DecisionMaking/solutions/W2D3_Tutorial1_Solution_45ad88b0.py)\n",
    "\n",
    "*Example output:*\n",
    "\n",
    "<img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D3_DecisionMaking/static/W2D3_Tutorial1_Solution_45ad88b0_0.png>\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "### Interactive Demo: Speed/Accuracy with a threshold rule\n",
    "\n",
    "**Suggestions**\n",
    "\n",
    "* Play with difference values of  noise level `sigma` and observe how that affects the speed/accuracy tradeoff."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 462,
     "referenced_widgets": [
      "fa5e781f49b2460daead775ae2054faf",
      "bb1f27f1606c491bae549adc3417a9cb",
      "fd2fdd32cb814c4b81bf00e9cf8e6ba2",
      "f7ac9fe3e6394333a8a590a5e6c808e8",
      "1d6de25de8a04529ac2d8761430835aa",
      "5a01e08152db459ab1458d4c84b31d36",
      "7147b6871cc84e31807bd6e62faad0c2"
     ]
    },
    "colab_type": "code",
    "outputId": "2dcd74ea-4496-4d4e-eed9-1206fcb27f8a"
   },
   "outputs": [],
   "source": [
    "#@title\n",
    "\n",
    "#@markdown Make sure you execute this cell to enable the widget!\n",
    "np.random.seed(100)\n",
    "num_sample = 100\n",
    "alpha_list = np.logspace(-2, -0.1, 8)\n",
    "threshold_list = threshold_from_errorrate(alpha_list)\n",
    "\n",
    "@widgets.interact\n",
    "def plot(sigma=(0.05, 10.0, 0.05)):\n",
    "  alpha = np.power(10.0, log10_alpha)\n",
    "  simulate_and_plot_accuracy_vs_threshold(sigma, threshold_list, num_sample)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "---\n",
    "# Summary"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "Good job! By simulating Drift Diffusion Models to perform decision making, you have learnt how to \n",
    "\n",
    "1. Calculate individual sample evidence as the log likelihood ratio of two candidate models, accumulate evidence from new data points, and make decision based on current evidence in `Exercise 1`\n",
    "2. Run repeated simulations to get an estimate of decision accuraries in `Exercise 2`\n",
    "3. Implement the thresholding stopping rule where we can control our error rate by taking adequate amounts of data, and calculate the evidence threshold from desired error rate in `Exercise 3`\n",
    "4. Explore and gain intuition about the speed/accuracy tradeoff for perceptual decision making in `Exercise 4`"
   ]
  }
 ],
 "metadata": {
  "@webio": {
   "lastCommId": null,
   "lastKernelId": null
  },
  "colab": {
   "collapsed_sections": [],
   "include_colab_link": true,
   "name": "W2D3_Tutorial1",
   "provenance": [],
   "toc_visible": true
  },
  "kernel": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.8"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": true,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": true
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
