{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {},
    "id": "view-in-github"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> &nbsp; <a href=\"https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeoNeuron/professional-workshop-3/master/tutorials/W8_HiddenDynamics/student/W8_Tutorial2.ipynb\" target=\"_parent\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" alt=\"Open in Kaggle\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "# Tutorial 2: Hidden Markov Model\n",
    "\n",
    "__Content creators:__ Yicheng Fei with help from Jesse Livezey and Xaq Pitkow\n",
    "\n",
    "__Content modified by:__ Kai Chen\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "# Tutorial objectives\n",
    "\n",
    "The world around us is often changing, but we only have noisy sensory measurements. Similarly, neural systems switch between discrete states (e.g. sleep/wake) which are observable only indirectly, through their impact on neural activity. **Hidden Markov Models** (HMM) let us reason about these unobserved (also called hidden or latent) states using a time series of measurements. \n",
    "\n",
    "Here we'll learn how changing the HMM's transition probability and measurement noise impacts the data. We'll look at how uncertainty increases as we predict the future, and how to gain information from the measurements.\n",
    "\n",
    "We will use a binary latent variable $s_t \\in \\{0,1\\}$ that switches randomly between the two states, and a 1D Gaussian emission model $m_t|s_t \\sim \\mathcal{N}(\\mu_{s_t},\\sigma^2_{s_t})$ that provides evidence about the current state.\n",
    "\n",
    "By the end of this tutorial, you should be able to:\n",
    "- Describe how the hidden states in a Hidden Markov model evolve over time, in words, in math, and in code\n",
    "- Estimate hidden states from data using forward inference in a Hidden Markov model\n",
    "- Describe how measurement noise and state transition probabilities affect uncertainty in predictions of the future and the ability to estimate hidden states.\n",
    "\n",
    "**Summary of Exercises**\n",
    "\n",
    "1. Generate data from an HMM.\n",
    "\n",
    "2. Calculate how predictions propagate in a Markov Chain without evidence.\n",
    "\n",
    "3. Combine new evidence and prediction from past evidence to estimate hidden states."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "# Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# Imports\n",
    "\n",
    "import numpy as np\n",
    "import time\n",
    "from scipy import stats\n",
    "from scipy.optimize import linear_sum_assignment\n",
    "from collections import namedtuple\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "from matplotlib import patches"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "#@title Figure Settings\n",
    "import ipywidgets as widgets       # interactive display\n",
    "from ipywidgets import interactive, interact, HBox, Layout,VBox\n",
    "from IPython.display import HTML\n",
    "%config InlineBackend.figure_format = 'retina'\n",
    "\n",
    "nma_style = {\n",
    "    'figure.figsize' : (8, 6),\n",
    "    'figure.autolayout' : True,\n",
    "    'font.size' : 15,\n",
    "    'xtick.labelsize' : 'small',\n",
    "    'ytick.labelsize' : 'small',\n",
    "    'legend.fontsize' : 'small',\n",
    "    'axes.spines.top' : False,\n",
    "    'axes.spines.right' : False,\n",
    "    'xtick.major.size' : 5,\n",
    "    'ytick.major.size' : 5,\n",
    "}\n",
    "for key, value in nma_style.items():\n",
    "    plt.rcParams[key] = value\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# @title Plotting Functions\n",
    "\n",
    "def plot_hmm1(model, states, measurements, flag_m=True):\n",
    "  \"\"\"Plots HMM states and measurements for 1d states and measurements.\n",
    "\n",
    "  Args:\n",
    "    model (hmmlearn model):               hmmlearn model used to get state means.\n",
    "    states (numpy array of floats):       Samples of the states.\n",
    "    measurements (numpy array of floats): Samples of the measurements.\n",
    "  \"\"\"\n",
    "  T = states.shape[0]\n",
    "  nsteps = states.size\n",
    "  aspect_ratio = 2\n",
    "  fig, ax1 = plt.subplots(figsize=(8,4))\n",
    "  states_forplot = list(map(lambda s: model.means[s], states))\n",
    "  ax1.step(np.arange(nsteps), states_forplot, \"-\", where=\"mid\", alpha=1.0, c=\"green\")\n",
    "  ax1.set_xlabel(\"Time\")\n",
    "  ax1.set_ylabel(\"Latent State\", c=\"green\")\n",
    "  ax1.set_yticks([-1, 1])\n",
    "  ax1.set_yticklabels([\"-1\", \"+1\"])\n",
    "  ax1.set_xticks(np.arange(0,T,10))\n",
    "  ymin = min(measurements)\n",
    "  ymax = max(measurements)\n",
    "\n",
    "  ax2 = ax1.twinx()\n",
    "  ax2.set_ylabel(\"Measurements\", c=\"crimson\")\n",
    "\n",
    "  # show measurement gaussian\n",
    "  if flag_m:\n",
    "    ax2.plot([T,T],ax2.get_ylim(), color=\"maroon\", alpha=0.6)\n",
    "    for i in range(model.n_components):\n",
    "      mu = model.means[i]\n",
    "      scale = np.sqrt(model.vars[i])\n",
    "      rv = stats.norm(mu, scale)\n",
    "      num_points = 50\n",
    "      domain = np.linspace(mu-3*scale, mu+3*scale, num_points)\n",
    "\n",
    "      left = np.repeat(float(T), num_points)\n",
    "      # left = np.repeat(0.0, num_points)\n",
    "      offset = rv.pdf(domain)\n",
    "      offset *= T / 15\n",
    "      lbl = \"measurement\" if i == 0 else \"\"\n",
    "      # ax2.fill_betweenx(domain, left, left-offset, alpha=0.3, lw=2, color=\"maroon\", label=lbl)\n",
    "      ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color=\"maroon\", label=lbl)\n",
    "      ax2.scatter(np.arange(nsteps), measurements, c=\"crimson\", s=4)\n",
    "      ax2.legend(loc=\"upper left\")\n",
    "    ax1.set_ylim(ax2.get_ylim())\n",
    "  plt.show(fig)\n",
    "\n",
    "\n",
    "def plot_marginal_seq(predictive_probs, switch_prob):\n",
    "  \"\"\"Plots the sequence of marginal predictive distributions.\n",
    "\n",
    "    Args:\n",
    "      predictive_probs (list of numpy vectors): sequence of predictive probability vectors\n",
    "      switch_prob (float):                      Probability of switching states.\n",
    "  \"\"\"\n",
    "  T = len(predictive_probs)\n",
    "  prob_neg = [p_vec[0] for p_vec in predictive_probs]\n",
    "  prob_pos = [p_vec[1] for p_vec in predictive_probs]\n",
    "  fig, ax = plt.subplots()\n",
    "  ax.plot(np.arange(T), prob_neg, color=\"blue\")\n",
    "  ax.plot(np.arange(T), prob_pos, color=\"orange\")\n",
    "  ax.legend([\n",
    "    \"prob in state -1\", \"prob in state 1\"\n",
    "  ])\n",
    "  ax.text(T/2, 0.05, \"switching probability={}\".format(switch_prob), fontsize=12,\n",
    "          bbox=dict(boxstyle=\"round\", facecolor=\"wheat\", alpha=0.6))\n",
    "  ax.set_xlabel(\"Time\")\n",
    "  ax.set_ylabel(\"Probability\")\n",
    "  ax.set_title(\"Forgetting curve in a changing world\")\n",
    "  #ax.set_aspect(aspect_ratio)\n",
    "  plt.show(fig)\n",
    "\n",
    "def plot_evidence_vs_noevidence(posterior_matrix, predictive_probs):\n",
    "  \"\"\"Plots the average posterior probabilities with evidence vs. no evidence\n",
    "\n",
    "  Args:\n",
    "    posterior_matrix: (2d numpy array of floats): The posterior probabilities in state 1 from evidence (samples, time)\n",
    "    predictive_probs (numpy array of floats):  Predictive probabilities in state 1 without evidence\n",
    "  \"\"\"\n",
    "  nsample, T = posterior_matrix.shape\n",
    "  posterior_mean = posterior_matrix.mean(axis=0)\n",
    "  fig, ax = plt.subplots(1)\n",
    "  # ax.plot([0.0, T],[0.5, 0.5], color=\"red\", linestyle=\"dashed\")\n",
    "  ax.plot([0.0, T],[0., 0.], color=\"red\", linestyle=\"dashed\")\n",
    "  ax.plot(np.arange(T), predictive_probs, c=\"orange\", linewidth=2, label=\"No evidence\")\n",
    "  ax.scatter(np.tile(np.arange(T), (nsample, 1)), posterior_matrix, s=0.8, c=\"green\", alpha=0.3, label=\"With evidence (samples)\")\n",
    "  ax.plot(np.arange(T), posterior_mean, c='green', linewidth=2, label=\"With evidence (average)\")\n",
    "  ax.legend()\n",
    "  ax.set_yticks([0.0, 0.25, 0.5, 0.75, 1.0])\n",
    "  ax.set_xlabel(\"Time\")\n",
    "  ax.set_ylabel(\"Probability in State +1\")\n",
    "  ax.set_title(\"Gain confidence with evidence\")\n",
    "  plt.show(fig)\n",
    "\n",
    "\n",
    "def plot_forward_inference(model, states, measurements, states_inferred,\n",
    "                           predictive_probs, likelihoods, posterior_probs,\n",
    "                           t=None,\n",
    "                           flag_m=True, flag_d=True, flag_pre=True, flag_like=True, flag_post=True,\n",
    "                           ):\n",
    "  \"\"\"Plot ground truth state sequence with noisy measurements, and ground truth states vs. inferred ones\n",
    "\n",
    "      Args:\n",
    "          model (instance of hmmlearn.GaussianHMM): an instance of HMM\n",
    "          states (numpy vector): vector of 0 or 1(int or Bool), the sequences of true latent states\n",
    "          measurements (numpy vector of numpy vector): the un-flattened Gaussian measurements at each time point, element has size (1,)\n",
    "          states_inferred (numpy vector): vector of 0 or 1(int or Bool), the sequences of inferred latent states\n",
    "  \"\"\"\n",
    "  T = states.shape[0]\n",
    "  if t is None:\n",
    "    t = T-1\n",
    "  nsteps = states.size\n",
    "  fig, ax1 = plt.subplots(figsize=(11,6))\n",
    "  # inferred states\n",
    "  #ax1.step(np.arange(nstep)[:t+1], states_forplot[:t+1], \"-\", where=\"mid\", alpha=1.0, c=\"orange\", label=\"inferred\")\n",
    "  # true states\n",
    "  states_forplot = list(map(lambda s: model.means[s], states))\n",
    "  ax1.step(np.arange(nsteps)[:t+1], states_forplot[:t+1], \"-\", where=\"mid\", alpha=1.0, c=\"green\", label=\"true\")\n",
    "  ax1.step(np.arange(nsteps)[t+1:], states_forplot[t+1:], \"-\", where=\"mid\", alpha=0.3, c=\"green\", label=\"\")\n",
    "  # Posterior curve\n",
    "  delta = model.means[1] - model.means[0]\n",
    "  states_interpolation = model.means[0] + delta * posterior_probs[:,1]\n",
    "  if flag_post:\n",
    "    ax1.step(np.arange(nsteps)[:t+1], states_interpolation[:t+1], \"-\", where=\"mid\", c=\"grey\", label=\"posterior\")\n",
    "\n",
    "  ax1.set_xlabel(\"Time\")\n",
    "  ax1.set_ylabel(\"Latent State\", c=\"green\")\n",
    "  ax1.set_yticks([-1, 1])\n",
    "  ax1.set_yticklabels([\"-1\", \"+1\"])\n",
    "  ax1.legend(bbox_to_anchor=(0,1.02,0.2,0.1), borderaxespad=0, ncol=2)\n",
    "\n",
    "\n",
    "\n",
    "  ax2 = ax1.twinx()\n",
    "  ax2.set_ylim(\n",
    "      min(-1.2, np.min(measurements)),\n",
    "      max(1.2, np.max(measurements))\n",
    "      )\n",
    "  if flag_d:\n",
    "    ax2.scatter(np.arange(nsteps)[:t+1], measurements[:t+1], c=\"crimson\", s=4, label=\"measurement\")\n",
    "    ax2.set_ylabel(\"Measurements\", c=\"crimson\")\n",
    "\n",
    "  # show measurement distributions\n",
    "  if flag_m:\n",
    "    for i in range(model.n_components):\n",
    "      mu = model.means[i]\n",
    "      scale = np.sqrt(model.vars[i])\n",
    "      rv = stats.norm(mu, scale)\n",
    "      num_points = 50\n",
    "      domain = np.linspace(mu-3*scale, mu+3*scale, num_points)\n",
    "\n",
    "      left = np.repeat(float(T), num_points)\n",
    "      offset = rv.pdf(domain)\n",
    "      offset *= T /15\n",
    "      # lbl = \"measurement\" if i == 0 else \"\"\n",
    "      lbl = \"\"\n",
    "      # ax2.fill_betweenx(domain, left, left-offset, alpha=0.3, lw=2, color=\"maroon\", label=lbl)\n",
    "      ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color=\"maroon\", label=lbl)\n",
    "  ymin, ymax = ax2.get_ylim()\n",
    "  width = 0.1 * (ymax-ymin) / 2.0\n",
    "  centers = [-1.0, 1.0]\n",
    "  bar_scale = 15\n",
    "\n",
    "  # Predictions\n",
    "  data = predictive_probs\n",
    "  if flag_pre:\n",
    "    for i in range(model.n_components):\n",
    "      domain = np.array([centers[i]-1.5*width, centers[i]-0.5*width])\n",
    "      left = np.array([t,t])\n",
    "      offset = np.array([data[t,i]]*2)\n",
    "      offset *= bar_scale\n",
    "      lbl = \"today's prior\" if i == 0 else \"\"\n",
    "      ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color=\"dodgerblue\", label=lbl)\n",
    "\n",
    "  # Likelihoods\n",
    "  # data = np.stack([likelihoods, 1.0-likelihoods],axis=-1)\n",
    "  data = likelihoods / np.sum(likelihoods, axis=-1, keepdims=True)  # normalize without mutating the caller's array\n",
    "  if flag_like:\n",
    "    for i in range(model.n_components):\n",
    "      domain = np.array([centers[i]+0.5*width, centers[i]+1.5*width])\n",
    "      left = np.array([t,t])\n",
    "      offset = np.array([data[t,i]]*2)\n",
    "      offset *= bar_scale\n",
    "      lbl = \"likelihood\" if i == 0 else \"\"\n",
    "      ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color=\"crimson\", label=lbl)\n",
    "  # Posteriors\n",
    "  data = posterior_probs\n",
    "  if flag_post:\n",
    "    for i in range(model.n_components):\n",
    "      domain = np.array([centers[i]-0.5*width, centers[i]+0.5*width])\n",
    "      left = np.array([t,t])\n",
    "      offset = np.array([data[t,i]]*2)\n",
    "      offset *= bar_scale\n",
    "      lbl = \"posterior\" if i == 0 else \"\"\n",
    "      ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color=\"grey\", label=lbl)\n",
    "  if t<T-1:\n",
    "    ax2.plot([t,t],ax2.get_ylim(), color='black',alpha=0.6)\n",
    "  if flag_pre or flag_like or flag_post:\n",
    "    ax2.plot([t,t],ax2.get_ylim(), color='black',alpha=0.6)\n",
    "\n",
    "    ax2.legend(bbox_to_anchor=(0.4,1.02,0.6, 0.1), borderaxespad=0, ncol=4)\n",
    "  ax1.set_ylim(ax2.get_ylim())\n",
    "  return fig\n",
    "  # plt.show(fig)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "---\n",
    "# Section 1: Binary HMM with Gaussian measurements\n",
    "\n",
    "In contrast to the last tutorial, the latent state in an HMM is not fixed, but may switch to a different state at each time step. The time dependence is simple: the probability of the state at time $t$ is wholly determined by the state at time $t-1$. This is called the **Markov property**, and the dependency structure of the whole state sequence $\\{s_1,...,s_t\\}$ can be described by a chain structure called a Markov chain. You have seen a Markov chain in the [pre-reqs Statistics day](https://compneuro.neuromatch.io/tutorials/W0D5_Statistics/student/W0D5_Tutorial2.html#section-1-2-markov-chains) and in the [Linear Systems Tutorial 2](https://compneuro.neuromatch.io/tutorials/W2D2_LinearSystems/student/W2D2_Tutorial2.html).\n",
    "\n",
    "\n",
    "**Markov model for binary latent dynamics**\n",
    "\n",
    "Let's reuse the binary switching process you saw in the [Linear Systems Tutorial 2](https://compneuro.neuromatch.io/tutorials/W2D2_LinearSystems/student/W2D2_Tutorial2.html): our state can be either +1 or -1. The probability of switching to state $s_t=j$ from the previous state $s_{t-1}=i$ is the conditional probability distribution $p(s_t = j| s_{t-1} = i)$. We can summarize these as a $2\\times 2$ matrix we will denote $D$ for Dynamics.\n",
    "\n",
    "\\begin{align*}\n",
    "D = \\begin{bmatrix}p(s_t = +1 | s_{t-1} = +1) & p(s_t = -1 | s_{t-1} = +1)\\\\p(s_t = +1 | s_{t-1} = -1)& p(s_t = -1 | s_{t-1} = -1)\\end{bmatrix}\n",
    "\\end{align*}\n",
    "\n",
    "$D_{ij}$ represents the probability of transitioning from state $i$ to state $j$ at the next time step. Note that this is in contrast to the convention used in the intro and in Linear Systems (their transition matrices are the transpose of ours), but it matches the [pre-reqs Statistics day](https://compneuro.neuromatch.io/tutorials/W0D5_Statistics/student/W0D5_Tutorial2.html#section-1-2-markov-chains).\n",
    "\n",
    "We can represent the probability of the _current_ state as a 2-dimensional vector\n",
    "\n",
    "$P_t = [p(s_t = +1), p(s_t = -1)].$\n",
    "\n",
    "The entries are the probability that the current state is +1 and the probability that it is -1, so they must sum to 1.\n",
    "\n",
    "We then update the probabilities over time following the Markov process:\n",
    "\n",
    "\\begin{align*}\n",
    "P_{t}= P_{t-1}D \\tag{1}\n",
    "\\end{align*}\n",
    "\n",
    "If you knew the state at time $t-1$, the entries of $P_{t-1}$ would be either 1 or 0, as there would be no uncertainty.\n",
    "\n",
    "**Measurements**\n",
    "\n",
    "In a _Hidden_ Markov model, we cannot directly observe the latent states $s_t$. Instead we get noisy measurements $m_t\\sim p(m|s_t)$."
   ]
  },
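  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "The update rule in Equation (1) can be sketched directly in NumPy. The snippet below is a minimal illustration (the value of `switch_prob` is just an example): starting from certainty in state +1, each matrix product mixes the probabilities toward the uniform distribution.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "switch_prob = 0.1\n",
    "# Rows index the previous state, columns the next state\n",
    "D = np.array([[1 - switch_prob, switch_prob],\n",
    "              [switch_prob, 1 - switch_prob]])\n",
    "\n",
    "P = np.array([1.0, 0.0])  # certain the initial state is +1\n",
    "for t in range(5):\n",
    "    P = P @ D  # one step of the Markov chain\n",
    "\n",
    "print(P)  # entries still sum to 1, but are closer to [0.5, 0.5]\n",
    "```"
   ]
  },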
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "## Coding Exercise 1.1: Simulate a binary HMM with Gaussian measurements\n",
    "\n",
    "In this exercise, you will implement a binary HMM with Gaussian measurements. Your HMM will start in State +1 and transition between states (both $-1 \\rightarrow 1$ and $1 \\rightarrow -1$) with probability `switch_prob`. Each state emits measurements drawn from a Gaussian with mean $+1$ for State +1 and mean $-1$ for State -1. The standard deviation of both states is given by `noise_level`.\n",
    "\n",
    "The exercises in the next cell have three steps:\n",
    "\n",
    "**STEP 1**. In `create_HMM`, complete the transition matrix  `transmat_` (i.e., $D$) in the code. \n",
    "\\begin{equation*}\n",
    "D = \n",
    "\\begin{pmatrix}\n",
    "p_{\\rm stay} & p_{\\rm switch} \\\\\n",
    "p_{\\rm switch} & p_{\\rm stay} \\\\\n",
    "\\end{pmatrix}\n",
    "\\end{equation*}\n",
    "with $p_{\\rm stay} = 1 - p_{\\rm switch}$. \n",
    "\n",
    "**STEP 2**. In `create_HMM`, specify the Gaussian measurement model $m_t | s_t$ by setting the means for each state and the standard deviation.\n",
    "\n",
    "**STEP 3**. In `sample`, use the transition matrix to specify the probabilities for the next state $s_t$ given the previous state $s_{t-1}$.\n",
    "\n",
    "\n",
    "In this exercise, we will use a helper data structure named `GaussianHMM1D`, implemented in the following cell. It stores the information we need about the HMM (the starting state probabilities, the transition matrix, the means and variances of the Gaussian distributions, and the number of components) and makes it easy to access. For example, we can set up our model using:\n",
    "\n",
    "\n",
    "```\n",
    "  model = GaussianHMM1D(\n",
    "    startprob = startprob_vec,\n",
    "    transmat = transmat_mat,\n",
    "    means = means_vec,\n",
    "    vars = vars_vec,\n",
    "    n_components = n_components\n",
    "  )\n",
    "```\n",
    "and then access the variances as:\n",
    "\n",
    "```\n",
    "model.vars\n",
    "```\n",
    "\n",
    "Also note that we refer to the states as `0` and `1` in the code, instead of as `-1` and `+1`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "GaussianHMM1D = namedtuple('GaussianHMM1D', ['startprob', 'transmat','means','vars','n_components'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "def create_HMM(switch_prob=0.1, noise_level=1e-1, startprob=[1.0, 0.0]):\n",
    "  \"\"\"Create an HMM with binary state variable and 1D Gaussian measurements\n",
    "  The probability to switch to the other state is `switch_prob`. Two\n",
    "  measurement models have mean 1.0 and -1.0 respectively. `noise_level`\n",
    "  specifies the standard deviation of the measurement models.\n",
    "\n",
    "  Args:\n",
    "      switch_prob (float): probability to jump to the other state\n",
    "      noise_level (float): standard deviation of measurement models. Same for\n",
    "      two components\n",
    "\n",
    "  Returns:\n",
    "      model (GaussianHMM instance): the described HMM\n",
    "  \"\"\"\n",
    "\n",
    "  ############################################################################\n",
    "  # Insert your code here to:\n",
    "  #   * Create the transition matrix, `transmat_mat`, so that the\n",
    "  #      probability of switching is `switch_prob`\n",
    "  #   * Set the measurement model variances to `noise_level ** 2` for\n",
    "  #      both states\n",
    "  raise NotImplementedError(\"`create_HMM` is incomplete\")\n",
    "  ############################################################################\n",
    "\n",
    "  n_components = 2\n",
    "\n",
    "  startprob_vec = np.asarray(startprob)\n",
    "\n",
    "  # STEP 1: Transition probabilities\n",
    "  transmat_mat = ... # np.array([[...], [...]])\n",
    "\n",
    "  # STEP 2: Measurement probabilities\n",
    "\n",
    "  # Mean measurements for each state\n",
    "  means_vec = ...\n",
    "\n",
    "  # Noise for each state\n",
    "  vars_vec = np.ones(2) * ...\n",
    "\n",
    "  # Initialize model\n",
    "  model = GaussianHMM1D(\n",
    "    startprob = startprob_vec,\n",
    "    transmat = transmat_mat,\n",
    "    means = means_vec,\n",
    "    vars = vars_vec,\n",
    "    n_components = n_components\n",
    "  )\n",
    "\n",
    "  return model\n",
    "\n",
    "def sample(model, T):\n",
    "  \"\"\"Generate samples from the given HMM\n",
    "\n",
    "  Args:\n",
    "    model (GaussianHMM1D): the HMM with Gaussian measurement\n",
    "    T (int): number of time steps to sample\n",
    "\n",
    "  Returns:\n",
    "    M (numpy vector): the series of measurements\n",
    "    S (numpy vector): the series of latent states\n",
    "\n",
    "  \"\"\"\n",
    "  ############################################################################\n",
    "  # Insert your code here to:\n",
    "  #   * take row i from `model.transmat` to get the transition probabilities\n",
    "  #       from state i to all states\n",
    "  raise NotImplementedError(\"`sample` is incomplete\")\n",
    "  ############################################################################\n",
    "  # Initialize S and M\n",
    "  S = np.zeros((T,),dtype=int)\n",
    "  M = np.zeros((T,))\n",
    "\n",
    "  # Calculate initial state\n",
    "  S[0] = np.random.choice([0,1],p=model.startprob)\n",
    "\n",
    "  # Latent state at time `t` depends on `t-1` and the corresponding transition probabilities to other states\n",
    "  for t in range(1,T):\n",
    "\n",
    "    # STEP 3: Get vector of probabilities for all possible `S[t]` given a particular `S[t-1]`\n",
    "    transition_vector = ...\n",
    "\n",
    "    # Calculate latent state at time `t`\n",
    "    S[t] = np.random.choice([0,1],p=transition_vector)\n",
    "\n",
    "  # Calculate measurements conditioned on the latent states\n",
    "  # Since measurements are independent of each other given the latent states, we could calculate them as a batch\n",
    "  means = model.means[S]\n",
    "  scales = np.sqrt(model.vars[S])\n",
    "  M = np.random.normal(loc=means, scale=scales, size=(T,))\n",
    "\n",
    "  return M, S\n",
    "\n",
    "\n",
    "# Set random seed\n",
    "np.random.seed(101)\n",
    "\n",
    "# Set parameters of HMM\n",
    "T = 100\n",
    "switch_prob = 0.1\n",
    "noise_level = 2.0\n",
    "\n",
    "# Create HMM\n",
    "model = create_HMM(switch_prob=switch_prob, noise_level=noise_level)\n",
    "\n",
    "# Sample from HMM\n",
    "M, S = sample(model,T)\n",
    "assert M.shape==(T,)\n",
    "assert S.shape==(T,)\n",
    "\n",
    "# Print values\n",
    "print(M[:5])\n",
    "print(S[:5])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "[*Click for solution*](https://github.com/NeoNeuron/professional-workshop-3/tree/master//tutorials/W8_HiddenDynamics/solutions/W8_Tutorial2_Solution_76573dcd.py)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "You should see that the first five measurements are:\n",
    " \n",
    " `[-3.09355908  1.58552915 -3.93502804 -1.98819072 -1.32506947]`\n",
    "\n",
    " while the first five states are:\n",
    "\n",
    " `[0 0 0 0 0]`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "## Interactive Demo 1.2: Binary HMM\n",
    "\n",
    "In the demo below, we simulate and plot a similar HMM. You can change the probability of switching states and the noise level (the standard deviation of the Gaussian measurement distributions). You can tick the checkbox to also visualize the measurements.\n",
    "\n",
    "**First**, think about and discuss these questions:\n",
    "\n",
    "1.   What will the states do if the switching probability is zero? One?\n",
    "2.   What will measurements look like with high noise? Low?\n",
    "\n",
    "\n",
    "\n",
    "**Then**, play with the demo to see if you were correct or not."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "#@title\n",
    "\n",
    "#@markdown Execute this cell to enable the widget!\n",
    "\n",
    "nstep = 100\n",
    "\n",
    "@widgets.interact\n",
    "def plot_samples_widget(\n",
    "    switch_prob=widgets.FloatSlider(min=0.0, max=1.0, step=0.02, value=0.1),\n",
    "    log10_noise_level=widgets.FloatSlider(min=-1., max=1., step=.01, value=-0.3),\n",
    "    flag_m=widgets.Checkbox(value=False, description='measurements', disabled=False, indent=False)\n",
    "    ):\n",
    "  np.random.seed(101)\n",
    "  model = create_HMM(switch_prob=switch_prob,\n",
    "                     noise_level=10.**log10_noise_level)\n",
    "  print(model)\n",
    "  observations, states = sample(model, nstep)\n",
    "  plot_hmm1(model, states, observations, flag_m=flag_m)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "[*Click for solution*](https://github.com/NeoNeuron/professional-workshop-3/tree/master//tutorials/W8_HiddenDynamics/solutions/W8_Tutorial2_Solution_507ce9e9.py)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "\n",
    "**Applications**. Measurements could be:\n",
    "* fish caught at different times as the school of fish moves from left to right\n",
    "* membrane voltage when an ion channel changes between open and closed\n",
    "* EEG frequency measurements as the brain moves between sleep states\n",
    "\n",
    "What phenomena can you imagine modeling with these HMMs?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "----\n",
    "\n",
    "# Section 2: Predicting the future in an HMM\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "### Interactive Demo 2.1: Forgetting in a changing world\n",
    "\n",
    "\n",
    "Even if we know the world state for sure, the world changes. We become less and less certain as time goes by since our last measurement. In this exercise, we'll see how a Hidden Markov Model gradually \"forgets\" the current state when predicting the future without measurements.\n",
    "\n",
    "Assume we know that the initial state is -1, $s_0=-1$, so $p(s_0)=[1,0]$. We will plot $p(s_t)$ versus time.\n",
    "\n",
    "1. Examine helper function `simulate_prediction_only` and understand how the predicted distribution changes over time.\n",
    "\n",
    "2. Using our provided code, plot this distribution over time, and manipulate the process dynamics via the slider controlling the switching probability.\n",
    "\n",
    "Do you forget more quickly with a low or a high switching probability? Why? How does the curve look when `switch_prob` $>0.5$? Why?\n",
    "\n",
    "\n",
    "\n"
   ]
  },
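  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "The forgetting curve can also be checked analytically: a symmetric transition matrix with switching probability $q$ has eigenvalues $1$ and $1-2q$, so the probability of state +1 relaxes geometrically toward $0.5$. A minimal sketch of this check (using the code's state ordering, where index 0 is state $-1$; the value of `q` is just an example):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "q = 0.1  # switching probability (example value)\n",
    "D = np.array([[1 - q, q], [q, 1 - q]])\n",
    "\n",
    "# Iterate the chain starting from certainty in state -1\n",
    "P = np.array([1.0, 0.0])\n",
    "probs = [P.copy()]\n",
    "for t in range(1, 20):\n",
    "    P = P @ D\n",
    "    probs.append(P.copy())\n",
    "probs = np.array(probs)\n",
    "\n",
    "# Closed form: p_t(+1) = 0.5 - 0.5 * (1 - 2q)**t\n",
    "closed_form = 0.5 - 0.5 * (1 - 2 * q) ** np.arange(20)\n",
    "print(np.max(np.abs(probs[:, 1] - closed_form)))  # ~0\n",
    "```"
   ]
  },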
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# @markdown Execute this cell to enable helper function `simulate_prediction_only`\n",
    "\n",
    "def simulate_prediction_only(model, nstep):\n",
    "  \"\"\"\n",
    "  Simulate the diffusion of HMM with no observations\n",
    "\n",
    "  Args:\n",
    "    model (GaussianHMM1D instance): the HMM instance\n",
    "    nstep (int): total number of time steps to simulate (including the initial time)\n",
    "\n",
    "  Returns:\n",
    "    predictive_probs (list of numpy vector): the list of marginal probabilities\n",
    "  \"\"\"\n",
    "  predictive_probs = []\n",
    "  prob = model.startprob\n",
    "  for i in range(nstep):\n",
    "\n",
    "    # Log probabilities\n",
    "    predictive_probs.append(prob)\n",
    "\n",
    "    # One step forward\n",
    "    prob = prob @ model.transmat\n",
    "\n",
    "  return predictive_probs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# @markdown Execute this cell to enable the widget!\n",
    "\n",
    "np.random.seed(101)\n",
    "T = 100\n",
    "noise_level = 0.5\n",
    "\n",
    "@widgets.interact(switch_prob=widgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.1))\n",
    "def plot(switch_prob=0.1):\n",
    "  model = create_HMM(switch_prob=switch_prob, noise_level=noise_level)\n",
    "  predictive_probs = simulate_prediction_only(model, T)\n",
    "  plot_marginal_seq(predictive_probs, switch_prob)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "[*Click for solution*](https://github.com/NeoNeuron/professional-workshop-3/tree/master//tutorials/W8_HiddenDynamics/solutions/W8_Tutorial2_Solution_8357dee2.py)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "# Section 3: Forward inference in an HMM"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "### Coding Exercise 3.1: Forward inference of HMM\n",
    "\n",
    "As a recursive algorithm, let's assume we already have yesterday's posterior, i.e. the posterior from time $t-1$: $p(s_{t-1}|m_{1:t-1})$. When the new measurement $m_{t}$ comes in, the algorithm performs the following steps:\n",
    "\n",
    "* **Predict**: transform yesterday's posterior over $s_{t-1}$ into today's prior over $s_t$ using the transition matrix $D$:\n",
    "\n",
    "$$\\text{today's prior}=p(s_t|m_{1:t-1})= p(s_{t-1}|m_{1:t-1}) D$$\n",
    "\n",
    "* **Update**: incorporate the measurement $m_t$ to calculate the posterior $p(s_t|m_{1:t})$:\n",
    "\n",
    "$$\\text{posterior} \\propto \\text{prior}\\cdot \\text{likelihood}=p(m_t|s_t)\\,p(s_t|m_{1:t-1})$$\n",
    "\n",
    "In this exercise, you will:\n",
    "\n",
    "* STEP 1: Complete the code in function `markov_forward` to calculate the predictive marginal distribution at the next time step\n",
    "\n",
    "* STEP 2: Complete the code in function `one_step_update` to combine predictive probabilities and data likelihood into a new posterior\n",
    "  * Hint: We have provided a function to calculate the likelihood of $m_t$ under the two possible states: `compute_likelihood(model,M_t)`.\n",
    "\n",
    "* STEP 3: Using code we provide, plot the posterior and compare with the true values \n",
    "\n",
    "The complete forward inference is implemented in `simulate_forward_inference` which just calls `one_step_update` recursively.\n",
    "\n",
    "\n"
   ]
  },
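  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "The predict and update equations above can be walked through with a single numeric example. The sketch below is standalone NumPy/SciPy with assumed parameters (a symmetric transition matrix and unit-variance Gaussian emissions with means $\\pm 1$); it illustrates the equations, not the exercise solution itself."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# Standalone numeric sketch of one predict + update step\n",
    "# (assumed parameters, not the exercise solution)\n",
    "import numpy as np\n",
    "from scipy import stats\n",
    "\n",
    "D = np.array([[0.9, 0.1],\n",
    "              [0.1, 0.9]])            # transition matrix, D[i,j] = p(j | i)\n",
    "posterior_tm1 = np.array([0.8, 0.2])  # yesterday's posterior\n",
    "\n",
    "# Predict: today's prior p(s_t|m_{1:t-1}) = yesterday's posterior times D\n",
    "prior = posterior_tm1 @ D\n",
    "\n",
    "# Update: elementwise product with the likelihood of m_t, then normalize\n",
    "m_t = 0.5\n",
    "likelihood = np.array([stats.norm(1, 1).pdf(m_t),    # state 0 emits N(+1, 1)\n",
    "                       stats.norm(-1, 1).pdf(m_t)])  # state 1 emits N(-1, 1)\n",
    "posterior_t = prior * likelihood\n",
    "posterior_t /= posterior_t.sum()\n",
    "print(prior, posterior_t)  # the measurement pulls the posterior further toward state 0"
   ]
  },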
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# @markdown Execute to enable helper functions `compute_likelihood` and `simulate_forward_inference`\n",
    "\n",
    "def compute_likelihood(model, M):\n",
    "  \"\"\"\n",
    "  Calculate likelihood of seeing data `M` for all measurement models\n",
    "\n",
    "  Args:\n",
    "    model (GaussianHMM1D): HMM\n",
    "    M (float or numpy vector): measurement(s)\n",
    "\n",
    "  Returns:\n",
    "    L (numpy vector or matrix): the likelihood\n",
    "  \"\"\"\n",
    "  rv0 = stats.norm(model.means[0], np.sqrt(model.vars[0]))\n",
    "  rv1 = stats.norm(model.means[1], np.sqrt(model.vars[1]))\n",
    "  L = np.stack([rv0.pdf(M), rv1.pdf(M)],axis=0)\n",
    "  if L.size==2:\n",
    "    L = L.flatten()\n",
    "  return L\n",
    "\n",
    "\n",
    "def simulate_forward_inference(model, T, data=None):\n",
    "  \"\"\"\n",
    "  Given HMM `model`, run forward inference over `T` time steps to compute the\n",
    "  posterior marginals of s_t given the evidence `data`. If `data` is not\n",
    "  given, generate a sequence of measurements from the first state's\n",
    "  emission model.\n",
    "\n",
    "  Args:\n",
    "    model (GaussianHMM1D instance): the HMM\n",
    "    T (int): length of returned arrays\n",
    "    data (numpy vector, optional): measurement sequence of length `T`\n",
    "\n",
    "  Returns:\n",
    "    predictive_probs (numpy array): predictive probabilities p(s_t|m_{1:t-1})\n",
    "    likelihoods (numpy array): likelihoods p(m_t|s_t)\n",
    "    posterior_probs (numpy array): posterior probabilities p(s_t|m_{1:t})\n",
    "  \"\"\"\n",
    "\n",
    "  # Initialize arrays for the predictive, likelihood, and posterior probabilities\n",
    "  predictive_probs = np.zeros((T,2))\n",
    "  likelihoods = np.zeros((T,2))\n",
    "  posterior_probs = np.zeros((T, 2))\n",
    "  # Use the given measurements, or generate a trajectory conditioned on the\n",
    "  # latent state always being the first state\n",
    "  if data is not None:\n",
    "    M = data\n",
    "  else:\n",
    "    M = np.random.normal(model.means[0], np.sqrt(model.vars[0]), (T,))\n",
    "\n",
    "  # Calculate marginal for each latent state x_t\n",
    "  predictive_probs[0,:] = model.startprob\n",
    "  likelihoods[0,:] = compute_likelihood(model, M[[0]])\n",
    "  posterior = predictive_probs[0,:] * likelihoods[0,:]\n",
    "  posterior /= np.sum(posterior)\n",
    "  posterior_probs[0,:] = posterior\n",
    "\n",
    "  for t in range(1, T):\n",
    "    prediction, likelihood, posterior = one_step_update(model, posterior_probs[t-1], M[[t]])\n",
    "    # normalize and add to the list\n",
    "    posterior /= np.sum(posterior)\n",
    "    predictive_probs[t,:] = prediction\n",
    "    likelihoods[t,:] = likelihood\n",
    "    posterior_probs[t,:] = posterior\n",
    "  return predictive_probs, likelihoods, posterior_probs\n",
    "\n",
    "help(compute_likelihood)\n",
    "help(simulate_forward_inference)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "def markov_forward(p0, D):\n",
    "  \"\"\"Calculate the forward predictive distribution in a discrete Markov chain\n",
    "\n",
    "  Args:\n",
    "    p0 (numpy vector): a discrete probability vector\n",
    "    D (numpy matrix): the transition matrix, D[i,j] means the prob. to\n",
    "    switch FROM i TO j\n",
    "\n",
    "  Returns:\n",
    "    p1 (numpy vector): the predictive probabilities in next time step\n",
    "  \"\"\"\n",
    "  ##############################################################################\n",
    "  # Insert your code here to:\n",
    "  #    1. Calculate the predicted probabilities at next time step using the\n",
    "  #      probabilities at current time and the transition matrix\n",
    "  raise NotImplementedError(\"`markov_forward` is incomplete\")\n",
    "  ##############################################################################\n",
    "\n",
    "  # Calculate predictive probabilities (prior)\n",
    "  p1 = ...\n",
    "\n",
    "  return p1\n",
    "\n",
    "def one_step_update(model, posterior_tm1, M_t):\n",
    "  \"\"\"Given a HMM model, calculate the one-time-step updates to the posterior.\n",
    "  Args:\n",
    "    model (GaussianHMM1D instance): the HMM\n",
    "    posterior_tm1 (numpy vector): Posterior at `t-1`\n",
    "    M_t (numpy array): measurement at `t`\n",
    "\n",
    "  Returns:\n",
    "    prediction (numpy vector): predictive probabilities (today's prior) at `t`\n",
    "    likelihood (numpy vector): likelihood of `M_t` under each state\n",
    "    posterior_t (numpy vector): posterior at `t`\n",
    "  \"\"\"\n",
    "  ##############################################################################\n",
    "  # Insert your code here to:\n",
    "  #    1. Call function `markov_forward` to calculate the prior for next time\n",
    "  #      step\n",
    "  #    2. Calculate likelihood of seeing current data `M_t` under both states\n",
    "  #      as a vector.\n",
    "  #    3. Calculate the posterior which is proportional to\n",
    "  #      likelihood x prediction elementwise,\n",
    "  #    4. Don't forget to normalize\n",
    "  raise NotImplementedError(\"`one_step_update` is incomplete\")\n",
    "  ##############################################################################\n",
    "\n",
    "  # Calculate predictive probabilities (prior)\n",
    "  prediction = markov_forward(...)\n",
    "\n",
    "  # Get the likelihood\n",
    "  likelihood = compute_likelihood(...)\n",
    "\n",
    "  # Calculate posterior\n",
    "  posterior_t = ...\n",
    "\n",
    "  # Normalize\n",
    "  posterior_t /= ...\n",
    "\n",
    "  return prediction, likelihood, posterior_t\n",
    "\n",
    "\n",
    "# Set random seed\n",
    "np.random.seed(12)\n",
    "\n",
    "# Set parameters\n",
    "switch_prob = 0.4\n",
    "noise_level = 0.4\n",
    "nstep = 100\n",
    "t = 75\n",
    "\n",
    "# Create and sample from model\n",
    "model = create_HMM(switch_prob = switch_prob,\n",
    "                    noise_level = noise_level,\n",
    "                    startprob=[0.5, 0.5])\n",
    "\n",
    "measurements, states = sample(model, nstep)\n",
    "\n",
    "# Infer state sequence\n",
    "predictive_probs, likelihoods, posterior_probs = simulate_forward_inference(model, nstep,\n",
    "                                                            measurements)\n",
    "states_inferred = np.asarray(posterior_probs[:,0] <= 0.5, dtype=int)\n",
    "\n",
    "# Visualize\n",
    "plot_forward_inference(\n",
    "      model, states, measurements, states_inferred,\n",
    "      predictive_probs, likelihoods, posterior_probs,t=t, flag_m = 0\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "[*Click for solution*](https://github.com/NeoNeuron/professional-workshop-3/tree/master//tutorials/W8_HiddenDynamics/solutions/W8_Tutorial2_Solution_69ce2879.py)\n",
    "\n",
    "*Example output:*\n",
    "\n",
    "<img alt='Solution hint' align='left' width=1537.0 height=826.0 src=https://raw.githubusercontent.com/NeoNeuron/professional-workshop-3/master/tutorials/W8_HiddenDynamics/static/W8_Tutorial2_Solution_69ce2879_2.png>\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "## Interactive Demo 3.2: Forward inference in binary HMM\n",
    "\n",
    "Now visualize your inference algorithm. Play with the sliders and checkboxes to help you gain intuition. \n",
    "\n",
    "* Use the sliders `switch_prob` and `log10_noise_level` to change the switching probability and measurement noise level.\n",
    "\n",
    "* Use the slider `t` to view prediction (prior) probabilities, likelihood, and posteriors at different times.\n",
    "\n",
    "When does the inference make a mistake? For example, set `switch_prob=0.1`, `log10_noise_level=-0.2`, and take a look at the probabilities at time `t=2`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# @markdown Execute this cell to enable the demo\n",
    "\n",
    "nstep = 100\n",
    "\n",
    "@widgets.interact\n",
    "def plot_forward_inference_widget(\n",
    "    switch_prob=widgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.05),\n",
    "    log10_noise_level=widgets.FloatSlider(min=-1., max=1., step=.01, value=0.1),\n",
    "    t=widgets.IntSlider(min=0, max=nstep-1, step=1, value=nstep//2),\n",
    "    #flag_m=widgets.Checkbox(value=True, description='measurement distribution', disabled=False, indent=False),\n",
    "    flag_d=widgets.Checkbox(value=True, description='measurements', disabled=False, indent=False),\n",
    "    flag_pre=widgets.Checkbox(value=True, description=\"today's prior\", disabled=False, indent=False),\n",
    "    flag_like=widgets.Checkbox(value=True, description='likelihood', disabled=False, indent=False),\n",
    "    flag_post=widgets.Checkbox(value=True, description='posterior', disabled=False, indent=False),\n",
    "    ):\n",
    "\n",
    "  np.random.seed(102)\n",
    "\n",
    "  # global model, measurements, states, states_inferred, predictive_probs, likelihoods, posterior_probs\n",
    "  model = create_HMM(switch_prob=switch_prob,\n",
    "                      noise_level=10.**log10_noise_level,\n",
    "                      startprob=[0.5, 0.5])\n",
    "  measurements, states = sample(model, nstep)\n",
    "\n",
    "  # Infer state sequence\n",
    "  predictive_probs, likelihoods, posterior_probs = simulate_forward_inference(model, nstep,\n",
    "                                                              measurements)\n",
    "  states_inferred = np.asarray(posterior_probs[:,0] <= 0.5, dtype=int)\n",
    "\n",
    "  fig = plot_forward_inference(\n",
    "        model, states, measurements, states_inferred,\n",
    "        predictive_probs, likelihoods, posterior_probs,t=t,\n",
    "        flag_m=0,\n",
    "        flag_d=flag_d,flag_pre=flag_pre,flag_like=flag_like,flag_post=flag_post\n",
    "      )\n",
    "  plt.show(fig)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "---\n",
    "# Summary\n",
    "\n",
    "In this tutorial, you\n",
    "\n",
    "* Simulated the dynamics of the hidden state in a Hidden Markov model and visualized the measured data (Section 1)\n",
    "* Explored how uncertainty in a future hidden state changes based on the probabilities of switching between states (Section 2)\n",
    "* Estimated hidden states from the measurements using forward inference, connected this to Bayesian ideas, and explored the effects of noise and transition matrix probabilities on this process (Section 3)"
   ]
  }
 ],
 "metadata": {
  "@webio": {
   "lastCommId": null,
   "lastKernelId": null
  },
  "colab": {
   "collapsed_sections": [],
   "include_colab_link": true,
   "name": "W8_Tutorial2",
   "provenance": [],
   "toc_visible": true
  },
  "interpreter": {
   "hash": "9516f62da91337f10c2adbe814d9c63a4b08f8271333386358218606edb781e3"
  },
  "kernel": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "kernelspec": {
   "display_name": "Python 3.7.11 64-bit ('pw3': conda)",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.11"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": false,
   "sideBar": true,
   "skip_h1_title": true,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": true
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
