{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {},
    "id": "view-in-github"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D3_NetworkCausality/W3D3_Tutorial3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> &nbsp; <a href=\"https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeoNeuron/professional-workshop-3/master/tutorials/W6_NetworkCausality/W6_Tutorial1.ipynb\" target=\"_parent\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" alt=\"Open in Kaggle\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "# Network Causality : Model Based Inference\n",
    "\n",
    "**Content creators**: Ari Benjamin, Tony Liu, Konrad Kording\n",
    "\n",
    "**Content modified**: Kai Chen"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "---\n",
    "# Tutorial objectives\n",
    "\n",
    "1.   Master definitions of causality\n",
    "2.   Understand that estimating causality is possible\n",
    "3.   Infer causality with model based method: simultaneous fitting/regression"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "#### References:\n",
    "- D. Zhou, Y. Xiao, Y. Zhang, Z. Xu and D. Cai, “Granger causality network reconstruction of conductance-based integrate-and-fire neuronal systems”, PLoS ONE, 9 (2), e87636, 2014. ([PDF](https://ins.sjtu.edu.cn/people/zdz/publication_papers/Granger_Causality_Reconstruction.pdf))\n",
    "- Matlab-based fast GC-estimator: [GC_clean](https://github.com/bewantbe/GC_clean)\n",
    "- SciPy $\\chi^2$ statistics [documents](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2.html)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "---\n",
    "# Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab_type": "code",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "from sklearn.multioutput import MultiOutputRegressor\n",
    "from sklearn.linear_model import Lasso, LinearRegression\n",
    "from scipy.stats import chi2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab_type": "code",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "#@title Figure settings\n",
    "import ipywidgets as widgets       # interactive display\n",
    "%config InlineBackend.figure_format = 'retina'\n",
    "\n",
    "nma_style = {\n",
    "    'figure.figsize' : (8, 6),\n",
    "    'figure.autolayout' : True,\n",
    "    'font.size' : 15,\n",
    "    'xtick.labelsize' : 'small',\n",
    "    'ytick.labelsize' : 'small',\n",
    "    'legend.fontsize' : 'small',\n",
    "    'axes.spines.top' : False,\n",
    "    'axes.spines.right' : False,\n",
    "    'xtick.major.size' : 5,\n",
    "    'ytick.major.size' : 5,\n",
    "}\n",
    "for key, value in nma_style.items():\n",
    "    plt.rcParams[key] = value\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab_type": "code",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# @title Helper functions\n",
    "\n",
    "\n",
    "def sigmoid(x):\n",
    "    \"\"\"\n",
    "    Compute sigmoid nonlinearity element-wise on x.\n",
    "\n",
    "    Args:\n",
    "        x (np.ndarray): the numpy data array we want to transform\n",
    "    Returns\n",
    "        (np.ndarray): x with sigmoid nonlinearity applied\n",
    "    \"\"\"\n",
    "    return 1 / (1 + np.exp(-x))\n",
    "\n",
    "\n",
    "def logit(x):\n",
    "    \"\"\"\n",
    "\n",
    "    Applies the logit (inverse sigmoid) transformation\n",
    "\n",
    "    Args:\n",
    "        x (np.ndarray): the numpy data array we want to transform\n",
    "    Returns\n",
    "        (np.ndarray): x with logit nonlinearity applied\n",
    "    \"\"\"\n",
    "    return np.log(x/(1-x))\n",
    "\n",
    "\n",
    "def create_connectivity(n_neurons, random_state=42, p=0.1):\n",
    "    \"\"\"\n",
    "    Generate our nxn causal connectivity matrix.\n",
    "\n",
    "    Args:\n",
    "        n_neurons (int): the number of neurons in our system.\n",
    "        random_state (int): random seed for reproducibility\n",
    "\n",
    "    Returns:\n",
    "        A (np.ndarray): our 0.1 sparse connectivity matrix\n",
    "    \"\"\"\n",
    "    np.random.seed(random_state)\n",
    "    A_0 = np.random.choice([0, 1], size=(n_neurons, n_neurons), p=[1 - p, p])\n",
    "\n",
    "    # set the timescale of the dynamical system to about 100 steps\n",
    "    _, s_vals, _ = np.linalg.svd(A_0)\n",
    "    A = A_0 / (1.01 * s_vals[0])\n",
    "\n",
    "    # _, s_val_test, _ = np.linalg.svd(A)\n",
    "    # assert s_val_test[0] < 1, \"largest singular value >= 1\"\n",
    "\n",
    "    return A\n",
    "\n",
    "\n",
    "def get_regression_estimate_full_connectivity(X):\n",
    "    \"\"\"\n",
    "    Estimates the connectivity matrix using lasso regression.\n",
    "\n",
    "    Args:\n",
    "        X (np.ndarray): our simulated system of shape (n_neurons, timesteps)\n",
    "        neuron_idx (int): optionally provide a neuron idx to compute connectivity for\n",
    "    Returns:\n",
    "        V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).\n",
    "                        if neuron_idx is specified, V is of shape (n_neurons,).\n",
    "    \"\"\"\n",
    "    n_neurons = X.shape[0]\n",
    "\n",
    "    # Extract Y and W as defined above\n",
    "    W = X[:, :-1].transpose()\n",
    "    Y = X[:, 1:].transpose()\n",
    "\n",
    "    # apply inverse sigmoid transformation\n",
    "    Y = logit(Y)\n",
    "\n",
    "    # fit multioutput regression\n",
    "    reg = MultiOutputRegressor(Lasso(fit_intercept=False,\n",
    "                                     alpha=0.01, max_iter=250 ), n_jobs=-1)\n",
    "    reg.fit(W, Y)\n",
    "\n",
    "    V = np.zeros((n_neurons, n_neurons))\n",
    "    for i, estimator in enumerate(reg.estimators_):\n",
    "        V[i, :] = estimator.coef_\n",
    "\n",
    "    return V\n",
    "\n",
    "\n",
    "def get_regression_corr_full_connectivity(n_neurons, A, X, observed_ratio, regression_args):\n",
    "    \"\"\"\n",
    "    A wrapper function for our correlation calculations between A and the V estimated\n",
    "    from regression.\n",
    "\n",
    "    Args:\n",
    "        n_neurons (int): number of neurons\n",
    "        A (np.ndarray): connectivity matrix\n",
    "        X (np.ndarray): dynamical system\n",
    "        observed_ratio (float): the proportion of n_neurons observed, must be betweem 0 and 1.\n",
    "        regression_args (dict): dictionary of lasso regression arguments and hyperparameters\n",
    "\n",
    "    Returns:\n",
    "        A single float correlation value representing the similarity between A and R\n",
    "    \"\"\"\n",
    "    assert (observed_ratio > 0) and (observed_ratio <= 1)\n",
    "\n",
    "    sel_idx = np.clip(int(n_neurons*observed_ratio), 1, n_neurons)\n",
    "\n",
    "    sel_X = X[:sel_idx, :]\n",
    "    sel_A = A[:sel_idx, :sel_idx]\n",
    "\n",
    "    sel_V = get_regression_estimate_full_connectivity(sel_X)\n",
    "    return np.corrcoef(sel_A.flatten(), sel_V.flatten())[1,0], sel_V\n",
    "\n",
    "\n",
    "def see_neurons(A, ax, ratio_observed=1, arrows=True):\n",
    "    \"\"\"\n",
    "    Visualizes the connectivity matrix.\n",
    "\n",
    "    Args:\n",
    "        A (np.ndarray): the connectivity matrix of shape (n_neurons, n_neurons)\n",
    "        ax (plt.axis): the matplotlib axis to display on\n",
    "\n",
    "    Returns:\n",
    "        Nothing, but visualizes A.\n",
    "    \"\"\"\n",
    "    n = len(A)\n",
    "\n",
    "    ax.set_aspect('equal')\n",
    "    thetas = np.linspace(0, np.pi * 2, n, endpoint=False)\n",
    "    x, y = np.cos(thetas), np.sin(thetas),\n",
    "    if arrows:\n",
    "      for i in range(n):\n",
    "          for j in range(n):\n",
    "              if A[i, j] > 0:\n",
    "                  ax.arrow(x[i], y[i], x[j] - x[i], y[j] - y[i], color='k', head_width=.05,\n",
    "                          width = A[i, j] / 25,shape='right', length_includes_head=True,\n",
    "                          alpha = .2)\n",
    "    if ratio_observed < 1:\n",
    "      nn = int(n * ratio_observed)\n",
    "      ax.scatter(x[:nn], y[:nn], c='r', s=150, label='Observed')\n",
    "      ax.scatter(x[nn:], y[nn:], c='b', s=150, label='Unobserved')\n",
    "      ax.legend(fontsize=15)\n",
    "    else:\n",
    "      ax.scatter(x, y, c='k', s=150)\n",
    "    ax.axis('off')\n",
    "\n",
    "\n",
    "def simulate_neurons(A, timesteps, random_state=42):\n",
    "    \"\"\"\n",
    "    Simulates a dynamical system for the specified number of neurons and timesteps.\n",
    "\n",
    "    Args:\n",
    "        A (np.array): the connectivity matrix\n",
    "        timesteps (int): the number of timesteps to simulate our system.\n",
    "        random_state (int): random seed for reproducibility\n",
    "\n",
    "    Returns:\n",
    "        - X has shape (n_neurons, timeteps).\n",
    "    \"\"\"\n",
    "    np.random.seed(random_state)\n",
    "\n",
    "\n",
    "    n_neurons = len(A)\n",
    "    X = np.zeros((n_neurons, timesteps))\n",
    "\n",
    "    for t in range(timesteps - 1):\n",
    "        # solution\n",
    "        epsilon = np.random.multivariate_normal(np.zeros(n_neurons), np.eye(n_neurons))\n",
    "        X[:, t + 1] = sigmoid(A.dot(X[:, t]) + epsilon)\n",
    "\n",
    "        assert epsilon.shape == (n_neurons,)\n",
    "    return X\n",
    "\n",
    "\n",
    "def correlation_for_all_neurons(X):\n",
    "  \"\"\"Computes the connectivity matrix for the all neurons using correlations\n",
    "\n",
    "    Args:\n",
    "        X: the matrix of activities\n",
    "\n",
    "    Returns:\n",
    "        estimated_connectivity (np.ndarray): estimated connectivity for the selected neuron, of shape (n_neurons,)\n",
    "        \"\"\"\n",
    "  n_neurons = len(X)\n",
    "  S = np.concatenate([X[:, 1:], X[:, :-1]], axis=0)\n",
    "  R = np.corrcoef(S)[:n_neurons, n_neurons:]\n",
    "  return R\n",
    "\n",
    "\n",
    "def get_sys_corr(n_neurons, timesteps, random_state=42, neuron_idx=None):\n",
    "    \"\"\"\n",
    "    A wrapper function for our correlation calculations between A and R.\n",
    "\n",
    "    Args:\n",
    "        n_neurons (int): the number of neurons in our system.\n",
    "        timesteps (int): the number of timesteps to simulate our system.\n",
    "        random_state (int): seed for reproducibility\n",
    "        neuron_idx (int): optionally provide a neuron idx to slice out\n",
    "\n",
    "    Returns:\n",
    "        A single float correlation value representing the similarity between A and R\n",
    "    \"\"\"\n",
    "\n",
    "    A = create_connectivity(n_neurons, random_state)\n",
    "    X = simulate_neurons(A, timesteps)\n",
    "\n",
    "    R = correlation_for_all_neurons(X)\n",
    "\n",
    "    return np.corrcoef(A.flatten(), R.flatten())[0, 1]\n",
    "\n",
    "\n",
    "def get_regression_corr(n_neurons, A, X, observed_ratio, regression_args, neuron_idx=None):\n",
    "    \"\"\"\n",
    "\n",
    "    A wrapper function for our correlation calculations between A and the V estimated\n",
    "    from regression.\n",
    "\n",
    "    Args:\n",
    "        n_neurons (int): the number of neurons in our system.\n",
    "        A (np.array): the true connectivity\n",
    "        X (np.array): the simulated system\n",
    "        observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.\n",
    "        regression_args (dict): dictionary of lasso regression arguments and hyperparameters\n",
    "        neuron_idx (int): optionally provide a neuron idx to compute connectivity for\n",
    "\n",
    "    Returns:\n",
    "        A single float correlation value representing the similarity between A and R\n",
    "    \"\"\"\n",
    "    assert (observed_ratio > 0) and (observed_ratio <= 1)\n",
    "\n",
    "    sel_idx = np.clip(int(n_neurons * observed_ratio), 1, n_neurons)\n",
    "    selected_X = X[:sel_idx, :]\n",
    "    selected_connectivity = A[:sel_idx, :sel_idx]\n",
    "\n",
    "    estimated_selected_connectivity = get_regression_estimate(selected_X, neuron_idx=neuron_idx)\n",
    "    if neuron_idx is None:\n",
    "        return np.corrcoef(selected_connectivity.flatten(),\n",
    "                           estimated_selected_connectivity.flatten())[1, 0], estimated_selected_connectivity\n",
    "    else:\n",
    "        return np.corrcoef(selected_connectivity[neuron_idx, :],\n",
    "                           estimated_selected_connectivity)[1, 0], estimated_selected_connectivity\n",
    "\n",
    "\n",
    "def plot_connectivity_matrix(A, ax=None):\n",
    "  \"\"\"Plot the (weighted) connectivity matrix A as a heatmap\n",
    "\n",
    "    Args:\n",
    "      A (ndarray): connectivity matrix (n_neurons by n_neurons)\n",
    "      ax: axis on which to display connectivity matrix\n",
    "  \"\"\"\n",
    "  if ax is None:\n",
    "    ax = plt.gca()\n",
    "  lim = np.abs(A).max()\n",
    "  ax.imshow(A, vmin=-lim, vmax=lim, cmap=\"coolwarm\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "---\n",
    "# Section 1: Defining and estimating causality\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "\n",
    "Let's think carefully about the statement \"**A causes B**\". To be concrete, let's take two neurons. What does it mean to say that neuron $A$ causes neuron $B$ to fire?\n",
    "\n",
    "The *interventional* definition of causality says that:\n",
    "$$\n",
    "(A \\text{ causes } B) \\Leftrightarrow ( \\text{ If we force }A \\text { to be different, then }B\\text{ changes})\n",
    "$$\n",
    "\n",
    "To determine if $A$ causes $B$ to fire, we can inject current into neuron $A$ and see what happens to $B$.\n",
    "\n",
    "**A mathematical definition of causality**: \n",
    "Over many trials, the average causal effect $\\delta_{A\\to B}$ of neuron $A$ upon neuron $B$ is the average change in neuron $B$'s activity when we set $A=1$ versus when we set $A=0$.\n",
    "\n",
    "\n",
    "$$\n",
    "\\delta_{A\\to B} = \\mathbb{E}[B | A=1] -  \\mathbb{E}[B | A=0] \n",
    "$$\n",
    "\n",
    "Note that this is an average effect. While one can get more sophisticated about conditional effects ($A$ only effects $B$ when it's not refractory, perhaps), we will only consider average effects today.\n",
    "\n",
    "**Relation to a randomized controlled trial (RCT)**:\n",
    "The logic we just described is the logic of a randomized control trial (RCT). If you randomly give 100 people a drug and 100 people a placebo, the effect is the difference in outcomes.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "## Exercise 1: Randomized controlled trial for two neurons\n",
    "\n",
    "Let's pretend we can perform a randomized controlled trial for two neurons. Our model will have neuron $A$ synapsing on Neuron $B$:\n",
    "$$B = \\alpha A + \\epsilon$$\n",
    " where $A$ and $B$ represent the activities of the two neurons, $\\alpha=0.9$ is the coupling strength between two neurons, and $\\epsilon$ is standard normal noise $\\epsilon\\sim\\mathcal{N}(0,1)$.\n",
    "\n",
    "Our goal is to randomly assign the state of $A$ and confirm that $B$ changes. \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "both",
    "colab_type": "code",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "def neuron_B(activity_of_A, alpha=0.9):\n",
    "  \"\"\"Model activity of neuron B as neuron A activity + noise\n",
    "\n",
    "  Args:\n",
    "    activity_of_A (ndarray): An array of shape (T,) containing the neural activity of neuron A\n",
    "    alpha (float): coupling strength between A and B\n",
    "\n",
    "  Returns:\n",
    "    ndarray: activity of neuron B\n",
    "  \"\"\"\n",
    "  noise = np.random.randn(activity_of_A.shape[0])\n",
    "  return alpha * activity_of_A + noise\n",
    "\n",
    "np.random.seed(12)\n",
    "\n",
    "# Randomly assigned (binary) activity of neuron A\n",
    "A = np.random.randint(2, size=(100000,))\n",
    "\n",
    "###########################################################################\n",
    "## TODO for students: Estimate the causal effect of A upon B\n",
    "## Use eq above (difference in mean of B when A=0 vs. A=1)\n",
    "###########################################################################\n",
    "diff_in_means = ...\n",
    "#print(diff_in_means)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "both",
    "colab_type": "code",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "def neuron_B(activity_of_A, alpha=0.9):\n",
    "  \"\"\"Model activity of neuron B as neuron A activity + noise\n",
    "\n",
    "  Args:\n",
    "    activity_of_A (ndarray): An array of shape (T,) containing the neural activity of neuron A\n",
    "    alpha (float): coupling strength between A and B\n",
    "\n",
    "  Returns:\n",
    "    ndarray: activity of neuron B\n",
    "  \"\"\"\n",
    "  noise = np.random.randn(activity_of_A.shape[0])\n",
    "  return alpha * activity_of_A + noise\n",
    "\n",
    "np.random.seed(12)\n",
    "\n",
    "# Randomly assigned (binary) activity of neuron A\n",
    "A = np.random.randint(2, size=(100000,))\n",
    "\n",
    "\n",
    "diff_in_means = neuron_B(A)[A==1].mean() - neuron_B(A)[A==0].mean()\n",
    "print(diff_in_means)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "You should get a difference in means of `0.8999919` (so very close to $\\alpha$ = `0.9`). "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "---\n",
    "# Section 2: Regression"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "According to previous definition of causality, you may realize that the implied causation may also come from hidden *confounders*. \n",
    "\n",
    "**A confounding example**:\n",
    "Suppose you observe that people who sleep more do better in school. It's a nice correlation. But what else could explain it? Maybe people who sleep more are richer, don't work a second job, and have time to actually do homework. If you want to ask if sleep *causes* better grades, and want to answer that with correlations, you have to control for all possible confounds.\n",
    "\n",
    "A confound is any variable that affects both the outcome and your original covariate. In our example, confounds are things that affect both sleep and grades. \n",
    "\n",
    "**Controlling for a confound**: \n",
    "Confonds can be controlled for by adding them into the conditional terms as covariates in a regression. But for your coefficients to be causal effects, you need three things:\n",
    " \n",
    "1.   **All** confounds are included as covariates\n",
    "2.   Your regression assumes the same mathematical form of how covariates relate to outcomes (linear, GLM, etc.)\n",
    "3.   No covariates are caused *by* both the treatment (original variable) and the outcome. These are [colliders](https://en.wikipedia.org/wiki/Collider_(statistics)); we won't introduce it today (but Google it on your own time! Colliders are very counterintuitive.)\n",
    "\n",
    "In the real world it is very hard to guarantee these conditions are met. In the brain it's even harder (as we can't measure all neurons). Luckily today we simulated the system ourselves."
   ]
  },
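  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "As a quick illustration (a minimal sketch for intuition; the variables `a`, `b`, and the hidden confound `z` are made up for this example), regressing `b` on `a` alone yields a large, spurious coefficient, while adding `z` as a covariate shrinks the coefficient on `a` toward its true causal effect of zero."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# Illustration only: a and b share a hidden confound z; a does NOT cause b\n",
    "rng = np.random.default_rng(0)\n",
    "z = rng.standard_normal(10000)          # hidden confound\n",
    "a = 2 * z + rng.standard_normal(10000)  # a is driven by z\n",
    "b = 3 * z + rng.standard_normal(10000)  # b is driven by z, not by a\n",
    "\n",
    "# Naive regression of b on a alone: large, spurious coefficient\n",
    "naive = LinearRegression().fit(a[:, None], b)\n",
    "\n",
    "# Controlling for the confound: the coefficient on a is close to 0\n",
    "controlled = LinearRegression().fit(np.stack([a, z], axis=1), b)\n",
    "\n",
    "print(f\"naive coefficient on a: {naive.coef_[0]:.3f}\")\n",
    "print(f\"controlled coefficient on a: {controlled.coef_[0]:.3f}\")"
   ]
  },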
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "## Section 2.1: Recovering connectivity by model fitting\n",
    "\n",
    "Recall that in our system each neuron effects every other via:\n",
    "\n",
    "$$\n",
    "\\vec{x}_{t+1} = \\sigma(A\\vec{x}_t + \\epsilon_t), \n",
    "$$\n",
    "\n",
    "where $\\sigma$ is our sigmoid nonlinearity from before: $\\sigma(x) = \\frac{1}{1 + e^{-x}}$\n",
    "\n",
    "Our system is a closed system, too, so there are no omitted variables. The regression coefficients should be the causal effect. Are they?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "We will use a regression approach to estimate the causal influence of all neurons to neuron #1. Specifically, we will use linear regression to determine the $A$ in:\n",
    "\n",
    "$$\n",
    "\\sigma^{-1}(\\vec{x}_{t+1}) = A\\vec{x}_t + \\epsilon_t ,\n",
    "$$\n",
    "\n",
    "where $\\sigma^{-1}$ is the inverse sigmoid transformation, also sometimes referred to as the **logit** transformation: $\\sigma^{-1}(x) = \\log(\\frac{x}{1-x})$.\n",
    "\n",
    "Let $W$ be the $\\vec{x}_t$ values, up to the second-to-last timestep $T-1$:\n",
    "\n",
    "$$\n",
    "W = \n",
    "\\begin{bmatrix}\n",
    "\\mid & \\mid & ... & \\mid \\\\ \n",
    "\\vec{x}_0  & \\vec{x}_1  & ... & \\vec{x}_{T-1}  \\\\ \n",
    "\\mid & \\mid & ... & \\mid\n",
    "\\end{bmatrix}_{n \\times (T-1)}\n",
    "$$\n",
    "\n",
    "Let $Y$ be the $\\vec{x}_{t+1}$ values for a selected neuron, indexed by $i$, starting from the second timestep up to the last timestep $T$:\n",
    "\n",
    "$$\n",
    "Y = \n",
    "\\begin{bmatrix}\n",
    "x_{i,1}  & x_{i,2}  & ... & x_{i, T}  \\\\ \n",
    "\\end{bmatrix}_{1 \\times (T-1)}\n",
    "$$\n",
    "\n",
    "You will then fit the following model:\n",
    "\n",
    "$$\n",
    "\\sigma^{-1}(Y^T) = W^TV\n",
    "$$\n",
    "\n",
    "where $V$ is the $n \\times 1$ coefficient matrix of this regression, which will be the estimated connectivity matrix between the selected neuron and the rest of the neurons.\n",
    "\n",
    "**Review**: As you learned Week 4, *lasso* a.k.a. **$L_1$ regularization** causes the coefficients to be sparse, containing mostly zeros. Think about why we want this here."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "## Exercise 2: Use linear regression plus lasso to estimate causal connectivities\n",
    "\n",
    "You will now create a function to fit the above regression model and V. We will then call this function to examine how close the regression vs the correlation is to true causality.\n",
    "\n",
    "**Code**:\n",
    "\n",
    "You'll notice that we've transposed both $Y$ and $W$ here and in the code we've already provided below. Why is that? \n",
    "\n",
    "This is because the machine learning models provided in scikit-learn expect the *rows* of the input data to be the observations, while the *columns* are the variables. We have that inverted in our definitions of $Y$ and $W$, with the timesteps of our system (the observations) as the columns. So we transpose both matrices to make the matrix orientation correct for scikit-learn.\n",
    "\n",
    "\n",
    "- Because of the abstraction provided by scikit-learn, fitting this regression will just be a call to initialize the `Lasso()` estimator and a call to the `fit()` function\n",
    "- Use the following hyperparameters for the `Lasso` estimator:\n",
    "    - `alpha = 0.01`\n",
    "    - `fit_intercept = False`\n",
    "- How do we obtain $V$ from the fitted model?\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "both",
    "colab_type": "code",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "def get_regression_estimate(X, neuron_idx):\n",
    "    \"\"\"\n",
    "    Estimates the connectivity matrix using lasso regression.\n",
    "\n",
    "    Args:\n",
    "        X (np.ndarray): our simulated system of shape (n_neurons, timesteps)\n",
    "        neuron_idx (int):  a neuron index to compute connectivity for\n",
    "\n",
    "    Returns:\n",
    "        V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).\n",
    "                        if neuron_idx is specified, V is of shape (n_neurons,).\n",
    "    \"\"\"\n",
    "    # Extract Y and W as defined above\n",
    "    W = X[:, :-1].transpose()\n",
    "    Y = X[[neuron_idx], 1:].transpose()\n",
    "\n",
    "    # Apply inverse sigmoid transformation\n",
    "    Y = logit(Y)\n",
    "\n",
    "    ############################################################################\n",
    "    ## TODO: Insert your code here to fit a regressor with Lasso. Lasso captures\n",
    "    ## our assumption that most connections are precisely 0.\n",
    "    ## Fill in function and remove\n",
    "    raise NotImplementedError(\"Please complete the regression exercise\")\n",
    "    ############################################################################\n",
    "\n",
    "    # Initialize regression model with no intercept and alpha=0.01\n",
    "    regression = ...\n",
    "\n",
    "    # Fit regression to the data\n",
    "    regression.fit(...)\n",
    "\n",
    "    V = regression.coef_\n",
    "\n",
    "    return V\n",
    "\n",
    "# Parameters\n",
    "n_neurons = 50  # the size of our system\n",
    "timesteps = 10000  # the number of timesteps to take\n",
    "random_state = 42\n",
    "neuron_idx = 1\n",
    "\n",
    "A = create_connectivity(n_neurons, random_state)\n",
    "X = simulate_neurons(A, timesteps)\n",
    "\n",
    "\n",
    "# Uncomment below to test your function\n",
    "# V = get_regression_estimate(X, neuron_idx)\n",
    "\n",
    "#print(\"Regression: correlation of estimated connectivity with true connectivity: {:.3f}\".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))\n",
    "\n",
    "#print(\"Lagged correlation of estimated connectivity with true connectivity: {:.3f}\".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab_type": "code",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "def get_regression_estimate(X, neuron_idx):\n",
    "    \"\"\"\n",
    "    Estimates the connectivity matrix using lasso regression.\n",
    "\n",
    "    Args:\n",
    "        X (np.ndarray): our simulated system of shape (n_neurons, timesteps)\n",
    "        neuron_idx (int):  a neuron index to compute connectivity for\n",
    "\n",
    "    Returns:\n",
    "        V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).\n",
    "                        if neuron_idx is specified, V is of shape (n_neurons,).\n",
    "    \"\"\"\n",
    "    # Extract Y and W as defined above\n",
    "    W = X[:, :-1].transpose()\n",
    "    Y = X[[neuron_idx], 1:].transpose()\n",
    "\n",
    "    # Apply inverse sigmoid transformation\n",
    "    Y = logit(Y)\n",
    "\n",
    "    # Initialize regression model with no intercept and alpha=0.01\n",
    "    regression = Lasso(fit_intercept=False, alpha=0.01)\n",
    "\n",
    "    # Fit regression to the data\n",
    "    regression.fit(W, Y)\n",
    "\n",
    "    V = regression.coef_\n",
    "\n",
    "    return V\n",
    "\n",
    "# Parameters\n",
    "n_neurons = 50  # the size of our system\n",
    "timesteps = 10000  # the number of timesteps to take\n",
    "random_state = 42\n",
    "neuron_idx = 1\n",
    "\n",
    "A = create_connectivity(n_neurons, random_state)\n",
    "X = simulate_neurons(A, timesteps)\n",
    "\n",
    "\n",
    "# Uncomment below to test your function\n",
    "V = get_regression_estimate(X, neuron_idx)\n",
    "\n",
    "print(\"Regression: correlation of estimated connectivity with true connectivity: {:.3f}\".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))\n",
    "\n",
    "print(\"Lagged correlation of estimated connectivity with true connectivity: {:.3f}\".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "You should find that using regression, our estimated connectivity matrix has a correlation of 0.865 with the true connectivity matrix. With correlation, our estimated connectivity matrix has a correlation of 0.703 with the true connectivity matrix.\n",
    "\n",
    "We can see from these numbers that multiple regression is better than simple correlation for estimating connectivity."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "You may also try Linear regression without Lasso regularization, `regression = LinearRegression(fit_intercept=False)`, and see how it effects the performance as well as fitted weights."
   ]
  },
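  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "For instance (a sketch reusing `X`, `A`, and `neuron_idx` from the exercise above; `ols` and `V_ols` are our own names), ordinary least squares fits the same model, but the estimated weights stay dense instead of being pushed to exactly zero:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# Sketch: the same regression as above, but with ordinary least squares\n",
    "W = X[:, :-1].transpose()\n",
    "Y = logit(X[[neuron_idx], 1:].transpose())\n",
    "\n",
    "ols = LinearRegression(fit_intercept=False)\n",
    "ols.fit(W, Y)\n",
    "V_ols = ols.coef_.flatten()\n",
    "\n",
    "print(\"OLS: correlation of estimated connectivity with true connectivity: {:.3f}\".format(\n",
    "    np.corrcoef(A[neuron_idx, :], V_ols)[1, 0]))\n",
    "# Unlike lasso, OLS almost never yields exactly-zero weights\n",
    "print(\"Fraction of exactly-zero weights: {:.3f}\".format(np.mean(V_ols == 0)))"
   ]
  },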
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "---\n",
    "# Section 3: Granger Causality"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "Next, we are going to introduce another method of estimating causality, Granger causality(GC). The original paper of GC is [here](https://www.jstor.org/stable/1912791?origin=crossref&seq=1#metadata_info_tab_contents). GC is also developed from the linear regression analysis, and the basic idea goes as follows.\n",
    "\n",
    "Suppose we have two time series {$x_t$} and {$y_t$}. The first thing we can do is the auto-regression analysis for the time series {$x_t$}."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "$$\n",
    "x_{t}=\\sum_{i=1}^{m} \\hat{a}_{i} x_{t-i}+\\hat{\\varepsilon}_{t}\n",
    "$$\n",
    "where $m$ is the regression order, $\\hat{a}_{i}$ is the regression coefficient, and $\\hat{\\varepsilon}_{t}$ is the residue after auto-regression. Similarly, we can do joint regression analysis of {$x_t$} by taking the information of time series {$y_t$} into account.\n",
    "$$\n",
    "x_{t}=\\sum_{i=1}^{m} a_{i} x_{t-i}+\\sum_{j=1}^{m} b_{j} y_{t-j}+\\varepsilon_{t}.\n",
    "$$\n",
    "Here again, $m$ is the regression order, $a_i$ and $b_j$ are the regression coefficients, and $\\varepsilon_t$ is the residual of the joint regression analysis.\n",
    "\n",
    "Now comes the key step of GC analysis. Once you have the residuals from the two regressions, you can compare their variances. The GC from $y$ to $x$ in the time domain is defined as\n",
    "\n",
    "$$\n",
    "F_{y \\rightarrow x}=\\log \\frac{\\operatorname{var}\\left(\\hat{\\varepsilon}_{t}\\right)}{\\operatorname{var}\\left(\\varepsilon_{t}\\right)}\n",
    "$$\n",
    "\n",
    "Intuitively, if {$y_t$} does have a causal effect on {$x_t$}, incorporating the information in $y_t$ improves the regression and reduces the variance of the residual, i.e., $\\text{Var}(\\hat{\\varepsilon}_t)>\\text{Var}(\\varepsilon_t)$, so $F_{y\\rightarrow x}>0$.\n",
    "\n",
    "On the other hand, if {$x_t$} and {$y_t$} are completely independent, the joint regression coefficients $b_j=0$, which leads to $\\text{Var}(\\hat{\\varepsilon}_t)=\\text{Var}(\\varepsilon_t)$ and therefore $F_{y\\rightarrow x}=0$.\n",
    "\n",
    "Now, let's build up the code for estimating GC and try it on some generated data.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "### Section 3.1: Create structure array for regression analysis.\n",
    "\n",
    "Rewrite the auto-regression model in matrix form as\n",
    "$$\n",
    "\\mathbf{x}_{t}= \\mathbf{X}\\mathbf{a}+\\mathbf{\\hat{\\varepsilon}_t}\n",
    "$$\n",
    "\n",
    "where $\\mathbf{X}_{(l-m)\\times m} = \\left[\\mathbf{x}_{t-1}, \\mathbf{x}_{t-2}, \\cdots, \\mathbf{x}_{t-m}\\right]$, $l$ is the length of the time series, and $\\mathbf{a} = \\left[a_1, a_2, \\cdots, a_m\\right]^T$. Here, $\\mathbf{X}$ is the structure array that we need for the regression model. \n",
    "\n",
    "Similarly, for the joint regression case,\n",
    "\n",
    "$$\n",
    "\\mathbf{x}_{t}= \\mathbf{Z}\\mathbf{p}+\\mathbf{\\varepsilon_t}\n",
    "$$\n",
    "\n",
    "where $\\mathbf{Z}_{(l-m)\\times 2m} = \\left[\\mathbf{x}_{t-1}, \\mathbf{x}_{t-2}, \\cdots, \\mathbf{x}_{t-m}, \\mathbf{y}_{t-1}, \\mathbf{y}_{t-2}, \\cdots, \\mathbf{y}_{t-m}\\right]$, and $\\mathbf{p} = \\left[a_1, a_2, \\cdots, a_m, b_1, \\cdots, b_m\\right]^T$.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "def create_structure_array(x:np.ndarray, order:int)->np.ndarray:\n",
    "    '''\n",
    "    Prepare structure array for regression analysis.\n",
    "\n",
    "    Args:\n",
    "    x         : original time series\n",
    "    order     : regression order\n",
    "\n",
    "    Return:\n",
    "    x_array   : structure array with shape (len(x)-order) by (order).\n",
    "\n",
    "    '''\n",
    "    N = len(x) - order\n",
    "    x_array = np.zeros((N, order))\n",
    "\n",
    "    ############################################################################\n",
    "    ## TODO: Insert your code here to create the structure array.\n",
    "    ## Fill in function and remove\n",
    "    raise NotImplementedError(\"Please complete the exercise\")\n",
    "    ############################################################################\n",
    "\n",
    "    for i in range(order):\n",
    "        x_array[:, i] = ...\n",
    "\n",
    "    return x_array\n",
    "\n",
    "\n",
    "# Uncomment below to test create_structure_array\n",
    "# np.random.seed(10)\n",
    "# test_X = np.random.randn(10)\n",
    "# X_array = create_structure_array(test_X, 3)\n",
    "# print(X_array)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "def create_structure_array(x:np.ndarray, order:int)->np.ndarray:\n",
    "    '''\n",
    "    Prepare structure array for regression analysis.\n",
    "\n",
    "    Args:\n",
    "    x         : original time series\n",
    "    order     : regression order\n",
    "\n",
    "    Return:\n",
    "    x_array   : structure array with shape (len(x)-order) by (order).\n",
    "\n",
    "    '''\n",
    "    N = len(x) - order\n",
    "    x_array = np.zeros((N, order))\n",
    "\n",
    "    for i in range(order):\n",
    "        x_array[:, i] = x[-i-1-N:-i-1]  # column i holds lag i+1, i.e., x[order-1-i : len(x)-1-i]\n",
    "\n",
    "    return x_array\n",
    "\n",
    "\n",
    "# Uncomment below to test create_structure_array\n",
    "np.random.seed(10)\n",
    "test_X = np.random.randn(10)\n",
    "X_array = create_structure_array(test_X, 3)\n",
    "print(X_array)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "**Expected Outputs**\n",
    "```\n",
    "[[-1.54540029  0.71527897  1.3315865 ]\n",
    " [-0.00838385 -1.54540029  0.71527897]\n",
    " [ 0.62133597 -0.00838385 -1.54540029]\n",
    " [-0.72008556  0.62133597 -0.00838385]\n",
    " [ 0.26551159 -0.72008556  0.62133597]\n",
    " [ 0.10854853  0.26551159 -0.72008556]\n",
    " [ 0.00429143  0.10854853  0.26551159]]\n",
    "```"
   ]
  },
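  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "As a quick cross-check (an alternative construction, not part of the exercise; it requires NumPy ≥ 1.20), the same lagged structure array can be built with `numpy.lib.stride_tricks.sliding_window_view`: take all windows of length `order` over `x[:-1]` (the final sample is never a regressor) and reverse the columns so that column $i$ holds lag $i+1$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# Alternative construction of the structure array (sanity check only)\n",
    "import numpy as np\n",
    "from numpy.lib.stride_tricks import sliding_window_view\n",
    "\n",
    "np.random.seed(10)\n",
    "x = np.random.randn(10)\n",
    "order = 3\n",
    "\n",
    "# windows of length `order` over x[:-1], columns reversed so that\n",
    "# column i holds lag i+1 (x_{t-1}, x_{t-2}, ..., x_{t-m})\n",
    "X_alt = sliding_window_view(x[:-1], order)[:, ::-1]\n",
    "print(X_alt)  # should match the expected output of create_structure_array"
   ]
  },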
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "### Section 3.2: Solve regression problem\n",
    "After preparing the structure array, we need to solve the regression problem. From the matrix form of the regression model, we can treat it directly as a least-squares optimization problem:\n",
    "\n",
    "$$\n",
    "\\min_\\mathbf{a} \\left\\lVert\\mathbf{x}_{t} - \\mathbf{X}\\mathbf{a}\\right\\rVert_2\n",
    "$$\n",
    "\n",
    "Analytically, the solution is given by the normal equations:\n",
    "$$\n",
    "\\hat{\\mathbf{a}}  = \\left(\\mathbf{X}^T\\mathbf{X}\\right)^{-1} \\mathbf{X}^T\\mathbf{x}_t\n",
    "$$\n",
    "\n",
    "Numerically, to bypass the explicit matrix inverse for large matrices, we use NumPy's least-squares solver, `numpy.linalg.lstsq()`, which uses an SVD-based routine to solve the least-squares problem efficiently.\n"
   ]
  },
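  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "Here is a minimal sketch (illustrative only, not part of the exercise) showing that `np.linalg.lstsq` returns the same coefficients as the normal-equation formula $\\hat{\\mathbf{a}} = \\left(\\mathbf{X}^T\\mathbf{X}\\right)^{-1}\\mathbf{X}^T\\mathbf{x}_t$ on a small, well-conditioned problem."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# Least squares via lstsq vs. the normal equations (illustrative sketch)\n",
    "import numpy as np\n",
    "\n",
    "np.random.seed(0)\n",
    "X = np.random.randn(50, 3)   # a small structure array (50 samples, order 3)\n",
    "x_t = np.random.randn(50)    # target values\n",
    "\n",
    "a_lstsq = np.linalg.lstsq(X, x_t, rcond=None)[0]\n",
    "a_normal = np.linalg.inv(X.T @ X) @ X.T @ x_t\n",
    "\n",
    "print(np.allclose(a_lstsq, a_normal))  # the two solutions agree"
   ]
  },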
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "def auto_reg(x, order)->np.ndarray:\n",
    "    '''\n",
    "    Auto regression analysis of time series.\n",
    "\n",
    "    Args:\n",
    "    x         : original time series\n",
    "    order     : regression order\n",
    "\n",
    "    Return:\n",
    "    res       : residual vector\n",
    "\n",
    "    '''\n",
    "    reg_array = create_structure_array(x, order)\n",
    "\n",
    "    ############################################################################\n",
    "    ## TODO: Insert your code here to complete auto-regression.\n",
    "    ## Hint: using np.linalg.lstsq() to solve least square problem\n",
    "    ## Fill in function and remove\n",
    "    raise NotImplementedError(\"Please complete the exercise\")\n",
    "    ############################################################################\n",
    "\n",
    "    res = ...\n",
    "\n",
    "    return res\n",
    "\n",
    "# Uncomment below to test auto_reg()\n",
    "# np.random.seed(10)\n",
    "# test_x = np.random.randn(10)\n",
    "# print(auto_reg(test_x, 3))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "def auto_reg(x, order)->np.ndarray:\n",
    "    '''\n",
    "    Auto regression analysis of time series.\n",
    "\n",
    "    Args:\n",
    "    x         : original time series\n",
    "    order     : regression order\n",
    "\n",
    "    Return:\n",
    "    res       : residual vector\n",
    "\n",
    "    '''\n",
    "    reg_array = create_structure_array(x, order)\n",
    "\n",
    "    coef = np.linalg.lstsq(reg_array, x[order:], rcond=None)[0]\n",
    "    res = x[order:] - reg_array @ coef\n",
    "\n",
    "    return res\n",
    "\n",
    "# Uncomment below to test auto_reg()\n",
    "np.random.seed(10)\n",
    "test_x = np.random.randn(10)\n",
    "print(auto_reg(test_x, 3))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "**Expected Outputs**\n",
    "```\n",
    "[-0.30409876  0.05363705 -0.40523902  0.28590032 -0.12029773  0.19177217 -0.16742596]\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "def joint_reg(x, y, order)->np.ndarray:\n",
    "    '''\n",
    "    Joint regression analysis of time series.\n",
    "\n",
    "    Args:\n",
    "    x         : original time series 1\n",
    "    y         : original time series 2\n",
    "    order     : regression order\n",
    "\n",
    "    Return:\n",
    "    res       : residual vector\n",
    "\n",
    "    '''\n",
    "    reg_array_x = create_structure_array(x, order)\n",
    "    reg_array_y = create_structure_array(y, order)\n",
    "\n",
    "    ############################################################################\n",
    "    ## TODO: Insert your code here to complete joint-regression.\n",
    "    ## Hint: using np.linalg.lstsq() to solve least square problem\n",
    "    ## Fill in function and remove\n",
    "    raise NotImplementedError(\"Please complete the exercise\")\n",
    "    ############################################################################\n",
    "\n",
    "    res = ...\n",
    "\n",
    "    return res\n",
    "\n",
    "# Uncomment below to test joint_reg()\n",
    "# np.random.seed(10)\n",
    "# test_x = np.random.randn(10)\n",
    "# test_y = np.random.randn(10)\n",
    "# print(joint_reg(test_x, test_y, 3))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "def joint_reg(x, y, order)->np.ndarray:\n",
    "    '''\n",
    "    Joint regression analysis of time series.\n",
    "\n",
    "    Args:\n",
    "    x         : original time series 1\n",
    "    y         : original time series 2\n",
    "    order     : regression order\n",
    "\n",
    "    Return:\n",
    "    res       : residual vector\n",
    "\n",
    "    '''\n",
    "    reg_array_x = create_structure_array(x, order)\n",
    "    reg_array_y = create_structure_array(y, order)\n",
    "\n",
    "    reg_array = np.hstack((reg_array_x, reg_array_y))\n",
    "    coef = np.linalg.lstsq(reg_array, x[order:], rcond=None)[0]\n",
    "    res = x[order:] - reg_array @ coef\n",
    "\n",
    "    return res\n",
    "\n",
    "# Uncomment below to test joint_reg()\n",
    "np.random.seed(10)\n",
    "test_x = np.random.randn(10)\n",
    "test_y = np.random.randn(10)\n",
    "print(joint_reg(test_x, test_y, 3))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "**Expected Outputs**\n",
    "```\n",
    "[-0.00748994  0.00186204 -0.0087194  -0.0129643  -0.04156986 -0.03974958 -0.02913724]\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "### Section 3.3: Calculate GC\n",
    "\n",
    "Using the regression model defined above to calculate GC value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "def GC(x, y, order):\n",
    "    '''\n",
    "    Granger Causality from y to x\n",
    "\n",
    "    Args:\n",
    "    x         : original time series (dest)\n",
    "    y         : original time series (source)\n",
    "    order     : regression order\n",
    "\n",
    "    Return:\n",
    "    GC_value  : Granger causality value from y to x\n",
    "\n",
    "    '''\n",
    "\n",
    "    ############################################################################\n",
    "    ## TODO: Insert your code here to complete the GC calculation.\n",
    "    ## Fill in function and remove\n",
    "    raise NotImplementedError(\"Please complete the exercise\")\n",
    "    ############################################################################\n",
    "\n",
    "    GC_value = ...\n",
    "\n",
    "    return GC_value\n",
    "\n",
    "# Uncomment to test our GC code with 2-node regression model\n",
    "# np.random.seed(10)\n",
    "# test_X = np.random.randn(100000)\n",
    "# test_Y = np.random.randn(100000)\n",
    "# test_Y[1:] += test_X[:-1]*0.5\n",
    "# print(f'GC Y->X: {GC(test_X, test_Y, 100):.3e}')\n",
    "# print(f'GC X->Y: {GC(test_Y, test_X, 100):.3e}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# to_remove solution\n",
    "def GC(x, y, order):\n",
    "    '''\n",
    "    Granger Causality from y to x\n",
    "\n",
    "    Args:\n",
    "    x         : original time series (dest)\n",
    "    y         : original time series (source)\n",
    "    order     : regression order\n",
    "\n",
    "    Return:\n",
    "    GC_value  : Granger causality value from y to x\n",
    "\n",
    "    '''\n",
    "\n",
    "    res_auto = auto_reg(x, order)\n",
    "    res_joint = joint_reg(x, y, order)\n",
    "    GC_value = 2.*np.log(res_auto.std()/res_joint.std())\n",
    "\n",
    "    return GC_value\n",
    "\n",
    "# Uncomment to test our GC code with 2-node regression model\n",
    "np.random.seed(10)\n",
    "test_X = np.random.randn(100000)\n",
    "test_Y = np.random.randn(100000)\n",
    "test_Y[1:] += test_X[:-1]*0.5\n",
    "print(f'GC Y->X: {GC(test_X, test_Y, 100):.3e}')\n",
    "print(f'GC X->Y: {GC(test_Y, test_X, 100):.3e}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "**Expected Outputs** (approximate; the exact values depend on the random state):\n",
    "```\n",
    "GC Y->X: 8.761e-04\n",
    "GC X->Y: 2.251e-01\n",
    "```\n",
    "GC X->Y should be close to the theoretical value $\\ln(1.25)\\approx 0.223$ (the coupling adds variance $0.25$ that only the joint regression explains), while GC Y->X should be close to zero."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "### Section 3.4: Determine the significance level of the GC value\n",
    "\n",
    "Suppose the time series $x_t$ and $y_t$ have no causal relationship. Then, theoretically, the GC values between them satisfy $F_{x\\rightarrow y}$ = $F_{y\\rightarrow x}$ = 0. However, due to the finite data length and finite regression order, the empirical GC values $\\tilde{F}_{x\\rightarrow y}$ and $\\tilde{F}_{y\\rightarrow x}$ asymptotically follow a $\\chi^2$ distribution, \n",
    "\n",
    "$$l\\tilde{F}_{x\\rightarrow y},l\\tilde{F}_{y\\rightarrow x}\\sim \\chi^2(m),$$\n",
    "\n",
    "where $l$ is the data length and $m$ is the regression order.\n",
    "\n",
    "Using this as the null hypothesis, we can calculate the significance level (threshold) for rejecting the null hypothesis at a given $p$-value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "def GC_SI(p, order, length):\n",
    "    '''\n",
    "    Significance level of the GC value.\n",
    "\n",
    "    Args:\n",
    "    p       : p-value\n",
    "    order   : regression order (degrees of freedom of the chi^2 distribution)\n",
    "    length  : length of data\n",
    "\n",
    "    Return:\n",
    "    significance level (threshold) under the null hypothesis\n",
    "        (GC between two independent time series)\n",
    "\n",
    "    '''\n",
    "    return chi2.ppf(1-p, order)/length"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "length = 1e5\n",
    "order = 10\n",
    "p = 0.0001\n",
    "print(f'GC threshold for p={p:.1e}, length={length:.1e}, order={order:d}: \\\n",
    "      {GC_SI(p, order, length):.3e}')"
   ]
  },
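  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "As a rough Monte-Carlo check of the null distribution (an illustrative, self-contained sketch that re-implements the regression steps above, not part of the tutorial pipeline), roughly a fraction $p$ of GC values computed between independent time series should exceed the threshold `chi2.ppf(1-p, order)/length`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {}
   },
   "outputs": [],
   "source": [
    "# Monte-Carlo check of the chi^2 null distribution (illustrative sketch)\n",
    "import numpy as np\n",
    "from scipy.stats import chi2\n",
    "\n",
    "def gc_null_rejection_rate(n_trials=200, length=2000, order=5, p=0.05, seed=0):\n",
    "    rng = np.random.default_rng(seed)\n",
    "    thresh = chi2.ppf(1 - p, order) / length\n",
    "    n_reject = 0\n",
    "    for _ in range(n_trials):\n",
    "        x = rng.standard_normal(length)\n",
    "        y = rng.standard_normal(length)\n",
    "        # lagged structure arrays for auto and joint regression\n",
    "        X = np.column_stack([x[order-1-i:length-1-i] for i in range(order)])\n",
    "        Y = np.column_stack([y[order-1-i:length-1-i] for i in range(order)])\n",
    "        Z = np.hstack((X, Y))\n",
    "        res_auto = x[order:] - X @ np.linalg.lstsq(X, x[order:], rcond=None)[0]\n",
    "        res_joint = x[order:] - Z @ np.linalg.lstsq(Z, x[order:], rcond=None)[0]\n",
    "        F = np.log(res_auto.var() / res_joint.var())\n",
    "        n_reject += F > thresh\n",
    "    return n_reject / n_trials\n",
    "\n",
    "print(gc_null_rejection_rate())  # should be close to the nominal p = 0.05"
   ]
  },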
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {}
   },
   "source": [
    "So far, we've built basic code for GC estimation. In the next tutorial, we will compare its inference performance with a model-free inference method.\n",
    "\n",
    "__Resources:__ Matlab-based fast conditional GC-estimator: [GC_clean](https://github.com/bewantbe/GC_clean)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "---\n",
    "# Summary\n",
    "\n",
    "In this tutorial, we explored:\n",
    "\n",
    "1) Using regression to estimate causality\n",
    "\n",
    "2) Extending the idea of regression into Granger causality methods\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "---\n",
    "# (Bonus) Section 4: Omitted Variable Bias\n",
    "\n",
    "If we are unable to observe the entire system, **omitted variable bias** becomes a problem. If we don't have access to all the neurons, and therefore can't control for them, can we still estimate the causal effect accurately?\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "## Section 4.1: Visualizing subsets of the connectivity matrix\n",
    "\n",
    "We first visualize different subsets of the connectivity matrix when we observe 75% of the neurons vs 25%.\n",
    "\n",
    "Recall the meaning of entries in our connectivity matrix: $A[i,j] = 1$ means a connection **from** neuron $i$ **to** neuron $j$ with strength $1$.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab_type": "code",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "#@markdown Execute this cell to visualize subsets of connectivity matrix\n",
    "\n",
    "# Run this cell to visualize the subsets of variables we observe\n",
    "n_neurons = 25\n",
    "A = create_connectivity(n_neurons)\n",
    "\n",
    "fig, axs = plt.subplots(2, 2, figsize=(10, 10))\n",
    "ratio_observed = [0.75, 0.25]  # the proportion of neurons observed in our system\n",
    "\n",
    "for i, ratio in enumerate(ratio_observed):\n",
    "    sel_idx = int(n_neurons * ratio)\n",
    "\n",
    "    offset = np.zeros((n_neurons, n_neurons))\n",
    "    axs[i,1].title.set_text(\"{}% neurons observed\".format(int(ratio * 100)))\n",
    "    offset[:sel_idx, :sel_idx] =  1 + A[:sel_idx, :sel_idx]\n",
    "    im = axs[i, 1].imshow(offset, cmap=\"coolwarm\", vmin=0, vmax=A.max() + 1)\n",
    "    axs[i, 1].set_xlabel(\"Connectivity from\")\n",
    "    axs[i, 1].set_ylabel(\"Connectivity to\")\n",
    "    plt.colorbar(im, ax=axs[i, 1], fraction=0.046, pad=0.04)\n",
    "    see_neurons(A,axs[i, 0],ratio)\n",
    "\n",
    "plt.suptitle(\"Visualizing subsets of the connectivity matrix\", y = 1.05)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "## Section 4.2: Effects of partial observability"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "### Interactive Demo: Regression performance as a function of the number of observed neurons\n",
    "\n",
    "We will first change the number of observed neurons in the network and inspect the resulting estimates of connectivity in this interactive demo. How does the estimated connectivity differ?\n",
    "\n",
    "**Note:** the plots will take a moment or so to update after moving the slider."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab_type": "code",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "#@markdown Execute this cell to enable demo\n",
    "n_neurons = 50\n",
    "A = create_connectivity(n_neurons, random_state=42)\n",
    "X = simulate_neurons(A, 4000, random_state=42)\n",
    "\n",
    "reg_args = {\n",
    "    \"fit_intercept\": False,\n",
    "    \"alpha\": 0.001\n",
    "}\n",
    "\n",
    "@widgets.interact\n",
    "def plot_observed(n_observed=(5, 45, 5)):\n",
    "  to_neuron = 0\n",
    "  fig, axs = plt.subplots(1, 3, figsize=(15, 5))\n",
    "  sel_idx = n_observed\n",
    "  ratio = (n_observed) / n_neurons\n",
    "  offset = np.zeros((n_neurons, n_neurons))\n",
    "  axs[0].title.set_text(\"{}% neurons observed\".format(int(ratio * 100)))\n",
    "  offset[:sel_idx, :sel_idx] =  1 + A[:sel_idx, :sel_idx]\n",
    "  im = axs[1].imshow(offset, cmap=\"coolwarm\", vmin=0, vmax=A.max() + 1)\n",
    "  plt.colorbar(im, ax=axs[1], fraction=0.046, pad=0.04)\n",
    "\n",
    "  see_neurons(A,axs[0], ratio, False)\n",
    "  corr, R =  get_regression_corr_full_connectivity(n_neurons,\n",
    "                                  A,\n",
    "                                  X,\n",
    "                                  ratio,\n",
    "                                  reg_args)\n",
    "\n",
    "  #rect = patches.Rectangle((-.5,to_neuron-.5),n_observed,1,linewidth=2,edgecolor='k',facecolor='none')\n",
    "  #axs[1].add_patch(rect)\n",
    "  big_R = np.zeros(A.shape)\n",
    "  big_R[:sel_idx, :sel_idx] =  1 + R\n",
    "  #big_R[to_neuron, :sel_idx] =  1 + R\n",
    "  im = axs[2].imshow(big_R, cmap=\"coolwarm\", vmin=0, vmax=A.max() + 1)\n",
    "  plt.colorbar(im, ax=axs[2],fraction=0.046, pad=0.04)\n",
    "  c = 'w' if n_observed<(n_neurons-3) else 'k'\n",
    "  axs[2].text(0,n_observed+3,\"Correlation : {:.2f}\".format(corr), color=c, size=15)\n",
    "  #axs[2].axis(\"off\")\n",
    "  axs[1].title.set_text(\"True connectivity\")\n",
    "  axs[1].set_xlabel(\"Connectivity from\")\n",
    "  axs[1].set_ylabel(\"Connectivity to\")\n",
    "\n",
    "  axs[2].title.set_text(\"Estimated connectivity\")\n",
    "  axs[2].set_xlabel(\"Connectivity from\")\n",
    "  #axs[2].set_ylabel(\"Connectivity to\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {}
   },
   "source": [
    "Next, we will inspect a plot of the correlation between true and estimated connectivity matrices vs the percent of neurons observed over multiple trials.\n",
    "What is the relationship that you see between performance and the number of neurons observed?\n",
    "\n",
    "**Note:** the cell below will take about 25-30 seconds to run.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "colab_type": "code",
    "execution": {}
   },
   "outputs": [],
   "source": [
    "#@title\n",
    "#@markdown Plot correlation vs. subsampling\n",
    "import warnings\n",
    "warnings.filterwarnings('ignore')\n",
    "\n",
    "# we'll simulate many systems for various ratios of observed neurons\n",
    "n_neurons = 50\n",
    "timesteps = 5000\n",
    "ratio_observed = [1, 0.75, 0.5, .25, .12]  # the proportion of neurons observed in our system\n",
    "n_trials = 3  # run it this many times to get variability in our results\n",
    "\n",
    "reg_args = {\n",
    "    \"fit_intercept\": False,\n",
    "    \"alpha\": 0.001\n",
    "}\n",
    "\n",
    "corr_data = np.zeros((n_trials, len(ratio_observed)))\n",
    "for trial in range(n_trials):\n",
    "\n",
    "  A = create_connectivity(n_neurons, random_state=trial)\n",
    "  X = simulate_neurons(A, timesteps)\n",
    "  print(\"simulating trial {} of {}\".format(trial + 1, n_trials))\n",
    "\n",
    "\n",
    "  for j, ratio in enumerate(ratio_observed):\n",
    "      result,_ = get_regression_corr_full_connectivity(n_neurons,\n",
    "                                    A,\n",
    "                                    X,\n",
    "                                    ratio,\n",
    "                                    reg_args)\n",
    "      corr_data[trial, j] = result\n",
    "\n",
    "corr_mean = np.nanmean(corr_data, axis=0)\n",
    "corr_std = np.nanstd(corr_data, axis=0)\n",
    "\n",
    "plt.plot(np.asarray(ratio_observed) * 100, corr_mean)\n",
    "plt.fill_between(np.asarray(ratio_observed) * 100,\n",
    "                    corr_mean - corr_std,\n",
    "                    corr_mean + corr_std,\n",
    "                    alpha=.2)\n",
    "plt.xlim([100, 10])\n",
    "plt.xlabel(\"Percent of neurons observed\")\n",
    "plt.ylabel(\"connectivity matrices correlation\")\n",
    "plt.title(\"Performance of regression as a function of the number of neurons observed\");"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "include_colab_link": true,
   "name": "W6_Tutorial1",
   "provenance": [],
   "toc_visible": true
  },
  "interpreter": {
   "hash": "9516f62da91337f10c2adbe814d9c63a4b08f8271333386358218606edb781e3"
  },
  "kernel": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "kernelspec": {
   "display_name": "Python 3.7.11 64-bit ('pw3': conda)",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.11"
  },
  "varInspector": {
   "cols": {
    "lenName": 16,
    "lenType": 16,
    "lenVar": 40
   },
   "kernels_config": {
    "python": {
     "delete_cmd_postfix": "",
     "delete_cmd_prefix": "del ",
     "library": "var_list.py",
     "varRefreshCmd": "print(var_dic_list())"
    },
    "r": {
     "delete_cmd_postfix": ") ",
     "delete_cmd_prefix": "rm(",
     "library": "var_list.r",
     "varRefreshCmd": "cat(var_dic_list()) "
    }
   },
   "types_to_exclude": [
    "module",
    "function",
    "builtin_function_or_method",
    "instance",
    "_Feature"
   ],
   "window_display": true
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
