{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "view-in-github"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W0D4_Calculus/student/W0D4_Tutorial1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# NMA Refresher Course: Week 0, Day 4, Tutorial 1\n",
    "## Calculus: Basics of Differential and Integral Calculus \n",
    "__Content creators:__ John S Butler, Arvind Kumar with help from Ella Batty\n",
    "\n",
    "__Content reviewers:__  ??\n",
    "\n",
    "__Production editors:__ Matthew McCann, Manisha Sinha"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Tutorial Objectives\n",
    "\n",
    "In this tutorial, we will cover aspects of differential calculus that will be used frequently in the main NMA course. We assume that you have some familiarity with differential calculus but may be a bit rusty or out of practice. Specifically, the objectives of this tutorial are to:\n",
    "\n",
    "*   Get an intuitive understanding of the derivative and integration operations\n",
    "*   Learn to calculate the derivatives of 1- and 2-dimensional functions/signals numerically\n",
    "*   Become familiar with the concept of the neuron transfer function in 1 and 2 dimensions\n",
    "*   Become familiar with the idea of numerical integration using the Riemann sum\n",
    "*   Learn about the notion of an eigenfunction\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:45:55.629541Z",
     "iopub.status.busy": "2021-06-02T02:45:55.626466Z",
     "iopub.status.idle": "2021-06-02T02:45:55.741921Z",
     "shell.execute_reply": "2021-06-02T02:45:55.741307Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 1: Why do we care about calculus?\n",
    "# Insert the ID of the corresponding youtube video\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"781o_1hRtpk\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T02:45:55.748215Z",
     "iopub.status.busy": "2021-06-02T02:45:55.747654Z",
     "iopub.status.idle": "2021-06-02T02:46:10.312567Z",
     "shell.execute_reply": "2021-06-02T02:46:10.313063Z"
    }
   },
   "outputs": [],
   "source": [
    "# Imports\n",
    "!pip install sympy --quiet\n",
    "\n",
    "import numpy as np\n",
    "import scipy.optimize as opt       # import root-finding algorithm\n",
    "import sympy as sp                 # Python toolbox for symbolic maths\n",
    "import matplotlib.pyplot as plt\n",
    "from mpl_toolkits.mplot3d import Axes3D  # Toolbox for rendering 3D figures"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:10.321302Z",
     "iopub.status.busy": "2021-06-02T02:46:10.320712Z",
     "iopub.status.idle": "2021-06-02T02:46:10.475680Z",
     "shell.execute_reply": "2021-06-02T02:46:10.475117Z"
    }
   },
   "outputs": [],
   "source": [
    "# @title Figure Settings\n",
    "import ipywidgets as widgets  # interactive display\n",
    "from ipywidgets import interact\n",
    "%config InlineBackend.figure_format = 'retina'\n",
    "# use NMA plot style\n",
    "plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")\n",
    "my_layout = widgets.Layout()\n",
    "\n",
    "fig_w, fig_h = 12, 4.5\n",
    "my_fontsize = 16\n",
    "my_params = {'axes.labelsize': my_fontsize,\n",
    "          'axes.titlesize': my_fontsize,\n",
    "          'figure.figsize': [fig_w, fig_h],\n",
    "          'font.size': my_fontsize,\n",
    "          'legend.fontsize': my_fontsize-4,\n",
    "          'lines.markersize': 8.,\n",
    "          'lines.linewidth': 2.,\n",
    "          'xtick.labelsize': my_fontsize-2,\n",
    "          'ytick.labelsize': my_fontsize-2}\n",
    "\n",
    "plt.rcParams.update(my_params)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:10.498192Z",
     "iopub.status.busy": "2021-06-02T02:46:10.478285Z",
     "iopub.status.idle": "2021-06-02T02:46:10.501769Z",
     "shell.execute_reply": "2021-06-02T02:46:10.501165Z"
    }
   },
   "outputs": [],
   "source": [
    "# @title Plotting Functions\n",
    "def move_sympyplot_to_axes(p, ax):\n",
    "    backend = p.backend(p)\n",
    "    backend.ax = ax\n",
    "    backend.process_series()\n",
    "    backend.ax.spines['right'].set_color('none')\n",
    "    backend.ax.spines['bottom'].set_position('zero')\n",
    "    backend.ax.spines['top'].set_color('none')\n",
    "    plt.close(backend.fig)\n",
    "\n",
    "def plot_functions(function, show_derivative, show_integral):\n",
    "\n",
    "  # For sympy we first define our symbolic variable\n",
    "  x, y, z, t, f = sp.symbols('x y z t f')\n",
    "\n",
    "  # We define our function\n",
    "  if function == 'Linear':\n",
    "    f = -2*t\n",
    "    name = r'$-2t$'\n",
    "  elif function == 'Parabolic':\n",
    "    f =  t**2\n",
    "    name = r'$t^2$'\n",
    "  elif function == 'Exponential':\n",
    "    f =  sp.exp(t)\n",
    "    name = r'$e^t$'\n",
    "  elif function == 'Sine':\n",
    "    f =  sp.sin(t)\n",
    "    name = r'$\\sin(t)$'\n",
    "  elif function == 'Sigmoid':\n",
    "    f = 1/(1 + sp.exp(-(t-5)))\n",
    "    name = r'$\\frac{1}{1+e^{-(t-5)}}$'\n",
    "\n",
    "  if show_derivative and not show_integral:\n",
    "    # Calculate the derivative of sin(t) as a function of t\n",
    "    diff_f = sp.diff(f)\n",
    "    print('Derivative of', f, 'is ', diff_f)\n",
    "\n",
    "    p1 = sp.plot(f, diff_f, show=False)\n",
    "    p1[0].line_color='r'\n",
    "    p1[1].line_color='b'\n",
    "    p1[0].label='Function'\n",
    "    p1[1].label='Derivative'\n",
    "    p1.legend=True\n",
    "    p1.title = 'Function = ' + name + '\\n'\n",
    "    p1.show()\n",
    "  elif show_integral and not show_derivative:\n",
    "\n",
    "    int_f = sp.integrate(f)\n",
    "    int_f = int_f - int_f.subs(t, -10)\n",
    "    print('Integral of', f, 'is ', int_f)\n",
    "\n",
    "\n",
    "    p1 = sp.plot(f, int_f, show=False)\n",
    "    p1[0].line_color='r'\n",
    "    p1[1].line_color='g'\n",
    "    p1[0].label='Function'\n",
    "    p1[1].label='Integral'\n",
    "    p1.legend=True\n",
    "    p1.title = 'Function = ' + name + '\\n'\n",
    "    p1.show()\n",
    "\n",
    "\n",
    "  elif show_integral and show_derivative:\n",
    "\n",
    "    diff_f = sp.diff(f)\n",
    "    print('Derivative of', f, 'is ', diff_f)\n",
    "\n",
    "    int_f = sp.integrate(f)\n",
    "    int_f = int_f - int_f.subs(t, -10)\n",
    "    print('Integral of', f, 'is ', int_f)\n",
    "\n",
    "    p1 = sp.plot(f, diff_f, int_f, show=False)\n",
    "    p1[0].line_color='r'\n",
    "    p1[1].line_color='b'\n",
    "    p1[2].line_color='g'\n",
    "    p1[0].label='Function'\n",
    "    p1[1].label='Derivative'\n",
    "    p1[2].label='Integral'\n",
    "    p1.legend=True\n",
    "    p1.title = 'Function = ' + name + '\\n'\n",
    "    p1.show()\n",
    "\n",
    "  else:\n",
    "\n",
    "    p1 = sp.plot(f, show=False)\n",
    "    p1[0].line_color='r'\n",
    "    p1[0].label='Function'\n",
    "    p1.legend=True\n",
    "    p1.title = 'Function = ' + name + '\\n'\n",
    "    p1.show()\n",
    "\n",
    "def plot_alpha_func(t, f, df_dt):\n",
    "\n",
    "  plt.figure()\n",
    "  plt.subplot(2,1,1)\n",
    "  plt.plot(t, f, 'r', label='Alpha function')\n",
    "  plt.xlabel('Time (au)')\n",
    "  plt.ylabel('Voltage')\n",
    "  plt.title('Alpha function (f(t))')\n",
    "  #plt.legend()\n",
    "\n",
    "  plt.subplot(2,1,2)\n",
    "  plt.plot(t, df_dt, 'b', label='Derivative')\n",
    "  plt.title('Derivative of alpha function')\n",
    "  plt.xlabel('Time (au)')\n",
    "  plt.ylabel('df/dt')\n",
    "  #plt.legend()\n",
    "\n",
    "def plot_rate_and_gain(I, rate, gain):\n",
    "\n",
    "    plt.figure()\n",
    "    plt.subplot(1,2,1)\n",
    "    plt.plot(I,rate)\n",
    "    plt.xlabel('Injected current (au)')\n",
    "    plt.ylabel('Output firing rate (normalized)')\n",
    "    plt.title('Transfer function')\n",
    "\n",
    "    plt.subplot(1,2,2)\n",
    "    plt.plot(I[0:-1], gain)\n",
    "    plt.xlabel('Injected current (au)')\n",
    "    plt.ylabel('Gain')\n",
    "    plt.title('Gain')\n",
    "\n",
    "def plot_charge_transfer(t, PSP, numerical_integral):\n",
    "\n",
    "  fig, axes = plt.subplots(1, 2)\n",
    "\n",
    "  axes[0].plot(t, PSP)\n",
    "  axes[0].set(xlabel = 't', ylabel = 'PSP')\n",
    "\n",
    "  axes[1].plot(t, numerical_integral)\n",
    "  axes[1].set(xlabel = 't', ylabel = 'Charge Transferred')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Section 1: What is differentiation and integration?\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:10.507503Z",
     "iopub.status.busy": "2021-06-02T02:46:10.506916Z",
     "iopub.status.idle": "2021-06-02T02:46:10.590067Z",
     "shell.execute_reply": "2021-06-02T02:46:10.582193Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 2: What is differentiation and integration?\n",
    "# Insert the ID of the corresponding youtube video\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"eOyGG3m-7gA\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Calculus is a part of mathematics concerned with **continuous change**. There are two branches of calculus: differential calculus and integral calculus. Both of these concepts are useful not only in science but also in daily life; we encounter differentiation and integration everywhere.\n",
    "\n",
    "\n",
    "Differentiation of a function $f(t)$ gives you the derivative of that function $\\frac{d(f(t))}{dt}$. A derivative captures how sensitive a function is to slight changes in the input for different ranges of inputs. Geometrically, the derivative of a function at a certain input is the slope of the function at that input. For example, as you drive, the distance traveled changes continuously with time. If you take the derivative of the distance traveled with respect to time, you get the velocity of the vehicle at each point in time. The velocity tells you the rate of change of the distance traveled at different points in time. If you have slow velocity (a small derivative), the distance traveled doesn't change much for small changes in time. A high velocity (big derivative) means that the distance traveled changes a lot for small changes in time.\n",
    "\n",
    "The sign of the derivative of a function (or signal) tells us whether the signal is increasing or decreasing. For a signal that changes over time, the derivative becomes zero wherever the signal changes its direction of change (e.g., from increasing to decreasing). That is, at local minima or maxima, the slope of the signal is zero. This property is used in optimization problems, but we can also use it to find peaks in a signal.\n",
    "\n",
    "Integration can be thought of as the reverse of differentiation. If we integrate the velocity with respect to time, we can calculate the distance traveled. By integrating a function, we are basically trying to find the functions that would have the original one as their derivative. When we integrate a function, our integral will include an added unknown scalar constant, $C$.\n",
    "For example, if\n",
    "$$ g(t) = 1.5t^2 + 4t - 1, $$\n",
    "then our integral function $f(t)$ will be\n",
    "$$ f(t) = \\int g(t) dt = 0.5t^3 + 2t^2 - t + C. $$\n",
    "\n",
    "This constant exists because the derivative of a constant is 0, so we cannot know what the constant should be. This is an indefinite integral. If we compute a definite integral, that is, the integral between two limits of the input, we will not have this unknown constant, and the integral of the function will capture the area under its curve between those two limits.\n",
    "\n",
    "Some functions, when differentiated or integrated, equal a scalar times the same function. This is a similar idea to eigenvectors of a matrix being those that, when multiplied by the matrix, equal a scalar times themselves, as you saw yesterday!\n",
    "\n",
    "When \n",
    "\n",
    "\\begin{align*}\n",
    "\\frac{d(f(t))}{dt} = a\\cdot f(t),\n",
    "\\end{align*}\n",
    "\n",
    "we say that $f(t)$ is an **eigenfunction** for the derivative operator, where $a$ is a scaling factor. Similarly, when\n",
    "\n",
    "\\begin{align*}\n",
    "\\int f(t)dt = a\\cdot f(t), \n",
    "\\end{align*}\n",
    "\n",
    "we say that $f(t)$ is an **eigenfunction** for the integral operator.\n",
    "\n",
    "As you can imagine, working with eigenfunctions can make mathematical analysis easy. \n"
   ]
  },
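  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the eigenfunction idea (an illustrative sketch, not part of the original exercises), the cell below uses sympy to differentiate $e^{2t}$ and confirm that the result is $2 \\cdot e^{2t}$, i.e., a scaled copy of the function itself with scaling factor $a = 2$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: check that exp(2*t) is an eigenfunction of the\n",
    "# derivative operator, with scaling factor a = 2\n",
    "import sympy as sp  # already imported in Setup; repeated here for clarity\n",
    "\n",
    "t = sp.symbols('t')\n",
    "f = sp.exp(2 * t)\n",
    "diff_f = sp.diff(f, t)\n",
    "print('Derivative of', f, 'is', diff_f)\n",
    "print('Ratio of derivative to function:', sp.simplify(diff_f / f))  # 2"
   ]
  },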
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Interactive Demo 1: Geometrical understanding\n",
    "\n",
    "In the interactive demo below, you can pick different functions to examine in the drop down menu. You can then choose to show the derivative function and/or the integral function. \n",
    "\n",
    "For the integral, we have chosen the unknown constant $C$ so that the value of the integral function at the left x-axis limit is 0 ($f(t = -10) = 0$). The integral therefore reflects the area under the curve starting from that position.\n",
    "\n",
    "For each function:\n",
    "\n",
    "*  Examine just the function first. Discuss and predict what the derivative and integral will look like. Remember that derivative = slope of function, integral = area under curve from t = -10 to that t.\n",
    "*  Check the derivative - does it match your expectations?\n",
    "*  Check the integral - does it match your expectations?\n",
    "*  Identify whether the function is an eigenfunction for the derivative operator, an eigenfunction for the integral operator, or neither.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:10.604744Z",
     "iopub.status.busy": "2021-06-02T02:46:10.598164Z",
     "iopub.status.idle": "2021-06-02T02:46:11.148537Z",
     "shell.execute_reply": "2021-06-02T02:46:11.148027Z"
    }
   },
   "outputs": [],
   "source": [
    "# @markdown Execute this cell to enable the widget\n",
    "function_options = widgets.Dropdown(\n",
    "    options=['Linear', 'Parabolic', 'Exponential', 'Sine', 'Sigmoid'],\n",
    "    description='Function',\n",
    "    disabled=False,\n",
    ")\n",
    "\n",
    "derivative = widgets.Checkbox(\n",
    "    value=False,\n",
    "    description='Show derivative',\n",
    "    disabled=False,\n",
    "    indent=False\n",
    ")\n",
    "\n",
    "integral = widgets.Checkbox(\n",
    "    value=False,\n",
    "    description='Show integral',\n",
    "    disabled=False,\n",
    "    indent=False\n",
    ")\n",
    "\n",
    "def on_value_change(change):\n",
    "    derivative.value = False\n",
    "    integral.value = False\n",
    "\n",
    "function_options.observe(on_value_change, names='value')\n",
    "\n",
    "interact(plot_functions, function = function_options, show_derivative = derivative, show_integral = integral);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:11.154542Z",
     "iopub.status.busy": "2021-06-02T02:46:11.153204Z",
     "iopub.status.idle": "2021-06-02T02:46:11.155273Z",
     "shell.execute_reply": "2021-06-02T02:46:11.155750Z"
    }
   },
   "source": [
    "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_2ce1573c.py)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Section 2: Analytical & Numerical Differentiation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:11.161283Z",
     "iopub.status.busy": "2021-06-02T02:46:11.160522Z",
     "iopub.status.idle": "2021-06-02T02:46:11.226976Z",
     "shell.execute_reply": "2021-06-02T02:46:11.227488Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 3: Analytical & Numerical Differentiation\n",
    "# Insert the ID of the corresponding youtube video\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"C7U8zgI5rdk\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "In this section, we will delve into how we actually find the derivative of a function, both analytically and numerically.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 2.1: Analytical Differentiation\n",
    "\n",
    "When we find the derivative analytically, we are finding the exact formula for the derivative function. \n",
    "\n",
    "To do this, instead of having to do some fancy math every time, we can often consult [an online resource](https://en.wikipedia.org/wiki/Differentiation_rules) for a list of common derivatives, in this case our trusty friend Wikipedia.\n",
    "\n",
    "If I told you to find the derivative of $f(x) = x^3$, you could consult that site and find, in Section 2.1, that if $f(x) = x^r$, then $\\frac{d(f(x))}{dx} = rx^{r-1}$. So you would be able to tell me that the derivative of $f(x) = x^3$ is $\\frac{d(f(x))}{dx} = 3x^{2}$.\n",
    "\n",
    "This list of common derivatives often contains only very simple functions. Luckily, as we'll see in the next two sections, we can often break the derivative of a complex function down into the derivatives of more simple components."
   ]
  },
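  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check of the power rule quoted above (an illustrative sketch, not part of the original exercises), the cell below asks sympy for the derivative of $x^3$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: verify the power rule d(x**r)/dx = r*x**(r-1) for r = 3\n",
    "import sympy as sp  # already imported in Setup\n",
    "\n",
    "x = sp.symbols('x')\n",
    "diff_x3 = sp.diff(x**3, x)\n",
    "print('Derivative of x**3 is', diff_x3)  # 3*x**2\n",
    "print(diff_x3 == 3 * x**2)  # True"
   ]
  },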
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Section 2.1.1: Product Rule\n",
    "Sometimes we encounter functions which are the product of two functions that both depend on the variable. \n",
    "How do we take the derivative of such functions? For this we use the [Product Rule](https://en.wikipedia.org/wiki/Product_rule).\n",
    "\n",
    "\\begin{align}\n",
    "f(t) = u(t)\\cdot v(t)\\\\\n",
    "\\frac{d(f(t))}{dt} = v\\cdot \\frac{du}{dt} + u\\cdot \\frac{dv}{dt}\\\\\n",
    "\\end{align}\n"
   ]
  },
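  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see the product rule in action first (an illustrative sketch using a different function from the exercise below, not part of the original tutorial), the cell below applies it to $f(t) = t \\cdot \\sin(t)$ and confirms that the result matches sympy's `diff`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: product rule for f(t) = t * sin(t)\n",
    "import sympy as sp  # already imported in Setup\n",
    "\n",
    "t = sp.symbols('t')\n",
    "u = t          # u(t)\n",
    "v = sp.sin(t)  # v(t)\n",
    "\n",
    "# Product rule: v * du/dt + u * dv/dt\n",
    "product_rule = v * sp.diff(u, t) + u * sp.diff(v, t)\n",
    "print('By the product rule:', product_rule)  # should equal sin(t) + t*cos(t)\n",
    "\n",
    "# Difference from sympy's direct derivative simplifies to 0, so they agree\n",
    "print(sp.simplify(product_rule - sp.diff(u * v, t)))  # 0"
   ]
  },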
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Coding Exercise 2.1.1: Derivative of the postsynaptic potential alpha function \n",
    "\n",
    "Let's use the product rule to get the derivative of the post-synaptic potential alpha function. As we saw in Video 3, the shape of the postsynaptic potential is given by the so called alpha function:\n",
    "\n",
    "\\begin{align*}\n",
    "f(t) = t \\cdot \\text{exp}\\left(-\\frac{t}{\\tau}\\right)\n",
    "\\end{align*}\n",
    "\n",
    "Here $f(t)$ is a product of $t$ and $\\text{exp}(-\\frac{t}{\\tau})$. The variable $\\tau$ is the time constant of the synapse.\n",
    "\n",
    "We have defined $u(t)$ and $v(t)$ in the code below, in terms of the variable $t$, which is an array of time steps from 0 to 10. Define $\\frac{du}{dt}$ and $\\frac{dv}{dt}$, then compute the full derivative of the alpha function using the product rule. You can always consult Wikipedia to figure out $\\frac{du}{dt}$ and $\\frac{dv}{dt}$!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:11.234204Z",
     "iopub.status.busy": "2021-06-02T02:46:11.232835Z",
     "iopub.status.idle": "2021-06-02T02:46:11.234948Z",
     "shell.execute_reply": "2021-06-02T02:46:11.235441Z"
    }
   },
   "outputs": [],
   "source": [
    "# Define time, time constant\n",
    "t = np.arange(0, 10, .1)\n",
    "tau = 0.5\n",
    "\n",
    "# Compute alpha function\n",
    "f = t * np.exp(-t/tau)\n",
    "\n",
    "# Define u(t), v(t)\n",
    "u_t = t\n",
    "v_t = np.exp(-t/tau)\n",
    "\n",
    "# Define du/dt, dv/dt\n",
    "du_dt = ...\n",
    "dv_dt = ...\n",
    "\n",
    "# Define full derivative\n",
    "df_dt = ...\n",
    "\n",
    "# Uncomment below to visualize\n",
    "#plot_alpha_func(t, f, df_dt)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:11.242173Z",
     "iopub.status.busy": "2021-06-02T02:46:11.241514Z",
     "iopub.status.idle": "2021-06-02T02:46:11.659535Z",
     "shell.execute_reply": "2021-06-02T02:46:11.660008Z"
    }
   },
   "source": [
    "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_366c0574.py)\n",
    "\n",
    "*Example output:*\n",
    "\n",
    "<img alt='Solution hint' align='left' width=843 height=303 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D4_Calculus/static/W0D4_Tutorial1_Solution_366c0574_0.png>\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Section 2.1.2: Chain Rule\n",
    "Many times we encounter situations in which a variable $a$ is changing with time ($t$) and affecting another variable $r$. How can we estimate the derivative of $r$ with respect to $a$, i.e., $\\frac{dr}{da}$?\n",
    "\n",
    "To calculate $\\frac{dr}{da}$ we use the [Chain Rule](https://en.wikipedia.org/wiki/Chain_rule).\n",
    "\n",
    "\\begin{align}\n",
    "\\frac{dr}{da} = \\frac{dr}{dt}\\cdot\\frac{dt}{da}\n",
    "\\end{align}\n",
    "\n",
    "That is, we calculate the derivative of both variables with respect to time and divide the time derivative of $r$ by the time derivative of $a$.\n",
    "\n",
    "We will step back from applications for a second: we can use this to simplify taking derivatives of complex functions, as you will see in the next exercise.\n",
    "\n"
   ]
  },
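  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an illustrative sketch of the chain rule (using a different function from the exercise below; not part of the original tutorial), the cell below differentiates $r(a) = \\sin(a^2)$ by setting $t = a^2$, so that $\\frac{dr}{da} = \\frac{dr}{dt} \\cdot \\frac{dt}{da} = \\cos(a^2) \\cdot 2a$, and checks the result against sympy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: chain rule for r(a) = sin(a**2), with t = a**2\n",
    "import sympy as sp  # already imported in Setup\n",
    "\n",
    "a = sp.symbols('a')\n",
    "t_of_a = a**2            # inner function t(a)\n",
    "r_of_a = sp.sin(t_of_a)  # full function r(a) = sin(a**2)\n",
    "\n",
    "# Chain rule by hand: dr/dt evaluated at t(a), times dt/da\n",
    "by_hand = sp.cos(t_of_a) * sp.diff(t_of_a, a)  # cos(a**2) * 2a\n",
    "print('By the chain rule:', by_hand)\n",
    "\n",
    "# Difference from sympy's direct derivative simplifies to 0, so they agree\n",
    "print(sp.simplify(by_hand - sp.diff(r_of_a, a)))  # 0"
   ]
  },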
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Math Exercise 2.1.2: Chain Rule\n",
    "\n",
    "Let's say that:\n",
    "$$ r(a) = e^{a^4 + 1} $$\n",
    "\n",
    "What is $\\frac{dr}{da}$? This is a more complex function so we can't simply consult a table of common derivatives. Can you use the chain rule to help?\n",
    "\n",
    "Hint: we didn't define $t$, but you could set $t$ equal to the function in the exponent."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:11.664951Z",
     "iopub.status.busy": "2021-06-02T02:46:11.664216Z",
     "iopub.status.idle": "2021-06-02T02:46:11.670439Z",
     "shell.execute_reply": "2021-06-02T02:46:11.669929Z"
    }
   },
   "source": [
    "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_a0e42694.py)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Section 2.1.3: Derivatives in Python using SymPy\n",
    "\n",
    "There is a useful Python library for getting the analytical derivatives of functions: SymPy. We actually used it under the hood in Interactive Demo 1.\n",
    "\n",
    "See the following cell for an example of setting up a sympy function and finding the derivative."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:11.675476Z",
     "iopub.status.busy": "2021-06-02T02:46:11.674857Z",
     "iopub.status.idle": "2021-06-02T02:46:11.745356Z",
     "shell.execute_reply": "2021-06-02T02:46:11.744818Z"
    }
   },
   "outputs": [],
   "source": [
    "# For sympy we first define our symbolic variables\n",
    "f, t = sp.symbols('f, t')\n",
    "\n",
    "# Function definition (sigmoid)\n",
    "f = 1/(1 + sp.exp(-(t-5)))\n",
    "\n",
    "# Get the derivative\n",
    "diff_f = sp.diff(f)\n",
    "\n",
    "# Print the resulting function\n",
    "print('Derivative of', f, 'is ', diff_f)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 2.2: Numerical Differentiation\n",
    "\n",
    "\n",
    "The derivative of a function $f(x)$ at a value $a$ can be approximated by the finite difference (FD) formula:\n",
    "\n",
    "\\begin{align*}\n",
    "FD = \\frac{f(a+h) - f(a)}{h}\n",
    "\\end{align*}\n",
    "\n",
    "As $h\\rightarrow 0$, the FD approaches the actual value of the derivative. Let's check this.\n",
    "\n",
    "*Note that the numerical estimate of the derivative will result\n",
    "in a time series whose length is one short of the original time series.*"
   ]
  },
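  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before the interactive demo, the cell below (an illustrative sketch, not part of the original tutorial) checks this claim numerically: for $f(t) = t^2$ at $a = 1$, the true derivative is $2$, and the finite-difference estimate works out to $2 + h$, so the error shrinks linearly with $h$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: FD estimate of d(t**2)/dt at a = 1 (true value: 2)\n",
    "f = lambda t: t**2\n",
    "a = 1.0\n",
    "\n",
    "for h in [0.5, 0.1, 0.01, 0.001]:\n",
    "  fd = (f(a + h) - f(a)) / h\n",
    "  # For this f, FD = 2 + h, so the error equals h\n",
    "  print(f'h = {h:6.3f}   FD = {fd:.4f}   error = {abs(fd - 2):.4f}')"
   ]
  },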
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Interactive Demo 2.2: Numerical Differentiation of the Sine Function\n",
    "\n",
    "Below, we find the numerical derivative of the sine function for different values of $h$ and compare the result to the analytical solution.\n",
    "\n",
    "- What values of $h$ result in more accurate numerical derivatives?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:11.772578Z",
     "iopub.status.busy": "2021-06-02T02:46:11.771951Z",
     "iopub.status.idle": "2021-06-02T02:46:12.106248Z",
     "shell.execute_reply": "2021-06-02T02:46:12.105560Z"
    }
   },
   "outputs": [],
   "source": [
    "# @markdown *Execute this cell to enable the widget.*\n",
    "def numerical_derivative_demo(h = 0.2):\n",
    "  # Now let's create a sequence of numbers which change according to the sine function\n",
    "  dt = 0.01\n",
    "  tx = np.arange(-10, 10, dt)\n",
    "  sine_fun = np.sin(tx)\n",
    "\n",
    "  # Symbolic differentiation tells us that the derivative of sin(t) is cos(t)\n",
    "  cos_fun = np.cos(tx)\n",
    "\n",
    "  # Numerical derivative using difference formula\n",
    "  n_tx = np.arange(-10,10,h) # create new time axis\n",
    "  n_sine_fun = np.sin(n_tx) # calculate the sine function on the new time axis\n",
    "  sine_diff = (n_sine_fun[1:] - n_sine_fun[0:-1]) / h\n",
    "\n",
    "  fig = plt.figure()\n",
    "  ax = plt.subplot(111)\n",
    "  plt.plot(tx, sine_fun, label='sine function')\n",
    "  plt.plot(tx, cos_fun, label='analytical derivative of sine')\n",
    "\n",
    "  with plt.xkcd():\n",
    "    # notice that numerical derivative will have one element less\n",
    "    plt.plot(n_tx[0:-1], sine_diff, label='numerical derivative of sine')\n",
    "    plt.xlim([-10, 10])\n",
    "    plt.xlabel('Time (au)')\n",
    "    plt.ylabel('f(x) or df(x)/dt')\n",
    "    ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.05),\n",
    "              ncol=3, fancybox=True)\n",
    "    plt.show()\n",
    "\n",
    "_ = widgets.interact(numerical_derivative_demo, h = (0.01, 0.5, .02))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:12.115213Z",
     "iopub.status.busy": "2021-06-02T02:46:12.114595Z",
     "iopub.status.idle": "2021-06-02T02:46:12.119020Z",
     "shell.execute_reply": "2021-06-02T02:46:12.118384Z"
    }
   },
   "source": [
    "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_36cd3b93.py)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 2.3: Transfer Function and Gain of a Neuron\n",
    "When we inject a constant (DC) current into a neuron, its firing rate changes as a function of the strength of the injected current. This is called the **input-output transfer function** (or just the *transfer function* or *I/O curve*) of the neuron. For most neurons this can be approximated by a sigmoid function, e.g.,\n",
    "\n",
    "\\begin{align}\n",
    "rate(I) = \\frac{1}{1+e^{-a(I-\\theta)}} - \\frac{1}{1+e^{a\\theta}} + \\eta\n",
    "\\end{align}\n",
    "\n",
    "where $I$ is the injected current, $rate$ is the neuron's firing rate, and $\\eta$ is Gaussian noise with zero mean and standard deviation $\\sigma$.\n",
    "\n",
    "*You will visit this equation in a different context in Week 3*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Coding Exercise 2.3: Calculating the Transfer Function and Gain of a Neuron\n",
    "The slope of a neuron's input-output transfer function ($\\frac{d(r(I))}{dI}$) is called the **gain** of the neuron, as it tells us how much the neuron's output will change if the input is changed.\n",
    "\n",
    "Estimate the gain of the following neuron transfer function using numerical differentiation. We will use our current step (`current_timestep`) as $h$.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:12.125056Z",
     "iopub.status.busy": "2021-06-02T02:46:12.123786Z",
     "iopub.status.idle": "2021-06-02T02:46:12.125716Z",
     "shell.execute_reply": "2021-06-02T02:46:12.126184Z"
    }
   },
   "outputs": [],
   "source": [
    "# @markdown *Execute this cell to enable the numerical differentiation function: `numerical_derivative`*\n",
    "\n",
    "def numerical_derivative(x, h):\n",
    "  '''Numerical derivative calculation\n",
    "    Args:\n",
    "      x: array of numbers\n",
    "      h: step size for differentiation\n",
    "\n",
    "    Returns:\n",
    "      Numerical derivative of x for a step size of h\n",
    "  '''\n",
    "\n",
    "  dxdt = (x[1:] - x[0:-1]) / h\n",
    "  return dxdt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:12.133366Z",
     "iopub.status.busy": "2021-06-02T02:46:12.132731Z",
     "iopub.status.idle": "2021-06-02T02:46:12.295910Z",
     "shell.execute_reply": "2021-06-02T02:46:12.293430Z"
    }
   },
   "outputs": [],
   "source": [
    "def compute_rate_and_gain(I, a, theta, current_timestep):\n",
    "  \"\"\" Compute rate and gain of neuron based on parameters\n",
    "\n",
    "  Args:\n",
    "    I (ndarray): different possible values of the current\n",
    "    a (scalar): parameter of the transfer function\n",
    "    theta (scalar): parameter of the transfer function\n",
    "    current_timestep (scalar): the step size in current, used as h\n",
    "\n",
    "  Returns:\n",
    "    (ndarray, ndarray): rate and gain for each possible value of I\n",
    "  \"\"\"\n",
    "\n",
    "  ########################################################################\n",
    "  ## TODO for students: calculate the gain of the neural firing rate ##\n",
    "  ## Complete line of code and remove\n",
    "  raise NotImplementedError(\"Calculate the gain\")\n",
    "  ########################################################################\n",
    "\n",
    "  # Compute rate\n",
    "  rate = (1+np.exp(-a*(I-theta)))**-1 - (1+np.exp(a*theta))**-1\n",
    "\n",
    "  # Compute gain\n",
    "  gain = ...\n",
    "\n",
    "  return rate, gain\n",
    "\n",
    "\n",
    "current_timestep = 0.1\n",
    "I = np.arange(0, 8, current_timestep)\n",
    "\n",
    "# Neuron transfer function\n",
    "a = 1.2     # You can change this value\n",
    "theta = 5 # You can change this value\n",
    "\n",
    "# Compute rate and gain\n",
    "rate, gain = compute_rate_and_gain(I, a, theta, current_timestep)\n",
    "\n",
    "# Visualize rate and gain\n",
    "plot_rate_and_gain(I, rate, gain)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:12.320448Z",
     "iopub.status.busy": "2021-06-02T02:46:12.318074Z",
     "iopub.status.idle": "2021-06-02T02:46:12.702943Z",
     "shell.execute_reply": "2021-06-02T02:46:12.703437Z"
    }
   },
   "source": [
    "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_9fc5d678.py)\n",
    "\n",
    "*Example output:*\n",
    "\n",
    "<img alt='Solution hint' align='left' width=843 height=303 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D4_Calculus/static/W0D4_Tutorial1_Solution_9fc5d678_0.png>\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The slope of the transfer function tells us over which range of inputs the neuron is most sensitive to changes in its input. Change the parameters of the neuron transfer function (i.e. $a$ and $\\theta$) and see if you can predict the value of $I$ at which the neuron has maximal slope, and which parameter determines the peak value of the gain."
   ]
  },
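  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on this intuition, here is a minimal sketch (not the exercise solution; the parameter values are illustrative) that estimates the gain with a forward difference and locates its peak. The peak sits near $I = \\theta$:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Illustrative transfer function parameters (same form as above)\n",
    "a, theta = 1.2, 5.0\n",
    "h = 0.1\n",
    "I = np.arange(0, 8, h)\n",
    "\n",
    "# Sigmoid transfer function, shifted so that rate(0) = 0\n",
    "rate = (1 + np.exp(-a * (I - theta)))**-1 - (1 + np.exp(a * theta))**-1\n",
    "\n",
    "# Forward-difference estimate of the gain d(rate)/dI\n",
    "gain = (rate[1:] - rate[:-1]) / h\n",
    "\n",
    "# The gain peaks where the sigmoid is steepest, i.e. near I = theta\n",
    "I_peak = I[np.argmax(gain)]\n",
    "```"
   ]
  },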
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Section 3: Functions of Multiple Variables\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:12.708909Z",
     "iopub.status.busy": "2021-06-02T02:46:12.708293Z",
     "iopub.status.idle": "2021-06-02T02:46:12.777278Z",
     "shell.execute_reply": "2021-06-02T02:46:12.776788Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 4: Functions of Multiple Variables\n",
    "# Insert the ID of the corresponding youtube video\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"rLsLOWsNOGw\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the previous section, you looked at functions of a single variable, $t$ or $x$. In most cases, however, we encounter functions of multiple variables. For example, in the brain, the firing rate of a neuron is a function of both excitatory and inhibitory input rates. In the following, we will look at how to calculate derivatives of such functions."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, let's create a function of two variables. We take the example of a neuron driven by excitatory and inhibitory inputs. Because this is for illustrative purposes, we will not go into the details of the numerical ranges of the input and output variables.\n",
    "\n",
    "In the function below, we assume that the firing rate of a neuron increases monotonically with excitation and decreases monotonically with inhibition. The inhibition is modelled as a subtraction. As with the 1-dimensional transfer function, we assume that the transfer function can be approximated by a sigmoid."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:12.802012Z",
     "iopub.status.busy": "2021-06-02T02:46:12.785640Z",
     "iopub.status.idle": "2021-06-02T02:46:14.571195Z",
     "shell.execute_reply": "2021-06-02T02:46:14.571701Z"
    }
   },
   "outputs": [],
   "source": [
    "# @markdown Execute this cell to visualize the neuron firing rate surface\n",
    "def sigmoid_function(x,a,theta):\n",
    "    '''\n",
    "    Population activation function.\n",
    "\n",
    "    Expects:\n",
    "    x     : the population input\n",
    "    a     : the gain of the function\n",
    "    theta : the threshold of the function\n",
    "\n",
    "    Returns:\n",
    "    the population activation response F(x) for input x\n",
    "    '''\n",
    "    # add the expression of f = F(x)\n",
    "    f = (1+np.exp(-a*(x-theta)))**-1 - (1+np.exp(a*theta))**-1\n",
    "\n",
    "    return f\n",
    "\n",
    "# Neuron Transfer function\n",
    "step_size = 0.1\n",
    "exc_input = np.arange(2,9,step_size)\n",
    "inh_input = np.arange(0,7,step_size)\n",
    "exc_a = 1.2\n",
    "exc_theta = 2.4\n",
    "inh_a = 1.\n",
    "inh_theta = 4.\n",
    "\n",
    "rate = np.zeros((len(exc_input),len(inh_input)))\n",
    "\n",
    "for ii in range(len(exc_input)):\n",
    "  for jj in range(len(inh_input)):\n",
    "    rate[ii,jj] = sigmoid_function(exc_input[ii],exc_a,exc_theta) - sigmoid_function(inh_input[jj],inh_a,inh_theta)*0.5\n",
    "\n",
    "with plt.xkcd():\n",
    "  X, Y = np.meshgrid(exc_input, inh_input)\n",
    "  fig = plt.figure(figsize=(15,15))\n",
    "  ax1 = fig.add_subplot(2,2,1)\n",
    "  lg_txt = 'Inhibition = ' + str(inh_input[0])\n",
    "  ax1.plot(exc_input,rate[:,0],label=lg_txt)\n",
    "  lg_txt = 'Inhibition = ' + str(inh_input[20])\n",
    "  ax1.plot(exc_input,rate[:,20],label=lg_txt)\n",
    "  lg_txt = 'Inhibition = ' + str(inh_input[40])\n",
    "  ax1.plot(exc_input,rate[:,40],label=lg_txt)\n",
    "  ax1.legend()\n",
    "  ax1.set_xlabel('Excitatory input (au)')\n",
    "  ax1.set_ylabel('Neuron output rate (au)');\n",
    "\n",
    "  ax2 = fig.add_subplot(2,2,2)\n",
    "  lg_txt = 'Excitation = ' + str(exc_input[0])\n",
    "  ax2.plot(inh_input,rate[0,:],label=lg_txt)\n",
    "  lg_txt = 'Excitation = ' + str(exc_input[20])\n",
    "  ax2.plot(inh_input,rate[20,:],label=lg_txt)\n",
    "  lg_txt = 'Excitation = ' + str(exc_input[40])\n",
    "  ax2.plot(inh_input,rate[40,:],label=lg_txt)\n",
    "  ax2.legend()\n",
    "  ax2.set_xlabel('Inhibitory input (au)')\n",
    "  ax2.set_ylabel('Neuron output rate (au)');\n",
    "\n",
    "  ax3 = fig.add_subplot(2, 1, 2, projection='3d')\n",
    "  surf= ax3.plot_surface(Y.T, X.T, rate, rstride=1, cstride=1,\n",
    "                  cmap='viridis', edgecolor='none')\n",
    "  ax3.set_xlabel('Inhibitory input (au)')\n",
    "  ax3.set_ylabel('Excitatory input (au)')\n",
    "  ax3.set_zlabel('Neuron output rate (au)');\n",
    "  fig.colorbar(surf)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the **Top-Left** plot, we see how the neuron output rate increases as a function of excitatory input (e.g. the blue trace). As we increase inhibition, the neuron output decreases, as expected, and the curve shifts downwards. This constant shift suggests that the effect of inhibition is subtractive, and that the amount of subtraction does not depend on the neuron output. \n",
    "\n",
    "We can alternatively look at how the neuron output changes with respect to inhibition, and study how excitation affects that. This is visualized in the **Top-Right** plot.\n",
    "\n",
    "This type of plotting is very intuitive, but it becomes tedious when many lines must be plotted. A nice solution to this visualization problem is to render the data as color, as surfaces, or both. \n",
    "\n",
    "This is what we have done in the bottom plot, which shows the output of the neuron as a function of inhibitory and excitatory input. The output rate is rendered both as height along the z-axis and as color: blue means a low firing rate and yellow a high firing rate (see the color bar).\n",
    "\n",
    "In the above plot, the output rate of the neuron goes below zero. This is, of course, not physiological. In models, we either choose the operating point such that the output does not go below zero, or we clamp the neuron output to zero when it goes below zero. You will learn more about this in Week 3."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 3.1: Partial derivatives\n",
    "The above function is like a surface, and when thinking about the derivative of a surface we can use a physical analogy. \n",
    "\n",
    "Consider putting a ball on this surface. In which direction will the ball move? \n",
    "\n",
    "The movement along one direction (the x-axis) is determined by the inhibitory input, and along the other direction (the y-axis) by the excitatory input. The effective movement direction is the vector sum of the two (perhaps you recall vector sums from yesterday).\n",
    "\n",
    "That is, we can calculate the derivative of the surface with respect to the inhibitory input and then with respect to the excitatory input. \n",
    "\n",
    "When we take the derivative of a multivariable function with respect to one of its variables, it is called a **partial derivative**. For example, if we have the function:\n",
    "\n",
    "\\begin{align}\n",
    "f(x,y) = x^2 + 2xy + y^2\n",
    "\\end{align}\n",
    "\n",
    "Then we can define the partial derivatives as\n",
    "\n",
    "\\begin{align}\n",
    "\\frac{\\partial(f(x,y))}{\\partial x} = 2x + 2y + 0 \\\\\\\\\n",
    "\\frac{\\partial(f(x,y))}{\\partial y} = 0 + 2x + 2y\n",
    "\\end{align}\n",
    "\n",
    "In the above, the derivative of the last term ($y^2$) with respect to $x$ is zero because it does not change with respect to $x$. Similarly, the derivative of $x^2$ with respect to $y$ is also zero.\n",
    "\n"
   ]
  },
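  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick numerical check of these formulas (a sketch; the grid and sample point are illustrative and not used elsewhere in the tutorial):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# f(x, y) = x**2 + 2*x*y + y**2, sampled on a grid\n",
    "h = 0.01\n",
    "x = np.arange(-2, 2, h)\n",
    "y = np.arange(-2, 2, h)\n",
    "X, Y = np.meshgrid(x, y, indexing='ij')  # axis 0 is x, axis 1 is y\n",
    "F = X**2 + 2 * X * Y + Y**2\n",
    "\n",
    "# Forward differences along each axis approximate the partial derivatives\n",
    "dF_dx = (F[1:, :] - F[:-1, :]) / h  # should approximate 2x + 2y\n",
    "dF_dy = (F[:, 1:] - F[:, :-1]) / h  # should approximate 2x + 2y\n",
    "\n",
    "# Compare against the analytical result at an interior grid point\n",
    "i, j = 100, 150\n",
    "analytic = 2 * x[i] + 2 * y[j]\n",
    "```"
   ]
  },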
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Interactive Demo 3.1: Visualize partial derivatives\n",
    "\n",
    "In the demo below, you can input any function of x and y and then visualize both the function and partial derivatives. \n",
    "\n",
    "We visualized the 2-dimensional function as a surface plot in which the values of the function are rendered as color. Yellow represents a high value and blue a low value. The height of the surface also shows the numerical value of the function. The first plot shows our function, and the two bottom plots show the derivative surfaces with respect to the $x$ and $y$ variables.\n",
    "\n",
    "1.   Ensure you understand how the plots relate to each other - if not, review the above material\n",
    "2.   Can you come up with a function whose partial derivative with respect to x is a flat plane, while the derivative with respect to y is curved?\n",
    "3.   What happens to the partial derivatives if there are no terms involving multiplying x and y together?\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:14.601025Z",
     "iopub.status.busy": "2021-06-02T02:46:14.594916Z",
     "iopub.status.idle": "2021-06-02T02:46:15.809571Z",
     "shell.execute_reply": "2021-06-02T02:46:15.809016Z"
    }
   },
   "outputs": [],
   "source": [
    "# @markdown Execute this widget to enable the demo\n",
    "\n",
    "# Let's use sympy to calculate Partial derivatives of a function of 2-variables\n",
    "@interact(f2d_string = 'x**2 + 2*x*y + y**2')\n",
    "def plot_partial_derivs(f2d_string):\n",
    "  f, x, y = sp.symbols('f, x, y')\n",
    "\n",
    "  f2d = eval(f2d_string)\n",
    "  f2d_dx = sp.diff(f2d,x)\n",
    "  f2d_dy = sp.diff(f2d,y)\n",
    "\n",
    "  print('Partial derivative of ', f2d, 'with respect to x is', f2d_dx)\n",
    "  print('Partial derivative of ', f2d, 'with respect to y is', f2d_dy)\n",
    "\n",
    "  p1 = sp.plotting.plot3d(f2d, (x, -5, 5), (y, -5, 5),show=True,xlabel='x', ylabel='y', zlabel='f(x,y)',title='Our function')\n",
    "\n",
    "  p2 = sp.plotting.plot3d(f2d_dx, (x, -5, 5), (y, -5, 5),show=True,xlabel='x', ylabel='y', zlabel='df(x,y)/dx',title='Derivative w.r.t. x')\n",
    "\n",
    "  p3 = sp.plotting.plot3d(f2d_dy, (x, -5, 5), (y, -5, 5),show=True,xlabel='x', ylabel='y', zlabel='df(x,y)/dy',title='Derivative w.r.t. y')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:15.826618Z",
     "iopub.status.busy": "2021-06-02T02:46:15.825798Z",
     "iopub.status.idle": "2021-06-02T02:46:15.828920Z",
     "shell.execute_reply": "2021-06-02T02:46:15.829385Z"
    }
   },
   "source": [
    "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_5deca1d0.py)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 3.2: Numerical calculation of partial derivatives\n",
    "Now that you have an intuition about multivariable functions and partial derivatives, we can return to the neuron transfer function we evaluated earlier. \n",
    "To evaluate the partial derivatives, we can use the same numerical differentiation as before, but now we apply it to each row and column separately. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:15.837504Z",
     "iopub.status.busy": "2021-06-02T02:46:15.832141Z",
     "iopub.status.idle": "2021-06-02T02:46:18.925450Z",
     "shell.execute_reply": "2021-06-02T02:46:18.926060Z"
    }
   },
   "outputs": [],
   "source": [
    "# @markdown Execute this cell to visualize the transfer function\n",
    "# Neuron Transfer Function\n",
    "step_size = 0.1\n",
    "exc_input = np.arange(1,10,step_size)\n",
    "inh_input = np.arange(0,7,step_size)\n",
    "exc_a = 1.2\n",
    "exc_theta = 2.4\n",
    "inh_a = 1.\n",
    "inh_theta = 4.\n",
    "\n",
    "rate = np.zeros((len(exc_input),len(inh_input)))\n",
    "for ii in range(len(exc_input)):\n",
    "  for jj in range(len(inh_input)):\n",
    "    rate[ii,jj] = sigmoid_function(exc_input[ii],exc_a,exc_theta) - sigmoid_function(inh_input[jj],inh_a,inh_theta)*0.5\n",
    "\n",
    "# Derivative with respect to excitatory input rate\n",
    "rate_de = np.zeros((len(exc_input)-1,len(inh_input)))# this will have one row less than the rate matrix\n",
    "for ii in range(len(inh_input)):\n",
    "  rate_de[:,ii] = (rate[1:,ii] - rate[0:-1,ii])/step_size\n",
    "\n",
    "# Derivative with respect to inhibitory input rate\n",
    "rate_di = np.zeros((len(exc_input),len(inh_input)-1))# this will have one column less than the rate matrix\n",
    "for ii in range(len(exc_input)):\n",
    "  rate_di[ii,:] = (rate[ii,1:] - rate[ii,0:-1])/step_size\n",
    "\n",
    "with plt.xkcd():\n",
    "  X, Y = np.meshgrid(exc_input, inh_input)\n",
    "  fig = plt.figure(figsize=(20,8))\n",
    "  ax1 = fig.add_subplot(1, 3, 1, projection='3d')\n",
    "  surf1 = ax1.plot_surface(Y.T, X.T, rate, rstride=1, cstride=1, cmap='viridis', edgecolor='none')\n",
    "  ax1.set_xlabel('Inhibitory input (au)')\n",
    "  ax1.set_ylabel('Excitatory input (au)')\n",
    "  ax1.set_zlabel('Neuron output rate (au)')\n",
    "  ax1.set_title('Rate as a function of Exc. and Inh');\n",
    "  ax1.view_init(45, 10)\n",
    "  fig.colorbar(surf1)\n",
    "\n",
    "  Xde, Yde = np.meshgrid(exc_input[0:-1], inh_input)\n",
    "  ax2 = fig.add_subplot(1, 3, 2, projection='3d')\n",
    "  surf2 = ax2.plot_surface(Yde.T, Xde.T, rate_de, rstride=1, cstride=1, cmap='viridis', edgecolor='none')\n",
    "  ax2.set_xlabel('Inhibitory input (au)')\n",
    "  ax2.set_ylabel('Excitatory input (au)')\n",
    "  ax2.set_zlabel('Neuron output rate (au)');\n",
    "  ax2.set_title('Derivative wrt Excitation');\n",
    "  ax2.view_init(45, 10)\n",
    "  fig.colorbar(surf2)\n",
    "\n",
    "  Xdi, Ydi = np.meshgrid(exc_input, inh_input[:-1])\n",
    "  ax3 = fig.add_subplot(1, 3, 3, projection='3d')\n",
    "  surf3 = ax3.plot_surface(Ydi.T, Xdi.T, rate_di, rstride=1, cstride=1, cmap='viridis', edgecolor='none')\n",
    "  ax3.set_xlabel('Inhibitory input (au)')\n",
    "  ax3.set_ylabel('Excitatory input (au)')\n",
    "  ax3.set_zlabel('Neuron output rate (au)');\n",
    "  ax3.set_title('Derivative wrt Inhibition');\n",
    "  ax3.view_init(15, -115)\n",
    "  fig.colorbar(surf3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Is this what you expected? Vary the inputs and the transfer function parameters and see if your intuitions are correct."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Section 4: Numerical Integration\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:18.936125Z",
     "iopub.status.busy": "2021-06-02T02:46:18.935413Z",
     "iopub.status.idle": "2021-06-02T02:46:19.012084Z",
     "shell.execute_reply": "2021-06-02T02:46:19.011456Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 5: Numerical Integration\n",
    "# Insert the ID of the corresponding youtube video\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"sj_83_811j0\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Geometrically, integration is the area under the curve. This interpretation gives two formal ways to calculate the integral of a function numerically. \n",
    "\n",
    "**[Riemann sum](https://en.wikipedia.org/wiki/Riemann_sum)**: \n",
    "If we wish to integrate a function $f(t)$ with respect to $t$, we first divide its domain into $n$ intervals of size $dt = b-a$, where $a$ is the start of an interval and $b$ its end. Each interval then gives a rectangle with height $f(a)$ and width $dt$. By summing the areas of all the rectangles, we can approximate the area under the curve. As the size $dt$ approaches zero, our estimate of the integral approaches the analytical calculation. Essentially, the Riemann sum cuts the region under the curve into vertical strips, calculates the area of each strip, and sums them up.\n",
    "\n",
    "**[Lebesgue integral](https://en.wikipedia.org/wiki/Lebesgue_integral)**: In the Lebesgue integral, we divide the area under the curve into horizontal stripes. That is, instead of the independent variable, the range of the function $f(t)$ is divided into small intervals."
   ]
  },
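  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the Riemann sum concrete, here is a minimal sketch (the integrand is chosen purely for illustration) of a left Riemann sum for $f(t) = t^2$ on $[0, 3]$, whose analytic value is $3^3/3 = 9$:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Left Riemann sum of f(t) = t**2 on [0, 3]\n",
    "dt = 0.001\n",
    "t = np.arange(0, 3, dt)      # left edge a of each interval\n",
    "riemann = np.sum(t**2) * dt  # sum of rectangle areas f(a) * dt\n",
    "\n",
    "# For an increasing function, the left sum slightly underestimates the\n",
    "# true area; the estimate improves as dt shrinks\n",
    "```"
   ]
  },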
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 4.1: Demonstration of the Riemann Sum"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Interactive Demo 4.1: Riemann Sum vs. Analytical Integral with changing step size\n",
    "\n",
    "Below, we will compare numerical integration using the Riemann Sum with the analytical solution. You can change the interval size $dt$ using the slider.\n",
    "\n",
    "\n",
    "\n",
    "1.   What values of dt result in the best numerical integration?\n",
    "2.   What is the downside of choosing that value of dt?\n",
    "3.   With large dt, why are we underestimating the integral (as opposed to overestimating)?\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:19.043484Z",
     "iopub.status.busy": "2021-06-02T02:46:19.042114Z",
     "iopub.status.idle": "2021-06-02T02:46:19.593509Z",
     "shell.execute_reply": "2021-06-02T02:46:19.592996Z"
    }
   },
   "outputs": [],
   "source": [
    "# @markdown Run this cell to enable the widget!\n",
    "def riemann_sum_demo(dt = 0.5):\n",
    "  step_size = 0.1\n",
    "  min_val = 0.\n",
    "  max_val = 10.\n",
    "  tx = np.arange(min_val, max_val, step_size)\n",
    "\n",
    "  # Our function\n",
    "  ftn = tx**2 - tx + 1\n",
    "  # And its analytical integral (with the constant of integration set to zero)\n",
    "  int_ftn = tx**3/3 - tx**2/2 + tx\n",
    "\n",
    "  # Numerical integration of f(t) using Riemann Sum\n",
    "  n = int((max_val-min_val)/dt)\n",
    "  r_tx = np.zeros(n)\n",
    "  fun_value = np.zeros(n)\n",
    "  for ii in range(n):\n",
    "    a = min_val+ii*dt\n",
    "    fun_value[ii] = a**2 - a + 1\n",
    "    r_tx[ii] = a\n",
    "\n",
    "  # The Riemann sum is just the cumulative sum of fun_value multiplied by dt\n",
    "  r_sum = np.cumsum(fun_value)*dt\n",
    "  with plt.xkcd():\n",
    "    plt.figure(figsize=(20,5))\n",
    "    ax = plt.subplot(1,2,1)\n",
    "    plt.plot(tx,ftn,label='Function')\n",
    "\n",
    "    for ii in range(n):\n",
    "      plt.plot([r_tx[ii], r_tx[ii], r_tx[ii]+dt, r_tx[ii]+dt], [0, fun_value[ii], fun_value[ii], 0] ,color='r')\n",
    "\n",
    "    plt.xlabel('Time (au)')\n",
    "    plt.ylabel('f(t)')\n",
    "    plt.title('f(t)')\n",
    "    plt.grid()\n",
    "\n",
    "    plt.subplot(1,2,2)\n",
    "    plt.plot(tx,int_ftn,label='Analytical')\n",
    "    plt.plot(r_tx+dt,r_sum,color = 'r',label='Riemann Sum')\n",
    "    plt.xlabel('Time (au)')\n",
    "    plt.ylabel('int(f(t))')\n",
    "    plt.title('Integral of f(t)')\n",
    "    plt.grid()\n",
    "    plt.legend()\n",
    "    plt.show()\n",
    "\n",
    "\n",
    "_ = widgets.interact(riemann_sum_demo, dt = (0.1, 1., .02))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:19.601558Z",
     "iopub.status.busy": "2021-06-02T02:46:19.600752Z",
     "iopub.status.idle": "2021-06-02T02:46:19.603578Z",
     "shell.execute_reply": "2021-06-02T02:46:19.604043Z"
    }
   },
   "source": [
    "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_fd942e45.py)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are more advanced methods of numerical integration, such as Runge-Kutta. In any case, the Riemann sum is the basis of Euler's method for solving ordinary differential equations - something you will do in a later tutorial today.\n",
    "\n",
    "See Bonus Section 1 to work through some examples of neural applications of numerical integration."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 4.2: Neural Applications of Numerical Integration"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Coding Exercise 4.2: Calculating Charge Transfer with Excitatory Input\n",
    "An incoming spike elicits a change in the post-synaptic membrane potential which can be captured by the following function\n",
    "\n",
    "\\begin{align}\n",
    "PSP(t) = J \\times t \\times \\exp\\big(-\\frac{t-t_{sp}}{\\tau_{s}}\\big)\n",
    "\\end{align}\n",
    "\n",
    "where $J$ is the synaptic amplitude, $t_{sp}$ is the spike time and $\\tau_s$ is the synaptic time constant.\n",
    "\n",
    "Estimate the total charge transferred to the postsynaptic neuron during a PSP with amplitude $J=1.0$, $\\tau_s = 1.0$ and $t_{sp} = 1$ (that is, the spike occurred at 1 ms). The total charge is the integral of the PSP function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:19.609889Z",
     "iopub.status.busy": "2021-06-02T02:46:19.608645Z",
     "iopub.status.idle": "2021-06-02T02:46:19.610535Z",
     "shell.execute_reply": "2021-06-02T02:46:19.611019Z"
    }
   },
   "outputs": [],
   "source": [
    "# Set up parameters\n",
    "J = 1\n",
    "tau_s = 1\n",
    "t_sp = 1\n",
    "dt = .1\n",
    "t = np.arange(0, 10, dt)\n",
    "\n",
    "# Code PSP formula\n",
    "PSP = ...\n",
    "\n",
    "# Compute numerical integral\n",
    "# We already have PSP at every time step (height of rectangles). We need to\n",
    "#.  multiply by width of rectangles (dt) to get areas\n",
    "rectangle_areas = ...\n",
    "\n",
    "# Cumulatively sum rectangles (hint: use np.cumsum)\n",
    "numerical_integral = ...\n",
    "\n",
    "# Visualize\n",
    "# plot_charge_transfer(t, PSP, numerical_integral)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:19.672643Z",
     "iopub.status.busy": "2021-06-02T02:46:19.634830Z",
     "iopub.status.idle": "2021-06-02T02:46:19.985868Z",
     "shell.execute_reply": "2021-06-02T02:46:19.986319Z"
    }
   },
   "source": [
    "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_200c1e98.py)\n",
    "\n",
    "*Example output:*\n",
    "\n",
    "<img alt='Solution hint' align='left' width=843 height=303 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D4_Calculus/static/W0D4_Tutorial1_Solution_200c1e98_0.png>\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can see from the figure that the total charge transferred is a little over 2.5. "
   ]
  },
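  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a cross-check (a sketch, not the official solution): for $J = \\tau_s = t_{sp} = 1$, the integral of the PSP formula from 0 to infinity can be computed by hand and equals $e \\approx 2.718$, consistent with the figure above. A finer step size recovers this value numerically:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Same PSP formula as the exercise, with a finer step for accuracy\n",
    "J, tau_s, t_sp = 1.0, 1.0, 1.0\n",
    "dt = 0.01\n",
    "t = np.arange(0, 20, dt)\n",
    "\n",
    "PSP = J * t * np.exp(-(t - t_sp) / tau_s)\n",
    "\n",
    "# Left Riemann sum of the PSP gives the total charge transferred\n",
    "total_charge = np.sum(PSP) * dt\n",
    "\n",
    "# Analytically, this integral equals e for these parameter values\n",
    "```"
   ]
  },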
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Section 5: Integration and Differentiation as Filtering Operations\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:19.991796Z",
     "iopub.status.busy": "2021-06-02T02:46:19.991222Z",
     "iopub.status.idle": "2021-06-02T02:46:20.058853Z",
     "shell.execute_reply": "2021-06-02T02:46:20.059368Z"
    }
   },
   "outputs": [],
   "source": [
    "#@title Video 6: Filtering Operations\n",
    "# Insert the ID of the corresponding youtube video\n",
    "from IPython.display import YouTubeVideo\n",
    "video = YouTubeVideo(id=\"TQ0t-S3__OA\", width=854, height=480, fs=1)\n",
    "print(\"Video available at https://youtu.be/\" + video.id)\n",
    "video"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Above, we used the notions that, geometrically, integration is the area under the curve and differentiation is the slope of the curve. There is another interpretation of these two operations.\n",
    "\n",
    "When we calculate the derivative of a function, we take the difference of adjacent values of the function. This removes the part common to the two values; as a consequence, we end up removing the unchanging part of the signal. Thinking in terms of frequencies, differentiation removes low frequencies, or slow changes. That is, differentiation acts as a high-pass filter. \n",
    "\n",
    "Integration does the opposite, because in estimating an integral we keep adding adjacent values of the signal. So, again thinking in terms of frequencies, integration is akin to removing high frequencies or fast changes (a low-pass filter). The shock absorbers on your bike are an example of integrators. \n",
    "\n",
    "We can see this behavior in the demo below. Here we will work not with functions but with signals. For our purposes they are the same thing; it is just that our signals are usually measurements taken with respect to time."
   ]
  },
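  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before running the demo, the filtering claim can be checked directly with a few lines (a sketch with a fixed random seed for reproducibility; the variance comparisons quantify what the plots below show):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "t = np.arange(0, 2, 0.01)\n",
    "slow = np.sin(0.5 * np.pi * t)           # slow component\n",
    "noise = rng.uniform(0, 1, len(t)) * 0.5  # fast, noisy component\n",
    "x = slow + noise\n",
    "\n",
    "# Differencing (high-pass): mostly the noise survives\n",
    "x_diff = x[1:] - x[:-1]\n",
    "\n",
    "# Adjacent averaging (low-pass): the noise variance is roughly halved\n",
    "x_avg = (x[1:] + x[:-1]) / 2\n",
    "```"
   ]
  },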
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {
     "iopub.execute_input": "2021-06-02T02:46:20.077487Z",
     "iopub.status.busy": "2021-06-02T02:46:20.074742Z",
     "iopub.status.idle": "2021-06-02T02:46:20.872404Z",
     "shell.execute_reply": "2021-06-02T02:46:20.872915Z"
    }
   },
   "outputs": [],
   "source": [
    "# @markdown Execute this cell to see visualization\n",
    "h = 0.01\n",
    "tx = np.arange(0,2,h)\n",
    "noise_signal = np.random.uniform(0,1,(len(tx)))*0.5\n",
    "x1 = np.sin(0.5*np.pi*tx) + noise_signal # A 0.25 Hz sine wave plus noise\n",
    "# The added random noise contributes the high frequencies\n",
    "\n",
    "# Differentiation equivalent: subtract adjacent values of the signal\n",
    "x1_diff = (x1[1:] - x1[:-1])\n",
    "\n",
    "# Integration equivalent: sum adjacent values and divide by 2 (i.e. take the average)\n",
    "x1_integrate = (x1[1:] + x1[:-1])/2\n",
    "\n",
    "plt.figure(figsize=(15,10))\n",
    "plt.subplot(3,1,1)\n",
    "plt.plot(tx,x1,label='Original Signal')\n",
    "#plt.xlabel('Time (sec)')\n",
    "plt.ylabel('Signal Value (au)')\n",
    "plt.legend()\n",
    "\n",
    "plt.subplot(3,1,2)\n",
    "plt.plot(tx[0:-1],x1_diff,label='Differentiated Signal')\n",
    "# plt.xlabel('Time (sec)')\n",
    "plt.ylabel('Differentiated Value (au)')\n",
    "plt.legend()\n",
    "\n",
    "plt.subplot(3,1,3)\n",
    "plt.plot(tx,x1,label='Original Signal')\n",
    "plt.plot(tx[0:-1],x1_integrate,label='Integrated Signal')\n",
    "plt.xlabel('Time (sec)')\n",
    "plt.ylabel('Integrated Value (au)')\n",
    "plt.legend()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Notice how the differentiation operation amplifies the fast changes contributed by the noise. By contrast, the integration operation suppresses the fast-changing noise. Such sums and differences form the basis of digital filters. \n",
    "Vary the signal characteristics to see how these operations can be used to enhance or suppress noise.\n",
    "\n",
    "If you want to be adventurous, try applying these operations in series and see what happens. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Summary\n",
    "* Geometrically, integration is the area under a curve and differentiation is the slope of the function\n",
    "* The concepts of slope and area extend naturally to higher dimensions. We saw this when we took the derivative of the 2-dimensional transfer function of a neuron\n",
    "* Numerical estimates of both derivatives and integrals require choosing a step size $h$. The smaller the $h$, the better the estimate, but more computations are needed, so there is always a tradeoff.\n",
    "* Partial derivatives estimate the slope along one of the many dimensions of a function. We can combine the slopes in different directions using a vector sum to find the overall direction of the slope.\n",
    "* Because the derivative of a function is zero at a local peak or trough, derivatives are used to solve optimization problems.\n",
    "* When thinking of signals, integration is equivalent to smoothing (i.e. removing fast changes)\n",
    "* Differentiation removes slow changes and enhances the high-frequency content of a signal"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "include_colab_link": true,
   "name": "W0D4_Tutorial1",
   "provenance": [],
   "toc_visible": true
  },
  "kernel": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
