{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Recurrent Neural Network: Training \n",
    "\n",
    "In this notebook, we discuss how to train RNNs.\n",
    "\n",
    "\n",
     "RNNs are trained with the backpropagation algorithm. However, applying backpropagation to an RNN is slightly tricky: we need to unroll the network in time and backpropagate the loss gradients through the unrolled computation. This technique is known as **backpropagation through time (BPTT)**.\n",
    "\n",
    "Let's formalize the training problem. We have a collection of sequence inputs $\\pmb{X}$ and outputs $\\pmb{Y}^*$, where\n",
    "\n",
    "- $\\pmb{X} = \\pmb{X}_1, \\pmb{X}_2, ..., \\pmb{X}_T$ \n",
    "- $\\pmb{Y}^* = \\pmb{Y}_1^{*}, \\pmb{Y}_2^{*}, ..., \\pmb{Y}_T^{*}$ \n",
    "\n",
    "\n",
    "We want to train RNN weight matrices to minimize the loss between the predicted outputs of the network $\\pmb{Y}$ and the desired outputs $\\pmb{Y}^*$. This is the most generic setting. In other settings we just \"remove\" some of the input or output entries.\n",
    "\n",
    "## Forward Propagation\n",
    "\n",
     "We pass the entire input sequence through the network and generate the outputs. The following is a pseudocode description of forward propagation in a single-layer RNN, using one training instance for illustration. At each timestep $t$, the current input $\\vec{x}_t$ and the previous hidden state $\\vec{h}_{t-1}$ are linearly combined to create a preactivation hidden state $\\vec{z}_t$ (strictly speaking, an affine combination, because of the bias term). Then, the RNN produces an activation hidden state $\\vec{h}_t$.\n",
    "\n",
     "The output layer linearly combines the hidden state activation to produce an output preactivation $\\vec{o}_t$, which is passed through an activation function to create $\\vec{y}_t$.\n",
    "\n",
     "For t = 1 : T\n",
     "\n",
     "    z(t) = W_xh * x(t) + W_hh * h(t-1) + b_h  # hidden state preactivation\n",
     "    h(t) = activation(z(t))                   # hidden state activation\n",
     "    o(t) = W_hy * h(t) + b_y                  # output preactivation\n",
     "    y(t) = activation(o(t))                   # output activation\n",
    "    \n",
    "    \n",
     "<img src=\"https://cse.unl.edu/~hasan/Pics/RNN_Forward.png\" width=300 height=200>\n",
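     "\n",
     "The pseudocode above can be sketched as a minimal NumPy forward pass (a sketch assuming `tanh` activations; the function name and toy dimensions are illustrative, not from this notebook):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def rnn_forward(X, W_xh, W_hh, W_hy, b_h, b_y, h0):\n",
     "    # X: (T, n_in); returns hidden states and outputs for all timesteps\n",
     "    h, hs, ys = h0, [], []\n",
     "    for x_t in X:                           # one column of the unrolled network\n",
     "        z_t = W_xh @ x_t + W_hh @ h + b_h   # hidden state preactivation\n",
     "        h = np.tanh(z_t)                    # hidden state activation\n",
     "        o_t = W_hy @ h + b_y                # output preactivation\n",
     "        ys.append(np.tanh(o_t))             # output activation\n",
     "        hs.append(h)\n",
     "    return np.array(hs), np.array(ys)\n",
     "\n",
     "# tiny example: T = 4 timesteps, 3 inputs, 5 hidden units, 2 outputs\n",
     "rng = np.random.default_rng(0)\n",
     "X = rng.standard_normal((4, 3))\n",
     "H, Y = rnn_forward(X, rng.standard_normal((5, 3)), rng.standard_normal((5, 5)),\n",
     "                   rng.standard_normal((2, 5)), np.zeros(5), np.zeros(2), np.zeros(5))\n",
     "```\n",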
    "\n",
    "\n",
    "## Backward Propagation\n",
    "\n",
     "An RNN has three weight matrices that are updated via the BPTT algorithm. This is done by calculating the gradients of the loss with respect to the three matrices:\n",
    "\n",
     "- $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hy}}$ : loss gradient w.r.t. the output weights\n",
     "- $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{xh}}$ : loss gradient w.r.t. the input weights\n",
     "- $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}}$ : loss gradient w.r.t. the recurrent weights\n",
    "\n",
    "\n",
     "<img src=\"https://cse.unl.edu/~hasan/Pics/RNN_Backward.png\" width=400 height=300>\n",
    "\n",
    "\n",
     "The unrolled computation in the BPTT algorithm is just a giant shared-parameter feedforward network: all columns are identical and share the same parameters. First, we sketch BPTT at a high level using the following figure.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "### Backpropagation through time (BPTT): Sketch\n",
    "\n",
     "The output signal is calculated at each timestep via forward propagation. It gives a sequence of output vectors $\\vec{y_1}, \\vec{y_2}, ..., \\vec{y_T}$, where $T$ is the final timestep. For each output vector, its loss $l(\\vec{y_t}, \\vec{y_t}^*)$ is computed using a suitable loss function, where $\\vec{y_t}^*$ is the true output. \n",
    "\n",
    "\n",
    "\n",
    "A suitable choice for the loss function is cross-entropy:\n",
    "\n",
     "$l(\\vec{y_t}, \\vec{y_t}^*) = -\\vec{y_t}^* \\cdot \\log \\vec{y_t}$\n",
    "\n",
     "The total loss $\\mathcal{L}$ incurred by the output sequence is calculated by taking the sum of the individual losses at each timestep from $t = 1$ to $T$:\n",
    "\n",
    "$\\mathcal{L} = \\sum_{t = 1}^{T}l(\\vec{y_t}, \\vec{y_t}^*)$\n",
    "\n",
     "=> $\\mathcal{L} = -\\sum_{t = 1}^{T}\\vec{y_t}^* \\cdot \\log (\\vec{y_t})$\n",
    "\n",
     "Here $\\mathcal{L}$ is a scalar function of a series of vectors. Note that the total loss may only include the output of the last few steps (or just the last step); it depends on the problem and the architecture. For example, in a sequence-to-vector model, such as in sentiment classification, the loss is computed based on the output of the final timestep. Also, the total loss is not always the sum of the losses at each timestep. For example, in speech recognition or machine translation, the predicted output sequence length may differ from the desired output sequence length. In this illustration, we consider a special case in which the predicted and the desired output sequences have the same length.\n",
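     "\n",
     "As a concrete check of this formula, here is a small NumPy sketch of the summed cross-entropy loss (the `eps` guard and the toy values are illustrative assumptions):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def total_loss(Y, Y_star):\n",
     "    # cross-entropy summed over all T timesteps; eps guards against log(0)\n",
     "    eps = 1e-12\n",
     "    return -np.sum(Y_star * np.log(Y + eps))\n",
     "\n",
     "# two timesteps, 3-class one-hot targets\n",
     "Y_star = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])\n",
     "Y = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])\n",
     "L = total_loss(Y, Y_star)   # -(log 0.7 + log 0.8)\n",
     "```\n",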
    "\n",
    "\n",
     "Based on the total loss, the loss gradients are computed and propagated backward through the unrolled network, as shown by the red arrows. Finally, the model parameters are updated using the gradients computed during backward propagation.\n",
    "\n",
    "\n",
     "<img src=\"https://cse.unl.edu/~hasan/Pics/RNN_Backprop_1_new.png\" width=900 height=700>\n",
    "\n",
    "\n",
    "### Backpropagation through time (BPTT): Calculation of Loss Gradients\n",
    "\n",
    "We now calculate the three loss gradients. We will use the following two forward equations for computing the hidden state and output at timestep $t$ for a single input vector $x_t$. In the forward equations, we separate the preactivation and activation terms. The preactivation terms are denoted by $z$, and the activation terms are denoted by $h$.\n",
    "\n",
    "\n",
     "<img src=\"https://cse.unl.edu/~hasan/Pics/RNN_Forward.png\" width=300 height=200>\n",
    "\n",
    "First, the input and the previous hidden state output are linearly combined to create the preactivation signal.\n",
    "\n",
    "$\\vec{z}_t = \\pmb{W}_{xh} \\vec{x}_t + \\pmb{W}_{hh} \\vec{h}_{t-1} + \\vec{b}_h$\n",
    "\n",
    "Then, the preactivation signal is transformed through an activation function.\n",
    "\n",
    "$\\vec{h}_t = \\phi \\otimes \\vec{z}_t$\n",
    "\n",
     "Similarly, for the output layer, the preactivation (denoted by $\\vec{o}_t$) is computed by linearly combining the hidden state activation with the output weights.\n",
    "\n",
    "$\\vec{o}_t = \\pmb{W}_{hy} \\vec{h}_t + \\vec{b}_y$\n",
    "\n",
    "Then, an activation is applied to create an output.\n",
    "\n",
    "$\\vec{y}_t = \\phi \\otimes \\vec{o}_t$\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
     "### Computation of the loss gradient w.r.t. $\\pmb{W}_{hy}$\n",
     "\n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hy}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\pmb{W}_{hy}}$\n",
     "\n",
     "Applying the chain rule of calculus, we can write:\n",
     "\n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hy}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y}_t} \\frac{\\partial \\vec{y}_t}{\\partial \\vec{o}_t} \\frac{\\partial \\vec{o}_t}{\\partial \\pmb{W}_{hy}}$\n",
     "\n",
     "=> $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hy}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y}_t} [\\phi^{'}\\otimes(\\vec{o}_t)] \\vec{h}_t$\n",
    "\n",
    "\n",
     "To compute the 2nd term, we take the derivative of a vector function $\\vec{y}_t$ w.r.t. a vector $\\vec{o}_t$. Thus, the result is a matrix whose elements are all the pointwise derivatives, known as the Jacobian matrix. Its values depend on the choice of activation function.\n",
    "\n",
     "The 3rd term above is computed trivially by applying the rules of partial derivatives.\n",
    "\n",
     "We see that calculating the loss gradient w.r.t. $\\pmb{W}_{hy}$ is straightforward, as it involves only matrix multiplication. However, we will see that the loss gradients w.r.t. $\\pmb{W}_{hh}$ and $\\pmb{W}_{xh}$ are non-trivial. \n",
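     "\n",
     "This structure (an output-layer delta times $\\vec{h}_t$, summed over $t$) can be checked numerically. Here is a sketch with `tanh` output activation and a squared-error loss standing in for $l$ (both illustrative assumptions), holding the hidden states fixed since $\\pmb{W}_{hy}$ does not affect them:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "rng = np.random.default_rng(1)\n",
     "n_h, n_out, T = 4, 2, 3\n",
     "W_hy = 0.5 * rng.standard_normal((n_out, n_h))\n",
     "b_y = np.zeros(n_out)\n",
     "H = rng.standard_normal((T, n_h))        # hidden activations, one row per timestep\n",
     "Y_star = rng.standard_normal((T, n_out))\n",
     "\n",
     "def loss(W):\n",
     "    Y = np.tanh(H @ W.T + b_y)           # y_t = phi(W_hy h_t + b_y)\n",
     "    return 0.5 * np.sum((Y - Y_star) ** 2)\n",
     "\n",
     "# analytic gradient: sum over t of [dl/dy_t * phi'(o_t)] outer h_t\n",
     "O = H @ W_hy.T + b_y\n",
     "Y = np.tanh(O)\n",
     "dW = ((Y - Y_star) * (1 - Y ** 2)).T @ H\n",
     "\n",
     "# finite-difference check of one entry\n",
     "eps = 1e-6\n",
     "Wp, Wm = W_hy.copy(), W_hy.copy()\n",
     "Wp[0, 0] += eps\n",
     "Wm[0, 0] -= eps\n",
     "num = (loss(Wp) - loss(Wm)) / (2 * eps)\n",
     "```\n",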
    "\n",
    "### Computation of the loss gradient w.r.t. $\\pmb{W}_{hh}$\n",
    "\n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\pmb{W}_{hh}}$\n",
     "\n",
     "\n",
     "By applying the chain rule of calculus, we can write:\n",
     "\n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y_t}} \\frac{\\partial \\vec{y_t}}{\\partial \\vec{h_t}} \\frac{\\partial \\vec{h}_{t}}{\\partial \\pmb{W}_{hh}}$\n",
     "\n",
     "\n",
     "=> $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y_t}} \\frac{\\partial \\vec{y_t}}{\\partial \\vec{o_t}} \\frac{\\partial \\vec{o_t}}{\\partial \\vec{h}_{t}} \\frac{\\partial \\vec{h}_{t}}{\\partial \\pmb{W}_{hh}}$\n",
     "\n",
     "\n",
     "=> $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y}_t} [\\phi^{'}\\otimes(\\vec{o}_t)] \\pmb{W}_{hy} \\frac{\\partial \\vec{h}_{t}}{\\partial \\pmb{W}_{hh}}$\n",
     "\n",
     "\n",
     "Again, the 2nd and the 3rd terms are computed trivially by applying the rules of partial derivatives. We need to compute the 4th term. \n",
    "\n",
     "- Before we do this, let's set up the remaining loss gradient, w.r.t. $\\pmb{W}_{xh}$.\n",
    "\n",
    "\n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{xh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\pmb{W}_{xh}}$\n",
     "\n",
     "\n",
     "By applying the chain rule of calculus, we can write:\n",
     "\n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{xh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y_t}} \\frac{\\partial \\vec{y_t}}{\\partial \\vec{h_t}} \\frac{\\partial \\vec{h}_{t}}{\\partial \\pmb{W}_{xh}}$\n",
     "\n",
     "\n",
     "=> $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{xh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y_t}} \\frac{\\partial \\vec{y_t}}{\\partial \\vec{o_t}} \\frac{\\partial \\vec{o_t}}{\\partial \\vec{h}_{t}} \\frac{\\partial \\vec{h}_{t}}{\\partial \\pmb{W}_{xh}}$\n",
     "\n",
     "\n",
     "=> $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{xh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y}_t} [\\phi^{'}\\otimes(\\vec{o}_t)] \\pmb{W}_{hy} \\frac{\\partial \\vec{h}_{t}}{\\partial \\pmb{W}_{xh}}$\n",
    "\n",
    "\n",
    "\n",
    "We need to compute the 4th term, which is $\\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{xh}}$ as well as $\\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{hh}}$ from the previous derivation.\n",
    "\n",
    "\n",
    "### Compute $\\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{xh}}$ & $\\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{hh}}$\n",
    "\n",
     "To do this, we use the simple illustration below, consisting of timesteps $t_1$ to $t_4$. We want to calculate the variation in the hidden state value at $t_4$ w.r.t. $\\pmb{W}_{hh}$. \n",
    "\n",
     "<img src=\"https://cse.unl.edu/~hasan/Pics/RNN_Backprop_2_new.png\" width=800 height=600>\n",
    "\n",
     "Observe that the current value of $\\vec{h}_4$ is influenced not only by the weight matrix $\\pmb{W}_{hh}$ at the current timestep, but also by the (shared) $\\pmb{W}_{hh}$ applied at all previous timesteps via the intermediate hidden state activations.\n",
    "\n",
     "Thus, we cannot calculate $\\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}}$ while treating $\\vec{h}_3$ as a constant. To accommodate the contribution of the previous hidden states, we divide the computation of $\\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}}$ into two parts. \n",
    "- First part: use only the influence of the weights $\\pmb{W}_{hh}$ at the current timestep\n",
    "- Second part: add the influence of the weights $\\pmb{W}_{hh}$ from the previous timestep\n",
    "\n",
    "Then, **recursively** apply this calculation backward until we reach the starting hidden state at $t = 1$.\n",
    "\n",
    "\n",
     "$\\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} = \\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} + \\frac{\\partial \\vec{h}_4}{\\partial \\vec{h}_{3}} \\frac{\\partial \\vec{h}_3}{\\partial \\pmb{W}_{hh}}$\n",
     "\n",
     "(In this recursion, the first term on the right-hand side is the *immediate* partial derivative, which treats $\\vec{h}_3$ as a constant; the second term adds the influence flowing through $\\vec{h}_3$.)\n",
    "\n",
    "\n",
    "=> $\\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} = \\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} + \\frac{\\partial \\vec{h}_4}{\\partial \\vec{h}_{3}}[ \\frac{\\partial \\vec{h}_3}{\\partial \\pmb{W}_{hh}} + \\frac{\\partial \\vec{h}_3}{\\partial \\vec{h}_{2}}\\frac{\\partial \\vec{h}_2}{\\partial \\pmb{W}_{hh}}]$\n",
    "\n",
    "\n",
    "\n",
    "=> $\\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} = \\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} + \\frac{\\partial \\vec{h}_4}{\\partial \\vec{h}_{3}}\\frac{\\partial \\vec{h}_3}{\\partial \\pmb{W}_{hh}} + \\frac{\\partial \\vec{h}_4}{\\partial \\vec{h}_{3}}\\frac{\\partial \\vec{h}_3}{\\partial \\vec{h}_{2}}[\\frac{\\partial \\vec{h}_2}{\\partial \\pmb{W}_{hh}} + \\frac{\\partial \\vec{h}_2}{\\partial \\vec{h}_{1}}\\frac{\\partial \\vec{h}_1}{\\partial \\pmb{W}_{hh}}]$\n",
    "\n",
    "\n",
    "=> $\\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} = \\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} + \\frac{\\partial \\vec{h}_4}{\\partial \\vec{h}_{3}}\\frac{\\partial \\vec{h}_3}{\\partial \\pmb{W}_{hh}} + \\frac{\\partial \\vec{h}_4}{\\partial \\vec{h}_{3}}\\frac{\\partial \\vec{h}_3}{\\partial \\vec{h}_{2}}\\frac{\\partial \\vec{h}_2}{\\partial \\pmb{W}_{hh}} + \\frac{\\partial \\vec{h}_4}{\\partial \\vec{h}_{3}}\\frac{\\partial \\vec{h}_3}{\\partial \\vec{h}_{2}}\\frac{\\partial \\vec{h}_2}{\\partial \\vec{h}_{1}}\\frac{\\partial \\vec{h}_1}{\\partial \\pmb{W}_{hh}}$\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "We need a simple expression to compute the sum for a longer sequence. To do this, we merge some intermediate terms, as follows.\n",
    "\n",
    "\n",
    "$\\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} = \\frac{\\partial \\vec{h}_4}{\\partial \\vec{h}_4}\\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} + \\frac{\\partial \\vec{h}_4}{\\partial \\vec{h}_{3}}\\frac{\\partial \\vec{h}_3}{\\partial \\pmb{W}_{hh}} + \\frac{\\partial \\vec{h}_4}{\\partial \\vec{h}_{2}}\\frac{\\partial \\vec{h}_2}{\\partial \\pmb{W}_{hh}} + \\frac{\\partial \\vec{h}_4}{\\partial \\vec{h}_{1}}\\frac{\\partial \\vec{h}_1}{\\partial \\pmb{W}_{hh}}$\n",
    "\n",
    "\n",
    "=> $\\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} = \\sum_{j=1}^4 \\frac{\\partial \\vec{h}_4}{\\partial \\vec{h}_{j}}\\frac{\\partial \\vec{h}_j}{\\partial \\pmb{W}_{hh}}$\n",
    "\n",
    "\n",
    "Now we can derive a general formula.\n",
    "\n",
    "$\\frac{\\partial \\vec{h}_T}{\\partial \\pmb{W}_{hh}} = \\sum_{t=1}^T \\frac{\\partial \\vec{h}_T}{\\partial \\vec{h}_{t}}\\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{hh}}$\n",
    "\n",
    "=> $\\frac{\\partial \\vec{h}_T}{\\partial \\pmb{W}_{hh}} = \\sum_{t=1}^T \\frac{\\partial \\vec{h}_T}{\\partial \\vec{h}_{t}}\\frac{\\partial \\vec{h}_t}{\\partial \\vec{z}_t} \\frac{\\partial \\vec{z}_t}{\\partial \\pmb{W}_{hh}}$\n",
    "\n",
    "=> $\\frac{\\partial \\vec{h}_T}{\\partial \\pmb{W}_{hh}} = \\sum_{t=1}^T \\frac{\\partial \\vec{h}_T}{\\partial \\vec{h}_{t}} [\\phi^{'}\\otimes(\\vec{z}_t)]\\vec{h}_{t-1}$ [computing the derivative for the last two terms]\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "=> $\\frac{\\partial \\vec{h}_T}{\\partial \\pmb{W}_{hh}} = \\sum_{t=1}^T (\\frac{\\partial \\vec{h}_T}{\\partial \\vec{h}_{T-1}} \\frac{\\partial \\vec{h}_{T-1}}{\\partial \\vec{h}_{T-2}} ...\\frac{\\partial \\vec{h}_{t+1}}{\\partial \\vec{h}_t}) [\\phi^{'}\\otimes(\\vec{z}_t)] \\vec{h}_{t-1}$ [applying chain rule]\n",
    "\n",
    "=> $ \\frac{\\partial \\vec{h}_T}{\\partial \\pmb{W}_{hh}} = \\sum_{t=1}^T(\\prod_{j=t+1}^{T} \\frac{\\partial \\vec{h}_j}{\\partial \\vec{h}_{j-1}}) [\\phi^{'}\\otimes(\\vec{z}_t)]\\vec{h}_{t-1}$\n",
    "\n",
    "\n",
    "Based on this derivation, we now write an expression for $ \\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{hh}}$:\n",
    "\n",
    "$ \\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{hh}} = \\sum_{i=1}^t(\\prod_{j=i+1}^t \\frac{\\partial \\vec{h}_j}{\\partial \\vec{h}_{j-1}}) [\\phi^{'}\\otimes(\\vec{z}_i)]\\vec{h}_{i-1}$\n",
    "\n",
    "\n",
    "Let's compute $\\frac{\\partial \\vec{h}_t}{\\partial \\vec{h}_{t-1}}$. To do this, we use the forward equation.\n",
    "\n",
    "\n",
    "\n",
    "$\\vec{h}_t = \\phi \\otimes (\\pmb{W}_{xh} \\vec{x}_t + \\pmb{W}_{hh} \\vec{h}_{t-1} + \\vec{b}_h)$\n",
    "\n",
    "Thus,\n",
    "\n",
    "$\\frac{\\partial \\vec{h}_t}{\\partial \\vec{h}_{t-1}} = \\phi^{'} \\otimes (\\pmb{W}_{xh} \\vec{x}_t + \\pmb{W}_{hh} \\vec{h}_{t-1} + \\vec{b}_h) \\pmb{W}_{hh}$\n",
    "\n",
    "\n",
    "$\\frac{\\partial \\vec{h}_t}{\\partial \\vec{h}_{t-1}} = \\phi^{'} \\otimes (.) \\pmb{W}_{hh}$\n",
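     "\n",
     "This Jacobian can be verified numerically. A sketch assuming `tanh` as $\\phi$, so that $\\phi^{'}(z) = 1 - \\tanh^{2}(z)$ (the names and dimensions below are illustrative):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "rng = np.random.default_rng(0)\n",
     "n = 3\n",
     "W_xh = rng.standard_normal((n, n))\n",
     "W_hh = rng.standard_normal((n, n))\n",
     "b_h = rng.standard_normal(n)\n",
     "x_t = rng.standard_normal(n)\n",
     "h_prev = rng.standard_normal(n)\n",
     "\n",
     "def step(h):\n",
     "    # forward equation: h_t = phi(W_xh x_t + W_hh h_{t-1} + b_h)\n",
     "    return np.tanh(W_xh @ x_t + W_hh @ h + b_h)\n",
     "\n",
     "# analytic Jacobian: diag(phi'(z_t)) @ W_hh\n",
     "z_t = W_xh @ x_t + W_hh @ h_prev + b_h\n",
     "J = np.diag(1.0 - np.tanh(z_t) ** 2) @ W_hh\n",
     "\n",
     "# finite-difference check, column by column\n",
     "eps = 1e-6\n",
     "J_num = np.zeros((n, n))\n",
     "for j in range(n):\n",
     "    e = np.zeros(n)\n",
     "    e[j] = eps\n",
     "    J_num[:, j] = (step(h_prev + e) - step(h_prev - e)) / (2 * eps)\n",
     "```\n",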
    "\n",
    "\n",
    "### Resume Computation of $\\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{hh}}$ \n",
    "\n",
    "\n",
    "\n",
    "Let's get back to the following calculation and complete it.\n",
    "\n",
    "$ \\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{hh}} = \\sum_{i=1}^t(\\prod_{j=i+1}^t \\frac{\\partial \\vec{h}_j}{\\partial \\vec{h}_{j-1}}) [\\phi^{'}\\otimes(\\vec{z}_i)]\\vec{h}_{i-1}$\n",
    "\n",
    "\n",
    "$ \\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{hh}} = \\sum_{i=1}^t(\\prod_{j=i+1}^t \\phi^{'} \\otimes (\\pmb{W}_{xh} \\vec{x}_j + \\pmb{W}_{hh} \\vec{h}_{j-1} + \\vec{b}_h) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_i)]\\vec{h}_{i-1}$\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
     "### Complete the Computation of $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}}$ & $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{xh}}$\n",
     "\n",
     "Using this formula, we can now complete the calculation of $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}}$:\n",
     "\n",
     "\n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y}_t} [\\phi^{'}\\otimes(\\vec{o}_t)] \\pmb{W}_{hy} \\frac{\\partial \\vec{h}_{t}}{\\partial \\pmb{W}_{hh}}$\n",
     "\n",
     "\n",
     "\n",
     "=> $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y}_t} [\\phi^{'}\\otimes(\\vec{o}_t)] \\pmb{W}_{hy} \\sum_{i=1}^t(\\prod_{j=i+1}^t \\phi^{'} \\otimes (\\pmb{W}_{xh} \\vec{x}_j + \\pmb{W}_{hh} \\vec{h}_{j-1} + \\vec{b}_h) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_i)]\\vec{h}_{i-1}$\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
     "The calculation of $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{xh}}$ is exactly the same except for one change in the calculation of $\\frac{\\partial \\vec{h}_T}{\\partial \\pmb{W}_{xh}}$:\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "$\\frac{\\partial \\vec{h}_T}{\\partial \\pmb{W}_{xh}} = \\sum_{t=1}^T \\frac{\\partial \\vec{h}_T}{\\partial \\vec{h}_{t}}\\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{xh}}$\n",
    "\n",
    "=> $\\frac{\\partial \\vec{h}_T}{\\partial \\pmb{W}_{xh}} = \\sum_{t=1}^T \\frac{\\partial \\vec{h}_T}{\\partial \\vec{h}_{t}}\\frac{\\partial \\vec{h}_t}{\\partial \\vec{z}_t} \\frac{\\partial \\vec{z}_t}{\\partial \\pmb{W}_{xh}}$\n",
    "\n",
    "=> $\\frac{\\partial \\vec{h}_T}{\\partial \\pmb{W}_{xh}} = \\sum_{t=1}^T \\frac{\\partial \\vec{h}_T}{\\partial \\vec{h}_{t}} [\\phi^{'}\\otimes(\\vec{z}_t)]\\vec{x}_{t}$ [computing the derivative for the last two terms]\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "=> $\\frac{\\partial \\vec{h}_T}{\\partial \\pmb{W}_{xh}} = \\sum_{t=1}^T \\frac{\\partial \\vec{h}_T}{\\partial \\vec{h}_{T-1}} \\frac{\\partial \\vec{h}_{T-1}}{\\partial \\vec{h}_{T-2}} ...\\frac{\\partial \\vec{h}_{t+1}}{\\partial \\vec{h}_t} [\\phi^{'}\\otimes(\\vec{z}_t)] \\vec{x}_{t}$ [applying chain rule]\n",
    "\n",
    "=> $ \\frac{\\partial \\vec{h}_T}{\\partial \\pmb{W}_{xh}} = \\sum_{t=1}^T(\\prod_{j=t+1}^T \\frac{\\partial \\vec{h}_j}{\\partial \\vec{h}_{j-1}}) [\\phi^{'}\\otimes(\\vec{z}_t)]\\vec{x}_{t}$\n",
    "\n",
    "\n",
     "Thus, we can write a general expression for timesteps up to $t$:\n",
    "\n",
    "$ \\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{xh}} = \\sum_{i=1}^t(\\prod_{j=i+1}^t \\frac{\\partial \\vec{h}_j}{\\partial \\vec{h}_{j-1}}) [\\phi^{'}\\otimes(\\vec{z}_i)]\\vec{x}_{i}$\n",
    "\n",
    "\n",
    "\n",
    "With this slight change, we can derive the following expression:\n",
    "\n",
    "\n",
    "\n",
    "\n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{xh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y}_t} [\\phi^{'}\\otimes(\\vec{o}_t)] \\pmb{W}_{hy} \\sum_{i=1}^t(\\prod_{j=i+1}^t \\phi^{'} \\otimes (\\pmb{W}_{xh} \\vec{x}_j + \\pmb{W}_{hh} \\vec{h}_{j-1} + \\vec{b}_h) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_i)]\\vec{x}_{i}$\n",
    "\n",
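     "The derivations above can be sanity-checked numerically. The following is a minimal BPTT sketch for $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}}$, using `tanh` activations and a squared-error loss for simplicity (these choices, and the toy dimensions, are illustrative assumptions, not part of the derivation); the gradient accumulated by the backward recursion is compared against a finite-difference estimate:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "rng = np.random.default_rng(0)\n",
     "n_in, n_h, n_out, T = 2, 3, 2, 5\n",
     "Wxh = 0.5 * rng.standard_normal((n_h, n_in))\n",
     "Whh = 0.5 * rng.standard_normal((n_h, n_h))\n",
     "Why = 0.5 * rng.standard_normal((n_out, n_h))\n",
     "b_h, b_y = np.zeros(n_h), np.zeros(n_out)\n",
     "X = rng.standard_normal((T, n_in))\n",
     "Ystar = rng.standard_normal((T, n_out))\n",
     "\n",
     "def forward(W_hh):\n",
     "    h, hs, ys = np.zeros(n_h), [], []\n",
     "    for x in X:\n",
     "        h = np.tanh(Wxh @ x + W_hh @ h + b_h)\n",
     "        hs.append(h)\n",
     "        ys.append(np.tanh(Why @ h + b_y))\n",
     "    return np.array(hs), np.array(ys)\n",
     "\n",
     "def loss(W_hh):\n",
     "    _, ys = forward(W_hh)\n",
     "    return 0.5 * np.sum((ys - Ystar) ** 2)\n",
     "\n",
     "# backward recursion: accumulate dL/dWhh over the unrolled timesteps\n",
     "hs, ys = forward(Whh)\n",
     "dWhh = np.zeros_like(Whh)\n",
     "dh_next = np.zeros(n_h)\n",
     "for t in reversed(range(T)):\n",
     "    do = (ys[t] - Ystar[t]) * (1 - ys[t] ** 2)   # output-layer delta\n",
     "    dh = Why.T @ do + dh_next                    # total gradient reaching h_t\n",
     "    dz = dh * (1 - hs[t] ** 2)                   # through phi'(z_t)\n",
     "    h_prev = hs[t - 1] if t > 0 else np.zeros(n_h)\n",
     "    dWhh += np.outer(dz, h_prev)                 # immediate contribution\n",
     "    dh_next = Whh.T @ dz                         # pass back through W_hh\n",
     "\n",
     "# finite-difference check of one entry\n",
     "eps = 1e-6\n",
     "Wp, Wm = Whh.copy(), Whh.copy()\n",
     "Wp[0, 1] += eps\n",
     "Wm[0, 1] -= eps\n",
     "num = (loss(Wp) - loss(Wm)) / (2 * eps)\n",
     "```\n",
     "\n",
     "The backward loop is the iterative counterpart of the recursive expansion derived earlier: `dh_next` carries the chained Jacobian products $\\prod_j \\frac{\\partial \\vec{h}_j}{\\partial \\vec{h}_{j-1}}$ backward one step at a time.\n",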
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Interpretation of the loss gradient $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}}$\n",
     "\n",
     "\n",
     "Let's interpret the loss gradient $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}}$. The following is the formula for $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}}$.\n",
    "\n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y}_t} [\\phi^{'}\\otimes(\\vec{o}_t)] \\pmb{W}_{hy} \\sum_{i=1}^t(\\prod_{j=i+1}^t \\phi^{'} \\otimes (\\pmb{W}_{xh} \\vec{x}_j + \\pmb{W}_{hh} \\vec{h}_{j-1} + \\vec{b}_h) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_i)]\\vec{h}_{i-1}$\n",
     "\n",
     "\n",
     "=> \n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y}_t} [\\phi^{'}\\otimes(\\vec{o}_t)] \\pmb{W}_{hy} \\sum_{i=1}^t(\\prod_{j=i+1}^t \\phi^{'} \\otimes (.) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_i)]\\vec{h}_{i-1}$\n",
     "\n",
     "Observe that, for any timestep, the loss gradient is based on a **sum of repeated products** of $\\pmb{W}_{hh}$ and the derivative of the activation function, accumulated from the beginning of the sequence up to the current timestep. \n",
     "\n",
     "In other words, we need to compute a sum of repeated products for $\\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{hh}}$.\n",
     "\n",
     "\n",
     "$ \\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{hh}} = \\sum_{i=1}^t(\\prod_{j=i+1}^t \\phi^{'} \\otimes (.) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_i)]\\vec{h}_{i-1}$\n",
    "\n",
    "\n",
    "\n",
     "Let's evaluate the expression $\\frac{\\partial \\vec{h}_t}{\\partial \\pmb{W}_{hh}}$ for the first 4 timesteps, as illustrated in the following figure. The signal propagates forward through 4 timesteps; then the loss is computed and sent backward.\n",
    "\n",
    "\n",
     "<img src=\"https://cse.unl.edu/~hasan/Pics/RNN_Backprop_2_new.png\" width=800 height=600>\n",
    "\n",
    "\n",
    "$ \\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} = \\sum_{i=1}^4(\\prod_{j=i+1}^{4} \\phi^{'} \\otimes (.) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_i)]\\vec{h}_{i-1}$\n",
    "\n",
    "\n",
    "=> $ \\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} = $\n",
    "$\\prod_{j=2}^{4} (\\phi^{'} \\otimes (.) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_1)]\\vec{h}_{0}$\n",
    "$ + \\prod_{j=3}^{4} (\\phi^{'} \\otimes (.) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_2)]\\vec{h}_{1}$\n",
    "$ + \\prod_{j=4}^{4} (\\phi^{'} \\otimes (.) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_3)]\\vec{h}_{2}$\n",
    "$ + \\prod_{j=5}^{4} (\\phi^{'} \\otimes (.) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_4)]\\vec{h}_{3}$\n",
    "\n",
    "\n",
    "=> $ \\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} = $\n",
    "$(\\phi^{'} \\otimes (.) \\pmb{W}_{hh})^3 [\\phi^{'}\\otimes(\\vec{z}_1)]\\vec{h}_{0}$\n",
    "$ + (\\phi^{'} \\otimes (.) \\pmb{W}_{hh})^2 [\\phi^{'}\\otimes(\\vec{z}_2)]\\vec{h}_{1}$\n",
    "$ + (\\phi^{'} \\otimes (.) \\pmb{W}_{hh})^1 [\\phi^{'}\\otimes(\\vec{z}_3)]\\vec{h}_{2}$\n",
    "$ + (\\phi^{'} \\otimes (.) \\pmb{W}_{hh})^0 [\\phi^{'}\\otimes(\\vec{z}_4)]\\vec{h}_{3}$\n",
    "\n",
    "\n",
    "=> $ \\frac{\\partial \\vec{h}_4}{\\partial \\pmb{W}_{hh}} = $\n",
    "$(\\phi^{'} \\otimes (.) \\pmb{W}_{hh})^3 [\\phi^{'}\\otimes(\\vec{z}_1)]\\vec{h}_{0}$\n",
    "$ + (\\phi^{'} \\otimes (.) \\pmb{W}_{hh})^2 [\\phi^{'}\\otimes(\\vec{z}_2)]\\vec{h}_{1}$\n",
    "$ + (\\phi^{'} \\otimes (.) \\pmb{W}_{hh})^1 [\\phi^{'}\\otimes(\\vec{z}_3)]\\vec{h}_{2}$\n",
    "$ +   [\\phi^{'}\\otimes(\\vec{z}_4)]\\vec{h}_{3}$\n",
    "\n",
    "\n",
    "We see that at timestep $t=1$ (for initial hidden state $\\vec{h}_{0}$) the product of the activation derivative and $\\pmb{W}_{hh}$ is multiplied three times, then at timestep $t=2$ (for hidden state $\\vec{h}_{1}$) it is multiplied two times, and so on.\n",
    "\n",
    "\n",
     "This illustration indicates that $\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}}$ at any arbitrary timestep is computed as a sum of repeated products of the weight matrix $\\pmb{W}_{hh}$ and the derivative of the activation, starting from the beginning of the sequence up to the current timestep.\n",
    "\n",
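     "The effect of these repeated factors can be seen in a toy scalar RNN (an illustrative example, not part of the derivation above), where $\\frac{\\partial h_T}{\\partial h_0}$ is just a product of $T$ factors $w \\, \\phi^{'}(z_t)$:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def grad_product(w, T, h0=0.5):\n",
     "    # dh_T/dh_0 for a scalar RNN h_t = tanh(w * h_{t-1}):\n",
     "    # a product of T factors w * (1 - tanh(z_t)^2)\n",
     "    h, g = h0, 1.0\n",
     "    for _ in range(T):\n",
     "        z = w * h\n",
     "        h = np.tanh(z)\n",
     "        g *= w * (1.0 - h ** 2)   # w * phi'(z_t), since h = tanh(z)\n",
     "    return g\n",
     "\n",
     "g10 = grad_product(0.5, 10)\n",
     "g20 = grad_product(0.5, 20)   # each |factor| < 1, so the product keeps shrinking\n",
     "```\n",
     "\n",
     "As more timesteps are unrolled, more such factors are multiplied in.\n",
     "\n",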
     "As we go deeper backward in time, the number of multiplications increases; this growing product is what makes RNN gradients prone to vanishing or exploding over long sequences."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary: Loss Gradients\n",
    "\n",
     "The formulas for computing the loss gradients w.r.t. the three weight matrices are given below.\n",
    "\n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hy}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y}_t} [\\phi^{'}\\otimes(\\vec{o}_t)] \\vec{h}_t$\n",
     "\n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{hh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y}_t} [\\phi^{'}\\otimes(\\vec{o}_t)] \\pmb{W}_{hy} \\sum_{i=1}^t(\\prod_{j=i+1}^t \\phi^{'} \\otimes (\\pmb{W}_{xh} \\vec{x}_j + \\pmb{W}_{hh} \\vec{h}_{j-1} + \\vec{b}_h) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_i)]\\vec{h}_{i-1}$\n",
     "\n",
     "$\\frac{\\partial \\mathcal{L}}{\\partial \\pmb{W}_{xh}} = \\sum_{t = 1}^{T} \\frac{\\partial l(\\vec{y_t}, \\vec{y_t}^*)}{\\partial \\vec{y}_t} [\\phi^{'}\\otimes(\\vec{o}_t)] \\pmb{W}_{hy} \\sum_{i=1}^t(\\prod_{j=i+1}^t \\phi^{'} \\otimes (\\pmb{W}_{xh} \\vec{x}_j + \\pmb{W}_{hh} \\vec{h}_{j-1} + \\vec{b}_h) \\pmb{W}_{hh}) [\\phi^{'}\\otimes(\\vec{z}_i)]\\vec{x}_{i}$\n",
    "\n",
     "These loss gradients are accumulated by moving backward from timestep $T$ down to timestep $1$, and the accumulated gradients are then used to update the model parameters."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
