{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "accelerator": "TPU",
    "colab": {
      "name": "W2_Tutorial2.ipynb",
      "provenance": [],
      "collapsed_sections": []
    },
    "kernel": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.7.8"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ML2DwqwkVwfo"
      },
      "source": [
        "# CIS-522 Week 2 Part 2\n",
        "# Deep Linear Networks\n",
        "\n",
        "__Instructor:__ Konrad Kording\n",
        "\n",
        "__Content creators:__ Ameet Rahane, Spiros Chavlis"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "218inLnvAyK9"
      },
      "source": [
        "---\n",
        "# Today's agenda\n",
        "\n",
        "In the second tutorial of Week 2, we are going to dive deep into Linear Networks. One can see Linear Networks as the core models of Deep Learning: they are simple, easy to interpret mathematically, and, of course, fun. Today we will:\n",
        "\n",
        "1. Construct our first models in PyTorch using core modules\n",
        "2. Solve the XOR logical operation, which is linearly non-separable, with a linear network\n",
        "3. Investigate the initialization of our parameters\n",
        "4. Examine how the network actually learns\n",
        "5. Learn about different loss functions and how we can use them efficiently (optional: cosine similarity)\n",
        "6. Build an intuition about high-dimensional spaces, the essence of Deep Learning (optional)\n",
        "\n",
        "Are you ready?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "A3pDjVTwBKUx",
        "cellView": "form"
      },
      "source": [
        "#@markdown What is your Pennkey and pod? (text, not numbers, e.g. bfranklin)\n",
        "my_pennkey = 'value' #@param {type:\"string\"}\n",
        "my_pod = 'sublime-newt' #@param ['Select', 'euclidean-wombat', 'sublime-newt', 'buoyant-unicorn', 'lackadaisical-manatee','indelible-stingray','superfluous-lyrebird','discreet-reindeer','quizzical-goldfish','astute-jellyfish','ubiquitous-cheetah','nonchalant-crocodile','fashionable-lemur','spiffy-eagle','electric-emu','quotidian-lion']\n",
        "\n",
        "\n",
        "# start timing\n",
        "import time\n",
        "try: t0\n",
        "except NameError: t0 = time.time()\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "vDpw51SFNH8l"
      },
      "source": [
        "# @title Slides\n",
        "from IPython.display import HTML\n",
        "HTML('<iframe src=\"https://docs.google.com/presentation/d/1SUfqb8AAF4ES1di7YUZ9WX3NJJuyrENYVyl3wtKxEos/embed?start=false&loop=false&delayms=3000\" frameborder=\"0\" width=\"480\" height=\"299\" allowfullscreen=\"true\" mozallowfullscreen=\"true\" webkitallowfullscreen=\"true\"></iframe>')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "p9DIKSiaBQAL"
      },
      "source": [
        "---\n",
        "# Setup"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "idawsS-UBdUx"
      },
      "source": [
        "# imports\n",
        "import numpy as np\n",
        "import random, time\n",
        "import matplotlib.pylab as plt\n",
        "import matplotlib as mpl\n",
        "from matplotlib.collections import LineCollection\n",
        "from tqdm.notebook import tqdm, trange\n",
        "\n",
        "import torch\n",
        "from torch.autograd import Variable\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "import torch.optim as optim"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "6Hc_lwIbk3PR",
        "cellView": "form"
      },
      "source": [
        "# @title Figure Settings\n",
        "%config InlineBackend.figure_format = 'retina'\n",
        "%matplotlib inline \n",
        "\n",
        "fig_w, fig_h = (8, 6)\n",
        "plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})\n",
        "\n",
        "plt.rcParams[\"mpl_toolkits.legacy_colorbar\"] = False\n",
        "\n",
        "import warnings\n",
        "warnings.filterwarnings(\"ignore\", category=UserWarning, module=\"matplotlib\")\n",
        "\n",
        "\n",
        "plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/\"\n",
        "              \"course-content/master/nma.mplstyle\")\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "PABZV6eDX2Wk",
        "cellView": "form"
      },
      "source": [
        "#@title Helper functions\n",
        "\n",
        "def synthetic_dataset(w, b, num_examples=1000, sigma=0.01, seed=2021):\n",
        "  '''\n",
        "  Synthetic data generator in the form:\n",
        "      y = Xw + b + gaussian_noise(0, sigma).\n",
        "  \n",
        "  Parameters\n",
        "  ----------\n",
        "  w : torch.tensor\n",
        "      weights. The length of `w` denotes the number of independent variables\n",
        "  b : torch.tensor\n",
        "      bias (offset or intercept).\n",
        "  num_examples : INT, optional\n",
        "      Number of examples (samples) to generate. The default is 1000.\n",
        "  sigma : FLOAT, optional\n",
        "      Standard deviation of the Gaussian noise. The default is 0.01.\n",
        "  seed : INT, optional\n",
        "      Seed the RNG for reproducibility. The default is 2021.\n",
        "  \n",
        "  Returns\n",
        "  -------\n",
        "  X: torch.tensor\n",
        "      the independent variable(s).\n",
        "  y: torch.tensor\n",
        "      the dependent variable\n",
        "  \n",
        "  '''\n",
        "\n",
        "  torch.manual_seed(seed)\n",
        "\n",
        "  X = torch.normal(0, 1, (w.shape[0], num_examples))\n",
        "  y = torch.matmul(w.T, X) + b\n",
        "  # Add gaussian noise\n",
        "  y += torch.normal(0, sigma, y.shape)\n",
        "  if y.shape[0]==1:\n",
        "      y = y.reshape((-1, 1))\n",
        "\n",
        "  return X, y\n",
        "\n",
        "\n",
        "def XOR_plots(activ_l1):\n",
        "  from mpl_toolkits.axes_grid1 import AxesGrid\n",
        "  fig = plt.figure()\n",
        "  fig.subplots_adjust(left=0.05, right=0.95)\n",
        "  grid = AxesGrid(fig, 111,  # similar to subplot(142)\n",
        "                  nrows_ncols=(2, 5),\n",
        "                  axes_pad=0.1,\n",
        "                  share_all=False,\n",
        "                  label_mode=\"1\",\n",
        "                  cbar_location=\"right\",\n",
        "                  cbar_mode=\"single\",\n",
        "                  )\n",
        "\n",
        "  for i in range(2*5):\n",
        "    im = grid[i].imshow(activ_l1[:, i].reshape(30,30), cmap='RdBu')\n",
        "    grid[i].set_xlabel('X')\n",
        "    grid[i].set_ylabel('Y')\n",
        "\n",
        "  grid.cbar_axes[0].colorbar(im)\n",
        "  cax = grid.cbar_axes[0]\n",
        "  axis = cax.axis[cax.orientation]\n",
        "  axis.label.set_text(\"$L_1$ output\")\n",
        "\n",
        "  for cax in grid.cbar_axes:\n",
        "      cax.toggle_label(False)\n",
        "    \n",
        "  grid.axes_llc.set_xticks([0, 14, 29])\n",
        "  grid.axes_llc.set_yticks([0, 14, 29])\n",
        "  grid.axes_llc.set_xticklabels(['-1.1', '0', '1.1'])\n",
        "  grid.axes_llc.set_yticklabels(['-1.1', '0', '1.1'])\n",
        "\n",
        "\n",
        "  fig, axes = plt.subplots(nrows=2, ncols=5)\n",
        "  cnt = 0\n",
        "  for ax in axes.flat:\n",
        "    ax.plot(np.sum(inputs.cpu().detach().numpy(), axis=1),\n",
        "            activ_l1[:,cnt], '.')\n",
        "    if cnt >= 5:\n",
        "      ax.set_xlabel('X+Y')\n",
        "    if cnt == 0 or cnt == 5:\n",
        "      ax.set_ylabel('$L_1$ output')\n",
        "    cnt += 1\n",
        "  plt.tight_layout()\n",
        "\n",
        "\n",
        "\n",
        "def XORpredictions(inputs, targets, preds):\n",
        "  print('\\nTest the model on XOR logical operation...')\n",
        "  for input, target, pred in zip(inputs, targets, preds):\n",
        "    print(\"Input:[{},{}] Target:[{}] Predicted:[{}] Error:[{}]\".format(\n",
        "      int(input[0]),\n",
        "      int(input[1]),\n",
        "      int(target[0]),\n",
        "      round(float(pred[0]), 4),\n",
        "      round(float(abs(target[0] - pred[0])), 4)\n",
        "    ))\n",
        "\n",
        "\n",
        "def plotRegression(X, y, preds, losses_xavier, losses_simple=None):\n",
        "  plt.figure()\n",
        "  plt.subplot(1, 2, 1)\n",
        "  plt.plot(losses_xavier, label='Xavier init.')\n",
        "  if losses_simple:\n",
        "    plt.plot(losses_simple, label='Simple init.')\n",
        "  plt.xlabel('epoch')\n",
        "  plt.ylabel('loss')\n",
        "  plt.title('Training loss')\n",
        "  plt.legend()\n",
        "\n",
        "  plt.subplot(1, 2, 2)\n",
        "  plt.scatter(X.T, y.reshape(-1,1), label='original data')\n",
        "  plt.plot(X.T, preds.detach().numpy(), label='regression',\n",
        "          color='red', linewidth=3.0)\n",
        "  plt.xlabel('independent variable')\n",
        "  plt.ylabel('dependent variable')\n",
        "  plt.title(f'Toy dataset, {X.shape[1]} samples')\n",
        "  plt.legend()\n",
        "  plt.show()\n",
        "\n",
        "\n",
        "def plot_weight(losses, weights):\n",
        "  plt.figure()\n",
        "  plt.subplot(1, 2, 1)\n",
        "  plt.plot(losses)\n",
        "  plt.xlabel('epoch')\n",
        "  plt.ylabel('loss')\n",
        "\n",
        "  plt.subplot(1, 2, 2)\n",
        "  plt.plot(weights[0], label='Layer 1')\n",
        "  plt.plot(weights[1], label='Layer 2')\n",
        "  plt.plot(weights[2], label='Layer 3')\n",
        "  plt.xlabel('epoch')\n",
        "  plt.ylabel('weight')\n",
        "  plt.legend()\n",
        "  plt.show()\n",
        "\n",
        "\n",
        "def plot_learning_modes(losses, epochs, modes, rank):\n",
        "  plt.figure()\n",
        "  plt.subplot(1, 2, 1)\n",
        "  plt.plot(losses)\n",
        "  plt.xlabel('epoch')\n",
        "  plt.ylabel('loss')\n",
        "  plt.title('Training loss')\n",
        "\n",
        "  plt.subplot(1, 2, 2)\n",
        "  plt.plot(range(epochs), modes.T)\n",
        "  plt.legend(range(1,rank+1))\n",
        "  plt.xlabel('epoch')\n",
        "  plt.ylabel('singular value [a.u.]')\n",
        "  plt.show()\n",
        "\n",
        "\n",
        "def getData():\n",
        "  # For regression with neural data\n",
        "  !pip install spykes --quiet\n",
        "  !pip install deepdish --quiet\n",
        "  from spykes.plot.neurovis import NeuroVis\n",
        "  from spykes.io.datasets import load_reaching_data\n",
        "  from spykes.utils import train_test_split\n",
        "  import pandas as pd\n",
        "  # Download the dataset\n",
        "  reach_data = load_reaching_data()\n",
        "\n",
        "  print('dataset keys:', reach_data.keys())\n",
        "  print('events:', reach_data['events'].keys())\n",
        "  print('features', reach_data['features'].keys())\n",
        "  print('number of PMd neurons:', len(reach_data['neurons_PMd']))\n",
        "\n",
        "\n",
        "  # Get reach direction, ensure it is between [-pi, pi]\n",
        "  y = np.arctan2(np.sin(reach_data['features']['endpointOfReach'] *\n",
        "                np.pi / 180.0),\n",
        "                np.cos(reach_data['features']['endpointOfReach'] *\n",
        "                np.pi / 180.0))\n",
        "\n",
        "  # Let's put the data into a DataFrame\n",
        "  #\n",
        "  # Events\n",
        "  data_df = pd.DataFrame()\n",
        "  events = ['targetOnTime', 'goCueTime', 'rewardTime']\n",
        "\n",
        "  for i in events:\n",
        "    data_df[i] = np.squeeze(reach_data['events'][i])\n",
        "\n",
        "\n",
        "  data_df[events].head()\n",
        "\n",
        "  ########################################################\n",
        "  # Extract M1 spike counts Y\n",
        "  # ~~~~~~~~~~~~~\n",
        "  # - Select only neurons above a threshold firing rate\n",
        "  # - Align spike counts to the GO cue\n",
        "  # - Use the convenience function ```get_spikecounts()``` from ```NeuroVis```\n",
        "\n",
        "  # Select only high firing rate neurons\n",
        "  M1_select = list()\n",
        "  threshold = 10.0\n",
        "\n",
        "  # Specify timestamps of events to which trials are aligned\n",
        "  align = 'goCueTime'\n",
        "\n",
        "  # Specify a window of around the go cue for spike counts\n",
        "  window = [0., 500.]  # milliseconds\n",
        "\n",
        "  # Get spike counts\n",
        "  X = np.zeros([y.shape[0], len(reach_data['neurons_M1'])])\n",
        "\n",
        "  for n in range(len(reach_data['neurons_M1'])):\n",
        "    this_neuron = NeuroVis(spiketimes=reach_data['neurons_M1'][n])\n",
        "    X[:, n] = np.squeeze(\n",
        "        this_neuron.get_spikecounts(event=align,\n",
        "                                    df=data_df,\n",
        "                                    window=window))\n",
        "\n",
        "    # Short list a few high-firing neurons\n",
        "    if this_neuron.firingrate > threshold:\n",
        "      M1_select.append(n)\n",
        "\n",
        "  # Rescale spike counts to units of spikes/s\n",
        "  X = X / float(window[1] - window[0]) * 1e3\n",
        "\n",
        "  ########################################################\n",
        "  # Split into train and test sets\n",
        "  # ~~~~~~~~~~~~~\n",
        "\n",
        "  (x_train, x_test), (y_train, y_test) = train_test_split(X, y, percent=0.10)\n",
        "\n",
        "  return (x_train, y_train, x_test, y_test)\n",
        "\n",
        "\n",
        "def reaching_test(x_test, y_test, yhat_test):\n",
        "  # Visualize decoded reach direction\n",
        "  L = x_test.shape[0]\n",
        "  x1 = [\"Original values\"] * L\n",
        "  x2 = [\"Predicted values\"] * L\n",
        "\n",
        "  # Define all pairs to draw lines\n",
        "  lines = [[x, list(zip([1]*L, yhat_test.cpu().detach().numpy()))[i]]\n",
        "          for i, x in enumerate(zip([0]*L, y_test))]\n",
        "  lc = LineCollection(lines)\n",
        "\n",
        "  fig, ax = plt.subplots(nrows=1, ncols=2)\n",
        "  ax[0].scatter(x1, y_test, color='k', alpha=0.5)\n",
        "  ax[0].scatter(x2, yhat_test.cpu().detach().numpy(), color='g', alpha=0.5)\n",
        "  ax[0].add_collection(lc)\n",
        "  ax[0].set_ylabel('reaching angle (radians)')\n",
        "  ax[0].set_ylim([-1.2 * np.pi, 1.2 * np.pi])\n",
        "\n",
        "  ax[1].plot(y_test, yhat_test.cpu().detach().numpy(), 'k.', alpha=0.5)\n",
        "  ax[1].plot(y_test, y_test, 'r--', linewidth=1.8, label='$y=x$')\n",
        "  ax[1].set_xlim([-1.25 * np.pi, 1.25 * np.pi])\n",
        "  ax[1].set_ylim([-1.2 * np.pi, 1.2 * np.pi])\n",
        "  ax[1].set_xlabel('True values (radians)')\n",
        "  ax[1].set_ylabel('Predicted values (radians)')\n",
        "  plt.legend()\n",
        "  plt.show()\n",
        "\n",
        "\n",
        "def loss_comparison(lossesMAE, lossesMSE,\n",
        "                    losses_testMAE, losses_testMSE, \n",
        "                    MSE_test, MAE_test):\n",
        "  plt.figure()\n",
        "  plt.subplot(1, 2, 1)\n",
        "  plt.plot(lossesMSE, label='MSE-training')\n",
        "  plt.plot(lossesMAE, label='MAE-training')\n",
        "  plt.xlabel('epoch')\n",
        "  plt.ylabel('loss [a.u.]')\n",
        "  plt.legend()\n",
        "  plt.subplot(1, 2, 2)\n",
        "  plt.plot(losses_testMSE, label='MSE-test')\n",
        "  plt.plot(losses_testMAE, label='MAE-test')\n",
        "  plt.xlabel('epoch')\n",
        "  plt.legend()\n",
        "  plt.show()\n",
        "\n",
        "  print(\"\\nErrors in the test set using both models\")\n",
        "  print(f\"Test set: using L1 loss function (MAE): {MAE_test}\")\n",
        "  print(f\"Test set: using L2 loss function (MSE): {MSE_test}\")\n",
        "\n",
        "\n",
        "from sklearn.preprocessing import OneHotEncoder\n",
        "\n",
        "\n",
        "def idx_word(docs):\n",
        "  '''\n",
        "  Function to give an index to every word found in doc\n",
        "\n",
        "  Parameters\n",
        "  ----------\n",
        "  docs : list of STR\n",
        "      Contains the text\n",
        "\n",
        "  Returns\n",
        "  -------\n",
        "  idx_2_word : dictionary\n",
        "    assign an index to every word.\n",
        "  word_2_idx : dictionary\n",
        "    assign a word to every index.\n",
        "  '''\n",
        "  idx_2_word = {}\n",
        "  word_2_idx = {}\n",
        "  temp = []\n",
        "  i = 1\n",
        "  for doc in docs:\n",
        "    for word in doc.split():\n",
        "      if word not in temp:\n",
        "        temp.append(word)\n",
        "        idx_2_word[i] = word\n",
        "        word_2_idx[word] = i\n",
        "        i += 1\n",
        "        \n",
        "  return (idx_2_word, word_2_idx)\n",
        "\n",
        "\n",
        "def one_hot_map(doc, word_2_idx):\n",
        "  '''\n",
        "  Translate each document `doc` into a vector with integers\n",
        "  Parameters\n",
        "  ----------\n",
        "  doc : STR\n",
        "    The text to be translated.\n",
        "  word_2_idx : dictionary\n",
        "    Maps each word to its index (as returned by `idx_word`).\n",
        "\n",
        "  Returns\n",
        "  -------\n",
        "  x : LIST\n",
        "    Sentence in INTEGER format.\n",
        "\n",
        "  '''\n",
        "  x = []\n",
        "  for word in doc.split():\n",
        "    x.append(word_2_idx[word])\n",
        "  return x\n",
        "\n",
        "\n",
        "def combinations(lst):\n",
        "  index = 1\n",
        "  pairs = []\n",
        "  for element1 in lst:\n",
        "    for element2 in lst[index:]:\n",
        "      pairs.append([element1, element2])\n",
        "    index += 1\n",
        "\n",
        "  return pairs\n",
        "\n",
        "\n",
        "def padding_seqs(original_seqs,\n",
        "                 value=0,\n",
        "                 max_len=None,\n",
        "                 padding='post',\n",
        "                 truncate='post'):\n",
        "  '''\n",
        "  A function that pads each sequence in `original_seqs`\n",
        "  with `value` to a common length.\n",
        "\n",
        "  Parameters\n",
        "  ----------\n",
        "    original_seqs: List of sequences (each sequence is a list of integers).\n",
        "    value: Float or String, padding value. (Optional, defaults to 0.)\n",
        "    max_len: Optional Int, maximum length of all sequences. If not\n",
        "        provided, sequences will be padded to the length of the longest\n",
        "        individual sequence.\n",
        "    padding: String, 'pre' or 'post' (optional, defaults to 'post'):\n",
        "        pad either before or after each sequence.\n",
        "    truncate: String, 'pre' or 'post' (optional, defaults to 'post'):\n",
        "        remove values from sequences longer than `max_len`, either at\n",
        "        the beginning or at the end of the sequences.\n",
        "  Returns:\n",
        "    Numpy array with shape `(len(original_seqs), max_len)`\n",
        "  '''\n",
        "  if not max_len:\n",
        "    max_len = max([len(i) for i in original_seqs])\n",
        "\n",
        "  padded_seqs = []\n",
        "\n",
        "  if padding == 'post':\n",
        "    for seq in original_seqs:\n",
        "      if (max_len - len(seq)) >= 0:\n",
        "        seq_pad = seq + [value] * (max_len - len(seq))\n",
        "      else:\n",
        "        if truncate == 'post':\n",
        "          seq_pad = seq[:max_len]\n",
        "        elif truncate == 'pre':\n",
        "          seq_pad = seq[(len(seq) - max_len):]\n",
        "\n",
        "      padded_seqs.append(seq_pad)\n",
        "\n",
        "  elif padding == 'pre':\n",
        "    for seq in original_seqs:\n",
        "      if (max_len-len(seq)) >= 0:\n",
        "        seq_pad = [value] * (max_len - len(seq)) + seq\n",
        "      else:\n",
        "        if truncate == 'post':\n",
        "          seq_pad = seq[:max_len]\n",
        "        elif truncate == 'pre':\n",
        "          seq_pad = seq[(len(seq) - max_len):]\n",
        "\n",
        "      padded_seqs.append(seq_pad)\n",
        "\n",
        "  return (np.array(padded_seqs))\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "VDvb3pL9UGG3",
        "cellView": "form"
      },
      "source": [
        "# @title Set seed for reproducibility\n",
        "seed = 2021\n",
        "random.seed(seed)\n",
        "torch.manual_seed(seed)\n",
        "torch.cuda.manual_seed_all(seed)\n",
        "torch.cuda.manual_seed(seed)\n",
        "np.random.seed(seed)\n",
        "torch.backends.cudnn.deterministic = True\n",
        "torch.backends.cudnn.benchmark = False\n",
        "def seed_worker(worker_id):\n",
        "  worker_seed = torch.initial_seed() % 2**32\n",
        "  np.random.seed(worker_seed)\n",
        "  random.seed(worker_seed)\n",
        "\n",
        "print('Seed has been set.')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "agqx5ZXD2mOy"
      },
      "source": [
        "---\n",
        "# Section 1: Deep linear networks\n",
        "## How they can be seen as an approximation to the Deep Learning models covered later in the course.\n",
        "\n",
        "*Estimated time: 15 minutes since start*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "S01Tuh3GBuwW",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Deep Learning, the Abstraction and the Implementation\n",
        "\n",
        "try: t1\n",
        "except NameError: t1=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"Yy68K5STSMA\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gMAegubFlOEk"
      },
      "source": [
        "## Section 1.1: `nn.Sequential` class\n",
        "\n",
        "In the previous [Tutorial](https://colab.research.google.com/github/CIS-522/course-content/blob/main/tutorials/W2_PyTorchDLN/student/W2_Tutorial1.ipynb), we verified that PyTorch's predefined models work as expected (obviously!). Here, we will learn two approaches to constructing our own models."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7S7FJTUglYUF"
      },
      "source": [
        "First, we will use the `nn.Sequential` class to construct our model as a function. You may use `? nn.Sequential` in a *scratch cell* to see its Docstring. \n",
        "\n",
        "`nn.Sequential` is a container of Modules added to it in the order they are passed in the constructor. Alternatively, we can pass an ordered dictionary of modules in the class.\n",
        "\n",
        "```python\n",
        "# Example of using Sequential\n",
        "model = nn.Sequential(\n",
        "          nn.Linear(15,10),\n",
        "          nn.Linear(10,8)\n",
        "        )\n",
        "```\n",
        "\n",
        "In this code snippet, we have created a model with one hidden layer."
      ]
    },
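    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The ordered-dictionary form mentioned above can be sketched as follows; this is a minimal illustration, and the layer names `fc1` and `fc2` are arbitrary choices of ours:\n",
        "\n",
        "```python\n",
        "from collections import OrderedDict\n",
        "import torch.nn as nn\n",
        "\n",
        "# Same two-layer model as above, but with named submodules\n",
        "model = nn.Sequential(OrderedDict([\n",
        "    ('fc1', nn.Linear(15, 10)),\n",
        "    ('fc2', nn.Linear(10, 8)),\n",
        "]))\n",
        "print(model.fc1)  # named layers are accessible as attributes\n",
        "```\n",
        "\n",
        "Naming the submodules makes them addressable as attributes (e.g., `model.fc1`), which is convenient for inspection and debugging."
      ]
    },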
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gr2K6cduxpUg"
      },
      "source": [
        "### Exercise 1: Construct a Linear Neural Network\n",
        "\n",
        "Now it is your turn to implement a model consisting of two hidden layers."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Npb2m31u2um9"
      },
      "source": [
        "# Build a linear network with 2 hidden layers using nn.Sequential model\n",
        "input_dim = 1\n",
        "output_dim = 1\n",
        "hidden_1 = 10\n",
        "hidden_2 = 10\n",
        "\n",
        "def model(input_dim, hidden_1, hidden_2, output_dim):\n",
        "  ####################################################################\n",
        "  # Fill in missing code below (...),\n",
        "  # then remove or comment the line below to test your function\n",
        "  raise NotImplementedError(\"Add the missing layers\")\n",
        "  ####################################################################  \n",
        "  net = nn.Sequential(nn.Linear(input_dim, hidden_1),\n",
        "                      ...,\n",
        "                      ...)\n",
        "  return (net)\n",
        "\n",
        "## uncomment the line below to test your function\n",
        "# my_net = model(input_dim, hidden_1, hidden_2, output_dim)\n",
        "# print(my_net)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "MsbH0CqRHhWv"
      },
      "source": [
        "# to_remove solution\n",
        "# Build a linear network with 2 hidden layers using nn.Sequential model\n",
        "input_dim = 1\n",
        "output_dim = 1\n",
        "hidden_1 = 10\n",
        "hidden_2 = 10\n",
        "\n",
        "def model(input_dim, hidden_1, hidden_2, output_dim):\n",
        "  net = nn.Sequential(nn.Linear(input_dim, hidden_1),\n",
        "                      nn.Linear(hidden_1, hidden_2),\n",
        "                      nn.Linear(hidden_2, output_dim))\n",
        "  return (net)\n",
        "\n",
        "\n",
        "my_net = model(input_dim, hidden_1, hidden_2, output_dim)\n",
        "print(my_net)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WNhbjSpTliwZ"
      },
      "source": [
        "## Section 1.2: `nn.Module` class\n",
        "\n",
        "Another approach is to write a custom class that subclasses `nn.Module`. This lets us implement custom network behavior and gives us more flexibility and finer control.\n",
        "\n",
        "`nn.Module` is a base class for all neural network modules. Your models should also subclass this class. Modules can also contain other Modules, allowing them to nest in a tree structure. You can assign the submodules as regular attributes.\n",
        "\n",
        "```python\n",
        "class Model(nn.Module):\n",
        "  def __init__(self):\n",
        "    super(Model, self).__init__()\n",
        "    self.linear1 = nn.Linear(1, 10)  # input to hidden layer\n",
        "    self.linear2 = nn.Linear(10, 1)  # hidden to output layer\n",
        "\n",
        "  def forward(self, x):\n",
        "    h1 = self.linear1(x)\n",
        "    return self.linear2(h1)\n",
        "```\n",
        "\n",
        "In this code snippet, we have created a Model with one hidden layer."
      ]
    },
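    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a hedged sketch of how such a model is used (the batch size of 5 is an arbitrary choice for illustration):\n",
        "\n",
        "```python\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "class Model(nn.Module):  # same class as in the snippet above\n",
        "  def __init__(self):\n",
        "    super(Model, self).__init__()\n",
        "    self.linear1 = nn.Linear(1, 10)  # input to hidden layer\n",
        "    self.linear2 = nn.Linear(10, 1)  # hidden to output layer\n",
        "\n",
        "  def forward(self, x):\n",
        "    h1 = self.linear1(x)\n",
        "    return self.linear2(h1)\n",
        "\n",
        "net = Model()\n",
        "x = torch.randn(5, 1)  # batch of 5 samples, 1 feature each\n",
        "out = net(x)           # calling the module runs forward() plus any hooks\n",
        "print(out.shape)       # torch.Size([5, 1])\n",
        "```\n",
        "\n",
        "Note that we call the module itself, `net(x)`, rather than `net.forward(x)`."
      ]
    },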
    {
      "cell_type": "code",
      "metadata": {
        "id": "aJTfdx14Z8_A",
        "cellView": "form"
      },
      "source": [
        "#@markdown What is the dimension of the hidden layer weight matrix?\n",
        "sequential = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MKKmCMBGeePL"
      },
      "source": [
        "If we want to add more layers, or to explore how the model's performance depends on the number of hidden layers, we can use `nn.ModuleList` to hold the hidden layers. An `nn.ModuleList` can be indexed like a regular Python list. Below we give a simple example:\n",
        "\n",
        "```python\n",
        "class Network(nn.Module):\n",
        "  def __init__(self):\n",
        "    super(Network, self).__init__()\n",
        "\n",
        "    self.hidden_layers = nn.ModuleList()  # initialize an empty list\n",
        "    input_dim = 32\n",
        "    self.hidden_units = [16, 8, 4]\n",
        "    # A fully-connected network (FCN) with len(self.hidden_units) hidden layers\n",
        "    for i in range(len(self.hidden_units)):\n",
        "      self.hidden_layers += [nn.Linear(input_dim, self.hidden_units[i])]\n",
        "      input_dim = self.hidden_units[i]  # output of layer L-1 is the input in Layer L\n",
        "    # create the output layer\n",
        "    self.out = nn.Linear(input_dim, 1)\n",
        "\n",
        "  # forward pass\n",
        "  def forward(self, x):\n",
        "    for layer in self.hidden_layers:\n",
        "      x = layer(x)\n",
        "    return self.out(x)\n",
        "```\n",
        "\n",
        "For the next exercise, let's use the simple `nn.Module` class without incorporating the `nn.ModuleList`, but feel free to explore this possibility."
      ]
    },
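    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick sanity check of the `nn.ModuleList` pattern above can be sketched as follows (the class is repeated so the snippet is self-contained; the batch size of 7 is arbitrary):\n",
        "\n",
        "```python\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "class Network(nn.Module):\n",
        "  def __init__(self):\n",
        "    super(Network, self).__init__()\n",
        "    self.hidden_layers = nn.ModuleList()  # initialize an empty list\n",
        "    input_dim = 32\n",
        "    self.hidden_units = [16, 8, 4]\n",
        "    for i in range(len(self.hidden_units)):\n",
        "      self.hidden_layers += [nn.Linear(input_dim, self.hidden_units[i])]\n",
        "      input_dim = self.hidden_units[i]\n",
        "    self.out = nn.Linear(input_dim, 1)\n",
        "\n",
        "  def forward(self, x):\n",
        "    for layer in self.hidden_layers:\n",
        "      x = layer(x)\n",
        "    return self.out(x)\n",
        "\n",
        "net = Network()\n",
        "print(len(net.hidden_layers))  # 3 hidden layers\n",
        "y = net(torch.randn(7, 32))\n",
        "print(y.shape)  # torch.Size([7, 1])\n",
        "```\n",
        "\n",
        "Because `hidden_layers` is an `nn.ModuleList` (not a plain Python list), its parameters are registered with the module and show up in `net.parameters()`."
      ]
    },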
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jc-nxTQX_z3G"
      },
      "source": [
        "### Exercise 2: Construct the same network using the `nn.Module` class\n",
        "\n",
        "Now it is your turn to write some code. Here, you will build a model with two hidden layers."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "i6Q9Uu0RlwWz"
      },
      "source": [
        "# Build a linear network with 2 hidden layers using nn.Module class\n",
        "\n",
        "class Net(nn.Module):\n",
        "\n",
        "  def __init__(self, input_dim, hidden_1, hidden_2, output_dim):\n",
        "    super(Net, self).__init__()\n",
        "\n",
        "    self.input_dim = input_dim\n",
        "    self.hidden_1 = hidden_1\n",
        "    self.hidden_2 = hidden_2\n",
        "    self.output_dim = output_dim\n",
        "\n",
        "    # Create a fully-connected network (FCN) with 2 hidden layers\n",
        "    self.fc1 = nn.Linear(self.input_dim, self.hidden_1)\n",
        "    ####################################################################\n",
        "    # Fill in missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Define the two hidden layers\")\n",
        "    ####################################################################\n",
        "    self.fc2 = ...\n",
        "    self.fc3 = ...\n",
        "\n",
        "  def forward(self, x):\n",
        "    h1 = self.fc1(x)\n",
        "    ####################################################################\n",
        "    # Fill in missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Calculate the forward pass\")\n",
        "    ####################################################################\n",
        "    h2 = ...\n",
        "    out = ...\n",
        "\n",
        "    return out\n",
        "\n",
        "## uncomment the line below to test your function\n",
        "# my_net2 = Net(input_dim, hidden_1, hidden_2, output_dim)\n",
        "# print(my_net2)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7M8CcDPcJB_y"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "# Build a linear network with 2 hidden layers using nn.Module class\n",
        "\n",
        "class Net(nn.Module):\n",
        "\n",
        "  def __init__(self, input_dim, hidden_1, hidden_2, output_dim):\n",
        "    super(Net, self).__init__()\n",
        "\n",
        "    self.input_dim = input_dim\n",
        "    self.hidden_1 = hidden_1\n",
        "    self.hidden_2 = hidden_2\n",
        "    self.output_dim = output_dim\n",
        "\n",
        "    # A fully-connected network (FCN) with 2 hidden layers\n",
        "    self.fc1 = nn.Linear(self.input_dim, self.hidden_1)\n",
        "    self.fc2 = nn.Linear(self.hidden_1, self.hidden_2)\n",
        "    self.fc3 = nn.Linear(self.hidden_2, self.output_dim)\n",
        "\n",
        "  def forward(self, x):\n",
        "    h1 = self.fc1(x)\n",
        "    h2 = self.fc2(h1)\n",
        "    out = self.fc3(h2)\n",
        "\n",
        "    return out\n",
        "\n",
        "\n",
        "my_net2 = Net(input_dim, hidden_1, hidden_2, output_dim)\n",
        "print(my_net2)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ViLj3vRZl-OE"
      },
      "source": [
        "Please feel free to implement the network using any other approach that is more comfortable for you!"
      ]
    },
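    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As mentioned earlier, `nn.ModuleList` is another possibility. Below is a minimal sketch (not part of the exercise) of the same two-hidden-layer architecture built from a list of layers; the dimensions passed to `NetList` here are arbitrary example values.\n",
        "\n",
        "```python\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "class NetList(nn.Module):\n",
        "\n",
        "  def __init__(self, dims):\n",
        "    super(NetList, self).__init__()\n",
        "    # dims = [input_dim, hidden_1, hidden_2, output_dim]\n",
        "    self.layers = nn.ModuleList(\n",
        "        [nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)])\n",
        "\n",
        "  def forward(self, x):\n",
        "    # chain the layers in order\n",
        "    for layer in self.layers:\n",
        "      x = layer(x)\n",
        "    return x\n",
        "\n",
        "net_list = NetList([1, 5, 3, 1])\n",
        "print(net_list)\n",
        "```"
      ]
    },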
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LK2bh5SA3GTq"
      },
      "source": [
        "---\n",
        "# Section 2: Back to actual linear neural networks\n",
        "## Let's make it deep (should have no effect, right?)\n",
        "\n",
        "*Estimated time: 35 minutes since start*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "k3ucVQxkpxwi",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Deep Linear ANNs\n",
        "\n",
        "try: t3;\n",
        "except NameError: t3=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"5w9byiqPeO0\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IFs8464CsC0Q"
      },
      "source": [
        "First, we construct our toy dataset, which consists of independent variables in $1D$ space to make the illustration easier. Let's build the dataset and then plot it."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "anFVfunGqKOf"
      },
      "source": [
        "# Dataset\n",
        "\n",
        "original_w = torch.tensor([2.5]).reshape(-1,1)\n",
        "original_b = 1.2\n",
        "N = 1000  # number of examples\n",
        "X, y = synthetic_dataset(original_w, original_b, num_examples=1000,\n",
        "                         sigma=1.0)\n",
        "\n",
        "plt.figure()\n",
        "plt.scatter(X.T, y)\n",
        "plt.xlabel('independent variable')\n",
        "plt.ylabel('dependent variable')\n",
        "plt.title(f'Toy dataset, {N} samples')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yYOoHTlysVYc"
      },
      "source": [
        "First, we will construct a Linear Regression model (one input and one output), similar to [Tutorial 1](https://colab.research.google.com/github/CIS-522/course-content/blob/main/tutorials/W2_PyTorchDLN/student/W2_Tutorial1.ipynb). We initialize the parameters very close to zero. Run the cell and see what happens."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "zDTcplHG3Pft"
      },
      "source": [
        "def deepNetwork(deep=False):\n",
        "  # Network parameters\n",
        "  input_dim = 1\n",
        "  output_dim = 1\n",
        "\n",
        "  if deep:\n",
        "    h1, h2, h3, h4, h5, h6 = 20, 15, 10, 5, 4, 2\n",
        "    # define our network\n",
        "    net = nn.Sequential(nn.Linear(input_dim, h1),\n",
        "                        nn.Linear(h1, h2),\n",
        "                        nn.Linear(h2, h3),\n",
        "                        nn.Linear(h3, h4),\n",
        "                        nn.Linear(h4, h5),\n",
        "                        nn.Linear(h5, h6),\n",
        "                        nn.Linear(h6, output_dim))\n",
        "  else:\n",
        "    # define our network\n",
        "    net = nn.Sequential(nn.Linear(input_dim, 1))\n",
        "\n",
        "  # parameters initialization\n",
        "  sigma = 1e-11\n",
        "  for i in range(len(net)):\n",
        "    net[i].weight.data.normal_(0, sigma)\n",
        "    net[i].bias.data.fill_(0)\n",
        "\n",
        "  return (net)\n",
        "\n",
        "\n",
        "def training_loop(X, y, model, learning_rate=0.01, num_epochs=250):\n",
        "  # Training\n",
        "  criterion = nn.MSELoss()\n",
        "  optimizer = torch.optim.SGD(net.parameters(),\n",
        "                              lr=learning_rate)\n",
        "\n",
        "  losses = []\n",
        "\n",
        "\n",
        "  epoch_range = trange(num_epochs, desc='loss: ', leave=True)\n",
        "\n",
        "  for epoch in epoch_range:\n",
        "    if losses:\n",
        "      epoch_range.set_description(\"loss: {:.6f}\".format(losses[-1]))\n",
        "      epoch_range.refresh() # to show immediately the update\n",
        "    time.sleep(0.01)\n",
        "\n",
        "    loss = criterion(net(X.T) , y)\n",
        "    loss.backward()\n",
        "    optimizer.step()\n",
        "    optimizer.zero_grad()\n",
        "\n",
        "    losses.append(loss)\n",
        "\n",
        "  # Calculate loss\n",
        "  preds = net(X.T)\n",
        "  loss = criterion(preds, y)\n",
        "  print(f'The loss after training is: {loss}')\n",
        "\n",
        "  return (preds, losses)\n",
        "\n",
        "# Create a shallow network\n",
        "net = deepNetwork(deep=False)\n",
        "output = training_loop(X, y, model=net)\n",
        "plotRegression(X, y, output[0], output[1])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0DUlZ05fsqz7"
      },
      "source": [
        "Now, let's make the network deep. As we have a linear network, the depth shouldn't matter, should it? \n",
        "\n",
        "In a pure linear network with $L$ layers, the output is calculated by:\n",
        "\n",
        "\\begin{equation}\n",
        "Y = \\text{W}_{[L]}\\text{W}_{[L-1]} \\dots \\text{W}_{[1]} \\textbf{X} = \\text{Q} \\textbf{X}\n",
        "\\end{equation}\n",
        "\n",
        "So, a deep linear network can be approximated by a single layer linear network!\n",
        "\n",
        "However, these are mathematics. Let's see a deep linear network in action. Run the code and see if the network learns or not. Notice that all parameters across the two networks are the same!"
      ]
    },
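    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We can check this collapse numerically. A small sketch (bias terms omitted for simplicity): multiplying the weight matrices of a three-layer linear stack yields a single matrix $Q$ that produces the same outputs.\n",
        "\n",
        "```python\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "torch.manual_seed(0)\n",
        "# a 3-layer linear stack without biases\n",
        "stack = nn.Sequential(nn.Linear(1, 4, bias=False),\n",
        "                      nn.Linear(4, 3, bias=False),\n",
        "                      nn.Linear(3, 1, bias=False))\n",
        "\n",
        "x = torch.randn(5, 1)\n",
        "with torch.no_grad():\n",
        "  # collapse the stack: Q = W3 @ W2 @ W1\n",
        "  Q = stack[2].weight @ stack[1].weight @ stack[0].weight\n",
        "  deep_out = stack(x)\n",
        "  single_out = x @ Q.t()\n",
        "\n",
        "print(torch.allclose(deep_out, single_out, atol=1e-6))\n",
        "```"
      ]
    },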
    {
      "cell_type": "code",
      "metadata": {
        "id": "zMnKZC_er7Ez"
      },
      "source": [
        "# Create a deep network\n",
        "net = deepNetwork(deep=True)\n",
        "output = training_loop(X, y, model=net)\n",
        "plotRegression(X, y, output[0], output[1])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lDDi3hlhtJOh"
      },
      "source": [
        "It seems that our Deep NN fails to learn a straightforward task... Why is this happening? Maybe we have initialized the parameters wrongly."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AXlR-XMjJd-g"
      },
      "source": [
        "---\n",
        "# Section 3: The need for clever network initialization\n",
        "## How to do it right\n",
        "\n",
        "*Estimated time: 60 minutes since start*\n",
        "\n",
        "Until now, we took the initialization scheme for granted, bypassing how these choices are made. For example, in Tutorial 1, we initialized the weights to zero in the simple linear regression case. Can we do the same with neural networks? As you have seen before, in deep networks, we can't.\n",
        "\n",
        "You might think that these choices are not especially important. However, the choice of parameter initialization scheme plays a crucial role in neural network learning, and it can be vital to avoid numerical instabilities. We initialize the parameters to determine how quickly our optimization algorithm converges, in simple words, how fast our network learns.\n",
        "\n",
        "- What happens when we initialize weights too big? The gradient (a product of matrices from backpropagation/the chain rule) is much larger in the first than in the last layers, which causes extreme weight updates that overshoot the target or explode to infinity or `NaN`. This phenomenon is called **exploding gradient problem**.\n",
        "\n",
        "- What happens when we initialize weights too small? The gradient tends to get smaller as we move backward, which means the gradients in the first layers are tiny or zero. This phenomenon is called **vanishing gradient problem**.\n",
        "\n",
        "Poor weight initialization choices lead to **exploding** or **vanishing** gradients while training and can prevent the network from learning anything.\n",
        "\n",
        "\n",
        "When we initialize our parameters, we would like to:\n",
        "- Initialize around zero\n",
        "- Sample from a gaussian distribution with standard deviation $\\sigma$ or from a uniform distribution in $[-\\sigma,\\sigma]$"
      ]
    },
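    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see the vanishing effect directly, we can push a random input through a stack of linear layers whose weights were drawn with a too-small standard deviation. A quick sketch (the width, depth, and $\\sigma$ here are arbitrary choices for illustration):\n",
        "\n",
        "```python\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "torch.manual_seed(0)\n",
        "width, depth = 50, 20\n",
        "layers = nn.Sequential(*[nn.Linear(width, width) for _ in range(depth)])\n",
        "\n",
        "# deliberately too-small initialization\n",
        "for layer in layers:\n",
        "  layer.weight.data.normal_(0, 0.01)\n",
        "  layer.bias.data.fill_(0)\n",
        "\n",
        "x = torch.randn(1, width)\n",
        "with torch.no_grad():\n",
        "  x = layers(x)\n",
        "print(f'activation std after {depth} layers: {x.std().item():.3e}')\n",
        "```\n",
        "\n",
        "The signal shrinks by roughly the same factor at every layer, so after enough layers the activations (and hence the gradients) are numerically negligible."
      ]
    },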
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9XCLbQvxxtBQ"
      },
      "source": [
        "## Section 3.1 Xavier initialization"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "aZkFuLcGp13S",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Xavier Initialization\n",
        "\n",
        "try: t4;\n",
        "except NameError: t4=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"gEH7hHkPn8Y\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ueyz1GFr3RES"
      },
      "source": [
        "Let us look at the scale distribution of an output (e.g., a hidden variable)  $o_i$  for some fully-connected layer. With  $n_{in}$  inputs  $x_j$  and their associated weights  $w_{ij}$  for this layer, an output is given by\n",
        "\n",
        "\\begin{equation}\n",
        "o_i = \\sum_{j=1}^{n_{in}} w_{ij}x_{j}\n",
        "\\end{equation}\n",
        "\n",
        "The weights are drawn independently from the same distribution, which is not necessarily a Gaussian. Let us assume that this distribution has zero mean and standard deviation $\\sigma_{w}$ (variance $\\sigma_{w}^2$). For now, let us assume that the inputs to the layer $x_j$  also have zero mean and variance $\\sigma_{x}^{2}$ and that they are independent of $w_{ij}$ and independent of one another other. With this assumptions, we can compute the mean and variance of $o_i$ as follows:\n",
        "\n",
        "\\begin{align}\n",
        "\\mathbb{E}[o_i] &{} \\stackrel{def} =  \\sum_{j=1}^{n_{in}}\\mathbb{E}[w_{ij}x_{j}] & \\\\\n",
        "&{} = \\sum_{j=1}^{n_{in}}\\mathbb{E}[w_{ij}]\\mathbb{E}[x_{j}] & \\text{(independence of $w$ and $x$)}\\\\\n",
        "&{} = 0 & \\text{(each has mean $0$)}\n",
        "\\end{align}\n",
        "\n",
        "\\begin{align}\n",
        "Var[o_i] & {} \\stackrel{def} = \\mathbb{E}[o_i^{2}] - \\left( \\mathbb{E}[o_i] \\right)^{2} & \\\\\n",
        "& {} = \\sum_{j=1}^{n_{in}} \\mathbb{E}[w_{ij}^{2}x_{j}^{2}] - 0 & \\\\\n",
        "&{} =  \\sum_{j=1}^{n_{in}} \\mathbb{E}[w_{ij}^{2}]\\mathbb{E}[x_{j}^{2}] & \\text{(independence of $w$ and $x$)} \\\\\n",
        "\\sigma_o^2 &{} = n_{in}\\sigma_{w}^{2}\\sigma_{x}^{2}\n",
        "\\end{align}\n",
        "\n",
        "Let's see this equation in action:\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "uWFqf6A5jxoT"
      },
      "source": [
        "n_in, n_out = 256, 1000\n",
        "\n",
        "# Create mean 0 variance 1 weights\n",
        "W = torch.randn(n_out, n_in)\n",
        "# Create mean 0 variance 1 input activations\n",
        "x = torch.randn(n_in, 1)\n",
        "# Linear layer: matrix-multiply W times x\n",
        "o = W @ x\n",
        "\n",
        "print(f'output mean = {o.mean().item()}')\n",
        "print(f'output std: {o.std().item()}')\n",
        "print(f'square root of n_in: {np.sqrt(n_in)}')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e8OF33F5jyPF"
      },
      "source": [
        "Vanishing or exploding gradients happen, in part, because the variance of the activations ($\\sigma_x^2$) is itself exploding to infinity or vanishing to zero through successive layers of the network. While we have direct control over $\\sigma_w^2$ during initialization, we only indirectly have control over $\\sigma_x^2$ – it depends on how the data distribution interacts with the weights and architecture of our network. \n",
        "\n",
        "In order to keep variance of $x$ constant across layers (in other words, $\\sigma_o^2 = \\sigma_x^2$), we have to set $n_{in}\\sigma_{w}^{2}=1$. This implies that weights should be initialized with $$\\sigma_w = \\sqrt{1/n_{in}} \\, .$$\n",
        "\n",
        "This initialization helps tame the variance of $x$ in the **forward** pass of the network, but what about the **backward** pass, which is where we see vanishing or exploding gradients? In deep linear networks, the gradient at layer $l$ is also equal to a product of all matrices _after_ $l$. Using $\\mathbf{x}_l$ to denote the hidden activation at layer $l$ and $\\mathbf{g}_l$ to denote its gradient, we can visualize the forwards and backwards calculations as follows:\n",
        "\\begin{align}\n",
        "\\text{Forward:} \\quad & \\mathbf{x}_0 \\overset{\\mathbf{W}_1}{\\longrightarrow} \\mathbf{x}_1 \\overset{\\mathbf{W}_2}{\\longrightarrow} \\ldots \\overset{\\mathbf{W}_l}{\\longrightarrow} &\\mathbf{x}_l& \\\\\n",
        "\\text{Backward:} \\quad & {} &\\mathbf{g}_l& \\overset{\\mathbf{W}_{1+1}^\\top}{\\longleftarrow} \\mathbf{g}_{l+1} \\overset{\\mathbf{W}_{l+2}^\\top}{\\longleftarrow} \\ldots \\overset{\\mathbf{W}_L^\\top}{\\longleftarrow} \\mathbf{g}_L \\\\\n",
        "\\end{align}\n",
        "\n",
        "This means that during backpropagation, we can apply the same variance argument to the gradient calculation, working backwards from the output layer using $n_{out}$ rather than $n_{in}$. This results in an analogous constraint for the backwards pass: we would like $n_{out}\\sigma_{w}^{2}=1$, or $$\\sigma_w = \\sqrt{1/n_{out}} \\, .$$\n",
        "\n",
        "These two constraint cannot be satisfied simultaneously, but we can approximate them both by averaging them together:\n",
        "\n",
        "\\begin{equation}\n",
        "\\frac{1}{2}(n_{in} + n_{out})\\sigma_w^{2} = 1 \\implies \\sigma_w = \\sqrt{\\frac{2}{n_{in} + n_{out}}}\n",
        "\\end{equation}\n",
        "\n",
        "For more details on this concept, see the original publication from [Xavier Glorot and Yoshua Bengio, 2010](http://proceedings.mlr.press/v9/glorot10a.html).\n",
        "\n",
        "Typically, the Xavier initialization samples weights from a Gaussian distribution with zero mean and standard deviation $\\sigma = \\sqrt{\\frac{2}{n_{in} + n_{out}}}$. We can also adapt Xavier’s intuition to choose the variance when sampling weights from a uniform distribution. Note that a uniform distribution in the range $\\left( -\\alpha, \\alpha\\right)$ has variance $\\sigma^2 = \\alpha^2/3$, thus we initialize the weights sampling from a uniform distribution $U\\left(- \\sqrt{\\frac{6}{n_{in} + n_{out}}}, \\sqrt{\\frac{6}{n_{in} + n_{out}}}\\right)$.\n",
        "\n",
        "Overall, in linear networks, we can use either:\n",
        "1. Weights sampled from a normal distribution such as: $w_i \\sim \\mathcal{N}\\left( \\mu=0, \\sigma=\\sqrt{\\frac{2}{n_{in} + n_{out}}} \\right) $\n",
        "2. Weights sampled from a uniform distribution such as $w_i \\sim U\\left(- \\sqrt{\\frac{6}{n_{in} + n_{out}}}, \\sqrt{\\frac{6}{n_{in} + n_{out}}}\\right) $\n",
        "\n",
        "**Note:** Here, we will initialize the weights using the uniform distribution, as shown in the corresponding video."
      ]
    },
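    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For reference, PyTorch ships this scheme as `nn.init.xavier_uniform_` and `nn.init.xavier_normal_`. A minimal sketch of applying the uniform variant to a single layer (the layer sizes are arbitrary):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "import torch.nn as nn\n",
        "\n",
        "layer = nn.Linear(256, 128)  # n_in = 256, n_out = 128\n",
        "nn.init.xavier_uniform_(layer.weight)\n",
        "nn.init.zeros_(layer.bias)\n",
        "\n",
        "# sampled weights stay inside (-sqrt(6/(n_in + n_out)), +sqrt(6/(n_in + n_out)))\n",
        "bound = np.sqrt(6 / (256 + 128))\n",
        "print(layer.weight.abs().max().item() <= bound)\n",
        "```\n",
        "\n",
        "In the exercise below, however, you will implement the scaling yourself."
      ]
    },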
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7HVPtA_2p4D0"
      },
      "source": [
        "### Exercise 3: Debug the vanishing gradients problem. Scale with Xavier's method\n",
        "\n",
        "Let's run our deep linear network again. Notice that the code is the same as above, but now, your job is to scale the weights according to the Xavier's initialization technique."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "6hwy3W363X7_"
      },
      "source": [
        "# Initialize networks well via Xavier technique\n",
        "\n",
        "def deepNetwork(deep=False):\n",
        "  # Network parameters\n",
        "  input_dim = 1\n",
        "  output_dim = 1\n",
        "\n",
        "  if deep:\n",
        "    h1, h2, h3, h4, h5, h6 = 20, 15, 10, 5, 4, 2\n",
        "    # define our network\n",
        "    net = nn.Sequential(nn.Linear(input_dim, h1),\n",
        "                        nn.Linear(h1, h2),\n",
        "                        nn.Linear(h2, h3),\n",
        "                        nn.Linear(h3, h4),\n",
        "                        nn.Linear(h4, h5),\n",
        "                        nn.Linear(h5, h6),\n",
        "                        nn.Linear(h6, output_dim))\n",
        "  else:\n",
        "    # define our network\n",
        "    net = nn.Sequential(nn.Linear(input_dim, 1))\n",
        "\n",
        "  # parameters initialization\n",
        "  for i in range(len(net)):\n",
        "    ####################################################################\n",
        "    # Fill in missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Scale with Xavier's method!\")\n",
        "    ####################################################################    \n",
        "    n_in = ...\n",
        "    n_out = ...\n",
        "    sigma = ...\n",
        "    net[i].weight.data.uniform_(..., ...)\n",
        "    net[i].bias.data.uniform_(..., ...)\n",
        "\n",
        "  return (net)\n",
        "\n",
        "\n",
        "## uncomment the lines below to test your initialization (Xavier) method\n",
        "# net = deepNetwork(deep=True)\n",
        "# outputXav = training_loop(X, y, model=net)\n",
        "# plotRegression(X, y, outputXav[0], outputXav[1])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "lT1mm6DCoLS_"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "# Initialize networks well via Xavier technique\n",
        "\n",
        "def deepNetwork(deep=False):\n",
        "  # Network parameters\n",
        "  input_dim = 1\n",
        "  output_dim = 1\n",
        "\n",
        "  if deep:\n",
        "    h1, h2, h3, h4, h5, h6 = 20, 15, 10, 5, 4, 2\n",
        "    # define our network\n",
        "    net = nn.Sequential(nn.Linear(input_dim, h1),\n",
        "                        nn.Linear(h1, h2),\n",
        "                        nn.Linear(h2, h3),\n",
        "                        nn.Linear(h3, h4),\n",
        "                        nn.Linear(h4, h5),\n",
        "                        nn.Linear(h5, h6),\n",
        "                        nn.Linear(h6, output_dim))\n",
        "  else:\n",
        "    # define our network\n",
        "    net = nn.Sequential(nn.Linear(input_dim, 1))\n",
        "\n",
        "  # parameters initialization\n",
        "  for i in range(len(net)):\n",
        "    n_in = net[i].weight.shape[0]\n",
        "    n_out = net[i].weight.shape[1]\n",
        "    sigma = np.sqrt(6 / (n_in + n_out))\n",
        "    net[i].weight.data.uniform_(-sigma, sigma)\n",
        "    net[i].bias.data.uniform_(-sigma, sigma)\n",
        "\n",
        "  return (net)\n",
        "\n",
        "\n",
        "net = deepNetwork(deep=True)\n",
        "outputXav = training_loop(X, y, model=net)\n",
        "with plt.xkcd():\n",
        "  plotRegression(X, y, outputXav[0], outputXav[1])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YjGR2d1MdMjQ"
      },
      "source": [
        "## Section 3.2: A simpler intialization"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "2NeVFcOLeCST",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Xavier vs a simpler initialization\n",
        "\n",
        "video = YouTubeVideo(id=\"XFHbvGXP1Ng\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8O8KqT9ff9rT"
      },
      "source": [
        "As we have seen, Xavier's initialization makes certain assumptions about the distribution of $\\mathbf{x}$ and $\\mathbf{g}$. It also involved averaging two constratins: one derived from desiderata on the forward pass, and one from the backwards pass.\n",
        "\n",
        "Xavier's method (also known as Glorot initialization) is now standard in many deep learning libraries. In this section, we'll see if we can get away with a simpler method that only uses the constraint from the forward pass, which you may recall is $$\\sigma_w \\ \\sqrt{1/n_{in}} \\, .$$"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "H69cI29kdXgB"
      },
      "source": [
        "### Exercise 4: Compare the simpler method with Xavier's technique\n",
        "\n",
        "Use a simpler method to initialize the parameters. Let's scale with the width of the inputs, i.e., $\\sigma_w=1/\\sqrt{n_{in}}$."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "yGpTGKUVJsyM"
      },
      "source": [
        "def deepNetwork(deep=False):\n",
        "  # Network parameters\n",
        "  input_dim = 1\n",
        "  output_dim = 1\n",
        "\n",
        "  if deep:\n",
        "    h1, h2, h3, h4, h5, h6 = 20, 15, 10, 5, 4, 2\n",
        "    # define our network\n",
        "    net = nn.Sequential(nn.Linear(input_dim, h1),\n",
        "                        nn.Linear(h1, h2),\n",
        "                        nn.Linear(h2, h3),\n",
        "                        nn.Linear(h3, h4),\n",
        "                        nn.Linear(h4, h5),\n",
        "                        nn.Linear(h5, h6),\n",
        "                        nn.Linear(h6, output_dim))\n",
        "  else:\n",
        "    # define our network\n",
        "    net = nn.Sequential(nn.Linear(input_dim, 1))\n",
        "\n",
        "  # parameters initialization\n",
        "  for i in range(len(net)):\n",
        "    ####################################################################\n",
        "    # Fill in missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Scale weights w.r.t width!\")\n",
        "    ####################################################################\n",
        "    n_in = ...\n",
        "    sigma = ...\n",
        "    net[i].weight.data.normal_(0, ...)\n",
        "    net[i].bias.data.normal_(0, ...)\n",
        "\n",
        "  return (net)\n",
        "\n",
        "\n",
        "## uncomment the lines below to test your initialization choice\n",
        "# net = deepNetwork(deep=True)\n",
        "# outputSimple = training_loop(X, y, model=net)\n",
        "# plotRegression(X, y, outputSimple[0], outputXav[1], outputSimple[1])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qXjAm5evuigD"
      },
      "source": [
        "# to_remove solution\n",
        "def deepNetwork(deep=False):\n",
        "  # Network parameters\n",
        "  input_dim = 1\n",
        "  output_dim = 1\n",
        "\n",
        "  if deep:\n",
        "    h1, h2, h3, h4, h5, h6 = 20, 15, 10, 5, 4, 2\n",
        "    # define our network\n",
        "    net = nn.Sequential(nn.Linear(input_dim, h1),\n",
        "                        nn.Linear(h1, h2),\n",
        "                        nn.Linear(h2, h3),\n",
        "                        nn.Linear(h3, h4),\n",
        "                        nn.Linear(h4, h5),\n",
        "                        nn.Linear(h5, h6),\n",
        "                        nn.Linear(h6, output_dim))\n",
        "  else:\n",
        "    # define our network\n",
        "    net = nn.Sequential(nn.Linear(input_dim, 1))\n",
        "\n",
        "  # parameters initialization\n",
        "  for i in range(len(net)):\n",
        "    n_in = net[i].weight.shape[0]\n",
        "    sigma = 1/np.sqrt(n_in)\n",
        "    net[i].weight.data.normal_(0, sigma)\n",
        "    net[i].bias.data.normal_(0, sigma)\n",
        "\n",
        "  return (net)\n",
        "\n",
        "\n",
        "net = deepNetwork(deep=True)\n",
        "outputSimple = training_loop(X, y, model=net)\n",
        "with plt.xkcd():\n",
        "  plotRegression(X, y, outputSimple[0], outputXav[1], outputSimple[1])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CovvGsrzKAQv"
      },
      "source": [
        "Both methods converge in the same set of parameters and at almost the same rate. However, this is not the case for more challenging tasks. This feature is critical in large scale applications, where the demand for speed is high."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uMCFkyyb26d7"
      },
      "source": [
        "---\n",
        "# Section 4: How linear networks are not linear.\n",
        "\n",
        "*Estimated time: 75 minutes since start*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Wsg-OhY3pj-K",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Solving XOR with Linear Neural Networks\n",
        "\n",
        "try: t2;\n",
        "except NameError: t2=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"9qPwfWlAsOM\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FA4TWw2EpmvT"
      },
      "source": [
        "Here, we will show that one can use a linear NN (without any nonlinearity) and solve a nonlinear problem. We focus on the XOR problem, i.e., a logical operation with linear nonseparable data!\n",
        "\n",
        "Exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output result if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true or false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false.\n",
        "\n",
        "In case of two inputs ($X$ and $Y$) the following truth table is applied:\n",
        "\n",
        "\\begin{array}{ccc}\n",
        "X & Y & \\text{XOR} \\\\\n",
        "\\hline\n",
        "0 & 0 & 0 \\\\\n",
        "0 & 1 & 1 \\\\\n",
        "1 & 0 & 1 \\\\\n",
        "1 & 1 & 0 \\\\\n",
        "\\end{array}\n",
        "\n",
        "Here, with `0`, we denote `False`, and with `1` we denote `True` in boolean terms.\n",
        "\n",
        "But, how are we going to solve a linear nonseparable dataset without using a nonlinearity?\n",
        "\n",
        "We will show that deep linear networks implemented using floating-point arithmetic are not actually linear and can perform nonlinear computation! Without any nonlinearity, consecutive linear layers would be, in theory, mathematically equivalent to a single linear layer. So, it is a surprise that floating-point arithmetic is nonlinear enough to yield deep trainable networks.\n",
        "\n",
        "Numbers used by computers aren’t perfect mathematical numbers but approximate representations using finite numbers of bits. Computers commonly use *floating-point* numbers to represent mathematical objects. Each *floating-point* number is represented by a combination of a fraction and an exponent. In the IEEE’s float32 standard, 23 bits are used for the fraction and 8 for the exponent, and one for the sign. See more [here](https://openai.com/blog/nonlinear-computation-in-linear-networks/).\n",
        "\n",
        "The linear network will consist of 3 layers; the input layer, a hidden layer, and the output layer.\n",
        "\n",
        "First, we want to see if we can push the network in a regime where linear operations become non-linear. Thus, we construct a data set with $X$ and $Y$ variables in the range $[-1.1, 1.1]$."
      ]
    },
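    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick way to see this nonlinearity in isolation: with flush-to-zero enabled, scaling a value down below float32's smallest normal number (about $1.2 \\times 10^{-38}$) and then back up is no longer the identity map. A sketch (the scale factor is an arbitrary choice that lands one entry in the denormal range; the behavior requires hardware support for flush-to-zero):\n",
        "\n",
        "```python\n",
        "import torch\n",
        "\n",
        "torch.set_flush_denormal(True)\n",
        "\n",
        "scale = torch.tensor(1e-37)\n",
        "x = torch.tensor([1.0, 1e-3])\n",
        "# scale down into the denormal range, then back up\n",
        "roundtrip = (x * scale) / scale\n",
        "print(roundtrip)  # on supporting hardware, the small entry is flushed to zero\n",
        "```"
      ]
    },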
    {
      "cell_type": "code",
      "metadata": {
        "id": "aI09CQhzZAoN"
      },
      "source": [
        "# enable denormals in pytorch to take advantage of the non-linear effects\n",
        "torch.set_flush_denormal(True)\n",
        "\n",
        "# generating the data set\n",
        "X = np.linspace(start=-1.1, stop=1.1, num=30)\n",
        "Y = np.linspace(start=-1.1, stop=1.1, num=30)\n",
        "\n",
        "inputs = np.array(np.meshgrid(X, Y)).T.reshape(-1, 2)\n",
        "targets = np.ones(shape=(900, 1))\n",
        "targets[inputs[:, 0]*inputs[:, 1] < 0] = -1.\n",
        "\n",
        "inputs = inputs.astype(np.float32)\n",
        "targets = targets.astype(np.float32)\n",
        "\n",
        "# Convert inputs and targets to tensors\n",
        "inputs = torch.from_numpy(inputs)\n",
        "targets = torch.from_numpy(targets)\n",
        "\n",
        "plt.figure()\n",
        "plt.scatter(inputs.detach().cpu().numpy()[:, 0],\n",
        "            inputs.detach().cpu().numpy()[:, 1], c=targets)\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ldUUOdTxZpKu"
      },
      "source": [
        "Now, let's implement the linear network from scratch. We do this because it gives us finer control and because we are going to use an unusual scaling operation! We will use a network similar to the one used for linear regression in Tutorial 1 (i.e., the `linear_regression()` function).\n",
        "\n",
        "```python\n",
        "def mynetwork(x, params):\n",
        "\n",
        "  w1, b1, w2, b2, w3, b3 = params  # unpack the six parameter tensors\n",
        "\n",
        "  h1 = x @ w1.t() + b1  # dot product of inputs with weights and adding the bias\n",
        "  h2 = h1 @ w2.t() + b2  # similarly\n",
        "  return (h2 @ w3.t() + b3)  # similarly\n",
        "```\n",
        "\n",
        "Spend a minute to understand the network's implementation.\n",
        "\n",
        "Let's initialize the parameters. Notice that the first layer's parameters are not learnable (`requires_grad=False`). We initialize the weights close to zero. Also notice that `torch.mul(input, other)` multiplies each element of `input` by the scalar `other` and returns a new tensor."
      ]
    },
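    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sketch of how `torch.mul` behaves with a scalar (for illustration only): every element is scaled by the scalar, and multiplying by `0.0` yields exact zeros, so the biases below start at exactly zero rather than merely close to it."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "print(torch.mul(torch.ones(3), 0.5))   # each element scaled by 0.5\n",
        "print(torch.mul(torch.randn(3), 0.0))  # exact zeros, whatever the input"
      ],
      "execution_count": null,
      "outputs": []
    },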
    {
      "cell_type": "code",
      "metadata": {
        "id": "2_Rod1vqaBXj"
      },
      "source": [
        "# Initial Weights and biases\n",
        "\n",
        "input_dim = inputs.shape[1]\n",
        "output_dim = targets.shape[1]\n",
        "hidden1, hidden2 = 10, 1  # size of the hidden layers\n",
        "\n",
        "w1 = Variable(torch.mul(torch.randn(hidden1, input_dim), np.sqrt(1/input_dim)),\n",
        "              requires_grad=False)\n",
        "b1 = Variable(torch.mul(torch.randn(hidden1), 0.0),\n",
        "              requires_grad=False)\n",
        "\n",
        "w2 = Variable(torch.mul(torch.randn(hidden2, hidden1), np.sqrt(1/hidden1)),\n",
        "              requires_grad=True)\n",
        "b2 = Variable(torch.mul(torch.randn(hidden2), 0.0),\n",
        "              requires_grad=True)\n",
        "\n",
        "params = [w1, b1, w2, b2]"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "H6x34QnwaORA"
      },
      "source": [
        "## Define a 'crazy' scaling and plot the output of the hidden layer\n",
        "\n",
        "We have to scale weights and biases down to very small (i.e., tiny, denormal) values to exploit the nonlinearity. In this regime of tiny numbers, the usual rules of arithmetic stop applying. But why is this true? Let's see a small example.\n",
        "\n",
        "Assume that we have three numbers, $a$, $b$, and $c$, and that $scale$ is a very large number, close to the machine's maximum (e.g., $scale = 2^{126}$)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "5HqCLL0vL5NS"
      },
      "source": [
        "def mathematics(scale):\n",
        "  a = 1.5 * (1/scale)\n",
        "  b = -1.2 * (1/scale)\n",
        "  c =  1.0 * (scale)\n",
        "\n",
        "  lhs = (a + b) * c      # sum first, then scale back up\n",
        "  rhs = a * c + b * c    # scale back up first, then sum\n",
        "\n",
        "  print(f'(a + b)c = {lhs} and ac + bc = {rhs}')\n",
        "\n",
        "\n",
        "scale_moderate = torch.tensor(2**110, dtype=torch.float32)\n",
        "scale_extreme = torch.tensor(2**126, dtype=torch.float32)\n",
        "\n",
        "# distributivity holds at the moderate scale but breaks at the extreme one,\n",
        "# where a + b underflows into the (flushed) denormal range\n",
        "mathematics(scale_moderate)\n",
        "mathematics(scale_extreme)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QX1I-N5dQc5Q"
      },
      "source": [
        "As you see, once we go down to tiny values, the distributive law no longer holds. Let's see this interesting effect in action. Back to the XOR problem!"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VSnoV4YULBAD"
      },
      "source": [
        "Now, we try different scaling values to verify that the network is linear, except for a range where the scaling is very large and the network behaves nonlinearly."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qLsGn0WcLYJV"
      },
      "source": [
        "# Not a 'Crazy scaling'\n",
        "\n",
        "# Define the model - custom based to have better handling\n",
        "def model(x, params, scaling):\n",
        "\n",
        "  w1, b1, w2, b2 = params[0], params[1], params[2], params[3]\n",
        "  h1 = (x @ w1.t()/scaling + b1/scaling)\n",
        "  out = (h1 @ w2.t() + b2/scaling) * scaling\n",
        "  return out, h1\n",
        "\n",
        "\n",
        "scaling = torch.tensor(2**2, dtype=torch.float32)\n",
        "out = model(inputs, params, scaling)\n",
        "activ_l1 = out[1]\n",
        "warnings.filterwarnings(\"ignore\") # To reactivate warnings: filterwarnings(\"default\")\n",
        "XOR_plots(activ_l1)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mavqhWsU5CTM"
      },
      "source": [
        "The top panel shows a heatmap of the first layer's output (i.e., the colors represent the output) as a function of the given inputs, $X$ and $Y$. Things here are linear. You can verify this in the bottom plot, where the output is plotted against the summed inputs, $X+Y$.\n",
        "\n",
        "Let's try a way larger scaling value. What about $2^{124}$?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "r61pRn4S5JnN"
      },
      "source": [
        "# A 'Crazy scaling'\n",
        "scaling = torch.tensor(2**124, dtype=torch.float32)\n",
        "out = model(inputs, params, scaling)\n",
        "warnings.filterwarnings(\"ignore\")  # To reactivate warnings: filterwarnings(\"default\")\n",
        "XOR_plots(out[1])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EfUK8vYR5aqg"
      },
      "source": [
        "Ok, now the neurons behave nonlinearly!"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "18z1heYDam37",
        "cellView": "form"
      },
      "source": [
        "#@markdown See the bottom plot. Can you give a value at which the nonlinearity kicks in?\n",
        "xor = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "re49oAhXcN_C"
      },
      "source": [
        "How can we push the network to operate in a nonlinear regime using the binary inputs to solve XOR? First, we construct the dataset, consisting of four input vectors and four target values, given in the table above."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "9vkaSwCiNLPo"
      },
      "source": [
        "# Construct the input data\n",
        "inputs = np.array([\n",
        "                   [0., 0.],\n",
        "                   [0., 1.],\n",
        "                   [1., 0.],\n",
        "                   [1., 1.]\n",
        "                   ])\n",
        "targets = np.array([\n",
        "                    [0.],\n",
        "                    [1.],\n",
        "                    [1.],\n",
        "                    [0.]\n",
        "                    ])\n",
        "\n",
        "plt.figure()\n",
        "plt.plot(inputs[0,0], inputs[0,1], 'r.', markersize=12.0, label='false')\n",
        "plt.plot(inputs[3,0], inputs[3,1], 'r.', markersize=12.0)\n",
        "plt.plot(inputs[1,0], inputs[1,1], 'b.', markersize=12.0, label='true')\n",
        "plt.plot(inputs[2,0], inputs[2,1], 'b.', markersize=12.0)\n",
        "plt.xlabel('X')\n",
        "plt.ylabel('Y')\n",
        "plt.legend()\n",
        "plt.show()\n",
        "\n",
        "print(f'X, Y: \\n{inputs}')\n",
        "print(f'XOR: \\n{targets}')\n",
        "\n",
        "# set the input in float32 type\n",
        "inputs = inputs.astype(np.float32)\n",
        "targets = targets.astype(np.float32)\n",
        "\n",
        "# Convert inputs and targets to tensors\n",
        "inputs = torch.from_numpy(inputs)\n",
        "targets = torch.from_numpy(targets)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YVyf-Z53Yuim"
      },
      "source": [
        "## Solve XOR logical operation\n",
        "\n",
        "To take advantage of the kicked-in nonlinearity, carefully inspect the plots above and choose biases that move the $[0, 1]$ input range close to the emerging nonlinearity.\n",
        "\n",
        "Here, by having $1000$ neurons in the first layer and sampling the biases from a Gaussian with a large standard deviation, i.e., $\\sigma=2$, we force some of the nodes to operate in the nonlinear regime."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "CD_NZWTgUxLb"
      },
      "source": [
        "# Show how a deep linear network can 'wrongly' solve xor\n",
        "\n",
        "def solveXOR(inputs, targets, scaling, seed=2021):\n",
        "\n",
        "  # Network dimensions\n",
        "  input_dim = inputs.shape[1]\n",
        "  output_dim = targets.shape[1]\n",
        "  hidden1, hidden2 = 1000, 1\n",
        "\n",
        "  # Initial Weights and biases\n",
        "  torch.manual_seed(seed)\n",
        "  w1 = Variable(torch.mul(torch.randn(hidden1, input_dim), np.sqrt(1/input_dim)),\n",
        "                requires_grad=False)\n",
        "  b1 = Variable(torch.mul(torch.randn(hidden1), 2.0),\n",
        "                requires_grad=False)\n",
        "\n",
        "  w2 = Variable(torch.mul(torch.randn(hidden2, hidden1), np.sqrt(1/hidden1)),\n",
        "                requires_grad=True)\n",
        "  b2 = Variable(torch.mul(torch.randn(hidden2), 1.0),\n",
        "                requires_grad=True)\n",
        "  \n",
        "  params = [w1, b1, w2, b2]\n",
        "\n",
        "  # Training loop parameters\n",
        "  lr  = 2e-4  # learning rate\n",
        "  epochs = 250  # total epochs\n",
        "  criterion = nn.MSELoss()  # loss function\n",
        "\n",
        "  # Train for epochs\n",
        "  losses = []\n",
        "\n",
        "  epoch_range = trange(epochs, desc='loss: ', leave=True)\n",
        "\n",
        "  for epoch in epoch_range:\n",
        "    if losses:\n",
        "      epoch_range.set_description(\"loss: {:.6f}\".format(losses[-1]))\n",
        "      epoch_range.refresh() # to show immediately the update\n",
        "    time.sleep(0.01)\n",
        "\n",
        "    out = model(inputs, params, scaling)\n",
        "    loss = criterion(out[0], targets)\n",
        "    loss.backward()\n",
        "      \n",
        "    # Adjust weights & reset gradients\n",
        "    with torch.no_grad():\n",
        "      # Gradient descent\n",
        "      w2 -= w2.grad * lr\n",
        "      b2 -= b2.grad * lr\n",
        "      # flush gradients\n",
        "      w2.grad.zero_()\n",
        "      b2.grad.zero_()\n",
        "        \n",
        "    losses.append(loss.item())\n",
        "\n",
        "  # make the predictions\n",
        "  return (model(inputs, params, scaling), b1)\n",
        "\n",
        "\n",
        "outputs = solveXOR(inputs, targets, scaling)\n",
        "XORpredictions(inputs.cpu().detach().numpy(),\n",
        "               targets.cpu().detach().numpy(),\n",
        "               outputs[0][0].cpu().detach().numpy())"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dz_RB0izdNJc"
      },
      "source": [
        "Hooray! You have solved XOR with a pure Linear Neural Network!"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "TE52lYf8cUwc",
        "cellView": "form"
      },
      "source": [
        "#@markdown Do you think this answer is correct, or is it a \"hack\"?\n",
        "xor_solution = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iNTcGN-Y3b-r"
      },
      "source": [
        "---\n",
        "# Section 5: Race ideas (first components increase exponentially)\n",
        "\n",
        "*Estimated time: 105 minutes since start*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "N3mp1BCPeMaj",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Accelerating Growth\n",
        "import time\n",
        "try: t5;\n",
        "except NameError: t5=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"c3VNOmbi1tU\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2P4u7Y9-SkSG"
      },
      "source": [
        "## Section 5.1: 1-Dimension scenario\n",
        "\n",
        "So, let's take the simplest case: a linear network with multiple layers but only one neuron (i.e., node) per layer. Initializing the weights close to zero, i.e., sampling from a normal distribution with a small $\\sigma$, we can observe when our network starts learning.\n",
        "\n",
        "Towards this goal, we will plot the weights against epochs; we expect to see the weights change rapidly at some epoch. Once the network has learned the task, the weights remain unchanged, which means the gradients are almost zero, i.e., the model has converged."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "vyoKm238TOEd"
      },
      "source": [
        "# 1D data\n",
        "original_w = torch.tensor([2.5]).reshape(-1,1)\n",
        "original_b = 1.2\n",
        "N = 1000  # number of examples\n",
        "inputs, targets = synthetic_dataset(original_w, original_b,\n",
        "                                    num_examples=N,\n",
        "                                    sigma=0.1)\n",
        "inputs = inputs.T\n",
        "print(inputs.shape, targets.shape)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1-8uiWTFVoVS"
      },
      "source": [
        "We build a model with two one-dimensional (i.e., single-node) hidden layers."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_UjbS5HBTaZ8"
      },
      "source": [
        "# Define the model\n",
        "def network(input_dim, output_dim, hidden_sizes):\n",
        "\n",
        "  # define our network\n",
        "  net = nn.Sequential(nn.Linear(input_dim, hidden_sizes),\n",
        "                      nn.Linear(hidden_sizes, hidden_sizes),\n",
        "                      nn.Linear(hidden_sizes, output_dim))\n",
        "\n",
        "  # parameters initialization\n",
        "  for i in range(len(net)): \n",
        "    sigma = 1e-1\n",
        "    net[i].weight.data.normal_(0, sigma)\n",
        "    net[i].bias.data.fill_(0)\n",
        "\n",
        "  return (net)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Yh_EvCKvR8xa"
      },
      "source": [
        "def training_1d(inputs, targets):\n",
        "\n",
        "  input_dim = inputs.shape[1]\n",
        "  output_dim = targets.shape[1]\n",
        "  hidden_sizes = 1\n",
        "\n",
        "  learning_rate  = 1e-2  # learning rate\n",
        "  num_epochs = 1000\n",
        "\n",
        "  # Loss function\n",
        "  criterion = nn.MSELoss()\n",
        "\n",
        "  model = network(input_dim, output_dim, hidden_sizes)\n",
        "  optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)\n",
        "  # Train for num_epochs\n",
        "  losses = []\n",
        "  weights = np.empty((len(model), num_epochs))\n",
        "\n",
        "  # these few lines implement a progress bar and loss description\n",
        "  epoch_range = trange(num_epochs, desc='loss: ', leave=True)\n",
        "  for epoch in epoch_range:\n",
        "    if losses:\n",
        "      epoch_range.set_description(\"loss: {:.6f}\".format(losses[-1]))\n",
        "      epoch_range.refresh() # to show immediately the update\n",
        "    time.sleep(0.01)\n",
        "\n",
        "    preds = model(inputs)\n",
        "\n",
        "    loss = criterion(preds, targets)\n",
        "    loss.backward()\n",
        "\n",
        "    # Store the weights\n",
        "    for j in range(len(model)):\n",
        "      weights[j, epoch] = model[j].weight.detach().item()\n",
        "\n",
        "    # Gradient descent\n",
        "    optimizer.step()\n",
        "    optimizer.zero_grad()\n",
        "\n",
        "    losses.append(loss.item())\n",
        "\n",
        "  return (losses, weights, num_epochs)\n",
        "\n",
        "\n",
        "output = training_1d(inputs, targets)\n",
        "losses = output[0]\n",
        "weights = output[1]\n",
        "plot_weight(losses, weights)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3sX2TF3Y15ou"
      },
      "source": [
        "As we see here, the loss plot has distinct phases: a small initial drop, a long plateau, and then a huge drop to its steady state. In the weights' plots, learning has not started in the beginning; then the absolute values of the weights increase exponentially until convergence. This kind of plot demonstrates the learning dynamics of our simple linear network.\n",
        "\n",
        "Now, it's time to increase the dimensions of both the input and output!"
      ]
    },
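    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The plateau-then-rapid-growth shape can be reproduced with an even smaller sketch (an illustrative toy, not the network above): plain gradient descent on $L = (w_2 w_1 - t)^2$ for two scalar weights initialized near zero. While both weights are tiny, each gradient is proportional to the other (tiny) weight, so almost nothing happens; once the product starts growing, it grows exponentially until it reaches the target."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# gradient descent on L = (w2*w1 - t)**2 with a tiny initialization\n",
        "t, lr = 2.0, 0.05\n",
        "w1 = w2 = 1e-3\n",
        "prods = []\n",
        "for _ in range(400):\n",
        "  e = w2 * w1 - t\n",
        "  g1, g2 = 2 * e * w2, 2 * e * w1  # dL/dw1, dL/dw2\n",
        "  w1, w2 = w1 - lr * g1, w2 - lr * g2\n",
        "  prods.append(w1 * w2)\n",
        "\n",
        "print(prods[0], prods[-1])  # starts near zero, ends near the target t"
      ],
      "execution_count": null,
      "outputs": []
    },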
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MYb3i5O9R9pH"
      },
      "source": [
        "## Section 5.2: Multiple Dimensions"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Y0vhIA4aRNWC",
        "cellView": "form"
      },
      "source": [
        "#@title Video: How Multiple Dimensions Kick In\n",
        "\n",
        "video = YouTubeVideo(id=\"MrqLuBsVtWU\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FZVvuhu5qLOs"
      },
      "source": [
        "Here, we go deeper into understanding the learning dynamics. As we have shown in the previous examples, a linear neural network can learn a set of parameters to perform linear regression.\n",
        "\n",
        "The idea behind this approach is to find an input-output relationship of a linear neural network. Thus, we want to take into account the weight matrix product, i.e., $W_{[1]}^{\\text{T}}W_{[2]}^{\\text{T}} \\dots W_{[L]}^{\\text{T}}$, where $L$ denotes the number of layers in our network.\n",
        "\n",
        "Towards understanding the learning dynamics, we decompose the weight matrix product into orthogonal vectors and then keep track of the singular values across time evolution, i.e., epochs. \n",
        "\n",
        "From Linear Algebra, we know that we can decompose any matrix $A$ into two orthogonal matrices and one diagonal matrix with the relationship:\n",
        "\n",
        "\\begin{equation}\n",
        "A = U \\Sigma V^{\\text{T}}\n",
        "\\end{equation}\n",
        "\n",
        "Here, we perform the so-called singular value decomposition (SVD) on every epoch's weight matrix product. For a refresher, see this [tutorial](http://gregorygundersen.com/blog/2018/12/10/svd/).\n",
        "\n",
        "We store the first $k$ singular values, and then we plot them against epochs.\n",
        "\n",
        "This approach's intuition is that we want to know how much a column of the weight product matrix is learned over time. That is, what is the size of the learned $W_{[1]}^{\\text{T}}W_{[2]}^{\\text{T}}$ projected onto that column, i.e., the corresponding singular value.\n",
        "\n",
        "To better illustrate this idea, we will use synthetic data in high dimensions. Here, our independent variables $x$ are in $100D$ space, whereas the dependent ones, $y$, are in $10D$ space.\n",
        "\n",
        "If the dependent variable is $2D$ or higher-dimensional, the method is also called **multivariate linear regression**.\n",
        "\n",
        "Notice that our data $\\textbf{X} \\in \\mathbb{R}^{N \\times D}$, where $N$ is the number of examples, and $D$ the number of features."
      ]
    },
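    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a standalone NumPy refresher (independent of the network above), the decomposition can be checked numerically: `np.linalg.svd` returns $U$, the singular values, and $V^{\\text{T}}$, and multiplying them back together recovers the original matrix."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "A = np.array([[3., 0., 0.],\n",
        "              [0., 2., 0.]])\n",
        "U, S, Vt = np.linalg.svd(A, full_matrices=False)\n",
        "print(S)                                    # singular values, largest first\n",
        "print(np.allclose(U @ np.diag(S) @ Vt, A))  # True: A = U @ diag(S) @ Vt"
      ],
      "execution_count": null,
      "outputs": []
    },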
    {
      "cell_type": "code",
      "metadata": {
        "id": "RshwxsPS0LRc"
      },
      "source": [
        "# More dimensions; Start with some random original weights.\n",
        "original_w = torch.randn((100, 10))\n",
        "original_b = 1.2\n",
        "N = 1000  # number of examples\n",
        "X, y = synthetic_dataset(original_w, original_b,\n",
        "                         num_examples=N,\n",
        "                         sigma=0.1,\n",
        "                         seed=2021)\n",
        "\n",
        "# We take the transpose matrices\n",
        "inputs = X.T\n",
        "targets = y.T\n",
        "\n",
        "print(f'input size is: {inputs.shape},'\n",
        "      f'target size is: {targets.shape}')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3AE_aVD37vIY"
      },
      "source": [
        "Here, we use a network with one hidden layer, but you can extend this by adding more layers to the network."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1f1zsqhr0GwH"
      },
      "source": [
        "# Define the model\n",
        "def networkSVD(input_dim, output_dim, hidden_sizes):\n",
        "\n",
        "  # define our network\n",
        "  net = nn.Sequential(nn.Linear(input_dim, hidden_sizes),\n",
        "                      nn.Linear(hidden_sizes, output_dim))\n",
        "\n",
        "  # parameters initialization\n",
        "  for i in range(len(net)): \n",
        "    sigma = 1e-2\n",
        "    net[i].weight.data.normal_(0, sigma)\n",
        "    net[i].bias.data.fill_(0)\n",
        "\n",
        "  return (net)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-LUIl_M_zFt_"
      },
      "source": [
        "### Exercise 5: Compute the learning modes via SVD\n",
        "\n",
        "Here, you have to apply the Singular Value Decomposition method to the weight product matrix. Recall from the previous tutorial that we need to `.detach().numpy()` the product matrix before using it in NumPy, which operates outside the autograd graph.\n",
        "\n",
        "*Hint:* To compute SVD, you can use `np.linalg.svd`. See the documentation of this function by running `? np.linalg.svd` in a scratch cell. As we want to store only the singular values, we can set `compute_uv=False`."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "iD1DafYX3iT9"
      },
      "source": [
        "def training_modes(inputs, targets):\n",
        "\n",
        "  input_dim = inputs.shape[1]\n",
        "  output_dim = targets.shape[1]\n",
        "  hidden_sizes = 20\n",
        "\n",
        "  learning_rate  = 2e-3  # learning rate\n",
        "  num_epochs = 3000\n",
        "\n",
        "  # Loss function\n",
        "  criterion = nn.MSELoss()\n",
        "\n",
        "  model = networkSVD(input_dim, output_dim, hidden_sizes)\n",
        "  optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)\n",
        "\n",
        "  # Train for num_epochs\n",
        "  losses = []\n",
        "  rank = 5\n",
        "  modes = np.empty((rank, num_epochs))\n",
        "\n",
        "  # these few lines implement a progress bar and loss description\n",
        "  epoch_range = trange(num_epochs, desc='loss: ', leave=True)\n",
        "  for epoch in epoch_range:\n",
        "    if losses:\n",
        "      epoch_range.set_description(\"loss: {:.6f}\".format(losses[-1]))\n",
        "      epoch_range.refresh() # to show immediately the update\n",
        "    time.sleep(0.01)\n",
        "\n",
        "    preds = model(inputs)\n",
        "\n",
        "    loss = criterion(preds, targets)\n",
        "    loss.backward()\n",
        "\n",
        "    # SVD applied on the matrix product\n",
        "    ####################################################################\n",
        "    # Fill in missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Calculate w1.T*w2.T, detach, and then apply SVD\")\n",
        "    #################################################################### \n",
        "    w_mult = ...\n",
        "    w_svd = ...\n",
        "    modes[:,epoch] = w_svd[:rank]\n",
        "\n",
        "    # Gradient descent\n",
        "    optimizer.step()\n",
        "    optimizer.zero_grad()\n",
        "\n",
        "    losses.append(loss.item())\n",
        "\n",
        "  return (losses, modes, num_epochs)\n",
        "\n",
        "\n",
        "# uncomment the lines below to test your SVD method\n",
        "# output = training_modes(inputs, targets)\n",
        "# plot_learning_modes(output[0], output[2], output[1], rank=5)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "L4EvDLYz6FPN"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "def training_modes(inputs, targets):\n",
        "\n",
        "  input_dim = inputs.shape[1]\n",
        "  output_dim = targets.shape[1]\n",
        "  hidden_sizes = 20\n",
        "\n",
        "  learning_rate  = 2e-3  # learning rate\n",
        "  num_epochs = 3000\n",
        "\n",
        "  # Loss function\n",
        "  criterion = nn.MSELoss()\n",
        "\n",
        "  model = networkSVD(input_dim, output_dim, hidden_sizes)\n",
        "  optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)\n",
        "\n",
        "  # Train for num_epochs\n",
        "  losses = []\n",
        "  rank = 5\n",
        "  modes = np.empty((rank, num_epochs))\n",
        "\n",
        "  # these few lines implement a progress bar and loss description\n",
        "  epoch_range = trange(num_epochs, desc='loss: ', leave=True)\n",
        "  for epoch in epoch_range:\n",
        "    if losses:\n",
        "      epoch_range.set_description(\"loss: {:.6f}\".format(losses[-1]))\n",
        "      epoch_range.refresh() # to show immediately the update\n",
        "    time.sleep(0.01)\n",
        "\n",
        "    preds = model(inputs)\n",
        "\n",
        "    loss = criterion(preds, targets)\n",
        "    loss.backward()\n",
        "\n",
        "    # SVD applied on the matrix product\n",
        "    w_mult = (model[0].weight.T @ model[1].weight.T).detach().numpy()\n",
        "    w_svd = np.linalg.svd(w_mult, compute_uv=False, full_matrices=True)\n",
        "    modes[:,epoch] = w_svd[:rank]\n",
        "\n",
        "    # Gradient descent\n",
        "    optimizer.step()\n",
        "    optimizer.zero_grad()\n",
        "\n",
        "    losses.append(loss.item())\n",
        "\n",
        "  return (losses, modes, num_epochs)\n",
        "\n",
        "\n",
        "output = training_modes(inputs, targets)\n",
        "with plt.xkcd():\n",
        "  plot_learning_modes(output[0], output[2], output[1], rank=5)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KSS8FMWf3Hdc"
      },
      "source": [
        "As we see from the loss plot, our network initially learns very slowly; then the first learning component increases exponentially, which means fast learning. After a phase where the learning curve is almost constant with respect to epochs, the rest of the components increase exponentially, which explains our model's convergence.\n",
        "\n",
        "Of course, we can use more complex approaches to retrieve the learning components (e.g., using [`tensorly`](http://tensorly.org/stable/auto_examples/index.html#tensor-decomposition) or [`tensortools`](https://tensortools-docs.readthedocs.io/en/latest/)). These tools perform tensor decompositions (similar to the SVD approach but applied to $3D$ tensors)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "YRiloLYFczj2",
        "cellView": "form"
      },
      "source": [
        "#@markdown What will happen to the learning dynamics if we increase/decrease the learning rate?\n",
        "learning_modes = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RamnPdC33qAZ"
      },
      "source": [
        "---\n",
        "# Section 6: Cost functions - how problems give rise to cost functions\n",
        "\n",
        "*Estimated time: 145 minutes since start*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_i8U8TyJ4KQx",
        "cellView": "form"
      },
      "source": [
        "#@title Video: LogP as a Cost Function\n",
        "\n",
        "try: t6;\n",
        "except NameError: t6=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"O8psSHspno0\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Tvp3dRbn31wT"
      },
      "source": [
        "## Section 6.1: MSE, where is MSE good? Statistical properties of MSE estimators.\n",
        "\n",
        "As we already defined previously, in linear regression models a common cost function is the so-called **Mean Squared Error (MSE)**. This cost function is also known as *Quadratic Loss*, as the function is of quadratic form, and as *L2 Loss*, because the squared errors have the same form as the squared L2 norm of a vector. Being a sum of squares, the MSE ranges from $0$ to $\\infty$.\n",
        "\n",
        "Here, we more formally motivate the MSE cost function using assumptions about the distribution of additive noise in our dataset. As we showed in the previous tutorial, the MSE on a dataset $\\textbf{X}$ is given by:\n",
        "\n",
        "\\begin{align}\n",
        "L(\\mathbf{w}, b) &{} = \\frac{1}{N}  \\sum_{i=1}^{N} \\left( \\hat{y}^{[i]} - y^{[i]}\\right)^2 \\\\\n",
        "&{}= \\frac{1}{N} \\sum_{i=1}^{N} \\left(\\mathbf{w}^{\\text{T}} \\mathbf{x}^{[i]} + b - y^{[i]}\\right)^{2}.\n",
        "\\end{align}\n",
        "\n",
        "One way to motivate linear regression with the mean squared error loss function, i.e., $L(\\mathbf{w}, b)$, is to formally assume that observations arise from noisy measurements, where the noise is normally distributed (see W2 Part 1):\n",
        "\n",
        "\\begin{equation}\n",
        "y = \\textbf{w}^T\\textbf{x} + b + \\epsilon, \\epsilon \\sim \\mathcal{N}(\\mu=0, \\sigma)\n",
        "\\end{equation}\n",
        "\n",
        "Therefore, we can write the *likelihood* of seeing a particular example $y$ given the independent variable $\\mathbf{x}$\n",
        "\n",
        "\\begin{equation}\n",
        "p(y|\\mathbf{x}) = \\frac{1}{2\\pi\\sigma^2} \\text{exp}\\left( -\\frac{1}{2\\sigma^2} \\left( \\mathbf{w}^T\\mathbf{x} + b - y \\right)^2 \\right)\n",
        "\\end{equation}\n",
        "\n",
        "Assuming that all values in our dataset are independent and identically distributed (i.i.d.), the total likelihood of our dtaset is given by:\n",
        "\n",
        "\\begin{equation}\n",
        "P(\\mathbf{y}|\\mathbf{X}) = \\prod_{n=1}^N p\\left(y^{[i]}|\\mathbf{x}^{[i]} \\right)\n",
        "\\end{equation}\n",
        "\n",
        "According to the maximum likelihood principle, the best values of parameters  $\\mathbf{w}$  and  $b$  are those that maximize the likelihood of the entire dataset (i.e., *maximum likelihood estimators*).\n",
        "\n",
        "While maximizing the product of many exponential functions might look difficult, we can simplify things significantly, without changing the objective, by maximizing the log of the likelihood instead. For historical reasons, optimizations are more often expressed as minimization rather than maximization. So, without changing anything, we can minimize the negative log-likelihood\n",
        "\n",
        "\n",
        "\\begin{equation}\n",
        "-\\text{log} P(\\mathbf{y}|\\mathbf{X}) = -\\text{log} \\left( \\prod_{n=1}^N p\\left(y^{[i]}|\\mathbf{x}^{[i]} \\right) \\right)\n",
        "\\end{equation}\n",
        "\n",
        "Using the logarithm property states that the logarithm of a product is the sum of the logarithms of the individual components being multiplied, i.e., $\\text{log}(a \\cdot b) = \\text{log}(a) + \\text{log}(b)$, thus the log-likelihood is given by:\n",
        "\n",
        "\\begin{align}\n",
        "-\\text{log} P(\\mathbf{y}|\\mathbf{X}) &{}= -\\text{log} \\left( \\prod_{n=1}^N p\\left(y^{[i]}|\\mathbf{x}^{[i]} \\right) \\right) \\\\\n",
        "&{}= -\\sum_{n=1}^N \\text{log}\\left( p\\left(y^{[i]}|\\mathbf{x}^{[i]} \\right) \\right) \\\\\n",
        "&{} = \\sum_{i=2}^N \\left( \\frac{1}{2}\\text{log}(2\\pi\\sigma^2) + \\frac{1}{2\\sigma^2} \\left( \\mathbf{w}^T\\mathbf{x}^{[i]} + b - y^{[i]} \\right)^2 \\right).\n",
        "\\end{align}\n",
        "\n",
        "Assuming that $\\sigma$ is constant for all examples, the minimization of the negative *log-likelihood* with respect to the parameters $\\textbf{w}$ and $b$ can be reduced by removing the left order of the summation. The remaining order is the squared loss (taking out the constant $\\frac{1}{2\\sigma^2}$. In other words, minimizing the **mean squared error** is equivalent to **maximum likelihood estimation** of a linear model under the assumption of additive Gaussian noise with constant variance ($\\sigma^2$)."
      ]
    },
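    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make this equivalence concrete, here is a minimal NumPy sketch (added for intuition; the data and slope grid are made up): on synthetic 1-D data, we evaluate both the negative log-likelihood and the MSE over a grid of candidate slopes and check that the two criteria are minimized by the same $w$.\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "rng = np.random.default_rng(0)\n",
        "sigma = 0.5\n",
        "x = rng.normal(size=200)\n",
        "y = 2.0 * x + rng.normal(scale=sigma, size=200)  # true w = 2, b = 0\n",
        "\n",
        "ws = np.linspace(0.0, 4.0, 401)  # candidate slopes\n",
        "mse = np.array([np.mean((w * x - y) ** 2) for w in ws])\n",
        "# negative log-likelihood under Gaussian noise with fixed sigma\n",
        "nll = np.array([len(x) * 0.5 * np.log(2 * np.pi * sigma**2)\n",
        "                + np.sum((w * x - y) ** 2) / (2 * sigma**2) for w in ws])\n",
        "\n",
        "# both criteria pick the same slope on the grid\n",
        "assert np.argmin(mse) == np.argmin(nll)\n",
        "```"
      ]
    },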
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "o7PmvSsXzvkl"
      },
      "source": [
        "### Section 6.1.1 Download and Visualize the dataset\n",
        "\n",
        "Here, we will use neural data (i.e., firing rates) of hundreds of neurons as independent variables (i.e., $\\textbf{X}$). We aim to decode the reaching angle (i.e., $\\textbf{Y}$) concerning the neurons' firing rates. For more info on the behavioral experiment, see [Flint et al., 2012](http://doi.org/10.1088/1741-2560/9/4/046006)\n",
        "\n",
        "First, we will download the data. The data are extracted in 2 different groups:\n",
        "1. **training data** for the model fitting\n",
        "2. **testing (or test) data** for estimating the model’s performance\n",
        "\n",
        "Why do we need to split our dataset?\n",
        "\n",
        "In the real world, datasets contain both random and natural (i.e., real) effects. Hence it is unlikely to have a model that is 100% accurate. Furthermore, new data points will likely include random effects. Thus whether the model can explain these effects is subject to randomness. As a result, some random effects may be explained, while others may not.\n",
        "\n",
        "We want to answer questions as simple as,  \"is my model good” and “how good is my model?”.\n",
        "\n",
        "To answer these simple questions, we have to test our model in the most realistic scenario. It seems intuitive to split data into a training portion and a test portion, so the model can be trained on the first and then tested with the testing data. It may be a good idea to split the data so that the model can be **trained on a larger portion** to adapt to possible data structures. A typical process is to use $2/3$ of our data to train our model and test the rest $1/3$ of the dataset. Here, we have used a $90-10$ splitting for the illustration of the test set results.\n",
        "\n",
        "Later in the course, you will learn about more sophisticated techniques in order not only to verify if your model is good but also to choose the best model among others.\n",
        "\n",
        "Ok! Let's see what is downloaded:\n",
        "\n",
        "1. `x_train`, `x_test`: independent variables, i.e., neuronal firing rates.\n",
        "2. `y_train`, `y_test`: target variable, i.e., reaching angle."
      ]
    },
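    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a minimal sketch of such a hold-out split (on a synthetic array with made-up shapes, since our data here arrive pre-split from `getData`): shuffle the example indices, then cut at the $2/3$ mark.\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "rng = np.random.default_rng(522)\n",
        "X = rng.normal(size=(90, 5))  # 90 examples, 5 features\n",
        "Y = rng.normal(size=(90, 1))\n",
        "\n",
        "idx = rng.permutation(len(X))  # shuffle before splitting\n",
        "n_train = int(2 / 3 * len(X))  # 2/3 train, 1/3 test\n",
        "x_tr, y_tr = X[idx[:n_train]], Y[idx[:n_train]]\n",
        "x_te, y_te = X[idx[n_train:]], Y[idx[n_train:]]\n",
        "\n",
        "print(x_tr.shape, x_te.shape)  # (60, 5) (30, 5)\n",
        "```"
      ]
    },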
    {
      "cell_type": "code",
      "metadata": {
        "id": "LhLCUgtaz2wT"
      },
      "source": [
        "DATA = getData()\n",
        "\n",
        "x_train = DATA[0]\n",
        "y_train = DATA[1]\n",
        "x_test = DATA[2]\n",
        "y_test = DATA[3]"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Iu6Wgl3iM2Fh"
      },
      "source": [
        "Let's now see how some sample examples look like and plot a histogram of the reaching angles."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "jMFtDJHfMsNR"
      },
      "source": [
        "neuron_id = [1, 30, 40]  # plot some neurons!\n",
        "\n",
        "plt.figure(figsize=(12, 6))\n",
        "plt.subplot(1, 2, 1)\n",
        "for id in neuron_id:\n",
        "  plt.plot(x_train[id,:], label=f'neuron {id}')\n",
        "plt.xlabel('time bin')\n",
        "plt.ylabel('spikes/sec')\n",
        "plt.legend()\n",
        "\n",
        "plt.subplot(1, 2, 2)\n",
        "plt.hist(y_train, bins=15, density=True)\n",
        "plt.ylabel('probability density')\n",
        "plt.xlabel('reaching angle (radians)')\n",
        "plt.xlim([-1.4 * np.pi, 1.4 * np.pi])\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "B5nQpuuJnSX8"
      },
      "source": [
        "### Section 6.1.2 Construct the model using `nn.Module`"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LVTuWVp2D-od"
      },
      "source": [
        "# Model\n",
        "class SimpleNet(nn.Module):\n",
        "  # Initialize the layers\n",
        "  def __init__(self, input_dim, hidden_units, output_dim):\n",
        "    super().__init__()\n",
        "    \n",
        "    self.layers = nn.ModuleList()\n",
        "    self.hidden_units = hidden_units\n",
        "\n",
        "    # A fully-connected network (FCN) with len(hidden_units) hidden layers\n",
        "    for i in range(len(hidden_units)):\n",
        "      self.layers += [nn.Linear(input_dim, self.hidden_units[i])]\n",
        "      input_dim =  self.hidden_units[i]\n",
        "\n",
        "    self.out = nn.Linear(input_dim, output_dim)\n",
        "\n",
        "  # forward pass\n",
        "  def forward(self, x):\n",
        "    for layer in self.layers:\n",
        "      x = layer(x)\n",
        "\n",
        "    return self.out(x)\n",
        "\n",
        "\n",
        "model = SimpleNet(input_dim=219, hidden_units=[50, 25, 10], output_dim=1)\n",
        "print(model)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pNhhVWdQnb1n"
      },
      "source": [
        "### Section 6.1.3 Train the model"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "mfvEzhK4EZB5"
      },
      "source": [
        "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
        "\n",
        "# Converting inputs and labels to Variable\n",
        "if torch.cuda.is_available():\n",
        "  inputs = Variable(torch.from_numpy(x_train).float().cuda())\n",
        "  targets = Variable(torch.from_numpy(y_train).float().cuda())\n",
        "  inputs_test = Variable(torch.from_numpy(x_test).float().cuda())\n",
        "  targets_test = Variable(torch.from_numpy(y_test).float().cuda())\n",
        "else:\n",
        "  inputs = Variable(torch.from_numpy(x_train).float())\n",
        "  targets = Variable(torch.from_numpy(y_train).float())\n",
        "  inputs_test = Variable(torch.from_numpy(x_test).float())\n",
        "  targets_test = Variable(torch.from_numpy(y_test).float())\n",
        "\n",
        "# input size dimension - features\n",
        "input_dim = inputs.shape[1]\n",
        "output_dim = targets.shape[1]\n",
        "# size of the hidden layer\n",
        "hidden = [50, 10]\n",
        "\n",
        "learningRate = 1e-4\n",
        "epochs = 500\n",
        "\n",
        "model = SimpleNet(input_dim, hidden, output_dim)\n",
        "\n",
        "# Make it run on GPU!\n",
        "model.train(True)\n",
        "model.to(device)\n",
        "\n",
        "# Loss function\n",
        "criterion = nn.MSELoss()\n",
        "# Gradient Descent\n",
        "optimizer = torch.optim.SGD(model.parameters(), lr=learningRate)\n",
        "\n",
        "loss_val = []\n",
        "loss_test = []\n",
        "\n",
        "epoch_range = trange(epochs, desc='loss: ', leave=True)\n",
        "for epoch in epoch_range:\n",
        "  if loss_val:\n",
        "    epoch_range.set_description(\"loss: {:.6f}\".format(loss_val[-1]))\n",
        "    epoch_range.refresh() # to show immediately the update\n",
        "  time.sleep(0.01)\n",
        "\n",
        "  # Clear gradient buffers because we don't want any gradient from\n",
        "  # previous epoch to carry forward, dont want to cummulate gradients\n",
        "  optimizer.zero_grad()        \n",
        "  # get output from the model, given the inputs\n",
        "  outputs = model(inputs)\n",
        "  # get loss for the predicted output\n",
        "  loss = criterion(outputs, targets)\n",
        "  # get gradients w.r.t to parameters\n",
        "  loss.backward()\n",
        "\n",
        "  # update parameters\n",
        "  optimizer.step()\n",
        "  loss_val.append(loss.item())\n",
        "  # calculate and store the loss on the test set\n",
        "  loss_test.append(criterion(model(inputs_test), targets_test))\n",
        "\n",
        "plt.figure()\n",
        "plt.plot(loss_val, label='training error')\n",
        "plt.plot(loss_test, label='test error')\n",
        "plt.xlabel('epoch')\n",
        "plt.ylabel('loss')\n",
        "plt.xlim(0, 100)\n",
        "plt.legend()\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Ygb4kIyKniM4"
      },
      "source": [
        "### Section 6.1.4 Test the model"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GY-HfodHEtzG"
      },
      "source": [
        "# Testing the model\n",
        "\n",
        "yhat_test = model(inputs_test)\n",
        "loss_test = criterion(yhat_test, targets_test)\n",
        "print(f'Loss in the test set: {loss_test}')\n",
        "\n",
        "reaching_test(x_test, y_test, yhat_test)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sXcqfkWD2ilS"
      },
      "source": [
        "## Section 6.2: Mean Absolute Error (MAE)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Yr00-qcPonEB",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Outliers in Neural Activities\n",
        "\n",
        "video = YouTubeVideo(id=\"jxp5faHAZgM\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Drox4oTv0s_2"
      },
      "source": [
        "Mean Absolute Error (MAE) is another loss function used for regression models. MAE is the sum of absolute differences between our target and predicted variables. So, it measures the average magnitude of errors in a set of predictions without considering their directions. Similarly, the range of MAE is from $0$ to $\\infty$. MAE is also referred to as *L1 Loss*.\n",
        "\n",
        "The mathematical description of MAE is given by:\n",
        "\n",
        "\\begin{align}\n",
        "L(\\mathbf{w}, b) &{} = \\frac{1}{N}  \\sum_{i=1}^{N} \\left| \\hat{y}^{[i]} - y^{[i]}\\right| \\\\\n",
        "&{}= \\frac{1}{N} \\sum_{i=1}^{N} \\left| \\mathbf{w}^{\\text{T}} \\mathbf{x}^{[i]} + b - y^{[i]}\\right|.\n",
        "\\end{align}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4Ao6A-3rnxT1"
      },
      "source": [
        "### Section 6.2.1 MSE vs. MAE (L2 Loss vs. L1 Loss)\n",
        "\n",
        "In short, using the squared error is easier to solve, but using the absolute error is more robust to outliers. But let’s understand why!\n",
        "Whenever we train a machine learning model, our goal is to find the point that minimizes loss function. Of course, both functions reach the minimum when the prediction is exactly equal to the true value.\n",
        "\n",
        "Since MSE squares the error $l = \\left( \\hat{y} - y \\right)^2$, the value of error ($l$) increases quickly if $l > 1$. If we have an outlier in our data, the value of $l$ will be high and $l^2$ will be $>> |l|$. This will make the model with MSE loss to weight more the outliers comparing with a model containing MAE as a loss function.\n",
        "\n",
        "MAE loss is useful if the training data is corrupted with outliers (i.e., we erroneously receive unrealistically huge negative/positive values in our training environment, but not our testing environment).\n",
        "\n",
        "Intuitively, we can think about it like this: If we only had to give one prediction for all the observations that try to minimize MSE, then that prediction should be the **mean** of all target values. However, if we try to minimize MAE, that prediction would be the **median** of all examples. We know that the median is more robust to outliers than mean, making MAE more robust to outliers than MSE.\n",
        "\n",
        "One big problem in using MAE loss (especially in neural networks) is that its gradient is the same throughout, which means the gradient will be large even for small loss values. This isn’t good for learning. We can use a dynamic learning rate (i.e., learning rate that changes over epochs) to fix this, decreasing as we move closer to the minima (we will see this technique later in the course). MSE behaves nicely in this case and will converge even with a fixed learning rate. The gradient of MSE loss is high for larger loss values and decreases as loss approaches $0$, making it more precise at the end of the training.\n",
        "\n",
        "In the table below, we give an illustrative example using some random error values.\n",
        "\n",
        "\\begin{array}{ccc}\n",
        "x^{[i]} & error & error^2 & |error| \\\\\n",
        "\\hline\n",
        "x^{[1]} & 0 & 0 & 0 \\\\\n",
        "x^{[2]} & 1 & 1 & 1 \\\\\n",
        "x^{[3]} & .5 & .25 & .5 \\\\\n",
        "x^{[4]} & 3 & 9 & 3 \\\\\n",
        "x^{[5]} & -1.5 & 2.25 & 1.5  \\\\\n",
        "x^{[6]} & 15 & 225 & 15 \\\\\n",
        "\\hline\n",
        "total &  & RMSE\\approx6.3 & MAE=3.5\n",
        "\\end{array}\n",
        "\n",
        "where RMSE stands for Root MSE (i.e., we take the square root of the MSE to make both mean errors at the same scale!).\n",
        "\n",
        "\n",
        "The total error is significantly higher when introducing an outlier to some arbitrary dataset (i.e., $ID=5$).\n",
        "\n",
        "So far, so good. But, let's go back to our original dataset and manually add a highly unlikely value in the training dataset."
      ]
    },
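    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The mean/median intuition above is easy to verify numerically. The sketch below (an added illustration with made-up values, chosen so the median is unique) grid-searches for the single constant prediction $c$ that minimizes each loss on a small dataset with one outlier:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "y = np.array([0.0, 0.5, 1.0, 3.0, 15.0])  # small dataset; 15 is the outlier\n",
        "cs = np.linspace(-5, 20, 2501)            # candidate constant predictions\n",
        "\n",
        "mse = np.array([np.mean((c - y) ** 2) for c in cs])\n",
        "mae = np.array([np.mean(np.abs(c - y)) for c in cs])\n",
        "\n",
        "print(cs[np.argmin(mse)], y.mean())      # MSE minimizer = mean = 3.9\n",
        "print(cs[np.argmin(mae)], np.median(y))  # MAE minimizer = median = 1.0\n",
        "```"
      ]
    },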
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "n-ori5gBnz4e"
      },
      "source": [
        "### Section 6.2.2 Corrupt the data by adding a tiny number of cells as outliers"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "fafe8IuqmGZR"
      },
      "source": [
        "# Add an outliers in the training set\n",
        "inputs_new = inputs.cpu().detach().numpy()\n",
        "\n",
        "# Choose some neurons to be the outliers!\n",
        "outliers = np.random.choice(range(len(inputs)), size=20, replace=False)\n",
        "print(\"outliers:\", outliers)\n",
        "# Corrupt their firing! Incresa the spikes/s\n",
        "for i in outliers:\n",
        "  inputs_new[i,:] += 5000*np.ones(inputs.shape[1])\n",
        "\n",
        "# Plot some neurons to get an intuition of how the data look-like.\n",
        "neuron_id = [10, 30, 140, 15]\n",
        "\n",
        "with plt.xkcd():\n",
        "  plt.figure(figsize=(12, 6))\n",
        "  plt.subplot(1, 2, 1)\n",
        "  for id in neuron_id:\n",
        "    plt.plot(inputs_new[id,:], label=f'neuron {id}')\n",
        "  plt.xlabel('time bin')\n",
        "  plt.ylabel('spikes/sec')\n",
        "  plt.legend()\n",
        "\n",
        "  plt.subplot(1, 2, 2)\n",
        "  for id in np.random.choice(outliers, size=4, replace=False):\n",
        "    plt.plot(inputs_new[id,:], label=f'neuron {id}')\n",
        "  plt.xlabel('time bin')\n",
        "  plt.ylabel('spikes/sec')\n",
        "  plt.legend(bbox_to_anchor=(1.05, 1))\n",
        "  plt.show()\n",
        "\n",
        "# Converting inputs and labels to Variable in pytorch\n",
        "if torch.cuda.is_available():\n",
        "  inputs_new = Variable(torch.from_numpy(x_train).float().cuda())\n",
        "else:\n",
        "  inputs_new = Variable(torch.from_numpy(x_train).float())"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XMru_Oy5n9F-"
      },
      "source": [
        "#### Exercise 6: Train two models; one with MSE and one with MAE loss functions"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "z4o694LQ0waM"
      },
      "source": [
        "def MSEvsMAE(inputs, targets):\n",
        "  # input size dimension - features\n",
        "  input_dim = inputs.shape[1]\n",
        "  output_dim = targets.shape[1]\n",
        "  # size of the hidden layer\n",
        "  hidden = [50, 10]\n",
        "\n",
        "  learningRate = 1e-5\n",
        "  epochs = 2000\n",
        "\n",
        "  # Create the model with MAE Loss\n",
        "  ####################################################################\n",
        "  # Fill in missing code below (...),\n",
        "  # then remove or comment the line below to test your function\n",
        "  raise NotImplementedError(\"Construct the Network/Choose the L1 loss as loss function\")\n",
        "  ####################################################################\n",
        "  modelMAE = ...\n",
        "  criterion_mae = ...  # L1 Loss - Absolute error\n",
        "  optimizerMAE = torch.optim.SGD(modelMAE.parameters(), lr=learningRate)\n",
        "\n",
        "  # Create model with MSE loss\n",
        "  modelMSE = SimpleNet(input_dim, hidden, output_dim).to(device)\n",
        "  criterion_mse = nn.MSELoss()\n",
        "  optimizerMSE = torch.optim.SGD(modelMSE.parameters(), lr=learningRate)\n",
        "\n",
        "\n",
        "  # Training Loop for both models\n",
        "  print('Training...')\n",
        "  lossesMAE = []\n",
        "  lossesMSE = []\n",
        "\n",
        "  loss_testMAE = []\n",
        "  loss_testMSE = []\n",
        "\n",
        "  epoch_range = trange(epochs, desc='MAE: vs MSE:', leave=True)\n",
        "  for epoch in epoch_range:\n",
        "    if lossesMAE:\n",
        "      epoch_range.set_description(\n",
        "          \"MAE: {:.4f} vs MSE: {:.4f}\".format(lossesMAE[-1], lossesMSE[-1]))\n",
        "      epoch_range.refresh() # to show immediately the update\n",
        "    time.sleep(0.01)\n",
        "\n",
        "    # Clear gradient buffers because we don't want any gradient from\n",
        "    # previous epoch to carry forward, dont want to cummulate gradients\n",
        "    optimizerMAE.zero_grad()\n",
        "    optimizerMSE.zero_grad()\n",
        "\n",
        "    # get output from the model, given the inputs\n",
        "    outputsMAE = modelMAE(inputs)\n",
        "    outputsMSE = modelMSE(inputs)\n",
        "\n",
        "    # get loss for the predicted output\n",
        "    lossMAE = criterion_mae(outputsMAE, targets)\n",
        "    lossMSE = criterion_mse(outputsMSE, targets)\n",
        "\n",
        "    # get gradients w.r.t to parameters\n",
        "    lossMAE.backward()\n",
        "    lossMSE.backward()\n",
        "\n",
        "    # update parameters\n",
        "    optimizerMAE.step()\n",
        "    optimizerMSE.step()\n",
        "    lossesMAE.append(lossMAE.item())\n",
        "    lossesMSE.append(lossMSE.item())\n",
        "\n",
        "    loss_testMAE.append(criterion_mae(modelMAE(inputs_test), targets_test))\n",
        "    loss_testMSE.append(criterion_mse(modelMSE(inputs_test), targets_test))\n",
        "      \n",
        "  return (modelMAE, modelMSE,\n",
        "          criterion_mae, criterion_mse,\n",
        "          lossesMAE, lossesMSE,\n",
        "          loss_testMAE, loss_testMSE)\n",
        "\n",
        "\n",
        "## uncomment the lines below to test the training of the models\n",
        "# output = MSEvsMAE(inputs_new, targets)\n",
        "\n",
        "# MAE_test = output[2](output[0](inputs_test), targets_test)\n",
        "# loss_2 = output[3](output[1](inputs_test), targets_test)\n",
        "# RMSE_test = torch.sqrt(loss_2) # we take the square root of MSE to have \n",
        "                               # both errors in the same scale\n",
        "# loss_comparison(output[4], output[5],\n",
        "#                 output[6], output[7],\n",
        "#                 RMSE_test, MAE_test)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ju5KnHJ4B7Fx"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "def MSEvsMAE(inputs, targets):\n",
        "  # input size dimension - features\n",
        "  input_dim = inputs.shape[1]\n",
        "  output_dim = targets.shape[1]\n",
        "  # size of the hidden layer\n",
        "  hidden = [50, 10]\n",
        "\n",
        "  learningRate = 1e-5\n",
        "  epochs = 2000\n",
        "\n",
        "  # Create the model with MAE Loss\n",
        "  modelMAE = SimpleNet(input_dim, hidden, output_dim).to(device)\n",
        "  criterion_mae = nn.L1Loss()  # L1 Loss - Absolute error\n",
        "  optimizerMAE = torch.optim.SGD(modelMAE.parameters(), lr=learningRate)\n",
        "\n",
        "  # Create model with MSE loss\n",
        "  modelMSE = SimpleNet(input_dim, hidden, output_dim).to(device)\n",
        "  criterion_mse = nn.MSELoss()\n",
        "  optimizerMSE = torch.optim.SGD(modelMSE.parameters(), lr=learningRate)\n",
        "\n",
        "\n",
        "  # Training Loop for both models\n",
        "  print('Training...')\n",
        "  lossesMAE = []\n",
        "  lossesMSE = []\n",
        "\n",
        "  loss_testMAE = []\n",
        "  loss_testMSE = []\n",
        "\n",
        "  epoch_range = trange(epochs, desc='MAE: vs MSE:', leave=True)\n",
        "  for epoch in epoch_range:\n",
        "    if lossesMAE:\n",
        "      epoch_range.set_description(\n",
        "          \"MAE: {:.4f} vs MSE: {:.4f}\".format(lossesMAE[-1], lossesMSE[-1]))\n",
        "      epoch_range.refresh() # to show immediately the update\n",
        "    time.sleep(0.01)\n",
        "\n",
        "    # Clear gradient buffers because we don't want any gradient from\n",
        "    # previous epoch to carry forward, dont want to cummulate gradients\n",
        "    optimizerMAE.zero_grad()\n",
        "    optimizerMSE.zero_grad()\n",
        "    # get output from the model, given the inputs\n",
        "    outputsMAE = modelMAE(inputs)\n",
        "    outputsMSE = modelMSE(inputs)\n",
        "    # get loss for the predicted output\n",
        "    lossMAE = criterion_mae(outputsMAE, targets)\n",
        "    lossMSE = criterion_mse(outputsMSE, targets)\n",
        "    # get gradients w.r.t to parameters\n",
        "    lossMAE.backward()\n",
        "    lossMSE.backward()\n",
        "\n",
        "    # update parameters\n",
        "    optimizerMAE.step()\n",
        "    optimizerMSE.step()\n",
        "    lossesMAE.append(lossMAE.item())\n",
        "    lossesMSE.append(lossMSE.item())\n",
        "      \n",
        "    loss_testMAE.append(criterion_mae(modelMAE(inputs_test), targets_test))\n",
        "    loss_testMSE.append(criterion_mse(modelMSE(inputs_test), targets_test))\n",
        "      \n",
        "  return (modelMAE, modelMSE,\n",
        "          criterion_mae, criterion_mse,\n",
        "          lossesMAE, lossesMSE,\n",
        "          loss_testMAE, loss_testMSE)\n",
        "\n",
        "\n",
        "output = MSEvsMAE(inputs_new, targets)\n",
        "\n",
        "MAE_test = output[2](output[0](inputs_test), targets_test)\n",
        "loss_2 = output[3](output[1](inputs_test), targets_test)\n",
        "RMSE_test = torch.sqrt(loss_2) # we take the square root of MSE to have \n",
        "                               # both errors in the same scale\n",
        "with plt.xkcd():\n",
        "  loss_comparison(output[4], output[5],\n",
        "                  output[6], output[7],\n",
        "                  RMSE_test, MAE_test)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kbANIQAIgRwR"
      },
      "source": [
        "Both loss functions (i.e., MSE loss and MAE loss) do converge. However, the loss in the test set is lower using MAE. This observation is critical, as actually what we care about is the model's performance using the test set, i.e., unseen data."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cv35pu6-jRaW"
      },
      "source": [
        "### Section 6.2.3 Deciding which loss function to use\n",
        "If the outliers represent anomalies that are important for our problem and thus should be detected, we should use **MSE**. On the other hand, if we believe that the outliers represent corrupted/wrong data, we should choose **MAE** as the loss function.\n",
        "\n",
        "*L1 loss* is more robust to outliers, but its derivatives are not continuous, making it inefficient to find the solution. *L2 loss* is sensitive to outliers but gives a more stable and closed-form solution (by setting its derivative to 0.)\n",
        "\n",
        "Overall, with outliers in the dataset, the MSE (L2 Loss) cost function tries to adjust the model according to these outliers at the expense of other good-samples since the squared-error is going to be huge for these outliers (for error > 1). On the other hand, MAE (L1 Loss) cost is quite resistant to outliers.\n",
        "As a result, MSE cost may result in huge deviations in some of the samples, which results in reduced accuracy.\n",
        "\n",
        "If we can ignore the outliers in our dataset or need them to be there, we should be using an MAE loss function. On the other hand, if you don’t want undesired outliers in the dataset and would like to use a stable solution, then, first of all, you should try to remove the outliers and then use an MSE loss function (unless the performance of a model with an L2 loss function may deteriorate badly due to the presence of outliers in the dataset).\n",
        "\n",
        "**Problems with both:** There can be cases where neither MSE nor MAE loss functions give desirable predictions. For example, if $90%$ of observations in our data-set have a true target value of $1,000$ and the remaining $10%$ have target value between $0-100$. A model with MAE as loss might predict $1000$ for all observations, ignoring 10% of outlier cases, as it will try to go towards *median* value. In the same case, a model using MSE would give many predictions in the range of $[0, 100]$ as it will get skewed towards outliers. Both results are undesirable in many real-world cases."
      ]
    },
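    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "This failure mode can be checked in a few lines (an added sketch, with made-up numbers in the spirit of the example):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# 90% of targets equal 1000, 10% are small values in [0, 100]\n",
        "y = np.concatenate([np.full(90, 1000.0), np.linspace(0, 100, 10)])\n",
        "\n",
        "print(np.median(y))  # MAE-optimal constant prediction: 1000.0\n",
        "print(y.mean())      # MSE-optimal constant prediction: 905.0, fits neither group\n",
        "```"
      ]
    },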
    {
      "cell_type": "code",
      "metadata": {
        "id": "eSZfws92dnxg",
        "cellView": "form"
      },
      "source": [
        "#@markdown Can you think three applications where MSE and MAE, respectively, can be applied?\n",
        "loss_func = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GBZYRr1hQhEo"
      },
      "source": [
        "---\n",
        "# Wrap up: Linear Neural Networks"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "vHvS6_EpQTvA",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Wrap up\n",
        "try: t7;\n",
        "except NameError: t7=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"uXIH35VZDis\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DHRuKacjRMML",
        "cellView": "form"
      },
      "source": [
        "#@markdown #Run Cell to Show Airtable Form\n",
        "#@markdown ##**Confirm your answers and then click \"Submit\"**\n",
        "\n",
        "import time\n",
        "import numpy as np\n",
        "from IPython.display import IFrame\n",
        "\n",
        "def prefill_form(src, fields: dict):\n",
        "  '''\n",
        "  src: the original src url to embed the form\n",
        "  fields: a dictionary of field:value pairs,\n",
        "  e.g. {\"pennkey\": my_pennkey, \"location\": my_location}\n",
        "  '''\n",
        "  prefills = \"&\".join([\"prefill_%s=%s\"%(key, fields[key]) for key in fields])\n",
        "  src = src + prefills\n",
        "  src = \"+\".join(src.split(\" \"))\n",
        "  return src\n",
        "\n",
        "\n",
        "#autofill time if it is not present\n",
        "try: t0;\n",
        "except NameError: t0 = time.time()\n",
        "try: t1;\n",
        "except NameError: t1 = time.time()\n",
        "try: t2;\n",
        "except NameError: t2 = time.time()\n",
        "try: t3;\n",
        "except NameError: t3 = time.time()\n",
        "try: t4;\n",
        "except NameError: t4 = time.time()\n",
        "try: t5;\n",
        "except NameError: t5 = time.time()\n",
        "try: t6;\n",
        "except NameError: t6 = time.time()\n",
        "try: t7;\n",
        "except NameError: t7 = time.time()\n",
        "\n",
        "#autofill fields if they are not present\n",
        "#a missing pennkey and pod will result in an Airtable warning\n",
        "#which is easily fixed user-side.\n",
        "try: my_pennkey;\n",
        "except NameError: my_pennkey = \"\"\n",
        "try: my_pod;\n",
        "except NameError: my_pod = \"Select\"\n",
        "try: sequential;\n",
        "except NameError: sequential = \"\"\n",
        "try: xor;\n",
        "except NameError: xor = \"\"\n",
        "try: xor_solution;\n",
        "except NameError: xor_solution = \"\"\n",
        "try: learning_modes;\n",
        "except NameError: learning_modes = \"\"\n",
        "try: loss_func;\n",
        "except NameError: loss_func = \"\"\n",
        "\n",
        "times = [(t-t0) for t in [t1,t2,t3,t4,t5,t6,t7]]\n",
        "\n",
        "fields = {\"pennkey\": my_pennkey,\n",
        "          \"pod\": my_pod,\n",
        "          \"sequential\": sequential,\n",
        "          \"xor\": xor,\n",
        "          \"xor_solution\": xor_solution,\n",
        "          \"learning_modes\": learning_modes,\n",
        "          \"loss_func\": loss_func,\n",
        "          \"cumulative_times\": times}\n",
        "\n",
        "src = \"https://airtable.com/embed/shrLlXqT8QQdEk728?\"\n",
        "#now instead of the original source url, we do: src = prefill_form(src, fields)\n",
        "display(IFrame(src = prefill_form(src, fields), width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QNFRudPafVrD"
      },
      "source": [
        "---\n",
        "# Feedback\n",
        "How could this session have been better? How happy are you in your group? How do you feel right now?\n",
        "\n",
        "Feel free to use the embedded form below or use this link: https://airtable.com/embed/shrNSJ5ECXhNhsYss"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "t6F7dpf5fVwC"
      },
      "source": [
        "display(IFrame(src=\"https://airtable.com/embed/shrNSJ5ECXhNhsYss?backgroundColor=red\", width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7HL7QEJ824KD"
      },
      "source": [
        "---\n",
        "# Optional Section"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mxdAv8kUs0O0"
      },
      "source": [
        "## Section 6.3: Cosine similarity and where it is often a good idea"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "TcnVeQbUs15P",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Cosine Similarity\n",
        "\n",
        "video = YouTubeVideo(id=\"n8HuO8OcU34\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "w3_LKEmd4Od2"
      },
      "source": [
        "\n",
        "\n",
        "An embedding of a word is simply a vector representation of that word in a lower-dimensional space. Words with similar meanings, e.g., “Joyful” and “Cheerful”, and otherwise closely related words, e.g., “Money” and “Bank”, get nearby vector representations when projected into this lower-dimensional space.\n",
        "The transformation from words to vectors is called a `word embedding`.\n",
        "\n",
        "In NLP applications, the loss function is usually built on the so-called cosine similarity. In mathematical terms, the cosine similarity between two vectors $x_1$ and $x_2$ is defined as:\n",
        "\n",
        "\\begin{equation}\n",
        "CosineSimilarity = \\frac{\\left< x_1, x_2 \\right>}{||x_1||_2 \\cdot ||x_2||_2}\n",
        "\\end{equation}\n",
        "\n",
        "where $||\\cdot||_2$ denotes the *norm-2* of a vector, i.e., the length of a vector, and $\\left< \\cdot, \\cdot \\right>$ denotes the dot product between the input vectors, i.e., $x_1^{\\text{T}}x_2$.\n",
        "\n",
        "This follows from the geometric definition of the Euclidean dot product between two vectors, i.e.,:\n",
        "\n",
        "\\begin{equation}\n",
        "\\left< x_1, x_2 \\right> = ||x_1||_2 \\cdot ||x_2||_2 \\cdot \\text{cos}(\\phi)\n",
        "\\end{equation}\n",
        "\n",
        "where $\\phi$ is the angle between $x_1$ and $x_2$.\n",
        "\n",
        "So the underlying concept in creating a mini word embedding boils down to training a simple linear neural network with only an input and an output layer!\n",
        "\n",
        "Here, we will construct the training dataset from triplets of words, two of them belonging to the same sentence and the third to a different sentence. One training example looks like this:\n",
        "\n",
        "\\begin{equation}\n",
        "example = [word, same, different]\n",
        "\\end{equation}\n",
        "\n",
        "\n",
        "We want to maximize the similarity of the words found in the same text while minimizing the similarity with the word from a different sentence.\n",
        "\n",
        "The intuition is the following:\n",
        "\n",
        "1. Cosine similarity (**cos_sim**) takes values in the range $[-1,1]$, but to minimize correlation, we want cosine to be $0$, and not $-1$.\n",
        "2. We want to maximize the positive comparison ($cos\\_sim(word,same)=1$) and minimize the negative comparison ($cos\\_sim(word,diff)=0$)\n",
        "3. The so-called *triplet margin loss* is defined as $L = d(x_1,x_2) - d(x_1,x_3) + margin$, where $d(\\cdot)$ denotes a distance metric, e.g., Euclidean distance, for some arbitrary vectors $x_1, x_2, x_3$.\n",
        "\n",
        "Maximization of $cos\\_sim(\\cdot, \\cdot)$ is equivalent to minimization of $1 - cos\\_sim(\\cdot, \\cdot)$.\n",
        "\n",
        "Putting it all together, the triplet margin loss using cosine similarity is:\n",
        "\n",
        "\\begin{align}\n",
        "L &{}= \\left( 1-cos\\_sim(word,same) \\right) - \\left( 1-cos\\_sim(word,diff) \\right) + margin \\\\\n",
        "&{}= -cos\\_sim(word,same) + cos\\_sim(word,diff) + margin\n",
        "\\end{align}\n",
        "\n",
        "The ideal loss would be $-1 + 0 + margin$. \n",
        "\n",
        "Setting the $margin$ to $1$ would ensure that our loss could never be negative, but it is too aggressive. Thus, we usually pick a lower value, e.g., $0.5$."
      ]
    },
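    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check (a minimal sketch, independent of the exercise below), we can evaluate the triplet loss above on hand-picked vectors: a `same` vector parallel to `word` and a `diff` vector orthogonal to it should yield the ideal loss $-1 + 0 + margin$."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch\n",
        "import torch.nn.functional as F\n",
        "\n",
        "margin = 0.5\n",
        "word = torch.tensor([1., 0.])\n",
        "same = torch.tensor([2., 0.])  # parallel to `word`, so cos_sim(word, same) = 1\n",
        "diff = torch.tensor([0., 3.])  # orthogonal to `word`, so cos_sim(word, diff) = 0\n",
        "\n",
        "loss = (-F.cosine_similarity(word, same, dim=0)\n",
        "        + F.cosine_similarity(word, diff, dim=0) + margin)\n",
        "print(loss.item())  # ideal loss: -1 + 0 + 0.5 = -0.5"
      ],
      "execution_count": null,
      "outputs": []
    },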
    {
      "cell_type": "code",
      "metadata": {
        "id": "0z5BQ5j7j3Pe"
      },
      "source": [
        "# Sample Document (Recreated from the tom and jerry cartoon)\n",
        "        \n",
        "sample_docs = ['cat and mouse are buddies',\n",
        "               'mouse lives in hole',\n",
        "               'cat lives in house',\n",
        "               'cat chases mouse',\n",
        "               'cat catches mouse',\n",
        "               'cat eats mouse',\n",
        "               'mouse runs into hole',\n",
        "               'cat says bad words',\n",
        "               'cat and mouse are pals',\n",
        "               'cat and mouse are chums',\n",
        "               'mouse stores food in hole',\n",
        "               'cat stores food in house',\n",
        "               'mouse sleeps in hole',\n",
        "               'cat sleeps in house']"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jiljva-K-9jz"
      },
      "source": [
        "### Section 6.3.1 Constructing the dataset\n",
        "\n",
        "We aim to translate each word into a vector of real numbers. Towards this goal, first, we give an index to each word, and then we transform each word into a vector with all zeros but one position equal to one (one-hot encoding).\n",
        "\n",
        "First, we assign an integer to each word in our vocabulary, consisting of all words in the text!"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "yJmhG46iG5My"
      },
      "source": [
        "# Build mappings from indices to words and vice versa\n",
        "idx_2_word, word_2_idx = idx_word(sample_docs)\n",
        "\n",
        "# Total vocabulary size\n",
        "vocab_size = len(idx_2_word)\n",
        "\n",
        "# Transform the indices in one hot encoding\n",
        "encoded_docs = [one_hot_map(d, word_2_idx) for d in sample_docs]\n",
        "\n",
        "# Padding for consistency (i.e., adding zeros if the length is smaller than the max length)\n",
        "max_length = max([len(e) for e in encoded_docs]) + 3\n",
        "padded_docs = padding_seqs(encoded_docs, max_len=max_length)\n",
        "print(padded_docs)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7BcxOYI-G6AQ"
      },
      "source": [
        "Then, we map each integer into one-hot encoding. For example:\n",
        "\n",
        "\\begin{align}\n",
        "`and` &{} \\rightarrow `1` \\rightarrow [0, 1, 0, 0, 0, 0, \\dots, 0] \\\\\n",
        "`buddies` &{} \\rightarrow `4` \\rightarrow [0, 0, 0, 0, 1, 0, \\dots, 0] \n",
        "\\end{align}"
      ]
    },
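    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "This word $\\rightarrow$ index $\\rightarrow$ one-hot pipeline can be sketched with plain NumPy (a toy illustration with a made-up three-word vocabulary, independent of the helper functions used above):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import numpy as np\n",
        "\n",
        "toy_vocab = ['and', 'cat', 'mouse']  # toy vocabulary; index 0 is reserved for padding\n",
        "toy_word_2_idx = {w: i + 1 for i, w in enumerate(toy_vocab)}\n",
        "\n",
        "eye = np.eye(len(toy_vocab) + 1)  # row i of the identity is the one-hot vector for index i\n",
        "print(toy_word_2_idx['and'], eye[toy_word_2_idx['and']])  # 1 [0. 1. 0. 0.]"
      ],
      "execution_count": null,
      "outputs": []
    },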
    {
      "cell_type": "code",
      "metadata": {
        "id": "F9BQmZbO-otv"
      },
      "source": [
        "# Construct the training data as triplets of [word1SameSentence, word2SameSentence, word3DiffSentence]\n",
        "training_data = np.empty((0,3))\n",
        "\n",
        "for i in range(len(padded_docs)):\n",
        "  sentence = padded_docs[i]\n",
        "  x = sentence[np.argwhere(sentence!=0).squeeze()]\n",
        "  x = np.unique(x)\n",
        "  pairs = combinations(list(x))\n",
        "  for comb in pairs:\n",
        "    for j in range(len(padded_docs)):\n",
        "      if j != i:\n",
        "        sentence2 = padded_docs[j]\n",
        "        y = sentence2[np.argwhere(sentence2!=0).squeeze()]\n",
        "        for xi in y:\n",
        "          training_data = np.append(training_data,\n",
        "                                    [[comb[0], comb[1], xi]],\n",
        "                                    axis=0)\n",
        "\n",
        "# print the data shapes\n",
        "print(training_data.shape)\n",
        "# Shuffle the data\n",
        "np.random.shuffle(training_data)\n",
        "\n",
        "# One-hot encode the three columns (a single encoder fitted on the\n",
        "# full index range [0, vocab_size] works for all of them)\n",
        "enc = OneHotEncoder()\n",
        "enc.fit(np.array(range(vocab_size+1)).reshape(-1,1))\n",
        "onehot_label_x1 = enc.transform(training_data[:,0].reshape(-1,1)).toarray()\n",
        "onehot_label_x2 = enc.transform(training_data[:,1].reshape(-1,1)).toarray()\n",
        "onehot_label_x3 = enc.transform(training_data[:,2].reshape(-1,1)).toarray()\n",
        "\n",
        "# From Numpy to Torch\n",
        "onehot_label_x1 = torch.from_numpy(onehot_label_x1)\n",
        "onehot_label_x2 = torch.from_numpy(onehot_label_x2)\n",
        "onehot_label_x3 = torch.from_numpy(onehot_label_x3)\n",
        "print(onehot_label_x1.shape, onehot_label_x2.shape, onehot_label_x3.shape)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iu5hs_xwoe0U"
      },
      "source": [
        "### Section 6.3.2: Construct the model and train it"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "b-5Kqbv9yPIy"
      },
      "source": [
        "#### Exercise 7: Write the custom loss function\n",
        "\n",
        "As we mentioned above, here we will use a modified triplet margin loss function. The input `x` in the function is a `3D` `torch.Tensor`, and it should return the average loss across `N` examples.\n",
        "\n",
        "*Hint*: the input `x` has dimensions $(N \\times L \\times D)$, where $N$ denotes the number of triplets, $L$ the number of words in a triplet (here $L=3$), and $D$ the dimension of the embedding. You can use the `cosine_similarity` function from `torch.nn.functional` (see how we import it)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "u3k0UerdyKHY"
      },
      "source": [
        "def criterion(x):\n",
        "  margin = 0.5\n",
        "  ####################################################################\n",
        "  # Fill in missing code below (...),\n",
        "  # then remove or comment the line below to test your function\n",
        "  raise NotImplementedError(\"Calculate the `mean` loss\")\n",
        "  ####################################################################\n",
        "  loss = ...\n",
        "  return loss\n",
        "\n",
        "\n",
        "x = torch.tensor([[[1., 2.], [3., 2.], [9., 2.]],\n",
        "                  [[-1., 3.], [4., 2.], [8., 2.]],\n",
        "                  [[1., 1.], [-2., 1.], [7., 4.]]])\n",
        "print(x)\n",
        "\n",
        "# uncomment the following line to check your function\n",
        "# print(criterion(x))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "vx_zs-muwZLi"
      },
      "source": [
        "# to_remove solution\n",
        "def criterion(x):\n",
        "  margin = 0.5\n",
        "  loss = torch.mean(-F.cosine_similarity(x[:,0], x[:,1]) +\n",
        "                    F.cosine_similarity(x[:,0], x[:,2]) +\n",
        "                    margin)\n",
        "  \n",
        "  return loss\n",
        "\n",
        "\n",
        "x = torch.tensor([[[1., 2.], [3., 2.], [9., 2.]],\n",
        "                  [[-1., 3.], [4., 2.], [8., 2.]],\n",
        "                  [[1., 1.], [-2., 1.], [7., 4.]]])\n",
        "print(x)\n",
        "\n",
        "print(criterion(x))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ugQdH4nI4X0g"
      },
      "source": [
        "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
        "\n",
        "# Hyperparameters\n",
        "input_size = vocab_size + 1\n",
        "hidden_size = 2\n",
        "learning_rate = 0.01\n",
        "num_epochs = 100\n",
        "batch_size = 10\n",
        "\n",
        "# Our simple model\n",
        "net = nn.Linear(input_size, hidden_size)\n",
        "\n",
        "# Weights initialization\n",
        "sigma = 0.01\n",
        "net.weight.data.normal_(0, sigma)\n",
        "net.bias.data.normal_(0, sigma)\n",
        "\n",
        "net.to(device)\n",
        "net.train(True)\n",
        "\n",
        "# Optimizer  \n",
        "optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate)\n",
        "\n",
        "# Training loop\n",
        "loss_val = []\n",
        "_data = torch.stack((onehot_label_x1, \n",
        "                     onehot_label_x2,\n",
        "                     onehot_label_x3), axis=1).float().to(device)\n",
        "\n",
        "\n",
        "epoch_range = trange(num_epochs, desc='loss: ', leave=True)\n",
        "for epoch in epoch_range:\n",
        "  if loss_val:\n",
        "    epoch_range.set_description(\"loss: {:.6f}\".format(loss_val[-1]))\n",
        "    epoch_range.refresh() # to show immediately the update\n",
        "  time.sleep(0.01)\n",
        "\n",
        "  for idx in range(0, len(_data), batch_size):\n",
        "    batch = _data[idx:idx+batch_size]\n",
        "  \n",
        "    # Forward pass\n",
        "    output = net(batch.float())\n",
        "    \n",
        "    # Custom loss\n",
        "    loss = criterion(output)\n",
        "\n",
        "    # Backward and optimize\n",
        "    optimizer.zero_grad()\n",
        "    loss.backward()\n",
        "    optimizer.step()    \n",
        "  \n",
        "  loss_val.append(loss.item())\n",
        "\n",
        "plt.figure(figsize=(8, 5))\n",
        "plt.plot(loss_val)\n",
        "plt.ylabel('cost [a.u.]')\n",
        "plt.xlabel('epoch')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YANlxERuozLb"
      },
      "source": [
        "### Section 6.3.3: Evaluate the model's performance"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "F_dy03sDHfMN"
      },
      "source": [
        "docs = ['cat and mouse are buddies hole lives in house chases catches runs '\n",
        "        'into says bad words pals chums stores sleeps']\n",
        "encoded_docs = [one_hot_map(d, word_2_idx) for d in docs]\n",
        "\n",
        "test_arr = np.array([[ 1.,  2., 3., 4., 5., 8., 6., 7., 9., 10., 11., 13.,\n",
        "                      14., 15., 16., 17., 18., 19., 20., 22.]])\n",
        "test = enc.transform(test_arr.reshape(-1,1)).toarray().astype(np.float32)\n",
        "test = torch.from_numpy(test).float().to(device)\n",
        "\n",
        "docs = ['cat', 'and', 'mouse', 'are', 'buddies', 'hole', 'lives',\n",
        "        'in', 'house', 'chases', 'catches', 'runs', 'into', 'says', 'bad',\n",
        "        'words', 'pals', 'chums', 'stores', 'sleeps']\n",
        "\n",
        "with torch.no_grad():\n",
        "  output = net(test)\n",
        "\n",
        "# Cosine Similarity Matrix\n",
        "cos = nn.CosineSimilarity(dim=0, eps=1e-6)\n",
        "\n",
        "similarities = np.empty((len(output), len(output)))\n",
        "\n",
        "for i in range(len(output)):\n",
        "  for j in range(len(output)):\n",
        "    similarities[i,j] = cos(output[i], output[j])\n",
        "\n",
        "plt.figure(figsize=(14,14))\n",
        "plt.imshow(np.triu(similarities), cmap='gray')\n",
        "cbar = plt.colorbar()\n",
        "cbar.ax.set_ylabel('cosine similarity')\n",
        "plt.xticks(range(len(docs)), docs, rotation=45, fontsize=20)\n",
        "plt.yticks(range(len(docs)), docs, fontsize=20)\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "pkGnzvGBA_S8"
      },
      "source": [
        "xs = []\n",
        "ys = []\n",
        "for i in range(len(output)):\n",
        "  xs.append(output[i][0].cpu().detach().numpy())\n",
        "  ys.append(output[i][1].cpu().detach().numpy())\n",
        "\n",
        "docs = ['cat', 'and', 'mouse', 'are', 'buddies', 'hole', 'lives', 'in', 'house',\n",
        "        'chases', 'catches', 'runs', 'into', 'says', 'bad', 'words', 'pals',\n",
        "        'chums', 'stores', 'sleeps']\n",
        "\n",
        "plt.figure(figsize=(12, 12))\n",
        "plt.scatter(xs, ys)\n",
        "label = docs\n",
        "\n",
        "for i, (x, y) in enumerate(zip(xs, ys)):\n",
        "  plt.annotate(label[i], (x, y), textcoords=\"offset points\",\n",
        "              xytext=(0, 10), fontsize=20,\n",
        "              ha=random.choice(['left', 'right', 'center']))\n",
        "\n",
        "plt.title(\"Trained Model\")\n",
        "plt.xlabel('$X_1$')\n",
        "plt.ylabel('$X_2$')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TFQiUnmX2V8A"
      },
      "source": [
        "In the plot above, we can see how the word embeddings we learn are distributed in the $2D$ space. Notice that `mouse` and `cat` are separated in space. The words `buddies`, `pals`, and `chums` are close to one another. This linear embedding 'semantically' makes sense, right?\n",
        "\n",
        "Obviously, as you will learn later in the course, more sophisticated NLP approaches can build a word embedding. Here, we explore one of the simplest cases, showing an interesting way of using the cosine similarity."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EAy_1dl-32qS"
      },
      "source": [
        "---\n",
        "## Section 7: High dimensional spaces intuition\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GSAZUnZ5ra5D",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Angle between Gaussian Drawn High Dimensional Vectors\n",
        "\n",
        "video = YouTubeVideo(id=\"OyiHIFqJvJ0\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fAsyRyXd5xxT"
      },
      "source": [
        "Let us calculate the angles between pairs of vectors sampled from an isotropic Gaussian, i.e., $x_i, y_i \\sim \\mathcal{N}(0, \\sigma^2I_D)$."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "c9Daohw-4C9B"
      },
      "source": [
        "def plot_angles(sample_size, dimensions):\n",
        "\n",
        "  plt.figure(figsize=(15, 12))\n",
        "  for cnt, D in enumerate(dimensions):\n",
        "\n",
        "    angles = []\n",
        "        \n",
        "    if D != 1:\n",
        "      mean = np.zeros(D)\n",
        "      cov = 2*np.eye(D)  # diagonal covariance, i.e., isotropic gaussian\n",
        "      x = np.random.multivariate_normal(mean, cov, size=sample_size).T\n",
        "      x /= np.linalg.norm(x, axis=0).reshape(1, -1)\n",
        "      y = np.random.multivariate_normal(mean, cov, size=sample_size).T\n",
        "      y /= np.linalg.norm(y, axis=0).reshape(1, -1)\n",
        "      dot_product = np.dot(x.T, y)\n",
        "      angles = np.arccos(dot_product)\n",
        "      angles = angles[~np.isnan(angles)]\n",
        "    \n",
        "    elif D == 1:\n",
        "      x = np.random.randn(sample_size)\n",
        "      y = np.random.randn(sample_size)\n",
        "      dot_product = x * y\n",
        "      angles = np.arccos(dot_product)\n",
        "      angles = angles[~np.isnan(angles)]\n",
        "      \n",
        "    if cnt == 0:\n",
        "      bins = np.histogram(np.degrees(angles), bins=100)[1]  # get the bin edges\n",
        "\n",
        "    mean = np.round(np.mean(np.degrees(angles)), 1)\n",
        "    std = np.round(np.std(np.degrees(angles)), 1)\n",
        "    plt.subplot(3, 2, cnt + 1)\n",
        "    vals, bins, p = plt.hist(np.degrees(angles), bins=bins,\n",
        "                            density=True, alpha=0.6)\n",
        "    plt.xlabel('angle')\n",
        "    plt.xlim([0, 180])\n",
        "    plt.vlines(x=mean, ymin=0.0, ymax=max(vals),\n",
        "              colors='red', linewidth=1.0,\n",
        "              label=f'mean angle:{mean}')\n",
        "    plt.vlines(x=mean-std, ymin=0.0, ymax=max(vals),\n",
        "              colors='red', linewidth=1.0, linestyle='--',\n",
        "              label=f'std:{std}')\n",
        "    plt.vlines(x=mean+std, ymin=0.0, ymax=max(vals),\n",
        "              colors='red', linewidth=1.0, linestyle='--')\n",
        "    plt.ylabel('probability density')\n",
        "    plt.title(f'{D}-D Gaussian')\n",
        "    plt.legend()\n",
        "      \n",
        "  plt.tight_layout()\n",
        "\n",
        "sample_size = 500\n",
        "dimensions = [1, 2, 5, 10, 100, 1000]\n",
        "plot_angles(sample_size, dimensions)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qxyN_qyXJBmL"
      },
      "source": [
        "Notice that, as we increase the dimension of the Gaussian, the mean angle stays constant at $90$ degrees, while the standard deviation becomes smaller and smaller. Thus, as $D \\rightarrow \\infty$, the angle between two randomly sampled vectors concentrates at $90$ degrees, i.e., the two vectors become orthogonal."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GnwsLQz15z9i",
        "cellView": "form"
      },
      "source": [
        "#@title Video:Prove 90 Degrees\n",
        "\n",
        "video = YouTubeVideo(id=\"1p2nip3qnO0\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fHqGBAqx5y9N"
      },
      "source": [
        "### Proof of 90 degrees in high $D$ space\n",
        "\n",
        "Here, we want to prove that the angle between two vectors sampled from an isotropic normal distribution concentrates at $90$ degrees, i.e., that the cosine of the angle goes to zero, as we increase the dimension $D$.\n",
        "\n",
        "\\begin{equation}\n",
        "\\textbf{x}, \\textbf{y} \\sim \\mathcal{N}(\\textbf{0}, \\sigma^2 I_D)\n",
        "\\end{equation}\n",
        "\n",
        "where $I_D$ is the $D\\times D$ identity matrix.\n",
        "\n",
        "We want to find the expectation and the variance of the angle ($\\theta$) between $\\textbf{x}$ and $\\textbf{y}$, so we need to calculate $cos(\\theta)$.\n",
        "\n",
        "We know that\n",
        "\n",
        "\\begin{equation}\n",
        "\\textbf{x}^{\\text{T}} \\textbf{y} = ||\\textbf{x}||_2||\\textbf{y}||_2 cos(\\theta) \\iff \\\\\n",
        "cos(\\theta) = \\frac{\\textbf{x}^{\\text{T}} \\textbf{y}}{||\\textbf{x}||_2 \\cdot ||\\textbf{y}||_2}\n",
        "\\end{equation}\n",
        "\n",
        "where $||\\cdot||_2$ denotes the norm-2, or mathematically $||\\textbf{x}||_2 = \\sqrt{ \\sum_{i=1}^Dx_i^2}$.\n",
        "\n",
        "So, we want to find the expectation and the variance of all terms involved.\n",
        "\n",
        "\\begin{equation}\n",
        "\\mathbb{E}\\left[ \\textbf{x}^{\\text{T}} \\textbf{y} \\right] = 0\n",
        "\\end{equation}\n",
        "\n",
        "because both $\\textbf{x}$ and $\\textbf{y}$ are zero-mean and are uncorrelated.\n",
        "\n",
        "Also, we know that \n",
        "\n",
        "\\begin{equation}\n",
        "\\mathbb{E}\\left[ || \\textbf{x} ||_2 \\right] = \\sqrt{2} \\sigma \\frac{ \\Gamma\\left( \\frac{D+1}{2} \\right) } {\\Gamma \\left( \\frac{D}{2}\\right)}\n",
        "\\end{equation}\n",
        "\n",
        "where $\\Gamma( \\cdot )$ is the [**Gamma function**](https://en.wikipedia.org/wiki/Gamma_function), $\\Gamma(n) = (n-1)!$, $\\Gamma \\left( n + \\frac{1}{2} \\right) = {n - \\frac{1}{2}\\choose n} n! \\sqrt{\\pi}$\n",
        "\n",
        "Thus, the expectation of $cos(\\theta)$ is zero, and the corresponding mean angle is $90^o$. So, what we care about here is the variance.\n",
        "\n",
        "By definition, we know that:\n",
        "\n",
        "\\begin{equation}\n",
        "Var(||\\textbf{x}||_2) = \\mathbb{E}\\left[ ||\\textbf{x}||_2^2\\right] - \\left( \\mathbb{E}\\left[ ||\\textbf{x}||_2\\right] \\right)^2\n",
        "\\end{equation}\n",
        "\n",
        "The expectation of the squared norm can be written as:\n",
        "\n",
        "\\begin{equation}\n",
        "\\mathbb{E}(||\\textbf{x}||_2^2) = \\mathbb{E}\\left[ \\sum_{i=1}^{D}x_i^2\\right] = \\sum_{i=1}^{D}\\mathbb{E}\\left[ x_i^2 \\right] = \\sum_{i=1}^{D}Var(x_i) = trace(\\Sigma) = D\\sigma^2\n",
        "\\end{equation}\n",
        "\n",
        "as $\\textbf{x}$ is zero-mean distributed.\n",
        "\n",
        "Thus, \n",
        "\n",
        "\\begin{equation}\n",
        "Var(||\\textbf{x}||_2) = D\\sigma^2 - 2\\sigma^2  \\left( \\frac{ \\Gamma\\left( \\frac{D+1}{2} \\right) } {\\Gamma \\left( \\frac{D}{2}\\right)} \\right)^2\n",
        "\\end{equation}\n",
        "\n",
        "In very high dimensional spaces, $\\left( \\frac{ \\Gamma\\left( \\frac{D+1}{2} \\right) } {\\Gamma \\left( \\frac{D}{2}\\right)} \\right)^2 \\approx \\frac{D}{2} - \\frac{1}{4}$, and so the formula reduces to:\n",
        "\n",
        "\\begin{equation}\n",
        "Var(||\\textbf{x}||_2) \\approx D\\sigma^2 - 2\\sigma^2 \\left( \\frac{D}{2} - \\frac{1}{4} \\right) = \\frac{\\sigma^2}{2}\n",
        "\\end{equation}\n",
        "\n",
        "That is, the variance of the norm stays bounded while the mean $\\mathbb{E}\\left[ ||\\textbf{x}||_2 \\right] \\approx \\sigma \\sqrt{D}$ keeps growing, so the norm concentrates around $\\sigma \\sqrt{D}$.\n",
        "\n",
        "Next, we calculate the variance of the inner product. Since $x_i$ and $y_i$ are independent and zero-mean, $Var(x_iy_i) = Var(x_i)Var(y_i) = \\sigma^4$, and therefore:\n",
        "\n",
        "\\begin{equation}\n",
        "Var(\\textbf{x}^{\\text{T}} \\textbf{y}) = Var\\left(\\sum_{i=1}^{D} x_iy_i \\right) = \\sum_{i=1}^{D}Var(x_iy_i) = D\\sigma^4\n",
        "\\end{equation}\n",
        "\n",
        "Since the norms in the denominator concentrate around $\\sigma\\sqrt{D}$, the variance of $cos(\\theta)$ scales as $\\frac{D\\sigma^4}{(\\sigma\\sqrt{D})^2 (\\sigma\\sqrt{D})^2} = \\frac{1}{D}$, which approaches zero as we increase the dimension.\n",
        "\n",
        "So, in high dimensional spaces, two random vectors are, on average, orthogonal."
      ]
    },
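    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The $1/D$ scaling of the variance of $cos(\\theta)$ can be checked numerically (a quick Monte Carlo sketch; the sample mean of $cos(\\theta)$ should stay near $0$ while $D \\cdot Var(cos(\\theta))$ stays near $1$):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import numpy as np\n",
        "\n",
        "rng = np.random.default_rng(0)\n",
        "n_pairs = 2000\n",
        "\n",
        "for D in (10, 100, 1000):\n",
        "  x = rng.standard_normal((n_pairs, D))\n",
        "  y = rng.standard_normal((n_pairs, D))\n",
        "  cos_theta = np.sum(x * y, axis=1) / (np.linalg.norm(x, axis=1) *\n",
        "                                       np.linalg.norm(y, axis=1))\n",
        "  # Var(cos) shrinks roughly like 1/D, so D * Var(cos) stays near 1\n",
        "  print(D, np.mean(cos_theta).round(3), (D * np.var(cos_theta)).round(2))"
      ],
      "execution_count": null,
      "outputs": []
    },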
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ywsuyWI-s04A"
      },
      "source": [
        "### Distance of high dimensional vectors"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "AqW3rYlRQF-1",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Distance Between Gaussian Drawn High Dimensional Vectors\n",
        "\n",
        "video = YouTubeVideo(id=\"HXPZWoobWXs\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4UAsENzXJs1V"
      },
      "source": [
        "Having calculated the angle between two randomly sampled vectors, we now calculate their distance, i.e., the norm of their difference. Note that, in the $1D$ case, we plot the absolute difference of the two samples."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FIQNObguvQg9"
      },
      "source": [
        "#### Exercise 8: Plot distances in high D spaces\n",
        "\n",
        "Now it is your turn to calculate the distances in spaces of varying dimension. You may find the function `np.linalg.norm` useful. Execute `? np.linalg.norm` in a scratch cell to see its docstring.\n",
        "\n",
        "*Hint:* Choose wisely the axis of the input `ndarray` along which `np.linalg.norm` computes the vector norms..."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "UkUSCzpVuxOC"
      },
      "source": [
        "def plot_distances(sample_size, dimensions):\n",
        "\n",
        "  plt.figure(figsize=(15, 12))\n",
        "  for cnt, D in enumerate(dimensions):\n",
        "\n",
        "    norms = []\n",
        "\n",
        "    if D != 1:\n",
        "      mean = np.zeros(D)\n",
        "      cov = 2*np.eye(D)  # diagonal covariance, i.e., isotropic gaussian\n",
        "      x = np.random.multivariate_normal(mean, cov, size=sample_size).T\n",
        "      x /= np.linalg.norm(x, axis=0).reshape(1,-1)\n",
        "      y = np.random.multivariate_normal(mean, cov, size=sample_size).T\n",
        "      y /= np.linalg.norm(y, axis=0).reshape(1,-1)\n",
        "      ####################################################################\n",
        "      # Fill in missing code below (...),\n",
        "      # then remove or comment the line below to test your function\n",
        "      raise NotImplementedError(\"Calculate the distance in D>1 spaces\")\n",
        "      ####################################################################\n",
        "      norms = ...\n",
        "    \n",
        "    elif D == 1:\n",
        "      x = np.random.randn(sample_size)\n",
        "      y = np.random.randn(sample_size)\n",
        "      ####################################################################\n",
        "      # Fill in missing code below (...),\n",
        "      # then remove or comment the line below to test your function\n",
        "      raise NotImplementedError(\"Calculate the distance in 1D space\")\n",
        "      ####################################################################\n",
        "      norms = ...\n",
        "    \n",
        "    if cnt == 0:\n",
        "      bins = np.histogram(norms, bins=100)[1]  # get the bin edges\n",
        "    plt.subplot(3, 2, cnt + 1)\n",
        "    plt.hist(norms, bins=bins, density=True, alpha=0.5)\n",
        "    if D == 1:\n",
        "      plt.xlabel('|x-y|')\n",
        "    else:\n",
        "      plt.xlabel('||x-y||')\n",
        "\n",
        "    plt.ylabel('probability density')\n",
        "    plt.title(f'{D}-D Gaussian')\n",
        "      \n",
        "  plt.tight_layout()\n",
        "\n",
        "## uncomment the line below to test your function\n",
        "# sample_size = 500\n",
        "# dimensions = [1, 2, 5, 10, 100, 1000]\n",
        "# plot_distances(sample_size, dimensions)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "iVteU_PK4YVx"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "def plot_distances(sample_size, dimensions):\n",
        "\n",
        "  plt.figure(figsize=(15, 12))\n",
        "  for cnt, D in enumerate(dimensions):\n",
        "\n",
        "    norms = []\n",
        "\n",
        "    if D != 1:\n",
        "      mean = np.zeros(D)\n",
        "      cov = 2*np.eye(D)  # diagonal covariance, i.e., isotropic gaussian\n",
        "      x = np.random.multivariate_normal(mean, cov, size=sample_size).T\n",
        "      x /= np.linalg.norm(x, axis=0).reshape(1,-1)\n",
        "      y = np.random.multivariate_normal(mean, cov, size=sample_size).T\n",
        "      y /= np.linalg.norm(y, axis=0).reshape(1,-1)\n",
        "      norms = np.linalg.norm(x-y, axis=0, ord=2)\n",
        "    \n",
        "    elif D == 1:\n",
        "      x = np.random.randn(sample_size)\n",
        "      y = np.random.randn(sample_size)\n",
        "      norms = np.abs(x - y)\n",
        "    \n",
        "    if cnt == 0:\n",
        "      bins = np.histogram(norms, bins=100)[1]  # get the bin edges\n",
        "    plt.subplot(3, 2, cnt + 1)\n",
        "    plt.hist(norms, bins=bins, density=True, alpha=0.5)\n",
        "    if D == 1:\n",
        "      plt.xlabel('|x-y|')\n",
        "    else:\n",
        "      plt.xlabel('||x-y||')\n",
        "\n",
        "    plt.ylabel('probability density')\n",
        "    plt.title(f'{D}-D Gaussian')\n",
        "      \n",
        "  plt.tight_layout()\n",
        "\n",
        "\n",
        "sample_size = 500\n",
        "dimensions = [1, 2, 5, 10, 100, 1000]\n",
        "with plt.xkcd():\n",
        "  plot_distances(sample_size, dimensions)"
      ],
      "execution_count": null,
      "outputs": []
    }
  ]
}
