{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "W2_Tutorial1.ipynb",
      "provenance": [],
      "collapsed_sections": [],
      "toc_visible": true,
      "include_colab_link": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.7.8"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/CIS-522/course-content/blob/main/tutorials/W02_PyTorchDLN/W2_Tutorial1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ML2DwqwkVwfo"
      },
      "source": [
        "# CIS-522 Week 2 Part 1\n",
        "# PyTorch\n",
        "\n",
        "__Instructor:__ Konrad Kording\n",
        "\n",
        "__Content creators:__ Ameet Rahane, Spiros Chavlis"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "exeRQO8LnRZB"
      },
      "source": [
        "---\n",
        "# Today's agenda\n",
        "In the first tutorial of Week 2, we are going to learn about PyTorch. As a case study, we will use a simple linear regression model. Today we will:\n",
        "\n",
        "1. Learn about PyTorch and how to use it\n",
        "2. Familiarize yourselves with the concept of automatic differentiation\n",
        "3. Implement from scratch a function for linear regression\n",
        "4. Check the results with an analytical solution\n",
        "\n",
        "But before we start, let's do a recap of what we have learned in the previous week."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Y_mmmZQ0TIBi"
      },
      "source": [
        "---\n",
        "# Recap the experience from last week"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xRPk6HG-Rj5N",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Recap of Week 1\n",
        "import time\n",
        "try: t0;\n",
        "except NameError: t0=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"3EuhRYr9uf8\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "uiN7yWdPSiCH",
        "cellView": "form"
      },
      "source": [
        "#@markdown What is your Pennkey and pod? (text, not numbers, e.g. bfranklin)\n",
        "my_pennkey = '' #@param {type:\"string\"}\n",
        "my_pod = 'Select' #@param ['Select', 'euclidean-wombat', 'sublime-newt', 'buoyant-unicorn', 'lackadaisical-manatee','indelible-stingray','superfluous-lyrebird','discreet-reindeer','quizzical-goldfish','astute-jellyfish','ubiquitous-cheetah','nonchalant-crocodile','fashionable-lemur','spiffy-eagle','electric-emu','quotidian-lion']\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "q1qjVNRuTji2"
      },
      "source": [
        "Meet with your pod for 10 minutes to discuss what you learned, what was clear, and what you hope to learn more about."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "BHR0gq1VThDc",
        "cellView": "form"
      },
      "source": [
        "#@markdown Spend 3 minutes telling us on Airtable what the upshot of this discussion is for you.\n",
        "my_w1_upshot = '' #@param {type:\"string\"}\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NKcmeRmaRatr"
      },
      "source": [
        "## What do you hope to learn more about this week?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "if_BkdUEVNLL",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Course Objectives and Short Term Plans\n",
        "\n",
        "video = YouTubeVideo(id=\"rs7KaZw-vjU\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hDpGNphFVWvU"
      },
      "source": [
        "Clearly, we can run code. But now we want to be good enough at this to solve our own problems, and to do so we need to understand PyTorch from the bottom up."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "izuu6SACVbcL",
        "cellView": "form"
      },
      "source": [
        "#@markdown Write 3 sentences on the main things you hope to get out of the course.\n",
        "my_expectations = '' #@param {type:\"string\"}\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5x61BA1qPubi"
      },
      "source": [
        "---\n",
        "# Setup"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "uw7q7Z_ZPt66"
      },
      "source": [
        "# imports\n",
        "import numpy as np\n",
        "import random\n",
        "import matplotlib.pylab as plt\n",
        "import matplotlib as mpl\n",
        "import pandas as pd\n",
        "from tqdm.notebook import tqdm, trange\n",
        "\n",
        "import torch\n",
        "from torch.autograd import Variable\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "import torch.optim as optim"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LID483Ou-z53",
        "cellView": "form"
      },
      "source": [
        "# @title Figure Settings\n",
        "%config InlineBackend.figure_format = 'retina'\n",
        "%matplotlib inline \n",
        "\n",
        "fig_w, fig_h = (8, 6)\n",
        "plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})\n",
        "\n",
        "plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")\n",
        "\n",
        "# plt.rcParams.update({\n",
        "#             \"text.usetex\": False,\n",
        "#             \"font.family\": \"serif\",\n",
        "#             \"font.serif\": [\"Palatino\"],\n",
        "#             \"font.size\": 18,\n",
        "#             })\n",
        "\n",
        "# mpl.rcParams['axes.spines.right'] = False\n",
        "# mpl.rcParams['axes.spines.top'] = False\n",
        "# mpl.rcParams['axes.linewidth'] = 2\n",
        "# mpl.rcParams['ytick.major.width'] = 2\n",
        "# mpl.rcParams['xtick.major.width'] = 2\n",
        "# mpl.rcParams['ytick.major.size'] = 7\n",
        "# mpl.rcParams['xtick.major.size'] = 7"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "A9dj2mri60tR",
        "cellView": "form"
      },
      "source": [
        "#@title Helper functions\n",
        "\n",
        "def synthetic_dataset(w, b, num_examples=1000, sigma=0.01, seed=2021):\n",
        "  '''\n",
        "  Synthetic data generator of the form:\n",
        "      y = w.T @ X + b + gaussian_noise(0, sigma).\n",
        "  \n",
        "  Parameters\n",
        "  ----------\n",
        "  w : torch.tensor\n",
        "      weights. The length of `w` denotes the number of independent variables\n",
        "  b : torch.tensor\n",
        "      bias (offset or intercept).\n",
        "  num_examples : INT, optional\n",
        "      Number of examples (samples) to generate. The default is 1000.\n",
        "  sigma : FLOAT, optional\n",
        "      Standard deviation of the Gaussian noise. The default is 0.01.\n",
        "  seed : INT, optional\n",
        "      Seed the RNG for reproducibility. The default is 2021.\n",
        "  \n",
        "  Returns\n",
        "  -------\n",
        "  X: torch.tensor\n",
        "      the independent variable(s).\n",
        "  y: torch.tensor\n",
        "      the dependent variable\n",
        "  \n",
        "  '''\n",
        "\n",
        "  torch.manual_seed(seed)\n",
        "\n",
        "  X = torch.normal(0, 1, (w.shape[0], num_examples))\n",
        "  y = torch.matmul(w.T, X) + b\n",
        "  # Add gaussian noise\n",
        "  y += torch.normal(0, sigma, y.shape)\n",
        "\n",
        "  return X, y.reshape((-1, 1))"
      ],
      "execution_count": null,
      "outputs": []
    },
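    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick sanity check of the helper above (a minimal sketch; the weight and bias values here are arbitrary): with two weights and `num_examples=5`, `X` should have shape `(2, 5)` and `y` shape `(5, 1)`."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "w = torch.tensor([[2.0], [-3.4]])  # two arbitrary weights\n",
        "b = 4.2                            # arbitrary bias\n",
        "X, y = synthetic_dataset(w, b, num_examples=5)\n",
        "print(X.shape, y.shape)  # torch.Size([2, 5]) torch.Size([5, 1])"
      ],
      "execution_count": null,
      "outputs": []
    },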
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AeuJ4budZOas"
      },
      "source": [
        "---\n",
        "# Section 1: Intro to PyTorch"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "443l0KIXZNye",
        "cellView": "form"
      },
      "source": [
        "#@title Video: PyTorch is Great\n",
        "\n",
        "try: t1;\n",
        "except NameError: t1=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"Murzk7_IAJA\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ijiuk5F6iKKo"
      },
      "source": [
        "## What is PyTorch?\n",
        "\n",
        "It’s a Python-based scientific computing package targeted at two sets of\n",
        "audiences:\n",
        "\n",
        "-  A replacement for NumPy to use the power of GPUs\n",
        "-  A deep learning platform that provides significant flexibility\n",
        "   and speed\n",
        "\n",
        "At its core, PyTorch provides a few key features:\n",
        "\n",
        "- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.\n",
        "- An optimized **autograd** engine for automatically computing derivatives.\n",
        "- A clean, modular API for building and deploying **deep learning models**.\n",
        "\n",
        "You can find more information about PyTorch by following one of the [official tutorials](https://pytorch.org/tutorials/) or by [reading the documentation](https://pytorch.org/docs/1.1.0/).\n"
      ]
    },
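    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "This notebook's runtime is set to use a GPU. Here is a minimal sketch of moving computation to the GPU when one is available, falling back to the CPU otherwise:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
        "x = torch.ones(3, 3, device=device)  # created directly on the chosen device\n",
        "print(x.device)\n",
        "# .to(device) moves an existing tensor; operands of an op must share a device"
      ],
      "execution_count": null,
      "outputs": []
    },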
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kqJZVXufNDea"
      },
      "source": [
        "## Section 1.1: Variable types in PyTorch\n",
        "\n",
        "`torch.Tensor` is the central class in PyTorch. `torch.Tensor` defines a multi-dimensional matrix containing elements of a single data type (e.g., 32-bit floating point, 8-bit integer, etc.). The different data types can be found [here](https://pytorch.org/docs/stable/tensors.html#:~:text=A%20torch.Tensor%20is%20a,CPU%20tensor). \n",
        "\n",
        "Mathematically, a tensor is an algebraic object that defines a multilinear relationship between objects in a vector space. \n",
        "\n",
        "By default, `torch.Tensor` creates a *FloatTensor*, or in simple words, a tensor of floats. We can either change the type of a `torch.Tensor` by calling `.type(dtype)` on the Tensor or specify the `dtype` in any function that outputs a tensor. Here are a few examples:"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MpsxP8JM3obx"
      },
      "source": [
        "### Exercise 1: PyTorch Tensors\n",
        "The following bite-sized exercises are more of a tutorial on the basics of PyTorch tensors. Since the solutions are already given, we encourage you to open the *Scratch code cell* from the `Edit` menu, explore each object, and read its docstring (e.g., `? torch.empty`)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ER8WVp1RN360"
      },
      "source": [
        "**Construct / initialize an empty $5\\times3$ matrix:**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "6x5CgWo8N9w6"
      },
      "source": [
        "x = torch.empty(5, 3)\n",
        "print(x)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DA4KKM_wpacE"
      },
      "source": [
        "As we have seen, even though we asked for an \"empty\" matrix, the result is not actually empty; it contains arbitrary values, often very large or very small. Why is this?\n",
        "\n",
        "According to PyTorch's documentation, `torch.empty` returns a tensor filled with *uninitialized data*. When we call `torch.empty`, a block of memory is allocated according to the `size` (shape) of the tensor, and the returned tensor simply exposes whatever values that memory block already holds. These could be default values or leftovers from other operations that previously used that part of memory."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "q9OuxfDWNu39"
      },
      "source": [
        "**Construct a randomly (uniform in [0,1]) initialized matrix:**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "BlDi1qceN-Xp"
      },
      "source": [
        "x = torch.rand(5, 3)\n",
        "print(x)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CX7qZTC7ONVx"
      },
      "source": [
        "**Construct a matrix filled with zeros and of dtype long:**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "U2TljALNOOKj"
      },
      "source": [
        "x = torch.zeros(5, 3, dtype=torch.long)\n",
        "print(x)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "grp5g18EOOoe"
      },
      "source": [
        "**Construct a tensor directly from a list:**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "eFXipaI9OPSU"
      },
      "source": [
        "x = torch.tensor([5.5, 3])\n",
        "print(x)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4MuHPXIDOkZd"
      },
      "source": [
        "**Construct a tensor-based on an existing tensor:**\n",
        "\n",
        "These methods will reuse properties of the input tensor, e.g., `dtype`, unless\n",
        "the user provides new values."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ZagQGOt7OmKt"
      },
      "source": [
        "x = x.new_ones(5, 3, dtype=torch.double)      # new_* methods take in sizes\n",
        "print(x)\n",
        "\n",
        "x = torch.randn_like(x, dtype=torch.float)    # override dtype!\n",
        "print(x)                                      # result has the same size"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VieCVYtfOuHW"
      },
      "source": [
        "Get its size. \n",
        "**Note**: torch.Size is a tuple, so it supports all tuple operations."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "EY-lyf_hOvZQ"
      },
      "source": [
        "print(x.size())"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tCdLJ4gYQ6ah"
      },
      "source": [
        "**Declare the data type of tensors**\n",
        "\n",
        "By default, `torch.Tensor` stores its elements as floats: even if we initialize it with `[1, 2, 3]`, it creates the tensor `[1., 2., 3.]`, i.e., floats instead of integers. The cell below casts a float tensor to `torch.int32`; note how `3.5` is truncated:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "pJ1Za7MSQ9lo"
      },
      "source": [
        "torch.Tensor([1.,2.,3.5]).type(torch.int32)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1o-ox7FaQ-la"
      },
      "source": [
        "**Change the data type**\n",
        "\n",
        "Notice that inputting floats but casting to an integer type caused the output Tensor to differ from the input altogether. Change the `dtype` to prevent this from happening:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ZuLVG60ARDcy"
      },
      "source": [
        "dtype = torch.float64\n",
        "\n",
        "torch.Tensor([1.,2.,3.5]).type(dtype)"
      ],
      "execution_count": null,
      "outputs": []
    },
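    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Alternatively (a small sketch), you can declare the `dtype` when constructing the tensor; note that the lowercase `torch.tensor` factory also infers an integer dtype from integer inputs:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "x = torch.tensor([1, 2, 3], dtype=torch.float64)  # explicit dtype\n",
        "print(x, x.dtype)\n",
        "x = torch.tensor([1, 2, 3])  # integer inputs are inferred as int64\n",
        "print(x, x.dtype)"
      ],
      "execution_count": null,
      "outputs": []
    },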
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RAM0Tvq0QoP-"
      },
      "source": [
        "## Section 1.2: Operations\n",
        "There are multiple syntaxes for operations. In the following example, we will take a look at the addition operation.\n",
        "\n",
        "Addition: Syntax 1\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NbRcsar1QnpO"
      },
      "source": [
        "y = torch.rand(5, 3)\n",
        "print(x + y)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jy4cbQ8fQj2c"
      },
      "source": [
        "Addition: Syntax 2"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "hgj-fXAbQjRY"
      },
      "source": [
        "print(torch.add(x, y))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nMpkc6zXQSbv"
      },
      "source": [
        "Addition: Providing an output tensor as argument"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "N10A9lKsQSIx"
      },
      "source": [
        "result = torch.empty(5, 3)\n",
        "torch.add(x, y, out=result)\n",
        "print(result)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Jxertbi7QRp8"
      },
      "source": [
        "Addition: In-place"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "lYQrhyQdQQ9y"
      },
      "source": [
        "# adds x to y\n",
        "y.add_(x) # the `_` sign at end means the operation mutates tensor y in-place\n",
        "print(y)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UmX22pe2QK3Y"
      },
      "source": [
        "**Read later:**\n",
        "100+ Tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random numbers, etc., are described [here](http://pytorch.org/docs/torch).\n",
        "\n",
        "You can use standard NumPy-like indexing with all bells and whistles!"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "6AaAqaC1QIx4"
      },
      "source": [
        "print(y[:, 1])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "u69MjkUHQGMo"
      },
      "source": [
        "If you want to resize/reshape a tensor, you can use ``torch.view``:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "yRbvEV0fQDq9"
      },
      "source": [
        "x = torch.randn(4, 4)\n",
        "y = x.view(16)\n",
        "z = x.view(-1, 8)  # the size -1 is inferred from other dimensions\n",
        "print(x.size(), y.size(), z.size())"
      ],
      "execution_count": null,
      "outputs": []
    },
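    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Note that `view` returns a tensor sharing the same underlying data as the original, so modifying one modifies the other (a quick sketch):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "x = torch.zeros(4, 4)\n",
        "y = x.view(16)\n",
        "y[0] = 7.0      # writes through to x, since x and y share storage\n",
        "print(x[0, 0])  # tensor(7.)"
      ],
      "execution_count": null,
      "outputs": []
    },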
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EDnkuF1EQBMq"
      },
      "source": [
        "If you have a one element tensor, use `.item()` to get the value as a\n",
        "Python number:\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "VEeLylCZQAbm"
      },
      "source": [
        "x = torch.randn(1)\n",
        "print(x)\n",
        "print(x.item())"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Ogdtq9efDzk9"
      },
      "source": [
        "## Exercise 2: Simple linear operations\n",
        "\n",
        "### Let's do a few quick refresher problems in linear algebra. Solve the following. \n",
        "\n",
        "$$ \\textbf{A} = \n",
        "\\begin{bmatrix}2 &4 \\\\5 & 7 \n",
        "\\end{bmatrix} * \n",
        "\\begin{bmatrix} 1 &1 \\\\2 & 3\n",
        "\\end{bmatrix} \n",
        " + \n",
        "\\begin{bmatrix}10 & 10  \\\\ 12 & 1 \n",
        "\\end{bmatrix} \n",
        "$$\n",
        "\n",
        "\n",
        "and\n",
        "\n",
        "\n",
        "$$ b = \n",
        "\\begin{bmatrix} 3 \\\\ 5 \\\\ 7\n",
        "\\end{bmatrix} \\cdot \n",
        "\\begin{bmatrix} 2 \\\\ 4 \\\\ 8\n",
        "\\end{bmatrix}\n",
        "$$\n",
        "\n",
        "Now, verify your results with PyTorch (or check your ability to solve by hand via PyTorch!!!)\n",
        "\n",
        "*Hint:* For matrix multiplication, you can use `torch.matmul` or the `@` operator; `torch.dot` computes the dot product of two 1-D tensors. Please explore all three approaches and find their differences."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "lNfwLqxNED3g"
      },
      "source": [
        "a1 = torch.tensor([[2, 4], [5, 7]])\n",
        "a2 = ...\n",
        "a3 = ...\n",
        "A = ...\n",
        "print(A)\n",
        "b1 = torch.tensor([[3], [5], [7]])  # or b1 = torch.tensor([3, 5, 7])\n",
        "b2 = ...\n",
        "b = ...\n",
        "print(b)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ZB6OWxWhETPI"
      },
      "source": [
        "# to_remove solution\n",
        "a1 = torch.tensor([[2, 4], [5, 7]])\n",
        "a2 = torch.tensor([[1, 1], [2, 3]])\n",
        "a3 = torch.tensor([[10, 10], [12, 1]])\n",
        "A = torch.add(a1 @ a2, a3)\n",
        "print(\"A =\", A)\n",
        "\n",
        "# @ and torch.matmul return a multidimensional tensor\n",
        "b1 = torch.tensor([[3], [5], [7]])\n",
        "b2 = torch.tensor([[2], [4], [8]])\n",
        "b = b1.T @ b2\n",
        "print(\"b =\", b)\n",
        "\n",
        "# dot function returns a scalar tensor\n",
        "b1 = torch.tensor([3, 5, 7])\n",
        "b2 = torch.tensor([2, 4, 8])\n",
        "b = torch.dot(b1, b2)\n",
        "print(\"b =\", b)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QFaKQpogP71M"
      },
      "source": [
        "## Section 1.3: NumPy Bridge\n",
        "\n",
        "Converting a Torch Tensor to a NumPy array and vice versa is a breeze.\n",
        "\n",
        "The Torch Tensor and NumPy array will share their underlying memory  \n",
        "locations, and changing one will change the other."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jUy5rVVtfToR"
      },
      "source": [
        "### Converting a PyTorch Tensor to a NumPy Array"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "JpUmaqBeP5vS"
      },
      "source": [
        "a = torch.ones(5)\n",
        "print(a)\n",
        "b = a.numpy()\n",
        "print(b)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jFYzfr-QP181"
      },
      "source": [
        "See how changing the Torch Tensor also changes the NumPy array:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "UxF1BY7bP1Pb"
      },
      "source": [
        "a.add_(1)\n",
        "print(a)\n",
        "print(b)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "w13xrs4BPy3F"
      },
      "source": [
        "### Converting NumPy Array to Torch Tensor\n",
        "\n",
        "See how changing the NumPy array changed the Torch Tensor automatically:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "JrWzaQeVPu_M"
      },
      "source": [
        "a = np.ones(5)\n",
        "b = torch.from_numpy(a)\n",
        "np.add(a, 1, out=a)\n",
        "print(a)\n",
        "print(b)"
      ],
      "execution_count": null,
      "outputs": []
    },
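    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "By contrast with `torch.from_numpy`, the `torch.tensor` constructor *copies* the data, so the resulting tensor is decoupled from the NumPy array (a quick sketch):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "a = np.ones(5)\n",
        "c = torch.tensor(a)   # copies the data: no shared memory\n",
        "np.add(a, 1, out=a)\n",
        "print(a)  # [2. 2. 2. 2. 2.]\n",
        "print(c)  # tensor([1., 1., 1., 1., 1.], dtype=torch.float64) -- unchanged"
      ],
      "execution_count": null,
      "outputs": []
    },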
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0S4sTiyJPPQ9"
      },
      "source": [
        "As you may know, PyTorch is one of many popular deep learning frameworks for Python. Two honorable mentions are TensorFlow and JAX, both developed by Google but with different preferences in mind. Take 10 minutes to share your experience using a deep learning library (if any) in your group and discuss their significant differences and features. \n",
        "\n",
        "**Student response**: Would you prefer flexibility or performance when choosing a deep learning framework and why (feel free to mix and match)?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "plfcWggTUmGw"
      },
      "source": [
        "deep_learning_benchmark = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GxW33SNOaimY"
      },
      "source": [
        "---\n",
        "# Section 2: Deep Learning and Gradients"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "AojN0T8iajXM",
        "cellView": "form"
      },
      "source": [
        "#@title Video: What is Deep Learning\n",
        "\n",
        "try: t2;\n",
        "except NameError: t2=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"uyK1v8VIX4E\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5klLiRlTPkMD"
      },
      "source": [
        "## Why are Gradients important?\n",
        "\n",
        "The gradient vector can be interpreted as the \"direction and rate of fastest increase\". In deep learning, we use the gradient to optimize all of the parameters in a model with respect to some loss function. The gradient is the vector of partial derivatives of the loss, one per parameter:\n",
        "\n",
        "$$\\dfrac{\\partial loss}{\\partial x}$$ where $x$ is the parameter we're optimizing. \n",
        "\n",
        "So, what is a partial derivative? Looking back at single-variable calculus, we know the ordinary derivative $\\dfrac{df}{dx}$. Here, $df$ is interpreted as some small change in the output of $f$, and $dx$ as a small change in $x$.\n",
        "\n",
        "However, if we have some multivariable function $f(x,y)$, $\\dfrac{df}{dx}$ doesn't capture how the entire function changes, so we call it a partial derivative and, for clarity, denote it $\\dfrac{\\partial f}{\\partial x}$. When computing a partial derivative, we treat every variable other than the one we are differentiating with respect to as a constant. For example:\n",
        "\n",
        "Let's compute the partial derivative $\\dfrac{\\partial f}{\\partial x}$ of \n",
        "$$f(x,y) = x^2y^3$$\n",
        "We treat $y$ as a constant while doing this, so it is as simple as invoking the power rule from single-variable calculus:\n",
        "\n",
        "$$\\dfrac{\\partial f}{\\partial x} = 2xy^3$$"
      ]
    },
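    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We can verify this result numerically with PyTorch's autograd, which we will meet properly in Section 3 (the evaluation point $(x, y) = (2, 3)$ is arbitrary):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "x = torch.tensor(2.0, requires_grad=True)\n",
        "y = torch.tensor(3.0, requires_grad=True)\n",
        "f = x**2 * y**3\n",
        "f.backward()             # computes df/dx and df/dy at (2, 3)\n",
        "print(x.grad)            # tensor(108.)\n",
        "print(2 * 2.0 * 3.0**3)  # 2*x*y**3 evaluated by hand: 108.0"
      ],
      "execution_count": null,
      "outputs": []
    },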
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VpFNj0_WwVfq"
      },
      "source": [
        "## Now, work out a few partial derivatives to get the hang of it\n",
        "\n",
        "#### Given $f(x,y) = x^2 - xy$, find \n",
        "\n",
        "1.   $\\frac{\\partial f}{\\partial x}$\n",
        "\n",
        "2.   $\\frac{\\partial f}{\\partial y}$\n",
        "\n",
        "#### Given $f(x, y, z) = x - xy + z^2$, find \n",
        "\n",
        "1.   $\\frac{\\partial f}{\\partial x}$\n",
        "\n",
        "2.   $\\frac{\\partial f}{\\partial y}$\n",
        "\n",
        "3.   $\\frac{\\partial f}{\\partial z}$\n",
        "\n",
        "#### Given $f(x, y) = x e^{-2y} + x^2y$, find the **second** derivatives\n",
        "\n",
        "1.   $\\frac{\\partial ^2f}{\\partial x^2}$\n",
        "\n",
        "2.   $\\frac{\\partial ^2f}{\\partial y^2}$\n",
        "\n",
        "3.   $\\frac{\\partial ^2f}{\\partial x \\partial y}$\n",
        "\n",
        "4.   $\\frac{\\partial ^2f}{\\partial y \\partial x}$"
      ]
    },
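    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If you want to check your hand-computed answers numerically, you can use a central finite difference (a minimal sketch; `check_partial` is a helper defined here, not a PyTorch function):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "def check_partial(f, point, i, h=1e-5):\n",
        "  # central finite-difference estimate of df/dx_i at `point` (a list of floats)\n",
        "  up = list(point); up[i] += h\n",
        "  lo = list(point); lo[i] -= h\n",
        "  return (f(*up) - f(*lo)) / (2 * h)\n",
        "\n",
        "# e.g., for the first function f(x, y) = x**2 - x*y at the point (2, 3):\n",
        "f = lambda x, y: x**2 - x*y\n",
        "print(check_partial(f, [2.0, 3.0], 0))  # compare with your df/dx at (2, 3)"
      ],
      "execution_count": null,
      "outputs": []
    },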
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DZzzMK8HXT8M"
      },
      "source": [
        "Although gradients are at the core of many popular optimization approaches, there exist methods known as *derivative-free optimization* algorithms (Bayesian optimization, coordinate descent, and genetic algorithms, to name a few), which do not require gradient computation.  \n",
        "Please take 10 minutes to discuss in your group which problems can be solved with derivative-free optimization algorithms and what their limitations are.\n",
        "\n",
        "**Student response**: Have you ever used a derivative-free optimization algorithm? If yes, tell us which algorithm and what kind of problem.\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "p73IS79KZ2N3"
      },
      "source": [
        "derivative_free_opt_alg = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TArJrT4fd9gq"
      },
      "source": [
        "---\n",
        "# Section 3: Gradients in PyTorch (Autograd)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_d-S027hd-ZU",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Automatic Differentiation\n",
        "\n",
        "try: t3;\n",
        "except NameError: t3=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"xBc95BB6Gwo\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kZ6lNk_5b7sQ"
      },
      "source": [
        "## AutoGrad\n",
        "\n",
        "The ``autograd`` package provides automatic differentiation for all operations\n",
        "on Tensors. It is a *define-by-run* framework, which means that your backpropagation algorithm is defined by how your code is run and that every single iteration can be different.\n",
        "\n",
        "Let us see this in simpler terms with some examples."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KSHfk9iDPfJA"
      },
      "source": [
        "### Tensors\n",
        "\n",
        "\n",
        "`torch.Tensor` is the central class of the package. If you set its attribute\n",
        "`.requires_grad` as `True`, it starts to track all operations on it. When you finish your computation, you can call `.backward()` and have all the gradients computed automatically. The gradient for this tensor will be accumulated into `.grad` attribute.\n",
        "\n",
        "To stop a tensor from tracking history, you can call `.detach()` to detach it from the computation history and prevent future computation from being tracked.\n",
        "\n",
        "To prevent tracking history (and use of memory), you can also wrap the code block in `with torch.no_grad():`. This can be particularly helpful when evaluating a model because the model may have trainable parameters with `requires_grad=True`, for which we don't need the gradients.\n",
        "\n",
        "There’s one more class which is very important for autograd implementation - a `Function`.\n",
        "\n",
        "<img src=\"https://miro.medium.com/max/1536/1*wE1f2i7L8QRw8iuVx5mOpw.png\" alt=\"Function\" width=\"600\"/>\n",
        "\n",
        "`Tensor` and `Function` are interconnected and build up an acyclic graph that encodes a complete history of computation. Each tensor has a `.grad_fn` attribute that references the `Function` that created the `Tensor` (except for Tensors created by the user - their `grad_fn` is `None`).\n",
        "\n",
        "If you want to compute the derivatives, you can call `.backward()` on a `Tensor`. If the `Tensor` is a scalar (i.e., it holds a single element), you don't need to specify any arguments to `backward()`; however, if it has more elements, you need to specify a `gradient` argument, which is a tensor of matching shape.\n",
        "\n",
        "On calling `.backward()`, we get the gradients $\\dfrac{\\partial loss}{\\partial x}$ where $x$ is the tensor; PyTorch stores them in `x.grad`.\n",
        "\n",
        "---\n",
        "\n",
        "PS: In previous versions of PyTorch, this required a separate Variable parameter that acted as a wrapper around a Tensor. This is now deprecated, and `torch.Tensor` has autograd built-in."
      ]
    },
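    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick illustration of the tracking controls described above (a minimal sketch): both `.detach()` and `torch.no_grad()` produce untracked results."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "x = torch.ones(2, 2, requires_grad=True)\n",
        "y = x * 3\n",
        "print(y.requires_grad)  # True: y is tracked because x is\n",
        "\n",
        "# .detach() returns a new tensor that is cut off from the graph\n",
        "z = y.detach()\n",
        "print(z.requires_grad)  # False\n",
        "\n",
        "# inside torch.no_grad(), no operations are tracked\n",
        "with torch.no_grad():\n",
        "  w = x * 3\n",
        "print(w.requires_grad)  # False"
      ],
      "execution_count": null,
      "outputs": []
    },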
    {
      "cell_type": "code",
      "metadata": {
        "id": "fTrSicHRr07P",
        "cellView": "form"
      },
      "source": [
        "#@markdown **Student response**: Please write down 3 problems where you find autograd is useful.\n",
        "autograd_uses = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-CHTVNE-o9xc"
      },
      "source": [
        "Let's start from some toy examples."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "JmImJGHhPZUB"
      },
      "source": [
        "ones_tensor = torch.ones(2, 2, requires_grad=True)\n",
        "print(ones_tensor)\n",
        "\n",
        "float_tensor = torch.FloatTensor(3, 3)  # this constructor does not accept requires_grad\n",
        "float_tensor.requires_grad = True\n",
        "print(float_tensor)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3Vk-wL5fmNLZ"
      },
      "source": [
        "**Important:** The `requires_grad` option is *contagious*: when a Tensor is created by operating on other Tensors, the resulting Tensor's `requires_grad` is set to `True` if at least one of the tensors used in its creation has `requires_grad` set to `True`."
      ]
    },
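    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal check of this behavior, mixing a tracked and an untracked tensor:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "tracked = torch.randn(2, 2, requires_grad=True)\n",
        "plain = torch.randn(2, 2)  # requires_grad is False by default\n",
        "\n",
        "print((tracked + plain).requires_grad)  # True: one parent is tracked\n",
        "print((plain * 2).requires_grad)        # False: no tracked parents"
      ],
      "execution_count": null,
      "outputs": []
    },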
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XQBv6f8ilVPW"
      },
      "source": [
        "*So*, what we have just built is a tensor with the autograd option enabled. As a toy example, assume that we want to differentiate the function $y=4\\textbf{x}^{\\text{T}}\\textbf{x}$ with respect to the column vector $\\textbf{x}$."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_CS7nmYOnSm_"
      },
      "source": [
        "x = torch.arange(4.0, requires_grad=True)\n",
        "print(x)\n",
        "print(x.grad)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sIZXHxopo1-D"
      },
      "source": [
        "Now, we calculate the value of $y$"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bbLnn_d_o5or"
      },
      "source": [
        "y = 4 * torch.dot(x, x)\n",
        "print(y)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uUMVTwfxpLoW"
      },
      "source": [
        "Since $\\textbf{x}$ is a vector of length 4, an inner product of $\\textbf{x}$ with itself is performed, yielding the scalar output that we assign to $y$. Next, we can automatically calculate the gradient of $y$ with respect to each component of $\\textbf{x}$ by calling the function for backpropagation and printing the gradient."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_7FGlaUupmL9"
      },
      "source": [
        "y.backward()\n",
        "x.grad"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qY5XrkCNqUwn"
      },
      "source": [
        "The gradient of the function  $y=4 \\textbf{x}^{\\text{T}} \\textbf{x}$  with respect to $\\textbf{x}$ should be $8\\textbf{x}$. Let us quickly verify that our desired gradient was calculated correctly."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "VqX0j1gFqqh1"
      },
      "source": [
        "x.grad == 8 * x"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "mEsFRu2iqTpR"
      },
      "source": [
        "# PyTorch accumulates gradients by default; we need to clear the previous\n",
        "# values\n",
        "x.grad.zero_()\n",
        "y = x.sum()\n",
        "y.backward()\n",
        "x.grad"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kyDxRi20O5WO"
      },
      "source": [
        "## Autograd: the importance and history of autograd\n",
        "\n",
        "Autograd came to be simply because it is impractical and challenging to both visualize and compute gradients for very high dimensional spaces, such as those we'd see in large neural networks: \n",
        "\n",
        "> \"To deal with hyper-planes in a 14-dimensional space, visualize a 3-D space and say ‘fourteen’ to yourself very loudly. Everyone does it\" - Geoffrey Hinton.\n",
        "\n",
        "Autograd abstracts these concepts and \"automagically\" calculates gradients in high-dimensional spaces. Gradients are calculated by tracing the computation graph from the root (the output tensor on which `.backward()` is called) back to the leaves (the user-created tensors), using the chain rule to compute every gradient along the way. \n",
        "\n",
        "Let's look at some of the relevant functions and attributes:\n",
        "\n",
        "\n",
        "*   `.detach()`: Detaches the tensor from the computation graph and prevents future computation from being tracked\n",
        "*   `.requires_grad`: When set to `True`, starts tracking all operations on the tensor\n",
        "*   `.backward()`: When the computation is finished, call `tensor.backward()` to compute all gradients automatically\n",
        "*   `.grad`: The gradient for each tensor is accumulated in its `.grad` attribute\n",
        "*   `.grad_fn`: For tensors created by an operation, `grad_fn` gives the operation that created them. For tensors created by users, it returns `None`. \n",
        "*   `.requires_grad_(bool)`: Changes an existing Tensor's `requires_grad` flag in place. This is `False` by default. \n",
        "\n",
        "\n",
        "\n",
        "See: https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html\n"
      ]
    },
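    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A tiny sanity check (sketch) of `.requires_grad_()` and `.grad_fn`:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "t = torch.zeros(3)\n",
        "print(t.requires_grad)  # False by default\n",
        "\n",
        "t.requires_grad_(True)  # switch the flag in place\n",
        "u = t.exp()\n",
        "print(u.grad_fn)  # a backward Function node: an operation created u\n",
        "print(t.grad_fn)  # None: t is a user-created leaf tensor"
      ],
      "execution_count": null,
      "outputs": []
    },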
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a84ftaP5qIBO"
      },
      "source": [
        "## Example: A small Compute Graph"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "buor3olzPGa9"
      },
      "source": [
        "### Forward pass\n",
        "\n",
        "PyTorch creates a so-called **Dynamic Computation Graph**, which means that the graph is generated on the fly. \n",
        "\n",
        "A key difference between PyTorch and static-graph frameworks such as TensorFlow 1.x is how the computation graph works. In PyTorch, all variables and functions build a dynamic graph of computation: for every mathematical operation involving Tensors, a `Function` node is added to this computation graph. We can see which function created a given Tensor by using the attribute `grad_fn`. So, let's examine the compute graph of a simple addition operation.\n",
        "\n",
        "<img src=\"https://miro.medium.com/max/1684/1*FDL9Se9otGzz83F3rofQuA.png\" alt=\"Computation Graph\" width=\"500\"/>\n",
        "\n",
        "Here, we give a toy example with an elementary graph to understand PyTorch's logic!\n",
        "\n",
        "The variable $a$ is the input and is initialized by the user. Variables $b$, $c$ and $d$ are created as results of mathematical operations, while $w_1$, $w_2$, $w_3$ and $w_4$ are initialized by the user. Since no mathematical operation creates them, the nodes corresponding to their creation are represented by the variable names themselves; this is true for all the leaf nodes in the graph. Variable $L$ is the output of this computational graph.\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XUGrcRnlqafy"
      },
      "source": [
        "#### Exercise 3a: Get the parameters\n",
        "\n",
        "Let's start by defining our parameters ($w_1, w_2, w_3, w_4$) and input ($a$). Let's assume that all parameters and inputs are $3 \\times 3$ tensors sampled from a normal distribution with zero mean and standard deviation $\\sigma=1$ (i.e., `torch.randn`).\n",
        "\n",
        "*Hint:* Do not forget to enable tracking of gradients."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "YnyXzTqGp8Fh"
      },
      "source": [
        "def get_params():\n",
        "  '''\n",
        "  A simple function that generates our parameters and inputs\n",
        "  \n",
        "  ---\n",
        "  args: nothing\n",
        "\n",
        "  returns:\n",
        "    a: torch.Tensor\n",
        "      inputs\n",
        "    w1: torch.Tensor\n",
        "      weights\n",
        "    w2: torch.Tensor\n",
        "      weights\n",
        "    w3: torch.Tensor\n",
        "      weights\n",
        "    w4: torch.Tensor\n",
        "      weights\n",
        "\n",
        "  '''\n",
        "  #####################################################################\n",
        "  # Fill in missing code (...) with randomly initialized tensors\n",
        "  # (standard normal) which require gradient.\n",
        "  # then remove or comment the line below to test your function.\n",
        "  raise NotImplementedError(\"Define the input and weight params!\")\n",
        "  #####################################################################\n",
        "  a = ...\n",
        "  w1 = ...\n",
        "  w2 = ...\n",
        "  w3 = ...\n",
        "  w4 = ...\n",
        "\n",
        "  return (a, w1, w2, w3, w4)\n",
        "\n",
        "\n",
        "# uncomment the lines below to test your function\n",
        "# params = get_params()\n",
        "# print(f'The inputs are: \\n{params[0]}')\n",
        "# print(f'The weights are: \\nw1:{params[1]}, \\nw2:{params[2]}, \\nw3:{params[3]}, '\n",
        "#       f' \\nw4:{params[4]}')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "aQ_cUNC4reci"
      },
      "source": [
        "# to_remove solution\n",
        "def get_params():\n",
        "  '''\n",
        "  A simple function that generates our parameters and inputs\n",
        "  \n",
        "  ---\n",
        "  args: nothing\n",
        "\n",
        "  returns:\n",
        "    a: torch.Tensor\n",
        "      inputs\n",
        "    w1: torch.Tensor\n",
        "      weights\n",
        "    w2: torch.Tensor\n",
        "      weights\n",
        "    w3: torch.Tensor\n",
        "      weights\n",
        "    w4: torch.Tensor\n",
        "      weights\n",
        "\n",
        "  '''\n",
        "  a = torch.randn((3,3), requires_grad=True)\n",
        "  w1 = torch.randn((3,3), requires_grad=True)\n",
        "  w2 = torch.randn((3,3), requires_grad=True)\n",
        "  w3 = torch.randn((3,3), requires_grad=True)\n",
        "  w4 = torch.randn((3,3), requires_grad=True)\n",
        "\n",
        "  return (a, w1, w2, w3, w4)\n",
        "\n",
        "\n",
        "params = get_params()\n",
        "print(f'The inputs are: \\n{params[0]}')\n",
        "print(f'The weights are: \\nw1:{params[1]}, \\nw2:{params[2]}, \\nw3:{params[3]}, '\n",
        "      f' \\nw4:{params[4]}')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OyA1Tzq-taGN"
      },
      "source": [
        "#### Exercise 3b: Compute graph\n",
        "\n",
        "Now, let's define the mathematical operations of our toy graph. Weights are multiplied with the corresponding node value, and double arrows denote a summation (see node $d$)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0-XDzn-FaJmg"
      },
      "source": [
        "def compute_graph(params, target):\n",
        "  '''\n",
        "  Simple function with the forward pass\n",
        "\n",
        "  args:\n",
        "    params: list\n",
        "      contains the inputs and the weight tensors\n",
        "  returns:\n",
        "    L: float\n",
        "      loss given a target value\n",
        "  '''\n",
        "  a, w1, w2, w3, w4 = params[0],params[1], params[2], params[3], params[4]\n",
        "  \n",
        "  #####################################################################\n",
        "  # Fill in missing code (...) to create the compute graph \n",
        "  # shown in Example A (above).\n",
        "  # then remove or comment the line below to test your function.\n",
        "  raise NotImplementedError(\"Define the compute graph!\")\n",
        "  #####################################################################\n",
        "\n",
        "  b = ...\n",
        "  c = ...\n",
        "  d = ... \n",
        "\n",
        "  # Compute the summed loss\n",
        "  L = ...\n",
        "\n",
        "  # Store weights in a dictionary\n",
        "  weights = {}\n",
        "  weights['w1'] = w1\n",
        "  weights['w2'] = w2\n",
        "  weights['w3'] = w3\n",
        "  weights['w4'] = w4\n",
        "\n",
        "  # Store values of the nodes in a dictionary\n",
        "  values = {}\n",
        "  values['a'] = a\n",
        "  values['b'] = b\n",
        "  values['c'] = c\n",
        "  values['d'] = d\n",
        "\n",
        "  return (L, values, weights)\n",
        "\n",
        "\n",
        "# uncomment the following lines to test your function\n",
        "# output = compute_graph(params, 20)\n",
        "# print(f'Loss: {output[0]}')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "2k2ESh1bsW4_"
      },
      "source": [
        "# to_remove solution\n",
        "def compute_graph(params, target):\n",
        "  '''\n",
        "  Simple function with the forward pass\n",
        "\n",
        "  args:\n",
        "    params: list\n",
        "      contains the inputs and the weight tensors\n",
        "  returns:\n",
        "    L: float\n",
        "      loss given a target value\n",
        "  '''\n",
        "  a, w1, w2, w3, w4 = params[0],params[1], params[2], params[3], params[4]\n",
        "  \n",
        "  b = w1 * a \n",
        "  c = w2 * a\n",
        "  d = w3 * b + w4 * c \n",
        "\n",
        "  # Compute the summed loss\n",
        "  L = (target - d).sum()\n",
        "\n",
        "  # Store weights in a dictionary\n",
        "  weights = {}\n",
        "  weights['w1'] = w1\n",
        "  weights['w2'] = w2\n",
        "  weights['w3'] = w3\n",
        "  weights['w4'] = w4\n",
        "\n",
        "  # Store values of the nodes in a dictionary\n",
        "  values = {}\n",
        "  values['a'] = a\n",
        "  values['b'] = b\n",
        "  values['c'] = c\n",
        "  values['d'] = d\n",
        "\n",
        "  return (L, values, weights)\n",
        "\n",
        "\n",
        "output = compute_graph(params, 20)\n",
        "print(f'Loss: {output[0]}')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sVfbLHkMZYrv"
      },
      "source": [
        "Now, let's print the gradients."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ig-XstYcwG-s"
      },
      "source": [
        "a = output[1]['a']\n",
        "d = output[1]['d']\n",
        "\n",
        "print(\"The grad fn for a is\", a.grad_fn)\n",
        "print(\"The grad fn for d is\", d.grad_fn)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ivJSEhydZkSx"
      },
      "source": [
        "What you see is that the `grad_fn` for the input `a` is `None`: `a` is a user-created leaf tensor, so no operation created it, whereas `d` was produced by an operation. Also, we have not calculated any gradients yet! Let's first see what PyTorch computes as gradients of any computational graph."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CWC4l6-1waEW"
      },
      "source": [
        "### Backward pass: Computing the gradients\n",
        "\n",
        "Each node of the computation graph, with the exception of leaf nodes, can be considered as a function which takes some inputs and produces an output. Consider the node of the graph which produces variable $d$ from $w_3 \\cdot b$ and $w_4 \\cdot c$. Thus, we can write the output $d$ as a function of its inputs:\n",
        "\n",
        "\\begin{equation}\n",
        "d = f(w_3 \\cdot b, w_4 \\cdot c)\n",
        "\\end{equation}\n",
        "\n",
        "Thus, we can easily compute the gradient of the function $f$ with respect to its inputs ($\\frac{\\partial f}{\\partial w_3b}$ and $\\frac{\\partial f}{\\partial w_4c}$). Similarly, we do this for the entire graph and we have described the derivatives of this graph.\n",
        "\n",
        "<img src=\"https://miro.medium.com/max/1684/1*EWpoG5KayZSqkWmwM_wMFQ.png\" alt=\"Computation Graph with Gradients\" width=\"500\"/>\n",
        "\n",
        "In order to compute derivatives in our graph, we generally call `backward` on the Tensor representing our loss $L$, i.e., the output of the graph. Then, we backtrack through the graph starting from the node representing the `grad_fn` of our loss.\n",
        "\n",
        "As described above, the backward function is recursively called throughout the graph as we backtrack. Once we reach a leaf node, whose `grad_fn` is `None`, we stop backtracking through that path."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "mamwLy6Bwzyv"
      },
      "source": [
        "L = output[0]\n",
        "L.backward()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lAfcmVcBwrL5"
      },
      "source": [
        "Once that's done, you can access the gradients by calling the Tensor's `grad` attribute."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "aBHVzjbC0l2I"
      },
      "source": [
        "a.grad"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "WgEmyp0jZ76j"
      },
      "source": [
        "w1 = output[2]['w1']\n",
        "w1.grad"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HaEK7H7-uFLC"
      },
      "source": [
        "Understanding how **Autograd** and **computation graphs** work can make your life easier. With our foundations rock-solid, we can proceed and see these components in action. Let's start with the simplest case: *linear regression*."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PlmjR_TZ2TBr"
      },
      "source": [
        "---\n",
        "# Section 4: Gradients as part of optimization; first order techniques"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "3_nO54rceiCm",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Linear Regression and Neuroscience\n",
        "\n",
        "try: t4;\n",
        "except NameError: t4=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"oQcrIgtqlXU\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cXCPZoaHgnyC"
      },
      "source": [
        "## Section 4.1: Linear Regression"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "oXuBhXB7goaZ"
      },
      "source": [
        "#@title Video: Linear Regression and Matrix Inversion\n",
        "video = YouTubeVideo(id=\"h5H8A3ZFuI4\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kStiWOy2_qbA"
      },
      "source": [
        "Generally, *regression* refers to a set of methods for modeling the relationship between one (or more) independent variable(s) and one (or more) dependent variable(s). Regression is commonly applied when we want to find the relationship between the dependent and the independent variables; for example, we might examine the relative impacts of age, gender, and diet (the independent variables) on height (the dependent variable). Regression can also be used for predictive analysis, which is why the independent variables are also called predictors. When the model contains more than one predictor, the method is called *multiple regression*, and when it contains more than one dependent variable, *multivariate regression*.\n",
        "\n",
        "Regression problems pop up whenever we want to predict a numerical (usually continuous) value.\n",
        "\n",
        "The independent variables are collected in a vector $\\mathbf{x} \\in \\mathbb{R}^D$, where $D$ denotes the number of independent variables. Each variable influences the dependent variable through its own weight factor $w_{d}$; thus, $\\mathbf{w} \\in \\mathbb{R}^D$.\n",
        "\n",
        "The multiple regression model can be written as:\n",
        "\n",
        "\\begin{equation}\n",
        "y = w_{1}x_{1} + w_{2}x_{2} + \\dots + w_{D}x_{D} + b = \\sum_{d=1}^D \\left(w_{d}x_{d} \\right) + b\n",
        "\\end{equation}\n",
        "\n",
        "Thus, the model can be written in matrix format as:\n",
        "\n",
        "\\begin{equation}\n",
        "y = \\begin{bmatrix} w_{1} & w_{2} & \\dots & w_{D} \\end{bmatrix} \\begin{bmatrix} x_{1} \\\\ x_{2} \\\\ \\vdots \\\\ x_{D} \\\\ \\end{bmatrix} + b\n",
        "\\end{equation}\n",
        "\n",
        "Which can be written in a compact form as:\n",
        "\\begin{equation}\n",
        "y = \\mathbf{w}^{\\text{T}}\\mathbf{x} + b\n",
        "\\end{equation}\n",
        "\n",
        "where $\\mathbf{w}$ is the weight vector and $b$ is the bias (also known as the intercept). Notice that when we stack multiple examples, $\\mathbf{X}$ becomes a matrix with the independent variables in rows and the different examples in columns.\n",
        "\n",
        "First, let's plot our synthetic dataset, using $w=2.5$ and $b=1.2$. We will also add Gaussian noise with standard deviation $\\sigma$ to make the synthetic data more realistic.\n",
        "\n",
        "\\begin{equation}\n",
        "y = \\mathbf{w}^{\\text{T}}\\mathbf{x} + b + \\epsilon\n",
        "\\end{equation}\n",
        "\n",
        "where \n",
        "\n",
        "\\begin{equation}\n",
        "\\epsilon \\sim \\mathcal{N}(\\mu=0, \\sigma)\n",
        "\\end{equation}\n",
        "\n",
        "### Gaussian distribution\n",
        "\n",
        "To refresh our memory, the probability density of a normal distribution with mean $\\mu$ and standard deviation $\\sigma$ is given by:\n",
        "\n",
        "\\begin{equation}\n",
        "p(x)=\\frac{1}{\\sqrt{2\\pi\\sigma^{2}}} \\exp \\left(-\\frac{(x-\\mu)^{2}}{2\\sigma^{2}}\\right)\n",
        "\\end{equation}\n",
        "\n",
        "\n",
        "Here, we will start from the simplest case, where we want to perform regression using one predictor (**x**) and one dependent variable (**y**). Thus, the model is given by the formula:\n",
        "\n",
        "\\begin{equation}\n",
        "y = w_{1}x + b + \\epsilon\n",
        "\\end{equation}\n",
        "\n"
      ]
    },
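    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The helper `synthetic_dataset` used below is defined in the setup cells. As a hypothetical sketch (not necessarily the exact helper), data consistent with the noise model above could be generated like this:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "def synthetic_dataset_sketch(w, b, num_examples, sigma):\n",
        "  # inputs: one row per independent variable, one column per example\n",
        "  X = torch.randn(w.shape[0], num_examples)\n",
        "  # linear model plus Gaussian noise with standard deviation sigma\n",
        "  y = torch.matmul(w.T, X) + b + sigma * torch.randn(1, num_examples)\n",
        "  return X, y\n",
        "\n",
        "\n",
        "Xs, ys = synthetic_dataset_sketch(torch.tensor([2.5]).reshape(-1, 1), 1.2, 5, 0.5)\n",
        "print(Xs.shape, ys.shape)  # torch.Size([1, 5]) torch.Size([1, 5])"
      ],
      "execution_count": null,
      "outputs": []
    },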
    {
      "cell_type": "code",
      "metadata": {
        "id": "t9qzri932ivd"
      },
      "source": [
        "# This is the code block to generate random data for linear regression\n",
        "\n",
        "original_w = torch.tensor([2.5]).reshape(-1,1)\n",
        "original_b = 1.2\n",
        "N = 1000  # number of examples\n",
        "X, y = synthetic_dataset(original_w, original_b, num_examples=N, sigma=0.5)\n",
        "\n",
        "plt.figure()\n",
        "plt.scatter(X, y)\n",
        "plt.xlabel('x (independent variable)')\n",
        "plt.ylabel('y (dependent variable)')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "b0Efb21KcsOo"
      },
      "source": [
        "## Section 4.2: Vectorized regression\n",
        "\n",
        "Now, having a lot of data, we can collect it in a matrix $\\mathbf{X} \\in \\mathbb{R}^{D \\times N}$. Thus, linear regression takes the following form:\n",
        "\n",
        "\\begin{equation}\n",
        "\\mathbf{Y} = \\mathbf{w}^{\\text{T}}\\mathbf{X} + b\n",
        "\\end{equation}\n",
        "\n",
        "where broadcasting is applied in the summation, i.e., $b$ is broadcast to an $N$-dimensional row vector with the value $b$ in every position. \n",
        "\n",
        "Let's write the code for our model."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bcu5SyDujZ7J"
      },
      "source": [
        "### Exercise 4a: Write the linear regression model"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "X7aaSvTocyQt"
      },
      "source": [
        "def linear_regression(X, w, b):\n",
        "  '''\n",
        "  Linear regression model.\n",
        "  \n",
        "  Parameters\n",
        "  ----------\n",
        "  X : torch.tensor\n",
        "      design matrix.\n",
        "  w : torch.tensor\n",
        "      weights.\n",
        "  b : torch.tensor\n",
        "      bias.\n",
        "  \n",
        "  Returns\n",
        "  -------\n",
        "  torch.tensor\n",
        "    predicted values.\n",
        "  \n",
        "  '''\n",
        "  #####################################################################\n",
        "  # Fill in missing code (...),\n",
        "  # then remove or comment the line below to test your function.\n",
        "  raise NotImplementedError(\"Complete the linear_regression function\")\n",
        "  #####################################################################\n",
        "  reg = ...\n",
        "  return reg\n",
        "\n",
        "# uncomment the lines below to test your function\n",
        "# print(linear_regression(X[:,0].reshape(1,-1), original_w, original_b))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "AYTKTQH-Yn9e"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "def linear_regression(X, w, b):\n",
        "  '''\n",
        "  Linear regression model.\n",
        "  \n",
        "  Parameters\n",
        "  ----------\n",
        "  X : torch.tensor\n",
        "      design matrix.\n",
        "  w : torch.tensor\n",
        "      weights.\n",
        "  b : torch.tensor\n",
        "      bias.\n",
        "  \n",
        "  Returns\n",
        "  -------\n",
        "  torch.tensor\n",
        "    predicted values.\n",
        "  \n",
        "  '''\n",
        "  reg = torch.matmul(w.T, X) + b\n",
        "  return reg\n",
        "\n",
        "\n",
        "print(linear_regression(X[:,0].reshape(1,-1), original_w, original_b))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "p9SGzlHnLCp4"
      },
      "source": [
        "## Section 4.3: Loss function\n",
        "First, we need to determine a measure of fitness. In simple words, we want to quantify whether our model performs well or poorly given some parameters $w$ and $b$. The so-called *loss function* quantifies the distance between the real and predicted values of the target. The loss will usually be a non-negative number (as you will see later on, some applications can also have negative losses). Small values are better, and perfect predictions incur a loss of 0. The most popular loss function in regression problems is the **squared error**. When our prediction for an example $\\mathbf{x}^{[i]}$ is $\\hat{y}^{[i]}$ and its corresponding true value is $y^{[i]}$, the squared error is given by:\n",
        "\n",
        "\\begin{equation}\n",
        "l^{[i]}=\\left(\\hat{y}^{[i]} - y^{[i]}\\right)^{2}\n",
        "\\end{equation}\n",
        "\n",
        "Usually, our dataset consists of many data points, so we sum up the individual squared errors. Generally, we take the average of these errors; thus, given that we have $N$ data points, the *mean squared error* (MSE) is given by:\n",
        "\n",
        "\\begin{align}\n",
        "L(\\mathbf{w}, b) &{}= \\frac{1}{N} \\sum_{i=1}^{N} l^{[i]} \\\\ \n",
        "&{}= \\frac{1}{N} \\sum_{i=1}^{N} \\left(\\hat{y}^{[i]} - y^{[i]}\\right)^{2} \\\\ \n",
        "&{}=  \\frac{1}{N} \\sum_{i=1}^{N} \\left(\\mathbf{w}^{\\text{T}}\\mathbf{x}^{[i]} + b - y^{[i]}\\right)^{2}\n",
        "\\end{align}\n",
        "\n",
        "Our aim here is to choose the parameters so that the error will be minimized. In mathematical terms, this is translated into a minimization problem with respect to parameters $\\mathbf{w}$ and $b$:\n",
        "\n",
        "\\begin{equation}\n",
        "\\mathbf{w^{*}}, b^{*} = \\underset{\\mathbf{w},b}{\\mathrm{argmin}} \\left( L(\\mathbf{w}, b) \\right)\n",
        "\\end{equation}\n",
        "\n",
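        "As a quick sanity check (a minimal sketch, not the exercise solution), the MSE can be computed directly with elementwise tensor operations:\n",
        "\n",
        "```python\n",
        "import torch\n",
        "\n",
        "# hypothetical predictions and targets, for illustration only\n",
        "y_hat = torch.tensor([2.0, 1.0, 7.0])\n",
        "y = torch.tensor([1.0, -1.0, 7.5])\n",
        "\n",
        "mse = ((y_hat - y) ** 2).mean()  # average of the squared errors\n",
        "print(mse)  # tensor(1.7500)\n",
        "```\n",
        "\n",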
        "Let's implement a function that calculates the mean squared error."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XM3gUldHjkZn"
      },
      "source": [
        "### Exercise 4b: Squared Error calculation"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_6dabFckS70T"
      },
      "source": [
        "def squared_error(y_hat, y):\n",
        "  '''\n",
        "  Squared error loss function.\n",
        "  \n",
        "  Parameters\n",
        "  ----------\n",
        "  y_hat : torch.tensor\n",
        "      predicted values.\n",
        "  y : torch.tensor\n",
        "      true values.\n",
        "  \n",
        "  Returns\n",
        "  -------\n",
        "  err: FLOAT\n",
        "       the squared error (loss)\n",
        "  \n",
        "  '''\n",
        "  #####################################################################\n",
        "  # Fill in missing code (...),\n",
        "  # then remove or comment the line below to test your function.\n",
        "  raise NotImplementedError(\"Complete the squared_error function\")\n",
        "  #####################################################################\n",
        "  err = ...\n",
        "  return err\n",
        "\n",
        "# uncomment the lines below to test your function\n",
        "# print(squared_error(torch.tensor([2, 1, 7]), torch.tensor([1,-1, 7.5])))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "n4C6ROqYkQE4"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "def squared_error(y_hat, y):\n",
        "  '''\n",
        "  Squared error loss function.\n",
        "  \n",
        "  Parameters\n",
        "  ----------\n",
        "  y_hat : torch.tensor\n",
        "      predicted values.\n",
        "  y : torch.tensor\n",
        "      true values.\n",
        "  \n",
        "  Returns\n",
        "  -------\n",
        "  err: FLOAT\n",
        "       the squared error (loss)\n",
        "  \n",
        "  '''\n",
        "  err = (y_hat - y.reshape(y_hat.shape)) ** 2\n",
        "  return err\n",
        "\n",
        "\n",
        "print(squared_error(torch.tensor([2, 1, 7]), torch.tensor([1,-1, 7.5])))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gqjY2LJOgkia"
      },
      "source": [
        "## Section 4.3.1: Analytic Solution by Matrix Inversion\n",
        "\n",
        "As you can see from the equation above, linear regression is a relatively simple optimization problem. Unlike most other models that we will see in this course, linear regression can be solved analytically by applying a simple formula.\n",
        "\n",
        "To further simplify things, we can subsume the bias $b$ into the parameter $\\textbf{w}$ by appending a constant feature equal to one to each data point $\\textbf{x}$. Thus, the vectors $\\textbf{x} \\in \\mathbb{R}^{D+1}$.\n",
        "\n",
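        "For example (a minimal sketch with made-up numbers), appending a column of ones to the design matrix folds the bias into $\\mathbf{w}$:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "X = np.array([[1.0, 2.0],\n",
        "              [3.0, 4.0],\n",
        "              [5.0, 6.0]])  # N=3 examples, D=2 features\n",
        "X_aug = np.hstack((X, np.ones((X.shape[0], 1))))  # now N x (D+1)\n",
        "print(X_aug.shape)  # (3, 3)\n",
        "```\n",
        "\n",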
        "So, using matrix notation, the optimization problem is given by:\n",
        "\n",
        "\\begin{align}\n",
        "\\mathbf{w^{*}} &{}= \\underset{\\mathbf{w}}{\\mathrm{argmin}} \\left( ||\\textbf{Y} - \\textbf{X}\\textbf{w}||^2  \\right) \\\\\n",
        "&{}= \\underset{\\mathbf{w}}{\\mathrm{argmin}} \\left( \\left( \\textbf{Y} - \\textbf{X}\\textbf{w}\\right)^{\\text{T}} \\left( \\textbf{Y} - \\textbf{X}\\textbf{w}\\right) \\right)\n",
        "\\end{align}\n",
        "\n",
        "where $\\textbf{X} \\in \\mathbb{R}^{N \\times (D+1)}$ is the so-called design matrix (with the column of ones appended).\n",
        "\n",
        "Thus, the minimization problem is to set the derivative of the loss with respect to $\\textbf{w}$ to zero.\n",
        "\n",
        "\\begin{equation}\n",
        "\\frac{\\partial L(\\textbf{w})}{\\partial \\textbf{w}} = 0\n",
        "\\end{equation}\n",
        "\n",
        "Assuming that $\\textbf{X}^{\\text{T}}\\textbf{X}$ is full-rank and thus its inverse exists (implying that $N>D$ and the columns of $\\textbf{X}$ are linearly independent):\n",
        "\n",
        "\\begin{equation}\n",
        "\\textbf{w}^{\\textbf{*}} = \\left( \\textbf{X}^{\\text{T}}\\textbf{X} \\right)^{-1}\\textbf{X}^{\\text{T}}\\textbf{Y}\n",
        "\\end{equation}\n",
        "\n",
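        "In practice, explicitly inverting $\\textbf{X}^{\\text{T}}\\textbf{X}$ can be numerically fragile. A common alternative, sketched below on synthetic numbers, is `np.linalg.lstsq`, which solves the same least-squares problem and also handles rank-deficient cases:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "rng = np.random.default_rng(0)\n",
        "X = np.hstack((rng.normal(size=(50, 1)), np.ones((50, 1))))  # feature + bias column\n",
        "w_true = np.array([[-2.0], [1.4]])\n",
        "y = X @ w_true  # noiseless targets, so the fit should be exact\n",
        "\n",
        "w_star, *_ = np.linalg.lstsq(X, y, rcond=None)\n",
        "print(w_star.ravel())  # close to [-2.0, 1.4]\n",
        "```\n",
        "\n",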
        "**Note**: for the analytic proof, see the [Appendix](#proof)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oaFWgefwrOu7"
      },
      "source": [
        "### Exercise 4c: Write the analytical solution of the linear regression\n",
        "\n",
        "So, let's write a function that calculates the analytic solution of the linear regression problem given that we have a dataset $X$ and the target variables $y$.\n",
        "\n",
        "*Hint*: What is the determinant (`np.linalg.det`) of a non-invertible matrix?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "oXCHHxhhf9uQ"
      },
      "source": [
        "def analytical_sol(X, y):\n",
        "  '''\n",
        "  Analytical solution of the linear regression problem.\n",
        "  \n",
        "  Parameters\n",
        "  ----------\n",
        "  X : numpy.ndarray (float)\n",
        "      design matrix with dimensions NxD, D: features, N: examples.\n",
        "  y : numpy.ndarray (float)\n",
        "      target values.\n",
        "  \n",
        "  Returns\n",
        "  -------\n",
        "  params: numpy.ndarray\n",
        "          the parameters w, and bias (last element)\n",
        "\n",
        "          if `X.T @ X` is singular, returns an empty list.\n",
        "  '''\n",
        "\n",
        "  # Check if the inverse exists, unless print error.\n",
        "  params = []\n",
        "  #####################################################################\n",
        "  # Fill in missing code (...),\n",
        "  # then remove or comment the line below to test your function.\n",
        "  # IMPORTANT: Write the condition to check if the matrix is invertible.\n",
        "  raise NotImplementedError(\"Write the condition and the formula\")\n",
        "  #####################################################################\n",
        "  if ...:\n",
        "    params = ...\n",
        "  else:\n",
        "    print('LinAlgError. Matrix is Singular. No analytical solution.')\n",
        "\n",
        "  return (params)\n",
        "\n",
        "\n",
        "# tiny dataset\n",
        "alpha, beta = -2.0, 1.4\n",
        "xsynth = np.linspace(-1, 1, 10).reshape(-1, 1)  # 10 examples, 1 feature\n",
        "ysynth = alpha*xsynth + beta  # target values\n",
        "xsynth_append = np.hstack((xsynth, np.ones((10, 1))))  # Append a column with `1`\n",
        "\n",
        "# uncomment the lines below to test the analytical solution\n",
        "# params = analytical_sol(xsynth_append, ysynth)\n",
        "# print(f'Original: alpha={alpha} and beta={beta}')\n",
        "# print(f'Estimated: alpha={params[0].item()} and beta={params[1].item()}')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "zqhwsL3MkjnE"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "def analytical_sol(X, y):\n",
        "  '''\n",
        "  Analytical solution of the linear regression problem.\n",
        "  \n",
        "  Parameters\n",
        "  ----------\n",
        "  X : numpy.ndarray (float)\n",
        "      design matrix with dimensions NxD, D: features, N: examples.\n",
        "  y : numpy.ndarray (float)\n",
        "      target values.\n",
        "  \n",
        "  Returns\n",
        "  -------\n",
        "  params: numpy.ndarray\n",
        "          the parameters w, and bias (last element)\n",
        "\n",
        "          if `X.T @ X` is singular, returns an empty list.\n",
        "  '''\n",
        "\n",
        "  # Check if the inverse exists, unless print error.\n",
        "  params = []\n",
        "  if np.linalg.det(X.T @ X) != 0:\n",
        "    params = np.linalg.inv(X.T @ X) @ X.T @ y\n",
        "  else:\n",
        "    print('LinAlgError. Matrix is Singular. No analytical solution.')\n",
        "\n",
        "  return (params)\n",
        "\n",
        "\n",
        "# tiny dataset\n",
        "alpha, beta = -2.0, 1.4\n",
        "xsynth = np.linspace(-1, 1, 10).reshape(-1, 1)  # 10 examples, 1 feature\n",
        "ysynth = alpha*xsynth + beta  # target values\n",
        "xsynth_append = np.hstack((xsynth, np.ones((10, 1))))  # Append a column with `1`\n",
        "\n",
        "params = analytical_sol(xsynth_append, ysynth)\n",
        "print(f'Original: alpha={alpha} and beta={beta}')\n",
        "print(f'Estimated: alpha={params[0].item()} and beta={params[1].item()}')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7WLh46IQXwFf"
      },
      "source": [
        "## Section 4.4: Gradient descent\n",
        "\n",
        "Linear regression has a closed-form solution; however, this is not the case in most machine learning applications. When we cannot solve the models analytically, it turns out that we can still train models effectively in practice.\n",
        "\n",
        "The key technique for optimizing nearly any deep learning model consists of iteratively reducing the error by updating the parameters in the direction that incrementally lowers the loss function. This algorithm is called *gradient descent*.\n",
        "\n",
        "The most naive application of gradient descent takes the derivative of the loss function, which is the average of the losses computed on every example in the dataset.\n",
        "\n",
        "We define the gradient of a multi-variable function $f(\\theta)$ as $$\\nabla f(\\theta) = \\begin{bmatrix} \\dfrac{\\partial f(\\theta)}{\\partial \\theta_0} \\\\ \\dfrac{\\partial f(\\theta)}{\\partial \\theta_1} \\\\ \\vdots \\\\ \\dfrac{\\partial f(\\theta)}{\\partial \\theta_D} \\\\ \\end{bmatrix}$$\n",
        "\n",
        "\n",
        "Gradient descent attempts to find a minimum of a given function $f(\\theta)$ by descending in the direction opposite to the gradient.\n",
        "\n",
        "It updates the parameters iteratively in the opposite direction of the gradient. First, we need an initial guess of the solution, $\\theta^{(0)}$ (often randomly initialized). Then, we calculate the gradient of the function evaluated at the current guess. The update rule is as follows: \n",
        "\\begin{equation}\n",
        "\\theta^{\\text{new}} = \\theta^{\\text{old}} - \\eta \\nabla f(\\theta^{\\text{old}})\n",
        "\\end{equation}\n",
        "where $\\eta$ is the learning rate, which controls how fast or slow the model learns.\n",
        "\n",
        "In our simple example, our model has two parameters, $w$ and $b$; thus, the gradient descent updates can be written as:\n",
        "\n",
        "\\begin{equation}\n",
        "w \\leftarrow w - \\eta \\frac{\\partial }{\\partial w} L(w,b) \\\\\n",
        "b \\leftarrow b - \\eta \\frac{\\partial }{\\partial b} L(w,b)\n",
        "\\end{equation}\n",
        "\n",
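        "To make the update rule concrete, here is a minimal sketch (on a toy one-parameter quadratic, not the regression model) of gradient descent with autograd:\n",
        "\n",
        "```python\n",
        "import torch\n",
        "\n",
        "theta = torch.tensor([5.0], requires_grad=True)\n",
        "eta = 0.1  # learning rate\n",
        "\n",
        "for _ in range(100):\n",
        "    f = (theta - 3.0) ** 2  # minimum at theta = 3\n",
        "    f.backward()  # compute df/dtheta\n",
        "    with torch.no_grad():\n",
        "        theta -= eta * theta.grad  # theta <- theta - eta * grad\n",
        "        theta.grad.zero_()  # reset the accumulated gradient\n",
        "\n",
        "print(theta.item())  # approximately 3.0\n",
        "```\n",
        "\n",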
        "We start with $w=0$ and $b=0$."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "CXMMjzZjrnnB"
      },
      "source": [
        "def gradient_descent(w, b, lr):\n",
        "  '''\n",
        "  gradient_descent algorithm.\n",
        "  \n",
        "  Parameters\n",
        "  ----------\n",
        "  w : torch.tensor\n",
        "      weights.\n",
        "  b : torch.tensor\n",
        "      bias.\n",
        "  lr : FLOAT\n",
        "      learning rate.\n",
        "  \n",
        "  Returns\n",
        "  -------\n",
        "  w, b.\n",
        "  \n",
        "  '''\n",
        "  with torch.no_grad():\n",
        "    w -= w.grad * lr\n",
        "    b -= b.grad * lr\n",
        "    # Reset gradients to zero so they do not accumulate across updates\n",
        "    w.grad.zero_()\n",
        "    b.grad.zero_()\n",
        "\n",
        "  return (w, b)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "f99szh_7K8ST"
      },
      "source": [
        "## Section 4.5: Training\n",
        "\n",
        "Here, we will write the training loop from scratch. First, we choose the learning rate and the number of epochs, and we define the loss function. Then, we initialize the parameters. We use a list `losses` to record the mean squared error after each **epoch** (an **epoch** denotes one full pass of the dataset through the network)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Bg4woKDGVFlg"
      },
      "source": [
        "lr = 1e-2 # learning rate\n",
        "num_epochs = 1000  # number of epochs (updates)\n",
        "net = linear_regression\n",
        "loss = squared_error\n",
        "\n",
        "y_train = y.squeeze()  # to avoid wrong broadcasting\n",
        "\n",
        "w = torch.zeros(1, requires_grad = True)\n",
        "b = torch.zeros(1, requires_grad = True)\n",
        "\n",
        "losses = []\n",
        "\n",
        "epoch_range = trange(num_epochs, desc='loss: ', leave=True)\n",
        "\n",
        "for epoch in epoch_range:\n",
        "  if losses:\n",
        "    epoch_range.set_description(\"loss: {:.6f}\".format(losses[-1]))\n",
        "    epoch_range.refresh()  # to show immediately the update\n",
        "\n",
        "  y_hat = net(X, w, b)\n",
        "  l = loss(y_hat, y_train).mean()  # Loss in `X` and `y`\n",
        "  \n",
        "  # Compute gradient on `l` with respect to `w`, `b`\n",
        "  l.backward()\n",
        "  w, b = gradient_descent(w, b, lr)  # Update parameters using their gradient\n",
        "  \n",
        "  losses.append(l.item())  # store a python float, not a tensor with grad history\n",
        "\n",
        "  time.sleep(0.01)\n",
        "\n",
        "w_train = w\n",
        "b_train = b\n",
        "y_hat = net(X, w_train, b_train).reshape(-1,1)\n",
        "\n",
        "plt.figure(figsize=(14, 5))\n",
        "plt.subplot(1, 2, 1)\n",
        "plt.plot(range(num_epochs), losses)\n",
        "plt.xlabel('epoch')\n",
        "plt.ylabel('loss (a.u.)')\n",
        "\n",
        "plt.subplot(1, 2, 2)\n",
        "plt.scatter(X, y_train, label='original data')\n",
        "plt.plot(X.reshape(-1,1), y_hat.detach(), label='regression',\n",
        "         color='red', linewidth=3.0)\n",
        "plt.xlabel('independent variable')\n",
        "plt.ylabel('dependent variable')\n",
        "plt.title(f'Toy dataset with {N} samples')\n",
        "plt.legend()\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "riysb6TSuh0j"
      },
      "source": [
        "**Student response**: Now, go back and change the learning rate to a bigger number and see what happens. Change the learning rate to a smaller number and see what happens. What happens if you change the number of epochs to a higher or lower value? Why do you think this is happening? "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "pwujC7tZcBHZ"
      },
      "source": [
        "lr_epochs_vs_convergence = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZIyDpsdHfLNz"
      },
      "source": [
        "---\n",
        "\n",
        "# Section 5: Linear regression using Pytorch's Sequential class"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "-em232MHfMti",
        "cellView": "form"
      },
      "source": [
        "#@markdown I'm a time tracker, please run me.\n",
        "\n",
        "try: t5;\n",
        "except NameError: t5=time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dIdB6yaNqiI5"
      },
      "source": [
        "Now, let's implement the same model using PyTorch's built-in modules. When we implemented linear regression from scratch, we defined our model parameters explicitly and coded the calculations using basic linear algebra operations. You should know how to do this.\n",
        "\n",
        "But once your models get more complex, and once you have to do this nearly every day, you will be glad for the assistance. The situation is similar to coding up your own blog from scratch. Doing it once or twice is rewarding and instructive, but you would be a lousy web developer if every time you need a blog, you spent a month reinventing the wheel.\n",
        "\n",
        "We can use a framework's predefined layers for standard operations, which lets us focus on the model's architecture rather than on its low-level implementation. We will first define a model variable `net`, which will refer to an instance of the `Sequential` class. The `Sequential` class defines a container for several layers that are chained together: given input data, a `Sequential` instance passes it through the first layer, feeds that output into the second layer, and so forth. Our model consists of only one layer in the following example, so we do not really need `Sequential`. But since nearly all of our future models will involve multiple layers, we will use it anyway to familiarize you with the most standard workflow.\n",
        "\n",
        "Recall the architecture of a single-layer network. The layer is said to be fully-connected because each of its inputs is connected to each of its outputs through matrix-vector multiplication.\n",
        "\n",
        "The only difference is that here we use the design matrix in a transposed form; $ \\mathbf{X} \\in \\mathbb{R}^{N \\times D} $, thus the model is defined as:\n",
        "\n",
        "\\begin{equation}\n",
        "\\mathbf{Y} = \\mathbf{X} \\mathbf{w} + b\n",
        "\\end{equation}"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "r5f4JH9gqgxf"
      },
      "source": [
        "net = nn.Sequential(nn.Linear(1, 1))\n",
        "\n",
        "net[0].weight.data.fill_(0)\n",
        "net[0].bias.data.fill_(0)\n",
        "\n",
        "loss = nn.MSELoss()\n",
        "\n",
        "optimizer = torch.optim.SGD(net.parameters(), lr=1e-2)\n",
        "\n",
        "losses = []\n",
        "\n",
        "epoch_range = trange(num_epochs, desc='loss: ', leave=True)\n",
        "\n",
        "for epoch in epoch_range:\n",
        "  if losses:\n",
        "    epoch_range.set_description(\"loss: {:.6f}\".format(losses[-1]))\n",
        "    epoch_range.refresh() # to show immediately the update\n",
        "\n",
        "  l = loss(net(X.T) , y)\n",
        "  l.backward()\n",
        "  optimizer.step()\n",
        "  optimizer.zero_grad()\n",
        "  l = loss(net(X.T), y)\n",
        "  losses.append(l.item())  # store a python float, not a tensor with grad history\n",
        "\n",
        "y_hat = net(X.T).reshape(-1,1)\n",
        "\n",
        "plt.figure(figsize=(14, 5))\n",
        "plt.subplot(1, 2, 1)\n",
        "plt.plot(range(num_epochs), losses)\n",
        "plt.xlabel('epoch')\n",
        "plt.ylabel('loss (a.u.)')\n",
        "\n",
        "plt.subplot(1, 2, 2)\n",
        "plt.scatter(X, y, label='original data')\n",
        "plt.plot(X.reshape(-1,1), y_hat.detach(), label='regression',\n",
        "         color='red', linewidth=3.0)\n",
        "plt.xlabel('independent variable')\n",
        "plt.ylabel('dependent variable')\n",
        "plt.title(f'Toy dataset with {N} samples')\n",
        "plt.legend()\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YKVWwe_avAkQ"
      },
      "source": [
        "## Section 5.1: Model evaluation\n",
        "\n",
        "We compare the model parameters learned by training on the given dataset with the actual parameters that generated it. To access the parameters, we first select the layer we need from the model (i.e., `net`) and then access that layer's weights and bias. As in our from-scratch implementation, note that our estimated parameters are close to their ground-truth counterparts."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "pyYgP2gfvLGX"
      },
      "source": [
        "w_pytorch = net[0].weight.data\n",
        "print('error in estimating w:', original_w - w_pytorch.reshape(original_w.shape))\n",
        "b_pytorch = net[0].bias.data\n",
        "print('error in estimating b:', original_b - b_pytorch)\n",
        "\n",
        "print(f'Original w: {original_w}, from-scratch: {w_train.detach()}, pytorch: {w_pytorch}')\n",
        "print(f'Original b: {original_b}, from-scratch: {b_train.detach()}, pytorch: {b_pytorch}')\n",
        "\n",
        "\n",
        "X = X.cpu().detach().numpy()\n",
        "y = y.cpu().detach().numpy()\n",
        "\n",
        "# We append a row with ones\n",
        "X_append = np.vstack((X, np.ones((1,1000))))\n",
        "\n",
        "# Check if the inverse matrix exists!!\n",
        "params = analytical_sol(X_append.T, y)\n",
        "if len(params) != 0:\n",
        "  print(f'Original w: {original_w}, analytical: {params[0]}')\n",
        "  print(f'Original b: {original_b}, analytical: {params[1]}')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tPZzpQesh-DS"
      },
      "source": [
        "## Section 5.2: More features than the number of examples"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "LW8wJnbMh_nf"
      },
      "source": [
        "#@title Video: Linear Regression with More Dimensions than Data\n",
        "\n",
        "video = YouTubeVideo(id=\"_uhMofhfYno\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8mqQp7GpfUKv"
      },
      "source": [
        "But what if we do not have enough data? And what if the features are linearly dependent?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DzLkVTXmf4jZ"
      },
      "source": [
        "# Generate a second dataset with D>>N\n",
        "\n",
        "original_w = torch.randn(200, 1).reshape(-1,1)\n",
        "original_b = 1.2\n",
        "N = 10 # number of examples\n",
        "X, y = synthetic_dataset(original_w, original_b, num_examples=N, sigma=0.5)\n",
        "\n",
        "print(f'The dataset has N={X.shape[1]} examples in the {X.shape[0]}D space!')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "YR3XkdH7gQLH"
      },
      "source": [
        "# Model construction\n",
        "net = nn.Sequential(nn.Linear(X.shape[0], 1))\n",
        "\n",
        "# Parameter initialization to zero\n",
        "net[0].weight.data.fill_(0)\n",
        "net[0].bias.data.fill_(0)\n",
        "print(net[0].weight.shape)\n",
        "\n",
        "loss = nn.MSELoss()\n",
        "\n",
        "optimizer = torch.optim.SGD(net.parameters(), lr=1e-2)\n",
        "\n",
        "losses = []\n",
        "\n",
        "epoch_range = trange(num_epochs, desc='loss: ', leave=True)\n",
        "\n",
        "for epoch in epoch_range:\n",
        "  if losses:\n",
        "    epoch_range.set_description(\"loss: {:.6f}\".format(losses[-1]))\n",
        "    epoch_range.refresh() # to show immediately the update\n",
        "\n",
        "  l = loss(net(X.T), y)\n",
        "  l.backward()\n",
        "  optimizer.step()\n",
        "  optimizer.zero_grad()\n",
        "  l = loss(net(X.T), y)\n",
        "  losses.append(l.item())  # store a python float, not a tensor with grad history\n",
        "\n",
        "y_hat = net(X.T).reshape(-1,1)\n",
        "\n",
        "plt.figure()\n",
        "plt.plot(range(num_epochs), losses)\n",
        "plt.xlabel('epoch')\n",
        "plt.ylabel('loss (a.u.)')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "I5L1Au2NgTdv"
      },
      "source": [
        "Let's compute the analytic solution."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "zgfCNBL8gWk0"
      },
      "source": [
        "# Append a row with ones and take the transpose\n",
        "X_append = np.vstack((X.detach(), np.ones((1, X.shape[1])))).T\n",
        "\n",
        "# Analytical solution\n",
        "params = analytical_sol(X_append, y.cpu().detach().numpy())\n",
        "if len(params) != 0:\n",
        "  print(f'Original w: {original_w}, analytical: {params[0]}')\n",
        "  print(f'Original b: {original_b}, analytical: {params[1]}')\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xoMBB0mxi-dF",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Condition Numbers\n",
        "\n",
        "video = YouTubeVideo(id=\"FpcSTi9YITc\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VtRxB698CTfG"
      },
      "source": [
        "---\n",
        "# Wrap up"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "P5-HZSWcCbr3"
      },
      "source": [
        "## Submit responses"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FCJJf7OFk8SU",
        "cellView": "form"
      },
      "source": [
        "#@markdown #Run Cell to Show Airtable Form\n",
        "#@markdown ##**Confirm your answers and then click \"Submit\"**\n",
        "\n",
        "import time\n",
        "import numpy as np\n",
        "from IPython.display import IFrame\n",
        "\n",
        "def prefill_form(src, fields: dict):\n",
        "  '''\n",
        "  src: the original src url to embed the form\n",
        "  fields: a dictionary of field:value pairs,\n",
        "  e.g. {\"pennkey\": my_pennkey, \"location\": my_location}\n",
        "  '''\n",
        "  prefills = \"&\".join([\"prefill_%s=%s\"%(key, fields[key]) for key in fields])\n",
        "  src = src + prefills\n",
        "  src = \"+\".join(src.split(\" \"))\n",
        "  return src\n",
        "\n",
        "\n",
        "#autofill time if it is not present\n",
        "try: t0;\n",
        "except NameError: t0 = time.time()\n",
        "try: t1;\n",
        "except NameError: t1 = time.time()\n",
        "try: t2;\n",
        "except NameError: t2 = time.time()\n",
        "try: t3;\n",
        "except NameError: t3 = time.time()\n",
        "try: t4;\n",
        "except NameError: t4 = time.time()\n",
        "try: t5;\n",
        "except NameError: t5 = time.time()\n",
        "try: t6;\n",
        "except NameError: t6 = time.time()\n",
        "\n",
        "#autofill fields if they are not present\n",
        "#a missing pennkey and pod will result in an Airtable warning\n",
        "#which is easily fixed user-side.\n",
        "try: my_pennkey;\n",
        "except NameError: my_pennkey = \"\"\n",
        "try: my_pod;\n",
        "except NameError: my_pod = \"\"\n",
        "try: my_w1_upshot;\n",
        "except NameError: my_w1_upshot = \"\"\n",
        "try: my_expectations;\n",
        "except NameError: my_expectations = \"\"\n",
        "try: deep_learning_benchmark;\n",
        "except NameError: deep_learning_benchmark = \"\"\n",
        "try: derivative_free_opt_alg;\n",
        "except NameError: derivative_free_opt_alg = \"\"\n",
        "try: autograd_uses;\n",
        "except NameError: autograd_uses = \"\"\n",
        "try: lr_epochs_vs_convergence;\n",
        "except NameError: lr_epochs_vs_convergence = \"\"\n",
        "\n",
        "times = [(t-t0) for t in [t1,t2,t3,t4,t5,t6]]\n",
        "\n",
        "fields = {\"pennkey\": my_pennkey,\n",
        "          \"pod\": my_pod,\n",
        "          \"my_w1_upshot\": my_w1_upshot,\n",
        "          \"expectations\": my_expectations,\n",
        "          \"deep_learning_benchmark\": deep_learning_benchmark,\n",
        "          \"derivative_free_opt_alg\": derivative_free_opt_alg,\n",
        "          \"autograd_uses\": autograd_uses,\n",
        "          \"lr_epochs_vs_convergence\": lr_epochs_vs_convergence,\n",
        "          \"cumulative_times\": times}\n",
        "\n",
        "src = \"https://airtable.com/embed/shrjWLVOEJF0CA3up?\"\n",
        "\n",
        "#now instead of the original source url, we do: src = prefill_form(src, fields)\n",
        "display(IFrame(src = prefill_form(src, fields), width = 800, height = 400))\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HKn5d3CCC05w"
      },
      "source": [
        "## Feedback\n",
        "How could this session have been better? How happy are you in your group? How do you feel right now?\n",
        "\n",
        "Feel free to use the embedded form below or use this link:\n",
        "<a target=\"_blank\" rel=\"noopener noreferrer\" href=\"https://airtable.com/shrNSJ5ECXhNhsYss\">https://airtable.com/shrNSJ5ECXhNhsYss</a>"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HIvhG6VZ8zez"
      },
      "source": [
        "display(IFrame(src=\"https://airtable.com/embed/shrNSJ5ECXhNhsYss?backgroundColor=red\", width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IERM800txGdR"
      },
      "source": [
        "---\n",
        "<a name=\"proof\"></a>\n",
        "# Appendix: Linear Regression analytic solution - Proof\n",
        "\n",
        "\\begin{align}\n",
        "\\frac{\\partial L(\\textbf{w})}{\\partial \\textbf{w}} &{}= 0 \\\\\n",
        "\\frac{\\partial }{\\partial \\textbf{w}}\\left( \\left( \\textbf{Y} - \\textbf{X}\\textbf{w}\\right)^{\\text{T}} \\left( \\textbf{Y} - \\textbf{X}\\textbf{w}\\right) \\right) &{} = 0 \\\\\n",
        "\\frac{\\partial }{\\partial \\textbf{w}}\\left( \\left( \\textbf{Y}^{\\text{T}} - \\left( \\textbf{X}\\textbf{w} \\right)^{\\text{T}}\\right) \\left( \\textbf{Y} - \\textbf{X}\\textbf{w}\\right) \\right) &{} = 0 \\\\\n",
        "\\frac{\\partial }{\\partial \\textbf{w}} \\left( \\textbf{Y}^{\\text{T}}\\textbf{Y} -\n",
        "\\left( \\textbf{X}\\textbf{w} \\right)^{\\text{T}}\\textbf{Y} - \\textbf{Y}^{\\text{T}}\\left( \\textbf{X}\\textbf{w}\\right)+ \\textbf{w}^{\\text{T}}\\textbf{X}^{\\text{T}}\\textbf{X}\\textbf{w} \\right)&{}= 0\n",
        "\\end{align}\n",
        "\n",
        "Here, we need to simplify the second term on the left-hand side. We know that $\\textbf{X} \\in \\mathbb{R}^{N \\times (D+1)}$ and $\\textbf{w} \\in \\mathbb{R}^{(D+1) \\times 1}$, so their product $\\textbf{X}\\textbf{w} \\in \\mathbb{R}^{N \\times 1}$. Thus, $\\left( \\textbf{X}\\textbf{w} \\right)^{\\text{T}}\\textbf{Y} \\in \\mathbb{R}$, i.e., the second term is a scalar.\n",
        "\n",
        "Thus, we can replace it with its transpose, i.e., $\\left( \\left( \\textbf{X}\\textbf{w} \\right)^{\\text{T}} \\textbf{Y} \\right)^{\\text{T}} = \\textbf{Y}^{\\text{T}} \\left( \\textbf{X}\\textbf{w} \\right)$. The derivative then simplifies to:\n",
        "\n",
        "\\begin{align}\n",
        "\\frac{\\partial }{\\partial \\textbf{w}} \\left( \\textbf{Y}^{\\text{T}}\\textbf{Y} -\\textbf{Y}^{\\text{T}} \\left( \\textbf{X}\\textbf{w} \\right) - \\textbf{Y}^{\\text{T}} \\left( \\textbf{X}\\textbf{w} \\right) + \\textbf{w}^{\\text{T}}\\textbf{X}^{\\text{T}}\\textbf{X}\\textbf{w} \\right)&{}= 0 \\\\\n",
        "\\frac{\\partial }{\\partial \\textbf{w}} \\left( \\textbf{Y}^{\\text{T}}\\textbf{Y} -2\\textbf{Y}^{\\text{T}} \\left( \\textbf{X}\\textbf{w} \\right) + \\textbf{w}^{\\text{T}}\\textbf{X}^{\\text{T}}\\textbf{X}\\textbf{w} \\right)&{}= 0 \\\\\n",
        "\\frac{\\partial }{\\partial \\textbf{w}} \\left( \\textbf{Y}^{\\text{T}}\\textbf{Y} -2\\left( \\textbf{X}^{\\text{T}}\\textbf{Y} \\right)^{\\text{T}} \\textbf{w} + \\textbf{w}^{\\text{T}}\\textbf{X}^{\\text{T}}\\textbf{X}\\textbf{w} \\right)&{}= 0 \\\\\n",
        "-2\\textbf{X}^{\\text{T}}\\textbf{Y} + 2\\textbf{X}^{\\text{T}}\\textbf{X}\\textbf{w} &{}= 0\n",
        "\\end{align}\n",
        "\n",
        "Assuming that $\\textbf{X}^{\\text{T}}\\textbf{X}$ is full-rank, and thus invertible (which implies $N > D$ and that the columns of $\\textbf{X}$ are linearly independent):\n",
        "\n",
        "\\begin{equation}\n",
        "\\textbf{w}^{\\textbf{*}} = \\left( \\textbf{X}^{\\text{T}}\\textbf{X} \\right)^{-1}\\textbf{X}^{\\text{T}}\\textbf{Y}\n",
        "\\end{equation}\n",
        "\n",
        "\n",
        "For the solution, we have used the following identities:\n",
        "\n",
        "*Derivative of a linear function*\n",
        "\n",
        "\\begin{equation}\n",
        "\\frac{\\partial }{\\partial \\vec{x}} \\vec{\\alpha}^{\\text{T}}\\vec{x} = \\frac{\\partial }{\\partial \\vec{x}} \\vec{x}^{\\text{T}} \\vec{\\alpha} = \\vec{\\alpha}\n",
        "\\end{equation}\n",
        "\n",
        "*(Think of calculus: $\\frac{d}{dx}(\\alpha x)=\\alpha$)*\n",
        "\n",
        "*Derivative of a quadratic function*\n",
        "\n",
        "\\begin{equation}\n",
        "\\frac{\\partial }{\\partial \\vec{x}} \\vec{x}^{\\text{T}}A \\vec{x} = (A + A^{\\text{T}}) \\vec{x}\n",
        "\\end{equation}\n",
        "\n",
        "where if $A=A^{\\text{T}}$, i.e., $A$ is symmetric, then\n",
        "\n",
        "\\begin{equation}\n",
        "\\frac{\\partial }{\\partial \\vec{x}} \\vec{x}^{\\text{T}}A \\vec{x} = 2A \\vec{x}\n",
        "\\end{equation}\n",
        "\n",
        "*(Think of calculus: $\\frac{d}{dx}(\\alpha x^2)=2 \\alpha x$)*\n",
        "\n"
      ]
    }
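    ,
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "analytic_check_md"
      },
      "source": [
        "As a quick numerical sanity check of the derivation above, the cell below (a minimal sketch on synthetic data; the variable names are illustrative) computes $\\textbf{w}^{\\textbf{*}} = \\left( \\textbf{X}^{\\text{T}}\\textbf{X} \\right)^{-1}\\textbf{X}^{\\text{T}}\\textbf{Y}$ and also cross-checks the gradient $-2\\textbf{X}^{\\text{T}}\\textbf{Y} + 2\\textbf{X}^{\\text{T}}\\textbf{X}\\textbf{w}$ against PyTorch's autodifferentiation."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "analytic_check_code"
      },
      "source": [
        "import numpy as np\n",
        "import torch\n",
        "\n",
        "# Synthetic data: X gets a bias column, w_true is the ground-truth weight vector\n",
        "rng = np.random.default_rng(0)\n",
        "N, D = 100, 3\n",
        "X = np.hstack([np.ones((N, 1)), rng.normal(size=(N, D))])\n",
        "w_true = rng.normal(size=(D + 1, 1))\n",
        "Y = X @ w_true + 0.01 * rng.normal(size=(N, 1))  # small observation noise\n",
        "\n",
        "# Closed-form solution: w* = (X^T X)^{-1} X^T Y (solve() is more stable than inv())\n",
        "w_star = np.linalg.solve(X.T @ X, X.T @ Y)\n",
        "print('max |w* - w_true|:', np.abs(w_star - w_true).max())\n",
        "\n",
        "# Cross-check the gradient -2 X^T Y + 2 X^T X w with autograd\n",
        "Xt, Yt = torch.from_numpy(X), torch.from_numpy(Y)\n",
        "w = torch.randn(D + 1, 1, dtype=torch.float64, requires_grad=True)\n",
        "loss = ((Yt - Xt @ w) ** 2).sum()  # L(w) = (Y - Xw)^T (Y - Xw)\n",
        "loss.backward()\n",
        "analytic_grad = -2 * Xt.T @ Yt + 2 * Xt.T @ Xt @ w.detach()\n",
        "print('gradients match:', torch.allclose(w.grad, analytic_grad))"
      ],
      "execution_count": null,
      "outputs": []
    }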
  ]
}