{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "<center>\n",
    "    <img src=\"./images/mlfasp.png\">\n",
    "</center>\n",
    "\n",
    "#### Prof. Dr.-Ing. Gerald Schuller <br> Jupyter Notebook: Renato Profeta\n",
    "\n",
    "[Applied Media Systems Group](https://www.tu-ilmenau.de/en/applied-media-systems-group/) <br>\n",
    "[Technische Universität Ilmenau](https://www.tu-ilmenau.de/)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hide_input": true,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "# Neural Networks Basics, Detector"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "hide_input": true
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/9ueXaEbRFLY\" frameborder=\"0\" allow=\"accelerometer; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "%%html\n",
    "<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/9ueXaEbRFLY\" frameborder=\"0\" allow=\"accelerometer; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hide_input": false,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "The simplest layer is the so-called **fully connected layer.** If we have a 1-dimensional input array \"*x*\" of length \"*in_features*\", and a 1-dimensional output array \"*y*\" of length \"*out_features*\", then its function is simply a **matrix multiplication** with an addition of an array \"*b*\", called **\"bias\"**, followed by a non-linear function, called the **\"activation function\"**.\n",
    "\n",
    "In PyTorch, the matrix multiplication and addition is the function:\n",
    "\n",
    "`torch.nn.Linear(in_features, out_features, bias=True)`\n",
    "\n",
    "The Linear layer computes the function $y=x \\cdot A^T + b$, where $.^T$ is the transpose operator for the matrix. The\n",
    "coefficients of the matrix *A* are called the **\"weight\"**, and those of the array *b* the **\"bias\"**. They will be obtained using optimization, also called **\"training\"**."
   ]
  },
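  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check, the following minimal sketch (assuming PyTorch is installed) builds a small `Linear` layer and verifies that it computes $y=x \\cdot A^T + b$ from the layer's `weight` and `bias` attributes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "lin = torch.nn.Linear(in_features=3, out_features=2, bias=True)\n",
    "x = torch.randn(3)\n",
    "y = lin(x)\n",
    "# The same result, computed explicitly as y = x * A^T + b:\n",
    "y_manual = x @ lin.weight.T + lin.bias\n",
    "print(torch.allclose(y, y_manual))"
   ]
  },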
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "hide_input": true
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/wKwEfg8eKNA?rel=0\" frameborder=\"0\" allow=\"accelerometer; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "%%html\n",
    "<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/wKwEfg8eKNA?rel=0\" frameborder=\"0\" allow=\"accelerometer; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "hide_input": true,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<iframe src=\"https://pytorch.org/docs/master/generated/torch.nn.Linear.html\" width=\"900\" height=\"600\"></iframe>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "%%html\n",
    "<iframe src=\"https://pytorch.org/docs/master/generated/torch.nn.Linear.html\" width=\"900\" height=\"600\"></iframe>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "This is called **\"fully connected\"** because the matrix *A* connects each input element (also called **\"feature\"**) to each output element or feature.\n",
    "\n",
    "The input and output arrays are 1-dimensional because of the full and fixed connectivity. If we have an **image** as input we would just **reshape into a 1-dimensional array**, for instance with \"*view*\" or \"*reshape*\".\n",
    "\n",
    "Often neural networks are used as **detectors**. In that case, each value in the output array y would correspond to the output of one detector, and each detector represents one **\"class\"**.\n",
    "\n",
    "This linear layer is usually followed by a non-linear function, also called an **\"activation function\"**, for instance the rectified linear unit function, which is applied to each output element or feature:\n",
    "\n",
    "`torch.nn.ReLU(inplace=False)`\n",
    "\n",
    "Applies the element-wise function:\n",
    "\n",
    "$$\\large \\text{ReLU}(x)=\\max(0,x)$$\n",
    "\n",
    "This function limits the output to the non-negative range. This is useful when the output represents a detector. \n",
    "\n",
    "In this case, a value of 1 might mean \"detected with certainty\", and a value of 0 would mean \"certainly not detected\".\n",
    "Here, negative values would make no sense. This is probably the most widely used activation function because of its simplicity."
   ]
  },
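  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a small sketch (assuming PyTorch is installed), `ReLU` applied element-wise clamps every negative entry to zero:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "relu = torch.nn.ReLU()\n",
    "x = torch.tensor([-2.0, -0.5, 0.0, 1.5])\n",
    "print(relu(x))  # negative entries become 0.0, non-negative ones pass through"
   ]
  },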
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "hide_input": true,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<iframe src=\"https://pytorch.org/docs/master/generated/torch.nn.ReLU.html\" width=\"900\" height=\"600\"></iframe>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "%%html\n",
    "<iframe src=\"https://pytorch.org/docs/master/generated/torch.nn.ReLU.html\" width=\"900\" height=\"600\"></iframe>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "Often a slightly modified version is used that avoids the vanishing gradient for negative values, which helps the optimization. It has a small slope for negative values and is hence called “LeakyReLU”:\n",
    "\n",
    "`torch.nn.LeakyReLU(negative_slope=0.01, inplace=False)`\n",
    "\n",
    "Applies the element-wise function:\n",
    "\n",
    "$$\\large \\text{LeakyReLU}(x)=\\max(0,x)+\\text{negative\\_slope} \\cdot \\min(0,x)$$"
   ]
  },
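  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A corresponding sketch for `LeakyReLU` (again assuming PyTorch is installed): negative inputs are scaled by `negative_slope` instead of being clamped to zero:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "lrelu = torch.nn.LeakyReLU(negative_slope=0.01)\n",
    "x = torch.tensor([-2.0, 0.0, 3.0])\n",
    "print(lrelu(x))  # -2.0 becomes -0.02, non-negative values pass through"
   ]
  },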
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "hide_input": true,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<iframe src=\"https://pytorch.org/docs/master/generated/torch.nn.LeakyReLU.html\" width=\"900\" height=\"600\"></iframe>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "%%html\n",
    "<iframe src=\"https://pytorch.org/docs/master/generated/torch.nn.LeakyReLU.html\" width=\"900\" height=\"600\"></iframe>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "The Softmax activation function turns the outputs of a network into something resembling a probability distribution: positive values that sum up to 1.\n",
    "\n",
    "`torch.nn.Softmax(dim=None)`\n",
    "\n",
    "Softmax is defined as:\n",
    "\n",
    "$$\\large\n",
    "\\text{Softmax}(x_i) = \\dfrac{e^{x_i}}{\\sum_j e^{x_j}}$$"
   ]
  },
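  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch (assuming PyTorch is installed) showing that `Softmax` indeed produces positive outputs that sum up to 1:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "softmax = torch.nn.Softmax(dim=0)\n",
    "x = torch.tensor([1.0, 2.0, 3.0])\n",
    "p = softmax(x)\n",
    "print(p)        # all entries positive, largest input gets largest probability\n",
    "print(p.sum())  # sums to 1"
   ]
  },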
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "hide_input": true,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<iframe src=\"https://pytorch.org/docs/master/generated/torch.nn.Softmax.html\" width=\"900\" height=\"600\"></iframe>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "%%html\n",
    "<iframe src=\"https://pytorch.org/docs/master/generated/torch.nn.Softmax.html\" width=\"900\" height=\"600\"></iframe>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hide_input": true,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "An alternative is the so-called \"*sigmoid*\" function, which is differentiable everywhere, but more complex to compute.\n",
    "This is the classic activation function, already used in early papers about neural networks:\n",
    "\n",
    "`torch.nn.Sigmoid`\n",
    "\n",
    "Defined as:\n",
    "\n",
    "$$\\large\n",
    "\\text{Sigmoid}(x) = \\dfrac{1}{1+e^{-x}}$$"
   ]
  },
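  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch (assuming PyTorch is installed) of the sigmoid squashing its inputs into the range $(0,1)$, with $\\text{Sigmoid}(0)=0.5$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "sig = torch.nn.Sigmoid()\n",
    "x = torch.tensor([-4.0, 0.0, 4.0])\n",
    "print(sig(x))  # values squashed into (0, 1); the middle entry is exactly 0.5"
   ]
  },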
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "hide_input": true,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<iframe src=\"https://pytorch.org/docs/master/generated/torch.nn.Sigmoid.html\" width=\"900\" height=\"600\"></iframe>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "%%html\n",
    "<iframe src=\"https://pytorch.org/docs/master/generated/torch.nn.Sigmoid.html\" width=\"900\" height=\"600\"></iframe>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hide_input": false,
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Python Example for a Linear Layer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "hide_input": true
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/XGfYLfcJIsk?rel=0\" frameborder=\"0\" allow=\"accelerometer; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "%%html\n",
    "<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/XGfYLfcJIsk?rel=0\" frameborder=\"0\" allow=\"accelerometer; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hide_input": false,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "For the linear layer, the shapes of the input and output tensors are:\n",
    "- Input:  (N,∗,Hin) where ∗ means any number of additional dimensions and Hin=in_features.\n",
    "- Output: (N,∗,Hout) where all but the last dimension are the same shape as the input and Hout=out_features.\n",
    "\n",
    "N is the **batch size**, * is usually the sample index within the set, Hin is the length of our input signal, and Hout is the number of classes (or detectors) that we desire.\n",
    "\n",
    "The **batch index** is for different sets of training and target data. Optimization is performed one batch after the other. This saves memory because only one batch is loaded into memory.\n",
    "\n",
    "*Example:*\n",
    "\n",
    "Imagine we have 2 input signals we want to detect, one increasing and one decreasing:\n",
    "\n",
    "$$\\large\n",
    "x_0=[1,2], \\hspace{1cm} x_1=[2,1]$$\n",
    "\n",
    "We assemble the 2 signals into a 2-dimensional tensor (a matrix), where each signal is one row:\n",
    "\n",
    "$$\\large\n",
    "x=\n",
    "\\left[ \n",
    "    \\begin{array}{cc}\n",
    "    1 & 2 \\\\\n",
    "    2 & 1 \\\\\n",
    "    \\end{array}\n",
    "\\right]\n",
    "$$\n",
    "\n",
    "We want to have 2 detectors, one for the first signal, and one for the second signal. The desired output of the first detector should be \"1\" for the first signal $x_0$ and \"0\" for the second $x_1$. The second detector should output \"0\" for the first signal $x_0$ and \"1\" for the second signal $x_1$. The desired output is called the **\"target\"**.\n",
    "\n",
    "We assemble the desired outputs or targets in a matrix where, as for the input, each row corresponds to one signal and each column to one detector:\n",
    "\n",
    "$$\\large\n",
    "y=\n",
    "\\left[ \n",
    "    \\begin{array}{cc}\n",
    "    1 & 0 \\\\\n",
    "    0 & 1 \\\\\n",
    "    \\end{array}\n",
    "\\right]\n",
    "$$\n",
    "\n",
    "For our linear layer we now need to find the weights (coefficients) of the matrix *A* and the bias *b* such that the desired output *y* is approximated, given our \"*training set*\" *x*:\n",
    "\n",
    "$$\\large\n",
    "y=x \\cdot A^T+\\mathbf{b}$$\n",
    "\n",
    "In this simple case we can actually find a closed form solution:\n",
    "\n",
    "$$\\large\n",
    "b=[0,0], \\hspace{1cm} A^T=x^{-1} \\cdot y$$\n",
    "\n",
    "If we now have **more samples in our training set** for *x* and *y*, we will get **tall matrices**. For instance, we could add another input $x_2=[1,1]$ for which both outputs should be zero as target.\n",
    "\n",
    "For these non-square matrices we can compute the pseudo-inverse, which **minimizes the mean squared error** between the network output and the desired output.\n",
    "\n",
    "$$\\large\n",
    "y=x\\cdot A^T \\\\\n",
    "\\large\n",
    "x^T \\cdot y = x^T\\cdot x \\cdot A^T \\\\\n",
    "\\large\n",
    "(x^T \\cdot x)^{-1} \\cdot x^T \\cdot y = A^T\n",
    "$$\n",
    "\n",
    "**Observe** that this solution does not include the bias array *b*, which might otherwise lead to better approximations, and that it is tied to the mean squared error. Depending on the application, we might want to use a different error measure, also called the **\"loss function\"**."
   ]
  },
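  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The least-squares solution above can be tried out directly with the three training samples from the example. This is a sketch assuming a PyTorch version that provides `torch.linalg.pinv`, which implements $(x^T \\cdot x)^{-1} \\cdot x^T$ for tall matrices with full column rank:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Training inputs (one sample per row) and targets from the example:\n",
    "x = torch.tensor([[1., 2.], [2., 1.], [1., 1.]])\n",
    "y = torch.tensor([[1., 0.], [0., 1.], [0., 0.]])\n",
    "# Least-squares weights (no bias): A^T = (x^T x)^{-1} x^T y = pinv(x) y\n",
    "AT = torch.linalg.pinv(x) @ y\n",
    "print('A^T =', AT)\n",
    "print('x @ A^T =', x @ AT)  # approximation of the target y"
   ]
  },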
  {
   "cell_type": "markdown",
   "metadata": {
    "hide_input": false,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "A simple and widely used loss function is the mentioned **mean squared error loss**. In PyTorch it is the function:\n",
    "\n",
    "`torch.nn.MSELoss(reduction='mean')`\n",
    "\n",
    "For detectors of different classes, usually the so-called **\"cross-entropy loss\"** is used. It creates a distance or divergence function between two probability distributions.\n",
    "\n",
    "Here the probability distributions are the target distribution (probability 1 for the true class, and probability 0 for the false classes), and the distribution that the neural network generates (a predicted probability for each class, with a high probability value for the class that the network \"thinks\" is at its input).\n",
    "\n",
    "In PyTorch:\n",
    "\n",
    "`torch.nn.CrossEntropyLoss(weight=None, ignore_index=-100, reduction='mean')`\n",
    "\n",
    "To obtain a better solution and to allow other loss functions, in general **numerical optimization** is used to find the weights (coefficients) which minimize a given loss function.\n",
    "\n",
    "**Observe** that the loss function produces a single number or value for a given training set and given weights. The weights are then updated during the optimization to reduce the loss function.\n",
    "\n",
    "Common optimizers are **\"Stochastic Gradient Descent\" (SGD)** and **\"Adaptive Moments\" (Adam)**.\n",
    "\n",
    "After we have obtained the weights from the optimization, we have to see if they generalize to other examples, or just work on the training set. For this we have the **validation set**, containing new input and target samples. It is used to modify the neural network structure until the network performs sufficiently well on new samples or examples.\n",
    "\n",
    "Once we have a good neural network structure, we test it on the **test set**, yet another set of new samples or examples, to measure the final performance."
   ]
  },
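  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A short sketch of both loss functions (assuming PyTorch is installed). Note that in PyTorch, `CrossEntropyLoss` expects the raw network outputs (logits) and applies the softmax internally, and the target is given as the index of the true class:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "mse = torch.nn.MSELoss()\n",
    "pred = torch.tensor([0.9, 0.1])\n",
    "target = torch.tensor([1.0, 0.0])\n",
    "print(mse(pred, target))  # mean of the squared differences: 0.01\n",
    "\n",
    "ce = torch.nn.CrossEntropyLoss()\n",
    "logits = torch.tensor([[2.0, 0.5]])  # raw outputs for one sample, 2 classes\n",
    "label = torch.tensor([0])            # index of the true class\n",
    "print(ce(logits, label))             # small, since class 0 has the largest logit"
   ]
  },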
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Python Example Program"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "We start our program by importing the PyTorch library and setting the device, where we select \"cuda\" only if we have a compatible GPU:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "device='cpu'\n",
    "#device='cuda'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "Then we define our neural network with a linear layer and a non-linear activation function, which we can comment out in the \"forward\" function if we want to test the network without the activation function. This is done as a \"class\", such that the network is an object which we can instantiate and initialize in the main part of the program:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "hide_input": true
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/oGAVlqhJkQ8?rel=0\" frameborder=\"0\" allow=\"accelerometer; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "%%html\n",
    "<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/oGAVlqhJkQ8?rel=0\" frameborder=\"0\" allow=\"accelerometer; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "### Experiment 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [],
   "source": [
    "class LinNet(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(LinNet, self).__init__()\n",
    "        # Define the model.\n",
    "        self.layer1 = nn.Sequential(nn.Linear(in_features=2, out_features=2, bias=True))\n",
    "        #https://pytorch.org/docs/stable/nn.html?highlight=linear#torch.nn.Linear\n",
    "        # Generate a fully connected linear neural network model, 1 layer, bias, linear activation function\n",
    "        # returns: Trainable object\n",
    "        #self.act = nn.LeakyReLU() #non-linear activation function\n",
    "        #self.act = nn.ReLU() #non-linear activation function\n",
    "    \n",
    "    def forward(self, x):\n",
    "        out = self.layer1(x)\n",
    "        #out = self.act(out) #comment out if not desired\n",
    "        return out"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hide_input": true,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "In the main part of the program we generate the data for training and validation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "hide_input": false,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "X.shape torch.Size([1, 3, 2])\n",
      "Y.shape torch.Size([1, 3, 2])\n"
     ]
    }
   ],
   "source": [
    "#input tensor, type torch tensor:\n",
    "#Indices: batch, sample, features or signal dimension. Here: 1 batch, 3 samples, signal dimension 2:\n",
    "\n",
    "#Training set:\n",
    "X=torch.tensor([[1., 2.], [2., 1.],[1., 1.]]).view(1,3,2) #adding the first dimension for the batch\n",
    "print(\"X.shape\", X.shape)\n",
    "\n",
    "#Target:\n",
    "Y=torch.tensor([[1., 0.], [0., 1.],[0., 0.]]).view(1,3,2)\n",
    "print(\"Y.shape\", Y.shape)\n",
    "\n",
    "#Validation set, to test generalization:\n",
    "Xval=torch.tensor([[0.5, 1.0], [1., 0.5],[0.5, 0.5]]).view(1,3,2)\n",
    "#Validation Target:\n",
    "Yval=torch.tensor([[1., 0.], [0., 1.],[0., 0.]]).view(1,3,2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "Observe that the validation input data contains a scaled version of the training set.\n",
    "\n",
    "Now we instantiate the model, define the loss function and the optimizer:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "hide_input": false,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Define loss function: MSELoss()\n",
      "Define optimizer: SGD (\n",
      "Parameter Group 0\n",
      "    dampening: 0\n",
      "    lr: 0.1\n",
      "    momentum: 0\n",
      "    nesterov: False\n",
      "    weight_decay: 0\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "#create network object:\n",
    "model = LinNet().to(device)\n",
    "loss_fn = nn.MSELoss()\n",
    "print(\"Define loss function:\", loss_fn)\n",
    "#learning_rate = 1e-4\n",
    "#optimizer = torch.optim.Adam(model.parameters())\n",
    "optimizer = torch.optim.SGD(model.parameters(),lr=0.1)\n",
    "print(\"Define optimizer:\", optimizer)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "Here we have the choice of 2 optimizers, Adam and Stochastic Gradient Descent (SGD). We can try out which one works better.\n",
    "\n",
    "Next we let the optimizer do 10000 update iterations (called **epochs**):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 1.1859253644943237\n",
      "100 0.03673030063509941\n",
      "200 0.022693032398819923\n",
      "300 0.014052987098693848\n",
      "400 0.008702552877366543\n",
      "500 0.005389204248785973\n",
      "600 0.0033373564947396517\n",
      "700 0.0020667125936597586\n",
      "800 0.0012798466486856341\n",
      "900 0.0007925659883767366\n",
      "1000 0.000490809790790081\n",
      "1100 0.00030394268105737865\n",
      "1200 0.00018822094716597348\n",
      "1300 0.00011655897833406925\n",
      "1400 7.218079554149881e-05\n",
      "1500 4.469896884984337e-05\n",
      "1600 2.768071135506034e-05\n",
      "1700 1.7141588614322245e-05\n",
      "1800 1.0615190149110276e-05\n",
      "1900 6.573673090315424e-06\n",
      "2000 4.070792783750221e-06\n",
      "2100 2.521037231417722e-06\n",
      "2200 1.5611514072588761e-06\n",
      "2300 9.6681196737336e-07\n",
      "2400 5.987678264318674e-07\n",
      "2500 3.708317990458454e-07\n",
      "2600 2.2966553103742626e-07\n",
      "2700 1.4223876121377543e-07\n",
      "2800 8.810457785557446e-08\n",
      "2900 5.4583626507564986e-08\n",
      "3000 3.382944413488076e-08\n",
      "3100 2.0961701707733482e-08\n",
      "3200 1.2989634257110083e-08\n",
      "3300 8.04916311381021e-09\n",
      "3400 4.991105750917768e-09\n",
      "3500 3.09521719366046e-09\n",
      "3600 1.9206336521193634e-09\n",
      "3700 1.192626997159607e-09\n",
      "3800 7.377061739255453e-10\n",
      "3900 4.6157863642370955e-10\n",
      "4000 2.858868697330763e-10\n",
      "4100 1.796601339920656e-10\n",
      "4200 1.106794655925114e-10\n",
      "4300 6.973444044433563e-11\n",
      "4400 4.150043156347927e-11\n",
      "4500 2.786511819896642e-11\n",
      "4600 1.813245907589245e-11\n",
      "4700 1.0203393685515039e-11\n",
      "4800 6.086390506171435e-12\n",
      "4900 4.721556479125866e-12\n",
      "5000 4.632146662769676e-12\n",
      "5100 4.632146662769676e-12\n",
      "5200 4.632146662769676e-12\n",
      "5300 4.632146662769676e-12\n",
      "5400 4.632146662769676e-12\n",
      "5500 4.632146662769676e-12\n",
      "5600 4.632146662769676e-12\n",
      "5700 4.632146662769676e-12\n",
      "5800 4.632146662769676e-12\n",
      "5900 4.632146662769676e-12\n",
      "6000 4.632146662769676e-12\n",
      "6100 4.632146662769676e-12\n",
      "6200 4.632146662769676e-12\n",
      "6300 4.632146662769676e-12\n",
      "6400 4.632146662769676e-12\n",
      "6500 4.632146662769676e-12\n",
      "6600 4.632146662769676e-12\n",
      "6700 4.632146662769676e-12\n",
      "6800 4.632146662769676e-12\n",
      "6900 4.632146662769676e-12\n",
      "7000 4.632146662769676e-12\n",
      "7100 4.632146662769676e-12\n",
      "7200 4.632146662769676e-12\n",
      "7300 4.632146662769676e-12\n",
      "7400 4.632146662769676e-12\n",
      "7500 4.632146662769676e-12\n",
      "7600 4.632146662769676e-12\n",
      "7700 4.632146662769676e-12\n",
      "7800 4.632146662769676e-12\n",
      "7900 4.632146662769676e-12\n",
      "8000 4.632146662769676e-12\n",
      "8100 4.632146662769676e-12\n",
      "8200 4.632146662769676e-12\n",
      "8300 4.632146662769676e-12\n",
      "8400 4.632146662769676e-12\n",
      "8500 4.632146662769676e-12\n",
      "8600 4.632146662769676e-12\n",
      "8700 4.632146662769676e-12\n",
      "8800 4.632146662769676e-12\n",
      "8900 4.632146662769676e-12\n",
      "9000 4.632146662769676e-12\n",
      "9100 4.632146662769676e-12\n",
      "9200 4.632146662769676e-12\n",
      "9300 4.632146662769676e-12\n",
      "9400 4.632146662769676e-12\n",
      "9500 4.632146662769676e-12\n",
      "9600 4.632146662769676e-12\n",
      "9700 4.632146662769676e-12\n",
      "9800 4.632146662769676e-12\n",
      "9900 4.632146662769676e-12\n"
     ]
    }
   ],
   "source": [
    "for epoch in range(10000):\n",
    "    Ypred=model(X) #the model produces prediction output\n",
    "    loss=loss_fn(Ypred, Y) #prediction and target compared by loss\n",
    "    if epoch%100==0:\n",
    "        print(epoch, loss.item()) #print current loss value\n",
    "    optimizer.zero_grad() #optimizer sets previous gradients to zero\n",
    "    loss.backward() #optimizer computes new gradients\n",
    "    optimizer.step() #optimizer updates weights"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "Finally we print out the results on the training set and the validation set, and the obtained weights:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "hide_input": false,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Ypred training set= tensor([[[ 1.0000e+00, -8.9407e-07],\n",
      "         [-8.3447e-07,  1.0000e+00],\n",
      "         [ 3.2783e-06,  3.2783e-06]]], grad_fn=<AddBackward0>)\n",
       "Loss on training set: tensor(4.6321e-12, grad_fn=<MseLossBackward>)\n",
      "Y validation set= tensor([[[ 5.3644e-06, -4.9999e-01],\n",
      "         [-4.9999e-01,  5.3644e-06],\n",
      "         [-4.9999e-01, -4.9999e-01]]], grad_fn=<AddBackward0>)\n",
      "Loss on validation set: tensor(0.5000, grad_fn=<MseLossBackward>)\n",
      "weights= OrderedDict([('layer1.0.weight', tensor([[-4.1555e-06,  1.0000e+00],\n",
      "        [ 1.0000e+00, -4.1593e-06]])), ('layer1.0.bias', tensor([-1.0000, -1.0000]))])\n"
     ]
    }
   ],
   "source": [
    "Ypred=model(X) # Make Predictions based on the obtained weights\n",
    "print(\"Ypred training set=\", Ypred)\n",
    "loss=loss_fn(Ypred, Y)\n",
    "print(\"Loss on training set:\", loss)\n",
    "Yvalpred=model(Xval) # Make Predictions based on the obtained weights\n",
    "print(\"Y validation set=\", Yvalpred)\n",
    "loss=loss_fn(Yvalpred, Yval)\n",
    "print(\"Loss on validation set:\", loss)\n",
    "weights = model.state_dict() #read obtained weights\n",
    "print(\"weights=\", weights)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "Each time we execute it, it starts with a different random initialization, hence the optimization takes a different route each time, and we may get slightly different results. If we get very different results, that is a sign that the optimization got stuck in a local minimum.\n",
    "\n",
    "**Observe**:\n",
    "We obtain the **desired output on the training set**, but the output for the validation set does not look good. This is reflected by the loss values: only 4.6321e-12 on the training set, **but 0.5000 on the validation set, a big loss!** Hence we have bad generalization."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "### Experiment 2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hide_input": false,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "Next we need to modify our neural network structure to try to obtain a better generalization. We do that by adding the activation function, by un-commenting the LeakyReLU function, and the “out=self.act(out)” line:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "hide_input": true
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/aGWzIZu5k70?rel=0\" frameborder=\"0\" allow=\"accelerometer; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "%%html\n",
    "<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/aGWzIZu5k70?rel=0\" frameborder=\"0\" allow=\"accelerometer; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "hide_input": false,
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [],
   "source": [
    "class LinNet(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(LinNet, self).__init__()\n",
    "        # Define the model.\n",
    "        self.layer1 = nn.Sequential(nn.Linear(in_features=2, out_features=2, bias=True))\n",
    "        #https://pytorch.org/docs/stable/nn.html?highlight=linear#torch.nn.Linear\n",
    "        # Generate a fully connected linear neural network model, 1 layer, bias, linear activation function\n",
    "        # returns: Trainable object\n",
    "        self.act = nn.LeakyReLU() #non-linear activation function\n",
    "        #self.act = nn.ReLU() #non-linear activation function\n",
    "    \n",
    "    def forward(self, x):\n",
    "        out = self.layer1(x)\n",
    "        out = self.act(out) #comment out if not desired\n",
    "        return out"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Define loss function: MSELoss()\n",
      "Define optimizer: SGD (\n",
      "Parameter Group 0\n",
      "    dampening: 0\n",
      "    lr: 0.1\n",
      "    momentum: 0\n",
      "    nesterov: False\n",
      "    weight_decay: 0\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "#create network object:\n",
    "model = LinNet().to(device)\n",
    "loss_fn = nn.MSELoss()\n",
    "print(\"Define loss function:\", loss_fn)\n",
    "#learning_rate = 1e-4\n",
    "#optimizer = torch.optim.Adam(model.parameters())\n",
    "optimizer = torch.optim.SGD(model.parameters(),lr=0.1)\n",
    "print(\"Define optimizer:\", optimizer)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 0.2171287089586258\n",
      "1000 0.16697926819324493\n",
      "2000 1.859538497228641e-05\n",
      "3000 1.8481365259503946e-05\n",
      "4000 1.8419650587020442e-05\n",
      "5000 1.8358035958954133e-05\n",
      "6000 1.829653228924144e-05\n",
      "7000 1.8235181414638646e-05\n",
      "8000 1.8174223441747017e-05\n",
      "9000 1.811349648050964e-05\n"
     ]
    }
   ],
   "source": [
    "for epoch in range(10000):\n",
    "    Ypred=model(X) #the model produces prediction output\n",
    "    loss=loss_fn(Ypred, Y) #prediction and target compared by loss\n",
    "    if epoch%1000==0:\n",
    "        print(epoch, loss.item()) #print current loss value\n",
    "    optimizer.zero_grad() #optimizer sets previous gradients to zero\n",
    "    loss.backward() #optimizer computes new gradients\n",
    "    optimizer.step() #optimizer updates weights"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Ypred training set= tensor([[[ 9.9998e-01, -9.2057e-03],\n",
      "         [-4.8503e-03,  9.9995e-01],\n",
      "         [ 9.7573e-05,  1.8577e-04]]], grad_fn=<LeakyReluBackward0>)\n",
       "Loss on training set: tensor(1.8053e-05, grad_fn=<MseLossBackward>)\n",
       "Ypred validation set= tensor([[[ 2.4266e-01, -4.9970e-03],\n",
      "         [-4.9984e-03,  4.6056e-01],\n",
      "         [-2.5728e-03, -3.9320e-04]]], grad_fn=<LeakyReluBackward0>)\n",
      "Loss on validation set: tensor(0.1441, grad_fn=<MseLossBackward>)\n",
      "weights= OrderedDict([('layer1.0.weight', tensor([[-0.4851,  0.9999],\n",
      "        [ 0.9998, -0.9208]])), ('layer1.0.bias', tensor([-0.5146, -0.0788]))])\n"
     ]
    }
   ],
   "source": [
    "Ypred=model(X) # Make Predictions based on the obtained weights\n",
    "print(\"Ypred training set=\", Ypred)\n",
    "loss=loss_fn(Ypred, Y)\n",
    "print(\"Loss on training set:\", loss)\n",
    "Yvalpred=model(Xval) # Make predictions on the validation set\n",
    "print(\"Ypred validation set=\", Yvalpred)\n",
    "loss=loss_fn(Yvalpred, Yval)\n",
    "print(\"Loss on validation set:\", loss)\n",
    "weights = model.state_dict() #read obtained weights\n",
    "print(\"weights=\", weights)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "**Observe**:\n",
    "\n",
    "Here we get a loss on the training set of about 1.8e-05, i.e. practically zero. On the validation set we get 0.1441, indeed much better than before. So using the non-linear activation function helps the network generalize better."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "### Experiment 3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "To see the effect of vanishing gradients on the optimization, we comment out the LeakyReLU line and uncomment the line with \"self.act = nn.ReLU()\". This activation function is a constant 0 for negative inputs and hence has a vanishing (zero) gradient there."
   ]
  },
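  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "As a minimal sketch (not part of the experiment itself), we can check with autograd that the gradient of ReLU is exactly zero for negative inputs, so weights feeding such a unit receive no update:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "x = torch.tensor([-1.0, 2.0], requires_grad=True)\n",
    "nn.ReLU()(x).sum().backward() #gradient of ReLU at each input\n",
    "print(x.grad) #zero gradient for the negative input, one for the positive"
   ]
  },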
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [],
   "source": [
    "class LinNet(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(LinNet, self).__init__()\n",
    "        # Define the model.\n",
    "        self.layer1 = nn.Sequential(nn.Linear(in_features=2, out_features=2, bias=True))\n",
    "        #https://pytorch.org/docs/stable/nn.html?highlight=linear#torch.nn.Linear\n",
    "        # One fully connected (linear) layer, 2 inputs, 2 outputs, with bias;\n",
    "        # the non-linear activation function is applied separately in forward()\n",
    "        #self.act = nn.LeakyReLU() #non-linear activation function\n",
    "        self.act = nn.ReLU() #non-linear activation function\n",
    "    \n",
    "    def forward(self, x):\n",
    "        out = self.layer1(x)\n",
    "        out = self.act(out) #comment out if not desired\n",
    "        return out"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Define loss function: MSELoss()\n",
      "Define optimizer: SGD (\n",
      "Parameter Group 0\n",
      "    dampening: 0\n",
      "    lr: 0.1\n",
      "    momentum: 0\n",
      "    nesterov: False\n",
      "    weight_decay: 0\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "#create network object:\n",
    "model = LinNet().to(device)\n",
    "loss_fn = nn.MSELoss()\n",
    "print(\"Define loss function:\", loss_fn)\n",
    "#learning_rate = 1e-4\n",
    "#optimizer = torch.optim.Adam(model.parameters())\n",
    "optimizer = torch.optim.SGD(model.parameters(),lr=0.1)\n",
    "print(\"Define optimizer:\", optimizer)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 0.3374039828777313\n",
      "1000 0.1666666716337204\n",
      "2000 0.1666666716337204\n",
      "3000 0.1666666716337204\n",
      "4000 0.1666666716337204\n",
      "5000 0.1666666716337204\n",
      "6000 0.1666666716337204\n",
      "7000 0.1666666716337204\n",
      "8000 0.1666666716337204\n",
      "9000 0.1666666716337204\n"
     ]
    }
   ],
   "source": [
    "for epoch in range(10000):\n",
    "    Ypred=model(X) #the model produces prediction output\n",
    "    loss=loss_fn(Ypred, Y) #prediction and target compared by loss\n",
    "    if epoch%1000==0:\n",
    "        print(epoch, loss.item()) #print current loss value\n",
    "    optimizer.zero_grad() #optimizer sets previous gradients to zero\n",
    "    loss.backward() #optimizer computes new gradients\n",
    "    optimizer.step() #optimizer updates weights"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Ypred training set= tensor([[[0.0000e+00, 0.0000e+00],\n",
      "         [0.0000e+00, 1.0000e+00],\n",
      "         [0.0000e+00, 1.7285e-06]]], grad_fn=<ReluBackward0>)\n",
       "Loss on training set: tensor(0.1667, grad_fn=<MseLossBackward>)\n",
       "Ypred validation set= tensor([[[0.0000, 0.0000],\n",
      "         [0.0000, 0.2029],\n",
      "         [0.0512, 0.0000]]], grad_fn=<ReluBackward0>)\n",
      "Loss on validation set: tensor(0.2730, grad_fn=<MseLossBackward>)\n",
      "weights= OrderedDict([('layer1.0.weight', tensor([[-0.3343, -0.4287],\n",
      "        [ 1.0000, -0.4059]])), ('layer1.0.bias', tensor([ 0.4327, -0.5941]))])\n"
     ]
    }
   ],
   "source": [
    "Ypred=model(X) # Make Predictions based on the obtained weights\n",
    "print(\"Ypred training set=\", Ypred)\n",
    "loss=loss_fn(Ypred, Y)\n",
    "print(\"Loss on training set:\", loss)\n",
    "Yvalpred=model(Xval) # Make predictions on the validation set\n",
    "print(\"Ypred validation set=\", Yvalpred)\n",
    "loss=loss_fn(Yvalpred, Yval)\n",
    "print(\"Loss on validation set:\", loss)\n",
    "weights = model.state_dict() #read obtained weights\n",
    "print(\"weights=\", weights)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "**Observe:**\n",
    "\n",
    "The optimizer gets stuck at a loss value of 0.1667 during training. This is caused by the vanishing gradient of the ReLU function in the negative input range: once a unit outputs zero for all training samples, its weights receive no gradient and cannot recover."
   ]
  }
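  ,
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "For comparison, a small sketch (assuming PyTorch's default negative slope of 0.01) shows why the LeakyReLU of the previous experiment does not get stuck in the same way: its gradient for negative inputs is small but non-zero, so learning can continue:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "x = torch.tensor([-1.0], requires_grad=True)\n",
    "nn.LeakyReLU()(x).sum().backward() #default negative_slope=0.01\n",
    "print(x.grad) #small but non-zero gradient for the negative input"
   ]
  }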
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
