{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![MLU Logo](../data/MLU_Logo.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# <a name=\"0\">Machine Learning Accelerator - Tabular Data - Lecture 3</a>\n",
    "\n",
    "\n",
    "## PyTorch\n",
    "\n",
    "1. <a href=\"#1\">PyTorch: Tensors and Autograd</a>\n",
    "2. <a href=\"#2\">PyTorch: Building a Neural Network</a>\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "%%capture\n",
    "%pip install -q -r ../requirements.txt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. <a name=\"1\">PyTorch: Tensors and Autograd</a>\n",
    "<a href=\"#0\">Go to top</a>\n",
    "\n",
    "This tutorial follows the concepts from the original MXNet tutorial but uses PyTorch instead.\n",
    "\n",
    "To get started, let's import PyTorch and NumPy.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, let's see how to create a 2D tensor (also called a matrix) with values from two sets of numbers: 1, 2, 3 and 4, 5, 6."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[1, 2, 3],\n",
       "        [5, 6, 7]])"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.tensor([[1,2,3],[5,6,7]])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also create a very simple matrix with the same shape (2 rows by 3 columns), but fill it with 1s."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[1., 1., 1.],\n",
       "        [1., 1., 1.]])"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x = torch.ones((2,3))\n",
    "x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Often we'll want to create tensors whose values are sampled randomly. For example, sampling values uniformly between -1 and 1."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.6748,  0.4310,  0.6130],\n",
       "        [-0.9225, -0.8389, -0.4594]])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "y = torch.rand(2, 3) * 2 - 1  # Values between -1 and 1\n",
    "y"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also fill a tensor of a given shape with a given value, such as 2.0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[2., 2., 2.],\n",
       "        [2., 2., 2.]])"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x = torch.full((2,3), 2.0)\n",
    "x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As with NumPy, the dimensions of each tensor are accessible by accessing the .shape attribute. We can also query its size and data type."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(torch.Size([2, 3]), 6, torch.float32)"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "(x.shape, x.numel(), x.dtype)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Operations\n",
    "\n",
    "PyTorch supports a large number of standard mathematical operations. Such as element-wise multiplication:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 1.3496,  0.8619,  1.2259],\n",
       "        [-1.8450, -1.6778, -0.9188]])"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x * y"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Exponentiation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[1.9637, 1.5387, 1.8459],\n",
       "        [0.3975, 0.4322, 0.6317]])"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "y.exp()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And matrix multiplication:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 3.4375, -4.4415],\n",
       "        [ 3.4375, -4.4415]])"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.mm(x, y.t())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Indexing\n",
    "\n",
    "PyTorch tensors support slicing in all the ways you might imagine accessing your data. Here's an example of reading a particular element, which returns a scalar tensor."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(-0.4594)"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "y[1,2]"
   ]
  },
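  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A scalar tensor can be converted to a plain Python number with `.item()`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# .item() extracts the value of a one-element tensor as a Python number\n",
    "y[1,2].item()"
   ]
  },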
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Read the second and third columns from y."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.4310,  0.6130],\n",
       "        [-0.8389, -0.4594]])"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "y[:,1:3]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "and writing to a specific element"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.6748,  2.0000,  2.0000],\n",
       "        [-0.9225,  2.0000,  2.0000]])"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "y[:,1:3] = 2\n",
    "y"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Multi-dimensional slicing is also supported."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.6748, 2.0000, 2.0000],\n",
       "        [4.0000, 4.0000, 2.0000]])"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "y[1:2,0:2] = 4\n",
    "y"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Automatic differentiation with autograd\n",
    "\n",
    "PyTorch provides automatic differentiation through its autograd package. Let's see how it works with a simple example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[1., 2.],\n",
       "        [3., 4.]], requires_grad=True)"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)\n",
    "x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's define a function $y=f(x) = 0.6x^2$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.6000, 2.4000],\n",
       "        [5.4000, 9.6000]], grad_fn=<MulBackward0>)"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "y = 0.6 * x * x\n",
    "y"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's compute the gradients"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[1.2000, 2.4000],\n",
       "        [3.6000, 4.8000]])"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "y.sum().backward()\n",
    "x.grad"
   ]
  },
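  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check: for $y = 0.6x^2$ the analytical gradient is $1.2x$, so `x.grad` should equal `1.2 * x`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The autograd result should match the analytical gradient 1.2 * x\n",
    "torch.allclose(x.grad, 1.2 * x)"
   ]
  },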
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. <a name=\"2\">PyTorch: Building a Neural Network</a>\n",
    "<a href=\"#0\">Go to top</a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Implement a network with sequential mode \n",
    "\n",
    "Let's implement a simple neural network with two hidden layers of size 64 and 128 using the sequential mode. We will have 5 inputs, 1 output and some dropouts between the layers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Sequential(\n",
       "  (0): Linear(in_features=5, out_features=64, bias=True)\n",
       "  (1): ReLU()\n",
       "  (2): Dropout(p=0.4, inplace=False)\n",
       "  (3): Linear(in_features=64, out_features=128, bias=True)\n",
       "  (4): ReLU()\n",
       "  (5): Dropout(p=0.3, inplace=False)\n",
       "  (6): Linear(in_features=128, out_features=1, bias=True)\n",
       "  (7): Sigmoid()\n",
       ")"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch.nn as nn\n",
    "\n",
    "net = nn.Sequential(\n",
    "    nn.Linear(5, 64),\n",
    "    nn.ReLU(),\n",
    "    nn.Dropout(0.4),\n",
    "    nn.Linear(64, 128),\n",
    "    nn.ReLU(),\n",
    "    nn.Dropout(0.3),\n",
    "    nn.Linear(128, 1),\n",
    "    nn.Sigmoid()\n",
    ")\n",
    "net"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's send a batch of data to this network (batch size is 4 in this case)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Random input data with shape torch.Size([4, 5])\n",
      "tensor([[0.6891, 0.5221, 0.7773, 0.9408, 0.7547],\n",
      "        [0.2574, 0.5219, 0.3243, 0.9965, 0.1699],\n",
      "        [0.5062, 0.1165, 0.5882, 0.4178, 0.0667],\n",
      "        [0.7801, 0.5441, 0.5210, 0.3496, 0.3415]])\n",
      "\n",
      "Output shape: torch.Size([4, 1])\n",
      "Network output:  tensor([[0.4622],\n",
      "        [0.5201],\n",
      "        [0.5014],\n",
      "        [0.4897]], grad_fn=<SigmoidBackward0>)\n"
     ]
    }
   ],
   "source": [
    "# Input shape is (batch_size, data length)\n",
    "x = torch.rand(4, 5)\n",
    "y = net(x)\n",
    "\n",
    "print(\"Random input data with shape\", x.shape)\n",
    "print(x)\n",
    "print(\"\\nOutput shape:\", y.shape)\n",
    "print(\"Network output: \", y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also see the initialized weights for each layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([64, 5]) torch.Size([64])\n",
      "Parameter containing:\n",
      "tensor([[ 0.4140, -0.2097,  0.1934,  0.0987, -0.3828],\n",
      "        [-0.3258, -0.1371, -0.2716,  0.2433,  0.3157],\n",
      "        [ 0.3060,  0.2025, -0.1249, -0.2841, -0.1136],\n",
      "        [ 0.0635, -0.2865,  0.3451, -0.2566,  0.2379],\n",
      "        [-0.2022, -0.3182, -0.1616,  0.1147,  0.0196],\n",
      "        [ 0.0514,  0.4180, -0.1799, -0.3582, -0.3167],\n",
      "        [-0.2233, -0.0761,  0.3520, -0.1367,  0.0231],\n",
      "        [ 0.0652,  0.0074, -0.1976,  0.0652, -0.0874],\n",
      "        [ 0.2888,  0.1323,  0.2426, -0.3566, -0.1998],\n",
      "        [-0.2552,  0.4010, -0.3824, -0.0141, -0.0860],\n",
      "        [-0.2668, -0.2012, -0.0907, -0.2436,  0.1911],\n",
      "        [ 0.1006, -0.0848, -0.3372,  0.4433,  0.1452],\n",
      "        [ 0.0564,  0.0578, -0.0198, -0.2309, -0.0589],\n",
      "        [-0.1424,  0.3267, -0.4456,  0.3973, -0.2852],\n",
      "        [-0.4185, -0.0388,  0.3620,  0.2704, -0.0656],\n",
      "        [-0.3409,  0.0460, -0.2915, -0.3246,  0.0052],\n",
      "        [ 0.0496, -0.3019,  0.3156,  0.0079,  0.3143],\n",
      "        [ 0.3830, -0.3231,  0.4193,  0.2370, -0.4453],\n",
      "        [ 0.0963, -0.2967,  0.2495, -0.0356, -0.2095],\n",
      "        [-0.0252, -0.1415, -0.3344, -0.0490, -0.3190],\n",
      "        [-0.1498,  0.2223, -0.3334, -0.1432,  0.2012],\n",
      "        [ 0.2746, -0.1717,  0.0109,  0.1719,  0.1868],\n",
      "        [-0.2521,  0.1618,  0.2235, -0.4178,  0.3538],\n",
      "        [-0.4126,  0.3020, -0.3663,  0.0462,  0.0851],\n",
      "        [-0.0646,  0.4186, -0.2545, -0.3375, -0.0655],\n",
      "        [-0.1856,  0.3097, -0.4052,  0.1449,  0.2151],\n",
      "        [-0.1731, -0.1986, -0.1555,  0.1463, -0.0857],\n",
      "        [-0.2523, -0.1973, -0.2736, -0.2426,  0.0587],\n",
      "        [-0.3090, -0.1566, -0.1199,  0.3582, -0.2981],\n",
      "        [ 0.3307,  0.2290, -0.0395, -0.2179, -0.1259],\n",
      "        [ 0.3688, -0.1597,  0.3606,  0.0557, -0.0646],\n",
      "        [ 0.2586, -0.3155, -0.0124, -0.2741, -0.1273],\n",
      "        [-0.2071,  0.3514, -0.3882, -0.0621,  0.3038],\n",
      "        [-0.0540,  0.2552, -0.3168,  0.1888,  0.4385],\n",
      "        [-0.4350,  0.0270,  0.3162, -0.3843, -0.2997],\n",
      "        [-0.3382,  0.2364,  0.0146,  0.0499,  0.3829],\n",
      "        [ 0.1828, -0.3370,  0.3974, -0.1320,  0.2109],\n",
      "        [ 0.0316, -0.2776,  0.2335, -0.1636, -0.3523],\n",
      "        [ 0.4066, -0.0690,  0.3488,  0.3690, -0.0343],\n",
      "        [-0.0262, -0.3873, -0.1189, -0.1093, -0.1183],\n",
      "        [-0.0731, -0.1124, -0.2861,  0.3533, -0.3186],\n",
      "        [ 0.0551, -0.2362, -0.4419,  0.2498, -0.1034],\n",
      "        [-0.0657,  0.2276, -0.1839,  0.1906, -0.3480],\n",
      "        [ 0.1351,  0.3720, -0.4355,  0.3825, -0.4155],\n",
      "        [ 0.0468,  0.0226,  0.2082,  0.0353, -0.4345],\n",
      "        [ 0.0359, -0.2988,  0.2885,  0.2160, -0.4355],\n",
      "        [ 0.1941,  0.0895,  0.1975,  0.4031,  0.2917],\n",
      "        [-0.2787, -0.1937,  0.3792, -0.0090,  0.2317],\n",
      "        [-0.3598,  0.1516, -0.1411, -0.0970, -0.0474],\n",
      "        [-0.3468,  0.0296, -0.4169, -0.0196, -0.4110],\n",
      "        [-0.0034,  0.3747, -0.0232, -0.0106,  0.4303],\n",
      "        [-0.0273,  0.3280, -0.1235, -0.0130,  0.0794],\n",
      "        [ 0.1583, -0.2897, -0.3968, -0.1599,  0.3241],\n",
      "        [-0.4112, -0.0183,  0.1791,  0.3945,  0.2804],\n",
      "        [-0.3166, -0.3587, -0.0840, -0.3551,  0.2014],\n",
      "        [-0.0169, -0.0654, -0.4339, -0.2892, -0.0567],\n",
      "        [-0.3501,  0.0951,  0.2189,  0.2135, -0.3416],\n",
      "        [-0.4256,  0.0879, -0.2271,  0.0058, -0.1469],\n",
      "        [ 0.0039, -0.2761, -0.2123,  0.2835, -0.2394],\n",
      "        [-0.0166, -0.3109,  0.0727,  0.3113,  0.1122],\n",
      "        [-0.0071, -0.1357,  0.1317, -0.0891,  0.0404],\n",
      "        [ 0.0461,  0.0357, -0.3066, -0.3605, -0.4040],\n",
      "        [-0.0420,  0.3559, -0.3655,  0.2689,  0.2067],\n",
      "        [ 0.4344, -0.2565,  0.2187, -0.1426, -0.3401]], requires_grad=True) Parameter containing:\n",
      "tensor([ 0.2083,  0.2013,  0.0666,  0.1022,  0.0034, -0.0214,  0.1302,  0.4317,\n",
      "         0.3050,  0.0675, -0.0308,  0.0456, -0.0562, -0.3867,  0.3498, -0.0969,\n",
      "        -0.1095,  0.4283, -0.0587,  0.3590,  0.1086, -0.1134, -0.4071, -0.4229,\n",
      "        -0.3123,  0.1790,  0.4012, -0.4471, -0.0255,  0.3238,  0.0350, -0.4072,\n",
      "        -0.3451,  0.1151, -0.4271, -0.2166, -0.3191,  0.1175, -0.3801,  0.3896,\n",
      "        -0.0230, -0.3635,  0.0548, -0.0588,  0.4303,  0.0133,  0.1301, -0.0525,\n",
      "        -0.3908, -0.0770,  0.1977, -0.3945, -0.1251,  0.2640, -0.0665, -0.1348,\n",
      "        -0.0917, -0.3470,  0.2834,  0.0611, -0.2251,  0.3852, -0.2869,  0.1219],\n",
      "       requires_grad=True)\n"
     ]
    }
   ],
   "source": [
    "print(net[0].weight.shape, net[0].bias.shape)\n",
    "print(net[0].weight, net[0].bias)"
   ]
  },
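  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To list every layer's parameter shapes at once, we can iterate over `net.named_parameters()`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Each Linear layer contributes a weight and a bias parameter\n",
    "for name, param in net.named_parameters():\n",
    "    print(name, param.shape)"
   ]
  },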
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Implement the network flexibly:\n",
    "\n",
    "Now let's implement the same network using a custom module, which gives more flexibility in defining the forward pass."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "MixMLP(\n",
       "  (fc1): Linear(in_features=5, out_features=64, bias=True)\n",
       "  (fc2): Linear(in_features=64, out_features=128, bias=True)\n",
       "  (fc3): Linear(in_features=128, out_features=1, bias=True)\n",
       "  (dropout1): Dropout(p=0.4, inplace=False)\n",
       "  (dropout2): Dropout(p=0.3, inplace=False)\n",
       ")"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "class MixMLP(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(MixMLP, self).__init__()\n",
    "        self.fc1 = nn.Linear(5, 64)\n",
    "        self.fc2 = nn.Linear(64, 128)\n",
    "        self.fc3 = nn.Linear(128, 1)\n",
    "        self.dropout1 = nn.Dropout(0.4)\n",
    "        self.dropout2 = nn.Dropout(0.3)\n",
    "        \n",
    "    def forward(self, x):\n",
    "        x = torch.relu(self.fc1(x))\n",
    "        x = self.dropout1(x)\n",
    "        x = torch.relu(self.fc2(x))\n",
    "        x = self.dropout2(x)\n",
    "        x = torch.sigmoid(self.fc3(x))\n",
    "        return x\n",
    "\n",
    "net = MixMLP()\n",
    "net"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The usage of net is similar as before."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.4729],\n",
       "        [0.4819],\n",
       "        [0.4444],\n",
       "        [0.4414]], grad_fn=<SigmoidBackward0>)"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Input shape is (batch_size, data length)\n",
    "x = torch.rand(4, 5)\n",
    "net(x)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "sagemaker-distribution:Python",
   "language": "python",
   "name": "conda-env-sagemaker-distribution-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.14"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
