{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# PyTorch Tutorial"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "PyTorch is a Python framework for machine learning that provides:\n",
    "\n",
    "- GPU-accelerated computations\n",
    "- automatic differentiation\n",
    "- modules for neural networks\n",
    "\n",
    "This tutorial will teach you the fundamentals of operating on PyTorch tensors and networks. It starts with a quick review of material you have already seen in recitation 0, but most of it covers new or more advanced topics.\n",
    "\n",
    "For a worked example of how to build and train a pytorch network, see `pytorch-example.ipynb`.\n",
    "\n",
    "For additional tutorials, see http://pytorch.org/tutorials/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import numpy as np\n",
    "import torch.nn as nn"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Tensors (review)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Tensors are the fundamental objects for array data. The most common types you will use are `IntTensor` and `FloatTensor`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[9.6554e+23, 3.0955e-41, 1.9431e-19],\n",
      "        [4.7429e+30, 5.0938e-14, 0.0000e+00]])\n",
      "tensor([[0., 0., 0.],\n",
      "        [0., 0., 0.]])\n"
     ]
    }
   ],
   "source": [
    "# Create uninitialized tensor\n",
    "x = torch.FloatTensor(2,3)\n",
    "print(x)\n",
    "# Initialize to zeros\n",
    "x.zero_()\n",
    "print(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[0.6965, 0.2861, 0.2269],\n",
      "        [0.5513, 0.7195, 0.4231]])\n",
      "tensor([[0.6965, 0.2861, 0.2269],\n",
      "        [0.5513, 0.7195, 0.4231]], dtype=torch.float64)\n"
     ]
    }
   ],
   "source": [
    "# Create from numpy array (seed for repeatability)\n",
    "np.random.seed(123)\n",
    "np_array = np.random.random((2,3))\n",
    "print(torch.FloatTensor(np_array))\n",
    "print(torch.from_numpy(np_array))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[-0.1115,  0.1204, -0.3696],\n",
      "        [-0.2404, -1.1969,  0.2093]])\n",
      "[[-0.11146712  0.12036294 -0.36963451]\n",
      " [-0.24041797 -1.19692433  0.20926936]]\n"
     ]
    }
   ],
   "source": [
    "# Create random tensor (seed for repeatability)\n",
    "torch.manual_seed(123)\n",
    "x=torch.randn(2,3)\n",
    "print(x)\n",
    "# export to numpy array\n",
    "x_np = x.numpy()\n",
    "print(x_np)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1., 0., 0.],\n",
      "        [0., 1., 0.],\n",
      "        [0., 0., 1.]])\n",
      "tensor([[1., 1., 1.],\n",
      "        [1., 1., 1.]])\n",
      "tensor([[0., 0., 0.],\n",
      "        [0., 0., 0.]])\n",
      "tensor([0, 1, 2])\n"
     ]
    }
   ],
   "source": [
    "# special tensors (see documentation)\n",
    "print(torch.eye(3))\n",
    "print(torch.ones(2,3))\n",
    "print(torch.zeros(2,3))\n",
    "print(torch.arange(0,3))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "All tensors have a `size` and a `type`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([3, 4])\n",
      "torch.FloatTensor\n"
     ]
    }
   ],
   "source": [
    "x=torch.FloatTensor(3,4)\n",
    "print(x.size())\n",
    "print(x.type())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Math, Linear Algebra, and Indexing (review)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "PyTorch math and linear algebra are similar to NumPy's. Operators are overloaded, so you can use standard math operators (`+`, `-`, etc.) and expect a tensor as a result. See the PyTorch documentation for a complete list of available functions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(10.)\n",
      "tensor(85.7910)\n",
      "tensor(2.)\n"
     ]
    }
   ],
   "source": [
    "x = torch.arange(0,5).float()\n",
    "print(torch.sum(x))\n",
    "print(torch.sum(torch.exp(x)))\n",
    "print(torch.mean(x))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "PyTorch indexing is similar to NumPy indexing. See the PyTorch documentation for details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[0.0756, 0.1966],\n",
      "        [0.3164, 0.4017],\n",
      "        [0.1186, 0.8274]])\n",
      "tensor([0.3164, 0.4017])\n"
     ]
    }
   ],
   "source": [
    "x = torch.rand(3,2)\n",
    "print(x)\n",
    "print(x[1,:])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## CPU and GPU"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Tensors can be copied between the CPU and the GPU. Everything involved in a calculation must live on the same device. \n",
    "\n",
    "This portion of the tutorial will not work if you do not have a GPU available."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[0.5374, 0.9551],\n",
      "        [0.7475, 0.4979],\n",
      "        [0.8549, 0.2438]])\n",
      "tensor([[0.5374, 0.9551],\n",
      "        [0.7475, 0.4979],\n",
      "        [0.8549, 0.2438]], device='cuda:0')\n",
      "tensor([[0.5374, 0.9551],\n",
      "        [0.7475, 0.4979],\n",
      "        [0.8549, 0.2438]])\n",
      "[[ 0.53740954  0.95506984]\n",
      " [ 0.74754196  0.49788189]\n",
      " [ 0.85492277  0.24377221]]\n",
      "can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.\n"
     ]
    }
   ],
   "source": [
    "# create a tensor\n",
    "x = torch.rand(3,2)\n",
    "print(x)\n",
    "# copy to GPU\n",
    "y = x.cuda()\n",
    "print(y)\n",
    "# copy back to CPU\n",
    "z = y.cpu()\n",
    "print(z)\n",
    "# get CPU tensor as numpy array\n",
    "print(z.numpy())\n",
    "# cannot get GPU tensor as numpy array directly\n",
    "try:\n",
    "  print(y.numpy())\n",
    "except TypeError as e:\n",
    "  print(e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Operations between GPU and CPU tensors will fail. Operations require all arguments to be on the same device."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'mat2'\n"
     ]
    }
   ],
   "source": [
    "x = torch.rand(3,5)  # CPU tensor\n",
    "y = torch.rand(5,4).cuda()  # GPU tensor\n",
    "try:\n",
    "  torch.mm(x,y)  # Operation between CPU and GPU fails\n",
    "except RuntimeError as e:\n",
    "  print(e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Typical code should include `if` statements or helper functions so that it can run with or without a GPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Put tensor on CUDA if available\n",
    "x = torch.rand(3,2)\n",
    "if torch.cuda.is_available():\n",
    "  x = x.cuda()\n",
    "\n",
    "# Do some calculations\n",
    "y = x ** 2 \n",
    "\n",
    "# Copy to CPU if on GPU\n",
    "if y.is_cuda:\n",
    "  y = y.cpu()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A convenient method is `new`, which creates an uninitialized tensor on the same device and with the same type as another tensor. Prefer it over hard-coding a device when creating tensors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1.8274e+24, 3.0955e-41]])\n",
      "tensor([[0.2897, 0.0167]], device='cuda:0')\n"
     ]
    }
   ],
   "source": [
    "x1 = torch.rand(3,2)\n",
    "x2 = x1.new(1,2)  # create cpu tensor\n",
    "print(x2)\n",
    "x1 = torch.rand(3,2).cuda()\n",
    "x2 = x1.new(1,2)  # create cuda tensor\n",
    "print(x2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Calculations executed on the GPU can be many times faster than NumPy. However, NumPy is itself optimized for the CPU and many times faster than Python `for` loops. For small arrays, NumPy may even beat the GPU because of the overhead of transferring data to and from the device."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU: 243.28446900000245ms\n",
      "GPU: 76.31748300002528ms\n"
     ]
    }
   ],
   "source": [
    "from timeit import timeit\n",
    "# Create random data\n",
    "x = torch.rand(1000,64)\n",
    "y = torch.rand(64,32)\n",
    "number = 10000  # number of iterations\n",
    "\n",
    "def matmul_xy():\n",
    "  z = torch.mm(x, y)  # matrix multiplication\n",
    "\n",
    "# Time CPU\n",
    "print('CPU: {}ms'.format(timeit(matmul_xy, number=number)*1000))\n",
    "# Time GPU (note: CUDA ops run asynchronously, so GPU timings are approximate)\n",
    "x, y = x.cuda(), y.cuda()\n",
    "print('GPU: {}ms'.format(timeit(matmul_xy, number=number)*1000))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Differentiation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Tensors provide automatic differentiation.\n",
    "\n",
    "As you might know, previous versions of PyTorch used Variables, which were wrappers around tensors for differentiation. Starting with PyTorch 0.4.0, this wrapping is done internally in the Tensor class, and you can, and should, differentiate Tensors directly. However, you may still come across references to Variables, e.g. in error messages.\n",
    "\n",
    "What you need to remember:\n",
    "\n",
    "- Tensors you are differentiating with respect to must have `requires_grad=True`\n",
    "- Call `.backward()` on the scalar you are differentiating\n",
    "- To differentiate a vector, sum it first"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([ 0.,  6., 22., 48.], grad_fn=<ThAddBackward>)\n",
      "tensor([ 1., 11., 21., 31.])\n",
      "tensor([1., 1., 1., 1.])\n",
      "None\n"
     ]
    }
   ],
   "source": [
    "# Create differentiable tensor\n",
    "x = torch.tensor(torch.arange(0,4).float(),requires_grad=True)\n",
    "z = x ** 2\n",
    "b = torch.zeros(4,requires_grad=True)\n",
    "y = 5*z+x+b\n",
    "# Calculate gradient (dy/dx=10x+1,dy/db=1)\n",
    "y.sum().backward()\n",
    "# Print values\n",
    "print(y)\n",
    "print(x.grad)\n",
    "print(b.grad)\n",
    "print(y.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Differentiation accumulates gradients. This is sometimes what you want and sometimes not. **Make sure to zero gradients between batches if performing gradient descent or you will get strange results!**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([0., 2., 4., 6.])\n",
      "tensor([ 0.,  4.,  8., 12.])\n",
      "tensor([0., 2., 4., 6.])\n"
     ]
    }
   ],
   "source": [
    "# Create a variable\n",
    "x=torch.tensor(torch.arange(0,4).float(), requires_grad=True)\n",
    "# Differentiate\n",
    "torch.sum(x**2).backward()\n",
    "print(x.grad)\n",
    "# Differentiate again (accumulates gradient)\n",
    "torch.sum(x**2).backward()\n",
    "print(x.grad)\n",
    "# Zero gradient before differentiating\n",
    "x.grad.data.zero_()\n",
    "torch.sum(x**2).backward()\n",
    "print(x.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that a Tensor with gradient cannot be exported to numpy directly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-29-001829cd5143>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[0mx\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtensor\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marange\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m4\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfloat\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mrequires_grad\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0mx\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mnumpy\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# raises an exception\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead."
     ]
    }
   ],
   "source": [
    "x=torch.tensor(torch.arange(0,4).float(), requires_grad=True)\n",
    "x.numpy() # raises an exception"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The reason is that PyTorch records the graph of all computations in order to perform differentiation. To be part of this graph, the raw data is wrapped internally by the Tensor class (much like the former Variable). You can detach a tensor from the graph using the **.detach()** method, which returns a tensor with the same data but with `requires_grad` set to `False`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([  0.,   1.,  16.,  81.], dtype=float32)"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x=torch.tensor(torch.arange(0,4).float(), requires_grad=True)\n",
    "y=x**2\n",
    "z=y**2\n",
    "z.detach().numpy()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Another reason to use this method is that keeping the computation graph alive can use a lot of memory. If you have a differentiable tensor that you do not actually need to differentiate, consider detaching it from the graph."
   ]
  },
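  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A related tool, not used elsewhere in this tutorial, is the `torch.no_grad()` context manager, which disables graph construction entirely for the computations inside it. A minimal sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.arange(0,4).float().requires_grad_()\n",
    "with torch.no_grad():\n",
    "    y = x ** 2  # no graph is recorded here\n",
    "print(y.requires_grad)  # False: y is detached from any graph"
   ]
  },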
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Neural Network Modules"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [],
   "source": [
    "net = torch.nn.Linear(4,2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([ 2.8476, -0.8070], grad_fn=<ThAddBackward>)\n"
     ]
    }
   ],
   "source": [
    "x = torch.arange(0,4).float()\n",
    "y = net.forward(x)\n",
    "y = net(x) # equivalent, and the preferred way to call a module\n",
    "print(y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "PyTorch provides a framework for developing neural network modules. Modules take care of many things, the main one being wrapping and tracking a list of parameters for you.\n",
    "There are several ways of building and using a network, offering different trade-offs between flexibility and simplicity.\n",
    "\n",
    "`torch.nn` provides basic single-layer nets, such as `Linear` (a perceptron) and activation layers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([-16.7891,   9.0289,   6.9587,   3.7614,  -9.7460,   2.5718, -17.9099,\n",
      "          1.5903,  10.2110,   9.3332], grad_fn=<ThAddBackward>)\n"
     ]
    }
   ],
   "source": [
    "x = torch.arange(0,32).float()\n",
    "net = torch.nn.Linear(32,10)\n",
    "y = net(x)\n",
    "print(y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "All `nn.Module` objects are reusable as components of bigger networks! That is how you build custom nets. The simplest way is to use the `nn.Sequential` class.\n",
    "\n",
    "You can also create your own class that inherits from `nn.Module`. Its `forward` method should specify what happens in the forward pass given an input. This lets you express behaviors more complicated than just applying layers one after another, if necessary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [],
   "source": [
    "# create a simple sequential network (`nn.Module` object) from layers (other `nn.Module` objects).\n",
    "# Here a MLP with 2 layers and sigmoid activation.\n",
    "net = torch.nn.Sequential(\n",
    "    torch.nn.Linear(32,128),\n",
    "    torch.nn.Sigmoid(),\n",
    "    torch.nn.Linear(128,10))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [],
   "source": [
    "# create a more customizable network module (equivalent here)\n",
    "class MyNetwork(torch.nn.Module):\n",
    "    # you can use the layer sizes as initialization arguments if you want to\n",
    "    def __init__(self,input_size, hidden_size, output_size):\n",
    "        super().__init__()\n",
    "        self.layer1 = torch.nn.Linear(input_size,hidden_size)\n",
    "        self.layer2 = torch.nn.Sigmoid()\n",
    "        self.layer3 = torch.nn.Linear(hidden_size,output_size)\n",
    "\n",
    "    def forward(self, input_val):\n",
    "        h = input_val\n",
    "        h = self.layer1(h)\n",
    "        h = self.layer2(h)\n",
    "        h = self.layer3(h)\n",
    "        return h\n",
    "\n",
    "net = MyNetwork(32,128,10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The network tracks its parameters, and you can access them through the **parameters()** method, which returns a Python generator."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Parameter containing:\n",
      "tensor([[-0.1484, -0.1013,  0.1411,  ...,  0.1745,  0.0688, -0.1537],\n",
      "        [ 0.1072,  0.0733,  0.1018,  ..., -0.0734, -0.1085,  0.1101],\n",
      "        [-0.0507, -0.1522,  0.0279,  ...,  0.0133,  0.0676, -0.1702],\n",
      "        ...,\n",
      "        [ 0.1137,  0.1462, -0.1158,  ..., -0.0710,  0.0157,  0.1218],\n",
      "        [-0.0432,  0.1673, -0.0256,  ...,  0.1762,  0.1235, -0.0317],\n",
      "        [ 0.0029, -0.1003, -0.0063,  ..., -0.1617, -0.0779, -0.0465]],\n",
      "       requires_grad=True)\n",
      "Parameter containing:\n",
      "tensor([-0.0539, -0.1464,  0.0541, -0.1264,  0.0822, -0.0615,  0.0047,  0.1075,\n",
      "        -0.0477,  0.1269,  0.1360,  0.1698, -0.0086, -0.1651, -0.1379,  0.0936,\n",
      "         0.1013, -0.1084, -0.0856, -0.1722,  0.0411,  0.0930, -0.0563, -0.1459,\n",
      "         0.0616,  0.1449,  0.0744,  0.1126,  0.0892, -0.0747,  0.0847,  0.1660,\n",
      "        -0.1163, -0.0190, -0.0115, -0.1163, -0.1148,  0.1084,  0.1115, -0.1079,\n",
      "        -0.0338,  0.1323, -0.1377, -0.1066, -0.1632,  0.0013, -0.0310, -0.0261,\n",
      "        -0.1195,  0.0122,  0.0535,  0.1529,  0.1180, -0.1519,  0.1054, -0.0023,\n",
      "        -0.0052, -0.0030, -0.1703,  0.1277,  0.1240, -0.0674, -0.1647, -0.0938,\n",
      "         0.1132, -0.1556,  0.1067,  0.0275,  0.0065,  0.1241, -0.0880, -0.0597,\n",
      "        -0.0689, -0.1232, -0.0309,  0.0582,  0.1103, -0.0003,  0.0181,  0.0407,\n",
      "        -0.0962,  0.1101, -0.0625,  0.0471, -0.0678,  0.0260, -0.0656,  0.1379,\n",
      "        -0.0659,  0.0158, -0.1363,  0.0727,  0.0836,  0.0690, -0.1716, -0.0640,\n",
      "         0.0419, -0.1087, -0.0313, -0.1303,  0.0942,  0.0386,  0.1198, -0.0929,\n",
      "        -0.1152,  0.1694, -0.0836, -0.1321, -0.1072,  0.1683, -0.1127, -0.0520,\n",
      "         0.0861,  0.1102, -0.0849,  0.0470,  0.1115,  0.1201, -0.1741, -0.0225,\n",
      "         0.1702,  0.0364,  0.1208, -0.1010,  0.0067, -0.0023,  0.1611,  0.1716],\n",
      "       requires_grad=True)\n",
      "Parameter containing:\n",
      "tensor([[ 0.0707,  0.0782, -0.0302,  ...,  0.0734,  0.0146, -0.0789],\n",
      "        [-0.0848, -0.0833, -0.0667,  ...,  0.0557,  0.0653, -0.0770],\n",
      "        [-0.0436, -0.0049,  0.0504,  ..., -0.0120, -0.0008, -0.0691],\n",
      "        ...,\n",
      "        [ 0.0619,  0.0845,  0.0610,  ..., -0.0844,  0.0445, -0.0809],\n",
      "        [ 0.0128, -0.0019,  0.0386,  ..., -0.0413,  0.0239,  0.0592],\n",
      "        [-0.0337, -0.0178,  0.0762,  ..., -0.0320, -0.0726,  0.0793]],\n",
      "       requires_grad=True)\n",
      "Parameter containing:\n",
      "tensor([-0.0572, -0.0041, -0.0772, -0.0589, -0.0742,  0.0379,  0.0335, -0.0418,\n",
      "         0.0130,  0.0299], requires_grad=True)\n"
     ]
    }
   ],
   "source": [
    "for param in net.parameters():\n",
    "    print(param)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Parameters are of type `Parameter`, which is basically a wrapper around a tensor. How does PyTorch retrieve your network's parameters? They are simply all the attributes of type `Parameter` in your network. Moreover, if an attribute is of type `nn.Module`, its own parameters are added to your network's parameters! This is why, when you define a network from basic components such as `nn.Linear`, you should never have to define parameters explicitly.\n",
    "\n",
    "However, if no default PyTorch module does what you need, you can define parameters explicitly (this should be rare). For the record, let's build the previous MLP with hand-defined parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "class MyNetworkWithParams(nn.Module):\n",
    "    def __init__(self,input_size, hidden_size, output_size):\n",
    "        super(MyNetworkWithParams,self).__init__()\n",
    "        self.layer1_weights = nn.Parameter(torch.randn(input_size,hidden_size))\n",
    "        self.layer1_bias = nn.Parameter(torch.randn(hidden_size))\n",
    "        self.layer2_weights = nn.Parameter(torch.randn(hidden_size,output_size))\n",
    "        self.layer2_bias = nn.Parameter(torch.randn(output_size))\n",
    "        \n",
    "    def forward(self,x):\n",
    "        h1 = torch.matmul(x,self.layer1_weights) + self.layer1_bias\n",
    "        h1_act = torch.max(h1, torch.zeros(h1.size())) # ReLU\n",
    "        output = torch.matmul(h1_act,self.layer2_weights) + self.layer2_bias\n",
    "        return output\n",
    "\n",
    "net = MyNetworkWithParams(32,128,10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Parameters are useful because they are exactly the network weights that will be optimized during training. If you need a tensor in your computational graph that should remain constant, just define it as a regular tensor."
   ]
  },
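  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hypothetical illustration (this module does not appear elsewhere in the tutorial), a constant can be stored as a plain tensor attribute, or registered as a buffer so it moves with the module between devices. Either way it does not show up in `parameters()`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class AddConstant(nn.Module):  # hypothetical example module\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        # a buffer is part of the module's state but is never optimized\n",
    "        self.register_buffer('offset', torch.ones(4))\n",
    "\n",
    "    def forward(self, x):\n",
    "        return x + self.offset\n",
    "\n",
    "add_net = AddConstant()\n",
    "print(list(add_net.parameters()))  # [] -- the buffer is not a parameter"
   ]
  },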
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [],
   "source": [
    "net = MyNetwork(32,128,10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `torch.nn` package also provides loss functions, such as cross-entropy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(2.1727, grad_fn=<NllLossBackward>)\n"
     ]
    }
   ],
   "source": [
    "x = torch.tensor([np.arange(32), np.zeros(32),np.ones(32)]).float()\n",
    "y = torch.tensor([0,3,9])\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "\n",
    "output = net(x)\n",
    "loss = criterion(output,y)\n",
    "print(loss)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "nn.CrossEntropyLoss does both the softmax and the actual cross-entropy: given $output$ of size $(n,d)$ and $y$ of size $n$ with values in $\\{0,1,...,d-1\\}$, it computes $-\\frac{1}{n}\\sum_{i=0}^{n-1}\\log(s[i,y[i]])$ where $s[i,j] = \\frac{e^{output[i,j]}}{\\sum_{j'=0}^{d-1}e^{output[i,j']}}$\n",
    "\n",
    "You can also compose nn.LogSoftmax and nn.NLLLoss to get the same result. Note that these all use the log-softmax rather than the softmax, for numerical stability in the computations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/raphael/anaconda3/envs/dl/lib/python3.6/site-packages/ipykernel_launcher.py:5: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.\n",
      "  \"\"\"\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "tensor(2.1727, grad_fn=<NllLossBackward>)"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# equivalent\n",
    "criterion2 = nn.NLLLoss()\n",
    "sf = nn.LogSoftmax()\n",
    "output = net(x)\n",
    "loss = criterion2(sf(output),y)\n",
    "loss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, to perform the backward pass, just execute **loss.backward()**! It will accumulate gradients into all differentiable tensors in the graph, which in particular includes all the network parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[-0.0032,  0.0012,  0.0056,  ...,  0.1236,  0.1279,  0.1323],\n",
      "        [ 0.0043,  0.0043,  0.0043,  ...,  0.0043,  0.0043,  0.0043],\n",
      "        [ 0.0028,  0.0028,  0.0028,  ...,  0.0028,  0.0028,  0.0028],\n",
      "        ...,\n",
      "        [ 0.0021,  0.0021,  0.0021,  ...,  0.0015,  0.0015,  0.0015],\n",
      "        [ 0.0016,  0.0016,  0.0016,  ...,  0.0016,  0.0016,  0.0016],\n",
      "        [ 0.0004,  0.0004,  0.0004,  ...,  0.0003,  0.0003,  0.0003]])\n",
      "tensor([ 0.0051,  0.0075, -0.0002, -0.0051, -0.0029, -0.0007, -0.0038, -0.0067,\n",
      "        -0.0057, -0.0015, -0.0002,  0.0047,  0.0072,  0.0045, -0.0008,  0.0040,\n",
      "        -0.0118, -0.0039,  0.0013,  0.0030, -0.0055, -0.0003, -0.0072, -0.0038,\n",
      "        -0.0059,  0.0051,  0.0003, -0.0014, -0.0008, -0.0052, -0.0099, -0.0040,\n",
      "         0.0076,  0.0031, -0.0024,  0.0011, -0.0002,  0.0029, -0.0032, -0.0032,\n",
      "         0.0015,  0.0034,  0.0036, -0.0027, -0.0038, -0.0014,  0.0090, -0.0004,\n",
      "         0.0020,  0.0029,  0.0099, -0.0059, -0.0063, -0.0034, -0.0007, -0.0001,\n",
      "        -0.0049,  0.0026, -0.0007,  0.0027,  0.0111, -0.0048,  0.0031, -0.0019,\n",
      "         0.0049,  0.0069, -0.0034, -0.0056,  0.0068, -0.0032, -0.0029, -0.0040,\n",
      "        -0.0026,  0.0059, -0.0063, -0.0127, -0.0078, -0.0009, -0.0005,  0.0075,\n",
      "        -0.0031, -0.0063, -0.0041,  0.0036, -0.0089,  0.0027, -0.0050,  0.0029,\n",
      "         0.0000,  0.0017, -0.0011,  0.0016,  0.0022, -0.0069,  0.0016,  0.0105,\n",
      "        -0.0029,  0.0028,  0.0041,  0.0060,  0.0123,  0.0077,  0.0065,  0.0039,\n",
      "         0.0022, -0.0005,  0.0106, -0.0060, -0.0058, -0.0043,  0.0077, -0.0070,\n",
      "        -0.0070,  0.0019,  0.0014, -0.0031, -0.0003, -0.0110, -0.0002, -0.0001,\n",
      "         0.0060, -0.0092, -0.0057,  0.0048, -0.0032,  0.0087, -0.0046,  0.0006])\n",
      "tensor([[-0.1234,  0.0365,  0.0362,  ..., -0.2422,  0.0341,  0.0370],\n",
      "        [ 0.0516,  0.0286,  0.0284,  ...,  0.0643,  0.0267,  0.0290],\n",
      "        [ 0.0235,  0.0137,  0.0136,  ...,  0.0287,  0.0129,  0.0139],\n",
      "        ...,\n",
      "        [ 0.0503,  0.0279,  0.0277,  ...,  0.0628,  0.0261,  0.0283],\n",
      "        [ 0.0428,  0.0235,  0.0233,  ...,  0.0536,  0.0219,  0.0238],\n",
      "        [-0.1319, -0.0964, -0.0841,  ..., -0.1445, -0.0545, -0.0841]])\n",
      "tensor([-0.2037,  0.0946,  0.0432, -0.1319,  0.0943,  0.0948,  0.0923,  0.0923,\n",
      "         0.0785, -0.2543])\n"
     ]
    }
   ],
   "source": [
    "loss.backward()\n",
    "\n",
    "# Check that the parameters now have gradients\n",
    "for param in net.parameters():\n",
    "    print(param.grad)"
   ]
  },
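  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In a full training loop, these gradients feed a parameter update. A minimal, self-contained sketch using `torch.optim.SGD` (optimizers are not covered in this tutorial; see `pytorch-example.ipynb` for a worked example):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch.optim as optim\n",
    "\n",
    "tiny_net = MyNetwork(32,128,10)\n",
    "optimizer = optim.SGD(tiny_net.parameters(), lr=0.01)\n",
    "\n",
    "optimizer.zero_grad()             # zero accumulated gradients\n",
    "loss = criterion(tiny_net(x), y)  # forward pass\n",
    "loss.backward()                   # backward pass\n",
    "optimizer.step()                  # update parameters"
   ]
  },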
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[-0.0064,  0.0024,  0.0111,  ...,  0.2471,  0.2559,  0.2646],\n",
      "        [ 0.0086,  0.0086,  0.0086,  ...,  0.0085,  0.0085,  0.0085],\n",
      "        [ 0.0055,  0.0055,  0.0055,  ...,  0.0055,  0.0055,  0.0055],\n",
      "        ...,\n",
      "        [ 0.0043,  0.0042,  0.0042,  ...,  0.0031,  0.0030,  0.0030],\n",
      "        [ 0.0032,  0.0032,  0.0032,  ...,  0.0032,  0.0032,  0.0032],\n",
      "        [ 0.0008,  0.0007,  0.0007,  ...,  0.0006,  0.0006,  0.0006]])\n",
      "tensor([ 0.0102,  0.0151, -0.0003, -0.0103, -0.0059, -0.0015, -0.0075, -0.0135,\n",
      "        -0.0115, -0.0030, -0.0004,  0.0094,  0.0144,  0.0089, -0.0017,  0.0081,\n",
      "        -0.0235, -0.0078,  0.0026,  0.0060, -0.0111, -0.0006, -0.0143, -0.0075,\n",
      "        -0.0119,  0.0101,  0.0005, -0.0029, -0.0017, -0.0104, -0.0198, -0.0080,\n",
      "         0.0151,  0.0062, -0.0047,  0.0022, -0.0005,  0.0057, -0.0064, -0.0064,\n",
      "         0.0031,  0.0067,  0.0072, -0.0054, -0.0076, -0.0027,  0.0179, -0.0007,\n",
      "         0.0041,  0.0057,  0.0198, -0.0117, -0.0126, -0.0069, -0.0014, -0.0002,\n",
      "        -0.0097,  0.0051, -0.0015,  0.0053,  0.0222, -0.0096,  0.0063, -0.0039,\n",
      "         0.0099,  0.0138, -0.0068, -0.0112,  0.0135, -0.0063, -0.0057, -0.0080,\n",
      "        -0.0052,  0.0118, -0.0125, -0.0253, -0.0155, -0.0017, -0.0010,  0.0151,\n",
      "        -0.0062, -0.0126, -0.0081,  0.0073, -0.0177,  0.0055, -0.0100,  0.0058,\n",
      "         0.0001,  0.0034, -0.0022,  0.0032,  0.0045, -0.0138,  0.0033,  0.0209,\n",
      "        -0.0057,  0.0056,  0.0081,  0.0120,  0.0246,  0.0154,  0.0130,  0.0077,\n",
      "         0.0044, -0.0009,  0.0213, -0.0120, -0.0116, -0.0087,  0.0154, -0.0139,\n",
      "        -0.0139,  0.0039,  0.0029, -0.0062, -0.0007, -0.0221, -0.0004, -0.0003,\n",
      "         0.0121, -0.0185, -0.0114,  0.0095, -0.0065,  0.0175, -0.0092,  0.0012])\n",
      "tensor([[-0.2469,  0.0731,  0.0724,  ..., -0.4843,  0.0681,  0.0740],\n",
      "        [ 0.1031,  0.0572,  0.0567,  ...,  0.1285,  0.0534,  0.0580],\n",
      "        [ 0.0469,  0.0274,  0.0272,  ...,  0.0573,  0.0258,  0.0279],\n",
      "        ...,\n",
      "        [ 0.1007,  0.0558,  0.0554,  ...,  0.1255,  0.0522,  0.0567],\n",
      "        [ 0.0856,  0.0470,  0.0465,  ...,  0.1071,  0.0438,  0.0476],\n",
      "        [-0.2638, -0.1929, -0.1681,  ..., -0.2891, -0.1090, -0.1682]])\n",
      "tensor([-0.4074,  0.1892,  0.0863, -0.2638,  0.1885,  0.1895,  0.1846,  0.1847,\n",
      "         0.1570, -0.5086])\n",
      "tensor([[-0.0032,  0.0012,  0.0056,  ...,  0.1236,  0.1279,  0.1323],\n",
      "        [ 0.0043,  0.0043,  0.0043,  ...,  0.0043,  0.0043,  0.0043],\n",
      "        [ 0.0028,  0.0028,  0.0028,  ...,  0.0028,  0.0028,  0.0028],\n",
      "        ...,\n",
      "        [ 0.0021,  0.0021,  0.0021,  ...,  0.0015,  0.0015,  0.0015],\n",
      "        [ 0.0016,  0.0016,  0.0016,  ...,  0.0016,  0.0016,  0.0016],\n",
      "        [ 0.0004,  0.0004,  0.0004,  ...,  0.0003,  0.0003,  0.0003]])\n",
      "tensor([ 0.0051,  0.0075, -0.0002, -0.0051, -0.0029, -0.0007, -0.0038, -0.0067,\n",
      "        -0.0057, -0.0015, -0.0002,  0.0047,  0.0072,  0.0045, -0.0008,  0.0040,\n",
      "        -0.0118, -0.0039,  0.0013,  0.0030, -0.0055, -0.0003, -0.0072, -0.0038,\n",
      "        -0.0059,  0.0051,  0.0003, -0.0014, -0.0008, -0.0052, -0.0099, -0.0040,\n",
      "         0.0076,  0.0031, -0.0024,  0.0011, -0.0002,  0.0029, -0.0032, -0.0032,\n",
      "         0.0015,  0.0034,  0.0036, -0.0027, -0.0038, -0.0014,  0.0090, -0.0004,\n",
      "         0.0020,  0.0029,  0.0099, -0.0059, -0.0063, -0.0034, -0.0007, -0.0001,\n",
      "        -0.0049,  0.0026, -0.0007,  0.0027,  0.0111, -0.0048,  0.0031, -0.0019,\n",
      "         0.0049,  0.0069, -0.0034, -0.0056,  0.0068, -0.0032, -0.0029, -0.0040,\n",
      "        -0.0026,  0.0059, -0.0063, -0.0127, -0.0078, -0.0009, -0.0005,  0.0075,\n",
      "        -0.0031, -0.0063, -0.0041,  0.0036, -0.0089,  0.0027, -0.0050,  0.0029,\n",
      "         0.0000,  0.0017, -0.0011,  0.0016,  0.0022, -0.0069,  0.0016,  0.0105,\n",
      "        -0.0029,  0.0028,  0.0041,  0.0060,  0.0123,  0.0077,  0.0065,  0.0039,\n",
      "         0.0022, -0.0005,  0.0106, -0.0060, -0.0058, -0.0043,  0.0077, -0.0070,\n",
      "        -0.0070,  0.0019,  0.0014, -0.0031, -0.0003, -0.0110, -0.0002, -0.0001,\n",
      "         0.0060, -0.0092, -0.0057,  0.0048, -0.0032,  0.0087, -0.0046,  0.0006])\n",
      "tensor([[-0.1234,  0.0365,  0.0362,  ..., -0.2422,  0.0341,  0.0370],\n",
      "        [ 0.0516,  0.0286,  0.0284,  ...,  0.0643,  0.0267,  0.0290],\n",
      "        [ 0.0235,  0.0137,  0.0136,  ...,  0.0287,  0.0129,  0.0139],\n",
      "        ...,\n",
      "        [ 0.0503,  0.0279,  0.0277,  ...,  0.0628,  0.0261,  0.0283],\n",
      "        [ 0.0428,  0.0235,  0.0233,  ...,  0.0536,  0.0219,  0.0238],\n",
      "        [-0.1319, -0.0964, -0.0841,  ..., -0.1445, -0.0545, -0.0841]])\n",
      "tensor([-0.2037,  0.0946,  0.0432, -0.1319,  0.0943,  0.0948,  0.0923,  0.0923,\n",
      "         0.0785, -0.2543])\n"
     ]
    }
   ],
   "source": [
    "# if I forward prop and backward prop again, gradients accumulate :\n",
    "output = net(x)\n",
    "loss = criterion(output,y)\n",
    "loss.backward()\n",
    "for param in net.parameters():\n",
    "    print(param.grad)\n",
    "\n",
    "# you can remove this behavior by reinitializing the gradients in your network parameters :\n",
    "net.zero_grad()\n",
    "output = net(x)\n",
    "loss = criterion(output,y)\n",
    "loss.backward()\n",
    "for param in net.parameters():\n",
    "    print(param.grad)"
   ]
  },
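  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Gradient accumulation is a property of autograd itself, not of `nn.Module` — calling `backward()` adds into `.grad` rather than overwriting it. A minimal sketch of the same behavior on a bare leaf tensor (variable names here are illustrative):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "w = torch.ones(3, requires_grad=True)\n",
    "(2 * w).sum().backward()\n",
    "print(w.grad)      # each entry is 2\n",
    "(2 * w).sum().backward()\n",
    "print(w.grad)      # accumulated: each entry is now 4\n",
    "w.grad.zero_()     # reset in place, as net.zero_grad() does for every parameter\n",
    "```"
   ]
  },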
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We did backpropagation, but still didn't perform gradient descent. Let's define an optimizer on the network parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Parameters before gradient descent :\n",
      "Parameter containing:\n",
      "tensor([[-0.1618,  0.1384, -0.1522,  ..., -0.0904, -0.0640,  0.1595],\n",
      "        [ 0.1098, -0.0532, -0.0712,  ..., -0.1480, -0.0330, -0.1155],\n",
      "        [-0.0031,  0.1021,  0.0145,  ..., -0.0852, -0.1609,  0.1395],\n",
      "        ...,\n",
      "        [-0.0299,  0.0002, -0.0185,  ..., -0.0998,  0.0982,  0.1510],\n",
      "        [-0.1433, -0.1705,  0.0249,  ..., -0.1314, -0.1571, -0.0235],\n",
      "        [-0.0514, -0.1553,  0.0209,  ...,  0.1528,  0.0941,  0.0452]],\n",
      "       requires_grad=True)\n",
      "Parameter containing:\n",
      "tensor([ 0.1482, -0.1545, -0.0358, -0.1579, -0.1364,  0.0007, -0.1673, -0.0247,\n",
      "        -0.0758,  0.0744,  0.0205, -0.1513,  0.0587, -0.1764,  0.1100,  0.0161,\n",
      "         0.0146, -0.0537,  0.1472,  0.0595, -0.0534, -0.0402, -0.0231,  0.1024,\n",
      "         0.0221,  0.0307, -0.1541, -0.1290, -0.1684, -0.1434, -0.1727, -0.1560,\n",
      "        -0.1492,  0.1611, -0.1365,  0.0969, -0.0332, -0.1019, -0.1478,  0.1179,\n",
      "        -0.0933, -0.0024, -0.0929, -0.1151, -0.1486,  0.0964,  0.1005, -0.0100,\n",
      "         0.1297, -0.1333,  0.0970,  0.0569, -0.1641, -0.1188,  0.0174, -0.1502,\n",
      "         0.0782,  0.0670,  0.0566,  0.1148,  0.1089, -0.1593,  0.0456, -0.1757,\n",
      "        -0.0402, -0.0803,  0.0691, -0.0189, -0.1667,  0.0308, -0.0933,  0.1738,\n",
      "        -0.0313,  0.0774, -0.0687, -0.0797, -0.0747,  0.1287,  0.1191,  0.1761,\n",
      "        -0.1479, -0.0533, -0.0010, -0.1251,  0.0590, -0.1590, -0.0845,  0.1402,\n",
      "         0.1275,  0.0441, -0.0067,  0.1082, -0.1643,  0.1607,  0.1572, -0.0526,\n",
      "         0.0279, -0.1117,  0.1094, -0.0173, -0.1230, -0.1088, -0.0989,  0.0603,\n",
      "        -0.0446,  0.1739,  0.1130, -0.0207,  0.0291,  0.1764,  0.1184, -0.0311,\n",
      "         0.0659, -0.0462,  0.0345, -0.1024,  0.0798, -0.1282, -0.0926, -0.0599,\n",
      "         0.1480,  0.1458,  0.1507, -0.1233, -0.1243,  0.1315,  0.1364,  0.0318],\n",
      "       requires_grad=True)\n",
      "Parameter containing:\n",
      "tensor([[-0.0690,  0.0443,  0.0257,  ...,  0.0492, -0.0570,  0.0550],\n",
      "        [-0.0393,  0.0690, -0.0690,  ..., -0.0801, -0.0853, -0.0167],\n",
      "        [-0.0165, -0.0412,  0.0769,  ...,  0.0727, -0.0340,  0.0438],\n",
      "        ...,\n",
      "        [ 0.0523,  0.0615,  0.0787,  ..., -0.0605, -0.0068, -0.0701],\n",
      "        [-0.0042, -0.0353, -0.0071,  ...,  0.0763, -0.0136,  0.0499],\n",
      "        [ 0.0315, -0.0645, -0.0316,  ..., -0.0262, -0.0593, -0.0059]],\n",
      "       requires_grad=True)\n",
      "Parameter containing:\n",
      "tensor([-0.0275,  0.0791, -0.0126, -0.0857, -0.0223, -0.0362, -0.0639,  0.0790,\n",
      "         0.0802, -0.0371], requires_grad=True)\n",
      "Parameters after gradient descent :\n",
      "Parameter containing:\n",
      "tensor([[-0.1618,  0.1384, -0.1522,  ..., -0.0917, -0.0653,  0.1582],\n",
      "        [ 0.1098, -0.0532, -0.0713,  ..., -0.1481, -0.0331, -0.1155],\n",
      "        [-0.0031,  0.1021,  0.0144,  ..., -0.0852, -0.1609,  0.1395],\n",
      "        ...,\n",
      "        [-0.0299,  0.0002, -0.0185,  ..., -0.0999,  0.0982,  0.1509],\n",
      "        [-0.1433, -0.1705,  0.0249,  ..., -0.1314, -0.1571, -0.0235],\n",
      "        [-0.0514, -0.1553,  0.0209,  ...,  0.1528,  0.0941,  0.0452]],\n",
      "       requires_grad=True)\n",
      "Parameter containing:\n",
      "tensor([ 0.1481, -0.1546, -0.0358, -0.1579, -0.1364,  0.0007, -0.1672, -0.0247,\n",
      "        -0.0757,  0.0744,  0.0205, -0.1513,  0.0587, -0.1764,  0.1100,  0.0161,\n",
      "         0.0147, -0.0536,  0.1472,  0.0595, -0.0533, -0.0402, -0.0230,  0.1025,\n",
      "         0.0222,  0.0307, -0.1541, -0.1290, -0.1684, -0.1434, -0.1726, -0.1560,\n",
      "        -0.1493,  0.1611, -0.1364,  0.0969, -0.0332, -0.1019, -0.1478,  0.1180,\n",
      "        -0.0934, -0.0025, -0.0929, -0.1150, -0.1486,  0.0964,  0.1004, -0.0100,\n",
      "         0.1297, -0.1334,  0.0969,  0.0570, -0.1640, -0.1188,  0.0174, -0.1502,\n",
      "         0.0782,  0.0669,  0.0567,  0.1148,  0.1088, -0.1592,  0.0456, -0.1757,\n",
      "        -0.0402, -0.0803,  0.0691, -0.0189, -0.1668,  0.0308, -0.0933,  0.1739,\n",
      "        -0.0312,  0.0774, -0.0686, -0.0795, -0.0746,  0.1287,  0.1191,  0.1760,\n",
      "        -0.1479, -0.0532, -0.0010, -0.1252,  0.0591, -0.1590, -0.0845,  0.1402,\n",
      "         0.1275,  0.0441, -0.0067,  0.1082, -0.1644,  0.1608,  0.1571, -0.0527,\n",
      "         0.0280, -0.1117,  0.1093, -0.0174, -0.1232, -0.1089, -0.0989,  0.0602,\n",
      "        -0.0446,  0.1739,  0.1129, -0.0206,  0.0292,  0.1764,  0.1183, -0.0311,\n",
      "         0.0660, -0.0462,  0.0345, -0.1024,  0.0798, -0.1281, -0.0926, -0.0599,\n",
      "         0.1480,  0.1459,  0.1507, -0.1234, -0.1243,  0.1314,  0.1364,  0.0318],\n",
      "       requires_grad=True)\n",
      "Parameter containing:\n",
      "tensor([[-0.0677,  0.0439,  0.0253,  ...,  0.0517, -0.0574,  0.0546],\n",
      "        [-0.0398,  0.0687, -0.0693,  ..., -0.0807, -0.0856, -0.0169],\n",
      "        [-0.0167, -0.0413,  0.0768,  ...,  0.0724, -0.0341,  0.0436],\n",
      "        ...,\n",
      "        [ 0.0518,  0.0612,  0.0784,  ..., -0.0612, -0.0070, -0.0703],\n",
      "        [-0.0046, -0.0355, -0.0073,  ...,  0.0757, -0.0138,  0.0497],\n",
      "        [ 0.0328, -0.0635, -0.0307,  ..., -0.0247, -0.0588, -0.0051]],\n",
      "       requires_grad=True)\n",
      "Parameter containing:\n",
      "tensor([-0.0255,  0.0782, -0.0130, -0.0844, -0.0233, -0.0372, -0.0648,  0.0781,\n",
      "         0.0794, -0.0345], requires_grad=True)\n"
     ]
    }
   ],
   "source": [
    "optimizer = torch.optim.SGD(net.parameters(), lr=0.01)\n",
    "\n",
    "print(\"Parameters before gradient descent :\")\n",
    "for param in net.parameters():\n",
    "    print(param)\n",
    "\n",
    "optimizer.step()\n",
    "\n",
    "print(\"Parameters after gradient descent :\")\n",
    "for param in net.parameters():\n",
    "    print(param)"
   ]
  },
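  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Putting the pieces together, one training iteration follows the pattern `zero_grad` → forward → loss → `backward` → `step`. A self-contained sketch with a small stand-in network and random data (the sizes and learning rate here are illustrative):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "net = nn.Linear(4, 3)\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "optimizer = torch.optim.SGD(net.parameters(), lr=0.01)\n",
    "x = torch.randn(8, 4)\n",
    "y = torch.randint(0, 3, (8,))\n",
    "\n",
    "for _ in range(5):\n",
    "    optimizer.zero_grad()          # clear gradients from the previous iteration\n",
    "    output = net(x)                # forward pass\n",
    "    loss = criterion(output, y)    # compute the loss\n",
    "    loss.backward()                # backpropagate\n",
    "    optimizer.step()               # take one gradient descent step\n",
    "    print(loss.item())\n",
    "```\n",
    "\n",
    "This is the same loop the next cell runs; the printed loss should decrease from one iteration to the next."
   ]
  },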
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(2.0646, grad_fn=<NllLossBackward>)\n",
      "tensor(1.9667, grad_fn=<NllLossBackward>)\n",
      "tensor(1.8782, grad_fn=<NllLossBackward>)\n",
      "tensor(1.7977, grad_fn=<NllLossBackward>)\n",
      "tensor(1.7243, grad_fn=<NllLossBackward>)\n",
      "tensor(1.6579, grad_fn=<NllLossBackward>)\n",
      "tensor(1.5980, grad_fn=<NllLossBackward>)\n",
      "tensor(1.5441, grad_fn=<NllLossBackward>)\n",
      "tensor(1.4955, grad_fn=<NllLossBackward>)\n",
      "tensor(1.4513, grad_fn=<NllLossBackward>)\n",
      "tensor(1.4106, grad_fn=<NllLossBackward>)\n",
      "tensor(1.3729, grad_fn=<NllLossBackward>)\n",
      "tensor(1.3377, grad_fn=<NllLossBackward>)\n",
      "tensor(1.3047, grad_fn=<NllLossBackward>)\n",
      "tensor(1.2737, grad_fn=<NllLossBackward>)\n",
      "tensor(1.2445, grad_fn=<NllLossBackward>)\n",
      "tensor(1.2169, grad_fn=<NllLossBackward>)\n",
      "tensor(1.1909, grad_fn=<NllLossBackward>)\n",
      "tensor(1.1663, grad_fn=<NllLossBackward>)\n",
      "tensor(1.1431, grad_fn=<NllLossBackward>)\n",
      "tensor(1.1211, grad_fn=<NllLossBackward>)\n",
      "tensor(1.1002, grad_fn=<NllLossBackward>)\n",
      "tensor(1.0804, grad_fn=<NllLossBackward>)\n",
      "tensor(1.0616, grad_fn=<NllLossBackward>)\n",
      "tensor(1.0436, grad_fn=<NllLossBackward>)\n",
      "tensor(1.0264, grad_fn=<NllLossBackward>)\n",
      "tensor(1.0100, grad_fn=<NllLossBackward>)\n",
      "tensor(0.9943, grad_fn=<NllLossBackward>)\n",
      "tensor(0.9793, grad_fn=<NllLossBackward>)\n",
      "tensor(0.9649, grad_fn=<NllLossBackward>)\n",
      "tensor(0.9510, grad_fn=<NllLossBackward>)\n",
      "tensor(0.9377, grad_fn=<NllLossBackward>)\n",
      "tensor(0.9248, grad_fn=<NllLossBackward>)\n",
      "tensor(0.9125, grad_fn=<NllLossBackward>)\n",
      "tensor(0.9005, grad_fn=<NllLossBackward>)\n",
      "tensor(0.8890, grad_fn=<NllLossBackward>)\n",
      "tensor(0.8779, grad_fn=<NllLossBackward>)\n",
      "tensor(0.8672, grad_fn=<NllLossBackward>)\n",
      "tensor(0.8568, grad_fn=<NllLossBackward>)\n",
      "tensor(0.8468, grad_fn=<NllLossBackward>)\n",
      "tensor(0.8370, grad_fn=<NllLossBackward>)\n",
      "tensor(0.8276, grad_fn=<NllLossBackward>)\n",
      "tensor(0.8185, grad_fn=<NllLossBackward>)\n",
      "tensor(0.8096, grad_fn=<NllLossBackward>)\n",
      "tensor(0.8010, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7926, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7845, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7766, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7689, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7614, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7541, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7470, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7401, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7333, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7267, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7203, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7140, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7079, grad_fn=<NllLossBackward>)\n",
      "tensor(0.7019, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6961, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6903, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6847, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6793, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6739, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6687, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6635, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6585, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6535, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6487, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6440, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6393, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6347, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6303, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6259, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6215, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6173, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6131, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6090, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6050, grad_fn=<NllLossBackward>)\n",
      "tensor(0.6011, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5972, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5934, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5896, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5859, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5823, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5787, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5751, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5717, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5683, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5649, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5616, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5583, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5551, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5519, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5488, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5457, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5427, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5397, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5367, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5338, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5309, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5281, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5253, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5225, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5198, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5171, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5144, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5118, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5092, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5066, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5041, grad_fn=<NllLossBackward>)\n",
      "tensor(0.5016, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4991, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4967, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4942, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4918, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4895, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4871, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4848, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4825, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4803, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4780, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4758, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4736, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4714, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4693, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4672, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4651, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4630, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4609, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4589, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4568, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4548, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4528, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4509, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4489, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4470, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4451, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4432, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4413, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4394, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4376, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4357, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4339, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4321, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4303, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4286, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4268, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4251, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4233, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4216, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4199, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4182, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4165, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4149, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4132, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4116, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4100, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4084, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4068, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4052, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4036, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4020, grad_fn=<NllLossBackward>)\n",
      "tensor(0.4005, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3990, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3974, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3959, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3944, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3929, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3914, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3899, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3885, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3870, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3856, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3841, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3827, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3813, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3799, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3785, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3771, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3757, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3743, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3730, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3716, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3703, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3689, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3676, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3663, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3650, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3637, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3624, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3611, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3598, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3585, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3573, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3560, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3548, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3535, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3523, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3510, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3498, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3486, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3474, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3462, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3450, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3438, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3426, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3415, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3403, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3391, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3380, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3368, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3357, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3346, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3334, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3323, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3312, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3301, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3290, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3279, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3268, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3257, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3246, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3235, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3225, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3214, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3203, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3193, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3182, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3172, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3161, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3151, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3141, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3131, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3120, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3110, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3100, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3090, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3080, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3070, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3060, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3051, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3041, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3031, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3021, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3012, grad_fn=<NllLossBackward>)\n",
      "tensor(0.3002, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2992, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2983, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2974, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2964, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2955, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2945, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2936, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2927, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2918, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2908, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2899, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2890, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2881, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2872, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2863, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2854, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2846, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2837, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2828, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2819, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2810, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2802, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2793, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2785, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2776, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2768, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2759, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2751, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2742, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2734, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2726, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2717, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2709, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2701, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2693, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2684, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2676, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2668, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2660, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2652, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2644, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2636, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2628, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2621, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2613, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2605, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2597, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2589, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2582, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2574, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2566, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2559, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2551, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2544, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2536, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2529, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2521, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2514, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2506, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2499, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2492, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2485, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2477, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2470, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2463, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2456, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2449, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2441, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2434, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2427, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2420, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2413, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2406, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2399, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2392, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2386, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2379, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2372, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2365, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2358, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2352, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2345, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2338, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2332, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2325, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2318, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2312, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2305, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2299, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2292, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2286, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2279, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2273, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2266, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2260, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2254, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2247, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2241, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2235, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2229, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2222, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2216, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2210, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2204, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2198, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2192, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2186, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2180, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2174, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2168, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2162, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2156, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2150, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2144, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2138, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2132, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2126, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2120, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2115, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2109, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2103, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2097, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2092, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2086, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2080, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2075, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2069, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2063, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2058, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2052, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2047, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2041, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2036, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2030, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2025, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2019, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2014, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2009, grad_fn=<NllLossBackward>)\n",
      "tensor(0.2003, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1998, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1993, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1987, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1982, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1977, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1972, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1966, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1961, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1956, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1951, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1946, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1941, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1935, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1930, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1925, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1920, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1915, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1910, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1905, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1900, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1895, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1890, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1885, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1881, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1876, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1871, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1866, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1861, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1856, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1852, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1847, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1842, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1837, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1833, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1828, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1823, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1819, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1814, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1809, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1805, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1800, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1795, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1791, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1786, grad_fn=<NllLossBackward>)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(0.1782, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1777, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1773, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1768, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1764, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1759, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1755, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1751, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1746, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1742, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1737, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1733, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1729, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1724, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1720, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1716, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1712, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1707, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1703, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1699, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1695, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1690, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1686, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1682, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1678, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1674, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1670, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1666, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1662, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1657, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1653, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1649, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1645, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1641, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1637, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1633, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1629, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1625, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1621, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1618, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1614, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1610, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1606, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1602, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1598, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1594, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1590, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1587, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1583, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1579, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1575, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1571, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1568, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1564, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1560, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1556, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1553, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1549, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1545, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1542, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1538, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1534, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1531, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1527, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1524, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1520, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1516, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1513, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1509, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1506, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1502, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1499, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1495, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1492, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1488, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1485, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1481, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1478, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1475, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1471, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1468, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1464, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1461, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1458, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1454, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1451, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1448, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1444, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1441, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1438, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1434, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1431, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1428, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1425, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1421, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1418, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1415, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1412, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1408, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1405, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1402, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1399, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1396, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1393, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1389, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1386, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1383, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1380, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1377, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1374, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1371, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1368, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1365, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1362, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1359, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1356, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1353, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1350, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1347, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1344, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1341, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1338, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1335, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1332, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1329, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1326, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1323, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1320, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1317, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1314, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1312, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1309, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1306, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1303, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1300, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1297, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1294, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1292, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1289, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1286, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1283, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1281, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1278, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1275, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1272, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1270, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1267, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1264, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1261, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1259, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1256, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1253, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1251, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1248, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1245, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1243, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1240, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1237, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1235, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1232, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1230, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1227, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1224, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1222, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1219, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1217, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1214, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1212, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1209, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1207, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1204, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1201, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1199, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1196, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1194, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1192, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1189, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1187, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1184, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1182, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1179, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1177, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1174, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1172, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1170, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1167, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1165, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1162, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1160, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1158, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1155, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1153, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1151, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1148, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1146, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1144, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1141, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1139, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1137, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1134, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1132, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1130, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1127, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1125, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1123, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1121, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1118, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1116, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1114, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1112, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1110, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1107, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1105, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1103, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1101, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1099, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1096, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1094, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1092, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1090, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1088, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1086, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1083, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1081, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1079, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1077, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1075, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1073, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1071, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1069, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1067, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1064, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1062, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1060, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1058, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1056, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1054, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1052, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1050, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1048, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1046, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1044, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1042, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1040, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1038, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1036, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1034, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1032, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1030, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1028, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1026, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1024, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1022, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1020, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1018, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1016, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1014, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1013, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1011, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1009, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1007, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1005, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1003, grad_fn=<NllLossBackward>)\n",
      "tensor(0.1001, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0999, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0997, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0996, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0994, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0992, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0990, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0988, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0986, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0984, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0983, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0981, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0979, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0977, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0975, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0974, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0972, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0970, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0968, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0966, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0965, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0963, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0961, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0959, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0958, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0956, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0954, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0952, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0951, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0949, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0947, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0945, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0944, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0942, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0940, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0939, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0937, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0935, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0933, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0932, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0930, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0928, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0927, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0925, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0923, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0922, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0920, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0918, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0917, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0915, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0914, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0912, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0910, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0909, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0907, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0906, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0904, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0902, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0901, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0899, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0898, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0896, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0894, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0893, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0891, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0890, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0888, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0887, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0885, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0884, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0882, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0880, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0879, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0877, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0876, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0874, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0873, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0871, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0870, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0868, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0867, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0865, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0864, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0862, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0861, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0859, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0858, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0856, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0855, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0854, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0852, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0851, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0849, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0848, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0846, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0845, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0843, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0842, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0841, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0839, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0838, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0836, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0835, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0834, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0832, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0831, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0829, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0828, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0827, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0825, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0824, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0822, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0821, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0820, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0818, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0817, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0816, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0814, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0813, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0812, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0810, grad_fn=<NllLossBackward>)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(0.0809, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0808, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0806, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0805, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0804, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0802, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0801, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0800, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0798, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0797, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0796, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0794, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0793, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0792, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0791, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0789, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0788, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0787, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0785, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0784, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0783, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0782, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0780, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0779, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0778, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0777, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0775, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0774, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0773, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0772, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0770, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0769, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0768, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0767, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0766, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0764, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0763, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0762, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0761, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0759, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0758, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0757, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0756, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0755, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0754, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0752, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0751, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0750, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0749, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0748, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0746, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0745, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0744, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0743, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0742, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0741, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0739, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0738, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0737, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0736, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0735, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0734, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0733, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0732, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0730, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0729, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0728, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0727, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0726, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0725, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0724, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0723, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0721, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0720, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0719, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0718, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0717, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0716, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0715, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0714, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0713, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0712, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0711, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0709, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0708, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0707, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0706, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0705, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0704, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0703, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0702, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0701, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0700, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0699, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0698, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0697, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0696, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0695, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0694, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0693, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0692, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0691, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0690, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0689, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0688, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0686, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0685, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0684, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0683, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0682, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0681, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0680, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0679, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0678, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0677, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0676, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0675, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0674, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0673, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0673, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0672, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0671, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0670, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0669, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0668, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0667, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0666, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0665, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0664, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0663, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0662, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0661, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0660, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0659, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0658, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0657, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0656, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0655, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0654, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0653, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0652, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0651, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0651, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0650, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0649, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0648, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0647, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0646, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0645, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0644, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0643, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0642, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0641, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0640, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0640, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0639, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0638, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0637, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0636, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0635, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0634, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0633, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0632, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0632, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0631, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0630, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0629, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0628, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0627, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0626, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0625, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0625, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0624, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0623, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0622, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0621, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0620, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0619, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0619, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0618, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0617, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0616, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0615, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0614, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0613, grad_fn=<NllLossBackward>)\n",
      "tensor(0.0613, grad_fn=<NllLossBackward>)\n"
     ]
    }
   ],
   "source": [
    "# A training loop runs many gradient-descent iterations.\n",
    "n_iter = 1000\n",
    "for i in range(n_iter):\n",
    "    optimizer.zero_grad() # equivalent to net.zero_grad()\n",
    "    output = net(x)\n",
    "    loss = criterion(output,y)\n",
    "    loss.backward()\n",
    "    optimizer.step()\n",
    "    print(loss)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 8.0828, -1.7975, -2.0597,  1.0675, -1.6249, -1.9246, -1.5834, -1.9368,\n",
      "         -1.7684,  4.1029],\n",
      "        [ 0.5056, -1.3013, -1.5002,  5.9456, -1.3405, -1.3123, -1.3705, -1.2731,\n",
      "         -1.3589,  3.2805],\n",
      "        [ 1.8830, -1.4081, -1.6762,  3.0854, -1.5293, -1.3578, -1.4603, -1.4866,\n",
      "         -1.4614,  5.7883]], grad_fn=<ThAddmmBackward>)\n",
      "tensor([0, 3, 9])\n"
     ]
    }
   ],
   "source": [
    "output = net(x)\n",
    "print(output)\n",
    "print(y)"
   ]
  },
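  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The raw outputs above are unnormalized scores over the 10 classes. A minimal sketch of turning them into class predictions and an accuracy (assumes `output` and `y` from the previous cell):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "preds = output.argmax(dim=1) # index of the highest score in each row\n",
    "accuracy = (preds == y).float().mean()\n",
    "print(preds, accuracy)"
   ]
  },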
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now you know how to train a network! For a complete training example, see the `pytorch-example.ipynb` notebook."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Saving and Loading"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "odict_keys(['0.weight', '0.bias', '2.weight', '2.bias'])\n"
     ]
    }
   ],
   "source": [
    "# state_dict() returns an ordered dictionary mapping parameter names to tensors\n",
    "net = torch.nn.Sequential(\n",
    "    torch.nn.Linear(28*28,256),\n",
    "    torch.nn.Sigmoid(),\n",
    "    torch.nn.Linear(256,10))\n",
    "print(net.state_dict().keys())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [],
   "source": [
    "# save the state dict to disk (PyTorch files conventionally use a .pt or .pth extension)\n",
    "torch.save(net.state_dict(),'test.t7')\n",
    "# load the state dict back into a network with a matching architecture\n",
    "net.load_state_dict(torch.load('test.t7'))"
   ]
  },
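  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check (a sketch, reusing `net` and `test.t7` from the previous cell): loading the state dict really does restore the saved weights."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import copy\n",
    "w_before = copy.deepcopy(net.state_dict()) # snapshot of the saved weights\n",
    "for p in net.parameters():\n",
    "    p.data.zero_() # scramble the live weights\n",
    "net.load_state_dict(torch.load('test.t7'))\n",
    "w_after = net.state_dict()\n",
    "print(all(torch.equal(w_before[k], w_after[k]) for k in w_before))"
   ]
  },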
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Common issues to look out for"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Type mismatch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'mat2'",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-49-4d1c8f2c4847>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[0mnet\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mLinear\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m4\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      2\u001b[0m \u001b[0mx\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtensor\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m3\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m4\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 3\u001b[0;31m \u001b[0my\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnet\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m      4\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0my\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/envs/dl/lib/python3.6/site-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m    475\u001b[0m             \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_slow_forward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    476\u001b[0m         \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 477\u001b[0;31m             \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    478\u001b[0m         \u001b[0;32mfor\u001b[0m \u001b[0mhook\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_forward_hooks\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    479\u001b[0m             \u001b[0mhook_result\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mhook\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresult\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/envs/dl/lib/python3.6/site-packages/torch/nn/modules/linear.py\u001b[0m in \u001b[0;36mforward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m     53\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     54\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 55\u001b[0;31m         \u001b[0;32mreturn\u001b[0m \u001b[0mF\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlinear\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mweight\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbias\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     56\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     57\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0mextra_repr\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/envs/dl/lib/python3.6/site-packages/torch/nn/functional.py\u001b[0m in \u001b[0;36mlinear\u001b[0;34m(input, weight, bias)\u001b[0m\n\u001b[1;32m   1024\u001b[0m         \u001b[0;32mreturn\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0maddmm\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mbias\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mweight\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mt\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1025\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1026\u001b[0;31m     \u001b[0moutput\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmatmul\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mweight\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mt\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   1027\u001b[0m     \u001b[0;32mif\u001b[0m \u001b[0mbias\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1028\u001b[0m         \u001b[0moutput\u001b[0m \u001b[0;34m+=\u001b[0m \u001b[0mbias\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mRuntimeError\u001b[0m: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'mat2'"
     ]
    }
   ],
   "source": [
    "net = nn.Linear(4,2)\n",
    "x = torch.tensor([1,2,3,4]) # integer literals produce a LongTensor\n",
    "y = net(x) # error: Linear expects a FloatTensor\n",
    "print(y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [],
   "source": [
    "# two equivalent fixes: cast the existing tensor, or create a float tensor directly\n",
    "x = x.float()\n",
    "x = torch.tensor([1.,2.,3.,4.])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Shape mismatch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[6., 6.],\n",
      "        [6., 6.]])\n",
      "tensor([[12., 12.],\n",
      "        [12., 12.]])\n"
     ]
    }
   ],
   "source": [
    "# * is elementwise multiplication; use matmul (or the @ operator) for matrix multiplication\n",
    "x = 2* torch.ones(2,2)\n",
    "y = 3* torch.ones(2,2)\n",
    "print(x * y)\n",
    "print(x.matmul(y))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1., 2., 3., 4., 5.],\n",
      "        [1., 2., 3., 4., 5.],\n",
      "        [1., 2., 3., 4., 5.],\n",
      "        [1., 2., 3., 4., 5.]])\n",
      "tensor([[1., 1., 1., 1., 1.],\n",
      "        [2., 2., 2., 2., 2.],\n",
      "        [3., 3., 3., 3., 3.],\n",
      "        [4., 4., 4., 4., 4.]])\n"
     ]
    },
    {
     "ename": "RuntimeError",
     "evalue": "The size of tensor a (5) must match the size of tensor b (4) at non-singleton dimension 1",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-55-21069971df37>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      5\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m+\u001b[0m\u001b[0my\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      6\u001b[0m \u001b[0my\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marange\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m4\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfloat\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 7\u001b[0;31m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m+\u001b[0m\u001b[0my\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# exception\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m: The size of tensor a (5) must match the size of tensor b (4) at non-singleton dimension 1"
     ]
    }
   ],
   "source": [
    "x = torch.ones(4,5).float()\n",
    "y = torch.arange(5).float()\n",
    "print(x+y) # broadcasts row-wise: shapes (4,5) and (5,)\n",
    "y = torch.arange(4).float().view(-1,1)\n",
    "print(x+y) # broadcasts column-wise: shapes (4,5) and (4,1)\n",
    "y = torch.arange(4).float()\n",
    "print(x+y) # exception: shapes (4,5) and (4,) do not align on the trailing dimension"
   ]
  },
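  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The failing addition can also be fixed with `unsqueeze`, which inserts a dimension of size 1 (equivalent to the `view(-1,1)` used above); a minimal sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.ones(4,5)\n",
    "y = torch.arange(4).float()\n",
    "print(x + y.unsqueeze(1)) # (4,5) + (4,1) broadcasts column-wise"
   ]
  },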
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1, 2, 3],\n",
      "        [4, 5, 6]])\n",
      "tensor([[1, 4],\n",
      "        [2, 5],\n",
      "        [3, 6]])\n",
      "tensor([[1, 2],\n",
      "        [3, 4],\n",
      "        [5, 6]])\n"
     ]
    }
   ],
   "source": [
    "# t() transposes, while view() only reinterprets the storage order -- note the different results\n",
    "x = torch.tensor([[1,2,3],[4,5,6]])\n",
    "print(x)\n",
    "print(x.t())\n",
    "print(x.view(3,2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### CUDA out of memory"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {},
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "CUDA error: out of memory",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-67-c97ca5ec6a72>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m     11\u001b[0m \u001b[0my\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0my\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcuda\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     12\u001b[0m \u001b[0mcrit\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mnn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mCrossEntropyLoss\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 13\u001b[0;31m \u001b[0mout\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnet\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     14\u001b[0m \u001b[0mloss\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mcrit\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mout\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0my\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     15\u001b[0m \u001b[0mloss\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbackward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/envs/dl/lib/python3.6/site-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m    475\u001b[0m             \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_slow_forward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    476\u001b[0m         \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 477\u001b[0;31m             \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    478\u001b[0m         \u001b[0;32mfor\u001b[0m \u001b[0mhook\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_forward_hooks\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    479\u001b[0m             \u001b[0mhook_result\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mhook\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresult\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/envs/dl/lib/python3.6/site-packages/torch/nn/modules/container.py\u001b[0m in \u001b[0;36mforward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m     89\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     90\u001b[0m         \u001b[0;32mfor\u001b[0m \u001b[0mmodule\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_modules\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 91\u001b[0;31m             \u001b[0minput\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmodule\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     92\u001b[0m         \u001b[0;32mreturn\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     93\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/envs/dl/lib/python3.6/site-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m    475\u001b[0m             \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_slow_forward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    476\u001b[0m         \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 477\u001b[0;31m             \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    478\u001b[0m         \u001b[0;32mfor\u001b[0m \u001b[0mhook\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_forward_hooks\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    479\u001b[0m             \u001b[0mhook_result\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mhook\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresult\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/envs/dl/lib/python3.6/site-packages/torch/nn/modules/container.py\u001b[0m in \u001b[0;36mforward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m     89\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     90\u001b[0m         \u001b[0;32mfor\u001b[0m \u001b[0mmodule\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_modules\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 91\u001b[0;31m             \u001b[0minput\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmodule\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     92\u001b[0m         \u001b[0;32mreturn\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     93\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/envs/dl/lib/python3.6/site-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m    475\u001b[0m             \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_slow_forward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    476\u001b[0m         \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 477\u001b[0;31m             \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    478\u001b[0m         \u001b[0;32mfor\u001b[0m \u001b[0mhook\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_forward_hooks\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    479\u001b[0m             \u001b[0mhook_result\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mhook\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresult\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/envs/dl/lib/python3.6/site-packages/torch/nn/modules/container.py\u001b[0m in \u001b[0;36mforward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m     89\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     90\u001b[0m         \u001b[0;32mfor\u001b[0m \u001b[0mmodule\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_modules\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 91\u001b[0;31m             \u001b[0minput\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmodule\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     92\u001b[0m         \u001b[0;32mreturn\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     93\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/envs/dl/lib/python3.6/site-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m    475\u001b[0m             \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_slow_forward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    476\u001b[0m         \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 477\u001b[0;31m             \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    478\u001b[0m         \u001b[0;32mfor\u001b[0m \u001b[0mhook\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_forward_hooks\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    479\u001b[0m             \u001b[0mhook_result\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mhook\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresult\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/envs/dl/lib/python3.6/site-packages/torch/nn/modules/activation.py\u001b[0m in \u001b[0;36mforward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m     44\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     45\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 46\u001b[0;31m         \u001b[0;32mreturn\u001b[0m \u001b[0mF\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mthreshold\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mthreshold\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minplace\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     47\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     48\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0mextra_repr\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/envs/dl/lib/python3.6/site-packages/torch/nn/functional.py\u001b[0m in \u001b[0;36mthreshold\u001b[0;34m(input, threshold, value, inplace)\u001b[0m\n\u001b[1;32m    623\u001b[0m     \u001b[0;32mif\u001b[0m \u001b[0minplace\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    624\u001b[0m         \u001b[0;32mreturn\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_C\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_nn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mthreshold_\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mthreshold\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mvalue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 625\u001b[0;31m     \u001b[0;32mreturn\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_C\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_nn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mthreshold\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mthreshold\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mvalue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    626\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    627\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mRuntimeError\u001b[0m: CUDA error: out of memory"
     ]
    }
   ],
   "source": [
    "# 100 hidden layers of 2048x2048 weights, plus activations for a batch of 1024,\n",
    "# can easily exceed GPU memory -- shrink the model or the batch if this happens\n",
    "def get_layer():\n",
    "    return nn.Sequential(nn.Linear(2048,2048),nn.ReLU())\n",
    "def get_layers(n):\n",
    "    return nn.Sequential(*[get_layer() for i in range(n)])\n",
    "net = nn.Sequential(get_layers(100),\n",
    "                   nn.Linear(2048,120))\n",
    "x = torch.rand(1024,2048)\n",
    "y = torch.zeros(1024).long()\n",
    "net=net.cuda()\n",
    "x=x.cuda()\n",
    "y=y.cuda()\n",
    "crit=nn.CrossEntropyLoss()\n",
    "out = net(x)\n",
    "loss = crit(out,y)\n",
    "loss.backward()\n",
    "print(loss)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Forgetting to register submodules\n",
    "\n",
    "Layers stored in a plain Python list are not registered with the module, so their parameters are invisible to `net.parameters()`, `state_dict()`, and `.cuda()` -- they will never be trained, saved, or moved to the GPU. The first version below has this bug; the second fixes it with `nn.ModuleList`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {},
   "outputs": [],
   "source": [
    "class MyNet(nn.Module):\n",
    "    def __init__(self,n_hidden_layers):\n",
    "        super(MyNet,self).__init__()\n",
    "        self.n_hidden_layers=n_hidden_layers\n",
    "        self.final_layer = nn.Linear(128,10)\n",
    "        self.act = nn.ReLU()\n",
    "        self.hidden = [] # BUG: a plain Python list does not register submodules\n",
    "        for i in range(n_hidden_layers):\n",
    "            self.hidden.append(nn.Linear(128,128))\n",
    "    \n",
    "            \n",
    "    def forward(self,x):\n",
    "        h = x\n",
    "        for i in range(self.n_hidden_layers):\n",
    "            h = self.hidden[i](h)\n",
    "            h = self.act(h)\n",
    "        out = self.final_layer(h)\n",
    "        return out"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {},
   "outputs": [],
   "source": [
    "class MyNet(nn.Module):\n",
    "    def __init__(self,n_hidden_layers):\n",
    "        super(MyNet,self).__init__()\n",
    "        self.n_hidden_layers=n_hidden_layers\n",
    "        self.final_layer = nn.Linear(128,10)\n",
    "        self.act = nn.ReLU()\n",
    "        self.hidden = []\n",
    "        for i in range(n_hidden_layers):\n",
    "            self.hidden.append(nn.Linear(128,128))\n",
    "        self.hidden = nn.ModuleList(self.hidden) # registers the layers as submodules\n",
    "            \n",
    "    def forward(self,x):\n",
    "        h = x\n",
    "        for i in range(self.n_hidden_layers):\n",
    "            h = self.hidden[i](h)\n",
    "            h = self.act(h)\n",
    "        out = self.final_layer(h)\n",
    "        return out"
   ]
  },
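  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One way to see the difference between the two versions: count the registered parameters. With `nn.ModuleList` every hidden layer contributes; with a plain Python list, only `final_layer` would. (A sketch, using the `MyNet` defined in the cell above.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "net = MyNet(3)\n",
    "# 3 hidden layers of 128*128 weights + 128 biases, plus the 128x10 final layer\n",
    "print(sum(p.numel() for p in net.parameters()))"
   ]
  },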
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
