{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "6c62b709-eea8-4e47-b263-73a944a6fa3f",
   "metadata": {},
   "source": [
    "<a href=\"https://colab.research.google.com/github/mrdbourke/pytorch-deep-learning/blob/main/extras/pytorch_most_common_errors.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fd5762fa-e6ff-488b-b088-3a5ca69d56a5",
   "metadata": {},
   "source": [
    "# The Three Most Common Errors in PyTorch\n",
    "\n",
    "PyTorch is one of the largest machine learning libraries available.\n",
    "\n",
    "So it's likely you'll run into various errors when using it.\n",
    "\n",
    "Because of the various maintenance and checks performed by the creators, it's rare the error will be because of the library itself.\n",
    "\n",
    "This means the majority of the errors you run into will be user errors.\n",
    "\n",
    "More specifically, you wrote the wrong code.\n",
    "\n",
    "Don't be offended, this happens to every programmer.\n",
    "\n",
    "Of the user errors you run into, chances are they'll be one of the following:\n",
    "\n",
    "1. **Shape errors** - You're trying to perform an operation on matrices/tensors with shapes that don't line up. For example, your data's shape is `[1, 28, 28]` but your first layer takes an input of `[10]`.\n",
    "2. **Device errors** - Your model is on a different device to your data. For example your model is on the GPU (e.g. `\"cuda\"`) and your data is on the CPU (e.g. `\"cpu\"`).\n",
    "3. **Datatype errors** - Your data is one datatype (e.g. `torch.float32`), however the operation you're trying to perform requires another datatype (e.g. `torch.int64`).\n",
    "\n",
    "<img src=\"https://github.com/mrdbourke/pytorch-deep-learning/raw/main/images/misc-three-main-errors-in-pytorch.png\" width=750 alt=\"the three most common errors in PyTorch\"/>\n",
    "\n",
    "Notice the recurring theme here.\n",
    "\n",
    "There's some kind of mismatch between your shape(s), device(s) and/or datatype(s).\n",
    "\n",
    "This notebook/blog post goes through examples of each of the above errors and how to fix them.\n",
    "\n",
    "It won't prevent you from making them in the future but it will make you aware enough to perhaps reduce them and even more important, know how to solve them.\n",
    "\n",
    "> **Note:** All of the following examples have been adapted from [learnpytorch.io](https://learnpytorch.io) which is the book version of the [Zero to Mastery: PyTorch for Deep Learning](https://dbourke.link/ZTMPyTorch) video course.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a1f0b33b-6a43-4c10-8363-e7b8cac6ceef",
   "metadata": {},
   "source": [
    "## 1. Shape errors in PyTorch"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b999de2b-88e1-4b7e-a669-928e58b5148d",
   "metadata": {},
   "source": [
    "### 1.1 Matrix multiplication shape errors\n",
    "\n",
    "PyTorch is one of the best frameworks to build neural network models with.\n",
    "\n",
    "And one of the fundamental operations of a neural network is matrix multiplication.\n",
    "\n",
    "However, matrix multiplication comes with very specific rules.\n",
    "\n",
    "If these rules aren't adhered to, you'll get an infamous shape error.\n",
    "\n",
    "```\n",
    "RuntimeError: mat1 and mat2 shapes cannot be multiplied (3x4 and 3x4)\n",
    "```\n",
    "\n",
    "Let's start with a brief example. \n",
    "\n",
    "> **Note:** Although it's called \"matrix multiplication\" almost every form of data in PyTorch comes in the form of a tensor. Where a tensor is an n-dimensional array (n can be any number). So while I use the terminology \"matrix multiplication\", this extends to \"tensor multiplication\" as well. See [00. PyTorch Fundamentals: Introduction to Tensors](https://www.learnpytorch.io/00_pytorch_fundamentals/#introduction-to-tensors) for more on the difference between matrices and tensors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "9b50eccb-017f-49ac-a274-0cde7d724138",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "PyTorch version: 1.12.1+cu113\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "print(f\"PyTorch version: {torch.__version__}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "c218bcbd-d31d-45b6-8f83-8ccb43a5abf5",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([3, 4])\n",
      "torch.Size([3, 4])\n"
     ]
    }
   ],
   "source": [
    "# Create two tensors\n",
    "tensor_1 = torch.rand(3, 4)\n",
    "tensor_2 = torch.rand(3, 4)\n",
    "\n",
    "# Check the shapes\n",
    "print(tensor_1.shape)\n",
    "print(tensor_2.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "827859d9-d392-4a30-8467-39c8a40268fa",
   "metadata": {},
   "source": [
    "Notice both tensors have the same shape.\n",
    "\n",
    "Let's try to perform a matrix multiplication on them.\n",
    "\n",
    "> **Note:** The matrix multiplication operation is different from a standard multiplication operation. \n",
    ">\n",
    "> With our current tensors, the standard multiplication operation (`*` or [`torch.mul()`](https://pytorch.org/docs/stable/generated/torch.mul.html)) will work where as the matrix multiplication operation (`@` or [`torch.matmul()`](https://pytorch.org/docs/stable/generated/torch.matmul.html)) will error. \n",
    ">\n",
    "> See [00. PyTorch Fundamentals: Matrix Multiplication](https://www.learnpytorch.io/00_pytorch_fundamentals/#matrix-multiplication-is-all-you-need) for a breakdown of what happens in matrix multiplication."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "90589c0c-1038-4016-b4ea-e51f6c24f6e4",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[True, True, True, True],\n",
       "        [True, True, True, True],\n",
       "        [True, True, True, True]])"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Standard multiplication, the following lines perform the same operation (will work)\n",
    "tensor_3 = tensor_1 * tensor_2 # can do standard multiplication with \"*\"\n",
    "tensor_4 = torch.mul(tensor_1, tensor_2) # can also do standard multiplicaton with \"torch.mul()\" \n",
    "\n",
    "# Check for equality \n",
    "tensor_3 == tensor_4"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4d51eacf-87b0-4ea0-becd-feeb95e437ce",
   "metadata": {},
   "source": [
    "Wonderful! Looks like standard multiplication works with our current tensor shapes.\n",
    "\n",
    "Let's try matrix multiplication."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "6f996537-6d21-41fd-bcc5-f1c563df34d5",
   "metadata": {},
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "mat1 and mat2 shapes cannot be multiplied (3x4 and 3x4)",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "Input \u001b[0;32mIn [4]\u001b[0m, in \u001b[0;36m<cell line: 2>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[38;5;66;03m# Try matrix multiplication (won't work)\u001b[39;00m\n\u001b[0;32m----> 2\u001b[0m tensor_5 \u001b[38;5;241m=\u001b[39m \u001b[43mtensor_1\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m@\u001b[39;49m\u001b[43m \u001b[49m\u001b[43mtensor_2\u001b[49m\n",
      "\u001b[0;31mRuntimeError\u001b[0m: mat1 and mat2 shapes cannot be multiplied (3x4 and 3x4)"
     ]
    }
   ],
   "source": [
    "# Try matrix multiplication (won't work)\n",
    "tensor_5 = tensor_1 @ tensor_2 # could also do \"torch.matmul(tensor_1, tensor_2)\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "34574baa-9c23-4e6a-bf45-33ad3da0c7a6",
   "metadata": {},
   "source": [
    "Oh no!\n",
    "\n",
    "We get an error similar to the following:\n",
    "\n",
    "```\n",
    "RuntimeError                              Traceback (most recent call last)\n",
    "<ipython-input-11-2ca2c90dbb42> in <module>\n",
    "      1 # Try matrix multiplication (won't work)\n",
    "----> 2 tensor_5 = tensor_1 @ tensor_2\n",
    "\n",
    "RuntimeError: mat1 and mat2 shapes cannot be multiplied (3x4 and 3x4)\n",
    "```\n",
    "\n",
    "This is a **shape error**, our two tensors (matrices) can't be *matrix* multiplied because their shapes are incompatible.\n",
    "\n",
    "Why?\n",
    "\n",
    "This is because matrix multiplication has specific rules:\n",
    "\n",
    "1. The **inner dimensions** must match:\n",
    "* `(3, 4) @ (3, 4)` won't work \n",
    "* `(4, 3) @ (3, 4)` will work\n",
    "* `(3, 4) @ (4, 3)` will work\n",
    "2. The resulting matrix has the shape of the **outer dimensions**:\n",
    "* `(4, 3) @ (3, 4)` -> `(4, 4)`\n",
    "* `(3, 4) @ (4, 3)` -> `(3, 3)`\n",
    "\n",
    "So how do we fix it?\n",
    "\n",
    "This is where either a *transpose* or a *reshape* comes in.\n",
    "\n",
    "And in the case of neural networks, it's more generally a transpose operation. \n",
    "\n",
    "* **Transpose** - The transpose ([`torch.transpose()`](https://pytorch.org/docs/stable/generated/torch.transpose.html)) operation swaps the dimensions of a given tensor.\n",
    "  * **Note:** You can also use the shortcut of `tensor.T` to perform a transpose.\n",
    "* **Reshape** - The reshape ([`torch.reshape()`](https://pytorch.org/docs/stable/generated/torch.reshape.html)) operation returns a tensor with the same number of original elements but in a different specified shape.\n",
    "\n",
    "Let's see this in action."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "ba5c1b7f-542b-4163-a094-34ec7f0c4976",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Shape of input tensors: torch.Size([4, 3]) and torch.Size([3, 4])\n",
      "Shape of output tensor: torch.Size([4, 4])\n"
     ]
    }
   ],
   "source": [
    "# Perform a transpose on tensor_1 and then perform matrix multiplication \n",
    "tensor_6 = tensor_1.T @ tensor_2\n",
    "print(f\"Shape of input tensors: {tensor_1.T.shape} and {tensor_2.shape}\")\n",
    "print(f\"Shape of output tensor: {tensor_6.shape}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d145eff9-cfcb-4d91-a732-de7c9b662cf7",
   "metadata": {},
   "source": [
    "No errors!\n",
    "\n",
    "See how the input shape of `tensor_1` changed from `(3, 4)` to `(4, 3)` thanks to the transpose (`tensor_1.T`).\n",
    "\n",
    "And because of this, rule 1 of matrix multiplication, **the inner dimensions must match** was satisfied.\n",
    "\n",
    "Finally, the output shape satisfied rule 2 of matrix multiplication, **the resulting matrix has the shape of the outer dimensions**.\n",
    "\n",
    "In our case, `tensor_6` has a shape of `(4, 4)`.\n",
    "\n",
    "Let's do the same operation except now we'll transpose `tensor_2` instead of `tensor_1`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "4533e23b-0e9d-4bce-aa0c-dae1c2879e65",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Shape of input tensors: torch.Size([3, 4]) and torch.Size([4, 3])\n",
      "Shape of output tensor: torch.Size([3, 3])\n"
     ]
    }
   ],
   "source": [
    "# Perform a transpose on tensor_2 and then perform matrix multiplication\n",
    "tensor_7 = tensor_1 @ tensor_2.T\n",
    "print(f\"Shape of input tensors: {tensor_1.shape} and {tensor_2.T.shape}\")\n",
    "print(f\"Shape of output tensor: {tensor_7.shape}\")"
   ]
  },
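  {
   "cell_type": "markdown",
   "id": "3f2a1b8c-7d4e-4f09-9a21-0c5e8d7b6a10",
   "metadata": {},
   "source": [
    "A *reshape* can satisfy the inner dimension rule too. A minimal sketch (the tensor names mirror the ones above, recreated here so the snippet stands alone):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "tensor_1 = torch.rand(3, 4)\n",
    "tensor_2 = torch.rand(3, 4)\n",
    "\n",
    "# Reshape tensor_1 from (3, 4) to (4, 3) so the inner dimensions match\n",
    "tensor_8 = tensor_1.reshape(4, 3) @ tensor_2\n",
    "print(tensor_8.shape) # torch.Size([4, 4])\n",
    "\n",
    "# Note: a reshape reads the same elements back in a different order,\n",
    "# so it's generally NOT equivalent to a transpose\n",
    "print(torch.equal(tensor_1.reshape(4, 3), tensor_1.T))\n",
    "```\n",
    "\n",
    "Because the element arrangement differs, prefer a transpose when the maths calls for one; a reshape only fixes the *shape*, not necessarily the *meaning* of the operation."
   ]
  },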
  {
   "cell_type": "markdown",
   "id": "d87e9014-583a-4db5-ae81-c804c2cfe274",
   "metadata": {},
   "source": [
    "Woohoo!\n",
    "\n",
    "No errors again!\n",
    "\n",
    "See how rule 1 and rule 2 of matrix multiplication were satisfied again.\n",
    "\n",
    "Except this time because we transposed `tensor_2`, the resulting output tensor shape is `(3, 3)`.\n",
    "\n",
    "The good news is most of the time, when you build neural networks with PyTorch, the library takes care of most of the matrix multiplication operations you'll need to perform for you.\n",
    "\n",
    "With that being said, let's build a neural network with PyTorch and see where shape errors might occur."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e2cca226-4ea2-408e-a05d-52f54cabb62c",
   "metadata": {},
   "source": [
    "### 1.2 PyTorch neural network shape errors\n",
    "\n",
    "We've seen how shape errors can occur when working with matrix multiplication (or matrix multiplying tensors).\n",
    "\n",
    "Now let's build a neural network with PyTorch and see where shape errors can occur.\n",
    "\n",
    "A shape error will occur in neural network in any of the following situations:\n",
    "* **Incorrect input shape** - your data is in a certain shape but the model's first layer expects a different shape.\n",
    "* **Incorrect input and output shapes between layers** - one of the layers of your model outputs a certain shape but the following layer expects a different shape as input.\n",
    "* **No batch size dimension in input data when trying to make a prediction** - your model was trained on samples with a batch dimension, so when you try to predict on a single sample *without* a batch dimension, an error occurs.\n",
    "\n",
    "To showcase these shape errors, let's build a simple neural network (the errors are the same regardless of the size of your network) to try and find patterns in the Fashion MNIST dataset (black and white images of 10 different classes of clothing).\n",
    "\n",
    "> **Note:** The following examples focus specifically on shape errors rather than building the *best* neural network. You can see a fully working example of this problem in [03. PyTorch Computer Vision](https://www.learnpytorch.io/03_pytorch_computer_vision/).\n",
    "\n"
   ]
  },
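  {
   "cell_type": "markdown",
   "id": "9c8d7e6f-5a4b-4c3d-8e2f-1a0b9c8d7e6f",
   "metadata": {},
   "source": [
    "For the third situation, the usual fix is to add a batch dimension with [`torch.unsqueeze()`](https://pytorch.org/docs/stable/generated/torch.unsqueeze.html). A minimal sketch with a dummy image-sized tensor (the real dataset gets downloaded below):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "# A single grayscale image-sized tensor: [color_channels, height, width]\n",
    "single_image = torch.rand(1, 28, 28)\n",
    "\n",
    "# Add a batch dimension at the front -> [batch_size, color_channels, height, width]\n",
    "batched_image = single_image.unsqueeze(dim=0)\n",
    "print(batched_image.shape) # torch.Size([1, 1, 28, 28])\n",
    "```"
   ]
  },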
  {
   "cell_type": "markdown",
   "id": "c370b97e-9731-496b-be6a-d2d955dc3bc9",
   "metadata": {},
   "source": [
    "### 1.3 Downloading a dataset\n",
    "\n",
    "To begin, we'll get the Fashion MNIST dataset from [`torchvision.datasets`](https://pytorch.org/vision/stable/generated/torchvision.datasets.FashionMNIST.html). "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "8932b2c6-49b0-4ce5-80ed-c8118eddd07f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz\n",
      "Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "966b1ed47e214e189c79c2d1e1c1f2bb",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/26421880 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw\n",
      "\n",
      "Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz\n",
      "Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw/train-labels-idx1-ubyte.gz\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "a4d501453aba4c43bf5ea3e9a12ffe80",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/29515 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw\n",
      "\n",
      "Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz\n",
      "Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5a57edd30fc643c0baba05f3595fbe93",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/4422102 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw\n",
      "\n",
      "Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz\n",
      "Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f795a425a8d04dd59215f4414997a416",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/5148 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw\n",
      "\n"
     ]
    }
   ],
   "source": [
    "import torchvision\n",
    "from torchvision import datasets, transforms\n",
    "\n",
    "# Setup training data\n",
    "train_data = datasets.FashionMNIST(\n",
    "    root=\"data\", # where to download data to?\n",
    "    train=True, # get training data\n",
    "    download=True, # download data if it doesn't exist on disk\n",
    "    transform=transforms.ToTensor(), # images come as PIL format, we want to turn into Torch tensors\n",
    "    target_transform=None # you can transform labels as well\n",
    ")\n",
    "\n",
    "# Setup testing data\n",
    "test_data = datasets.FashionMNIST(\n",
    "    root=\"data\",\n",
    "    train=False, # get test data\n",
    "    download=True,\n",
    "    transform=transforms.ToTensor()\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8a17253b-aa2b-4ad5-bd52-289da2ac934d",
   "metadata": {},
   "source": [
    "Now let's get some details about the first training sample, the label as well as the class names and number of classes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "ec308d1f-3b54-48c9-9169-3fe25077b651",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Image shape: torch.Size([1, 28, 28]) -> [batch, height, width]\n",
      "Label: 9\n"
     ]
    }
   ],
   "source": [
    "# See first training sample\n",
    "image, label = train_data[0]\n",
    "print(f\"Image shape: {image.shape} -> [batch, height, width]\") \n",
    "print(f\"Label: {label}\") # label is an int rather than a tensor (it has no shape attribute)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "40897bb1-1c7e-434f-822d-7473e03077a2",
   "metadata": {},
   "source": [
    "Our image has a shape of `[1, 28, 28]` or `[batch_size, height, width]`. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "eaf83774-5995-4ddf-8394-d956e25fb85a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(['T-shirt/top',\n",
       "  'Trouser',\n",
       "  'Pullover',\n",
       "  'Dress',\n",
       "  'Coat',\n",
       "  'Sandal',\n",
       "  'Shirt',\n",
       "  'Sneaker',\n",
       "  'Bag',\n",
       "  'Ankle boot'],\n",
       " 10)"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# See class names and number of classes\n",
    "class_names = train_data.classes\n",
    "num_classes = len(class_names)\n",
    "class_names, num_classes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "85a4b016-5a02-46b9-ba4d-3a3fe144190d",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOcAAAD3CAYAAADmIkO7AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAAAsTAAALEwEAmpwYAAAN1ElEQVR4nO3da2ie9RnH8d9lotUcTFvbrE09RK2d3ZhWPLVaxbPWF0OqVmRMS9e9cC/GBoKMyWQ4V6eDCZswmGN90U3YCwXF0xxssJFW7cQ1jPRFW22bRmPPa9ODPfz3Ik9HFnJfV81jliv6/UCg6c//k/t5nv68k1z879tKKQKQzynjfQAARkY5gaQoJ5AU5QSSopxAUpQTSIpyJmRmxcxmf9oseMylZvb3+o8O/y+UcwyZ2V/NbLeZTRrvYxkrZnaDmfWO93F8HlHOMWJmnZKuk1QkfX18jwYTEeUcOw9IWiNppaQHhwZmttLMnjWzV8xsn5m9ZWYXjvQgZrbQzLaa2Y0jZJPM7OdmtsXM+s3s12Z2hnNMZma/NLO9ZrbezG4eEnSY2UtmtsvMNpjZt4d9nWfMrK/28Uzt75olvSapw8z21z46PtWrhEqUc+w8IOn3tY/bzexLw/L7Jf1Y0hRJGyQ9MfwBzOx2Sc9LuruU8pcRvsbPJM2RNE/SbEmzJP3IOaarJW2SNE3SY5JeMLOptex5Sb2SOiTdI+mnQ8r7Q0nza1/nUklXSXq0lDIgaZGkvlJKS+2jz/n6+DRKKXx8xh+SFko6Imla7fP1kr4/JF8p6bkhn98paf2Qz4ukH0jaLOlrwx67aLCIJmlA0oVDsgWS3q84pqWS+iTZkL97W9I3JZ0j6Zik1iHZCkkra3/eKOnOIdntkj6o/fkGSb3j/Zp/Hj84c46NByX9qZSyo/b5HzTsW1tJHw358wFJLcPy70n6Yymlu+JrTJfUJOkfZrbHzPZIer3291W2lVqjajZr8EzZIWlXKWXfsGxW7c8dtc+Hr8MYahzvA/i8qf3Mt0RSg5mdKOAkSZPN7NJSyj9P8qHulfRbM9tWSnlmhHyHpIOSvlpK2XaSjznLzGxIQc+V9JIGz6hTzax1SEHPlXTicfsknSfpX0OyE9++sq1pjHDm/OzdpcFvEb+iwZ/R5kmaK+lvGvw59GT1SbpZ0nfN7DvDw1LKcUm/kfQLM2uXJDObVfs5tUp77fFONbN7a8f1aillq6QuSSvM7HQzu0TStzT487I0+PPoo2Y23cymafDn2lW1rF/SWWbW9imeG04C5fzsPSjpd6WULaWUj058SPqVpG+Y2Ul/t1JK2aLBgj5iZstH+E8e0eAvk9aY2b8l/VnSl52HfEvSRRo86z4h6Z5Sys5adr+kTg3+T+FFSY+VUt6sZT+RtFbSOkndkt6t/Z1KKes1WN5NtW+v+Xb3M2L/+yMIgCw4cwJJUU4gKcoJJEU5gaTc3xyaGb8tAsZYKcVG+nvOnEBSlBNIinICSVFOICnKCSRFOYGkKCeQFOUEkqKcQFKUE0iKcgJJUU4gKcoJJEU5gaQoJ5AU5QSSopxAUpQTSIpyAklRTiApygkkRTmBpLgFYDJmI14l8b/qvbdNa2urmy9cuLAye+211+r62tFza2hoqMyOHj1a19euV3TsntG+Z5w5gaQoJ5AU5QSSopxAUpQTSIpyAklRTiAp5pzJnHKK///LY8eOufns2bPdfPny5W5+8ODBymxgYMBde+jQITd/++233byeWWY0h4xe12h9PcfmzW89nDmBpCgnkBTlBJKinEBSlBNIinICSVFOICnmnMlEM7FoznnTTTe5+S233OLmvb29ldmkSZPctU1NTW5+6623uvlzzz1XmfX397troz2T0esWaWlpqcyOHz/urj1w4MCoviZnTiApygkkRTmBpCgnkBTlBJKinEBSlBNIijlnMp988kld66+88ko37+zsdHNvzhrtiXzjjTfc/LLL
LnPzp556qjJbu3atu7a7u9vNe3p63Pyqq65yc+917erqcteuXr3azatw5gSSopxAUpQTSIpyAklRTiApygkkxShlHHiXYYy2PkXbrq644go337dvn5s3NzdXZnPmzHHXRvk777zj5hs2bKjMvC1bkrRgwQI3X7x4sZsfOXLEzb1jjy43evjwYTevwpkTSIpyAklRTiApygkkRTmBpCgnkBTlBJIyb65mZv7Q7Qsqul1cPaI555o1a9w82hIW8Z5bdBu8ere7ebcQjC4/+e6777q5N0OV4ud2xx13VGYXXHCBu3bWrFluXkoZ8UXnzAkkRTmBpCgnkBTlBJKinEBSlBNIinICSbGfcxSiWeRY2r17t5vPnDnTzQ8ePOjm3m3+Ghv9fy7RnktvjilJZ5xxRmUWzTmvu+46N7/mmmvcPLrsZ3t7e2X2+uuvu2tHizMnkBTlBJKinEBSlBNIinICSVFOICnKCSTFnHOCaWpqcvNoXhflBw4cqMz27t3rrt25c6ebR3tNg73F7troeUWv27Fjx9zcm7Oec8457trR4swJJEU5gaQoJ5AU5QSSopxAUpQTSIpyAkkx5xyFemdu3kwt2hPZ0dHh5tG9IKPc288ZXZfWm5FK0uTJk93cm5NGc8rTTjvNzaP7kra1tbn5unXrKrPoPYvumVqFMyeQFOUEkqKcQFKUE0iKcgJJUU4gKUYpoxBdGrOhocHNvVHKfffd566dMWOGm2/fvt3NvctPSv7WqObmZndttHUqGsV4Y5wjR464a6PLdkbP+6yzznLzZ599tjKbN2+euzY6tiqcOYGkKCeQFOUEkqKcQFKUE0iKcgJJUU4gKQsuRzh+97pLLJpbHT16dNSPffXVV7v5K6+84ubRLf7qmcG2tra6a6Nb/EWXzjz11FNHlUnxDDa6dWLEe25PP/20u3bVqlVuXkoZcQ8iZ04gKcoJJEU5gaQoJ5AU5QSSopxAUpQTSGpM93N6l5CM5m3R5SWjy1N6+/+8PYsno545ZuTVV19184GBATeP5pzRJSS9uXe0VzR6T08//XQ3j/Zs1rM2es+jY7/kkksqs+jWiKPFmRNIinICSVFOICnKCSRFOYGkKCeQFOUEkqprzlnP3sCxnBWOteuvv97N7777bje/9tprK7PoNnrRnshojhntRfXes+jYon8P3nVpJX8OGl0rODq2SPS67d+/vzJbvHixu/bll18e1TFx5gSSopxAUpQTSIpyAklRTiApygkkRTmBpNJet3bq1Klu3tHR4eYXXXTRqNdGc6s5c+a4+eHDh93c26sa7UuM7jPZ19fn5tH1X715X3QPy+j+m01NTW7e1dVVmbW0tLhro9lztJ8z2pPpvW79/f3u2rlz57o5160FJhjKCSRFOYGkKCeQFOUEkqKcQFJ1jVLmz5/vPvjjjz9emU2fPt1dO3nyZDf3tjZJ/valPXv2uGuj7WzRSCAaKXiX9YwubdnT0+PmS5YscfO1a9e6uXebvylTprhrOzs73TyyadOmyiy6/eC+ffvcPNpSFo2ovFHOmWee6a6N/r0wSgEmGMoJJEU5gaQoJ5AU5QSSopxAUpQTSMqdczY2NrpzztWrV7sPPnPmzMosmlNGeT2XQowu4RjNGuvV1tZWmU2bNs1du3TpUje/7bbb3Pyhhx5yc2/L2aFDh9y177//vpt7c0zJ3+ZX73a1aKtcNEf11kfb0c477zw3Z84JTDCUE0iKcgJJUU4gKcoJJEU5gaQoJ5CUO+dctmyZO+d88skn3QffuHFjZRZd6jDKo9vJeaKZlzeHlKStW7e6eXR5Sm8vq3fZTEmaMWOGm991111u7t1mT/L3ZEbvyeWXX15X7j33aI4ZvW7RLf4i3h7c6N9TtO95y5YtzDmBiYRyAklRTiApygkkRTmBpCgnkBTlBJJq9MKPP/7YXRzN+7w9ctFt8qLHjmZu3lwrus7orl273Hzz5s1uHh2bt1802jMZXVP3xRdfdPPu7m43
9+ac0W0Zo1lkdL1g7/aH0fOO9lRGs8hovTfnjGao0S0jq3DmBJKinEBSlBNIinICSVFOICnKCSTljlK2bdvmLva2m0lSb29vZdbc3OyujS4RGf1afseOHZXZ9u3b3bWNje7LEm5Xi35t723bii7RGG2N8p63JM2dO9fNBwYGKrNovLV79243j14379i9MYsUj1qi9dEtAL2tenv37nXXzps3z82rcOYEkqKcQFKUE0iKcgJJUU4gKcoJJEU5gaTcgd57773nLn7hhRfcfNmyZZVZdPnI6HZx0dYqb9tWNIeMZl7RFqHoFoPedrno1ofRbDm6NeKHH3446sePji2aD9fzntW7Ha2e7WqSP0c9//zz3bX9/f1uXoUzJ5AU5QSSopxAUpQTSIpyAklRTiApygkk5d4C0Mz8oVpg0aJFldnDDz/srm1vb3fzaN+iN9eK5nXRnDKac0bzPu/xvUswSvGcM5rhRrn33KK10bFHvPWjnRWeEL1n0aUxvf2c69atc9cuWbLEzUsp3AIQmEgoJ5AU5QSSopxAUpQTSIpyAklRTiApd87Z0NDgDtWi2VA9brzxRjdfsWKFm3tz0ra2NndtdG3YaA4azTmjOasnui1jNAeNrkXsvaf79+9310avS8Q79mi/ZbSPNXpP33zzTTfv6empzLq6uty1EeacwARDOYGkKCeQFOUEkqKcQFKUE0iKcgJJjel+zqwuvvhiN6/33qBnn322m3/wwQeVWTTP27hxo5tj4mHOCUwwlBNIinICSVFOICnKCSRFOYGkvpCjFCATRinABEM5gaQoJ5AU5QSSopxAUpQTSIpyAklRTiApygkkRTmBpCgnkBTlBJKinEBSlBNIinICSbn7OQGMH86cQFKUE0iKcgJJUU4gKcoJJEU5gaT+A7Hp/CVxPzn3AAAAAElFTkSuQmCC",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Plot a sample\n",
    "import matplotlib.pyplot as plt\n",
    "plt.imshow(image.squeeze(), cmap=\"gray\") # plot image as grayscale\n",
    "plt.axis(False)\n",
    "plt.title(class_names[label]);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f8d8f1db-d8c6-4d2f-9cac-2a19594ad3bb",
   "metadata": {},
   "source": [
    "### 1.4 Building a series of neural networks with different shape errors\n",
    "\n",
    "Our problem is: build a neural network capable of finding patterns in grayscale images of clothing.\n",
    "\n",
    "This statement could go very deep since \"what neural network is the best?\" is one of the main research problems in machine learning as a whole.\n",
    "\n",
    "But let's start as simple as possible to showcase different error types.\n",
    "\n",
    "We'll build several two layer neural networks with PyTorch each to showcase a different error:\n",
    "\n",
    "| **Model number** | **Layers** | **Error showcase** | \n",
    "| ----- | ----- | ----- |\n",
    "| 0 | 2 x `nn.Linear()` with 10 hidden units | Incorrect input shape |\n",
    "| 1 | Same as model 1 + 1 x `nn.Flatten()` | Incorrect input shape (still) |\n",
    "| 2 | 1 x `nn.Flatten()`, 1 x `nn.Linear()` with correct input shape and 1 x `nn.Linear()` with 10 hidden units | None (input shape is correct) |\n",
    "| 3 | Same as model 2 but with different shapes between `nn.Linear()` layers | Incorrect shapes between layers |\n",
    "| 4 | Same as model 3 but with last layer replaced with `nn.LazyLinear()` | None (shows how `nn.LazyX()` layers can infer correct shape) |\n",
    "| 5 | Same as model 4 but with all `nn.Linear()` replaced with `nn.LazyLinear()` | None (shows how `nn.LazyX()` layers can infer correct shape)  | "
   ]
  },
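  {
   "cell_type": "markdown",
   "id": "1b2c3d4e-6f70-4a8b-9c0d-2e3f4a5b6c7d",
   "metadata": {},
   "source": [
    "As a preview of what a *correct* input shape looks like (in the style of model 2 from the table above, with illustrative hyperparameter choices), a minimal sketch:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "# nn.Flatten() turns [1, 28, 28] into [1, 784],\n",
    "# so in_features=28*28 lines up with the flattened image\n",
    "model = nn.Sequential(\n",
    "    nn.Flatten(),\n",
    "    nn.Linear(in_features=28*28, out_features=10),\n",
    "    nn.Linear(in_features=10, out_features=10)\n",
    ")\n",
    "\n",
    "dummy_image = torch.rand(1, 28, 28)\n",
    "print(model(dummy_image).shape) # torch.Size([1, 10])\n",
    "```"
   ]
  },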
  {
   "cell_type": "markdown",
   "id": "16a1bf8f-95be-4f80-8701-11a34e7dee3e",
   "metadata": {},
   "source": [
    "### 1.5 Incorrect input layer shapes \n",
    "\n",
    "We'll start with a two layer network with [`nn.Linear()`](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) layers with 10 hidden units in each. \n",
    "\n",
    "> **Note:** See [01. PyTorch Workflow section 6: Putting it all together](https://www.learnpytorch.io/01_pytorch_workflow/#6-putting-it-all-together) for what happens inside `nn.Linear()`.\n",
    "\n",
    "And then we'll pass our `image` through it and see what happens."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "2aaf7f7a-6100-4d09-9d05-e35d985a773c",
   "metadata": {},
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "mat1 and mat2 shapes cannot be multiplied (28x28 and 10x10)",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "Input \u001b[0;32mIn [11]\u001b[0m, in \u001b[0;36m<cell line: 10>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      4\u001b[0m model_0 \u001b[38;5;241m=\u001b[39m nn\u001b[38;5;241m.\u001b[39mSequential(\n\u001b[1;32m      5\u001b[0m     nn\u001b[38;5;241m.\u001b[39mLinear(in_features\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m10\u001b[39m, out_features\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m10\u001b[39m),\n\u001b[1;32m      6\u001b[0m     nn\u001b[38;5;241m.\u001b[39mLinear(in_features\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m10\u001b[39m, out_features\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m10\u001b[39m)\n\u001b[1;32m      7\u001b[0m )\n\u001b[1;32m      9\u001b[0m \u001b[38;5;66;03m# Pass the image through the model (this will error)\u001b[39;00m\n\u001b[0;32m---> 10\u001b[0m \u001b[43mmodel_0\u001b[49m\u001b[43m(\u001b[49m\u001b[43mimage\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/container.py:139\u001b[0m, in \u001b[0;36mSequential.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    137\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m):\n\u001b[1;32m    138\u001b[0m     \u001b[38;5;28;01mfor\u001b[39;00m module \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mself\u001b[39m:\n\u001b[0;32m--> 139\u001b[0m         \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mmodule\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m    140\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28minput\u001b[39m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/linear.py:114\u001b[0m, in \u001b[0;36mLinear.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    113\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m: Tensor) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Tensor:\n\u001b[0;32m--> 114\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mF\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mlinear\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbias\u001b[49m\u001b[43m)\u001b[49m\n",
      "\u001b[0;31mRuntimeError\u001b[0m: mat1 and mat2 shapes cannot be multiplied (28x28 and 10x10)"
     ]
    }
   ],
   "source": [
    "from torch import nn\n",
    "\n",
    "# Create a two layer neural network\n",
    "model_0 = nn.Sequential(\n",
    "    nn.Linear(in_features=10, out_features=10),\n",
    "    nn.Linear(in_features=10, out_features=10)\n",
    ")\n",
    "\n",
    "# Pass the image through the model (this will error)\n",
    "model_0(image)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7e39356d-1e12-40c2-9003-03f268078ed2",
   "metadata": {},
   "source": [
    "Running the above code we get another shape error! \n",
    "\n",
    "Something similar to:\n",
    "\n",
    "```\n",
    "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input)\n",
    "    112 \n",
    "    113     def forward(self, input: Tensor) -> Tensor:\n",
    "--> 114         return F.linear(input, self.weight, self.bias)\n",
    "    115 \n",
    "    116     def extra_repr(self) -> str:\n",
    "\n",
    "RuntimeError: mat1 and mat2 shapes cannot be multiplied (28x28 and 10x10)\n",
    "```\n",
    "\n",
    "The key is in the final line `RuntimeError: mat1 and mat2 shapes cannot be multiplied (28x28 and 10x10)`.\n",
    "\n",
    "This is telling us there's something wrong with our data shapes.\n",
    "\n",
    "Because behind the scenes, `nn.Linear()` is attempting to do a matrix multiplication.\n",
    "\n",
    "How do we fix this?\n",
    "\n",
    "There are several different options depending on what kind of layer(s) you're using.\n",
    "\n",
    "But since we're using `nn.Linear()` layers, let's focus on that.\n",
    "\n",
    "`nn.Linear()` likes to accept data as a single-dimension vector .\n",
    "\n",
    "For example, instead of an input `image` shape of `[1, 28, 28]`, it would prefer `[1, 784]` (`784 = 28*28`).\n",
    "\n",
    "In other words, it likes all of the information to be *flattened* into a single dimension.\n",
    "\n",
    "We can achieve this flattening using PyTorch's [`nn.Flatten()`](https://pytorch.org/docs/stable/generated/torch.nn.Flatten.html). \n",
    "\n",
    "Let's see it happen."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "3cd2b1eb-a355-4d87-b895-10a6f3f6ae75",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Before flatten shape: torch.Size([1, 28, 28]) -> [batch, height, width]\n",
      "After flatten shape: torch.Size([1, 784]) -> [batch, height*width]\n"
     ]
    }
   ],
   "source": [
    "# Create a flatten layer\n",
    "flatten = nn.Flatten()\n",
    "\n",
    "# Pass the image through the flatten layer\n",
    "flattened_image = flatten(image)\n",
    "\n",
    "# Print out the image shape before and after \n",
    "print(f\"Before flatten shape: {image.shape} -> [batch, height, width]\")\n",
    "print(f\"After flatten shape: {flattened_image.shape} -> [batch, height*width]\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fd85d38f-5016-4314-85b1-83e79786fc71",
   "metadata": {},
   "source": [
    "Wonderful, image data flattened! \n",
    "\n",
    "Now let's try adding the `nn.Flatten()` layer to our existing model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "54446b26-438e-45b8-96bd-bd3967d35902",
   "metadata": {},
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "mat1 and mat2 shapes cannot be multiplied (1x784 and 10x10)",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "Input \u001b[0;32mIn [13]\u001b[0m, in \u001b[0;36m<cell line: 9>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      2\u001b[0m model_1 \u001b[38;5;241m=\u001b[39m nn\u001b[38;5;241m.\u001b[39mSequential(\n\u001b[1;32m      3\u001b[0m     nn\u001b[38;5;241m.\u001b[39mFlatten(), \u001b[38;5;66;03m# <-- NEW: add nn.Flatten() layer\u001b[39;00m\n\u001b[1;32m      4\u001b[0m     nn\u001b[38;5;241m.\u001b[39mLinear(in_features\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m10\u001b[39m, out_features\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m10\u001b[39m),\n\u001b[1;32m      5\u001b[0m     nn\u001b[38;5;241m.\u001b[39mLinear(in_features\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m10\u001b[39m, out_features\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m10\u001b[39m)\n\u001b[1;32m      6\u001b[0m )\n\u001b[1;32m      8\u001b[0m \u001b[38;5;66;03m# Pass the image through the model\u001b[39;00m\n\u001b[0;32m----> 9\u001b[0m \u001b[43mmodel_1\u001b[49m\u001b[43m(\u001b[49m\u001b[43mimage\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/container.py:139\u001b[0m, in \u001b[0;36mSequential.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    137\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m):\n\u001b[1;32m    138\u001b[0m     \u001b[38;5;28;01mfor\u001b[39;00m module \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mself\u001b[39m:\n\u001b[0;32m--> 139\u001b[0m         \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mmodule\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m    140\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28minput\u001b[39m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/linear.py:114\u001b[0m, in \u001b[0;36mLinear.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    113\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m: Tensor) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Tensor:\n\u001b[0;32m--> 114\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mF\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mlinear\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbias\u001b[49m\u001b[43m)\u001b[49m\n",
      "\u001b[0;31mRuntimeError\u001b[0m: mat1 and mat2 shapes cannot be multiplied (1x784 and 10x10)"
     ]
    }
   ],
   "source": [
    "# Replicate model_0 except add a nn.Flatten() layer to begin with \n",
    "model_1 = nn.Sequential(\n",
    "    nn.Flatten(), # <-- NEW: add nn.Flatten() layer\n",
    "    nn.Linear(in_features=10, out_features=10),\n",
    "    nn.Linear(in_features=10, out_features=10)\n",
    ")\n",
    "\n",
    "# Pass the image through the model\n",
    "model_1(image)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a592ac09-b222-4601-bd46-3cb093f81318",
   "metadata": {},
   "source": [
    "Oh no!\n",
    "\n",
    "Another error...\n",
    "\n",
    "Something like:\n",
    "\n",
    "```\n",
    "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input)\n",
    "    112 \n",
    "    113     def forward(self, input: Tensor) -> Tensor:\n",
    "--> 114         return F.linear(input, self.weight, self.bias)\n",
    "    115 \n",
    "    116     def extra_repr(self) -> str:\n",
    "\n",
    "RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x784 and 10x10)\n",
    "```\n",
    "\n",
    "Again, the key information is in the bottom line. \n",
    "\n",
    "`RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x784 and 10x10)`\n",
    "\n",
    "Hmm, we know the `(1x784)` must be coming from our input data (`image`) since we flattened it from `(1, 28, 28)` -> `(1, 784)`.\n",
    "\n",
    "How about the `(10x10)`?\n",
    "\n",
    "These values come from the parameters we set in our `nn.Linear()` layers, `in_features=10` and `out_features=10` or `nn.Linear(in_features=10, out_features=10)`.\n",
    "\n",
    "What was the first rule of matrix multiplication again?\n",
    "\n",
    "1. The **inner dimensions** must match.\n",
    "\n",
    "Right!\n",
    "\n",
    "So what happens if we change `in_features=10` to `in_features=784` in the first layer?\n",
    "\n",
    "Let's find out!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "a241a720-8d9a-4308-8ba2-b2b7c92b6240",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-0.2045,  0.2677, -0.0713, -0.3096, -0.0586,  0.3153, -0.3413,  0.2031,\n",
       "          0.4421,  0.1715]], grad_fn=<AddmmBackward0>)"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Flatten the input as well as make sure the first layer can accept the flattened input shape\n",
    "model_2 = nn.Sequential(\n",
    "    nn.Flatten(),\n",
    "    nn.Linear(in_features=784, out_features=10), # <-- NEW: change in_features=10 to in_features=784\n",
    "    nn.Linear(in_features=10, out_features=10)\n",
    ")\n",
    "\n",
    "# Pass the image through the model\n",
    "model_2(image)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d9c7bea3-cd05-4c34-9ea0-594429cdf20d",
   "metadata": {},
   "source": [
    "It worked!\n",
    "\n",
    "We got an output from our model!\n",
    "\n",
    "The output might not mean much for now but at least we know all of the shapes line up and data can flow all through our model.\n",
    "\n",
    "The `nn.Flatten()` layer turned our input image from `(1, 28, 28)` to `(1, 784)` and our first `nn.Linear(in_features=784, out_features=10)` layer could accept it as input."
   ]
  },
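  {
   "cell_type": "markdown",
   "id": "f0a1b2c3-d4e5-4f67-89ab-0123456789ab",
   "metadata": {},
   "source": [
    "The shape rule behind these errors can be sketched in plain Python (a framework-free illustration; `can_matmul` is a made-up helper for this sketch, not a PyTorch API):\n",
    "\n",
    "```python\n",
    "def can_matmul(shape_a, shape_b):\n",
    "    # Rule 1 of matrix multiplication: the inner dimensions must match,\n",
    "    # i.e. (m, n) @ (n, p) -> (m, p)\n",
    "    return shape_a[1] == shape_b[0]\n",
    "\n",
    "# In the multiply, nn.Linear(in_features=784, out_features=10) behaves like a (784, 10) matrix\n",
    "print(can_matmul((1, 784), (784, 10))) # True, inner dimensions match\n",
    "print(can_matmul((1, 784), (10, 10)))  # False, this was our original error\n",
    "```"
   ]
  },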
  {
   "cell_type": "markdown",
   "id": "3161e428-22ce-476d-b19f-5867c77e137e",
   "metadata": {},
   "source": [
    "### 1.6 Incorrect hidden layer input and output shapes\n",
    "\n",
    "What happens if our input layer(s) had the correct shapes but there was a mismatch between the interconnected layer(s)?\n",
    "\n",
    "As in, our first `nn.Linear()` had `out_features=10` but the next `nn.Linear()` had `in_features=5`.\n",
    "\n",
    "This is an example of **incorrect input and output shapes between layers**. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "ae5ea9d8-9f7f-4341-b2db-a23649460400",
   "metadata": {},
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "mat1 and mat2 shapes cannot be multiplied (1x10 and 5x10)",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "Input \u001b[0;32mIn [15]\u001b[0m, in \u001b[0;36m<cell line: 9>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      2\u001b[0m model_3 \u001b[38;5;241m=\u001b[39m nn\u001b[38;5;241m.\u001b[39mSequential(\n\u001b[1;32m      3\u001b[0m     nn\u001b[38;5;241m.\u001b[39mFlatten(),\n\u001b[1;32m      4\u001b[0m     nn\u001b[38;5;241m.\u001b[39mLinear(in_features\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m784\u001b[39m, out_features\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m10\u001b[39m), \u001b[38;5;66;03m# out_features=10 \u001b[39;00m\n\u001b[1;32m      5\u001b[0m     nn\u001b[38;5;241m.\u001b[39mLinear(in_features\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m5\u001b[39m, out_features\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m10\u001b[39m) \u001b[38;5;66;03m# <-- NEW: in_features does not match the out_features of the previous layer\u001b[39;00m\n\u001b[1;32m      6\u001b[0m )\n\u001b[1;32m      8\u001b[0m \u001b[38;5;66;03m# Pass the image through the model (this will error)\u001b[39;00m\n\u001b[0;32m----> 9\u001b[0m \u001b[43mmodel_3\u001b[49m\u001b[43m(\u001b[49m\u001b[43mimage\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/container.py:139\u001b[0m, in \u001b[0;36mSequential.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    137\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m):\n\u001b[1;32m    138\u001b[0m     \u001b[38;5;28;01mfor\u001b[39;00m module \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mself\u001b[39m:\n\u001b[0;32m--> 139\u001b[0m         \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mmodule\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m    140\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28minput\u001b[39m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/linear.py:114\u001b[0m, in \u001b[0;36mLinear.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    113\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m: Tensor) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Tensor:\n\u001b[0;32m--> 114\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mF\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mlinear\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbias\u001b[49m\u001b[43m)\u001b[49m\n",
      "\u001b[0;31mRuntimeError\u001b[0m: mat1 and mat2 shapes cannot be multiplied (1x10 and 5x10)"
     ]
    }
   ],
   "source": [
    "# Create a model with incorrect input and output shapes between layers\n",
    "model_3 = nn.Sequential(\n",
    "    nn.Flatten(),\n",
    "    nn.Linear(in_features=784, out_features=10), # out_features=10 \n",
    "    nn.Linear(in_features=5, out_features=10) # <-- NEW: in_features does not match the out_features of the previous layer\n",
    ")\n",
    "\n",
    "# Pass the image through the model (this will error)\n",
    "model_3(image)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e5bb6f61-02ba-4065-b6eb-c920c198816c",
   "metadata": {},
   "source": [
    "Running the model above we get the following error:\n",
    "\n",
    "```\n",
    "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input)\n",
    "    112 \n",
    "    113     def forward(self, input: Tensor) -> Tensor:\n",
    "--> 114         return F.linear(input, self.weight, self.bias)\n",
    "    115 \n",
    "    116     def extra_repr(self) -> str:\n",
    "\n",
    "RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x10 and 5x10)\n",
    "```\n",
    "\n",
    "Once again, we've broken rule 1 of matrix multiplication, the **inner dimensions** must match.\n",
    "\n",
    "Our first `nn.Linear()` layer outputs a shape of `(1, 10)` but our second `nn.Linear()` layer is expecting a shape of `(1, 5)`.\n",
    "\n",
    "How could we fix this?\n",
    "\n",
    "Well, we could set `in_features=10` for the second `nn.Linear()` manually, or we could try one of the newer features of PyTorch, \"lazy\" layers.\n"
   ]
  },
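  {
   "cell_type": "markdown",
   "id": "a1b2c3d4-e5f6-4a7b-8c9d-0a1b2c3d4e5f",
   "metadata": {},
   "source": [
    "Before trying the lazy option, the manual check can be sketched in plain Python (`validate_linear_chain` is a hypothetical helper for illustration, not part of PyTorch):\n",
    "\n",
    "```python\n",
    "def validate_linear_chain(layer_shapes):\n",
    "    \"\"\"Given a list of (in_features, out_features) pairs, return the index\n",
    "    of the first layer whose in_features doesn't match the previous layer's\n",
    "    out_features, or None if all the shapes line up.\"\"\"\n",
    "    for i in range(1, len(layer_shapes)):\n",
    "        if layer_shapes[i][0] != layer_shapes[i - 1][1]:\n",
    "            return i\n",
    "    return None\n",
    "\n",
    "print(validate_linear_chain([(784, 10), (5, 10)]))  # 1, model_3's second layer mismatches\n",
    "print(validate_linear_chain([(784, 10), (10, 10)])) # None, all shapes line up\n",
    "```"
   ]
  },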
  {
   "cell_type": "markdown",
   "id": "41f41b4e-dd31-44f2-a22f-960d8a0d51a2",
   "metadata": {},
   "source": [
    "### 1.7 PyTorch lazy layers (automatically inferring the input shape)\n",
    "\n",
    "Lazy layers in PyTorch often come in the form of `nn.LazyX` where `X` is an existing non-lazy form of the layer.\n",
    "\n",
    "For example, the lazy equivalent of `nn.Linear()` is [`nn.LazyLinear()`](https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html).\n",
    "\n",
    "The main feature of a `Lazy` layer is to *infer* what the `in_features` or input shape from the previous layer should be.\n",
    "\n",
    "> **Note:** As of November 2022, `Lazy` layers in PyTorch are still experimental and subject to change, however their usage shouldn't differ too dramatically from what's below.\n",
    "\n",
    "For example, if the previous layer has `out_features=10`, the subsequent `Lazy` layer should infer that `in_features=10`.\n",
    "\n",
    "Let's test it out. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "e615051b-93ac-4dd9-a480-8e24619aeaa3",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/lazy.py:178: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment.\n",
      "  warnings.warn('Lazy modules are a new feature under heavy development '\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.4282,  0.2492, -0.2045, -0.4943, -0.1639,  0.1166,  0.3828, -0.1283,\n",
       "         -0.1771, -0.2277]], grad_fn=<AddmmBackward0>)"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Try nn.LazyLinear() as the second layer\n",
    "model_4 = nn.Sequential(\n",
    "    nn.Flatten(),\n",
    "    nn.Linear(in_features=784, out_features=10),\n",
    "    nn.LazyLinear(out_features=10) # <-- NEW: no in_features parameter as this is inferred from the previous layer's output\n",
    ")\n",
    "\n",
    "# Pass the image through the model\n",
    "model_4(image)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a7d421bf-e58d-4a5d-b1db-19616dfe9f65",
   "metadata": {},
   "source": [
    "It works (though there may be a warning depending on the version of PyTorch you're using, if so, don't worry, it's just to say the `Lazy` layers are still in development)!\n",
    "\n",
    "How about we try replacing all the `nn.Linear()` layers with `nn.LazyLinear()` layers?\n",
    "\n",
    "Then we'll only have to set the `out_features` values for each."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "2230de11-f765-495b-8e9f-0c680d9902f6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.1375, -0.2175, -0.1054,  0.1424, -0.1406, -0.1180, -0.0896, -0.4285,\n",
       "         -0.0077, -0.3188]], grad_fn=<AddmmBackward0>)"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Replace all nn.Linear() layers with nn.LazyLinear()\n",
    "model_5 = nn.Sequential(\n",
    "    nn.Flatten(),\n",
    "    nn.LazyLinear(out_features=10),\n",
    "    nn.LazyLinear(out_features=10) # <-- NEW \n",
    ")\n",
    "\n",
    "# Pass the image through the model\n",
    "model_5(image)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3e1cdcbc-0f57-4371-b753-655f36bbea3a",
   "metadata": {},
   "source": [
    "Nice!\n",
    "\n",
    "It worked again, our image was able to flow through the network without any issues.\n",
    "\n",
    "> **Note:** The above examples only deal with one type of layer in PyTorch, `nn.Linear()`, however, the principles of lining up input and output shapes with each layer is a constant throughout all neural networks and different types of data. \n",
    ">\n",
    "> Layers like [`nn.Conv2d()`](https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html), used in convolutional neural networks (CNNs) can even accept inputs without the use of `nn.Flatten()`. You can see more on this in [03. PyTorch Computer Vision section 7: Building a CNN](https://www.learnpytorch.io/03_pytorch_computer_vision/#7-model-2-building-a-convolutional-neural-network-cnn)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "14bb1210-564f-404b-b5d3-a149972cc22b",
   "metadata": {},
   "source": [
    "## 2. Device errors in PyTorch\n",
    "\n",
    "One of the main benefits of PyTorch is the in-built ability for doing computations on a GPU (graphics processing unit).\n",
    "\n",
    "GPUs can often perform operations, specifically matrix multiplications (which make up the most of neural networks) much faster than CPUs (central processing units).\n",
    "\n",
    "If you're using vanilla PyTorch (no other external libraries), PyTorch requires you to explicitly set which device you're computing on.\n",
    "\n",
    "For example, to send your model to a target `device`, you would use the [`to()`](https://pytorch.org/docs/stable/generated/torch.Tensor.to.html) method, such as `model.to(device)`.\n",
    "\n",
    "And similarly for data `some_dataset.to(device)`.\n",
    "\n",
    "**Device errors** occur when your model/data are on different devices.\n",
    "\n",
    "Such as when you've sent your model to the target GPU device but your data is still on the CPU.\n"
   ]
  },
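  {
   "cell_type": "markdown",
   "id": "c9d8e7f6-a5b4-4c3d-9e2f-1a0b9c8d7e6f",
   "metadata": {},
   "source": [
    "As a minimal sketch of the pattern (on a CPU-only machine, `device` will simply be `\"cpu\"` and everything still works):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "# Use the GPU if one is available, otherwise fall back to the CPU\n",
    "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
    "\n",
    "# Module.to() moves the model's parameters in place\n",
    "model = nn.Linear(in_features=4, out_features=2).to(device)\n",
    "\n",
    "# Tensor.to() returns a copy on the target device, so reassign it\n",
    "data = torch.rand(1, 4).to(device)\n",
    "\n",
    "# Model and data are now on the same device, so the forward pass works\n",
    "output = model(data)\n",
    "print(next(model.parameters()).device, data.device)\n",
    "```"
   ]
  },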
  {
   "cell_type": "markdown",
   "id": "ab9ade20-6f1b-4fc6-944d-553f2112834d",
   "metadata": {},
   "source": [
    "### 2.1 Setting the target device \n",
    "\n",
    "Let's set our current device to `\"cuda\"` if it's available.\n",
    "\n",
    "> **Note:** See [00. PyTorch Fundamentals: Running Tensors on GPUs](https://www.learnpytorch.io/00_pytorch_fundamentals/#running-tensors-on-gpus-and-making-faster-computations) for more information about how to get access to a GPU and set it up with PyTorch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "aff03038-c2f0-4e51-a46e-9d1e8fbde893",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Current device: cuda\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "\n",
    "# Set device to \"cuda\" if it's available otherwise default to \"cpu\"\n",
    "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
    "print(f\"Current device: {device}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "494deffc-46a7-49e1-8a52-a73636982162",
   "metadata": {},
   "source": [
    "Now let's create a model with the same layers as `model_5`.\n",
    "\n",
    "In PyTorch, models and tensors are created on the CPU by default.\n",
    "\n",
    "We can test this by checking the `device` attribute of the model we create."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "cff50c25-5e81-43d6-816d-8816c58cbb02",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model is on device: cpu\n"
     ]
    }
   ],
   "source": [
    "from torch import nn\n",
    "\n",
    "# Create a model (similar to model_5 above)\n",
    "model_6 = nn.Sequential(\n",
    "    nn.Flatten(),\n",
    "    nn.LazyLinear(out_features=10), \n",
    "    nn.LazyLinear(out_features=10)\n",
    ")\n",
    "\n",
    "# All models and tensors are created on the CPU by default (unless explicitly set otherwise)\n",
    "print(f\"Model is on device: {next(model_6.parameters()).device}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "17c0e5f9-fcc1-4abf-a81d-496aa2f8b5cc",
   "metadata": {},
   "source": [
    "### 2.2 Preparing data for modelling\n",
    "\n",
     "To prepare our data for modelling, let's create some PyTorch `DataLoader`s.\n",
    "\n",
    "To make things quicker, we'll use an instance of [`torch.utils.data.RandomSampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.RandomSampler) to randomly select 10% of the training and testing samples (we're not interested in the best performing model as much as we are in showcasing potential errors).\n",
    "\n",
     "We'll also set up a loss function of `torch.nn.CrossEntropyLoss()` and an optimizer of `torch.optim.SGD(lr=0.01)`.\n",
    "\n",
    "> **Note:** For more information on preparing data, loss functions and optimizers for training a PyTorch model, see [01. PyTorch Workflow Fundamentals section 3: Training a model](https://www.learnpytorch.io/01_pytorch_workflow/#3-train-model)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "b3873224-cf4e-4ec9-aa2d-713166ace3cb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Number of random training samples selected: 6000/60000\n",
      "Number of random testing samples selected: 1000/10000\n",
      "Number of batches in train_dataloader: 188 batches of size 32\n",
       "Number of batches in test_dataloader: 32 batches of size 32\n"
     ]
    }
   ],
   "source": [
    "from torch.utils.data import DataLoader, RandomSampler\n",
    "\n",
    "# Only sample 10% of the data\n",
    "train_sampler = RandomSampler(train_data, \n",
    "                              num_samples=int(0.1*len(train_data)))\n",
    "\n",
    "test_sampler = RandomSampler(test_data, \n",
    "                             num_samples=int(0.1*len(test_data)))\n",
    "\n",
    "print(f\"Number of random training samples selected: {len(train_sampler)}/{len(train_data)}\")\n",
    "print(f\"Number of random testing samples selected: {len(test_sampler)}/{len(test_data)}\")\n",
    "\n",
    "# Create DataLoaders and turn data into batches\n",
    "BATCH_SIZE = 32\n",
    "train_dataloader = DataLoader(dataset=train_data,\n",
    "                              batch_size=BATCH_SIZE,\n",
    "                              sampler=train_sampler)\n",
    "\n",
    "test_dataloader = DataLoader(dataset=test_data,\n",
    "                             batch_size=BATCH_SIZE,\n",
    "                             sampler=test_sampler)\n",
    "\n",
    "print(f\"Number of batches in train_dataloader: {len(train_dataloader)} batches of size {BATCH_SIZE}\")\n",
     "print(f\"Number of batches in test_dataloader: {len(test_dataloader)} batches of size {BATCH_SIZE}\")\n",
    "\n",
    "# Create loss function\n",
    "loss_fn = nn.CrossEntropyLoss()\n",
    "\n",
    "# Create optimizer\n",
    "optimizer = torch.optim.SGD(lr=0.01, \n",
    "                            params=model_6.parameters())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "23d9fb5e-0ff8-4869-ba04-b62bab59f02c",
   "metadata": {},
   "source": [
    "### 2.3 Training a model on the CPU\n",
    "\n",
    "Data ready, model ready, let's train!\n",
    "\n",
     "We'll use a standard PyTorch training loop to train `model_6` for five epochs on 10% of the data.\n",
    "\n",
     "Don't worry too much about how low the loss gets here, we're more focused on making sure there aren't any errors than on achieving the lowest possible loss.\n",
    "\n",
    "<img src=\"https://github.com/mrdbourke/pytorch-deep-learning/raw/main/images/01-pytorch-training-loop-annotated.png\" alt=\"annotated pytorch training loop steps\" width=750/>\n",
    "\n",
    "> **Note:** For more information on the steps in a PyTorch training loop, see [01. PyTorch Workflow section 3: PyTorch training loop](https://www.learnpytorch.io/01_pytorch_workflow/#pytorch-training-loop)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "03d6f92e-d4aa-40bf-b3ed-c97a98c72e3d",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "28ec173850c84277a11540fc3cd0350f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/5 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 0 | Training loss: 334.65\n",
      "Epoch: 1 | Training loss: 215.44\n",
      "Epoch: 2 | Training loss: 171.15\n",
      "Epoch: 3 | Training loss: 154.72\n",
      "Epoch: 4 | Training loss: 142.22\n"
     ]
    }
   ],
   "source": [
    "from tqdm.auto import tqdm\n",
    "\n",
    "# Set the number of epochs\n",
    "epochs = 5\n",
    "\n",
    "# Train the model\n",
    "for epoch in tqdm(range(epochs)):\n",
    "\n",
    "    # Set loss to 0 every epoch\n",
    "    train_loss = 0\n",
    "\n",
    "    # Get images (X) and labels (y)\n",
    "    for X, y in train_dataloader:\n",
    "\n",
    "        # Forward pass\n",
    "        y_pred = model_6(X)\n",
    "\n",
    "        # Calculate the loss\n",
    "        loss = loss_fn(y_pred, y)\n",
     "        train_loss += loss.item() # .item() returns a Python number, so the running total doesn't track gradients\n",
    "\n",
    "        # Optimizer zero grad\n",
    "        optimizer.zero_grad()\n",
    "\n",
    "        # Loss backward\n",
    "        loss.backward()\n",
    "\n",
    "        # Optimizer step\n",
    "        optimizer.step()\n",
    "  \n",
    "    # Print loss in the epoch loop only\n",
    "    print(f\"Epoch: {epoch} | Training loss: {train_loss:.2f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "884d3079-a360-44fb-a955-05f7f74edaec",
   "metadata": {},
   "source": [
    "Nice! Looks like our training loop is working!\n",
    "\n",
    "Our model's loss is going down (the lower the loss the better)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c5f79817-30ff-4f06-a201-954acba34abb",
   "metadata": {},
   "source": [
    "### 2.4 Attempting to train a model on the GPU (with errors)\n",
    "\n",
    "Now let's send our `model_6` to the target `device` (in our case, this is a `\"cuda\"` GPU)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "e4ce89bb-2624-4427-9de1-7570154bb8ee",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model is on device: cuda:0\n"
     ]
    }
   ],
   "source": [
    "# Send model_6 to the target device (\"cuda\")\n",
    "model_6.to(device)\n",
    "\n",
    "# Print out what device the model is on\n",
    "print(f\"Model is on device: {next(model_6.parameters()).device}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e75c9434-9a9a-4089-98fd-32196036504b",
   "metadata": {},
   "source": [
     "Our `model_6` is now on the `\"cuda:0\"` device (the `0` is the index of the device, in case there is more than one GPU).\n",
    "\n",
    "Now let's run the same training loop code as above and see what happens.\n",
    "\n",
    "Can you guess?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "a7552567-8d35-4570-8dd9-f984d02cd2e8",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "75606965bc224a6a8b223db95612e24e",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/5 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "ename": "RuntimeError",
     "evalue": "Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_addmm)",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "Input \u001b[0;32mIn [24]\u001b[0m, in \u001b[0;36m<cell line: 7>\u001b[0;34m()\u001b[0m\n\u001b[1;32m     12\u001b[0m \u001b[38;5;66;03m# Get images (X) and labels (y)\u001b[39;00m\n\u001b[1;32m     13\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m X, y \u001b[38;5;129;01min\u001b[39;00m train_dataloader:\n\u001b[1;32m     14\u001b[0m \n\u001b[1;32m     15\u001b[0m   \u001b[38;5;66;03m# Forward pass\u001b[39;00m\n\u001b[0;32m---> 16\u001b[0m   y_pred \u001b[38;5;241m=\u001b[39m \u001b[43mmodel_6\u001b[49m\u001b[43m(\u001b[49m\u001b[43mX\u001b[49m\u001b[43m)\u001b[49m \u001b[38;5;66;03m# model is on GPU, data is on CPU (will error)\u001b[39;00m\n\u001b[1;32m     18\u001b[0m   \u001b[38;5;66;03m# Calculate the loss\u001b[39;00m\n\u001b[1;32m     19\u001b[0m   loss \u001b[38;5;241m=\u001b[39m loss_fn(y_pred, y)\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/container.py:139\u001b[0m, in \u001b[0;36mSequential.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    137\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m):\n\u001b[1;32m    138\u001b[0m     \u001b[38;5;28;01mfor\u001b[39;00m module \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mself\u001b[39m:\n\u001b[0;32m--> 139\u001b[0m         \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mmodule\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m    140\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28minput\u001b[39m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/linear.py:114\u001b[0m, in \u001b[0;36mLinear.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    113\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m: Tensor) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Tensor:\n\u001b[0;32m--> 114\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mF\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mlinear\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbias\u001b[49m\u001b[43m)\u001b[49m\n",
      "\u001b[0;31mRuntimeError\u001b[0m: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_addmm)"
     ]
    }
   ],
   "source": [
    "from tqdm.auto import tqdm\n",
    "\n",
    "# Set the number of epochs\n",
    "epochs = 5\n",
    "\n",
    "# Train the model\n",
    "for epoch in tqdm(range(epochs)):\n",
    "\n",
    "  # Set loss to 0 every epoch\n",
    "  train_loss = 0\n",
    "\n",
    "  # Get images (X) and labels (y)\n",
    "  for X, y in train_dataloader:\n",
    "\n",
    "    # Forward pass\n",
    "    y_pred = model_6(X) # model is on GPU, data is on CPU (will error)\n",
    "\n",
    "    # Calculate the loss\n",
    "    loss = loss_fn(y_pred, y)\n",
     "    train_loss += loss.item() # .item() returns a Python number, so the running total doesn't track gradients\n",
    "    \n",
    "    # Optimizer zero grad\n",
    "    optimizer.zero_grad()\n",
    "\n",
    "    # Loss backward\n",
    "    loss.backward()\n",
    "\n",
    "    # Optimizer step\n",
    "    optimizer.step()\n",
    "  \n",
    "  # Print loss in the epoch loop only\n",
    "  print(f\"Epoch: {epoch} | Training loss: {train_loss:.2f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c6dde9a2-8c87-44d2-9d2c-3257589e37c3",
   "metadata": {},
   "source": [
    "Whoops!\n",
    "\n",
    "Looks like we got a device error:\n",
    "\n",
    "```\n",
    "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input)\n",
    "    112 \n",
    "    113     def forward(self, input: Tensor) -> Tensor:\n",
    "--> 114         return F.linear(input, self.weight, self.bias)\n",
    "    115 \n",
    "    116     def extra_repr(self) -> str:\n",
    "\n",
    "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_addmm)\n",
    "```\n",
    "\n",
    "We can see the error states `Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!`.\n",
    "\n",
    "In essence, our model is on the `cuda:0` device but our data tensors (`X` and `y`) are still on the `cpu` device.\n",
    "\n",
    "But **PyTorch expects *all* tensors to be on the same device**."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c8c50ba5-d2ae-4b1b-98f2-5b8c9b0d366a",
   "metadata": {},
   "source": [
    "### 2.5 Training a model on the GPU (without errors) \n",
    "\n",
    "Let's fix this error by sending our data tensors (`X` and `y`) to the target `device` as well.\n",
    "\n",
    "We can do so using `X.to(device)` and `y.to(device)`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "505b45f9-abd3-4abe-bd34-17052312dee5",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "3f94f47483b84c0ca344c1c3545a9d04",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/5 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 0 | Training loss: 134.76\n",
      "Epoch: 1 | Training loss: 127.76\n",
      "Epoch: 2 | Training loss: 120.85\n",
      "Epoch: 3 | Training loss: 120.50\n",
      "Epoch: 4 | Training loss: 116.29\n"
     ]
    }
   ],
   "source": [
    "# Send the model to the target device (we don't need to do this again but we will for completeness)\n",
    "model_6.to(device)\n",
    "\n",
    "# Set the number of epochs\n",
    "epochs = 5\n",
    "\n",
    "# Train the model\n",
    "for epoch in tqdm(range(epochs)):\n",
    "\n",
    "  # Set loss to 0 every epoch\n",
    "  train_loss = 0\n",
    "\n",
    "  # Get images (X) and labels (y)\n",
    "  for X, y in train_dataloader:\n",
    "\n",
     "    # Put data on the target device <-- NEW: this line fixes the device error\n",
     "    X, y = X.to(device), y.to(device)\n",
    "\n",
    "    # Forward pass\n",
    "    y_pred = model_6(X)\n",
    "\n",
    "    # Calculate the loss\n",
    "    loss = loss_fn(y_pred, y)\n",
     "    train_loss += loss.item() # .item() returns a Python number, so the running total doesn't track gradients\n",
    "    \n",
    "    # Optimizer zero grad\n",
    "    optimizer.zero_grad()\n",
    "\n",
    "    # Loss backward\n",
    "    loss.backward()\n",
    "\n",
    "    # Optimizer step\n",
    "    optimizer.step()\n",
    "  \n",
    "  # Print loss in the epoch loop only\n",
    "  print(f\"Epoch: {epoch} | Training loss: {train_loss:.2f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4866ae09-785a-4ddb-b2f3-a9e224bf7929",
   "metadata": {},
   "source": [
    "Excellent!\n",
    "\n",
    "Our training loop completes just as before because now both our model *and* data tensors are on the same device.\n",
    "\n",
    "> **Note:** Libraries like [HuggingFace Accelerate](https://github.com/huggingface/accelerate) are a fantastic way to train your PyTorch models with minimal explicit device setting (they discover the best device to use and set things up for you). \n",
    "> \n",
    "> You could also write functions to ensure your training code happens all on the same device, see [05. PyTorch Going Modular section 4: Creating training functions](https://www.learnpytorch.io/05_pytorch_going_modular/#4-creating-train_step-and-test_step-functions-and-train-to-combine-them) for more."
   ]
  },
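  {
   "cell_type": "markdown",
   "id": "a9b8c7d6-3e4f-4c5d-8a9b-2c3d4e5f6a7b",
   "metadata": {},
   "source": [
    "As a rough sketch of that idea (the `train_step` name and signature here are our own, not something from PyTorch itself), a function like this keeps the device handling in one place:\n",
    "\n",
    "```python\n",
    "def train_step(model, dataloader, loss_fn, optimizer, device):\n",
    "    # Make sure the model is on the target device\n",
    "    model.to(device)\n",
    "    train_loss = 0\n",
    "    for X, y in dataloader:\n",
    "        # Data goes to the same device as the model every batch\n",
    "        X, y = X.to(device), y.to(device)\n",
    "        y_pred = model(X)\n",
    "        loss = loss_fn(y_pred, y)\n",
    "        train_loss += loss.item()\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "    return train_loss\n",
    "```\n",
    "\n",
    "Because the function takes `device` as a parameter, the same code runs unchanged on the CPU or a GPU."
   ]
  },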
  {
   "cell_type": "markdown",
   "id": "098b8ac3-192e-4299-8545-a2fd389c36a0",
   "metadata": {},
   "source": [
    "### 2.6 Device errors when making predictions\n",
    "\n",
    "We've seen device errors whilst training but the same error can occur during testing or inference (making predictions).\n",
    "\n",
    "The whole idea of training a model on some data is to use it to make predictions on *unseen* data.\n",
    "\n",
    "Let's take our trained `model_6` and use it to make a prediction on a sample from the test dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "38082d32-4d43-43e9-974e-f00f4f2724fa",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Test image shape: torch.Size([28, 28])\n",
      "Test image label: 9\n"
     ]
    }
   ],
   "source": [
    "# Get a single sample from the test dataset\n",
    "test_image, test_label = test_data.data[0], test_data.targets[0]\n",
    "print(f\"Test image shape: {test_image.shape}\")\n",
    "print(f\"Test image label: {test_label}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "e64a5ebb-058b-4b3f-ba0f-ccf041a580c6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOcAAAD3CAYAAADmIkO7AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAAAsTAAALEwEAmpwYAAALV0lEQVR4nO3db2iV9xnG8euu2qjRqo3GNmFqqZq64OyLsmit0NHSsL4YQ+tAxmrFDcpejA0KZaysDNYVYbDSbTDYxnyxrbAXE4QtYw5W2BitbNJZBoG6TquxWjTRGP+0/vntxTnCWchz39GT09y67wcOJLn6nPPkxKtPcm5+v2OlFAHI547pPgEAE6OcQFKUE0iKcgJJUU4gKcoJJEU5EzKzYmYrbzQL7vMZM/tr82eHjwvlbCEze93MRsysbbrPpVXM7FEzOzbd53E7opwtYmYrJG2SVCR9bnrPBrciytk6T0t6Q9JuSdsbAzPbbWY/NrPfmdk5M3vTzO6f6E7M7BEzO2pmn5kgazOz75vZe2Z20sx+YmZznHMyM/uhmZ01s0Eze6wh6DKzvWY2bGaHzOwr4x7nFTM7Xr+9Uv9au6QBSV1mNla/dd3Qs4RKlLN1npb0q/qt38yWjsu3SfqOpEWSDkl6afwdmFm/pNckbSml/HmCx9glabWkByWtlNQt6dvOOfVJelfSYkkvSvqtmd1dz16TdExSl6SnJH2vobzfkrS+/jjrJH1a0gullPOSPivpeCllXv123Hl83IhSCrcpvkl6RNJlSYvrnw9K+kZDvlvSzxo+f1LSYMPnRdI3JR2RtHbcfRfVimiSzku6vyHbIOk/Fef0jKTjkqzha/slfUnSJyRdlTS/IXtZ0u76x/+W9GRD1i/pcP3jRyUdm+7n/Ha8ceVsje2S/lhKOVX//Nca96utpBMNH1+QNG9c/nVJvymlvF3xGEskzZX0DzM7Y2ZnJP2h/vUqQ6XeqLojql0puyQNl1LOjcu66x931T8ffxxaaOZ0n8Dtpv433xckzTCz6wVsk7TQzNaVUv45ybvaKunnZjZUSnllgvyUpIuSekspQ5O8z24zs4aCLpO0V7Ur6t1mNr+hoMskXb/f45KWS/pXQ3b911eWNbUIV86p93nVfkX8pGp/oz0oaY2kv6j2d+hkHZf0mKSvmdlXx4ellGuSfirpB2bWKUlm1l3/O7VKZ/3+ZpnZ1vp5/b6UclTS3yS9bGazzexTknaq9veyVPt79AUzW2Jmi1X7u/aX9eykpA4zW3AD3xsmgXJOve2SflFKea+UcuL6TdKPJH3RzCb920op5T3VCvq8mX15gv/kedVeTHrDzEYl/UlSj3OXb0papdpV9yVJT5VSTtezbZJWqPY/hT2SXiyl7Ktn35X0d0kHJb0t6UD9ayqlDKpW3nfrv17z6+4Usf/9EwRAFlw5gaQoJ5AU5QSSopxAUu4rh2bGq0VAi5VSbKKvc+UEkqKcQFKUE0iKcgJJUU4gKcoJJEU5gaQoJ5AU5QSSopxAUpQTSIpyAklRTiApygkkRTmBpCgnkBTlBJKinEBSlBNIinICSVFOICnKCSRFOYGkKCeQFOUEkqKcQFKUE0iKcgJJUU4gKcoJJEU5gaQoJ5AU5QSSopxAUpQTSIpyAklRTiApygkkRTmBpCgnkBTlBJKinEBSlBNIinICSVFOICnKCSRFOYGkKCeQFOUEkqKcQFKUE0iKcgJJUU4gqZnTfQL4/zFjxgw3v3btWmVWSmnqsdva2tz8ww8/dPOVK1dWZocOHbqpc4pw5QSSopxAUpQTSIpyAklRTiApygkkRTmBpJhz3mLMrKncmyVKUnd3d2W2YcMG99iBgQE3P3/+vJu3UjTHjGzZsqUy27VrV1P3XYUrJ5AU5QSSopxAUpQTSIpyAklRTiApygkkxZzzNhPNMSObNm2qzPr6+txju7q63PzVV1+9qXOaCp2dnW7e39/v5qOjo1N5
OpPClRNIinICSVFOICnKCSRFOYGkKCeQFOUEkmLOeYuJ9n69cuWKmz/00ENuvmbNmsrs5MmT7rGrVq1y8z179rj58PBwZTZnzhz32CNHjrh5R0eHm991111ufuzYMTdvBa6cQFKUE0iKcgJJUU4gKcoJJEU5gaQoJ5AUc85k7rjD//9lNMdsb293861bt7q5t7/r7Nmz3WPnz5/v5tGeut73Hh3b29vr5kePHnXzkZERN5858+OvCldOICnKCSRFOYGkKCeQFOUEkqKcQFK37SjFe+m9lOIeG40zouOj3Fv2dfXqVffYyLPPPuvmJ06ccPNLly5VZitWrHCPjUYt0ZIz73mJtvyM3l7wo48+cvNoyVhbW1tlFo2vbvatD7lyAklRTiApygkkRTmBpCgnkBTlBJKinEBSaeec0RKhZmeNnmbfRi/avrKZWea2bdvc/J577nHzAwcOuPmsWbMqs4ULF7rHnj592s29rS8lafHixZVZtBwtes4j0Wx77ty5lVm0Jehbb711M6fElRPIinICSVFOICnKCSRFOYGkKCeQFOUEkko752xmTin5c6tophXNIaNza2aOuWPHDjfv6elx82gLSG+WKPnz5eht+IaGhtw8mlV68+ULFy64x0ZrSZudm3v6+/vdnDkncJuhnEBSlBNIinICSVFOICnKCSRFOYGkWjrnjOaJnmjuFM2tvJlZs+s1I11dXW6+efPmyiyaJb7zzjtuPm/ePDf39l+VpI6Ojsos2vs1+pl5ayIj0ezYe+vCyRwf7S3r/ZvZuHGje+zN4soJJEU5gaQoJ5AU5QSSopxAUpQTSIpyAkm5c85m919t5TyxmfV3S5YscfPly5e7+QMPPODm9957r5t788LR0VH32Gjv2Oh9Jr19aSV/Dhr9PKPnLXrsM2fOVGaXL192j43OLZq5X7x40c29Lpw7d849tre3182rcOUEkqKcQFKUE0iKcgJJUU4gKcoJJOWOUprZ4lGSli5dWplFL7u3t7c3lXtLr+677z732GhpU/Sy/tjYmJt7L+svWLDAPTZaUnblyhU3j743bwvKaFnWnXfe6ebvv/++m3vfe3TeIyMjbh4tpVu0aJGbe0vKordd9JbhebhyAklRTiApygkkRTmBpCgnkBTlBJKinEBSTW2N+fjjj7u5t0VkNCvs7Ox082gJkLeEKHrsaAlQNDOL5l7etp7R1pXRPC96XqJz95ZGRdtHRs/b2bNn3Tz6mTcjet6iJWfefDma70az5ypcOYGkKCeQFOUEkqKcQFKUE0iKcgJJUU4gKXfO+cQTT7gH79y5080HBwcrs2htX7RFZLRtp7f9ZHRsJJrnRXMvb51stLVl9NaH0XrPaJ7nbV8ZzW+99btSvEWk99jN/syiGW20XvTSpUs3fd8ffPCBm1fhygkkRTmBpCgnkBTlBJKinEBSlBNIinICSblzzv3797sHr1+/3s3Xrl1bmW3cuNE9NhKtkfNmkcPDw+6xUR6tS4zmnN6sMtrjtKenx82jeV00R/XeWnHdunXusQcPHnTzw4cPu7m3Pjha59rMW0JK8b+noaGhyiyayUdraKtw5QSSopxAUpQTSIpyAklRTiApygkkZd5L0GbW3OvTjujl5b6+PjdfvXq1mz/88MOVWbQFYzRuiN5+MFrW5T3n0ZKuaMzjLdOTpH379rn5wMBAZeYtm5oKe/furcyWLVvmHnvq1Ck3j5b5Rbk3aoneGvG5555z87GxsQn/wXDlBJKinEBSlBNIinICSVFOICnKCSRFOYGkpm3OCaCmlMKcE7iVUE4gKcoJJEU5gaQoJ5AU5QSSopxAUpQTSIpyAklRTiApygkkRTmBpCgnkBTlBJKinEBSlBNIinICSVFOICnKCSRFOYGkKCeQFOUEkqKcQFKUE0iKcgJJUU4gKcoJJEU5gaQoJ5AU5QSSopxAUpQTSIpyAklRTiApygkkRTmBpCgnkBTlBJKinEBSlBNIinICSVFOICnK
CSRFOYGkKCeQFOUEkrJSynSfA4AJcOUEkqKcQFKUE0iKcgJJUU4gKcoJJPVfrufSsFzAW4UAAAAASUVORK5CYII=",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Plot test image\n",
    "import matplotlib.pyplot as plt\n",
    "plt.imshow(test_image, cmap=\"gray\")\n",
    "plt.axis(False)\n",
    "plt.title(class_names[test_label]);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "90f98c15-6b4a-457a-a7c9-bfe62692c97c",
   "metadata": {},
   "source": [
    "Looking good!\n",
    "\n",
    "Now let's try to make a prediction on it by passing it to our `model_6`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "4f77c721-c9f5-4f65-bacb-899a06e72148",
   "metadata": {},
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_addmm)",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "Input \u001b[0;32mIn [28]\u001b[0m, in \u001b[0;36m<cell line: 2>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[38;5;66;03m# Pass the test image through model_6 to make a prediction\u001b[39;00m\n\u001b[0;32m----> 2\u001b[0m \u001b[43mmodel_6\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtest_image\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/container.py:139\u001b[0m, in \u001b[0;36mSequential.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    137\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m):\n\u001b[1;32m    138\u001b[0m     \u001b[38;5;28;01mfor\u001b[39;00m module \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mself\u001b[39m:\n\u001b[0;32m--> 139\u001b[0m         \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mmodule\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m    140\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28minput\u001b[39m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/linear.py:114\u001b[0m, in \u001b[0;36mLinear.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    113\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m: Tensor) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Tensor:\n\u001b[0;32m--> 114\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mF\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mlinear\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbias\u001b[49m\u001b[43m)\u001b[49m\n",
      "\u001b[0;31mRuntimeError\u001b[0m: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_addmm)"
     ]
    }
   ],
   "source": [
    "# Pass the test image through model_6 to make a prediction\n",
    "model_6(test_image)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "79aeb66d-e84d-4b95-b1ed-052a023a50d5",
   "metadata": {},
   "source": [
     "Damn!\n",
    "\n",
    "We get another device error.\n",
    "\n",
    "```\n",
    "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input)\n",
    "    112 \n",
    "    113     def forward(self, input: Tensor) -> Tensor:\n",
    "--> 114         return F.linear(input, self.weight, self.bias)\n",
    "    115 \n",
    "    116     def extra_repr(self) -> str:\n",
    "\n",
    "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_addmm)\n",
    "```\n",
    "\n",
    "This is because our `model_6` is on the GPU (`\"cuda\"`), however, our `test_image` is on the CPU (in PyTorch, all tensors are on the CPU by default).\n",
    "\n",
    "Let's send the `test_image` to the target `device` and then try the prediction again."
   ]
  },
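  {
   "cell_type": "markdown",
   "id": "a7c31f2e-6b8d-4c5a-9e1f-2d3b4a5c6d7e",
   "metadata": {},
   "source": [
    "Before we do, note that we can always check which device an object lives on by inspecting the `device` attribute of a tensor and of a model's parameters. Here's a minimal sketch (with a stand-in model and tensor, not our `model_6` and `test_image`):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "# Stand-in model and tensor (for illustration only)\n",
    "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
    "model = nn.Linear(in_features=10, out_features=2).to(device)\n",
    "x = torch.rand(1, 10)  # tensors are created on the CPU by default\n",
    "\n",
    "print(next(model.parameters()).device)  # device the model's weights live on\n",
    "print(x.device)  # cpu\n",
    "\n",
    "x = x.to(device)  # move the data to the same device as the model\n",
    "print(model(x).shape)  # the forward pass now works\n",
    "```"
   ]
  },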
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "b1a3d722-24e5-4df0-907d-4553050fd2f9",
   "metadata": {},
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "mat1 and mat2 shapes cannot be multiplied (28x28 and 784x10)",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "Input \u001b[0;32mIn [30]\u001b[0m, in \u001b[0;36m<cell line: 2>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[38;5;66;03m# Send test_image to target device\u001b[39;00m\n\u001b[0;32m----> 2\u001b[0m \u001b[43mmodel_6\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtest_image\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mto\u001b[49m\u001b[43m(\u001b[49m\u001b[43mdevice\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/container.py:139\u001b[0m, in \u001b[0;36mSequential.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    137\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m):\n\u001b[1;32m    138\u001b[0m     \u001b[38;5;28;01mfor\u001b[39;00m module \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mself\u001b[39m:\n\u001b[0;32m--> 139\u001b[0m         \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mmodule\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m    140\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28minput\u001b[39m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/linear.py:114\u001b[0m, in \u001b[0;36mLinear.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    113\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m: Tensor) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Tensor:\n\u001b[0;32m--> 114\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mF\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mlinear\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbias\u001b[49m\u001b[43m)\u001b[49m\n",
      "\u001b[0;31mRuntimeError\u001b[0m: mat1 and mat2 shapes cannot be multiplied (28x28 and 784x10)"
     ]
    }
   ],
   "source": [
    "# Send test_image to target device\n",
    "model_6(test_image.to(device))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1ccfecaf-d7b2-49ba-a98f-016d4078bf36",
   "metadata": {},
   "source": [
    "Oh no! Another error...\n",
    "\n",
    "```\n",
    "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input)\n",
    "    112 \n",
    "    113     def forward(self, input: Tensor) -> Tensor:\n",
    "--> 114         return F.linear(input, self.weight, self.bias)\n",
    "    115 \n",
    "    116     def extra_repr(self) -> str:\n",
    "\n",
    "RuntimeError: mat1 and mat2 shapes cannot be multiplied (28x28 and 784x10)\n",
    "```\n",
    "\n",
    "This time it's a shape error. \n",
    "\n",
    "We've seen these before.\n",
    "\n",
    "What's going on with our `test_image` shape?\n",
    "\n",
    "Perhaps it's because our model was trained on images that had a batch dimension?\n",
    "\n",
    "And our current `test_image` doesn't have a batch dimension?\n",
    "\n",
    "Here's another helpful rule of thumb to remember: **trained models like to predict on data in the same format and shape that they were trained on**.\n",
    "\n",
    "This means if our model was trained on images with a batch dimension, it'll expect to predict on images with a batch dimension, even if that batch dimension is only 1 (a single sample).\n",
    "\n",
    "And if our model was trained on data in the format `torch.float32` (or another format), it'll like to predict on data in that same format (we'll see this later on).\n",
    "\n",
    "We can add a single batch dimension to our `test_image` using the [`torch.unsqueeze()`](https://pytorch.org/docs/stable/generated/torch.unsqueeze.html) method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "e8dc1ba7-9156-44ee-b6b3-35771106a1a5",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Original input data shape: torch.Size([28, 28]) -> [height, width]\n",
      "Updated input data shape (with added batch dimension): torch.Size([1, 28, 28]) -> [batch, height, width]\n"
     ]
    }
   ],
   "source": [
    "# Changing the input size to be the same as what the model was trained on\n",
    "original_input_shape = test_image.shape\n",
    "updated_input_shape = test_image.unsqueeze(dim=0).shape # adding a batch dimension on the \"0th\" dimension\n",
    "\n",
    "# Print out shapes of original tensor and updated tensor\n",
    "print(f\"Original input data shape: {original_input_shape} -> [height, width]\")\n",
    "print(f\"Updated input data shape (with added batch dimension): {updated_input_shape} -> [batch, height, width]\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "745679ec-251e-4cb3-9732-d9954dbe0e2c",
   "metadata": {},
   "source": [
    "Nice!\n",
    "\n",
    "We've found a way to add a batch dimension to our `test_image`.\n",
    "\n",
    "Let's try to make a prediction on it again."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "3a25cef5-e0db-44dd-bbc7-17ff104ab846",
   "metadata": {},
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "expected scalar type Float but found Byte",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "Input \u001b[0;32mIn [32]\u001b[0m, in \u001b[0;36m<cell line: 2>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[38;5;66;03m# Make prediction on test image with additional batch size dimension and with it on the target device\u001b[39;00m\n\u001b[0;32m----> 2\u001b[0m \u001b[43mmodel_6\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtest_image\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43munsqueeze\u001b[49m\u001b[43m(\u001b[49m\u001b[43mdim\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m)\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mto\u001b[49m\u001b[43m(\u001b[49m\u001b[43mdevice\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/container.py:139\u001b[0m, in \u001b[0;36mSequential.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    137\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m):\n\u001b[1;32m    138\u001b[0m     \u001b[38;5;28;01mfor\u001b[39;00m module \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mself\u001b[39m:\n\u001b[0;32m--> 139\u001b[0m         \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mmodule\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m    140\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28minput\u001b[39m\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m   1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m   1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[1;32m   1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m   1129\u001b[0m         \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m   1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
      "File \u001b[0;32m~/code/pytorch/env/lib/python3.8/site-packages/torch/nn/modules/linear.py:114\u001b[0m, in \u001b[0;36mLinear.forward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m    113\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m: Tensor) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Tensor:\n\u001b[0;32m--> 114\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mF\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mlinear\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbias\u001b[49m\u001b[43m)\u001b[49m\n",
      "\u001b[0;31mRuntimeError\u001b[0m: expected scalar type Float but found Byte"
     ]
    }
   ],
   "source": [
    "# Make prediction on test image with additional batch size dimension and with it on the target device\n",
    "model_6(test_image.unsqueeze(dim=0).to(device))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "49a30c81-4118-471e-8672-7897257e8709",
   "metadata": {},
   "source": [
    "What?\n",
    "\n",
    "Another error!\n",
    "\n",
    "This time it's a datatype error:\n",
    "\n",
    "```\n",
    "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input)\n",
    "    112 \n",
    "    113     def forward(self, input: Tensor) -> Tensor:\n",
    "--> 114         return F.linear(input, self.weight, self.bias)\n",
    "    115 \n",
    "    116     def extra_repr(self) -> str:\n",
    "\n",
    "RuntimeError: expected scalar type Float but found Byte\n",
    "```\n",
    "\n",
    "We've stumbled upon the third of the most common errors in PyTorch: datatype errors.\n",
    "\n",
    "Let's figure out how to fix them in the next section."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4053b033-862b-4c67-b2e4-dc8de4d0fd6e",
   "metadata": {},
   "source": [
    "## 3. Datatype errors in PyTorch\n",
    "\n",
    "Recall the rule of thumb: **trained models like to predict on data that's in the same shape and format that they were trained on**.\n",
    "\n",
    "It looks like our model expects a `Float` datatype but our `test_image` is in the `Byte` datatype (`torch.uint8`).\n",
    "\n",
    "We can tell this by the last line in the previous error:\n",
    "\n",
    "```\n",
    "RuntimeError: expected scalar type Float but found Byte\n",
    "```\n",
    "\n",
    "Why is this?\n",
    "\n",
    "It's because our `model_6` was trained on data samples in the format of `Float`, specifically, `torch.float32`.\n",
    "\n",
    "How do we know this?\n",
    "\n",
    "Well, `torch.float32` is the default datatype for floating-point tensors in PyTorch, unless explicitly set otherwise.\n",
    "\n",
    "But let's do a check to make sure."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7fa086a9-d8ef-4438-9e86-512cfd86dd6e",
   "metadata": {},
   "source": [
    "### 3.1 Checking the datatype of the data the model was trained on \n",
    "\n",
    "We can check the datatype of the data our model was trained on by looking at the `dtype` attribute of a sample from our `train_dataloader`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "f9145327-5d47-48fa-a6c1-5763896bed90",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Datatype of training data: torch.float32\n"
     ]
    }
   ],
   "source": [
    "# Get a single sample from the train_dataloader and print the dtype\n",
    "train_image_batch, train_label_batch = next(iter(train_dataloader))\n",
    "train_image_single, train_label_single = train_image_batch[0], train_label_batch[0]\n",
    "\n",
    "# Print the datatype of the train_image_single\n",
    "print(f\"Datatype of training data: {train_image_single.dtype}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0d881ad2-8aeb-4742-8823-8dbaab7f4e18",
   "metadata": {},
   "source": [
    "There we go! We confirmed our training data samples are in `torch.float32`.\n",
    "\n",
    "So it makes sense that our `model_6` wants to predict on this datatype.\n",
    "\n",
    "But how did our training data get into that datatype?\n",
    "\n",
    "It happened back in section 1.3 when we downloaded the Fashion MNIST dataset and set the `transform` parameter to [`torchvision.transforms.ToTensor()`](https://pytorch.org/vision/stable/generated/torchvision.transforms.ToTensor.html).\n",
    "\n",
    "This `transform` converts a PIL Image or NumPy array into a `torch.Tensor` with the *default* datatype `torch.float32` (and scales `uint8` pixel values into the range `[0.0, 1.0]`).\n",
    "\n",
    "So another rule of thumb: **when making predictions, whatever transforms you performed on the training data, you should also perform on the testing data**."
   ]
  },
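  {
   "cell_type": "markdown",
   "id": "b8d42a3f-7c9e-4d6b-8f2a-3e4c5b6d7e8f",
   "metadata": {},
   "source": [
    "We can see this conversion in isolation with a small sketch (using a made-up `uint8` array standing in for a Fashion MNIST image, rather than a real sample):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from torchvision import transforms\n",
    "\n",
    "# Made-up uint8 \"image\" standing in for a Fashion MNIST sample\n",
    "fake_image = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)\n",
    "\n",
    "tensor_image = transforms.ToTensor()(fake_image)\n",
    "print(tensor_image.dtype)  # torch.float32\n",
    "print(tensor_image.min(), tensor_image.max())  # uint8 values scaled into [0.0, 1.0]\n",
    "```"
   ]
  },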
  {
   "cell_type": "markdown",
   "id": "4f8d2a74-4c28-4c9a-a9ad-05cc81c71a14",
   "metadata": {},
   "source": [
    "### 3.2 Changing the datatype of a tensor\n",
    "\n",
    "In our case, we could create a standalone transform for our test data. But we can also change the datatype of a target tensor directly with `tensor.type(some_type_here)`, for example, `tensor_1.type(torch.float32)`.\n",
    "\n",
    "Let's try it out."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "9e031504-824a-4ea1-b159-dbf54846af26",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Original datatype: torch.uint8\n",
      "Changing the datatype: torch.float32\n"
     ]
    }
   ],
   "source": [
    "# Print out the original datatype of test_image\n",
    "print(f\"Original datatype: {test_image.unsqueeze(dim=0).dtype}\")\n",
    "\n",
    "# Change the datatype of test_image and see the change\n",
    "print(f\"Changing the datatype: {test_image.unsqueeze(dim=0).type(torch.float32).dtype}\")"
   ]
  },
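  {
   "cell_type": "markdown",
   "id": "c9e53b4a-8daf-4e7c-9a3b-4f5d6c7e8f9a",
   "metadata": {},
   "source": [
    "Note that the cell above didn't change `test_image` itself: both `tensor.type()` and `tensor.to()` return a *new* tensor with the requested datatype, leaving the original unchanged. A standalone sketch with a made-up tensor:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "t = torch.arange(5, dtype=torch.uint8)  # made-up tensor for illustration\n",
    "print(t.type(torch.float32).dtype)  # torch.float32\n",
    "print(t.to(torch.float32).dtype)  # torch.float32 (.to() accepts a dtype as well as a device)\n",
    "print(t.dtype)  # still torch.uint8, the original is unchanged\n",
    "```"
   ]
  },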
  {
   "cell_type": "markdown",
   "id": "d2a01303-8a2f-40cf-8a39-077acc4063cd",
   "metadata": {},
   "source": [
    "### 3.3 Making predictions on a test image and making sure it's in the right format\n",
    "\n",
    "Alright, it looks like we've got all of the pieces of the puzzle ready (shape, device and datatype), so let's try to make a prediction!\n",
    "\n",
    "> **Note:** Remember a model likes to make predictions on data in the same (or similar) format to what it was trained on (shape, device and datatype)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "c2db9993-0148-4881-9f68-fcea63946c13",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ -963.8352, -1658.8182,  -735.9952, -1285.2964,  -550.3845,   949.4190,\n",
       "          -538.1960,  1123.0616,   552.7371,  1413.8110]], device='cuda:0',\n",
       "       grad_fn=<AddmmBackward0>)"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Make a prediction with model_6 on the transformed test_image\n",
    "pred_on_gpu = model_6(test_image.unsqueeze(dim=0) # add a batch dimension\n",
    "                      .type(torch.float32) # convert the datatype to torch.float32\n",
    "                      .to(device)) # send the tensor to the target device\n",
    "pred_on_gpu"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ef441646-1d24-4fde-9fa5-0b54ce9de2fe",
   "metadata": {},
   "source": [
    "Woohoo!!!\n",
    "\n",
    "A fair few steps, but our `model_6` successfully makes a prediction on `test_image`.\n",
    "\n",
    "Since `test_image` is on the CPU by default, we could also put the model back on the CPU using the [`.cpu()` method](https://pytorch.org/docs/stable/generated/torch.Tensor.cpu.html) and make the same prediction on the CPU device instead of the GPU device."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "e98bd335-43ca-4c76-ac66-f3ffccf79dc1",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ -963.8351, -1658.8182,  -735.9953, -1285.2964,  -550.3845,   949.4189,\n",
       "          -538.1960,  1123.0615,   552.7371,  1413.8110]],\n",
       "       grad_fn=<AddmmBackward0>)"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Put model back on CPU\n",
    "model_6.cpu()\n",
    "\n",
    "# Make a prediction on the CPU device (no need to put test_image on the CPU as it's already there)\n",
    "pred_on_cpu = model_6(test_image.unsqueeze(dim=0) # add a batch dimension\n",
    "                      .type(torch.float32)) # convert the datatype to torch.float32 \n",
    "pred_on_cpu"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d2269862-f9d9-4985-9483-1821132b8605",
   "metadata": {},
   "source": [
    "And again the prediction works!\n",
    "\n",
    "Is it correct?\n",
    "\n",
    "We can check by taking the model's raw outputs and converting them from `raw logits -> prediction probabilities -> prediction label` (see [02. PyTorch Neural Network Classification section 3.1](https://www.learnpytorch.io/02_pytorch_classification/#31-going-from-raw-model-outputs-to-predicted-labels-logits-prediction-probabilities-prediction-labels) for more on this conversion)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "afbaa8ae-3954-43d4-92e1-3a0074428056",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Test label: 9\n",
      "Pred label: tensor([9])\n",
      "Is the prediction correct? True\n"
     ]
    }
   ],
   "source": [
    "# Convert raw logits to prediction probabilities\n",
    "pred_probs = torch.softmax(pred_on_cpu, dim=1)\n",
    "\n",
    "# Convert prediction probabilities to prediction label\n",
    "pred_label = torch.argmax(pred_probs, dim=1)\n",
    "\n",
    "# Check if it's correct\n",
    "print(f\"Test label: {test_label}\")\n",
    "print(f\"Pred label: {pred_label}\")\n",
    "print(f\"Is the prediction correct? {pred_label.item() == test_label}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3f8a3073-6c98-4137-aa3b-f2df832d1a67",
   "metadata": {},
   "source": [
    "There can be a fair few steps involved when making predictions on a test or custom sample.\n",
    "\n",
    "So one of the ways to prevent repeating all of these steps is to turn them into a function.\n",
    "\n",
    "There's an example of this in [04. PyTorch Custom Datasets section 11.3: Building a function to predict on custom images](https://www.learnpytorch.io/04_pytorch_custom_datasets/#113-putting-custom-image-prediction-together-building-a-function). "
   ]
  },
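  {
   "cell_type": "markdown",
   "id": "daf64c5b-9eba-4f8d-ab4c-5a6e7d8f9a0b",
   "metadata": {},
   "source": [
    "As a hedged sketch of what such a function might look like (the name `make_prediction` is made up for this example, it isn't defined elsewhere in this notebook), wrapping the shape, datatype and device steps together:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "def make_prediction(model: torch.nn.Module,\n",
    "                    sample: torch.Tensor,\n",
    "                    device: str = \"cpu\") -> torch.Tensor:\n",
    "    \"\"\"Format a single sample (shape, dtype, device) like the training data, then predict on it.\"\"\"\n",
    "    model.to(device)\n",
    "    model.eval()\n",
    "    with torch.inference_mode():\n",
    "        logits = model(sample.unsqueeze(dim=0)  # add a batch dimension\n",
    "                             .type(torch.float32)  # match the training datatype\n",
    "                             .to(device))  # match the model's device\n",
    "    pred_probs = torch.softmax(logits, dim=1)  # logits -> prediction probabilities\n",
    "    return torch.argmax(pred_probs, dim=1)  # prediction probabilities -> prediction label\n",
    "```\n",
    "\n",
    "With a function like this, the manual steps above would collapse into a single call such as `make_prediction(model_6, test_image)`."
   ]
  },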
  {
   "cell_type": "markdown",
   "id": "2740aaa9-e3a7-42ea-9ca0-d4dc5be67766",
   "metadata": {},
   "source": [
    "## Putting it all together \n",
    "\n",
    "We've been hands on with three of the main errors you'll come across when building neural networks with PyTorch:\n",
    "\n",
    "1. **Shape errors** - there are mismatches between the data you're working with and the neural network you're building to find patterns in, or between the connecting layers of your neural network.\n",
    "2. **Device errors** - your model and data are on different devices; PyTorch expects *all* tensors and objects to be on the *same* device.\n",
    "3. **Datatype errors** - you're trying to compute on one datatype when your model expects another.\n",
    "\n",
    "And we've seen how and why they occur and then how to fix them:\n",
    "\n",
    "* Your model wants to make predictions on the same kind of data it was trained on (shape, device and datatype).\n",
    "* Your model and data should be on the same device for training and testing.\n",
    "* You can take care of many of these issues by creating reusable functions that handle `device` and datatype, such as in [05. PyTorch Going Modular section 4: Creating training and testing functions](https://www.learnpytorch.io/05_pytorch_going_modular/#4-creating-train_step-and-test_step-functions-and-train-to-combine-them).\n",
    "\n",
    "Knowing about these errors won't prevent you from making them in the future, but it will give you an idea of where to go to fix them.\n",
    "\n",
    "For more in-depth examples of these errors, including how to create and fix them in a hands-on manner, check out the [Zero to Mastery: PyTorch for Deep Learning course](https://dbourke.link/ZTMPyTorch)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
