{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Pytorch Tutorial\n",
    "\n",
    "[Video-Source]https://www.youtube.com/playlist?list=PLqnslRFeH2UrcDBWF5mfPGpqQDSta6VK4;  \n",
    "[Code-Source]https://github.com/patrickloeber/pytorch-examples\n",
    "\n",
    "## Tensor Basics\n",
    "\n",
    "```python\n",
    "torch.ones(2, 2, dtype=torch.float16)\n",
    "torch.rand(2, 2)\n",
    "torch.zeros(2, 2)\n",
    "\n",
    "torch.mul(x, y) # assert x.shape == y.shape\n",
    "torch.div(x, y) # assert x.shape == y.shape\n",
    "torch.matmul(x, y) # assert x.shape == (m, k) and y.shape == (k, n)\n",
    "torch.add(x, y) # assert x.shape == y.shape\n",
    "torch.sub(x, y) # assert x.shape == y.shape\n",
    "\n",
    "a = numpy.arrary()\n",
    "torch.from_numpy(a)\n",
    "```"
   ]
  },
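  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The shape rules in the comments above can be checked directly; a minimal sketch (the tensor names and sizes are illustrative):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "x = torch.rand(2, 3)\n",
    "y = torch.rand(3, 4)\n",
    "z = torch.matmul(x, y) # (2, 3) @ (3, 4) -> (2, 4)\n",
    "print(z.shape) # torch.Size([2, 4])\n",
    "```"
   ]
  },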
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import numpy as np \n",
    "\n",
    "x = torch.rand(4, 4)\n",
    "print(x)\n",
    "\n",
    "y = x.view(16)\n",
    "z = x.view(-1, 8)\n",
    "print(y)\n",
    "print(z, z.size())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check the device\n",
    "\n",
    "```python\n",
    "if torch.cuda.is_available():\n",
    "    device = torch.device(\"cuda\")\n",
    "    x = torch.ones(5, device=device)\n",
    "    y = torch.ones(5)\n",
    "    y = y.to(device)\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.cuda.is_available()\n",
    "\n",
    "if torch.cuda.is_available():\n",
    "    device = torch.device(\"cuda\")\n",
    "    x = torch.ones(5, device=device)\n",
    "    y = torch.ones(5)\n",
    "    y = y.to(device) # to gpu\n",
    "    z = x + y\n",
    "    z = z.to(\"cpu\") "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Gradient calculation with autograd\n",
    "\n",
    "```python\n",
    "x = torch.randn(3, requires_grad=True)\n",
    "y = x.mean()\n",
    "# create a Jacobian matrix to get derivative\n",
    "y.backward() # dy / dx\n",
    "print(x.grad)\n",
    "\n",
    "# Not show gradient info\n",
    "x.requires_grad_(False) # form 1\n",
    "x.detach() # form 2\n",
    "with torch.no_grad(): # form 3\n",
    "    y = x + 2\n",
    "    print(y)\n",
    "```\n",
    "\n",
    "**Backpropagation Theory**\n",
    "\n",
    "chain rule\n",
    "$$\\frac{dz}{dx} = \\frac{dz}{dy} \\cdot \\frac{dy}{dx}$$\n",
    "\n",
    "## Training Pipeline\n",
    "\n",
    "+ 1) Design our model (input, output size, forward pass)\n",
    "+ 2) Construct loss and optimizer\n",
    "+ 3) Training loop:   \n",
    "  Iterate over data, calculate loss, perform backward pass, update weights\n",
    "  + forward pass: compute prediction\n",
    "  + backward pass: compute gradients\n",
    "  + update weights\n",
    "\n",
    "**Key coding**\n",
    "\n",
    "+ ```model.forward()```：前向推理，计算损失函数；\n",
    "+ ```loss.backward()```：反向传播，计算当前梯度；\n",
    "+ ```optimizer.step()```：根据梯度更新网络参数\n",
    "+ ```optimizer.zero_grad()```：清空过往梯度"
   ]
  },
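  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The chain rule above can be verified numerically with autograd; a small sketch (the input values are illustrative):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)\n",
    "y = x + 2         # dy/dx = 1\n",
    "z = (y * y).sum() # dz/dy = 2y\n",
    "z.backward()\n",
    "print(x.grad) # dz/dx = dz/dy * dy/dx = 2(x + 2) = [6, 8, 10]\n",
    "```"
   ]
  },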
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch \n",
    "\n",
    "X = torch.randn(100, 4, requires_grad=True)\n",
    "y = torch.randn(100, 1, requires_grad=True)\n",
    "w = torch.randn(4, 4, dtype=torch.float32, requires_grad=True) \n",
    "\n",
    "def forward(X, w):\n",
    "    return torch.matmul(X, w)\n",
    "\n",
    "def loss_func(y, y_pred):\n",
    "    return torch.mean((y_pred - y) ** 2)\n",
    "\n",
    "learning_rate = 0.01\n",
    "n_iters = 100\n",
    "\n",
    "for epoch in range(n_iters):\n",
    "    y_pred = forward(X, w)\n",
    "    l = loss_func(y, y_pred)\n",
    "    l.backward() # dl / dw\n",
    "\n",
    "    with torch.no_grad():\n",
    "        w -= learning_rate * w.grad\n",
    "    w.grad.zero_() # zero gradients\n",
    "    if epoch % 10 == 0:\n",
    "        print(f\"Epoch: {epoch}, Loss: {l.item()}\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch \n",
    "import torch.nn as nn \n",
    "\n",
    "X = torch.randn(100, 4, dtype=torch.float32)\n",
    "y = torch.randn(100, 1, dtype=torch.float32)\n",
    "\n",
    "n_samples, n_features = X.shape\n",
    "# input, output size of features\n",
    "input_size = n_features\n",
    "output_size = n_features\n",
    "# linear regression model\n",
    "model = nn.Linear(input_size, output_size) \n",
    "\n",
    "lr = 0.01\n",
    "n_iters = 100\n",
    "\n",
    "criterion = nn.MSELoss() # loss function\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=lr)\n",
    "\n",
    "for epoch in range(n_iters):\n",
    "    y_pred = model(X)\n",
    "    loss = criterion(y_pred, y)\n",
    "\n",
    "    loss.backward() # compute gradients\n",
    "    optimizer.step() # update weights   \n",
    "    optimizer.zero_grad() # zero gradients\n",
    "\n",
    "    if epoch % 10 == 0:\n",
    "        print(f\"Epoch = {epoch}, Loss = {loss.item():.5f}\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Dataset and Dataloader\n",
    "\n",
    "[reference] https://pytorch.org/tutorials/beginner/basics/data_tutorial.html  \n",
    "\n",
    "+ Dataset: 数据集，存储数据和标签；\n",
    "  + A custom Dataset class must implement three functions: ```__init__```, ```__len__```, and ```__getitem__```.\n",
    "\n",
    "+ Dataloader: 数据加载器，对数据进行预处理，并生成批量数据；\n",
    "\n",
    "  + The ```Dataset``` retrieves our dataset’s features and labels one sample at a time. While training a model, we typically want to pass samples in “minibatches”, reshuffle the data at every epoch to reduce model overfitting, and use Python’s ```multiprocessing``` to speed up data retrieval.\n",
    "\n",
    "  + ```DataLoader``` is an iterable that abstracts this complexity for us in an easy API.\n",
    "\n",
    "`DataLoader` 参数说明\n",
    "\n",
    "1. `dataset` (必需): 用于加载数据的数据集，通常是`torch.utils.data.Dataset`的子类实例。\n",
    "1. `batch_size` (可选): 每个批次的数据样本数。默认值为1。\n",
    "1. `shuffle` (可选): 是否在每个周期开始时打乱数据。默认为False。\n",
    "1. `sampler` (可选): 定义从数据集中抽取样本的策略。如果指定，则忽略`shuffle`参数。\n",
    "1. `batch_sampler` (可选): 与sampler类似，但一次返回一个批次的索引。不能与`batch_size`、`shuffle`和`sampler`同时使用。\n",
    "1. `num_workers` (可选): 用于数据加载的子进程数量。默认为0，意味着数据将在主进程中加载。\n",
    "1. `collate_fn` (可选): 如何将多个数据样本整合成一个批次。通常不需要指定。将一个list的sample组成一个mini-batch的函数.\n",
    "1. `drop_last` (可选): 如果数据集大小不能被批次大小整除，是否丢弃最后一个不完整的批次。默认为False。\n",
    "1. `pin_memory` (可选): 如果为True，数据加载器将使用固定内存（pinned memory）来加速数据传输到GPU。默认为False。\n",
    "\n",
    "\n",
    "\n",
    "## Dataset Transforms\n",
    "\n",
    "[reference] https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms  \n",
    "\n",
    "+ ```torchvision.transforms```: 图像预处理\n",
    "    \n",
    "```python\n",
    "\n",
    "\n",
    "```"
   ]
  },
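  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Of the parameters above, `collate_fn` is the one that most often needs a custom definition. A hedged sketch that pads variable-length samples into one batch (the toy dataset is made up for illustration):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch.utils.data import DataLoader\n",
    "\n",
    "data = [torch.ones(n) for n in (2, 3, 5)] # variable-length samples\n",
    "\n",
    "def pad_collate(batch):\n",
    "    # pad every sample in the list to the longest length in the batch\n",
    "    return torch.nn.utils.rnn.pad_sequence(batch, batch_first=True)\n",
    "\n",
    "loader = DataLoader(data, batch_size=3, collate_fn=pad_collate)\n",
    "batch = next(iter(loader))\n",
    "print(batch.shape) # torch.Size([3, 5])\n",
    "```"
   ]
  },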
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "import numpy as np\n",
    "import math\n",
    "\n",
    "class WineDataset(Dataset, transform=None):\n",
    "    def __init__(self, transform=False):\n",
    "        # data loading (skip the first row header)\n",
    "        xy = np.loadtxt('./asset/wine/wine.csv', delimiter=\",\", dtype=np.float32, skiprows=1)\n",
    "        self.X = torch.from_numpy(xy[:, 1:])\n",
    "        self.y = torch.from_numpy(xy[:, [0]]) # n_samples, 1\n",
    "        self.n_samples = xy.shape[0]\n",
    "        self.transform = transform\n",
    "\n",
    "\n",
    "    def __getitem__(self, index):\n",
    "        sample = self.X[index], self.y[index]\n",
    "\n",
    "        if self.transform:\n",
    "            sample = self.transform(sample)\n",
    "        return sample\n",
    "\n",
    "    def __len__(self):\n",
    "        return self.n_samples\n",
    "\n",
    "dataset = WineDataset()\n",
    "# first_data = dataset[0]\n",
    "# features, labels = first_data\n",
    "# print(features, labels)\n",
    "\n",
    "dataloader = DataLoader(dataset=dataset, batch_size=4, shuffle=True)\n",
    "# features, labels = next(iter(dataloader))\n",
    "# print(features, labels)\n",
    "\n",
    "# training loop \n",
    "num_epochs = 2\n",
    "total_samples = len(dataset)\n",
    "n_iters = math.ceil(total_samples / 4) # get upper boundary\n",
    "print(total_samples, n_iters)\n",
    "\n",
    "'''\n",
    "for epoch in range(num_epochs):\n",
    "    for i, (inputs, _) in enumerate(dataloader):\n",
    "        # forward pass, backward pass, update weights\n",
    "        if (i + 1) % 5 == 0:\n",
    "            print(f'epoch {epoch+1} / {num_epochs}, step {i+1} / {n_iters}, inputs {inputs.shape}')\n",
    "'''"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Feed-Forward Neural Networks\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# fully connencted neual network with a hidden layer\n",
    "class NeuralNetwork(nn.Module):\n",
    "    def __init__(self, input_size, hidden_size, num_classes):\n",
    "        super(NeuralNetwork, self).__init__()\n",
    "        self.fc1 = nn.Linear(input_size, hidden_size) \n",
    "        self.relu = nn.ReLU()\n",
    "        self.fc2 = nn.Linear(hidden_size, num_classes)  \n",
    "    \n",
    "    def forward(self, x):\n",
    "        out = self.fc1(x)\n",
    "        out = self.relu(out)\n",
    "        out = self.fc2(out)\n",
    "        # no activation and no softmax at the end\n",
    "        return out"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "import matplotlib.pyplot as plt\n",
    "from torch.utils.data import DataLoader, Dataset\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "import sys\n",
    "writer = SummaryWriter(\"runs/mnist2\")\n",
    "\n",
    "# device config\n",
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "# hyper parameters \n",
    "input_size = 784 # 28x28\n",
    "hidden_size = 100\n",
    "num_classes = 10\n",
    "num_epochs = 5\n",
    "batch_size = 100\n",
    "learning_rate = 0.001\n",
    "\n",
    "# MNIST dataset\n",
    "train_dataset = torchvision.datasets.MNIST(root='./data', train=True, \\\n",
    "                                           transform=transforms.ToTensor(), download=True)\n",
    "test_dataset = torchvision.datasets.MNIST(root='./data', train=False, \\\n",
    "                                          transform=transforms.ToTensor())\n",
    "train_loader = DataLoader(dataset=train_dataset, batch_size= batch_size, shuffle=True)\n",
    "test_loader = DataLoader(dataset=test_dataset, batch_size= batch_size, shuffle=False)\n",
    "\n",
    "examples = iter(train_loader)\n",
    "samples, labels = examples.__next__() # bug for python version\n",
    "print(samples.shape, labels.shape)\n",
    "\n",
    "for i in range(6):\n",
    "    plt.subplot(2, 3, i+1)\n",
    "    plt.imshow(samples[i][0], cmap='gray')\n",
    "    plt.title(labels[i].item())\n",
    "plt.show()\n",
    "\n",
    "img_grid = torchvision.utils.make_grid(samples)\n",
    "writer.add_image('mnist_images', img_grid)\n",
    "# writer.close()\n",
    "# sys.exit()\n",
    "\n",
    "model = NeuralNetwork(input_size, hidden_size, num_classes)\n",
    "# loss and optimizer\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)\n",
    "\n",
    "writer.add_graph(model, samples.reshape(-1, 28*28))\n",
    "writer.close()\n",
    "# sys.exit()\n",
    "# training loop\n",
    "n_total_steps = len(train_loader)\n",
    "\n",
    "running_loss = 0.0 # for tensorboard scalar\n",
    "running_correct = 0 # for tensorboard scalar\n",
    "for epoch in range(num_epochs):\n",
    "    for i, (images, labels) in enumerate(train_loader):\n",
    "        # reshape images to (batch_size, input_size)\n",
    "        # (100, 1, 28, 28) -> (100, 28*28)\n",
    "        images = images.reshape(-1, 28*28).to(device)\n",
    "        labels = labels.to(device)\n",
    "        \n",
    "        # forward pass\n",
    "        outputs = model(images)\n",
    "        loss = criterion(outputs, labels)\n",
    "\n",
    "        # backward pass and update weights\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        running_loss += loss.item()\n",
    "        _, predicted = torch.max(outputs.data, 1)\n",
    "        running_correct += (predicted == labels).sum().item()\n",
    "\n",
    "        if (i+1) % 100 == 0:\n",
    "            print(f\"epoch {epoch+1} / {num_epochs}, step {i+1}/{n_total_steps}, loss = {loss}\")\n",
    "            writer.add_scalar('training loss', running_loss / 100, epoch * n_total_steps + i)\n",
    "            writer.add_scalar('accuracy', running_correct / 100, epoch * n_total_steps + i)\n",
    "            running_loss = 0.0\n",
    "            running_correct = 0\n",
    "\n",
    "# test\n",
    "preds = [] \n",
    "labels = []\n",
    "with torch.no_grad():\n",
    "    n_correct = 0\n",
    "    n_samples = 0\n",
    "    for images, label in test_loader:\n",
    "        images = images.reshape(-1, 28*28).to(device)\n",
    "        label = label.to(device)\n",
    "        \n",
    "        outputs = model(images)\n",
    "        # max returns (value, index)\n",
    "        _, predicted = torch.max(outputs.data, 1)\n",
    "        n_samples += label.shape[0]\n",
    "        n_correct += (predicted == label).sum().item()\n",
    "\n",
    "        # classification results for tensorboard\n",
    "        class_predictions = [nn.functional.softmax(output, dim=0) for output in outputs]\n",
    "        preds.append(class_predictions)\n",
    "        labels.append(predicted)\n",
    "    \n",
    "    preds = torch.cat([torch.stack(batch) for batch in preds])\n",
    "    labels = torch.cat(labels)\n",
    "    acc = 100.0 * n_correct / n_samples\n",
    "    print(f'Accuracy on the testing images= {acc}%')\n",
    "\n",
    "    classes = range(10)\n",
    "    for i in classes:\n",
    "        labels_i = labels == i\n",
    "        preds_i = preds[:, i]\n",
    "        writer.add_pr_curve(str(i), labels_i, preds_i, global_step=0)\n",
    "        writer.close()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Convolutional Neural Networks\n",
    "\n",
    "+ The CIFAR-10 dataset \n",
    "+ Convolutional Layer\n",
    "  + input size: $(n_h \\times n_w)$, convo kernel: $(k_h \\times k_w)$, \n",
    "  + padding: $(p_h, p_w)$,  stride: $(s_h, s_w)$\n",
    "  + output size: $$(n_h - k_h + p_h + 1)\\times (n_w - k_w + p_w + 1)$$\n",
    "  + output size: $$\\lfloor(n_h - k_h + p_h + s_h)/s_h \\rfloor \\times \\lfloor (n_w - k_w + p_w + s_w)/s_w \\rfloor$$ , or we can just compute $$ \\lfloor (N - K + P) / S \\rfloor + 1$$ if $p_h = k_h -1, p_w = k_w - 1$, output $(n_h/s_h)\\times (n_w/s_w)$\n",
    "+ Max Pooling \n",
    "  + (2 x 2) max pooling, output size: $(n_h/2)\\times (n_w/2)$\n",
    "+ Pytorch code\n",
    "  + ```torch.nn.Conv2d```: 卷积层\n",
    "  + ```torch.nn.MaxPool2d```: 最大池化层\n",
    "  + ```torch.nn.Flatten```: 展平层\n",
    "  + ```torch.nn.Linear```: 全连接层\n",
    "\n",
    "  + ```torch.nn.Sequential```: 顺序模型\n",
    "+ utils \n",
    "  + ```out = torchvision.utils.make_grid(images)``` : 显示图像\n",
    "  + ```imshow(out, title=[class_names[x] for x in classes])```: 显示图像\n",
    "  + ```torchvision.transforms.ToPILImage()```: 图像转换\n",
    "    \n",
    "  ``````"
   ]
  },
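  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The output-size formula can be sanity-checked layer by layer with a dummy input before writing the full network; a minimal sketch:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "x = torch.randn(4, 3, 32, 32)  # a CIFAR-10-sized batch\n",
    "x = nn.Conv2d(3, 6, 5)(x)      # (32 - 5 + 0)/1 + 1 = 28\n",
    "print(x.shape) # torch.Size([4, 6, 28, 28])\n",
    "x = nn.MaxPool2d(2, 2)(x)      # 28 / 2 = 14\n",
    "print(x.shape) # torch.Size([4, 6, 14, 14])\n",
    "```"
   ]
  },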
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "class ConvNetwork(nn.Module):\n",
    "    # original shape of images [4, 3, 32, 32]\n",
    "    # input_layer: 3 input channels, 6 output channels, 5 kernel size   \n",
    "    def __init__(self):\n",
    "        super(ConvNetwork, self).__init__()\n",
    "        self.conv1 = nn.Conv2d(3, 6, 5) # [4, 3, 32, 32] -> [4, 6, 28, 28]\n",
    "        self.pool = nn.MaxPool2d(2, 2) # [4, 6, 28, 28] -> [4, 6, 14, 14]\n",
    "        self.conv2 = nn.Conv2d(6, 16, 5) # [4, 6, 14, 14] -> [4, 16, 10, 10]\n",
    "        self.fc1 = nn.Linear(16*5*5, 120)\n",
    "        self.fc2 = nn.Linear(120, 84)\n",
    "        self.fc3 = nn.Linear(84, 10)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = F.relu(self.conv1(x))\n",
    "        x = self.pool(x)\n",
    "        x = F.relu(self.conv2(x))\n",
    "        x = self.pool(x)\n",
    "        # flatten the output of conv2 to (batch_size, 16*5*5)\n",
    "        x = x.view(-1, 16*5*5) \n",
    "        x = F.relu(self.fc1(x))\n",
    "        x = F.relu(self.fc2(x))\n",
    "        return self.fc3(x)\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "import matplotlib.pyplot as plt\n",
    "from torch.utils.data import DataLoader, Dataset\n",
    "\n",
    "# device config\n",
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "# hyper parameters \n",
    "input_size = 1024 # 32x32\n",
    "hidden_size = 100\n",
    "num_classes = 10\n",
    "num_epochs = 4\n",
    "batch_size = 4\n",
    "learning_rate = 0.001\n",
    "\n",
    "# CIRAR10 dataset\n",
    "transform = transforms.Compose(\n",
    "    [transforms.ToTensor(),\n",
    "    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]\n",
    ")\n",
    "train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True, \\\n",
    "                                           transform=transform, download=True)\n",
    "test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False, \\\n",
    "                                          transform=transform, download=True)\n",
    "train_loader = DataLoader(dataset=train_dataset, batch_size= batch_size, shuffle=True)\n",
    "test_loader = DataLoader(dataset=test_dataset, batch_size= batch_size, shuffle=False)\n",
    "\n",
    "classes = ('plane', 'car', 'bird', 'cat', 'deer', \n",
    "           'dog', 'frog', 'horse', 'ship', 'truck')\n",
    "\n",
    "examples = iter(train_loader)\n",
    "samples, labels = examples.__next__() # bug for python version\n",
    "print(samples.shape, labels.shape)\n",
    "\n",
    "for i in range(4):\n",
    "    plt.subplot(1, 4, i+1)\n",
    "    plt.imshow(samples[i][0])\n",
    "    plt.title(classes[labels[i].item()])\n",
    "plt.show()\n",
    "\n",
    "\n",
    "model = ConvNetwork()\n",
    "# loss and optimizer\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)\n",
    "\n",
    "# training loop\n",
    "n_total_steps = len(train_loader)\n",
    "for epoch in range(num_epochs):\n",
    "    for i, (images, labels) in enumerate(train_loader):\n",
    "        # original shape of images [4, 3, 32, 32]\n",
    "        # input_layer: 3 input channels, 6 output channels, 5 kernel size\n",
    "        images = images.to(device)\n",
    "        labels = labels.to(device)\n",
    "        \n",
    "        # forward pass\n",
    "        outputs = model(images)\n",
    "        loss = criterion(outputs, labels)\n",
    "\n",
    "        # backward pass and update weights\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        if (i+1) % 100 == 0:\n",
    "            print(f\"epoch {epoch+1} / {num_epochs}, step {i+1}/{n_total_steps}, loss = {loss}\")\n",
    "\n",
    "print('Finishing Training')\n",
    "# test \n",
    "with torch.no_grad():\n",
    "    n_correct = 0\n",
    "    n_samples = 0\n",
    "    n_class_correct = [0 for i in range(num_classes)]\n",
    "    n_class_samples = [0 for i in range(num_classes)]\n",
    "\n",
    "    for images, labels in test_loader:\n",
    "        images = images.to(device)\n",
    "        labels = labels.to(device)\n",
    "        \n",
    "        outputs = model(images)\n",
    "        # value, index\n",
    "        _, predicted = torch.max(outputs.data, 1)\n",
    "        n_samples += labels.shape[0]\n",
    "        n_correct += (predicted == labels).sum().item()\n",
    "\n",
    "        for i in range(batch_size):\n",
    "            label = labels[i]\n",
    "            pred = predicted[i]\n",
    "            if (label == pred):\n",
    "                n_class_correct[label] += 1\n",
    "            n_class_samples[label] += 1\n",
    "\n",
    "    acc = 100.0 * n_correct / n_samples\n",
    "    print(f'Accuracy of Convolutional Network = {acc}%')\n",
    "\n",
    "    for i in range(10):\n",
    "        acc = 100.0 * n_class_correct[i] / n_class_samples[i]\n",
    "        print(f'Accuracy of {classes[i]} = {acc}%')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Tranfer Learning\n",
    "\n",
    "+ ```torch.utils.tensorboard```: 记录训练过程\n",
    "    \n",
    "\n",
    "```python\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "import sys\n",
    "\n",
    "writer = SummaryWriter(\"runs/mnist2\")\n",
    "```\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# tranfer.py \n",
    "import torch \n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "from torch.optim import lr_scheduler\n",
    "import torchvision\n",
    "from torchvision import datasets, models, transforms\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import time\n",
    "import os\n",
    "import copy\n",
    "\n",
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "\n",
    "mean = np.array([0.485, 0.456, 0.406])\n",
    "std = np.array([0.229, 0.224, 0.225]) \n",
    "\n",
    "... \n",
    "\n",
    "def train_model():\n",
    "    pass\n",
    "\n",
    "####\n",
    "model = models.resnet18(pretrained=True)\n",
    "for param in model.parameters():\n",
    "    param.requires_grad = False \n",
    "    \n",
    "num_ftrs = model.fc.in_features  # num_features\n",
    "model.fc = nn.Linear(num_ftrs, 2) \n",
    "model.to(device)\n",
    "\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "optimizer = optim.SGD(model.parameters(), lr=0.001)\n",
    "\n",
    "# scheduler\n",
    "# Decay LR by a factor of 0.1 every 7 epochs\n",
    "step_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)\n",
    "model = train_model(model, criterion, optimizer, step_lr_scheduler, num_epochs=25)\n",
    "# for epoch in range(num_epochs):\n",
    "#     train(...) \n",
    "#     validate(...) \n",
    "#     schelduler.step()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Transformer\n",
    "\n",
    "\n",
    "+ ```torch.nn.Transformer```: 实现Transformer模型"
   ]
  },
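  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal instantiation sketch (the dimensions below are illustrative, not from the source):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "model = nn.Transformer(d_model=32, nhead=4, num_encoder_layers=2, num_decoder_layers=2)\n",
    "src = torch.rand(10, 2, 32) # (source_len, batch, d_model)\n",
    "tgt = torch.rand(7, 2, 32)  # (target_len, batch, d_model)\n",
    "out = model(src, tgt)\n",
    "print(out.shape) # torch.Size([7, 2, 32])\n",
    "```"
   ]
  },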
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## TensorBoard Usage\n",
    "\n",
    "+ ```torch.utils.tensorboard```: 记录训练过程\n",
    "  [referce]https://pytorch.org/tutorials/beginner/introyt/tensorboardyt_tutorial.html;\n",
    "  \n",
    "```python\n",
    "\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "import sys\n",
    "\n",
    "writer = SummaryWriter(\"runs/mnist2\")\n",
    "\n",
    "examples = iter(train_loader)\n",
    "samples, labels = examples.__next__() # bug for python version\n",
    "print(samples.shape, labels.shape)\n",
    "\n",
    "\n",
    "img_grid = torchvision.utils.make_grid(samples)\n",
    "writer.add_image('mnist_images', img_grid)\n",
    "\n",
    "writer.add_graph(model, samples.reshape(-1, 28*28))\n",
    "# writer.close()\n",
    "# training loop\n",
    "running_loss = 0.0\n",
    "running_correct = 0\n",
    "\n",
    "for epoch in range(num_epochs):\n",
    "    for i, (images, labels) in enumerate(train_loader):\n",
    "\n",
    "        running_loss += loss.item()\n",
    "        _, predicted = torch.max(outputs.data, 1)\n",
    "        running_correct += (predicted == labels).sum().item()\n",
    "\n",
    "        if (i+1) % 100 == 0:\n",
    "            writer.add_scalar('training loss', running_loss / 100, epoch * n_total_steps + i)\n",
    "            writer.add_scalar('accuracy', running_correct / 100, epoch * n_total_steps + i)\n",
    "            running_loss = 0.0\n",
    "            running_correct = 0\n",
    "# test\n",
    "preds = []\n",
    "labels = []\n",
    "with torch.no_grad():\n",
    "    for images, label in test_loader:\n",
    "        images = images.reshape(-1, 28*28).to(device)\n",
    "        label = label.to(device)\n",
    "        \n",
    "        outputs = model(images)\n",
    "        _, predicted = torch.max(outputs.data, 1)\n",
    "        ...\n",
    "        # classification results for tensorboard\n",
    "        class_predictions = [nn.functional.softmax(output, dim=0) for output in outputs]\n",
    "        preds.append(class_predictions)\n",
    "        labels.append(predicted)\n",
    "    \n",
    "    preds = torch.cat([torch.stack(batch) for batch in preds])\n",
    "    labels = torch.cat(labels)\n",
    "    acc = 100.0 * n_correct / n_samples\n",
    "    print(f'Accuracy on the testing images= {acc}%')\n",
    "\n",
    "    classes = range(10)\n",
    "    for i in classes:\n",
    "        labels_i = labels == i\n",
    "        preds_i = preds[:, i]\n",
    "        writer.add_pr_curve(str(i), labels_i, preds_i, global_step=0)\n",
    "        writer.close()\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Saving and Loading Models\n",
    "\n",
    "[Reference]https://pytorch.org/tutorials/beginner/basics/saveloadrun_tutorial.html;\n",
    "\n",
    "+ ```torch.save```: 保存模型参数\n",
    "  + complete model: 保存整个模型对象，包括模型的结构和参数。当加载模型时，需要确保与原始模型相同的代码定义了模型的结构。\n",
    "  + state dict: 仅保存模型的状态字典（state dictionary），即模型的参数。这种方法保存的文件相对较小，只包含模型的权重信息，而不包括模型的结构。加载模型时，需要首先根据代码定义模型的结构，然后再将参数加载到模型中。\n",
    "+ ```torch.load```: 加载模型参数\n",
    "\n",
    "```python\n",
    "# Example\n",
    "PATH = 'mymodel.pth'\n",
    "#### COMPLETE MODEL ####\n",
    "torch.save(model, PATH)\n",
    "\n",
    "# model class must be define somewhere \n",
    "model = torch.load(PATH)\n",
    "model.eval()\n",
    "\n",
    "##### STATE DICT #####\n",
    "torch.save(model.state_dict(), PATH)\n",
    "\n",
    "# model must be created again with parameters\n",
    "model = MyModel(*args, **kwargs)\n",
    "model.load_state_dict(torch.load(PATH))\n",
    "model.eval()\n",
    "\n",
    "# How to make model human visible\n",
    "for param in loaded_model.parameters():\n",
    "  print(param)\n",
    "\n",
    "print(model.state_dict())\n",
    "```\n",
    "\n",
    "+ A pipeline for using **checkpoint** to save and load model\n",
    "\n",
    "```python\n",
    "# train your model\n",
    "learning_rate = 0.01 \n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)\n",
    "print(optimizer.state_dict())\n",
    "\n",
    "checkpoint = {\n",
    "    'epoch': current_epoch,\n",
    "    'model_state_dict': model.state_dict(),\n",
    "    'optimizer_state_dict': optimizer.state_dict(),\n",
    "    # 'loss': loss,\n",
    "    # 'accuracy': accuracy\n",
    "}\n",
    "# torch.save(checkpoint, 'checkpoint.pth')\n",
    "loaded_checkpoint = torch.load('checkpoint.pth')\n",
    "epoch = loaded_checkpoint['epoch']\n",
    "\n",
    "model = MyModel(*args, **kwargs)\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0)\n",
    "\n",
    "model.load_state_dict(loaded_checkpoint['model_state_dict'])\n",
    "optimizer.load_state_dict(loaded_checkpoint['optimizer_state_dict'])\n",
    "\n",
    "print(optimizer.state_dict())\n",
    "```\n",
    "\n",
    "+ **Saving and loading model on CPU or GPU**\n",
    "\n",
    "```python\n",
    "# Save on GPU， load on CPU \n",
    "device = torch.device(\"cuda\")\n",
    "model.to(device) \n",
    "model.save(model.sate_dict(), PATH)\n",
    "\n",
    "target_device = torch.device(\"cpu\")\n",
    "model = MyModel(*args, **kwargs)\n",
    "model.load_state_dict(torch.load(PATH, map_location=target_device)) \n",
    "\n",
    "# Save on GPU， load on GPU\n",
    "device = torch.device(\"cuda\")\n",
    "model.to(device)\n",
    "model.save(model.sate_dict(), PATH)\n",
    "\n",
    "model = MyModel(*args, **kwargs)\n",
    "model.load_state_dict(torch.load(PATH))\n",
    "model.to(device)\n",
    "\n",
    "# Save on CPU， load on GPU \n",
    "model.save(model.sate_dict(), PATH)\n",
    "\n",
    "device = torch.device(\"cuda\") # specify the cude device\n",
    "model = MyModel(*args, **kwargs)\n",
    "model.load_state_dict(torch.load(PATH, map_location=\"cuda:0\")) # choose which cuda device to load\n",
    "model.to(device)\n",
    "```\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Pytorch Lightning Tutorial\n",
    "\n",
    "Lightning Source: [github-PytorchLightning](https://github.com/Lightning-AI/pytorch-lightning);  \n",
    "Reference: [pytorch-lightning入门到精通](https://github.com/3017218062/Pytorch-Lightning-Learning)  \n",
    "\n",
    "Simple installation from PyPI or Conda\n",
    "\n",
    " - ```pip install pytorch-lightning``` \n",
    " - ```conda install pytorch-lightning -c conda-forge``` \n",
    "\n",
    "Show on ``tensorboard``` \n",
    "\n",
    "```logger = TensorBoardLogger('tb_logs', name='my_model')```  \n",
    "```tensorboard --logdir ./tb_logs```\n",
    "\n",
    "```python\n",
    "# lightning features\n",
    "model.train()\n",
    "model.eval()\n",
    "\n",
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "model.to(device)\n",
    "# -> easy GPU/TPU support\n",
    "# -> scale GPUs\n",
    "\n",
    "# Bonus: - Tensorbord support\n",
    "#        - prints tips/hints\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "GPU available: False, used: False\n",
      "TPU available: False, using: 0 TPU cores\n",
      "IPU available: False, using: 0 IPUs\n",
      "HPU available: False, using: 0 HPUs\n",
      "Missing logger folder: d:\\Desktop\\StudyNote\\PythonNote\\lightning_logs\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "  | Name | Type   | Params\n",
      "--------------------------------\n",
      "0 | l1   | Linear | 392 K \n",
      "1 | relu | ReLU   | 0     \n",
      "2 | l2   | Linear | 5.0 K \n",
      "--------------------------------\n",
      "397 K     Trainable params\n",
      "0         Non-trainable params\n",
      "397 K     Total params\n",
      "1.590     Total estimated model params size (MB)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sanity Checking: |          | 0/? [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "d:\\Code\\Anaconda\\envs\\ml\\lib\\site-packages\\pytorch_lightning\\trainer\\connectors\\data_connector.py:436: Consider setting `persistent_workers=True` in 'val_dataloader' to speed up the dataloader worker initialization.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "                                                                            "
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "d:\\Code\\Anaconda\\envs\\ml\\lib\\site-packages\\pytorch_lightning\\trainer\\connectors\\data_connector.py:436: Consider setting `persistent_workers=True` in 'train_dataloader' to speed up the dataloader worker initialization.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 4: 100%|██████████| 600/600 [00:14<00:00, 40.85it/s, v_num=0]"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "`Trainer.fit` stopped: `max_epochs=5` reached.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 4: 100%|██████████| 600/600 [00:14<00:00, 40.80it/s, v_num=0]\n"
     ]
    }
   ],
   "source": [
    "# lightning.py  \n",
    "import torch \n",
    "import torch.nn as nn \n",
    "import torchvision \n",
    "import torchvision.transforms as transforms \n",
    "import matplotlib.pyplot as plt \n",
    "\n",
    "import pytorch_lightning as pl\n",
    "import torch.nn.functional as F\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "\n",
    "# Hyper-parameters \n",
    "input_size = 784  # 28x28\n",
    "hidden_size = 500 \n",
    "num_classes = 10 \n",
    "num_epochs = 5 \n",
    "batch_size = 100 \n",
    "learning_rate = 0.001\n",
    "\n",
    "\n",
    "class LitNeuralNet(pl.LightningModule):\n",
    "    def __init__(self, input_size, hidden_size, num_classes):\n",
    "        super(LitNeuralNet, self).__init__()\n",
    "        self.validation_step_outputs = []\n",
    "        self.input_size = input_size \n",
    "        self.l1 = nn.Linear(input_size, hidden_size) \n",
    "        self.relu = nn.ReLU()\n",
    "        self.l2 = nn.Linear(hidden_size, num_classes) \n",
    "\n",
    "    def forward(self, x):\n",
    "        out = self.relu(self.l1(x))\n",
    "        out = self.l2(out)\n",
    "        # no activation and no softmax at the end \n",
    "        return out \n",
    "    \n",
    "    def configure_optimizers(self):\n",
    "        return torch.optim.Adam(self.parameters(), lr=learning_rate) \n",
    "   \n",
    "    def training_step(self, batch, batch_idx):\n",
    "        images, labels = batch \n",
    "        images = images.reshape(-1, 28*28)\n",
    "\n",
    "        # forward pass \n",
    "        outputs = self(images) \n",
    "        loss = F.cross_entropy(outputs, labels)\n",
    "        tensorboard_logs = {'train_loss': loss} \n",
    "        return {'loss': loss, 'log': tensorboard_logs} \n",
    "    \n",
    "    def train_dataloader(self):\n",
    "        train_dataset = torchvision.datasets.MNIST(root='./data/', \n",
    "                         train=True, transform=transforms.ToTensor(), download=True)\n",
    "        train_loader = DataLoader(train_dataset, batch_size=batch_size, num_workers=4, shuffle=True) \n",
    "        return train_loader\n",
    "    \n",
    "    def validation_step(self, batch, batch_idx):\n",
    "        images, labels = batch \n",
    "        images = images.reshape(-1, 28*28)\n",
    "\n",
    "        # forward pass \n",
    "        outputs = self(images) \n",
    "        loss = F.cross_entropy(outputs, labels)\n",
    "        self.validation_step_outputs.append(loss)\n",
    "        tensorboard_logs = {'val_loss': loss} \n",
    "        return {'val_loss': loss, 'log': tensorboard_logs} \n",
    "    \n",
    "    def val_dataloader(self):\n",
    "        val_dataset = torchvision.datasets.MNIST(root='./data/', \n",
    "                         train=False, transform=transforms.ToTensor(), download=True)\n",
    "        val_loader = DataLoader(val_dataset, batch_size=batch_size, num_workers=4, shuffle=False) \n",
    "        return val_loader    \n",
    "    \n",
    "    def on_validation_epoch_end(self):\n",
    "        avg_loss = torch.stack(self.validation_step_outputs).mean()\n",
    "        tensorboard_logs = {'avg_val_loss': avg_loss} \n",
    "        self.validation_step_outputs.clear()  # free memory\n",
    "        return {'val_loss': avg_loss, 'log': tensorboard_logs}\n",
    "    \n",
    "\n",
    "# if __name__ == '__main__':\n",
    "trainer = pl.Trainer(max_epochs=num_epochs, fast_dev_run=False) \n",
    "model = LitNeuralNet(input_size, hidden_size, num_classes) \n",
    "trainer.fit(model) "
   ]
  },
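  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once `trainer.fit(model)` has finished, the `LightningModule` can be used like any `nn.Module` for inference. A sketch, assuming the trained `model` from the cell above:\n",
    "\n",
    "```python\n",
    "model.eval()\n",
    "with torch.no_grad():\n",
    "    images, labels = next(iter(model.val_dataloader()))\n",
    "    outputs = model(images.reshape(-1, 28*28))\n",
    "    preds = outputs.argmax(dim=1)           # predicted digit per image\n",
    "    acc = (preds == labels).float().mean()  # batch accuracy\n",
    "```"
   ]
  },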
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "GPU available: False, used: False\n",
      "TPU available: False, using: 0 TPU cores\n",
      "IPU available: False, using: 0 IPUs\n",
      "HPU available: False, using: 0 HPUs\n",
      "Running in `fast_dev_run` mode: will run the requested loop using 1 batch(es). Logging and checkpointing is suppressed.\n",
      "d:\\Code\\Anaconda\\envs\\ml\\lib\\site-packages\\pytorch_lightning\\trainer\\configuration_validator.py:72: You passed in a `val_dataloader` but have no `validation_step`. Skipping val loop.\n",
      "\n",
      "  | Name    | Type       | Params\n",
      "---------------------------------------\n",
      "0 | encoder | Sequential | 100 K \n",
      "1 | decoder | Sequential | 101 K \n",
      "---------------------------------------\n",
      "202 K     Trainable params\n",
      "0         Non-trainable params\n",
      "202 K     Total params\n",
      "0.810     Total estimated model params size (MB)\n",
      "d:\\Code\\Anaconda\\envs\\ml\\lib\\site-packages\\pytorch_lightning\\trainer\\connectors\\data_connector.py:441: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=15` in the `DataLoader` to improve performance.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 0: 100%|██████████| 1/1 [00:00<00:00, 61.58it/s]"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "`Trainer.fit` stopped: `max_steps=1` reached.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 0: 100%|██████████| 1/1 [00:00<00:00, 54.83it/s]\n"
     ]
    }
   ],
   "source": [
    "# A simple autoencoder example from the pytorch-lightning README on GitHub\n",
    "# main.py\n",
    "# ! pip install torchvision\n",
    "import torch, torch.nn as nn, torch.utils.data as data, torchvision as tv, torch.nn.functional as F\n",
    "import pytorch_lightning as pl\n",
    "\n",
    "# --------------------------------\n",
    "# Step 1: Define a LightningModule\n",
    "# --------------------------------\n",
    "# A LightningModule (nn.Module subclass) defines a full *system*\n",
    "# (ie: an LLM, diffusion model, autoencoder, or simple image classifier).\n",
    "\n",
    "\n",
    "class LitAutoEncoder(pl.LightningModule):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3))\n",
    "        self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28))\n",
    "\n",
    "    def forward(self, x):\n",
    "        # in lightning, forward defines the prediction/inference actions\n",
    "        embedding = self.encoder(x)\n",
    "        return embedding\n",
    "\n",
    "    def training_step(self, batch, batch_idx):\n",
    "        # training_step defines the train loop. It is independent of forward\n",
    "        x, _ = batch\n",
    "        x = x.view(x.size(0), -1)\n",
    "        z = self.encoder(x)\n",
    "        x_hat = self.decoder(z)\n",
    "        loss = F.mse_loss(x_hat, x)\n",
    "        self.log(\"train_loss\", loss)\n",
    "        return loss\n",
    "\n",
    "    def configure_optimizers(self):\n",
    "        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)\n",
    "        return optimizer\n",
    "\n",
    "\n",
    "# -------------------\n",
    "# Step 2: Define data\n",
    "# -------------------\n",
    "dataset = tv.datasets.MNIST(\"./data/\", download=True, transform=tv.transforms.ToTensor())\n",
    "train, val = data.random_split(dataset, [55000, 5000])\n",
    "\n",
    "# -------------------\n",
    "# Step 3: Train\n",
    "# -------------------\n",
    "autoencoder = LitAutoEncoder()\n",
    "trainer = pl.Trainer(fast_dev_run=True)\n",
    "trainer.fit(autoencoder, data.DataLoader(train), data.DataLoader(val))"
   ]
  }
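  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since `forward` returns the encoder output, the trained autoencoder maps an image to its 3-dimensional embedding in a single call. A sketch, assuming the `autoencoder` and the `val` split from the cell above:\n",
    "\n",
    "```python\n",
    "autoencoder.eval()\n",
    "with torch.no_grad():\n",
    "    x, _ = val[0]                     # one MNIST image, shape (1, 28, 28)\n",
    "    z = autoencoder(x.view(1, -1))    # embedding, shape (1, 3)\n",
    "    x_hat = autoencoder.decoder(z)    # reconstruction, shape (1, 784)\n",
    "```"
   ]
  }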
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "myenv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.19"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
