{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2ca7bf1c",
   "metadata": {},
   "source": [
    "\n",
    "# 6-3, Training Models with GPU"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cadcb375",
   "metadata": {},
   "source": [
    "Training a deep learning model is often very time-consuming: a few hours per model is routine, several days is common, and occasionally training takes weeks.\n",
    "\n",
    "The training time is mainly spent on two parts: data preparation and parameter iteration.\n",
    "\n",
    "When data preparation is still the main bottleneck of training time, we can use more processes to prepare the data.\n",
    "\n",
    "When parameter iteration becomes the main bottleneck, the usual approach is to accelerate it with a GPU."
   ]
  },
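  {
   "cell_type": "markdown",
   "id": "dl-workers-note",
   "metadata": {},
   "source": [
    "For the data-preparation case, the usual lever in PyTorch is the num_workers argument of DataLoader. The snippet below is a minimal sketch (the dataset and its sizes are made up purely for illustration) of how extra worker processes are enabled:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dl-workers-demo",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch.utils.data import DataLoader, TensorDataset\n",
    "\n",
    "# a toy dataset: 1000 samples with 10 features each (made-up sizes)\n",
    "ds = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))\n",
    "\n",
    "# num_workers > 0 spawns that many worker processes to prepare batches in parallel\n",
    "dl = DataLoader(ds, batch_size=100, shuffle=True, num_workers=2)\n",
    "\n",
    "features, labels = next(iter(dl))\n",
    "print(features.shape, labels.shape)\n"
   ]
  },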
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "14993aea",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch \n",
    "import torchkeras \n",
    "\n",
    "print(\"torch.__version__ = \",torch.__version__)\n",
    "print(\"torchkeras.__version__ = \",torchkeras.__version__)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a67c1c4c",
   "metadata": {},
   "source": [
    "Note: the code in this section runs correctly only on a machine with a GPU.\n",
    "\n",
    "If you don't have one, we recommend using the GPUs on the kaggle platform.\n",
    "\n",
    "\n",
    "You can run the example code directly on kaggle via the link below:\n",
    "\n",
    "https://www.kaggle.com/lyhue1991/pytorch-gpu-examples\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3ed179b3",
   "metadata": {},
   "source": [
    "Using a GPU to accelerate a model in PyTorch is very simple: just move the model and the data onto the GPU. The core code is only a few lines.\n",
    "\n",
    "```python\n",
    "# define the model\n",
    "... \n",
    "\n",
    "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
    "model.to(device) # move the model to cuda\n",
    "\n",
    "# train the model\n",
    "...\n",
    "\n",
    "features = features.to(device) # move the data to cuda\n",
    "labels = labels.to(device) # or: labels = labels.cuda() if torch.cuda.is_available() else labels\n",
    "...\n",
    "```\n",
    "\n",
    "Training with multiple GPUs is just as simple: wrap the model as a data-parallel model. Once the model is moved to the GPUs, a replica is kept on each GPU, and each batch of data is split evenly across the GPUs for training. The core code is as follows.\n",
    "\n",
    "```python\n",
    "# define the model\n",
    "... \n",
    "\n",
    "if torch.cuda.device_count() > 1:\n",
    "    model = nn.DataParallel(model) # wrap as a data-parallel model\n",
    "\n",
    "# train the model\n",
    "...\n",
    "features = features.to(device) # move the data to cuda\n",
    "labels = labels.to(device) # or: labels = labels.cuda() if torch.cuda.is_available() else labels\n",
    "...\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cfa2354a",
   "metadata": {},
   "source": [
    "## 0. Overview of GPU Operations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "98f0023a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch \n",
    "from torch import nn \n",
    "\n",
    "# 1. Check GPU info\n",
    "if_cuda = torch.cuda.is_available()\n",
    "print(\"if_cuda=\",if_cuda)\n",
    "\n",
    "gpu_count = torch.cuda.device_count()\n",
    "print(\"gpu_count=\",gpu_count)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ae2a699e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 2. Move a tensor between GPU and CPU\n",
    "tensor = torch.rand((100,100))\n",
    "tensor_gpu = tensor.to(\"cuda:0\") # or: tensor_gpu = tensor.cuda()\n",
    "print(tensor_gpu.device)\n",
    "print(tensor_gpu.is_cuda)\n",
    "\n",
    "tensor_cpu = tensor_gpu.to(\"cpu\") # or: tensor_cpu = tensor_gpu.cpu() \n",
    "print(tensor_cpu.device)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7daa73c5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 3. Move all tensors of a model to the GPU\n",
    "net = nn.Linear(2,1)\n",
    "print(next(net.parameters()).is_cuda)\n",
    "net.to(\"cuda:0\") # moves every parameter tensor to the GPU in place; note that reassigning net = net.to(\"cuda:0\") is not required\n",
    "print(next(net.parameters()).is_cuda)\n",
    "print(next(net.parameters()).device)\n"
   ]
  },
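  {
   "cell_type": "markdown",
   "id": "to-semantics-note",
   "metadata": {},
   "source": [
    "A detail worth spelling out (a small CPU-only sketch, using dtype instead of device so it runs anywhere): Tensor.to returns a new tensor and leaves the original untouched, while Module.to converts the module's parameters in place, which is why net.to(...) above needs no reassignment.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "to-semantics-demo",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "t = torch.zeros(3)\n",
    "t64 = t.to(torch.float64)  # Tensor.to returns a NEW tensor\n",
    "print(t.dtype, t64.dtype)  # the original keeps its dtype\n",
    "\n",
    "net = nn.Linear(2, 1)\n",
    "net.to(torch.float64)      # Module.to converts the parameters IN PLACE\n",
    "print(next(net.parameters()).dtype)\n"
   ]
  },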
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "029cd019",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 4. Create a data-parallel model that supports multiple GPUs\n",
    "linear = nn.Linear(2,1)\n",
    "print(next(linear.parameters()).device)\n",
    "\n",
    "model = nn.DataParallel(linear)\n",
    "print(model.device_ids)\n",
    "print(next(model.module.parameters()).device) \n",
    "\n",
    "# note: when saving parameters, save those of model.module\n",
    "torch.save(model.module.state_dict(), \"model_parameter.pt\") \n",
    "\n",
    "linear = nn.Linear(2,1)\n",
    "linear.load_state_dict(torch.load(\"model_parameter.pt\")) \n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "91dde858",
   "metadata": {},
   "source": [
    "## 1. Matrix Multiplication Example"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1f750b06",
   "metadata": {},
   "source": [
    "Below we perform the same matrix multiplication on the CPU and on the GPU and compare their speed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a729766e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "import torch \n",
    "from torch import nn"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f4b0ef82",
   "metadata": {},
   "outputs": [],
   "source": [
    "# on the CPU\n",
    "a = torch.rand((10000,200))\n",
    "b = torch.rand((200,10000))\n",
    "tic = time.time()\n",
    "c = torch.matmul(a,b)\n",
    "toc = time.time()\n",
    "\n",
    "print(toc-tic)\n",
    "print(a.device)\n",
    "print(b.device)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "893d1a27",
   "metadata": {},
   "outputs": [],
   "source": [
    "# on the GPU\n",
    "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
    "a = torch.rand((10000,200),device = device) # a tensor can be created directly on the GPU\n",
    "b = torch.rand((200,10000)) # or created on the CPU first and then moved to the GPU\n",
    "b = b.to(device) # or: b = b.cuda() if torch.cuda.is_available() else b \n",
    "tic = time.time()\n",
    "c = torch.matmul(a,b)\n",
    "toc = time.time()\n",
    "print(toc-tic)\n",
    "print(a.device)\n",
    "print(b.device)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1babcdf5",
   "metadata": {},
   "source": [
    "## 2. Linear Regression Example"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "90b7b1e8",
   "metadata": {},
   "source": [
    "Below we compare the efficiency of training a linear regression model on the CPU and on the GPU."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eb77c320",
   "metadata": {},
   "source": [
    "### 1. Using the CPU"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cfecdb45",
   "metadata": {},
   "outputs": [],
   "source": [
    "# prepare data\n",
    "n = 1000000 # number of samples\n",
    "\n",
    "X = 10*torch.rand([n,2])-5.0  # torch.rand draws from a uniform distribution \n",
    "w0 = torch.tensor([[2.0,-3.0]])\n",
    "b0 = torch.tensor([[10.0]])\n",
    "Y = X@w0.t() + b0 + torch.normal( 0.0,2.0,size = [n,1])  # @ is matrix multiplication; add Gaussian noise"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "add72b0d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# define the model\n",
    "class LinearRegression(nn.Module): \n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.w = nn.Parameter(torch.randn_like(w0))\n",
    "        self.b = nn.Parameter(torch.zeros_like(b0))\n",
    "    # forward pass\n",
    "    def forward(self,x): \n",
    "        return x@self.w.t() + self.b\n",
    "        \n",
    "linear = LinearRegression() \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "091eef39",
   "metadata": {},
   "outputs": [],
   "source": [
    "# train the model\n",
    "optimizer = torch.optim.Adam(linear.parameters(),lr = 0.1)\n",
    "loss_fn = nn.MSELoss()\n",
    "\n",
    "def train(epochs):\n",
    "    tic = time.time()\n",
    "    for epoch in range(epochs):\n",
    "        optimizer.zero_grad()\n",
    "        Y_pred = linear(X) \n",
    "        loss = loss_fn(Y_pred,Y)\n",
    "        loss.backward() \n",
    "        optimizer.step()\n",
    "        if epoch%50==0:\n",
    "            print({\"epoch\":epoch,\"loss\":loss.item()})\n",
    "    toc = time.time()\n",
    "    print(\"time used:\",toc-tic)\n",
    "\n",
    "train(500)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0099bc25",
   "metadata": {},
   "source": [
    "### 2. Using the GPU"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dc8b66cb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# prepare data\n",
    "n = 1000000 # number of samples\n",
    "\n",
    "X = 10*torch.rand([n,2])-5.0  # torch.rand draws from a uniform distribution \n",
    "w0 = torch.tensor([[2.0,-3.0]])\n",
    "b0 = torch.tensor([[10.0]])\n",
    "Y = X@w0.t() + b0 + torch.normal( 0.0,2.0,size = [n,1])  # @ is matrix multiplication; add Gaussian noise\n",
    "\n",
    "# move the data to the GPU\n",
    "print(\"torch.cuda.is_available() = \",torch.cuda.is_available())\n",
    "X = X.cuda()\n",
    "Y = Y.cuda()\n",
    "print(\"X.device:\",X.device)\n",
    "print(\"Y.device:\",Y.device)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4f368c13",
   "metadata": {},
   "outputs": [],
   "source": [
    "# define the model\n",
    "class LinearRegression(nn.Module): \n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.w = nn.Parameter(torch.randn_like(w0))\n",
    "        self.b = nn.Parameter(torch.zeros_like(b0))\n",
    "    # forward pass\n",
    "    def forward(self,x): \n",
    "        return x@self.w.t() + self.b\n",
    "        \n",
    "linear = LinearRegression() \n",
    "\n",
    "# move the model to the GPU\n",
    "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
    "linear.to(device)\n",
    "\n",
    "# check whether the model has been moved to the GPU\n",
    "print(\"if on cuda:\",next(linear.parameters()).is_cuda)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "46296b4c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# train the model\n",
    "optimizer = torch.optim.Adam(linear.parameters(),lr = 0.1)\n",
    "loss_fn = nn.MSELoss()\n",
    "\n",
    "def train(epochs):\n",
    "    tic = time.time()\n",
    "    for epoch in range(epochs):\n",
    "        optimizer.zero_grad()\n",
    "        Y_pred = linear(X) \n",
    "        loss = loss_fn(Y_pred,Y)\n",
    "        loss.backward() \n",
    "        optimizer.step()\n",
    "        if epoch%50==0:\n",
    "            print({\"epoch\":epoch,\"loss\":loss.item()})\n",
    "    toc = time.time()\n",
    "    print(\"time used:\",toc-tic)\n",
    "    \n",
    "train(500)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "978fdb3f",
   "metadata": {},
   "source": [
    "## 3. Image Classification Example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "34de7cf7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch \n",
    "from torch import nn \n",
    "\n",
    "import torchvision \n",
    "from torchvision import transforms"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6130e226",
   "metadata": {},
   "outputs": [],
   "source": [
    "transform = transforms.Compose([transforms.ToTensor()])\n",
    "\n",
    "ds_train = torchvision.datasets.MNIST(root=\"mnist/\",train=True,download=True,transform=transform)\n",
    "ds_val = torchvision.datasets.MNIST(root=\"mnist/\",train=False,download=True,transform=transform)\n",
    "\n",
    "dl_train =  torch.utils.data.DataLoader(ds_train, batch_size=128, shuffle=True, num_workers=4)\n",
    "dl_val =  torch.utils.data.DataLoader(ds_val, batch_size=128, shuffle=False, num_workers=4)\n",
    "\n",
    "print(len(ds_train))\n",
    "print(len(ds_val))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "53c17ca9",
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_net():\n",
    "    net = nn.Sequential()\n",
    "    net.add_module(\"conv1\",nn.Conv2d(in_channels=1,out_channels=32,kernel_size = 3))\n",
    "    net.add_module(\"pool1\",nn.MaxPool2d(kernel_size = 2,stride = 2))\n",
    "    net.add_module(\"conv2\",nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5))\n",
    "    net.add_module(\"pool2\",nn.MaxPool2d(kernel_size = 2,stride = 2))\n",
    "    net.add_module(\"dropout\",nn.Dropout2d(p = 0.1))\n",
    "    net.add_module(\"adaptive_pool\",nn.AdaptiveMaxPool2d((1,1)))\n",
    "    net.add_module(\"flatten\",nn.Flatten())\n",
    "    net.add_module(\"linear1\",nn.Linear(64,32))\n",
    "    net.add_module(\"relu\",nn.ReLU())\n",
    "    net.add_module(\"linear2\",nn.Linear(32,10))\n",
    "    return net\n",
    "\n",
    "net = create_net()\n",
    "print(net)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a93868ff",
   "metadata": {},
   "source": [
    "### 1. Training on the CPU"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "323629db",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os,sys,time\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import datetime \n",
    "from tqdm import tqdm \n",
    "\n",
    "import torch\n",
    "from torch import nn \n",
    "from copy import deepcopy\n",
    "from torchmetrics import Accuracy\n",
    "# note: for multi-class tasks we use the metrics from torchmetrics; for binary tasks, those from torchkeras.metrics\n",
    "\n",
    "def printlog(info):\n",
    "    nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n",
    "    print(\"\\n\"+\"==========\"*8 + \"%s\"%nowtime)\n",
    "    print(str(info)+\"\\n\")\n",
    "    \n",
    "\n",
    "net = create_net() \n",
    "\n",
    "loss_fn = nn.CrossEntropyLoss()\n",
    "optimizer= torch.optim.Adam(net.parameters(),lr = 0.01)   \n",
    "metrics_dict = {\"acc\":Accuracy()}\n",
    "\n",
    "epochs = 20 \n",
    "ckpt_path='checkpoint.pt'\n",
    "\n",
    "# early-stopping settings\n",
    "monitor=\"val_acc\"\n",
    "patience=5\n",
    "mode=\"max\"\n",
    "\n",
    "history = {}\n",
    "\n",
    "for epoch in range(1, epochs+1):\n",
    "    printlog(\"Epoch {0} / {1}\".format(epoch, epochs))\n",
    "\n",
    "    # 1, train -------------------------------------------------  \n",
    "    net.train()\n",
    "    \n",
    "    total_loss,step = 0,0\n",
    "    \n",
    "    loop = tqdm(enumerate(dl_train), total =len(dl_train))\n",
    "    train_metrics_dict = deepcopy(metrics_dict) \n",
    "    \n",
    "    for i, batch in loop: \n",
    "        \n",
    "        features,labels = batch\n",
    "        #forward\n",
    "        preds = net(features)\n",
    "        loss = loss_fn(preds,labels)\n",
    "        \n",
    "        #backward\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        optimizer.zero_grad()\n",
    "            \n",
    "        #metrics\n",
    "        step_metrics = {\"train_\"+name:metric_fn(preds, labels).item() \n",
    "                        for name,metric_fn in train_metrics_dict.items()}\n",
    "        \n",
    "        step_log = dict({\"train_loss\":loss.item()},**step_metrics)\n",
    "\n",
    "        total_loss += loss.item()\n",
    "        \n",
    "        step+=1\n",
    "        if i!=len(dl_train)-1:\n",
    "            loop.set_postfix(**step_log)\n",
    "        else:\n",
    "            epoch_loss = total_loss/step\n",
    "            epoch_metrics = {\"train_\"+name:metric_fn.compute().item() \n",
    "                             for name,metric_fn in train_metrics_dict.items()}\n",
    "            epoch_log = dict({\"train_loss\":epoch_loss},**epoch_metrics)\n",
    "            loop.set_postfix(**epoch_log)\n",
    "\n",
    "            for name,metric_fn in train_metrics_dict.items():\n",
    "                metric_fn.reset()\n",
    "                \n",
    "    for name, metric in epoch_log.items():\n",
    "        history[name] = history.get(name, []) + [metric]\n",
    "        \n",
    "\n",
    "    # 2, validate -------------------------------------------------\n",
    "    net.eval()\n",
    "    \n",
    "    total_loss,step = 0,0\n",
    "    loop = tqdm(enumerate(dl_val), total =len(dl_val))\n",
    "    \n",
    "    val_metrics_dict = deepcopy(metrics_dict) \n",
    "    \n",
    "    with torch.no_grad():\n",
    "        for i, batch in loop: \n",
    "\n",
    "            features,labels = batch\n",
    "            \n",
    "            #forward\n",
    "            preds = net(features)\n",
    "            loss = loss_fn(preds,labels)\n",
    "\n",
    "            #metrics\n",
    "            step_metrics = {\"val_\"+name:metric_fn(preds, labels).item() \n",
    "                            for name,metric_fn in val_metrics_dict.items()}\n",
    "\n",
    "            step_log = dict({\"val_loss\":loss.item()},**step_metrics)\n",
    "\n",
    "            total_loss += loss.item()\n",
    "            step+=1\n",
    "            if i!=len(dl_val)-1:\n",
    "                loop.set_postfix(**step_log)\n",
    "            else:\n",
    "                epoch_loss = (total_loss/step)\n",
    "                epoch_metrics = {\"val_\"+name:metric_fn.compute().item() \n",
    "                                 for name,metric_fn in val_metrics_dict.items()}\n",
    "                epoch_log = dict({\"val_loss\":epoch_loss},**epoch_metrics)\n",
    "                loop.set_postfix(**epoch_log)\n",
    "\n",
    "                for name,metric_fn in val_metrics_dict.items():\n",
    "                    metric_fn.reset()\n",
    "                    \n",
    "    epoch_log[\"epoch\"] = epoch           \n",
    "    for name, metric in epoch_log.items():\n",
    "        history[name] = history.get(name, []) + [metric]\n",
    "\n",
    "    # 3, early-stopping -------------------------------------------------\n",
    "    arr_scores = history[monitor]\n",
    "    best_score_idx = np.argmax(arr_scores) if mode==\"max\" else np.argmin(arr_scores)\n",
    "    if best_score_idx==len(arr_scores)-1:\n",
    "        torch.save(net.state_dict(),ckpt_path)\n",
    "        print(\"<<<<<< reach best {0} : {1} >>>>>>\".format(monitor,\n",
    "             arr_scores[best_score_idx]),file=sys.stderr)\n",
    "    if len(arr_scores)-best_score_idx>patience:\n",
    "        print(\"<<<<<< {} without improvement in {} epoch, early stopping >>>>>>\".format(\n",
    "            monitor,patience),file=sys.stderr)\n",
    "        break \n",
    "    net.load_state_dict(torch.load(ckpt_path))\n",
    "    \n",
    "dfhistory = pd.DataFrame(history)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b0308b51",
   "metadata": {},
   "source": [
    "```\n",
    "================================================================================2022-07-17 15:07:03\n",
    "Epoch 1 / 20\n",
    "\n",
    "100%|██████████| 469/469 [00:57<00:00,  8.15it/s, train_acc=0.909, train_loss=0.279] \n",
    "100%|██████████| 79/79 [00:04<00:00, 16.80it/s, val_acc=0.956, val_loss=0.147] \n",
    "\n",
    "================================================================================2022-07-17 15:08:06\n",
    "Epoch 2 / 20\n",
    "\n",
    "\n",
    "<<<<<< reach best val_acc : 0.9556000232696533 >>>>>>\n",
    "100%|██████████| 469/469 [00:58<00:00,  8.03it/s, train_acc=0.968, train_loss=0.105] \n",
    "100%|██████████| 79/79 [00:04<00:00, 18.59it/s, val_acc=0.977, val_loss=0.0849]\n",
    "\n",
    "================================================================================2022-07-17 15:09:09\n",
    "Epoch 3 / 20\n",
    "\n",
    "\n",
    "<<<<<< reach best val_acc : 0.9765999913215637 >>>>>>\n",
    "100%|██████████| 469/469 [00:58<00:00,  8.07it/s, train_acc=0.974, train_loss=0.0882]\n",
    "100%|██████████| 79/79 [00:04<00:00, 17.13it/s, val_acc=0.984, val_loss=0.0554] \n",
    "<<<<<< reach best val_acc : 0.9843000173568726 >>>>>>\n",
    "\n",
    "================================================================================2022-07-17 15:10:12\n",
    "Epoch 4 / 20\n",
    "\n",
    "100%|██████████| 469/469 [01:01<00:00,  7.63it/s, train_acc=0.976, train_loss=0.0814] \n",
    "100%|██████████| 79/79 [00:04<00:00, 16.34it/s, val_acc=0.979, val_loss=0.0708]\n",
    "\n",
    "================================================================================2022-07-17 15:11:18\n",
    "Epoch 5 / 20\n",
    "\n",
    "\n",
    "100%|██████████| 469/469 [01:03<00:00,  7.42it/s, train_acc=0.974, train_loss=0.0896]\n",
    "100%|██████████| 79/79 [00:05<00:00, 14.06it/s, val_acc=0.979, val_loss=0.076] \n",
    "\n",
    "================================================================================2022-07-17 15:12:28\n",
    "Epoch 6 / 20\n",
    "\n",
    "\n",
    "100%|██████████| 469/469 [01:00<00:00,  7.77it/s, train_acc=0.972, train_loss=0.0937]\n",
    "100%|██████████| 79/79 [00:04<00:00, 17.45it/s, val_acc=0.976, val_loss=0.0787] \n",
    "\n",
    "================================================================================2022-07-17 15:13:33\n",
    "Epoch 7 / 20\n",
    "\n",
    "\n",
    "100%|██████████| 469/469 [01:01<00:00,  7.63it/s, train_acc=0.974, train_loss=0.0858]\n",
    "100%|██████████| 79/79 [00:05<00:00, 14.50it/s, val_acc=0.976, val_loss=0.082] \n",
    "\n",
    "================================================================================2022-07-17 15:14:40\n",
    "Epoch 8 / 20\n",
    "\n",
    "\n",
    "100%|██████████| 469/469 [00:59<00:00,  7.85it/s, train_acc=0.972, train_loss=0.0944]\n",
    "100%|██████████| 79/79 [00:04<00:00, 17.21it/s, val_acc=0.982, val_loss=0.062] \n",
    "<<<<<< val_acc without improvement in 5 epoch, early stopping >>>>>>\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6c88222e",
   "metadata": {},
   "source": [
    "On the CPU, each epoch takes roughly one minute."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "30f523ad",
   "metadata": {},
   "source": [
    "### 2. Training on the GPU"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "53792236",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os,sys,time\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import datetime \n",
    "from tqdm import tqdm \n",
    "\n",
    "import torch\n",
    "from torch import nn \n",
    "from copy import deepcopy\n",
    "from torchmetrics import Accuracy\n",
    "# note: for multi-class tasks we use the metrics from torchmetrics; for binary tasks, those from torchkeras.metrics\n",
    "\n",
    "def printlog(info):\n",
    "    nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n",
    "    print(\"\\n\"+\"==========\"*8 + \"%s\"%nowtime)\n",
    "    print(str(info)+\"\\n\")\n",
    "    \n",
    "net = create_net() \n",
    "\n",
    "\n",
    "loss_fn = nn.CrossEntropyLoss()\n",
    "optimizer= torch.optim.Adam(net.parameters(),lr = 0.01)   \n",
    "metrics_dict = {\"acc\":Accuracy()}\n",
    "\n",
    "\n",
    "# ========================= move the model to the GPU ==============================\n",
    "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
    "net.to(device)\n",
    "loss_fn.to(device)\n",
    "for name,fn in metrics_dict.items():\n",
    "    fn.to(device)\n",
    "# ====================================================================\n",
    "\n",
    "\n",
    "epochs = 20 \n",
    "ckpt_path='checkpoint.pt'\n",
    "\n",
    "# early-stopping settings\n",
    "monitor=\"val_acc\"\n",
    "patience=5\n",
    "mode=\"max\"\n",
    "\n",
    "history = {}\n",
    "\n",
    "for epoch in range(1, epochs+1):\n",
    "    printlog(\"Epoch {0} / {1}\".format(epoch, epochs))\n",
    "\n",
    "    # 1, train -------------------------------------------------  \n",
    "    net.train()\n",
    "    \n",
    "    total_loss,step = 0,0\n",
    "    \n",
    "    loop = tqdm(enumerate(dl_train), total =len(dl_train))\n",
    "    train_metrics_dict = deepcopy(metrics_dict) \n",
    "    \n",
    "    for i, batch in loop: \n",
    "        \n",
    "        features,labels = batch\n",
    "        \n",
    "        # ========================= move the data to the GPU ==============================\n",
    "        features = features.to(device)\n",
    "        labels = labels.to(device)\n",
    "        # ====================================================================\n",
    "        \n",
    "        #forward\n",
    "        preds = net(features)\n",
    "        loss = loss_fn(preds,labels)\n",
    "        \n",
    "        #backward\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        optimizer.zero_grad()\n",
    "            \n",
    "        #metrics\n",
    "        step_metrics = {\"train_\"+name:metric_fn(preds, labels).item() \n",
    "                        for name,metric_fn in train_metrics_dict.items()}\n",
    "        \n",
    "        step_log = dict({\"train_loss\":loss.item()},**step_metrics)\n",
    "\n",
    "        total_loss += loss.item()\n",
    "        \n",
    "        step+=1\n",
    "        if i!=len(dl_train)-1:\n",
    "            loop.set_postfix(**step_log)\n",
    "        else:\n",
    "            epoch_loss = total_loss/step\n",
    "            epoch_metrics = {\"train_\"+name:metric_fn.compute().item() \n",
    "                             for name,metric_fn in train_metrics_dict.items()}\n",
    "            epoch_log = dict({\"train_loss\":epoch_loss},**epoch_metrics)\n",
    "            loop.set_postfix(**epoch_log)\n",
    "\n",
    "            for name,metric_fn in train_metrics_dict.items():\n",
    "                metric_fn.reset()\n",
    "                \n",
    "    for name, metric in epoch_log.items():\n",
    "        history[name] = history.get(name, []) + [metric]\n",
    "        \n",
    "\n",
    "    # 2, validate -------------------------------------------------\n",
    "    net.eval()\n",
    "    \n",
    "    total_loss,step = 0,0\n",
    "    loop = tqdm(enumerate(dl_val), total =len(dl_val))\n",
    "    \n",
    "    val_metrics_dict = deepcopy(metrics_dict) \n",
    "    \n",
    "    with torch.no_grad():\n",
    "        for i, batch in loop: \n",
    "\n",
    "            features,labels = batch\n",
    "            \n",
    "            # ========================= move the data to the GPU ==============================\n",
    "            features = features.to(device)\n",
    "            labels = labels.to(device)\n",
    "            # ====================================================================\n",
    "            \n",
    "            #forward\n",
    "            preds = net(features)\n",
    "            loss = loss_fn(preds,labels)\n",
    "\n",
    "            #metrics\n",
    "            step_metrics = {\"val_\"+name:metric_fn(preds, labels).item() \n",
    "                            for name,metric_fn in val_metrics_dict.items()}\n",
    "\n",
    "            step_log = dict({\"val_loss\":loss.item()},**step_metrics)\n",
    "\n",
    "            total_loss += loss.item()\n",
    "            step+=1\n",
    "            if i!=len(dl_val)-1:\n",
    "                loop.set_postfix(**step_log)\n",
    "            else:\n",
    "                epoch_loss = (total_loss/step)\n",
    "                epoch_metrics = {\"val_\"+name:metric_fn.compute().item() \n",
    "                                 for name,metric_fn in val_metrics_dict.items()}\n",
    "                epoch_log = dict({\"val_loss\":epoch_loss},**epoch_metrics)\n",
    "                loop.set_postfix(**epoch_log)\n",
    "\n",
    "                for name,metric_fn in val_metrics_dict.items():\n",
    "                    metric_fn.reset()\n",
    "                    \n",
    "    epoch_log[\"epoch\"] = epoch           \n",
    "    for name, metric in epoch_log.items():\n",
    "        history[name] = history.get(name, []) + [metric]\n",
    "\n",
    "    # 3, early-stopping -------------------------------------------------\n",
    "    arr_scores = history[monitor]\n",
    "    best_score_idx = np.argmax(arr_scores) if mode==\"max\" else np.argmin(arr_scores)\n",
    "    if best_score_idx==len(arr_scores)-1:\n",
    "        torch.save(net.state_dict(),ckpt_path)\n",
    "        print(\"<<<<<< reach best {0} : {1} >>>>>>\".format(monitor,\n",
    "             arr_scores[best_score_idx]),file=sys.stderr)\n",
    "    if len(arr_scores)-best_score_idx>patience:\n",
    "        print(\"<<<<<< {} without improvement in {} epoch, early stopping >>>>>>\".format(\n",
    "            monitor,patience),file=sys.stderr)\n",
    "        break \n",
    "    net.load_state_dict(torch.load(ckpt_path))\n",
    "    \n",
    "dfhistory = pd.DataFrame(history)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b7d6009c",
   "metadata": {},
   "source": [
    "```\n",
    "================================================================================2022-07-17 15:20:40\n",
    "Epoch 1 / 20\n",
    "\n",
    "100%|██████████| 469/469 [00:12<00:00, 37.07it/s, train_acc=0.89, train_loss=0.336]  \n",
    "100%|██████████| 79/79 [00:02<00:00, 37.31it/s, val_acc=0.95, val_loss=0.16]   \n",
    "\n",
    "================================================================================2022-07-17 15:20:55\n",
    "Epoch 2 / 20\n",
    "\n",
    "\n",
    "<<<<<< reach best val_acc : 0.9498000144958496 >>>>>>\n",
    "100%|██████████| 469/469 [00:12<00:00, 37.04it/s, train_acc=0.964, train_loss=0.115] \n",
    "100%|██████████| 79/79 [00:01<00:00, 43.36it/s, val_acc=0.972, val_loss=0.0909]\n",
    "\n",
    "================================================================================2022-07-17 15:21:10\n",
    "Epoch 3 / 20\n",
    "\n",
    "\n",
    "<<<<<< reach best val_acc : 0.9721999764442444 >>>>>>\n",
    "100%|██████████| 469/469 [00:12<00:00, 38.05it/s, train_acc=0.971, train_loss=0.0968]\n",
    "100%|██████████| 79/79 [00:01<00:00, 42.10it/s, val_acc=0.974, val_loss=0.0878] \n",
    "\n",
    "================================================================================2022-07-17 15:21:24\n",
    "Epoch 4 / 20\n",
    "\n",
    "<<<<<< reach best val_acc : 0.974399983882904 >>>>>>\n",
    "100%|██████████| 469/469 [00:13<00:00, 35.56it/s, train_acc=0.973, train_loss=0.089] \n",
    "100%|██████████| 79/79 [00:02<00:00, 38.16it/s, val_acc=0.982, val_loss=0.0585]\n",
    "\n",
    "================================================================================2022-07-17 15:21:40\n",
    "Epoch 5 / 20\n",
    "\n",
    "\n",
    "<<<<<< reach best val_acc : 0.9822999835014343 >>>>>>\n",
    "100%|██████████| 469/469 [00:12<00:00, 36.80it/s, train_acc=0.977, train_loss=0.0803]\n",
    "100%|██████████| 79/79 [00:01<00:00, 42.38it/s, val_acc=0.976, val_loss=0.0791]\n",
    "\n",
    "================================================================================2022-07-17 15:21:55\n",
    "Epoch 6 / 20\n",
    "\n",
    "\n",
    "100%|██████████| 469/469 [00:13<00:00, 34.63it/s, train_acc=0.977, train_loss=0.0787]\n",
    "100%|██████████| 79/79 [00:02<00:00, 39.01it/s, val_acc=0.97, val_loss=0.105]   \n",
    "\n",
    "================================================================================2022-07-17 15:22:11\n",
    "Epoch 7 / 20\n",
    "\n",
    "\n",
    "100%|██████████| 469/469 [00:12<00:00, 37.39it/s, train_acc=0.975, train_loss=0.0871]\n",
    "100%|██████████| 79/79 [00:02<00:00, 39.16it/s, val_acc=0.984, val_loss=0.0611]\n",
    "\n",
    "================================================================================2022-07-17 15:22:26\n",
    "Epoch 8 / 20\n",
    "\n",
    "\n",
    "<<<<<< reach best val_acc : 0.9835000038146973 >>>>>>\n",
    "100%|██████████| 469/469 [00:13<00:00, 35.63it/s, train_acc=0.976, train_loss=0.0774] \n",
    "100%|██████████| 79/79 [00:01<00:00, 42.92it/s, val_acc=0.982, val_loss=0.0778] \n",
    "\n",
    "================================================================================2022-07-17 15:22:41\n",
    "Epoch 9 / 20\n",
    "\n",
    "\n",
    "100%|██████████| 469/469 [00:12<00:00, 37.96it/s, train_acc=0.976, train_loss=0.0819]\n",
    "100%|██████████| 79/79 [00:01<00:00, 42.99it/s, val_acc=0.981, val_loss=0.0652] \n",
    "\n",
    "================================================================================2022-07-17 15:22:56\n",
    "Epoch 10 / 20\n",
    "\n",
    "\n",
    "100%|██████████| 469/469 [00:13<00:00, 35.29it/s, train_acc=0.975, train_loss=0.0852]\n",
    "100%|██████████| 79/79 [00:01<00:00, 41.38it/s, val_acc=0.978, val_loss=0.0808]\n",
    "\n",
    "================================================================================2022-07-17 15:23:12\n",
    "Epoch 11 / 20\n",
    "\n",
    "\n",
    "100%|██████████| 469/469 [00:12<00:00, 38.77it/s, train_acc=0.975, train_loss=0.0863] \n",
    "100%|██████████| 79/79 [00:01<00:00, 42.71it/s, val_acc=0.983, val_loss=0.0665] \n",
    "\n",
    "================================================================================2022-07-17 15:23:26\n",
    "Epoch 12 / 20\n",
    "\n",
    "\n",
    "100%|██████████| 469/469 [00:12<00:00, 36.55it/s, train_acc=0.976, train_loss=0.0818]\n",
    "100%|██████████| 79/79 [00:02<00:00, 37.44it/s, val_acc=0.979, val_loss=0.0819]\n",
    "<<<<<< val_acc without improvement in 5 epoch, early stopping >>>>>>\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3fc1cc4f",
   "metadata": {},
   "source": [
    "With the GPU, each epoch takes only about 10 seconds, roughly a 6x speedup.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4a52c7b5",
   "metadata": {},
   "source": [
    "## 4. Using GPU with torchkeras.KerasModel"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4951f88d",
   "metadata": {},
   "source": [
    "As the example above shows, using a GPU in PyTorch is not complicated, but for those who train models all day, constantly moving the model and data back and forth is still tedious.\n",
    "\n",
    "It is easy to forget to move some tensor or some module, which then causes a runtime error.\n",
    "\n",
    "torchkeras.KerasModel was designed with this in mind: if a usable GPU exists in the environment, it is used automatically; otherwise training falls back to the CPU.\n",
    "\n",
    "By building on some basic features of accelerate, torchkeras.KerasModel switches between GPU and CPU in a very elegant way.\n",
    "\n",
    "See the torchkeras.KerasModel source code for the detailed implementation."
   ]
   ]
  },
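  {
   "cell_type": "markdown",
   "id": "7f2e91ab",
   "metadata": {},
   "source": [
    "The underlying idea is simply to let accelerate pick the device and move the model, optimizer and batches for you. The sketch below illustrates this pattern with a toy linear model; it is not the actual torchkeras source, and the model and data here are made up for demonstration:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch import nn\n",
    "from accelerate import Accelerator\n",
    "\n",
    "accelerator = Accelerator()  # chooses a GPU if one is available, else CPU\n",
    "\n",
    "model = nn.Linear(4, 2)\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n",
    "ds = torch.utils.data.TensorDataset(torch.randn(8, 4), torch.randint(0, 2, (8,)))\n",
    "dl = torch.utils.data.DataLoader(ds, batch_size=4)\n",
    "\n",
    "# prepare() moves the model and optimizer state to accelerator.device\n",
    "model, optimizer, dl = accelerator.prepare(model, optimizer, dl)\n",
    "\n",
    "loss_fn = nn.CrossEntropyLoss()\n",
    "for features, labels in dl:\n",
    "    # batches from a prepared dataloader already live on accelerator.device\n",
    "    loss = loss_fn(model(features), labels)\n",
    "    accelerator.backward(loss)  # replaces loss.backward()\n",
    "    optimizer.step()\n",
    "    optimizer.zero_grad()\n",
    "```"
   ]
  },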
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fcc49be8",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install torchkeras==3.2.3"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8ba42bb",
   "metadata": {},
   "outputs": [],
   "source": [
    "import accelerate\n",
    "accelerator = accelerate.Accelerator()\n",
    "print(accelerator.device)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "32ba126f",
   "metadata": {},
   "outputs": [],
   "source": [
    "from torchkeras import KerasModel \n",
    "from torchmetrics import Accuracy\n",
    "\n",
    "net = create_net() \n",
    "model = KerasModel(net,\n",
    "                   loss_fn=nn.CrossEntropyLoss(),\n",
    "                   metrics_dict = {\"acc\":Accuracy()},\n",
    "                   optimizer = torch.optim.Adam(net.parameters(),lr = 0.01)  )\n",
    "\n",
    "model.fit(\n",
    "    train_data = dl_train,\n",
    "    val_data= dl_val,\n",
    "    epochs=10,\n",
    "    patience=3,\n",
    "    monitor=\"val_acc\", \n",
    "    mode=\"max\")"
   ]
  },
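  {
   "cell_type": "markdown",
   "id": "9c4d2e17",
   "metadata": {},
   "source": [
    "A quick way to confirm that the network was really moved is to inspect where its parameters live; assuming a CUDA device was found, this prints cuda:0. The snippet below is a standalone sanity check with a toy module, not part of the torchkeras API:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "# every parameter of a module shares the device the module was moved to\n",
    "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
    "net = nn.Linear(3, 1).to(device)\n",
    "print(next(net.parameters()).device)\n",
    "```"
   ]
  },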
  {
   "cell_type": "markdown",
   "id": "a7e6d799",
   "metadata": {},
   "source": [
    "## 5. Using GPU with torchkeras.LightModel"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "230a863d",
   "metadata": {},
   "source": [
    "By building on pytorch_lightning,\n",
    "\n",
    "torchkeras.LightModel supports GPU training in a more explicit way.\n",
    "\n",
    "Beyond that, it also supports multi-GPU and even TPU training.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a075749c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from torchmetrics import Accuracy \n",
    "from torchkeras import LightModel \n",
    "\n",
    "net = create_net() \n",
    "model = LightModel(net,\n",
    "                   loss_fn=nn.CrossEntropyLoss(),\n",
    "                   metrics_dict = {\"acc\":Accuracy()},\n",
    "                   optimizer = torch.optim.Adam(net.parameters(),lr = 0.01) )\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "84d8dd0b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pytorch_lightning as pl     \n",
    "\n",
    "#1, set up callbacks\n",
    "model_ckpt = pl.callbacks.ModelCheckpoint(\n",
    "    monitor='val_acc',\n",
    "    save_top_k=1,\n",
    "    mode='max'\n",
    ")\n",
    "\n",
    "early_stopping = pl.callbacks.EarlyStopping(monitor = 'val_acc',\n",
    "                           patience=3,\n",
    "                           mode = 'max'\n",
    "                          )\n",
    "\n",
    "#2, configure the trainer\n",
    "\n",
    "# gpus=0 trains on the cpu; gpus=1 uses 1 gpu; gpus=2 uses 2 gpus; gpus=-1 uses all gpus;\n",
    "# gpus=[0,1] uses gpus 0 and 1; gpus=\"0,1,2,3\" uses gpus 0, 1, 2 and 3\n",
    "# tpu_cores=1 trains on 1 tpu core\n",
    "trainer = pl.Trainer(logger=True,\n",
    "                     min_epochs=3,max_epochs=20,\n",
    "                     gpus=1,\n",
    "                     callbacks = [model_ckpt,early_stopping],\n",
    "                     enable_progress_bar = True) \n",
    "\n",
    "\n",
    "#3, run the training loop\n",
    "trainer.fit(model,dl_train,dl_val)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fad61a3e",
   "metadata": {},
   "source": [
    "```\n",
    "================================================================================2022-07-18 00:18:14\n",
    "{'epoch': 0, 'val_loss': 2.31911301612854, 'val_acc': 0.0546875}\n",
    "<<<<<< reach best val_acc : 0.0546875 >>>>>>\n",
    "\n",
    "================================================================================2022-07-18 00:18:29\n",
    "{'epoch': 0, 'val_loss': 0.10364170372486115, 'val_acc': 0.9693999886512756}\n",
    "{'epoch': 0, 'train_loss': 0.31413567066192627, 'train_acc': 0.8975499868392944}\n",
    "<<<<<< reach best val_acc : 0.9693999886512756 >>>>>>\n",
    "\n",
    "================================================================================2022-07-18 00:18:43\n",
    "{'epoch': 1, 'val_loss': 0.0983758345246315, 'val_acc': 0.9710999727249146}\n",
    "{'epoch': 1, 'train_loss': 0.10680060088634491, 'train_acc': 0.9673333168029785}\n",
    "<<<<<< reach best val_acc : 0.9710999727249146 >>>>>>\n",
    "\n",
    "================================================================================2022-07-18 00:18:58\n",
    "{'epoch': 2, 'val_loss': 0.08315123617649078, 'val_acc': 0.9764999747276306}\n",
    "{'epoch': 2, 'train_loss': 0.09339822083711624, 'train_acc': 0.9722166657447815}\n",
    "<<<<<< reach best val_acc : 0.9764999747276306 >>>>>>\n",
    "\n",
    "================================================================================2022-07-18 00:19:13\n",
    "{'epoch': 3, 'val_loss': 0.06529796123504639, 'val_acc': 0.9799000024795532}\n",
    "{'epoch': 3, 'train_loss': 0.08487282693386078, 'train_acc': 0.9746000170707703}\n",
    "<<<<<< reach best val_acc : 0.9799000024795532 >>>>>>\n",
    "\n",
    "================================================================================2022-07-18 00:19:27\n",
    "{'epoch': 4, 'val_loss': 0.10162600129842758, 'val_acc': 0.9735000133514404}\n",
    "{'epoch': 4, 'train_loss': 0.08439336717128754, 'train_acc': 0.9746666550636292}\n",
    "\n",
    "================================================================================2022-07-18 00:19:42\n",
    "{'epoch': 5, 'val_loss': 0.0818500965833664, 'val_acc': 0.9789000153541565}\n",
    "{'epoch': 5, 'train_loss': 0.08107426762580872, 'train_acc': 0.9763166904449463}\n",
    "\n",
    "================================================================================2022-07-18 00:19:56\n",
    "{'epoch': 6, 'val_loss': 0.08046088367700577, 'val_acc': 0.979200005531311}\n",
    "{'epoch': 6, 'train_loss': 0.08173364400863647, 'train_acc': 0.9772833585739136}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "026f3faf",
   "metadata": {},
   "source": [
    "**If this book has helped you, remember to give the project a star ⭐️ and share it with your friends 😊!**\n",
    "\n",
    "If you would like to discuss the book's content with the author, feel free to leave a message under the WeChat public account \"算法美食屋\". The author's time and energy are limited, and replies will be made as circumstances allow.\n",
    "\n",
    "You can also reply with the keyword **加群** in the public account backend to join the reader discussion group.\n",
    "\n",
    "![算法美食屋logo.png](https://tva1.sinaimg.cn/large/e6c9d24egy1h41m2zugguj20k00b9q46.jpg)"
   ]
  }
 ],
 "metadata": {
  "jupytext": {
   "cell_metadata_filter": "-all",
   "formats": "ipynb,md",
   "main_language": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
