{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# The 600+ Upvote PyTorch Training Loop Template from Zhihu Now Supports Multi-GPU DDP"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "PyTorch generally leaves the training loop to the user, so there are as many training-loop styles as there are PyTorch users.\n",
     "\n",
     "From a practical standpoint, a good training loop should have the following properties.\n",
     "\n",
     "Clean, readable code: modular, easy to modify, short enough.\n",
     "\n",
     "Common conveniences built in: progress bars, evaluation metrics, early stopping.\n",
     "\n",
     "After repeated refinement and testing, I designed a Keras-style training loop for PyTorch that satisfies all of the above.\n",
     "\n",
     "It has been well received on Zhihu, where it has earned more than 600 upvotes so far.\n",
     "\n",
     "Full Zhihu answer: \"Is there a template for writing the train function in deep learning?\"\n",
     "\n",
     "https://www.zhihu.com/question/523869554/answer/2633479163\n",
     "\n",
     "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The training template above is also the core of torchkeras, a PyTorch model-training library I open-sourced.\n",
     "\n",
     "https://github.com/lyhue1991/torchkeras\n",
     "\n",
     "Drum roll: torchkeras has new features!\n",
     "\n",
     "By integrating HuggingFace's accelerate library, torchkeras now also supports multi-GPU DDP training and training on TPUs.\n",
     "\n",
     "The demo below shows how powerful and smooth this is.\n",
     "\n",
     "For a brief introduction to the accelerate library, see my Zhihu article:\n",
     "\n",
     "\"Digest the accelerate model-acceleration tool in 20 minutes 😋\"\n",
     "\n",
     "https://zhuanlan.zhihu.com/p/599274899"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-01-20T03:22:49.659676Z",
     "iopub.status.busy": "2023-01-20T03:22:49.659127Z",
     "iopub.status.idle": "2023-01-20T03:23:16.565851Z",
     "shell.execute_reply": "2023-01-20T03:23:16.564638Z",
     "shell.execute_reply.started": "2023-01-20T03:22:49.659593Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Collecting git+https://github.com/huggingface/accelerate\n",
      "  Cloning https://github.com/huggingface/accelerate to /tmp/pip-req-build-fwewjjgf\n",
      "  Running command git clone --filter=blob:none --quiet https://github.com/huggingface/accelerate /tmp/pip-req-build-fwewjjgf\n",
      "  Resolved https://github.com/huggingface/accelerate to commit b22f088ff662de748cf3f97c7ad8bf5a6dd6a7b9\n",
      "  Installing build dependencies ... \u001b[?25ldone\n",
      "\u001b[?25h  Getting requirements to build wheel ... \u001b[?25ldone\n",
      "\u001b[?25h  Preparing metadata (pyproject.toml) ... \u001b[?25ldone\n",
      "\u001b[?25hRequirement already satisfied: packaging>=20.0 in /opt/conda/lib/python3.7/site-packages (from accelerate==0.15.0.dev0) (22.0)\n",
      "Requirement already satisfied: psutil in /opt/conda/lib/python3.7/site-packages (from accelerate==0.15.0.dev0) (5.9.1)\n",
      "Requirement already satisfied: numpy>=1.17 in /opt/conda/lib/python3.7/site-packages (from accelerate==0.15.0.dev0) (1.21.6)\n",
      "Requirement already satisfied: torch>=1.4.0 in /opt/conda/lib/python3.7/site-packages (from accelerate==0.15.0.dev0) (1.11.0)\n",
      "Requirement already satisfied: pyyaml in /opt/conda/lib/python3.7/site-packages (from accelerate==0.15.0.dev0) (6.0)\n",
      "Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.7/site-packages (from torch>=1.4.0->accelerate==0.15.0.dev0) (4.1.1)\n",
      "Building wheels for collected packages: accelerate\n",
      "  Building wheel for accelerate (pyproject.toml) ... \u001b[?25ldone\n",
      "\u001b[?25h  Created wheel for accelerate: filename=accelerate-0.15.0.dev0-py3-none-any.whl size=195428 sha256=41a490004fc65e286cb18d6896c6b2fc93129c85d8a100b3c4a3f0543ded6064\n",
      "  Stored in directory: /tmp/pip-ephem-wheel-cache-u8mnojrw/wheels/81/c1/23/6068c1115888b4dd7da88f966c002c30840985c047f6cc1653\n",
      "Successfully built accelerate\n",
      "Installing collected packages: accelerate\n",
      "  Attempting uninstall: accelerate\n",
      "    Found existing installation: accelerate 0.12.0\n",
      "    Uninstalling accelerate-0.12.0:\n",
      "      Successfully uninstalled accelerate-0.12.0\n",
      "Successfully installed accelerate-0.15.0.dev0\n",
      "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\u001b[33m\n",
      "\u001b[0m"
     ]
    }
   ],
   "source": [
     "# Install the latest accelerate directly from GitHub\n",
    "!pip install git+https://github.com/huggingface/accelerate"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## 1. torchkeras Source Code Walkthrough"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The core code of torchkeras lives in the following file.\n",
    "\n",
    "https://github.com/lyhue1991/torchkeras/blob/master/torchkeras/kerasmodel.py\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "b1076dfc-b9ad-4769-8c92-a6c4dae69d19",
    "_uuid": "8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
   },
   "outputs": [],
   "source": [
    "import sys,datetime\n",
    "from tqdm import tqdm \n",
    "from copy import deepcopy\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import torch\n",
    "from accelerate import Accelerator\n",
    "\n",
    "def colorful(obj,color=\"red\", display_type=\"plain\"):\n",
    "    color_dict = {\"black\":\"30\", \"red\":\"31\", \"green\":\"32\", \"yellow\":\"33\",\n",
    "                    \"blue\":\"34\", \"purple\":\"35\",\"cyan\":\"36\",  \"white\":\"37\"}\n",
    "    display_type_dict = {\"plain\":\"0\",\"highlight\":\"1\",\"underline\":\"4\",\n",
    "                \"shine\":\"5\",\"inverse\":\"7\",\"invisible\":\"8\"}\n",
    "    s = str(obj)\n",
    "    color_code = color_dict.get(color,\"\")\n",
    "    display  = display_type_dict.get(display_type,\"\")\n",
    "    out = '\\033[{};{}m'.format(display,color_code)+s+'\\033[0m'\n",
    "    return out \n",
    "\n",
    "class StepRunner:\n",
    "    def __init__(self, net, loss_fn, accelerator, stage = \"train\", metrics_dict = None, \n",
    "                 optimizer = None, lr_scheduler = None\n",
    "                 ):\n",
    "        self.net,self.loss_fn,self.metrics_dict,self.stage = net,loss_fn,metrics_dict,stage\n",
    "        self.optimizer,self.lr_scheduler = optimizer,lr_scheduler\n",
    "        self.accelerator = accelerator\n",
    "    \n",
    "    def __call__(self, batch):\n",
    "        features,labels = batch \n",
    "        \n",
    "        #loss\n",
    "        preds = self.net(features)\n",
    "        loss = self.loss_fn(preds,labels)\n",
    "\n",
    "        #backward()\n",
    "        if self.optimizer is not None and self.stage==\"train\":\n",
    "            self.accelerator.backward(loss)\n",
    "            self.optimizer.step()\n",
    "            if self.lr_scheduler is not None:\n",
    "                self.lr_scheduler.step()\n",
    "            self.optimizer.zero_grad()\n",
    "        all_preds = self.accelerator.gather(preds)\n",
    "        all_labels = self.accelerator.gather(labels)\n",
    "        all_loss = self.accelerator.gather(loss).sum()\n",
    "            \n",
    "        #metrics\n",
    "        step_metrics = {self.stage+\"_\"+name:metric_fn(all_preds, all_labels).item() \n",
    "                        for name,metric_fn in self.metrics_dict.items()}\n",
    "        \n",
    "        return all_loss.item(),step_metrics\n",
    "\n",
    "class EpochRunner:\n",
    "    def __init__(self,steprunner):\n",
    "        self.steprunner = steprunner\n",
    "        self.stage = steprunner.stage\n",
    "        self.steprunner.net.train() if self.stage==\"train\" else self.steprunner.net.eval()\n",
    "        self.accelerator = self.steprunner.accelerator\n",
    "        \n",
    "    def __call__(self,dataloader):\n",
    "        total_loss,step = 0,0\n",
    "        loop = tqdm(enumerate(dataloader), \n",
    "                    total =len(dataloader),\n",
    "                    file=sys.stdout,\n",
    "                    disable=not self.accelerator.is_local_main_process,\n",
    "                    ncols = 100\n",
    "                   )\n",
    "        \n",
    "        for i, batch in loop: \n",
    "            if self.stage==\"train\":\n",
    "                loss, step_metrics = self.steprunner(batch)\n",
    "            else:\n",
    "                with torch.no_grad():\n",
    "                    loss, step_metrics = self.steprunner(batch)\n",
    "                    \n",
    "            step_log = dict({self.stage+\"_loss\":loss},**step_metrics)\n",
    "            total_loss += loss\n",
    "            step+=1\n",
    "            \n",
    "            if i!=len(dataloader)-1:\n",
    "                loop.set_postfix(**step_log)\n",
    "            else:\n",
    "                epoch_loss = total_loss/step\n",
    "                epoch_metrics = {self.stage+\"_\"+name:metric_fn.compute().item() \n",
    "                                 for name,metric_fn in self.steprunner.metrics_dict.items()}\n",
    "                epoch_log = dict({self.stage+\"_loss\":epoch_loss},**epoch_metrics)\n",
    "                loop.set_postfix(**epoch_log)\n",
    "                for name,metric_fn in self.steprunner.metrics_dict.items():\n",
    "                    metric_fn.reset()\n",
    "        return epoch_log\n",
    "    \n",
    "class KerasModel(torch.nn.Module):\n",
    "    def __init__(self,net,loss_fn,metrics_dict=None,optimizer=None,lr_scheduler = None):\n",
    "        super().__init__()\n",
    "        self.net,self.loss_fn = net, loss_fn\n",
     "        self.metrics_dict = torch.nn.ModuleDict(metrics_dict if metrics_dict is not None else {})\n",
    "        self.optimizer = optimizer if optimizer is not None else torch.optim.Adam(\n",
    "            self.net.parameters(), lr=1e-3)\n",
    "        self.lr_scheduler = lr_scheduler\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.net.forward(x)\n",
    "\n",
    "    def fit(self, train_data, val_data=None, epochs=10,ckpt_path='checkpoint.pt',\n",
    "            patience=5, monitor=\"val_loss\", mode=\"min\", mixed_precision='no'):\n",
    "        \n",
    "        accelerator = Accelerator(mixed_precision=mixed_precision)\n",
    "        device = str(accelerator.device)\n",
    "        device_type = '🐌'  if 'cpu' in device else '⚡️'\n",
    "        accelerator.print(colorful(\"<<<<<< \"+device_type +\" \"+ device +\" is used >>>>>>\"))\n",
    "    \n",
    "        net,optimizer,lr_scheduler= accelerator.prepare(\n",
    "            self.net,self.optimizer,self.lr_scheduler)\n",
    "        train_dataloader,val_dataloader = accelerator.prepare(train_data,val_data)\n",
    "        \n",
    "        loss_fn = self.loss_fn\n",
    "        if isinstance(loss_fn,torch.nn.Module):\n",
    "            loss_fn.to(accelerator.device)\n",
    "        metrics_dict = self.metrics_dict \n",
    "        metrics_dict.to(accelerator.device)\n",
    "        \n",
    "        history = {}\n",
    "        \n",
    "        for epoch in range(1, epochs+1):\n",
    "\n",
    "            nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n",
    "            accelerator.print(\"\\n\"+\"==========\"*8 + \"%s\"%nowtime)\n",
    "            accelerator.print(\"Epoch {0} / {1}\".format(epoch, epochs)+\"\\n\")\n",
    "\n",
     "            # 1. train -------------------------------------------------  \n",
    "            train_step_runner = StepRunner(\n",
    "                    net = net,\n",
    "                    loss_fn = loss_fn,\n",
    "                    accelerator = accelerator,\n",
    "                    stage=\"train\",\n",
    "                    metrics_dict=deepcopy(metrics_dict),\n",
    "                    optimizer = optimizer,\n",
    "                    lr_scheduler = lr_scheduler\n",
    "            )\n",
    "\n",
    "            train_epoch_runner = EpochRunner(train_step_runner)\n",
    "            train_metrics = train_epoch_runner(train_dataloader)\n",
    "            for name, metric in train_metrics.items():\n",
    "                history[name] = history.get(name, []) + [metric]\n",
    "\n",
     "            # 2. validate -------------------------------------------------\n",
    "            if val_dataloader:\n",
    "                val_step_runner = StepRunner(\n",
    "                    net = net,\n",
    "                    loss_fn = loss_fn,\n",
    "                    accelerator = accelerator,\n",
    "                    stage=\"val\",\n",
    "                    metrics_dict= deepcopy(metrics_dict)\n",
    "                )\n",
    "                val_epoch_runner = EpochRunner(val_step_runner)\n",
    "                with torch.no_grad():\n",
    "                    val_metrics = val_epoch_runner(val_dataloader)\n",
    "\n",
    "                val_metrics[\"epoch\"] = epoch\n",
    "                for name, metric in val_metrics.items():\n",
    "                    history[name] = history.get(name, []) + [metric]\n",
    "\n",
     "            # 3. early-stopping -------------------------------------------------\n",
    "            accelerator.wait_for_everyone()\n",
    "            arr_scores = history[monitor]\n",
    "            best_score_idx = np.argmax(arr_scores) if mode==\"max\" else np.argmin(arr_scores)\n",
    "\n",
    "            if best_score_idx==len(arr_scores)-1:\n",
    "                unwrapped_net = accelerator.unwrap_model(net)\n",
    "                accelerator.save(unwrapped_net.state_dict(),ckpt_path)\n",
    "                accelerator.print(colorful(\"<<<<<< reach best {0} : {1} >>>>>>\".format(monitor,\n",
    "                     arr_scores[best_score_idx])))\n",
    "\n",
    "            if len(arr_scores)-best_score_idx>patience:\n",
    "                accelerator.print(colorful(\"<<<<<< {} without improvement in {} epoch, early stopping >>>>>>\".format(\n",
    "                    monitor,patience)))\n",
    "                break \n",
    "                \n",
    "        if accelerator.is_local_main_process:\n",
    "            self.net.load_state_dict(torch.load(ckpt_path))\n",
    "            dfhistory = pd.DataFrame(history)\n",
    "            accelerator.print(dfhistory)\n",
    "            return dfhistory \n",
    "    \n",
    "    @torch.no_grad()\n",
    "    def evaluate(self, val_data):\n",
    "        accelerator = Accelerator()\n",
    "        self.net = accelerator.prepare(self.net)\n",
    "        val_data = accelerator.prepare(val_data)\n",
    "        if isinstance(self.loss_fn,torch.nn.Module):\n",
    "            self.loss_fn.to(accelerator.device)\n",
    "        self.metrics_dict.to(accelerator.device)\n",
    "        \n",
    "        val_step_runner = StepRunner(net = self.net,stage=\"val\",\n",
    "                    loss_fn = self.loss_fn,metrics_dict=deepcopy(self.metrics_dict),\n",
    "                    accelerator = accelerator)\n",
    "        val_epoch_runner = EpochRunner(val_step_runner)\n",
    "        val_metrics = val_epoch_runner(val_data)\n",
    "        return val_metrics\n",
    "    "
   ]
  },
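The checkpoint-and-early-stopping bookkeeping near the end of `fit` is compact enough to isolate. The helper below is an illustrative stand-alone rendition of that monitor/patience arithmetic (it is not part of torchkeras): a new best score triggers a checkpoint, and training stops once the best score is more than `patience` positions old.

```python
import numpy as np

def should_stop(scores, mode="max", patience=5):
    # Mirrors the logic in KerasModel.fit: locate the best score so far,
    # checkpoint if it is the latest one, stop if it is too old.
    best_idx = np.argmax(scores) if mode == "max" else np.argmin(scores)
    is_new_best = bool(best_idx == len(scores) - 1)
    stop = bool(len(scores) - best_idx > patience)
    return is_new_best, stop

# val_acc peaked at epoch 3; after 3 epochs without improvement and
# patience=3, the stop condition fires
print(should_stop([0.87, 0.93, 0.95, 0.94, 0.94, 0.94], "max", 3))  # (False, True)
```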
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This training loop has every property I set out to achieve.\n",
     "\n",
     "Modular: organized bottom-up into three levels, StepRunner, EpochRunner, and KerasModel, with a clear structure.\n",
     "\n",
     "Easy to modify: if your inputs and labels take a different form (e.g. inputs packed into a dict, or multiple inputs), only StepRunner needs to change; everything downstream stays the same, which makes it very flexible.\n",
     "\n",
     "Short enough: the entire training code is under 200 lines.\n",
     "\n",
     "Progress bars: provided via tqdm.\n",
     "\n",
     "Evaluation metrics: use metrics from the torchmetrics library or define your own.\n",
     "\n",
     "Early stopping: just pass monitor, mode, and patience to fit.\n",
     "\n"
   ]
  },
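To make the "easy to modify" claim concrete: suppose each batch is a dict instead of a `(features, labels)` tuple. The sketch below shows the kind of `StepRunner` variant you would write. It is a simplified stand-in (plain `loss.backward()`, no accelerator or metrics) whose names are illustrative; only the batch-unpacking line differs from the original.

```python
import torch
from torch import nn

class DictStepRunner:
    # Simplified StepRunner variant for batches shaped like
    # {"features": ..., "labels": ...}; only the unpacking line changes.
    def __init__(self, net, loss_fn, optimizer=None, stage="train"):
        self.net, self.loss_fn = net, loss_fn
        self.optimizer, self.stage = optimizer, stage

    def __call__(self, batch):
        features, labels = batch["features"], batch["labels"]  # the only change
        preds = self.net(features)
        loss = self.loss_fn(preds, labels)
        if self.optimizer is not None and self.stage == "train":
            loss.backward()  # the real class calls accelerator.backward(loss)
            self.optimizer.step()
            self.optimizer.zero_grad()
        return loss.item(), {}

# One training step on toy data
net = nn.Linear(3, 1)
runner = DictStepRunner(net, nn.MSELoss(),
                        optimizer=torch.optim.SGD(net.parameters(), lr=0.01))
batch = {"features": torch.randn(8, 3), "labels": torch.randn(8, 1)}
loss, _ = runner(batch)
```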
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## 2. Training Your PyTorch Model on CPU or a Single GPU"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "When a GPU is present, torchkeras automatically uses it to train your PyTorch model; otherwise it falls back to the CPU.\n",
     "\n",
     "In our example, one epoch takes about 18s on a single GPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-01-20T03:23:42.606292Z",
     "iopub.status.busy": "2023-01-20T03:23:42.605914Z",
     "iopub.status.idle": "2023-01-20T03:24:02.743532Z",
     "shell.execute_reply": "2023-01-20T03:24:02.742296Z",
     "shell.execute_reply.started": "2023-01-20T03:23:42.606257Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Collecting torchkeras\n",
      "  Downloading torchkeras-3.3.2-py3-none-any.whl (16 kB)\n",
      "Installing collected packages: torchkeras\n",
      "Successfully installed torchkeras-3.3.2\n",
      "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\u001b[33m\n",
      "\u001b[0mRequirement already satisfied: torchmetrics in /opt/conda/lib/python3.7/site-packages (0.11.0)\n",
      "Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.7/site-packages (from torchmetrics) (4.1.1)\n",
      "Requirement already satisfied: torch>=1.8.1 in /opt/conda/lib/python3.7/site-packages (from torchmetrics) (1.11.0)\n",
      "Requirement already satisfied: packaging in /opt/conda/lib/python3.7/site-packages (from torchmetrics) (22.0)\n",
      "Requirement already satisfied: numpy>=1.17.2 in /opt/conda/lib/python3.7/site-packages (from torchmetrics) (1.21.6)\n",
      "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\u001b[33m\n",
      "\u001b[0m"
     ]
    }
   ],
   "source": [
    "!pip install -U torchkeras \n",
    "!pip install -U torchmetrics "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-01-20T03:24:21.353048Z",
     "iopub.status.busy": "2023-01-20T03:24:21.352672Z",
     "iopub.status.idle": "2023-01-20T03:26:08.547350Z",
     "shell.execute_reply": "2023-01-20T03:26:08.546160Z",
     "shell.execute_reply.started": "2023-01-20T03:24:21.353014Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\n",
      "Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./minist/MNIST/raw/train-images-idx3-ubyte.gz\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "b4f748c4619c4a0fb4d386d9efab0e26",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/9912422 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting ./minist/MNIST/raw/train-images-idx3-ubyte.gz to ./minist/MNIST/raw\n",
      "\n",
      "Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz\n",
      "Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ./minist/MNIST/raw/train-labels-idx1-ubyte.gz\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "859762ef49954149847c12de8e1ae73f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/28881 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting ./minist/MNIST/raw/train-labels-idx1-ubyte.gz to ./minist/MNIST/raw\n",
      "\n",
      "Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz\n",
      "Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ./minist/MNIST/raw/t10k-images-idx3-ubyte.gz\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "3d20ef3c231d43b7b4b0648bc1078567",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/1648877 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting ./minist/MNIST/raw/t10k-images-idx3-ubyte.gz to ./minist/MNIST/raw\n",
      "\n",
      "Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz\n",
      "Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ./minist/MNIST/raw/t10k-labels-idx1-ubyte.gz\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7a7931a8b11e42aaaf946bf3eb0a5958",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/4542 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting ./minist/MNIST/raw/t10k-labels-idx1-ubyte.gz to ./minist/MNIST/raw\n",
      "\n",
      "\u001b[0;31m<<<<<< ⚡️ cuda is used >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:24:31\n",
      "Epoch 1 / 5\n",
      "\n",
      "100%|█████████████████████████████| 58/58 [00:23<00:00,  2.46it/s, train_acc=0.636, train_loss=1.68]\n",
      "100%|███████████████████████████████████| 9/9 [00:01<00:00,  7.15it/s, val_acc=0.872, val_loss=1.01]\n",
      "\u001b[0;31m<<<<<< reach best val_acc : 0.8717448115348816 >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:24:56\n",
      "Epoch 2 / 5\n",
      "\n",
      "100%|████████████████████████████| 58/58 [00:16<00:00,  3.62it/s, train_acc=0.868, train_loss=0.747]\n",
      "100%|██████████████████████████████████| 9/9 [00:01<00:00,  6.81it/s, val_acc=0.938, val_loss=0.433]\n",
      "\u001b[0;31m<<<<<< reach best val_acc : 0.9381510615348816 >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:25:13\n",
      "Epoch 3 / 5\n",
      "\n",
      "100%|████████████████████████████| 58/58 [00:16<00:00,  3.57it/s, train_acc=0.918, train_loss=0.401]\n",
      "100%|██████████████████████████████████| 9/9 [00:01<00:00,  7.04it/s, val_acc=0.951, val_loss=0.257]\n",
      "\u001b[0;31m<<<<<< reach best val_acc : 0.9510633945465088 >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:25:31\n",
      "Epoch 4 / 5\n",
      "\n",
      "100%|████████████████████████████| 58/58 [00:16<00:00,  3.52it/s, train_acc=0.939, train_loss=0.271]\n",
      "100%|████████████████████████████████████| 9/9 [00:01<00:00,  5.04it/s, val_acc=0.96, val_loss=0.19]\n",
      "\u001b[0;31m<<<<<< reach best val_acc : 0.9596354365348816 >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:25:49\n",
      "Epoch 5 / 5\n",
      "\n",
      "100%|████████████████████████████| 58/58 [00:16<00:00,  3.51it/s, train_acc=0.951, train_loss=0.204]\n",
      "100%|███████████████████████████████████| 9/9 [00:01<00:00,  7.35it/s, val_acc=0.966, val_loss=0.15]\n",
      "\u001b[0;31m<<<<<< reach best val_acc : 0.9660373330116272 >>>>>>\u001b[0m\n",
      "   train_loss  train_acc  val_loss   val_acc  epoch\n",
      "0    1.678970   0.636079  1.005352  0.871745      1\n",
      "1    0.746820   0.867861  0.433099  0.938151      2\n",
      "2    0.401276   0.917868  0.257341  0.951063      3\n",
      "3    0.270778   0.939217  0.189748  0.959635      4\n",
      "4    0.204484   0.951340  0.150364  0.966037      5\n",
      "100%|███████████████████████████████████| 9/9 [00:01<00:00,  7.26it/s, val_acc=0.966, val_loss=0.15]\n",
      "{'val_loss': 0.1503636125061247, 'val_acc': 0.9660373330116272}\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "from torch import nn \n",
    "import torchvision \n",
    "from torchvision import transforms\n",
    "import torchmetrics \n",
    "from torchkeras import KerasModel \n",
    "\n",
     "### 1. Prepare the data\n",
    "\n",
    "def create_dataloaders(batch_size=1024):\n",
    "    transform = transforms.Compose([transforms.ToTensor()])\n",
    "\n",
    "    ds_train = torchvision.datasets.MNIST(root=\"./minist/\",train=True,download=True,transform=transform)\n",
    "    ds_val = torchvision.datasets.MNIST(root=\"./minist/\",train=False,download=True,transform=transform)\n",
    "\n",
    "    dl_train =  torch.utils.data.DataLoader(ds_train, batch_size=batch_size, shuffle=True,\n",
    "                                            num_workers=2,drop_last=True)\n",
    "    dl_val =  torch.utils.data.DataLoader(ds_val, batch_size=batch_size, shuffle=False, \n",
    "                                          num_workers=2,drop_last=True)\n",
    "    return dl_train,dl_val\n",
    "\n",
    "dl_train,dl_val = create_dataloaders(batch_size=1024)\n",
    "\n",
     "### 2. Define the model\n",
    "\n",
    "def create_net():\n",
    "    net = nn.Sequential()\n",
    "    net.add_module(\"conv1\",nn.Conv2d(in_channels=1,out_channels=512,kernel_size = 3))\n",
    "    net.add_module(\"pool1\",nn.MaxPool2d(kernel_size = 2,stride = 2)) \n",
    "    net.add_module(\"conv2\",nn.Conv2d(in_channels=512,out_channels=256,kernel_size = 5))\n",
    "    net.add_module(\"pool2\",nn.MaxPool2d(kernel_size = 2,stride = 2))\n",
    "    net.add_module(\"dropout\",nn.Dropout2d(p = 0.1))\n",
    "    net.add_module(\"adaptive_pool\",nn.AdaptiveMaxPool2d((1,1)))\n",
    "    net.add_module(\"flatten\",nn.Flatten())\n",
    "    net.add_module(\"linear1\",nn.Linear(256,128))\n",
    "    net.add_module(\"relu\",nn.ReLU())\n",
    "    net.add_module(\"linear2\",nn.Linear(128,10))\n",
    "    return net \n",
    "\n",
    "net = create_net() \n",
    "\n",
    "\n",
     "### 3. Train the model\n",
    "\n",
    "loss_fn = nn.CrossEntropyLoss() \n",
    "metrics_dict = {'acc':torchmetrics.Accuracy(task='multiclass',num_classes=10)}\n",
    "\n",
    "optimizer = torch.optim.AdamW(params=net.parameters(), lr=1e-4)\n",
    "lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(\n",
    "    optimizer=optimizer,T_0=5)\n",
    "\n",
    "model = KerasModel(net,loss_fn,metrics_dict,optimizer,lr_scheduler)\n",
    "dfhistory = model.fit(train_data = dl_train,\n",
    "    val_data = dl_val,\n",
    "    epochs=5,\n",
    "    ckpt_path='checkpoint.pt',\n",
    "    patience=2,\n",
    "    monitor='val_acc',\n",
    "    mode='max',\n",
    "    mixed_precision='no')\n",
    "\n",
     "### 4. Evaluate the model\n",
    "model.net.load_state_dict(torch.load('checkpoint.pt'))\n",
    "print(model.evaluate(dl_val)) \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## 3. Training Your PyTorch Model with Multi-GPU DDP"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "In Kaggle, open the Settings panel on the right and set ACCELERATOR to GPU T4 x2."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 1. Set up the config"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-01-20T03:26:16.345354Z",
     "iopub.status.busy": "2023-01-20T03:26:16.343506Z"
    }
   },
   "outputs": [],
   "source": [
    "import os\n",
    "from accelerate.utils import write_basic_config\n",
    "write_basic_config() # Write a config file\n",
    "os._exit(0) # Restart the notebook to reload info from the latest config file "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# %load /root/.cache/huggingface/accelerate/default_config.yaml\n",
    "{\n",
    "  \"compute_environment\": \"LOCAL_MACHINE\",\n",
    "  \"deepspeed_config\": {},\n",
    "  \"distributed_type\": \"MULTI_GPU\",\n",
    "  \"downcast_bf16\": false,\n",
    "  \"fsdp_config\": {},\n",
    "  \"machine_rank\": 0,\n",
    "  \"main_process_ip\": null,\n",
    "  \"main_process_port\": null,\n",
    "  \"main_training_function\": \"main\",\n",
    "  \"mixed_precision\": \"no\",\n",
    "  \"num_machines\": 1,\n",
    "  \"num_processes\": 2,\n",
    "  \"use_cpu\": false\n",
    "}\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Alternatively, answer a few questions interactively to create a config\n",
    "#!accelerate config  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 2. Training code"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "In our example, one epoch takes about 12s when training with two GPUs in DDP mode."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-01-20T03:26:26.609388Z",
     "iopub.status.busy": "2023-01-20T03:26:26.608963Z",
     "iopub.status.idle": "2023-01-20T03:27:39.756433Z",
     "shell.execute_reply": "2023-01-20T03:27:39.755294Z",
     "shell.execute_reply.started": "2023-01-20T03:26:26.609299Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Launching training on 2 GPUs.\n",
      "\u001b[0;31m<<<<<< ⚡️ cuda:0 is used >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:26:31\n",
      "Epoch 1 / 5\n",
      "\n",
      "100%|█████████████████████████████| 29/29 [00:13<00:00,  2.14it/s, train_acc=0.575, train_loss=3.96]\n",
      "100%|███████████████████████████████████| 4/4 [00:01<00:00,  2.73it/s, val_acc=0.859, val_loss=3.14]\n",
      "\u001b[0;31m<<<<<< reach best val_acc : 0.8587646484375 >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:26:46\n",
      "Epoch 2 / 5\n",
      "\n",
      "100%|█████████████████████████████| 29/29 [00:09<00:00,  2.92it/s, train_acc=0.815, train_loss=2.52]\n",
      "100%|████████████████████████████████████| 4/4 [00:01<00:00,  2.37it/s, val_acc=0.899, val_loss=1.8]\n",
      "\u001b[0;31m<<<<<< reach best val_acc : 0.8985595703125 >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:26:58\n",
      "Epoch 3 / 5\n",
      "\n",
      "100%|██████████████████████████████| 29/29 [00:10<00:00,  2.88it/s, train_acc=0.873, train_loss=1.5]\n",
      "100%|███████████████████████████████████| 4/4 [00:01<00:00,  2.95it/s, val_acc=0.922, val_loss=1.06]\n",
      "\u001b[0;31m<<<<<< reach best val_acc : 0.922119140625 >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:27:10\n",
      "Epoch 4 / 5\n",
      "\n",
      "100%|████████████████████████████| 29/29 [00:10<00:00,  2.74it/s, train_acc=0.905, train_loss=0.988]\n",
      "100%|██████████████████████████████████| 4/4 [00:01<00:00,  3.04it/s, val_acc=0.937, val_loss=0.731]\n",
      "\u001b[0;31m<<<<<< reach best val_acc : 0.9371337890625 >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:27:22\n",
      "Epoch 5 / 5\n",
      "\n",
      "100%|████████████████████████████| 29/29 [00:10<00:00,  2.75it/s, train_acc=0.925, train_loss=0.728]\n",
      "100%|██████████████████████████████████| 4/4 [00:01<00:00,  2.93it/s, val_acc=0.948, val_loss=0.559]\n",
      "\u001b[0;31m<<<<<< reach best val_acc : 0.947509765625 >>>>>>\u001b[0m\n",
      "   train_loss  train_acc  val_loss   val_acc  epoch\n",
      "0    3.964827   0.575246  3.135620  0.858765      1\n",
      "1    2.521898   0.815127  1.795437  0.898560      2\n",
      "2    1.503394   0.873468  1.060129  0.922119      3\n",
      "3    0.987565   0.905071  0.730675  0.937134      4\n",
      "4    0.727536   0.925478  0.559400  0.947510      5\n",
      "100%|██████████████████████████████████| 9/9 [00:03<00:00,  2.84it/s, val_acc=0.951, val_loss=0.266]\n",
      "{'val_loss': 0.2662947823603948, 'val_acc': 0.9510633945465088}\n"
     ]
    }
   ],
   "source": [
    "import torchvision \n",
    "from torchvision import transforms\n",
    "from torch import nn \n",
    "import torch\n",
    "import torchmetrics \n",
    "from accelerate import notebook_launcher\n",
    "from torchkeras import KerasModel \n",
    "\n",
    "### 1，准备数据\n",
    "\n",
    "def create_dataloaders(batch_size=1024):\n",
    "    transform = transforms.Compose([transforms.ToTensor()])\n",
    "\n",
    "    ds_train = torchvision.datasets.MNIST(root=\"./minist/\",train=True,download=True,transform=transform)\n",
    "    ds_val = torchvision.datasets.MNIST(root=\"./minist/\",train=False,download=True,transform=transform)\n",
    "\n",
    "    dl_train =  torch.utils.data.DataLoader(ds_train, batch_size=batch_size, shuffle=True,\n",
    "                                            num_workers=2,drop_last=True)\n",
    "    dl_val =  torch.utils.data.DataLoader(ds_val, batch_size=batch_size, shuffle=False, \n",
    "                                          num_workers=2,drop_last=True)\n",
    "    return dl_train,dl_val\n",
    "\n",
    "dl_train,dl_val = create_dataloaders(batch_size=1024)\n",
    "\n",
    "### 2，定义模型\n",
    "\n",
    "def create_net():\n",
    "    net = nn.Sequential()\n",
    "    net.add_module(\"conv1\",nn.Conv2d(in_channels=1,out_channels=512,kernel_size = 3))\n",
    "    net.add_module(\"pool1\",nn.MaxPool2d(kernel_size = 2,stride = 2)) \n",
    "    net.add_module(\"conv2\",nn.Conv2d(in_channels=512,out_channels=256,kernel_size = 5))\n",
    "    net.add_module(\"pool2\",nn.MaxPool2d(kernel_size = 2,stride = 2))\n",
    "    net.add_module(\"dropout\",nn.Dropout2d(p = 0.1))\n",
    "    net.add_module(\"adaptive_pool\",nn.AdaptiveMaxPool2d((1,1)))\n",
    "    net.add_module(\"flatten\",nn.Flatten())\n",
    "    net.add_module(\"linear1\",nn.Linear(256,128))\n",
    "    net.add_module(\"relu\",nn.ReLU())\n",
    "    net.add_module(\"linear2\",nn.Linear(128,10))\n",
    "    return net \n",
    "\n",
    "net = create_net() \n",
    "\n",
    "\n",
    "### 3，训练模型\n",
    "\n",
    "loss_fn = nn.CrossEntropyLoss() \n",
    "metrics_dict = {'acc':torchmetrics.Accuracy(task='multiclass',num_classes=10)}\n",
    "\n",
    "optimizer = torch.optim.AdamW(params=net.parameters(), lr=1e-4)\n",
    "lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(\n",
    "    optimizer=optimizer,T_0=5)\n",
    "\n",
    "model = KerasModel(net,loss_fn,metrics_dict,optimizer,lr_scheduler)\n",
    "\n",
    "ckpt_path = 'checkpoint.pt'\n",
    "args = dict(train_data = dl_train,\n",
    "        val_data = dl_val,\n",
    "        epochs=5,\n",
    "        ckpt_path= ckpt_path,\n",
    "        patience=2,\n",
    "        monitor='val_acc',\n",
    "        mode='max',\n",
    "        mixed_precision='no').values()\n",
    "\n",
    "notebook_launcher(model.fit, args, num_processes=2)\n",
    "\n",
    "### 4，评估模型\n",
    "model.net.load_state_dict(torch.load('checkpoint.pt'))\n",
    "print(model.evaluate(dl_val)) \n"
   ]
  },
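  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A note on the `args` trick above: `notebook_launcher` passes `args` to `model.fit` as positional arguments, and Python dicts preserve insertion order (3.7+), so `dict(...).values()` lines the keyword-style settings up in the order `fit` expects them (a convention of this template; the parameter order is assumed to match). A minimal sketch of the mechanism:\n",
    "\n",
    "```python\n",
    "# dict values keep insertion order, so they can stand in\n",
    "# for an ordered tuple of positional arguments\n",
    "args = dict(train_data='dl_train', val_data='dl_val', epochs=5).values()\n",
    "print(tuple(args))  # ('dl_train', 'dl_val', 5)\n",
    "```"
   ]
  },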
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 三，使用TPU加速你的pytorch模型"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Kaggle中右边settings 中的 ACCELERATOR选择 TPU v3-8。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1，安装torch_xla"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#安装torch_xla支持\n",
    "!pip uninstall -y torch torch_xla \n",
    "!pip install torch==1.8.2+cpu -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html\n",
    "!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#从git安装最新的accelerate仓库\n",
    "!pip install git+https://github.com/huggingface/accelerate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install -U torchkeras \n",
    "!pip install -U torchmetrics "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#检查是否成功安装 torch_xla \n",
    "import torch_xla "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2，训练代码"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "torchmetrics库和TPU兼容性不太好，可以去掉metrics_dict进行训练。"
   ]
  },
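  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you still want an accuracy number without torchmetrics, it can be computed from raw logits with plain Python once predictions have been collected (a minimal, torch-free sketch; the function name and sample values are illustrative):\n",
    "\n",
    "```python\n",
    "# accuracy = fraction of argmax predictions that match the labels\n",
    "def accuracy(logits, labels):\n",
    "    preds = [row.index(max(row)) for row in logits]\n",
    "    hits = sum(p == y for p, y in zip(preds, labels))\n",
    "    return hits / len(labels)\n",
    "\n",
    "print(accuracy([[2.0, 0.5], [0.1, 1.0], [3.0, -1.0]], [0, 1, 1]))  # 2 of 3 correct\n",
    "```"
   ]
  },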
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-01-20T03:20:10.294943Z",
     "iopub.status.busy": "2023-01-20T03:20:10.293703Z",
     "iopub.status.idle": "2023-01-20T03:21:28.082960Z",
     "shell.execute_reply": "2023-01-20T03:21:28.081552Z",
     "shell.execute_reply.started": "2023-01-20T03:20:10.294792Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Launching a training on 8 TPU cores.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "torchkeras.LightModel can't be used!\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[0;31m<<<<<< ⚡️ xla:1 is used >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:20:16\n",
      "Epoch 1 / 5\n",
      "\n",
      "100%|████████████████████████████████████████████████| 7/7 [00:12<00:00,  1.75s/it, train_loss=17.8]\n",
      "100%|████████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.11s/it, val_loss=17]\n",
      "\u001b[0;31m<<<<<< reach best val_loss : 17.017776489257812 >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:20:31\n",
      "Epoch 2 / 5\n",
      "\n",
      "100%|████████████████████████████████████████████████| 7/7 [00:12<00:00,  1.78s/it, train_loss=16.4]\n",
      "100%|██████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.09s/it, val_loss=15.5]\n",
      "\u001b[0;31m<<<<<< reach best val_loss : 15.495216369628906 >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:20:45\n",
      "Epoch 3 / 5\n",
      "\n",
      "100%|████████████████████████████████████████████████| 7/7 [00:12<00:00,  1.76s/it, train_loss=14.8]\n",
      "100%|██████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.11s/it, val_loss=13.6]\n",
      "\u001b[0;31m<<<<<< reach best val_loss : 13.585103988647461 >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:20:59\n",
      "Epoch 4 / 5\n",
      "\n",
      "100%|████████████████████████████████████████████████| 7/7 [00:12<00:00,  1.83s/it, train_loss=13.1]\n",
      "100%|██████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.15s/it, val_loss=11.8]\n",
      "\u001b[0;31m<<<<<< reach best val_loss : 11.82296085357666 >>>>>>\u001b[0m\n",
      "\n",
      "================================================================================2023-01-20 03:21:13\n",
      "Epoch 5 / 5\n",
      "\n",
      "100%|████████████████████████████████████████████████| 7/7 [00:12<00:00,  1.78s/it, train_loss=11.4]\n",
      "100%|██████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.14s/it, val_loss=10.3]\n",
      "\u001b[0;31m<<<<<< reach best val_loss : 10.327007293701172 >>>>>>\u001b[0m\n",
      "   train_loss   val_loss  epoch\n",
      "0   17.819965  17.017776      1\n",
      "1   16.367791  15.495216      2\n",
      "2   14.803013  13.585104      3\n",
      "3   13.097384  11.822961      4\n",
      "4   11.432619  10.327007      5\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "from torch import nn \n",
    "import torchvision \n",
    "from torchvision import transforms\n",
    "from accelerate import notebook_launcher\n",
    "\n",
    "from torchkeras import KerasModel \n",
    "\n",
    "### 1，准备数据\n",
    "\n",
    "def create_dataloaders(batch_size=1024):\n",
    "    transform = transforms.Compose([transforms.ToTensor()])\n",
    "\n",
    "    ds_train = torchvision.datasets.MNIST(root=\"./minist/\",train=True,download=True,transform=transform)\n",
    "    ds_val = torchvision.datasets.MNIST(root=\"./minist/\",train=False,download=True,transform=transform)\n",
    "\n",
    "    dl_train =  torch.utils.data.DataLoader(ds_train, batch_size=batch_size, shuffle=True,\n",
    "                                            num_workers=2,drop_last=True)\n",
    "    dl_val =  torch.utils.data.DataLoader(ds_val, batch_size=batch_size, shuffle=False, \n",
    "                                          num_workers=2,drop_last=True)\n",
    "    return dl_train,dl_val\n",
    "\n",
    "dl_train,dl_val = create_dataloaders(batch_size=1024)\n",
    "\n",
    "### 2，定义模型\n",
    "\n",
    "def create_net():\n",
    "    net = nn.Sequential()\n",
    "    net.add_module(\"conv1\",nn.Conv2d(in_channels=1,out_channels=512,kernel_size = 3))\n",
    "    net.add_module(\"pool1\",nn.MaxPool2d(kernel_size = 2,stride = 2)) \n",
    "    net.add_module(\"conv2\",nn.Conv2d(in_channels=512,out_channels=256,kernel_size = 5))\n",
    "    net.add_module(\"pool2\",nn.MaxPool2d(kernel_size = 2,stride = 2))\n",
    "    net.add_module(\"dropout\",nn.Dropout2d(p = 0.1))\n",
    "    net.add_module(\"adaptive_pool\",nn.AdaptiveMaxPool2d((1,1)))\n",
    "    net.add_module(\"flatten\",nn.Flatten())\n",
    "    net.add_module(\"linear1\",nn.Linear(256,128))\n",
    "    net.add_module(\"relu\",nn.ReLU())\n",
    "    net.add_module(\"linear2\",nn.Linear(128,10))\n",
    "    return net \n",
    "\n",
    "net = create_net() \n",
    "\n",
    "### 3，训练模型\n",
    "\n",
    "loss_fn = nn.CrossEntropyLoss() \n",
    "\n",
    "optimizer = torch.optim.AdamW(params=net.parameters(), lr=1e-4)\n",
    "lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(\n",
    "    optimizer=optimizer,T_0=5)\n",
    "\n",
    "model = KerasModel(net,loss_fn,None,optimizer,lr_scheduler)\n",
    "\n",
    "from accelerate import notebook_launcher\n",
    "\n",
    "ckpt_path = 'checkpoint.pt'\n",
    "args = dict(train_data = dl_train,\n",
    "        val_data = dl_val,\n",
    "        epochs=5,\n",
    "        ckpt_path= ckpt_path,\n",
    "        patience=2,\n",
    "        monitor='val_loss',\n",
    "        mode='min',\n",
    "        mixed_precision='no').values()\n",
    "\n",
    "notebook_launcher(model.fit, args, num_processes=8)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
