{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "Use PyTorch to download the MNIST dataset automatically.\n",
    "\n",
    "An introduction to the MNIST dataset: https://docs.ultralytics.com/zh/datasets/classify/mnist/"
   ],
   "id": "e2a1055260107c79"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import matplotlib.pyplot as plt\n",
    "from torchvision import datasets\n",
    "\n",
    "# Download the MNIST dataset via PyTorch\n",
    "train_dataset = datasets.MNIST(root='./data/', train=True, download=True)\n",
    "test_dataset = datasets.MNIST(root='./data/', train=False, download=True)\n",
    "print(len(train_dataset))\n",
    "print(len(test_dataset))"
   ],
   "id": "6a8045e503437201",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": "train_dataset[0][0]",
   "id": "70d5050cc316e7bf",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": "train_dataset[0][1]",
   "id": "12da69b406742d69",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Get the first image in MNIST",
   "id": "d17b019fb76c1362"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "first_image = train_dataset[0][0]\n",
    "print(first_image)\n",
    "plt.imshow(first_image, cmap='gray')"
   ],
   "id": "e77b716831a4bac7",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "How do we turn an image into a tensor?\n",
    "\n",
    "Every pixel of the image is a feature of that image, so we can represent the image with a 28*28 matrix."
   ],
   "id": "6a8e77e42e10cd59"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import numpy as np\n",
    "\n",
    "image_array = np.array(first_image)\n",
    "image_array"
   ],
   "id": "6dd49a14676368fb",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Normalize the pixel values",
   "id": "de8256cbb9940ae4"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "\n",
    "# Normalize to [0.0, 1.0]\n",
    "image_normalized = image_array / 255.0\n",
    "\n",
    "image_normalized"
   ],
   "id": "823b7258b8188b33",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from torchvision import transforms\n",
    "\n",
    "transform = transforms.ToTensor()\n",
    "first_tensor = transform(first_image)  # shape is (C, H, W); the channel C is 1 for grayscale, 3 for RGB\n",
    "first_tensor.shape"
   ],
   "id": "39b4a62d803e863c",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": "print(first_tensor[0][7][7])",
   "id": "7cd68e55129acc94",
   "outputs": [],
   "execution_count": null
  },
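  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "As a sanity check (a sketch, assuming the cells above have already defined `image_normalized` and `first_tensor`): `transforms.ToTensor()` should produce exactly the same values as dividing the raw pixel array by 255.",
   "id": "b1e4f2a9c3d50861"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import numpy as np\n",
    "\n",
    "# ToTensor() scales uint8 pixels from [0, 255] to floats in [0.0, 1.0],\n",
    "# which matches the manual division by 255 above\n",
    "print(np.allclose(first_tensor.numpy()[0], image_normalized))"
   ],
   "id": "c2f5a3b8d4e61972",
   "outputs": [],
   "execution_count": null
  },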
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Get the normalized image tensors directly from the dataset via transforms",
   "id": "aa6715b77bd2798f"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from torchvision import datasets, transforms\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# Download MNIST via PyTorch, converting each image to a normalized tensor\n",
    "train_dataset = datasets.MNIST(root='./data/', train=True, transform=transforms.ToTensor(), download=True)\n",
    "test_dataset = datasets.MNIST(root='./data/', train=False, transform=transforms.ToTensor(), download=True)\n",
    "train_dataset[0][0].shape"
   ],
   "id": "def0631eb08602dc",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": "train_dataset[0][1]",
   "id": "88941196ae23918b",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": "plt.imshow(train_dataset[0][0].view(-1, 28), cmap='gray')",
   "id": "4f1ce419-3cbd-482b-9bb4-9cc70f972e3e",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from torch.utils.data import DataLoader\n",
    "\n",
    "train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)\n",
    "test_loader = DataLoader(test_dataset, batch_size=16, shuffle=False)\n",
    "\n",
    "for images, labels in train_loader:\n",
    "    print(images.shape)\n",
    "    print(labels.shape)\n",
    "    break"
   ],
   "id": "80b7452499e35425",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Define the model",
   "id": "8bc14eb7f032e7bc"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from torch import nn\n",
    "import torch\n",
    "\n",
    "\n",
    "class MnistModel(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.fc1 = nn.Linear(1 * 28 * 28, 128)\n",
    "        self.fc2 = nn.Linear(128, 256)\n",
    "        self.fc3 = nn.Linear(256, 128)\n",
    "        self.fc4 = nn.Linear(128, 1)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = torch.relu(self.fc1(x))  # (16, 1*28*28) * (1*28*28, 128) = (16, 128)\n",
    "        x = torch.relu(self.fc2(x))  # (16, 128) * (128, 256) = (16, 256)\n",
    "        x = torch.relu(self.fc3(x))  # (16, 256) * (256, 128) = (16, 128)\n",
    "        x = self.fc4(x)  # (16, 128) * (128, 1) = (16, 1)\n",
    "        return x"
   ],
   "id": "9ad34aa4f7d8d743",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "[\n",
    "    [[1, 2], [3, 4]],\n",
    "    [[5, 6], [7, 8]]\n",
    "]\n",
    "\n",
    "[1, 2, 3, 4, 5, 6, 7, 8]"
   ],
   "id": "61c761e27a26e694",
   "outputs": [],
   "execution_count": null
  },
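  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "The nested list above illustrates flattening; `view(-1)` does the same to a tensor, which is a sketch of what `images.view(-1, 1 * 28 * 28)` does to a batch of images (assumes `torch` was imported in an earlier cell):",
   "id": "d3a6b4c9e5f72083"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "t = torch.tensor([[[1, 2], [3, 4]],\n",
    "                  [[5, 6], [7, 8]]])\n",
    "print(t.shape)     # torch.Size([2, 2, 2])\n",
    "print(t.view(-1))  # tensor([1, 2, 3, 4, 5, 6, 7, 8])"
   ],
   "id": "e4b7c5d0f6a83194",
   "outputs": [],
   "execution_count": null
  },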
  {
   "metadata": {},
   "cell_type": "code",
   "source": "torch.tensor([[1, 2, 3]]).shape",
   "id": "1424ad0bb1bd2822",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "\n",
    "model = MnistModel()\n",
    "\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n",
    "criterion = nn.MSELoss()\n",
    "\n",
    "epochs = 10\n",
    "for epoch in range(epochs):\n",
    "    for i, (images, labels) in enumerate(train_loader):\n",
    "\n",
    "        labels = labels.float()\n",
    "\n",
    "        outputs = model(images.view(-1, 1 * 28 * 28))  # (batch, 1); -1 also handles a smaller final batch\n",
    "\n",
    "        loss = criterion(outputs, labels.view(-1, 1))\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        if (i + 1) % 100 == 0:\n",
    "            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'\n",
    "                  .format(epoch + 1, epochs, i + 1, len(train_loader), loss.item()))"
   ],
   "id": "8866b928c08b225d",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "The model above treats the label as a single continuous value, essentially doing regression, which is a poor fit for classification. We want the model's output to be a fixed set of 10 classes, so we change the last layer to 10 neurons and switch to the cross-entropy loss function.",
   "id": "7d4ac77d3c89dfa9"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from torch import nn\n",
    "import torch\n",
    "\n",
    "\n",
    "class MnistModel(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.fc1 = nn.Linear(1 * 28 * 28, 128)\n",
    "        self.fc2 = nn.Linear(128, 256)\n",
    "        self.fc3 = nn.Linear(256, 128)\n",
    "        self.fc4 = nn.Linear(128, 10)  # image ---> (r0, r1, ..., r9): one score per class\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = torch.relu(self.fc1(x))  # (16, 1*28*28) * (1*28*28, 128) = (16, 128)\n",
    "        x = torch.relu(self.fc2(x))  # (16, 128) * (128, 256) = (16, 256)\n",
    "        x = torch.relu(self.fc3(x))  # (16, 256) * (256, 128) = (16, 128)\n",
    "        x = self.fc4(x)  # (16, 128) * (128, 10) = (16, 10)\n",
    "        return x"
   ],
   "id": "a55cf6403d35a642",
   "outputs": [],
   "execution_count": null
  },
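  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "A quick shape check (a sketch using the `MnistModel` defined above): pushing a dummy batch through the model should yield one score per class for every sample.",
   "id": "f5c8d6e1a7b94205"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "m = MnistModel()\n",
    "dummy = torch.randn(16, 1 * 28 * 28)  # a fake batch of 16 flattened images\n",
    "print(m(dummy).shape)  # torch.Size([16, 10])"
   ],
   "id": "a6d9e7f2b8c05316",
   "outputs": [],
   "execution_count": null
  },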
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "model = MnistModel()\n",
    "\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n",
    "criterion = nn.CrossEntropyLoss()  # softmax + cross-entropy loss\n",
    "\n",
    "epochs = 10\n",
    "for epoch in range(epochs):\n",
    "    for i, (images, labels) in enumerate(train_loader):\n",
    "\n",
    "        outputs = model(images.view(-1, 1 * 28 * 28))\n",
    "        loss = criterion(outputs, labels)\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        if (i + 1) % 100 == 0:\n",
    "            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'\n",
    "                  .format(epoch + 1, epochs, i + 1, len(train_loader), loss.item()))"
   ],
   "id": "fe129235b6599770",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# Start testing\n",
    "\n",
    "test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False)\n",
    "plt.imshow(test_dataset[0][0].view(-1, 28), cmap='gray')"
   ],
   "id": "928fc30bdc060a0",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "with torch.no_grad():\n",
    "    for images, labels in test_loader:\n",
    "        outputs = model(images.view(-1, 1 * 28 * 28))  # shape (batch_size, 10)\n",
    "\n",
    "        _, indices = torch.max(outputs.data, 1)\n",
    "\n",
    "        print('Predicted digit: {}'.format(indices[0]))\n",
    "        break\n"
   ],
   "id": "cad3949e75f58bb4",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# Evaluate accuracy on the test set\n",
    "correct = 0\n",
    "total = 0\n",
    "with torch.no_grad():\n",
    "    for images, labels in test_loader:\n",
    "        outputs = model(images.view(-1, 1 * 28 * 28))  # shape (batch_size, 10)\n",
    "\n",
    "        _, indices = torch.max(outputs.data, 1)\n",
    "\n",
    "        total += labels.size(0)  # count the total number of test samples\n",
    "\n",
    "        matches = (indices == labels)\n",
    "        correct += matches.sum().item()\n",
    "\n",
    "print('Test accuracy: {} %'.format(100 * correct / total))"
   ],
   "id": "6374780fd056c48d",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## Cross-entropy (CrossEntropyLoss)\n",
    "\n",
    "MSE and cross-entropy are both ways of measuring error: MSE suits regression tasks, while cross-entropy suits classification tasks. The digits 0-9 are discrete classes, not continuous values.\n",
    "\n",
    "Suppose x is an image of a handwritten digit, the model predicts 0.9, and the true value is 1. The MSE error is (0.9-1)**2 = 0.01, which is already very small, but the model cannot stop there: if we use this model for prediction and it outputs 0.9 for some image, the prediction is simply wrong, because no image contains the digit 0.9. The model would have to drive that 0.01 error all the way to 0, which demands extreme precision and is very hard to achieve. For a genuine regression task such as house-price prediction, predicting 0.9 when the truth is 1 is perfectly acceptable.\n",
    "\n",
    "Cross-entropy measures error differently. For a digit image x, the model outputs 10 scores, one for each digit 0-9, i.e. 10 classes; the larger a score, the more the model believes x is that digit. For example, if the model outputs [0.1, 0.9, 2.1, 2.4, 5.5, 7.3, 0.4, 3.1, 1.2, 2.2], the largest value is 7.3 at index 5, so we take the prediction to be the digit 5. How is the error computed?\n",
    "\n",
    "First we convert the scores into probabilities with softmax, which gives [6.1957e-04, 1.3789e-03, 4.5780e-03, 6.1797e-03, 1.3718e-01, 8.2987e-01, 8.3633e-04, 1.2444e-02, 1.8613e-03, 5.0595e-03]. This is the probability distribution of x over the digits 0-9. The probabilities sum to 1, so we can never get an impossible combination like 40% for 1, 40% for 2, and 40% for 3. Here 8.2987e-01 is the largest value, meaning roughly an 80% probability that x is 5; the other digits also get some probability, but much less.\n",
    "\n",
    "If the true label really is 5, the prediction is correct with a probability of about 80%, and the model does not need to push that probability all the way to 100%.\n",
    "\n",
    "If the true label is not 5, say it is 6, the prediction is wrong, and the loss is computed as -np.log(8.3633e-04).\n"
   ],
   "id": "23d3d5f7e45e0c8b"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "loss = -np.log(8.3633e-04)\n",
    "print(loss)"
   ],
   "id": "562ec9f07382f865",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "\n",
    "values, indices = torch.max(torch.tensor([[1, 2, 3],\n",
    "                                          [4, 5, 6]]), 1)\n",
    "print(values)\n",
    "print(indices)"
   ],
   "id": "fc246a34b038fc2b",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## softmax\n",
    "\n",
    "In the model above, the last layer has 10 neurons. Each neuron outputs a number, and the larger that number, the higher the probability of the corresponding class.\n",
    "\n",
    "The softmax function is defined as\n",
    "$$\n",
    "softmax(x)_i = \\frac{e^{x_i}}{\\sum_{j=1}^{n}e^{x_j}}\n",
    "$$\n",
    "\n",
    "softmax turns every element of a vector into a probability, so that the elements sum to 1.\n",
    "\n",
    "Why do we need softmax?"
   ],
   "id": "ac855e29bcdd1cb5"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import numpy as np\n",
    "\n",
    "x = np.array([0.1, 0.9, 2.1, 2.4, 5.5, 7.3, 0.4, 3.1, 1.2, 2.2])\n",
    "result = torch.softmax(torch.from_numpy(x), dim=0)\n",
    "print(result)\n",
    "print(result.sum())"
   ],
   "id": "15adb5c2f8fcf198",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# A quick test of what CrossEntropyLoss produces\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "\n",
    "output = torch.tensor([[0.1, 0.9, 2.1, 2.4, 5.5, 7.3, 0.4, 3.1, 1.2, 2.2]])\n",
    "target = torch.tensor([6])  # the class index: 0 is the first class, 1 the second, and so on\n",
    "loss = criterion(output, target)  # with multiple samples the losses are averaged\n",
    "print(loss)"
   ],
   "id": "bbdc9f433924b5c9",
   "outputs": [],
   "execution_count": null
  },
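  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "`nn.CrossEntropyLoss` applies softmax internally and then takes the negative log of the probability at the target index. We can verify this by hand with the same `output` and target class 6 from the cell above:",
   "id": "b7e0f8a3c9d16427"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "probs = torch.softmax(output, dim=1)\n",
    "manual_loss = -torch.log(probs[0, 6])  # -log of the predicted probability of class 6\n",
    "print(manual_loss)  # should match the CrossEntropyLoss value above"
   ],
   "id": "c8f1a9b4d0e27538",
   "outputs": [],
   "execution_count": null
  },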
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-22T06:25:40.155848Z",
     "start_time": "2025-06-22T06:25:38.602953Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from torchvision import datasets, transforms\n",
    "from torch.utils.data import DataLoader\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "\n",
    "train_dataset = datasets.MNIST(root='./data/', train=True, transform=transforms.ToTensor(), download=True)\n",
    "test_dataset = datasets.MNIST(root='./data/', train=False, transform=transforms.ToTensor(), download=True)\n",
    "\n",
    "# Split 5000 samples off train_dataset as the validation set\n",
    "train_dataset, valid_dataset = torch.utils.data.random_split(train_dataset, [55000, 5000])\n",
    "\n",
    "# Take the first 500 samples of train_dataset as the training set\n",
    "train_dataset = torch.utils.data.Subset(train_dataset, range(500))\n",
    "\n",
    "print(len(train_dataset))\n",
    "print(len(valid_dataset))\n",
    "print(len(test_dataset))\n",
    "\n",
    "train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)\n",
    "valid_loader = DataLoader(valid_dataset, batch_size=16, shuffle=True)\n",
    "test_loader = DataLoader(test_dataset, batch_size=16, shuffle=False)\n",
    "\n",
    "# 500/16 -> 32 batches per epoch; step, epoch, batch size\n",
    "len(train_loader)"
   ],
   "id": "540e1c40e095d495",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "500\n",
      "5000\n",
      "10000\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "32"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 1
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-22T06:42:09.620702Z",
     "start_time": "2025-06-22T06:41:17.121645Z"
    }
   },
   "cell_type": "code",
   "source": [
    "class MnistModel(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.fc1 = nn.Linear(1 * 28 * 28, 128)  # fully connected layer\n",
    "        self.fc2 = nn.Linear(128, 128)\n",
    "        self.fc3 = nn.Linear(128, 128)\n",
    "        self.fc4 = nn.Linear(128, 10)  # image ---> (r0, r1, ..., r9): one score per class\n",
    "        self.dropout = nn.Dropout(0.1)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = torch.relu(self.fc1(x))  # (16, 1*28*28) * (1*28*28, 128) = (16, 128)\n",
    "        x = self.dropout(x)\n",
    "        # x = torch.relu(self.fc2(x))  # (16, 128) * (128, 256) = (16, 256)\n",
    "        # x = self.dropout(x)\n",
    "        # x = torch.relu(self.fc3(x))  # (16, 256) * (256, 128) = (16, 128)\n",
    "        # x = self.dropout(x)\n",
    "        x = self.fc4(x)  # (16, 128) * (128, 10) = (16, 10)\n",
    "        return x\n",
    "\n",
    "\n",
    "model = MnistModel()\n",
    "\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "writer = SummaryWriter()\n",
    "\n",
    "epochs = 100\n",
    "model.train()  # training mode: dropout is active; model.eval() would disable it\n",
    "total_train_loss = 0.0\n",
    "for epoch in range(epochs):\n",
    "    for i, (images, labels) in enumerate(train_loader):\n",
    "\n",
    "        outputs = model(images.view(-1, 1 * 28 * 28))\n",
    "        loss = criterion(outputs, labels)\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        total_train_loss += loss.item()\n",
    "\n",
    "        # Record the training loss every 10 steps\n",
    "        if (i + 1) % 10 == 0:\n",
    "            avg_train_loss = total_train_loss / 10\n",
    "            total_train_loss = 0.0\n",
    "\n",
    "            # Evaluate on the validation set\n",
    "            model.eval()\n",
    "            total_valid_loss = 0.0\n",
    "            with torch.no_grad():\n",
    "                for valid_images, valid_labels in valid_loader:\n",
    "                    valid_outputs = model(valid_images.view(-1, 1 * 28 * 28))\n",
    "                    valid_loss = criterion(valid_outputs, valid_labels)\n",
    "                    total_valid_loss += valid_loss.item()\n",
    "            avg_valid_loss = total_valid_loss / len(valid_loader)\n",
    "            model.train()  # switch back to training mode so dropout is active for the next steps\n",
    "\n",
    "            print('Epoch [{}/{}], Step [{}/{}], training loss: {:.4f}, validation loss: {:.4f}'\n",
    "                  .format(epoch + 1, epochs, i + 1, len(train_loader), avg_train_loss, avg_valid_loss))\n",
    "\n",
    "            global_step = epoch * len(train_loader) + i\n",
    "            writer.add_scalar('training loss', avg_train_loss, global_step)\n",
    "            writer.add_scalar('validation loss', avg_valid_loss, global_step)"
   ],
   "id": "8ce8a626dc1f4515",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch [1/100], Step [10/32], training loss: 2.2539, validation loss: 2.1515\n",
      "Epoch [1/100], Step [20/32], training loss: 2.0780, validation loss: 1.9148\n",
      "Epoch [1/100], Step [30/32], training loss: 1.7928, validation loss: 1.5989\n",
      "Epoch [2/100], Step [10/32], training loss: 1.6894, validation loss: 1.2702\n",
      "Epoch [2/100], Step [20/32], training loss: 1.1907, validation loss: 1.0560\n",
      "Epoch [2/100], Step [30/32], training loss: 0.8622, validation loss: 0.9778\n",
      "Epoch [3/100], Step [10/32], training loss: 0.9248, validation loss: 0.8533\n",
      "Epoch [3/100], Step [20/32], training loss: 0.6935, validation loss: 0.7967\n",
      "Epoch [3/100], Step [30/32], training loss: 0.6235, validation loss: 0.7139\n",
      "Epoch [4/100], Step [10/32], training loss: 0.6730, validation loss: 0.6563\n",
      "Epoch [4/100], Step [20/32], training loss: 0.4628, validation loss: 0.6422\n",
      "Epoch [4/100], Step [30/32], training loss: 0.5130, validation loss: 0.6072\n",
      "Epoch [5/100], Step [10/32], training loss: 0.4914, validation loss: 0.5665\n",
      "Epoch [5/100], Step [20/32], training loss: 0.3860, validation loss: 0.5429\n",
      "Epoch [5/100], Step [30/32], training loss: 0.3898, validation loss: 0.5452\n",
      "Epoch [6/100], Step [10/32], training loss: 0.3663, validation loss: 0.5628\n",
      "Epoch [6/100], Step [20/32], training loss: 0.3002, validation loss: 0.5034\n",
      "Epoch [6/100], Step [30/32], training loss: 0.3090, validation loss: 0.4906\n",
      "Epoch [7/100], Step [10/32], training loss: 0.3069, validation loss: 0.5327\n",
      "Epoch [7/100], Step [20/32], training loss: 0.2230, validation loss: 0.5309\n",
      "Epoch [7/100], Step [30/32], training loss: 0.2655, validation loss: 0.4805\n",
      "Epoch [8/100], Step [10/32], training loss: 0.3155, validation loss: 0.4743\n",
      "Epoch [8/100], Step [20/32], training loss: 0.2018, validation loss: 0.4892\n",
      "Epoch [8/100], Step [30/32], training loss: 0.1866, validation loss: 0.4662\n",
      "Epoch [9/100], Step [10/32], training loss: 0.2031, validation loss: 0.4730\n",
      "Epoch [9/100], Step [20/32], training loss: 0.1299, validation loss: 0.4852\n",
      "Epoch [9/100], Step [30/32], training loss: 0.2038, validation loss: 0.5397\n",
      "Epoch [10/100], Step [10/32], training loss: 0.1833, validation loss: 0.4580\n",
      "Epoch [10/100], Step [20/32], training loss: 0.1442, validation loss: 0.4650\n",
      "Epoch [10/100], Step [30/32], training loss: 0.1419, validation loss: 0.4625\n",
      "Epoch [11/100], Step [10/32], training loss: 0.1423, validation loss: 0.4744\n",
      "Epoch [11/100], Step [20/32], training loss: 0.0890, validation loss: 0.4729\n",
      "Epoch [11/100], Step [30/32], training loss: 0.1125, validation loss: 0.4499\n",
      "Epoch [12/100], Step [10/32], training loss: 0.1066, validation loss: 0.4489\n",
      "Epoch [12/100], Step [20/32], training loss: 0.0960, validation loss: 0.4642\n",
      "Epoch [12/100], Step [30/32], training loss: 0.1259, validation loss: 0.4606\n",
      "Epoch [13/100], Step [10/32], training loss: 0.0843, validation loss: 0.4579\n",
      "Epoch [13/100], Step [20/32], training loss: 0.0936, validation loss: 0.4601\n",
      "Epoch [13/100], Step [30/32], training loss: 0.0775, validation loss: 0.4501\n",
      "Epoch [14/100], Step [10/32], training loss: 0.1033, validation loss: 0.4578\n",
      "Epoch [14/100], Step [20/32], training loss: 0.0616, validation loss: 0.4491\n",
      "Epoch [14/100], Step [30/32], training loss: 0.0754, validation loss: 0.4557\n",
      "Epoch [15/100], Step [10/32], training loss: 0.0675, validation loss: 0.4540\n",
      "Epoch [15/100], Step [20/32], training loss: 0.0579, validation loss: 0.4546\n",
      "Epoch [15/100], Step [30/32], training loss: 0.0649, validation loss: 0.4513\n",
      "Epoch [16/100], Step [10/32], training loss: 0.0590, validation loss: 0.4565\n",
      "Epoch [16/100], Step [20/32], training loss: 0.0530, validation loss: 0.4619\n",
      "Epoch [16/100], Step [30/32], training loss: 0.0475, validation loss: 0.4519\n",
      "Epoch [17/100], Step [10/32], training loss: 0.0681, validation loss: 0.4563\n",
      "Epoch [17/100], Step [20/32], training loss: 0.0436, validation loss: 0.4569\n",
      "Epoch [17/100], Step [30/32], training loss: 0.0498, validation loss: 0.4622\n",
      "Epoch [18/100], Step [10/32], training loss: 0.0586, validation loss: 0.4701\n",
      "Epoch [18/100], Step [20/32], training loss: 0.0394, validation loss: 0.4554\n",
      "Epoch [18/100], Step [30/32], training loss: 0.0470, validation loss: 0.4510\n",
      "Epoch [19/100], Step [10/32], training loss: 0.0443, validation loss: 0.4541\n",
      "Epoch [19/100], Step [20/32], training loss: 0.0316, validation loss: 0.4607\n",
      "Epoch [19/100], Step [30/32], training loss: 0.0417, validation loss: 0.4696\n",
      "Epoch [20/100], Step [10/32], training loss: 0.0364, validation loss: 0.4608\n",
      "Epoch [20/100], Step [20/32], training loss: 0.0285, validation loss: 0.4691\n",
      "Epoch [20/100], Step [30/32], training loss: 0.0396, validation loss: 0.4662\n",
      "Epoch [21/100], Step [10/32], training loss: 0.0345, validation loss: 0.4616\n",
      "Epoch [21/100], Step [20/32], training loss: 0.0334, validation loss: 0.4630\n",
      "Epoch [21/100], Step [30/32], training loss: 0.0316, validation loss: 0.4765\n",
      "Epoch [22/100], Step [10/32], training loss: 0.0332, validation loss: 0.4737\n",
      "Epoch [22/100], Step [20/32], training loss: 0.0308, validation loss: 0.4649\n",
      "Epoch [22/100], Step [30/32], training loss: 0.0266, validation loss: 0.4686\n",
      "Epoch [23/100], Step [10/32], training loss: 0.0366, validation loss: 0.4671\n",
      "Epoch [23/100], Step [20/32], training loss: 0.0250, validation loss: 0.4713\n",
      "Epoch [23/100], Step [30/32], training loss: 0.0257, validation loss: 0.4680\n",
      "Epoch [24/100], Step [10/32], training loss: 0.0328, validation loss: 0.4728\n",
      "Epoch [24/100], Step [20/32], training loss: 0.0256, validation loss: 0.4731\n",
      "Epoch [24/100], Step [30/32], training loss: 0.0202, validation loss: 0.4688\n",
      "Epoch [25/100], Step [10/32], training loss: 0.0263, validation loss: 0.4756\n",
      "Epoch [25/100], Step [20/32], training loss: 0.0224, validation loss: 0.4701\n",
      "Epoch [25/100], Step [30/32], training loss: 0.0234, validation loss: 0.4801\n",
      "Epoch [26/100], Step [10/32], training loss: 0.0226, validation loss: 0.4772\n",
      "Epoch [26/100], Step [20/32], training loss: 0.0233, validation loss: 0.4725\n",
      "Epoch [26/100], Step [30/32], training loss: 0.0202, validation loss: 0.4817\n",
      "Epoch [27/100], Step [10/32], training loss: 0.0209, validation loss: 0.4797\n",
      "Epoch [27/100], Step [20/32], training loss: 0.0203, validation loss: 0.4787\n",
      "Epoch [27/100], Step [30/32], training loss: 0.0223, validation loss: 0.4807\n",
      "Epoch [28/100], Step [10/32], training loss: 0.0222, validation loss: 0.4807\n",
      "Epoch [28/100], Step [20/32], training loss: 0.0157, validation loss: 0.4772\n",
      "Epoch [28/100], Step [30/32], training loss: 0.0210, validation loss: 0.4828\n",
      "Epoch [29/100], Step [10/32], training loss: 0.0203, validation loss: 0.4775\n",
      "Epoch [29/100], Step [20/32], training loss: 0.0173, validation loss: 0.4826\n",
      "Epoch [29/100], Step [30/32], training loss: 0.0200, validation loss: 0.4844\n",
      "Epoch [30/100], Step [10/32], training loss: 0.0218, validation loss: 0.4885\n",
      "Epoch [30/100], Step [20/32], training loss: 0.0162, validation loss: 0.4858\n",
      "Epoch [30/100], Step [30/32], training loss: 0.0149, validation loss: 0.4808\n",
      "Epoch [31/100], Step [10/32], training loss: 0.0215, validation loss: 0.4855\n",
      "Epoch [31/100], Step [20/32], training loss: 0.0157, validation loss: 0.4845\n",
      "Epoch [31/100], Step [30/32], training loss: 0.0136, validation loss: 0.4868\n",
      "Epoch [32/100], Step [10/32], training loss: 0.0201, validation loss: 0.4884\n",
      "Epoch [32/100], Step [20/32], training loss: 0.0140, validation loss: 0.4897\n",
      "Epoch [32/100], Step [30/32], training loss: 0.0127, validation loss: 0.4898\n",
      "Epoch [33/100], Step [10/32], training loss: 0.0177, validation loss: 0.4885\n",
      "Epoch [33/100], Step [20/32], training loss: 0.0133, validation loss: 0.4931\n",
      "Epoch [33/100], Step [30/32], training loss: 0.0159, validation loss: 0.4956\n",
      "Epoch [34/100], Step [10/32], training loss: 0.0182, validation loss: 0.4949\n",
      "Epoch [34/100], Step [20/32], training loss: 0.0136, validation loss: 0.4936\n",
      "Epoch [34/100], Step [30/32], training loss: 0.0122, validation loss: 0.4943\n",
      "Epoch [35/100], Step [10/32], training loss: 0.0154, validation loss: 0.4943\n",
      "Epoch [35/100], Step [20/32], training loss: 0.0116, validation loss: 0.4971\n",
      "Epoch [35/100], Step [30/32], training loss: 0.0132, validation loss: 0.4973\n",
      "Epoch [36/100], Step [10/32], training loss: 0.0174, validation loss: 0.4993\n",
      "Epoch [36/100], Step [20/32], training loss: 0.0118, validation loss: 0.4986\n",
      "Epoch [36/100], Step [30/32], training loss: 0.0122, validation loss: 0.4950\n",
      "Epoch [37/100], Step [10/32], training loss: 0.0131, validation loss: 0.5009\n",
      "Epoch [37/100], Step [20/32], training loss: 0.0126, validation loss: 0.4985\n",
      "Epoch [37/100], Step [30/32], training loss: 0.0114, validation loss: 0.4985\n",
      "Epoch [38/100], Step [10/32], training loss: 0.0151, validation loss: 0.5002\n",
      "Epoch [38/100], Step [20/32], training loss: 0.0113, validation loss: 0.5013\n",
      "Epoch [38/100], Step [30/32], training loss: 0.0123, validation loss: 0.4984\n",
      "Epoch [39/100], Step [10/32], training loss: 0.0133, validation loss: 0.4998\n",
      "Epoch [39/100], Step [20/32], training loss: 0.0104, validation loss: 0.5046\n",
      "Epoch [39/100], Step [30/32], training loss: 0.0119, validation loss: 0.5004\n",
      "Epoch [40/100], Step [10/32], training loss: 0.0133, validation loss: 0.5024\n",
      "Epoch [40/100], Step [20/32], training loss: 0.0104, validation loss: 0.5037\n",
      "Epoch [40/100], Step [30/32], training loss: 0.0103, validation loss: 0.5060\n",
      "Epoch [41/100], Step [10/32], training loss: 0.0135, validation loss: 0.5016\n",
      "Epoch [41/100], Step [20/32], training loss: 0.0117, validation loss: 0.5066\n",
      "Epoch [41/100], Step [30/32], training loss: 0.0084, validation loss: 0.5049\n",
      "Epoch [42/100], Step [10/32], training loss: 0.0112, validation loss: 0.5066\n",
      "Epoch [42/100], Step [20/32], training loss: 0.0100, validation loss: 0.5055\n",
      "Epoch [42/100], Step [30/32], training loss: 0.0106, validation loss: 0.5072\n",
      "Epoch [43/100], Step [10/32], training loss: 0.0094, validation loss: 0.5087\n",
      "Epoch [43/100], Step [20/32], training loss: 0.0098, validation loss: 0.5084\n",
      "Epoch [43/100], Step [30/32], training loss: 0.0103, validation loss: 0.5099\n",
      "Epoch [44/100], Step [10/32], training loss: 0.0120, validation loss: 0.5106\n",
      "Epoch [44/100], Step [20/32], training loss: 0.0098, validation loss: 0.5123\n",
      "Epoch [44/100], Step [30/32], training loss: 0.0086, validation loss: 0.5089\n",
      "Epoch [45/100], Step [10/32], training loss: 0.0105, validation loss: 0.5106\n",
      "Epoch [45/100], Step [20/32], training loss: 0.0077, validation loss: 0.5084\n",
      "Epoch [45/100], Step [30/32], training loss: 0.0086, validation loss: 0.5110\n",
      "Epoch [46/100], Step [10/32], training loss: 0.0126, validation loss: 0.5099\n",
      "Epoch [46/100], Step [20/32], training loss: 0.0084, validation loss: 0.5155\n",
      "Epoch [46/100], Step [30/32], training loss: 0.0088, validation loss: 0.5114\n",
      "Epoch [47/100], Step [10/32], training loss: 0.0101, validation loss: 0.5117\n",
      "Epoch [47/100], Step [20/32], training loss: 0.0082, validation loss: 0.5113\n",
      "Epoch [47/100], Step [30/32], training loss: 0.0082, validation loss: 0.5178\n",
      "Epoch [48/100], Step [10/32], training loss: 0.0091, validation loss: 0.5158\n",
      "Epoch [48/100], Step [20/32], training loss: 0.0082, validation loss: 0.5137\n",
      "Epoch [48/100], Step [30/32], training loss: 0.0087, validation loss: 0.5168\n",
      "Epoch [49/100], Step [10/32], training loss: 0.0097, validation loss: 0.5154\n",
      "Epoch [49/100], Step [20/32], training loss: 0.0093, validation loss: 0.5155\n",
      "Epoch [49/100], Step [30/32], training loss: 0.0056, validation loss: 0.5197\n",
      "Epoch [50/100], Step [10/32], training loss: 0.0084, validation loss: 0.5204\n",
      "Epoch [50/100], Step [20/32], training loss: 0.0078, validation loss: 0.5195\n",
      "Epoch [50/100], Step [30/32], training loss: 0.0085, validation loss: 0.5186\n",
      "Epoch [51/100], Step [10/32], training loss: 0.0077, validation loss: 0.5189\n",
      "Epoch [51/100], Step [20/32], training loss: 0.0070, validation loss: 0.5231\n",
      "Epoch [51/100], Step [30/32], training loss: 0.0083, validation loss: 0.5199\n",
      "Epoch [52/100], Step [10/32], training loss: 0.0088, validation loss: 0.5190\n",
      "Epoch [52/100], Step [20/32], training loss: 0.0076, validation loss: 0.5233\n",
      "Epoch [52/100], Step [30/32], training loss: 0.0072, validation loss: 0.5200\n",
      "Epoch [53/100], Step [10/32], training loss: 0.0083, validation loss: 0.5218\n",
      "Epoch [53/100], Step [20/32], training loss: 0.0071, validation loss: 0.5220\n",
      "Epoch [53/100], Step [30/32], training loss: 0.0068, validation loss: 0.5226\n",
      "Epoch [54/100], Step [10/32], training loss: 0.0091, validation loss: 0.5205\n",
      "Epoch [54/100], Step [20/32], training loss: 0.0063, validation loss: 0.5208\n",
      "Epoch [54/100], Step [30/32], training loss: 0.0071, validation loss: 0.5241\n",
      "Epoch [55/100], Step [10/32], training loss: 0.0084, validation loss: 0.5262\n",
      "Epoch [55/100], Step [20/32], training loss: 0.0072, validation loss: 0.5275\n",
      "Epoch [55/100], Step [30/32], training loss: 0.0064, validation loss: 0.5232\n",
      "Epoch [56/100], Step [10/32], training loss: 0.0071, validation loss: 0.5253\n",
      "Epoch [56/100], Step [20/32], training loss: 0.0057, validation loss: 0.5239\n",
      "Epoch [56/100], Step [30/32], training loss: 0.0078, validation loss: 0.5289\n",
      "Epoch [57/100], Step [10/32], training loss: 0.0059, validation loss: 0.5298\n",
      "Epoch [57/100], Step [20/32], training loss: 0.0071, validation loss: 0.5279\n",
      "Epoch [57/100], Step [30/32], training loss: 0.0065, validation loss: 0.5270\n",
      "Epoch [58/100], Step [10/32], training loss: 0.0076, validation loss: 0.5288\n",
      "Epoch [58/100], Step [20/32], training loss: 0.0059, validation loss: 0.5288\n",
      "Epoch [58/100], Step [30/32], training loss: 0.0063, validation loss: 0.5271\n",
      "Epoch [59/100], Step [10/32], training loss: 0.0079, validation loss: 0.5332\n",
      "Epoch [59/100], Step [20/32], training loss: 0.0060, validation loss: 0.5283\n",
      "Epoch [59/100], Step [30/32], training loss: 0.0057, validation loss: 0.5289\n",
      "Epoch [60/100], Step [10/32], training loss: 0.0072, validation loss: 0.5311\n",
      "Epoch [60/100], Step [20/32], training loss: 0.0056, validation loss: 0.5280\n",
      "Epoch [60/100], Step [30/32], training loss: 0.0062, validation loss: 0.5293\n",
      "Epoch [61/100], Step [10/32], training loss: 0.0070, validation loss: 0.5308\n",
      "Epoch [61/100], Step [20/32], training loss: 0.0053, validation loss: 0.5319\n",
      "Epoch [61/100], Step [30/32], training loss: 0.0061, validation loss: 0.5297\n",
      "Epoch [62/100], Step [10/32], training loss: 0.0068, validation loss: 0.5341\n",
      "Epoch [62/100], Step [20/32], training loss: 0.0050, validation loss: 0.5306\n",
      "Epoch [62/100], Step [30/32], training loss: 0.0062, validation loss: 0.5349\n",
      "Epoch [63/100], Step [10/32], training loss: 0.0063, validation loss: 0.5351\n",
      "Epoch [63/100], Step [20/32], training loss: 0.0052, validation loss: 0.5314\n",
      "Epoch [63/100], Step [30/32], training loss: 0.0064, validation loss: 0.5325\n",
      "Epoch [64/100], Step [10/32], training loss: 0.0068, validation loss: 0.5358\n",
      "Epoch [64/100], Step [20/32], training loss: 0.0047, validation loss: 0.5350\n",
      "Epoch [64/100], Step [30/32], training loss: 0.0059, validation loss: 0.5351\n",
      "Epoch [65/100], Step [10/32], training loss: 0.0057, validation loss: 0.5352\n",
      "Epoch [65/100], Step [20/32], training loss: 0.0051, validation loss: 0.5374\n",
      "Epoch [65/100], Step [30/32], training loss: 0.0055, validation loss: 0.5372\n",
      "Epoch [66/100], Step [10/32], training loss: 0.0069, validation loss: 0.5376\n",
      "Epoch [66/100], Step [20/32], training loss: 0.0042, validation loss: 0.5352\n",
      "Epoch [66/100], Step [30/32], training loss: 0.0057, validation loss: 0.5379\n",
      "Epoch [67/100], Step [10/32], training loss: 0.0071, validation loss: 0.5377\n",
      "Epoch [67/100], Step [20/32], training loss: 0.0051, validation loss: 0.5355\n",
      "Epoch [67/100], Step [30/32], training loss: 0.0046, validation loss: 0.5363\n",
      "Epoch [68/100], Step [10/32], training loss: 0.0062, validation loss: 0.5371\n",
      "Epoch [68/100], Step [20/32], training loss: 0.0044, validation loss: 0.5382\n",
      "Epoch [68/100], Step [30/32], training loss: 0.0054, validation loss: 0.5396\n",
      "Epoch [69/100], Step [10/32], training loss: 0.0052, validation loss: 0.5417\n",
      "Epoch [69/100], Step [20/32], training loss: 0.0046, validation loss: 0.5384\n",
      "Epoch [69/100], Step [30/32], training loss: 0.0053, validation loss: 0.5390\n",
      "Epoch [70/100], Step [10/32], training loss: 0.0058, validation loss: 0.5399\n",
      "Epoch [70/100], Step [20/32], training loss: 0.0046, validation loss: 0.5440\n",
      "Epoch [70/100], Step [30/32], training loss: 0.0046, validation loss: 0.5423\n",
      "Epoch [71/100], Step [10/32], training loss: 0.0061, validation loss: 0.5387\n",
      "Epoch [71/100], Step [20/32], training loss: 0.0042, validation loss: 0.5416\n",
      "Epoch [71/100], Step [30/32], training loss: 0.0053, validation loss: 0.5465\n",
      "Epoch [72/100], Step [10/32], training loss: 0.0059, validation loss: 0.5435\n",
      "Epoch [72/100], Step [20/32], training loss: 0.0046, validation loss: 0.5419\n",
      "Epoch [72/100], Step [30/32], training loss: 0.0047, validation loss: 0.5415\n",
      "Epoch [73/100], Step [10/32], training loss: 0.0061, validation loss: 0.5422\n",
      "Epoch [73/100], Step [20/32], training loss: 0.0042, validation loss: 0.5446\n",
      "Epoch [73/100], Step [30/32], training loss: 0.0048, validation loss: 0.5431\n",
      "Epoch [74/100], Step [10/32], training loss: 0.0053, validation loss: 0.5451\n",
      "Epoch [74/100], Step [20/32], training loss: 0.0049, validation loss: 0.5429\n",
      "Epoch [74/100], Step [30/32], training loss: 0.0040, validation loss: 0.5439\n",
      "Epoch [75/100], Step [10/32], training loss: 0.0049, validation loss: 0.5444\n",
      "Epoch [75/100], Step [20/32], training loss: 0.0046, validation loss: 0.5468\n",
      "Epoch [75/100], Step [30/32], training loss: 0.0047, validation loss: 0.5444\n",
      "Epoch [76/100], Step [10/32], training loss: 0.0045, validation loss: 0.5433\n",
      "Epoch [76/100], Step [20/32], training loss: 0.0051, validation loss: 0.5476\n",
      "Epoch [76/100], Step [30/32], training loss: 0.0039, validation loss: 0.5452\n",
      "Epoch [77/100], Step [10/32], training loss: 0.0052, validation loss: 0.5463\n",
      "Epoch [77/100], Step [20/32], training loss: 0.0042, validation loss: 0.5482\n",
      "Epoch [77/100], Step [30/32], training loss: 0.0041, validation loss: 0.5481\n",
      "Epoch [78/100], Step [10/32], training loss: 0.0036, validation loss: 0.5477\n",
      "Epoch [78/100], Step [20/32], training loss: 0.0049, validation loss: 0.5501\n",
      "Epoch [78/100], Step [30/32], training loss: 0.0044, validation loss: 0.5484\n",
      "Epoch [79/100], Step [10/32], training loss: 0.0050, validation loss: 0.5500\n",
      "Epoch [79/100], Step [20/32], training loss: 0.0037, validation loss: 0.5493\n",
      "Epoch [79/100], Step [30/32], training loss: 0.0042, validation loss: 0.5483\n",
      "Epoch [80/100], Step [10/32], training loss: 0.0047, validation loss: 0.5489\n",
      "Epoch [80/100], Step [20/32], training loss: 0.0038, validation loss: 0.5487\n",
      "Epoch [80/100], Step [30/32], training loss: 0.0043, validation loss: 0.5496\n",
      "Epoch [81/100], Step [10/32], training loss: 0.0043, validation loss: 0.5514\n",
      "Epoch [81/100], Step [20/32], training loss: 0.0039, validation loss: 0.5500\n",
      "Epoch [81/100], Step [30/32], training loss: 0.0043, validation loss: 0.5511\n",
      "Epoch [82/100], Step [10/32], training loss: 0.0051, validation loss: 0.5513\n",
      "Epoch [82/100], Step [20/32], training loss: 0.0039, validation loss: 0.5520\n",
      "Epoch [82/100], Step [30/32], training loss: 0.0033, validation loss: 0.5506\n",
      "Epoch [83/100], Step [10/32], training loss: 0.0051, validation loss: 0.5531\n",
      "Epoch [83/100], Step [20/32], training loss: 0.0038, validation loss: 0.5529\n",
      "Epoch [83/100], Step [30/32], training loss: 0.0037, validation loss: 0.5527\n",
      "Epoch [84/100], Step [10/32], training loss: 0.0049, validation loss: 0.5531\n",
      "Epoch [84/100], Step [20/32], training loss: 0.0035, validation loss: 0.5524\n",
      "Epoch [84/100], Step [30/32], training loss: 0.0032, validation loss: 0.5538\n",
      "Epoch [85/100], Step [10/32], training loss: 0.0041, validation loss: 0.5535\n",
      "Epoch [85/100], Step [20/32], training loss: 0.0038, validation loss: 0.5529\n",
      "Epoch [85/100], Step [30/32], training loss: 0.0041, validation loss: 0.5550\n",
      "Epoch [86/100], Step [10/32], training loss: 0.0046, validation loss: 0.5594\n",
      "Epoch [86/100], Step [20/32], training loss: 0.0038, validation loss: 0.5538\n",
      "Epoch [86/100], Step [30/32], training loss: 0.0036, validation loss: 0.5546\n",
      "Epoch [87/100], Step [10/32], training loss: 0.0040, validation loss: 0.5600\n",
      "Epoch [87/100], Step [20/32], training loss: 0.0033, validation loss: 0.5573\n",
      "Epoch [87/100], Step [30/32], training loss: 0.0037, validation loss: 0.5583\n",
      "Epoch [88/100], Step [10/32], training loss: 0.0049, validation loss: 0.5539\n",
      "Epoch [88/100], Step [20/32], training loss: 0.0032, validation loss: 0.5556\n",
      "Epoch [88/100], Step [30/32], training loss: 0.0035, validation loss: 0.5559\n",
      "Epoch [89/100], Step [10/32], training loss: 0.0038, validation loss: 0.5599\n",
      "Epoch [89/100], Step [20/32], training loss: 0.0035, validation loss: 0.5597\n",
      "Epoch [89/100], Step [30/32], training loss: 0.0036, validation loss: 0.5561\n",
      "Epoch [90/100], Step [10/32], training loss: 0.0043, validation loss: 0.5567\n",
      "Epoch [90/100], Step [20/32], training loss: 0.0035, validation loss: 0.5572\n",
      "Epoch [90/100], Step [30/32], training loss: 0.0031, validation loss: 0.5604\n",
      "Epoch [91/100], Step [10/32], training loss: 0.0042, validation loss: 0.5577\n",
      "Epoch [91/100], Step [20/32], training loss: 0.0032, validation loss: 0.5564\n",
      "Epoch [91/100], Step [30/32], training loss: 0.0035, validation loss: 0.5598\n",
      "Epoch [92/100], Step [10/32], training loss: 0.0043, validation loss: 0.5589\n",
      "Epoch [92/100], Step [20/32], training loss: 0.0033, validation loss: 0.5583\n",
      "Epoch [92/100], Step [30/32], training loss: 0.0031, validation loss: 0.5597\n",
      "Epoch [93/100], Step [10/32], training loss: 0.0041, validation loss: 0.5590\n",
      "Epoch [93/100], Step [20/32], training loss: 0.0031, validation loss: 0.5610\n",
      "Epoch [93/100], Step [30/32], training loss: 0.0032, validation loss: 0.5615\n",
      "Epoch [94/100], Step [10/32], training loss: 0.0041, validation loss: 0.5618\n",
      "Epoch [94/100], Step [20/32], training loss: 0.0030, validation loss: 0.5621\n",
      "Epoch [94/100], Step [30/32], training loss: 0.0032, validation loss: 0.5594\n",
      "Epoch [95/100], Step [10/32], training loss: 0.0035, validation loss: 0.5621\n",
      "Epoch [95/100], Step [20/32], training loss: 0.0032, validation loss: 0.5622\n",
      "Epoch [95/100], Step [30/32], training loss: 0.0034, validation loss: 0.5642\n",
      "Epoch [96/100], Step [10/32], training loss: 0.0037, validation loss: 0.5614\n",
      "Epoch [96/100], Step [20/32], training loss: 0.0032, validation loss: 0.5661\n",
      "Epoch [96/100], Step [30/32], training loss: 0.0031, validation loss: 0.5622\n",
      "Epoch [97/100], Step [10/32], training loss: 0.0036, validation loss: 0.5672\n",
      "Epoch [97/100], Step [20/32], training loss: 0.0029, validation loss: 0.5636\n",
      "Epoch [97/100], Step [30/32], training loss: 0.0032, validation loss: 0.5614\n",
      "Epoch [98/100], Step [10/32], training loss: 0.0036, validation loss: 0.5631\n",
      "Epoch [98/100], Step [20/32], training loss: 0.0027, validation loss: 0.5631\n",
      "Epoch [98/100], Step [30/32], training loss: 0.0031, validation loss: 0.5649\n",
      "Epoch [99/100], Step [10/32], training loss: 0.0036, validation loss: 0.5642\n",
      "Epoch [99/100], Step [20/32], training loss: 0.0031, validation loss: 0.5663\n",
      "Epoch [99/100], Step [30/32], training loss: 0.0030, validation loss: 0.5641\n",
      "Epoch [100/100], Step [10/32], training loss: 0.0039, validation loss: 0.5638\n",
      "Epoch [100/100], Step [20/32], training loss: 0.0030, validation loss: 0.5664\n",
      "Epoch [100/100], Step [30/32], training loss: 0.0030, validation loss: 0.5657\n"
     ]
    }
   ],
   "execution_count": 9
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## Validation Set, Overfitting, Underfitting, and Dropout\n",
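    "\n",
    "A minimal sketch of how Dropout behaves (illustrative only, assuming PyTorch's `nn.Dropout` with p=0.2): in training mode it zeroes elements at random and rescales the survivors by 1/(1-p); in eval mode it does nothing.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "torch.manual_seed(0)\n",
    "drop = nn.Dropout(p=0.2)  # each element is zeroed with probability 0.2 during training\n",
    "x = torch.ones(8)\n",
    "\n",
    "drop.train()               # training mode: dropout is active\n",
    "y_train = drop(x)          # elements are either 0.0 or 1/(1-0.2) = 1.25\n",
    "\n",
    "drop.eval()                # eval mode: dropout is a no-op\n",
    "y_eval = drop(x)           # identical to x\n",
    "```\n",
    "\n",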
    "Dropout is a regularization technique for neural networks that helps prevent overfitting. On every forward pass during training, it randomly forces 20% of the neurons to \"rest\" (their outputs are set to 0), so the remaining neurons are pushed to cooperate and learn more robust, general feature patterns. The net effect is a model that is less prone to memorizing the training data (reduced overfitting) and that performs better on unseen data (better generalization)."
   ],
   "id": "bcce36ee5ea79a57"
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.20"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
