{
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "## Data Loaders"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "---"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "#### Introduction"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "Data is the foundation of deep learning: most of the problems we solve with deep learning require data, and every framework has its own requirements on how that data is formatted. This experiment therefore explains the input format PyTorch expects, and how to turn real-world data into datasets that PyTorch can work with."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "#### Key Points"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "- Batching data\n",
    "- Batching the handwritten digit data\n",
    "- Batching the wine data"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "---"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "### Batching Data"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "Deep learning datasets are usually far too large to load into memory all at once. Before training, we therefore split the training set into batches: the data is shuffled and divided into equal-sized chunks, and each iteration trains on a single chunk, as the pseudocode below shows:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "    # training loop\n",
    "    for epoch in range(num_epochs):\n",
    "        # iterate over all batches\n",
    "        for i in range(total_batches):\n",
    "            batch_x, batch_y = ..."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "The pseudocode above shows that:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "- Each increment of epoch corresponds to one forward and one backward pass over the entire dataset.\n",
    "- total_batches is the number of batches the dataset has been split into.\n",
    "- Each increment of i corresponds to one forward and one backward pass over a single batch."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "So how do we split a dataset into batches? We could try writing the batching code ourselves."
   ],
   "metadata": {}
  },
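  {
   "cell_type": "markdown",
   "source": [
    "As a minimal sketch of that idea (our own illustration, not a PyTorch API; the helper name iterate_batches is hypothetical), manual batching boils down to shuffling the sample indices and slicing them into fixed-size chunks:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "import random\n",
    "\n",
    "def iterate_batches(xs, ys, batch_size):\n",
    "    # Shuffle the index order, then yield consecutive slices of batch_size\n",
    "    indices = list(range(len(xs)))\n",
    "    random.shuffle(indices)\n",
    "    for start in range(0, len(indices), batch_size):\n",
    "        idx = indices[start:start + batch_size]\n",
    "        yield [xs[i] for i in idx], [ys[i] for i in idx]\n",
    "\n",
    "# 10 samples with batch_size=4 -> batch sizes 4, 4, 2\n",
    "xs, ys = list(range(10)), list(range(10))\n",
    "print([len(bx) for bx, _ in iterate_batches(xs, ys, 4)])"
   ],
   "outputs": [],
   "metadata": {}
  },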
  {
   "cell_type": "markdown",
   "source": [
    "Of course, PyTorch also provides an interface for this, which makes batching very easy to implement."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "PyTorch provides the `torch.utils.data.DataLoader` loader, which automatically shuffles and batches the data passed to it. The main parameters of `DataLoader()` are:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "- dataset: the dataset to load and shuffle.\n",
    "- batch_size: the number of samples in each batch.\n",
    "- shuffle: True or False, whether to shuffle the data before batching."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "The loader does more than shuffle and batch: it also collates the data into a format that can be fed into the neural network models used later."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "Next, let's use this loader to batch MNIST, one of the datasets that ships with PyTorch."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "#### Batching MNIST"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "First, let's load the built-in dataset. It lives in `torchvision.datasets` and can be obtained directly via `torchvision.datasets.MNIST`:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "source": [
    "import torch\n",
    "import torchvision\n",
    "# Download the dataset to the specified directory\n",
    "train_dataset = torchvision.datasets.MNIST(root='./data',\n",
    "                                           train=True,\n",
    "                                           transform=torchvision.transforms.ToTensor(),\n",
    "                                           download=True)\n",
    "train_dataset"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": "Dataset MNIST\n    Number of datapoints: 60000\n    Root location: ./data\n    Split: Train\n    StandardTransform\nTransform: ToTensor()"
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "The result shows that the dataset contains 60000 samples. Since they are all images, loading them in batches is essential."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "MNIST is a handwritten digit dataset that stores a large number of handwritten character images. Let's load one image and take a look:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "a_data, a_label = train_dataset[0]\n",
    "print(\"\\nFirst image of the dataset, label = {}\".format(a_label))\n",
    "# Show the first image\n",
    "# ToTensor() normalized the pixels to [0, 1], so map them back to [0, 255]\n",
    "img = np.array(a_data) * 255\n",
    "# reshape from 3-D to 2-D, then cast to an integer type\n",
    "img = np.reshape(img, [28, 28]).astype(np.uint8)\n",
    "# display the image\n",
    "plt.imshow(img, 'gray')\n",
    "plt.show()"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "First image of the dataset, label = 5\n"
     ]
    },
    {
     "data": {
      "text/plain": "<Figure size 432x288 with 1 Axes>",
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD4CAYAAAAq5pAIAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjQuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8rg+JYAAAACXBIWXMAAAsTAAALEwEAmpwYAAAN4klEQVR4nO3df6hc9ZnH8c9HU/9QKzGVjUHjpqlCLOKmEoK4YXGtDa6IxhhKgywu/khBoy1sYCX7RwPrhrAmXVbRhluMjUs3taCSSy3b+KPZ7AqpXkPUJNrqitKbXJOVEEwUjDHP/nFPylXvfOc6c+ZH7vN+wWVmzjNnzsOYj+fM+c6ZryNCACa/U3rdAIDuIOxAEoQdSIKwA0kQdiCJKd3cmG1O/QMdFhEeb3lbe3bb19j+ve23bN/bzmsB6Cy3Os5u+1RJf5D0HUnDkl6StDQi9hTWYc8OdFgn9uzzJb0VEW9HxFFJv5B0QxuvB6CD2gn7eZL+OObxcLXsM2wvsz1ke6iNbQFoU8dP0EXEgKQBicN4oJfa2bPvlTRzzOPzq2UA+lA7YX9J0kW2v277NEnfkzRYT1sA6tbyYXxEHLO9XNJvJJ0qaUNE7K6tMwC1annoraWN8Zkd6LiOfKkGwMmDsANJEHYgCcIOJEHYgSQIO5AEYQeSIOxAEoQdSIKwA0kQdiAJwg4kQdiBJAg7kARhB5Ig7EAShB1IgrADSRB2IAnCDiRB2IEkCDuQBGEHkiDsQBKEHUiCsANJEHYgCcIOJEHYgSRanrIZJ4cpU8r/ic8666yObn/58uUNa6effnpx3Tlz5hTrd955Z7G+du3ahrWlS5cW1/3444+L9TVr1hTrq1atKtZ7oa2w235H0mFJn0o6FhHz6mgKQP3q2LP/dUS8X8PrAOggPrMDSbQb9pC0xfbLtpeN9wTby2wP2R5qc1sA2tDuYfyCiNhr+88kPWP7jYjYNvYJETEgaUCSbEeb2wPQorb27BGxt7o9IOkpSfPraApA/VoOu+0zbH/1xH1JCyXtqqsxAPVq5zB+uqSnbJ94nf+IiP+spatJ5oILLijWTzvttGL9iiuuKNYXLFjQsDZ16tTiukuWLCnWe2l4eLhYf+CBB4r1xYsXN6wdPny4uO4rr7xSrG/durVY70cthz0i3pb0FzX2AqCDGHoDkiDsQBKEHUiCsANJEHYgCUd070ttk/UbdJdddlmx/uyzzxbrzYbHJqvjx48X67feemux3mz4rGRkZKRYP3ToULH+xhtvtLztTosIj7ecPTuQBGEHkiDsQBKEHUiCsANJEHYgCcIOJME4ew2mTZtWrL/44ovF+uzZs+tsp1bbt28v1puNR1911VUNa0ePHi2u2+mfuZ6sGGcHkiPsQBKEHUiCsANJEHYgCcIOJEHYgSSYsrkGBw8eLNZXrFhRrF933XXF+s6dO4v1Zj+p3M5rL1y4sFg/cuRIsX7JJZc0rN1zzz3FdVEv9uxAEoQdSIKwA0kQdiAJwg4kQdiBJAg7kATXs/eBZtdtNxvLXr9+fcPa7bffXlz35ptvLtY3bdpUrKP/tHw9u+0Ntg/Y3jVm2TTbz9h+s7o9u85mAdRvIofxP5N0zeeW3SvpuYi4SNJz1WMAfaxp2CNim6TPfx/0Bkkbq/sbJS2qty0AdWv1u/HTI+LEZFnvSZre6Im2l0la1uJ2ANSk7QthIiJKJ94iYkDSgMQJOqCXWh162297hiRVtwfqawlAJ7Qa9kFJt1T3b5G0uZ52AHRK08N425skXSnpHNvDkn4kaY2kX9q+TdK7kr7bySYnuw8++KBn699xxx3F+uOPP16sN5tjHf2jadgjYmmD0rdr7gVAB/F1WSAJwg4kQdiBJAg7kARhB5LgEtdJ4Mwzz2xYGxwcLK575ZVXFuvXXPP5a6A+a8uWLcU6uo8pm4HkCDuQBGEHkiDsQBKEHUiCsANJEHYgCcbZJ7kLL7ywWN+xY0ex
fujQoWL9+eefL9aHhoYa1h5++OHiulw+2xrG2YHkCDuQBGEHkiDsQBKEHUiCsANJEHYgCcbZk7vxxhuL9UcffbRYbzbddMnKlSuL9ccee6xY37dvX8vbnswYZweSI+xAEoQdSIKwA0kQdiAJwg4kQdiBJBhnR9Gll15arK9du7ZYv/rqq1ve9vr164v11atXF+vDw8Mtb/tk1vI4u+0Ntg/Y3jVm2Srbe23vrP6urbNZAPWbyGH8zySNNy3Iv0bE3Orv1/W2BaBuTcMeEdskHexCLwA6qJ0TdMttv1od5p/d6Em2l9kest34x8gAdFyrYf+JpG9ImitpRNK6Rk+MiIGImBcR81rcFoAatBT2iNgfEZ9GxHFJP5U0v962ANStpbDbnjHm4Y2SdjV6LoD+0HSc3fYmSVdKOkfSfkk/qh7PlRSS3pH0/YgYaboxxtknnalTpxbr119/fcPahg0biuueckp5X9TsN+vbGeM/mTUaZ58ygRWXjrP4kbY7AtBVfF0WSIKwA0kQdiAJwg4kQdiBJLjEFT1z9OjRYn3KlPJg0bFjx4r1hQsXNqxt3bq1uO7JjJ+SBpIj7EAShB1IgrADSRB2IAnCDiRB2IEkml71htya/ZT0kiVLivX58xv/rkmzcfRm9uzZU6xv27atrdefbNizA0kQdiAJwg4kQdiBJAg7kARhB5Ig7EASjLNPcnPmzCnW77rrrmL9pptuKtbPPffcL93TRDW7Xn3fvn3F+vHjx+ts56THnh1IgrADSRB2IAnCDiRB2IEkCDuQBGEHkmCc/STQbCx76dLxJtoddffddxfXnTVrVist1WJoaKhYv++++4r1wcHBOtuZ9Jru2W3PtP1b23ts77b9g2r5NNvP2H6zuj278+0CaNVEDuOPSfr7iPimpMsl3WX7m5LulfRcRFwk6bnqMYA+1TTsETESETuq+4clvS7pPEk3SNpYPW2jpEUd6hFADb7UZ3bbsyR9S9LvJE2PiJGq9J6k6Q3WWSZpWRs9AqjBhM/G2z5T0hOSfhgRH4ytxejskONO2hgRAxExLyLmtdUpgLZMKOy2v6LRoP88Ip6sFu+3PaOqz5B0oDMtAqhD08N425b0iKTXI+LHY0qDkm6RtKa63dyRDieBZkNnF198cbH+0EMPFevNLmPtpO3btxfr999/f8Pa5s3lfzJcolqviXxm/0tJfyvpNds7q2UrNRryX9q+TdK7kr7bkQ4B1KJp2CPifySNO7m7pG/X2w6ATuHrskAShB1IgrADSRB2IAnCDiTBJa4TNG3atIa1gYGB4rpz584t1mfPnt1KS7V44YUXivV169YV61u2bCnWP/rooy/dEzqDPTuQBGEHkiDsQBKEHUiCsANJEHYgCcIOJJFmnP3yyy8v1lesWFGsz58/v2Ht/PPPb6mnunz44YcNaw8++GBx3dWrVxfrR44caakn9B/27EAShB1IgrADSRB2IAnCDiRB2IEkCDuQRJpx9kWLFhXrixcv7ti2d+/eXaw//fTTxfonn3xSrJeuOT906FBxXeTBnh1IgrADSRB2IAnCDiRB2IEkCDuQBGEHknBElJ9gz5T0mKTpkkLSQET8m+1Vku6Q9H/VU1dGxK+bvFZ5YwDaFhHjzro8kbDPkDQjInbY/qqklyUt0uh87EciYu1EmyDsQOc1CvtE5mcfkTRS3T9s+3VJ59XbHoBO+1Kf2W3PkvQtSb+rFi23/artDbbPbrDOMttDtofaaxVAO5oexv/pifaZkv5L0j9HxJO2p0t6X6Of4/9Jo4f6tzZ5DQ7jgQ5r+TO7JNn+iqRfSfpNRPx4nPosSb+KiEuavA5hBzqsUdibHsbbtqRHJL0+NujVibsTbpS0q90mAXTORM7GL5D035Jek3S8WrxS0lJJczV6GP+OpO9XJ/NKr8WeHeiwtg7j60LYgc5r+TAewORA2IEkCDuQBGEHkiDsQBKEHUiCsANJEHYgCcIOJEHYgSQIO5AEYQeSIOxA
EoQdSKLbUza/L+ndMY/PqZb1o37trV/7kuitVXX29ueNCl29nv0LG7eHImJezxoo6Nfe+rUvid5a1a3eOIwHkiDsQBK9DvtAj7df0q+99WtfEr21qiu99fQzO4Du6fWeHUCXEHYgiZ6E3fY1tn9v+y3b9/aih0Zsv2P7Nds7ez0/XTWH3gHbu8Ysm2b7GdtvVrfjzrHXo95W2d5bvXc7bV/bo95m2v6t7T22d9v+QbW8p+9doa+uvG9d/8xu+1RJf5D0HUnDkl6StDQi9nS1kQZsvyNpXkT0/AsYtv9K0hFJj52YWsv2v0g6GBFrqv9Rnh0R/9Anva3Sl5zGu0O9NZpm/O/Uw/euzunPW9GLPft8SW9FxNsRcVTSLyTd0IM++l5EbJN08HOLb5C0sbq/UaP/WLquQW99ISJGImJHdf+wpBPTjPf0vSv01RW9CPt5kv445vGw+mu+95C0xfbLtpf1uplxTB8zzdZ7kqb3splxNJ3Gu5s+N81437x3rUx/3i5O0H3Rgoi4TNLfSLqrOlztSzH6Gayfxk5/IukbGp0DcETSul42U00z/oSkH0bEB2NrvXzvxumrK+9bL8K+V9LMMY/Pr5b1hYjYW90ekPSURj929JP9J2bQrW4P9LifP4mI/RHxaUQcl/RT9fC9q6YZf0LSzyPiyWpxz9+78frq1vvWi7C/JOki21+3fZqk70ka7EEfX2D7jOrEiWyfIWmh+m8q6kFJt1T3b5G0uYe9fEa/TOPdaJpx9fi96/n05xHR9T9J12r0jPz/SvrHXvTQoK/Zkl6p/nb3ujdJmzR6WPeJRs9t3Cbpa5Kek/SmpGclTeuj3v5do1N7v6rRYM3oUW8LNHqI/qqkndXftb1+7wp9deV94+uyQBKcoAOSIOxAEoQdSIKwA0kQdiAJwg4kQdiBJP4fN6NxqfHk1jAAAAAASUVORK5CYII=\n"
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "Batching this dataset is simple: we just pass the dataset we obtained into DataLoader:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "source": [
    "from torch.utils.data import DataLoader\n",
    "\n",
    "train_loader = DataLoader(dataset=train_dataset, batch_size=100, shuffle=True)\n",
    "num_epochs = 1  # number of epochs\n",
    "for epoch in range(num_epochs):\n",
    "    for i, (inputs, labels) in enumerate(train_loader):\n",
    "        # print every 10 batches\n",
    "        if (i+1) % 10 == 0:\n",
    "            print(\n",
    "                f'Epoch: {epoch+1}/{num_epochs},Step {i+1}/{len(train_dataset)/100}| Inputs {inputs.shape} | Labels {labels.shape}')"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 1/1,Step 10/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 20/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 30/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 40/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 50/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 60/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 70/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 80/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 90/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 100/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 110/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 120/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 130/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 140/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 150/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 160/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 170/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 180/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 190/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 200/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 210/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 220/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 230/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 240/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 250/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 260/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 270/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 280/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 290/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 300/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 310/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 320/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 330/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 340/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 350/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 360/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 370/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 380/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 390/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 400/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 410/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 420/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 430/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 440/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 450/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 460/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 470/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 480/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 490/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 500/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 510/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 520/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 530/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 540/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 550/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 560/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 570/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 580/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 590/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n",
      "Epoch: 1/1,Step 600/600.0| Inputs torch.Size([100, 1, 28, 28]) | Labels torch.Size([100])\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "As shown above, we batched the MNIST dataset and iterated over the batches in a loop. When training a model, we simply replace the body of the inner loop with the training code."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "So far we have batched a dataset that ships with PyTorch, but the library obviously does not bundle every dataset in the world. How do we batch a dataset of our own?\n"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "Among the arguments passed to `DataLoader`, the one that deserves attention is `dataset`. It must be a Dataset object: only a class that inherits from PyTorch's `Dataset` interface can be passed to `DataLoader`.\n"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "Therefore, for our own data we usually need to process the raw data and wrap it in a Python class that inherits from Dataset. Only then can it be passed into PyTorch for batching and model training."
   ],
   "metadata": {}
  },
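  {
   "cell_type": "markdown",
   "source": [
    "As a minimal sketch of such a wrapper (the class name WineDataset and its constructor arguments are our own assumption for illustration), a Dataset subclass only needs to implement __init__, __getitem__, and __len__:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "import torch\n",
    "from torch.utils.data import Dataset\n",
    "\n",
    "class WineDataset(Dataset):\n",
    "    # Wrap feature and label arrays so DataLoader can index them\n",
    "    def __init__(self, features, labels):\n",
    "        self.x = torch.as_tensor(features, dtype=torch.float32)\n",
    "        self.y = torch.as_tensor(labels, dtype=torch.long)\n",
    "\n",
    "    def __getitem__(self, index):\n",
    "        # Return one (sample, label) pair\n",
    "        return self.x[index], self.y[index]\n",
    "\n",
    "    def __len__(self):\n",
    "        # Number of samples\n",
    "        return len(self.x)"
   ],
   "outputs": [],
   "metadata": {}
  },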
  {
   "cell_type": "markdown",
   "source": [
    "Next, taking wine-variety prediction as an example, let's walk through in detail how a custom dataset is wrapped and batched."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "#### Batching the Wine Data"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "We now batch the wine data."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "First, let's load the dataset:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "source": [
    "import pandas as pd\n",
    "df = pd.read_csv(\n",
    "    \"https://labfile.oss.aliyuncs.com/courses/2316/wine.csv\", header=None)\n",
    "df"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": "     0      1     2     3     4    5     6     7     8     9      10    11  \\\n0     1  14.23  1.71  2.43  15.6  127  2.80  3.06  0.28  2.29   5.64  1.04   \n1     1  13.20  1.78  2.14  11.2  100  2.65  2.76  0.26  1.28   4.38  1.05   \n2     1  13.16  2.36  2.67  18.6  101  2.80  3.24  0.30  2.81   5.68  1.03   \n3     1  14.37  1.95  2.50  16.8  113  3.85  3.49  0.24  2.18   7.80  0.86   \n4     1  13.24  2.59  2.87  21.0  118  2.80  2.69  0.39  1.82   4.32  1.04   \n..   ..    ...   ...   ...   ...  ...   ...   ...   ...   ...    ...   ...   \n173   3  13.71  5.65  2.45  20.5   95  1.68  0.61  0.52  1.06   7.70  0.64   \n174   3  13.40  3.91  2.48  23.0  102  1.80  0.75  0.43  1.41   7.30  0.70   \n175   3  13.27  4.28  2.26  20.0  120  1.59  0.69  0.43  1.35  10.20  0.59   \n176   3  13.17  2.59  2.37  20.0  120  1.65  0.68  0.53  1.46   9.30  0.60   \n177   3  14.13  4.10  2.74  24.5   96  2.05  0.76  0.56  1.35   9.20  0.61   \n\n       12    13  \n0    3.92  1065  \n1    3.40  1050  \n2    3.17  1185  \n3    3.45  1480  \n4    2.93   735  \n..    ...   ...  \n173  1.74   740  \n174  1.56   750  \n175  1.56   835  \n176  1.62   840  \n177  1.60   560  \n\n[178 rows x 14 columns]",
      "text/html": "<div>\n<style scoped>\n    .dataframe tbody tr th:only-of-type {\n        vertical-align: middle;\n    }\n\n    .dataframe tbody tr th {\n        vertical-align: top;\n    }\n\n    .dataframe thead th {\n        text-align: right;\n    }\n</style>\n<table border=\"1\" class=\"dataframe\">\n  <thead>\n    <tr style=\"text-align: right;\">\n      <th></th>\n      <th>0</th>\n      <th>1</th>\n      <th>2</th>\n      <th>3</th>\n      <th>4</th>\n      <th>5</th>\n      <th>6</th>\n      <th>7</th>\n      <th>8</th>\n      <th>9</th>\n      <th>10</th>\n      <th>11</th>\n      <th>12</th>\n      <th>13</th>\n    </tr>\n  </thead>\n  <tbody>\n    <tr>\n      <th>0</th>\n      <td>1</td>\n      <td>14.23</td>\n      <td>1.71</td>\n      <td>2.43</td>\n      <td>15.6</td>\n      <td>127</td>\n      <td>2.80</td>\n      <td>3.06</td>\n      <td>0.28</td>\n      <td>2.29</td>\n      <td>5.64</td>\n      <td>1.04</td>\n      <td>3.92</td>\n      <td>1065</td>\n    </tr>\n    <tr>\n      <th>1</th>\n      <td>1</td>\n      <td>13.20</td>\n      <td>1.78</td>\n      <td>2.14</td>\n      <td>11.2</td>\n      <td>100</td>\n      <td>2.65</td>\n      <td>2.76</td>\n      <td>0.26</td>\n      <td>1.28</td>\n      <td>4.38</td>\n      <td>1.05</td>\n      <td>3.40</td>\n      <td>1050</td>\n    </tr>\n    <tr>\n      <th>2</th>\n      <td>1</td>\n      <td>13.16</td>\n      <td>2.36</td>\n      <td>2.67</td>\n      <td>18.6</td>\n      <td>101</td>\n      <td>2.80</td>\n      <td>3.24</td>\n      <td>0.30</td>\n      <td>2.81</td>\n      <td>5.68</td>\n      <td>1.03</td>\n      <td>3.17</td>\n      <td>1185</td>\n    </tr>\n    <tr>\n      <th>3</th>\n      <td>1</td>\n      <td>14.37</td>\n      <td>1.95</td>\n      <td>2.50</td>\n      <td>16.8</td>\n      <td>113</td>\n      <td>3.85</td>\n      <td>3.49</td>\n      <td>0.24</td>\n      <td>2.18</td>\n      <td>7.80</td>\n      <td>0.86</td>\n      <td>3.45</td>\n      <td>1480</td>\n    </tr>\n    <tr>\n  
    <th>4</th>\n      <td>1</td>\n      <td>13.24</td>\n      <td>2.59</td>\n      <td>2.87</td>\n      <td>21.0</td>\n      <td>118</td>\n      <td>2.80</td>\n      <td>2.69</td>\n      <td>0.39</td>\n      <td>1.82</td>\n      <td>4.32</td>\n      <td>1.04</td>\n      <td>2.93</td>\n      <td>735</td>\n    </tr>\n    <tr>\n      <th>...</th>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n    </tr>\n    <tr>\n      <th>173</th>\n      <td>3</td>\n      <td>13.71</td>\n      <td>5.65</td>\n      <td>2.45</td>\n      <td>20.5</td>\n      <td>95</td>\n      <td>1.68</td>\n      <td>0.61</td>\n      <td>0.52</td>\n      <td>1.06</td>\n      <td>7.70</td>\n      <td>0.64</td>\n      <td>1.74</td>\n      <td>740</td>\n    </tr>\n    <tr>\n      <th>174</th>\n      <td>3</td>\n      <td>13.40</td>\n      <td>3.91</td>\n      <td>2.48</td>\n      <td>23.0</td>\n      <td>102</td>\n      <td>1.80</td>\n      <td>0.75</td>\n      <td>0.43</td>\n      <td>1.41</td>\n      <td>7.30</td>\n      <td>0.70</td>\n      <td>1.56</td>\n      <td>750</td>\n    </tr>\n    <tr>\n      <th>175</th>\n      <td>3</td>\n      <td>13.27</td>\n      <td>4.28</td>\n      <td>2.26</td>\n      <td>20.0</td>\n      <td>120</td>\n      <td>1.59</td>\n      <td>0.69</td>\n      <td>0.43</td>\n      <td>1.35</td>\n      <td>10.20</td>\n      <td>0.59</td>\n      <td>1.56</td>\n      <td>835</td>\n    </tr>\n    <tr>\n      <th>176</th>\n      <td>3</td>\n      <td>13.17</td>\n      <td>2.59</td>\n      <td>2.37</td>\n      <td>20.0</td>\n      <td>120</td>\n      <td>1.65</td>\n      <td>0.68</td>\n      <td>0.53</td>\n      <td>1.46</td>\n      <td>9.30</td>\n      <td>0.60</td>\n      <td>1.62</td>\n      <td>840</td>\n    </tr>\n    <tr>\n      <th>177</th>\n      
<td>3</td>\n      <td>14.13</td>\n      <td>4.10</td>\n      <td>2.74</td>\n      <td>24.5</td>\n      <td>96</td>\n      <td>2.05</td>\n      <td>0.76</td>\n      <td>0.56</td>\n      <td>1.35</td>\n      <td>9.20</td>\n      <td>0.61</td>\n      <td>1.60</td>\n      <td>560</td>\n    </tr>\n  </tbody>\n</table>\n<p>178 rows × 14 columns</p>\n</div>"
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "source": [
    "import pandas as pd\n",
    "# 如果读取的数据没有表头，一定要设定 header=None\n",
    "df = pd.read_csv(\n",
    "    \"https://labfile.oss.aliyuncs.com/courses/2316/wine.csv\",header=None)\n",
    "df"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": "     0      1     2     3     4    5     6     7     8     9      10    11  \\\n0     1  14.23  1.71  2.43  15.6  127  2.80  3.06  0.28  2.29   5.64  1.04   \n1     1  13.20  1.78  2.14  11.2  100  2.65  2.76  0.26  1.28   4.38  1.05   \n2     1  13.16  2.36  2.67  18.6  101  2.80  3.24  0.30  2.81   5.68  1.03   \n3     1  14.37  1.95  2.50  16.8  113  3.85  3.49  0.24  2.18   7.80  0.86   \n4     1  13.24  2.59  2.87  21.0  118  2.80  2.69  0.39  1.82   4.32  1.04   \n..   ..    ...   ...   ...   ...  ...   ...   ...   ...   ...    ...   ...   \n173   3  13.71  5.65  2.45  20.5   95  1.68  0.61  0.52  1.06   7.70  0.64   \n174   3  13.40  3.91  2.48  23.0  102  1.80  0.75  0.43  1.41   7.30  0.70   \n175   3  13.27  4.28  2.26  20.0  120  1.59  0.69  0.43  1.35  10.20  0.59   \n176   3  13.17  2.59  2.37  20.0  120  1.65  0.68  0.53  1.46   9.30  0.60   \n177   3  14.13  4.10  2.74  24.5   96  2.05  0.76  0.56  1.35   9.20  0.61   \n\n       12    13  \n0    3.92  1065  \n1    3.40  1050  \n2    3.17  1185  \n3    3.45  1480  \n4    2.93   735  \n..    ...   ...  \n173  1.74   740  \n174  1.56   750  \n175  1.56   835  \n176  1.62   840  \n177  1.60   560  \n\n[178 rows x 14 columns]",
      "text/html": "<div>\n<style scoped>\n    .dataframe tbody tr th:only-of-type {\n        vertical-align: middle;\n    }\n\n    .dataframe tbody tr th {\n        vertical-align: top;\n    }\n\n    .dataframe thead th {\n        text-align: right;\n    }\n</style>\n<table border=\"1\" class=\"dataframe\">\n  <thead>\n    <tr style=\"text-align: right;\">\n      <th></th>\n      <th>0</th>\n      <th>1</th>\n      <th>2</th>\n      <th>3</th>\n      <th>4</th>\n      <th>5</th>\n      <th>6</th>\n      <th>7</th>\n      <th>8</th>\n      <th>9</th>\n      <th>10</th>\n      <th>11</th>\n      <th>12</th>\n      <th>13</th>\n    </tr>\n  </thead>\n  <tbody>\n    <tr>\n      <th>0</th>\n      <td>1</td>\n      <td>14.23</td>\n      <td>1.71</td>\n      <td>2.43</td>\n      <td>15.6</td>\n      <td>127</td>\n      <td>2.80</td>\n      <td>3.06</td>\n      <td>0.28</td>\n      <td>2.29</td>\n      <td>5.64</td>\n      <td>1.04</td>\n      <td>3.92</td>\n      <td>1065</td>\n    </tr>\n    <tr>\n      <th>1</th>\n      <td>1</td>\n      <td>13.20</td>\n      <td>1.78</td>\n      <td>2.14</td>\n      <td>11.2</td>\n      <td>100</td>\n      <td>2.65</td>\n      <td>2.76</td>\n      <td>0.26</td>\n      <td>1.28</td>\n      <td>4.38</td>\n      <td>1.05</td>\n      <td>3.40</td>\n      <td>1050</td>\n    </tr>\n    <tr>\n      <th>2</th>\n      <td>1</td>\n      <td>13.16</td>\n      <td>2.36</td>\n      <td>2.67</td>\n      <td>18.6</td>\n      <td>101</td>\n      <td>2.80</td>\n      <td>3.24</td>\n      <td>0.30</td>\n      <td>2.81</td>\n      <td>5.68</td>\n      <td>1.03</td>\n      <td>3.17</td>\n      <td>1185</td>\n    </tr>\n    <tr>\n      <th>3</th>\n      <td>1</td>\n      <td>14.37</td>\n      <td>1.95</td>\n      <td>2.50</td>\n      <td>16.8</td>\n      <td>113</td>\n      <td>3.85</td>\n      <td>3.49</td>\n      <td>0.24</td>\n      <td>2.18</td>\n      <td>7.80</td>\n      <td>0.86</td>\n      <td>3.45</td>\n      <td>1480</td>\n    </tr>\n    <tr>\n  
    <th>4</th>\n      <td>1</td>\n      <td>13.24</td>\n      <td>2.59</td>\n      <td>2.87</td>\n      <td>21.0</td>\n      <td>118</td>\n      <td>2.80</td>\n      <td>2.69</td>\n      <td>0.39</td>\n      <td>1.82</td>\n      <td>4.32</td>\n      <td>1.04</td>\n      <td>2.93</td>\n      <td>735</td>\n    </tr>\n    <tr>\n      <th>...</th>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n      <td>...</td>\n    </tr>\n    <tr>\n      <th>173</th>\n      <td>3</td>\n      <td>13.71</td>\n      <td>5.65</td>\n      <td>2.45</td>\n      <td>20.5</td>\n      <td>95</td>\n      <td>1.68</td>\n      <td>0.61</td>\n      <td>0.52</td>\n      <td>1.06</td>\n      <td>7.70</td>\n      <td>0.64</td>\n      <td>1.74</td>\n      <td>740</td>\n    </tr>\n    <tr>\n      <th>174</th>\n      <td>3</td>\n      <td>13.40</td>\n      <td>3.91</td>\n      <td>2.48</td>\n      <td>23.0</td>\n      <td>102</td>\n      <td>1.80</td>\n      <td>0.75</td>\n      <td>0.43</td>\n      <td>1.41</td>\n      <td>7.30</td>\n      <td>0.70</td>\n      <td>1.56</td>\n      <td>750</td>\n    </tr>\n    <tr>\n      <th>175</th>\n      <td>3</td>\n      <td>13.27</td>\n      <td>4.28</td>\n      <td>2.26</td>\n      <td>20.0</td>\n      <td>120</td>\n      <td>1.59</td>\n      <td>0.69</td>\n      <td>0.43</td>\n      <td>1.35</td>\n      <td>10.20</td>\n      <td>0.59</td>\n      <td>1.56</td>\n      <td>835</td>\n    </tr>\n    <tr>\n      <th>176</th>\n      <td>3</td>\n      <td>13.17</td>\n      <td>2.59</td>\n      <td>2.37</td>\n      <td>20.0</td>\n      <td>120</td>\n      <td>1.65</td>\n      <td>0.68</td>\n      <td>0.53</td>\n      <td>1.46</td>\n      <td>9.30</td>\n      <td>0.60</td>\n      <td>1.62</td>\n      <td>840</td>\n    </tr>\n    <tr>\n      <th>177</th>\n      
<td>3</td>\n      <td>14.13</td>\n      <td>4.10</td>\n      <td>2.74</td>\n      <td>24.5</td>\n      <td>96</td>\n      <td>2.05</td>\n      <td>0.76</td>\n      <td>0.56</td>\n      <td>1.35</td>\n      <td>9.20</td>\n      <td>0.61</td>\n      <td>1.60</td>\n      <td>560</td>\n    </tr>\n  </tbody>\n</table>\n<p>178 rows × 14 columns</p>\n</div>"
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "如上表所示，第一列表示该条数据属于哪一种葡萄酒（1、2、3）。后面 13 列的数据表示的是葡萄酒各化学成分的含量。这些化学成分分别为：酒精、苹果酸、灰分、灰分的碱度、镁、总酚、黄酮类化合物、非黄酮类酚、原花青素、颜色强度、色调、稀释酒的 OD280/OD315 以及脯氨酸。"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "要对这些数据进行分批，就需要先将其转为 PyTorch 认识的数据集合。我们可以建立一个类 `WineDataset` 来继承 `Dataset`。"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "如果继承了 `Dataset` 类，我们就必须实现下面三个函数："
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "- `__init__(self)`：用于初始化类中所需要的一些变量。\n",
    "- `__len__(self)`：返回数据集合的长度，即数据量大小。\n",
    "- `__getitem__(self, index)`：返回第 index 条数据。\n"
   ],
   "metadata": {}
  },
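  {
   "cell_type": "markdown",
   "source": [
    "在实现具体的葡萄酒数据集之前，可以先通过一个最小的虚构示例体会这三个函数的作用（其中的数据纯属演示用）："
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "import torch\n",
    "from torch.utils.data import Dataset\n",
    "\n",
    "\n",
    "class ToyDataset(Dataset):\n",
    "    def __init__(self):\n",
    "        # 初始化：构造 5 条样本，每条 2 个特征\n",
    "        self.x = torch.arange(10.0).reshape(5, 2)\n",
    "        self.y = torch.zeros(5, 1)\n",
    "\n",
    "    def __len__(self):\n",
    "        # 返回样本个数\n",
    "        return self.x.shape[0]\n",
    "\n",
    "    def __getitem__(self, index):\n",
    "        # 返回第 index 条数据\n",
    "        return self.x[index], self.y[index]\n",
    "\n",
    "\n",
    "toy = ToyDataset()\n",
    "print(len(toy))  # 5\n",
    "print(toy[2])    # (tensor([4., 5.]), tensor([0.]))"
   ],
   "outputs": [],
   "metadata": {}
  },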
  {
   "cell_type": "code",
   "execution_count": 14,
   "source": [
    "import torch\n",
    "from torch.utils.data import Dataset\n",
    "\n",
    "\n",
    "class WineDataset(Dataset):\n",
    "    # 建立一个数据集合，继承 Dataset 即可\n",
    "    def __init__(self):\n",
    "        # 初始化数据\n",
    "        # 以 pandas 的形式读入数据\n",
    "        xy = pd.read_csv(\n",
    "            \"https://labfile.oss.aliyuncs.com/courses/2316/wine.csv\", header=None)\n",
    "        self.n_samples = xy.shape[0]\n",
    "\n",
    "        # 将 pandas 类型的数据转换成 numpy 类型\n",
    "        # size [n_samples, n_features]\n",
    "        self.x_data = torch.from_numpy(xy.values[:, 1:])\n",
    "        self.y_data = torch.from_numpy(\n",
    "            xy.values[:, [0]])  # size [n_samples, 1]\n",
    "\n",
    "    # 返回 dataset[index]\n",
    "    def __getitem__(self, index):\n",
    "        return self.x_data[index], self.y_data[index]\n",
    "\n",
    "    # 返回数据长度\n",
    "    def __len__(self):\n",
    "        return self.n_samples\n",
    "\n",
    "\n",
    "# 测试\n",
    "# 创造 dataset\n",
    "dataset = WineDataset()\n",
    "dataset[0]\n",
    "print(dataset[0][0].shape)\n",
    "print(dataset[0][1].shape)"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([13])\n",
      "torch.Size([1])\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "source": [
    "# 通过继承 Dataset 来自定义数据集\n",
    "from torch.utils.data import Dataset\n",
    "\n",
    "class MyWineDataset(Dataset):\n",
    "    def __init__(self):\n",
    "        super(MyWineDataset,self).__init__()\n",
    "        # 初始化数据\n",
    "        data_df = pd.read_csv(\"https://labfile.oss.aliyuncs.com/courses/2316/wine.csv\", header=None)\n",
    "        data_np = data_df.to_numpy()\n",
    "        # 第一列是标签，其他列是训练数据,应该是对第1维度，也就是列进行切片，而且还要把标签还原成二维\n",
    "        self.data_x = torch.from_numpy(data_np[:,1:])\n",
    "        self.data_y = torch.from_numpy(data_np[:,0]).reshape(-1,1)\n",
    "    # 返回第 index 条数据\n",
    "    def __getitem__(self,index):\n",
    "        return self.data_x[index],self.data_y[index]\n",
    "    # 返回数据长度,也就是样本的个数\n",
    "    def __len__(self):\n",
    "        return self.data_x.shape[0]\n",
    "\n",
    "dataset = MyWineDataset()\n",
    "dataset[0]\n",
    "print(dataset[0][0].shape)\n",
    "print(dataset[0][1].shape)"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([13])\n",
      "torch.Size([1])\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "至此，我们将葡萄酒数据也装上了一个“外壳”，使 PyTorch 能够识别出该数据集合。接下来，我们只需要利用 `DataLoader` 加载该数据集合即可："
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "source": [
    "import math\n",
    "from torch.utils.data import DataLoader\n",
    "\n",
    "# 传入加载器\n",
    "train_loader = DataLoader(dataset=dataset, batch_size=4, shuffle=True)\n",
    "# 分批训练\n",
    "# 迭代次数\n",
    "num_epochs = 2\n",
    "total_samples = len(dataset)\n",
    "# 批次\n",
    "n_iterations = math.ceil(total_samples/4)\n",
    "print(\"该数据集合共有{}条数据，被分成了{}个批次\".format(total_samples, n_iterations))\n",
    "for epoch in range(num_epochs):\n",
    "    for i, (inputs, labels) in enumerate(train_loader):\n",
    "\n",
    "        # 178 个样本, batch_size = 4, n_iters=178/4=44.5 -> 45 个批次\n",
    "        if (i+1) % 5 == 0:\n",
    "            print(\n",
    "                f'Epoch: {epoch+1}/{num_epochs}, Step {i+1}/{n_iterations}| Inputs {inputs.shape} | Labels {labels.shape}')"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "该数据集合共有178条数据，被分成了45个批次\n",
      "Epoch: 1/2, Step 5/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 1/2, Step 10/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 1/2, Step 15/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 1/2, Step 20/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 1/2, Step 25/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 1/2, Step 30/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 1/2, Step 35/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 1/2, Step 40/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 1/2, Step 45/45| Inputs torch.Size([2, 13]) | Labels torch.Size([2, 1])\n",
      "Epoch: 2/2, Step 5/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 2/2, Step 10/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 2/2, Step 15/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 2/2, Step 20/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 2/2, Step 25/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 2/2, Step 30/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 2/2, Step 35/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 2/2, Step 40/45| Inputs torch.Size([4, 13]) | Labels torch.Size([4, 1])\n",
      "Epoch: 2/2, Step 45/45| Inputs torch.Size([2, 13]) | Labels torch.Size([2, 1])\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "从结果可以看出，数据集按每批 batch_size（4）条被分成了 45 个批次，并对这些批次进行了 2 轮迭代。由于数据总量不是 batch_size 的整数倍，最后一个批次的数据量为 total_samples 除以 batch_size 的余数（178 % 4 = 2 条）。"
   ],
   "metadata": {}
  },
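  {
   "cell_type": "markdown",
   "source": [
    "如果希望每个批次的数据量严格一致，可以在构造 `DataLoader` 时传入 `drop_last=True`，让加载器直接丢弃最后不足 batch_size 的批次。下面用一组虚构的小数据演示这一参数的效果："
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "import torch\n",
    "from torch.utils.data import TensorDataset, DataLoader\n",
    "\n",
    "# 虚构的演示数据：10 条样本，每条 3 个特征\n",
    "x = torch.randn(10, 3)\n",
    "y = torch.randint(0, 2, (10, 1))\n",
    "demo_dataset = TensorDataset(x, y)\n",
    "\n",
    "# drop_last=True 会丢弃最后不足 batch_size 的批次\n",
    "demo_loader = DataLoader(demo_dataset, batch_size=4,\n",
    "                         shuffle=True, drop_last=True)\n",
    "batch_sizes = [inputs.shape[0] for inputs, labels in demo_loader]\n",
    "print(batch_sizes)  # [4, 4]：10 // 4 = 2 个完整批次，余下 2 条被丢弃"
   ],
   "outputs": [],
   "metadata": {}
  },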
  {
   "cell_type": "markdown",
   "source": [
    "### 实验总结"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "本实验详细地阐述了数据分批的重要性，并以手写字符数据集和葡萄酒数据集为例，讲解了如何将一个数据集合转成 PyTorch 能够使用的数据集。"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "<hr><div style=\"color: #999; font-size: 12px;\"><i class=\"fa fa-copyright\" aria-hidden=\"true\"> 本课程内容版权归蓝桥云课所有，禁止转载、下载及非法传播。</i></div>"
   ],
   "metadata": {}
  }
 ],
 "metadata": {
  "kernelspec": {
   "name": "pycharm-9ef32c86",
   "language": "python",
   "display_name": "PyCharm (pytorch-learning)"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.0"
  },
  "interpreter": {
   "hash": "95edf26445b41d81dc60008cc593bb3c243ca80a3a822915e2b6f7013280bc10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}