{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Lab Title: Loading and Preprocessing the Diabetes Dataset with PyTorch\n",
    "\n",
    "#### Objectives:\n",
    "\n",
    "1. Master the use of `Dataset` and `DataLoader` in PyTorch.\n",
    "2. Learn how to preprocess data and load it in batches with PyTorch.\n",
    "3. Understand and practice the application of data augmentation and normalization in image processing.\n",
    "\n",
    "#### Environment:\n",
    "\n",
    "- Python 3.11.4\n",
    "- PyTorch 2.4.0\n",
    "- VS Code 1.93.1\n",
    "- NumPy 1.26.4\n",
    "- torchvision 0.19.0\n",
    "\n",
    "#### Procedure:\n",
    "\n",
    "1. **Theory**:\n",
    "\n",
    "   - Read about and understand how `Dataset` and `DataLoader` work in PyTorch.\n",
    "   - Learn why data preprocessing matters and its basic methods.\n",
    "2. **Environment setup**:\n",
    "\n",
    "   - Install and configure a Python environment.\n",
    "   - Install the PyTorch, NumPy, and torchvision libraries.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3. **Data loading and preprocessing**:\n",
    "\n",
    "   - Load the diabetes dataset with PyTorch's `DataLoader` and a custom `Dataset` class.\n",
    "   - Implement preprocessing, including normalization and, where appropriate, data augmentation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([759, 8]) torch.Size([759, 1])\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "from torch.utils.data import Dataset, DataLoader, TensorDataset\n",
    "from torch.autograd import Variable  # note: Variable has been deprecated since PyTorch 0.4\n",
    "import numpy as np\n",
    "\n",
    "xy = np.loadtxt(r\"E:\\diabetes.csv.gz\", delimiter=',', dtype=np.float32) # read the data with NumPy; the raw string keeps the backslash in the path literal\n",
    "x_data = torch.from_numpy(xy[:, 0:-1])\n",
    "y_data = torch.from_numpy(xy[:, [-1]])\n",
    "\n",
    "print(x_data.shape, y_data.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 0 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 1 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 2 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 3 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 4 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 5 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 6 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 7 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 8 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 9 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 10 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 11 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 12 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 13 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 14 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 15 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 16 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 17 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 18 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 19 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 20 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 21 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 22 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "0 23 inputs torch.Size([23, 8]) labels torch.Size([23, 1])\n",
      "1 0 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 1 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 2 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 3 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 4 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 5 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 6 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 7 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 8 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 9 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 10 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 11 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 12 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 13 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 14 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 15 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 16 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 17 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 18 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 19 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 20 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 21 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 22 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "1 23 inputs torch.Size([23, 8]) labels torch.Size([23, 1])\n"
     ]
    }
   ],
   "source": [
    "deal_dataset = TensorDataset(x_data, y_data)\n",
    "\n",
    "train_loader = DataLoader(dataset=deal_dataset,\n",
    "                          batch_size=32,\n",
    "                          shuffle=True,\n",
    "                          num_workers=2)\n",
    "for epoch in range(2):\n",
    "    for i, data in enumerate(train_loader):\n",
    "        # Read one mini-batch (32 samples) from train_loader\n",
    "        inputs, labels = data\n",
    "\n",
    "        # (torch.autograd.Variable has been deprecated since PyTorch 0.4;\n",
    "        # tensors can be fed to a model directly, so no wrapping is needed)\n",
    "\n",
    "        # This is where the model would run; we use print as a placeholder here\n",
    "        print(epoch, i, \"inputs\", inputs.size(), \"labels\", labels.size())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "4. **Writing a custom Dataset class**:\n",
    "\n",
    "   - Based on the code provided, write a custom `Dataset` class to handle the diabetes dataset.\n",
    "   - Implement the `__getitem__` and `__len__` methods."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 0 batch 0 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 1 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 2 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 3 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 4 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 5 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 6 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 7 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 8 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 9 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 10 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 11 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 12 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 13 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 14 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 15 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 16 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 17 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 18 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 19 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 20 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 21 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 22 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 0 batch 23 inputs torch.Size([23, 8]) labels torch.Size([23, 1])\n",
      "epoch 1 batch 0 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 1 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 2 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 3 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 4 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 5 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 6 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 7 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 8 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 9 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 10 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 11 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 12 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 13 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 14 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 15 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 16 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 17 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 18 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 19 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 20 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 21 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 22 inputs torch.Size([32, 8]) labels torch.Size([32, 1])\n",
      "epoch 1 batch 23 inputs torch.Size([23, 8]) labels torch.Size([23, 1])\n"
     ]
    }
   ],
   "source": [
    "class DealDataset(Dataset):\n",
    "    \"\"\"\n",
    "        Downloading and initializing the data can both be done here.\n",
    "    \"\"\"\n",
    "    def __init__(self):\n",
    "        xy = np.loadtxt(r\"E:\\diabetes.csv.gz\", delimiter=',', dtype=np.float32) # read the data with NumPy; the raw string keeps the backslash in the path literal\n",
    "        self.x_data = torch.from_numpy(xy[:, 0:-1])\n",
    "        self.y_data = torch.from_numpy(xy[:, [-1]])\n",
    "        self.len = xy.shape[0]\n",
    "    \n",
    "    def __getitem__(self, index):\n",
    "        return self.x_data[index], self.y_data[index]\n",
    "\n",
    "    def __len__(self):\n",
    "        return self.len\n",
    "\n",
    "# Instantiate the class to get a Dataset object; next, pass it to DataLoader and we are done.\n",
    "dealDataset = DealDataset()\n",
    "\n",
    "train_loader2 = DataLoader(dataset=dealDataset,\n",
    "                           batch_size=32,\n",
    "                           shuffle=True)\n",
    "\n",
    "\n",
    "for epoch in range(2):\n",
    "    for i, data in enumerate(train_loader2):\n",
    "        # Read one mini-batch (32 samples) from train_loader2\n",
    "        inputs, labels = data\n",
    "\n",
    "        # (torch.autograd.Variable has been deprecated since PyTorch 0.4;\n",
    "        # tensors can be fed to a model directly, so no wrapping is needed)\n",
    "\n",
    "        # This is where the model would run; we use print as a placeholder here\n",
    "        print(\"epoch\", epoch, \"batch\", i, \"inputs\", inputs.size(), \"labels\", labels.size())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "5. **Image data augmentation**:\n",
    "\n",
    "   - Preprocess and augment the MNIST dataset with torchvision's transforms.\n",
    "   - This includes resizing images, converting them to tensors, normalization, and other operations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\n",
      "Using downloaded and verified file: ./data\\MNIST\\raw\\train-images-idx3-ubyte.gz\n",
      "Extracting ./data\\MNIST\\raw\\train-images-idx3-ubyte.gz to ./data\\MNIST\\raw\n",
      "\n",
      "Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz\n",
      "Using downloaded and verified file: ./data\\MNIST\\raw\\train-labels-idx1-ubyte.gz\n",
      "Extracting ./data\\MNIST\\raw\\train-labels-idx1-ubyte.gz to ./data\\MNIST\\raw\n",
      "\n",
      "Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz\n",
      "Failed to download (trying next):\n",
      "HTTP Error 403: Forbidden\n",
      "\n",
      "Downloading https://ossci-datasets.s3.amazonaws.com/mnist/t10k-images-idx3-ubyte.gz\n",
      "Downloading https://ossci-datasets.s3.amazonaws.com/mnist/t10k-images-idx3-ubyte.gz to ./data\\MNIST\\raw\\t10k-images-idx3-ubyte.gz\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 1648877/1648877 [00:02<00:00, 585998.19it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting ./data\\MNIST\\raw\\t10k-images-idx3-ubyte.gz to ./data\\MNIST\\raw\n",
      "\n",
      "Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz\n",
      "Failed to download (trying next):\n",
      "HTTP Error 403: Forbidden\n",
      "\n",
      "Downloading https://ossci-datasets.s3.amazonaws.com/mnist/t10k-labels-idx1-ubyte.gz\n",
      "Downloading https://ossci-datasets.s3.amazonaws.com/mnist/t10k-labels-idx1-ubyte.gz to ./data\\MNIST\\raw\\t10k-labels-idx1-ubyte.gz\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 4542/4542 [00:00<00:00, 4230630.42it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting ./data\\MNIST\\raw\\t10k-labels-idx1-ubyte.gz to ./data\\MNIST\\raw\n",
      "\n",
      "Training set length: 60000\n",
      "Each mini-batch has size 32; there are 1875 batches\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "import torchvision\n",
    "\n",
    "DOWNLOAD = True\n",
    "\n",
    "trainset = torchvision.datasets.MNIST(root='./data', # relative directory the MNIST data is loaded into\n",
    "                                      train=True,  # load the training set; False loads the test set\n",
    "                                      download=DOWNLOAD, # automatically download MNIST if it is missing\n",
    "                                      transform=None) # preprocessing to apply; None means none, so samples stay PIL images\n",
    "\n",
    "# The code above completes the loading of the MNIST training set.\n",
    "# (Note: with transform=None, iterating this loader would fail because PIL images\n",
    "# cannot be collated into batches; here we only take lengths.)\n",
    "train_loader2 = DataLoader(dataset=trainset,\n",
    "                          batch_size=32,\n",
    "                          shuffle=True)\n",
    "print(\"Training set length:\", len(trainset))\n",
    "print(\"Each mini-batch has size 32; there are\", len(train_loader2), \"batches\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "60000\n",
      "1875\n"
     ]
    }
   ],
   "source": [
    "# Process MNIST again: the original images are 28 * 28, and we turn them into 96 * 96 torch.Tensor format\n",
    "from torchvision import transforms as transforms\n",
    "import torchvision\n",
    "from torch.utils.data import DataLoader\n",
    "\n",
    "# Image preprocessing steps\n",
    "transform = transforms.Compose([\n",
    "    transforms.Resize(96), # resize to 96 * 96\n",
    "    transforms.ToTensor(),\n",
    "    transforms.Normalize((0.5,), (0.5,)) # normalize; MNIST is single-channel, so one mean and one std suffice\n",
    "])\n",
    "\n",
    "DOWNLOAD = True\n",
    "BATCH_SIZE = 32\n",
    "\n",
    "train_dataset = torchvision.datasets.MNIST(root='./data/', train=True, transform=transform, download=DOWNLOAD)\n",
    "\n",
    "\n",
    "train_loader = DataLoader(dataset=train_dataset,\n",
    "                          batch_size=BATCH_SIZE,\n",
    "                          shuffle=True)\n",
    "\n",
    "print(len(train_dataset))\n",
    "print(len(train_loader))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "6. **Model training**:\n",
    "\n",
    "   - Train a simple model (such as an MLP) with the preprocessed data.\n",
    "   - Record the loss and accuracy during training."
   ]
  },
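  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The MLP training described above can be sketched as follows. This is a minimal illustration, not the lab's reference solution: the random `x` / `y` tensors stand in for the diabetes data loaded earlier (same shapes), and the layer sizes and learning rate are arbitrary choices.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "from torch.utils.data import TensorDataset, DataLoader\n",
    "\n",
    "# Random stand-in for the diabetes tensors (same shapes as x_data / y_data above)\n",
    "x = torch.rand(759, 8)\n",
    "y = torch.randint(0, 2, (759, 1)).float()\n",
    "loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)\n",
    "\n",
    "# A small MLP: 8 features -> 16 hidden units -> 1 sigmoid output\n",
    "model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())\n",
    "criterion = nn.BCELoss()\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n",
    "\n",
    "for epoch in range(2):\n",
    "    correct = 0\n",
    "    for inputs, labels in loader:\n",
    "        optimizer.zero_grad()\n",
    "        outputs = model(inputs)\n",
    "        loss = criterion(outputs, labels)\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        correct += ((outputs > 0.5).float() == labels).sum().item()\n",
    "    # Record the last batch's loss and the epoch's accuracy\n",
    "    print('epoch', epoch, 'loss %.4f' % loss.item(), 'acc %.3f' % (correct / len(loader.dataset)))\n",
    "```"
   ]
  },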
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ResNet(\n",
      "  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)\n",
      "  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "  (relu): ReLU(inplace=True)\n",
      "  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)\n",
      "  (layer1): Sequential(\n",
      "    (0): BasicBlock(\n",
      "      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "    (1): BasicBlock(\n",
      "      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "  )\n",
      "  (layer2): Sequential(\n",
      "    (0): BasicBlock(\n",
      "      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (downsample): Sequential(\n",
      "        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
      "        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      )\n",
      "    )\n",
      "    (1): BasicBlock(\n",
      "      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "  )\n",
      "  (layer3): Sequential(\n",
      "    (0): BasicBlock(\n",
      "      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (downsample): Sequential(\n",
      "        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
      "        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      )\n",
      "    )\n",
      "    (1): BasicBlock(\n",
      "      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "  )\n",
      "  (layer4): Sequential(\n",
      "    (0): BasicBlock(\n",
      "      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (downsample): Sequential(\n",
      "        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
      "        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      )\n",
      "    )\n",
      "    (1): BasicBlock(\n",
      "      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "  )\n",
      "  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))\n",
      "  (fc): Linear(in_features=512, out_features=1000, bias=True)\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "import torchvision.models as models\n",
    "# Load a resnet18 model (randomly initialized weights)\n",
    "resnet18 = models.resnet18()\n",
    "print(resnet18)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Downloading: \"https://download.pytorch.org/models/resnet18-f37072fd.pth\" to C:\\Users\\陈炳煜/.cache\\torch\\hub\\checkpoints\\resnet18-f37072fd.pth\n",
      "100%|██████████| 44.7M/44.7M [02:57<00:00, 264kB/s]\n"
     ]
    }
   ],
   "source": [
    "import torchvision.models as models\n",
    "# Load a pretrained model (the download can take a while).\n",
    "# Since torchvision 0.13 the 'pretrained' flag is deprecated; use the weights enum instead.\n",
    "resnet18 = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "7. **Analysis of results**:\n",
    "\n",
    "- The effect of different preprocessing choices on model performance shows up mainly in the following ways:\n",
    "\n",
    "   - Accuracy: preprocessing removes the influence of noise and erroneous data, improving the model's accuracy and generalization.\n",
    "\n",
    "   - Convergence speed: data standardization and normalization speed up convergence and make training more efficient.\n",
    "\n",
    "   - Generalization: data augmentation and feature engineering strengthen the model's ability to generalize, improving accuracy and robustness.\n",
    "\n",
    "   - Feature dimensionality: feature engineering can enrich the feature space available to the model.\n",
    "\n",
    "   - Model complexity: good preprocessing can reduce the model complexity required, improving training efficiency and generalization.\n",
    "- Data augmentation has a major impact on generalization: by increasing data diversity, it makes the model more robust and reduces overfitting. In practice, experimentation and tuning are needed to find the augmentation scheme that best improves a model's generalization."
   ]
  }
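,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a concrete illustration of the augmentation discussed above, a typical torchvision pipeline for single-channel MNIST images might look like the sketch below. The particular transforms and the mean/std values (0.1307 / 0.3081 are the commonly quoted MNIST statistics) are illustrative choices, not part of this experiment's recorded runs.\n",
    "\n",
    "```python\n",
    "from torchvision import transforms\n",
    "\n",
    "augment = transforms.Compose([\n",
    "    transforms.RandomRotation(10),         # rotate randomly within +/- 10 degrees\n",
    "    transforms.RandomCrop(28, padding=2),  # random shift via a padded crop\n",
    "    transforms.ToTensor(),                 # PIL image -> float tensor in [0, 1]\n",
    "    transforms.Normalize((0.1307,), (0.3081,))  # single-channel mean / std\n",
    "])\n",
    "```\n",
    "\n",
    "Passing this as `transform=` to `torchvision.datasets.MNIST` applies a freshly randomized version of each image every epoch, which increases data diversity without storing extra samples."
   ]
  }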
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
