{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Problem 4: Facial Expression Recognition with a Convolutional Neural Network  \n",
    "Given the dataset train.csv, use a convolutional neural network (CNN) to classify each sample's facial image by expression. Expressions fall into 7 classes: (0) angry, (1) disgust, (2) fear, (3) happy, (4) sad, (5) surprise, and (6) neutral (i.e., expressionless, not belonging to the first six classes). This project is therefore a 7-class classification problem.\n",
    "The CSV file has 28710 rows × 2305 columns. The first row is the header (the words “label” and “feature”); each of the remaining rows holds one sample, for 28709 samples in total. Of the 2305 columns, the first is the sample's label, an integer from 0 to 6; the remaining 2304 columns hold the pixel values of a 48×48 face image (2304 = 48×48), each in the range 0 to 255.\n",
    "Dataset link: https://pan.baidu.com/s/1hwrq5Abx8NOUse3oew3BXg , access code: ukf7 \n",
    "Task: train a CNN on the given dataset and evaluate the model with suitable metrics. Related datasets: CK+, FER2013, RaFD, ImageNet\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Data Preprocessing  \n",
    "We use the icml dataset, which contains 34034 samples: 80% training data, 10% public-test data, and 10% private-test data. We use the training and private-test portions as our training set and the public-test portion as our test set. We first read the CSV file and split the data, writing each image into the data folder with cv2, divided into training and test sets. Each sample's label is encoded in its file name, following the pattern “index_label.jpg”; after processing, everything is placed under the DL_2022_4 folder."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Library Imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import cv2\n",
    "import os\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torchvision import datasets\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "from torch import nn\n",
    "from torch.utils.data import DataLoader\n",
    "from torchvision import transforms\n",
    "from torch.utils.data import Dataset\n",
    "import torchvision\n",
    "from PIL import Image"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Data Extraction"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Extract the training data\n",
    "rowDataPath = \"./DL_2022_4/icml_face_data.csv\"\n",
    "df = pd.read_csv(rowDataPath)\n",
    "# Note: the column names in the original file have leading spaces...\n",
    "df_label = df[['emotion']]\n",
    "df_usage = df[['usage']]\n",
    "df_data = df[['pixels']]\n",
    "# Save the images to disk\n",
    "base_path = \"./DL_2022_4/data/\"\n",
    "# Make sure the output directories exist before writing\n",
    "for sub_dir in ('train', 'test', 'error'):\n",
    "    os.makedirs(base_path + sub_dir, exist_ok=True)\n",
    "img_data = np.array(df_data)\n",
    "img_usage = np.array(df_usage)\n",
    "img_label = np.array(df_label)\n",
    "for i in range(img_data.shape[0]):\n",
    "    temp_data = img_data[i]\n",
    "    temp_data = list(map(int, temp_data[0].split(' ')))\n",
    "    temp_data = np.array(temp_data)\n",
    "    face = temp_data.reshape(48, 48)\n",
    "    temp_label = img_label[i][0]\n",
    "    face_usage = img_usage[i].item()\n",
    "    if face_usage == 'Training' or face_usage == 'PrivateTest':\n",
    "        face_path = base_path + 'train'\n",
    "    elif face_usage == 'PublicTest':\n",
    "        face_path = base_path + 'test'\n",
    "    else:\n",
    "        face_path = base_path + 'error'\n",
    "    cv2.imwrite(f'{face_path}/{i}_{temp_label}.jpg', face)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Dataset Wrapping and Loading   \n",
    "#### Wrapping with Dataset\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "class mydataset(Dataset):\n",
    "    def __init__(self, data_path, trans):\n",
    "        # Store dataset-level information here\n",
    "        self.data_path = data_path\n",
    "        self.data_list = os.listdir(self.data_path)\n",
    "        self.trans = trans\n",
    "\n",
    "    def __getitem__(self, index):\n",
    "        # Fetch a single sample: load the image, parse its label, return both\n",
    "        img_name = self.data_list[index]\n",
    "        img_path = os.path.join(self.data_path, img_name)\n",
    "        img = Image.open(img_path)\n",
    "        label_idx = img_name.find('_')\n",
    "        img_label = int(img_name[label_idx + 1])\n",
    "        # Must return a tensor image and an int label; a string label will not work\n",
    "        img = self.trans(img)\n",
    "        return img, img_label\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.data_list)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Random Training-Time Transforms"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "tensor_trans = transforms.ToTensor()\n",
    "Hflp_trans = transforms.RandomHorizontalFlip(p=0.5)\n",
    "RandomR_trans = transforms.RandomRotation(degrees=15)\n",
    "train_trains=transforms.Compose([tensor_trans,Hflp_trans,RandomR_trans])"
   ]
  },
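  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, a Normalize step can be appended after ToTensor so pixel values are standardized before entering the network. This is only a sketch: the mean/std below are placeholder values, not statistics computed from this dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Standardize the single-channel images; 0.5/0.5 are placeholder values,\n",
    "# not the true dataset mean/std\n",
    "norm_trans = transforms.Normalize(mean=[0.5], std=[0.5])\n",
    "train_trans_norm = transforms.Compose([tensor_trans, Hflp_trans, RandomR_trans, norm_trans])"
   ]
  },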
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Instantiating the Datasets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Training set size: 36671\n",
       "Test set size: 3589\n"
     ]
    }
   ],
   "source": [
    "traindataset_path = \"./DL_2022_4/data/train/\"\n",
    "testdataset_path = \"./DL_2022_4/data/test/\"\n",
    "\n",
    "train_dataset = mydataset(data_path = traindataset_path,trans=train_trains)\n",
    "test_dataset = mydataset(data_path = testdataset_path,trans=tensor_trans)\n",
    "train_data_len = len(train_dataset)\n",
    "test_data_len = len(test_dataset)\n",
    "print(f'Training set size: {train_data_len}')\n",
    "print(f'Test set size: {test_data_len}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 152,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[[0., 0., 0.,  ..., 0., 0., 0.],\n",
       "          [0., 0., 0.,  ..., 0., 0., 0.],\n",
       "          [0., 0., 0.,  ..., 0., 0., 0.],\n",
       "          ...,\n",
       "          [0., 0., 0.,  ..., 0., 0., 0.],\n",
       "          [0., 0., 0.,  ..., 0., 0., 0.],\n",
       "          [0., 0., 0.,  ..., 0., 0., 0.]]]),\n",
       " 0)"
      ]
     },
     "execution_count": 152,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_dataset[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Wrapping with DataLoader"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# DataLoader\n",
    "train_dataloader = DataLoader(dataset=train_dataset, batch_size=64, shuffle=True)\n",
    "# The test set does not need shuffling\n",
    "test_dataloader = DataLoader(dataset=test_dataset, batch_size=64, shuffle=False)"
   ]
  },
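  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "FER-style expression data is typically class-imbalanced, which matters when choosing evaluation metrics. A quick sketch that counts the samples per class from the “index_label.jpg” file names:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "def label_from_name(img_name):\n",
    "    # File names follow the pattern 'index_label.jpg'\n",
    "    return int(img_name.split('_')[1].split('.')[0])\n",
    "\n",
    "label_counts = Counter(label_from_name(name) for name in os.listdir(traindataset_path))\n",
    "print(sorted(label_counts.items()))"
   ]
  },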
  {
   "cell_type": "code",
   "execution_count": 104,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([64, 1, 48, 48])\n",
      "64\n"
     ]
    }
   ],
   "source": [
    "for data in train_dataloader:\n",
    "    imgs,targets = data\n",
    "    print(imgs.shape)\n",
    "    print(targets.shape[0])\n",
    "    break"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Building the Network Architectures  \n",
    "#### A small hand-built network"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [],
   "source": [
    "class netWork(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(netWork, self).__init__()\n",
    "        self.model1 = nn.Sequential(\n",
    "            nn.Conv2d(in_channels=1, out_channels=64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),\n",
    "            nn.ReLU(inplace=True),\n",
    "            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),\n",
    "            nn.ReLU(inplace=True),\n",
    "            nn.MaxPool2d(kernel_size=(2, 2), stride=2, padding=0),\n",
    "            nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),\n",
    "            nn.ReLU(inplace=True),\n",
    "            nn.Conv2d(in_channels=128, out_channels=128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),\n",
    "            nn.ReLU(inplace=True),\n",
    "            nn.MaxPool2d(kernel_size=(2, 2), stride=2, padding=0),\n",
    "            nn.Flatten(),\n",
    "            nn.Linear(in_features=12*12*128, out_features=100),\n",
    "            nn.Linear(in_features=100, out_features=7),\n",
    "        )\n",
    "\n",
    "    def forward(self, x):\n",
    "        output = self.model1(x)\n",
    "        return output"
   ]
  },
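  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of the small network's output shape, using a dummy 48×48 single-channel input (two 2×2 max-pools reduce 48×48 to 12×12, matching the 12*12*128 Flatten size):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "net = netWork()\n",
    "dummy = torch.zeros(1, 1, 48, 48)\n",
    "print(net(dummy).shape)  # expected: torch.Size([1, 7])"
   ]
  },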
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Modified VGG16"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 95,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "VGG(\n",
      "  (features): Sequential(\n",
      "    (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (1): ReLU(inplace=True)\n",
      "    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (3): ReLU(inplace=True)\n",
      "    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
      "    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (6): ReLU(inplace=True)\n",
      "    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (8): ReLU(inplace=True)\n",
      "    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
      "    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (11): ReLU(inplace=True)\n",
      "    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (13): ReLU(inplace=True)\n",
      "    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (15): ReLU(inplace=True)\n",
      "    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
      "    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (18): ReLU(inplace=True)\n",
      "    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (20): ReLU(inplace=True)\n",
      "    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (22): ReLU(inplace=True)\n",
      "    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
      "    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (25): ReLU(inplace=True)\n",
      "    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (27): ReLU(inplace=True)\n",
      "    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (29): ReLU(inplace=True)\n",
      "    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
      "  )\n",
      "  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))\n",
      "  (classifier): Sequential(\n",
      "    (0): Linear(in_features=25088, out_features=4096, bias=True)\n",
      "    (1): ReLU(inplace=True)\n",
      "    (2): Dropout(p=0.5, inplace=False)\n",
      "    (3): Linear(in_features=4096, out_features=4096, bias=True)\n",
      "    (4): ReLU(inplace=True)\n",
      "    (5): Dropout(p=0.5, inplace=False)\n",
      "    (6): Linear(in_features=4096, out_features=1000, bias=True)\n",
      "  )\n",
      "  (add_linear1): Linear(in_features=1000, out_features=100, bias=True)\n",
      "  (add_linear2): Linear(in_features=100, out_features=7, bias=True)\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "network = torchvision.models.vgg16()\n",
    "# print(network)\n",
    "network.features[0] = nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
    "# Append the extra layers inside the classifier Sequential so they are\n",
    "# actually executed in forward(); modules added to the top-level VGG via\n",
    "# add_module are registered but never called\n",
    "network.classifier.add_module('add_linear1', nn.Linear(in_features=1000, out_features=100))\n",
    "network.classifier.add_module('add_linear2', nn.Linear(in_features=100, out_features=7))\n",
    "print(network)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Modified VGG16 with BatchNorm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "VGG(\n",
      "  (features): Sequential(\n",
      "    (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (2): ReLU(inplace=True)\n",
      "    (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (5): ReLU(inplace=True)\n",
      "    (6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
      "    (7): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (8): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (9): ReLU(inplace=True)\n",
      "    (10): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (11): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (12): ReLU(inplace=True)\n",
      "    (13): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
      "    (14): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (15): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (16): ReLU(inplace=True)\n",
      "    (17): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (18): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (19): ReLU(inplace=True)\n",
      "    (20): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (21): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (22): ReLU(inplace=True)\n",
      "    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
      "    (24): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (25): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (26): ReLU(inplace=True)\n",
      "    (27): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (28): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (29): ReLU(inplace=True)\n",
      "    (30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (31): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (32): ReLU(inplace=True)\n",
      "    (33): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
      "    (34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (35): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (36): ReLU(inplace=True)\n",
      "    (37): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (38): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (39): ReLU(inplace=True)\n",
      "    (40): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
      "    (41): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    (42): ReLU(inplace=True)\n",
      "    (43): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
      "  )\n",
      "  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))\n",
      "  (classifier): Sequential(\n",
      "    (0): Linear(in_features=25088, out_features=4096, bias=True)\n",
      "    (1): ReLU(inplace=True)\n",
      "    (2): Dropout(p=0.5, inplace=False)\n",
      "    (3): Linear(in_features=4096, out_features=4096, bias=True)\n",
      "    (4): ReLU(inplace=True)\n",
      "    (5): Dropout(p=0.5, inplace=False)\n",
      "    (6): Linear(in_features=4096, out_features=1000, bias=True)\n",
      "  )\n",
      "  (add_linear1): Linear(in_features=1000, out_features=7, bias=True)\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "network = torchvision.models.vgg16_bn()\n",
    "network.features[0] = nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
    "# Append the extra layer inside the classifier Sequential so it is actually\n",
    "# executed in forward(); a module added at the top level via add_module is\n",
    "# registered but never called\n",
    "network.classifier.add_module('add_linear1', nn.Linear(in_features=1000, out_features=7))\n",
    "print(network)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Modified ResNet18"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ResNet(\n",
      "  (conv1): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)\n",
      "  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "  (relu): ReLU(inplace=True)\n",
      "  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)\n",
      "  (layer1): Sequential(\n",
      "    (0): BasicBlock(\n",
      "      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "    (1): BasicBlock(\n",
      "      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "  )\n",
      "  (layer2): Sequential(\n",
      "    (0): BasicBlock(\n",
      "      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (downsample): Sequential(\n",
      "        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
      "        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      )\n",
      "    )\n",
      "    (1): BasicBlock(\n",
      "      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "  )\n",
      "  (layer3): Sequential(\n",
      "    (0): BasicBlock(\n",
      "      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (downsample): Sequential(\n",
      "        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
      "        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      )\n",
      "    )\n",
      "    (1): BasicBlock(\n",
      "      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "  )\n",
      "  (layer4): Sequential(\n",
      "    (0): BasicBlock(\n",
      "      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (downsample): Sequential(\n",
      "        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
      "        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      )\n",
      "    )\n",
      "    (1): BasicBlock(\n",
      "      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "  )\n",
      "  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))\n",
      "  (fc): Linear(in_features=512, out_features=100, bias=True)\n",
      "  (add_linear1): Linear(in_features=100, out_features=7, bias=True)\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "network = torchvision.models.resnet18()\n",
    "# print(network)\n",
    "network.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)\n",
    "# Replace fc with a two-layer head; a layer attached via add_module on the\n",
    "# top-level ResNet would be registered but never called in forward()\n",
    "network.fc = nn.Sequential(\n",
    "    nn.Linear(in_features=512, out_features=100, bias=True),\n",
    "    nn.Linear(in_features=100, out_features=7, bias=True),\n",
    ")\n",
    "print(network)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Modified ResNet34"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ResNet(\n",
      "  (conv1): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)\n",
      "  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "  (relu): ReLU(inplace=True)\n",
      "  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)\n",
      "  (layer1): Sequential(\n",
      "    (0): BasicBlock(\n",
      "      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "    (1): BasicBlock(\n",
      "      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "    (2): BasicBlock(\n",
      "      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "  )\n",
      "  (layer2): Sequential(\n",
      "    (0): BasicBlock(\n",
      "      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (downsample): Sequential(\n",
      "        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
      "        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      )\n",
      "    )\n",
      "    (1): BasicBlock(\n",
      "      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "    (2): BasicBlock(\n",
      "      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "    (3): BasicBlock(\n",
      "      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "  )\n",
      "  (layer3): Sequential(\n",
      "    (0): BasicBlock(\n",
      "      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (downsample): Sequential(\n",
      "        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
      "        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      )\n",
      "    )\n",
      "    (1): BasicBlock(\n",
      "      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "    (2): BasicBlock(\n",
      "      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "    (3): BasicBlock(\n",
      "      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "    (4): BasicBlock(\n",
      "      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "    (5): BasicBlock(\n",
      "      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "  )\n",
      "  (layer4): Sequential(\n",
      "    (0): BasicBlock(\n",
      "      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (downsample): Sequential(\n",
      "        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
      "        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      )\n",
      "    )\n",
      "    (1): BasicBlock(\n",
      "      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "    (2): BasicBlock(\n",
      "      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "      (relu): ReLU(inplace=True)\n",
      "      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
      "      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
      "    )\n",
      "  )\n",
      "  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))\n",
      "  (fc): Linear(in_features=512, out_features=7, bias=True)\n",
      ")\n"
     ]
    }
   ],
   "source": [
     "network = torchvision.models.resnet34()\n",
    "network.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3),bias=False)\n",
    "network.fc = nn.Linear(in_features=512, out_features=7, bias=True)\n",
    "\n",
    "print(network)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Network Training  \n",
     "#### Parameter Configuration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "CUDA out of memory. Tried to allocate 392.00 MiB (GPU 0; 2.00 GiB total capacity; 934.03 MiB already allocated; 166.50 MiB free; 946.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "\u001b[1;32me:\\MyProject\\VSCode\\py\\DL_2022\\Draco.ipynb Cell 29\u001b[0m in \u001b[0;36m<cell line: 29>\u001b[1;34m()\u001b[0m\n\u001b[0;32m     <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y620sZmlsZQ%3D%3D?line=19'>20</a>\u001b[0m network\u001b[39m.\u001b[39madd_module(\u001b[39m'\u001b[39m\u001b[39madd_linear1\u001b[39m\u001b[39m'\u001b[39m, nn\u001b[39m.\u001b[39mLinear(in_features\u001b[39m=\u001b[39m\u001b[39m1000\u001b[39m, out_features\u001b[39m=\u001b[39m\u001b[39m7\u001b[39m))\n\u001b[0;32m     <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y620sZmlsZQ%3D%3D?line=21'>22</a>\u001b[0m \u001b[39m# vgg16_BN\u001b[39;00m\n\u001b[0;32m     <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y620sZmlsZQ%3D%3D?line=22'>23</a>\u001b[0m \u001b[39m# network = torchvision.models.vgg16_bn()\u001b[39;00m\n\u001b[0;32m     <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y620sZmlsZQ%3D%3D?line=23'>24</a>\u001b[0m \u001b[39m# network.features[0] = nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\u001b[39;00m\n\u001b[1;32m   (...)\u001b[0m\n\u001b[0;32m     <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y620sZmlsZQ%3D%3D?line=26'>27</a>\u001b[0m \u001b[39m# mynet\u001b[39;00m\n\u001b[0;32m     <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y620sZmlsZQ%3D%3D?line=27'>28</a>\u001b[0m \u001b[39m# network = netWork()\u001b[39;00m\n\u001b[1;32m---> <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y620sZmlsZQ%3D%3D?line=28'>29</a>\u001b[0m network \u001b[39m=\u001b[39m network\u001b[39m.\u001b[39;49mto(device)\n\u001b[0;32m     <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y620sZmlsZQ%3D%3D?line=29'>30</a>\u001b[0m \u001b[39m# 损失函数\u001b[39;00m\n\u001b[0;32m     <a 
href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y620sZmlsZQ%3D%3D?line=30'>31</a>\u001b[0m loss_fn \u001b[39m=\u001b[39m nn\u001b[39m.\u001b[39mCrossEntropyLoss()\n",
      "File \u001b[1;32mc:\\Users\\16320\\anaconda3\\envs\\DLpy38\\lib\\site-packages\\torch\\nn\\modules\\module.py:927\u001b[0m, in \u001b[0;36mModule.to\u001b[1;34m(self, *args, **kwargs)\u001b[0m\n\u001b[0;32m    923\u001b[0m         \u001b[39mreturn\u001b[39;00m t\u001b[39m.\u001b[39mto(device, dtype \u001b[39mif\u001b[39;00m t\u001b[39m.\u001b[39mis_floating_point() \u001b[39mor\u001b[39;00m t\u001b[39m.\u001b[39mis_complex() \u001b[39melse\u001b[39;00m \u001b[39mNone\u001b[39;00m,\n\u001b[0;32m    924\u001b[0m                     non_blocking, memory_format\u001b[39m=\u001b[39mconvert_to_format)\n\u001b[0;32m    925\u001b[0m     \u001b[39mreturn\u001b[39;00m t\u001b[39m.\u001b[39mto(device, dtype \u001b[39mif\u001b[39;00m t\u001b[39m.\u001b[39mis_floating_point() \u001b[39mor\u001b[39;00m t\u001b[39m.\u001b[39mis_complex() \u001b[39melse\u001b[39;00m \u001b[39mNone\u001b[39;00m, non_blocking)\n\u001b[1;32m--> 927\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49m_apply(convert)\n",
      "File \u001b[1;32mc:\\Users\\16320\\anaconda3\\envs\\DLpy38\\lib\\site-packages\\torch\\nn\\modules\\module.py:579\u001b[0m, in \u001b[0;36mModule._apply\u001b[1;34m(self, fn)\u001b[0m\n\u001b[0;32m    577\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39m_apply\u001b[39m(\u001b[39mself\u001b[39m, fn):\n\u001b[0;32m    578\u001b[0m     \u001b[39mfor\u001b[39;00m module \u001b[39min\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mchildren():\n\u001b[1;32m--> 579\u001b[0m         module\u001b[39m.\u001b[39;49m_apply(fn)\n\u001b[0;32m    581\u001b[0m     \u001b[39mdef\u001b[39;00m \u001b[39mcompute_should_use_set_data\u001b[39m(tensor, tensor_applied):\n\u001b[0;32m    582\u001b[0m         \u001b[39mif\u001b[39;00m torch\u001b[39m.\u001b[39m_has_compatible_shallow_copy_type(tensor, tensor_applied):\n\u001b[0;32m    583\u001b[0m             \u001b[39m# If the new tensor has compatible tensor type as the existing tensor,\u001b[39;00m\n\u001b[0;32m    584\u001b[0m             \u001b[39m# the current behavior is to change the tensor in-place using `.data =`,\u001b[39;00m\n\u001b[1;32m   (...)\u001b[0m\n\u001b[0;32m    589\u001b[0m             \u001b[39m# global flag to let the user control whether they want the future\u001b[39;00m\n\u001b[0;32m    590\u001b[0m             \u001b[39m# behavior of overwriting the existing tensor or not.\u001b[39;00m\n",
      "File \u001b[1;32mc:\\Users\\16320\\anaconda3\\envs\\DLpy38\\lib\\site-packages\\torch\\nn\\modules\\module.py:579\u001b[0m, in \u001b[0;36mModule._apply\u001b[1;34m(self, fn)\u001b[0m\n\u001b[0;32m    577\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39m_apply\u001b[39m(\u001b[39mself\u001b[39m, fn):\n\u001b[0;32m    578\u001b[0m     \u001b[39mfor\u001b[39;00m module \u001b[39min\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mchildren():\n\u001b[1;32m--> 579\u001b[0m         module\u001b[39m.\u001b[39;49m_apply(fn)\n\u001b[0;32m    581\u001b[0m     \u001b[39mdef\u001b[39;00m \u001b[39mcompute_should_use_set_data\u001b[39m(tensor, tensor_applied):\n\u001b[0;32m    582\u001b[0m         \u001b[39mif\u001b[39;00m torch\u001b[39m.\u001b[39m_has_compatible_shallow_copy_type(tensor, tensor_applied):\n\u001b[0;32m    583\u001b[0m             \u001b[39m# If the new tensor has compatible tensor type as the existing tensor,\u001b[39;00m\n\u001b[0;32m    584\u001b[0m             \u001b[39m# the current behavior is to change the tensor in-place using `.data =`,\u001b[39;00m\n\u001b[1;32m   (...)\u001b[0m\n\u001b[0;32m    589\u001b[0m             \u001b[39m# global flag to let the user control whether they want the future\u001b[39;00m\n\u001b[0;32m    590\u001b[0m             \u001b[39m# behavior of overwriting the existing tensor or not.\u001b[39;00m\n",
      "File \u001b[1;32mc:\\Users\\16320\\anaconda3\\envs\\DLpy38\\lib\\site-packages\\torch\\nn\\modules\\module.py:602\u001b[0m, in \u001b[0;36mModule._apply\u001b[1;34m(self, fn)\u001b[0m\n\u001b[0;32m    598\u001b[0m \u001b[39m# Tensors stored in modules are graph leaves, and we don't want to\u001b[39;00m\n\u001b[0;32m    599\u001b[0m \u001b[39m# track autograd history of `param_applied`, so we have to use\u001b[39;00m\n\u001b[0;32m    600\u001b[0m \u001b[39m# `with torch.no_grad():`\u001b[39;00m\n\u001b[0;32m    601\u001b[0m \u001b[39mwith\u001b[39;00m torch\u001b[39m.\u001b[39mno_grad():\n\u001b[1;32m--> 602\u001b[0m     param_applied \u001b[39m=\u001b[39m fn(param)\n\u001b[0;32m    603\u001b[0m should_use_set_data \u001b[39m=\u001b[39m compute_should_use_set_data(param, param_applied)\n\u001b[0;32m    604\u001b[0m \u001b[39mif\u001b[39;00m should_use_set_data:\n",
      "File \u001b[1;32mc:\\Users\\16320\\anaconda3\\envs\\DLpy38\\lib\\site-packages\\torch\\nn\\modules\\module.py:925\u001b[0m, in \u001b[0;36mModule.to.<locals>.convert\u001b[1;34m(t)\u001b[0m\n\u001b[0;32m    922\u001b[0m \u001b[39mif\u001b[39;00m convert_to_format \u001b[39mis\u001b[39;00m \u001b[39mnot\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m t\u001b[39m.\u001b[39mdim() \u001b[39min\u001b[39;00m (\u001b[39m4\u001b[39m, \u001b[39m5\u001b[39m):\n\u001b[0;32m    923\u001b[0m     \u001b[39mreturn\u001b[39;00m t\u001b[39m.\u001b[39mto(device, dtype \u001b[39mif\u001b[39;00m t\u001b[39m.\u001b[39mis_floating_point() \u001b[39mor\u001b[39;00m t\u001b[39m.\u001b[39mis_complex() \u001b[39melse\u001b[39;00m \u001b[39mNone\u001b[39;00m,\n\u001b[0;32m    924\u001b[0m                 non_blocking, memory_format\u001b[39m=\u001b[39mconvert_to_format)\n\u001b[1;32m--> 925\u001b[0m \u001b[39mreturn\u001b[39;00m t\u001b[39m.\u001b[39;49mto(device, dtype \u001b[39mif\u001b[39;49;00m t\u001b[39m.\u001b[39;49mis_floating_point() \u001b[39mor\u001b[39;49;00m t\u001b[39m.\u001b[39;49mis_complex() \u001b[39melse\u001b[39;49;00m \u001b[39mNone\u001b[39;49;00m, non_blocking)\n",
      "\u001b[1;31mRuntimeError\u001b[0m: CUDA out of memory. Tried to allocate 392.00 MiB (GPU 0; 2.00 GiB total capacity; 934.03 MiB already allocated; 166.50 MiB free; 946.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"
     ]
    }
   ],
   "source": [
     "# Training device (fall back to CPU when CUDA is unavailable)\n",
     "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
     "\n",
     "# Choose the network model to use (keep exactly one block uncommented)\n",
     "\n",
     "# resnet34\n",
     "# network = torchvision.models.resnet34()\n",
     "# network.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)\n",
     "# network.fc = nn.Linear(in_features=512, out_features=7, bias=True)\n",
     "\n",
     "# resnet18 with an extra hidden linear layer; wrap it in nn.Sequential so the\n",
     "# extra layer is actually part of forward (add_module alone does not call it)\n",
     "# network = torchvision.models.resnet18()\n",
     "# network.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)\n",
     "# network.fc = nn.Linear(in_features=512, out_features=100, bias=True)\n",
     "# network = nn.Sequential(network, nn.Linear(in_features=100, out_features=7, bias=True))\n",
     "\n",
     "# vgg16\n",
     "network = torchvision.models.vgg16()\n",
     "network.features[0] = nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
     "network = nn.Sequential(network, nn.Linear(in_features=1000, out_features=7))\n",
     "\n",
     "# vgg16_BN\n",
     "# network = torchvision.models.vgg16_bn()\n",
     "# network.features[0] = nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
     "# network = nn.Sequential(network, nn.Linear(in_features=1000, out_features=7))\n",
     "\n",
     "# custom network\n",
     "# network = netWork()\n",
     "network = network.to(device)\n",
     "# Loss function\n",
     "loss_fn = nn.CrossEntropyLoss()\n",
     "loss_fn = loss_fn.to(device)\n",
     "# Optimizer\n",
     "learning_rate = 0.01\n",
     "optimizer = torch.optim.Adam(params=network.parameters(), lr=learning_rate, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)\n",
     "# optimizer = torch.optim.SGD(params=network.parameters(), lr=learning_rate)\n",
     "\n",
     "# Training bookkeeping\n",
     "# Number of training batches processed so far\n",
     "total_train_step = 1\n",
     "# Number of epochs to train\n",
     "epoch = 30\n",
     "# Global epoch counter\n",
     "globle_epoch = 1\n",
     "# Per-epoch test metrics\n",
     "output_acc_list = torch.zeros(1000, device=device)\n",
     "output_loss_list = torch.zeros(1000, device=device)"
   ]
  },
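The traceback above shows the 2 GiB GPU running out of memory. Besides shrinking `batch_size`, one workaround that preserves the effective batch size is gradient accumulation. The sketch below uses a toy linear model and random batches as stand-ins for the notebook's real `network`, `loss_fn`, `optimizer`, and `train_dataloader`.

```python
import torch
from torch import nn

# Toy stand-ins; in the notebook these would be the real network,
# loss_fn, optimizer and train_dataloader.
network = nn.Linear(10, 7)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(network.parameters(), lr=0.01)
batches = [(torch.randn(4, 10), torch.randint(0, 7, (4,))) for _ in range(8)]

accum_steps = 4  # effective batch size = accum_steps * per-batch size
updates = 0
optimizer.zero_grad()
for step, (imgs, targets) in enumerate(batches):
    # Scale each loss so the accumulated gradient is an average, not a sum
    loss = loss_fn(network(imgs), targets) / accum_steps
    loss.backward()  # gradients accumulate across the small batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()  # one parameter update per accum_steps mini-batches
        optimizer.zero_grad()
        updates += 1
print(updates)
```

With 8 mini-batches and `accum_steps = 4`, the optimizer steps twice, each time on gradients averaged over four small batches.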
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Checkpoint Resume Configuration  \n",
     "Model loading"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "model_path = \"\"\n",
     "network = torch.load(model_path, map_location=device)"
   ]
  },
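`torch.load(model_path)` restores a whole pickled model, which ties the checkpoint to the exact class definition that saved it. A more portable pattern is to checkpoint `state_dict`s together with the epoch counter; the sketch below uses a toy model, not the notebook's real objects.

```python
import os
import tempfile
import torch
from torch import nn

# Toy stand-ins for the notebook's `network` and `optimizer`.
model = nn.Linear(4, 7)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

ckpt_path = os.path.join(tempfile.gettempdir(), "checkpoint.pth")

# Save a resumable checkpoint: weights, optimizer state, and epoch counter.
torch.save({
    "epoch": 5,
    "model_state": model.state_dict(),
    "optimizer_state": optimizer.state_dict(),
}, ckpt_path)

# Restore into freshly constructed objects.
model2 = nn.Linear(4, 7)
optimizer2 = torch.optim.Adam(model2.parameters(), lr=0.01)
ckpt = torch.load(ckpt_path)
model2.load_state_dict(ckpt["model_state"])
optimizer2.load_state_dict(ckpt["optimizer_state"])
start_epoch = ckpt["epoch"] + 1
```

Resuming then continues the epoch loop from `start_epoch` instead of 1.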
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Parameter adjustment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Adjusted image-augmentation parameters\n",
     "\n",
     "tensor_trans = transforms.ToTensor()\n",
     "Hflp_trans = transforms.RandomHorizontalFlip(p=0.4)\n",
     "RandomR_trans = transforms.RandomRotation(degrees=10)\n",
     "# Random perspective transform\n",
     "test_trans = transforms.RandomPerspective(0.4, 0.4)\n",
     "# Random histogram equalization\n",
     "test_trans2 = transforms.RandomEqualize(p=0.1)\n",
     "# Random auto-contrast\n",
     "test_trans3 = transforms.RandomAutocontrast(p=0.4)\n",
     "# Random sharpness adjustment (blur or sharpen)\n",
     "test_trans4 = transforms.RandomAdjustSharpness(sharpness_factor=0, p=0.2)\n",
     "test_trans5 = transforms.RandomAdjustSharpness(sharpness_factor=2, p=0.2)\n",
     "\n",
     "# PIL-based transforms must run before ToTensor, so ToTensor goes last\n",
     "train_trans = transforms.Compose([Hflp_trans, RandomR_trans, test_trans, test_trans2, test_trans3, test_trans4, test_trans5, tensor_trans])\n",
     "\n",
     "# Adjusted learning rate\n",
     "learning_rate = 0.05\n",
     "\n",
     "# Number of additional epochs to train\n",
     "epoch = 10"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Training Loop"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-----第1轮训练开始-----\n"
     ]
    },
    {
     "ename": "RuntimeError",
     "evalue": "CUDA out of memory. Tried to allocate 392.00 MiB (GPU 0; 2.00 GiB total capacity; 827.40 MiB already allocated; 266.50 MiB free; 846.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "\u001b[1;32me:\\MyProject\\VSCode\\py\\DL_2022\\Draco.ipynb Cell 32\u001b[0m in \u001b[0;36m<cell line: 1>\u001b[1;34m()\u001b[0m\n\u001b[0;32m     <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y204sZmlsZQ%3D%3D?line=12'>13</a>\u001b[0m \u001b[39m# 优化器调用\u001b[39;00m\n\u001b[0;32m     <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y204sZmlsZQ%3D%3D?line=13'>14</a>\u001b[0m optimizer\u001b[39m.\u001b[39mzero_grad()\n\u001b[1;32m---> <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y204sZmlsZQ%3D%3D?line=14'>15</a>\u001b[0m loss\u001b[39m.\u001b[39;49mbackward()\n\u001b[0;32m     <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y204sZmlsZQ%3D%3D?line=15'>16</a>\u001b[0m optimizer\u001b[39m.\u001b[39mstep()\n\u001b[0;32m     <a href='vscode-notebook-cell:/e%3A/MyProject/VSCode/py/DL_2022/Draco.ipynb#Y204sZmlsZQ%3D%3D?line=16'>17</a>\u001b[0m total_train_step \u001b[39m+\u001b[39m\u001b[39m=\u001b[39m \u001b[39m1\u001b[39m\n",
      "File \u001b[1;32mc:\\Users\\16320\\anaconda3\\envs\\DLpy38\\lib\\site-packages\\torch\\_tensor.py:396\u001b[0m, in \u001b[0;36mTensor.backward\u001b[1;34m(self, gradient, retain_graph, create_graph, inputs)\u001b[0m\n\u001b[0;32m    387\u001b[0m \u001b[39mif\u001b[39;00m has_torch_function_unary(\u001b[39mself\u001b[39m):\n\u001b[0;32m    388\u001b[0m     \u001b[39mreturn\u001b[39;00m handle_torch_function(\n\u001b[0;32m    389\u001b[0m         Tensor\u001b[39m.\u001b[39mbackward,\n\u001b[0;32m    390\u001b[0m         (\u001b[39mself\u001b[39m,),\n\u001b[1;32m   (...)\u001b[0m\n\u001b[0;32m    394\u001b[0m         create_graph\u001b[39m=\u001b[39mcreate_graph,\n\u001b[0;32m    395\u001b[0m         inputs\u001b[39m=\u001b[39minputs)\n\u001b[1;32m--> 396\u001b[0m torch\u001b[39m.\u001b[39;49mautograd\u001b[39m.\u001b[39;49mbackward(\u001b[39mself\u001b[39;49m, gradient, retain_graph, create_graph, inputs\u001b[39m=\u001b[39;49minputs)\n",
      "File \u001b[1;32mc:\\Users\\16320\\anaconda3\\envs\\DLpy38\\lib\\site-packages\\torch\\autograd\\__init__.py:173\u001b[0m, in \u001b[0;36mbackward\u001b[1;34m(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)\u001b[0m\n\u001b[0;32m    168\u001b[0m     retain_graph \u001b[39m=\u001b[39m create_graph\n\u001b[0;32m    170\u001b[0m \u001b[39m# The reason we repeat same the comment below is that\u001b[39;00m\n\u001b[0;32m    171\u001b[0m \u001b[39m# some Python versions print out the first line of a multi-line function\u001b[39;00m\n\u001b[0;32m    172\u001b[0m \u001b[39m# calls in the traceback and some print out the last line\u001b[39;00m\n\u001b[1;32m--> 173\u001b[0m Variable\u001b[39m.\u001b[39;49m_execution_engine\u001b[39m.\u001b[39;49mrun_backward(  \u001b[39m# Calls into the C++ engine to run the backward pass\u001b[39;49;00m\n\u001b[0;32m    174\u001b[0m     tensors, grad_tensors_, retain_graph, create_graph, inputs,\n\u001b[0;32m    175\u001b[0m     allow_unreachable\u001b[39m=\u001b[39;49m\u001b[39mTrue\u001b[39;49;00m, accumulate_grad\u001b[39m=\u001b[39;49m\u001b[39mTrue\u001b[39;49;00m)\n",
      "\u001b[1;31mRuntimeError\u001b[0m: CUDA out of memory. Tried to allocate 392.00 MiB (GPU 0; 2.00 GiB total capacity; 827.40 MiB already allocated; 266.50 MiB free; 846.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"
     ]
    }
   ],
   "source": [
     "for i in range(globle_epoch, epoch + globle_epoch):\n",
     "    print(f'----- Epoch {i} training started -----')\n",
     "\n",
     "    # Training phase\n",
     "    network.train()  # needed when the model contains layers such as BatchNorm/Dropout\n",
     "    for data in train_dataloader:\n",
     "        imgs, targets = data\n",
     "        imgs = imgs.to(device)\n",
     "        targets = targets.to(device)\n",
     "        outputs = network(imgs)\n",
     "        loss = loss_fn(outputs, targets)\n",
     "\n",
     "        # Optimizer step\n",
     "        optimizer.zero_grad()\n",
     "        loss.backward()\n",
     "        optimizer.step()\n",
     "        total_train_step += 1\n",
     "\n",
     "        # Log metrics on the current batch\n",
     "        if total_train_step % 100 == 0:\n",
     "            accuracy = (outputs.argmax(1) == targets).sum() / targets.shape[0]\n",
     "            print(f'Step {total_train_step}, batch loss: {loss.item()}, batch accuracy: {accuracy}')\n",
     "\n",
     "    # Evaluate on the test set after each epoch\n",
     "    network.eval()  # switches BatchNorm/Dropout layers to eval mode\n",
     "    total_test_loss = 0\n",
     "    total_test_accuracy = 0\n",
     "\n",
     "    # Disable gradient tracking during evaluation\n",
     "    with torch.no_grad():\n",
     "        for data in test_dataloader:\n",
     "            test_imgs, test_targets = data\n",
     "            test_imgs = test_imgs.to(device)\n",
     "            test_targets = test_targets.to(device)\n",
     "\n",
     "            test_outputs = network(test_imgs)\n",
     "            test_loss = loss_fn(test_outputs, test_targets)\n",
     "            total_test_loss += test_loss.item()\n",
     "            accuracy = (test_outputs.argmax(1) == test_targets).sum()\n",
     "            total_test_accuracy += accuracy\n",
     "\n",
     "    print(f'Loss on the whole test set: {total_test_loss}')\n",
     "    print(f'Accuracy on the whole test set: {total_test_accuracy / test_data_len}')\n",
     "    output_acc_list[i] = total_test_accuracy / test_data_len\n",
     "    output_loss_list[i] = total_test_loss\n",
     "\n",
     "    globle_epoch += 1"
   ]
  },
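The task statement asks for suitable evaluation metrics, and with the class imbalance noted later, overall accuracy alone can be misleading. A dependency-free sketch of a 7-class confusion matrix and per-class recall; the toy `targets`/`preds` lists below are made up for illustration.

```python
from typing import List

NUM_CLASSES = 7  # angry, disgust, fear, happy, sad, surprise, neutral

def confusion_matrix(targets: List[int], preds: List[int], n: int = NUM_CLASSES) -> List[List[int]]:
    """Rows are true classes, columns are predicted classes."""
    m = [[0] * n for _ in range(n)]
    for t, p in zip(targets, preds):
        m[t][p] += 1
    return m

def per_class_recall(m: List[List[int]]) -> List[float]:
    """Recall (per-class accuracy) computed from each row of the matrix."""
    return [row[i] / s if (s := sum(row)) else 0.0 for i, row in enumerate(m)]

# Toy predictions in which class 1 is often confused with class 0.
targets = [0, 0, 1, 1, 1, 3, 3, 6]
preds   = [0, 0, 0, 0, 1, 3, 3, 6]
m = confusion_matrix(targets, preds)
print(per_class_recall(m))
```

In the notebook, `targets` and `preds` would come from accumulating `test_targets.tolist()` and `test_outputs.argmax(1).tolist()` inside the evaluation loop; the low recall for class 1 here mirrors the kind of failure that overall accuracy hides.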
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Save the Current Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Save the trained model\n",
     "torch.save(network, '/kaggle/working/network_.pth')\n",
     "print('Model saved')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Save Test-Set Metrics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Save the test-set accuracy and loss recorded at the end of each epoch\n",
     "out_acc = output_acc_list.cpu()\n",
     "out_loss = output_loss_list.cpu()\n",
     "output_dic = {'test_accuracy': out_acc, 'test_loss': out_loss}\n",
     "\n",
     "# Write the loss and accuracy data to a CSV file\n",
     "df = pd.DataFrame(data=output_dic, columns=['test_accuracy', 'test_loss'])\n",
     "df.to_csv(\"/kaggle/working/output.csv\", index=False)  # change the path as needed"
   ]
  },
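`output_acc_list` was preallocated with 1000 slots, so the CSV written above is padded with trailing zeros. One way to trim it is to slice to the epochs actually trained before building the DataFrame; the lists below are stand-ins for the real tensors.

```python
import io
import pandas as pd

# Stand-ins; in the notebook these come from output_acc_list.cpu() etc.
globle_epoch = 4  # next unused epoch index (training filled indices 1..3)
out_acc = [0.0, 0.41, 0.48, 0.52] + [0.0] * 996
out_loss = [0.0, 190.0, 165.0, 150.0] + [0.0] * 996

# Keep only the epochs that were actually trained (indices 1..globle_epoch-1).
df = pd.DataFrame({
    "test_accuracy": out_acc[1:globle_epoch],
    "test_loss": out_loss[1:globle_epoch],
})

buf = io.StringIO()
df.to_csv(buf, index=False)
print(buf.getvalue().splitlines()[0])
```

The resulting CSV then has exactly one row per trained epoch instead of 1000 rows.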
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Data Augmentation  \n",
     "Inspecting the training samples shows that some are irrelevant and contain no face at all. The faces exhibit in-plane tilts of roughly ±15 degrees, substantial out-of-plane head rotation, varying skin tones, partial occlusion by hands or sunglasses, and a mix of real photographs and cartoon avatars. The classes are also highly imbalanced: label '1' has an order of magnitude fewer samples than the other classes. We therefore consider removing the irrelevant samples and using data augmentation to rebalance the classes.\n",
     "With a self-built CNN the test accuracy plateaus at about 52%; VGG16 reaches 56% and ResNet18 about 54%, none of which is satisfactory.  \n",
     "#### Data Augmentation via transforms During Training\n"
   ]
  },
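Since each label is embedded in the file name as `index_label.jpg`, the class distribution (and hence the shortage of label 1) can be measured with a quick count. A sketch using a hypothetical directory listing in place of the real `os.listdir` result:

```python
from collections import Counter

def label_of(filename: str) -> int:
    """Extract the label from an '<index>_<label>.jpg' filename."""
    return int(filename.split("_")[1].split(".")[0])

# Hypothetical listing; in the notebook this would be
# os.listdir('./DL_2022_4/data/train/').
img_list = ["0_3.jpg", "1_3.jpg", "2_1.jpg", "3_0.jpg", "4_3.jpg", "5_6.jpg"]

counts = Counter(label_of(name) for name in img_list)
print(sorted(counts.items()))
```

Running this on the real training folder makes the order-of-magnitude gap for label 1 explicit before deciding how much to augment.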
  {
   "cell_type": "code",
   "execution_count": 81,
   "metadata": {},
   "outputs": [],
   "source": [
     "tensor_trans = transforms.ToTensor()\n",
     "Hflp_trans = transforms.RandomHorizontalFlip(p=0.5)\n",
     "RandomR_trans = transforms.RandomRotation(degrees=15)\n",
     "train_trans = transforms.Compose([tensor_trans, Hflp_trans, RandomR_trans])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Effect on a single image  \n",
     "Random rotation within 15 degrees plus a 50% chance of horizontal flip"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAADAAAAAwCAAAAAByaaZbAAAHNUlEQVR4nG2V22+cVxXF1z7nfPe5eMYzvsWXxElzJW0aEiKUqrSlpC2lRlWhReIFCYkXJIRQnxBVQTzxwh/ACwKpfaFFqpBKiRAqVdS0onHSNHbi2s7VdmyPZ8ae+Wa+6zmbB1dVuaz3n9aW9t5rEQDgVyh7tv2HnauVhgPL7lkY1D0b4i7+RwoAIPPIty5gYaTh6yq3ikll4jaxOuR2SmbpPwHaNfDFYPzbRiFMvFxEgZjy7a595MDjPy81tvY8fTye+W+Aq5WLryWuMOP75jB6ZKcm61eOToq9Y+tLFy5F3/lGKknwU18YSVud81pKv/Zg6/lakAsl9JNZDxjIlw6u3P2zeASQ+Tua8pldgAVfuQnHmz5QOyg4JiWUjO2M1neWt7PJpPfmU4nMoZjpLSgAr2gXs8b3qxNFpMKWDgVS5WSpFiRhR8FROclUW5EgVgDIi365UxIHjhWYctcvRIFft5IcKYk+MJzat29Ox2TrTDBIACDjhhAFsLa94kBl6tDhibI9aavcTQUysuJ9f9JCSC1BRAKAZs9JrAoi5lRRuFGVMuzF1BCxkQXIKjcX3vc4lRrYBYSIfCBxw35skjzdvtVMDaQ/pETR5RzGEaOne0qSIAACeFUO/n5zYrjU0mlf61QWnIAVUBj1/cD1HBQPBN2FQpqyZjALgKW7P4tYVGp+uegUBnyTGtXtmkDqsMfDkzaj/W4q7AxgNgJgo45buZdZTgGxVmBLdtrC15cR5c13hRgYHtaL64BN9NmmrWTcVKIKyLAmqFimO/FostbpfPLBreJVp3bmAX3+4kvbgnZP4xeFavCzoBmKuitD13GXJ3vrd1kg63eu62+eeCe9Hjx5qXY9UjlATApk4E/PBsGxiaxXmrqEf56ZuxkcGapXNhcfHl7Bd68WPPnQvZWVyZwMgxSEQnR4Fgmn2l6+9489337tyIyu1Li8d7k2G681N9PG6EHVufZADBIMBc1KHLe5hUSmn97eMzPwamu2Wckq60ty52iyWtybHrvrOsGtjUJOAKtXisMi2nfy4+LySWEOrD5yafPLszwcNqxsqLvfpFFxdHoJuq2e3ihrQ2QUkdd55bHMS/fG4eDQi5fbZ+48ZPbAypVV2q7UHacipvLuaDN6uAeCYQUVV1+8eOrwha6T7Rhz2vGHRibR21LEtfutgcPGCb2BvNwMjcWGmRU78Kazonx0drAiCgOe7XJlmyPBGaI0yOIiqJnqdGQ1z1hAGlXqldKSv/f+FTsqCUsEfnUjrH5qbauBeMweb88PlpOkn9im367nuSZSQdUzd4fj2ZPLMYaV0jZq9seNmpfVok7/2h7VmC+MhW2r9Gy5R1JkEIY1JnLxaBGdYtl33dTxby+MlMfkB378yetvXa7XN3qNdRQSacPAspUklQYoF9ob0UY5sUQczcuzbwfX0sEQc8nlL92X4wuereezYeGkTJliIm0P3a+Ksa1wvcS+kjeP2fhw9MHOG/LkoYbbo7hQZPuRIbunLUqhKp4mam9uFoN6vz3WymSrN7r9tdNyrNp6rDO3f2k12F9z6Vw9SSSxkUKJnIw8ItLqUtM+uo2uvzVluo5NC5s/uHGhfX0tKpQOG87AYCbgGSWFtpLi4Y/iU+kna0duDPb3ja6cP7uY4Ngb5tSxtx+4dLyiNAHExGBAQWas4sJXrlUfX+0k++6PtW78rXV35tSDq/7DPQr/eqI8RCbTBOLdB1KJdmDl1gmqnnrv6lN9+HzU+M9PWpN51LrSFs6e+s7NM1MRgwFiKCOsPJBC5ZQtsn7/azcnN1/cDrNFfTDcWC4taCHZ7WgQswH4W1Ai07YhN7LS4OuvGYfK8cAg3f67laCdDy1YZHdlhVwAnw0lWAhiZIAVTiT2yvurXdHEiCdr/v7ifJWKTpYTD4OZaTc1DMAkSJG2+4+/ObH2
whxXSxPV1Fw1iVl1amnFn38500Ib+qxQtGXZ0MKQwVfXL46/99j1YLPk6ICllluCrb4ZIZNCsGB+FlAMgAUgTS7ilbLF/anm0KZno+5JbahnZCuQLPMcmS0BCDDDkGDNIveesIL8QwEPDHi+o8uC2G2mEqRdCnKNXQem3b3L3tHWXFF+dOjW1DryHqc7+U66NX8WkjMRU6Q0AGGU5ZDRQkKz9l+A7Y18qm/IdmpsAQSYb9Z/eFExM2yN3bgHgwRl2o1bo/E4RFq3nMu3pbWzIdBdvGefiO9kTkLQ4jkA6iX8RVtkNAuxef7U1ORq38rXnXi1aIXdka1s/tzBQuv0b14uh0rT58UOnAdy/ca/Xpfjjbm5VqNrba92eLK6Gv70w++lJTGz/7mzsWWe+7zYcQ4ARqkzmO+bfvonFPRTyobCdO3c4IxDhT9G2+9lT+TOFxw+15rtW3fW7rTC11P27z/6zIHUzdX8j6cy8+v8R/8PAAA2aP1uLa19/62T0949u8uLr+Y1u1/4GAD+DWsSpkjj5ZT4AAAAAElFTkSuQmCC",
      "text/plain": [
       "<PIL.Image.Image image mode=L size=48x48>"
      ]
     },
     "execution_count": 82,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Convert one augmented training sample back to a PIL image for inspection\n",
     "PIL_trans = transforms.ToPILImage()\n",
     "img = train_dataset[1][0]\n",
     "img2 = PIL_trans(img)\n",
     "img2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Testing within the DataLoader"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 83,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAADAAAAAwCAAAAAByaaZbAAAGgklEQVR4nGWWO4ycdxVHf/f+7/ec2d2Z3Ym9jr2QOImx4xglQIgTEAghUaRBCg0FUkoKJHoKhARIiAIaWooUKBWiiISERJMECZQgoRjyUN4O1sbe9Xp3xjPzfd//dS8FRUA5/ZFOeQgAHhlhBchI+f5m/tXH5q2o7Q/d/Pg3+BQCAJr62BZt2nrwpHt2qxi3ZnQmpuC7PlDx/KcFeN1yFHVnOP7cJhcjNebG5Tp9JURwPSrll/8vcM2yqt0UB/N765TNKZTZwLNkOLzqgv7I3742efmTpFyrY1m5dT7LmlgNCuc0VqI0S6hRDKtanyi5LwTAZWksbqRsOm92BiZkYihgxAZUDolFntj6O49QLxlA5QgCpsmHs5+PyoLgmBwDzGBWdo7gG774tO89iwCAcCYzGgLMAS4GI4pESg6a1DJ01KHYPXMrcc8A7L+4qlHOQZO5qiCpShc1rzNlE+pQaPrahTokBuAMZnAyoMsK74iMLEN7ZjATNLCDaRfdeKMX4GpqojmSviuWi6licLVlqWy+nqy9kCkERsmjOl/9eyyAEdScxWST7+/ECuoWsNRaz0t1tirVWeIhGMxtTVcCaMkkTJqoUuXsZXncHtxrfoYji1FWFSkNgyqTuFYEMFMDG8PYX4+9f6BbF1iq6/MoAzayAYM3cADVIxE8zsog1mwFvx5ur7mKuaH5Qu8+duyxOj+4rMLRzJL540MBQ3LlNAzFtrt1Nn4UT7d169bzo2W6cxiLRTom92UJKZNxPdsSKDExCMTiz/9j6+K/Xp1+o42+P1gvQtfXXGSXXmtL0ppcOTkrMAIMxBBDePDy2/3hwd6VD+7eND1asTvcHEvFN9pxU0vhjM4KwJQhIIvW7Zwtr/5psiXTrZUusBpvnnv1YH+K9AiptOqI3ES+UNQeIKjVSfvP3lh8M06ubG4ntzEp/O4Ve7keT+NiVrGUyJRPxLGKiQ79hBaPPo1mfSmM9tyo3Wh27lmwfvH17adi8WHbTgDiwdXCxGakyTgJlSfTya17dsaHx8Wsq8s2BP9Me5SKJyMXJjFTZCEABNNyKDhx3fututHI2eqxymb4eD5vTpWFa8ploUmCFyIFmZnlymB1LFBrRtbcYhYTO94gG6pCjLoyJUpSsLGZchw51eRKU9/EgOCCScE2K7Ttqa2CeVsP/uZ1gRFyH7GraJ/xjprAVUTMkVoR5Uw0cNMGWxORqq6EYJaylbCiO5kURV9Ohm7IvkPZbYwjpZKZgvmoMSj6E6ZkaiQwIhywmrXJyNXSWmzLWJdNjss4dH2yNAyp2pEMhRopcdbr54vswpo9eKxx1bomUAg+9RpTTOseod4TLTw8E2dkujHf4vWNvr2zS6rlvKs1lOrEsnLIMaTUL56XIppVfTuJQ4lHN5CL69dyvLCuNitbllpkhWVENQQfFQJRsC7zJMaNoO9daNJiNL599E+e3vtA2wZOGjVQ9IlyCGooIN6F9e6Z9+u0LOSj96/0dGV07dRmq9vnNl3BzoeUnBLHGDLBCLJ9uPODh+ufdn3jnXvzopTR3bcaN80kjsSrIRmSxn7ls1g0QD7zvT1Z/OoS3tlnX9189yLudoI9F3RWUJUtQzNZTMmEMuO3kBt/nvf30N6Ffrhdxt1LwwIFu7QmSqw5OjOQpZwYDIMDZNhfdIvxUH735Cfkbh3s5EqlbTuf+zXKKjPMNGUjqCYjQFaIG6PRH//20Je+/WIX3ngyHr5ZfeOWt/FdCOVWc1wDMWfKqhmADNZMRyEPh29dXr/I17/uP749jzcN8q0YUFROY1DTOMAsxucAKe9vDlkv+PTu/admHyyWG49fdb/gWDZ/2dhsTyfSlBNrgsI0AJDL7mB7NL2zaL4zHA7T3WqqSD9c2aj6nU5OIVEK2XKMWXNMGYC4w9OX2heOeJuL+3ZeeuMpWD6+
4OOqe5YUy8QxwEyjqvcKANLvnXusvvPma5eamTu6+9d3Hqr2315pWSoRLNnSd0UOd2GaUmYAcqaTGB4ePSjvuDOzp++8tG3L/MJ0Vjyyyo7zKoSgqesCI/brVwDIrwHgZ9vn5jt30rTf3X//4Y0bp6nDW5sbdc8+DljnIXiB7/5nHX4MAH9I137/wWx30l7NqXKvtJYtU+y1SNYnSXNffSIAAJ4BMIrvnZHT1OTu81V10knsBl/H5aB8sqoMAOjTg4PntExD44/75XoV1gP1/Wh/d57b+AaA/wD0CymYyLQXogAAAABJRU5ErkJggg==",
      "text/plain": [
       "<PIL.Image.Image image mode=L size=48x48>"
      ]
     },
     "execution_count": 83,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Dataloader\n",
    "test_dataloader = DataLoader(dataset=train_dataset, batch_size=1, shuffle=False)\n",
    "for data in test_dataloader:\n",
    "    imgs,targets = data\n",
    "    imgs = imgs.reshape(1,48,48)\n",
    "    img = PIL_trans(imgs)\n",
    "    \n",
    "    break\n",
    "img"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Augmenting Under-Represented Classes  \n",
     "Label 1 has very few samples. Taking roughly 5000 samples per class as the baseline, we augment the label-1 samples tenfold.  "
   ]
  },
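The offline tenfold duplication below is one option; an alternative that leaves the files on disk untouched is to oversample rare classes at load time with `WeightedRandomSampler`. A sketch with made-up labels standing in for the real training labels:

```python
from collections import Counter
import torch
from torch.utils.data import WeightedRandomSampler

# Hypothetical per-sample labels; in the notebook these would be parsed
# from the training file names.
labels = [0, 0, 0, 1, 3, 3, 3, 3, 6, 6]

# Weight each sample inversely to its class frequency so rare classes
# (like label 1) are drawn about as often as common ones.
counts = Counter(labels)
weights = torch.tensor([1.0 / counts[y] for y in labels], dtype=torch.double)

sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
# Pass to DataLoader(dataset, batch_size=..., sampler=sampler) instead of shuffle=True.
indices = list(sampler)
print(len(indices))
```

Unlike duplicating JPEGs on disk, the sampler rebalances every epoch on the fly and composes naturally with the random transforms above, so the duplicated samples are not byte-identical copies.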
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Augmentation scheme"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 142,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Random brightness, contrast, and saturation jitter\n",
     "test_trans1 = transforms.ColorJitter(brightness=0.2, contrast=0.3, saturation=0.3)\n",
     "# Random histogram equalization\n",
     "test_trans2 = transforms.RandomEqualize(p=0.3)\n",
     "# Random auto-contrast\n",
     "test_trans3 = transforms.RandomAutocontrast(p=0.3)\n",
     "# Random sharpness adjustment (blur or sharpen)\n",
     "test_trans4 = transforms.RandomAdjustSharpness(sharpness_factor=0, p=0.3)\n",
     "test_trans5 = transforms.RandomAdjustSharpness(sharpness_factor=2, p=0.3)\n",
     "\n",
     "aug_trans = transforms.Compose([test_trans1, test_trans2, test_trans3, test_trans4, test_trans5])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Augmentation process"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 146,
   "metadata": {},
   "outputs": [],
   "source": [
     "import os\n",
     "from PIL import Image\n",
     "from torchvision import transforms\n",
     "\n",
     "base_path = './DL_2022_4/data/train/'\n",
     "img_list = os.listdir(base_path)\n",
     "# Generate nine augmented copies of every label-1 training image\n",
     "for i in img_list:\n",
     "    label_index = i.find('_') + 1\n",
     "    if i[label_index] == '1':\n",
     "        tem_img = Image.open(base_path + i)\n",
     "        for j in range(9):\n",
     "            trans_img = aug_trans(tem_img)\n",
     "            trans_img.save(f\"./DL_2022_4/data/error/{i.split('.')[0]}_{j}.jpg\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Miscellaneous tests"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 93,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAADAAAAAwCAAAAAByaaZbAAAGdUlEQVR4nD2PyXJcWRFAM/PmfVONUpWkkku2bGO7cQ+4gQ4iCBys+QT4AX6PTUewZudVO2xGt5s2WLYla1ZN790hM1k0cNZncQ7WLoFll+/87nBYVR49sWcw55EZJIskNUnr1fHKP8OrvyzJsoE5c3crA2YkJPwvYGYGCugcKAzLWHdVWTBkMnNpdMeJIQERghkgApgigJmhM0DnXYvH7IQMEMBsvxezihohgqkBwf9AVAXH6MqRPzZmRTAUf8eLqSECwA89//edSTY0LMbui+WVIwMDsP6uR1AFMlFABFNABFVVQDLVnLSsisNPfSY0Q7NBiYSqimZgagYAAPbDjCGaGjp21cNP+kQGaFhbBkQTQ4dmCshIiIDOoRkgIFHEIY8+3ycjl52CAWSTTISKaIjiNAoImmfHzM5hI66ut++SWiKF3JYmhhozIhKpQrJKkksOhw48EDrkovC9fUYURMz1oKhSy8KCKgBoCSOaweY6ofPo+itxyEVuGAyTs3WxjSVgULUSBDAlL4lVz//xDqeUynq3zwSu1IoV0HnhxbOdW/3apZUYs8toG/PnL16duNvzce2SBY9OwbBgr+LENs9zM9q9e2tUdMgVJCKJL1+84b1HB8NaYhdy8EioVKADY7Jib9+UweYPp5OyLpUsfPcNb/diq7kwP2DobTUF6+KUESk5Xt6fzcfu5u3Z0aP70+GgkDau9ntXr7+/rCZty9sHvdnukC17Ys7Zq7gTeFOGYlgOr1/OftwrYr6myerq74e9b8pvZ6PXl0/2dysBcJ6TM8EitB9UHp+9/PL9vH8zuc2NBmnD6a9/++rtPx8+OZGnvzjoaxZh5kIFQa0sVp/8/k9f/224n8JpNS6svn2zOpx/P/yNTZb9p1/NVROraWIfgDRqb+AeD391UX764FXCemT9oq4+772/rndS2r5/MCYQh2oWOAGrOrDb8Ge89bPBKDxdH3nXMtlwy8bnHY52D/q9xgXNKUiOjGCKTmDqb47jrrmp8ryvFKjw29rbLXw9Lgo0IpCQc7thzgIE5oePF7Hu706GeOVZ1BvrbnPRlVXRY8IcRAQwrQIrOENA7M8/c76EEm0x9xbVCUm1m1suYgVZIackslllzigMhCHbcFAQprxMru6EwbK5xDVQoaghBgNNy4RExualXF8vF9FVZYlLG3UGFoSQyIid5RhTFslh0xmzATKW2p1vNl2FBKGbDVTK5DRaYUpmqppzzjl3m4glAzpP2MrJoovRFUQ8MW+aRCmLMwVHICI5xS4k80joPTaJ7OT4eh2ypA0WXHFHlXe6WmdQkZxTShLaqOyJsUm1A9art/tbDarlRKEWBI9RIUpiAtMskkIU8hC5LqjuSik2bw+2GgO8ON6DsK4s88aarisFQU1jWHcZHRvwIG1l8xnw9N3OCEzDi/BFWhy99g8n9w5uLoAQzCR0q0TkRJTbnl02LkRanb0fY6JV/MP5Ia7vAE4OqAlrymYa22DMOXLacGXHPdcZGXzYO+9lbreaP+786Kuf9wpn0frQgWTNkp2KWQ5rthXvpByy0NXHneFWCdV+UY3lArMz7AglKYIIKocg3WLNuZz0jhdVQtCzs62eLQduPv9p4orBKaQQkhIhoYpECYuWS6gv2yaJS3jxYXu7tDRv+twLmxpzktM3bjKwBARuI7m7uQGOszZARJcB08e92a2r89Pt6bJASesgTQ79KCg5JbXYbj6u7nNfV626DoQMF0fbO+XR8ZP2VPVWUQ9qzzub1YIwBhHaLK+uJw84xGigHAlV0/vxbPaTg88OxtqOp+W5dWupUpYUlGW16T7qvSlHqVEpG2WH0L6uraY3U+uNMFycr2QfhLlLvMnrxc0yPz5krshDGxkDGiAt/+ru0XdXv9wJ2RbrwV4jKUFhQbujLob5vbFndsm5PkSDBAp2
8e1tmrz694RjuXdvPgDiZEhx/eF80xven3rHkLUoIGoBkLNUJf7rTvMg9gfT0XzaaAZk6ZbXl2cLnd3ecc5zsF5VtOrXqsY8GOK7wfbe7HC/rovQImrMOdycvW/d7NZu7UgZmi1wMa2zSS6HtY/rZ182ah1EdDkFxc3y/Oh4OTo4HIz7LMoiF4mZCxchY4NXi5Se+yQ3TU0VooT19cXJu27/wXTbNw3nzGoyYDFYi2PAfNO58vz55eJg0tAAJHcXpxctzR7tDF1Jyi79B0PvOcxWGNNGAAAAAElFTkSuQmCC",
      "text/plain": [
       "<PIL.JpegImagePlugin.JpegImageFile image mode=L size=48x48>"
      ]
     },
     "execution_count": 93,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# note: this ColorJitter is only constructed, never applied; the cell just displays the raw image\n",
    "transforms.ColorJitter(brightness=10,contrast=10,saturation=10)\n",
    "test_img = Image.open(base_path+'10898_6.jpg')\n",
    "test_img"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 136,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAADAAAAAwCAAAAAByaaZbAAAGnklEQVR4nD2PSZNlVRVG9z5nn9u+tjLzZZ9ZZFVB0YqIhIQRhkE49y8Q/gJ/lgMnjp0xAwMk1AIUlCzIrMqmsnn53rv3dHtvB6Br+MUafAtriqDZ5vu/OxrXtcPCUEGgtkBykDPnJJrTcnGyKD7C68/nRpOCWrUPagUiNGjwR0BEFQTQEghMqtD4uiwJslG1afKSzYoGrEFQBUQAFQRQVbQKaJ3t8MQRGwUEUN0dhCwsahBBRcHC/0AUAUtoq2lxokSCoMjupYJVFBEAfvjzf98qJ0XFcmp/endljYIC6HDLIbCAURZABGVABBFhQKMiOUlVl0dvuWxQFVVHFRoUEVQFFVUAANAfYhRRWNE6Wz9+fWiMAio2muHHHVUE0KFBBLSEqoBgjAk4psnbe0aNzVZAAbJysgYFjSKylcjAqIWz5Jy12GZqmvUHRjQZgdxVmhUlJEQ0RgSSVpxssjgmcGCQ0BVlMdglREbE3IyKOnWOiVEyAErGgKqwuk5oC7TDRbZIRWoJFJPVRbGOJRjPA62AAWMsODqWiydPcWZS2WwNnQWqpCYBtI7d/KPN/aGzacHqyGbUToqLT588s/f3pw1FDQVaAcWCnLBlXX2S28n2g71J2aOrimhNjp99+g1tv3YwqTn2PnnnDIop0YKS0XJ7V4VADx7PNqq6EqP+nx+7tUHsJJXqxgSDtbYgmZ8RokmW7h7t7t+zN8fnx689mo1HJXdhsTe4+urrF/VG19H64WBna0yanSHK2QnbZ/B15ctxOb7+bOeNQRHzjVlfXP39aPBx+eXO9MsX7+5u1QxABSWrjKXvvhd+8/yzd/+xP7zZOHSteO7C2W8+/OLbfz1+9zR/8MvDoSQ2RFQII7CW5eL13//5j3+b7EV/Vk9LbczN4mj/6/Fvdf1u+MH7ByzJiUqkwoORKMORfWP868vyrcdPEjZTHZZN9fbgu5tmM6b1R4dTA2xRVANFIBELcgh/NXvvjSf+g+Wxs52zOlk+mF54nG4dDIcNec7RcwqEoIKWYVbcnoQttTOhg5EYb0q3LoOt0jX3ihLVGOCQU78ilxkMaDF5cx6a4dbGBK8csRRKstVe+rIuB2Qwe+YMmO4CCVhFQBweTMlVUKLO951EJjZcb+XOFaGCLJBTyrxaJErIBAZ90smosBjzXaSmzw40qwVqwJSC4qNXkHiX0BglLbhc3sxvA1VlhXOd9Arq2aCxaog0x5AS5xxWvTpSQIeV+Itu5T0a8H5nJFwly6yFilEB4ZxyyqlfBSwJkJzBPj+79SHY0hja0EIlspjMVgWsAjOnFHqf1KHBwmGbjD47vV6GzKnDkirXm9pZuVsmEOacY0oc+iiuMGTa2BCQXH27u9aiaEomNBnBYRAInMiCSM4cfcjGAVJdmKavcrH69nCtVcDL023wy1ozrbTt+5IRRCT6ZZ/ROgEax3tZXQI8e7o5AeXwqX8nzo+/cq+uPzy8uQSDoMq+X0RrlUWoG+iLlnw0i/PvphjNIvzh4giX9wE3Dk0bliarcuy8OspMcUW1ng6oV6Nwsn0xyNSttX/afPn9XwwKq0GH0EPOkjhbYdUcliQr2ozJZzZXzzcn9yqodot6mi8xkZjeIEdB4IxCwXN/u6Rc7g1O53VCkPPztYHOx3Sw/150lYNCIAYfxVi0LJwD+3lHFTQvujZmm/DyZH2t0rDfDmngVw2mlM++ofWxRDBgV5z721ugsNMFCEgJMD3f3hldX5ytz+5KzGnpuc1+GDNySok1dKvndy/TUBadWA9sFG+P1zfL49Ofdc9F9otm1Di3uVrMDUbP2a7urq43XqEQg4JQNCiSvr+3s/PO4U8O7kk3nVUX2i+5jomTF8qLVf9cHs4ocINikppkEbovG23MNzMdTNBfXix4F7JzPtIqLee38/zmEVFtXN0HZzwqoLn7
3D40X139ajMknS9H221OEQr14p/2Iew/nBZENlk7hCgQQUBffHHfbDz5zwbFcvvRwQgMRUUTlicXq8H45VlhCbKUJQQuAbLnqsJ/328fh+FoNt2ftZJBHPfzmxfnt7Jzf9PagoIMqrKTYikiROMxPh2tbe8c7TZN6TtEjin72/PvOtrd36qtEYJ2DaxNy6w5V+PGxeVHP29FPUS0OXrB1d3F8end5PBoNB0RJ8p8mchRYQNkbPH6NqVPXOKbtjE1IvvlzeWzp/3eK7N117aUE4nyiFiaJVsCzDfelpd/uZofbLRmBJz7y7PLzuy+ujmxleXKxv8C4ZE5VzKzWSYAAAAASUVORK5CYII=",
      "text/plain": [
       "<PIL.Image.Image image mode=L size=48x48>"
      ]
     },
     "execution_count": 136,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# test_trans = transforms.ColorJitter(brightness=0.2,contrast=0.3,saturation=0.3)\n",
    "# test_trans = transforms.RandomEqualize(p=1)\n",
    "# suitable for applying to all the data\n",
    "# random perspective transform\n",
    "# test_trans = transforms.RandomPerspective(0.2,1) \n",
    "\n",
    "test_trans = transforms.RandomAutocontrast(p=1)\n",
    "\n",
    "test_trans(test_img)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 141,
   "metadata": {},
   "outputs": [],
   "source": [
    "# test_trans = transforms.RandomEqualize(p=1)\n",
    "test_trans = transforms.RandomAdjustSharpness(sharpness_factor=0,p = 1)\n",
    "a = test_trans(test_img)\n",
    "# cv2.imwrite(f'./DL_2022_4/data/error/test.jpg',a)\n",
    "a.save(f'./DL_2022_4/data/error/test.jpg')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Results after retraining  \n",
    "resnet18 combined with data augmentation reached about 57% accuracy, but the perturbations were probably a bit too strong: training accuracy stayed at 80%, i.e. the model was still underfitting  \n",
    "After several more rounds of tuning it finally reached 59% accuracy  \n",
    "vgg16 combined with data augmentation peaked at 63% accuracy; at larger epoch counts training accuracy exceeded 95%, but further training could not raise test accuracy again  "
   ]
  },
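  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch only: the individual transforms explored above would usually be combined into one training-time pipeline with `transforms.Compose`. The specific transforms, probabilities and magnitudes below are illustrative assumptions, not the values actually tuned in this project."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from PIL import Image\n",
    "from torchvision import transforms\n",
    "\n",
    "# illustrative augmentation pipeline (assumed settings)\n",
    "train_aug = transforms.Compose([\n",
    "    transforms.RandomHorizontalFlip(p=0.5),\n",
    "    transforms.RandomAutocontrast(p=0.5),\n",
    "    transforms.RandomPerspective(distortion_scale=0.2, p=0.5),\n",
    "])\n",
    "\n",
    "demo = Image.new('L', (48, 48), color=128)  # stand-in for a 48x48 face crop\n",
    "out = train_aug(demo)\n",
    "print(out.size)  # the augmentations keep the 48x48 size\n"
   ]
  },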
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# resnet18 tuning results\n",
    "\n",
    "# -----Epoch 74 started-----\n",
    "# Step: 41900, batch Loss: 0.19161486625671387, Accuracy: 0.890625\n",
    "# Step: 42000, batch Loss: 0.14215636253356934, Accuracy: 0.9375\n",
    "# Step: 42100, batch Loss: 0.08652709424495697, Accuracy: 0.953125\n",
    "# Step: 42200, batch Loss: 0.05085368826985359, Accuracy: 1.0\n",
    "# Step: 42300, batch Loss: 0.11623609066009521, Accuracy: 0.9375\n",
    "# Step: 42400, batch Loss: 0.10388883203268051, Accuracy: 0.953125\n",
    "# Loss on the whole test set: 126.40826630592346\n",
    "# Accuracy on the whole test set: 0.600724458694458\n",
    "\n",
    "# VGG16 tuning results\n",
    "\n",
    "# -----Epoch 32 started-----\n",
    "# Step: 41000, batch Loss: 0.05006464570760727, Accuracy: 0.984375\n",
    "# Step: 41100, batch Loss: 0.14000216126441956, Accuracy: 0.953125\n",
    "# Step: 41200, batch Loss: 0.12939506769180298, Accuracy: 0.953125\n",
    "# Step: 41300, batch Loss: 0.03886398300528526, Accuracy: 0.984375\n",
    "# Step: 41400, batch Loss: 0.12857870757579803, Accuracy: 0.953125\n",
    "# Loss on the whole test set: 101.74371147155762  Accuracy on the whole test set: 0.6333240866661072"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Introducing BN layers  \n",
    "The main problem with VGG in the later stages of training is that it overfits far too easily: even with plenty of input perturbation, training accuracy still approaches 1, which keeps the test metric from improving further  \n",
    "Introducing BN layers and training with the BN variant of VGG clearly outperformed the other networks: it reached a stable 65% accuracy at 50 epochs, and after repeated parameter adjustments and resuming from checkpoints it reached up to 67% accuracy"
   ]
  },
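  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of what a single BN layer does (illustrative, not the actual vgg16_bn architecture): in training mode it normalizes each channel over the batch to roughly zero mean and unit variance before the learned scale/shift, which keeps activations well-conditioned and acts as a regularizer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "bn = nn.BatchNorm2d(num_features=8)     # one BN layer, as inserted after each conv in vgg16_bn\n",
    "x = torch.randn(16, 8, 48, 48) * 5 + 3  # activations with a large mean and variance\n",
    "y = bn(x)\n",
    "\n",
    "# in training mode the output is normalized per channel over the batch\n",
    "print(round(y.mean().item(), 3), round(y.var().item(), 3))  # close to 0 and 1\n"
   ]
  },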
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# -----Epoch 87 started-----\n",
    "# Step: 49300, batch Loss: 0.04520918428897858, Accuracy: 0.984375\n",
    "# Step: 49400, batch Loss: 0.04933832585811615, Accuracy: 0.984375\n",
    "# Step: 49500, batch Loss: 0.05514150112867355, Accuracy: 0.96875\n",
    "# Step: 49600, batch Loss: 0.10689633339643478, Accuracy: 0.96875\n",
    "# Step: 49700, batch Loss: 0.012603617273271084, Accuracy: 1.0\n",
    "# Step: 49800, batch Loss: 0.16900798678398132, Accuracy: 0.921875\n",
    "# Loss on the whole test set: 114.09531688690186\n",
    "# Accuracy on the whole test set: 0.6706603765487671"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Trying to raise accuracy further by combining several models  \n",
    "So far we have trained a vgg16_bn at 67% accuracy, a resnet34 at 64% and a resnet18 at 60%; below we try combining the better-performing trained models  \n",
    "The main options are plain averaging, weighted averaging and majority voting; we adopted a weighted average of the output matrices  \n",
    "We also tried optimizing with score normalization and log/exp transforms"
   ]
  },
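  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The three combination schemes can be sketched in plain Python. The scores and weights here are toy values for illustration only; the real code below works on the models' 7-class output tensors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# toy per-model class scores for one sample (7 classes); values are purely illustrative\n",
    "scores = [\n",
    "    [0.1, 0.0, 0.2, 0.9, 0.5, 0.2, 0.4],  # stand-in for resnet18\n",
    "    [0.2, 0.1, 0.1, 0.6, 0.8, 0.1, 0.3],  # stand-in for resnet34\n",
    "    [0.3, 0.0, 0.4, 0.8, 0.7, 0.0, 0.3],  # stand-in for vgg16_bn\n",
    "]\n",
    "weights = [0.2, 0.4, 0.4]  # assumed weights, not the tuned ones\n",
    "\n",
    "def argmax(v):\n",
    "    return max(range(len(v)), key=v.__getitem__)\n",
    "\n",
    "# 1) plain average of the score vectors\n",
    "avg = [sum(m[c] for m in scores) / len(scores) for c in range(7)]\n",
    "# 2) weighted average (the scheme adopted below)\n",
    "wavg = [sum(w * m[c] for w, m in zip(weights, scores)) for c in range(7)]\n",
    "# 3) hard voting: each model votes with its own argmax\n",
    "votes = [argmax(m) for m in scores]\n",
    "vote_pred = max(set(votes), key=votes.count)\n",
    "\n",
    "print(argmax(avg), argmax(wavg), vote_pred)\n"
   ]
  },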
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Model loading"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "device = torch.device(\"cuda\")\n",
    "resnet18_path = \"./DL_2022_4/model/resnet18.pth\"\n",
    "myresnet18 = torch.load(resnet18_path)\n",
    "myresnet18 = myresnet18.to(device)\n",
    "\n",
    "resnet34_path = \"./DL_2022_4/model/resnet34.pth\"\n",
    "myresnet34 = torch.load(resnet34_path)\n",
    "myresnet34 = myresnet34.to(device)\n",
    "\n",
    "vgg16bn_path = \"./DL_2022_4/model/vgg16bn.pth\"\n",
    "myvgg16bn = torch.load(vgg16bn_path)\n",
    "myvgg16bn = myvgg16bn.to(device)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### DataLoader definition"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "judge_dataloader = DataLoader(dataset=test_dataset, batch_size=16, shuffle=True)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Tuning the weighted average  \n",
    "Making the decision on a direct weighted average of the model outputs reached up to 68.6% accuracy"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Accuracy on the whole test set: 0.6862636208534241\n"
     ]
    }
   ],
   "source": [
    "total_test_accuracy = 0\n",
    "for data in judge_dataloader:\n",
    "    judge_imgs, judge_targets = data\n",
    "    judge_imgs = judge_imgs.to(device)\n",
    "    judge_targets = judge_targets.to(device)\n",
    "\n",
    "    # run the three networks on the batch\n",
    "    resnet18_outputs = myresnet18(judge_imgs)       \n",
    "    resnet34_outputs = myresnet34(judge_imgs)\n",
    "    vgg16bn_outputs = myvgg16bn(judge_imgs)\n",
    "    judge_out = resnet18_outputs[:,0:7]*0.186+resnet34_outputs[:,0:7]*0.452+vgg16bn_outputs[:,0:7]*0.45\n",
    "    accuracy = (judge_out.argmax(1) == judge_targets).sum()\n",
    "    total_test_accuracy += accuracy\n",
    "    \n",
    "print(f'Accuracy on the whole test set: {total_test_accuracy / test_data_len}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([5, 1000])\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "tensor([[11.3731,  8.7132, 15.1313, 28.5123, 19.4885, 13.1968, 19.2688],\n",
       "        [13.0819,  9.8482, 13.2244, 20.0521, 25.6563, 12.0112, 16.6523],\n",
       "        [17.3106, 12.1050, 20.3806, 17.2583, 30.8838, 11.2405, 17.6165],\n",
       "        [14.3496,  6.0731, 11.0831, 10.3856, 15.3640,  8.6384, 18.4805],\n",
       "        [ 8.7711,  6.1543, 12.4093,  9.0281, 16.3923,  5.8774, 10.8704]],\n",
       "       grad_fn=<SliceBackward0>)"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = vgg16bn_outputs.cpu()\n",
    "print(a.shape)\n",
    "b = a[:,0:7]\n",
    "b"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For further optimization we consider score normalization together with log/exp transforms  \n",
    "Applying a log or exp transform and then re-normalizing shrinks or widens the separation among the high scores, which gives another knob for tuning the ensemble"
   ]
  },
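  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The intuition can be checked with plain numbers. The helper `norm_like_my201` below is a hypothetical scalar version of the `my201` function defined in the next cell (with its multi=0.2, bias=5 settings, which map scores into [5, 10] before the log): the log shrinks the gap at the high end more than at the low end."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def norm_like_my201(x, lo, hi, multi=0.2, bias=5):\n",
    "    # mirrors the row-wise min-max step of my201 for a single value\n",
    "    return (x - lo) / ((hi - lo) * multi) + bias\n",
    "\n",
    "lo, hi = 0.0, 1.0\n",
    "# the same 0.1-wide gap at the low end and at the high end of the score range\n",
    "low_gap = norm_like_my201(0.2, lo, hi) - norm_like_my201(0.1, lo, hi)\n",
    "high_gap = norm_like_my201(1.0, lo, hi) - norm_like_my201(0.9, lo, hi)\n",
    "\n",
    "log_low_gap = math.log(norm_like_my201(0.2, lo, hi)) - math.log(norm_like_my201(0.1, lo, hi))\n",
    "log_high_gap = math.log(norm_like_my201(1.0, lo, hi)) - math.log(norm_like_my201(0.9, lo, hi))\n",
    "\n",
    "# before the log both gaps are equal (0.5); after the log the top gap is smaller\n",
    "print(round(log_low_gap, 4), round(log_high_gap, 4))\n"
   ]
  },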
  {
   "cell_type": "code",
   "execution_count": 203,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Accuracy on the whole test set: 0.6804124116897583\n"
     ]
    }
   ],
   "source": [
    "total_test_accuracy = 0\n",
    "# row-wise min-max normalization, scaled into [bias, bias + 1/multi]\n",
    "def my201(input_tensor,multi,bias):\n",
    "    input_max = torch.max(input_tensor,dim=1)\n",
    "    input_max = input_max.values\n",
    "    input_max = input_max.reshape(-1,1)\n",
    "    input_min = torch.min(input_tensor,dim=1)\n",
    "    input_min = input_min.values\n",
    "    input_min = input_min.reshape(-1,1)\n",
    "    tem_tensor = torch.ones(1,7,device=device)\n",
    "    upper = torch.addcmul(input=input_tensor,tensor1=input_min,tensor2=tem_tensor,value=-1)\n",
    "    outputs = upper.div((input_max-input_min)*multi)\n",
    "    outputs = outputs + bias\n",
    "    return outputs\n",
    "\n",
    "for data in judge_dataloader:\n",
    "    judge_imgs, judge_targets = data\n",
    "    judge_imgs = judge_imgs.to(device)\n",
    "    judge_targets = judge_targets.to(device)\n",
    "\n",
    "    # run the three networks on the batch\n",
    "    resnet18_outputs = myresnet18(judge_imgs)       \n",
    "    resnet34_outputs = myresnet34(judge_imgs)\n",
    "    vgg16bn_outputs = myvgg16bn(judge_imgs)\n",
    "\n",
    "    # normalize the outputs\n",
    "    resnet18_outputs = my201(resnet18_outputs[:,0:7],0.2,5)\n",
    "    resnet34_outputs = my201(resnet34_outputs[:,0:7],0.2,5)\n",
    "    vgg16bn_outputs = my201(vgg16bn_outputs[:,0:7],0.2,5)\n",
    "\n",
    "    # apply the log transform\n",
    "    resnet18_outputs = resnet18_outputs.log()\n",
    "    resnet34_outputs = resnet34_outputs.log()\n",
    "    vgg16bn_outputs = vgg16bn_outputs.log()\n",
    "\n",
    "    # normalize the outputs again\n",
    "    resnet18_outputs = my201(resnet18_outputs,0.2,5)\n",
    "    resnet34_outputs = my201(resnet34_outputs,0.2,5)\n",
    "    vgg16bn_outputs = my201(vgg16bn_outputs,0.2,5)\n",
    "\n",
    "    judge_out = resnet18_outputs*0.25+resnet34_outputs*0.25+vgg16bn_outputs*0.85\n",
    "    accuracy = (judge_out.argmax(1) == judge_targets).sum()\n",
    "    total_test_accuracy += accuracy\n",
    "    \n",
    "print(f'Accuracy on the whole test set: {total_test_accuracy / test_data_len}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some debugging output from building the normalization function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 117,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 8.7132],\n",
      "        [ 9.8482],\n",
      "        [11.2405],\n",
      "        [ 6.0731],\n",
      "        [ 5.8774]], grad_fn=<ReshapeAliasBackward0>)\n",
      "torch.Size([5, 1])\n",
      "tensor([[1., 1., 1., 1., 1., 1., 1.]])\n",
      "torch.Size([1, 7])\n",
      "torch.Size([5, 7])\n"
     ]
    }
   ],
   "source": [
    "a = torch.min(b,dim=1)\n",
    "a = a.values\n",
    "a = a.reshape(5,1)\n",
    "print(a)\n",
    "print(a.shape)\n",
    "c = torch.ones(1,7)\n",
    "print(c)\n",
    "print(c.shape)\n",
    "print(b.shape)\n",
    "tem = torch.addcmul(input=b,tensor1=a,tensor2=c,value=-1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 121,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.1343, 0.0000, 0.3242, 1.0000, 0.5442, 0.2265, 0.5331],\n",
       "        [0.2046, 0.0000, 0.2136, 0.6455, 1.0000, 0.1368, 0.4304],\n",
       "        [0.3090, 0.0440, 0.4653, 0.3064, 1.0000, 0.0000, 0.3246],\n",
       "        [0.6671, 0.0000, 0.4038, 0.3476, 0.7488, 0.2068, 1.0000],\n",
       "        [0.2752, 0.0263, 0.6212, 0.2996, 1.0000, 0.0000, 0.4749]],\n",
       "       grad_fn=<DivBackward0>)"
      ]
     },
     "execution_count": 121,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a_min = torch.min(b,dim=1)\n",
    "a_min = a_min.values\n",
    "a_max = torch.max(b,dim=1)\n",
    "a_max = a_max.values\n",
    "div = a_max-a_min\n",
    "div = div.reshape(-1,1)\n",
    "tem.div(div)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Effect of the log/exp transforms  \n",
    "The log transform spreads the low scores apart and pulls the high scores together, i.e. it shrinks the gaps between the top scores"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 211,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Plain normalization result:\n",
      " tensor([[0.1343, 0.0000, 0.3242, 1.0000, 0.5442, 0.2265, 0.5331],\n",
      "        [0.2046, 0.0000, 0.2136, 0.6455, 1.0000, 0.1368, 0.4304],\n",
      "        [0.3090, 0.0440, 0.4653, 0.3064, 1.0000, 0.0000, 0.3246],\n",
      "        [0.6671, 0.0000, 0.4038, 0.3476, 0.7488, 0.2068, 1.0000],\n",
      "        [0.2752, 0.0263, 0.6212, 0.2996, 1.0000, 0.0000, 0.4749]],\n",
      "       device='cuda:0', grad_fn=<DivBackward0>)\n",
       "Result after log transform and re-normalization:\n",
      " tensor([[0.1603, 0.0000, 0.3705, 1.0000, 0.5936, 0.2645, 0.5828],\n",
      "        [0.2402, 0.0000, 0.2502, 0.6898, 1.0000, 0.1632, 0.4807],\n",
      "        [0.3543, 0.0537, 0.5159, 0.3515, 1.0000, 0.0000, 0.3709],\n",
      "        [0.7099, 0.0000, 0.4536, 0.3952, 0.7843, 0.2426, 1.0000],\n",
      "        [0.3180, 0.0323, 0.6671, 0.3443, 1.0000, 0.0000, 0.5254]],\n",
      "       device='cuda:0', grad_fn=<DivBackward0>)\n"
     ]
    }
   ],
   "source": [
    "def my202(input_tensor):\n",
    "    input_max = torch.max(input_tensor,dim=1)\n",
    "    input_max = input_max.values\n",
    "    input_max = input_max.reshape(-1,1)\n",
    "    input_min = torch.min(input_tensor,dim=1)\n",
    "    input_min = input_min.values\n",
    "    input_min = input_min.reshape(-1,1)\n",
    "    tem_tensor = torch.ones(1,7,device = device)\n",
    "    upper = torch.addcmul(input=input_tensor,tensor1=input_min,tensor2=tem_tensor,value=-1)\n",
    "    outputs = upper.div(input_max-input_min)\n",
    "    return outputs\n",
    "b = b.cuda()\n",
    "duibi = my202(b)\n",
    "print(f'Plain normalization result:\\n {duibi}')\n",
    "norm = my201(b,2,1)\n",
    "tt = norm.log()\n",
    "ans = my202(tt)\n",
    "print(f'Result after log transform and re-normalization:\\n {ans}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 212,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Plain normalization result:\n",
      " tensor([[0.1343, 0.0000, 0.3242, 1.0000, 0.5442, 0.2265, 0.5331],\n",
      "        [0.2046, 0.0000, 0.2136, 0.6455, 1.0000, 0.1368, 0.4304],\n",
      "        [0.3090, 0.0440, 0.4653, 0.3064, 1.0000, 0.0000, 0.3246],\n",
      "        [0.6671, 0.0000, 0.4038, 0.3476, 0.7488, 0.2068, 1.0000],\n",
      "        [0.2752, 0.0263, 0.6212, 0.2996, 1.0000, 0.0000, 0.4749]],\n",
      "       device='cuda:0', grad_fn=<DivBackward0>)\n",
       "Result after exp transform and re-normalization:\n",
      " tensor([[0.1071, 0.0000, 0.2712, 1.0000, 0.4821, 0.1848, 0.4709],\n",
      "        [0.1660, 0.0000, 0.1737, 0.5872, 1.0000, 0.1092, 0.3701],\n",
      "        [0.2576, 0.0343, 0.4038, 0.2552, 1.0000, 0.0000, 0.2716],\n",
      "        [0.6103, 0.0000, 0.3449, 0.2926, 0.7000, 0.1679, 1.0000],\n",
      "        [0.2274, 0.0204, 0.5615, 0.2491, 1.0000, 0.0000, 0.4131]],\n",
      "       device='cuda:0', grad_fn=<DivBackward0>)\n"
     ]
    }
   ],
   "source": [
    "duibi = my202(b)\n",
    "print(f'Plain normalization result:\\n {duibi}')\n",
    "norm = my201(b,2,1)\n",
    "tt = norm.exp()\n",
    "ans = my202(tt)\n",
    "print(f'Result after exp transform and re-normalization:\\n {ans}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Summary  \n",
    "Starting from a small self-built model, then moving on to mainstream architectures and applying various tuning methods, accuracy rose steadily, settling at around 68% with a peak of 68.6%.  \n",
    "The accuracy gains over the course of training are summarized in the table below; model choice and data augmentation clearly have a large impact on recognition performance.  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "|                     | CNN        | resnet18 | vgg16 |\n",
    "| ------------------- | ---------- | -------- | ----- |\n",
    "| Raw data            | 52%        | 54%      | 56%   |\n",
    "| Data augmentation   | not tested | 59%      | 63%   |\n",
    "| Deeper network / BN | not tested | 64%      | 67%   |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.8.13 ('DLpy38')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.13"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "e79232abc89fce0a8aa39b1575b748d7525bee86d7524d21703044819b67c8bd"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
