{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "b5f84f33",
   "metadata": {},
   "source": [
    "# (1) First, import the basic packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 116,
   "id": "cdbfbf51",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "import torch\n",
    "import time\n",
    "import torch.nn as nn  # alias the core neural-network module\n",
    "import torch.nn.functional as F   # alias the functional API as well\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "from torch.utils.data import DataLoader    # DataLoader batches and shuffles the dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 117,
   "id": "4c447dfb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# (2) Build the convolutional neural network and define helper functions."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "60232750",
   "metadata": {},
   "source": [
    "The plan is a convolutional layer followed by a pooling layer, repeated twice, then fully connected layers, with softmax producing the class probabilities (applied implicitly by the cross-entropy loss during training)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "71d9b3cf",
   "metadata": {},
   "source": [
    "## 2.1 Conv2d parameter settings\n",
    "The Conv2d parameters deserve a note. Conv2d performs 2-D image convolution and differs substantially from Conv3d (not covered here).\n",
    "The first argument is the number of input channels, the second is the number of filters (i.e., the number of output channels), and the third is\n",
    "the kernel size. The remaining parameters are optional: for example, padding is how many rows/columns of zeros surround the input,\n",
    "and stride is the step size of the filter."
   ]
  },
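  {
   "cell_type": "markdown",
   "id": "f1a2b3c4",
   "metadata": {},
   "source": [
    "The output spatial size of a Conv2d layer follows floor((W + 2P - K) / S) + 1. A quick sanity check of that formula in plain Python (a minimal sketch; conv_out_size is a hypothetical helper, not part of torch):\n",
    "\n",
    "```python\n",
    "def conv_out_size(w, k, p=0, s=1):\n",
    "    # floor((W + 2P - K) / S) + 1 for input width W, kernel K, padding P, stride S\n",
    "    return (w + 2 * p - k) // s + 1\n",
    "\n",
    "# A 28x28 FashionMNIST image through Conv2d(1, 5, 5, padding=2, stride=1)\n",
    "print(conv_out_size(28, k=5, p=2, s=1))  # -> 28: padding=2 preserves the size\n",
    "```\n",
    "\n",
    "With no padding the same 5x5 kernel would shrink the map to 24x24."
   ]
  },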
  {
   "cell_type": "markdown",
   "id": "2514d220",
   "metadata": {},
   "source": [
    "## 2.2 MaxPool2d parameter settings\n",
    "There is little to configure here: pooling does not change the number of channels, it just takes the maximum over each window, so a\n",
    "2x2 window is enough. One thing to note about stride: if it is not set explicitly, it defaults to the kernel size, which is why the network below\n",
    "simply passes in a 2. Also note that the pooling hyperparameters are worth checking against the relevant literature."
   ]
  },
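  {
   "cell_type": "markdown",
   "id": "a7b8c9d0",
   "metadata": {},
   "source": [
    "The default-stride behaviour can be checked the same way (a sketch; pool_out_size is a hypothetical helper mirroring MaxPool2d's size arithmetic):\n",
    "\n",
    "```python\n",
    "def pool_out_size(w, k, s=None):\n",
    "    # MaxPool2d uses stride = kernel_size when stride is not given\n",
    "    s = k if s is None else s\n",
    "    return (w - k) // s + 1\n",
    "\n",
    "print(pool_out_size(28, 2))  # -> 14: a 2x2 window with default stride halves the size\n",
    "```"
   ]
  },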
  {
   "cell_type": "code",
   "execution_count": 118,
   "id": "80bb89db",
   "metadata": {},
   "outputs": [],
   "source": [
    "class CNN_net(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(CNN_net,self).__init__()\n",
    "        # First convolutional block\n",
    "        self.layer1 = nn.Sequential(\n",
    "            nn.Conv2d(1,5,5,padding=2,stride = 1),  # just 5 filters here\n",
    "            nn.MaxPool2d(2),   # pool first, then activate\n",
    "            nn.ReLU())\n",
    "        \n",
    "        # Second convolutional block\n",
    "        self.layer2 = nn.Sequential(\n",
    "            nn.Conv2d(5,10,3,padding=2,stride = 1),    # 10 filters here\n",
    "            nn.MaxPool2d(2),  # again, pool first, then activate\n",
    "            nn.ReLU())\n",
    "        \n",
    "        # Fully connected layers\n",
    "        self.liner1 = nn.Linear(640,50)\n",
    "        self.liner2 = nn.Linear(50,10)    # map straight down to 10 class scores; softmax is applied by the loss\n",
    "       \n",
    "    def forward(self,x):  # forward pass; x is a batch of images\n",
    "        x = self.layer1(x)  # first convolutional block\n",
    "        x = self.layer2(x)  # second convolutional block\n",
    "        # Flatten the feature maps for the fully connected layers\n",
    "        x = x.view(-1, 8*8*10) # 10 channels of 8x8 feature maps -> 640 features\n",
    "        x = self.liner1(x)\n",
    "        x = F.relu(x)\n",
    "        x = self.liner2(x)\n",
    "        # x = F.softmax(x,dim=1)  # note the dimension: dim=1 normalizes across the class dimension\n",
    "        return x"
   ]
  },
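  {
   "cell_type": "markdown",
   "id": "b2c3d4e5",
   "metadata": {},
   "source": [
    "The 640 in nn.Linear(640,50) can be derived by chaining the size formulas through both blocks (a sketch with hypothetical helpers, using the kernel/padding values from the network above):\n",
    "\n",
    "```python\n",
    "def conv_out(w, k, p=0, s=1):\n",
    "    return (w + 2 * p - k) // s + 1\n",
    "\n",
    "def pool_out(w, k):\n",
    "    return (w - k) // k + 1  # stride defaults to the kernel size\n",
    "\n",
    "w = 28                                   # FashionMNIST images are 28x28\n",
    "w = pool_out(conv_out(w, k=5, p=2), 2)   # layer1: 28 -> 28 -> 14\n",
    "w = pool_out(conv_out(w, k=3, p=2), 2)   # layer2: 14 -> 16 -> 8\n",
    "print(w * w * 10)  # -> 640: ten 8x8 feature maps, the in_features of liner1\n",
    "```"
   ]
  },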
  {
   "cell_type": "code",
   "execution_count": 119,
   "id": "64eaaccb",
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_it(alpha):  # builds the model; the argument is the learning rate\n",
    "    net = CNN_net() # instantiate the network\n",
    "    optimizer = torch.optim.Adam(net.parameters(),lr = alpha)  # set the learning rate and the optimization method\n",
    "    return net,optimizer  # return the instantiated model and the optimizer"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aec25956",
   "metadata": {},
   "source": [
    "One pitfall when building the network: inside each Sequential, every module (e.g., nn.Conv2d) must be followed by a comma, or an error is raised; with the trailing comments this was easy to miss at first."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "98e6fbac",
   "metadata": {},
   "source": [
    "# (3) Prepare the data\n",
    "The dataset is downloaded from the internet to the C drive; a VPN can make the download much faster."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 120,
   "id": "d9f0a8e2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Get the training set\n",
    "\n",
    "train = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST',train=True,download = True,transform = transforms.ToTensor())\n",
    "train_data  = DataLoader(dataset=train,batch_size=600,shuffle=True)\n",
    "# Get the test set\n",
    "test = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST',train=False,download = True,transform = transforms.ToTensor())\n",
    "test_data  = DataLoader(dataset=test,batch_size=100,shuffle=True)"
   ]
  },
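  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "With batch_size=600 on the 60000 training images and batch_size=100 on the 10000 test images, each epoch iterates over a fixed number of batches. A quick check (the split sizes are the standard FashionMNIST ones):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "n_train, n_test = 60000, 10000            # standard FashionMNIST split\n",
    "train_batches = math.ceil(n_train / 600)  # DataLoader batch_size=600\n",
    "test_batches = math.ceil(n_test / 100)    # DataLoader batch_size=100\n",
    "print(train_batches, test_batches)  # -> 100 100\n",
    "```"
   ]
  },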
  {
   "cell_type": "markdown",
   "id": "23396ddf",
   "metadata": {},
   "source": [
    "The training set is stored in the variable train, and the test set in the variable test."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d723eac5",
   "metadata": {},
   "source": [
    "This is also a good place to get a feel for how batch_size shapes the data pipeline."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6091447d",
   "metadata": {},
   "source": [
    "## 3.1 Data preprocessing\n",
    "This section inspects the data types and preprocesses the data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 121,
   "id": "d248e010",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<torch.utils.data.dataloader.DataLoader object at 0x000002B3101783D0>\n",
      "<torch.utils.data.dataloader.DataLoader object at 0x000002B3103B6CD0>\n"
     ]
    }
   ],
   "source": [
    "print(test_data)  # inspect the DataLoader objects\n",
    "print(train_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 122,
   "id": "aa33da90",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([1, 28, 28])\n",
      "<class 'int'>\n",
      "<class 'torch.Tensor'>\n",
      "9\n",
      "<class 'torch.Tensor'>\n"
     ]
    }
   ],
   "source": [
    "x,y = train[0]\n",
    "print(x.shape)    # each index into train yields an image tensor and its integer label\n",
    "print(type(y))\n",
    "print(type(x))\n",
    "print(y)\n",
    "y = torch.tensor(y)  # convert the int label to a tensor\n",
    "print(type(y))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 123,
   "id": "ab22a0a8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "60000\n"
     ]
    }
   ],
   "source": [
    "print(len(train))    # len reports the size of the dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7b3e1d1a",
   "metadata": {},
   "source": [
    "## 3.2 Define the hyperparameters and set up the optimizer\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 124,
   "id": "f09d9106",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "epoch = 20   # number of full passes over the training set\n",
    "alpha = 0.01   # learning rate\n",
    "\n",
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")  # pick the GPU if available, so training runs there\n",
    "net,optimizer = get_it(alpha)    # instantiate the model and the optimizer for backpropagation\n",
    "loss_function = nn.CrossEntropyLoss().to(device)  # move the loss to the same device\n",
    "loss = 0   # initialize the loss value"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 125,
   "id": "56d8d47b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0\n",
      "cuda\n"
     ]
    }
   ],
   "source": [
    "print(loss)\n",
    "print(device)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 126,
   "id": "b700f2ee",
   "metadata": {},
   "outputs": [],
   "source": [
    "net = net.to(device)  # move the model onto the GPU as well"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 127,
   "id": "96735892",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CNN_net(\n",
      "  (layer1): Sequential(\n",
      "    (0): Conv2d(1, 5, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n",
      "    (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
      "    (2): ReLU()\n",
      "  )\n",
      "  (layer2): Sequential(\n",
      "    (0): Conv2d(5, 10, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2))\n",
      "    (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
      "    (2): ReLU()\n",
      "  )\n",
      "  (liner1): Linear(in_features=640, out_features=50, bias=True)\n",
      "  (liner2): Linear(in_features=50, out_features=10, bias=True)\n",
      ")\n",
      "Adam (\n",
      "Parameter Group 0\n",
      "    amsgrad: False\n",
      "    betas: (0.9, 0.999)\n",
      "    capturable: False\n",
      "    eps: 1e-08\n",
      "    foreach: None\n",
      "    lr: 0.01\n",
      "    maximize: False\n",
      "    weight_decay: 0\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "print(net)\n",
    "print(optimizer)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 128,
   "id": "8f031193",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Running epoch 1\n",
      "tensor(0.1246, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 2\n",
      "tensor(0.0691, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 3\n",
      "tensor(0.0592, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 4\n",
      "tensor(0.0526, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 5\n",
      "tensor(0.0492, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 6\n",
      "tensor(0.0468, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 7\n",
      "tensor(0.0447, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 8\n",
      "tensor(0.0428, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 9\n",
      "tensor(0.0418, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 10\n",
      "tensor(0.0401, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 11\n",
      "tensor(0.0389, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 12\n",
      "tensor(0.0377, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 13\n",
      "tensor(0.0375, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 14\n",
      "tensor(0.0364, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 15\n",
      "tensor(0.0358, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 16\n",
      "tensor(0.0343, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 17\n",
      "tensor(0.0336, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 18\n",
      "tensor(0.0326, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 19\n",
      "tensor(0.0322, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Running epoch 20\n",
      "tensor(0.0318, device='cuda:0', grad_fn=<AddBackward0>)\n",
      "Training finished\n"
     ]
    }
   ],
   "source": [
    "## (4) Train the model\n",
    "torch.cuda.manual_seed(223) # fix the GPU random seed so results are reproducible\n",
    "loss_x = []\n",
    "for i in range(epoch):   # one iteration per epoch\n",
    "    loss = 0\n",
    "    print(\"Running epoch \" + str(i+1))\n",
    "    for a,b in train_data: \n",
    "        a = a.to(device)\n",
    "        b = b.to(device)   # move the whole batch onto the GPU\n",
    "        optimizer.zero_grad()  # clear the accumulated gradients\n",
    "        y_pre = net(a)   # forward pass: compute the predictions\n",
    "        loss_data = loss_function(y_pre,b)  # compute the loss\n",
    "        loss += loss_data/600    # accumulate, scaled by the batch size\n",
    "        loss_data.backward() # backpropagate\n",
    "        optimizer.step() \n",
    "    print(loss)\n",
    "    loss_x.append(loss)  # store each epoch's loss for later visualization\n",
    "print(\"Training finished\")"
   ]
  },
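  {
   "cell_type": "markdown",
   "id": "d4e5f6a7",
   "metadata": {},
   "source": [
    "Note that CrossEntropyLoss already returns the mean over each batch, so the running loss above is the sum of 100 batch means scaled by 1/600; it tracks the trend fine, but its absolute value is arbitrary. A plainer per-epoch average divides the summed batch means by the number of batches (a sketch with made-up batch losses):\n",
    "\n",
    "```python\n",
    "batch_losses = [0.45, 0.40, 0.38, 0.41]   # hypothetical batch-mean losses\n",
    "epoch_loss = sum(batch_losses) / len(batch_losses)\n",
    "print(round(epoch_loss, 3))  # -> 0.41\n",
    "```"
   ]
  },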
  {
   "cell_type": "code",
   "execution_count": 129,
   "id": "d9f137cf",
   "metadata": {},
   "outputs": [],
   "source": [
    "accrate = 0  # initialize the count of correct predictions\n",
    "for a,b in test_data:\n",
    "    a = a.to(device)\n",
    "    b = b.to(device)   # move the whole batch onto the GPU\n",
    "    finish = net(a)\n",
    "    aaa,finish_pre = torch.max(finish,axis = 1)   # the second return value holds the predicted class indices\n",
    "    acc = (finish_pre == b)\n",
    "    accrate += acc.sum()"
   ]
  },
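  {
   "cell_type": "markdown",
   "id": "e5f6a7b8",
   "metadata": {},
   "source": [
    "accrate ends up holding the number of correct predictions over the 10000 test images, so dividing by 100 yields a percentage. The arithmetic, with the count implied by the 89.97% printed below:\n",
    "\n",
    "```python\n",
    "correct, total = 8997, 10000            # correct predictions / test-set size\n",
    "print(f\"{100.0 * correct / total}%\")  # -> 89.97%\n",
    "```"
   ]
  },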
  {
   "cell_type": "code",
   "execution_count": 130,
   "id": "80bf91d8",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "89.97000122070312%\n"
     ]
    }
   ],
   "source": [
    "print(str(float(accrate/100)) + \"%\")   # 10000 test images, so dividing the correct count by 100 gives a percentage"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8a471150",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
