{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "8526a21f",
   "metadata": {},
   "source": [
    "# （一）首先导入基本的包"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 311,
   "id": "f5b58902",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import time\n",
    "import torch.nn as nn  # 这里将重要的库命名。\n",
    "import torch.nn.functional as F   # 这里将内置函数也命名。\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2ea0e230",
   "metadata": {},
   "source": [
    "# （二）导入数据"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 312,
   "id": "939537e7",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(163,)\n",
      "(163, 2)\n"
     ]
    }
   ],
   "source": [
    "data = pd.read_csv('./AI_exam2-master/田字型散点/train.csv',header=None)\n",
    "y_data = data.iloc[2,:]\n",
    "x_data = data.iloc[0:2,:]   #提取出数据。\n",
    "x1 = np.array(x_data).T   # 转化为列向量。\n",
    "y1 = np.array(y_data)  # 转化为矩阵的形式\n",
    "print(y1.shape)\n",
    "print(x1.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c0e8e994",
   "metadata": {},
   "source": [
    "# (三)张量的操作"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c052cd89",
   "metadata": {},
   "source": [
    "## 3.1 首先是使用torch.from_numpy(z)可以将z从矩阵转化为张量\n",
    "## 3.2 假设z是张量，对数据类型进行修改可以使用z.float(),这里是转化为float类型。"
   ]
  },
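  {
   "cell_type": "markdown",
   "id": "f1e2d3c4",
   "metadata": {},
   "source": [
    "### A minimal sketch of 3.1 and 3.2 (the array here is made up for the demo, not taken from the project data):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "11aa22bb",
   "metadata": {},
   "outputs": [],
   "source": [
    "z1 = np.array([[1, 2], [3, 4]])  # a small illustrative NumPy array.\n",
    "t = torch.from_numpy(z1)         # the resulting tensor shares memory with z1.\n",
    "t = t.float()                    # cast to float32.\n",
    "print(t.dtype)"
   ]
  },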
  {
   "cell_type": "code",
   "execution_count": 313,
   "id": "7255a9d6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.float32\n"
     ]
    }
   ],
   "source": [
    "# 这里将数据转化为张量\n",
    "x = torch.from_numpy(x1)   # 直接转化为张量。\n",
    "y = torch.from_numpy(y1)\n",
    "y = y.float()\n",
    "x = x.float()\n",
    "print(x.dtype)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dae08fbf",
   "metadata": {},
   "source": [
    "# （四）搭建一个简单的神经网络"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a1054432",
   "metadata": {},
   "source": [
    "## 4.1 relu函数"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3e0c1589",
   "metadata": {},
   "source": [
    "### 这里将介绍一下relu函数，relu函数经常杯用来作为中间隐藏层的激活函数，而且速度精准度都比sigmoid函数好，relu函数如下：\n",
    "$$\n",
    "f\\left( x \\right) =\\max \\left( 0,x \\right) \n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a6cb9e29",
   "metadata": {},
   "source": [
    "### 这里表示的是输入一个x,只要大于0的就输出x,小于0的就等于0."
   ]
  },
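  {
   "cell_type": "markdown",
   "id": "a9b8c7d6",
   "metadata": {},
   "source": [
    "### A quick check of ReLU on a hand-made tensor (the values are chosen purely for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "33cc44dd",
   "metadata": {},
   "outputs": [],
   "source": [
    "z = torch.tensor([-2.0, -0.5, 0.0, 1.5, 3.0])\n",
    "print(F.relu(z))  # negative entries become 0; positive entries pass through unchanged."
   ]
  },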
  {
   "cell_type": "markdown",
   "id": "9e4d4af0",
   "metadata": {},
   "source": [
    "## 4.2 sigmoid函数"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7776c652",
   "metadata": {},
   "source": [
    "### sigmoid函数通常是用于二分类问题的，搭建神经网络的时候，我选择了此函数最为最后的输出层，sigmoid函数的形式如下：\n",
    "$$\n",
    "S\\left( x \\right) =\\frac{1}{1+e^{-x}}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bae13ba2",
   "metadata": {},
   "source": [
    "### sigmoid函数会将数据锁定在一个0~1之间，所以可以作为二分类的输出函数。"
   ]
  },
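  {
   "cell_type": "markdown",
   "id": "0a1b2c3d",
   "metadata": {},
   "source": [
    "### A quick check of sigmoid on a hand-made tensor (values chosen just for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "55ee66ff",
   "metadata": {},
   "outputs": [],
   "source": [
    "z = torch.tensor([-4.0, 0.0, 4.0])\n",
    "s = torch.sigmoid(z)\n",
    "print(s)  # every value lies strictly between 0 and 1; sigmoid(0) is exactly 0.5."
   ]
  },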
  {
   "cell_type": "markdown",
   "id": "845cefe4",
   "metadata": {},
   "source": [
    "## 4.3 建立神经网络遇到的问题以及解决办法"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d413a893",
   "metadata": {},
   "source": [
    "### 4.3.1 首先就是随机种子的问题了，在我弄线性回归的时候就已经注意到这个问题了，中间没有设置随机种子，得到的结果有时候不一样，疑惑了一段时间。"
   ]
  },
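  {
   "cell_type": "markdown",
   "id": "d4c5b6a7",
   "metadata": {},
   "source": [
    "### A small demonstration of the seed issue: fixing the seed makes the random draws repeatable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "77aa88bb",
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.manual_seed(1)\n",
    "a = torch.rand(3)\n",
    "torch.manual_seed(1)\n",
    "b = torch.rand(3)\n",
    "print(torch.equal(a, b))  # True: the same seed reproduces the same numbers."
   ]
  },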
  {
   "cell_type": "markdown",
   "id": "a1327e1c",
   "metadata": {},
   "source": [
    "### 4.3.2 接下来就是建立神经网络的过程，这里使用到了类，以前敲代码的时候没有注意到类这么好用，可以把这个神经网络很简单的串联到一起"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0f92a6e1",
   "metadata": {},
   "source": [
    "### 4.3.3.这里我在使用numpy转化为张量的时候，输入的矩阵是二维的，然后妆花的时候也是二维的，最后预测值与真实值做损失函数的时候由于维度问题报了错误，最终的解决方案是将向前传播求得的那个矩阵进行降维。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "19eb1ace",
   "metadata": {},
   "source": [
    "### 降维的方法是：假设一个张量z，使用z.aqueeze()将数据进行降维即可。"
   ]
  },
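  {
   "cell_type": "markdown",
   "id": "9f8e7d6c",
   "metadata": {},
   "source": [
    "### A shape-only sketch of the fix (the (163, 1) shape mirrors this dataset; zeros stand in for real predictions):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "99cc00dd",
   "metadata": {},
   "outputs": [],
   "source": [
    "z = torch.zeros(163, 1)      # stand-in for the forward-pass output.\n",
    "print(z.shape)               # torch.Size([163, 1])\n",
    "print(z.squeeze(-1).shape)   # torch.Size([163]) -- now it matches the labels."
   ]
  },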
  {
   "cell_type": "code",
   "execution_count": 314,
   "id": "61b84eba",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 建立一个神经网络。\n",
    "torch.manual_seed(1) # 这里是设置一个随机种子。 \n",
    "class Network(nn.Module):  # 这里要将其变成副类。\n",
    "    def __init__(self):\n",
    "        super(Network,self).__init__()   # 这里表示继承类里面\n",
    "        self.liner_1 = nn.Linear(2,10)  # 这里表示的是有两个特征，隐藏层有10个。\n",
    "        self.liner_2 = nn.Linear(10,10)  # 这里是输入下一层。\n",
    "        self.liner_3 = nn.Linear(10,1)  # 这里使用sigmoid函数来激活。\n",
    "    \n",
    "    def forward(self,input): # 这里的input表示的是你要输入的那个数据，同时这里也是向前传播。\n",
    "        z = self.liner_1(input)  # 这里表示输入层转化为线性函数。\n",
    "        z = F.relu(z)  # 这里表示用relu函数将z激活。\n",
    "        z = self.liner_2(z)  # 这里再转化为线性函数。\n",
    "        z = F.relu(z)   # 再一次使用费relu函数将其激活。\n",
    "        z = self.liner_3(z)   # 再一次转化为线性函数\n",
    "        z = torch.sigmoid(z) # 最后使用的是sigmoid函数来激活。\n",
    "        z = z.squeeze(-1)   # 这里有的时候是要将维度的，反正是玄学。\n",
    "        return z   # 将最终的结果也就是分类结果进行返回。\n",
    "        \n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "966647ab",
   "metadata": {},
   "source": [
    "### 4.3.4 这里我定义了一个函数，是可以将你要的那个变量进行实例化，然后同时返回一个优化方案，这里我选择了Adam的优化方案，可以自动的矫正学习率，其实还有很多种的方案，这个可能要等到具体的问题具体的去分析了。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 315,
   "id": "82151ec1",
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_it():  # 这里是获得模型的函数。\n",
    "    two_class = Network()  # 首先还是实例化。\n",
    "    opti = torch.optim.Adam(two_class.parameters(),lr = 0.015)  # 这里设置学习率和优化方法。\n",
    "    return two_class,opti"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 316,
   "id": "84ba64b7",
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "net,opti = get_it()  # 这里是将变量实例化还有获得训练的"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 317,
   "id": "3d35239d",
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Network(\n",
      "  (liner_1): Linear(in_features=2, out_features=10, bias=True)\n",
      "  (liner_2): Linear(in_features=10, out_features=10, bias=True)\n",
      "  (liner_3): Linear(in_features=10, out_features=1, bias=True)\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "print(net)   #查看类别。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2851f462",
   "metadata": {},
   "source": [
    "### 4.3.5 这里定义了损失函数，使用的是BCEloss函数，公式如下：\n",
    "$$\n",
    "LOSS=-y\\log \\left( \\hat{y} \\right) -\\left( 1-y \\right) \\log \\left( 1-\\hat{y} \\right) \n",
    "$$\n",
    "其中y为真实标签，$\\hat{y}$为预测值。这样，当标签为0的时候，公式前半部分为0，$\\hat{y}$ 需要尽可能为0才能使后半部分数值更小；当标签为1时，后半部分为0，$\\hat{y}$需要尽可能为1才能使前半部分的值更小，这样就达到了让$\\hat{y}$尽量靠近标签也就是真实值的预期效果。\n"
   ]
  },
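  {
   "cell_type": "markdown",
   "id": "5a4b3c2d",
   "metadata": {},
   "source": [
    "### To check that nn.BCELoss matches the formula above, we can compute it by hand on made-up predictions and labels:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ab12cd34",
   "metadata": {},
   "outputs": [],
   "source": [
    "y_hat = torch.tensor([0.9, 0.2])   # hypothetical predicted probabilities.\n",
    "y_true = torch.tensor([1.0, 0.0])  # hypothetical labels.\n",
    "bce = torch.nn.BCELoss()(y_hat, y_true)\n",
    "manual = (-(y_true*torch.log(y_hat) + (1 - y_true)*torch.log(1 - y_hat))).mean()\n",
    "print(torch.allclose(bce, manual))  # True: BCELoss averages the per-sample formula."
   ]
  },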
  {
   "cell_type": "code",
   "execution_count": 318,
   "id": "f3841521",
   "metadata": {},
   "outputs": [],
   "source": [
    "loss_function = torch.nn.BCELoss()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 319,
   "id": "0527195e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "这里是训练集用于训练网络的损失值以及准确率：\n",
      "迭代次数为：0.  此时的损失值为：0.6959086656570435.  此时模型的准确率是：42.944786071777344%\n",
      "迭代次数为：10.  此时的损失值为：0.47769060730934143.  此时模型的准确率是：84.04907989501953%\n",
      "迭代次数为：20.  此时的损失值为：0.2586727440357208.  此时模型的准确率是：90.79754638671875%\n",
      "迭代次数为：30.  此时的损失值为：0.15177933871746063.  此时模型的准确率是：93.25153350830078%\n",
      "迭代次数为：40.  此时的损失值为：0.10224137455224991.  此时模型的准确率是：95.09202575683594%\n",
      "迭代次数为：50.  此时的损失值为：0.0677039697766304.  此时模型的准确率是：97.54601287841797%\n",
      "迭代次数为：60.  此时的损失值为：0.04686707630753517.  此时模型的准确率是：98.77300262451172%\n",
      "迭代次数为：70.  此时的损失值为：0.03337283805012703.  此时模型的准确率是：98.77300262451172%\n",
      "迭代次数为：80.  此时的损失值为：0.025386329740285873.  此时模型的准确率是：99.38650512695312%\n",
      "迭代次数为：90.  此时的损失值为：0.021162960678339005.  此时模型的准确率是：99.38650512695312%\n",
      "迭代次数为：100.  此时的损失值为：0.018188485875725746.  此时模型的准确率是：99.38650512695312%\n"
     ]
    }
   ],
   "source": [
    "print(\"这里是训练集用于训练网络的损失值以及准确率：\")\n",
    "for i in range(101):\n",
    "    finish = net(x)  # 这里是将数据传进去。\n",
    "    #print(finish)\n",
    "    #print(finish.dtype)\n",
    "    loss = loss_function(finish,y)   # 这里是将返回的结果传进去，然后和真实值做对比。\n",
    "    #print(loss)\n",
    "    if i % 10 == 0:  # 这里是定义的一个if语句，用于输出中间值。\n",
    "        \n",
    "        # 这里是计算准确率的。\n",
    "        middle_data = torch.empty_like(finish)  # 创造一个空张量,大小是和finish一样大的。。\n",
    "        middle_data = middle_data.copy_(finish)  # 这里将数据打印，变量不在一个内存里面。\n",
    "        \n",
    "        zero = torch.zeros_like(loss)\n",
    "        one = torch.ones_like(loss)  # 创造一个0张量和1张量。\n",
    "        \n",
    "        middle_data = torch.where(middle_data >= 0.5, one,middle_data)  # 这里将大等于0.5的变量转化为1\n",
    "        middle_data = torch.where(middle_data < 0.5, zero, middle_data) # 这里将小于0.5的变量转化为0.\n",
    "        \n",
    "        acc = 100*(middle_data == y).sum() / y.numel()  # 这里表示的是\n",
    "        \n",
    "        print(\"迭代次数为：\" + str(i) + \".  此时的损失值为：\" + str(float(loss)) + \".  此时模型的准确率是：\" + str(float(acc)) + \"%\")\n",
    "        # 接下来就是计算准确率。\n",
    "        \n",
    "        \n",
    "    opti.zero_grad()  # 将权重变成零。\n",
    "    loss.backward()   # 这里将损失值进行反向传播        \n",
    "    opti.step()   # 更新新的梯度。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 320,
   "id": "924ecf4b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 接下来是导入测试集的数据并进行划分。\n",
    "test_data = pd.read_csv(\"./AI_exam2-master/田字型散点/test.csv\",header = None)\n",
    "test_x = test_data.iloc[0:2,:]\n",
    "test_y = test_data.iloc[2:3,:]\n",
    "test_x = torch.from_numpy(np.array(test_x).T)\n",
    "test_y = torch.from_numpy(np.array(test_y))\n",
    "test_x = test_x.float()\n",
    "test_y = test_y.float()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 321,
   "id": "0f7729f6",
   "metadata": {},
   "outputs": [],
   "source": [
    "finish2 = net(test_x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 322,
   "id": "6c4505bc",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([9.9561e-01, 9.9708e-01, 9.8869e-01, 1.0000e+00, 9.9998e-01, 1.0000e+00,\n",
      "        9.9993e-01, 9.9999e-01, 1.0000e+00, 1.0000e+00, 1.0000e+00, 9.9998e-01,\n",
      "        1.0000e+00, 1.0000e+00, 9.9940e-01, 9.9590e-01, 7.8907e-01, 6.6087e-01,\n",
      "        2.0393e-05, 1.1270e-06, 7.7963e-08, 2.8819e-08, 1.2960e-06, 4.4412e-06,\n",
      "        1.3902e-04, 4.3714e-01, 8.6114e-04, 1.3575e-02, 1.3497e-01, 6.3923e-03,\n",
      "        2.4488e-02, 6.1813e-02, 2.8761e-01, 6.4989e-04, 1.4396e-01, 7.7225e-01,\n",
      "        9.3072e-01, 9.8051e-01, 9.4468e-01, 9.7854e-01, 9.9999e-01, 9.9212e-01,\n",
      "        9.9999e-01, 9.7701e-01, 9.9996e-01, 9.8069e-01, 9.9991e-01, 1.0000e+00,\n",
      "        1.0000e+00, 1.0000e+00, 1.0000e+00, 1.0000e+00, 9.9998e-01, 1.0000e+00,\n",
      "        1.0000e+00, 2.7227e-03, 4.2245e-05, 1.0140e-03, 6.2978e-03, 4.1655e-04,\n",
      "        1.3051e-01, 3.1900e-01, 1.2537e-01, 6.0770e-03, 3.9756e-02, 3.6577e-06,\n",
      "        3.6594e-06, 6.1981e-08, 3.4085e-06, 2.3452e-04, 2.2719e-09, 3.0757e-07,\n",
      "        6.8054e-05, 6.2025e-07, 1.9707e-07, 8.2210e-04, 5.2294e-09],\n",
      "       grad_fn=<SqueezeBackward1>)\n"
     ]
    }
   ],
   "source": [
    "print(finish2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 326,
   "id": "5206c83d",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0.], grad_fn=<WhereBackward0>)\n",
      "测试集模型的准确率是：100.0%\n"
     ]
    }
   ],
   "source": [
    "zero1 = torch.zeros_like(finish2)\n",
    "one1 = torch.ones_like(finish2)  # 创造一个0张量和1张量。      \n",
    "finish2 = torch.where(finish2 >= 0.5, one, finish2)  # 这里将大等于0.5的变量转化为1\n",
    "finish2 = torch.where(finish2 < 0.5, zero, finish2) # 这里将小于0.5的变量转化为0.\n",
    "print(finish2)\n",
    "acc = 100*(finish2 == test_y).sum() / test_y.numel()  # 这里表示的是\n",
    "print(\"测试集模型的准确率是：\" + str(float(acc)) + \"%\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
