{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using the PyTorch API\n",
    "### 1. nn.Module\n",
    "- a. ```__init__(self)```\n",
    "- b. ```forward()```: performs one forward pass; calling model() automatically invokes the ```__call__()``` method, which then calls forward()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. nn.Linear(number of input features, number of output features)\n",
    "When calling it you only need to pass in the samples, because nn.Linear() also dispatches to its own ```forward()``` method via ```__call__()```"
   ]
  },
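  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of this (the layer sizes here are made up for illustration): calling the layer on a whole batch is enough, and the output gets the configured number of features.\n",
    "```python\n",
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "layer = nn.Linear(3, 2)   # 3 input features, 2 output features\n",
    "batch = torch.rand(5, 3)  # 5 samples, 3 features each\n",
    "out = layer(batch)        # dispatches to forward() via __call__()\n",
    "print(out.shape)          # torch.Size([5, 2])\n",
    "```"
   ]
  },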
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch import nn\n",
    "class Lr(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(Lr, self).__init__()  # initialize the parent class (nn.Module)\n",
    "        self.linear = nn.Linear(1, 1)\n",
    "    \n",
    "    def forward(self, x):\n",
    "        out = self.linear(x)\n",
    "        return out"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "# Instantiate the model\n",
    "model = Lr()  # instantiating the model here actually calls the Lr class's __init__() method\n",
    "# Pass in data to compute the result\n",
    "x = torch.rand([10, 1])  # example input; x was not defined in the original cell\n",
    "predict = model(x)  # model(x) invokes __call__(), which calls forward(); the result is forward()'s return value"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. The optimizer classes\n",
    "Optimizer classes are provided by ```torch.optim```<br/>\n",
    "1. torch.optim.SGD(parameters, learning rate)<br/>\n",
    "2. torch.optim.Adam(parameters, learning rate)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: 1. The parameters can be obtained via ```model.parameters()```, which returns all of the model's parameters with requires_grad=True<br/>\n",
    "2. How to use an optimizer<br/>\n",
    "- 1. Instantiate it\n",
    "- 2. Zero out the gradients of all parameters\n",
    "- 3. Backpropagate to compute the gradients\n",
    "- 4. Update the parameter values"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import optim\n",
    "model = Lr()\n",
    "optimizer = optim.SGD(model.parameters(), lr=1e-3)  # 1. instantiate\n",
    "optimizer.zero_grad()  # 2. zero the gradients\n",
    "x = torch.rand([10, 1])  # dummy data; loss was not defined in the original cell\n",
    "loss = ((model(x) - (3 * x + 0.8)) ** 2).mean()  # a placeholder MSE loss\n",
    "loss.backward()  # 3. compute the gradients\n",
    "optimizer.step()  # 4. update the parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Implementing linear regression with PyTorch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Using matplotlib backend: Qt5Agg\n",
      "4.517959117889404 0.640555202960968 0.01674823835492134\n",
      "0.17935867607593536 1.616166353225708 1.3798341751098633\n",
      "0.11687769740819931 1.845745325088501 1.4028667211532593\n",
      "0.08868056535720825 1.997928500175476 1.332999587059021\n",
      "0.0673731118440628 2.126875638961792 1.2651582956314087\n",
      "0.051185671240091324 2.2389848232269287 1.2054895162582397\n",
      "0.03888745233416557 2.3366806507110596 1.1534392833709717\n",
      "0.029544051736593246 2.4218337535858154 1.108067274093628\n",
      "0.02244568057358265 2.496053695678711 1.068519949913025\n",
      "0.01705280877649784 2.56074595451355 1.0340498685836792\n",
      "0.012955632992088795 2.617133617401123 1.0040042400360107\n",
      "0.0098428251221776 2.666283369064331 0.9778156280517578\n",
      "0.007477885112166405 2.7091243267059326 0.9549885392189026\n",
      "0.005681248381733894 2.7464637756347656 0.935092568397522\n",
      "0.004316254984587431 2.779010534286499 0.9177508354187012\n",
      "0.003279202152043581 2.807379722595215 0.902634859085083\n",
      "0.0024913179222494364 2.8321070671081543 0.8894592523574829\n",
      "0.0018927399069070816 2.8536598682403564 0.8779750466346741\n",
      "0.0014379684580489993 2.8724467754364014 0.8679652810096741\n",
      "0.0010924441739916801 2.888822317123413 0.8592394590377808\n",
      "0.0008299618493765593 2.90309476852417 0.8516344428062439\n",
      "0.0006305580027401447 2.915534257888794 0.8450061678886414\n",
      "0.0004790611274074763 2.926377058029175 0.8392288088798523\n",
      "0.00036395725328475237 2.935828447341919 0.8341929912567139\n",
      "0.0002765064418781549 2.9440667629241943 0.8298032879829407\n",
      "0.00021006954193580896 2.951247215270996 0.8259772062301636\n",
      "0.00015959734446369112 2.957505464553833 0.8226423859596252\n",
      "0.00012125403009122238 2.9629604816436768 0.8197358846664429\n",
      "9.2120717454236e-05 2.967715263366699 0.8172023296356201\n",
      "6.998771277721971e-05 2.9718594551086426 0.8149941563606262\n",
      "5.317235627444461e-05 2.9754719734191895 0.8130693435668945\n",
      "4.039621853735298e-05 2.978620767593384 0.8113915920257568\n",
      "3.069056401727721e-05 2.981365203857422 0.8099291920661926\n",
      "2.3316844817600213e-05 2.983757495880127 0.8086545467376709\n",
      "1.771473944245372e-05 2.98584246635437 0.8075435757637024\n"
     ]
    }
   ],
   "source": [
    "%matplotlib\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# 1. Define the data for linear regression\n",
    "x = torch.rand([500, 1])\n",
    "y_true = 3 * x + 0.8\n",
    "\n",
    "# 2. Define the model\n",
    "class My_Linear(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(My_Linear, self).__init__()\n",
    "        self.linear = nn.Linear(1, 1)\n",
    "        \n",
    "    def forward(self, x):\n",
    "        out = self.linear(x)\n",
    "        return out\n",
    "    \n",
    "# 3. Instantiate the model, the optimizer, and the loss function\n",
    "my_linear = My_Linear()\n",
    "optimizer = optim.SGD(my_linear.parameters(), lr=1e-2)\n",
    "loss_fn = nn.MSELoss()\n",
    "loss_list = []\n",
    "\n",
    "# 4. Loop: forward pass, compute the loss, backpropagate, update the parameters.\n",
    "for i in range(3500):\n",
    "    # 1. forward pass\n",
    "    y_predict = my_linear(x)\n",
    "    # 2. compute the loss\n",
    "    loss = loss_fn(y_predict, y_true)\n",
    "    # 3. zero the gradients\n",
    "    optimizer.zero_grad()\n",
    "    # 4. backpropagate\n",
    "    loss.backward()\n",
    "    # 5. update the parameters\n",
    "    optimizer.step()\n",
    "    \n",
    "    if i % 100 == 0:\n",
    "        params = list(my_linear.parameters())\n",
    "        print(loss.item(), params[0].item(), params[1].item())\n",
    "        loss_list.append(loss.data.numpy())\n",
    "\n",
    "# Finally, plot the fit\n",
    "plt.figure(figsize=(20, 8))\n",
    "plt.scatter(x.numpy().reshape(-1), y_true.numpy().reshape(-1))\n",
    "y_predict = my_linear(x)\n",
    "plt.plot(x.numpy().reshape(-1), y_predict.detach().numpy().reshape(-1), c='r')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The loss decreases as the number of iterations grows\n",
    "plt.plot(range(len(loss_list)), loss_list, c='g')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Model evaluation\n",
    "By default model.training is True; calling ```model.eval()``` puts the model into evaluation mode, which sets model.training=False"
   ]
  },
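  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small sketch of the flag itself, using a bare nn.Linear in place of our model:\n",
    "```python\n",
    "from torch import nn\n",
    "\n",
    "m = nn.Linear(1, 1)\n",
    "print(m.training)  # True -- modules start out in training mode\n",
    "m.eval()           # switch to evaluation mode\n",
    "print(m.training)  # False\n",
    "m.train()          # and back to training mode\n",
    "print(m.training)  # True\n",
    "```"
   ]
  },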
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "False\n"
     ]
    }
   ],
   "source": [
    "x_test = torch.rand([50, 1])\n",
    "y_test = 3 * x_test + 0.8\n",
    "my_linear.eval()\n",
    "print(my_linear.training)  # in evaluation mode model.training is False, i.e. we are no longer in training mode\n",
    "predict = my_linear(x_test)\n",
    "predict = predict.data.numpy()\n",
    "plt.scatter(x_test.numpy(), y_test.numpy(), c='r')\n",
    "plt.plot(x_test.data.numpy().reshape(-1), predict.reshape(-1))\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Running the code on the GPU"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: when using the GPU, the <b>sample features we pass in must be GPU tensors</b>; when the model itself is moved to the GPU, its parameters become <b>GPU tensors automatically</b>, with no extra calls needed. When you are done with the GPU and want to work on the CPU again, convert back with ```tensor.cpu()```<br/>\n",
    "```tensor.cpu()``` is likewise non-destructive: it produces a new tensor and does not change the type of the original GPU tensor."
   ]
  },
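  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sketch of this; it also runs on a CPU-only machine, where both tensors simply stay on the CPU.\n",
    "```python\n",
    "import torch\n",
    "\n",
    "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
    "t = torch.ones(3, device=device)\n",
    "c = t.cpu()  # a new CPU tensor; t itself is left unchanged\n",
    "print(c.device.type)                 # cpu\n",
    "print(t.device.type == device.type)  # True -- the original keeps its device\n",
    "```"
   ]
  },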
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A GPU tensor cannot take part in operations together with a CPU tensor, but it can be combined with plain scalars, and in that case the result is again a GPU tensor"
   ]
  },
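  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of the scalar case (on a machine without a GPU the device is simply cpu, and the result stays there):\n",
    "```python\n",
    "import torch\n",
    "\n",
    "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
    "t = torch.rand(4, device=device)\n",
    "y = 3 * t + 0.8  # Python scalars are fine: the result lives on t's device\n",
    "print(y.device == t.device)  # True\n",
    "```"
   ]
  },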
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Using matplotlib backend: Qt5Agg\n"
     ]
    },
    {
     "ename": "RuntimeError",
     "evalue": "CUDA error: an illegal memory access was encountered",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-4-edf50535f187>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m      9\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     10\u001b[0m \u001b[1;31m# 1、定义线性回归的数据\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 11\u001b[1;33m \u001b[0mx\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mrand\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;36m500\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;36m1\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mdevice\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mdevice\u001b[0m\u001b[1;33m)\u001b[0m  \u001b[1;31m# 或者x = torch.rand([500, 1]).to(device)  # 两者的效果是一样的\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     12\u001b[0m \u001b[0my_true\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;36m3\u001b[0m \u001b[1;33m*\u001b[0m \u001b[0mx\u001b[0m \u001b[1;33m+\u001b[0m \u001b[1;36m0.8\u001b[0m  \u001b[1;31m# 这边自动的变为GPU的形式：因为是常量跟GPU类型的Tensor进行比较。\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     13\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mRuntimeError\u001b[0m: CUDA error: an illegal memory access was encountered"
     ]
    }
   ],
   "source": [
    "%matplotlib\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# Create a device object\n",
    "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
    "\n",
    "# 1. Define the data for linear regression\n",
    "x = torch.rand([500, 1], device=device)  # or x = torch.rand([500, 1]).to(device) -- the two are equivalent\n",
    "y_true = 3 * x + 0.8  # automatically becomes a GPU tensor, because a plain scalar is combined with a GPU tensor\n",
    "\n",
    "# 2. Define the model\n",
    "class My_Linear(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(My_Linear, self).__init__()\n",
    "        self.linear = nn.Linear(1, 1)\n",
    "        \n",
    "    def forward(self, x):\n",
    "        out = self.linear(x)\n",
    "        return out\n",
    "    \n",
    "# 3. Instantiate the model, the optimizer, and the loss function\n",
    "my_linear = My_Linear().to(device)  # when the model is moved with to(device), its parameters are moved automatically as well\n",
    "optimizer = optim.SGD(my_linear.parameters(), lr=1e-2)\n",
    "loss_fn = nn.MSELoss()\n",
    "loss_list = []\n",
    "\n",
    "# 4. Loop: forward pass, compute the loss, backpropagate, update the parameters.\n",
    "for i in range(3500):\n",
    "    # 1. forward pass\n",
    "    y_predict = my_linear(x)\n",
    "    # 2. compute the loss\n",
    "    loss = loss_fn(y_predict, y_true)\n",
    "    # 3. zero the gradients\n",
    "    optimizer.zero_grad()\n",
    "    # 4. backpropagate\n",
    "    loss.backward()\n",
    "    # 5. update the parameters\n",
    "    optimizer.step()\n",
    "    \n",
    "    if i % 100 == 0:\n",
    "        params = list(my_linear.parameters())\n",
    "        print(loss.item(), params[0].item(), params[1].item())\n",
    "        loss_list.append(loss.cpu().data.numpy())\n",
    "\n",
    "# Finally, plot the fit\n",
    "# the tensors also need to be converted back to CPU tensors here\n",
    "plt.figure(figsize=(20, 8))\n",
    "plt.scatter(x.cpu().numpy().reshape(-1), y_true.cpu().numpy().reshape(-1))\n",
    "y_predict = my_linear(x)\n",
    "plt.plot(x.cpu().numpy().reshape(-1), y_predict.cpu().detach().numpy().reshape(-1), c='r')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Results computed on the GPU have to be converted to numpy arrays or CPU tensors before they can be used on the CPU<br/>\n",
    "Example: ```predict = predict.cpu().detach().numpy()```<br/>\n",
    "Converting with ```predict = predict.cpu()``` alone does not seem to be enough"
   ]
  },
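  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The whole pattern in one runnable sketch (CPU-safe too, since .cpu() is a no-op on a CPU tensor):\n",
    "```python\n",
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
    "model = nn.Linear(1, 1).to(device)\n",
    "x = torch.rand(5, 1, device=device)\n",
    "predict = model(x)  # requires grad and may live on the GPU\n",
    "arr = predict.cpu().detach().numpy()  # move to the CPU and drop the graph first\n",
    "print(type(arr), arr.shape)  # <class 'numpy.ndarray'> (5, 1)\n",
    "```"
   ]
  },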
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "torch.cuda.is_available()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "torch.cuda.device_count()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
