{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Machine Learning Summary\n",
    "\n",
    "1. Machine learning goals and tasks\n",
    "\n",
    "2. A brief introduction to machine learning\n",
    "\n",
    "3. The machine learning workflow\n",
    "\n",
    "4. Deep learning\n",
    "\n",
    "5. A recurrent neural network code example\n",
    "\n",
    "\n",
    "\n",
    "## Machine Learning Goals and Tasks\n",
    "\n",
    "First, modeling has two broad goals: explanation and prediction. So far no model serves both equally well. Traditional econometric models, and even classical machine learning models (decision trees, SVMs), are used to explain many economic and scientific phenomena, but the simplicity of their functional form limits how well they can approximate the true function. When the data volume is huge, they cannot fit the real data distribution well, which is where neural networks come in. By the universal approximation theorem, a sufficiently large neural network can approximate a continuous target function to arbitrary precision. In our project, the meteorological environment is complex and the data volume is large, so we need a neural network to model the real-world process that generates the meteorological data (its distribution). Moreover, our object of study is the change in meteorological indicators, which ultimately reduces to changes in the magnitude of the indicator values, so our task is to predict those changes: the task is prediction. We therefore pursue predictive accuracy rather than interpretability.\n",
    "\n",
    "**Note:** In most settings, interpretability refers to the interpretability and credibility of a model's parameters, not to whether we can explain the model itself. In that sense a neural network is a model about which a great deal can be explained; it is only the real-world meaning of individual parameters that cannot be.\n",
    "\n",
    "Machine learning tasks:\n",
    "\n",
    "Supervised: classification (binary, multi-class), prediction (regression)\n",
    "\n",
    "Unsupervised: clustering\n",
    "\n",
    "Dimensionality reduction\n",
    "\n",
    "## A Brief Introduction to Machine Learning (P22)\n",
    "\n",
    "![image-20211118153042103](./img/image-20211118153042103.png)\n",
    "\n",
    "![image-20211118153134977](./img/image-20211118153134977.png)\n",
    "\n",
    "It is worth emphasizing that machine learning has three basic elements; when studying any new model, we should attend to three aspects: the model, the learning criterion, and the optimization algorithm.\n",
    "\n",
    "1. **Model:**\n",
    "\n",
    "Linear or nonlinear.\n",
    "\n",
    "2. **Learning criterion:**\n",
    "\n",
    "Risk minimization (minimizing the loss, which can be defined as MSE, 0-1 loss, and so on).\n",
    "\n",
    "3. **Optimization algorithm:**\n",
    "\n",
    "In machine learning, optimization splits into parameter optimization and hyperparameter optimization. Parameters can be understood as the coefficients of $f(X)$; they are computed automatically as the model trains, e.g. $w^T$ and $b$ below.\n",
    "\n",
    "![image-20211118154825323](./img/image-20211118154825323.png)\n",
    "\n",
    "Hyperparameters are the settings a person must choose before or adjust during training, such as the learning rate (the step size $\\alpha$ in gradient descent):\n",
    "\n",
    "![image-20211118155107151](./img/image-20211118155107151.png)\n",
    "\n",
    "## The Machine Learning Workflow\n",
    "\n",
    "In outline: a real-world scenario → produces data → take the indicator or variable of interest in the data as $y$ (the label) and apply feature selection to the remaining indicators or variables to construct $X$ (the features) → choose a model, giving the function $f(X)$ to be fitted → fix the direction of optimization according to the learning criterion, i.e. the definition of the loss function → use a suitable optimization algorithm to solve for the parameters iteratively on a computer → adjust the model's hyperparameters according to accuracy or other evaluation metrics → retrain $f(X)$.\n",
    "\n",
    "![workflow](./img/workflow.jpg)\n",
    "\n",
    "This diagram applies to any learning scenario. Suppose we are working with meteorological data. We first understand the problem and fix the object of study (an indicator, say $CO_x$); we extract the information, clean and transform it, and use feature engineering to construct the features $X$ used to predict $y$ (that is, $CO_x$). Analysis and mining is modeling (finding an $f(X)$ that approximates the true data distribution), here concretely a graph neural network: given $X$, it outputs $y$. Presentation is visualization, for example plotting how the accuracy and learning rate evolve during training and the distribution of the final predictions. From the visualizations we can spot potential problems with the model, revisit the problem, tune the hyperparameters, and retrain, iterating until the model is as good as we can make it.\n",
    "\n",
    "## Deep Learning (P79-102)\n",
    "\n",
    "See the *nndl-book*.\n",
    "\n",
    "## Recurrent Neural Networks\n",
    "\n",
    "Two strategies for autoregressive models.\n",
    "\n",
    "See pp. 292-295 of *Dive into Deep Learning* for a short introduction to recurrent neural networks.\n",
    "\n",
    "task1 provides a working demo worth going through.\n",
    "\n",
    "## Graph Neural Networks\n",
    "\n",
    "There is a lot of ongoing work on graph neural networks. The current plan is to use a graph convolutional network to better extract the relationships between nodes (weather stations). One way to extract them is graph embedding (Graph Embedding): use a GNN to map the graph into dense vectors. Another is to use STGCN directly; for an introduction to STGCN see the link below.\n",
    "\n",
    "https://blog.csdn.net/zuiyishihefang/article/details/93685443\n",
    "\n",
    "With graph embeddings the architecture would be a GNN followed by an LSTM; with STGCN a single end-to-end model produces the predictions directly.\n"
   ]
  },
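  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The three elements above (model, learning criterion, optimization algorithm) can be seen together in a minimal PyTorch training loop. This is an illustrative sketch on synthetic data, not the project's actual model; the linear layer and the learning rate lr=0.1 are arbitrary choices for the example.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "X = torch.randn(100, 3)                      # synthetic features\n",
    "y = X @ torch.tensor([1.0, -2.0, 0.5]) + 1  # synthetic targets\n",
    "\n",
    "model = torch.nn.Linear(3, 1)    # 1. model: f(X) = X w^T + b\n",
    "criterion = torch.nn.MSELoss()   # 2. learning criterion: MSE loss\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # 3. optimizer; lr is a hyperparameter\n",
    "\n",
    "for epoch in range(100):\n",
    "    loss = criterion(model(X).squeeze(-1), y)\n",
    "    optimizer.zero_grad()\n",
    "    loss.backward()              # gradients of the loss w.r.t. w, b\n",
    "    optimizer.step()             # one gradient-descent update\n",
    "print(loss.item())               # should be close to 0\n"
   ]
  },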
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Tensors"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# from __future__ import print_function  # lets older Python versions use newer print semantics\n",
    "import torch"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Tensors are similar to NumPy's ndarrays, with the addition that Tensors can also be computed on a GPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1.1112e-38, 9.5511e-39, 1.0102e-38],\n",
      "        [1.0286e-38, 1.0194e-38, 9.6429e-39],\n",
      "        [9.2755e-39, 9.1837e-39, 9.3674e-39],\n",
      "        [1.0745e-38, 1.0653e-38, 9.5510e-39],\n",
      "        [1.0561e-38, 1.0194e-38, 1.1112e-38]])\n",
      "tensor([[0., 0., 0.],\n",
      "        [0., 0., 0.],\n",
      "        [0., 0., 0.],\n",
      "        [0., 0., 0.],\n",
      "        [0., 0., 0.]])\n"
     ]
    }
   ],
   "source": [
    "# empty() allocates an uninitialized tensor; zeros() fills one with zeros\n",
    "x = torch.empty(5,3)\n",
    "y = torch.zeros(5,3)\n",
    "print(x)\n",
    "print(y)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0., 0., 0.],\n",
       "        [0., 0., 0.],\n",
       "        [0., 0., 0.],\n",
       "        [0., 0., 0.],\n",
       "        [0., 0., 0.]])"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Build a zero tensor with the same shape as x\n",
    "# (note: torch.zero_(x) would instead zero x in place)\n",
    "z = torch.zeros_like(x)\n",
    "z"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create a tensor based on an existing tensor."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1., 1., 1.],\n",
      "        [1., 1., 1.],\n",
      "        [1., 1., 1.],\n",
      "        [1., 1., 1.],\n",
      "        [1., 1., 1.]], dtype=torch.float64)\n",
      "tensor([[-0.7490, -1.4016,  0.9320],\n",
      "        [-0.4019, -0.1216,  0.2651],\n",
      "        [-2.2833,  0.0084, -1.0857],\n",
      "        [-0.5218,  1.2915, -1.6153],\n",
      "        [-0.5839, -0.9853, -0.0037]])\n"
     ]
    }
   ],
   "source": [
    "x = x.new_ones(5, 3, dtype=torch.double)      \n",
    "# new_* methods take in sizes\n",
    "print(x)\n",
    "\n",
    "x = torch.randn_like(x, dtype=torch.float)    \n",
    "# override dtype!\n",
    "print(x)                                      \n",
    "# result has the same size"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([5, 3])\n"
     ]
    }
   ],
   "source": [
    "# Get the shape\n",
    "print(x.size())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "# torch.Size is a subclass of tuple, so size_x supports tuple-like operations\n",
    "size_x = x.size()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Out-of-place addition"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[-1.4298, -1.6379,  1.9653],\n",
      "        [ 0.1658, -1.2925,  2.1177],\n",
      "        [-2.1158, -0.3177, -0.1608],\n",
      "        [-1.0279,  1.0297, -0.0514],\n",
      "        [-0.7609, -0.8904, -0.0787]])\n"
     ]
    }
   ],
   "source": [
    "result = torch.empty(5, 3)\n",
    "torch.add(x, y, out=result)\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-1.4298, -1.6379,  1.9653],\n",
       "        [ 0.1658, -1.2925,  2.1177],\n",
       "        [-2.1158, -0.3177, -0.1608],\n",
       "        [-1.0279,  1.0297, -0.0514],\n",
       "        [-0.7609, -0.8904, -0.0787]])"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x+y"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In-place addition"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[-1.4298, -1.6379,  1.9653],\n",
      "        [ 0.1658, -1.2925,  2.1177],\n",
      "        [-2.1158, -0.3177, -0.1608],\n",
      "        [-1.0279,  1.0297, -0.0514],\n",
      "        [-0.7609, -0.8904, -0.0787]])\n"
     ]
    }
   ],
   "source": [
    "# adds x to y\n",
    "y.add_(x)\n",
    "print(y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-2.1789, -3.0395,  2.8973],\n",
       "        [-0.2361, -1.4141,  2.3828],\n",
       "        [-4.3990, -0.3093, -1.2465],\n",
       "        [-1.5498,  2.3211, -1.6667],\n",
       "        [-1.3448, -1.8757, -0.0825]])"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# y was modified in place above, so the result changes accordingly\n",
    "x+y"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-1.4298, -1.6379,  1.9653],\n",
       "        [ 0.1658, -1.2925,  2.1177],\n",
       "        [-2.1158, -0.3177, -0.1608],\n",
       "        [-1.0279,  1.0297, -0.0514],\n",
       "        [-0.7609, -0.8904, -0.0787]])"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Note: any operation that mutates a tensor in place has a trailing underscore '_'. For example:\n",
    "x.copy_(y)\n",
    "x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Resizing: if you want to resize or reshape a tensor, you can use \n",
    "`torch.view`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])\n"
     ]
    }
   ],
   "source": [
    "x = torch.randn(4, 4)\n",
    "y = x.view(16)\n",
    "z = x.view(-1, 8)  # the size -1 is inferred from other dimensions\n",
    "print(x.size(), y.size(), z.size())"
   ]
  },
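  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A caveat worth adding here (not part of the original tutorial): view does not copy data. The returned tensor shares storage with the original, and view only works on contiguous tensors; reshape copies when necessary. The names x2/y2 below are fresh so as not to disturb the tensors used later.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x2 = torch.randn(4, 4)\n",
    "y2 = x2.view(16)\n",
    "y2[0] = 100.0                  # mutate through the view\n",
    "print(x2[0, 0])                # the original sees the change: tensor(100.)\n",
    "print(x2.t().is_contiguous())  # False: x2.t().view(16) would raise; use reshape instead\n"
   ]
  },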
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "torch.Tensor is the central class of the package. If you set its attribute .requires_grad to True, it starts to track all operations on it. When you finish your computation you can call .backward() and have all the gradients computed automatically; the gradient for the tensor is accumulated into its .grad attribute.\n",
    "\n",
    "To stop a tensor from tracking history, you can call .detach(), which detaches it from the computation history and prevents future computation from being tracked.\n",
    "\n",
    "To stop tracking history (and using memory), you can also wrap the code block in with torch.no_grad():. This is particularly helpful when evaluating a model, because during training the model's trainable parameters have requires_grad=True, but during evaluation we do not need the gradients.\n",
    "\n",
    "There is one more class that is very important for the autograd implementation: Function. Tensor and Function are interconnected and build up an acyclic graph that records the complete history of the computation. Each tensor has a .grad_fn attribute referencing the Function that created it (grad_fn is None for tensors created directly by the user).\n",
    "\n",
    "If you want to compute the derivatives, call Tensor.backward(). If the tensor is a scalar (i.e. it holds a single element of data), backward() needs no arguments; if it has more elements, you must pass a gradient argument, a tensor of matching shape."
   ]
  },
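  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the last point (our own example, not from the tutorial): for a non-scalar output, backward() needs a gradient tensor, which acts as the vector in a vector-Jacobian product. It also shows detach().\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "v = torch.ones(3, requires_grad=True)\n",
    "u = v * 2                                   # non-scalar output\n",
    "u.backward(torch.tensor([1.0, 0.1, 0.01])) # vector for the vector-Jacobian product\n",
    "print(v.grad)                               # tensor([2.0000, 0.2000, 0.0200])\n",
    "print(v.detach().requires_grad)             # False: detached from the graph\n"
   ]
  },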
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1., 1.],\n",
      "        [1., 1.]], requires_grad=True)\n"
     ]
    }
   ],
   "source": [
    "x = torch.ones(2, 2, requires_grad=True)\n",
    "print(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[3., 3.],\n",
      "        [3., 3.]], grad_fn=<AddBackward0>)\n"
     ]
    }
   ],
   "source": [
    "y = x + 2\n",
    "print(y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[27., 27.],\n",
      "        [27., 27.]], grad_fn=<MulBackward0>) tensor(27., grad_fn=<MeanBackward0>)\n"
     ]
    }
   ],
   "source": [
    "# Do more operations on y:\n",
    "# (note: * is elementwise; matrix multiplication is torch.matmul or @,\n",
    "#  and torch.dot applies only to 1-D tensors)\n",
    "z = y * y * 3\n",
    "out = z.mean()\n",
    "\n",
    "print(z, out)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "w = torch.rand_like(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.8947, 0.5226],\n",
       "        [0.8068, 0.9164]])"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "w"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([0.7087, 0.8616])"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# dim=0 aggregates down the rows: one mean per column\n",
    "w.mean(0)\n",
    "# dim=1 aggregates across the columns: one mean per row\n",
    "# (only this last expression is displayed by the notebook)\n",
    "w.mean(1)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.70865"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "np.array([0.8947, 0.5226]).mean()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.85075"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "np.array([0.8947, 0.8068]).mean()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ".requires_grad_( ... ) changes an existing tensor's requires_grad flag in place. The flag defaults to False if not set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "False\n",
      "None\n",
      "True\n",
      "<SumBackward0 object at 0x0000029B5ECAD9A0>\n"
     ]
    }
   ],
   "source": [
    "a = torch.randn(2, 2)\n",
    "a = ((a * 3) / (a - 1))\n",
    "print(a.requires_grad)\n",
    "print(a.grad_fn)\n",
    "\n",
    "a.requires_grad_(True)\n",
    "\n",
    "print(a.requires_grad)\n",
    "b = (a * a).sum()\n",
    "print(b.grad_fn)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[1., 1.],\n",
       "        [1., 1.]], requires_grad=True)"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Gradients:\n",
    "\n",
    "Let's backprop now. Because out contains a single scalar, out.backward() is equivalent to out.backward(torch.tensor(1.))."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Compute the gradients\n",
    "out.backward()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[4.5000, 4.5000],\n",
      "        [4.5000, 4.5000]])\n"
     ]
    }
   ],
   "source": [
    "# Print the gradients d(out)/dx\n",
    "print(x.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "![explanation](./img/basic1.png)"
   ]
  },
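  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The value 4.5 can be checked by hand. Writing $o = \\frac{1}{4}\\sum_i z_i$ with $z_i = 3(x_i+2)^2$, we have\n",
    "\n",
    "$$\\frac{\\partial o}{\\partial x_i} = \\frac{1}{4}\\cdot 6\\,(x_i+2) = \\frac{3}{2}(x_i+2),$$\n",
    "\n",
    "which at $x_i = 1$ gives $\\frac{9}{2} = 4.5$, matching x.grad above.\n"
   ]
  },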
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As noted earlier, wrapping a code block in with torch.no_grad(): stops tracking history (and saves memory). This is particularly useful when evaluating a model, since no gradients are needed at evaluation time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "True\n",
      "True\n",
      "False\n"
     ]
    }
   ],
   "source": [
    "print(x.requires_grad)\n",
    "print((x ** 2).requires_grad)\n",
    "\n",
    "with torch.no_grad():\n",
    "    print((x ** 2).requires_grad)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "interpreter": {
   "hash": "07b2cea26089ea0ebec7cc3a83022d31e9f80d0db55f432f89380becb3d80933"
  },
  "kernelspec": {
   "display_name": "Python 3.8.12 64-bit ('mytorch': conda)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.12"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
