{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 第三章 PyTorch基础：Tensor和Autograd\n",
    "\n",
    "## 3.1 Tensor\n",
    "\n",
    "Tensor，又名张量，读者可能对这个名词似曾相识，因它不仅在PyTorch中出现过，它也是Theano、TensorFlow、\n",
    "Torch和MxNet中重要的数据结构。关于张量的本质不乏深度的剖析，但从工程角度来讲，可简单地认为它就是一个数组，且支持高效的科学计算。它可以是一个数（标量）、一维数组（向量）、二维数组（矩阵）和更高维的数组（高阶数据）。Tensor和Numpy的ndarrays类似，但PyTorch的tensor支持GPU加速。\n",
    "\n",
    "本节将系统讲解tensor的使用，力求面面俱到，但不会涉及每个函数。对于更多函数及其用法，读者可通过在IPython/Notebook中使用函数名加`?`查看帮助文档，或查阅PyTorch官方文档[^1]。\n",
    "\n",
    "[^1]: http://docs.pytorch.org"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Let's begin\n",
    "from __future__ import print_function\n",
    "import torch  as t"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "###  3.1.1 基础操作\n",
    "\n",
    "学习过Numpy的读者会对本节内容感到非常熟悉，因tensor的接口有意设计成与Numpy类似，以方便用户使用。但不熟悉Numpy也没关系，本节内容并不要求先掌握Numpy。\n",
    "\n",
    "从接口的角度来讲，对tensor的操作可分为两类：\n",
    "\n",
    "1. `torch.function`，如`torch.save`等。\n",
    "2. 另一类是`tensor.function`，如`tensor.view`等。\n",
    "\n",
    "为方便使用，对tensor的大部分操作同时支持这两类接口，在本书中不做具体区分，如`torch.sum (torch.sum(a, b))`与`tensor.sum (a.sum(b))`功能等价。\n",
    "\n",
    "而从存储的角度来讲，对tensor的操作又可分为两类：\n",
    "\n",
    "1. 不会修改自身的数据，如 `a.add(b)`， 加法的结果会返回一个新的tensor。\n",
    "2. 会修改自身的数据，如 `a.add_(b)`， 加法的结果仍存储在a中，a被修改了。\n",
    "\n",
    "函数名以`_`结尾的都是inplace方式, 即会修改调用者自己的数据，在实际应用中需加以区分。\n",
    "\n",
    "#### 创建Tensor\n",
    "\n",
    "在PyTorch中新建tensor的方法有很多，具体如表3-1所示。\n",
    "\n",
    "表3-1: 常见新建tensor的方法\n",
    "\n",
    "|函数|功能|\n",
    "|:---:|:---:|\n",
    "|Tensor(\\*sizes)|基础构造函数|\n",
    "|ones(\\*sizes)|全1Tensor|\n",
    "|zeros(\\*sizes)|全0Tensor|\n",
    "|eye(\\*sizes)|对角线为1，其他为0|\n",
    "|arange(s,e,step|从s到e，步长为step|\n",
    "|linspace(s,e,steps)|从s到e，均匀切分成steps份|\n",
    "|rand/randn(\\*sizes)|均匀/标准分布|\n",
    "|normal(mean,std)/uniform(from,to)|正态分布/均匀分布|\n",
    "|randperm(m)|随机排列|\n",
    "\n",
    "其中使用`Tensor`函数新建tensor是最复杂多变的方式，它既可以接收一个list，并根据list的数据新建tensor，也能根据指定的形状新建tensor，还能传入其他的tensor，下面举几个例子。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-2.0131e+15  4.5573e-41  1.1429e-36\n",
       " 0.0000e+00  4.4842e-44  0.0000e+00\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 指定tensor的形状\n",
    "a = t.Tensor(2, 3)\n",
    "a # 数值取决于内存空间的状态"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  2  3\n",
       " 4  5  6\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 用list的数据创建tensor\n",
    "b = t.Tensor([[1,2,3],[4,5,6]])\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.tolist() # 把tensor转为list"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`tensor.size()`返回`torch.Size`对象，它是tuple的子类，但其使用方式与tuple略有区别"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([2, 3])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b_size = b.size()\n",
    "b_size"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "6"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.numel() # b中元素总个数，2*3，等价于b.nelement()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(\n",
       " -2.0131e+15  4.5573e-41  1.0241e-36\n",
       "  0.0000e+00  4.4842e-44  0.0000e+00\n",
       " [torch.FloatTensor of size 2x3], \n",
       "  2\n",
       "  3\n",
       " [torch.FloatTensor of size 2])"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 创建一个和b形状一样的tensor\n",
    "c = t.Tensor(b_size)\n",
    "# 创建一个元素为2和3的tensor\n",
    "d = t.Tensor((2, 3))\n",
    "c, d"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "除了`tensor.size()`，还可以利用`tensor.shape`直接查看tensor的形状，`tensor.shape`等价于`tensor.size()`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([2, 3])"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\u001b[0;31mType:\u001b[0m        property\n",
       "\u001b[0;31mString form:\u001b[0m <property object at 0x7f09f7912728>\n",
       "\u001b[0;31mSource:\u001b[0m     \n",
       "\u001b[0;31m# c.shape.fget\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m\u001b[0;34m@\u001b[0m\u001b[0mproperty\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m\u001b[0;32mdef\u001b[0m \u001b[0mshape\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m    \u001b[0;34m\"\"\"Alias for .size()\u001b[0m\n",
       "\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m    Returns a torch.Size object, containing the dimensions of the tensor\u001b[0m\n",
       "\u001b[0;34m    \"\"\"\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m    \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msize\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "c.shape??"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "需要注意的是，`t.Tensor(*sizes)`创建tensor时，系统不会马上分配空间，只是会计算剩余的内存是否足够使用，使用到tensor时才会分配，而其它操作都是在创建完tensor之后马上进行空间分配。其它常用的创建tensor的方法举例如下。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  1  1\n",
       " 1  1  1\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.ones(2, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  0  0\n",
       " 0  0  0\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.zeros(2, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1\n",
       " 3\n",
       " 5\n",
       "[torch.FloatTensor of size 3]"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.arange(1, 6, 2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  1.0000\n",
       "  5.5000\n",
       " 10.0000\n",
       "[torch.FloatTensor of size 3]"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.linspace(1, 10, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0.8020  0.9395 -2.4781\n",
       " 1.3814  0.2889  3.4069\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.randn(2, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1\n",
       " 2\n",
       " 4\n",
       " 3\n",
       " 0\n",
       "[torch.LongTensor of size 5]"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.randperm(5) # 长度为5的随机排列"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  0  0\n",
       " 0  1  0\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.eye(2, 3) # 对角线为1, 不要求行列数一致"
   ]
  },
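  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Table 3-1 also lists `normal` and `uniform`, which have not been demonstrated yet. As a small sketch (an addition, not from the original text), the in-place tensor methods `normal_` and `uniform_` fill an existing tensor with samples; their trailing `_` marks them as in-place, exactly like the `add_` convention described at the start of this section:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# in-place fill: the trailing `_` means the caller itself is modified\n",
    "x = t.Tensor(2, 3).uniform_(-1, 1)  # uniform samples in [-1, 1)\n",
    "y = t.Tensor(2, 3).normal_(0, 1)    # normal samples with mean 0, std 1\n",
    "\n",
    "# contrast: out-of-place vs in-place addition\n",
    "a = t.ones(2, 3)\n",
    "b = a.add(1)   # returns a new tensor; a is unchanged\n",
    "a.add_(1)      # modifies a itself\n",
    "a, b"
   ]
  },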
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 常用Tensor操作"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "通过`tensor.view`方法可以调整tensor的形状，但必须保证调整前后元素总数一致。`view`不会修改自身的数据，返回的新tensor与源tensor共享内存，也即更改其中的一个，另外一个也会跟着改变。在实际应用中可能经常需要添加或减少某一维度，这时候`squeeze`和`unsqueeze`两个函数就派上用场了。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  1  2\n",
       " 3  4  5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.arange(0, 6)\n",
    "a.view(2, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  1  2\n",
       " 3  4  5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = a.view(-1, 3) # 当某一维为-1的时候，会自动计算它的大小\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,.,.) = \n",
       "  0  1  2\n",
       "\n",
       "(1 ,.,.) = \n",
       "  3  4  5\n",
       "[torch.FloatTensor of size 2x1x3]"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.unsqueeze(1) # 注意形状，在第1维（下标从0开始）上增加“１”"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,.,.) = \n",
       "  0  1  2\n",
       "\n",
       "(1 ,.,.) = \n",
       "  3  4  5\n",
       "[torch.FloatTensor of size 2x1x3]"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.unsqueeze(-2) # -2表示倒数第二个维度"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,0 ,.,.) = \n",
       "  0  1  2\n",
       "  3  4  5\n",
       "[torch.FloatTensor of size 1x1x2x3]"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c = b.view(1, 1, 1, 2, 3)\n",
    "c.squeeze(0) # 压缩第0维的“１”"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  1  2\n",
       " 3  4  5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c.squeeze() # 把所有维度为“1”的压缩"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "   0  100    2\n",
       "   3    4    5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[1] = 100\n",
    "b # a修改，b作为view之后的，也会跟着修改"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`resize`是另一种可用来调整`size`的方法，但与`view`不同，它可以修改tensor的大小。如果新大小超过了原大小，会自动分配新的内存空间，而如果新大小小于原大小，则之前的数据依旧会被保存，看一个例子。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "   0  100    2\n",
       "[torch.FloatTensor of size 1x3]"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.resize_(1, 3)\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "   0.0000  100.0000    2.0000\n",
       "   3.0000    4.0000    5.0000\n",
       "   0.0000    0.0000    0.0000\n",
       "[torch.FloatTensor of size 3x3]"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.resize_(3, 3) # 旧的数据依旧保存着，多出的大小会分配新空间\n",
    "b"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 索引操作\n",
    "\n",
    "Tensor支持与numpy.ndarray类似的索引操作，语法上也类似，下面通过一些例子，讲解常用的索引操作。如无特殊说明，索引出来的结果与原tensor共享内存，也即修改一个，另一个会跟着修改。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-0.1103  0.4096  1.5802  1.1658\n",
       " 0.3915  0.5752  0.8781 -0.4837\n",
       " 0.4399  0.0309 -2.2749 -1.5515\n",
       "[torch.FloatTensor of size 3x4]"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.randn(3, 4)\n",
    "a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-0.1103\n",
       " 0.4096\n",
       " 1.5802\n",
       " 1.1658\n",
       "[torch.FloatTensor of size 4]"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[0] # 第0行(下标从0开始)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-0.1103\n",
       " 0.3915\n",
       " 0.4399\n",
       "[torch.FloatTensor of size 3]"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[:, 0] # 第0列"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1.5802396535873413"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[0][2] # 第0行第2个元素，等价于a[0, 2]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1.1657776832580566"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[0, -1] # 第0行最后一个元素"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-0.1103  0.4096  1.5802  1.1658\n",
       " 0.3915  0.5752  0.8781 -0.4837\n",
       "[torch.FloatTensor of size 2x4]"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[:2] # 前两行"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-0.1103  0.4096\n",
       " 0.3915  0.5752\n",
       "[torch.FloatTensor of size 2x2]"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[:2, 0:2] # 前两行，第0,1列"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "-0.1103  0.4096\n",
      "[torch.FloatTensor of size 1x2]\n",
      "\n",
      "\n",
      "-0.1103\n",
      " 0.4096\n",
      "[torch.FloatTensor of size 2]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "print(a[0:1, :2]) # 第0行，前两列 \n",
    "print(a[0, :2]) # 注意两者的区别：形状不同"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  0  1  1\n",
       " 0  0  0  0\n",
       " 0  0  0  0\n",
       "[torch.ByteTensor of size 3x4]"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a > 1 # 返回一个ByteTensor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1.5802\n",
       " 1.1658\n",
       "[torch.FloatTensor of size 2]"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[a>1] # 等价于a.masked_select(a>1)\n",
    "# 选择结果与原tensor不共享内存空间"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-0.1103  0.4096  1.5802  1.1658\n",
       " 0.3915  0.5752  0.8781 -0.4837\n",
       "[torch.FloatTensor of size 2x4]"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[t.LongTensor([0,1])] # 第0行和第1行"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "其它常用的选择函数如表3-2所示。\n",
    "\n",
    "表3-2常用的选择函数\n",
    "\n",
    "函数|功能|\n",
    ":---:|:---:|\n",
    "index_select(input, dim, index)|在指定维度dim上选取，比如选取某些行、某些列\n",
    "masked_select(input, mask)|例子如上，a[a>0]，使用ByteTensor进行选取\n",
    "non_zero(input)|非0元素的下标\n",
    "gather(input, dim, index)|根据index，在dim维度上选取数据，输出的size与index一样\n",
    "\n",
    "\n",
    "`gather`是一个比较复杂的操作，对一个2维tensor，输出的每个元素如下：\n",
    "\n",
    "```python\n",
    "out[i][j] = input[index[i][j]][j]  # dim=0\n",
    "out[i][j] = input[i][index[i][j]]  # dim=1\n",
    "```\n",
    "三维tensor的`gather`操作同理，下面举几个例子。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   1   2   3\n",
       "  4   5   6   7\n",
       "  8   9  10  11\n",
       " 12  13  14  15\n",
       "[torch.FloatTensor of size 4x4]"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.arange(0, 16).view(4, 4)\n",
    "a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   5  10  15\n",
       "[torch.FloatTensor of size 1x4]"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 选取对角线的元素\n",
    "index = t.LongTensor([[0,1,2,3]])\n",
    "a.gather(0, index)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  3\n",
       "  6\n",
       "  9\n",
       " 12\n",
       "[torch.FloatTensor of size 4x1]"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 选取反对角线上的元素\n",
    "index = t.LongTensor([[3,2,1,0]]).t()\n",
    "a.gather(1, index)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 12   9   6   3\n",
       "[torch.FloatTensor of size 1x4]"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 选取反对角线上的元素，注意与上面的不同\n",
    "index = t.LongTensor([[3,2,1,0]])\n",
    "a.gather(0, index)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   3\n",
       "  5   6\n",
       " 10   9\n",
       " 15  12\n",
       "[torch.FloatTensor of size 4x2]"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 选取两个对角线上的元素\n",
    "index = t.LongTensor([[0,1,2,3],[3,2,1,0]]).t()\n",
    "b = a.gather(1, index)\n",
    "b"
   ]
  },
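  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The dim=1 rule above, `out[i][j] = input[i][index[i][j]]`, can be verified by hand. The following sketch (an addition, not from the original text) recomputes the two-diagonal example with an explicit loop and compares the result to `gather`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# same setup as the two-diagonal example above\n",
    "a = t.arange(0, 16).view(4, 4)\n",
    "index = t.LongTensor([[0,1,2,3],[3,2,1,0]]).t()\n",
    "b = a.gather(1, index)\n",
    "\n",
    "# re-derive b element by element with the dim=1 rule\n",
    "manual = t.zeros(index.size()).type_as(b)\n",
    "for i in range(index.size(0)):\n",
    "    for j in range(index.size(1)):\n",
    "        manual[i, j] = a[i, int(index[i, j])]\n",
    "(manual == b).all()  # the explicit loop and gather agree"
   ]
  },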
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "与`gather`相对应的逆操作是`scatter_`，`gather`把数据从input中按index取出，而`scatter_`是把取出的数据再放回去。注意`scatter_`函数是inplace操作。\n",
    "\n",
    "```python\n",
    "out = input.gather(dim, index)\n",
    "-->近似逆操作\n",
    "out = Tensor()\n",
    "out.scatter_(dim, index)\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   0   0   3\n",
       "  0   5   6   0\n",
       "  0   9  10   0\n",
       " 12   0   0  15\n",
       "[torch.FloatTensor of size 4x4]"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 把两个对角线元素放回去到指定位置\n",
    "c = t.zeros(4,4)\n",
    "c.scatter_(1, index, b)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 高级索引\n",
    "PyTorch在0.2版本中完善了索引操作，目前已经支持绝大多数numpy的高级索引[^10]。高级索引可以看成是普通索引操作的扩展，但是高级索引操作的结果一般不和原始的Tensor贡献内出。 \n",
    "[^10]: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,.,.) = \n",
       "   0   1   2\n",
       "   3   4   5\n",
       "   6   7   8\n",
       "\n",
       "(1 ,.,.) = \n",
       "   9  10  11\n",
       "  12  13  14\n",
       "  15  16  17\n",
       "\n",
       "(2 ,.,.) = \n",
       "  18  19  20\n",
       "  21  22  23\n",
       "  24  25  26\n",
       "[torch.FloatTensor of size 3x3x3]"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x = t.arange(0,27).view(3,3,3)\n",
    "x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 14\n",
       " 24\n",
       "[torch.FloatTensor of size 2]"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x[[1, 2], [1, 2], [2, 0]] # x[1,1,2]和x[2,2,0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 19\n",
       " 10\n",
       "  1\n",
       "[torch.FloatTensor of size 3]"
      ]
     },
     "execution_count": 45,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x[[2, 1, 0], [0], [1]] # x[2,,0,1],x[1,0,1],x[0,0,1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,.,.) = \n",
       "   0   1   2\n",
       "   3   4   5\n",
       "   6   7   8\n",
       "\n",
       "(1 ,.,.) = \n",
       "  18  19  20\n",
       "  21  22  23\n",
       "  24  25  26\n",
       "[torch.FloatTensor of size 2x3x3]"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x[[0, 2], ...] # x[0] 和 x[2]"
   ]
  },
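  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick check of the claim above that advanced indexing does not share memory with the original tensor (a sketch, not from the original text): modify the indexed result and observe that the source tensor is untouched."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = t.arange(0, 27).view(3, 3, 3)\n",
    "y = x[[0, 2], ...]  # advanced indexing returns a copy\n",
    "y[0, 0, 0] = -1     # modify the copy...\n",
    "x[0, 0, 0]          # ...while x is unchanged (still 0)"
   ]
  },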
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Tensor类型\n",
    "\n",
    "Tensor有不同的数据类型，如表3-3所示，每种类型分别对应有CPU和GPU版本(HalfTensor除外)。默认的tensor是FloatTensor，可通过`t.set_default_tensor_type` 来修改默认tensor类型(如果默认类型为GPU tensor，则所有操作都将在GPU上进行)。Tensor的类型对分析内存占用很有帮助。例如对于一个size为(1000, 1000, 1000)的FloatTensor，它有`1000*1000*1000=10^9`个元素，每个元素占32bit/8 = 4Byte内存，所以共占大约4GB内存/显存。HalfTensor是专门为GPU版本设计的，同样的元素个数，显存占用只有FloatTensor的一半，所以可以极大缓解GPU显存不足的问题，但由于HalfTensor所能表示的数值大小和精度有限[^2]，所以可能出现溢出等问题。\n",
    "\n",
    "[^2]: https://stackoverflow.com/questions/872544/what-range-of-numbers-can-be-represented-in-a-16-32-and-64-bit-ieee-754-syste\n",
    "\n",
    "表3-3: tensor数据类型\n",
    "\n",
    "数据类型|\tCPU tensor\t|GPU tensor|\n",
    ":---:|:---:|:--:|\n",
    "32-bit 浮点|\ttorch.FloatTensor\t|torch.cuda.FloatTensor\n",
    "64-bit 浮点|\ttorch.DoubleTensor|\ttorch.cuda.DoubleTensor\n",
    "16-bit 半精度浮点|\tN/A\t|torch.cuda.HalfTensor\n",
    "8-bit 无符号整形(0~255)|\ttorch.ByteTensor|\ttorch.cuda.ByteTensor\n",
    "8-bit 有符号整形(-128~127)|\ttorch.CharTensor\t|torch.cuda.CharTensor\n",
    "16-bit 有符号整形  |\ttorch.ShortTensor|\ttorch.cuda.ShortTensor\n",
    "32-bit 有符号整形 \t|torch.IntTensor\t|torch.cuda.IntTensor\n",
    "64-bit 有符号整形  \t|torch.LongTensor\t|torch.cuda.LongTensor\n",
    "\n",
    "各数据类型之间可以互相转换，`type(new_type)`是通用的做法，同时还有`float`、`long`、`half`等快捷方法。CPU tensor与GPU tensor之间的互相转换通过`tensor.cuda`和`tensor.cpu`方法实现。Tensor还有一个`new`方法，用法与`t.Tensor`一样，会调用该tensor对应类型的构造函数，生成与当前tensor类型一致的tensor。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# 设置默认tensor，注意参数是字符串\n",
    "t.set_default_tensor_type('torch.IntTensor')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-6.5609e+08  3.2522e+04 -6.5609e+08\n",
       " 3.2522e+04  3.2000e+01  0.0000e+00\n",
       "[torch.IntTensor of size 2x3]"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.Tensor(2,3)\n",
    "a # 现在a是IntTensor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-6.5609e+08  3.2522e+04 -6.5609e+08\n",
       " 3.2522e+04  3.2000e+01  0.0000e+00\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 49,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 把a转成FloatTensor，等价于b=a.type(t.FloatTensor)\n",
    "b = a.float() \n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-6.5609e+08  3.2522e+04 -6.5609e+08\n",
       " 3.2522e+04  3.2000e+01  0.0000e+00\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 50,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c = a.type_as(b)\n",
    "c"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-6.5609e+08  3.2522e+04 -6.5609e+08\n",
       " 3.2522e+04  3.2000e+01  0.0000e+00\n",
       "[torch.IntTensor of size 2x3]"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "d = a.new(2,3) # 等价于torch.IntTensor(3,4)\n",
    "d"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {
    "collapsed": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\u001b[0;31mSignature:\u001b[0m \u001b[0ma\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mnew\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
       "\u001b[0;31mSource:\u001b[0m   \n",
       "    \u001b[0;32mdef\u001b[0m \u001b[0mnew\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;34m\"\"\"Constructs a new tensor of the same data type.\"\"\"\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m__class__\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
       "\u001b[0;31mFile:\u001b[0m      /usr/local/lib/python3.5/dist-packages/torch/tensor.py\n",
       "\u001b[0;31mType:\u001b[0m      method\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# 查看函数new的源码\n",
    "a.new??"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# 恢复之前的默认设置\n",
    "t.set_default_tensor_type('torch.FloatTensor')"
   ]
  },
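  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 4 GB estimate above can be reproduced programmatically. A sketch (an addition, not from the original text) using `tensor.numel()` and `tensor.element_size()`, the number of bytes per element, on a smaller tensor so that no huge allocation is needed:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = t.ones(1000, 1000)  # FloatTensor: 4 bytes per element\n",
    "nbytes = x.numel() * x.element_size()\n",
    "nbytes  # 1000 * 1000 * 4 = 4000000 bytes, i.e. about 4 MB"
   ]
  },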
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 逐元素操作\n",
    "\n",
    "这部分操作会对tensor的每一个元素(point-wise，又名element-wise)进行操作，此类操作的输入与输出形状一致。常用的操作如表3-4所示。\n",
    "\n",
    "表3-4: 常见的逐元素操作\n",
    "\n",
    "|函数|功能|\n",
    "|:--:|:--:|\n",
    "|abs/sqrt/div/exp/fmod/log/pow..|绝对值/平方根/除法/指数/求余/求幂..|\n",
    "|cos/sin/asin/atan2/cosh..|相关三角函数|\n",
    "|ceil/round/floor/trunc| 上取整/四舍五入/下取整/只保留整数部分|\n",
    "|clamp(input, min, max)|超过min和max部分截断|\n",
    "|sigmod/tanh..|激活函数\n",
    "\n",
    "对于很多操作，例如div、mul、pow、fmod等，PyTorch都实现了运算符重载，所以可以直接使用运算符。如`a ** 2` 等价于`torch.pow(a,2)`, `a * 2`等价于`torch.mul(a,2)`。\n",
    "\n",
    "其中`clamp(x, min, max)`的输出满足以下公式：\n",
    "$$\n",
    "y_i =\n",
    "\\begin{cases}\n",
    "min,  & \\text{if  } x_i \\lt min \\\\\n",
    "x_i,  & \\text{if  } min \\le x_i \\le max  \\\\\n",
    "max,  & \\text{if  } x_i \\gt max\\\\\n",
    "\\end{cases}\n",
    "$$\n",
     "`clamp` is often used where values must be compared against a bound, e.g. taking the element-wise maximum of a tensor and a scalar."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1.0000  0.5403 -0.4161\n",
       "-0.9900 -0.6536  0.2837\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 54,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.arange(0, 6).view(2, 3)\n",
    "t.cos(a)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  1  2\n",
       " 0  1  2\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 55,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "a % 3 # equivalent to t.fmod(a, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   1   4\n",
       "  9  16  25\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 56,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "a ** 2 # equivalent to t.pow(a, 2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      " 0  1  2\n",
      " 3  4  5\n",
      "[torch.FloatTensor of size 2x3]\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\n",
       " 3  3  3\n",
       " 3  4  5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 57,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# for each element of a, take the larger of it and 3 (elements smaller than 3 are clamped to 3)\n",
    "print(a)\n",
    "t.clamp(a, min=3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Reduction operations\n",
     "These operations produce an output smaller than the input, and can be applied along a specified dimension. For example `sum` can compute the sum of the whole tensor, or the sum of each row or each column. Common reduction operations are listed in Table 3-5.\n",
     "\n",
     "Table 3-5: Common reduction operations\n",
     "\n",
     "|Function|Description|\n",
     "|:---:|:---:|\n",
     "|mean/sum/median/mode|mean/sum/median/mode|\n",
     "|norm/dist|norm/distance|\n",
     "|std/var|standard deviation/variance|\n",
     "|cumsum/cumprod|cumulative sum/cumulative product|\n",
     "\n",
     "Most of these functions take a parameter **`dim`** that specifies the dimension along which the operation is performed. There are many ways to explain dim (the counterpart of axis in Numpy); here is a simple rule of thumb:\n",
     "\n",
     "Suppose the input has shape (m, n, k):\n",
     "\n",
     "- with dim=0, the output has shape (1, n, k) or (n, k)\n",
     "- with dim=1, the output has shape (m, 1, k) or (m, k)\n",
     "- with dim=2, the output has shape (m, n, 1) or (m, n)\n",
     "\n",
     "Whether the size-1 dimension appears in the output depends on the parameter `keepdim`: `keepdim=True` keeps it. Note that this is only a rule of thumb; not every function follows this shape change, e.g. `cumsum`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 2  2  2\n",
       "[torch.FloatTensor of size 1x3]"
      ]
     },
     "execution_count": 58,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = t.ones(2, 3)\n",
     "b.sum(dim=0, keepdim=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 2\n",
       " 2\n",
       " 2\n",
       "[torch.FloatTensor of size 3]"
      ]
     },
     "execution_count": 59,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# keepdim=False drops the size-1 dimension; note the shape\n",
    "b.sum(dim=0, keepdim=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 3\n",
       " 3\n",
       "[torch.FloatTensor of size 2]"
      ]
     },
     "execution_count": 60,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.sum(dim=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      " 0  1  2\n",
      " 3  4  5\n",
      "[torch.FloatTensor of size 2x3]\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   1   3\n",
       "  3   7  12\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 61,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.arange(0, 6).view(2, 3)\n",
    "print(a)\n",
     "a.cumsum(dim=1) # cumulative sum along each row"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Comparison\n",
     "Some comparison functions work element-wise, like the element-wise operations above; others behave like reductions. Common comparison functions are listed in Table 3-6.\n",
     "\n",
     "Table 3-6: Common comparison functions\n",
     "\n",
     "|Function|Description|\n",
     "|:--:|:--:|\n",
     "|gt/lt/ge/le/eq/ne|greater/less/greater or equal/less or equal/equal/not equal|\n",
     "|topk|the k largest elements|\n",
     "|sort|sort|\n",
     "|max/min|element-wise maximum/minimum of two tensors|\n",
     "\n",
     "The operations in the first row have overloaded operators, so `a>=b`, `a>b`, `a!=b` and `a==b` can be used directly; each returns a `ByteTensor` that can be used to select elements. max/min are special. Taking max as an example, it can be called in three ways:\n",
     "- t.max(tensor): returns the largest element of the tensor\n",
     "- t.max(tensor, dim): the largest elements along the given dimension, returned together with their indices\n",
     "- t.max(tensor1, tensor2): the element-wise maximum of two tensors\n",
     "\n",
     "To compare a tensor against a single number, use the clamp function. Examples follow."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   3   6\n",
       "  9  12  15\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 62,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.linspace(0, 15, 6).view(2, 3)\n",
    "a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 15  12   9\n",
       "  6   3   0\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 63,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = t.linspace(15, 0, 6).view(2, 3)\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  0  0\n",
       " 1  1  1\n",
       "[torch.ByteTensor of size 2x3]"
      ]
     },
     "execution_count": 64,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a>b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  9\n",
       " 12\n",
       " 15\n",
       "[torch.FloatTensor of size 3]"
      ]
     },
     "execution_count": 65,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "a[a>b] # elements of a where a is greater than b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "15.0"
      ]
     },
     "execution_count": 66,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.max(a)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(\n",
       "  15\n",
       "   6\n",
       " [torch.FloatTensor of size 2], \n",
       "  0\n",
       "  0\n",
       " [torch.LongTensor of size 2])"
      ]
     },
     "execution_count": 67,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "t.max(b, dim=1)\n",
     "# in the first returned tensor, 15 and 6 are the largest elements of row 0 and row 1\n",
     "# in the second, the two 0s say that each maximum is the 0th element of its row"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 15  12   9\n",
       "  9  12  15\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 68,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.max(a,b)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 10  10  10\n",
       " 10  12  15\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 69,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# element-wise maximum of a and 10\n",
    "t.clamp(a, min=10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Linear algebra\n",
     "\n",
     "PyTorch's linear algebra functions mainly wrap BLAS and LAPACK, and their usage and interfaces are similar. Common linear algebra functions are listed in Table 3-7.\n",
     "\n",
     "Table 3-7: Common linear algebra functions\n",
     "\n",
     "|Function|Description|\n",
     "|:---:|:---:|\n",
     "|trace|sum of the diagonal elements (trace of the matrix)|\n",
     "|diag|diagonal elements|\n",
     "|triu/tril|upper/lower triangular part of a matrix, with an optional offset|\n",
     "|mm/bmm|matrix multiplication, batched matrix multiplication|\n",
     "|addmm/addbmm/addmv/addr/baddbmm..|fused matrix operations|\n",
     "|t|transpose|\n",
     "|dot/cross|inner product/cross product|\n",
     "|inverse|matrix inverse|\n",
     "|svd|singular value decomposition|\n",
     "\n",
     "See the official documentation[^3] for details. Note that transposing a matrix makes its storage non-contiguous; call its `.contiguous` method to make it contiguous again.\n",
     "[^3]: http://pytorch.org/docs/torch.html#blas-and-lapack-operations"
   ]
  },
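   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a quick illustration of the functions in Table 3-7, a minimal sketch (the tensor names `m1`, `m2` and `r` are made up for this example):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import torch as t\n",
     "m1 = t.ones(2, 3)\n",
     "m2 = t.ones(3, 2)\n",
     "r = m1.mm(m2)  # (2x3) matrix times (3x2) matrix -> (2x2); every entry is 3\n",
     "r.trace()  # sum of the diagonal: 3 + 3 = 6"
    ]
   },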
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "False"
      ]
     },
     "execution_count": 70,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = a.t()\n",
    "b.is_contiguous()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   9\n",
       "  3  12\n",
       "  6  15\n",
       "[torch.FloatTensor of size 3x2]"
      ]
     },
     "execution_count": 71,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.contiguous()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 3.1.2 Tensor and Numpy\n",
     "\n",
     "Tensors and Numpy arrays are very similar, and converting between them is simple and efficient; note that they share memory, so the conversion costs very little. Since Numpy has a long history and supports a very rich set of operations, when you hit an operation that Tensor does not support you can convert to a Numpy array, process it there, and convert back."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 1.,  1.,  1.],\n",
       "       [ 1.,  1.,  1.]])"
      ]
     },
     "execution_count": 72,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "a = np.ones([2, 3])\n",
    "a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  1  1\n",
       " 1  1  1\n",
       "[torch.DoubleTensor of size 2x3]"
      ]
     },
     "execution_count": 73,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = t.from_numpy(a)\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 74,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  1  1\n",
       " 1  1  1\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 74,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "b = t.Tensor(a) # a numpy array can also be passed directly to Tensor; note this copies the data\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  1  1\n",
       " 1  1  1\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 75,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "a[0, 1] = 100 # modifying a does not affect b, since t.Tensor(a) copied the data\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 1.,  1.,  1.],\n",
       "       [ 1.,  1.,  1.]], dtype=float32)"
      ]
     },
     "execution_count": 76,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "c = b.numpy() # b and c share memory (a does not: t.Tensor(a) made a copy)\n",
    "c"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Broadcasting is a technique used throughout scientific computing: it vectorizes operations quickly without using extra memory (or GPU memory).\n",
     "Numpy's broadcasting rules are:\n",
     "\n",
     "- all inputs are aligned to the array with the longest shape, padding missing leading dimensions with 1\n",
     "- two arrays are compatible in a dimension if their lengths agree or one of them is 1; otherwise they cannot be broadcast together\n",
     "- when an input has length 1 in some dimension, it is expanded along that dimension (as if copied) to match the other array\n",
     "\n",
     "PyTorch already supports automatic broadcasting, but the author still recommends broadcasting by hand with the following two functions, which is more explicit and less error-prone:\n",
     "\n",
     "- `unsqueeze` or `view`: insert a size-1 dimension, implementing rule 1\n",
     "- `expand` or `expand_as`: repeat the array, implementing rule 3; this does not copy the data, so it uses no extra memory.\n",
     "\n",
     "Note that repeat does something similar to expand, but repeat copies the data and therefore does use extra memory."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "metadata": {
    "collapsed": true,
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "a = t.ones(3, 2)\n",
    "b = t.zeros(2, 3,1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,.,.) = \n",
       "  1  1\n",
       "  1  1\n",
       "  1  1\n",
       "\n",
       "(1 ,.,.) = \n",
       "  1  1\n",
       "  1  1\n",
       "  1  1\n",
       "[torch.FloatTensor of size 2x3x2]"
      ]
     },
     "execution_count": 78,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# automatic broadcasting\n",
     "# step 1: a is 2-d and b is 3-d, so a size-1 dimension is first added at the front of a,\n",
     "#         i.e. a.unsqueeze(0): a becomes shape (1, 3, 2) while b is (2, 3, 1)\n",
     "# step 2: a and b differ in the first and third dimensions, where one of them is 1,\n",
     "#         so broadcasting expands both to shape (2, 3, 2)\n",
    "a+b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,.,.) = \n",
       "  1  1\n",
       "  1  1\n",
       "  1  1\n",
       "\n",
       "(1 ,.,.) = \n",
       "  1  1\n",
       "  1  1\n",
       "  1  1\n",
       "[torch.FloatTensor of size 2x3x2]"
      ]
     },
     "execution_count": 79,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# manual broadcasting\n",
     "# or: a.view(1, 3, 2).expand(2, 3, 2) + b.expand(2, 3, 2)\n",
    "a.unsqueeze(0).expand(2, 3, 2) + b.expand(2,3,2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 80,
   "metadata": {},
   "outputs": [],
   "source": [
     "# expand allocates no extra space; values are produced only when needed, which can save a lot of memory\n",
    "e = a.unsqueeze(0).expand(10000000000000, 3,2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 3.1.3 Internal structure\n",
     "\n",
     "The data structure of a tensor is shown in Figure 3-1. A tensor is split into header information (Tensor) and storage (Storage). The header mainly records the tensor's shape (size), stride, data type (type) and so on, while the actual data is kept as a contiguous array. Since tensors often hold a huge number of elements, the header takes little memory; the memory footprint is dominated by the number of elements, i.e. the size of the storage.\n",
     "\n",
     "In general every tensor has a corresponding storage; a storage is a convenience interface wrapped around the raw data. Different tensors usually have different headers, but they may share the same underlying data. Two examples follow.\n",
     "\n",
     "![Figure 3-1: the data structure of a Tensor](imgs/tensor_data_structure.svg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 81,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       " 0.0\n",
       " 1.0\n",
       " 2.0\n",
       " 3.0\n",
       " 4.0\n",
       " 5.0\n",
       "[torch.FloatStorage of size 6]"
      ]
     },
     "execution_count": 81,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.arange(0, 6)\n",
    "a.storage()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       " 0.0\n",
       " 1.0\n",
       " 2.0\n",
       " 3.0\n",
       " 4.0\n",
       " 5.0\n",
       "[torch.FloatStorage of size 6]"
      ]
     },
     "execution_count": 82,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = a.view(2, 3)\n",
    "b.storage()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 83,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 83,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# an object's id can be regarded as its address in memory\n",
     "# the storages have the same id, i.e. they are one and the same storage\n",
    "id(b.storage()) == id(a.storage())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "   0  100    2\n",
       "   3    4    5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 84,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# when a changes, b changes too, because they share the same storage\n",
    "a[1] = 100\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 85,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       " 0.0\n",
       " 100.0\n",
       " 2.0\n",
       " 3.0\n",
       " 4.0\n",
       " 5.0\n",
       "[torch.FloatStorage of size 6]"
      ]
     },
     "execution_count": 85,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c = a[2:] \n",
    "c.storage()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 86,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(42430248, 42430240)"
      ]
     },
     "execution_count": 86,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "c.data_ptr(), a.data_ptr() # data_ptr returns the memory address of the tensor's first element\n",
     "# the difference is 8: the first elements are 2 apart and each float takes 4 bytes, 2*4=8"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 87,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "   0\n",
       " 100\n",
       "-100\n",
       "   3\n",
       "   4\n",
       "   5\n",
       "[torch.FloatTensor of size 6]"
      ]
     },
     "execution_count": 87,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "c[0] = -100 # the memory address of c[0] corresponds to that of a[2]\n",
    "a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 88,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 6666   100  -100\n",
       "    3     4     5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 88,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "d = t.Tensor(c.storage())\n",
    "d[0] = 6666\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 89,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 89,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# the following 4 tensors share the same storage\n",
    "id(a.storage()) == id(b.storage()) == id(c.storage()) == id(d.storage())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 90,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(0, 2, 0)"
      ]
     },
     "execution_count": 90,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a.storage_offset(), c.storage_offset(), d.storage_offset()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 91,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 91,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "e = b[::2, ::2] # take one element out of every 2 rows/columns\n",
    "id(e.storage()) == id(a.storage())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 92,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "((3, 1), (6, 2))"
      ]
     },
     "execution_count": 92,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.stride(), e.stride()"
   ]
  },
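   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The stride values above determine how a multi-dimensional index maps into the flat storage; a minimal sketch in plain Python (the helper `flat_index` is made up for illustration):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "def flat_index(idx, stride, offset=0):\n",
     "    '''Storage position = offset + sum(index_i * stride_i).'''\n",
     "    return offset + sum(i * s for i, s in zip(idx, stride))\n",
     "\n",
     "# b has stride (3, 1), so b[1, 2] lives at storage slot 1*3 + 2*1 = 5\n",
     "# e = b[::2, ::2] has stride (6, 2), so e[1, 1] lives at slot 1*6 + 1*2 = 8\n",
     "flat_index((1, 2), (3, 1)), flat_index((1, 1), (6, 2))"
    ]
   },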
  {
   "cell_type": "code",
   "execution_count": 93,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "False"
      ]
     },
     "execution_count": 93,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "e.is_contiguous()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "As these examples show, most operations do not modify the tensor's data at all; they only change the header. This saves memory and is fast, but it is worth keeping in mind when using them.\n",
     "Some operations leave a tensor non-contiguous; calling its `tensor.contiguous` method makes the data contiguous again, at the cost of copying it, so the result no longer shares storage with the original.\n",
     "As an exercise: advanced indexing, mentioned earlier, generally does not share storage, while basic indexing does. Why? (Hint: basic indexing can be implemented by changing only the tensor's offset, stride and size, without touching the storage.)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 3.1.4 Other tensor topics\n",
     "The topics below do not fit neatly into a subsection of their own, but the author believes they are still worth the reader's attention, so they are collected here."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Serialization\n",
     "Saving and loading tensors is simple: t.save and t.load do the job. Both accept a custom `pickle` module, and t.load can additionally remap a GPU tensor to the CPU or to a different GPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true,
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "if t.cuda.is_available():\n",
     "    a = a.cuda(1) # move a to GPU 1\n",
     "    t.save(a,'a.pth')\n",
     "\n",
     "    # load as b, placed on GPU 1 (because the tensor was on GPU 1 when it was saved)\n",
     "    b = t.load('a.pth')\n",
     "    # load as c, placed on the CPU\n",
     "    c = t.load('a.pth', map_location=lambda storage, loc: storage)\n",
     "    # load as d, placed on GPU 0\n",
     "    d = t.load('a.pth', map_location={'cuda:1':'cuda:0'})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Vectorization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Vectorized computation is a special form of parallelism: instead of executing one operation at a time, the same instruction (or batch of instructions) is applied to many data items at once, i.e. to a whole array or vector. Vectorization can dramatically speed up scientific computing. Python is a high-level language, convenient to use, but that convenience also means many operations are slow, `for` loops especially; native Python `for` loops should be avoided as much as possible in numerical code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 96,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def for_loop_add(x, y):\n",
    "    result = []\n",
    "    for i,j in zip(x, y):\n",
    "        result.append(i + j)\n",
    "    return t.Tensor(result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 97,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "10 loops, best of 3: 189 µs per loop\n",
      "10 loops, best of 3: 5.76 µs per loop\n"
     ]
    }
   ],
   "source": [
    "x = t.zeros(100)\n",
    "y = t.ones(100)\n",
    "%timeit -n 10 for_loop_add(x, y)\n",
    "%timeit -n 10 x + y"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The vectorized version is over 30 times faster here (189 µs vs 5.76 µs), so in practice you should prefer the built-in functions: they are implemented in C/C++ and benefit from low-level optimizations. Make vectorized thinking a habit when writing code."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "A few more points worth noting:\n",
     "- Most `t.function`s take an `out` parameter; when given, the result is written into the tensor passed as out.\n",
     "- `t.set_num_threads` sets the number of threads PyTorch uses for parallel CPU computation; it can be used to cap the number of CPUs PyTorch occupies.\n",
     "- `t.set_printoptions` sets the numeric precision and format used when printing tensors.\n",
     "Examples follow."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "16777216.0 16777216.0\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "(199999, 199998)"
      ]
     },
     "execution_count": 98,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "a = t.arange(0, 20000000)\n",
     "print(a[-1], a[-2]) # 32-bit float precision is limited: beyond 2^24 = 16777216, adding 1 changes nothing, so the values saturate\n",
     "b = t.LongTensor()\n",
     "t.arange(0, 200000, out=b) # a 64-bit LongTensor does not lose precision\n",
     "b[-1], b[-2]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 99,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0.5813  1.0527 -0.0117\n",
       " 0.8768  1.2595  0.0564\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 99,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.randn(2,3)\n",
    "a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 100,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "0.5813463926 1.0527025461 -0.0117204413\n",
       "0.8768243790 1.2595347166 0.0564190336\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 100,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.set_printoptions(precision=10)\n",
    "a"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 3.1.5 Putting it to work: linear regression"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Linear regression is a staple of introductory machine learning and is very widely used. It applies regression analysis from statistics to determine the quantitative relationship between two or more interdependent variables. Its model is $y = wx+b+e$, where the error $e$ follows a normal distribution with mean 0. First, the loss function of linear regression:\n",
    "$$\n",
    "loss = \\sum_i^N \\frac 1 2 ({y_i-(wx_i+b)})^2\n",
    "$$\n",
     "We then minimize this loss with stochastic gradient descent, updating the parameters $\\textbf{w}$ and $\\textbf{b}$ until the learned values are obtained."
   ]
  },
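   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "For reference, differentiating this loss gives the gradients used by the update (a short derivation sketch; $lr$ denotes the learning rate):\n",
     "$$\n",
     "\\frac{\\partial loss}{\\partial w} = \\sum_i^N x_i(wx_i+b-y_i), \\qquad\n",
     "\\frac{\\partial loss}{\\partial b} = \\sum_i^N (wx_i+b-y_i)\n",
     "$$\n",
     "$$\n",
     "w \\leftarrow w - lr\\cdot\\frac{\\partial loss}{\\partial w}, \\qquad\n",
     "b \\leftarrow b - lr\\cdot\\frac{\\partial loss}{\\partial b}\n",
     "$$"
    ]
   },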
  {
   "cell_type": "code",
   "execution_count": 101,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch as t\n",
    "%matplotlib inline\n",
    "from matplotlib import pyplot as plt\n",
    "from IPython import display"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 102,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
     "# fix the random seed so the outputs below are the same on different machines\n",
     "t.manual_seed(1000)\n",
     "\n",
     "def get_fake_data(batch_size=8):\n",
     "    ''' generate random data: y = x*2 + 3, plus some noise'''\n",
    "    x = t.rand(batch_size, 1) * 20\n",
    "    y = x * 2 + (1 + t.randn(batch_size, 1))*3\n",
    "    return x, y"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 103,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<matplotlib.collections.PathCollection at 0x7f097ff8eef0>"
      ]
     },
     "execution_count": 103,
     "metadata": {},
     "output_type": "execute_result"
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXQAAAD8CAYAAABn919SAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAE6tJREFUeJzt3X+MH3ed3/Hnq46BFdBzclml/gE1cMiInoQd7VlcoZSG\nA4ccIqZCp6DrNdeL5EMHUlCvhvhOunJVTzrqQq6tKipDcvG1EYQG40QUatIQhJCO0E3sxA7Bl5AG\nNRsT7xVMQF1Rx3n3j+9sbrPZ9c7ufr/f/XryfEhffWc+8xnPW+PZ185+vjPfSVUhSbrw/a21LkCS\n1B8GuiR1hIEuSR1hoEtSRxjoktQRBrokdYSBLkkdYaBLUkcY6JLUERe17ZhkHTAJTFXVe5LcAvxD\n4CdNl9+uqmPn+zcuvfTS2rp16wpLlaQXp/vuu++vq2p8qX6tAx24HngY+Ntz2vZW1e1t/4GtW7cy\nOTm5jE1KkpL8oE2/VkMuSbYAvw58djVFSZIGp+0Y+p8BHwWendf+J0keTHJjkpf2tzRJ0nIsGehJ\n3gOcrqr75i3aB7wB+BXgEuBji6y/J8lkksnp6enV1itJWkSbM/S3AO9N8jjweeCKJP+lqk5Vz8+B\nPwd2LrRyVR2oqomqmhgfX3JMX5K0QksGelXtq6otVbUVuAb4elX9kyQbAZIE2A2cGGilkqTzWs5V\nLvPdmmQcCHAM+GB/SpKkbjh8dIr9R07y5JkZNm0YY++ubezesXlg21tWoFfVN4BvNNNXDKAeSeqE\nw0en2HfoODNnzwEwdWaGfYeOAwws1L1TVJIGYP+Rk8+F+ayZs+fYf+TkwLZpoEvSADx5ZmZZ7f1g\noEvSAGzaMLas9n4w0CVpAPbu2sbY+nXPaxtbv469u7YNbJurucpFkrSI2Q8+R/YqF0lSe7t3bB5o\ngM/nkIskdYSBLkkdYaBLUkcY6JLUEQa6JHWEgS5JHWGgS1JHGOiS1BEGuiR1hIEuSR3ROtCTrEty\nNMmXm/nXJLk3yaNJbkvyksGVKUlaynLO0K8HHp4z/wngxqr6JeDHwHX9LEyStDytAj3JFuDXgc82\n8wGuAG5vuhyk96BoSdIaaXuG/mfAR4Fnm/lfBM5U1TPN/BPA8L5STJL0AksGepL3AKer6r6VbCDJ\nniSTSSanp6dX8k9Iklpoc4b+FuC9SR4HPk9vqOXfARuSzH6f+hZgaqGVq+pAVU1U1cT4+HgfSpYk\nLWTJQK+qfVW1paq2AtcAX6+q3wTuAd7fdLsWuGNgVUqSlrSa69A/BvzzJI/SG1O/qT8lSZJWYlmP\noKuqbwDfaKYfA3b2vyRJ0kp4p6gkdYSBLkkdYaBLUkcY6JLUEQa6JHWEgS5JHWGgS1JHGOiS1BEG\nuiR1hIEuSR1hoEtSRxjoktQRBrokdYSBLkkdYaBLUkcY6JLUEW0eEv2yJN9J8kCSh5L8cdN+S5L/\nleRY89o++HIlSYtp88SinwNXVNXPkqwHvpXkq82yvVV1++DKkyS1tWSgV1UBP2tm1zevGmRRkqTl\nazWGnmRdkmPAaeCuqrq3WfQnSR5McmOSlw6sSknSkloFelWdq6rtwBZgZ5JfBvYBbwB+BbgE+NhC\n6ybZk2QyyeT09HSfypYkzbesq1yq6gxwD3BlVZ2qnp8Dfw7sXGSdA1U1UVUT4+Pjq69YkrSgNle5\njCfZ0EyPAe8EvpdkY9MWYDdwYpCFSpLOr81VLhuBg0nW0fsF8IWq+nKSrycZBwIcAz44wDolSUto\nc5XLg8COBdqvGEhFkqQV8U5RSeqINkMuki5Qh49Osf/ISZ48M8OmDWPs3bWN3Ts2r3VZGhADXeqo\nw0en2HfoODNnzwEwdWaGfYeOAxjqHeWQi9RR+4+cfC7MZ82cPcf+IyfXqCINmoEuddSTZ2aW1a4L\nn4EuddSmDWPLateFz0CXOmrvrm2MrV
/3vLax9evYu2vbGlWkQfNDUamjZj/49CqXFw8DXeqw3Ts2\nG+AvIg65SFJHGOiS1BEGuiR1hIEuSR1hoEtSRxjoktQRBrokdUSbR9C9LMl3kjyQ5KEkf9y0vybJ\nvUkeTXJbkpcMvlxJ0mLanKH/HLiiqt4EbAeuTPJm4BPAjVX1S8CPgesGV6YkaSlLBnr1/KyZXd+8\nCrgCuL1pP0jvQdGSpDXSagw9ybokx4DTwF3A94EzVfVM0+UJwPuLJWkNtQr0qjpXVduBLcBO4A1t\nN5BkT5LJJJPT09MrLFOStJRlXeVSVWeAe4BfBTYkmf1yry3A1CLrHKiqiaqaGB8fX1WxkqTFtbnK\nZTzJhmZ6DHgn8DC9YH9/0+1a4I5BFSlJWlqbr8/dCBxMso7eL4AvVNWXk3wX+HySfw0cBW4aYJ2S\npCUsGehV9SCwY4H2x+iNp0uSRoB3ikpSRxjoktQRPoJOugAcPjrls0G1JANdGnGHj06x79BxZs6e\nA2DqzAz7Dh0HMNT1PA65SCNu/5GTz4X5rJmz59h/5OQaVaRRZaBLI+7JMzPLateLl4EujbhNG8aW\n1a4XLwNdGnF7d21jbP2657WNrV/H3l3b1qgijSo/FJVG3OwHn17loqUY6NIFYPeOzQa4luSQiyR1\nhIEuSR1hoEtSRxjoktQRBrokdYSBLkkd0eYRdK9Kck+S7yZ5KMn1TfvHk0wlOda8rhp8uZKkxbS5\nDv0Z4Per6v4krwTuS3JXs+zGqvq3gytPktRWm0fQnQJONdM/TfIw4B0OkjRiljWGnmQrveeL3ts0\nfTjJg0luTnJxn2uTJC1D60BP8grgi8BHqupp4NPA64Dt9M7gP7nIenuSTCaZnJ6e7kPJkqSFtAr0\nJOvphfmtVXUIoKqeqqpzVfUs8Blg50LrVtWBqpqoqonx8fF+1S1JmqfNVS4BbgIerqpPzWnfOKfb\n+4AT/S9PktRWm6tc3gL8FnA8ybGm7Q+ADyTZDhTwOPC7A6lQktRKm6tcvgVkgUVf6X85kqSV8k5R\nSeoIA12SOsJAl6SOMNAlqSMMdEnqCANdkjrCQJekjjDQJakjDHRJ6ggDXZI6wkCXpI4w0CWpIwx0\nSeqINl+fK3Xa4aNT7D9ykifPzLBpwxh7d21j9w4fm6sLj4GuF7XDR6fYd+g4M2fPATB1ZoZ9h44D\nGOq64Djkohe1/UdOPhfms2bOnmP/kZNrVJG0cm0eQfeqJPck+W6Sh5Jc37RfkuSuJI807xcPvlyp\nv548M7OsdmmUtTlDfwb4/ap6I/Bm4ENJ3gjcANxdVa8H7m7mpQvKpg1jy2qXRtmSgV5Vp6rq/mb6\np8DDwGbgauBg0+0gsHtQRUqDsnfXNsbWr3te29j6dezdtW2NKpJWblkfiibZCuwA7gUuq6pTzaIf\nApf1tTJpCGY/+PQqF3VB60BP8grgi8BHqurp5G+eG11VlaQWWW8PsAfg1a9+9eqqlQZg947NBrg6\nodVVLknW0wvzW6vqUNP8VJKNzfKNwOmF1q2qA1U1UVUT4+Pj/ahZkrSANle5BLgJeLiqPjVn0Z3A\ntc30tcAd/S9PktRWmyGXtwC/BRxPcqxp+wPgT4EvJLkO+AHwG4MpUZLUxpKBXlXfArLI4nf0txx1\nlbfXS4Pnrf8aOG+vl4bDW/81cN5eLw2Hga6B8/Z6aTgMdA2ct9dLw2Gga+C8vV4aDj8U1cB5e700\nHAa6hsLb66XBc8hFkjrCQJekjjDQJakjDHRJ6ggDXZI6wkCXpI4w0CWpIwx0SeoIA12SOqLNI+hu\nTnI6yYk5bR9PMpXkWPO6arBlSpKW0uYM/RbgygXab6yq7c3rK/0tS5K0XEsGelV9E/jREGqRJK3C\nasbQP5zkwWZI5uLFOiXZk2QyyeT09PQqNidJOp+VBvqngdcB24FTwCcX61hVB6pqoqomxsfHV7g5\nSd
JSVhToVfVUVZ2rqmeBzwA7+1uWJGm5VhToSTbOmX0fcGKxvpKk4VjyARdJPge8Hbg0yRPAvwTe\nnmQ7UMDjwO8OsEZJUgtLBnpVfWCB5psGUIskaRW8U1SSOsJAl6SOMNAlqSMMdEnqCANdkjrCQJek\njjDQJakjDHRJ6ggDXZI6wkCXpI4w0CWpIwx0SeoIA12SOsJAl6SOMNAlqSMMdEnqiCUDPcnNSU4n\nOTGn7ZIkdyV5pHm/eLBlSpKW0uYM/RbgynltNwB3V9XrgbubeUnSGloy0Kvqm8CP5jVfDRxspg8C\nu/tclyRpmVY6hn5ZVZ1qpn8IXLZYxyR7kkwmmZyenl7h5iRJS1n1h6JVVUCdZ/mBqpqoqonx8fHV\nbk6StIiVBvpTSTYCNO+n+1eSJGklVhrodwLXNtPXAnf0pxxJ0kq1uWzxc8BfAtuSPJHkOuBPgXcm\neQT4tWZekrSGLlqqQ1V9YJFF7+hzLZKkVfBOUUnqCANdkjrCQJekjlhyDF09h49Osf/ISZ48M8Om\nDWPs3bWN3Ts2r3VZkvQcA72Fw0en2HfoODNnzwEwdWaGfYeOAxjqkkaGQy4t7D9y8rkwnzVz9hz7\nj5xco4ok6YU8Q2/hyTMzy2rvF4d5JC2HZ+gtbNowtqz2fpgd5pk6M0PxN8M8h49ODWybki5sBnoL\ne3dtY2z9uue1ja1fx95d2wa2TYd5JC2XQy4tzA5zDHP4Y62GeSRduAz0lnbv2DzU8etNG8aYWiC8\nBznMI+nC5pDLiFqLYR5JFzbP0EfUWgzzSLqwGegjbNjDPJIubA65SFJHGOiS1BGrGnJJ8jjwU+Ac\n8ExVTfSjKEnS8vVjDP0fVdVf9+HfkSStgkMuktQRqw30Ar6W5L4kexbqkGRPkskkk9PT06vcnCRp\nMasN9LdW1eXAu4EPJXnb/A5VdaCqJqpqYnx8fJWbkyQtZlVj6FU11byfTvIlYCfwzX4UNsuvkJWk\ndlZ8hp7k5UleOTsNvAs40a/CwK+QlaTlWM2Qy2XAt5I8AHwH+G9V9d/7U1aPXyErSe2teMilqh4D\n3tTHWl7Ar5CVpPZG+rLFtXhSkCRdqEY60P0KWUlqb6S/bdGvkJWk9kY60MGvkJWktkZ6yEWS1J6B\nLkkdYaBLUkcY6JLUEQa6JHVEqmp4G0umgR8sY5VLgVF+eMao1wejX+Oo1wejX+Oo1wejX+Oo1/d3\nq2rJr6sdaqAvV5LJUX6s3ajXB6Nf46jXB6Nf46jXB6Nf46jX15ZDLpLUEQa6JHXEqAf6gbUuYAmj\nXh+Mfo2jXh+Mfo2jXh+Mfo2jXl8rIz2GLklqb9TP0CVJLY1EoCe5MsnJJI8muWGB5S9Ncluz/N4k\nW4dY26uS3JPku0keSnL9An3enuQnSY41rz8aVn1zang8yfFm+5MLLE+Sf9/swweTXD7E2rbN2TfH\nkjyd5CPz+gx9Hya5OcnpJCfmtF2S5K4kjzTvFy+y7rVNn0eSXDvE+vYn+V7zf/ilJBsWWfe8x8OA\na/x4kqk5/5dXLbLueX/uB1jfbXNqezzJsUXWHco+7KuqWtMXsA74PvBa4CXAA8Ab5/X5PeA/NdPX\nALcNsb6NwOXN9CuBv1qgvrcDX17j/fg4cOl5ll8FfBUI8Gbg3jX8//4hvetq13QfAm8DLgdOzGn7\nN8ANzfQNwCcWWO8S4LHm/eJm+uIh1fcu4KJm+hML1dfmeBhwjR8H/kWL4+C8P/eDqm/e8k8Cf7SW\n+7Cfr1E4Q98JPFpVj1XV/wM+D1w9r8/VwMFm+nbgHUkyjOKq6lRV3d9M/xR4GLgQv8/3auAvqufb\nwIYkG9egjncA36+q5dxgNhBV9U3gR/Oa5x5rB4HdC6y6C7irqn5UVT8G7gKuHEZ9VfW1qnqmmf02\nsKXf212ORfZhG21+7lftfPU1GfIbwOf6vd21MgqBvhn433Pmn+CF
gflcn+Zg/gnwi0Opbo5mqGcH\ncO8Ci381yQNJvprk7w21sJ4CvpbkviR7FljeZj8PwzUs/gO01vsQ4LKqOtVM/5Dew9DnG5V9+Tv0\n/upayFLHw6B9uBkWunmRYatR2If/AHiqqh5ZZPla78NlG4VAvyAkeQXwReAjVfX0vMX30xtCeBPw\nH4DDw64PeGtVXQ68G/hQkretQQ3nleQlwHuB/7rA4lHYh89Tvb+7R/IysCR/CDwD3LpIl7U8Hj4N\nvA7YDpyiN6wxij7A+c/OR/5nar5RCPQp4FVz5rc0bQv2SXIR8AvA/xlKdb1trqcX5rdW1aH5y6vq\n6ar6WTP9FWB9kkuHVV+z3anm/TTwJXp/0s7VZj8P2ruB+6vqqfkLRmEfNp6aHYpq3k8v0GdN92WS\n3wbeA/xm80vnBVocDwNTVU9V1bmqehb4zCLbXut9eBHwj4HbFuuzlvtwpUYh0P8n8Pokr2nO4K4B\n7pzX505g9kqC9wNfX+xA7rdmnO0m4OGq+tQiff7O7Jh+kp309uswf+G8PMkrZ6fpfXB2Yl63O4F/\n2lzt8mbgJ3OGFoZl0TOitd6Hc8w91q4F7ligzxHgXUkuboYT3tW0DVySK4GPAu+tqv+7SJ82x8Mg\na5z72cz7Ftl2m5/7Qfo14HtV9cRCC9d6H67YWn8q2+TyVfSuHvk+8IdN27+id9ACvIzen+mPAt8B\nXjvE2t5K78/uB4Fjzesq4IPAB5s+HwYeovdJ/beBvz/k/ffaZtsPNHXM7sO5NQb4j80+Pg5MDLnG\nl9ML6F+Y07am+5DeL5dTwFl6Y7jX0fts5m7gEeB/AJc0fSeAz85Z93ea4/FR4J8Nsb5H6Y09zx6L\ns1d/bQK+cr7jYYg1/ufmGHuQXkhvnF9jM/+Cn/th1Ne03zJ77M3puyb7sJ8v7xSVpI4YhSEXSVIf\nGOiS1BEGuiR1hIEuSR1hoEtSRxjoktQRBrokdYSBLkkd8f8Bz/vgV2nenfEAAAAASUVORK5CYII=\n",
      "text/plain": [
       "<matplotlib.figure.Figure at 0x7f0982274898>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# visualize the generated x-y distribution\n",
    "x, y = get_fake_data()\n",
    "plt.scatter(x.squeeze().numpy(), y.squeeze().numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 104,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXwAAAD8CAYAAAB0IB+mAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3Xl8VOXZ//HPRQgQ9n0LRPYdAhhBUXGtgBuIy09q3S2t\nfXxaLSCgtVKXIi61Ln2suFRbW5UdwQVxobhUfIKQhH2TLQQSlgCBQJLJ/ftjBp4AkzAksyXzfb9e\neWXmzJmZy+PhOyf3OXNf5pxDRESqvmqRLkBERMJDgS8iEiMU+CIiMUKBLyISIxT4IiIxQoEvIhIj\nFPgiIjFCgS8iEiMU+CIiMaJ6ON+sadOmrl27duF8SxGRMjkHOw8cYXfeUWrEVaNNowTq1AxrNJ7W\n0qVLdzvnmlX0dcL6X9WuXTtSU1PD+ZYiIqVK25bLmOlp7MnOY+zAJB66snvUhT2AmW0JxusE/F9m\nZnFAKpDpnLvazNoD7wFNgKXArc65gmAUJSISSgVFxbz0xXr+Z9FGmteryd/vGsDgLhU+gA6aOcsy\neWbBWnbk5tO6YQLVEuo3DsbrnskY/m+A1SXuTwGed851AvYBdwejIBGRUFqddYDhf/mGl77YwIi+\niXxy/+CoC/uJszLIzM3HAZm5+VSv3+ysYLx2QIFvZm2Aq4DXffcNuBSY4VvlbWBEMAoSEQmFIk8x\nL3+xnmtf/pqcg0d57bYUnrspmQYJ8ZEu7QTPLFhLfqHnxIVmQbnAJtAhnT8DDwL1fPebALnOuSLf\n/e1AYjAKEhEJtg3ZeYyZtpy07fu5uk8rHh/ei0Z1akS6LL925OaH7LVPG/hmdjWQ7ZxbamYXn+kb\nmNloYDRAUlLSGRcoIlJenmLH3775kWcWrKV2jThe/mk/ru7TOtJllal1wwQyQxT6gfyZcD5wrZlt\nxnuS9lLgBaChmR37wGgDZPp7snNuqnMuxTmX0qxZ9IyTiUjVtmXPIW6e+h+e+HA1F3ZuxoIHBkd9\n2AOMG9KVhPi4Exc6VxyM1z7tEb5zbiIwEcB3hD/WOXeLmU0HbsD7IXA7MDcYBYmIVIRzjneWbGXy\nR6uJq2Y8d2MyI/sn4j31GP1G9POOjpe8SmfbgZzwXpbpx3jgPTN7AlgGvBGMgkREyiszN5/xM9L5\nesNuLuzclCnX96F1w4RIl3XGRvRLPB78ADbxwN5gvO4ZBb5zbhGwyHd7EzAgGEWIiFSEc47pS7fz\n+LxVeJzjyet68dMBSZXmqD5cou8rZSIiZyD7wBEmzsrg8zXZDGjfmGdvSCapSe1IlxWVFPgiUik5\n55iXnsUjc1ZwpNDDI1f34M5B7ahWTUf1pVHgi0ilsyfvKI/MXcFHGTvpl9SQZ29MpmOzupEuK+op\n8EWkUlmwcicPz87gQH4RDw7tyugLO1A9TjO9B0KBLyKVwv7DhUyat5LZyzLp2bo+79yTTLeW9SNd\nVqWiwBeRqLdobTbjZ6azO6+A31zWmfsu7US8jurPmAJfRKJW3tEinvxwFe9+v40uLery+m3n0LtN\ng0iXVWkp8EUkKn27cTcPzkhnR24+v7ioAw9c3oVaJ085IGdEgS8iUSW/wMOUT9bw1rebadekNtN/\neR5nnxWU/h8xT4EvIlFj6Za9jJ2ezo+7D3HHoHY8OLQrtWsopoJFW1JEIu5IoYfnP1vHa4s30apB\nAv/6+UAGdWwa6bKqHAW+iERUxvb9/HbactZn5zFqQFsevqoHdaOwkXhVoK0qIhFR6Cnm5S828PKX\nG2hatwZ/u/Mc9h8uZMjzi49PCzxuSNcTZo2UilHgi0jYrdl5gDHT0li54wDX9Utk0jU9+XJtNhNn\nZRzv55qZm8/EWRkACv0gUeCLSNgUeYqZ+tUm/rxwPfVqVeevPzubob1aAv6bd+cXenhmwVoFfpAo\n8EUkLDbl5DFmehrLtuYyrFdLnhjRiy
Z1ax5/vLTm3aFs6h1rFPgiElLFxY63vt3MlE/WUCs+jhdu\n7su1ya1PaU5SWvPuytixKlppMgoRCZltew8z6rXveGz+Ks7v1JRPHxjM8L7++8v6a96dEB/HuCFd\nw1VulXfaI3wzqwUsBmr61p/hnHvUzN4CLgL2+1a9wzm3PFSFikjl4Zzj3e+38cSHq6hmxtM39OHG\ns9uU2XLQX/NuXaUTXIEM6RwFLnXO5ZlZPPC1mX3se2ycc25G6MoTkcoma38+42dmsHhdDud3asLT\nNySTGOCwzMnNuyW4Thv4zjkH5Pnuxvt+XCiLEpHKxznHrB8ymTRvJUUex+PDe3LLwLPUcjCKBDSG\nb2ZxZrYcyAYWOueW+B560szSzex5M6tZynNHm1mqmaXm5OQEqWwRiSY5B48y+h9LGTM9jW4t6/Hx\nby7k1vPUXzbamPcAPsCVzRoCs4H/BvYAO4EawFRgo3PusbKen5KS4lJTU8tfrYhEnQ/Ts/jdnAwO\nFXgYd0VX7rqgPXEK+qAys6XOuZSKvs4ZXZbpnMs1sy+Boc65Z32Lj5rZ34CxFS1GRCqPfYcKeGTu\nCuanZ5HcpgHP3ZRMp+b1Il2WlCGQq3SaAYW+sE8AfgJMMbNWzrks8552HwGsCHGtIhIlPlu1iwmz\nMtifX8DYK7rwy4s6qpF4JRDIEX4r4G0zi8M75j/NOTffzL7wfRgYsBz4ZQjrFJEocOBIIY/NW8WM\npdvp1rIef79rAD1aq5F4ZRHIVTrpQD8/yy8NSUUiEpW+Wp/DgzPSyT54lPsu6cSvL+tMjeo6qq9M\nNLWCiJTp0NEiJn+8mne+20rHZnWYee8g+rZtGOmypBwU+CJSqiWb9jBuRjrb9h3m5xe2Z8wVXdVI\nvBJT4IvIKY74piV+85sfaduoNu+PPo8B7dVIvLJT4IvICZZt3ceY6WlsyjnEreeexYRh3aijloNV\ngv4viggAR4s8vPj5el5ZtJGW9Wvxzt0DuaCzGolXJQp8EWHljv2MmZbGmp0HufHsNjxyTQ/q14qP\ndFkSZAp8kRhW6CnmlUUbefHz9TSqU4M3bk/hsu4tIl2WhIgCX6SKmbMsM6A55dfvOsiY6Wmkb9/P\ntcmt+cO1PWlUp0YEKpZw0bcmRKqQOcsymTgrg8zcfByQmZvPxFkZzFmWeXwdT7Hj1X9vZNgLX7Ei\n09u/aOmWffx7nWazreoU+CJVyDML1pJf6DlhWb7vEkuAH3cf4qZX/8Pkj9dQ7BzFvsly/X0wSNWj\nwBepQnb4aQIO3kB/+9vNDHthMet3HaRR7fjjYX9MyQ8GqZoU+CJVSOtSWgnWqF6NRz9YycD2Tfj0\ngYvIPVzod73SPjCkalDgi1Qh44Z0JcHP1AcGPDWyN2/deQ4tG9Qq9YOhtOVSNSjwRaqQEf0SmTyy\nNy3r1zq+rHPzunz224u4eUAS3vYV/j8YEuLjGDeka1jrlfDSZZkiVYhzDofjcEERteKrMWFoN27z\n01v22GWagVy+KVWHAl+kitidd5SHZ2ewYOUu+ic15Lmb+tK+aZ1S1x/RL1EBH2MU+CJVwMcZWTw8\nZwV5R4qYOKwb91zYQY3E5RSB9LStBSwGavrWn+Gce9TM2gPvAU2ApcCtzrmCUBYrIifKPVzAox+s\nZO7yHfRO9DYS79JCjcTFv0CO8I8Clzrn8swsHvjazD4Gfgs875x7z8z+CtwNvBLCWkWkhC/W7GLC\nzAz2Hirggcu78KtLOhKvRuJShkB62jogz3c33vfjgEuBn/qWvw1MQoEvEnIHjxTy+PxVTEvdTtcW\n9XjzjnPoldgg0mVJJRDQGL6ZxeEdtukE/AXYCOQ654p8q2wHdPZHJMS+2bCbB2ekk7U/n3sv7sj9\nl3emZnW1HJTABBT4zjkP0NfMGgKzgW6BvoGZjQZGAyQlJZWnRpGYd7igiKc+XsPf/7OFDk3rMOPe\nQf
RPahTpsqSSOaOrdJxzuWb2JXAe0NDMqvuO8tsAfmddcs5NBaYCpKSkOH/riIiXv6mN2zRKYMz0\nNLbsOcxd57f3fmmqho7q5cwFcpVOM6DQF/YJwE+AKcCXwA14r9S5HZgbykJFqrpjUxsfm+0yMzef\nsdPT8BQ72jRO4L3R53JuhyYRrlIqs0CO8FsBb/vG8asB05xz881sFfCemT0BLAPeCGGdIlWev6mN\ni4odtWvE8fFvBlNXjcSlggK5Sicd6Odn+SZgQCiKEolFpc1UmV/gUdhLUOiiXZEoUVqoN0hQM3EJ\nDh02iERYkaeYVxdv4uDRIr+Pm2ZIkCBR4ItE0IbsPMZMTyNtW26p65TWrETkTGlIRyQCPMWO17/a\nxFUvfsWWPYd4aVQ/EtWUREJMR/giYbZlzyHGTU/n+817ubx7C/44shfN69XCU+xOuCwT1JREgkuB\nLxImzjneWbKVyR+tJq6a8eyNyVzfP/F4Fyo1JZFQU+CLhEFmbj7jZ6Tz9YbdXNi5KU/f0IdWDU4d\nqlFTEgklBb5ICDnnmLF0O4/NW4XHOZ68rhc/LdFbViScFPgiIZJ94AgTZ2Xw+ZpsBrRvzLM3JJPU\npHaky5IYpsAXCTLnHPPSs/j93BXkF3h45Ooe3Dno1EbiIuGmwBcJor2HCnhkzgo+zMiiX1JDnr0x\nmY7N6ka6LBFAgS8SNAtW7uTh2Rnszy/kwaFdGX1hB6qr5aBEEQW+SAXtP1zIH+atZNayTHq2rs87\n9wykW8v6kS5L5BQKfJEKWLQ2m/Ez09mdV8CvL+vMfZd0okZ1HdVLdFLgi5RD3tEinvxwNe9+v5Uu\nLery+m3n0LuNGolLdFPgi5yh/2zcw7gZaWTm5vOLizrwwOVdqBWvloMS/RT4IgHKL/Aw5ZM1vPXt\nZto1qc2MX57H2Wc1jnRZIgFT4IsEYOmWvYydns6Puw9xx6B2jB/aTY3EpdIJpIl5W+DvQAvAAVOd\ncy+Y2STg50COb9WHnHMfhapQkUg4WuTh+YXrmbp4I60aJPCvnw9kUMemkS5LpFwCOcIvAsY4534w\ns3rAUjNb6Hvseefcs6ErTyRyMrbvZ8z05azblceoAW156Mru1KuldoNSeQXSxDwLyPLdPmhmqwFN\n5ydVVqGnmJe/2MDLX26gad0a/O3Oc7ika/NIlyVSYWc0hm9m7YB+wBLgfOA+M7sNSMX7V8A+P88Z\nDYwGSEpKqmC5IqG1ZucBxkxLY+WOA1zXL5FJ1/SkQW0d1UvVYM65wFY0qwv8G3jSOTfLzFoAu/GO\n6z8OtHLO3VXWa6SkpLjU1NQKliwSfJ5ix9TFm3h+4Trq1arOk9f1ZmivlpEuSwQAM1vqnEup6OsE\ndIRvZvHATOCfzrlZAM65XSUefw2YX9FiRCJhU463kfiyrbkM69WSJ0b0okndmpEuSyToArlKx4A3\ngNXOuT+VWN7KN74PcB2wIjQlioRGcbHjrW83M+WTNdSKj+OFm/tybXJrNSeRKiuQI/zzgVuBDDNb\n7lv2EDDKzPriHdLZDPwiJBWKhMC2vYcZOz2NJT/u5dJuzXlqZG+a168V6bJEQiqQq3S+Bvwd8uia\ne6l0nHO8+/02nvxwFWbG0zf04caz2+ioXmKCvmkrMSNrfz7jZ2aweF0O53dqwtM3JJPY8NRG4iJV\nlQJfqjznHLN+yGTSvJUUeRyPD+/JLQPPUstBiTkKfKnScg4e5aHZGSxctYtz2jXimRuSade0TqTL\nEokIBb5UWR+mZ/G7ORkcKvDwu6u6c+f57YnTUb3EMAW+VDn7DhXwyNwVzE/PIrlNA567KZlOzetF\nuiyRiFPgS5Xy2apdTJiVwf78AsZe0YVfXtRRjcRFfBT4UiUcOFLIY/NWMWPpdrq1rMfbd51Dz9Zq\nOShSkgJfKr2v1ufw4Ix0sg8e5b5LOvHryzqrkbiIHwp8qbQOHS3i
jx+t5p9LttKxWR1m3juIvm0b\nRroskailwJdKacmmPYybkc62fYe554L2jB3SVY3ERU5DgS+VypFCD88sWMub3/xI20a1eX/0eQxo\nr0biIoFQ4EulsWzrPsZMT2NTziFuPfcsJgzrRp2a1ZmzLJNnFqxlR24+rRsmMG5IV0b0U1M2kZMp\n8CVsyhvMR4s8vPDZev767420rF+Ld+4eyAWdmx5/zYmzMsgv9ACQmZvPxFkZAAp9kZMo8CUsyhvM\nK3fsZ8y0NNbsPMiNZ7fhkWt6UL9EI/FnFqw9/prH5PuGfRT4IidS4EtYnGkwF3qKeWXRRl78fD2N\n6tTgjdtTuKx7i1PW25Gb7/f9SlsuEssU+BIWZxLM63YdZMy0NDIy93Ntcmv+cG1PGtWp4ff5rRsm\nkOnnNVpr2mORU+jbKRIWpQVwyeWeYser/97I1S9+TWZuPq/c0p8XR/UrNewBxg3pSsJJl2MmxMcx\nbkjX4BQuUoWcNvDNrK2ZfWlmq8xspZn9xre8sZktNLP1vt+NQl+uVFanC+Yfdx/iplf/w+SP13BJ\nt2Z8+sBghvVuddrXHdEvkckje5PYMAEDEhsmMHlkb43fi/hhzrmyVzBrBbRyzv1gZvWApcAI4A5g\nr3PuKTObADRyzo0v67VSUlJcampqcCqXSsffVTrXJrfmH99tYfLHq6kRV43HhvdieF81EhcpycyW\nOudSKvo6gfS0zQKyfLcPmtlqIBEYDlzsW+1tYBFQZuBLbBvRL/GEI+/t+w7zszeW8O3GPVzUpRlT\nru9DywZqJC4SKmd00tbM2gH9gCVAC9+HAcBO4NRLKET8cM7x/v9u44kPV+OcY/LI3tx8Tlsd1YuE\nWMCBb2Z1gZnA/c65AyX/cTrnnJn5HRsys9HAaICkpKSKVSuV3q4DRxg/M51Fa3M4r0MTnr6hD20b\n1450WSIxIaDAN7N4vGH/T+fcLN/iXWbWyjmX5Rvnz/b3XOfcVGAqeMfwg1CzVELOOeYu38Hv566g\nwFPMH67tya3nqpG4SDidNvDNeyj/BrDaOfenEg99ANwOPOX7PTckFUqltzvvKA/PzmDByl30T2rI\nczf1pb0aiYuEXSBH+OcDtwIZZrbct+whvEE/zczuBrYAN4WmRIkG5Z0H5+OMLB6es4K8I0VMHNaN\ney7soEbiIhESyFU6XwOl/Qu9LLjlSDQqzzw4uYcLePSDlcxdvoPeid5G4l1aqJG4SCRpagU5rTOd\nB+fLNdmMn5nO3kMFPHB5F351SUfi1UhcJOIU+HJagc6Dc/BIIU/MX837qdvo2qIeb95xDr0S1Uhc\nJFoo8OW0Apmg7JsNu3lwRjpZ+/O59+KO3H95Z2pWV8tBkWiiv7PltMqaB+dwQRG/n7uCW15fQs3q\n1Zhx7yDGD+2msBeJQjrCl9M6Nk5/8lU6bRolMOyFr9iy5zB3nd/e+8FQQ0EvEq0U+BKQkvPgHCn0\n8KeF63hg2nISGybw3uhzObdDkwhXKCKno8CXM5K2LZcx09PYkJ3HTwcm8dCV3albU7uRSGWgf6kS\nkIKiYl7+Yj1/WbSRZnVr8vZdA7ioS7NIlyUiZ0CBL6e1OusAY6alsSrrANf3b8Pvr+lBg4T40z9R\nRKKKAl9KVeQp5tXFm/jzZ+tokBDP1FvP5oqeLSNdloiUkwJf/NqQnceY6Wmkbcvlqj6teHx4LxqX\n0VtWRKKfAl9OUFzsePObH3lmwVoSasTx0qh+XJPcOtJliUgQKPDluC17DjFuejrfb97L5d2b88eR\nvWleTy0HRaoKBb7gnOOdJVuZ/NFq4sx49sZkru+fqJaDIlWMAj/G7cjNZ/zMdL5av5sLOzdlyvV9\nTpgjR0SqDgV+jHLOMWPpdh6btwqPczwxohe3DEzSUb1IFabAj0HZB44wcVYGn6/JZkC7xjx7YzJJ\nTWqXu6uViFQOCvwYMy9tB4/M
XUF+gYffXdWdu85vT7VqVq6uViJSuZx2emQze9PMss1sRYllk8ws\n08yW+36uDG2ZUlF7DxXwX//8gf9+dxlnNanDh7++kHsu7EA1X3/ZsrpaiUjVEMgR/lvAy8DfT1r+\nvHPu2aBXJEG3YOVOHp6dwf78QsYN6covBneg+kktBwPtaiUilVcgTcwXm1m70JciwfavJVt4fP5q\n8gs9xMeZr79sJ7/rBtLVSkQqt4p0vLrPzNJ9Qz6NSlvJzEabWaqZpebk5FTg7eRMTP5oNQ/NXnF8\nmKbQ43jpiw3MWZbpd/2yulqJSNVQ3sB/BegI9AWygOdKW9E5N9U5l+KcS2nWTNPphlre0SImzsrg\n1cWbTnmsrDH5Ef0SmTyyN4kNEzAgsWECk0f21glbkSqkXFfpOOd2HbttZq8B84NWkZTbfzbuYdyM\nNL9DM8eUNSZfsquViFQ95TrCN7NWJe5eB6wobV0JvfwCD5M+WMmo176jejVj+i/OI7GUsXeNyYvE\nrtMe4ZvZu8DFQFMz2w48ClxsZn0BB2wGfhHCGqUMS7fsY+z0NH7cfYg7BrXjwaFdqV2jOuOGdD3h\nunrQmLxIrAvkKp1Rfha/EYJa5AwcLfLw/ML1TF28kVYNEvjXPQMZ1Knp8cePDc3om7Micoy+aVsJ\nZWzfz5jpy1m3K49RA9ry0JXdqVfr1JaDGpMXkZIU+GEQrDlqCj3FvPzFBv7y5Qaa1K3B3+48h0u6\nNg9BxSJSFSnwQyxYc9Ss3XmQMdOXsyLzANf1S2TSNT1pUFuNxEUkcAr8ECtrjppAAt9T7Ji6eBPP\nL1xHvVrV+evPzmZoLzUSF5Ezp8APsYrMUbMpx9tIfNnWXIb1askTI3rRpG7NYJcoIjFCgR9i5Zmj\nprjY8da3m3l6wRpqVo/jhZv7cm1yazUnEZEKqchcOhKAM52jZtvew/z09e94bP4qBnVsyqcPDGZ4\nX/WXFZGK0xF+iAV6Pbxzjne/38aTH67CzHj6+j7cmNJGQS8iQaPAD4PTXQ+ftT+f8TMzWLwuh/M7\nNWHK9X1o06h2GCsUkVigwA8Tf9fiD+/bmlk/ZDJp3kqKPI7Hh/fkloFnHe9CJSISTAr8MPB3Lf6E\nmem8/vUmVmQeIOWsRjx7YzLtmtaJcKUiUpUp8MPA37X4R4qKWZF5gIev7M5dF7QnTkf1IhJiCvww\nKOua+58P7hDGSkQklumyzDAo7Zr70uasFxEJBQV+iB04Uug32DU3vYiEmwI/hL5an8PQ5xezdOs+\nftKjBa0b1FK/WBGJGI3hh8Cho0VM/ng173y3lY7N6jDz3kH0bdsw0mWJSIwLpMXhm8DVQLZzrpdv\nWWPgfaAd3haHNznn9oWuzMpjyaY9jJuRzrZ9h7nngvaMHdKVWidNrSAiEgmBDOm8BQw9adkE4HPn\nXGfgc9/9mHak0MPj81dx82vfAfD+6PP43dU9FPYiEjUC6Wm72MzanbR4ON7G5gBvA4uA8UGsq1JZ\nvi2X305bzqacQ9x67llMGNaNOjW9mzZY3a4ioTLXLiKnKu8YfgvnXJbv9k6gRZDqqVSOFnl48fP1\nvLJoIy3r1+KduwdyQef/ayQerG5XkVCZaxcR/yp8lY5zzgGutMfNbLSZpZpZak5OTkXfLmqs3LGf\n4S9/w1++3MgNZ7fhkwcGnxD2UHa3q2hXmWsXEf/Ke4S/y8xaOeeyzKwVkF3ais65qcBUgJSUlFI/\nGCqLIk8xryzayAufr6dRnRq8cXsKl3X3/wdORbpdRVplrl1E/CvvEf4HwO2+27cDc4NTTnRbv+sg\nI1/5lucWruPK3q349P7BpYY9lP4N27K6XUWLyly7iPh32sA3s3eB/wBdzWy7md0NPAX8xMzWA5f7\n7ldZ3kbiG7nqpa/Zvi+f/7mlPy+O6kejOjXKfN6ZdruKJpW5dhHxL5CrdEaV8tBlQa4lKm3efY
ix\n09NI3bKPK3q04I8je9M0wEbigXa7ikaVuXYR8c+851zDIyUlxaWmpobt/SqiuNjxzpItTP5oDfFx\nxh+G92SEesuKSASY2VLnXEpFX0dTK/ixfd9hHpyRzrcb93BRl2ZMub4PLRvUinRZIiIVosAvwTnH\ntNRtPD5/Nc45nhrZm/93Tlsd1YtIlaDA99l14AgTZqbz5doczuvQhKdv6EPbxmokLiJVR8wHvnOO\nuct38OgHKzla5GHSNT247bx2aiQuIlVOTAf+7ryjPDw7gwUrd9E/qSHP3phMh2Z1I12WiEhIxGzg\nf7Iii4dmryDvSBEThnXj5xd2UCNxEanSYi7wcw8X8OgHK5m7fAe9Euvzp5v60qVFvUiXJSIScjEV\n+F+uyWb8zHT2Hirggcu78KtLOhIfpy6PIhIbYiLwDx4p5PH5q5iWup2uLerx5h3n0CuxQaTLEhEJ\nqyob+Mead2Tm5hNnRjGOey/uyP2Xd6ZmdXWhEpHYUyUDf86yTCbMTOdIUTEAHueoGVeNri3qKexF\nJGZVyQHsJz5cdTzsjznqKVbzDhGJaVXqCP9IoYc/LVzH7rwCv4+reYeIxLKIB36wGmWnbctlzPQ0\nNmTnUbtGHIcLPKeso+YdIhLLIhr4wWiUXVBUzMtfrOcvizbSrG5N3r5rAPsOFZzwuqDmHSIiEQ38\nshplBxL4q7MOMGZaGquyDjCyfyKPXtOTBgnxJ7y+mneIiHhFNPDL2yi7yFPMq4s38efP1tEgIZ6p\nt57NFT1bnrDOiH6JCngRkRIqFPhmthk4CHiAojPtyNK6YQKZfsK9rLH2Ddl5jJmeRtq2XK7q04rH\nh/ei8Wl6y4qISHCO8C9xzu0uzxPHDeka8Fh7cbHjzW9+5JkFa0moEcdLo/pxTXLr8ld9BoJ1YllE\nJJIiOqQTaKPsrXsOM3ZGGt//uJfLuzfnjyN707xeeFoOBuPEsohINKhQE3Mz+xHYBzjgVefcVD/r\njAZGAyQlJZ29ZcuWgF/fOcc/l2zljx+tJs6MR6/tyfX9w9tI/PynvvA77JTYMIFvJlwatjpEJHZF\nSxPzC5xzmWbWHFhoZmucc4tLruD7EJgKkJKSEvCny47cfMbPTOer9bu5sHNTplzfJyLX0Zf3xLKI\nSLSpUOA75zJ9v7PNbDYwAFhc9rNO+5rMWLqdx+atwuMcT17Xi58OSIpYI/HynFgWEYlG5Z5Lx8zq\nmFm9Y7eBK4AVFSkm++ARfv73VMbNSKd76/p88pvB3DLwrIiFPXhPLCfEnzjhmr7EJSKVUUWO8FsA\ns31hXB0m2gdDAAAG5UlEQVT4l3Puk/K+2Ly0HTwydwX5BR4euboHdw6KjkbigZ5YFhGJduUOfOfc\nJiC5ogXsPVTAI3NW8GFGFn3bNuS5m5LpGGWNxPUlLhGpCiJ6WeanK3fy0OwM9ucXMm5IV34xuAPV\n1XJQRCQkIhL4+/ML+cO8lcz6IZMererzj7sH0r1V/UiUIiISM8Ie+P9el8P4Genk5B3l15d15r5L\nOlGjuo7qRURCLayBn5mbz+1vfk+n5nWZetvZ9GnTMJxvLyIS08Ia+HsPFTBpcAce+EkXasWrt6yI\nSDiFNfA7NqvDxCu7h/MtRUTEJ6yD57VrRLyjoohIzNLZUhGRGKHAFxGJEQp8EZEYocAXEYkRCnwR\nkRihwBcRiREKfBGRGKHAFxGJEQp8EZEYocAXEYkRFQp8MxtqZmvNbIOZTQhWUSIiEnwVaWIeB/wF\nGAb0AEaZWY9gFSYiIsFVkSP8AcAG59wm51wB8B4wPDhliYhIsFUk8BOBbSXub/ctExGRKBTy+YrN\nbDQw2nf3qJmtCPV7BkFTYHekiwiA6gyeylAjqM5gqyx1dg3Gi1Qk8DOBtiXut/EtO4FzbiowFcDM\nUp1zKRV4z7BQncFVGeqsDDWC6gy2ylRnMF6nIkM6/wt0Nr
P2ZlYDuBn4IBhFiYhI8JX7CN85V2Rm\n9wELgDjgTefcyqBVJiIiQVWhMXzn3EfAR2fwlKkVeb8wUp3BVRnqrAw1guoMtpiq05xzwXgdERGJ\ncppaQUQkRoQk8E835YKZ1TSz932PLzGzdqGo4zQ1tjWzL81slZmtNLPf+FnnYjPbb2bLfT+/D3ed\nvjo2m1mGr4ZTztab14u+7ZluZv3DXF/XEttouZkdMLP7T1onItvSzN40s+ySlwObWWMzW2hm632/\nG5Xy3Nt966w3s9sjUOczZrbG9/90tpk1LOW5Ze4fYahzkplllvh/e2Upzw3bVCyl1Pl+iRo3m9ny\nUp4blu1ZWgaFdP90zgX1B+8J3I1AB6AGkAb0OGmdXwF/9d2+GXg/2HUEUGcroL/vdj1gnZ86Lwbm\nh7s2P7VuBpqW8fiVwMeAAecCSyJYaxywEzgrGrYlMBjoD6wosexpYILv9gRgip/nNQY2+X438t1u\nFOY6rwCq+25P8VdnIPtHGOqcBIwNYL8oMxdCXedJjz8H/D6S27O0DArl/hmKI/xAplwYDrztuz0D\nuMzMLAS1lMo5l+Wc+8F3+yCwmsr7TeHhwN+d13dAQzNrFaFaLgM2Oue2ROj9T+CcWwzsPWlxyf3v\nbWCEn6cOARY65/Y65/YBC4Gh4azTOfepc67Id/c7vN91iahStmcgwjoVS1l1+rLmJuDdUL1/IMrI\noJDtn6EI/ECmXDi+jm+H3g80CUEtAfENKfUDlvh5+DwzSzOzj82sZ1gL+z8O+NTMlpr3m8sni6Zp\nLm6m9H9I0bAtAVo457J8t3cCLfysE03bFOAuvH/F+XO6/SMc7vMNPb1ZyhBENG3PC4Fdzrn1pTwe\n9u15UgaFbP+M+ZO2ZlYXmAnc75w7cNLDP+AdmkgGXgLmhLs+nwucc/3xzkz6X2Y2OEJ1lMm8X8C7\nFpju5+Fo2ZYncN6/j6P6UjUzexgoAv5ZyiqR3j9eAToCfYEsvMMl0WwUZR/dh3V7lpVBwd4/QxH4\ngUy5cHwdM6sONAD2hKCWMplZPN4N/U/n3KyTH3fOHXDO5flufwTEm1nTMJeJcy7T9zsbmI33z+OS\nAprmIgyGAT8453ad/EC0bEufXceGvHy/s/2sExXb1MzuAK4GbvH94z9FAPtHSDnndjnnPM65YuC1\nUt4/WrZndWAk8H5p64Rze5aSQSHbP0MR+IFMufABcOys8g3AF6XtzKHiG8d7A1jtnPtTKeu0PHZu\nwcwG4N1eYf1gMrM6Zlbv2G28J/JOnoDuA+A28zoX2F/iT8JwKvXIKRq2ZQkl97/bgbl+1lkAXGFm\njXxDFFf4loWNmQ0FHgSudc4dLmWdQPaPkDrpfNF1pbx/tEzFcjmwxjm33d+D4dyeZWRQ6PbPEJ19\nvhLvGeeNwMO+ZY/h3XEBauH9s38D8D3QIRR1nKbGC/D+qZQOLPf9XAn8Evilb537gJV4ryj4DhgU\ngTo7+N4/zVfLse1Zsk7D24xmI5ABpESgzjp4A7xBiWUR35Z4P4CygEK845x34z1f9DmwHvgMaOxb\nNwV4vcRz7/LtoxuAOyNQ5wa847TH9s9jV7a1Bj4qa/8Ic53/8O136XjDqtXJdfrun5IL4azTt/yt\nY/tkiXUjsj3LyKCQ7Z/6pq2ISIyI+ZO2IiKxQoEvIhIjFPgiIjFCgS8iEiMU+CIiMUKBLyISIxT4\nIiIxQoEvIhIj/j8L0VTPrjSwoQAAAABJRU5ErkJggg==\n",
      "text/plain": [
       "<matplotlib.figure.Figure at 0x7f097fe360f0>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2.0185186862945557 3.03572154045105\n"
     ]
    }
   ],
   "source": [
    "# randomly initialize the parameters\n",
    "w = t.rand(1, 1)\n",
    "b = t.zeros(1, 1)\n",
    "\n",
    "lr = 0.001  # learning rate\n",
    "\n",
    "for ii in range(20000):\n",
    "    x, y = get_fake_data()\n",
    "    \n",
    "    # forward: compute the loss\n",
    "    y_pred = x.mm(w) + b.expand_as(y)  # x @ w is equivalent to x.mm(w) (Python 3 only)\n",
    "    loss = 0.5 * (y_pred - y) ** 2  # squared error\n",
    "    loss = loss.sum()\n",
    "    \n",
    "    # backward: manually compute the gradients\n",
    "    dloss = 1\n",
    "    dy_pred = dloss * (y_pred - y)\n",
    "    \n",
    "    dw = x.t().mm(dy_pred)\n",
    "    db = dy_pred.sum()\n",
    "    \n",
    "    # update the parameters\n",
    "    w.sub_(lr * dw)\n",
    "    b.sub_(lr * db)\n",
    "    \n",
    "    if ii % 1000 == 0:\n",
    "\n",
    "        # plot the current fit\n",
    "        display.clear_output(wait=True)\n",
    "        x = t.arange(0, 20).view(-1, 1).float()  # ensure float dtype (arange returns a LongTensor in PyTorch >= 0.4)\n",
    "        y = x.mm(w) + b.expand_as(x)\n",
    "        plt.plot(x.numpy(), y.numpy()) # predicted\n",
    "        \n",
    "        x2, y2 = get_fake_data(batch_size=20) \n",
    "        plt.scatter(x2.numpy(), y2.numpy()) # true data\n",
    "        \n",
    "        plt.xlim(0, 20)\n",
    "        plt.ylim(0, 41)\n",
    "        plt.show()\n",
    "        plt.pause(0.5)\n",
    "        \n",
    "print(w.squeeze()[0], b.squeeze()[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As the output shows, the program has essentially learned w=2 and b=3, and the fitted line in the figure matches the data well."
   ]
  },
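  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For comparison, the manual backward pass above can also be handed to autograd, which is introduced later in this chapter. The following is a minimal sketch, assuming PyTorch 0.4 or later; it re-creates `get_fake_data` inline so that it is self-contained, and replaces the hand-derived `dw`/`db` with `loss.backward()`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# a minimal autograd version of the same regression (sketch; assumes PyTorch >= 0.4)\n",
    "def get_fake_data(batch_size=8):\n",
    "    # y = 2 * x + 3, plus some noise\n",
    "    x = t.rand(batch_size, 1) * 20\n",
    "    y = x * 2 + (1 + t.randn(batch_size, 1)) * 3\n",
    "    return x, y\n",
    "\n",
    "w = t.rand(1, 1, requires_grad=True)\n",
    "b = t.zeros(1, 1, requires_grad=True)\n",
    "lr = 0.001\n",
    "\n",
    "for ii in range(8000):\n",
    "    x, y = get_fake_data()\n",
    "    y_pred = x.mm(w) + b.expand_as(y)\n",
    "    loss = (0.5 * (y_pred - y) ** 2).sum()\n",
    "\n",
    "    loss.backward()  # autograd fills in w.grad and b.grad\n",
    "\n",
    "    with t.no_grad():  # parameter updates themselves should not be tracked\n",
    "        w -= lr * w.grad\n",
    "        b -= lr * b.grad\n",
    "        w.grad.zero_()  # gradients accumulate, so clear them each step\n",
    "        b.grad.zero_()\n",
    "\n",
    "print(w.item(), b.item())"
   ]
  },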
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Although many operations were covered above, mastering this example is essentially enough; for anything else, readers can revisit this chapter or consult the corresponding documentation whenever the need arises.\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
