{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Chapter 3 PyTorch Basics: Tensor and Autograd\n",
    "\n",
    "## 3.1 Tensor\n",
    "\n",
    "Tensor may ring a bell: besides PyTorch, it is also a core data structure in Theano, TensorFlow, Torch, and MXNet. There is no shortage of deep analyses of what a tensor really is, but from an engineering point of view it can simply be regarded as an array that supports efficient scientific computation. It can be a single number (a scalar), a one-dimensional array (a vector), a two-dimensional array (a matrix), or an array of even higher dimension. A tensor is similar to NumPy's ndarray, but PyTorch tensors additionally support GPU acceleration.\n",
    "\n",
    "This section covers tensor usage systematically, aiming to be thorough without documenting every single function. For more functions and their usage, append `?` to a function name in IPython/Notebook to view its help, or consult the official PyTorch documentation[^1].\n",
    "\n",
    "[^1]: http://docs.pytorch.org"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'0.3.0.post4'"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Let's begin\n",
    "from __future__ import print_function\n",
    "import torch as t\n",
    "t.__version__"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "###  3.1.1 Basic operations\n",
    "\n",
    "Readers who have studied NumPy will find this section very familiar, because the tensor interface is deliberately designed to resemble NumPy's for ease of use. Prior NumPy knowledge is not required, though; this section does not assume it.\n",
    "\n",
    "From the interface point of view, tensor operations fall into two categories:\n",
    "\n",
    "1. `torch.function`, e.g. `torch.save`.\n",
    "2. `tensor.function`, e.g. `tensor.view`.\n",
    "\n",
    "For convenience, most tensor operations support both forms, and this book does not distinguish between them; for example, `torch.sum(a, dim)` and `a.sum(dim)` are functionally equivalent.\n",
    "\n",
    "From the storage point of view, tensor operations again fall into two categories:\n",
    "\n",
    "1. Those that do not modify their own data, e.g. `a.add(b)`, which returns the sum as a new tensor.\n",
    "2. Those that modify their own data, e.g. `a.add_(b)`, which stores the sum in `a`, modifying `a` itself.\n",
    "\n",
    "Every function whose name ends in `_` is an inplace operation, i.e. it modifies the caller's own data; keep the distinction in mind in practice.\n",
    "\n",
    "#### Creating tensors\n",
    "\n",
    "There are many ways to create a tensor in PyTorch; the common ones are listed in Table 3-1.\n",
    "\n",
    "Table 3-1: Common ways to create a tensor\n",
    "\n",
    "|Function|Purpose|\n",
    "|:---:|:---:|\n",
    "|Tensor(\\*sizes)|basic constructor|\n",
    "|ones(\\*sizes)|all-ones tensor|\n",
    "|zeros(\\*sizes)|all-zeros tensor|\n",
    "|eye(\\*sizes)|ones on the diagonal, zeros elsewhere|\n",
    "|arange(s,e,step)|values from s to e with step size step|\n",
    "|linspace(s,e,steps)|steps values evenly spaced from s to e|\n",
    "|rand/randn(\\*sizes)|uniform/standard-normal random values|\n",
    "|normal(mean,std)/uniform(from,to)|normal/uniform distribution|\n",
    "|randperm(m)|random permutation|\n",
    "\n",
    "Of these, the `Tensor` constructor is the most flexible: it can take a list and build a tensor from its data, take a shape specification, or take another tensor. A few examples follow."
   ]
  },
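  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick aside, the inplace convention described above can be illustrated with a minimal sketch (assuming the PyTorch 0.3 behavior used throughout this chapter):\n",
    "\n",
    "```python\n",
    "import torch as t\n",
    "a = t.ones(2)\n",
    "b = t.ones(2)\n",
    "c = a.add(b)   # out-of-place: the sum is returned as a new tensor, a is unchanged\n",
    "a.add_(b)      # inplace: a itself now holds the sum\n",
    "```\n"
   ]
  },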
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1.0739e+26  4.5632e-41  1.7047e-37\n",
       " 0.0000e+00  4.4842e-44  0.0000e+00\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# specify the tensor's shape\n",
    "a = t.Tensor(2, 3)\n",
    "a # the values depend on whatever was in the allocated memory"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  2  3\n",
       " 4  5  6\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# build a tensor from list data\n",
    "b = t.Tensor([[1,2,3],[4,5,6]])\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.tolist() # convert the tensor to a list"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`tensor.size()` returns a `torch.Size` object. It is a subclass of tuple, though its usage differs slightly from a plain tuple."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([2, 3])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b_size = b.size()\n",
    "b_size"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "6"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.numel() # total number of elements in b, 2*3; equivalent to b.nelement()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(\n",
       "  1.0739e+26  4.5632e-41  2.1006e-37\n",
       "  0.0000e+00  4.4842e-44  0.0000e+00\n",
       " [torch.FloatTensor of size 2x3], \n",
       "  2\n",
       "  3\n",
       " [torch.FloatTensor of size 2])"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# create a tensor with the same shape as b\n",
    "c = t.Tensor(b_size)\n",
    "# create a tensor whose elements are 2 and 3\n",
    "d = t.Tensor((2, 3))\n",
    "c, d"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Besides `tensor.size()`, the shape can be inspected directly via `tensor.shape`; `tensor.shape` is equivalent to `tensor.size()`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([2, 3])"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "c.shape??"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that when a tensor is created with `t.Tensor(*sizes)`, memory is not allocated immediately; the system only checks that enough free memory remains, and the space is allocated the first time the tensor is used. All other creation operations allocate memory as soon as the tensor is created. Examples of the other common creation methods follow."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  1  1\n",
       " 1  1  1\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.ones(2, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  0  0\n",
       " 0  0  0\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.zeros(2, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1\n",
       " 3\n",
       " 5\n",
       "[torch.FloatTensor of size 3]"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.arange(1, 6, 2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  1.0000\n",
       "  5.5000\n",
       " 10.0000\n",
       "[torch.FloatTensor of size 3]"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.linspace(1, 10, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0.0015 -0.0256 -2.2059\n",
       "-1.0305 -0.2663  0.6902\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.randn(2, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 4\n",
       " 3\n",
       " 0\n",
       " 1\n",
       " 2\n",
       "[torch.LongTensor of size 5]"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.randperm(5) # a random permutation of length 5"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  0  0\n",
       " 0  1  0\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.eye(2, 3) # ones on the diagonal; the number of rows and columns need not match"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Common tensor operations"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`tensor.view` reshapes a tensor, provided the total number of elements stays the same. `view` does not modify its own data, and the new tensor it returns shares memory with the source tensor: change one and the other changes too. In practice you often need to add or remove a dimension, which is exactly what `squeeze` and `unsqueeze` are for."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  1  2\n",
       " 3  4  5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.arange(0, 6)\n",
    "a.view(2, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  1  2\n",
       " 3  4  5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = a.view(-1, 3) # a dimension given as -1 is inferred automatically\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,.,.) = \n",
       "  0  1  2\n",
       "\n",
       "(1 ,.,.) = \n",
       "  3  4  5\n",
       "[torch.FloatTensor of size 2x1x3]"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.unsqueeze(1) # note the shape: a \"1\" is inserted at dimension 1 (0-indexed)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,.,.) = \n",
       "  0  1  2\n",
       "\n",
       "(1 ,.,.) = \n",
       "  3  4  5\n",
       "[torch.FloatTensor of size 2x1x3]"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.unsqueeze(-2) # -2 means the second-to-last dimension"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,0 ,.,.) = \n",
       "  0  1  2\n",
       "  3  4  5\n",
       "[torch.FloatTensor of size 1x1x2x3]"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c = b.view(1, 1, 1, 2, 3)\n",
    "c.squeeze(0) # squeeze out the \"1\" in dimension 0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  1  2\n",
       " 3  4  5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c.squeeze() # squeeze out every dimension of size \"1\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "   0  100    2\n",
       "   3    4    5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[1] = 100\n",
    "b # b is a view of a, so modifying a modifies b as well"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`resize` is another way to adjust a tensor's size, but unlike `view` it can change the total number of elements. If the new size exceeds the original, new memory is allocated automatically; if the new size is smaller than the original, the previous data is still preserved. An example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "   0  100    2\n",
       "[torch.FloatTensor of size 1x3]"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.resize_(1, 3)\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0.0000e+00  1.0000e+02  2.0000e+00\n",
       " 3.0000e+00  4.0000e+00  5.0000e+00\n",
       " 1.5301e-38  0.0000e+00  1.3768e+26\n",
       "[torch.FloatTensor of size 3x3]"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.resize_(3, 3) # the old data is preserved; new memory is allocated for the extra elements\n",
    "b"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Indexing\n",
    "\n",
    "Tensors support indexing operations similar to numpy.ndarray, with similar syntax; the examples below cover the common ones. Unless stated otherwise, an indexing result shares memory with the original tensor: modify one and the other changes too."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-2.1098 -1.4390 -1.4180  0.1874\n",
       " 0.3988  0.4784 -0.9994  1.0953\n",
       "-0.3281 -0.8193  0.9801 -1.1096\n",
       "[torch.FloatTensor of size 3x4]"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.randn(3, 4)\n",
    "a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-2.1098\n",
       "-1.4390\n",
       "-1.4180\n",
       " 0.1874\n",
       "[torch.FloatTensor of size 4]"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[0] # row 0 (0-indexed)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-2.1098\n",
       " 0.3988\n",
       "-0.3281\n",
       "[torch.FloatTensor of size 3]"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[:, 0] # column 0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "-1.4179892539978027"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[0][2] # element 2 of row 0, equivalent to a[0, 2]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.18744279444217682"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[0, -1] # last element of row 0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-2.1098 -1.4390 -1.4180  0.1874\n",
       " 0.3988  0.4784 -0.9994  1.0953\n",
       "[torch.FloatTensor of size 2x4]"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[:2] # first two rows"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-2.1098 -1.4390\n",
       " 0.3988  0.4784\n",
       "[torch.FloatTensor of size 2x2]"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[:2, 0:2] # first two rows, columns 0 and 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "-2.1098 -1.4390\n",
      "[torch.FloatTensor of size 1x2]\n",
      "\n",
      "\n",
      "-2.1098\n",
      "-1.4390\n",
      "[torch.FloatTensor of size 2]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "print(a[0:1, :2]) # row 0, first two columns\n",
    "print(a[0, :2]) # note the difference: the shapes differ"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  0  0  0\n",
       " 0  0  0  1\n",
       " 0  0  0  0\n",
       "[torch.ByteTensor of size 3x4]"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a > 1 # returns a ByteTensor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1.0953\n",
       "[torch.FloatTensor of size 1]"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[a>1] # equivalent to a.masked_select(a>1)\n",
    "# the selected result does NOT share memory with the original tensor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "-2.1098 -1.4390 -1.4180  0.1874\n",
       " 0.3988  0.4784 -0.9994  1.0953\n",
       "[torch.FloatTensor of size 2x4]"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[t.LongTensor([0,1])] # rows 0 and 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Other common selection functions are listed in Table 3-2.\n",
    "\n",
    "Table 3-2: Common selection functions\n",
    "\n",
    "|Function|Purpose|\n",
    "|:---:|:---:|\n",
    "|index_select(input, dim, index)|select along dimension dim, e.g. certain rows or columns|\n",
    "|masked_select(input, mask)|as in a[a>0] above; select with a ByteTensor mask|\n",
    "|nonzero(input)|indices of the non-zero elements|\n",
    "|gather(input, dim, index)|select along dimension dim according to index; the output has the same size as index|\n",
    "\n",
    "\n",
    "`gather` is a relatively complex operation. For a 2-D tensor, each output element is given by:\n",
    "\n",
    "```python\n",
    "out[i][j] = input[index[i][j]][j]  # dim=0\n",
    "out[i][j] = input[i][index[i][j]]  # dim=1\n",
    "```\n",
    "`gather` on a 3-D tensor works analogously. A few examples follow."
   ]
  },
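  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Of the functions in Table 3-2, `index_select` is not demonstrated below, so here is a minimal sketch (assuming the PyTorch 0.3 semantics used throughout this chapter):\n",
    "\n",
    "```python\n",
    "import torch as t\n",
    "m = t.Tensor([[1, 2, 3], [4, 5, 6]])\n",
    "# pick columns 0 and 2 along dim=1; returns a new 2x2 tensor\n",
    "m.index_select(1, t.LongTensor([0, 2]))\n",
    "```\n"
   ]
  },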
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   1   2   3\n",
       "  4   5   6   7\n",
       "  8   9  10  11\n",
       " 12  13  14  15\n",
       "[torch.FloatTensor of size 4x4]"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.arange(0, 16).view(4, 4)\n",
    "a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   5  10  15\n",
       "[torch.FloatTensor of size 1x4]"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# select the diagonal elements\n",
    "index = t.LongTensor([[0,1,2,3]])\n",
    "a.gather(0, index)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  3\n",
       "  6\n",
       "  9\n",
       " 12\n",
       "[torch.FloatTensor of size 4x1]"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# select the anti-diagonal elements\n",
    "index = t.LongTensor([[3,2,1,0]]).t()\n",
    "a.gather(1, index)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 12   9   6   3\n",
       "[torch.FloatTensor of size 1x4]"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# select the anti-diagonal elements; note the difference from above\n",
    "index = t.LongTensor([[3,2,1,0]])\n",
    "a.gather(0, index)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   3\n",
       "  5   6\n",
       " 10   9\n",
       " 15  12\n",
       "[torch.FloatTensor of size 4x2]"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# select the elements of both diagonals\n",
    "index = t.LongTensor([[0,1,2,3],[3,2,1,0]]).t()\n",
    "b = a.gather(1, index)\n",
    "b"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The inverse of `gather` is `scatter_`: `gather` pulls data out of input according to index, while `scatter_` puts the data back. Note that `scatter_` is an inplace operation.\n",
    "\n",
    "```python\n",
    "out = input.gather(dim, index)\n",
    "# approximate inverse:\n",
    "input = Tensor()\n",
    "input.scatter_(dim, index, out)\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   0   0   3\n",
       "  0   5   6   0\n",
       "  0   9  10   0\n",
       " 12   0   0  15\n",
       "[torch.FloatTensor of size 4x4]"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# scatter the two diagonals back to their original positions\n",
    "c = t.zeros(4,4)\n",
    "c.scatter_(1, index, b)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Advanced indexing\n",
    "PyTorch 0.2 improved the indexing operations, and most of NumPy's advanced indexing[^10] is now supported. Advanced indexing can be seen as an extension of ordinary indexing, but the result of an advanced indexing operation generally does not share memory with the original tensor.\n",
    "[^10]: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,.,.) = \n",
       "   0   1   2\n",
       "   3   4   5\n",
       "   6   7   8\n",
       "\n",
       "(1 ,.,.) = \n",
       "   9  10  11\n",
       "  12  13  14\n",
       "  15  16  17\n",
       "\n",
       "(2 ,.,.) = \n",
       "  18  19  20\n",
       "  21  22  23\n",
       "  24  25  26\n",
       "[torch.FloatTensor of size 3x3x3]"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x = t.arange(0,27).view(3,3,3)\n",
    "x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 14\n",
       " 24\n",
       "[torch.FloatTensor of size 2]"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x[[1, 2], [1, 2], [2, 0]] # x[1,1,2] and x[2,2,0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 19\n",
       " 10\n",
       "  1\n",
       "[torch.FloatTensor of size 3]"
      ]
     },
     "execution_count": 45,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x[[2, 1, 0], [0], [1]] # x[2,0,1],x[1,0,1],x[0,0,1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,.,.) = \n",
       "   0   1   2\n",
       "   3   4   5\n",
       "   6   7   8\n",
       "\n",
       "(1 ,.,.) = \n",
       "  18  19  20\n",
       "  21  22  23\n",
       "  24  25  26\n",
       "[torch.FloatTensor of size 2x3x3]"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x[[0, 2], ...] # x[0] and x[2]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Tensor types\n",
    "\n",
    "Tensors come in different data types, listed in Table 3-3, and each type has both a CPU and a GPU version (HalfTensor excepted). The default tensor type is FloatTensor; it can be changed with `t.set_default_tensor_type` (if the default is a GPU type, every operation runs on the GPU). Knowing the tensor type helps when analyzing memory usage. For example, a FloatTensor of size (1000, 1000, 1000) has `1000*1000*1000=10^9` elements, each occupying 32bit/8 = 4 bytes, so it takes roughly 4GB of memory/GPU memory. HalfTensor is designed specifically for GPUs: for the same number of elements it uses half the GPU memory of a FloatTensor, which greatly eases GPU memory pressure, but because of its limited range and precision[^2] it is prone to problems such as overflow.\n",
    "\n",
    "[^2]: https://stackoverflow.com/questions/872544/what-range-of-numbers-can-be-represented-in-a-16-32-and-64-bit-ieee-754-syste\n",
    "\n",
    "Table 3-3: Tensor data types\n",
    "\n",
    "|Data type|CPU tensor|GPU tensor|\n",
    "|:---:|:---:|:--:|\n",
    "|32-bit float|torch.FloatTensor|torch.cuda.FloatTensor|\n",
    "|64-bit float|torch.DoubleTensor|torch.cuda.DoubleTensor|\n",
    "|16-bit half-precision float|N/A|torch.cuda.HalfTensor|\n",
    "|8-bit unsigned integer (0~255)|torch.ByteTensor|torch.cuda.ByteTensor|\n",
    "|8-bit signed integer (-128~127)|torch.CharTensor|torch.cuda.CharTensor|\n",
    "|16-bit signed integer|torch.ShortTensor|torch.cuda.ShortTensor|\n",
    "|32-bit signed integer|torch.IntTensor|torch.cuda.IntTensor|\n",
    "|64-bit signed integer|torch.LongTensor|torch.cuda.LongTensor|\n",
    "\n",
    "Types can be converted into one another: `type(new_type)` is the general-purpose way, and there are also shortcuts such as `float`, `long`, and `half`. Conversion between CPU and GPU tensors is done with `tensor.cuda` and `tensor.cpu`. A tensor also has a `new` method, used just like `t.Tensor`, which calls the constructor of the tensor's own type and therefore produces a tensor of the same type as the current one."
   ]
  },
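  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The memory estimate above can be checked programmatically; a small sketch using `element_size` and `numel` (with a smaller tensor for illustration):\n",
    "\n",
    "```python\n",
    "import torch as t\n",
    "x = t.ones(1000, 1000)  # a FloatTensor with 10**6 elements\n",
    "# bytes per element (4 for a 32-bit float) times the element count\n",
    "x.element_size() * x.numel()  # 4 * 10**6 bytes, i.e. about 4MB\n",
    "```\n"
   ]
  },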
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [],
   "source": [
    "# set the default tensor type; note that the argument is a string\n",
    "t.set_default_tensor_type('torch.IntTensor')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1.7900e+09  3.2564e+04  4.3056e+07\n",
       " 0.0000e+00  3.2000e+01  0.0000e+00\n",
       "[torch.IntTensor of size 2x3]"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.Tensor(2,3)\n",
    "a # a is now an IntTensor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1.7900e+09  3.2564e+04  4.3056e+07\n",
       " 0.0000e+00  3.2000e+01  0.0000e+00\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 49,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# convert a to a FloatTensor, equivalent to b=a.type(t.FloatTensor)\n",
    "b = a.float() \n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1.7900e+09  3.2564e+04  4.3056e+07\n",
       " 0.0000e+00  3.2000e+01  0.0000e+00\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 50,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c = a.type_as(b)\n",
    "c"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1.7900e+09  3.2564e+04  4.3020e+07\n",
       " 0.0000e+00  2.1139e+09  3.2563e+04\n",
       "[torch.IntTensor of size 2x3]"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "d = a.new(2,3) # equivalent to torch.IntTensor(2,3)\n",
    "d"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [],
   "source": [
    "# view the source of the new method\n",
    "a.new??"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [],
   "source": [
    "# restore the previous default\n",
    "t.set_default_tensor_type('torch.FloatTensor')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Element-wise operations\n",
    "\n",
    "These operations act on every element of a tensor (point-wise, also called element-wise), so the output has the same shape as the input. Common ones are listed in Table 3-4.\n",
    "\n",
    "Table 3-4: Common element-wise operations\n",
    "\n",
    "|Function|Purpose|\n",
    "|:--:|:--:|\n",
    "|abs/sqrt/div/exp/fmod/log/pow..|absolute value/square root/division/exponential/modulo/logarithm/power..|\n",
    "|cos/sin/asin/atan2/cosh..|trigonometric functions|\n",
    "|ceil/round/floor/trunc|round up/round to nearest/round down/truncate to integer part|\n",
    "|clamp(input, min, max)|clip values below min and above max|\n",
    "|sigmoid/tanh..|activation functions|\n",
    "\n",
    "Many operations, such as div, mul, pow, and fmod, have overloaded operators in PyTorch, so the operator can be used directly: `a ** 2` is equivalent to `torch.pow(a,2)`, and `a * 2` is equivalent to `torch.mul(a,2)`.\n",
    "\n",
    "The output of `clamp(x, min, max)` satisfies:\n",
    "$$\n",
    "y_i =\n",
    "\\begin{cases}\n",
    "min,  & \\text{if  } x_i \\lt min \\\\\n",
    "x_i,  & \\text{if  } min \\le x_i \\le max  \\\\\n",
    "max,  & \\text{if  } x_i \\gt max\\\\\n",
    "\\end{cases}\n",
    "$$\n",
    "`clamp` is often used where values must be compared, e.g. taking the larger of each element of a tensor and a given number."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1.0000  0.5403 -0.4161\n",
       "-0.9900 -0.6536  0.2837\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 54,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.arange(0, 6).view(2, 3)\n",
    "t.cos(a)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  1  2\n",
       " 0  1  2\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 55,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a % 3 # equivalent to t.fmod(a, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   1   4\n",
       "  9  16  25\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 56,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a ** 2 # equivalent to t.pow(a, 2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      " 0  1  2\n",
      " 3  4  5\n",
      "[torch.FloatTensor of size 2x3]\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\n",
       " 3  3  3\n",
       " 3  4  5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 57,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# take the larger of each element of a and 3 (elements below 3 are clipped to 3)\n",
    "print(a)\n",
    "t.clamp(a, min=3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "####  Reduction operations\n",
    "This class of operations produces an output smaller than the input, and the operation can be applied along a chosen dimension. `sum`, for example, can compute the sum of the whole tensor, or of each row or each column. Common reduction operations are listed in Table 3-5.\n",
    "\n",
    "Table 3-5: Common reduction operations\n",
    "\n",
    "|Function|Purpose|\n",
    "|:---:|:---:|\n",
    "|mean/sum/median/mode|mean/sum/median/mode|\n",
    "|norm/dist|norm/distance|\n",
    "|std/var|standard deviation/variance|\n",
    "|cumsum/cumprod|cumulative sum/cumulative product|\n",
    "\n",
    "Most of these functions take a **`dim`** parameter specifying the dimension along which to operate. Explanations of dim (the counterpart of NumPy's axis) vary widely; here is a simple mnemonic.\n",
    "\n",
    "Suppose the input has shape (m, n, k)\n",
    "\n",
    "- with dim=0, the output shape is (1, n, k) or (n, k)\n",
    "- with dim=1, the output shape is (m, 1, k) or (m, k)\n",
    "- with dim=2, the output shape is (m, n, 1) or (m, n)\n",
    "\n",
    "Whether the \"1\" appears in the size depends on the `keepdim` parameter: `keepdim=True` keeps the dimension of size `1`. Note that this is only a rule of thumb; not every function follows this shape change, e.g. `cumsum`."
   ]
  },
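  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面用一个三维tensor直观验证上述形状变化规律（示意代码，非书中原例）：\n",
    "\n",
    "```python\n",
    "c = t.ones(2, 3, 4)                 # 形状(m, n, k) = (2, 3, 4)\n",
    "c.sum(dim=1, keepdim=True).size()   # (2, 1, 4)\n",
    "c.sum(dim=1, keepdim=False).size()  # (2, 4)\n",
    "```"
   ]
  },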
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 2  2  2\n",
       "[torch.FloatTensor of size 1x3]"
      ]
     },
     "execution_count": 58,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = t.ones(2, 3)\n",
    "b.sum(dim = 0, keepdim=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 2\n",
       " 2\n",
       " 2\n",
       "[torch.FloatTensor of size 3]"
      ]
     },
     "execution_count": 59,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# keepdim=False，不保留维度\"1\"，注意形状\n",
    "b.sum(dim=0, keepdim=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 3\n",
       " 3\n",
       "[torch.FloatTensor of size 2]"
      ]
     },
     "execution_count": 60,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.sum(dim=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      " 0  1  2\n",
      " 3  4  5\n",
      "[torch.FloatTensor of size 2x3]\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   1   3\n",
       "  3   7  12\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 61,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.arange(0, 6).view(2, 3)\n",
    "print(a)\n",
    "a.cumsum(dim=1) # 沿着行累加"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 比较\n",
    "比较函数中有一些是逐元素比较，操作类似于逐元素操作，还有一些则类似于归并操作。常用比较函数如表3-6所示。\n",
    "\n",
    "表3-6: 常用比较函数\n",
    "\n",
    "|函数|功能|\n",
    "|:--:|:--:|\n",
    "|gt/lt/ge/le/eq/ne|大于/小于/大于等于/小于等于/等于/不等|\n",
    "|topk|最大的k个数|\n",
    "|sort|排序|\n",
    "|max/min|比较两个tensor最大最小值|\n",
    "\n",
    "表中第一行的比较操作已经实现了运算符重载，因此可以使用`a>=b`、`a>b`、`a!=b`、`a==b`，其返回结果是一个`ByteTensor`，可用来选取元素。max/min这两个操作比较特殊，以max来说，它有以下三种使用情况：\n",
    "- t.max(tensor)：返回tensor中最大的一个数\n",
    "- t.max(tensor, dim)：指定维上最大的数，返回最大值和对应下标两个tensor\n",
    "- t.max(tensor1, tensor2)：返回两个tensor对应位置上较大的元素\n",
    "\n",
    "至于比较一个tensor和一个数，可以使用clamp函数。下面举例说明。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   3   6\n",
       "  9  12  15\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 62,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.linspace(0, 15, 6).view(2, 3)\n",
    "a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 15  12   9\n",
       "  6   3   0\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 63,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = t.linspace(15, 0, 6).view(2, 3)\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0  0  0\n",
       " 1  1  1\n",
       "[torch.ByteTensor of size 2x3]"
      ]
     },
     "execution_count": 64,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a>b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  9\n",
       " 12\n",
       " 15\n",
       "[torch.FloatTensor of size 3]"
      ]
     },
     "execution_count": 65,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[a>b] # a中大于b的元素"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "15.0"
      ]
     },
     "execution_count": 66,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.max(a)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(\n",
       "  15\n",
       "   6\n",
       " [torch.FloatTensor of size 2], \n",
       "  0\n",
       "  0\n",
       " [torch.LongTensor of size 2])"
      ]
     },
     "execution_count": 67,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.max(b, dim=1) \n",
    "# 第一个返回值的15和6分别表示第0行和第1行最大的元素\n",
    "# 第二个返回值的0和0表示上述最大的数是该行第0个元素"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 15  12   9\n",
       "  9  12  15\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 68,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.max(a,b)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 10  10  10\n",
       " 10  12  15\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 69,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 取a和10中较大的元素\n",
    "t.clamp(a, min=10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 线性代数\n",
    "\n",
    "PyTorch的线性函数主要封装了Blas和Lapack，其用法和接口都与之类似。常用的线性代数函数如表3-7所示。\n",
    "\n",
    "表3-7: 常用的线性代数函数\n",
    "\n",
    "|函数|功能|\n",
    "|:---:|:---:|\n",
    "|trace|对角线元素之和(矩阵的迹)|\n",
    "|diag|对角线元素|\n",
    "|triu/tril|矩阵的上三角/下三角，可指定偏移量|\n",
    "|mm/bmm|矩阵乘法，batch的矩阵乘法|\n",
    "|addmm/addbmm/addmv/addr/baddbmm...|矩阵运算|\n",
    "|t|转置|\n",
    "|dot/cross|内积/外积|\n",
    "|inverse|求逆矩阵|\n",
    "|svd|奇异值分解|\n",
    "\n",
    "具体使用说明请参见官方文档[^3]，需要注意的是，矩阵的转置会导致存储空间不连续，需调用它的`.contiguous`方法将其转为连续。\n",
    "[^3]: http://pytorch.org/docs/torch.html#blas-and-lapack-operations"
   ]
  },
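  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "以表中最常用的矩阵乘法`mm`和`trace`为例，给出一个简单示意（非书中原例）：\n",
    "\n",
    "```python\n",
    "m = t.ones(2, 3)\n",
    "n = t.arange(0, 6).view(3, 2).float()\n",
    "r = t.mm(m, n)   # 2x3矩阵乘3x2矩阵，结果形状为2x2\n",
    "r.trace()        # 对角线元素之和，即6+9=15\n",
    "```"
   ]
  },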
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "False"
      ]
     },
     "execution_count": 70,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = a.t()\n",
    "b.is_contiguous()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "  0   9\n",
       "  3  12\n",
       "  6  15\n",
       "[torch.FloatTensor of size 3x2]"
      ]
     },
     "execution_count": 71,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.contiguous()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.1.2 Tensor和Numpy\n",
    "\n",
    "Tensor和Numpy数组之间具有很高的相似性，彼此之间的互操作也非常简单高效。需要注意的是，Numpy和Tensor共享内存。由于Numpy历史悠久，支持丰富的操作，所以当遇到Tensor不支持的操作时，可先转成Numpy数组，处理后再转回tensor，其转换开销很小。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[1., 1., 1.],\n",
       "       [1., 1., 1.]], dtype=float32)"
      ]
     },
     "execution_count": 72,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "a = np.ones([2, 3],dtype=np.float32)\n",
    "a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  1  1\n",
       " 1  1  1\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 73,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = t.from_numpy(a)\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 74,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  1  1\n",
       " 1  1  1\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 74,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = t.Tensor(a) # 也可以直接将numpy对象传入Tensor\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "   1  100    1\n",
       "   1    1    1\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 75,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[0, 1]=100\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[  1., 100.,   1.],\n",
       "       [  1.,   1.,   1.]], dtype=float32)"
      ]
     },
     "execution_count": 76,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c = b.numpy() # a, b, c三个对象共享内存\n",
    "c"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**注意**： 当numpy的数据类型和Tensor的类型不一样的时候，数据会被复制，不会共享内存。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[1., 1., 1.],\n",
       "       [1., 1., 1.]])"
      ]
     },
     "execution_count": 77,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = np.ones([2, 3])\n",
    "a # 注意和上面的a的区别（dtype不是float32）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  1  1\n",
       " 1  1  1\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 78,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = t.Tensor(a) # 类型不一致(a是float64)，数据被复制为FloatTensor(float32)\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  1  1\n",
       " 1  1  1\n",
       "[torch.DoubleTensor of size 2x3]"
      ]
     },
     "execution_count": 79,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c = t.from_numpy(a) # 注意c的类型（DoubleTensor）\n",
    "c"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 80,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 1  1  1\n",
       " 1  1  1\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 80,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[0, 1] = 100\n",
    "b # b与a不共享内存，所以即使a改变了，b也不变"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 81,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "   1  100    1\n",
       "   1    1    1\n",
       "[torch.DoubleTensor of size 2x3]"
      ]
     },
     "execution_count": 81,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c # c与a共享内存"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "广播法则(broadcast)是科学运算中经常使用的一个技巧，它在快速执行向量化计算的同时不会占用额外的内存/显存。\n",
    "Numpy的广播法则定义如下：\n",
    "\n",
    "- 让所有输入数组都向其中shape最长的数组看齐，shape中不足的部分通过在前面加1补齐\n",
    "- 两个数组要么在某一个维度的长度一致，要么其中一个为1，否则不能计算 \n",
    "- 当输入数组的某个维度的长度为1时，计算时沿此维度复制扩充成一样的形状\n",
    "\n",
    "PyTorch当前已经支持了自动广播法则，但是笔者还是建议读者通过以下两个函数的组合手动实现广播法则，这样更直观，更不易出错：\n",
    "\n",
    "- `unsqueeze`或者`view`：为数据某一维的形状补1，实现法则1\n",
    "- `expand`或者`expand_as`，重复数组，实现法则3；该操作不会复制数组，所以不会占用额外的空间。\n",
    "\n",
    "注意，repeat实现与expand相类似的功能，但是repeat会把相同数据复制多份，因此会占用额外的空间。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "a = t.ones(3, 2)\n",
    "b = t.zeros(2, 3,1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 83,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,.,.) = \n",
       "  1  1\n",
       "  1  1\n",
       "  1  1\n",
       "\n",
       "(1 ,.,.) = \n",
       "  1  1\n",
       "  1  1\n",
       "  1  1\n",
       "[torch.FloatTensor of size 2x3x2]"
      ]
     },
     "execution_count": 83,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 自动广播法则\n",
    "# 第一步：a是2维,b是3维，所以先在较小的a前面补1 ，\n",
    "#               即：a.unsqueeze(0)，a的形状变成（1，3，2），b的形状是（2，3，1）,\n",
    "# 第二步:   a和b在第一维和第三维形状不一样，其中一个为1 ，\n",
    "#               可以利用广播法则扩展，两个形状都变成了（2，3，2）\n",
    "a+b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "(0 ,.,.) = \n",
       "  1  1\n",
       "  1  1\n",
       "  1  1\n",
       "\n",
       "(1 ,.,.) = \n",
       "  1  1\n",
       "  1  1\n",
       "  1  1\n",
       "[torch.FloatTensor of size 2x3x2]"
      ]
     },
     "execution_count": 84,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 手动广播法则\n",
    "# 或者 a.view(1,3,2).expand(2,3,2)+b.expand(2,3,2)\n",
    "a.unsqueeze(0).expand(2, 3, 2) + b.expand(2,3,2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 85,
   "metadata": {},
   "outputs": [],
   "source": [
    "# expand不会占用额外空间，只会在需要的时候才扩充，可极大节省内存\n",
    "e = a.unsqueeze(0).expand(10000000000000, 3,2)"
   ]
  },
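  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "expand与repeat的区别可以用`data_ptr`验证（示意代码，非书中原例）：\n",
    "\n",
    "```python\n",
    "a = t.ones(3, 2)\n",
    "b = a.unsqueeze(0).expand(4, 3, 2)\n",
    "c = a.unsqueeze(0).repeat(4, 1, 1)\n",
    "b.data_ptr() == a.data_ptr()  # True，expand共享存储\n",
    "c.data_ptr() == a.data_ptr()  # False，repeat复制了数据\n",
    "```"
   ]
  },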
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.1.3 内部结构\n",
    "\n",
    "tensor的数据结构如图3-1所示。tensor分为头信息区(Tensor)和存储区(Storage)，信息区主要保存着tensor的形状（size）、步长（stride）、数据类型（type）等信息，而真正的数据则保存成连续数组。由于数据动辄成千上万，因此信息区元素占用内存较少，主要内存占用则取决于tensor中元素的数目，也即存储区的大小。\n",
    "\n",
    "一般来说，一个tensor有着与之相对应的storage，storage是在data之上封装的接口，便于使用。不同tensor的头信息一般不同，但却可能使用相同的数据。下面看两个例子。\n",
    "\n",
    "![图3-1: Tensor的数据结构](imgs/tensor_data_structure.svg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 86,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       " 0.0\n",
       " 1.0\n",
       " 2.0\n",
       " 3.0\n",
       " 4.0\n",
       " 5.0\n",
       "[torch.FloatStorage of size 6]"
      ]
     },
     "execution_count": 86,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.arange(0, 6)\n",
    "a.storage()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 87,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       " 0.0\n",
       " 1.0\n",
       " 2.0\n",
       " 3.0\n",
       " 4.0\n",
       " 5.0\n",
       "[torch.FloatStorage of size 6]"
      ]
     },
     "execution_count": 87,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = a.view(2, 3)\n",
    "b.storage()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 88,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 88,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 一个对象的id值可以看作它在内存中的地址\n",
    "# storage的内存地址一样，即是同一个storage\n",
    "id(b.storage()) == id(a.storage())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 89,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "   0  100    2\n",
       "   3    4    5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 89,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# a改变，b也随之改变，因为他们共享storage\n",
    "a[1] = 100\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 90,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       " 0.0\n",
       " 100.0\n",
       " 2.0\n",
       " 3.0\n",
       " 4.0\n",
       " 5.0\n",
       "[torch.FloatStorage of size 6]"
      ]
     },
     "execution_count": 90,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c = a[2:] \n",
    "c.storage()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 91,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(29054536, 29054528)"
      ]
     },
     "execution_count": 91,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c.data_ptr(), a.data_ptr() # data_ptr返回tensor首元素的内存地址\n",
    "# 可以看出相差8，这是因为c相对a偏移了两个元素，每个float元素占4个字节：2*4=8"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 92,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "   0\n",
       " 100\n",
       "-100\n",
       "   3\n",
       "   4\n",
       "   5\n",
       "[torch.FloatTensor of size 6]"
      ]
     },
     "execution_count": 92,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c[0] = -100 # c[0]的内存地址对应a[2]的内存地址\n",
    "a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 93,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 6666   100  -100\n",
       "    3     4     5\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 93,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "d = t.Tensor(c.storage())\n",
    "d[0] = 6666\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 94,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 94,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 下面４个tensor共享storage\n",
    "id(a.storage()) == id(b.storage()) == id(c.storage()) == id(d.storage())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 95,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(0, 2, 0)"
      ]
     },
     "execution_count": 95,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a.storage_offset(), c.storage_offset(), d.storage_offset()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 96,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 96,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "e = b[::2, ::2] # 隔2行/列取一个元素\n",
    "id(e.storage()) == id(a.storage())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 97,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "((3, 1), (6, 2))"
      ]
     },
     "execution_count": 97,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b.stride(), e.stride()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "False"
      ]
     },
     "execution_count": 98,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "e.is_contiguous()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "可见绝大多数操作并不修改tensor的数据，而只是修改了tensor的头信息。这种做法更节省内存，同时提升了处理速度。在使用中需要注意。\n",
    "此外有些操作会导致tensor不连续，这时需调用`tensor.contiguous`方法将它们变成连续的数据，该方法会把数据复制一份，不再与原来的数据共享storage。\n",
    "另外读者可以思考一下，之前说过的高级索引一般不共享storage，而普通索引共享storage，这是为什么？（提示：普通索引可以通过只修改tensor的offset、stride和size，而不修改storage来实现）。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.1.4 其它有关Tensor的话题\n",
    "这部分的内容不好专门划分一小节，但是笔者认为仍值得读者注意，故而将其放在这一小节。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 持久化\n",
    "Tensor的保存和加载十分简单，使用t.save和t.load即可完成相应的功能。在save/load时可指定使用的`pickle`模块，在load时还可将GPU tensor映射到CPU或其它GPU上。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 99,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "if t.cuda.is_available():\n",
    "    a = a.cuda(1) # 把a转为GPU1上的tensor,\n",
    "    t.save(a,'a.pth')\n",
    "\n",
    "    # 加载为b, 存储于GPU1上(因为保存时tensor就在GPU1上)\n",
    "    b = t.load('a.pth')\n",
    "    # 加载为c, 存储于CPU\n",
    "    c = t.load('a.pth', map_location=lambda storage, loc: storage)\n",
    "    # 加载为d, 存储于GPU0上\n",
    "    d = t.load('a.pth', map_location={'cuda:1':'cuda:0'})"
   ]
  },
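  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在没有GPU的环境下，最基本的保存和加载如下（示意代码，非书中原例，文件名仅为举例）：\n",
    "\n",
    "```python\n",
    "x = t.ones(2, 3)\n",
    "t.save(x, 'x.pth')   # 序列化到文件\n",
    "y = t.load('x.pth')  # 从文件反序列化，y与x内容相同\n",
    "```"
   ]
  },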
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 向量化"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "向量化计算是一种特殊的并行计算方式，相对于一般程序在同一时间只执行一个操作的方式，它可在同一时间执行多个操作，通常是对不同的数据执行同样的一个或一批指令，或者说把指令应用于一个数组/向量上。向量化可极大提高科学运算的效率。Python本身是一门高级语言，使用很方便，但这也意味着很多操作很低效，尤其是`for`循环。在科学计算程序中应当极力避免使用Python原生的`for`循环。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 100,
   "metadata": {},
   "outputs": [],
   "source": [
    "def for_loop_add(x, y):\n",
    "    result = []\n",
    "    for i,j in zip(x, y):\n",
    "        result.append(i + j)\n",
    "    return t.Tensor(result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 101,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "192 µs ± 8.38 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n",
      "The slowest run took 14.64 times longer than the fastest. This could mean that an intermediate result is being cached.\n",
      "9.97 µs ± 13.8 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n"
     ]
    }
   ],
   "source": [
    "x = t.zeros(100)\n",
    "y = t.ones(100)\n",
    "%timeit -n 10 for_loop_add(x, y)\n",
    "%timeit -n 10 x + y"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "从上面的输出来看，二者有将近20倍的速度差距，因此在实际使用中应尽量调用内建函数(built-in function)，这些函数底层由C/C++实现，能通过执行底层优化实现高效计算。因此在平时写代码时，就应养成向量化的思维习惯。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "此外还有以下几点需要注意：\n",
    "- 大多数`t.function`都有一个参数`out`，这时候产生的结果将保存在out指定tensor之中。\n",
    "- `t.set_num_threads`可以设置PyTorch进行CPU多线程并行计算时候所占用的线程数，这个可以用来限制PyTorch所占用的CPU数目。\n",
    "- `t.set_printoptions`可以用来设置打印tensor时的数值精度和格式。\n",
    "\n",
    "下面举例说明。"
   ]
  },
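  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "先看`t.set_num_threads`的用法示意（非书中原例，线程数仅为举例）：\n",
    "\n",
    "```python\n",
    "t.set_num_threads(4)  # 限制PyTorch在CPU上并行计算时最多使用4个线程\n",
    "t.get_num_threads()   # 查看当前设置的线程数\n",
    "```"
   ]
  },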
  {
   "cell_type": "code",
   "execution_count": 102,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "16777216.0 16777216.0\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "(199999, 199998)"
      ]
     },
     "execution_count": 102,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.arange(0, 20000000)\n",
    "print(a[-1], a[-2]) # 32bit的FloatTensor只有24位有效精度，过大的整数会损失精度\n",
    "b = t.LongTensor()\n",
    "t.arange(0, 200000, out=b) # 64bit的LongTensor可以精确表示这些整数\n",
    "b[-1],b[-2]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 103,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       " 0.0785 -0.2514 -1.0843\n",
       " 0.7733  0.0812 -0.4563\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 103,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = t.randn(2,3)\n",
    "a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 104,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "0.0785463676 -0.2514404655 -1.0843452215\n",
       "0.7733024955 0.0811786801 -0.4562841356\n",
       "[torch.FloatTensor of size 2x3]"
      ]
     },
     "execution_count": 104,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t.set_printoptions(precision=10)\n",
    "a"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.1.5 小试牛刀：线性回归"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "线性回归是机器学习的入门知识，应用十分广泛。线性回归利用数理统计中的回归分析，来确定两种或两种以上变量间相互依赖的定量关系，其表达形式为$y = wx+b+e$，误差$e$服从均值为0的正态分布。首先让我们来确定线性回归的损失函数：\n",
    "$$\n",
    "loss = \\sum_i^N \\frac 1 2 ({y_i-(wx_i+b)})^2\n",
    "$$\n",
    "然后利用随机梯度下降法更新参数$\\textbf{w}$和$\\textbf{b}$来最小化损失函数，最终学得$\\textbf{w}$和$\\textbf{b}$的数值。"
   ]
  },
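  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "补充说明：对损失函数分别关于$w$和$b$求导，即可得到梯度下降所需的梯度（与上式符号一致）：\n",
    "$$\n",
    "\\frac{\\partial loss}{\\partial w} = \\sum_i^N (wx_i+b-y_i)x_i,\\qquad\n",
    "\\frac{\\partial loss}{\\partial b} = \\sum_i^N (wx_i+b-y_i)\n",
    "$$"
   ]
  },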
  {
   "cell_type": "code",
   "execution_count": 105,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch as t\n",
    "%matplotlib inline\n",
    "from matplotlib import pyplot as plt\n",
    "from IPython import display"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 106,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 设置随机数种子，保证在不同电脑上运行时下面的输出一致\n",
    "t.manual_seed(1000) \n",
    "\n",
    "def get_fake_data(batch_size=8):\n",
    "    ''' 产生随机数据：y=x*2+3，加上了一些噪声'''\n",
    "    x = t.rand(batch_size, 1) * 20\n",
    "    y = x * 2 + (1 + t.randn(batch_size, 1))*3\n",
    "    return x, y"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 107,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<matplotlib.collections.PathCollection at 0x7f32f3150908>"
      ]
     },
     "execution_count": 107,
     "metadata": {},
     "output_type": "execute_result"
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXQAAAD8CAYAAABn919SAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAAD3RJREFUeJzt3XFsnHd9x/H3d0k6XEBzS70qMWUp\nUJmhdsRgVbBuVUdhLgjRUG2MbkPZxlYmgQYDZWvYH5Q/phYFqKb9UamQrvmjVGMQXLQx0ihU6yaN\nbm6dNSlZVmBtVydNzMDAhsXS8N0ffkxj187d2Xf33P38fkmnu/vdYz0fOcrHd7/nd88TmYkkqf/9\nVN0BJEntYaFLUiEsdEkqhIUuSYWw0CWpEBa6JBXCQpekQljoklQIC12SCrGxmzu76KKLcuvWrd3c\npST1vYcffvjbmTnUaLuuFvrWrVuZnJzs5i4lqe9FxJPNbOeUiyQVwkKXpEJY6JJUiIaFHhEviIh/\niYh/i4jHIuJj1filEfFQRHwjIv46Is7rfFxJ0kqaeYf+I+CNmfkaYBtwXUS8Hvg4cHtmvhL4LvCe\nzsWUJDXScJVLzl8B43+qp5uqWwJvBH6zGt8L3ALc0f6IktSfJqam2b3/GMdn59gyOMDO8RG2jw53\nbH9NzaFHxIaIOAScAg4A3wRmM/PZapOngc6llKQ+MzE1za59h5menSOB6dk5du07zMTUdMf22VSh\nZ+aZzNwGvBS4EnhVszuIiJsiYjIiJmdmZlYZU5L6y+79x5g7fWbR2NzpM+zef6xj+2xplUtmzgIP\nAG8ABiNiYcrmpcCyf3Yy887MHMvMsaGhhl90kqQiHJ+da2m8HZpZ5TIUEYPV4wHgzcBR5ov916rN\ndgD3dSqkJPWbLYMDLY23QzPv0DcDD0TEo8C/Agcy82+BPwU+FBHfAF4C7OlYSknqMzvHRxjYtGHR\n2MCmDewcH+nYPptZ5fIoMLrM+LeYn0+XJC2xsJqlm6tcunpyLklaT7aPDne0wJfyq/+SVAgLXZIK\nYaFLUiEsdEkqhIUuSYWw0CWpEBa6JBXCQpekQljoklQIC12SCmGhS1IhLHRJKoSFLkmFsNAlqRAW\nuiQVwkKXpEJY6JJUCAtdkgphoUtSISx0SSqEhS5JhbDQJakQFrokFcJCl6RCWOiSVAgLXZIKYaFL\nUiEsdEkqhIUuSYWw0CWpEBa6JBXCQpekQljoklQIC12SCmGhS1IhGhZ6RFwSEQ9ExNcj4rGI+EA1\nfktETEfEoer21s7HlSStZGMT2zwLfDgzH4mIFwMPR8SB6rXbM/MTnYsnSWpWw0LPzBPAierxDyLi\nKDDc6WCSpNa0NIceEVuBUeChauj9EfFoRNwVERe0OZskqQVNF3pEvAj4AvDBzPw+cAfwCmAb8+/g\nP7nCz90UEZMRMTkzM9OGyJKk5TRV6BGxifkyvycz9wFk5snMPJOZPwY+DVy53M9m5p2ZOZaZY0ND\nQ+3KLUlaoplVLgHsAY5m5qfOGt981mbvAI60P54kqVnNrHK5Cng3cDgiDlVjHwFujIhtQAJPAO/t\nSEJJUlOaWeXyT0As89KX2x9HkrRaflNUkgphoUtSISx0SSpEMwdFpSJNTE2ze/8xjs/OsWVwgJ3j\nI2wf9UvQ6l8Wutalialpdu07zNzpMwBMz86xa99hAEtdfcspF61Lu/cf+0mZL5g7fYbd+4/VlEha\nOwtd69Lx2bmWxqV+YKFrXdoyONDSuNQPLHStSzvHRxjYtGHR2MCmDewcH6kpkbR2HhTVurRw4NNV\nLiqJha51a/vosAWuojjlIkmFsNAlqRAWuiQVwkKXpEJY6JJUCFe5SFKLevXEbha6JLWgl0/s5pSL\nJLWgl0/sZqFLUgt6+cRuFroktaCXT+
xmoUtSC3r5xG4eFJWkFvTyid0sdElqUa+e2M0pF0kqhIUu\nSYWw0CWpEBa6JBXCQpekQljoklQIC12SCmGhS1IhLHRJKoSFLkmFsNAlqRAWuiQVomGhR8QlEfFA\nRHw9Ih6LiA9U4xdGxIGIeLy6v6DzcSVJK2nmHfqzwIcz89XA64H3RcSrgZuBg5l5GXCweq4+NDE1\nzVW3fZVLb/47rrrtq0xMTdcdSdIqNCz0zDyRmY9Uj38AHAWGgeuBvdVme4HtnQqpzlm44O307BzJ\ncxe8tdSl/tPSHHpEbAVGgYeAizPzRPXSM8DFbU2mrujlC95Kak3ThR4RLwK+AHwwM79/9muZmUCu\n8HM3RcRkREzOzMysKazar5cveCupNU0VekRsYr7M78nMfdXwyYjYXL2+GTi13M9m5p2ZOZaZY0ND\nQ+3IrDbq5QveSmpNM6tcAtgDHM3MT5310peAHdXjHcB97Y+nTuvlC95Kak0z1xS9Cng3cDgiDlVj\nHwFuAz4XEe8BngTe2ZmI6qRevuCtpNbE/PR3d4yNjeXk5GTX9idJJYiIhzNzrNF2flNUkgphoUtS\nISx0SSqEhS5JhbDQJakQzSxbVJtMTE27PFBSx1joXbJwEqyF86YsnAQLsNQltYWF3iXnOgmWhV4f\nPzWpJBZ6l3gSrN7jpyaVxoOiXeJJsHqPpw5WaSz0LvEkWL3HT00qjYXeJdtHh7n1hisYHhwggOHB\nAW694Qo/2tfIT00qjXPoXbR9dNgC7yE7x0cWzaGDn5rU3yx0rVueOlilsdC1rvmpSSVxDl2SCmGh\nS1IhLHRJKoSFLkmFsNAlqRAWuiQVwkKXpEJY6JJUCAtdkgphoUtSISx0SSqEhS5JhbDQJakQFrok\nFcJCl6RCWOiSVIi+uMDFxNS0V5WRpAZ6vtAnpqYXXfdxenaOXfsOA1jqknSWnp9y2b3/2KKL+ALM\nnT7D7v3HakokSb2p5wv9+OxcS+OStF71fKFvGRxoaVyS1quGhR4Rd0XEqYg4ctbYLRExHRGHqttb\nOxVw5/gIA5s2LBob2LSBneMjndqlJPWlZt6h3w1ct8z47Zm5rbp9ub2xnrN9dJhbb7iC4cEBAhge\nHODWG67wgKgkLdFwlUtmPhgRWzsfZWXbR4ctcElqYC1z6O+PiEerKZkL2pZIkrQqqy30O4BXANuA\nE8AnV9owIm6KiMmImJyZmVnl7iRJjayq0DPzZGaeycwfA58GrjzHtndm5lhmjg0NDa02pySpgVUV\nekRsPuvpO4AjK20rSeqOhgdFI+Je4Brgooh4GvgocE1EbAMSeAJ4bwczSpKa0MwqlxuXGd7TgSyS\npDXo+W+KSpKaY6FLUiEsdEkqhIUuSYWw0CWpEBa6JBXCQpekQljoklQIC12SCmGhS1IhLHRJKoSF\nLkmFsNAlqRAWuiQVwkKXpEJY6JJUCAtdkgphoUtSISx0SSqEhS5JhbDQJakQFrokFcJCl6RCWOiS\nVAgLXZIKYaFLUiEsdEkqhIUuSYWw0CWpEBa6JBXCQpekQljoklQIC12SCmGhS1IhLHRJKoSFLkmF\naFjoEXFXRJyKiCNnjV0YEQci4vHq/oLOxpQkNdLMO/S7geuWjN0MHMzMy4CD1XNJUo0aFnpmPgh8\nZ8nw9cDe6vFeYHubc0mSWrTaOfSLM/NE9fgZ4OI25ZEkrdKaD4pmZgK50usRcVNETEbE5MzMzFp3\nJ0lawWoL/WREbAao7k+ttGFm3pmZY5k5NjQ0tMrdSZIaWW2hfwnYUT3eAdzXnjiSpNVqZtnivcA/\nAyMR8XREvAe4DXhzRDwOvKl6Lkmq0cZGG2TmjSu8dG2bs0iS1sBvikpSISx0SSqEhS5JhbDQJakQ\nFrokFcJCl6RCWOiSVAgLXZIKYaFLUiEsdEkqhIUuSYVoeC6XfjMxNc3u/cc4PjvHlsEBdo6PsH10\nuO
5YktRxRRX6xNQ0u/YdZu70GQCmZ+fYte8wgKUuqXhFTbns3n/sJ2W+YO70GXbvP1ZTIknqnqIK\n/fjsXEvjklSSogp9y+BAS+OSVJKiCn3n+AgDmzYsGhvYtIGd4yM1JZKk7inqoOjCgU9XuUhaj4oq\ndJgvdQtc0npU1JSLJK1nFrokFcJCl6RCWOiSVAgLXZIKEZnZvZ1FzABPNtjsIuDbXYizFmZsn37I\nacb26IeM0Js5fy4zhxpt1NVCb0ZETGbmWN05zsWM7dMPOc3YHv2QEfon53KccpGkQljoklSIXiz0\nO+sO0AQztk8/5DRje/RDRuifnM/Tc3PokqTV6cV36JKkVeipQo+IJyLicEQciojJuvMsJyIGI+Lz\nEfHvEXE0It5Qd6azRcRI9ftbuH0/Ij5Yd66lIuKPI+KxiDgSEfdGxAvqzrRURHygyvdYL/0OI+Ku\niDgVEUfOGrswIg5ExOPV/QU9mPHXq9/ljyOi9lUkK2TcXf3ffjQivhgRg3VmbFVPFXrlVzJzWw8v\nG/oL4CuZ+SrgNcDRmvMskpnHqt/fNuB1wA+BL9Yca5GIGAb+CBjLzMuBDcC76k21WERcDvwBcCXz\n/85vi4hX1pvqJ+4GrlsydjNwMDMvAw5Wz+t0N8/PeAS4AXiw62mWdzfPz3gAuDwzfwH4D2BXt0Ot\nRS8Wes+KiJ8Brgb2AGTm/2XmbL2pzula4JuZ2ejLXHXYCAxExEbgfOB4zXmW+nngocz8YWY+C/wD\n82VUu8x8EPjOkuHrgb3V473A9q6GWmK5jJl5NDN75gK/K2S8v/r3Bvga8NKuB1uDXiv0BO6PiIcj\n4qa6wyzjUmAG+KuImIqIz0TEC+sOdQ7vAu6tO8RSmTkNfAJ4CjgBfC8z76831fMcAX45Il4SEecD\nbwUuqTnTuVycmSeqx88AF9cZphC/B/x93SFa0WuF/kuZ+VrgLcD7IuLqugMtsRF4LXBHZo4C/0v9\nH22XFRHnAW8H/qbuLEtV87vXM/8Hcgvwwoj47XpTLZaZR4GPA/cDXwEOAWdqDdWknF+65vK1NYiI\nPwOeBe6pO0sreqrQq3duZOYp5ud9r6w30fM8DTydmQ9Vzz/PfMH3orcAj2TmybqDLONNwH9m5kxm\nngb2Ab9Yc6bnycw9mfm6zLwa+C7zc6q96mREbAao7k/VnKdvRcTvAG8Dfiv7bF13zxR6RLwwIl68\n8Bj4VeY/9vaMzHwG+K+IWLjq9LXA12uMdC430oPTLZWngNdHxPkREcz/Hnvq4DJARPxsdf8y5ufP\nP1tvonP6ErCjerwDuK/GLH0rIq4D/gR4e2b+sO48reqZLxZFxMt5bjXGRuCzmfnnNUZaVkRsAz4D\nnAd8C/jdzPxuvakWq/4gPgW8PDO/V3ee5UTEx4DfYP5j7RTw+5n5o3pTLRYR/wi8BDgNfCgzD9Yc\nCYCIuBe4hvmzAp4EPgpMAJ8DXsb8GU3fmZlLD5zWnfE7wF8CQ8AscCgzx3ss4y7gp4H/rjb7Wmb+\nYS0BV6FnCl2StDY9M+UiSVobC12SCmGhS1IhLHRJKoSFLkmFsNAlqRAWuiQVwkKXpEL8P8Zyh0xJ\noDlpAAAAAElFTkSuQmCC\n",
      "text/plain": [
       "<matplotlib.figure.Figure at 0x7f32f318dd68>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# visualize the generated x-y distribution\n",
    "x, y = get_fake_data()\n",
    "plt.scatter(x.squeeze().numpy(), y.squeeze().numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 108,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXwAAAD8CAYAAAB0IB+mAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAAIABJREFUeJzt3Xd8VHW6x/HPAwQINfQSiDQN0gSM\nYBcr2BH3umt3Lax39e56dxcpYlkrinXvurrY3XUtKwEsKKIi2BUEE1roCCF0Qg2k/e4fc+ImIWWS\n6Znv+/XilZkzZ3Iex5NnzvzOb77HnHOIiEjdVy/SBYiISHio4YuIxAk1fBGROKGGLyISJ9TwRUTi\nhBq+iEicUMMXEYkTavgiInFCDV9EJE40COfG2rZt67p16xbOTYqIhEzugQKyc/MoLpdY0KRhfY5o\n05QG9azMupv3HKSgqJiE+vXo2KIxSU0S/NrOggULtjvn2gVab1gbfrdu3Zg/f344NykiEjInTfqU\nwty8w5YnJyXy5bgzfr4/fWE249MzaVtQ9POyhIT6TBzVn5GDkqvdjpmtD0a9fg/pmFl9M1toZu95\n97ub2bdmtsrM3jSzhsEoSEQkVmRX0OwBNpVbPnlWFnmlmj1AXkERk2dlhay2itRkDP/3wLJS9x8G\nnnDO9QJ2ATcEszARkWi1de9Bbnnth0of75yUWOZ++TeA6paHil8N38y6AOcDz3v3DTgDeNtb5RVg\nZCgKFBGJFs453vp+A2c9NpfZy7Zwfv9ONG5Qto0mJtRnzPDUMsvKvwFUtzxU/D3CfxK4HSj27rcB\ncp1zhd79jUD1A1EiIjFq3fb9XPHct9w+NYOjO7Xgw9+fwtNXDmbSpQNITkrE8I3dP1TBuPyY4akk\nJtQvs6yiN4ZQq/akrZldAGx1zi0ws2E13YCZjQZGA6SkpNS4QBGRSCooKub5z9fy5McraNigHg+N\n6s8v07pSz5uBM3JQcrUnXksenzwri025eXROSmTM8FS/TtgGkz+zdE4CLjKz84DGQAvgKSDJzBp4\nR/ldgOyKnuycmwJMAUhLS9PVVkQkZmRu3M3YqRkszdnDuf068ueL+tK+ReMy60xfmO1XI/fnjSHU\nqm34zrnxwHgA7wj/T865K83s38AvgDeAa4EZIaxTRCRsDuQX8sTsFbzwxVraNmvEs1cdy4h+HQ9b\nr2S6ZckMnOzcPManZwJEvLlXJJB5+GOBN8zsfmAh8EJwShIRiZzPV25jwrRMNuzM4/IhKYw7tzct\nEyv+glRV0y1jvuE75z4DPvNurwGGBL8kEZHw27U/n/veX0r6D9n0aNuUN0cfz9Aebap8TrRMt/RX\nWL9pKyISbZxzvPPjJu59dym78wr4nzN6ccvpvWhcblZNRTonJVb45atwT7f0lxq+iMSt7Nw8Jk7L\nZE7WNo7pmsRrl/and8cWfj9/zPDUMmP4EJnplv5SwxeRuFNU7PjH1+t4xIs2uOuCPlx7Yjfqlwo7\n80e0TLf0lxq+iMSVrM17GTs1g0UbchmW2o77R/ajS6smtf590TDd0l9q+CISEv7OTw+XgwVF/G3O\nKp6Zu5rmjRN46lcDueiYzviSYuKDGr6IBF20zU//bu1OxqVnsGbbfkYNSmbiBX1o3TT+An7V8EUk\n6KJlfvqegwU8/MFyXvv2J7q0SuTV64dw6lEBX0ckZqnhi0jQRcP89I+WbObOGYvZtvcQN57cnT+c\ncxRNGsZ3y4vv/3oRCYlIzk/fuucg97y7hJmZm+ndsTlTrk7jmK5JId9uLFDDF5Ggi8T8dOccb36/\ngQdmLuNQYTFjhqcy+tQeJNSvyXWe6jY1fBEJunDPT1+7fT/j0zP4Zs1OhnZvzUOj+tOjXbOQbCuW\nqeGLSEiEY356QVExz32+hic/XkmjBv
WYNKo/l5XKqpey1PBFJCZlbMxl7NRMluXs4bz+HbnnwsOz\n6qUsNXwRiSkH8gt5/KMVvPjlWto1b8Tfrz6W4X0Pz6qXw6nhi0jMmLfCl1W/cVceVw5NYey5vWnR\nuOKsejmcGr6IRERNohd27s/n/veWkr4wmx7tmvLWb05gSPfWYa449qnhi0jY+Ru9UJJV/+d3l7In\nr4DfndGL3/qZVS+HU8MXkbDzJ3ph464DTJy+mM+ytjGwaxKTaphVL4ertuGbWWNgHtDIW/9t59zd\nZvYycBqw21v1OufcolAVKiJ1R1XRC0XFjle/XsdkL6v+7gv7cM0JNc+ql8P5c4R/CDjDObfPzBKA\nL8zsA++xMc65t0NXnojURZVFL7Rr3ohRz3zFj0HKqpeyqv3OsfPZ591N8P65kFYlInXamOGpJJYb\nh29Qz9i+7xAbdh7gqV8N5KXrjlOzDzK/QibMrL6ZLQK2ArOdc996Dz1gZhlm9oSZNarkuaPNbL6Z\nzd+2bVuQyhaRWDZyUDIPjepPshem1qCeUVjsGDkomY//cBoXD0yOqwuThItfDd85V+ScGwh0AYaY\nWT9gPNAbOA5oDYyt5LlTnHNpzrm0du3iN4daRMo64+j2nJbq6wkdWzbm1euH8PhlA+PywiThUqNZ\nOs65XDObA4xwzj3qLT5kZi8Bfwp6dSJSJ81aspm7vKz6m07pzv+eraz6cPBnlk47oMBr9onA2cDD\nZtbJOZdjvs9dI4HFIa5VRGLclj0HuXvGEj5cspmjO7XguWvSGNBFWfXh4s9baifgFTOrj28I6C3n\n3Htm9qn3ZmDAIuDmENYpIjGsuNjx5vwNPDhzGfmFxYwd0ZsbT+murPowq7bhO+cygEEVLD8jJBWJ\nSJ2yZts+xqdn8u3anRzfozUPjRpA97ZNI11WXNKgmYiEREFRMVPmreGpT1bSuEE9Hr7Ul1Wv2TeR\no4YvIkH344Zcxk7NYPnmvZzfvxN3X9SH9s2VVR9pavgiEjQH8gt57KMVvPTlWto3b8yUq4/lHGXV\nRw01fBEJirkrtnGHl1V/1fEp3D5CWfXRRg1fRAJSOqu+Z7um/PvmEzium7Lqo5EavojUinOOGYs2\nce97S9l7sIDfnXkkt5zek0YNQptVX5MLp0hZavgiUmMbdh7gjumLmbdiG4NSkpg0agCpHZuHfLv+\nXjhFKqaGLyJ+Kyp2vPzVOh6dlUU9gz9f1Jerjj/isKz6UB2F+3PhFKmcGr6I+GVZzh7GTc3gx427\nOT21Hfdf8p+0y9JCeRRe1YVTpHpq+CJSpYMFRfz101U8O3c1LRMT+Mvlg7hwQKdKv0AVyqPwyi6c\n0rmCNx45nBq+iFTqmzU7mJCeyZrt+7l0cBcmnn80raqJL67NUbi/Q0BjhqeW+fQAkJhQnzHDU/38\nL4pvavgicpjdeQVM+mA5r3/3E11bJ/KPG4ZwypH+Xc+ipkfhNRkCKrmvWTq1o4YvImV8uNiXVb99\n3yFGn9qD2846skZZ9TU9Cq/pENDIQclq8LWkhi8iQNms+j6dWvDCtcfRv0vLGv+emh6F60Rs+Kjh\ni8S54mLHG99v4KEPfFn1t49I5aZTegSUVV+To3CdiA0fNXyROLbay6r/bu1OTujRhodG9adbmLPq\ndSI2fNTwReJQfmExU+at5i+frqJxg3o8cukA/iutS42y6oP15SqdiA0ff65p2xiYBzTy1n/bOXe3\nmXUH3gDaAAuAq51z+aEsVkQCt2hDLuNKsuoHdOLuC2ueVR/sL1fpRGx4+DNIdwg4wzl3DDAQGGFm\nxwMPA08453oBu4AbQlemiARq/6FC7n13KZf87UtyDxTw3DVpPH3F4FpdmKSqmTUSvfy5pq0D9nl3\nE7x/DjgDuMJb/gpwD/BM8EsUkUB9lrWVO6YtJjs3j6uPP4LbR6TSPICses2siU1+jeGbWX18wza9\ngK
eB1UCuc67QW2UjoM9jIlFmx75D3PfeUqYv2kSv9s14++YTSAtCVr1m1sQmv+ZdOeeKnHMDgS7A\nEKC3vxsws9FmNt/M5m/btq2WZYpITTjnmLZwI2c9Ppf3M3P4/ZlH8v7vTg5KswffzJrEhLK595pZ\nE/1qNEvHOZdrZnOAE4AkM2vgHeV3AbIrec4UYApAWlqaC7BeEalG6az6wSlJTLp0AEd1CG5WvWbW\nxCZ/Zum0Awq8Zp8InI3vhO0c4Bf4ZupcC8wIZaEiUrWiYsdLX67lsY9WUM/g3ov7ctXQI6hXz/+p\nljWhmTWxx58j/E7AK944fj3gLefce2a2FHjDzO4HFgIvhLBOEanC0k17GJ/uy6o/s3d77hvZT+Pp\nchh/ZulkAIMqWL4G33i+iETIwYIi/vLJSqbMW0NSk+qz6iW+6Zu2IjHqmzU7GJ+eydrt+/mvY7tw\nx/lHk9Sk6qx6iW9q+CIhFIpru/qy6pfx+ncbSGndhH/eMJSTj2wbpIqlLlPDFwmRUFzb9cPFOdw5\nYwk79h3iN6f24LazjiKxYf3qnyiCGr5IyATz2q6bdx/krhmL+WjpFvp2bsFL1x1Hv+SaZ9VLfFPD\nFwmRYMQPFBc7Xv/+JybNXE5+UTHjz+3NDSd3p0EAWfUSv9TwRUIk0PiB1dv2MX5qJt+t28mJPdvw\n4CXhz6qXukWHCSIhUtv4gfzCYv766UrOffJzsrbs5ZFfDOC1G4eq2UvAdIQvEiK1iR9Y+NMuxk3N\nJGtL7bPqRSqjhi8SQv7GD+w/VMijH2Xx8lfr6NiiMc9fk8ZZfTqEoUKJJ2r4IhE2J2srE6ctZtPu\nPK4aGnhWvUhl1PBFIqSirPpjjwhOfLFIRdTwRcLMl1WfzX3vLWXfoUJuO+tI/ntYTxo10BeoJLTU\n8EXCaMPOA0yYlsnnK7czOCWJhy8dwJFBzqoXqYwavkgYFBYV8/JX637Oqr/v4r5cGcKsepGKqOGL\nhNjSTXsYl55BhrLqJcLU8EVCpCSr/u/z1tCqSQJ/vWIQ5/dXVr1Ejhq+SAh8vXoHE6Ypq16iixq+\nSBDtPlDAQx8s443vfVn1r904lJN6KateooM/FzHvCrwKdAAcMMU595SZ3QPcBGzzVp3gnJsZqkJF\noplzjg8Xb+aud5awc38+vzmtB7edqax6iS7+HOEXAn90zv1gZs2BBWY223vsCefco6ErTyT6Kate\nYoU/FzHPAXK823vNbBkQ2DXaROqA4mLHv777iYc/8GXVjzu3Nzcqq16iWI3G8M2sGzAI+BY4CbjV\nzK4B5uP7FLCrgueMBkYDpKSkBFiuSHRYtXUf49Mz+H7dLk7q5cuqP6KN4osluplzzr8VzZoBc4EH\nnHPpZtYB2I5vXP8+oJNz7vqqfkdaWpqbP39+gCWLRE5+YTF/n7ua//t0FYkN6zPx/KP5xbFdNNVS\nQsrMFjjn0gL9PX4d4ZtZAjAVeM05lw7gnNtS6vHngPcCLUYkmv3w0y7Ge1n1FwzoxN0X9qVd80aR\nLkvEb/7M0jHgBWCZc+7xUss7eeP7AJcAi0NTokhk7TtUyKOzsnjla2XVS2zz5wj/JOBqINPMFnnL\nJgCXm9lAfEM664DfhKRCkQias3wrE6f7suqvOf4I/jRcWfUSu/yZpfMFUNEApebcS521fd8h7n13\nKe/8qKx6qTv0TVuRUpxzpP+QzX3vL2W/suqljlHDF/Eoq17qOjV8iXvKqpd4oYYvcW3Jpt2Mm5pJ\nZray6qXuU8OXuHSwoIgnP17Jc5/7sur/7/JBXDBAWfVSt6nhS9z5avV2JqRnsm7HAWXVS1xRw5c6\nafrCbCbPymJTbh6dkxIZMzyV01Pb8+DMZbw5X1n1Ep/U8KXOmb4wm/HpmeQVFAGQnZvH7W9n0Cih\nHgfyi5RVL3FLDV/qnMmzsn5u9iXyi4pxOGbccpKy6iVuKbhb6pxN
uXkVLi8ocmr2EtfU8KXOqSzB\nMlnTLSXOaUhH6oz8wmKenbuaHfvzD3ssMaE+Y4anRqAqkeihhi91woL1uxifnsGKLfu48JjODOnW\nimfnrikzS2fkIF2ZU+KbGr7EtPJZ9S9cm8aZR/uy6q8+oVtEaxOJNmr4ErM+Xb6FidMWk7PnIFcf\nfwS3j+hNs0bapUUqo78OiTmls+qPbN+Mt28+kWOPaBXpskSinhq+xAznHFN/yOZ+L6v+f886ipuH\n9VBWvYif/LmmbVfgVaADvssZTnHOPWVmrYE3gW74LnF4mXNuV+hKlXj20w5fVv0Xq7Zz7BGtmDSq\nv7LqRWrInyP8QuCPzrkfzKw5sMDMZgPXAZ845yaZ2ThgHDA2dKVKPCosKubFL9fy+OwVNKhXT1n1\nIgHw55q2OUCOd3uvmS0DkoGLgWHeaq8An6GGL0G0OHs349IzWJy9h7OO9mXVd2qpL0+J1FaNxvDN\nrBswCPgW6OC9GQBsxjfkIxKwsln1DXn6isGc17+jsupFAuR3wzezZsBU4Dbn3J7Sf3zOOWdmrpLn\njQZGA6SkpARWrdR5X63azvhpmazfcYBfpnVlwnlH07JJQqTLEqkT/Gr4ZpaAr9m/5pxL9xZvMbNO\nzrkcM+sEbK3ouc65KcAUgLS0tArfFER2HyjggZlLeWv+Ro5o04R/3TiUE5VVLxJU/szSMeAFYJlz\n7vFSD70DXAtM8n7OCEmFUqc553g/M4d73lnKrgP53HxaT24760gaJ2iqpUiw+XOEfxJwNZBpZou8\nZRPwNfq3zOwGYD1wWWhKlLoqZ3ced05fzMfLttIvuQUv//o4xReLhJA/s3S+ACo7W3ZmcMuReFBc\n7Hjt2/U8/GEWhcXFTDivN9ef1J0G9ZXWLRJK+qathNXKLXsZl57JgvW7OLlXWx68pD8pbZpEuiyR\nuKCGL2FxqLCIZz5bzd/mrKZJo/o89l/HMGpwsqZaioSRGr6E3IL1uxg3NYOVW/dx0TGduevCPrRt\nVvFVqUQkdNTwJWT2HSpk8ofLefWb9XRq0ZgXr0vjjN76fp5IpKjhS0h8smwLE6cvZvOeg1x7Qjf+\nNDxVWfUiEaa/QAmqbXsP8ed3l/BeRg5HdWjG01eeyOAUZdWLRAM1fAkK5xxvL9jI/e8vIy+/iD+c\nfRQ3n9aThg001VIkWqjhS8DW79jPhGmZfLlqB2lHtGLSpf3p1V5Z9SLRRg1fau2wrPqR/bhySIqy\n6kWilBq+1ErZrPoO3Deyr7LqRaKcGr7USF5+EU9+soLnP1+rrHqRGKOGL377ctV2xqdn8tNOZdWL\nxCI1fKlW7oF8Hnh/Gf9esJFubZrwr5uGcmJPZdWLxBo1fKnUf7Lql7DrQAH/Pawnvz9TWfUisUoN\nXyq0KTePu2b4sur7J7fkleuH0LezsupFYpkavpRRXOz457frefiD5RQ5x8Tzj+a6E7spq16kDlDD\nl5+Vzqo/5ci2PDBSWfUidYkavnCosIi/zVnN3z5bRdNGDZRVL1JH+XMR8xeBC4Ctzrl+3rJ7gJuA\nbd5qE5xzM0NVpITOgvU7GTs1k1Vb93HxwM7ceYGy6kXqKn+O8F8G/gq8Wm75E865R4NekYTF3oMF\nTJ6VxT++WU/nlom8dN1xnN67faTLEpEQ8uci5vPMrFvoS5Fw+XipL6t+y96DXHdiN/50TipNlVUv\nUucF8ld+q5ldA8wH/uic21XRSmY2GhgNkJKSEsDmJFDb9h7inneX8H5GDqkdmvO3qwYrq14kjtR2\nrt0zQE9gIJADPFbZis65Kc65NOdcWrt27Wq5OQmEc4635m/grMfnMnvJFv549lG8+z8nq9mLxJla\nHeE757aU3Daz54D3glaRBNX6HfsZn57JV6t3MKRbax4c1Z9e7ZtFuiwRiYBaNXwz6+Scy/HuXgIs\nDl5JEgyFRcU8/8Vanpi9gob1
6/HAJf24/Dhl1YvEM3+mZb4ODAPamtlG4G5gmJkNBBywDvhNCGuU\nGlqcvZuxUzNYsmkPZ/fpwH0X96Njy8aRLktEIsyfWTqXV7D4hRDUIgHKyy/iyY9X8PwXa2ndtCHP\nXDmYEf2UVS8iPpqLV0eUzqq/fEhXxo1QVr2IlKWGH+NyD+Rz//vLeHvBRrq3bcrrNx3PCT3bRLos\nEYlCavgxyjnHuxk53PuuL6v+t8N68jtl1YtIFdTwY9Cm3DzunL6YT5ZvZUCXlrx6/VD6dG4R6bJE\nJMqp4ceQomLHP79ZzyMfLqfYoax6EakRNfwYsWLLXsZNzeCHn3I55ci2PHhJf7q2Vla9iPhPDT/K\nHSos4uk5q3nms1U0a9SAJ355DCMHKqteRGpODT8Mpi/MZvKsLDbl5tE5KZExw1MZOSi52ufNX7eT\ncem+rPqRXlZ9G2XVi0gtqeGH2PSF2YxPzySvoAiA7Nw8xqdnAlTY9KcvzObhD5eTs/sgAK2aJPDS\nr4/j9FRl1YtIYNTwQ2zyrKyfm32JvIIiJs/KOqzhT1+Yze1vZ5BfVPyfdfOL2H2goNrt1PZThIjE\nD03vCLFNuXl+Ld+69yDj0zPLNHuAg4XFTJ6VVeU2Sj5FZOfm4fjPp4jpC7MDql1E6hY1/BDrnJRY\n5XLnHG99v4GzHpt72CeBEpW9aZSo6lOEiEgJNfwQGzM8lcQKvv26/1Ahz81bwxXPfcvtUzPo3akF\n7ZtXfEK2sjeNEv5+ihCR+KaGH2IjByXz0Kj+tCoXZJabV8ADM5excMMuHrykP2/cdDwTzjv6sDeH\nxIT6jBmeWuU2qvsUISICavhhMXJQMk0aVnx+PCmxIVcM9V2YpOTNITkpEQOSkxJ5aFT/ak++VvQp\nwp83ChGJL5qlEybZlQyvbNlzsMz9kYOSazy7pmT92szS0ewekfihhh8Gn6/cRv16RlGxO+yxYA27\n1OaNoqbfERCR2FbtkI6ZvWhmW81scallrc1stpmt9H62Cm2ZsWnX/nz++NaPXP3Cd7Ru0pCG5ULO\nIj3sotk9IvHFnzH8l4ER5ZaNAz5xzh0JfOLdr1OmL8zmpEmf0n3c+5w06dMazWl3zvHOj5s46/G5\nzFiUzS2n9+TzsafzyC8G1Hh8PpQ0u0ckvvhzTdt5Ztat3OKL8V3YHOAV4DNgbBDriqhAhjqyc/OY\nOC2TOVnbOKZLS/5541CO7tTi5+dG01BJ56TECs8taHaPSN1U21k6HZxzOd7tzUCHINUTFWoz1FFU\n7Hj5y7Wc/fhcvlmzkzsv6EP6b0/6udlHI83uEYkvAZ+0dc45Mzv8bKTHzEYDowFSUlIC3VxY1HSo\nI2vzXsZOzWDRhlxOPaodD4zsFxNZ9YHM7hGR2FPbhr/FzDo553LMrBOwtbIVnXNTgCkAaWlplb4x\nRBN/hzoOFRbx9KereGbu6pjNqo+2YSYRCZ3aDum8A1zr3b4WmBGccqKDP0Md36/byXlPfc5fPl3F\nBQM68/EfTuOSQV1iqtmLSHyp9gjfzF7Hd4K2rZltBO4GJgFvmdkNwHrgslAWGW5VDXXsPVjAwx8u\n55/f/ERyUiIv//o4himrXkRigDkXvlGWtLQ0N3/+/LBtL9g+WrKZu2YsYeveg1x3Ynf+eM5RNG2k\n766JSGiZ2QLnXFqgv0fdyg9b9x7knneWMDNzM707NufZq49lYNekSJclIlIjavhVcM7x1vwNPPD+\nMg4WFjNmeCqjT+1BQn1lzolI7FHDr8Ta7fuZkJ7J12t2MKR7ax4a1Z+e7ZpFuiwRkVpTwy+noKiY\n5z5fw1Mfr6Rhg3o8NKo/v0zrSr16mn0jIrFNDb+UjI25jJ2aybKcPYzo25E/X9yXDi0aR7osEZGg\nUMMHDuQX8sTsFbzwxVraNmvEs1cNZkS/TpEuS0QkqOK+4X++chsTpmWyYWcelw9JYdy5vWmZmF
D9\nE0VEYkzcNvxd+/O57/2lpP+QTY+2TXlz9PEM7dEm0mWJiIRM3DX8kqz6e99dyu68Am49vRe3ntGL\nxuWiFERE6pq4avhVZdWLiNR1cdHwi4od//h6HY/MysI5uPOCPlx3Yjfql5tqqQt6i0hdVucbvr9Z\n9bqgt4jUdXW24ZfOqm/eOIEnfzmQiwd2rjS+uKqrXKnhi0hdUCcb/vfrdjJuagart+1n1KBkJl7Q\nh9ZNG1b5HF3QW0TqujrV8PccLOCRUln1r1w/hNOOaufXc3VBbxGp6+pMwy+dVX/Dyd35w9k1y6of\nMzy1zBg+6ILeIlK3xHzDL59V//erj+WYWmTV64LeIlLXxWzDD0VWvS7oLSJ1WUAN38zWAXuBIqAw\nGJfg8kfprPqhXlZ9D2XVi4hUKRhH+Kc757YH4fdUS1n1IiK1FzNDOqWz6s/t15E/X9SX9sqqFxHx\nW6AN3wEfmZkD/u6cm1J+BTMbDYwGSElJqfEGDuQX8vhHK3jxy5Ks+mMZ0a9jgGWLiMSfQBv+yc65\nbDNrD8w2s+XOuXmlV/DeBKYApKWluZr88nkrfFn1G3flceXQFMae25sWjZVVLyJSGwE1fOdctvdz\nq5lNA4YA86p+VvXKZNW3a8pbvzmBId1bB/prRUTiWq0bvpk1Beo55/Z6t88B7g2kmPJZ9f9zRi9u\nOV1Z9SIiwRDIEX4HYJoXRtYA+Jdz7sPa/rKNuw4wcfpiPsvaxsCuSbx2aX96d1RWvYhIsNS64Tvn\n1gDHBFpAUbHjla/W8ehHWQDcfWEfrjnh8Kx6EREJTESnZS7fvIexUzP5cUMuw1Lbcf/IfnRpdXhW\nvYiIBC4iDf9gQRFPz1nFM5+tpkViAk/9aiAXHVN5Vr2IiAQu7A3/u7U7GZeewZpt+xk1OJmJ51ef\nVS8iIoELa8PPzs3jsr9/TZdWibx6/RBO9TOrXkREAhfWhr9zfz53ndKd/z37KJo0jJlUBxGROiGs\nXbdXu2bccX6fcG5SREQ8tQ+Pr4XEhvoClYhIpIS14YuISOSo4YuIxAk1fBGROBHVU2WmL8zWRcVF\nRIIkahv+9IXZjE/PJK+gCPDN4R+fngmgpi8iUgtRO6QzeVbWz82+RF5BEZNnZUWoIhGR2Ba1DX9T\nbl6NlouISNWituF3Tkqs0XIREala1Db8McNTSSx3pavEhPqMGZ4aoYpERGJb1J60LTkxq1k6IiLB\nEVDDN7MRwFNAfeB559ykoFTlGTkoWQ1eRCRIaj2kY2b1gaeBc4E+wOVmpmQ0EZEoFcgY/hBglXNu\njXMuH3gDuDg4ZYmISLAF0vCTgQ2l7m/0lomISBQK+UlbMxsNjPbuHjKzxaHeZhC0BbZHugg/qM7g\niYUaQXUGW6zUGZTpiYE0/Gzyb7BMAAAFRUlEQVSga6n7XbxlZTjnpgBTAMxsvnMuLYBthoXqDK5Y\nqDMWagTVGWyxVGcwfk8gQzrfA0eaWXczawj8CngnGEWJiEjw1foI3zlXaGa3ArPwTct80Tm3JGiV\niYhIUAU0hu+cmwnMrMFTpgSyvTBSncEVC3XGQo2gOoMtruo051wwfo+IiES5qM3SERGR4ApJwzez\nEWaWZWarzGxcBY83MrM3vce/NbNuoaijmhq7mtkcM1tqZkvM7PcVrDPMzHab2SLv313hrtOrY52Z\nZXo1HHa23nz+4r2eGWY2OMz1pZZ6jRaZ2R4zu63cOhF5Lc3sRTPbWno6sJm1NrPZZrbS+9mqkude\n662z0syujUCdk81suff/dJqZJVXy3Cr3jzDUeY+ZZZf6f3teJc+tsi+Eoc43S9W4zswWVfLcsLye\nlfWgkO6fzrmg/sN3Anc10ANoCPwI9Cm3zm+BZ73bvwLeDHYdftTZCRjs3W4OrKigzmHAe+GurYJa\n1wFtq3j8POADwIDjgW8jWGt9YDNwRDS8lsCpwGBgcalljw
DjvNvjgIcreF5rYI33s5V3u1WY6zwH\naODdfriiOv3ZP8JQ5z3An/zYL6rsC6Gus9zjjwF3RfL1rKwHhXL/DMURvj+RCxcDr3i33wbONDML\nQS2Vcs7lOOd+8G7vBZYRu98Uvhh41fl8AySZWacI1XImsNo5tz5C2y/DOTcP2Flucen97xVgZAVP\nHQ7Mds7tdM7tAmYDI8JZp3PuI+dcoXf3G3zfdYmoSl5Pf4Q1iqWqOr1ecxnweqi2748qelDI9s9Q\nNHx/Ihd+XsfboXcDbUJQi1+8IaVBwLcVPHyCmf1oZh+YWd+wFvYfDvjIzBaY75vL5UVTzMWvqPwP\nKRpeS4AOzrkc7/ZmoEMF60TTawpwPb5PcRWpbv8Ih1u9oacXKxmCiKbX8xRgi3NuZSWPh/31LNeD\nQrZ/xv1JWzNrBkwFbnPO7Sn38A/4hiaOAf4PmB7u+jwnO+cG40smvcXMTo1QHVUy3xfwLgL+XcHD\n0fJaluF8n4+jeqqamd0BFAKvVbJKpPePZ4CewEAgB99wSTS7nKqP7sP6elbVg4K9f4ai4fsTufDz\nOmbWAGgJ7AhBLVUyswR8L/Rrzrn08o875/Y45/Z5t2cCCWbWNsxl4pzL9n5uBabh+3hcml8xF2Fw\nLvCDc25L+Qei5bX0bCkZ8vJ+bq1gnah4Tc3sOuAC4Ervj/8wfuwfIeWc2+KcK3LOFQPPVbL9aHk9\nGwCjgDcrWyecr2clPShk+2coGr4/kQvvACVnlX8BfFrZzhwq3jjeC8Ay59zjlazTseTcgpkNwfd6\nhfWNycyamlnzktv4TuSVD6B7B7jGfI4Hdpf6SBhOlR45RcNrWUrp/e9aYEYF68wCzjGzVt4QxTne\nsrAx3wWGbgcucs4dqGQdf/aPkCp3vuiSSrYfLVEsZwHLnXMbK3ownK9nFT0odPtniM4+n4fvjPNq\n4A5v2b34dlyAxvg+9q8CvgN6hKKOamo8Gd9HpQxgkffvPOBm4GZvnVuBJfhmFHwDnBiBOnt42//R\nq6Xk9Sxdp+G7GM1qIBNIi0CdTfE18JallkX8tcT3BpQDFOAb57wB3/miT4CVwMdAa2/dNHxXbit5\n7vXeProK+HUE6lyFb5y2ZP8smdnWGZhZ1f4R5jr/4e13GfiaVafydXr3D+sL4azTW/5yyT5Zat2I\nvJ5V9KCQ7Z/6pq2ISJyI+5O2IiLxQg1fRCROqOGLiMQJNXwRkTihhi8iEifU8EVE4oQavohInFDD\nFxGJE/8PgbHcXU2xP2wAAAAASUVORK5CYII=\n",
      "text/plain": [
       "<matplotlib.figure.Figure at 0x7f32f2510f28>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.9918574094772339 2.9549660682678223\n"
     ]
    }
   ],
   "source": [
    "# randomly initialize the parameters\n",
    "w = t.rand(1, 1)\n",
    "b = t.zeros(1, 1)\n",
    "\n",
    "lr = 0.001 # learning rate\n",
    "\n",
    "for ii in range(20000):\n",
    "    x, y = get_fake_data()\n",
    "    \n",
    "    # forward: compute the loss\n",
    "    y_pred = x.mm(w) + b.expand_as(y) # x @ w is equivalent to x.mm(w) (Python 3 only)\n",
    "    loss = 0.5 * (y_pred - y) ** 2 # squared error\n",
    "    loss = loss.sum()\n",
    "    \n",
    "    # backward: compute the gradients by hand\n",
    "    dloss = 1\n",
    "    dy_pred = dloss * (y_pred - y)\n",
    "    \n",
    "    dw = x.t().mm(dy_pred)\n",
    "    db = dy_pred.sum()\n",
    "    \n",
    "    # update the parameters\n",
    "    w.sub_(lr * dw)\n",
    "    b.sub_(lr * db)\n",
    "    \n",
    "    if ii % 1000 == 0:\n",
    "        # plot the current fit\n",
    "        display.clear_output(wait=True)\n",
    "        x = t.arange(0, 20).view(-1, 1)\n",
    "        y = x.mm(w) + b.expand_as(x)\n",
    "        plt.plot(x.numpy(), y.numpy()) # predicted line\n",
    "        \n",
    "        x2, y2 = get_fake_data(batch_size=20)\n",
    "        plt.scatter(x2.numpy(), y2.numpy()) # true data\n",
    "        \n",
    "        plt.xlim(0, 20)\n",
    "        plt.ylim(0, 41)\n",
    "        plt.show()\n",
    "        plt.pause(0.5)\n",
    "\n",
    "print(w.squeeze()[0], b.squeeze()[0])"
   ]
  },
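  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The hand-derived gradients above (`dw = x.t().mm(dy_pred)`, `db = dy_pred.sum()`) can be cross-checked with a minimal NumPy sketch of the same update rule, assuming the same kind of fake data `y = 2x + 3 + noise`; this sketch is illustrative and not part of the book's code:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "np.random.seed(0)\n",
    "w, b = np.random.rand(), 0.0\n",
    "lr = 0.001\n",
    "for _ in range(20000):\n",
    "    x = np.random.rand(8, 1) * 20          # same scale as get_fake_data\n",
    "    y = 2 * x + 3 + np.random.randn(8, 1)  # ground truth: w=2, b=3\n",
    "    grad = (w * x + b) - y                 # d(loss)/d(y_pred) for 0.5*(y_pred-y)**2\n",
    "    w -= lr * (x * grad).sum()             # dw = x^T . grad\n",
    "    b -= lr * grad.sum()                   # db = grad.sum()\n",
    "```\n",
    "\n",
    "After 20000 stochastic updates, `w` and `b` should land near 2 and 3, matching the tensor version above."
   ]
  },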
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As the output shows, the program has essentially learned w=2 and b=3, and the fitted line in the plot matches the data well."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Although many operations were covered above, mastering this example is essentially enough. As for the rest, readers can revisit this section or consult the corresponding documentation when the need arises later.\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
