{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So far, the tools we have used include Tensor, Variable, nn, and optim.\n",
    "\n",
    "Next, we will use these tools to build a multilayer perceptron step by step.\n",
    "\n",
    "Finally, we introduce nn Modules (which is what we will use most often from now on)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "## An MLP with raw Tensors\n",
    "\n",
    "As a warm-up, let's build a two-layer neural network using nothing but Tensors.\n",
    "\n",
    "Tips: building a neural network usually involves the following steps:\n",
    "\n",
    "1. Define the network model\n",
    "\n",
    "2. Initialize the parameters\n",
    "\n",
    "3. Forward propagation\n",
    "\n",
    "4. Compute the loss\n",
    "\n",
    "5. Backward propagation to obtain the gradients\n",
    "\n",
    "6. Update the weights"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before we build the network, let's introduce a built-in Tensor method, clamp().\n",
    "\n",
    "It clamps every element of the input Tensor into the interval [min, max] and returns the result as a new Tensor.\n",
    "\n",
    "This lets us use x.clamp(min=0) in place of the relu function."
   ]
  },
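  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of that equivalence (a minimal sketch; the example values below are made up for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "t = torch.Tensor([-2.0, -0.5, 0.0, 1.5])\n",
    "# clamp(min=0) zeroes the negative entries and keeps the rest, exactly like relu\n",
    "print(t.clamp(min=0))"
   ]
  },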
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false,
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "17814.725982620694\n",
      "1780.4133714992126\n",
      "311.0278169253468\n",
      "66.81572479427159\n",
      "16.11404561124818\n",
      "4.2082032461339\n",
      "1.166661690631095\n",
      "0.3393352667147358\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "# M is the number of samples, input_size is the input layer size\n",
    "# hidden_size is the hidden layer size, output_size is the output layer size\n",
    "M, input_size, hidden_size, output_size = 64, 1000, 100, 10\n",
    "\n",
    "# Generate random data as samples\n",
    "x = torch.randn(M, input_size) # size (64, 1000)\n",
    "y = torch.randn(M, output_size) # size (64, 10)\n",
    "\n",
    "# Initialize the parameters\n",
    "def init_parameters():\n",
    "    w1 = torch.randn(input_size, hidden_size)\n",
    "    w2 = torch.randn(hidden_size, output_size)\n",
    "    b1 = torch.randn(1, hidden_size)\n",
    "    b2 = torch.randn(1, output_size)\n",
    "    return {\"w1\": w1, \"w2\":w2, \"b1\": b1, \"b2\": b2}\n",
    "\n",
    "# Define the model\n",
    "def model(x, parameters):\n",
    "    Z1 = x.mm(parameters[\"w1\"]) + parameters[\"b1\"] # linear layer\n",
    "    A1 = Z1.clamp(min=0) # relu activation\n",
    "    Z2 = A1.mm(parameters[\"w2\"]) + parameters[\"b2\"] # linear layer\n",
    "    # To make the backward pass easier, we save the intermediate results in a cache\n",
    "    cache = {\"Z1\": Z1, \"A1\": A1}\n",
    "    return Z2, cache\n",
    "\n",
    "# Compute the loss\n",
    "def loss_fn(y_pred, y):\n",
    "    loss = (y_pred - y).pow(2).sum() # we use the sum of squared errors (MSE up to a constant factor) as the loss\n",
    "    return loss\n",
    "\n",
    "# Backward propagation: compute the gradients\n",
    "def backpropagation(x, y, y_pred, cache, parameters):\n",
    "    m = y.size()[0] # number of samples\n",
    "    # The backward pass (constant factors such as the 2 from the square are absorbed into the learning rate):\n",
    "    d_y_pred = 1/m * (y_pred - y)\n",
    "    d_w2 = 1/m * cache[\"A1\"].t().mm(d_y_pred)\n",
    "    d_b2 = 1/m * torch.sum(d_y_pred, 0, keepdim=True)\n",
    "    d_A1 = d_y_pred.mm(parameters[\"w2\"].t())\n",
    "    # derivative of the relu function: start\n",
    "    d_Z1 = d_A1.clone()\n",
    "    d_Z1[cache[\"Z1\"] < 0] = 0\n",
    "    # derivative of the relu function: end\n",
    "    d_w1 = 1/m * x.t().mm(d_Z1)\n",
    "    d_b1 = 1/m * torch.sum(d_Z1, 0, keepdim=True)\n",
    "    grads = {\n",
    "        \"d_w1\": d_w1, \n",
    "        \"d_b1\": d_b1, \n",
    "        \"d_w2\": d_w2, \n",
    "        \"d_b2\": d_b2\n",
    "    }\n",
    "    return grads\n",
    "\n",
    "# Update the parameters\n",
    "def update(lr, parameters, grads):\n",
    "    parameters[\"w1\"] -= lr * grads[\"d_w1\"]\n",
    "    parameters[\"w2\"] -= lr * grads[\"d_w2\"]\n",
    "    parameters[\"b1\"] -= lr * grads[\"d_b1\"]\n",
    "    parameters[\"b2\"] -= lr * grads[\"d_b2\"]\n",
    "    return parameters\n",
    "\n",
    "## Set the hyperparameters ##\n",
    "\n",
    "learning_rate = 1e-2\n",
    "EPOCH = 400\n",
    "\n",
    "# Initialize the parameters\n",
    "parameters = init_parameters()\n",
    "\n",
    "## Start training ##\n",
    "for t in range(EPOCH):\n",
    "    # Forward pass\n",
    "    y_pred, cache = model(x, parameters)\n",
    "    \n",
    "    # Compute the loss\n",
    "    loss = loss_fn(y_pred, y)\n",
    "    if (t+1) % 50 == 0:\n",
    "        print(loss)\n",
    "    # Backward pass\n",
    "    grads = backpropagation(x, y, y_pred, cache, parameters)\n",
    "    # Update the parameters\n",
    "    parameters = update(learning_rate, parameters, grads)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## An MLP with Variables\n",
    "\n",
    "\n",
    "Now let's rebuild the two-layer network above using Variable. This time we no longer need to differentiate by hand (thanks to the automatic differentiation mechanism).\n",
    "\n",
    "As you can see, the code below is already much leaner..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "14172.5\n",
      "417.0944519042969\n",
      "19.609352111816406\n",
      "1.0894582271575928\n",
      "0.06629468500614166\n",
      "0.004510130733251572\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "from torch.autograd import Variable # import the Variable class\n",
    "\n",
    "# M is the number of samples, input_size is the input layer size\n",
    "# hidden_size is the hidden layer size, output_size is the output layer size\n",
    "M, input_size, hidden_size, output_size = 64, 1000, 100, 10\n",
    "\n",
    "# Generate random data as samples and wrap it in Variables; requires_grad=False means\n",
    "# we do not need gradients for these Variables during the backward pass\n",
    "x = Variable(torch.randn(M, input_size), requires_grad=False)\n",
    "y = Variable(torch.randn(M, output_size), requires_grad=False)\n",
    "\n",
    "# Initialize the parameters, also wrapped in Variables; requires_grad=True means\n",
    "# their gradients will be computed automatically during the backward pass\n",
    "def init_parameters():\n",
    "    w1 = Variable(torch.randn(input_size, hidden_size), requires_grad=True)\n",
    "    w2 = Variable(torch.randn(hidden_size, output_size), requires_grad=True)\n",
    "    b1 = Variable(torch.randn(1, hidden_size), requires_grad=True)\n",
    "    b2 = Variable(torch.randn(1, output_size), requires_grad=True)\n",
    "    return {\"w1\": w1, \"w2\":w2, \"b1\": b1, \"b2\": b2}\n",
    "\n",
    "# Forward pass\n",
    "def model(x, parameters):\n",
    "    Z1 = x.mm(parameters[\"w1\"]) + parameters[\"b1\"] # linear layer\n",
    "    A1 = Z1.clamp(min=0) # relu activation\n",
    "    Z2 = A1.mm(parameters[\"w2\"]) + parameters[\"b2\"] # linear layer\n",
    "    return Z2\n",
    "\n",
    "# Compute the loss\n",
    "def loss_fn(y_pred, y):\n",
    "    loss = (y_pred - y).pow(2).sum() # again the sum of squared errors as the loss\n",
    "    return loss\n",
    "\n",
    "## Set the hyperparameters ##\n",
    "learning_rate = 1e-6\n",
    "EPOCH = 300\n",
    "\n",
    "# Initialize the parameters\n",
    "parameters = init_parameters()\n",
    "\n",
    "## Start training ##\n",
    "for t in range(EPOCH):\n",
    "    # Forward pass\n",
    "    y_pred = model(x, parameters)\n",
    "    # Compute the loss\n",
    "    loss = loss_fn(y_pred, y)\n",
    "    # All computation here happens on Variables, so loss is a Variable of\n",
    "    # size (1,), and loss.data is therefore a Tensor of size (1,).\n",
    "    # That is why loss.data[0] is the actual scalar value.\n",
    "    if (t+1) % 50 == 0:\n",
    "        print(loss.data[0])\n",
    "    # Use autograd to run the backward pass. This computes the gradient of every\n",
    "    # Variable created with requires_grad=True; here, that is w1, w2, b1 and b2.\n",
    "    loss.backward()\n",
    "    \n",
    "    # Update the parameters; .data is the underlying Tensor\n",
    "    parameters[\"w1\"].data -= learning_rate * parameters[\"w1\"].grad.data\n",
    "    parameters[\"w2\"].data -= learning_rate * parameters[\"w2\"].grad.data\n",
    "    parameters[\"b1\"].data -= learning_rate * parameters[\"b1\"].grad.data\n",
    "    parameters[\"b2\"].data -= learning_rate * parameters[\"b2\"].grad.data\n",
    "    \n",
    "    # Gradients accumulate in PyTorch: if you do not clear them manually, the next\n",
    "    # .grad will be the sum of this step's and the previous step's gradients.\n",
    "    # To use only the current gradient each step, we zero the gradients by hand.\n",
    "    parameters[\"w1\"].grad.data.zero_()\n",
    "    parameters[\"w2\"].grad.data.zero_()\n",
    "    parameters[\"b1\"].grad.data.zero_()\n",
    "    parameters[\"b2\"].grad.data.zero_()"
   ]
  },
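  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what autograd is doing for us, here is a minimal sketch on a single-element Variable: for y = x * x, the derivative dy/dx = 2x should appear in x.grad after calling backward()."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = Variable(torch.Tensor([2.0]), requires_grad=True)\n",
    "y = (x * x).sum()\n",
    "y.backward()\n",
    "print(x.grad) # dy/dx = 2x, so the gradient should be 4"
   ]
  },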
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## An MLP with nn and optim\n",
    "\n",
    "\n",
    "We have already seen how to build a linear model quickly with nn.\n",
    "\n",
    "Now let's use nn to build a multilayer perceptron just as quickly, with optim providing the optimization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "36.990989685058594\n",
      "2.2616655826568604\n",
      "0.23959983885288239\n",
      "0.03270247206091881\n",
      "0.005442502908408642\n",
      "0.0010267721954733133\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "from torch.autograd import Variable\n",
    "import torch.nn as nn\n",
    "\n",
    "# M is the number of samples, input_size is the input layer size\n",
    "# hidden_size is the hidden layer size, output_size is the output layer size\n",
    "M, input_size, hidden_size, output_size = 64, 1000, 100, 10\n",
    "\n",
    "# Generate random data as samples and wrap it in Variables; requires_grad defaults\n",
    "# to False, so no gradients are computed for these during the backward pass\n",
    "x = Variable(torch.randn(M, input_size))\n",
    "y = Variable(torch.randn(M, output_size))\n",
    "\n",
    "# Use nn's Sequential to build the model quickly; Sequential can be seen as a container of components.\n",
    "# It holds the layers of the network and chains them together into a single model.\n",
    "# Input data then flows through the Sequential stages in order, with the last layer as the output layer.\n",
    "# By default it also initializes the parameters for us.\n",
    "model = nn.Sequential(\n",
    "    nn.Linear(input_size, hidden_size),\n",
    "    nn.ReLU(),\n",
    "    nn.Linear(hidden_size, output_size),\n",
    ")\n",
    "\n",
    "# Define the loss function\n",
    "loss_fn = nn.MSELoss(size_average=False)\n",
    "\n",
    "## Set the hyperparameters ##\n",
    "learning_rate = 1e-4\n",
    "EPOCH = 300\n",
    "\n",
    "# Use the optim package to define the optimization algorithm; it updates the model's\n",
    "# parameters with their gradients automatically. Here we use stochastic gradient descent.\n",
    "# The first argument tells the optimizer which Variables to update;\n",
    "# the second is the learning rate.\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)\n",
    "\n",
    "\n",
    "## Start training ##\n",
    "for t in range(EPOCH):\n",
    "    \n",
    "    # Forward pass\n",
    "    y_pred = model(x)\n",
    "    \n",
    "    # Compute the loss\n",
    "    loss = loss_fn(y_pred, y)\n",
    "    \n",
    "    # Print the loss\n",
    "    if (t+1) % 50 == 0:\n",
    "        print(loss.data[0])\n",
    "    \n",
    "    # Before the gradient update, clear the accumulated gradients via the optimizer\n",
    "    optimizer.zero_grad()\n",
    "    \n",
    "    # Compute the gradients\n",
    "    loss.backward()\n",
    "    \n",
    "    # Update the parameters\n",
    "    optimizer.step()"
   ]
  },
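  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you are curious which Variables the optimizer is updating, model.parameters() yields them: two weight matrices and two biases, matching the w1, b1, w2, b2 we managed by hand earlier (a quick inspection, reusing the model defined above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for param in model.parameters():\n",
    "    print(param.size())"
   ]
  },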
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Customizing your own nn Module\n",
    "\n",
    "Often the model we want to build is more complex than what PyTorch provides out of the box, so we need to define our own nn.Module. When we do, we must also define the forward method, whose input is a Variable and whose output is a Variable.\n",
    "\n",
    "Customizing your own nn Module really just means defining the layers the model needs in the __init__ constructor,\n",
    "\n",
    "then overriding forward() to describe how data flows through the model; that completes your custom model.\n",
    "\n",
    "We reuse the multilayer perceptron example from above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "37.752586364746094\n",
      "2.605687141418457\n",
      "0.29293784499168396\n",
      "0.04110687971115112\n",
      "0.006826839409768581\n",
      "0.0013026120141148567\n"
     ]
    }
   ],
   "source": [
    "###   Here we show two ways to define the model; this is the first   ###\n",
    "\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "from torch.autograd import Variable\n",
    "\n",
    "# Be sure to inherit from nn.Module\n",
    "class TwoLayerNet(nn.Module):\n",
    "    def __init__(self, input_size, hidden_size, output_size):\n",
    "        \"\"\"\n",
    "        In the constructor we instantiate two linear layers. Note that the line\n",
    "        super(TwoLayerNet, self).__init__() below must never be forgotten.\n",
    "        \"\"\"\n",
    "        super(TwoLayerNet, self).__init__()\n",
    "        self.linear1 = torch.nn.Linear(input_size, hidden_size)\n",
    "        self.relu1 = torch.nn.ReLU()\n",
    "        self.linear2 = torch.nn.Linear(hidden_size, output_size)\n",
    "\n",
    "    def forward(self, x):\n",
    "        \"\"\"\n",
    "        In forward we receive a Variable and return a Variable\n",
    "        \"\"\"\n",
    "        Z1 = self.linear1(x)\n",
    "        A1 = self.relu1(Z1)\n",
    "        y_pred = self.linear2(A1)\n",
    "        return y_pred\n",
    "\n",
    "    \n",
    "# M is the number of samples, input_size is the input layer size\n",
    "# hidden_size is the hidden layer size, output_size is the output layer size\n",
    "M, input_size, hidden_size, output_size = 64, 1000, 100, 10\n",
    "\n",
    "# Generate random data as samples and wrap it in Variables; requires_grad defaults\n",
    "# to False, so no gradients are computed for these during the backward pass\n",
    "x = Variable(torch.randn(M, input_size))\n",
    "y = Variable(torch.randn(M, output_size))\n",
    "\n",
    "\n",
    "model = TwoLayerNet(input_size, hidden_size, output_size)\n",
    "\n",
    "# Define the loss function\n",
    "loss_fn = nn.MSELoss(size_average=False)\n",
    "\n",
    "## Set the hyperparameters ##\n",
    "learning_rate = 1e-4\n",
    "EPOCH = 300\n",
    "\n",
    "# Use the optim package to define the optimization algorithm; it updates the model's\n",
    "# parameters with their gradients automatically. Here we use stochastic gradient descent.\n",
    "# The first argument tells the optimizer which Variables to update;\n",
    "# the second is the learning rate.\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)\n",
    "\n",
    "## Start training ##\n",
    "for t in range(EPOCH):\n",
    "    \n",
    "    # Forward pass\n",
    "    y_pred = model(x)\n",
    "    \n",
    "    # Compute the loss\n",
    "    loss = loss_fn(y_pred, y)\n",
    "    \n",
    "    # Print the loss\n",
    "    if (t+1) % 50 == 0:\n",
    "        print(loss.data[0])\n",
    "    \n",
    "    # Before the gradient update, clear the accumulated gradients via the optimizer\n",
    "    optimizer.zero_grad()\n",
    "    \n",
    "    # Compute the gradients\n",
    "    loss.backward()\n",
    "    \n",
    "    # Update the parameters\n",
    "    optimizer.step()"
   ]
  },
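  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Printing an nn.Module shows the layers registered in its constructor, which is a handy way to check the architecture of a custom model (reusing the model instance from the cell above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(model)"
   ]
  },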
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "46.5057258605957\n",
      "2.82735276222229\n",
      "0.27365124225616455\n",
      "0.03486333414912224\n",
      "0.0050351982936263084\n",
      "0.000801865360699594\n"
     ]
    }
   ],
   "source": [
    "###   Here we show two ways to define the model; this is the second (recommended)   ###\n",
    "\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "from torch.autograd import Variable\n",
    "\n",
    "# Be sure to inherit from nn.Module\n",
    "class TwoLayerNet(nn.Module):\n",
    "    def __init__(self, input_size, hidden_size, output_size):\n",
    "        \"\"\"\n",
    "            When building a model, use nn.Sequential wherever possible; it keeps the structure clearer\n",
    "        \"\"\"\n",
    "        super(TwoLayerNet, self).__init__()\n",
    "        self.twolayernet = nn.Sequential(\n",
    "            nn.Linear(input_size, hidden_size),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(hidden_size, output_size),\n",
    "        )\n",
    "\n",
    "    def forward(self, x):\n",
    "        \"\"\"\n",
    "        In forward we receive a Variable and return a Variable\n",
    "        \"\"\"\n",
    "        y_pred = self.twolayernet(x)\n",
    "        return y_pred\n",
    "\n",
    "    \n",
    "# M is the number of samples, input_size is the input layer size\n",
    "# hidden_size is the hidden layer size, output_size is the output layer size\n",
    "M, input_size, hidden_size, output_size = 64, 1000, 100, 10\n",
    "\n",
    "# Generate random data as samples and wrap it in Variables; requires_grad defaults\n",
    "# to False, so no gradients are computed for these during the backward pass\n",
    "x = Variable(torch.randn(M, input_size))\n",
    "y = Variable(torch.randn(M, output_size))\n",
    "\n",
    "\n",
    "model = TwoLayerNet(input_size, hidden_size, output_size)\n",
    "\n",
    "# Define the loss function\n",
    "loss_fn = nn.MSELoss(size_average=False)\n",
    "\n",
    "## Set the hyperparameters ##\n",
    "learning_rate = 1e-4\n",
    "EPOCH = 300\n",
    "\n",
    "# Use the optim package to define the optimization algorithm; it updates the model's\n",
    "# parameters with their gradients automatically. Here we use stochastic gradient descent.\n",
    "# The first argument tells the optimizer which Variables to update;\n",
    "# the second is the learning rate.\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)\n",
    "\n",
    "## Start training ##\n",
    "for t in range(EPOCH):\n",
    "    \n",
    "    # Forward pass\n",
    "    y_pred = model(x)\n",
    "    \n",
    "    # Compute the loss\n",
    "    loss = loss_fn(y_pred, y)\n",
    "    \n",
    "    # Print the loss\n",
    "    if (t+1) % 50 == 0:\n",
    "        print(loss.data[0])\n",
    "    \n",
    "    # Before the gradient update, clear the accumulated gradients via the optimizer\n",
    "    optimizer.zero_grad()\n",
    "    \n",
    "    # Compute the gradients\n",
    "    loss.backward()\n",
    "    \n",
    "    # Update the parameters\n",
    "    optimizer.step()"
   ]
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python [default]",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
