{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 线性回归的简洁实现\n",
    "\n",
    "随着深度学习框架的发展，开发深度学习应用变得越来越便利。实践中，我们通常可以用比上一节更简洁的代码来实现同样的模型。在本节中，我们将介绍如何使用PyTorch提供的接口更方便地实现线性回归的训练。\n",
    "\n",
    "## 生成数据集\n",
    "\n",
    "我们生成与上一节中相同的数据集。其中`features`是训练数据特征，`labels`是标签。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "num_inputs = 2\n",
    "num_examples = 1000\n",
    "true_w = [2, -3.4]\n",
    "true_b = 4.2\n",
    "features = torch.randn(num_examples, num_inputs)\n",
    "labels = true_w[0] * features[:, 0] + true_w[1] * features[:, 1] + true_b\n",
    "labels += torch.normal(mean=torch.zeros(labels.shape), std=0.01)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 读取数据\n",
    "\n",
    "Pytorch的`utils`中提供了`data`包来读取数据。由于`data`常用作变量名，我们将导入的`data`模块用添加了torch首字母的假名`tdata`代替。在每一次迭代中，我们将随机读取包含10个数据样本的小批量。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.utils import data as tdata\n",
    "\n",
    "batch_size = 10\n",
    "# 将训练数据的特征和标签组合\n",
    "dataset = tdata.TensorDataset(features, labels)\n",
    "# 随机读取小批量\n",
    "data_iter = tdata.DataLoader(dataset, batch_size, shuffle=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这里`data_iter`的使用跟上一节中的一样。让我们读取并打印第一个小批量数据样本。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 0.0255, -1.3971],\n",
      "        [ 2.1460, -1.1916],\n",
      "        [ 0.6476, -0.1836],\n",
      "        [-1.2027, -0.7906],\n",
      "        [-1.4211, -0.9525],\n",
      "        [ 0.2168, -1.3676],\n",
      "        [ 0.0313,  1.6980],\n",
      "        [-0.9016,  0.3065],\n",
      "        [ 0.4038,  1.4425],\n",
      "        [ 0.8925,  0.9993]]) tensor([ 9.0113, 12.5377,  6.1233,  4.4782,  4.5965,  9.2797, -1.5120,  1.3541,\n",
      "         0.0972,  2.5914])\n"
     ]
    }
   ],
   "source": [
    "for X, y in data_iter:\n",
    "    print(X, y)\n",
    "    break"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 定义模型\n",
    "\n",
    "在上一节从零开始的实现中，我们需要定义模型参数，并使用它们一步步描述模型是怎样计算的。当模型结构变得更复杂时，这些步骤将变得更繁琐。其实，nn提供了大量预定义的层，这使我们只需关注使用哪些层来构造模型。下面将介绍如何使用nn更简洁地定义线性回归。\n",
    "\n",
    "首先，导入`nn`模块。实际上，“nn”是neural networks（神经网络）的缩写。顾名思义，该模块定义了大量神经网络的层。我们先定义一个模型变量`net`，它是一个`Sequential`实例。在nn中，`Sequential`实例可以看作是一个串联各个层的容器。在构造模型时，我们在该容器中依次添加层。当给定输入数据时，容器中的每一层将依次计算并将输出作为下一层的输入。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch import nn\n",
    "\n",
    "net = nn.Sequential()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "回顾图3.1中线性回归在神经网络图中的表示。作为一个单层神经网络，线性回归输出层中的神经元和输入层中各个输入完全连接。因此，线性回归的输出层又叫全连接层。在nn中，全连接层是一个`Linear`实例。该层输入特征数为2，输出个数为1。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "net.add_module('linear', nn.Linear(2, 1))"
   ]
  },
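  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note (a minimal sketch, not required for what follows), the same network can also be built in a single step by passing the layer directly to the `Sequential` constructor; in that case the layer is indexed by position rather than by name."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Equivalent one-step construction; the linear layer is then accessible as net_alt[0]\n",
    "net_alt = nn.Sequential(nn.Linear(2, 1))\n",
    "net_alt"
   ]
  },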
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 初始化模型参数\n",
    "\n",
    "在使用`net`前，我们需要初始化模型参数，如线性回归模型中的权重和偏差。我们从nn导入`init`模块。该模块提供了模型参数初始化的各种方法。这里的`init`是`initializer`的缩写形式。我们通过`init.normal(tensor, std=0.01)`指定权重参数每个元素将在初始化时随机采样于均值为0、标准差为0.01的正态分布。偏差参数默认会初始化为零。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**因为PyTorch没有直接提供对整个模型参数进行初始化的接口，这里自定义函数初始化整个模型的参数**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Sequential(\n",
       "  (linear): Linear(in_features=2, out_features=1, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from torch.nn import init\n",
    "\n",
    "def params_init(model):\n",
    "    if isinstance(model, nn.Linear):\n",
    "        init.normal_(tensor=model.weight.data, std=0.01)\n",
    "        init.constant_(tensor=model.bias.data, val=0)\n",
    "\n",
    "net.apply(params_init)"
   ]
  },
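  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional sanity check, we can print the freshly initialized parameters: the weights should be small random values and the bias should be exactly zero."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect the initialized weight and bias of the linear layer\n",
    "net[0].weight.data, net[0].bias.data"
   ]
  },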
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 定义损失函数\n",
    "\n",
    "在PyTorch中，`nn`模块定义了各种损失函数。我们直接使用它提供的均方损失作为模型的损失函数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "loss = nn.MSELoss() # 均方损失等于平方损失除以样本数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 定义优化算法\n",
    "\n",
    "同样，我们也无须实现小批量随机梯度下降。在导入optim后，我们创建一个`SGD`实例，并指定学习率为0.03的小批量随机梯度下降（`sgd`）为优化算法。该优化算法将用来迭代`net`实例所有通过`add_module`函数嵌套的层所包含的全部参数。这些参数可以通过`parameters`函数获取。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch import optim\n",
    "\n",
    "optimizer = optim.SGD(net.parameters(), lr=0.03)"
   ]
  },
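  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an aside (a minimal sketch, not needed for the training below), the optimizer stores its hyperparameters in `optimizer.param_groups`; inspecting a group's `lr` entry shows the current learning rate, and assigning to it is one way to adjust the learning rate later on."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Print the learning rate of each parameter group; assigning to param_group['lr'] would change it\n",
    "for param_group in optimizer.param_groups:\n",
    "    print(param_group['lr'])"
   ]
  },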
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 训练模型\n",
    "\n",
    "在使用PyTorch训练模型时，我们通过调用`optimizer`实例的`step`函数来迭代模型参数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 1, loss: 0.000241\n",
      "epoch 2, loss: 0.000099\n",
      "epoch 3, loss: 0.000099\n"
     ]
    }
   ],
   "source": [
    "num_epochs = 3\n",
    "for epoch in range(1, num_epochs + 1):\n",
    "    for X, y in data_iter:\n",
    "        net.zero_grad()\n",
    "        l = loss(net(X), y.reshape(batch_size, -1))\n",
    "        l.backward()\n",
    "        optimizer.step()\n",
    "    with torch.no_grad():\n",
    "        l = loss(net(features), labels.reshape(num_examples, -1))\n",
    "        print('epoch %d, loss: %f' % (epoch, l.data.numpy()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面我们分别比较学到的模型参数和真实的模型参数。我们从`net`获得需要的层，并访问其权重（`weight`）和偏差（`bias`）。学到的参数和真实的参数很接近。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "([2, -3.4], tensor([[ 1.9997, -3.3999]]))"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "linear = net[0]\n",
    "true_w, linear.weight.data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(4.2, tensor([4.2007]))"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "true_b, linear.bias.data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 小结\n",
    "\n",
    "* 使用PyTorch接口可以更简洁地实现模型。\n",
    "* 在PyTorch中，`data`模块提供了有关数据处理的工具，`nn`模块定义了大量神经网络的层以及各种损失函数。\n",
    "* PyTorch的`init`模块提供了模型参数初始化的各种方法。\n",
    "\n",
    "\n",
    "## 练习\n",
    "\n",
    "* 查阅PyTorch文档，看看`nn`和`init`模块里提供了哪些损失函数和初始化方法。\n",
    "* 如何访问`linear.weight`的梯度？\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "## 扫码直达[讨论区](https://discuss.gluon.ai/t/topic/742)\n",
    "\n",
    "![](../img/qr_linear-regression-gluon.svg)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:pytorch]",
   "language": "python",
   "name": "conda-env-pytorch-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
