{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Data preparation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import torch\n",
    "from torch.utils import data\n",
    "from d2l import torch as d2l"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "true_w = torch.tensor([2, -3.4])\n",
    "true_b = 4.2\n",
    "features, labels = d2l.synthetic_data(true_w, true_b, 1000)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Reading the dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
     "def load_array(data_arrays, batch_size, is_train=True):\n",
     "    # Construct a PyTorch data iterator:\n",
     "    # wrap the arrays in a TensorDataset,\n",
     "    dataset = data.TensorDataset(*data_arrays)\n",
     "    # then return a DataLoader that yields the dataset in minibatches\n",
     "    # of batch_size, shuffling the order during training\n",
     "    return data.DataLoader(dataset, batch_size, shuffle=is_train)\n",
     "batch_size = 10\n",
     "data_iter = load_array((features, labels), batch_size)\n",
     "# Fetch one minibatch:\n",
     "# next(iter(data_iter))"
   ]
  },
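  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (an illustrative sketch using only torch, not part of the original flow), we can build a small TensorDataset directly and confirm that DataLoader yields minibatches of the expected shapes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch.utils import data\n",
    "# 20 examples with 2 features each, and 20 scalar labels\n",
    "Xs = torch.randn(20, 2)\n",
    "ys = torch.randn(20, 1)\n",
    "loader = data.DataLoader(data.TensorDataset(Xs, ys), batch_size=5, shuffle=True)\n",
    "Xb, yb = next(iter(loader))\n",
    "print(Xb.shape, yb.shape)  # torch.Size([5, 2]) torch.Size([5, 1])"
   ]
  },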
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Defining the model\n",
     "We first define a model variable net, an instance of the Sequential class.<br/>\n",
     "**The Sequential class chains multiple layers together.**<br/>\n",
     "Given input data, a Sequential instance passes it to the first layer, then feeds the first layer's output into the second layer, and so on.<br/>\n",
     "**In PyTorch, a fully connected layer is defined by the Linear class.**<br/>\n",
     "Linear takes two arguments: the input feature dimension and the output feature dimension.<br/>\n",
     "In this example we pass 2 and 1 (two features x1, x2 and one output y).<br/>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
     "# nn is the abbreviation for neural networks\n",
     "from torch import nn\n",
     "net = nn.Sequential(nn.Linear(2, 1))  # a single layer: 2 inputs, 1 output"
   ]
  },
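  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick forward-pass check (an illustrative sketch, not part of the original flow): an nn.Linear(2, 1) layer maps a batch of shape (n, 2) to outputs of shape (n, 1)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "m = nn.Sequential(nn.Linear(2, 1))\n",
    "out = m(torch.randn(4, 2))  # batch of 4 examples, 2 features each\n",
    "print(out.shape)  # torch.Size([4, 1])"
   ]
  },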
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Initializing parameters\n",
     "net[0] refers to the first layer of the network.<br/>\n",
     "Weights: sampled from a normal distribution with mean 0 and standard deviation 0.01.<br/>\n",
     "Bias: set to 0.<br/>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.0006, 0.0040]])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "net[0].weight.data.normal_(0, 0.01)  # initialize weights from a normal distribution with mean 0, std 0.01"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([0.])"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "net[0].bias.data.fill_(0)  # initialize the bias to zero"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Defining the loss function\n",
     "We use MSELoss, which computes the mean squared error (the squared L2 norm, averaged over the examples)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "loss=nn.MSELoss()"
   ]
  },
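  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see concretely what MSELoss computes (an illustrative sketch): it equals the mean of the squared differences between predictions and targets."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "yhat = torch.tensor([1.0, 2.0, 3.0])\n",
    "ytrue = torch.tensor([1.5, 2.0, 2.0])\n",
    "l_builtin = nn.MSELoss()(yhat, ytrue)\n",
    "l_manual = ((yhat - ytrue) ** 2).mean()  # (0.25 + 0.0 + 1.0) / 3\n",
    "print(l_builtin.item(), l_manual.item())"
   ]
  },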
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 定义优化算法\n",
     "Minibatch stochastic gradient descent is a standard tool for optimizing neural networks,</br>and PyTorch implements many variants of it in the optim module.</br>\n",
     "When instantiating an SGD instance, we specify the parameters to optimize (obtained from our model via net.parameters()) and a dictionary of hyperparameters required by the optimization algorithm.</br>\n",
     "Minibatch SGD only requires setting lr, which we set to 0.03 here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
     "trainer = torch.optim.SGD(net.parameters(), lr=0.03)  # SGD; net.parameters() yields the model's learnable parameters"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch:1;loss0.00010366801143391058\n",
      "epoch:2;loss0.00010368446965003386\n",
      "epoch:3;loss0.00010339235450373963\n"
     ]
    }
   ],
   "source": [
     "num_epochs = 3  # three full passes over the dataset\n",
     "for epoch in range(num_epochs):\n",
     "    for X, y in data_iter:\n",
     "        l = loss(net(X), y)  # compute the MSE loss on the minibatch\n",
     "        trainer.zero_grad()  # clear gradients from the previous step\n",
     "        l.backward()  # backpropagation\n",
     "        trainer.step()  # update the parameters\n",
     "    l = loss(net(features), labels)  # loss on the full dataset after the epoch\n",
     "    print(f'epoch:{epoch+1};loss{l}')\n"
   ]
  },
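  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The whole pipeline above can also be reproduced as a self-contained sketch (an illustration using only torch; the synthetic data is generated inline instead of with d2l.synthetic_data, and all names with a 2 suffix are new, so the notebook state above is untouched). The final loss is evaluated under torch.no_grad() so no gradient graph is built."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "from torch.utils import data\n",
    "\n",
    "torch.manual_seed(0)\n",
    "# y = X w + b + noise, mirroring the synthetic data above\n",
    "true_w2, true_b2 = torch.tensor([[2.0], [-3.4]]), 4.2\n",
    "X2 = torch.randn(1000, 2)\n",
    "y2 = X2 @ true_w2 + true_b2 + 0.01 * torch.randn(1000, 1)\n",
    "loader2 = data.DataLoader(data.TensorDataset(X2, y2), batch_size=10, shuffle=True)\n",
    "net2 = nn.Sequential(nn.Linear(2, 1))\n",
    "loss2 = nn.MSELoss()\n",
    "trainer2 = torch.optim.SGD(net2.parameters(), lr=0.03)\n",
    "for epoch in range(3):\n",
    "    for Xb, yb in loader2:\n",
    "        trainer2.zero_grad()\n",
    "        loss2(net2(Xb), yb).backward()\n",
    "        trainer2.step()\n",
    "with torch.no_grad():  # no gradients needed for evaluation\n",
    "    final = loss2(net2(X2), y2)\n",
    "print(final.item())"
   ]
  },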
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Retrieving the model's estimates of w and b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 2.0002, -3.3994]])\n",
      "tensor([4.1994])\n"
     ]
    }
   ],
   "source": [
     "w_from_model = net[0].weight.data  # estimated w\n",
     "bias_from_model = net[0].bias.data  # estimated b\n",
    "print(w_from_model)\n",
    "print(bias_from_model)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Checking the error between the estimated parameters and the true parameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[-0.0002, -0.0006]])\n",
      "tensor([0.0006])\n"
     ]
    }
   ],
   "source": [
    "print(true_w-w_from_model)\n",
    "print(true_b-bias_from_model)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.5"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "12cf4d0b9b7b18c55261077a6853aabe6f033db06abf1184072cd2e823f414c8"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
