{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a54c0183a55312aa",
   "metadata": {},
   "source": [
    "## 1、导包"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "20d4b167cce688d9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-11-05T03:00:59.541288Z",
     "start_time": "2024-11-05T03:00:59.536328Z"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import torch\n",
    "from torch.utils.data import Dataset, DataLoader, TensorDataset\n",
    "# Dataset 是一个抽象类，代表一个数据集\n",
    "# TensorDataset 是一个继承自 Dataset 的类，用于包装张量数据。它特别适用于已经准备好数据的情况，可以直接传入多个张量来创建数据集。\n",
    "# DataLoader 是一个迭代器，用于批量加载数据集中的数据。它可以从 Dataset 对象中批量读取数据，并支持并行加载、打乱数据等功能。\n",
    "# dataset 参数：要加载的数据集，可以是 Dataset 或 TensorDataset 的实例。\n",
    "# batch_size 参数：每个批次的数据量。\n",
    "# shuffle 参数：是否在每个 epoch 开始时打乱数据。\n",
    "# num_workers 参数：用于数据加载的子进程数。设置为大于0可以加速数据加载。\n",
    "# pin_memory 参数：是否将数据加载到带有页锁定的内存中，这在使用GPU时可以加速数据传输。"
   ]
  },
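   {
    "cell_type": "markdown",
    "id": "3f9a1c2b7d4e8a01",
    "metadata": {},
    "source": [
     "Since `Dataset` is abstract, a minimal custom subclass makes the contract concrete: implement `__len__` and `__getitem__`, and `DataLoader` can batch it. This is an illustrative sketch; `SquaresDataset` is a made-up name, not part of PyTorch.\n",
     "```python\n",
     "import torch\n",
     "from torch.utils.data import Dataset, DataLoader\n",
     "\n",
     "# A minimal custom Dataset: only __len__ and __getitem__ are required.\n",
     "class SquaresDataset(Dataset):\n",
     "    def __init__(self, n):\n",
     "        self.x = torch.arange(n, dtype=torch.float32)\n",
     "    def __len__(self):\n",
     "        return len(self.x)\n",
     "    def __getitem__(self, idx):\n",
     "        return self.x[idx], self.x[idx] ** 2  # (input, target) pair\n",
     "\n",
     "ds = SquaresDataset(8)\n",
     "loader = DataLoader(ds, batch_size=4, shuffle=False)\n",
     "xb, yb = next(iter(loader))\n",
     "print(xb.shape, yb.shape)  # each batch holds 4 inputs and 4 targets\n",
     "```"
    ]
   },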
  {
   "cell_type": "markdown",
   "id": "728e89e843c49946",
   "metadata": {},
   "source": [
    "## 2、生成数据集"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "b7920fb7c08bb262",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-11-05T02:55:00.875734Z",
     "start_time": "2024-11-05T02:55:00.848616Z"
    }
   },
   "outputs": [],
   "source": [
    "w = torch.tensor([2,-3.4])\n",
    "b = 4.2\n",
    "def linear_data(w,b,num_examples):\n",
    "    X = torch.normal(0,1,[num_examples,len(w)]) # 正态分布\n",
    "    y = torch.matmul(X, w) + b # 矩阵乘法\n",
    "    y += torch.normal(0,0.01,y.shape)\n",
    "    return X, y.reshape(-1,1)\n",
    "features,labels = linear_data(w,b,100)\n"
   ]
  },
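   {
    "cell_type": "markdown",
    "id": "6b2d4e8f1a3c5b02",
    "metadata": {},
    "source": [
     "As a sanity check on the generator above, ordinary least squares should recover parameters close to the true `w` and `b`. A self-contained sketch that regenerates stand-in data the same way:\n",
     "```python\n",
     "import torch\n",
     "\n",
     "# Regenerate synthetic data as above, then solve least squares directly:\n",
     "# the estimates should land close to w = [2, -3.4] and b = 4.2.\n",
     "w = torch.tensor([2.0, -3.4])\n",
     "b = 4.2\n",
     "X = torch.normal(0, 1, [100, len(w)])\n",
     "y = torch.matmul(X, w) + b + torch.normal(0, 0.01, (100,))\n",
     "A = torch.cat([X, torch.ones(100, 1)], dim=1)  # append a ones column for the bias\n",
     "sol = torch.linalg.lstsq(A, y.reshape(-1, 1)).solution.flatten()\n",
     "print(sol)  # approximately [2.0, -3.4, 4.2]\n",
     "```"
    ]
   },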
  {
   "cell_type": "markdown",
   "id": "98cd18a2667bc034",
   "metadata": {},
   "source": [
    "## 3、读取数据集"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "899642701e0028a8",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-11-05T04:48:17.710595Z",
     "start_time": "2024-11-05T04:48:17.702968Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<torch.utils.data.dataloader.DataLoader at 0x11d99c500>"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def load_array(data_arrays,batch_size,is_train=True):\n",
    "    \"\"\"\n",
    "    读取数据集\n",
    "    :param data_arrays: features和labels\n",
    "    :param batch_size: 批次数 \n",
    "    :param is_train: 是否每轮打乱数据\n",
    "    :return: \n",
    "    \"\"\"\n",
    "    dataset = TensorDataset(*data_arrays)\n",
    "    return DataLoader(dataset, batch_size=batch_size, shuffle=is_train)\n",
    "\n",
    "batch_size=10 # 批次数\n",
    "data_iter = load_array((features, labels),batch_size,is_train=True)\n",
    "data_iter\n",
    "# next(iter(data_iter))"
   ]
  },
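   {
    "cell_type": "markdown",
    "id": "9c4e6a1d3b5f7c03",
    "metadata": {},
    "source": [
     "The cell above displays the `DataLoader` object itself; pulling one minibatch shows what it actually yields. A self-contained sketch with freshly generated stand-in tensors:\n",
     "```python\n",
     "import torch\n",
     "from torch.utils.data import TensorDataset, DataLoader\n",
     "\n",
     "# Build an iterator the same way as load_array, then inspect one minibatch.\n",
     "features = torch.normal(0, 1, (100, 2))\n",
     "labels = torch.normal(0, 1, (100, 1))\n",
     "data_iter = DataLoader(TensorDataset(features, labels), batch_size=10, shuffle=True)\n",
     "X, y = next(iter(data_iter))\n",
     "print(X.shape, y.shape)  # 10 feature rows and 10 labels per batch\n",
     "```"
    ]
   },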
  {
   "cell_type": "markdown",
   "id": "8c3e72b4e3e38d9d",
   "metadata": {},
   "source": [
    "## 4、定义模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "7db7fa366a8c9d60",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-11-05T03:10:58.441637Z",
     "start_time": "2024-11-05T03:10:58.436162Z"
    }
   },
   "outputs": [],
   "source": [
    "from torch import nn # nn 神经网络缩写\n",
    "net = nn.Sequential(nn.Linear(2,1)) # 2：输入特征形状 1：输出特征形状"
   ]
  },
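   {
    "cell_type": "markdown",
    "id": "2a8c5f1e7d3b9e04",
    "metadata": {},
    "source": [
     "`nn.Linear(2, 1)` stores a `(1, 2)` weight matrix and a `(1,)` bias, and its forward pass is just `x @ W.T + b`. A small sketch verifies this:\n",
     "```python\n",
     "import torch\n",
     "from torch import nn\n",
     "\n",
     "# The forward pass of nn.Linear is an affine map: x @ W.T + b.\n",
     "layer = nn.Linear(2, 1)\n",
     "x = torch.randn(5, 2)\n",
     "manual = x @ layer.weight.T + layer.bias\n",
     "assert torch.allclose(layer(x), manual)\n",
     "print(layer.weight.shape, layer.bias.shape)\n",
     "```"
    ]
   },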
  {
   "cell_type": "markdown",
   "id": "c5e2bf5be2067db0",
   "metadata": {},
   "source": [
    "## 5、初始化参数\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "4b2f67abe803d1c6",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-11-05T04:46:18.544697Z",
     "start_time": "2024-11-05T04:46:18.537837Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([0.])"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# net[0] 选择网络中的第一层，使用weight.data和bias.data设置权重和偏置，使用normal设置均值为0,标准差为0.01的正态分布；截距设置为0\n",
    "net[0].weight.data.normal_(0,0.01) \n",
    "net[0].bias.data.fill_(0)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fb1e70a58703da1a",
   "metadata": {},
   "source": [
    "## 6、定义损失函数\n",
    "计算均方误差使用MSELoss类，也称为平方L2范数，默认情况下返回所有样本损失平均值。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "917030f0df867b8f",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-11-05T04:53:35.069634Z",
     "start_time": "2024-11-05T04:53:35.065267Z"
    }
   },
   "outputs": [],
   "source": [
    "loss_fn = nn.MSELoss()"
   ]
  },
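   {
    "cell_type": "markdown",
    "id": "5e1b7d3a9c2f4a05",
    "metadata": {},
    "source": [
     "A quick check of the claim that `MSELoss` averages over all examples by default (`reduction='mean'`):\n",
     "```python\n",
     "import torch\n",
     "from torch import nn\n",
     "\n",
     "# With the default reduction='mean', MSELoss averages the squared errors\n",
     "# over every element.\n",
     "pred = torch.tensor([[1.0], [2.0], [3.0]])\n",
     "target = torch.tensor([[1.0], [1.0], [1.0]])\n",
     "loss = nn.MSELoss()(pred, target)\n",
     "manual = ((pred - target) ** 2).mean()\n",
     "print(loss, manual)  # both (0 + 1 + 4) / 3\n",
     "```"
    ]
   },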
  {
   "cell_type": "markdown",
   "id": "9fc2ea4297e483d5",
   "metadata": {},
   "source": [
    "## 7、定义优化算法\n",
    "小批量随机梯度下降是一种优化神经网络的标准工具，通过torch.optim.SGD(net.parameters(),lr=0.03)实例化SGD，参数1是net传输层参数，参数2为学习率。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "691c37c75476278",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-11-05T04:53:37.809440Z",
     "start_time": "2024-11-05T04:53:37.804519Z"
    }
   },
   "outputs": [],
   "source": [
    "optimizer = torch.optim.SGD(net.parameters(),lr=0.03)"
   ]
  },
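   {
    "cell_type": "markdown",
    "id": "8d3f6b2e4a1c7d06",
    "metadata": {},
    "source": [
     "One plain SGD step is `param <- param - lr * grad`; an illustrative sketch on a tiny stand-in model verifies this against `optimizer.step()`:\n",
     "```python\n",
     "import torch\n",
     "from torch import nn\n",
     "\n",
     "# Verify the SGD update rule param <- param - lr * grad on a tiny model.\n",
     "torch.manual_seed(0)\n",
     "model = nn.Linear(2, 1)\n",
     "optimizer = torch.optim.SGD(model.parameters(), lr=0.03)\n",
     "x = torch.randn(4, 2)\n",
     "y = torch.randn(4, 1)\n",
     "loss = nn.MSELoss()(model(x), y)\n",
     "optimizer.zero_grad()\n",
     "loss.backward()\n",
     "expected = model.weight.data - 0.03 * model.weight.grad  # manual update rule\n",
     "optimizer.step()\n",
     "assert torch.allclose(model.weight.data, expected)\n",
     "print('manual SGD update matches optimizer.step()')\n",
     "```"
    ]
   },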
  {
   "cell_type": "markdown",
   "id": "e68c76a6f4cf2861",
   "metadata": {},
   "source": [
    "## 8、训练\n",
    "- 通过调用net(x)生成预测并计算损失l（前向传播）\n",
    "- 通过反向传播计算梯度\n",
    "- 通过调用优化器来更新模型参数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "8f7dfe3528a98c2c",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-11-05T04:54:09.404387Z",
     "start_time": "2024-11-05T04:54:09.375045Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 1 Loss: tensor(9.1082, grad_fn=<MseLossBackward0>)\n",
      "Epoch: 2 Loss: tensor(2.4654, grad_fn=<MseLossBackward0>)\n",
      "Epoch: 3 Loss: tensor(0.7094, grad_fn=<MseLossBackward0>)\n"
     ]
    }
   ],
   "source": [
    "num_epochs = 3\n",
    "for epoch in range(num_epochs):\n",
    "    for X,y in data_iter:\n",
    "        ret = net(X) # 生成预测数据\n",
    "        l = loss_fn(ret,y) # 计算损失\n",
    "        optimizer.zero_grad() # 清空梯度\n",
    "        l.backward() # 反向传播计算梯度\n",
    "        optimizer.step() # 用于更新模型的参数。这个方法根据计算出的梯度来调整模型参数，以最小化损失函数\n",
    "    l = loss_fn(net(features),labels)\n",
    "    print('Epoch:',epoch + 1,'Loss:',l)"
   ]
  },
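   {
    "cell_type": "markdown",
    "id": "4b7e2d9f6a3c1e07",
    "metadata": {},
    "source": [
     "Because the data are synthetic, the learned parameters can be compared against the true `w = [2, -3.4]` and `b = 4.2`. An end-to-end self-contained sketch of the loop above (it regenerates stand-in data, so the numbers differ slightly from the cell output):\n",
     "```python\n",
     "import torch\n",
     "from torch import nn\n",
     "from torch.utils.data import TensorDataset, DataLoader\n",
     "\n",
     "# After 3 epochs the learned parameters should sit close to the truth.\n",
     "torch.manual_seed(0)\n",
     "w_true = torch.tensor([2.0, -3.4])\n",
     "b_true = 4.2\n",
     "X = torch.normal(0, 1, (100, 2))\n",
     "y = (X @ w_true + b_true + torch.normal(0, 0.01, (100,))).reshape(-1, 1)\n",
     "data_iter = DataLoader(TensorDataset(X, y), batch_size=10, shuffle=True)\n",
     "\n",
     "net = nn.Sequential(nn.Linear(2, 1))\n",
     "loss_fn = nn.MSELoss()\n",
     "optimizer = torch.optim.SGD(net.parameters(), lr=0.03)\n",
     "for epoch in range(3):\n",
     "    for Xb, yb in data_iter:\n",
     "        optimizer.zero_grad()\n",
     "        loss_fn(net(Xb), yb).backward()\n",
     "        optimizer.step()\n",
     "\n",
     "w_err = (w_true - net[0].weight.data.flatten()).abs().max()\n",
     "b_err = abs(b_true - float(net[0].bias.data))\n",
     "print(w_err, b_err)  # both small after 3 epochs\n",
     "```"
    ]
   },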
  {
   "cell_type": "markdown",
   "id": "afe0dd4ffc785839",
   "metadata": {},
   "source": [
    "## 9、optimizer 原理说明\n",
    "在 PyTorch 中，`optimizer.step()` 是优化器的一个关键方法，用于更新模型的参数。这个方法根据计算出的梯度来调整模型参数，以最小化损失函数。下面详细解释 `optimizer.step()` 的作用和工作原理。\n",
    "\n",
    "### 优化器的工作原理\n",
    "\n",
    "1. **初始化模型参数**：在训练开始之前，模型的参数（权重和偏置）会被初始化。\n",
    "2. **前向传播**：输入数据通过模型进行前向传播，计算出预测值。\n",
    "3. **计算损失**：根据预测值和真实值计算损失函数。\n",
    "4. **反向传播**：通过调用 `loss.backward()` 计算损失函数关于模型参数的梯度。\n",
    "5. **更新参数**：通过调用 `optimizer.step()` 更新模型参数。\n",
    "\n",
    "### `optimizer.step()` 的作用\n",
    "\n",
    "`optimizer.step()` 方法根据优化器的配置（如学习率、动量等）和计算出的梯度来更新模型参数。具体来说，它执行以下操作：\n",
    "\n",
    "1. **获取梯度**：从模型参数中获取已经计算好的梯度。\n",
    "2. **更新参数**：根据优化算法（如SGD、Adam等）的规则，使用梯度来更新模型参数。\n",
    "\n",
    "### 示例代码\n",
    "\n",
    "下面是一个完整的示例，展示了如何使用 `optimizer.step()` 来更新模型参数：\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "\n",
    "# 创建一个简单的线性模型\n",
    "model = nn.Linear(10, 1)\n",
    "\n",
    "# 创建一些示例数据\n",
    "input_data = torch.randn(1, 10)\n",
    "target = torch.randn(1, 1)\n",
    "\n",
    "# 定义损失函数\n",
    "loss_fn = nn.MSELoss()\n",
    "\n",
    "# 定义优化器\n",
    "optimizer = optim.SGD(model.parameters(), lr=0.01)\n",
    "\n",
    "# 前向传播\n",
    "output = model(input_data)\n",
    "\n",
    "# 计算损失\n",
    "loss = loss_fn(output, target)\n",
    "\n",
    "# 反向传播\n",
    "loss.backward()\n",
    "\n",
    "# 更新参数\n",
    "optimizer.step()\n",
    "\n",
    "# 清零梯度\n",
    "optimizer.zero_grad()\n",
    "```\n",
    "\n",
    "### 详细步骤\n",
    "\n",
    "1. **前向传播**：\n",
    "   ```python\n",
    "   output = model(input_data)\n",
    "   ```\n",
    "\n",
    "2. **计算损失**：\n",
    "   ```python\n",
    "   loss = loss_fn(output, target)\n",
    "   ```\n",
    "\n",
    "3. **反向传播**：\n",
    "   ```python\n",
    "   loss.backward()\n",
    "   ```\n",
    "\n",
    "4. **更新参数**：\n",
    "   ```python\n",
    "   optimizer.step()\n",
    "   ```\n",
    "\n",
    "5. **清零梯度**：\n",
    "   ```python\n",
    "   optimizer.zero_grad()\n",
    "   ```\n",
    "\n",
    "### 为什么需要 `optimizer.zero_grad()`\n",
    "\n",
    "在每次迭代结束时，调用 `optimizer.zero_grad()` 是必要的，因为 PyTorch 默认会累加梯度。如果不清零梯度，每次迭代的梯度会累加，导致梯度值越来越大，最终可能导致模型无法收敛。\n",
    "\n",
    "### 不同优化器的更新规则\n",
    "\n",
    "不同的优化器有不同的参数更新规则：\n",
    "\n",
    "- **SGD（随机梯度下降）**：\n",
    "  ```python\n",
    "  new_param = param - learning_rate * gradient\n",
    "  ```\n",
    "\n",
    "- **Adam（自适应矩估计）**：\n",
    "  Adam 结合了动量和 RMSprop 的思想，使用一阶和二阶矩来调整学习率：\n",
    "  ```python\n",
    "  m_t = beta1 * m_{t-1} + (1 - beta1) * gradient\n",
    "  v_t = beta2 * v_{t-1} + (1 - beta2) * gradient^2\n",
    "  m_hat = m_t / (1 - beta1^t)\n",
    "  v_hat = v_t / (1 - beta2^t)\n",
    "  new_param = param - learning_rate * m_hat / (sqrt(v_hat) + epsilon)\n",
    "  ```\n",
    "\n",
    "### 总结\n",
    "\n",
    "- **`optimizer.step()`**：根据计算出的梯度更新模型参数。\n",
    "- **`optimizer.zero_grad()`**：清零梯度，防止梯度累加。\n",
    "\n",
    "通过这些方法，你可以有效地训练和优化深度学习模型。希望这些解释对你有所帮助！如果你有更多问题，请随时提问。"
   ]
  },
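   {
    "cell_type": "markdown",
    "id": "7f2a9e4c1b6d3f08",
    "metadata": {},
    "source": [
     "The accumulation behavior described above can be demonstrated directly: calling `backward()` twice without zeroing doubles the stored gradient. A small sketch (note that since PyTorch 2.0, `zero_grad()` sets gradients to `None` by default, so `set_to_none=False` is passed here to keep zero tensors inspectable):\n",
     "```python\n",
     "import torch\n",
     "from torch import nn\n",
     "\n",
     "# PyTorch accumulates gradients: two backward() calls without zero_grad()\n",
     "# leave twice the gradient on the parameters.\n",
     "model = nn.Linear(3, 1)\n",
     "x = torch.randn(2, 3)\n",
     "y = torch.randn(2, 1)\n",
     "loss_fn = nn.MSELoss()\n",
     "\n",
     "loss_fn(model(x), y).backward()\n",
     "g1 = model.weight.grad.clone()\n",
     "loss_fn(model(x), y).backward()   # no zero_grad() in between\n",
     "assert torch.allclose(model.weight.grad, 2 * g1)\n",
     "\n",
     "model.zero_grad(set_to_none=False)  # zeroing resets the accumulator\n",
     "assert torch.all(model.weight.grad == 0)\n",
     "print('gradients accumulate until zeroed')\n",
     "```"
    ]
   },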
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "92e616bc3f41f710",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
