{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "c7dae1db",
   "metadata": {},
   "source": [
    "# PyTorch\n",
    "\n",
    "## I. Tensor Operations and Autograd\n",
    "\n",
    "### 1. Tensor Creation and Arithmetic\n",
    "\n",
    "- Creation: torch.tensor(), torch.zeros(), torch.randn(), etc.\n",
    "- Device placement: .to(device) explicitly manages CPU/GPU data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "a59f39bb",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torchvision\n",
    "import torch.optim as optim\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "import torchvision.datasets as datasets\n",
    "import torchvision.transforms as transforms\n",
    "from torchvision.datasets import ImageFolder"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "cfe17ee1",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[ 0.4441,  1.0663, -0.9011],\n",
       "         [ 1.1703, -1.8581,  0.2923]]),\n",
       " tensor([[1., 1., 1.],\n",
       "         [1., 1., 1.]]))"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Create tensors; append .to('cuda:0') (or .to(device)) to move them to the GPU\n",
    "x = torch.randn(2, 3)\n",
    "y = torch.ones_like(x, dtype=torch.float32)\n",
    "x, y"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "73fd52bc",
   "metadata": {},
   "source": [
    "### 2. Automatic Differentiation (Autograd)\n",
    "\n",
    "a. Dynamic computation graph: gradients are tracked via requires_grad=True"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5ee9da20",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([14.])"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x = torch.tensor([2.0], requires_grad=True)\n",
    "y = x**3 + 2*x\n",
    "y.backward()  # compute the gradient automatically\n",
    "x.grad        # tensor([14.]), i.e. dy/dx = 3x² + 2 at x = 2"
   ]
  },
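  {
   "cell_type": "markdown",
   "id": "f3a9c101",
   "metadata": {},
   "source": [
    "b. Sanity check (a sketch, not part of the original notes): the analytic gradient can be compared against a central finite-difference approximation; eps below is an arbitrary small step"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3a9c102",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "x = torch.tensor([2.0], requires_grad=True)\n",
    "y = x**3 + 2*x\n",
    "y.backward()\n",
    "\n",
    "# Central finite difference of f(t) = t^3 + 2t at t = 2\n",
    "eps = 1e-4\n",
    "f = lambda t: t**3 + 2*t\n",
    "numeric = (f(2.0 + eps) - f(2.0 - eps)) / (2 * eps)\n",
    "x.grad.item(), numeric  # both close to 14.0"
   ]
  },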
  {
   "cell_type": "markdown",
   "id": "d6ca50f4",
   "metadata": {},
   "source": [
    "## II. Model Definition and Modularity\n",
    "\n",
    "The essence of modular model design is componentization with high cohesion and low coupling: a complex system is decomposed into functional units that can be developed, tested, and reused independently. At its core this is a **divide-and-conquer** strategy:\n",
    "split a large model into sub-modules with clearly defined responsibilities, like assembling building blocks.\n",
    "\n",
    "### 1. Custom Networks (nn.Module)\n",
    "\n",
    "a. Subclass nn.Module and implement the forward method\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "93a5ab45",
   "metadata": {},
   "outputs": [],
   "source": [
    "class SimpleCNN(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.conv1 = nn.Conv2d(3, 16, kernel_size=3)  # 3-channel input, no padding\n",
    "        self.pool = nn.MaxPool2d(2)\n",
    "        self.fc = nn.Linear(16 * 14 * 14, 10)  # assumes 30x30 input: (30 - 2) / 2 = 14\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.pool(torch.relu(self.conv1(x)))\n",
    "        x = x.view(-1, 16 * 14 * 14)\n",
    "        return self.fc(x)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58a5ba45",
   "metadata": {},
   "source": [
    "### 2. Fine-tuning\n",
    "\n",
    "a. Freeze the pretrained layers and replace the classification head\n",
    "\n",
    "Setting requires_grad=False freezes a layer's parameters, so its weights are not updated during training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "75b50b55",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ResNet(\n",
       "  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)\n",
       "  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "  (relu): ReLU(inplace=True)\n",
       "  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)\n",
       "  (layer1): Sequential(\n",
       "    (0): BasicBlock(\n",
       "      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      (relu): ReLU(inplace=True)\n",
       "      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "    )\n",
       "    (1): BasicBlock(\n",
       "      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      (relu): ReLU(inplace=True)\n",
       "      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "    )\n",
       "  )\n",
       "  (layer2): Sequential(\n",
       "    (0): BasicBlock(\n",
       "      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
       "      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      (relu): ReLU(inplace=True)\n",
       "      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      (downsample): Sequential(\n",
       "        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
       "        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      )\n",
       "    )\n",
       "    (1): BasicBlock(\n",
       "      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      (relu): ReLU(inplace=True)\n",
       "      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "    )\n",
       "  )\n",
       "  (layer3): Sequential(\n",
       "    (0): BasicBlock(\n",
       "      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
       "      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      (relu): ReLU(inplace=True)\n",
       "      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      (downsample): Sequential(\n",
       "        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
       "        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      )\n",
       "    )\n",
       "    (1): BasicBlock(\n",
       "      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      (relu): ReLU(inplace=True)\n",
       "      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "    )\n",
       "  )\n",
       "  (layer4): Sequential(\n",
       "    (0): BasicBlock(\n",
       "      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
       "      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      (relu): ReLU(inplace=True)\n",
       "      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      (downsample): Sequential(\n",
       "        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
       "        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      )\n",
       "    )\n",
       "    (1): BasicBlock(\n",
       "      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      (relu): ReLU(inplace=True)\n",
       "      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
       "    )\n",
       "  )\n",
       "  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))\n",
       "  (fc): Linear(in_features=512, out_features=1000, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)\n",
    "for param in model.parameters():\n",
    "    param.requires_grad = False  # freeze every layer\n",
    "model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "d3bc15b5",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "SGD (\n",
       "Parameter Group 0\n",
       "    dampening: 0\n",
       "    differentiable: False\n",
       "    foreach: None\n",
       "    fused: None\n",
       "    lr: 0.01\n",
       "    maximize: False\n",
       "    momentum: 0\n",
       "    nesterov: False\n",
       "    weight_decay: 0\n",
       ")"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.fc = nn.Linear(512, 100)  # replace the final layer (new params default to requires_grad=True)\n",
    "optimizer = optim.SGD(model.fc.parameters(), lr=0.01)  # optimize only the new layer\n",
    "optimizer"
   ]
  },
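  {
   "cell_type": "markdown",
   "id": "f3a9c103",
   "metadata": {},
   "source": [
    "b. Quick check (a sketch): after freezing and replacing the head, only the new layer's parameters should have requires_grad=True. weights=None below skips the pretrained download, since only the parameter flags matter here"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3a9c104",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch.nn as nn\n",
    "import torchvision\n",
    "\n",
    "m = torchvision.models.resnet18(weights=None)  # random weights: only requires_grad flags matter here\n",
    "for param in m.parameters():\n",
    "    param.requires_grad = False\n",
    "m.fc = nn.Linear(512, 100)  # new parameters default to requires_grad=True\n",
    "\n",
    "trainable = [name for name, p in m.named_parameters() if p.requires_grad]\n",
    "trainable  # ['fc.weight', 'fc.bias']"
   ]
  },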
  {
   "cell_type": "markdown",
   "id": "c0214ba6",
   "metadata": {},
   "source": [
    "### 3. Key Design Principles\n",
    "\n",
    "1. Standardized interfaces\n",
    "\n",
    "Modules should interact through clearly defined interfaces:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "265bf1c8",
   "metadata": {},
   "outputs": [],
   "source": [
    "class AttentionModule(nn.Module):\n",
    "    def forward(self, q, k, v, mask=None):  # a clearly defined interface\n",
    "        \"\"\"Input shape: (B, T, C)\"\"\"\n",
    "        # ...implementation details...\n",
    "        return attended_output"
   ]
  },
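  {
   "cell_type": "markdown",
   "id": "f3a9c105",
   "metadata": {},
   "source": [
    "One minimal way to fill in such an interface (a sketch using single-head scaled dot-product attention; shapes follow the (B, T, C) convention above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3a9c106",
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class ScaledDotProductAttention(nn.Module):\n",
    "    \"\"\"Single-head scaled dot-product attention (illustrative sketch).\"\"\"\n",
    "    def forward(self, q, k, v, mask=None):\n",
    "        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # (B, T, T)\n",
    "        if mask is not None:\n",
    "            scores = scores.masked_fill(mask == 0, float('-inf'))\n",
    "        return torch.softmax(scores, dim=-1) @ v  # (B, T, C)\n",
    "\n",
    "attn = ScaledDotProductAttention()\n",
    "out = attn(torch.randn(2, 5, 8), torch.randn(2, 5, 8), torch.randn(2, 5, 8))\n",
    "out.shape  # torch.Size([2, 5, 8])"
   ]
  },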
  {
   "cell_type": "markdown",
   "id": "77351dfc",
   "metadata": {},
   "source": [
    "2. Hierarchical encapsulation\n",
    "\n",
    "Build a multi-level module hierarchy (atomic ops → composite modules → full model):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b0842705",
   "metadata": {},
   "outputs": [],
   "source": [
    "Encoder (top level)\n",
    "├── TokenEmbedding (atomic module)\n",
    "├── TransformerLayer (composite module)\n",
    "│   ├── MultiHeadAttention (sub-module)\n",
    "│   └── FFN (sub-module)\n",
    "└── LayerNorm (atomic module)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "657c7903",
   "metadata": {},
   "source": [
    "3. Configurability\n",
    "\n",
    "Parameterize modules to produce variants:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "82bf1cfe",
   "metadata": {},
   "outputs": [],
   "source": [
    "class ResBlock(nn.Module):\n",
    "    def __init__(self, in_ch, out_ch, stride=1):\n",
    "        super().__init__()\n",
    "        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1)\n",
    "        # ...other configurable parameters..."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "582d223e",
   "metadata": {},
   "source": [
    "Design checklist\n",
    "\n",
    "- Single responsibility: each module solves exactly one problem\n",
    "- Explicit dependencies: avoid implicit coupling between modules\n",
    "- Version control: keep core modules under versioned management\n",
    "- Documentation: every module should include:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "98b528e0",
   "metadata": {},
   "outputs": [],
   "source": [
    "class ModuleA(nn.Module):\n",
    "    \"\"\"What this module does\n",
    "    Args:\n",
    "        in_dim: input feature dimension\n",
    "        out_dim: output feature dimension\n",
    "    Example:\n",
    "        >>> module = ModuleA(256, 512)\n",
    "        >>> module(torch.randn(2, 256)).shape\n",
    "        torch.Size([2, 512])\n",
    "    \"\"\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4705e82b",
   "metadata": {},
   "source": [
    "## III. Data Handling and Loading (engineering skills)\n",
    "\n",
    "### 1. Custom Dataset and DataLoader\n",
    "\n",
    "a. Implement the __len__ and __getitem__ methods\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "80035bfc",
   "metadata": {},
   "outputs": [],
   "source": [
    "class CustomDataset(Dataset):\n",
    "    def __init__(self, data, transform=None):\n",
    "        self.data = data\n",
    "        self.transform = transform\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.data)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        img, label = self.data[idx]\n",
    "        if self.transform:\n",
    "            img = self.transform(img)\n",
    "        return img, label\n"
   ]
  },
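  {
   "cell_type": "markdown",
   "id": "f3a9c107",
   "metadata": {},
   "source": [
    "b. Usage sketch (the in-memory tensors below are made up for illustration; the class is repeated so the cell runs standalone):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3a9c108",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "\n",
    "class CustomDataset(Dataset):  # same class as above, repeated for a standalone cell\n",
    "    def __init__(self, data, transform=None):\n",
    "        self.data = data\n",
    "        self.transform = transform\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.data)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        img, label = self.data[idx]\n",
    "        if self.transform:\n",
    "            img = self.transform(img)\n",
    "        return img, label\n",
    "\n",
    "# Made-up data: 8 random 3x8x8 'images' with binary labels\n",
    "data = [(torch.randn(3, 8, 8), i % 2) for i in range(8)]\n",
    "loader = DataLoader(CustomDataset(data), batch_size=4, shuffle=True)\n",
    "imgs, labels = next(iter(loader))\n",
    "imgs.shape, labels.shape  # (torch.Size([4, 3, 8, 8]), torch.Size([4]))"
   ]
  },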
  {
   "cell_type": "markdown",
   "id": "1eed7e82",
   "metadata": {},
   "source": [
    "### 2. Data Augmentation and Batching\n",
    "\n",
    "a. Compose preprocessing steps with torchvision.transforms\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f2c30395",
   "metadata": {},
   "outputs": [],
   "source": [
    "transform = transforms.Compose([\n",
    "    transforms.RandomHorizontalFlip(),\n",
    "    transforms.ToTensor(),\n",
    "    transforms.Normalize(mean=[0.485], std=[0.229])  # single-channel (grayscale) stats\n",
    "])\n",
    "train_loader = DataLoader(dataset, batch_size=64, shuffle=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3948e7b4",
   "metadata": {},
   "source": [
    "## IV. Training Loop and Tuning (core practical skills)\n",
    "\n",
    "### 1. Training Loop Template\n",
    "\n",
    "a. Three essential steps: forward pass, loss computation, backward pass\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "5f386345",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Multi-class classification: cross-entropy loss\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "\n",
    "# Regression: mean squared error\n",
    "# criterion = nn.MSELoss()\n",
    "\n",
    "# Binary classification: BCE loss\n",
    "# criterion = nn.BCELoss()  # expects sigmoid outputs\n",
    "# or\n",
    "# criterion = nn.BCEWithLogitsLoss()  # sigmoid built in"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "85e4660f",
   "metadata": {},
   "outputs": [],
   "source": [
    "model.train()\n",
    "for epoch in range(10):\n",
    "    for inputs, labels in train_loader:\n",
    "        optimizer.zero_grad()\n",
    "        outputs = model(inputs)\n",
    "        loss = criterion(outputs, labels)\n",
    "        loss.backward()\n",
    "        optimizer.step()"
   ]
  },
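  {
   "cell_type": "markdown",
   "id": "f3a9c109",
   "metadata": {},
   "source": [
    "b. The matching evaluation loop (a sketch; the tiny model and loader below are stand-ins so the cell runs on its own): switch to eval mode and disable gradient tracking"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3a9c10a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "from torch.utils.data import DataLoader, TensorDataset\n",
    "\n",
    "# Stand-in model and data, made up for illustration\n",
    "model = nn.Linear(4, 3)\n",
    "test_loader = DataLoader(TensorDataset(torch.randn(20, 4), torch.randint(0, 3, (20,))), batch_size=5)\n",
    "\n",
    "model.eval()                # disables dropout/batchnorm training behavior\n",
    "correct = total = 0\n",
    "with torch.no_grad():       # no autograd graph is built, saving memory\n",
    "    for inputs, labels in test_loader:\n",
    "        preds = model(inputs).argmax(dim=1)\n",
    "        correct += (preds == labels).sum().item()\n",
    "        total += labels.size(0)\n",
    "accuracy = correct / total"
   ]
  },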
  {
   "cell_type": "markdown",
   "id": "ed35a2f6",
   "metadata": {},
   "source": [
    "### 2. Automatic Mixed Precision (AMP)\n",
    "\n",
    "Mixed precision training is a PyTorch technique that speeds up deep learning training by combining FP16 (half precision) and FP32 (single precision), reducing memory use while increasing throughput. Details and a full example follow.\n",
    "\n",
    "Core mechanism\n",
    "\n",
    "1. FP16 compute: forward and backward passes run in half precision (16-bit floats)\n",
    "2. FP32 master weights: a single-precision (32-bit) master copy of the weights is kept for parameter updates\n",
    "3. Gradient scaling: GradScaler scales up gradient values to prevent FP16 underflow\n",
    "\n",
    "Advantages\n",
    "\n",
    "- Roughly 50% less GPU memory\n",
    "- Faster training (NVIDIA GPUs accelerate FP16 with dedicated Tensor Cores)\n",
    "- Accuracy comparable to FP32\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1a1f6b61",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch.amp import autocast, GradScaler\n",
    "\n",
    "# Setup (YourModel is a placeholder for your own nn.Module)\n",
    "scaler = GradScaler('cuda')  # gradient scaler\n",
    "model = YourModel().cuda()\n",
    "optimizer = torch.optim.Adam(model.parameters())\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "\n",
    "for epoch in range(epochs):\n",
    "    for inputs, labels in train_loader:\n",
    "        inputs, labels = inputs.cuda(), labels.cuda()\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "\n",
    "        # Mixed-precision context\n",
    "        with autocast('cuda'):\n",
    "            outputs = model(inputs)\n",
    "            loss = criterion(outputs, labels)\n",
    "\n",
    "        # Scale the loss, then backpropagate\n",
    "        scaler.scale(loss).backward()\n",
    "\n",
    "        # Update parameters (gradients are unscaled automatically)\n",
    "        scaler.step(optimizer)\n",
    "\n",
    "        # Adjust the scale factor\n",
    "        scaler.update()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "477be321",
   "metadata": {},
   "source": [
    "Key components\n",
    "\n",
    "1. The autocast() context manager:\n",
    "\n",
    "  - automatically casts each op to a suitable precision\n",
    "  - in the forward pass: convolutions/matmuls run in FP16, ops like softmax stay in FP32\n",
    "\n",
    "2. GradScaler:\n",
    "\n",
    "  - scale(loss): multiplies the loss to prevent gradient underflow\n",
    "  - step(optimizer): unscales the gradients, then updates the parameters\n",
    "  - update(): adjusts the scale factor dynamically\n",
    "\n",
    "Caveats\n",
    "\n",
    "- NVIDIA GPUs only (Compute Capability >= 7.0 for Tensor Core acceleration)\n",
    "- Some ops (e.g. recurrent networks) may need precision set manually\n",
    "- If NaN/inf appears, try a smaller initial scale factor for GradScaler:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "139c144c",
   "metadata": {},
   "outputs": [],
   "source": [
    "scaler = GradScaler('cuda', init_scale=1024)  # the default is 65536"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9b878cda",
   "metadata": {},
   "source": [
    "- Mixed precision is unnecessary at validation/test time (unless memory is tight)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2e2a8696",
   "metadata": {},
   "source": [
    "Why is gradient scaling needed?\n",
    "\n",
    "FP16's limited numeric range:\n",
    "\n",
    "- FP16 covers roughly 6.1e-5 to 65504 (normal values), while deep-learning gradients are often below 1e-5\n",
    "- used directly, FP16 flushes such gradients to zero (\"underflow\")\n",
    "\n",
    "The fix:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c417c02b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# raw gradient: 0.00002 (becomes 0 in FP16)\n",
    "# scaled gradient: 0.00002 * 65536 = 1.31 (representable in FP16)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ed5c6053",
   "metadata": {},
   "source": [
    "How GradScaler works, schematically"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "900cf3a0",
   "metadata": {},
   "outputs": [],
   "source": [
    "[tiny gradient] --(scale up)--> [representable gradient] --(compute)--> [scale down] --> [actual update]\n",
    "  (outside FP16 range)               (inside FP16 range)                  (restores true value)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "20b754e3",
   "metadata": {},
   "source": [
    "Key takeaways\n",
    "\n",
    "1. Scaling up moves gradients into FP16's representable range\n",
    "2. scaler.step() automatically restores the gradients to their true values\n",
    "3. scaler.update() adjusts the scale factor intelligently\n",
    "    - repeated NaNs lower the scale factor\n",
    "    - consistently stable gradients let it try a larger one\n",
    "\n",
    "The net effect is an \"auto-adjusting magnifier\" on training: numeric underflow is avoided without changing what the model learns."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f5a57a65",
   "metadata": {},
   "source": [
    "Why results are unaffected (the math)\n",
    "\n",
    "Scaling is consistent end to end:\n",
    "\n",
    "- forward pass: loss = f(x)\n",
    "- backward pass: ∇loss = ∂f/∂x\n",
    "- scaled gradient: scaler × ∇loss\n",
    "- parameter update: w = w - lr × (scaler × ∇loss)/scaler = w - lr × ∇loss\n",
    "\n",
    "The scale factor cancels out at update time.\n",
    "\n",
    "By analogy, it is like examining a cell under a microscope:\n",
    "\n",
    "- use high magnification (scaled gradients) to see the fine detail\n",
    "- record the direction to move in (the gradient direction)\n",
    "- perform the actual movement at the original scale (the parameter update)\n",
    "- the final displacement matches what direct observation would give\n",
    "\n",
    "Stability safeguards\n",
    "\n",
    "PyTorch's GradScaler stays stable through:\n",
    "\n",
    "- NaN detection:"
   ]
  },
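  {
   "cell_type": "markdown",
   "id": "f3a9c10b",
   "metadata": {},
   "source": [
    "The cancellation can be checked numerically (a pure-Python sketch; 65536 = 2^16, so multiplying and dividing by it is exact in binary floating point):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3a9c10c",
   "metadata": {},
   "outputs": [],
   "source": [
    "lr, scale = 0.1, 65536.0   # scale is a power of two, so scaling is exact\n",
    "w, grad = 1.0, 2e-5        # a gradient small enough to underflow in FP16\n",
    "\n",
    "scaled = grad * scale                 # what backward() produces under scaling\n",
    "w_scaled = w - lr * (scaled / scale)  # unscale, then update\n",
    "w_direct = w - lr * grad              # plain update without scaling\n",
    "\n",
    "w_scaled == w_direct  # True: the scale factor cancels exactly"
   ]
  },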
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c8db270c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Conceptual sketch of GradScaler's internal NaN handling (not a public API):\n",
    "# if inf/NaN is found in the gradients:\n",
    "#     scale *= backoff_factor  # default 0.5: shrink the scale factor"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "317d63a7",
   "metadata": {},
   "source": [
    "- Dynamic balancing:\n",
    "  - gradient quality is reassessed periodically during training\n",
    "  - the scale factor is adjusted adaptively within a wide range\n",
    "\n",
    "Gradient scaling is a temporary magnifier on the computation: the true gradient values are restored exactly at update time, so training results are unchanged. The design solves FP16's range problem while staying mathematically equivalent."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "50062edb",
   "metadata": {},
   "source": [
    "## V. Performance Optimization and Scaling\n",
    "\n",
    "### 1. Multi-GPU Training\n",
    "\n",
    "a. Use DataParallel or DistributedDataParallel\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "60959108",
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.nn.parallel import DistributedDataParallel as DDP\n",
    "\n",
    "model = nn.DataParallel(model)  # single machine, multiple GPUs\n",
    "# distributed training (requires an initialized process group)\n",
    "model = DDP(model, device_ids=[local_rank])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e7ebd2b",
   "metadata": {},
   "source": [
    "### 2. Model Export and Deployment\n",
    "\n",
    "a. Export to ONNX or TorchScript"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7f0dcf8a",
   "metadata": {},
   "outputs": [],
   "source": [
    "dummy_input = torch.randn(1, 3, 224, 224)\n",
    "torch.onnx.export(model, dummy_input, \"model.onnx\")\n",
    "script_model = torch.jit.script(model)  # TorchScript"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8f9be7d0",
   "metadata": {},
   "source": [
    "### 3. Dynamic vs. Static Graphs\n",
    "\n",
    "Dynamic and static graphs are two ways deep-learning frameworks build and execute computation graphs; PyTorch and TensorFlow 1.x are their classic representatives.\n",
    "\n",
    "#### 3.1 Dynamic Graphs (PyTorch style)\n",
    "\n",
    "Advantages:\n",
    "\n",
    "- intuitive debugging (standard Python debuggers work)\n",
    "- supports dynamic control flow (loops, conditionals)\n",
    "- more flexible model structures (modifiable at any time)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "eadeba09",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Dynamic graph example (built on the fly)\n",
    "for data, label in loader:\n",
    "    output = model(data)       # the graph is built during the forward pass\n",
    "    loss = criterion(output, label)\n",
    "    loss.backward()            # the graph is freed after the backward pass"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c368b606",
   "metadata": {},
   "source": [
    "A note on PyTorch\n",
    "\n",
    "PyTorch defaults to dynamic graphs, but it can produce static graphs via:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5a6abba8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1. TorchScript (partially static)\n",
    "script_model = torch.jit.script(model)  # preserves Python control flow\n",
    "\n",
    "# 2. ONNX export (fully static)\n",
    "torch.onnx.export(model, dummy_input, \"model.onnx\")  # emits a static graph"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9572952a",
   "metadata": {},
   "source": [
    "#### 3.2 Static Graphs (TensorFlow 1.x style)\n",
    "\n",
    "Advantages:\n",
    "\n",
    "- higher runtime efficiency (whole-graph optimization)\n",
    "- better cross-platform deployment\n",
    "- more efficient memory use"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0a1adb4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Static graph example (define first, run later) — TensorFlow 1.x API\n",
    "graph = tf.Graph()\n",
    "with graph.as_default():\n",
    "    x = tf.placeholder(tf.float32)\n",
    "    y = tf.matmul(x, W) + b    # define the graph structure first\n",
    "    loss = tf.reduce_mean(y)\n",
    "\n",
    "with tf.Session() as sess:\n",
    "    sess.run(loss, feed_dict={x: data})  # then execute in one go"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9b463315",
   "metadata": {},
   "source": [
    "#### 3.3 Where Modern Frameworks Are Heading\n",
    "\n",
    "- TensorFlow 2.x: eager (dynamic) execution by default, with @tf.function for static-graph conversion\n",
    "- PyTorch: static-graph support via TorchScript\n",
    "- For industrial deployment, models are usually converted to static graphs in the end for best performance"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "53f1f70b",
   "metadata": {},
   "source": [
    "### 4. Out-of-Memory (OOM) Errors\n",
    "\n",
    "Concrete remedies:\n",
    "\n",
    "1. Prompt memory release"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7ebec9e8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Add these to the training loop\n",
    "for data, label in loader:\n",
    "    optimizer.zero_grad(set_to_none=True)  # sets .grad to None instead of zeroing, saving memory\n",
    "    output = model(data)\n",
    "    loss = criterion(output, label)\n",
    "    loss.backward()\n",
    "    optimizer.step()\n",
    "    torch.cuda.empty_cache()  # release unused cached memory (adds overhead; use sparingly)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bf893b51",
   "metadata": {},
   "source": [
    "2. Gradient accumulation (simulates a larger batch size)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6f0c7eb5",
   "metadata": {},
   "outputs": [],
   "source": [
    "accum_steps = 4  # accumulate gradients over 4 batches\n",
    "for i, (data, label) in enumerate(loader):\n",
    "    output = model(data)\n",
    "    loss = criterion(output, label) / accum_steps  # average the loss\n",
    "    loss.backward()\n",
    "\n",
    "    if (i+1) % accum_steps == 0:  # update once every 4 batches\n",
    "        optimizer.step()\n",
    "        optimizer.zero_grad()"
   ]
  },
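  {
   "cell_type": "markdown",
   "id": "f3a9c10d",
   "metadata": {},
   "source": [
    "A quick check (a sketch with a tiny made-up model) that accumulated, averaged micro-batch losses reproduce the full-batch gradient:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3a9c10e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "torch.manual_seed(0)\n",
    "model = nn.Linear(4, 1)\n",
    "criterion = nn.MSELoss()\n",
    "data, target = torch.randn(8, 4), torch.randn(8, 1)\n",
    "\n",
    "# One full batch of 8\n",
    "model.zero_grad()\n",
    "criterion(model(data), target).backward()\n",
    "full_grad = model.weight.grad.clone()\n",
    "\n",
    "# Two accumulated micro-batches of 4, each loss divided by accum_steps\n",
    "model.zero_grad()\n",
    "accum_steps = 2\n",
    "for d, t in zip(data.chunk(accum_steps), target.chunk(accum_steps)):\n",
    "    (criterion(model(d), t) / accum_steps).backward()\n",
    "\n",
    "torch.allclose(model.weight.grad, full_grad, atol=1e-6)  # True"
   ]
  },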
  {
   "cell_type": "markdown",
   "id": "4f3491f4",
   "metadata": {},
   "source": [
    "3. Automatic mixed precision (AMP)\n",
    "4. Model-level techniques"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8e5a7aa1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Gradient checkpointing\n",
    "from torch.utils.checkpoint import checkpoint\n",
    "\n",
    "class CustomModel(nn.Module):\n",
    "    def forward(self, x):\n",
    "        x = checkpoint(self.layer1, x, use_reentrant=False)  # intermediate activations are not stored\n",
    "        x = checkpoint(self.layer2, x, use_reentrant=False)\n",
    "        return x"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "89b076c7",
   "metadata": {},
   "source": [
    "5. Data loading optimizations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9fb94879",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Use pin_memory and non_blocking transfers\n",
    "loader = DataLoader(dataset,\n",
    "                   batch_size=64,\n",
    "                   pin_memory=True,  # page-locked host memory\n",
    "                   num_workers=4)\n",
    "\n",
    "for data, label in loader:\n",
    "    data = data.to(device, non_blocking=True)\n",
    "    label = label.to(device, non_blocking=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f5ba1c62",
   "metadata": {},
   "source": [
    "Parameter tuning suggestions\n",
    "\n",
    "- batch_size: reduce it first (halving is a common test)\n",
    "- model size: try a smaller model or a smaller hidden_size\n",
    "- input resolution: reduce image resolution or sequence length\n",
    "- data type: use torch.float16 or bfloat16"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d58627b0",
   "metadata": {},
   "source": [
    "In practice, combine these methods: mixed precision + gradient accumulation + checkpointing together cut memory use substantially. If OOM persists, torch.cuda.memory_summary() helps identify where the most memory is being consumed."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "66f9729e",
   "metadata": {},
   "source": [
    "#### 4.1 Gradient Checkpointing\n",
    "\n",
    "Gradient checkpointing cuts memory requirements substantially by trading compute time for memory. It works as follows:\n",
    "\n",
    "1. Segmented computation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5b4155a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Conventional approach (store every intermediate activation)\n",
    "x1 = layer1(x)  # x1 stored\n",
    "x2 = layer2(x1) # x2 stored\n",
    "x3 = layer3(x2) # x3 stored\n",
    "loss = f(x3)\n",
    "loss.backward()  # needs x1, x2, x3 to compute gradients\n",
    "\n",
    "# Checkpointed approach (store only key nodes)\n",
    "x1 = checkpoint(layer1, x)  # x1 not stored\n",
    "x2 = checkpoint(layer2, x1) # x2 not stored\n",
    "x3 = layer3(x2)  # only x3 stored\n",
    "loss = f(x3)\n",
    "loss.backward()   # x1, x2 recomputed on demand"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6d4366d7",
   "metadata": {},
   "source": [
    "2. Why it saves memory:\n",
    "- the forward pass keeps only the checkpoint outputs (e.g. one every 2-3 layers)\n",
    "- the backward pass recomputes the discarded intermediates on demand\n",
    "- memory cost drops from O(n) to O(√n) (n = number of layers)\n",
    "\n",
    "Practical advice\n",
    "\n",
    "1. Segmentation strategy"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2457de70",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Better: checkpoint whole groups of blocks\n",
    "from torch.utils.checkpoint import checkpoint\n",
    "\n",
    "def custom_forward(blocks, x):\n",
    "    for block in blocks:\n",
    "        x = block(x)\n",
    "    return x\n",
    "\n",
    "x = checkpoint(custom_forward, [self.block1, self.block2], x, use_reentrant=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3033fd9d",
   "metadata": {},
   "source": [
    "2. Combined optimizations:\n",
    "  - checkpointing + mixed precision → 50-70% less memory\n",
    "  - checkpointing + gradient accumulation → larger effective batch sizes\n",
    "  \n",
    "In essence the technique discards and recomputes intermediate results to save memory. It suits deep models (ResNet-101, Transformers, etc.): the extra ~20-30% compute time is usually preferable to the accuracy cost of shrinking the batch size."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
