{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# 01. Introduction to PyTorch Computational Graphs\n",
        "\n",
        "## Learning Objectives\n",
        "- Understand what a computational graph is\n",
        "- Get to know PyTorch's dynamic computational graph\n",
        "- Learn automatic differentiation (Autograd)\n",
        "- Master tensor operations and gradient computation\n",
        "\n",
        "## What Is a Computational Graph?\n",
        "\n",
        "The computational graph is a core concept of deep learning frameworks. It is a directed acyclic graph (DAG) in which:\n",
        "- **Nodes** represent operations (such as addition and multiplication)\n",
        "- **Edges** represent the flow of data (tensors)\n",
        "\n",
        "PyTorch uses a **dynamic computational graph**: the graph is built at runtime, and a fresh one is constructed on every forward pass.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "\n",
        "print(f\"PyTorch version: {torch.__version__}\")\n",
        "print(f\"CUDA available: {torch.cuda.is_available()}\")\n"
      ]
    },
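    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make \"dynamic\" concrete: because the graph is rebuilt on every forward pass, it can follow ordinary Python control flow. A minimal sketch (not part of the original material) in which a different graph is traced on each loop iteration:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# A different graph is traced on each iteration of the loop\n",
        "x = torch.tensor(3.0, requires_grad=True)\n",
        "for i in range(3):\n",
        "    if i % 2 == 0:\n",
        "        y = x * x      # graph with one multiplication node -> grad 2x = 6\n",
        "    else:\n",
        "        y = x + x + x  # graph with two addition nodes -> grad 3\n",
        "    y.backward()\n",
        "    print(f\"iteration {i}: x.grad = {x.grad}\")\n",
        "    x.grad.zero_()  # zero before the next pass (see section 4)\n"
      ]
    },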
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 1. Tensor Basics\n",
        "\n",
        "Tensors are PyTorch's fundamental data structure. They are similar to NumPy arrays, but also support GPU acceleration and automatic differentiation.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create a tensor\n",
        "x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)\n",
        "print(f\"Tensor x: {x}\")\n",
        "print(f\"Requires grad: {x.requires_grad}\")\n",
        "print(f\"Dtype: {x.dtype}\")\n",
        "print(f\"Device: {x.device}\")\n",
        "\n",
        "# Create a tensor from a NumPy array (the two share memory)\n",
        "np_array = np.array([4.0, 5.0, 6.0])\n",
        "y = torch.from_numpy(np_array)\n",
        "print(f\"Tensor y from NumPy: {y}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 2. A Simple Computational Graph\n",
        "\n",
        "Let's build a simple computation: z = x² + 2x + 1\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Define the input\n",
        "x = torch.tensor(2.0, requires_grad=True)\n",
        "print(f\"Input x: {x}\")\n",
        "\n",
        "# Forward pass\n",
        "y = x ** 2  # x²\n",
        "z = y + 2 * x + 1  # x² + 2x + 1\n",
        "\n",
        "print(f\"y = x² = {y}\")\n",
        "print(f\"z = x² + 2x + 1 = {z}\")\n",
        "\n",
        "# Backward pass\n",
        "z.backward()\n",
        "print(f\"dz/dx = {x.grad}\")\n",
        "print(f\"Manual check: 2x + 2 = {2 * 2 + 2}\")\n"
      ]
    },
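    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a side note, `torch.autograd.grad` computes the same derivative but returns it directly instead of accumulating it into `x.grad` — useful when you do not want to mutate the leaf tensor. A minimal sketch:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "x = torch.tensor(2.0, requires_grad=True)\n",
        "z = x ** 2 + 2 * x + 1\n",
        "\n",
        "# Returns a tuple of gradients instead of writing to x.grad\n",
        "(dz_dx,) = torch.autograd.grad(z, x)\n",
        "print(f\"dz/dx via autograd.grad: {dz_dx}\")  # 2x + 2 = 6 at x = 2\n"
      ]
    },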
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 3. Visualizing the Computational Graph\n",
        "\n",
        "Let's build a slightly more complex example to understand the structure of the graph.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create two inputs\n",
        "a = torch.tensor(3.0, requires_grad=True)\n",
        "b = torch.tensor(4.0, requires_grad=True)\n",
        "\n",
        "# Build the computational graph\n",
        "c = a * b      # multiplication node\n",
        "d = a + b      # addition node\n",
        "e = c + d      # final result\n",
        "\n",
        "print(f\"a = {a}\")\n",
        "print(f\"b = {b}\")\n",
        "print(f\"c = a * b = {c}\")\n",
        "print(f\"d = a + b = {d}\")\n",
        "print(f\"e = c + d = {e}\")\n",
        "\n",
        "# Compute gradients\n",
        "e.backward()\n",
        "print(f\"de/da = {a.grad}\")\n",
        "print(f\"de/db = {b.grad}\")\n",
        "\n",
        "# Manual verification\n",
        "print(f\"Manual de/da = b + 1 = {b.item() + 1}\")\n",
        "print(f\"Manual de/db = a + 1 = {a.item() + 1}\")\n"
      ]
    },
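    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "PyTorch has no built-in graph plotter (third-party tools such as `torchviz` can draw one), but the graph can be inspected through each result tensor's `grad_fn` attribute. A minimal sketch, rebuilt here so it does not depend on the cell above:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "a = torch.tensor(3.0, requires_grad=True)\n",
        "b = torch.tensor(4.0, requires_grad=True)\n",
        "e = a * b + (a + b)\n",
        "\n",
        "# Every non-leaf tensor records the operation that produced it\n",
        "print(e.grad_fn)                 # the final addition node\n",
        "print(e.grad_fn.next_functions)  # its inputs: the * node and the inner + node\n"
      ]
    },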
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 4. Gradient Accumulation and Zeroing\n",
        "\n",
        "In PyTorch, gradients accumulate across backward passes, so in a training loop we must zero them manually.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create parameters\n",
        "w = torch.tensor(1.0, requires_grad=True)\n",
        "b = torch.tensor(0.0, requires_grad=True)\n",
        "\n",
        "# First forward pass\n",
        "x1 = torch.tensor(2.0)\n",
        "y1_pred = w * x1 + b\n",
        "loss1 = (y1_pred - 5.0) ** 2\n",
        "loss1.backward()\n",
        "\n",
        "print(\"After the first backward pass:\")\n",
        "print(f\"w.grad = {w.grad}\")\n",
        "print(f\"b.grad = {b.grad}\")\n",
        "\n",
        "# Second forward pass (without zeroing the gradients)\n",
        "x2 = torch.tensor(3.0)\n",
        "y2_pred = w * x2 + b\n",
        "loss2 = (y2_pred - 7.0) ** 2\n",
        "loss2.backward()\n",
        "\n",
        "print(\"\\nAfter the second backward pass (gradients accumulate):\")\n",
        "print(f\"w.grad = {w.grad}\")\n",
        "print(f\"b.grad = {b.grad}\")\n",
        "\n",
        "# Zero the gradients\n",
        "w.grad.zero_()\n",
        "b.grad.zero_()\n",
        "\n",
        "print(\"\\nAfter zeroing:\")\n",
        "print(f\"w.grad = {w.grad}\")\n",
        "print(f\"b.grad = {b.grad}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 5. Practical Application: Simple Linear Regression\n",
        "\n",
        "Let's use the computational-graph concepts to implement a simple linear regression.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Generate data\n",
        "torch.manual_seed(42)\n",
        "x_data = torch.linspace(0, 10, 100)\n",
        "y_data = 2 * x_data + 1 + torch.randn(100) * 0.5\n",
        "\n",
        "# Initialize parameters\n",
        "w = torch.tensor(0.0, requires_grad=True)\n",
        "b = torch.tensor(0.0, requires_grad=True)\n",
        "\n",
        "# Learning rate\n",
        "learning_rate = 0.01\n",
        "\n",
        "# Training loop\n",
        "losses = []\n",
        "for epoch in range(100):\n",
        "    # Forward pass\n",
        "    y_pred = w * x_data + b\n",
        "    loss = torch.mean((y_pred - y_data) ** 2)\n",
        "    \n",
        "    # Backward pass\n",
        "    loss.backward()\n",
        "    \n",
        "    # Update parameters\n",
        "    with torch.no_grad():\n",
        "        w -= learning_rate * w.grad\n",
        "        b -= learning_rate * b.grad\n",
        "        \n",
        "        # Zero the gradients\n",
        "        w.grad.zero_()\n",
        "        b.grad.zero_()\n",
        "    \n",
        "    losses.append(loss.item())\n",
        "    \n",
        "    if epoch % 20 == 0:\n",
        "        print(f\"Epoch {epoch}, Loss: {loss.item():.4f}, w: {w.item():.4f}, b: {b.item():.4f}\")\n",
        "\n",
        "print(f\"\\nFinal parameters: w = {w.item():.4f}, b = {b.item():.4f}\")\n",
        "print(\"True parameters: w = 2.0, b = 1.0\")\n"
      ]
    },
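    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The manual update above is exactly what `torch.optim` packages up: `optimizer.zero_grad()` replaces the explicit `grad.zero_()` calls, and `optimizer.step()` replaces the update inside `torch.no_grad()`. A sketch of the same loop using `torch.optim.SGD` (an alternative formulation with the same data and hyperparameters; suffixed names avoid clobbering `w` and `b` above):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Same regression, with the update handled by an optimizer\n",
        "torch.manual_seed(42)\n",
        "x_data2 = torch.linspace(0, 10, 100)\n",
        "y_data2 = 2 * x_data2 + 1 + torch.randn(100) * 0.5\n",
        "\n",
        "w2 = torch.tensor(0.0, requires_grad=True)\n",
        "b2 = torch.tensor(0.0, requires_grad=True)\n",
        "optimizer = torch.optim.SGD([w2, b2], lr=0.01)\n",
        "\n",
        "for epoch in range(100):\n",
        "    y_pred = w2 * x_data2 + b2\n",
        "    loss = torch.mean((y_pred - y_data2) ** 2)\n",
        "    optimizer.zero_grad()  # replaces w.grad.zero_() / b.grad.zero_()\n",
        "    loss.backward()\n",
        "    optimizer.step()       # replaces the manual no_grad() update\n",
        "\n",
        "print(f\"SGD result: w = {w2.item():.4f}, b = {b2.item():.4f}\")\n"
      ]
    },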
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Visualize the results\n",
        "plt.figure(figsize=(12, 4))\n",
        "\n",
        "# Loss curve\n",
        "plt.subplot(1, 2, 1)\n",
        "plt.plot(losses)\n",
        "plt.title('Training Loss')\n",
        "plt.xlabel('Epoch')\n",
        "plt.ylabel('Loss')\n",
        "plt.grid(True)\n",
        "\n",
        "# Fitted line\n",
        "plt.subplot(1, 2, 2)\n",
        "plt.scatter(x_data, y_data, alpha=0.5, label='Data points')\n",
        "plt.plot(x_data, w.item() * x_data + b.item(), 'r-', label=f'Fit: y = {w.item():.2f}x + {b.item():.2f}')\n",
        "plt.title('Linear Regression Result')\n",
        "plt.xlabel('x')\n",
        "plt.ylabel('y')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 6. Exercises\n",
        "\n",
        "### Exercise 1: Compute gradients by hand\n",
        "For the function f(x, y) = x²y + xy², compute ∂f/∂x and ∂f/∂y by hand, then verify PyTorch's results.\n",
        "\n",
        "### Exercise 2: Chain rule\n",
        "Implement the composite function f(x) = sin(x² + 1), compute its derivative, and verify the result.\n",
        "\n",
        "### Exercise 3: Multivariate function\n",
        "For the function f(x, y, z) = x² + y² + z², compute all partial derivatives.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Exercise 1: compute gradients by hand\n",
        "x = torch.tensor(2.0, requires_grad=True)\n",
        "y = torch.tensor(3.0, requires_grad=True)\n",
        "\n",
        "f = x**2 * y + x * y**2\n",
        "f.backward()\n",
        "\n",
        "print(f\"f(x,y) = x²y + xy² = {f.item()}\")\n",
        "print(f\"∂f/∂x = 2xy + y² = {x.grad.item()}\")\n",
        "print(f\"∂f/∂y = x² + 2xy = {y.grad.item()}\")\n",
        "\n",
        "# Manual verification\n",
        "print(\"\\nManual check:\")\n",
        "print(f\"∂f/∂x = 2*2*3 + 3² = {2*2*3 + 3**2}\")\n",
        "print(f\"∂f/∂y = 2² + 2*2*3 = {2**2 + 2*2*3}\")\n"
      ]
    },
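    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "One possible solution sketch for Exercises 2 and 3 (try them yourself before peeking):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Exercise 2: f(x) = sin(x² + 1); the chain rule gives f'(x) = 2x·cos(x² + 1)\n",
        "x = torch.tensor(2.0, requires_grad=True)\n",
        "f = torch.sin(x ** 2 + 1)\n",
        "f.backward()\n",
        "print(f\"autograd: {x.grad.item():.6f}\")\n",
        "print(f\"manual:   {(2 * x * torch.cos(x ** 2 + 1)).item():.6f}\")\n",
        "\n",
        "# Exercise 3: f(x, y, z) = x² + y² + z²; each partial derivative is 2·(that variable)\n",
        "x, y, z = (torch.tensor(v, requires_grad=True) for v in (1.0, 2.0, 3.0))\n",
        "f = x ** 2 + y ** 2 + z ** 2\n",
        "f.backward()\n",
        "print(f\"∂f/∂x = {x.grad}, ∂f/∂y = {y.grad}, ∂f/∂z = {z.grad}\")\n"
      ]
    },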
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Summary\n",
        "\n",
        "In this notebook we covered:\n",
        "\n",
        "1. **Computational graphs**: PyTorch uses a dynamic graph to track operations\n",
        "2. **Automatic differentiation**: enable gradient tracking with `requires_grad=True`\n",
        "3. **Backpropagation**: compute gradients with `.backward()`\n",
        "4. **Gradient management**: why accumulation and zeroing matter\n",
        "5. **A practical application**: simple linear regression built on the computational graph\n",
        "\n",
        "### Key takeaways\n",
        "- The computational graph is central to PyTorch; it automatically records every operation\n",
        "- `requires_grad=True` tells PyTorch to compute gradients for a tensor\n",
        "- Gradients accumulate, so they must be zeroed during training\n",
        "- `torch.no_grad()` disables gradient tracking, saving memory\n",
        "\n",
        "### Next steps\n",
        "In the next notebook we will build our first neural network using PyTorch's higher-level APIs.\n"
      ]
    }
  ],
  "metadata": {
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
