{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# 03. 线性回归从零实现\n",
        "\n",
        "## 学习目标\n",
        "- 理解线性回归的数学原理\n",
        "- 从零实现线性回归算法\n",
        "- 学习梯度下降优化\n",
        "- 掌握PyTorch张量操作\n",
        "- 实现批量梯度下降\n",
        "- 可视化训练过程\n",
        "\n",
        "## 什么是线性回归？\n",
        "\n",
        "线性回归是机器学习中最基础的算法之一，用于预测连续值。它假设输入特征和输出之间存在线性关系：\n",
        "\n",
        "**数学公式：**\n",
        "- 假设函数：$h_θ(x) = θ₀ + θ₁x₁ + θ₂x₂ + ... + θₙxₙ$\n",
        "- 简化为：$h_θ(x) = θ^T x$（其中 $θ₀$ 是偏置项）\n",
        "\n",
        "**目标：** 找到最佳的参数 $θ$，使得预测值与真实值之间的误差最小。\n",
        "\n",
        "**损失函数：** 均方误差 (MSE)\n",
        "$$J(θ) = \\frac{1}{2m} \\sum_{i=1}^{m} (h_θ(x^{(i)}) - y^{(i)})^2$$\n",
        "\n",
        "**梯度下降更新规则：**\n",
        "$$θ_j := θ_j - α \\frac{\\partial J(θ)}{\\partial θ_j}$$\n"
      ]
    },
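    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The update rule above can be checked numerically. The following sketch (a toy example with made-up numbers, independent of the dataset used later) performs a single batch gradient descent step by hand and confirms that the loss decreases:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "\n",
        "# Toy data from y = 2x: one feature, three samples\n",
        "X_toy = torch.tensor([[1.0], [2.0], [3.0]])\n",
        "y_toy = torch.tensor([2.0, 4.0, 6.0])\n",
        "\n",
        "w = torch.zeros(1)\n",
        "b = torch.zeros(1)\n",
        "alpha = 0.1  # learning rate\n",
        "m = y_toy.shape[0]\n",
        "\n",
        "# Forward pass and loss J = (1/2m) * sum((pred - y)^2)\n",
        "pred = X_toy @ w + b\n",
        "loss_before = torch.mean((pred - y_toy) ** 2) / 2\n",
        "\n",
        "# Analytic gradients of J\n",
        "dpred = (pred - y_toy) / m\n",
        "dw = X_toy.T @ dpred\n",
        "db = dpred.sum()\n",
        "\n",
        "# One gradient descent step\n",
        "w = w - alpha * dw\n",
        "b = b - alpha * db\n",
        "\n",
        "loss_after = torch.mean((X_toy @ w + b - y_toy) ** 2) / 2\n",
        "print(f\"loss before: {loss_before.item():.4f}, after one step: {loss_after.item():.4f}\")\n",
        "assert loss_after < loss_before\n"
      ]
    },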
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "from sklearn.datasets import make_regression\n",
        "from sklearn.model_selection import train_test_split\n",
        "from sklearn.preprocessing import StandardScaler\n",
        "import time\n",
        "\n",
        "# 设置中文字体\n",
        "plt.rcParams['font.sans-serif'] = ['SimHei']\n",
        "plt.rcParams['axes.unicode_minus'] = False\n",
        "\n",
        "# 设置随机种子\n",
        "torch.manual_seed(42)\n",
        "np.random.seed(42)\n",
        "\n",
        "print(f\"PyTorch版本: {torch.__version__}\")\n",
        "print(f\"CUDA可用: {torch.cuda.is_available()}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 1. 数据准备\n",
        "\n",
        "首先，我们生成一些线性回归数据，然后进行预处理。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 生成回归数据\n",
        "X, y = make_regression(\n",
        "    n_samples=1000,\n",
        "    n_features=1,\n",
        "    noise=10,\n",
        "    random_state=42\n",
        ")\n",
        "\n",
        "# 数据分割\n",
        "X_train, X_test, y_train, y_test = train_test_split(\n",
        "    X, y, test_size=0.2, random_state=42\n",
        ")\n",
        "\n",
        "# 数据标准化（可选，但通常有助于训练）\n",
        "scaler_X = StandardScaler()\n",
        "scaler_y = StandardScaler()\n",
        "\n",
        "X_train_scaled = scaler_X.fit_transform(X_train)\n",
        "X_test_scaled = scaler_X.transform(X_test)\n",
        "y_train_scaled = scaler_y.fit_transform(y_train.reshape(-1, 1)).flatten()\n",
        "y_test_scaled = scaler_y.transform(y_test.reshape(-1, 1)).flatten()\n",
        "\n",
        "# 转换为PyTorch张量\n",
        "X_train_tensor = torch.FloatTensor(X_train_scaled)\n",
        "X_test_tensor = torch.FloatTensor(X_test_scaled)\n",
        "y_train_tensor = torch.FloatTensor(y_train_scaled)\n",
        "y_test_tensor = torch.FloatTensor(y_test_scaled)\n",
        "\n",
        "print(f\"训练集大小: {X_train_tensor.shape}\")\n",
        "print(f\"测试集大小: {X_test_tensor.shape}\")\n",
        "print(f\"特征数量: {X_train_tensor.shape[1]}\")\n",
        "\n",
        "# 可视化数据\n",
        "plt.figure(figsize=(12, 4))\n",
        "\n",
        "plt.subplot(1, 2, 1)\n",
        "plt.scatter(X_train, y_train, alpha=0.6, label='训练数据')\n",
        "plt.scatter(X_test, y_test, alpha=0.6, label='测试数据')\n",
        "plt.xlabel('特征 X')\n",
        "plt.ylabel('目标值 y')\n",
        "plt.title('原始数据')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "plt.subplot(1, 2, 2)\n",
        "plt.scatter(X_train_scaled, y_train_scaled, alpha=0.6, label='训练数据')\n",
        "plt.scatter(X_test_scaled, y_test_scaled, alpha=0.6, label='测试数据')\n",
        "plt.xlabel('标准化特征 X')\n",
        "plt.ylabel('标准化目标值 y')\n",
        "plt.title('标准化后数据')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n"
      ]
    },
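    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Because the targets were standardized, any predictions made on the scaled data are in standardized units; `scaler_y.inverse_transform` maps them back to the original units. A minimal round-trip sketch (using its own throwaway scaler and data, not the arrays above):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "from sklearn.preprocessing import StandardScaler\n",
        "\n",
        "y_demo = np.array([10.0, 20.0, 30.0]).reshape(-1, 1)\n",
        "demo_scaler = StandardScaler()\n",
        "y_demo_scaled = demo_scaler.fit_transform(y_demo)\n",
        "\n",
        "# Model outputs in standardized units map back via inverse_transform\n",
        "y_demo_back = demo_scaler.inverse_transform(y_demo_scaled)\n",
        "print(np.allclose(y_demo_back, y_demo))  # True: the round trip recovers the original units\n"
      ]
    },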
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 2. 从零实现线性回归\n",
        "\n",
        "现在我们从零开始实现线性回归算法，包括：\n",
        "- 前向传播（预测）\n",
        "- 损失函数计算\n",
        "- 梯度计算\n",
        "- 参数更新\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "class LinearRegressionFromScratch:\n",
        "    \"\"\"\n",
        "    从零实现的线性回归类\n",
        "    \"\"\"\n",
        "    def __init__(self, learning_rate=0.01, max_iterations=1000):\n",
        "        self.learning_rate = learning_rate\n",
        "        self.max_iterations = max_iterations\n",
        "        self.weights = None\n",
        "        self.bias = None\n",
        "        self.loss_history = []\n",
        "        \n",
        "    def initialize_parameters(self, n_features):\n",
        "        \"\"\"初始化参数\"\"\"\n",
        "        # 使用Xavier初始化\n",
        "        self.weights = torch.randn(n_features, requires_grad=True)\n",
        "        self.bias = torch.randn(1, requires_grad=True)\n",
        "        \n",
        "    def forward(self, X):\n",
        "        \"\"\"前向传播：计算预测值\"\"\"\n",
        "        # h(x) = w^T * x + b\n",
        "        return torch.matmul(X, self.weights) + self.bias\n",
        "    \n",
        "    def compute_loss(self, y_pred, y_true):\n",
        "        \"\"\"计算均方误差损失\"\"\"\n",
        "        # MSE = (1/2m) * sum((y_pred - y_true)^2)\n",
        "        m = y_true.shape[0]\n",
        "        loss = torch.mean((y_pred - y_true) ** 2) / 2\n",
        "        return loss\n",
        "    \n",
        "    def compute_gradients(self, X, y_pred, y_true):\n",
        "        \"\"\"计算梯度\"\"\"\n",
        "        m = y_true.shape[0]\n",
        "        \n",
        "        # 计算损失对预测值的梯度\n",
        "        dloss_dpred = (y_pred - y_true) / m\n",
        "        \n",
        "        # 计算权重梯度\n",
        "        dloss_dweights = torch.matmul(X.T, dloss_dpred)\n",
        "        \n",
        "        # 计算偏置梯度\n",
        "        dloss_dbias = torch.sum(dloss_dpred)\n",
        "        \n",
        "        return dloss_dweights, dloss_dbias\n",
        "    \n",
        "    def update_parameters(self, dloss_dweights, dloss_dbias):\n",
        "        \"\"\"更新参数\"\"\"\n",
        "        # 梯度下降更新规则\n",
        "        self.weights = self.weights - self.learning_rate * dloss_dweights\n",
        "        self.bias = self.bias - self.learning_rate * dloss_dbias\n",
        "    \n",
        "    def fit(self, X, y, verbose=True):\n",
        "        \"\"\"训练模型\"\"\"\n",
        "        n_samples, n_features = X.shape\n",
        "        \n",
        "        # 初始化参数\n",
        "        self.initialize_parameters(n_features)\n",
        "        \n",
        "        print(f\"开始训练线性回归模型...\")\n",
        "        print(f\"学习率: {self.learning_rate}\")\n",
        "        print(f\"最大迭代次数: {self.max_iterations}\")\n",
        "        print(f\"训练样本数: {n_samples}\")\n",
        "        print(f\"特征数: {n_features}\")\n",
        "        \n",
        "        start_time = time.time()\n",
        "        \n",
        "        for iteration in range(self.max_iterations):\n",
        "            # 前向传播\n",
        "            y_pred = self.forward(X)\n",
        "            \n",
        "            # 计算损失\n",
        "            loss = self.compute_loss(y_pred, y)\n",
        "            self.loss_history.append(loss.item())\n",
        "            \n",
        "            # 计算梯度\n",
        "            dloss_dweights, dloss_dbias = self.compute_gradients(X, y_pred, y)\n",
        "            \n",
        "            # 更新参数\n",
        "            self.update_parameters(dloss_dweights, dloss_dbias)\n",
        "            \n",
        "            # 打印进度\n",
        "            if verbose and iteration % 100 == 0:\n",
        "                print(f\"迭代 {iteration:4d}: 损失 = {loss.item():.6f}\")\n",
        "        \n",
        "        training_time = time.time() - start_time\n",
        "        print(f\"训练完成！用时: {training_time:.2f}秒\")\n",
        "        print(f\"最终损失: {self.loss_history[-1]:.6f}\")\n",
        "        \n",
        "        return self\n",
        "    \n",
        "    def predict(self, X):\n",
        "        \"\"\"预测\"\"\"\n",
        "        with torch.no_grad():\n",
        "            return self.forward(X)\n",
        "    \n",
        "    def get_parameters(self):\n",
        "        \"\"\"获取模型参数\"\"\"\n",
        "        return {\n",
        "            'weights': self.weights.detach().numpy(),\n",
        "            'bias': self.bias.detach().numpy()\n",
        "        }\n",
        "\n",
        "# 创建并训练模型\n",
        "model = LinearRegressionFromScratch(learning_rate=0.1, max_iterations=1000)\n",
        "model.fit(X_train_tensor, y_train_tensor)\n"
      ]
    },
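    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a sanity check on gradient descent: ordinary least squares also has a closed-form solution, the normal equation $\\theta = (X^T X)^{-1} X^T y$ (with a column of ones appended so the bias is folded into $\\theta$). The sketch below solves a small synthetic problem in closed form; well-tuned gradient descent should approach the same parameters:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "\n",
        "torch.manual_seed(0)\n",
        "X_demo = torch.randn(100, 1)\n",
        "y_demo = 3.0 * X_demo[:, 0] - 1.0 + 0.1 * torch.randn(100)\n",
        "\n",
        "# Append a column of ones so the bias is part of theta\n",
        "X_aug = torch.cat([X_demo, torch.ones(100, 1)], dim=1)\n",
        "\n",
        "# Solve the least-squares problem directly (numerically stabler\n",
        "# than explicitly inverting X^T X)\n",
        "theta = torch.linalg.lstsq(X_aug, y_demo.unsqueeze(1)).solution.squeeze()\n",
        "print(f\"weight ~ {theta[0].item():.3f}, bias ~ {theta[1].item():.3f}\")  # near 3 and -1\n"
      ]
    },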
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 3. 模型评估和可视化\n",
        "\n",
        "现在让我们评估模型性能并可视化结果。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 模型预测\n",
        "y_train_pred = model.predict(X_train_tensor)\n",
        "y_test_pred = model.predict(X_test_tensor)\n",
        "\n",
        "# 计算评估指标\n",
        "def calculate_metrics(y_true, y_pred):\n",
        "    \"\"\"计算回归评估指标\"\"\"\n",
        "    mse = torch.mean((y_true - y_pred) ** 2)\n",
        "    rmse = torch.sqrt(mse)\n",
        "    mae = torch.mean(torch.abs(y_true - y_pred))\n",
        "    \n",
        "    # R² 决定系数\n",
        "    ss_res = torch.sum((y_true - y_pred) ** 2)\n",
        "    ss_tot = torch.sum((y_true - torch.mean(y_true)) ** 2)\n",
        "    r2 = 1 - (ss_res / ss_tot)\n",
        "    \n",
        "    return {\n",
        "        'MSE': mse.item(),\n",
        "        'RMSE': rmse.item(),\n",
        "        'MAE': mae.item(),\n",
        "        'R²': r2.item()\n",
        "    }\n",
        "\n",
        "# 训练集评估\n",
        "train_metrics = calculate_metrics(y_train_tensor, y_train_pred)\n",
        "print(\"训练集评估指标:\")\n",
        "for metric, value in train_metrics.items():\n",
        "    print(f\"  {metric}: {value:.4f}\")\n",
        "\n",
        "# 测试集评估\n",
        "test_metrics = calculate_metrics(y_test_tensor, y_test_pred)\n",
        "print(\"\\n测试集评估指标:\")\n",
        "for metric, value in test_metrics.items():\n",
        "    print(f\"  {metric}: {value:.4f}\")\n",
        "\n",
        "# 获取学习到的参数\n",
        "params = model.get_parameters()\n",
        "print(f\"\\n学习到的参数:\")\n",
        "print(f\"  权重 (w): {params['weights'][0]:.4f}\")\n",
        "print(f\"  偏置 (b): {params['bias'][0]:.4f}\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 可视化结果\n",
        "plt.figure(figsize=(15, 10))\n",
        "\n",
        "# 1. 训练损失曲线\n",
        "plt.subplot(2, 3, 1)\n",
        "plt.plot(model.loss_history)\n",
        "plt.title('训练损失曲线')\n",
        "plt.xlabel('迭代次数')\n",
        "plt.ylabel('损失值')\n",
        "plt.grid(True)\n",
        "plt.yscale('log')  # 使用对数坐标更好地显示收敛过程\n",
        "\n",
        "# 2. 训练集拟合结果\n",
        "plt.subplot(2, 3, 2)\n",
        "plt.scatter(X_train_scaled, y_train_scaled, alpha=0.6, label='真实值', color='blue')\n",
        "plt.scatter(X_train_scaled, y_train_pred.detach().numpy(), alpha=0.6, label='预测值', color='red')\n",
        "plt.plot(X_train_scaled, y_train_pred.detach().numpy(), 'r-', alpha=0.8, label='拟合直线')\n",
        "plt.title('训练集拟合结果')\n",
        "plt.xlabel('标准化特征 X')\n",
        "plt.ylabel('标准化目标值 y')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "# 3. 测试集拟合结果\n",
        "plt.subplot(2, 3, 3)\n",
        "plt.scatter(X_test_scaled, y_test_scaled, alpha=0.6, label='真实值', color='blue')\n",
        "plt.scatter(X_test_scaled, y_test_pred.detach().numpy(), alpha=0.6, label='预测值', color='red')\n",
        "plt.plot(X_test_scaled, y_test_pred.detach().numpy(), 'r-', alpha=0.8, label='拟合直线')\n",
        "plt.title('测试集拟合结果')\n",
        "plt.xlabel('标准化特征 X')\n",
        "plt.ylabel('标准化目标值 y')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "# 4. 预测值 vs 真实值散点图\n",
        "plt.subplot(2, 3, 4)\n",
        "plt.scatter(y_train_tensor.numpy(), y_train_pred.detach().numpy(), alpha=0.6, label='训练集')\n",
        "plt.scatter(y_test_tensor.numpy(), y_test_pred.detach().numpy(), alpha=0.6, label='测试集')\n",
        "plt.plot([y_train_tensor.min(), y_train_tensor.max()], [y_train_tensor.min(), y_train_tensor.max()], 'r--', label='完美预测')\n",
        "plt.title('预测值 vs 真实值')\n",
        "plt.xlabel('真实值')\n",
        "plt.ylabel('预测值')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "# 5. 残差图\n",
        "plt.subplot(2, 3, 5)\n",
        "residuals_train = y_train_tensor.numpy() - y_train_pred.detach().numpy()\n",
        "residuals_test = y_test_tensor.numpy() - y_test_pred.detach().numpy()\n",
        "plt.scatter(y_train_pred.detach().numpy(), residuals_train, alpha=0.6, label='训练集')\n",
        "plt.scatter(y_test_pred.detach().numpy(), residuals_test, alpha=0.6, label='测试集')\n",
        "plt.axhline(y=0, color='r', linestyle='--')\n",
        "plt.title('残差图')\n",
        "plt.xlabel('预测值')\n",
        "plt.ylabel('残差 (真实值 - 预测值)')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "# 6. 评估指标对比\n",
        "plt.subplot(2, 3, 6)\n",
        "metrics = ['MSE', 'RMSE', 'MAE', 'R²']\n",
        "train_values = [train_metrics[m] for m in metrics]\n",
        "test_values = [test_metrics[m] for m in metrics]\n",
        "\n",
        "x = np.arange(len(metrics))\n",
        "width = 0.35\n",
        "\n",
        "plt.bar(x - width/2, train_values, width, label='训练集', alpha=0.8)\n",
        "plt.bar(x + width/2, test_values, width, label='测试集', alpha=0.8)\n",
        "plt.title('评估指标对比')\n",
        "plt.xlabel('指标')\n",
        "plt.ylabel('值')\n",
        "plt.xticks(x, metrics)\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 4. 与PyTorch内置方法对比\n",
        "\n",
        "让我们使用PyTorch的内置线性层来对比我们的实现。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 使用PyTorch内置的线性回归\n",
        "class PyTorchLinearRegression(nn.Module):\n",
        "    def __init__(self, input_size):\n",
        "        super(PyTorchLinearRegression, self).__init__()\n",
        "        self.linear = nn.Linear(input_size, 1)\n",
        "    \n",
        "    def forward(self, x):\n",
        "        return self.linear(x).squeeze()\n",
        "\n",
        "# 创建PyTorch模型\n",
        "pytorch_model = PyTorchLinearRegression(input_size=1)\n",
        "criterion = nn.MSELoss()\n",
        "optimizer = torch.optim.SGD(pytorch_model.parameters(), lr=0.1)\n",
        "\n",
        "# 训练PyTorch模型\n",
        "pytorch_loss_history = []\n",
        "print(\"训练PyTorch内置线性回归模型...\")\n",
        "\n",
        "for epoch in range(1000):\n",
        "    # 前向传播\n",
        "    y_pred = pytorch_model(X_train_tensor)\n",
        "    loss = criterion(y_pred, y_train_tensor)\n",
        "    pytorch_loss_history.append(loss.item())\n",
        "    \n",
        "    # 反向传播\n",
        "    optimizer.zero_grad()\n",
        "    loss.backward()\n",
        "    optimizer.step()\n",
        "    \n",
        "    if epoch % 100 == 0:\n",
        "        print(f\"Epoch {epoch:4d}: Loss = {loss.item():.6f}\")\n",
        "\n",
        "print(f\"PyTorch模型训练完成！最终损失: {pytorch_loss_history[-1]:.6f}\")\n",
        "\n",
        "# PyTorch模型预测\n",
        "with torch.no_grad():\n",
        "    y_train_pred_pytorch = pytorch_model(X_train_tensor)\n",
        "    y_test_pred_pytorch = pytorch_model(X_test_tensor)\n",
        "\n",
        "# 计算PyTorch模型的评估指标\n",
        "pytorch_train_metrics = calculate_metrics(y_train_tensor, y_train_pred_pytorch)\n",
        "pytorch_test_metrics = calculate_metrics(y_test_tensor, y_test_pred_pytorch)\n",
        "\n",
        "print(\"\\nPyTorch内置模型评估指标:\")\n",
        "print(\"训练集:\")\n",
        "for metric, value in pytorch_train_metrics.items():\n",
        "    print(f\"  {metric}: {value:.4f}\")\n",
        "print(\"测试集:\")\n",
        "for metric, value in pytorch_test_metrics.items():\n",
        "    print(f\"  {metric}: {value:.4f}\")\n",
        "\n",
        "# 获取PyTorch模型参数\n",
        "pytorch_params = {}\n",
        "for name, param in pytorch_model.named_parameters():\n",
        "    pytorch_params[name] = param.data.numpy()\n",
        "\n",
        "print(f\"\\nPyTorch模型参数:\")\n",
        "print(f\"  权重 (weight): {pytorch_params['linear.weight'][0][0]:.4f}\")\n",
        "print(f\"  偏置 (bias): {pytorch_params['linear.bias'][0]:.4f}\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 对比两种实现\n",
        "plt.figure(figsize=(15, 8))\n",
        "\n",
        "# 1. 损失曲线对比\n",
        "plt.subplot(2, 3, 1)\n",
        "plt.plot(model.loss_history, label='从零实现', alpha=0.8)\n",
        "plt.plot(pytorch_loss_history, label='PyTorch内置', alpha=0.8)\n",
        "plt.title('训练损失对比')\n",
        "plt.xlabel('迭代次数')\n",
        "plt.ylabel('损失值')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "plt.yscale('log')\n",
        "\n",
        "# 2. 训练集拟合对比\n",
        "plt.subplot(2, 3, 2)\n",
        "plt.scatter(X_train_scaled, y_train_scaled, alpha=0.3, label='真实值', color='gray')\n",
        "plt.plot(X_train_scaled, y_train_pred.detach().numpy(), 'r-', label='从零实现', linewidth=2)\n",
        "plt.plot(X_train_scaled, y_train_pred_pytorch.numpy(), 'b--', label='PyTorch内置', linewidth=2)\n",
        "plt.title('训练集拟合对比')\n",
        "plt.xlabel('标准化特征 X')\n",
        "plt.ylabel('标准化目标值 y')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "# 3. 测试集拟合对比\n",
        "plt.subplot(2, 3, 3)\n",
        "plt.scatter(X_test_scaled, y_test_scaled, alpha=0.3, label='真实值', color='gray')\n",
        "plt.plot(X_test_scaled, y_test_pred.detach().numpy(), 'r-', label='从零实现', linewidth=2)\n",
        "plt.plot(X_test_scaled, y_test_pred_pytorch.numpy(), 'b--', label='PyTorch内置', linewidth=2)\n",
        "plt.title('测试集拟合对比')\n",
        "plt.xlabel('标准化特征 X')\n",
        "plt.ylabel('标准化目标值 y')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "# 4. 参数对比\n",
        "plt.subplot(2, 3, 4)\n",
        "methods = ['从零实现', 'PyTorch内置']\n",
        "weights = [params['weights'][0], pytorch_params['linear.weight'][0][0]]\n",
        "biases = [params['bias'][0], pytorch_params['linear.bias'][0]]\n",
        "\n",
        "x = np.arange(len(methods))\n",
        "width = 0.35\n",
        "\n",
        "plt.bar(x - width/2, weights, width, label='权重', alpha=0.8)\n",
        "plt.bar(x + width/2, biases, width, label='偏置', alpha=0.8)\n",
        "plt.title('学习到的参数对比')\n",
        "plt.xlabel('实现方法')\n",
        "plt.ylabel('参数值')\n",
        "plt.xticks(x, methods)\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 5. 评估指标对比 - 训练集\n",
        "plt.subplot(2, 3, 5)\n",
        "metrics = ['MSE', 'RMSE', 'MAE', 'R²']\n",
        "scratch_values = [train_metrics[m] for m in metrics]\n",
        "pytorch_values = [pytorch_train_metrics[m] for m in metrics]\n",
        "\n",
        "x = np.arange(len(metrics))\n",
        "width = 0.35\n",
        "\n",
        "plt.bar(x - width/2, scratch_values, width, label='从零实现', alpha=0.8)\n",
        "plt.bar(x + width/2, pytorch_values, width, label='PyTorch内置', alpha=0.8)\n",
        "plt.title('训练集评估指标对比')\n",
        "plt.xlabel('指标')\n",
        "plt.ylabel('值')\n",
        "plt.xticks(x, metrics)\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 6. 评估指标对比 - 测试集\n",
        "plt.subplot(2, 3, 6)\n",
        "scratch_test_values = [test_metrics[m] for m in metrics]\n",
        "pytorch_test_values = [pytorch_test_metrics[m] for m in metrics]\n",
        "\n",
        "plt.bar(x - width/2, scratch_test_values, width, label='从零实现', alpha=0.8)\n",
        "plt.bar(x + width/2, pytorch_test_values, width, label='PyTorch内置', alpha=0.8)\n",
        "plt.title('测试集评估指标对比')\n",
        "plt.xlabel('指标')\n",
        "plt.ylabel('值')\n",
        "plt.xticks(x, metrics)\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# 打印详细对比结果\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "print(\"两种实现方法对比总结\")\n",
        "print(\"=\"*60)\n",
        "print(f\"{'指标':<15} {'从零实现':<15} {'PyTorch内置':<15} {'差异':<15}\")\n",
        "print(\"-\"*60)\n",
        "print(f\"{'训练损失':<15} {model.loss_history[-1]:<15.6f} {pytorch_loss_history[-1]:<15.6f} {abs(model.loss_history[-1] - pytorch_loss_history[-1]):<15.6f}\")\n",
        "print(f\"{'测试MSE':<15} {test_metrics['MSE']:<15.6f} {pytorch_test_metrics['MSE']:<15.6f} {abs(test_metrics['MSE'] - pytorch_test_metrics['MSE']):<15.6f}\")\n",
        "print(f\"{'测试R²':<15} {test_metrics['R²']:<15.6f} {pytorch_test_metrics['R²']:<15.6f} {abs(test_metrics['R²'] - pytorch_test_metrics['R²']):<15.6f}\")\n",
        "print(f\"{'权重参数':<15} {params['weights'][0]:<15.6f} {pytorch_params['linear.weight'][0][0]:<15.6f} {abs(params['weights'][0] - pytorch_params['linear.weight'][0][0]):<15.6f}\")\n",
        "print(f\"{'偏置参数':<15} {params['bias'][0]:<15.6f} {pytorch_params['linear.bias'][0]:<15.6f} {abs(params['bias'][0] - pytorch_params['linear.bias'][0]):<15.6f}\")\n",
        "print(\"=\"*60)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 5. 练习：不同学习率的影响\n",
        "\n",
        "让我们测试不同学习率对训练过程的影响。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 测试不同学习率\n",
        "learning_rates = [0.001, 0.01, 0.1, 0.5, 1.0]\n",
        "lr_results = []\n",
        "\n",
        "print(\"测试不同学习率的影响...\")\n",
        "print(\"=\"*50)\n",
        "\n",
        "for lr in learning_rates:\n",
        "    print(f\"\\n测试学习率: {lr}\")\n",
        "    \n",
        "    # 创建新模型\n",
        "    test_model = LinearRegressionFromScratch(learning_rate=lr, max_iterations=500)\n",
        "    test_model.fit(X_train_tensor, y_train_tensor, verbose=False)\n",
        "    \n",
        "    # 预测和评估\n",
        "    y_pred = test_model.predict(X_test_tensor)\n",
        "    metrics = calculate_metrics(y_test_tensor, y_pred)\n",
        "    \n",
        "    lr_results.append({\n",
        "        'learning_rate': lr,\n",
        "        'final_loss': test_model.loss_history[-1],\n",
        "        'test_mse': metrics['MSE'],\n",
        "        'test_r2': metrics['R²'],\n",
        "        'loss_history': test_model.loss_history.copy()\n",
        "    })\n",
        "    \n",
        "    print(f\"  最终损失: {test_model.loss_history[-1]:.6f}\")\n",
        "    print(f\"  测试MSE: {metrics['MSE']:.6f}\")\n",
        "    print(f\"  测试R²: {metrics['R²']:.6f}\")\n",
        "\n",
        "# 可视化不同学习率的效果\n",
        "plt.figure(figsize=(15, 10))\n",
        "\n",
        "# 1. 损失曲线对比\n",
        "plt.subplot(2, 3, 1)\n",
        "for result in lr_results:\n",
        "    plt.plot(result['loss_history'], label=f'LR={result[\"learning_rate\"]}', alpha=0.8)\n",
        "plt.title('不同学习率的损失曲线')\n",
        "plt.xlabel('迭代次数')\n",
        "plt.ylabel('损失值')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "plt.yscale('log')\n",
        "\n",
        "# 2. 最终损失 vs 学习率\n",
        "plt.subplot(2, 3, 2)\n",
        "lrs = [r['learning_rate'] for r in lr_results]\n",
        "final_losses = [r['final_loss'] for r in lr_results]\n",
        "plt.plot(lrs, final_losses, 'bo-', linewidth=2, markersize=8)\n",
        "plt.title('最终损失 vs 学习率')\n",
        "plt.xlabel('学习率')\n",
        "plt.ylabel('最终损失')\n",
        "plt.grid(True)\n",
        "plt.xscale('log')\n",
        "plt.yscale('log')\n",
        "\n",
        "# 3. 测试MSE vs 学习率\n",
        "plt.subplot(2, 3, 3)\n",
        "test_mses = [r['test_mse'] for r in lr_results]\n",
        "plt.plot(lrs, test_mses, 'ro-', linewidth=2, markersize=8)\n",
        "plt.title('测试MSE vs 学习率')\n",
        "plt.xlabel('学习率')\n",
        "plt.ylabel('测试MSE')\n",
        "plt.grid(True)\n",
        "plt.xscale('log')\n",
        "plt.yscale('log')\n",
        "\n",
        "# 4. 测试R² vs 学习率\n",
        "plt.subplot(2, 3, 4)\n",
        "test_r2s = [r['test_r2'] for r in lr_results]\n",
        "plt.plot(lrs, test_r2s, 'go-', linewidth=2, markersize=8)\n",
        "plt.title('测试R² vs 学习率')\n",
        "plt.xlabel('学习率')\n",
        "plt.ylabel('测试R²')\n",
        "plt.grid(True)\n",
        "plt.xscale('log')\n",
        "\n",
        "# 5. 收敛速度分析（前100次迭代）\n",
        "plt.subplot(2, 3, 5)\n",
        "for result in lr_results:\n",
        "    plt.plot(result['loss_history'][:100], label=f'LR={result[\"learning_rate\"]}', alpha=0.8)\n",
        "plt.title('前100次迭代的收敛速度')\n",
        "plt.xlabel('迭代次数')\n",
        "plt.ylabel('损失值')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "plt.yscale('log')\n",
        "\n",
        "# 6. 学习率效果总结\n",
        "plt.subplot(2, 3, 6)\n",
        "metrics_names = ['最终损失', '测试MSE', '测试R²']\n",
        "metrics_values = [final_losses, test_mses, test_r2s]\n",
        "colors = ['blue', 'red', 'green']\n",
        "\n",
        "for i, (name, values, color) in enumerate(zip(metrics_names, metrics_values, colors)):\n",
        "    plt.plot(lrs, values, 'o-', label=name, color=color, linewidth=2, markersize=6)\n",
        "\n",
        "plt.title('学习率效果综合对比')\n",
        "plt.xlabel('学习率')\n",
        "plt.ylabel('指标值')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "plt.xscale('log')\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# 找出最佳学习率\n",
        "best_lr_result = min(lr_results, key=lambda x: x['test_mse'])\n",
        "print(f\"\\n最佳学习率: {best_lr_result['learning_rate']}\")\n",
        "print(f\"对应的测试MSE: {best_lr_result['test_mse']:.6f}\")\n",
        "print(f\"对应的测试R²: {best_lr_result['test_r2']:.6f}\")\n",
        "\n",
        "# 学习率选择建议\n",
        "print(f\"\\n学习率选择建议:\")\n",
        "print(f\"- 学习率过小 (如 {learning_rates[0]}): 收敛慢，但稳定\")\n",
        "print(f\"- 学习率适中 (如 {best_lr_result['learning_rate']}): 收敛快且效果好\")\n",
        "print(f\"- 学习率过大 (如 {learning_rates[-1]}): 可能震荡或不收敛\")\n"
      ]
    },
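    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The implementation above uses full-batch gradient descent: every update sees the whole training set. A common variant is mini-batch gradient descent, which updates on small shuffled subsets. A minimal self-contained sketch (with its own synthetic data, not the dataset above):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "\n",
        "torch.manual_seed(0)\n",
        "X_mb = torch.randn(200, 1)\n",
        "y_mb = 2.0 * X_mb[:, 0] + 0.5 + 0.1 * torch.randn(200)\n",
        "\n",
        "w_mb, b_mb = torch.zeros(1), torch.zeros(1)\n",
        "step, batch_size = 0.1, 32\n",
        "\n",
        "for epoch in range(50):\n",
        "    perm = torch.randperm(X_mb.shape[0])  # reshuffle every epoch\n",
        "    for start in range(0, X_mb.shape[0], batch_size):\n",
        "        idx = perm[start:start + batch_size]\n",
        "        Xb, yb = X_mb[idx], y_mb[idx]\n",
        "        pred = Xb @ w_mb + b_mb\n",
        "        grad = (pred - yb) / yb.shape[0]  # dJ/dpred for J = (1/2m) sum(err^2)\n",
        "        w_mb = w_mb - step * (Xb.T @ grad)\n",
        "        b_mb = b_mb - step * grad.sum()\n",
        "\n",
        "print(f\"w ~ {w_mb.item():.3f}, b ~ {b_mb.item():.3f}\")  # near 2.0 and 0.5\n"
      ]
    },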
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 总结\n",
        "\n",
        "在这个笔记本中，我们深入学习了线性回归的从零实现：\n",
        "\n",
        "### 🎯 学习成果\n",
        "\n",
        "1. **数学原理理解**\n",
        "   - 线性回归的数学公式和损失函数\n",
        "   - 梯度下降算法的原理和实现\n",
        "   - 参数更新规则\n",
        "\n",
        "2. **从零实现**\n",
        "   - 手动实现前向传播、损失计算、梯度计算\n",
        "   - 完整的训练循环\n",
        "   - 参数初始化和更新\n",
        "\n",
        "3. **模型评估**\n",
        "   - MSE、RMSE、MAE、R²等评估指标\n",
        "   - 训练和测试性能分析\n",
        "   - 残差分析\n",
        "\n",
        "4. **对比分析**\n",
        "   - 从零实现 vs PyTorch内置方法\n",
        "   - 不同学习率的影响\n",
        "   - 收敛速度和稳定性分析\n",
        "\n",
        "### 🔑 关键要点\n",
        "\n",
        "- **梯度下降**：通过计算损失函数对参数的梯度来更新参数\n",
        "- **学习率**：控制参数更新的步长，影响收敛速度和稳定性\n",
        "- **数据标准化**：有助于模型训练和收敛\n",
        "- **评估指标**：R²决定系数是衡量模型拟合优度的重要指标\n",
        "\n",
        "### 🚀 下一步\n",
        "\n",
        "在下一个笔记本中，我们将学习逻辑回归从零实现，这是分类问题的基础算法。\n"
      ]
    }
  ],
  "metadata": {
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
