{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# 04. 逻辑回归从零实现\n",
        "\n",
        "## 学习目标\n",
        "- 理解逻辑回归的数学原理\n",
        "- 从零实现逻辑回归算法\n",
        "- 学习Sigmoid激活函数\n",
        "- 掌握交叉熵损失函数\n",
        "- 实现二分类和多分类逻辑回归\n",
        "- 可视化决策边界\n",
        "\n",
        "## 什么是逻辑回归？\n",
        "\n",
        "逻辑回归是一种用于分类问题的线性模型，虽然名字叫\"回归\"，但实际上是分类算法。它使用逻辑函数（Sigmoid）将线性组合映射到0-1之间的概率。\n",
        "\n",
        "**数学公式：**\n",
        "\n",
        "**假设函数：**\n",
        "$$h_θ(x) = \\frac{1}{1 + e^{-θ^T x}} = σ(θ^T x)$$\n",
        "\n",
        "其中 $σ(z) = \\frac{1}{1 + e^{-z}}$ 是Sigmoid函数\n",
        "\n",
        "**损失函数（交叉熵）：**\n",
        "$$J(θ) = -\\frac{1}{m} \\sum_{i=1}^{m} [y^{(i)} \\log(h_θ(x^{(i)})) + (1-y^{(i)}) \\log(1-h_θ(x^{(i)}))]$$\n",
        "\n",
        "**梯度：**\n",
        "$$\\frac{\\partial J(θ)}{\\partial θ_j} = \\frac{1}{m} \\sum_{i=1}^{m} (h_θ(x^{(i)}) - y^{(i)}) x_j^{(i)}$$\n",
        "\n",
        "**参数更新：**\n",
        "$$θ_j := θ_j - α \\frac{\\partial J(θ)}{\\partial θ_j}$$\n"
      ]
    },
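    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check (a small sketch added here, not part of the original derivation), the gradient formula above can be verified numerically: perturb each parameter by a small amount and compare the finite-difference slope of $J(\\theta)$ with the analytic gradient $\\frac{1}{m} X^T (h - y)$.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "\n",
        "torch.manual_seed(0)\n",
        "X_chk = torch.randn(50, 2)\n",
        "y_chk = (torch.rand(50) > 0.5).float()\n",
        "theta = torch.tensor([0.3, -0.7])\n",
        "\n",
        "def J(t):\n",
        "    \"\"\"Cross-entropy loss of a bias-free logistic model at parameters t\"\"\"\n",
        "    h = torch.sigmoid(X_chk @ t)\n",
        "    return -torch.mean(y_chk * torch.log(h) + (1 - y_chk) * torch.log(1 - h))\n",
        "\n",
        "# Analytic gradient: (1/m) * X^T (h - y)\n",
        "analytic = X_chk.T @ (torch.sigmoid(X_chk @ theta) - y_chk) / len(y_chk)\n",
        "\n",
        "# Central finite differences should agree with the analytic gradient\n",
        "eps = 1e-4\n",
        "for j in range(2):\n",
        "    e = torch.zeros(2); e[j] = eps\n",
        "    numeric = (J(theta + e) - J(theta - e)) / (2 * eps)\n",
        "    print(f'theta_{j}: analytic={analytic[j]:.6f}, numeric={numeric:.6f}')\n"
      ]
    },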
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "from sklearn.datasets import make_classification, make_blobs\n",
        "from sklearn.model_selection import train_test_split\n",
        "from sklearn.preprocessing import StandardScaler\n",
        "from sklearn.metrics import accuracy_score, classification_report, confusion_matrix\n",
        "import seaborn as sns\n",
        "import time\n",
        "\n",
        "# 设置中文字体\n",
        "plt.rcParams['font.sans-serif'] = ['SimHei']\n",
        "plt.rcParams['axes.unicode_minus'] = False\n",
        "\n",
        "# 设置随机种子\n",
        "torch.manual_seed(42)\n",
        "np.random.seed(42)\n",
        "\n",
        "print(f\"PyTorch版本: {torch.__version__}\")\n",
        "print(f\"CUDA可用: {torch.cuda.is_available()}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 1. Sigmoid函数和交叉熵损失\n",
        "\n",
        "首先，让我们理解逻辑回归的核心组件：Sigmoid函数和交叉熵损失函数。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 定义Sigmoid函数\n",
        "def sigmoid(z):\n",
        "    \"\"\"Sigmoid激活函数\"\"\"\n",
        "    # 防止数值溢出\n",
        "    z = torch.clamp(z, -500, 500)\n",
        "    return 1 / (1 + torch.exp(-z))\n",
        "\n",
        "# 定义交叉熵损失函数\n",
        "def cross_entropy_loss(y_pred, y_true):\n",
        "    \"\"\"交叉熵损失函数\"\"\"\n",
        "    # 防止log(0)的情况\n",
        "    epsilon = 1e-15\n",
        "    y_pred = torch.clamp(y_pred, epsilon, 1 - epsilon)\n",
        "    \n",
        "    # 计算交叉熵损失\n",
        "    loss = -torch.mean(y_true * torch.log(y_pred) + (1 - y_true) * torch.log(1 - y_pred))\n",
        "    return loss\n",
        "\n",
        "# 可视化Sigmoid函数\n",
        "z = torch.linspace(-10, 10, 1000)\n",
        "sigmoid_values = sigmoid(z)\n",
        "\n",
        "plt.figure(figsize=(12, 4))\n",
        "\n",
        "# Sigmoid函数\n",
        "plt.subplot(1, 2, 1)\n",
        "plt.plot(z.numpy(), sigmoid_values.numpy(), 'b-', linewidth=2, label='Sigmoid函数')\n",
        "plt.axhline(y=0.5, color='r', linestyle='--', alpha=0.7, label='决策边界 (0.5)')\n",
        "plt.axvline(x=0, color='g', linestyle='--', alpha=0.7, label='z=0')\n",
        "plt.title('Sigmoid函数')\n",
        "plt.xlabel('z')\n",
        "plt.ylabel('σ(z)')\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# Sigmoid函数的导数\n",
        "plt.subplot(1, 2, 2)\n",
        "sigmoid_derivative = sigmoid_values * (1 - sigmoid_values)\n",
        "plt.plot(z.numpy(), sigmoid_derivative.numpy(), 'r-', linewidth=2, label='Sigmoid导数')\n",
        "plt.title('Sigmoid函数的导数')\n",
        "plt.xlabel('z')\n",
        "plt.ylabel('σ\\'(z)')\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# 演示交叉熵损失\n",
        "print(\"交叉熵损失函数演示:\")\n",
        "y_true_demo = torch.tensor([1.0, 0.0, 1.0, 0.0])\n",
        "y_pred_demo = torch.tensor([0.9, 0.1, 0.3, 0.8])\n",
        "\n",
        "loss_demo = cross_entropy_loss(y_pred_demo, y_true_demo)\n",
        "print(f\"真实标签: {y_true_demo.numpy()}\")\n",
        "print(f\"预测概率: {y_pred_demo.numpy()}\")\n",
        "print(f\"交叉熵损失: {loss_demo.item():.4f}\")\n",
        "\n",
        "# 分析不同预测情况下的损失\n",
        "print(\"\\n不同预测情况下的损失:\")\n",
        "scenarios = [\n",
        "    (\"完美预测\", [1.0, 0.0, 1.0, 0.0], [1.0, 0.0, 1.0, 0.0]),\n",
        "    (\"较好预测\", [1.0, 0.0, 1.0, 0.0], [0.9, 0.1, 0.8, 0.2]),\n",
        "    (\"一般预测\", [1.0, 0.0, 1.0, 0.0], [0.7, 0.3, 0.6, 0.4]),\n",
        "    (\"较差预测\", [1.0, 0.0, 1.0, 0.0], [0.3, 0.7, 0.4, 0.6]),\n",
        "    (\"完全错误\", [1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0])\n",
        "]\n",
        "\n",
        "for name, true_vals, pred_vals in scenarios:\n",
        "    loss = cross_entropy_loss(torch.tensor(pred_vals), torch.tensor(true_vals))\n",
        "    print(f\"{name}: {loss.item():.4f}\")\n"
      ]
    },
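    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The hand-written `sigmoid` and `cross_entropy_loss` should agree with PyTorch's built-ins (`torch.sigmoid` and `torch.nn.functional.binary_cross_entropy`). A quick cross-check, added as a sketch; it reuses `y_pred_demo` and `y_true_demo` from the cell above:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch.nn.functional as F\n",
        "\n",
        "# The custom sigmoid should match torch.sigmoid away from the clamp bounds\n",
        "z_chk = torch.tensor([-2.0, 0.0, 3.0])\n",
        "print(torch.allclose(sigmoid(z_chk), torch.sigmoid(z_chk)))\n",
        "\n",
        "# The custom cross-entropy should match F.binary_cross_entropy\n",
        "manual = cross_entropy_loss(y_pred_demo, y_true_demo)\n",
        "builtin = F.binary_cross_entropy(y_pred_demo, y_true_demo)\n",
        "print(manual.item(), builtin.item())  # the two values should match\n"
      ]
    },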
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 2. 数据准备\n",
        "\n",
        "现在让我们生成一些分类数据来训练我们的逻辑回归模型。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 生成二分类数据\n",
        "X, y = make_classification(\n",
        "    n_samples=1000,\n",
        "    n_features=2,\n",
        "    n_redundant=0,\n",
        "    n_informative=2,\n",
        "    n_clusters_per_class=1,\n",
        "    random_state=42\n",
        ")\n",
        "\n",
        "# 数据分割\n",
        "X_train, X_test, y_train, y_test = train_test_split(\n",
        "    X, y, test_size=0.2, random_state=42\n",
        ")\n",
        "\n",
        "# 数据标准化\n",
        "scaler = StandardScaler()\n",
        "X_train_scaled = scaler.fit_transform(X_train)\n",
        "X_test_scaled = scaler.transform(X_test)\n",
        "\n",
        "# 转换为PyTorch张量\n",
        "X_train_tensor = torch.FloatTensor(X_train_scaled)\n",
        "X_test_tensor = torch.FloatTensor(X_test_scaled)\n",
        "y_train_tensor = torch.FloatTensor(y_train)\n",
        "y_test_tensor = torch.FloatTensor(y_test)\n",
        "\n",
        "print(f\"训练集大小: {X_train_tensor.shape}\")\n",
        "print(f\"测试集大小: {X_test_tensor.shape}\")\n",
        "print(f\"类别分布 - 训练集: {np.bincount(y_train)}\")\n",
        "print(f\"类别分布 - 测试集: {np.bincount(y_test)}\")\n",
        "\n",
        "# 可视化数据\n",
        "plt.figure(figsize=(12, 4))\n",
        "\n",
        "plt.subplot(1, 2, 1)\n",
        "plt.scatter(X_train[y_train == 0, 0], X_train[y_train == 0, 1], \n",
        "           c='red', label='类别 0', alpha=0.7)\n",
        "plt.scatter(X_train[y_train == 1, 0], X_train[y_train == 1, 1], \n",
        "           c='blue', label='类别 1', alpha=0.7)\n",
        "plt.title('原始训练数据')\n",
        "plt.xlabel('特征 1')\n",
        "plt.ylabel('特征 2')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "plt.subplot(1, 2, 2)\n",
        "plt.scatter(X_train_scaled[y_train == 0, 0], X_train_scaled[y_train == 0, 1], \n",
        "           c='red', label='类别 0', alpha=0.7)\n",
        "plt.scatter(X_train_scaled[y_train == 1, 0], X_train_scaled[y_train == 1, 1], \n",
        "           c='blue', label='类别 1', alpha=0.7)\n",
        "plt.title('标准化后训练数据')\n",
        "plt.xlabel('标准化特征 1')\n",
        "plt.ylabel('标准化特征 2')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 3. 从零实现逻辑回归\n",
        "\n",
        "现在让我们从零开始实现逻辑回归算法。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "class LogisticRegressionFromScratch:\n",
        "    \"\"\"从零实现的逻辑回归\"\"\"\n",
        "    \n",
        "    def __init__(self, learning_rate=0.01, max_iterations=1000, tolerance=1e-6):\n",
        "        self.learning_rate = learning_rate\n",
        "        self.max_iterations = max_iterations\n",
        "        self.tolerance = tolerance\n",
        "        self.weights = None\n",
        "        self.bias = None\n",
        "        self.cost_history = []\n",
        "        \n",
        "    def _add_bias(self, X):\n",
        "        \"\"\"添加偏置项\"\"\"\n",
        "        return torch.cat([torch.ones(X.shape[0], 1), X], dim=1)\n",
        "    \n",
        "    def _sigmoid(self, z):\n",
        "        \"\"\"Sigmoid激活函数\"\"\"\n",
        "        z = torch.clamp(z, -500, 500)  # 防止数值溢出\n",
        "        return 1 / (1 + torch.exp(-z))\n",
        "    \n",
        "    def _compute_cost(self, X, y):\n",
        "        \"\"\"计算交叉熵损失\"\"\"\n",
        "        m = X.shape[0]\n",
        "        z = torch.matmul(X, self.weights) + self.bias\n",
        "        h = self._sigmoid(z)\n",
        "        \n",
        "        # 防止log(0)\n",
        "        epsilon = 1e-15\n",
        "        h = torch.clamp(h, epsilon, 1 - epsilon)\n",
        "        \n",
        "        cost = -torch.mean(y * torch.log(h) + (1 - y) * torch.log(1 - h))\n",
        "        return cost\n",
        "    \n",
        "    def _compute_gradients(self, X, y):\n",
        "        \"\"\"计算梯度\"\"\"\n",
        "        m = X.shape[0]\n",
        "        z = torch.matmul(X, self.weights) + self.bias\n",
        "        h = self._sigmoid(z)\n",
        "        \n",
        "        # 计算梯度\n",
        "        dw = torch.matmul(X.T, (h - y)) / m\n",
        "        db = torch.mean(h - y)\n",
        "        \n",
        "        return dw, db\n",
        "    \n",
        "    def fit(self, X, y):\n",
        "        \"\"\"训练模型\"\"\"\n",
        "        # 初始化参数\n",
        "        n_features = X.shape[1]\n",
        "        self.weights = torch.zeros(n_features, 1, requires_grad=False)\n",
        "        self.bias = torch.zeros(1, requires_grad=False)\n",
        "        \n",
        "        print(f\"开始训练逻辑回归模型...\")\n",
        "        print(f\"学习率: {self.learning_rate}\")\n",
        "        print(f\"最大迭代次数: {self.max_iterations}\")\n",
        "        print(f\"特征数量: {n_features}\")\n",
        "        print(f\"样本数量: {X.shape[0]}\")\n",
        "        print(\"-\" * 50)\n",
        "        \n",
        "        start_time = time.time()\n",
        "        \n",
        "        for i in range(self.max_iterations):\n",
        "            # 前向传播\n",
        "            z = torch.matmul(X, self.weights) + self.bias\n",
        "            h = self._sigmoid(z)\n",
        "            \n",
        "            # 计算损失\n",
        "            cost = self._compute_cost(X, y)\n",
        "            self.cost_history.append(cost.item())\n",
        "            \n",
        "            # 计算梯度\n",
        "            dw, db = self._compute_gradients(X, y)\n",
        "            \n",
        "            # 更新参数\n",
        "            self.weights -= self.learning_rate * dw\n",
        "            self.bias -= self.learning_rate * db\n",
        "            \n",
        "            # 检查收敛\n",
        "            if i > 0 and abs(self.cost_history[-1] - self.cost_history[-2]) < self.tolerance:\n",
        "                print(f\"在第 {i+1} 次迭代后收敛\")\n",
        "                break\n",
        "            \n",
        "            # 打印进度\n",
        "            if (i + 1) % 100 == 0:\n",
        "                print(f\"迭代 {i+1}/{self.max_iterations}, 损失: {cost.item():.6f}\")\n",
        "        \n",
        "        training_time = time.time() - start_time\n",
        "        print(f\"训练完成! 用时: {training_time:.2f}秒\")\n",
        "        print(f\"最终损失: {self.cost_history[-1]:.6f}\")\n",
        "        print(f\"总迭代次数: {len(self.cost_history)}\")\n",
        "    \n",
        "    def predict_proba(self, X):\n",
        "        \"\"\"预测概率\"\"\"\n",
        "        z = torch.matmul(X, self.weights) + self.bias\n",
        "        return self._sigmoid(z)\n",
        "    \n",
        "    def predict(self, X, threshold=0.5):\n",
        "        \"\"\"预测类别\"\"\"\n",
        "        probabilities = self.predict_proba(X)\n",
        "        return (probabilities >= threshold).float()\n",
        "    \n",
        "    def score(self, X, y):\n",
        "        \"\"\"计算准确率\"\"\"\n",
        "        predictions = self.predict(X)\n",
        "        return torch.mean((predictions.squeeze() == y).float()).item()\n",
        "\n",
        "# 创建并训练模型\n",
        "model = LogisticRegressionFromScratch(learning_rate=0.1, max_iterations=1000)\n",
        "model.fit(X_train_tensor, y_train_tensor.unsqueeze(1))\n"
      ]
    },
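    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before relying on `_compute_gradients`, it is worth confirming that the closed-form gradients match what PyTorch autograd computes for the same loss. A small sanity check (a sketch on random data, independent of the model trained above):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Compare closed-form gradients with autograd on a small random batch\n",
        "Xs = torch.randn(20, 2)\n",
        "ys = (torch.rand(20, 1) > 0.5).float()\n",
        "\n",
        "w = torch.zeros(2, 1, requires_grad=True)\n",
        "b = torch.zeros(1, requires_grad=True)\n",
        "h = torch.sigmoid(Xs @ w + b)\n",
        "loss = -torch.mean(ys * torch.log(h) + (1 - ys) * torch.log(1 - h))\n",
        "loss.backward()\n",
        "\n",
        "# Closed-form gradients, as in _compute_gradients above\n",
        "h0 = torch.sigmoid(Xs @ w.detach() + b.detach())\n",
        "dw_manual = Xs.T @ (h0 - ys) / len(ys)\n",
        "db_manual = torch.mean(h0 - ys)\n",
        "print(torch.allclose(w.grad, dw_manual, atol=1e-6), torch.allclose(b.grad, db_manual, atol=1e-6))\n"
      ]
    },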
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 4. 模型评估和可视化\n",
        "\n",
        "现在让我们评估模型性能并可视化结果。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 模型评估\n",
        "train_accuracy = model.score(X_train_tensor, y_train_tensor)\n",
        "test_accuracy = model.score(X_test_tensor, y_test_tensor)\n",
        "\n",
        "print(f\"训练集准确率: {train_accuracy:.4f}\")\n",
        "print(f\"测试集准确率: {test_accuracy:.4f}\")\n",
        "\n",
        "# 预测概率\n",
        "train_proba = model.predict_proba(X_train_tensor)\n",
        "test_proba = model.predict_proba(X_test_tensor)\n",
        "\n",
        "# 预测类别\n",
        "train_pred = model.predict(X_train_tensor)\n",
        "test_pred = model.predict(X_test_tensor)\n",
        "\n",
        "print(f\"\\n模型参数:\")\n",
        "print(f\"权重: {model.weights.squeeze().numpy()}\")\n",
        "print(f\"偏置: {model.bias.item():.4f}\")\n",
        "\n",
        "# 可视化训练过程\n",
        "plt.figure(figsize=(15, 10))\n",
        "\n",
        "# 损失函数变化\n",
        "plt.subplot(2, 3, 1)\n",
        "plt.plot(model.cost_history, 'b-', linewidth=2)\n",
        "plt.title('训练损失变化')\n",
        "plt.xlabel('迭代次数')\n",
        "plt.ylabel('交叉熵损失')\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 决策边界可视化\n",
        "plt.subplot(2, 3, 2)\n",
        "h = 0.01\n",
        "x_min, x_max = X_train_scaled[:, 0].min() - 1, X_train_scaled[:, 0].max() + 1\n",
        "y_min, y_max = X_train_scaled[:, 1].min() - 1, X_train_scaled[:, 1].max() + 1\n",
        "xx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n",
        "                     np.arange(y_min, y_max, h))\n",
        "\n",
        "# 创建网格点\n",
        "grid_points = torch.FloatTensor(np.c_[xx.ravel(), yy.ravel()])\n",
        "Z = model.predict_proba(grid_points)\n",
        "Z = Z.reshape(xx.shape)\n",
        "\n",
        "# 绘制决策边界\n",
        "plt.contourf(xx, yy, Z.numpy(), levels=50, alpha=0.8, cmap='RdYlBu')\n",
        "plt.colorbar(label='预测概率')\n",
        "\n",
        "# 绘制数据点\n",
        "plt.scatter(X_train_scaled[y_train == 0, 0], X_train_scaled[y_train == 0, 1], \n",
        "           c='red', label='类别 0', alpha=0.7, edgecolors='black')\n",
        "plt.scatter(X_train_scaled[y_train == 1, 0], X_train_scaled[y_train == 1, 1], \n",
        "           c='blue', label='类别 1', alpha=0.7, edgecolors='black')\n",
        "\n",
        "plt.title('决策边界')\n",
        "plt.xlabel('标准化特征 1')\n",
        "plt.ylabel('标准化特征 2')\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 预测概率分布\n",
        "plt.subplot(2, 3, 3)\n",
        "plt.hist(train_proba[y_train == 0].numpy(), bins=20, alpha=0.7, label='类别 0', color='red')\n",
        "plt.hist(train_proba[y_train == 1].numpy(), bins=20, alpha=0.7, label='类别 1', color='blue')\n",
        "plt.axvline(x=0.5, color='black', linestyle='--', label='决策阈值')\n",
        "plt.title('预测概率分布')\n",
        "plt.xlabel('预测概率')\n",
        "plt.ylabel('频次')\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 混淆矩阵 - 训练集\n",
        "plt.subplot(2, 3, 4)\n",
        "cm_train = confusion_matrix(y_train, train_pred.squeeze().numpy())\n",
        "sns.heatmap(cm_train, annot=True, fmt='d', cmap='Blues', \n",
        "            xticklabels=['预测 0', '预测 1'], \n",
        "            yticklabels=['真实 0', '真实 1'])\n",
        "plt.title(f'训练集混淆矩阵\\n准确率: {train_accuracy:.4f}')\n",
        "\n",
        "# 混淆矩阵 - 测试集\n",
        "plt.subplot(2, 3, 5)\n",
        "cm_test = confusion_matrix(y_test, test_pred.squeeze().numpy())\n",
        "sns.heatmap(cm_test, annot=True, fmt='d', cmap='Greens', \n",
        "            xticklabels=['预测 0', '预测 1'], \n",
        "            yticklabels=['真实 0', '真实 1'])\n",
        "plt.title(f'测试集混淆矩阵\\n准确率: {test_accuracy:.4f}')\n",
        "\n",
        "# 特征重要性\n",
        "plt.subplot(2, 3, 6)\n",
        "feature_names = ['特征 1', '特征 2']\n",
        "weights = model.weights.squeeze().numpy()\n",
        "plt.bar(feature_names, weights, color=['skyblue', 'lightcoral'])\n",
        "plt.title('特征权重')\n",
        "plt.ylabel('权重值')\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 添加数值标签\n",
        "for i, v in enumerate(weights):\n",
        "    plt.text(i, v + 0.01 if v >= 0 else v - 0.01, f'{v:.3f}', \n",
        "             ha='center', va='bottom' if v >= 0 else 'top')\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# 详细分类报告\n",
        "print(\"\\n训练集分类报告:\")\n",
        "print(classification_report(y_train, train_pred.squeeze().numpy(), \n",
        "                          target_names=['类别 0', '类别 1']))\n",
        "\n",
        "print(\"\\n测试集分类报告:\")\n",
        "print(classification_report(y_test, test_pred.squeeze().numpy(), \n",
        "                          target_names=['类别 0', '类别 1']))\n"
      ]
    },
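    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Because the model is linear in the features, the decision boundary also has a closed form: it is the set of points where $w_1 x_1 + w_2 x_2 + b = 0$, i.e. where the predicted probability is exactly 0.5. A small sketch that overlays this line on the training data (it reuses `model`, `X_train_scaled` and `y_train` from the cells above):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Extract the learned parameters\n",
        "w1, w2 = model.weights.squeeze().numpy()\n",
        "b = model.bias.item()\n",
        "\n",
        "# Solve w1*x1 + w2*x2 + b = 0 for x2\n",
        "x1_line = np.linspace(X_train_scaled[:, 0].min(), X_train_scaled[:, 0].max(), 100)\n",
        "x2_line = -(w1 * x1_line + b) / w2\n",
        "\n",
        "plt.figure(figsize=(6, 4))\n",
        "plt.scatter(X_train_scaled[:, 0], X_train_scaled[:, 1], c=y_train, cmap='RdYlBu', alpha=0.5)\n",
        "plt.plot(x1_line, x2_line, 'k--', label='w·x + b = 0')\n",
        "plt.title('Closed-form decision boundary')\n",
        "plt.xlabel('Standardized feature 1')\n",
        "plt.ylabel('Standardized feature 2')\n",
        "plt.legend()\n",
        "plt.show()\n"
      ]
    },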
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 5. 与PyTorch内置逻辑回归对比\n",
        "\n",
        "让我们使用PyTorch的内置模块来实现逻辑回归，并对比性能。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 使用PyTorch内置模块实现逻辑回归\n",
        "class PyTorchLogisticRegression(nn.Module):\n",
        "    \"\"\"使用PyTorch内置模块的逻辑回归\"\"\"\n",
        "    \n",
        "    def __init__(self, input_size):\n",
        "        super(PyTorchLogisticRegression, self).__init__()\n",
        "        self.linear = nn.Linear(input_size, 1)\n",
        "        self.sigmoid = nn.Sigmoid()\n",
        "        \n",
        "    def forward(self, x):\n",
        "        out = self.linear(x)\n",
        "        out = self.sigmoid(out)\n",
        "        return out\n",
        "\n",
        "# 创建PyTorch模型\n",
        "pytorch_model = PyTorchLogisticRegression(input_size=2)\n",
        "\n",
        "# 定义损失函数和优化器\n",
        "criterion = nn.BCELoss()  # 二元交叉熵损失\n",
        "optimizer = torch.optim.SGD(pytorch_model.parameters(), lr=0.1)\n",
        "\n",
        "# 训练PyTorch模型\n",
        "print(\"训练PyTorch内置逻辑回归模型...\")\n",
        "print(\"-\" * 50)\n",
        "\n",
        "pytorch_losses = []\n",
        "start_time = time.time()\n",
        "\n",
        "for epoch in range(1000):\n",
        "    # 前向传播\n",
        "    outputs = pytorch_model(X_train_tensor)\n",
        "    loss = criterion(outputs, y_train_tensor.unsqueeze(1))\n",
        "    \n",
        "    # 反向传播和优化\n",
        "    optimizer.zero_grad()\n",
        "    loss.backward()\n",
        "    optimizer.step()\n",
        "    \n",
        "    pytorch_losses.append(loss.item())\n",
        "    \n",
        "    # 检查收敛\n",
        "    if epoch > 0 and abs(pytorch_losses[-1] - pytorch_losses[-2]) < 1e-6:\n",
        "        print(f\"在第 {epoch+1} 次迭代后收敛\")\n",
        "        break\n",
        "    \n",
        "    # 打印进度\n",
        "    if (epoch + 1) % 100 == 0:\n",
        "        print(f\"Epoch {epoch+1}/1000, Loss: {loss.item():.6f}\")\n",
        "\n",
        "pytorch_training_time = time.time() - start_time\n",
        "print(f\"PyTorch模型训练完成! 用时: {pytorch_training_time:.2f}秒\")\n",
        "print(f\"最终损失: {pytorch_losses[-1]:.6f}\")\n",
        "\n",
        "# PyTorch模型预测\n",
        "with torch.no_grad():\n",
        "    pytorch_train_pred = pytorch_model(X_train_tensor)\n",
        "    pytorch_test_pred = pytorch_model(X_test_tensor)\n",
        "    \n",
        "    # 转换为类别预测\n",
        "    pytorch_train_class = (pytorch_train_pred >= 0.5).float()\n",
        "    pytorch_test_class = (pytorch_test_pred >= 0.5).float()\n",
        "    \n",
        "    # 计算准确率\n",
        "    pytorch_train_acc = torch.mean((pytorch_train_class.squeeze() == y_train_tensor).float()).item()\n",
        "    pytorch_test_acc = torch.mean((pytorch_test_class.squeeze() == y_test_tensor).float()).item()\n",
        "\n",
        "print(f\"\\nPyTorch模型性能:\")\n",
        "print(f\"训练集准确率: {pytorch_train_acc:.4f}\")\n",
        "print(f\"测试集准确率: {pytorch_test_acc:.4f}\")\n",
        "\n",
        "# 获取PyTorch模型参数\n",
        "pytorch_weights = pytorch_model.linear.weight.data.squeeze().numpy()\n",
        "pytorch_bias = pytorch_model.linear.bias.data.item()\n",
        "\n",
        "print(f\"\\nPyTorch模型参数:\")\n",
        "print(f\"权重: {pytorch_weights}\")\n",
        "print(f\"偏置: {pytorch_bias:.4f}\")\n",
        "\n",
        "# 对比两个模型\n",
        "print(f\"\\n模型对比:\")\n",
        "print(f\"{'指标':<20} {'从零实现':<15} {'PyTorch内置':<15} {'差异':<15}\")\n",
        "print(\"-\" * 65)\n",
        "print(f\"{'训练集准确率':<20} {train_accuracy:<15.4f} {pytorch_train_acc:<15.4f} {abs(train_accuracy - pytorch_train_acc):<15.4f}\")\n",
        "print(f\"{'测试集准确率':<20} {test_accuracy:<15.4f} {pytorch_test_acc:<15.4f} {abs(test_accuracy - pytorch_test_acc):<15.4f}\")\n",
        "print(f\"{'最终损失':<20} {model.cost_history[-1]:<15.6f} {pytorch_losses[-1]:<15.6f} {abs(model.cost_history[-1] - pytorch_losses[-1]):<15.6f}\")\n",
        "print(f\"{'训练时间(秒)':<20} {model.cost_history.__len__() * 0.01:<15.2f} {pytorch_training_time:<15.2f} {abs(len(model.cost_history) * 0.01 - pytorch_training_time):<15.2f}\")\n",
        "\n",
        "# 可视化对比\n",
        "plt.figure(figsize=(15, 5))\n",
        "\n",
        "# 损失函数对比\n",
        "plt.subplot(1, 3, 1)\n",
        "plt.plot(model.cost_history, 'b-', label='从零实现', linewidth=2)\n",
        "plt.plot(pytorch_losses, 'r-', label='PyTorch内置', linewidth=2)\n",
        "plt.title('训练损失对比')\n",
        "plt.xlabel('迭代次数')\n",
        "plt.ylabel('损失值')\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 权重对比\n",
        "plt.subplot(1, 3, 2)\n",
        "x = np.arange(len(model.weights.squeeze().numpy()))\n",
        "width = 0.35\n",
        "\n",
        "plt.bar(x - width/2, model.weights.squeeze().numpy(), width, label='从零实现', alpha=0.8)\n",
        "plt.bar(x + width/2, pytorch_weights, width, label='PyTorch内置', alpha=0.8)\n",
        "\n",
        "plt.title('权重对比')\n",
        "plt.xlabel('特征')\n",
        "plt.ylabel('权重值')\n",
        "plt.xticks(x, ['特征 1', '特征 2'])\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 准确率对比\n",
        "plt.subplot(1, 3, 3)\n",
        "models = ['从零实现', 'PyTorch内置']\n",
        "train_accs = [train_accuracy, pytorch_train_acc]\n",
        "test_accs = [test_accuracy, pytorch_test_acc]\n",
        "\n",
        "x = np.arange(len(models))\n",
        "width = 0.35\n",
        "\n",
        "plt.bar(x - width/2, train_accs, width, label='训练集', alpha=0.8)\n",
        "plt.bar(x + width/2, test_accs, width, label='测试集', alpha=0.8)\n",
        "\n",
        "plt.title('准确率对比')\n",
        "plt.ylabel('准确率')\n",
        "plt.xticks(x, models)\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 添加数值标签\n",
        "for i, (train_acc, test_acc) in enumerate(zip(train_accs, test_accs)):\n",
        "    plt.text(i - width/2, train_acc + 0.01, f'{train_acc:.3f}', ha='center', va='bottom')\n",
        "    plt.text(i + width/2, test_acc + 0.01, f'{test_acc:.3f}', ha='center', va='bottom')\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n"
      ]
    },
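    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "One practical note (an aside, not part of the original comparison): applying `nn.BCELoss` after an explicit `nn.Sigmoid` can be numerically unstable for large-magnitude logits. The usual recommendation is to drop the final Sigmoid from the model and use `nn.BCEWithLogitsLoss` on the raw logits; the two formulations are mathematically equivalent. A quick sketch:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# BCE after an explicit sigmoid vs BCEWithLogitsLoss on raw logits\n",
        "logits = torch.tensor([[-1.2], [0.4], [2.0]])\n",
        "targets = torch.tensor([[0.0], [1.0], [1.0]])\n",
        "\n",
        "loss_bce = nn.BCELoss()(torch.sigmoid(logits), targets)\n",
        "loss_logits = nn.BCEWithLogitsLoss()(logits, targets)  # numerically stabler\n",
        "print(loss_bce.item(), loss_logits.item())  # the two values should match\n"
      ]
    },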
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 6. 多分类逻辑回归\n",
        "\n",
        "现在让我们实现多分类逻辑回归（Softmax回归）。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 生成多分类数据\n",
        "X_multi, y_multi = make_blobs(\n",
        "    n_samples=1000, \n",
        "    centers=3, \n",
        "    n_features=2, \n",
        "    random_state=42\n",
        ")\n",
        "\n",
        "# 数据分割\n",
        "X_multi_train, X_multi_test, y_multi_train, y_multi_test = train_test_split(\n",
        "    X_multi, y_multi, test_size=0.2, random_state=42\n",
        ")\n",
        "\n",
        "# 数据标准化\n",
        "scaler_multi = StandardScaler()\n",
        "X_multi_train_scaled = scaler_multi.fit_transform(X_multi_train)\n",
        "X_multi_test_scaled = scaler_multi.transform(X_multi_test)\n",
        "\n",
        "# 转换为PyTorch张量\n",
        "X_multi_train_tensor = torch.FloatTensor(X_multi_train_scaled)\n",
        "X_multi_test_tensor = torch.FloatTensor(X_multi_test_scaled)\n",
        "y_multi_train_tensor = torch.LongTensor(y_multi_train)\n",
        "y_multi_test_tensor = torch.LongTensor(y_multi_test)\n",
        "\n",
        "print(f\"多分类数据:\")\n",
        "print(f\"训练集大小: {X_multi_train_tensor.shape}\")\n",
        "print(f\"测试集大小: {X_multi_test_tensor.shape}\")\n",
        "print(f\"类别数量: {len(np.unique(y_multi))}\")\n",
        "print(f\"类别分布 - 训练集: {np.bincount(y_multi_train)}\")\n",
        "print(f\"类别分布 - 测试集: {np.bincount(y_multi_test)}\")\n",
        "\n",
        "# 可视化多分类数据\n",
        "plt.figure(figsize=(12, 4))\n",
        "\n",
        "plt.subplot(1, 2, 1)\n",
        "colors = ['red', 'blue', 'green']\n",
        "for i in range(3):\n",
        "    mask = y_multi_train == i\n",
        "    plt.scatter(X_multi_train[mask, 0], X_multi_train[mask, 1], \n",
        "               c=colors[i], label=f'类别 {i}', alpha=0.7)\n",
        "plt.title('原始多分类数据')\n",
        "plt.xlabel('特征 1')\n",
        "plt.ylabel('特征 2')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "plt.subplot(1, 2, 2)\n",
        "for i in range(3):\n",
        "    mask = y_multi_train == i\n",
        "    plt.scatter(X_multi_train_scaled[mask, 0], X_multi_train_scaled[mask, 1], \n",
        "               c=colors[i], label=f'类别 {i}', alpha=0.7)\n",
        "plt.title('标准化后多分类数据')\n",
        "plt.xlabel('标准化特征 1')\n",
        "plt.ylabel('标准化特征 2')\n",
        "plt.legend()\n",
        "plt.grid(True)\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# 多分类逻辑回归类\n",
        "class MultiClassLogisticRegression:\n",
        "    \"\"\"多分类逻辑回归（Softmax回归）\"\"\"\n",
        "    \n",
        "    def __init__(self, learning_rate=0.01, max_iterations=1000, tolerance=1e-6):\n",
        "        self.learning_rate = learning_rate\n",
        "        self.max_iterations = max_iterations\n",
        "        self.tolerance = tolerance\n",
        "        self.weights = None\n",
        "        self.bias = None\n",
        "        self.cost_history = []\n",
        "        self.n_classes = None\n",
        "        \n",
        "    def _softmax(self, z):\n",
        "        \"\"\"Softmax函数\"\"\"\n",
        "        # 防止数值溢出\n",
        "        z_max = torch.max(z, dim=1, keepdim=True)[0]\n",
        "        z_shifted = z - z_max\n",
        "        exp_z = torch.exp(z_shifted)\n",
        "        return exp_z / torch.sum(exp_z, dim=1, keepdim=True)\n",
        "    \n",
        "    def _compute_cost(self, X, y):\n",
        "        \"\"\"计算交叉熵损失\"\"\"\n",
        "        m = X.shape[0]\n",
        "        z = torch.matmul(X, self.weights) + self.bias\n",
        "        h = self._softmax(z)\n",
        "        \n",
        "        # 防止log(0)\n",
        "        epsilon = 1e-15\n",
        "        h = torch.clamp(h, epsilon, 1 - epsilon)\n",
        "        \n",
        "        # 创建one-hot编码\n",
        "        y_one_hot = torch.zeros(m, self.n_classes)\n",
        "        y_one_hot.scatter_(1, y.unsqueeze(1), 1)\n",
        "        \n",
        "        # 计算交叉熵损失\n",
        "        cost = -torch.mean(torch.sum(y_one_hot * torch.log(h), dim=1))\n",
        "        return cost\n",
        "    \n",
        "    def _compute_gradients(self, X, y):\n",
        "        \"\"\"计算梯度\"\"\"\n",
        "        m = X.shape[0]\n",
        "        z = torch.matmul(X, self.weights) + self.bias\n",
        "        h = self._softmax(z)\n",
        "        \n",
        "        # 创建one-hot编码\n",
        "        y_one_hot = torch.zeros(m, self.n_classes)\n",
        "        y_one_hot.scatter_(1, y.unsqueeze(1), 1)\n",
        "        \n",
        "        # 计算梯度\n",
        "        dw = torch.matmul(X.T, (h - y_one_hot)) / m\n",
        "        db = torch.mean(h - y_one_hot, dim=0, keepdim=True)\n",
        "        \n",
        "        return dw, db\n",
        "    \n",
        "    def fit(self, X, y):\n",
        "        \"\"\"训练模型\"\"\"\n",
        "        # 确定类别数量\n",
        "        self.n_classes = len(torch.unique(y))\n",
        "        n_features = X.shape[1]\n",
        "        \n",
        "        # 初始化参数\n",
        "        self.weights = torch.zeros(n_features, self.n_classes, requires_grad=False)\n",
        "        self.bias = torch.zeros(1, self.n_classes, requires_grad=False)\n",
        "        \n",
        "        print(f\"开始训练多分类逻辑回归模型...\")\n",
        "        print(f\"学习率: {self.learning_rate}\")\n",
        "        print(f\"最大迭代次数: {self.max_iterations}\")\n",
        "        print(f\"特征数量: {n_features}\")\n",
        "        print(f\"类别数量: {self.n_classes}\")\n",
        "        print(f\"样本数量: {X.shape[0]}\")\n",
        "        print(\"-\" * 50)\n",
        "        \n",
        "        start_time = time.time()\n",
        "        \n",
        "        for i in range(self.max_iterations):\n",
        "            # 前向传播\n",
        "            z = torch.matmul(X, self.weights) + self.bias\n",
        "            h = self._softmax(z)\n",
        "            \n",
        "            # 计算损失\n",
        "            cost = self._compute_cost(X, y)\n",
        "            self.cost_history.append(cost.item())\n",
        "            \n",
        "            # 计算梯度\n",
        "            dw, db = self._compute_gradients(X, y)\n",
        "            \n",
        "            # 更新参数\n",
        "            self.weights -= self.learning_rate * dw\n",
        "            self.bias -= self.learning_rate * db\n",
        "            \n",
        "            # 检查收敛\n",
        "            if i > 0 and abs(self.cost_history[-1] - self.cost_history[-2]) < self.tolerance:\n",
        "                print(f\"在第 {i+1} 次迭代后收敛\")\n",
        "                break\n",
        "            \n",
        "            # 打印进度\n",
        "            if (i + 1) % 100 == 0:\n",
        "                print(f\"迭代 {i+1}/{self.max_iterations}, 损失: {cost.item():.6f}\")\n",
        "        \n",
        "        training_time = time.time() - start_time\n",
        "        print(f\"训练完成! 用时: {training_time:.2f}秒\")\n",
        "        print(f\"最终损失: {self.cost_history[-1]:.6f}\")\n",
        "        print(f\"总迭代次数: {len(self.cost_history)}\")\n",
        "    \n",
        "    def predict_proba(self, X):\n",
        "        \"\"\"预测概率\"\"\"\n",
        "        z = torch.matmul(X, self.weights) + self.bias\n",
        "        return self._softmax(z)\n",
        "    \n",
        "    def predict(self, X):\n",
        "        \"\"\"预测类别\"\"\"\n",
        "        probabilities = self.predict_proba(X)\n",
        "        return torch.argmax(probabilities, dim=1)\n",
        "    \n",
        "    def score(self, X, y):\n",
        "        \"\"\"计算准确率\"\"\"\n",
        "        predictions = self.predict(X)\n",
        "        return torch.mean((predictions == y).float()).item()\n",
        "\n",
        "# 创建并训练多分类模型\n",
        "multi_model = MultiClassLogisticRegression(learning_rate=0.1, max_iterations=1000)\n",
        "multi_model.fit(X_multi_train_tensor, y_multi_train_tensor)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 多分类模型评估\n",
        "multi_train_accuracy = multi_model.score(X_multi_train_tensor, y_multi_train_tensor)\n",
        "multi_test_accuracy = multi_model.score(X_multi_test_tensor, y_multi_test_tensor)\n",
        "\n",
        "print(f\"多分类模型性能:\")\n",
        "print(f\"训练集准确率: {multi_train_accuracy:.4f}\")\n",
        "print(f\"测试集准确率: {multi_test_accuracy:.4f}\")\n",
        "\n",
        "# 预测\n",
        "multi_train_pred = multi_model.predict(X_multi_train_tensor)\n",
        "multi_test_pred = multi_model.predict(X_multi_test_tensor)\n",
        "\n",
        "# 可视化多分类结果\n",
        "plt.figure(figsize=(15, 10))\n",
        "\n",
        "# 损失函数变化\n",
        "plt.subplot(2, 3, 1)\n",
        "plt.plot(multi_model.cost_history, 'b-', linewidth=2)\n",
        "plt.title('多分类训练损失变化')\n",
        "plt.xlabel('迭代次数')\n",
        "plt.ylabel('交叉熵损失')\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 决策边界可视化\n",
        "plt.subplot(2, 3, 2)\n",
        "h = 0.01\n",
        "x_min, x_max = X_multi_train_scaled[:, 0].min() - 1, X_multi_train_scaled[:, 0].max() + 1\n",
        "y_min, y_max = X_multi_train_scaled[:, 1].min() - 1, X_multi_train_scaled[:, 1].max() + 1\n",
        "xx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n",
        "                     np.arange(y_min, y_max, h))\n",
        "\n",
        "# 创建网格点\n",
        "grid_points = torch.FloatTensor(np.c_[xx.ravel(), yy.ravel()])\n",
        "Z = multi_model.predict(grid_points)\n",
        "Z = Z.reshape(xx.shape)\n",
        "\n",
        "# 绘制决策边界\n",
        "plt.contourf(xx, yy, Z.numpy(), levels=[-0.5, 0.5, 1.5, 2.5], alpha=0.8, cmap='viridis')\n",
        "plt.colorbar(label='预测类别', ticks=[0, 1, 2])\n",
        "\n",
        "# 绘制数据点\n",
        "colors = ['red', 'blue', 'green']\n",
        "for i in range(3):\n",
        "    mask = y_multi_train == i\n",
        "    plt.scatter(X_multi_train_scaled[mask, 0], X_multi_train_scaled[mask, 1], \n",
        "               c=colors[i], label=f'类别 {i}', alpha=0.7, edgecolors='black')\n",
        "\n",
        "plt.title('多分类决策边界')\n",
        "plt.xlabel('标准化特征 1')\n",
        "plt.ylabel('标准化特征 2')\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 预测概率分布\n",
        "plt.subplot(2, 3, 3)\n",
        "multi_train_proba = multi_model.predict_proba(X_multi_train_tensor)\n",
        "for i in range(3):\n",
        "    mask = y_multi_train == i\n",
        "    plt.hist(multi_train_proba[mask, i].numpy(), bins=20, alpha=0.7, \n",
        "             label=f'类别 {i}', color=colors[i])\n",
        "plt.title('各类别预测概率分布')\n",
        "plt.xlabel('预测概率')\n",
        "plt.ylabel('频次')\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 混淆矩阵 - 训练集\n",
        "plt.subplot(2, 3, 4)\n",
        "cm_multi_train = confusion_matrix(y_multi_train, multi_train_pred.numpy())\n",
        "sns.heatmap(cm_multi_train, annot=True, fmt='d', cmap='Blues',\n",
        "            xticklabels=[f'预测 {i}' for i in range(3)],\n",
        "            yticklabels=[f'真实 {i}' for i in range(3)])\n",
        "plt.title(f'训练集混淆矩阵\\n准确率: {multi_train_accuracy:.4f}')\n",
        "\n",
        "# 混淆矩阵 - 测试集\n",
        "plt.subplot(2, 3, 5)\n",
        "cm_multi_test = confusion_matrix(y_multi_test, multi_test_pred.numpy())\n",
        "sns.heatmap(cm_multi_test, annot=True, fmt='d', cmap='Greens',\n",
        "            xticklabels=[f'预测 {i}' for i in range(3)],\n",
        "            yticklabels=[f'真实 {i}' for i in range(3)])\n",
        "plt.title(f'测试集混淆矩阵\\n准确率: {multi_test_accuracy:.4f}')\n",
        "\n",
        "# 权重可视化\n",
        "plt.subplot(2, 3, 6)\n",
        "weights_multi = multi_model.weights.numpy()\n",
        "feature_names = ['特征 1', '特征 2']\n",
        "x = np.arange(len(feature_names))\n",
        "width = 0.25\n",
        "\n",
        "for i in range(3):\n",
        "    plt.bar(x + i*width, weights_multi[:, i], width, \n",
        "            label=f'类别 {i}', alpha=0.8, color=colors[i])\n",
        "\n",
        "plt.title('各类别特征权重')\n",
        "plt.xlabel('特征')\n",
        "plt.ylabel('权重值')\n",
        "plt.xticks(x + width, feature_names)\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# 详细分类报告\n",
        "print(\"\\n多分类训练集分类报告:\")\n",
        "print(classification_report(y_multi_train, multi_train_pred.numpy(), \n",
        "                          target_names=[f'类别 {i}' for i in range(3)]))\n",
        "\n",
        "print(\"\\n多分类测试集分类报告:\")\n",
        "print(classification_report(y_multi_test, multi_test_pred.numpy(), \n",
        "                          target_names=[f'类别 {i}' for i in range(3)]))\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 7. 练习：不同学习率的影响\n",
        "\n",
        "让我们通过实验来理解学习率对逻辑回归训练的影响。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 不同学习率实验\n",
        "learning_rates = [0.001, 0.01, 0.1, 0.5, 1.0]\n",
        "models_lr = {}\n",
        "results_lr = {}\n",
        "\n",
        "print(\"测试不同学习率对逻辑回归的影响...\")\n",
        "print(\"=\" * 60)\n",
        "\n",
        "for lr in learning_rates:\n",
        "    print(f\"\\n学习率: {lr}\")\n",
        "    print(\"-\" * 30)\n",
        "    \n",
        "    # 创建模型\n",
        "    model_lr = LogisticRegressionFromScratch(\n",
        "        learning_rate=lr, \n",
        "        max_iterations=1000, \n",
        "        tolerance=1e-6\n",
        "    )\n",
        "    \n",
        "    # 训练模型\n",
        "    start_time = time.time()\n",
        "    model_lr.fit(X_train_tensor, y_train_tensor.unsqueeze(1))\n",
        "    training_time = time.time() - start_time\n",
        "    \n",
        "    # 评估模型\n",
        "    train_acc = model_lr.score(X_train_tensor, y_train_tensor)\n",
        "    test_acc = model_lr.score(X_test_tensor, y_test_tensor)\n",
        "    final_loss = model_lr.cost_history[-1]\n",
        "    iterations = len(model_lr.cost_history)\n",
        "    \n",
        "    # 保存结果\n",
        "    models_lr[lr] = model_lr\n",
        "    results_lr[lr] = {\n",
        "        'train_accuracy': train_acc,\n",
        "        'test_accuracy': test_acc,\n",
        "        'final_loss': final_loss,\n",
        "        'iterations': iterations,\n",
        "        'training_time': training_time\n",
        "    }\n",
        "    \n",
        "    print(f\"训练集准确率: {train_acc:.4f}\")\n",
        "    print(f\"测试集准确率: {test_acc:.4f}\")\n",
        "    print(f\"最终损失: {final_loss:.6f}\")\n",
        "    print(f\"迭代次数: {iterations}\")\n",
        "    print(f\"训练时间: {training_time:.2f}秒\")\n",
        "\n",
        "# 可视化不同学习率的结果\n",
        "plt.figure(figsize=(15, 10))\n",
        "\n",
        "# 损失函数变化对比\n",
        "plt.subplot(2, 3, 1)\n",
        "for lr in learning_rates:\n",
        "    plt.plot(models_lr[lr].cost_history, label=f'LR={lr}', linewidth=2)\n",
        "plt.title('不同学习率的损失变化')\n",
        "plt.xlabel('迭代次数')\n",
        "plt.ylabel('损失值')\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "plt.yscale('log')  # 使用对数坐标更好地显示差异\n",
        "\n",
        "# 准确率对比\n",
        "plt.subplot(2, 3, 2)\n",
        "lrs = list(results_lr.keys())\n",
        "train_accs = [results_lr[lr]['train_accuracy'] for lr in lrs]\n",
        "test_accs = [results_lr[lr]['test_accuracy'] for lr in lrs]\n",
        "\n",
        "x = np.arange(len(lrs))\n",
        "width = 0.35\n",
        "\n",
        "plt.bar(x - width/2, train_accs, width, label='训练集', alpha=0.8)\n",
        "plt.bar(x + width/2, test_accs, width, label='测试集', alpha=0.8)\n",
        "\n",
        "plt.title('不同学习率的准确率')\n",
        "plt.xlabel('学习率')\n",
        "plt.ylabel('准确率')\n",
        "plt.xticks(x, lrs)\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 添加数值标签\n",
        "for i, (train_acc, test_acc) in enumerate(zip(train_accs, test_accs)):\n",
        "    plt.text(i - width/2, train_acc + 0.01, f'{train_acc:.3f}', ha='center', va='bottom')\n",
        "    plt.text(i + width/2, test_acc + 0.01, f'{test_acc:.3f}', ha='center', va='bottom')\n",
        "\n",
        "# 最终损失对比\n",
        "plt.subplot(2, 3, 3)\n",
        "final_losses = [results_lr[lr]['final_loss'] for lr in lrs]\n",
        "plt.bar([str(lr) for lr in lrs], final_losses, alpha=0.8, color='orange')\n",
        "plt.title('不同学习率的最终损失')\n",
        "plt.xlabel('学习率')\n",
        "plt.ylabel('最终损失')\n",
        "plt.yscale('log')\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 添加数值标签\n",
        "for i, loss in enumerate(final_losses):\n",
        "    plt.text(i, loss * 1.1, f'{loss:.4f}', ha='center', va='bottom')\n",
        "\n",
        "# 迭代次数对比\n",
        "plt.subplot(2, 3, 4)\n",
        "iterations = [results_lr[lr]['iterations'] for lr in lrs]\n",
        "plt.bar([str(lr) for lr in lrs], iterations, alpha=0.8, color='green')\n",
        "plt.title('不同学习率的收敛迭代次数')\n",
        "plt.xlabel('学习率')\n",
        "plt.ylabel('迭代次数')\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 添加数值标签\n",
        "for i, iters in enumerate(iterations):\n",
        "    plt.text(i, iters + 10, f'{iters}', ha='center', va='bottom')\n",
        "\n",
        "# 训练时间对比\n",
        "plt.subplot(2, 3, 5)\n",
        "training_times = [results_lr[lr]['training_time'] for lr in lrs]\n",
        "plt.bar([str(lr) for lr in lrs], training_times, alpha=0.8, color='purple')\n",
        "plt.title('不同学习率的训练时间')\n",
        "plt.xlabel('学习率')\n",
        "plt.ylabel('训练时间 (秒)')\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 添加数值标签\n",
        "for i, time_val in enumerate(training_times):\n",
        "    plt.text(i, time_val + 0.01, f'{time_val:.2f}s', ha='center', va='bottom')\n",
        "\n",
        "# 学习率效果总结\n",
        "plt.subplot(2, 3, 6)\n",
        "# 创建一个综合评分（准确率 - 损失 - 时间惩罚）\n",
        "scores = []\n",
        "for lr in lrs:\n",
        "    # 综合评分：准确率权重0.4，损失权重0.3，时间权重0.3\n",
        "    score = (results_lr[lr]['test_accuracy'] * 0.4 + \n",
        "             (1 - results_lr[lr]['final_loss']) * 0.3 + \n",
        "             (1 - results_lr[lr]['training_time'] / max(training_times)) * 0.3)\n",
        "    scores.append(score)\n",
        "\n",
        "plt.bar([str(lr) for lr in lrs], scores, alpha=0.8, color='red')\n",
        "plt.title('学习率综合评分\\n(准确率+低损失+快速训练)')\n",
        "plt.xlabel('学习率')\n",
        "plt.ylabel('综合评分')\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# 添加数值标签\n",
        "for i, score in enumerate(scores):\n",
        "    plt.text(i, score + 0.01, f'{score:.3f}', ha='center', va='bottom')\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# 结果总结表\n",
        "print(\"\\n\" + \"=\" * 80)\n",
        "print(\"学习率实验结果总结\")\n",
        "print(\"=\" * 80)\n",
        "print(f\"{'学习率':<8} {'训练准确率':<12} {'测试准确率':<12} {'最终损失':<12} {'迭代次数':<10} {'训练时间':<10}\")\n",
        "print(\"-\" * 80)\n",
        "\n",
        "for lr in learning_rates:\n",
        "    result = results_lr[lr]\n",
        "    print(f\"{lr:<8} {result['train_accuracy']:<12.4f} {result['test_accuracy']:<12.4f} \"\n",
        "          f\"{result['final_loss']:<12.6f} {result['iterations']:<10} {result['training_time']:<10.2f}\")\n",
        "\n",
        "# 找出最佳学习率\n",
        "best_lr = max(learning_rates, key=lambda lr: results_lr[lr]['test_accuracy'])\n",
        "print(f\"\\n最佳学习率: {best_lr} (测试准确率: {results_lr[best_lr]['test_accuracy']:.4f})\")\n",
        "\n",
        "print(\"\\n学习率分析:\")\n",
        "print(\"1. 学习率太小 (0.001): 收敛慢，需要更多迭代\")\n",
        "print(\"2. 学习率适中 (0.01-0.1): 通常能获得最佳性能\")\n",
        "print(\"3. 学习率太大 (0.5-1.0): 可能震荡或不收敛\")\n",
        "print(\"4. 选择学习率需要在收敛速度和稳定性之间平衡\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 8. 总结\n",
        "\n",
        "恭喜！你已经完成了逻辑回归从零实现的完整教程。让我们总结一下学到的重要概念：\n",
        "\n",
        "### 关键概念回顾\n",
        "\n",
        "1. **Sigmoid函数**: 将线性组合映射到0-1之间的概率\n",
        "2. **交叉熵损失**: 用于分类问题的损失函数\n",
        "3. **梯度下降**: 通过计算梯度来更新模型参数\n",
        "4. **二分类vs多分类**: 从Sigmoid到Softmax的扩展\n",
        "5. **学习率**: 控制参数更新步长的超参数\n",
        "\n",
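        "二分类与多分类之间的联系可以用一个小例子验证：当类别数为2时，Softmax退化为Sigmoid。下面是一个示意性的小验证（仅用于说明，不依赖前文的类实现）：\n",
        "\n",
        "```python\n",
        "import torch\n",
        "\n",
        "z = 1.5\n",
        "sig = torch.sigmoid(torch.tensor(z))\n",
        "# softmax([0, z]) 的第二个分量 = e^z / (1 + e^z) = sigmoid(z)\n",
        "soft = torch.softmax(torch.tensor([0.0, z]), dim=0)\n",
        "print(sig.item(), soft[1].item())  # 两者应相等\n",
        "```\n",
        "\n",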
        "### 实现要点\n",
        "\n",
        "- **数值稳定性**: 使用clamp防止数值溢出\n",
        "- **收敛检测**: 监控损失变化来判断训练是否完成\n",
        "- **可视化**: 决策边界、损失曲线、混淆矩阵等\n",
        "- **性能对比**: 从零实现vs PyTorch内置模块\n",
        "\n",
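        "关于数值稳定性，下面是一个示意性的小例子（`stable_sigmoid` 是为演示而取的假设名称，并非前文实现中的函数）：\n",
        "\n",
        "```python\n",
        "import torch\n",
        "\n",
        "def stable_sigmoid(z: torch.Tensor) -> torch.Tensor:\n",
        "    # 先clamp输入，避免exp(-z)在极端输入下溢出\n",
        "    z = torch.clamp(z, min=-30.0, max=30.0)\n",
        "    return 1.0 / (1.0 + torch.exp(-z))\n",
        "\n",
        "out = stable_sigmoid(torch.tensor([-1000.0, 0.0, 1000.0]))\n",
        "print(out)  # 极端输入也只得到接近0、0.5、接近1的有限值\n",
        "```\n",
        "\n",
        "同理，在计算交叉熵之前对概率做 `torch.clamp(h, 1e-7, 1 - 1e-7)`，可以避免 `log(0)` 产生无穷大。\n",
        "\n",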
        "### 下一步学习建议\n",
        "\n",
        "1. **正则化**: 添加L1/L2正则化防止过拟合\n",
        "2. **特征工程**: 多项式特征、特征选择\n",
        "3. **高级优化器**: Adam、RMSprop等\n",
        "4. **实际数据集**: 在真实数据上应用逻辑回归\n",
        "\n",
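        "例如，L2正则化只需在损失中加上 $\\frac{\\lambda}{2m} \\|w\\|^2$，对应地在梯度中额外加入 $\\frac{\\lambda}{m} w$ 一项。下面是一个假设性的最小草图（变量名如 `lambda_reg` 均为演示而设，并非前文实现的一部分）：\n",
        "\n",
        "```python\n",
        "import torch\n",
        "\n",
        "torch.manual_seed(0)\n",
        "X = torch.randn(100, 2)\n",
        "y = (X[:, 0:1] > 0).float()  # 由第一个特征决定的可分数据\n",
        "w = torch.zeros(2, 1)\n",
        "b = torch.zeros(1)\n",
        "lr, lambda_reg, m = 0.1, 0.01, X.shape[0]\n",
        "\n",
        "for _ in range(200):\n",
        "    h = torch.sigmoid(X @ w + b)\n",
        "    dw = X.T @ (h - y) / m + (lambda_reg / m) * w  # 交叉熵梯度 + L2项\n",
        "    db = (h - y).mean()\n",
        "    w -= lr * dw\n",
        "    b -= lr * db\n",
        "\n",
        "acc = ((h > 0.5).float() == y).float().mean().item()\n",
        "print(f\"训练集准确率: {acc:.3f}\")\n",
        "```\n",
        "\n",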
        "现在你已经掌握了逻辑回归的核心原理和实现方法！🎉\n"
      ]
    }
  ],
  "metadata": {
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
