{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 第一部分",
   "id": "c216d8cee0365df1"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 简答题",
   "id": "c63b52b914e7bb74"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "1. 为什么激活函数是训练一个多层感知机（MLP）的关键要素\n",
    "引入非线性：如果没有激活函数，无论神经网络有多少层，其最终的输出都只是输入特征的线性组合（因为矩阵乘法和向量加法都是线性运算）。这样的模型无法学习复杂的非线性关系，其能力等价于一个线性回归模型，极大地限制了模型的应用。激活函数通过其非线性变换，使得多层网络能够逼近任何复杂的非线性函数，成为真正的“万能近似器”。\n",
    "\n",
    "决定神经元的激活状态：激活函数根据输入信号的综合强度（加权和加偏置）来决定一个神经元是否应该被“激活”，即输出一个非零的值。这模拟了生物神经元中“全有或全无”的放电法则。\n",
    "\n",
    "支持梯度下降学习：反向传播算法依赖于计算梯度来更新权重。激活函数的导数（或其近似）是梯度计算中的关键一环，它决定了误差信号如何从后一层传递到前一层，从而指导权重的更新。\n",
    "2. 列举三种常用的激活函数，说明一下它们的大概形状\n",
    "Sigmoid (Logistic函数)\n",
    "公式：σ(x) = 1 / (1 + e^(-x))\n",
    "形状：将任何实数“挤压”到 (0, 1) 区间内。其图形是一条光滑的“S”形曲线。当输入值非常大或非常小时，曲线会变得非常平缓。\n",
    "问题：容易导致梯度消失（在饱和区梯度接近0）；输出不是零中心的。\n",
    "Hyperbolic Tangent (Tanh)\n",
    "公式：tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))\n",
    "形状：同样是一条“S”形曲线，但它将输入值压缩到 (-1, 1) 区间。其形状与Sigmoid相似，但关于原点对称。\n",
    "优点：因为是零中心化的，通常比Sigmoid表现更好。\n",
    "问题：仍然存在梯度消失问题。\n",
    "Rectified Linear Unit (ReLU)\n",
    "公式：ReLU(x) = max(0, x)\n",
    "形状：对于所有负的输入，输出为0；对于所有正的输入，输出等于输入。其图形是一条在原点处拐弯的折线。\n",
    "优点：计算极其简单、高效；在实践中能极大地加速收敛；缓解了梯度消失问题（在正区间梯度恒为1）。\n",
    "问题：存在“Dying ReLU”问题（输入为负时，梯度永远为0，神经元可能再也无法被激活）。\n",
    "3. 反向传播的算法解决什么问题，如何工作的\n",
    "解决的问题：反向传播算法解决了在多层神经网络中高效计算损失函数关于所有权重和偏置参数的梯度的问题。没有它，我们将无法使用梯度下降等优化算法来训练复杂的神经网络。\n",
    "如何工作（简要步骤）：\n",
    "前向传播：输入数据从输入层开始，逐层计算每个神经元的加权和并应用激活函数，直到得到输出层的预测结果。\n",
    "计算损失：使用损失函数（如均方误差、交叉熵）计算网络预测输出与真实标签之间的误差。\n",
    "反向传播误差：这是核心步骤。算法利用链式法则从输出层开始，反向逐层计算损失函数对每一层权重和偏置的梯度。\n",
    "先计算损失对输出层参数的梯度。\n",
    "然后将这个误差信号反向传递到前一隐藏层，并计算该隐藏层参数的梯度。\n",
    "重复此过程，直到传播到输入层。\n",
    "更新参数：使用优化器（如SGD、Adam），根据计算出的梯度对网络中的所有权重和偏置进行更新，以减小损失。\n",
    "4. 列出可以在基本MLP（不考虑其他神经网络架构）中进行调整的所有超参数？如果MLP过拟合训练数据，如何调整这些超参数来解决该问题？\n",
    "可调整的超参数：\n",
    "\n",
    "隐藏层的数量\n",
    "\n",
    "每个隐藏层中神经元的数量\n",
    "\n",
    "激活函数的选择\n",
    "\n",
    "学习率\n",
    "\n",
    "批大小 (Batch Size)\n",
    "\n",
    "优化器的选择（如SGD, Adam）及其参数（如动量）\n",
    "\n",
    "迭代次数 (Epochs)\n",
    "\n",
    "正则化参数（如L1、L2正则化的系数）\n",
    "\n",
    "丢弃法 (Dropout) 的比率\n",
    "\n",
    "解决过拟合的调整策略：\n",
    "如果MLP过拟合训练数据（表现为训练误差低，验证误差高），可以采用以下策略：\n",
    "\n",
    "增加正则化：增大L2正则化的系数（λ），或提高Dropout的比率。这是最直接有效的方法。\n",
    "\n",
    "减小模型容量：减少隐藏层的数量或减少每层神经元的数量。一个更小的网络更不容易过拟合。\n",
    "\n",
    "获取更多数据：这是解决过拟合最根本的方法，但通常不受控制。\n",
    "\n",
    "早停 (Early Stopping)：在训练过程中持续监控验证集误差，当验证误差不再下降反而开始上升时，停止训练。\n",
    "\n",
    "降低模型训练程度：减少训练迭代次数 (Epochs)。\n",
    "\n",
    "调整学习率：有时学习率太大可能导致优化“跳过”最优点并在其周围震荡，适当降低学习率可能有助于找到更平坦（泛化能力更好）的最优点。\n",
    "\n",
    "\n",
    "5. 假设有一个MLP，该MLP由一个输入层，10个特征，随后是一个包含50个神经元的隐藏层，最后是3个神经元组成的输出层。所有人工神经元都使用ReLU激活函数。\n",
    "\n",
    "   a. 输入矩阵X的形状是什么\n",
    " 行数batch_size代表一次输入的网络样本数量，列数10对应输入特征数。\n",
    "   b. 隐藏层的权重W_hidden及其偏置b_hidden的形状分别是什么\n",
    " W_hidden：(10, 50)\n",
    "* 它将一个10维的输入向量映射到50维的隐藏空间。权重矩阵的列数对应本层神经元数量，行数对应前一层的输出/输入维度。\n",
    "* b_hidden：(50,) 或 (1, 50)\n",
    "* 偏置向量与隐藏层神经元一一对应。\n",
    "   c. 输出层的权重W_output及其偏置b_output的形状是什么\n",
    "* W_output：(50, 3)\n",
    "* 它将50维的隐藏层输出映射到3维的输出空间。\n",
    "* b_output：(3,) 或 (1, 3)\n",
    "* 偏置向量与输出层神经元一一对应。\n",
    "   d. 网络输出矩阵Y的形状是什么\n",
    "* 行数batch_size与输入样本数对应，列数3与输出神经元数对应。\n",
    "   e. 写出输出矩阵Y的计算公式，满足Y是W_hidden, b_hidden, W_output, b_output的函数\n",
    "Y = ReLU( ReLU(X • W_hidden + b_hidden) • W_output + b_output )\n",
    "* 其中 • 表示矩阵乘法。\n",
    "* 首先计算隐藏层输出：H = ReLU(X • W_hidden + b_hidden)\n",
    "* 然后计算最终输出：Y = ReLU(H • W_output + b_output)\n",
    "6. 如果要将电子邮件分类为垃圾邮件或正常邮件，需要在输出层中有多少个神经元？应该在输出层中使用什么激活函数？相反，如果想解决MNIST图片分类问题，则在输出层中需要有多少个神经元，应该使用哪种激活函数？如何使神经网络预测 回归话题里提到的房价？\n",
    "电子邮件分类（二分类问题）：\n",
    "神经元数量：1个。可以输出一个标量值，表示“是垃圾邮件”的概率。\n",
    "输出层激活函数：Sigmoid。因为它能将输出压缩到(0,1)区间，完美地解释为概率。\n",
    "MNIST图片分类（十分类问题）：\n",
    "神经元数量：10个。每个神经元对应一个数字类别（0到9）。\n",
    "输出层激活函数：Softmax。它可以将10个神经元的输出压缩成一个概率分布，所有输出值之和为1。每个值代表输入图片属于对应类别的概率。\n",
    "预测房价（回归问题）：\n",
    "神经元数量：1个。输出一个连续的标量值，即预测的房价。\n",
    "输出层激活函数：无激活函数（线性激活） 或 ReLU（如果确定房价不会为负）。\n",
    "通常使用线性激活（即 f(x) = x），因为我们需要网络能够输出在负无穷到正无穷范围内的任意值。使用非线性激活函数会不必要地限制输出范围。"
   ],
   "id": "efbfab39dff1fdac"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 编程题",
   "id": "3a345ebdc81ebc04"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "在MNIST数据集上训练深度MLP（可以使用tf.keras.datasets.mnist.load_data()加载它）。看看是否可以通过手动调整超参数获得98%以上的精度。\n",
    "\n",
    "首先尝试使用课堂上介绍的方法搜索最佳学习率（即通过以指数方式增加学习率，根据学习率变化绘制训练损失，并找到损失激增的点）。\n",
    "\n",
    "接下来，尝试使用Keras Tuner调整超参数——保存检查点，使用早停，并使用TensorBoard绘制在MNIST数据集上训练深度MLP（可以使用tf.keras.datasets.mnist.load_data()加载它）。看看是否可以通过手动调整超参数获得98%以上的精度。\n",
    "\n",
    "首先尝试使用课堂上介绍的方法搜索最佳学习率（即通过以指数方式增加学习率，根据学习率变化绘制训练损失，并找到损失激增的点）。\n",
    "\n",
    "接下来，尝试使用Keras Tuner调整超参数——保存检查点，使用早停，并使用TensorBoard绘制"
   ],
   "id": "8ae50ffff27ba189"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-01T09:48:34.134270Z",
     "start_time": "2025-09-01T09:48:32.671981Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import tensorflow as tf\n",
    "from tensorflow import keras\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.model_selection import train_test_split\n",
    "import kerastuner as kt\n",
    "import os\n",
    "import datetime\n",
    "\n",
    "# 设置随机种子以确保可重复性\n",
    "tf.random.set_seed(42)\n",
    "np.random.seed(42)\n",
    "\n",
    "# 加载MNIST数据集\n",
    "(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.mnist.load_data()\n",
    "\n",
    "# 数据预处理\n",
    "X_train_full = X_train_full.astype('float32') / 255.0\n",
    "X_test = X_test.astype('float32') / 255.0\n",
    "\n",
    "# 将图像展平为向量\n",
    "X_train_full = X_train_full.reshape(-1, 784)\n",
    "X_test = X_test.reshape(-1, 784)\n",
    "\n",
    "# 划分训练集和验证集\n",
    "X_train, X_val, y_train, y_val = train_test_split(\n",
    "    X_train_full, y_train_full, test_size=0.1, random_state=42\n",
    ")\n",
    "\n",
    "print(f\"训练集: {X_train.shape}\")\n",
    "print(f\"验证集: {X_val.shape}\")\n",
    "print(f\"测试集: {X_test.shape}\")\n",
    "\n",
    "# 方法1：使用现有的LRFinder库（推荐）\n",
    "# 首先安装：pip install keras-lr-finder\n",
    "\n",
    "try:\n",
    "    from keras_lr_finder import LRFinder\n",
    "\n",
    "    # 创建基础模型\n",
    "    def create_base_model():\n",
    "        model = keras.Sequential([\n",
    "            keras.layers.Dense(512, activation='relu', input_shape=(784,)),\n",
    "            keras.layers.Dropout(0.2),\n",
    "            keras.layers.Dense(256, activation='relu'),\n",
    "            keras.layers.Dropout(0.2),\n",
    "            keras.layers.Dense(128, activation='relu'),\n",
    "            keras.layers.Dropout(0.2),\n",
    "            keras.layers.Dense(10, activation='softmax')\n",
    "        ])\n",
    "        return model\n",
    "\n",
    "    # 创建并编译模型\n",
    "    model = create_base_model()\n",
    "    model.compile(\n",
    "        optimizer=keras.optimizers.Adam(learning_rate=1e-3),\n",
    "        loss='sparse_categorical_crossentropy',\n",
    "        metrics=['accuracy']\n",
    "    )\n",
    "\n",
    "    # 使用LRFinder\n",
    "    lr_finder = LRFinder(model)\n",
    "    lr_finder.find(\n",
    "        X_train, y_train,\n",
    "        start_lr=1e-6,\n",
    "        end_lr=1e-1,\n",
    "        batch_size=128,\n",
    "        epochs=1\n",
    "    )\n",
    "\n",
    "    # 绘制结果\n",
    "    lr_finder.plot_loss()\n",
    "\n",
    "    # 获取最佳学习率\n",
    "    best_lr = lr_finder.get_best_lr(sma=20)\n",
    "    print(f\"建议的最佳学习率: {best_lr:.6f}\")\n",
    "\n",
    "except ImportError:\n",
    "    print(\"keras-lr-finder 未安装，使用替代方法...\")\n",
    "    # 方法2：手动实现简化版学习率查找\n",
    "    def manual_lr_find(X_train, y_train, batch_size=128, epochs=1):\n",
    "        # 创建临时模型\n",
    "        temp_model = keras.Sequential([\n",
    "            keras.layers.Dense(128, activation='relu', input_shape=(784,)),\n",
    "            keras.layers.Dropout(0.2),\n",
    "            keras.layers.Dense(10, activation='softmax')\n",
    "        ])\n",
    "\n",
    "        temp_model.compile(\n",
    "            optimizer=keras.optimizers.Adam(),\n",
    "            loss='sparse_categorical_crossentropy',\n",
    "            metrics=['accuracy']\n",
    "        )\n",
    "\n",
    "        # 使用不同的学习率进行快速测试\n",
    "        lrs = [1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1]\n",
    "        losses = []\n",
    "\n",
    "        for lr in lrs:\n",
    "            keras.backend.set_value(temp_model.optimizer.learning_rate, lr)\n",
    "            history = temp_model.fit(\n",
    "                X_train[:1000], y_train[:1000],  # 使用子集加快速度\n",
    "                epochs=1,\n",
    "                batch_size=batch_size,\n",
    "                verbose=0\n",
    "            )\n",
    "            losses.append(history.history['loss'][0])\n",
    "            print(f\"LR: {lr:.1e}, Loss: {history.history['loss'][0]:.4f}\")\n",
    "\n",
    "        # 找到损失开始上升的点\n",
    "        for i in range(1, len(losses)):\n",
    "            if losses[i] > losses[i-1] * 1.5:  # 损失显著上升\n",
    "                best_lr = lrs[i-1] / 3  # 选择上升前的一个值\n",
    "                print(f\"建议的学习率: {best_lr:.6f}\")\n",
    "                return best_lr\n",
    "\n",
    "        # 如果没有明显上升，选择中间值\n",
    "        best_lr = lrs[len(lrs)//2]\n",
    "        print(f\"建议的学习率: {best_lr:.6f}\")\n",
    "        return best_lr\n",
    "\n",
    "    best_lr = manual_lr_find(X_train, y_train)\n",
    "    print(f\"使用手动方法找到的学习率: {best_lr:.6f}\")\n",
    "\n",
    "# 使用Keras Tuner进行超参数调整\n",
    "def build_model(hp):\n",
    "    model = keras.Sequential()\n",
    "    model.add(keras.layers.Input(shape=(784,)))\n",
    "\n",
    "    # 隐藏层数量和单元数\n",
    "    num_layers = hp.Int('num_layers', 2, 4)\n",
    "    for i in range(num_layers):\n",
    "        model.add(keras.layers.Dense(\n",
    "            units=hp.Int(f'units_{i}', min_value=128, max_value=512, step=64),\n",
    "            activation=hp.Choice(f'activation_{i}', ['relu', 'tanh', 'elu'])\n",
    "        ))\n",
    "        model.add(keras.layers.Dropout(\n",
    "            rate=hp.Float(f'dropout_{i}', min_value=0.1, max_value=0.4, step=0.1)\n",
    "        ))\n",
    "\n",
    "    # 输出层\n",
    "    model.add(keras.layers.Dense(10, activation='softmax'))\n",
    "\n",
    "    # 编译模型\n",
    "    learning_rate = hp.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='log')\n",
    "    optimizer = hp.Choice('optimizer', ['adam', 'rmsprop'])\n",
    "\n",
    "    if optimizer == 'adam':\n",
    "        opt = keras.optimizers.Adam(learning_rate=learning_rate)\n",
    "    else:\n",
    "        opt = keras.optimizers.RMSprop(learning_rate=learning_rate)\n",
    "\n",
    "    model.compile(\n",
    "        optimizer=opt,\n",
    "        loss='sparse_categorical_crossentropy',\n",
    "        metrics=['accuracy']\n",
    "    )\n",
    "    return model\n",
    "\n",
    "# 创建目录\n",
    "os.makedirs('checkpoints', exist_ok=True)\n",
    "os.makedirs('logs', exist_ok=True)\n",
    "\n",
    "# 创建调谐器\n",
    "tuner = kt.RandomSearch(\n",
    "    build_model,\n",
    "    objective='val_accuracy',\n",
    "    max_trials=15,\n",
    "    executions_per_trial=1,\n",
    "    directory='mnist_tuning',\n",
    "    project_name='mlp_tuning'\n",
    ")\n",
    "\n",
    "# 创建回调函数\n",
    "checkpoint_cb = keras.callbacks.ModelCheckpoint(\n",
    "    'checkpoints/best_model.h5',\n",
    "    save_best_only=True,\n",
    "    monitor='val_accuracy',\n",
    "    mode='max'\n",
    ")\n",
    "\n",
    "early_stopping_cb = keras.callbacks.EarlyStopping(\n",
    "    patience=8,\n",
    "    restore_best_weights=True,\n",
    "    monitor='val_accuracy',\n",
    "    mode='max'\n",
    ")\n",
    "\n",
    "log_dir = os.path.join(\"logs\", \"fit\", datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\"))\n",
    "tensorboard_cb = keras.callbacks.TensorBoard(log_dir=log_dir)\n",
    "\n",
    "print(\"开始超参数搜索...\")\n",
    "tuner.search(\n",
    "    X_train, y_train,\n",
    "    epochs=30,\n",
    "    validation_data=(X_val, y_val),\n",
    "    callbacks=[checkpoint_cb, early_stopping_cb, tensorboard_cb],\n",
    "    verbose=1,\n",
    "    batch_size=128\n",
    ")\n",
    "\n",
    "# 获取最佳超参数\n",
    "best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]\n",
    "print(\"\\n最佳超参数:\")\n",
    "print(f\"层数: {best_hps.get('num_layers')}\")\n",
    "print(f\"学习率: {best_hps.get('learning_rate')}\")\n",
    "print(f\"优化器: {best_hps.get('optimizer')}\")\n",
    "for i in range(best_hps.get('num_layers')):\n",
    "    print(f\"层{i+1}单元数: {best_hps.get(f'units_{i}')}\")\n",
    "    print(f\"层{i+1}激活函数: {best_hps.get(f'activation_{i}')}\")\n",
    "    print(f\"层{i+1}Dropout率: {best_hps.get(f'dropout_{i}')}\")\n",
    "\n",
    "# 使用最佳超参数构建和训练最终模型\n",
    "print(\"\\n使用最佳超参数训练最终模型...\")\n",
    "best_model = tuner.hypermodel.build(best_hps)\n",
    "\n",
    "# 重新定义早停回调，增加耐心值\n",
    "final_early_stopping = keras.callbacks.EarlyStopping(\n",
    "    patience=15,\n",
    "    restore_best_weights=True,\n",
    "    monitor='val_accuracy',\n",
    "    mode='max'\n",
    ")\n",
    "\n",
    "final_history = best_model.fit(\n",
    "    X_train, y_train,\n",
    "    epochs=100,\n",
    "    batch_size=128,\n",
    "    validation_data=(X_val, y_val),\n",
    "    callbacks=[checkpoint_cb, final_early_stopping, tensorboard_cb],\n",
    "    verbose=1\n",
    ")\n",
    "\n",
    "# 评估最终模型\n",
    "test_loss, test_acc = best_model.evaluate(X_test, y_test, verbose=0)\n",
    "print(f\"\\n测试集准确率: {test_acc:.4f}\")\n",
    "\n",
    "# 绘制训练历史\n",
    "plt.figure(figsize=(12, 4))\n",
    "plt.subplot(1, 2, 1)\n",
    "plt.plot(final_history.history['accuracy'], label='Training Accuracy')\n",
    "plt.plot(final_history.history['val_accuracy'], label='Validation Accuracy')\n",
    "plt.title('Model Accuracy')\n",
    "plt.xlabel('Epoch')\n",
    "plt.ylabel('Accuracy')\n",
    "plt.legend()\n",
    "\n",
    "plt.subplot(1, 2, 2)\n",
    "plt.plot(final_history.history['loss'], label='Training Loss')\n",
    "plt.plot(final_history.history['val_loss'], label='Validation Loss')\n",
    "plt.title('Model Loss')\n",
    "plt.xlabel('Epoch')\n",
    "plt.ylabel('Loss')\n",
    "plt.legend()\n",
    "plt.tight_layout()\n",
    "plt.show()\n",
    "\n",
    "# 保存最终模型\n",
    "best_model.save('mnist_mlp_final_model.h5')\n",
    "print(\"模型已保存为 'mnist_mlp_final_model.h5'\")\n",
    "\n",
    "# 结果评估\n",
    "if test_acc >= 0.98:\n",
    "    print(\"🎉 成功达到98%以上的测试精度！\")\n",
    "else:\n",
    "    print(f\"⚠️ 当前精度: {test_acc:.4f}, 未达到98%目标\")\n",
    "    print(\"可以尝试增加训练轮次或进一步调整超参数\")"
   ],
   "id": "e569945be156e098",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "训练集: (54000, 784)\n",
      "验证集: (6000, 784)\n",
      "测试集: (10000, 784)\n",
      "keras-lr-finder 未安装，使用替代方法...\n"
     ]
    },
    {
     "ename": "AttributeError",
     "evalue": "'str' object has no attribute 'name'",
     "output_type": "error",
     "traceback": [
      "\u001B[1;31m---------------------------------------------------------------------------\u001B[0m",
      "\u001B[1;31mModuleNotFoundError\u001B[0m                       Traceback (most recent call last)",
      "Cell \u001B[1;32mIn[13], line 38\u001B[0m\n\u001B[0;32m     37\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m---> 38\u001B[0m     \u001B[38;5;28;01mfrom\u001B[39;00m \u001B[38;5;21;01mkeras_lr_finder\u001B[39;00m \u001B[38;5;28;01mimport\u001B[39;00m LRFinder\n\u001B[0;32m     40\u001B[0m     \u001B[38;5;66;03m# 创建基础模型\u001B[39;00m\n",
      "\u001B[1;31mModuleNotFoundError\u001B[0m: No module named 'keras_lr_finder'",
      "\nDuring handling of the above exception, another exception occurred:\n",
      "\u001B[1;31mAttributeError\u001B[0m                            Traceback (most recent call last)",
      "Cell \u001B[1;32mIn[13], line 122\u001B[0m\n\u001B[0;32m    119\u001B[0m         \u001B[38;5;28mprint\u001B[39m(\u001B[38;5;124mf\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m建议的学习率: \u001B[39m\u001B[38;5;132;01m{\u001B[39;00mbest_lr\u001B[38;5;132;01m:\u001B[39;00m\u001B[38;5;124m.6f\u001B[39m\u001B[38;5;132;01m}\u001B[39;00m\u001B[38;5;124m\"\u001B[39m)\n\u001B[0;32m    120\u001B[0m         \u001B[38;5;28;01mreturn\u001B[39;00m best_lr\n\u001B[1;32m--> 122\u001B[0m     best_lr \u001B[38;5;241m=\u001B[39m manual_lr_find(X_train, y_train)\n\u001B[0;32m    123\u001B[0m     \u001B[38;5;28mprint\u001B[39m(\u001B[38;5;124mf\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m使用手动方法找到的学习率: \u001B[39m\u001B[38;5;132;01m{\u001B[39;00mbest_lr\u001B[38;5;132;01m:\u001B[39;00m\u001B[38;5;124m.6f\u001B[39m\u001B[38;5;132;01m}\u001B[39;00m\u001B[38;5;124m\"\u001B[39m)\n\u001B[0;32m    125\u001B[0m \u001B[38;5;66;03m# 使用Keras Tuner进行超参数调整\u001B[39;00m\n",
      "Cell \u001B[1;32mIn[13], line 100\u001B[0m, in \u001B[0;36mmanual_lr_find\u001B[1;34m(X_train, y_train, batch_size, epochs)\u001B[0m\n\u001B[0;32m     97\u001B[0m losses \u001B[38;5;241m=\u001B[39m []\n\u001B[0;32m     99\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m lr \u001B[38;5;129;01min\u001B[39;00m lrs:\n\u001B[1;32m--> 100\u001B[0m     keras\u001B[38;5;241m.\u001B[39mbackend\u001B[38;5;241m.\u001B[39mset_value(temp_model\u001B[38;5;241m.\u001B[39moptimizer\u001B[38;5;241m.\u001B[39mlearning_rate, lr)\n\u001B[0;32m    101\u001B[0m     history \u001B[38;5;241m=\u001B[39m temp_model\u001B[38;5;241m.\u001B[39mfit(\n\u001B[0;32m    102\u001B[0m         X_train[:\u001B[38;5;241m1000\u001B[39m], y_train[:\u001B[38;5;241m1000\u001B[39m],  \u001B[38;5;66;03m# 使用子集加快速度\u001B[39;00m\n\u001B[0;32m    103\u001B[0m         epochs\u001B[38;5;241m=\u001B[39m\u001B[38;5;241m1\u001B[39m,\n\u001B[0;32m    104\u001B[0m         batch_size\u001B[38;5;241m=\u001B[39mbatch_size,\n\u001B[0;32m    105\u001B[0m         verbose\u001B[38;5;241m=\u001B[39m\u001B[38;5;241m0\u001B[39m\n\u001B[0;32m    106\u001B[0m     )\n\u001B[0;32m    107\u001B[0m     losses\u001B[38;5;241m.\u001B[39mappend(history\u001B[38;5;241m.\u001B[39mhistory[\u001B[38;5;124m'\u001B[39m\u001B[38;5;124mloss\u001B[39m\u001B[38;5;124m'\u001B[39m][\u001B[38;5;241m0\u001B[39m])\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\site-packages\\keras\\src\\legacy\\backend.py:1885\u001B[0m, in \u001B[0;36mset_value\u001B[1;34m(x, value)\u001B[0m\n\u001B[0;32m   1882\u001B[0m \u001B[38;5;129m@keras_export\u001B[39m(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mkeras._legacy.backend.set_value\u001B[39m\u001B[38;5;124m\"\u001B[39m)\n\u001B[0;32m   1883\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21mset_value\u001B[39m(x, value):\n\u001B[0;32m   1884\u001B[0m \u001B[38;5;250m    \u001B[39m\u001B[38;5;124;03m\"\"\"DEPRECATED.\"\"\"\u001B[39;00m\n\u001B[1;32m-> 1885\u001B[0m     value \u001B[38;5;241m=\u001B[39m np\u001B[38;5;241m.\u001B[39masarray(value, dtype\u001B[38;5;241m=\u001B[39mx\u001B[38;5;241m.\u001B[39mdtype\u001B[38;5;241m.\u001B[39mname)\n\u001B[0;32m   1886\u001B[0m     x\u001B[38;5;241m.\u001B[39massign(value)\n",
      "\u001B[1;31mAttributeError\u001B[0m: 'str' object has no attribute 'name'"
     ]
    }
   ],
   "execution_count": 13
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 第二部分",
   "id": "aa17e7e1e0f2258c"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "第二部分要求构建一个基本的softmax回归算法，以及一个简单的两层神经网络。将使用原生Python（使用numpy库），不借助keras实现这些算法\n",
    "\n",
    "在此过程中，将提供一些关于如何实现这些不同函数的指导，但总体而言，细节需要自己实现。 应该尽量使用 numpy 中的线性代数调用：for/while循环通常会使代码运行速度比预期慢得多。\n",
    "\n",
    "**请仔细阅读作业说明!!!**"
   ],
   "id": "6c3f1fe62132569f"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "去命令行（cmd/Anaconda Powershell Prompt /其他终端）运行如下指令（激活开发环境一定要最先执行），安装这部分作业依赖的python库：\n",
    "- 激活开发环境：conda activate homl3\n",
    "- 安装numdifftools：conda install numdifftools\n",
    "- 安装pytest：conda install pytest\n"
   ],
   "id": "bbec9387c82d28a6"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-01T10:15:07.968969Z",
     "start_time": "2025-09-01T10:15:01.612920Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.datasets import fetch_openml\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "import time\n",
    "\n",
    "# 加载MNIST数据集\n",
    "print(\"加载MNIST数据集...\")\n",
    "mnist = fetch_openml('mnist_784', version=1, as_frame=False)\n",
    "X, y = mnist.data, mnist.target.astype(int)\n",
    "\n",
    "# 数据预处理\n",
    "X = X.astype(np.float32) / 255.0\n",
    "y = y.astype(np.int32)\n",
    "\n",
    "# 划分训练集和测试集\n",
    "X_train, X_test, y_train, y_test = train_test_split(\n",
    "    X, y, test_size=10000, random_state=42, stratify=y\n",
    ")\n",
    "X_train, X_val, y_train, y_val = train_test_split(\n",
    "    X_train, y_train, test_size=10000, random_state=42, stratify=y_train\n",
    ")\n",
    "\n",
    "print(f\"训练集: {X_train.shape}\")\n",
    "print(f\"验证集: {X_val.shape}\")\n",
    "print(f\"测试集: {X_test.shape}\")\n",
    "\n",
    "# 工具函数\n",
    "def one_hot_encode(y, num_classes=10):\n",
    "    \"\"\"将标签转换为one-hot编码\"\"\"\n",
    "    return np.eye(num_classes)[y]\n",
    "\n",
    "def softmax(x):\n",
    "    \"\"\"Softmax函数\"\"\"\n",
    "    exp_x = np.exp(x - np.max(x, axis=1, keepdims=True))\n",
    "    return exp_x / np.sum(exp_x, axis=1, keepdims=True)\n",
    "\n",
    "def cross_entropy_loss(y_true, y_pred, epsilon=1e-12):\n",
    "    \"\"\"交叉熵损失函数\"\"\"\n",
    "    y_pred = np.clip(y_pred, epsilon, 1. - epsilon)\n",
    "    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))\n",
    "\n",
    "def accuracy(y_true, y_pred):\n",
    "    \"\"\"计算准确率\"\"\"\n",
    "    return np.mean(np.argmax(y_true, axis=1) == np.argmax(y_pred, axis=1))\n",
    "\n",
    "# 1. Softmax回归实现\n",
    "class SoftmaxRegression:\n",
    "    def __init__(self, input_size, num_classes, learning_rate=0.01, reg_strength=0.01):\n",
    "        self.W = np.random.randn(input_size, num_classes) * 0.01\n",
    "        self.b = np.zeros((1, num_classes))\n",
    "        self.learning_rate = learning_rate\n",
    "        self.reg_strength = reg_strength\n",
    "\n",
    "    def forward(self, X):\n",
    "        \"\"\"前向传播\"\"\"\n",
    "        scores = np.dot(X, self.W) + self.b\n",
    "        return softmax(scores)\n",
    "\n",
    "    def backward(self, X, y_true, y_pred):\n",
    "        \"\"\"反向传播\"\"\"\n",
    "        m = X.shape[0]\n",
    "        grad_scores = (y_pred - y_true) / m\n",
    "\n",
    "        # 计算梯度\n",
    "        dW = np.dot(X.T, grad_scores) + self.reg_strength * self.W\n",
    "        db = np.sum(grad_scores, axis=0, keepdims=True)\n",
    "\n",
    "        return dW, db\n",
    "\n",
    "    def update_parameters(self, dW, db):\n",
    "        \"\"\"更新参数\"\"\"\n",
    "        self.W -= self.learning_rate * dW\n",
    "        self.b -= self.learning_rate * db\n",
    "\n",
    "    def compute_loss(self, X, y_true):\n",
    "        \"\"\"计算损失（包含正则化）\"\"\"\n",
    "        y_pred = self.forward(X)\n",
    "        data_loss = cross_entropy_loss(y_true, y_pred)\n",
    "        reg_loss = 0.5 * self.reg_strength * np.sum(self.W * self.W)\n",
    "        return data_loss + reg_loss\n",
    "\n",
    "    def fit(self, X_train, y_train, X_val, y_val, epochs=100, batch_size=128, verbose=True):\n",
    "        \"\"\"训练模型\"\"\"\n",
    "        train_losses = []\n",
    "        val_losses = []\n",
    "        train_accs = []\n",
    "        val_accs = []\n",
    "\n",
    "        y_train_oh = one_hot_encode(y_train)\n",
    "        y_val_oh = one_hot_encode(y_val)\n",
    "\n",
    "        n_batches = len(X_train) // batch_size\n",
    "\n",
    "        for epoch in range(epochs):\n",
    "            # 打乱数据\n",
    "            indices = np.random.permutation(len(X_train))\n",
    "            X_shuffled = X_train[indices]\n",
    "            y_shuffled_oh = y_train_oh[indices]\n",
    "\n",
    "            epoch_loss = 0\n",
    "\n",
    "            for i in range(n_batches):\n",
    "                # 获取批次数据\n",
    "                start_idx = i * batch_size\n",
    "                end_idx = start_idx + batch_size\n",
    "                X_batch = X_shuffled[start_idx:end_idx]\n",
    "                y_batch_oh = y_shuffled_oh[start_idx:end_idx]\n",
    "\n",
    "                # 前向传播\n",
    "                y_pred = self.forward(X_batch)\n",
    "\n",
    "                # 反向传播\n",
    "                dW, db = self.backward(X_batch, y_batch_oh, y_pred)\n",
    "\n",
    "                # 更新参数\n",
    "                self.update_parameters(dW, db)\n",
    "\n",
    "                # 计算批次损失\n",
    "                batch_loss = self.compute_loss(X_batch, y_batch_oh)\n",
    "                epoch_loss += batch_loss\n",
    "\n",
    "            # 计算平均训练损失和准确率\n",
    "            avg_train_loss = epoch_loss / n_batches\n",
    "            train_pred = self.forward(X_train)\n",
    "            train_acc = accuracy(y_train_oh, train_pred)\n",
    "\n",
    "            # 计算验证损失和准确率\n",
    "            val_pred = self.forward(X_val)\n",
    "            val_loss = self.compute_loss(X_val, y_val_oh)\n",
    "            val_acc = accuracy(y_val_oh, val_pred)\n",
    "\n",
    "            train_losses.append(avg_train_loss)\n",
    "            val_losses.append(val_loss)\n",
    "            train_accs.append(train_acc)\n",
    "            val_accs.append(val_acc)\n",
    "\n",
    "            if verbose and (epoch + 1) % 10 == 0:\n",
    "                print(f\"Epoch {epoch + 1}/{epochs}, \"\n",
    "                      f\"Train Loss: {avg_train_loss:.4f}, Val Loss: {val_loss:.4f}, \"\n",
    "                      f\"Train Acc: {train_acc:.4f}, Val Acc: {val_acc:.4f}\")\n",
    "\n",
    "        return train_losses, val_losses, train_accs, val_accs\n",
    "\n",
    "    def predict(self, X):\n",
    "        \"\"\"预测\"\"\"\n",
    "        return np.argmax(self.forward(X), axis=1)\n",
    "\n",
    "    def evaluate(self, X, y):\n",
    "        \"\"\"评估模型\"\"\"\n",
    "        y_pred = self.predict(X)\n",
    "        return np.mean(y_pred == y)\n",
    "\n",
    "# 2. 两层神经网络实现\n",
    "class TwoLayerNeuralNetwork:\n",
    "    def __init__(self, input_size, hidden_size, num_classes, learning_rate=0.01, reg_strength=0.01):\n",
    "        # 初始化参数\n",
    "        self.W1 = np.random.randn(input_size, hidden_size) * 0.01\n",
    "        self.b1 = np.zeros((1, hidden_size))\n",
    "        self.W2 = np.random.randn(hidden_size, num_classes) * 0.01\n",
    "        self.b2 = np.zeros((1, num_classes))\n",
    "        self.learning_rate = learning_rate\n",
    "        self.reg_strength = reg_strength\n",
    "\n",
    "    def relu(self, x):\n",
    "        \"\"\"ReLU激活函数\"\"\"\n",
    "        return np.maximum(0, x)\n",
    "\n",
    "    def relu_derivative(self, x):\n",
    "        \"\"\"ReLU导数\"\"\"\n",
    "        return (x > 0).astype(float)\n",
    "\n",
    "    def forward(self, X):\n",
    "        \"\"\"前向传播\"\"\"\n",
    "        self.z1 = np.dot(X, self.W1) + self.b1\n",
    "        self.a1 = self.relu(self.z1)\n",
    "        self.z2 = np.dot(self.a1, self.W2) + self.b2\n",
    "        return softmax(self.z2)\n",
    "\n",
    "    def backward(self, X, y_true, y_pred):\n",
    "        \"\"\"反向传播\"\"\"\n",
    "        m = X.shape[0]\n",
    "\n",
    "        # 输出层梯度\n",
    "        dz2 = (y_pred - y_true) / m\n",
    "\n",
    "        # 隐藏层梯度\n",
    "        dW2 = np.dot(self.a1.T, dz2) + self.reg_strength * self.W2\n",
    "        db2 = np.sum(dz2, axis=0, keepdims=True)\n",
    "\n",
    "        da1 = np.dot(dz2, self.W2.T)\n",
    "        dz1 = da1 * self.relu_derivative(self.z1)\n",
    "\n",
    "        dW1 = np.dot(X.T, dz1) + self.reg_strength * self.W1\n",
    "        db1 = np.sum(dz1, axis=0, keepdims=True)\n",
    "\n",
    "        return dW1, db1, dW2, db2\n",
    "\n",
    "    def update_parameters(self, dW1, db1, dW2, db2):\n",
    "        \"\"\"更新参数\"\"\"\n",
    "        self.W1 -= self.learning_rate * dW1\n",
    "        self.b1 -= self.learning_rate * db1\n",
    "        self.W2 -= self.learning_rate * dW2\n",
    "        self.b2 -= self.learning_rate * db2\n",
    "\n",
    "    def compute_loss(self, X, y_true):\n",
    "        \"\"\"计算损失\"\"\"\n",
    "        y_pred = self.forward(X)\n",
    "        data_loss = cross_entropy_loss(y_true, y_pred)\n",
    "        reg_loss = 0.5 * self.reg_strength * (np.sum(self.W1 * self.W1) + np.sum(self.W2 * self.W2))\n",
    "        return data_loss + reg_loss\n",
    "\n",
    "    def fit(self, X_train, y_train, X_val, y_val, epochs=100, batch_size=128, verbose=True):\n",
    "        \"\"\"训练模型\"\"\"\n",
    "        train_losses = []\n",
    "        val_losses = []\n",
    "        train_accs = []\n",
    "        val_accs = []\n",
    "\n",
    "        y_train_oh = one_hot_encode(y_train)\n",
    "        y_val_oh = one_hot_encode(y_val)\n",
    "\n",
    "        n_batches = len(X_train) // batch_size\n",
    "\n",
    "        for epoch in range(epochs):\n",
    "            # 打乱数据\n",
    "            indices = np.random.permutation(len(X_train))\n",
    "            X_shuffled = X_train[indices]\n",
    "            y_shuffled_oh = y_train_oh[indices]\n",
    "\n",
    "            epoch_loss = 0\n",
    "\n",
    "            for i in range(n_batches):\n",
    "                # 获取批次数据\n",
    "                start_idx = i * batch_size\n",
    "                end_idx = start_idx + batch_size\n",
    "                X_batch = X_shuffled[start_idx:end_idx]\n",
    "                y_batch_oh = y_shuffled_oh[start_idx:end_idx]\n",
    "\n",
    "                # 前向传播\n",
    "                y_pred = self.forward(X_batch)\n",
    "\n",
    "                # 反向传播\n",
    "                dW1, db1, dW2, db2 = self.backward(X_batch, y_batch_oh, y_pred)\n",
    "\n",
    "                # 更新参数\n",
    "                self.update_parameters(dW1, db1, dW2, db2)\n",
    "\n",
    "                # 计算批次损失\n",
    "                batch_loss = self.compute_loss(X_batch, y_batch_oh)\n",
    "                epoch_loss += batch_loss\n",
    "\n",
    "            # 计算平均训练损失和准确率\n",
    "            avg_train_loss = epoch_loss / n_batches\n",
    "            train_pred = self.forward(X_train)\n",
    "            train_acc = accuracy(y_train_oh, train_pred)\n",
    "\n",
    "            # 计算验证损失和准确率\n",
    "            val_pred = self.forward(X_val)\n",
    "            val_loss = self.compute_loss(X_val, y_val_oh)\n",
    "            val_acc = accuracy(y_val_oh, val_pred)\n",
    "\n",
    "            train_losses.append(avg_train_loss)\n",
    "            val_losses.append(val_loss)\n",
    "            train_accs.append(train_acc)\n",
    "            val_accs.append(val_acc)\n",
    "\n",
    "            if verbose and (epoch + 1) % 10 == 0:\n",
    "                print(f\"Epoch {epoch + 1}/{epochs}, \"\n",
    "                      f\"Train Loss: {avg_train_loss:.4f}, Val Loss: {val_loss:.4f}, \"\n",
    "                      f\"Train Acc: {train_acc:.4f}, Val Acc: {val_acc:.4f}\")\n",
    "\n",
    "        return train_losses, val_losses, train_accs, val_accs\n",
    "\n",
    "    def predict(self, X):\n",
    "        \"\"\"预测\"\"\"\n",
    "        return np.argmax(self.forward(X), axis=1)\n",
    "\n",
    "    def evaluate(self, X, y):\n",
    "        \"\"\"评估模型\"\"\"\n",
    "        y_pred = self.predict(X)\n",
    "        return np.mean(y_pred == y)\n",
    "\n",
    "# 训练和评估模型\n",
    "print(\"\\n\" + \"=\"*50)\n",
    "print(\"训练Softmax回归模型\")\n",
    "print(\"=\"*50)\n",
    "\n",
    "softmax_model = SoftmaxRegression(\n",
    "    input_size=784,\n",
    "    num_classes=10,\n",
    "    learning_rate=0.1,\n",
    "    reg_strength=0.001\n",
    ")\n",
    "\n",
    "start_time = time.time()\n",
    "softmax_train_losses, softmax_val_losses, softmax_train_accs, softmax_val_accs = softmax_model.fit(\n",
    "    X_train, y_train, X_val, y_val, epochs=50, batch_size=128\n",
    ")\n",
    "softmax_time = time.time() - start_time\n",
    "\n",
    "softmax_test_acc = softmax_model.evaluate(X_test, y_test)\n",
    "print(f\"\\nSoftmax回归测试准确率: {softmax_test_acc:.4f}\")\n",
    "print(f\"训练时间: {softmax_time:.2f}秒\")\n",
    "\n",
    "print(\"\\n\" + \"=\"*50)\n",
    "print(\"训练两层神经网络\")\n",
    "print(\"=\"*50)\n",
    "\n",
    "nn_model = TwoLayerNeuralNetwork(\n",
    "    input_size=784,\n",
    "    hidden_size=128,\n",
    "    num_classes=10,\n",
    "    learning_rate=0.01,\n",
    "    reg_strength=0.001\n",
    ")\n",
    "\n",
    "start_time = time.time()\n",
    "nn_train_losses, nn_val_losses, nn_train_accs, nn_val_accs = nn_model.fit(\n",
    "    X_train, y_train, X_val, y_val, epochs=50, batch_size=128\n",
    ")\n",
    "nn_time = time.time() - start_time\n",
    "\n",
    "nn_test_acc = nn_model.evaluate(X_test, y_test)\n",
    "print(f\"\\n两层神经网络测试准确率: {nn_test_acc:.4f}\")\n",
    "print(f\"训练时间: {nn_time:.2f}秒\")\n",
    "\n",
    "# 绘制结果\n",
    "plt.figure(figsize=(15, 10))\n",
    "\n",
    "# 损失曲线\n",
    "plt.subplot(2, 2, 1)\n",
    "plt.plot(softmax_train_losses, label='Softmax Train Loss')\n",
    "plt.plot(softmax_val_losses, label='Softmax Val Loss')\n",
    "plt.plot(nn_train_losses, label='NN Train Loss')\n",
    "plt.plot(nn_val_losses, label='NN Val Loss')\n",
    "plt.xlabel('Epochs')\n",
    "plt.ylabel('Loss')\n",
    "plt.title('Training and Validation Loss')\n",
    "plt.legend()\n",
    "plt.grid(True)\n",
    "\n",
    "# 准确率曲线\n",
    "plt.subplot(2, 2, 2)\n",
    "plt.plot(softmax_train_accs, label='Softmax Train Acc')\n",
    "plt.plot(softmax_val_accs, label='Softmax Val Acc')\n",
    "plt.plot(nn_train_accs, label='NN Train Acc')\n",
    "plt.plot(nn_val_accs, label='NN Val Acc')\n",
    "plt.xlabel('Epochs')\n",
    "plt.ylabel('Accuracy')\n",
    "plt.title('Training and Validation Accuracy')\n",
    "plt.legend()\n",
    "plt.grid(True)\n",
    "\n",
    "# 测试准确率比较\n",
    "plt.subplot(2, 2, 3)\n",
    "models = ['Softmax Regression', 'Two-Layer NN']\n",
    "accuracies = [softmax_test_acc, nn_test_acc]\n",
    "plt.bar(models, accuracies, color=['blue', 'orange'])\n",
    "plt.ylabel('Test Accuracy')\n",
    "plt.title('Test Accuracy Comparison')\n",
    "plt.ylim(0.8, 1.0)\n",
    "for i, v in enumerate(accuracies):\n",
    "    plt.text(i, v + 0.005, f'{v:.4f}', ha='center')\n",
    "\n",
    "# 训练时间比较\n",
    "plt.subplot(2, 2, 4)\n",
    "times = [softmax_time, nn_time]\n",
    "plt.bar(models, times, color=['blue', 'orange'])\n",
    "plt.ylabel('Training Time (seconds)')\n",
    "plt.title('Training Time Comparison')\n",
    "for i, v in enumerate(times):\n",
    "    plt.text(i, v + 0.1, f'{v:.2f}s', ha='center')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()\n",
    "\n",
    "print(\"\\n\" + \"=\"*50)\n",
    "print(\"最终结果总结\")\n",
    "print(\"=\"*50)\n",
    "print(f\"Softmax回归 - 测试准确率: {softmax_test_acc:.4f}, 训练时间: {softmax_time:.2f}秒\")\n",
    "print(f\"两层神经网络 - 测试准确率: {nn_test_acc:.4f}, 训练时间: {nn_time:.2f}秒\")\n",
    "print(f\"性能提升: {(nn_test_acc - softmax_test_acc) * 100:.2f}%\")"
   ],
   "id": "8dc3fd6cdbe7335",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "加载MNIST数据集...\n"
     ]
    },
    {
     "ename": "URLError",
     "evalue": "<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)>",
     "output_type": "error",
     "traceback": [
      "\u001B[1;31m---------------------------------------------------------------------------\u001B[0m",
      "\u001B[1;31mSSLCertVerificationError\u001B[0m                  Traceback (most recent call last)",
      "File \u001B[1;32mD:\\anaconda\\Lib\\urllib\\request.py:1344\u001B[0m, in \u001B[0;36mAbstractHTTPHandler.do_open\u001B[1;34m(self, http_class, req, **http_conn_args)\u001B[0m\n\u001B[0;32m   1343\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m-> 1344\u001B[0m     h\u001B[38;5;241m.\u001B[39mrequest(req\u001B[38;5;241m.\u001B[39mget_method(), req\u001B[38;5;241m.\u001B[39mselector, req\u001B[38;5;241m.\u001B[39mdata, headers,\n\u001B[0;32m   1345\u001B[0m               encode_chunked\u001B[38;5;241m=\u001B[39mreq\u001B[38;5;241m.\u001B[39mhas_header(\u001B[38;5;124m'\u001B[39m\u001B[38;5;124mTransfer-encoding\u001B[39m\u001B[38;5;124m'\u001B[39m))\n\u001B[0;32m   1346\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mOSError\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m err: \u001B[38;5;66;03m# timeout error\u001B[39;00m\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\http\\client.py:1336\u001B[0m, in \u001B[0;36mHTTPConnection.request\u001B[1;34m(self, method, url, body, headers, encode_chunked)\u001B[0m\n\u001B[0;32m   1335\u001B[0m \u001B[38;5;250m\u001B[39m\u001B[38;5;124;03m\"\"\"Send a complete request to the server.\"\"\"\u001B[39;00m\n\u001B[1;32m-> 1336\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_send_request(method, url, body, headers, encode_chunked)\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\http\\client.py:1382\u001B[0m, in \u001B[0;36mHTTPConnection._send_request\u001B[1;34m(self, method, url, body, headers, encode_chunked)\u001B[0m\n\u001B[0;32m   1381\u001B[0m     body \u001B[38;5;241m=\u001B[39m _encode(body, \u001B[38;5;124m'\u001B[39m\u001B[38;5;124mbody\u001B[39m\u001B[38;5;124m'\u001B[39m)\n\u001B[1;32m-> 1382\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mendheaders(body, encode_chunked\u001B[38;5;241m=\u001B[39mencode_chunked)\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\http\\client.py:1331\u001B[0m, in \u001B[0;36mHTTPConnection.endheaders\u001B[1;34m(self, message_body, encode_chunked)\u001B[0m\n\u001B[0;32m   1330\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m CannotSendHeader()\n\u001B[1;32m-> 1331\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_send_output(message_body, encode_chunked\u001B[38;5;241m=\u001B[39mencode_chunked)\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\http\\client.py:1091\u001B[0m, in \u001B[0;36mHTTPConnection._send_output\u001B[1;34m(self, message_body, encode_chunked)\u001B[0m\n\u001B[0;32m   1090\u001B[0m \u001B[38;5;28;01mdel\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_buffer[:]\n\u001B[1;32m-> 1091\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39msend(msg)\n\u001B[0;32m   1093\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m message_body \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[0;32m   1094\u001B[0m \n\u001B[0;32m   1095\u001B[0m     \u001B[38;5;66;03m# create a consistent interface to message_body\u001B[39;00m\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\http\\client.py:1035\u001B[0m, in \u001B[0;36mHTTPConnection.send\u001B[1;34m(self, data)\u001B[0m\n\u001B[0;32m   1034\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mauto_open:\n\u001B[1;32m-> 1035\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mconnect()\n\u001B[0;32m   1036\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\http\\client.py:1477\u001B[0m, in \u001B[0;36mHTTPSConnection.connect\u001B[1;34m(self)\u001B[0m\n\u001B[0;32m   1475\u001B[0m     server_hostname \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mhost\n\u001B[1;32m-> 1477\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39msock \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_context\u001B[38;5;241m.\u001B[39mwrap_socket(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39msock,\n\u001B[0;32m   1478\u001B[0m                                       server_hostname\u001B[38;5;241m=\u001B[39mserver_hostname)\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\ssl.py:455\u001B[0m, in \u001B[0;36mSSLContext.wrap_socket\u001B[1;34m(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session)\u001B[0m\n\u001B[0;32m    449\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21mwrap_socket\u001B[39m(\u001B[38;5;28mself\u001B[39m, sock, server_side\u001B[38;5;241m=\u001B[39m\u001B[38;5;28;01mFalse\u001B[39;00m,\n\u001B[0;32m    450\u001B[0m                 do_handshake_on_connect\u001B[38;5;241m=\u001B[39m\u001B[38;5;28;01mTrue\u001B[39;00m,\n\u001B[0;32m    451\u001B[0m                 suppress_ragged_eofs\u001B[38;5;241m=\u001B[39m\u001B[38;5;28;01mTrue\u001B[39;00m,\n\u001B[0;32m    452\u001B[0m                 server_hostname\u001B[38;5;241m=\u001B[39m\u001B[38;5;28;01mNone\u001B[39;00m, session\u001B[38;5;241m=\u001B[39m\u001B[38;5;28;01mNone\u001B[39;00m):\n\u001B[0;32m    453\u001B[0m     \u001B[38;5;66;03m# SSLSocket class handles server_hostname encoding before it calls\u001B[39;00m\n\u001B[0;32m    454\u001B[0m     \u001B[38;5;66;03m# ctx._wrap_socket()\u001B[39;00m\n\u001B[1;32m--> 455\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39msslsocket_class\u001B[38;5;241m.\u001B[39m_create(\n\u001B[0;32m    456\u001B[0m         sock\u001B[38;5;241m=\u001B[39msock,\n\u001B[0;32m    457\u001B[0m         server_side\u001B[38;5;241m=\u001B[39mserver_side,\n\u001B[0;32m    458\u001B[0m         do_handshake_on_connect\u001B[38;5;241m=\u001B[39mdo_handshake_on_connect,\n\u001B[0;32m    459\u001B[0m         suppress_ragged_eofs\u001B[38;5;241m=\u001B[39msuppress_ragged_eofs,\n\u001B[0;32m    460\u001B[0m         server_hostname\u001B[38;5;241m=\u001B[39mserver_hostname,\n\u001B[0;32m    461\u001B[0m         context\u001B[38;5;241m=\u001B[39m\u001B[38;5;28mself\u001B[39m,\n\u001B[0;32m    462\u001B[0m         session\u001B[38;5;241m=\u001B[39msession\n\u001B[0;32m    463\u001B[0m     )\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\ssl.py:1041\u001B[0m, in \u001B[0;36mSSLSocket._create\u001B[1;34m(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session)\u001B[0m\n\u001B[0;32m   1040\u001B[0m                 \u001B[38;5;28;01mraise\u001B[39;00m \u001B[38;5;167;01mValueError\u001B[39;00m(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mdo_handshake_on_connect should not be specified for non-blocking sockets\u001B[39m\u001B[38;5;124m\"\u001B[39m)\n\u001B[1;32m-> 1041\u001B[0m             \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mdo_handshake()\n\u001B[0;32m   1042\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m:\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\ssl.py:1319\u001B[0m, in \u001B[0;36mSSLSocket.do_handshake\u001B[1;34m(self, block)\u001B[0m\n\u001B[0;32m   1318\u001B[0m         \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39msettimeout(\u001B[38;5;28;01mNone\u001B[39;00m)\n\u001B[1;32m-> 1319\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_sslobj\u001B[38;5;241m.\u001B[39mdo_handshake()\n\u001B[0;32m   1320\u001B[0m \u001B[38;5;28;01mfinally\u001B[39;00m:\n",
      "\u001B[1;31mSSLCertVerificationError\u001B[0m: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)",
      "\nDuring handling of the above exception, another exception occurred:\n",
      "\u001B[1;31mURLError\u001B[0m                                  Traceback (most recent call last)",
      "Cell \u001B[1;32mIn[15], line 10\u001B[0m\n\u001B[0;32m      8\u001B[0m \u001B[38;5;66;03m# 加载MNIST数据集\u001B[39;00m\n\u001B[0;32m      9\u001B[0m \u001B[38;5;28mprint\u001B[39m(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m加载MNIST数据集...\u001B[39m\u001B[38;5;124m\"\u001B[39m)\n\u001B[1;32m---> 10\u001B[0m mnist \u001B[38;5;241m=\u001B[39m fetch_openml(\u001B[38;5;124m'\u001B[39m\u001B[38;5;124mmnist_784\u001B[39m\u001B[38;5;124m'\u001B[39m, version\u001B[38;5;241m=\u001B[39m\u001B[38;5;241m1\u001B[39m, as_frame\u001B[38;5;241m=\u001B[39m\u001B[38;5;28;01mFalse\u001B[39;00m)\n\u001B[0;32m     11\u001B[0m X, y \u001B[38;5;241m=\u001B[39m mnist\u001B[38;5;241m.\u001B[39mdata, mnist\u001B[38;5;241m.\u001B[39mtarget\u001B[38;5;241m.\u001B[39mastype(\u001B[38;5;28mint\u001B[39m)\n\u001B[0;32m     13\u001B[0m \u001B[38;5;66;03m# 数据预处理\u001B[39;00m\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\site-packages\\sklearn\\utils\\_param_validation.py:213\u001B[0m, in \u001B[0;36mvalidate_params.<locals>.decorator.<locals>.wrapper\u001B[1;34m(*args, **kwargs)\u001B[0m\n\u001B[0;32m    207\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[0;32m    208\u001B[0m     \u001B[38;5;28;01mwith\u001B[39;00m config_context(\n\u001B[0;32m    209\u001B[0m         skip_parameter_validation\u001B[38;5;241m=\u001B[39m(\n\u001B[0;32m    210\u001B[0m             prefer_skip_nested_validation \u001B[38;5;129;01mor\u001B[39;00m global_skip_validation\n\u001B[0;32m    211\u001B[0m         )\n\u001B[0;32m    212\u001B[0m     ):\n\u001B[1;32m--> 213\u001B[0m         \u001B[38;5;28;01mreturn\u001B[39;00m func(\u001B[38;5;241m*\u001B[39margs, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[0;32m    214\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m InvalidParameterError \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[0;32m    215\u001B[0m     \u001B[38;5;66;03m# When the function is just a wrapper around an estimator, we allow\u001B[39;00m\n\u001B[0;32m    216\u001B[0m     \u001B[38;5;66;03m# the function to delegate validation to the estimator, but we replace\u001B[39;00m\n\u001B[0;32m    217\u001B[0m     \u001B[38;5;66;03m# the name of the estimator by the name of the function in the error\u001B[39;00m\n\u001B[0;32m    218\u001B[0m     \u001B[38;5;66;03m# message to avoid confusion.\u001B[39;00m\n\u001B[0;32m    219\u001B[0m     msg \u001B[38;5;241m=\u001B[39m re\u001B[38;5;241m.\u001B[39msub(\n\u001B[0;32m    220\u001B[0m         \u001B[38;5;124mr\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mparameter of \u001B[39m\u001B[38;5;124m\\\u001B[39m\u001B[38;5;124mw+ must be\u001B[39m\u001B[38;5;124m\"\u001B[39m,\n\u001B[0;32m    221\u001B[0m         \u001B[38;5;124mf\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mparameter of 
\u001B[39m\u001B[38;5;132;01m{\u001B[39;00mfunc\u001B[38;5;241m.\u001B[39m\u001B[38;5;18m__qualname__\u001B[39m\u001B[38;5;132;01m}\u001B[39;00m\u001B[38;5;124m must be\u001B[39m\u001B[38;5;124m\"\u001B[39m,\n\u001B[0;32m    222\u001B[0m         \u001B[38;5;28mstr\u001B[39m(e),\n\u001B[0;32m    223\u001B[0m     )\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\site-packages\\sklearn\\datasets\\_openml.py:1008\u001B[0m, in \u001B[0;36mfetch_openml\u001B[1;34m(name, version, data_id, data_home, target_column, cache, return_X_y, as_frame, n_retries, delay, parser, read_csv_kwargs)\u001B[0m\n\u001B[0;32m   1002\u001B[0m     \u001B[38;5;28;01mif\u001B[39;00m data_id \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[0;32m   1003\u001B[0m         \u001B[38;5;28;01mraise\u001B[39;00m \u001B[38;5;167;01mValueError\u001B[39;00m(\n\u001B[0;32m   1004\u001B[0m             \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mDataset data_id=\u001B[39m\u001B[38;5;132;01m{}\u001B[39;00m\u001B[38;5;124m and name=\u001B[39m\u001B[38;5;132;01m{}\u001B[39;00m\u001B[38;5;124m passed, but you can only \u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m   1005\u001B[0m             \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mspecify a numeric data_id or a name, not \u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m   1006\u001B[0m             \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mboth.\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;241m.\u001B[39mformat(data_id, name)\n\u001B[0;32m   1007\u001B[0m         )\n\u001B[1;32m-> 1008\u001B[0m     data_info \u001B[38;5;241m=\u001B[39m _get_data_info_by_name(\n\u001B[0;32m   1009\u001B[0m         name, version, data_home, n_retries\u001B[38;5;241m=\u001B[39mn_retries, delay\u001B[38;5;241m=\u001B[39mdelay\n\u001B[0;32m   1010\u001B[0m     )\n\u001B[0;32m   1011\u001B[0m     data_id \u001B[38;5;241m=\u001B[39m data_info[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mdid\u001B[39m\u001B[38;5;124m\"\u001B[39m]\n\u001B[0;32m   1012\u001B[0m \u001B[38;5;28;01melif\u001B[39;00m data_id \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[0;32m   1013\u001B[0m     \u001B[38;5;66;03m# from the previous if statement, it is given that name is 
None\u001B[39;00m\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\site-packages\\sklearn\\datasets\\_openml.py:326\u001B[0m, in \u001B[0;36m_get_data_info_by_name\u001B[1;34m(name, version, data_home, n_retries, delay)\u001B[0m\n\u001B[0;32m    324\u001B[0m url \u001B[38;5;241m=\u001B[39m (_SEARCH_NAME \u001B[38;5;241m+\u001B[39m \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m/data_version/\u001B[39m\u001B[38;5;132;01m{}\u001B[39;00m\u001B[38;5;124m\"\u001B[39m)\u001B[38;5;241m.\u001B[39mformat(name, version)\n\u001B[0;32m    325\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m--> 326\u001B[0m     json_data \u001B[38;5;241m=\u001B[39m _get_json_content_from_openml_api(\n\u001B[0;32m    327\u001B[0m         url,\n\u001B[0;32m    328\u001B[0m         error_message\u001B[38;5;241m=\u001B[39m\u001B[38;5;28;01mNone\u001B[39;00m,\n\u001B[0;32m    329\u001B[0m         data_home\u001B[38;5;241m=\u001B[39mdata_home,\n\u001B[0;32m    330\u001B[0m         n_retries\u001B[38;5;241m=\u001B[39mn_retries,\n\u001B[0;32m    331\u001B[0m         delay\u001B[38;5;241m=\u001B[39mdelay,\n\u001B[0;32m    332\u001B[0m     )\n\u001B[0;32m    333\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m OpenMLError:\n\u001B[0;32m    334\u001B[0m     \u001B[38;5;66;03m# we can do this in 1 function call if OpenML does not require the\u001B[39;00m\n\u001B[0;32m    335\u001B[0m     \u001B[38;5;66;03m# specification of the dataset status (i.e., return datasets with a\u001B[39;00m\n\u001B[0;32m    336\u001B[0m     \u001B[38;5;66;03m# given name / version regardless of active, deactivated, etc. )\u001B[39;00m\n\u001B[0;32m    337\u001B[0m     \u001B[38;5;66;03m# TODO: feature request OpenML.\u001B[39;00m\n\u001B[0;32m    338\u001B[0m     url \u001B[38;5;241m+\u001B[39m\u001B[38;5;241m=\u001B[39m \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m/status/deactivated\u001B[39m\u001B[38;5;124m\"\u001B[39m\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\site-packages\\sklearn\\datasets\\_openml.py:243\u001B[0m, in \u001B[0;36m_get_json_content_from_openml_api\u001B[1;34m(url, error_message, data_home, n_retries, delay)\u001B[0m\n\u001B[0;32m    240\u001B[0m         \u001B[38;5;28;01mreturn\u001B[39;00m json\u001B[38;5;241m.\u001B[39mloads(response\u001B[38;5;241m.\u001B[39mread()\u001B[38;5;241m.\u001B[39mdecode(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mutf-8\u001B[39m\u001B[38;5;124m\"\u001B[39m))\n\u001B[0;32m    242\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m--> 243\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m _load_json()\n\u001B[0;32m    244\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m HTTPError \u001B[38;5;28;01mas\u001B[39;00m error:\n\u001B[0;32m    245\u001B[0m     \u001B[38;5;66;03m# 412 is an OpenML specific error code, indicating a generic error\u001B[39;00m\n\u001B[0;32m    246\u001B[0m     \u001B[38;5;66;03m# (e.g., data not found)\u001B[39;00m\n\u001B[0;32m    247\u001B[0m     \u001B[38;5;28;01mif\u001B[39;00m error\u001B[38;5;241m.\u001B[39mcode \u001B[38;5;241m!=\u001B[39m \u001B[38;5;241m412\u001B[39m:\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\site-packages\\sklearn\\datasets\\_openml.py:64\u001B[0m, in \u001B[0;36m_retry_with_clean_cache.<locals>.decorator.<locals>.wrapper\u001B[1;34m(*args, **kw)\u001B[0m\n\u001B[0;32m     62\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m f(\u001B[38;5;241m*\u001B[39margs, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkw)\n\u001B[0;32m     63\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m---> 64\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m f(\u001B[38;5;241m*\u001B[39margs, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkw)\n\u001B[0;32m     65\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m URLError:\n\u001B[0;32m     66\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\site-packages\\sklearn\\datasets\\_openml.py:238\u001B[0m, in \u001B[0;36m_get_json_content_from_openml_api.<locals>._load_json\u001B[1;34m()\u001B[0m\n\u001B[0;32m    235\u001B[0m \u001B[38;5;129m@_retry_with_clean_cache\u001B[39m(url, data_home\u001B[38;5;241m=\u001B[39mdata_home)\n\u001B[0;32m    236\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m_load_json\u001B[39m():\n\u001B[0;32m    237\u001B[0m     \u001B[38;5;28;01mwith\u001B[39;00m closing(\n\u001B[1;32m--> 238\u001B[0m         _open_openml_url(url, data_home, n_retries\u001B[38;5;241m=\u001B[39mn_retries, delay\u001B[38;5;241m=\u001B[39mdelay)\n\u001B[0;32m    239\u001B[0m     ) \u001B[38;5;28;01mas\u001B[39;00m response:\n\u001B[0;32m    240\u001B[0m         \u001B[38;5;28;01mreturn\u001B[39;00m json\u001B[38;5;241m.\u001B[39mloads(response\u001B[38;5;241m.\u001B[39mread()\u001B[38;5;241m.\u001B[39mdecode(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mutf-8\u001B[39m\u001B[38;5;124m\"\u001B[39m))\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\site-packages\\sklearn\\datasets\\_openml.py:170\u001B[0m, in \u001B[0;36m_open_openml_url\u001B[1;34m(openml_path, data_home, n_retries, delay)\u001B[0m\n\u001B[0;32m    163\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[0;32m    164\u001B[0m     \u001B[38;5;66;03m# Create a tmpdir as a subfolder of dir_name where the final file will\u001B[39;00m\n\u001B[0;32m    165\u001B[0m     \u001B[38;5;66;03m# be moved to if the download is successful. This guarantees that the\u001B[39;00m\n\u001B[0;32m    166\u001B[0m     \u001B[38;5;66;03m# renaming operation to the final location is atomic to ensure the\u001B[39;00m\n\u001B[0;32m    167\u001B[0m     \u001B[38;5;66;03m# concurrence safety of the dataset caching mechanism.\u001B[39;00m\n\u001B[0;32m    168\u001B[0m     \u001B[38;5;28;01mwith\u001B[39;00m TemporaryDirectory(\u001B[38;5;28mdir\u001B[39m\u001B[38;5;241m=\u001B[39mdir_name) \u001B[38;5;28;01mas\u001B[39;00m tmpdir:\n\u001B[0;32m    169\u001B[0m         \u001B[38;5;28;01mwith\u001B[39;00m closing(\n\u001B[1;32m--> 170\u001B[0m             _retry_on_network_error(n_retries, delay, req\u001B[38;5;241m.\u001B[39mfull_url)(urlopen)(\n\u001B[0;32m    171\u001B[0m                 req\n\u001B[0;32m    172\u001B[0m             )\n\u001B[0;32m    173\u001B[0m         ) \u001B[38;5;28;01mas\u001B[39;00m fsrc:\n\u001B[0;32m    174\u001B[0m             opener: Callable\n\u001B[0;32m    175\u001B[0m             \u001B[38;5;28;01mif\u001B[39;00m is_gzip_encoded(fsrc):\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\site-packages\\sklearn\\datasets\\_openml.py:100\u001B[0m, in \u001B[0;36m_retry_on_network_error.<locals>.decorator.<locals>.wrapper\u001B[1;34m(*args, **kwargs)\u001B[0m\n\u001B[0;32m     98\u001B[0m \u001B[38;5;28;01mwhile\u001B[39;00m \u001B[38;5;28;01mTrue\u001B[39;00m:\n\u001B[0;32m     99\u001B[0m     \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m--> 100\u001B[0m         \u001B[38;5;28;01mreturn\u001B[39;00m f(\u001B[38;5;241m*\u001B[39margs, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[0;32m    101\u001B[0m     \u001B[38;5;28;01mexcept\u001B[39;00m (URLError, \u001B[38;5;167;01mTimeoutError\u001B[39;00m) \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[0;32m    102\u001B[0m         \u001B[38;5;66;03m# 412 is a specific OpenML error code.\u001B[39;00m\n\u001B[0;32m    103\u001B[0m         \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(e, HTTPError) \u001B[38;5;129;01mand\u001B[39;00m e\u001B[38;5;241m.\u001B[39mcode \u001B[38;5;241m==\u001B[39m \u001B[38;5;241m412\u001B[39m:\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\urllib\\request.py:215\u001B[0m, in \u001B[0;36murlopen\u001B[1;34m(url, data, timeout, cafile, capath, cadefault, context)\u001B[0m\n\u001B[0;32m    213\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[0;32m    214\u001B[0m     opener \u001B[38;5;241m=\u001B[39m _opener\n\u001B[1;32m--> 215\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m opener\u001B[38;5;241m.\u001B[39mopen(url, data, timeout)\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\urllib\\request.py:515\u001B[0m, in \u001B[0;36mOpenerDirector.open\u001B[1;34m(self, fullurl, data, timeout)\u001B[0m\n\u001B[0;32m    512\u001B[0m     req \u001B[38;5;241m=\u001B[39m meth(req)\n\u001B[0;32m    514\u001B[0m sys\u001B[38;5;241m.\u001B[39maudit(\u001B[38;5;124m'\u001B[39m\u001B[38;5;124murllib.Request\u001B[39m\u001B[38;5;124m'\u001B[39m, req\u001B[38;5;241m.\u001B[39mfull_url, req\u001B[38;5;241m.\u001B[39mdata, req\u001B[38;5;241m.\u001B[39mheaders, req\u001B[38;5;241m.\u001B[39mget_method())\n\u001B[1;32m--> 515\u001B[0m response \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_open(req, data)\n\u001B[0;32m    517\u001B[0m \u001B[38;5;66;03m# post-process response\u001B[39;00m\n\u001B[0;32m    518\u001B[0m meth_name \u001B[38;5;241m=\u001B[39m protocol\u001B[38;5;241m+\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m_response\u001B[39m\u001B[38;5;124m\"\u001B[39m\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\urllib\\request.py:532\u001B[0m, in \u001B[0;36mOpenerDirector._open\u001B[1;34m(self, req, data)\u001B[0m\n\u001B[0;32m    529\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m result\n\u001B[0;32m    531\u001B[0m protocol \u001B[38;5;241m=\u001B[39m req\u001B[38;5;241m.\u001B[39mtype\n\u001B[1;32m--> 532\u001B[0m result \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_call_chain(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mhandle_open, protocol, protocol \u001B[38;5;241m+\u001B[39m\n\u001B[0;32m    533\u001B[0m                           \u001B[38;5;124m'\u001B[39m\u001B[38;5;124m_open\u001B[39m\u001B[38;5;124m'\u001B[39m, req)\n\u001B[0;32m    534\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m result:\n\u001B[0;32m    535\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m result\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\urllib\\request.py:492\u001B[0m, in \u001B[0;36mOpenerDirector._call_chain\u001B[1;34m(self, chain, kind, meth_name, *args)\u001B[0m\n\u001B[0;32m    490\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m handler \u001B[38;5;129;01min\u001B[39;00m handlers:\n\u001B[0;32m    491\u001B[0m     func \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mgetattr\u001B[39m(handler, meth_name)\n\u001B[1;32m--> 492\u001B[0m     result \u001B[38;5;241m=\u001B[39m func(\u001B[38;5;241m*\u001B[39margs)\n\u001B[0;32m    493\u001B[0m     \u001B[38;5;28;01mif\u001B[39;00m result \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[0;32m    494\u001B[0m         \u001B[38;5;28;01mreturn\u001B[39;00m result\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\urllib\\request.py:1392\u001B[0m, in \u001B[0;36mHTTPSHandler.https_open\u001B[1;34m(self, req)\u001B[0m\n\u001B[0;32m   1391\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21mhttps_open\u001B[39m(\u001B[38;5;28mself\u001B[39m, req):\n\u001B[1;32m-> 1392\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mdo_open(http\u001B[38;5;241m.\u001B[39mclient\u001B[38;5;241m.\u001B[39mHTTPSConnection, req,\n\u001B[0;32m   1393\u001B[0m                         context\u001B[38;5;241m=\u001B[39m\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_context)\n",
      "File \u001B[1;32mD:\\anaconda\\Lib\\urllib\\request.py:1347\u001B[0m, in \u001B[0;36mAbstractHTTPHandler.do_open\u001B[1;34m(self, http_class, req, **http_conn_args)\u001B[0m\n\u001B[0;32m   1344\u001B[0m         h\u001B[38;5;241m.\u001B[39mrequest(req\u001B[38;5;241m.\u001B[39mget_method(), req\u001B[38;5;241m.\u001B[39mselector, req\u001B[38;5;241m.\u001B[39mdata, headers,\n\u001B[0;32m   1345\u001B[0m                   encode_chunked\u001B[38;5;241m=\u001B[39mreq\u001B[38;5;241m.\u001B[39mhas_header(\u001B[38;5;124m'\u001B[39m\u001B[38;5;124mTransfer-encoding\u001B[39m\u001B[38;5;124m'\u001B[39m))\n\u001B[0;32m   1346\u001B[0m     \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mOSError\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m err: \u001B[38;5;66;03m# timeout error\u001B[39;00m\n\u001B[1;32m-> 1347\u001B[0m         \u001B[38;5;28;01mraise\u001B[39;00m URLError(err)\n\u001B[0;32m   1348\u001B[0m     r \u001B[38;5;241m=\u001B[39m h\u001B[38;5;241m.\u001B[39mgetresponse()\n\u001B[0;32m   1349\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m:\n",
      "\u001B[1;31mURLError\u001B[0m: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)>"
     ]
    }
   ],
   "execution_count": 15
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 第一题：简单的加法函数，以及使用pytest测试代码",
   "id": "4c898f53cabb194f"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "为了说明这部分作业的代码+数据，以及pytest使用，将使用一个实现 add函数 的简单示例。\n",
    "\n",
    "```\n",
    "data/\n",
    "    train-images-idx3-ubyte.gz\n",
    "    train-labels-idx1-ubyte.gz\n",
    "    t10k-images-idx3-ubyte.gz\n",
    "    t10k-labels-idx1-ubyte.gz\n",
    "src/\n",
    "    simple_nn.py\n",
    "tests/\n",
    "    test_simple_nn.py\n",
    "```\n",
    "\n",
    "data/ 目录包含这部分作业所需的数据（MNIST 数据集的副本）；src/ 目录包含实现功能所需的源代码；tests/ 目录包含用于测试实现代码是否正确的代码\n",
    "\n",
    "第一题要求实现 src/目录里 simple_nn.py内的 add函数（这个简单的函数实际上并没有用到，它只是一个帮助熟悉作业结构的示例）。查看 src/simple_nn.py 文件，将找到 add() 函数的定义"
   ],
   "id": "e9c91dfa3c0912a8"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "```python\n",
    "def add(x, y):\n",
    "    \"\"\"一个简单的add函数，以便熟悉自动测试（pytest）\n",
    "\n",
    "    Args:\n",
    "        x (Python数字 或者 numpy array)\n",
    "        y (Python数字 或者 numpy array)\n",
    "\n",
    "    Return:\n",
    "        x+y的和\n",
    "    \"\"\"\n",
    "    ### 你的代码开始\n",
    "    pass\n",
    "    ### 你的代码结束\n",
    "```\n",
    "\n",
    "函数内的文档字符串（docstring）定义了函数应该产生的预期输入和输出（需要养成仔细阅读文档的习惯，很多错误来源就是没有阅读规范）。实现这个函数。你只需将 pass 语句替换为正确的代码即可，即：\n",
    "\n",
    "```python\n",
    "def add(x, y):\n",
    "    \"\"\"一个简单的add函数，以便熟悉自动测试（pytest）\n",
    "\n",
    "    Args:\n",
    "        x (Python数字 或者 numpy array)\n",
    "        y (Python数字 或者 numpy array)\n",
    "\n",
    "    Return:\n",
    "        x+y的和\n",
    "    \"\"\"\n",
    "    ### 你的代码开始\n",
    "    return x + y\n",
    "    ### 你的代码结束\n",
    "```\n",
    "\n",
    "现在可以去src/simple_nn.py里，把add函数里的pass 换成 return x + y"
   ],
   "id": "b968917c2ee80ca7"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "#### 测试代码",
   "id": "919bb56f32b06378"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "现在需要测试一下你的代码是否能正确运行，正确运行才说明实现没问题。\n",
    "\n",
    "在这部分作业中，将使用pytest对代码进行单元测试。在 src/simple_nn.py 文件中 写完 add函数的实现后，去命令行里确保已经激活了homl3环境（conda activate homl3）， 确保homl3环境里安装过了numdifftools和pytest，确定命令行里显示的文件路径在 作业8的目录（这个目录同时有data/, src/和tests/文件夹），然后执行以下命令：\n",
    "\n",
    "python -m pytest -k \"add\"\n",
    "\n",
    "如果一切正常，你会看到类似这样的图片：\n",
    "![测试add通过](../../images/homework/neural_network/p1.png)\n",
    "\n",
    "想看测试如何进行的，可以去查看tests/test_simple_nn.py文件，python -m pytest -k \"add\"指令刚刚运行的是 文件里的test_add() 函数\n",
    "\n",
    "如果错误地实现了某些内容（例如，将上面的 x + y 更改为 x - y），那么测试将会失败，并且 pytest 将会指示相应的测试失败。\n",
    "\n",
    "比如把x+y，换成x-y后，执行python -m pytest -k \"add\"：\n",
    "![测试add不通过](../../images/homework/neural_network/p2.png)"
   ],
   "id": "8cd29340f4d80aac"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "如图所见，将收到一个错误，指示断言失败，然后就可以使用它来返回并调整实现代码。应该能够熟练地阅读和跟踪测试文件，以便更好地理解正确的实现应该如何工作\n",
    "\n",
    "学习正确开发和使用单元测试对于现代软件开发至关重要，希望这次作业帮助了解单元测试在软件开发中的典型用法。\n",
    "\n",
    "当然，这次作业不一定需要编写自己的测试去确保自己实现正确，但应该熟悉如何阅读提供的测试文件，以便了解要实现的函数应该如何运行。但是，也绝对鼓励为自己的实现编写额外的测试。\n",
    "\n",
    "如果习惯通过打印语句调试代码，请注意，pytest 默认会捕获任何输出（隐藏掉测试代码执行的print）。可以通过将 -s 传递给 pytest 来禁用此行为并让测试在所有情况下显示所有输出。"
   ],
   "id": "33304ba8ce802fd2"
  },
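  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "For reference, a pytest test for add() might look roughly like the following (an illustrative sketch of the style, with add() defined inline so the snippet is self-contained; the actual assertions live in tests/test_simple_nn.py and may differ):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def add(x, y):\n",
    "    # Works for plain Python numbers and numpy arrays alike.\n",
    "    return x + y\n",
    "\n",
    "def test_add():\n",
    "    assert add(1, 2) == 3  # plain numbers\n",
    "    # numpy arrays are added elementwise\n",
    "    np.testing.assert_allclose(add(np.array([1.0, 2.0]), np.array([3.0, 4.0])),\n",
    "                               np.array([4.0, 6.0]))\n",
    "```\n",
    "\n",
    "pytest discovers any function whose name starts with test_ and reports a failure whenever one of its assertions is false."
   ],
   "id": "f3a1b2c4d5e6f708"
  },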
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 第二题：用gzip和struct处理压缩文件和二进制数据，加载MNIST数据",
   "id": "ebec5034752b27"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "现在已经熟悉了测试工具pytest，接下来在 src/simple_nn.py 中需要实现的函数上尝试一下：parse_mnist_data() 函数。\n",
    "\n",
    "这个函数也有文档字符串（docstring），请仔细阅读它们。\n",
    "\n",
    "然后，请访问 https://web.archive.org/web/20220509025752/http://yann.lecun.com/exdb/mnist/ 了解 MNIST 数据的二进制格式。然后编写函数读取此类文件，并根据文档字符串中的规范返回 numpy 数组）。建议使用 Python 中的 struct 模块（以及 gzip 模块，当然还有 numpy 本身）来实现此函数。\n",
    "\n",
    "当然可以利用AI搜索这个部分的代码实现，但了解了MNIST数据的二进制格式和gzip，struct的简单使用后，能理解AI产出的代码为什么正确\n",
    "\n",
    "实现函数后，去命令行运行本地单元测试， 同样确保命令行激活了homl3环境，确保路径在作业8目录下（有data/,src/和tests/ 文件夹）， 后面的题不再强调\n",
    "\n",
    "python -m pytest -k \"parse_mnist\""
   ],
   "id": "f22b95e3905b669b"
  },
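  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "To make the file format concrete, here is a minimal sketch of such a parser (an illustration, not the assignment's reference solution; the function name here is made up, and the exact return dtypes and normalization must follow the docstring in src/simple_nn.py). It reads the big-endian IDX headers with struct and the raw pixel and label bytes with numpy:\n",
    "\n",
    "```python\n",
    "import gzip\n",
    "import struct\n",
    "\n",
    "import numpy as np\n",
    "\n",
    "def parse_mnist_sketch(image_filename, label_filename):\n",
    "    # Image file: 16-byte big-endian header (magic, count, rows, cols),\n",
    "    # followed by one uint8 per pixel, row-major.\n",
    "    with gzip.open(image_filename, 'rb') as f:\n",
    "        magic, num, rows, cols = struct.unpack('>IIII', f.read(16))\n",
    "        X = np.frombuffer(f.read(), dtype=np.uint8)\n",
    "        X = X.reshape(num, rows * cols).astype(np.float32) / 255.0\n",
    "    # Label file: 8-byte big-endian header (magic, count),\n",
    "    # followed by one uint8 per label.\n",
    "    with gzip.open(label_filename, 'rb') as f:\n",
    "        magic, num = struct.unpack('>II', f.read(8))\n",
    "        y = np.frombuffer(f.read(), dtype=np.uint8)\n",
    "    return X, y\n",
    "```\n",
    "\n",
    "Whether the pixels should be scaled to [0, 1] (as assumed here) and which dtypes to return are exactly the kind of details the docstring pins down, so check it before reusing anything."
   ],
   "id": "b7c4e2a91d3f5a60"
  },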
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 第三题：Softmax损失",
   "id": "1323562d0cba4cfa"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "在 `src/simple_nn.py` 文件的 `softmax_loss()` 函数中实现 softmax（也称为交叉熵）损失。对于一个可以取值 $ y \\in \\{1, \\ldots, k\\} $ 的多类输出，softmax 损失接收一个对数几率向量 $ z \\in \\mathbb{R}^k $ 和真实类别 $ y \\in \\{1, \\ldots, k\\} $ 作为输入，并返回由以下公式定义的损失：\n",
    "\n",
    "$\\ell_{\\text{softmax}}(z, y) = \\log (\\sum_{i=1}^{k} \\exp z_i) - z_y$\n",
    "\n",
    "对数几率向量z，可以看成被softmax激活之前的值，对公式有疑惑，或者对z的意义有疑惑的，可以参考softmax回归的笔记，并自己推导一下损失公式是否正确\n",
    "\n",
    "请注意，如其文档字符串（docstring）所述，`softmax_loss()` 函数接收一个二维的对数几率数组（即，一批不同样本的 $ k $ 维对数几率）加上一个对应的一维真实标签数组，并应返回整批样本的平均 softmax 损失。请注意，为了正确实现此功能，你不应使用任何循环，而是完全使用 numpy 的向量化操作进行计算（为此设定预期，实现代码可以少到一行代码）。"
   ],
   "id": "3cdb52032ac61c62"
  },
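  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "As a sanity check, the formula vectorizes directly (an illustrative sketch with a made-up name, assuming 0-indexed integer labels): take the log-sum-exp of each row of the logits, subtract the logit of the true class, and average over the batch:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax_loss_sketch(Z, y):\n",
    "    # Z: (batch, k) array of logits; y: (batch,) array of integer labels.\n",
    "    # Per-sample loss is log(sum_i exp(z_i)) - z_y; return the batch mean.\n",
    "    return float(np.mean(np.log(np.exp(Z).sum(axis=1)) - Z[np.arange(Z.shape[0]), y]))\n",
    "```\n",
    "\n",
    "For all-zero logits over $k$ classes this evaluates to $\\log k$, a handy sanity check. Note that this direct form can overflow for large logits; subtracting each row's maximum before exponentiating is the standard stabilization, should the provided tests call for it."
   ],
   "id": "c8d5f3b02e4a6b71"
  },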
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "实现完成后，可以去命令行进行单元测试：python -m pytest -k \"softmax_loss\"",
   "id": "413eec903944c28e"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 第四题：softmax回归小批量梯度下降",
   "id": "a3f4e9cefaaa8dc4"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "在这个问题中，你将实现（线性）softmax 回归的小批量梯度下降）。考虑一个假设函数，该函数通过以下公式将 $ n $ 维输入转换为 $ k $ 维对数几率：\n",
    "\n",
    "$h(x) = \\Theta^T x$\n",
    "\n",
    "其中 $ x \\in \\mathbb{R}^n $ 是输入，$\\Theta \\in \\mathbb{R}^{n \\times k}$ 是模型参数。给定数据集 $\\{(x^{(i)} \\in \\mathbb{R}^n, y^{(i)} \\in \\{1, \\ldots, k\\})\\}$，其中 $ i = 1, \\ldots, m $，softmax 回归相关的优化问题因此由下式给出：\n",
    "\n",
    "$\\text{minimize} \\frac{1}{m} \\sum_{i=1}^{m} \\ell_{\\text{softmax}} (\\Theta^T x^{(i)}, y^{(i)})$\n",
    "\n",
    "线性 softmax 目标的梯度由下式给出，有疑惑的可以结合softmax回归的笔记验证\n",
    "\n",
    "$\\nabla_\\Theta \\ell_{\\text{softmax}} (\\Theta^T x, y) = x(z - e_y)^T$\n",
    "\n",
    "其中\n",
    "\n",
    "$z = \\frac{\\exp(\\Theta^T x)}{1^T \\exp(\\Theta^T x)} = \\text{normalize}(\\exp(\\Theta^T x))$\n",
    "\n",
    "（即 $ z $ 只是归一化的 softmax 概率），并且 $ e_y $ 表示 y 分类的独热编码，即一个所有元素为零，只有第 $ y $ 个位置为 1 的向量。\n",
    "\n",
    "也可以用更紧凑的符号来表示，方便代码实现，即，如果让 $ X \\in \\mathbb{R}^{m \\times n} $ 表示某个 $ m $ 个输入的特征矩阵（整个数据集或一个小批量），$ y \\in \\{1, \\ldots, k\\}^m $ 是对应的标签向量，并且 $ \\ell_{\\text{softmax}} $ 表示平均 softmax 损失，那么\n",
    "\n",
    "$\\nabla_\\Theta \\ell_{\\text{softmax}}(X \\Theta, y) = \\frac{1}{m} X^T (Z - I_y)$\n",
    "\n",
    "其中\n",
    "\n",
    "$Z = \\text{normalize}(\\exp(X \\Theta)) \\quad (\\text{归一化按行应用})$\n",
    "\n",
    "表示对数几率矩阵，而 $ I_y \\in \\mathbb{R}^{m \\times k} $ 表示 $ y $ 中标签的 逐个转成 独热编码，按行连接\n",
    "\n",
    "使用这些梯度，实现 `softmax_regression_epoch()` 函数，该函数使用指定的学习率/步长 $ \\eta $ 和小批量大小 `batch_size` 运行单个轮次（对数据集的一次遍历）。如其文档字符串所述，你的函数应该就地修改 Theta 数组。实现后，请去命令行运行测试。\n",
    "\n",
    "python -m pytest -k \"softmax_regression_epoch\""
   ],
   "id": "67f005120ef5a202"
  },
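  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The compact matrix form above translates almost line for line into numpy. The following is an illustrative sketch under assumed conventions (a made-up function name, 0-indexed labels, Theta of shape (n, k) with a float dtype so the in-place update works); the docstring's signature is authoritative:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax_regression_epoch_sketch(X, y, theta, lr=0.1, batch=100):\n",
    "    # One pass over the data in minibatches of size `batch`;\n",
    "    # theta is updated in place.\n",
    "    for i in range(0, X.shape[0], batch):\n",
    "        Xb, yb = X[i:i + batch], y[i:i + batch]\n",
    "        Z = np.exp(Xb @ theta)\n",
    "        Z /= Z.sum(axis=1, keepdims=True)         # Z = normalize(exp(X Theta)), row-wise\n",
    "        Z[np.arange(Xb.shape[0]), yb] -= 1.0      # Z - I_y\n",
    "        theta -= (lr / Xb.shape[0]) * (Xb.T @ Z)  # Theta -= lr * (1/m) X^T (Z - I_y)\n",
    "```\n",
    "\n",
    "Subtracting 1 at the true-label positions is the vectorized way of forming $Z - I_y$ without ever materializing the one-hot matrix."
   ],
   "id": "d9e6a4c13f5b7c82"
  },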
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "#### 用softmax回归训练MNIST",
   "id": "f0e6a75ce062199d"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "虽然这不包含在测试中，但既然你已经编写了这段代码，你也可以尝试使用 SGD 训练一个完整的 MNIST 线性分类器。为此，你可以使用 src/simple_nn.py 文件中的 train_softmax() 函数（已经编写好了这个函数，所以无需自行编写，但可以查看一下它的功能）。\n",
    "\n",
    "可以使用以下代码了解它的工作原理。作为参考，如下所示，我的实现在 notebook 上运行时间约为 2 秒，测试集错误率为 7.97%。"
   ],
   "id": "fc5478e35ebb74bb"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T09:47:58.231066Z",
     "start_time": "2025-09-02T09:47:53.598981Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import sys\n",
    "sys.path.append(\"src/\")\n",
    "from simple_nn import train_softmax, parse_mnist\n",
    "\n",
    "X_tr, y_tr = parse_mnist(\"data/train-images-idx3-ubyte.gz\",\n",
    "                         \"data/train-labels-idx1-ubyte.gz\")\n",
    "X_te, y_te = parse_mnist(\"data/t10k-images-idx3-ubyte.gz\",\n",
    "                         \"data/t10k-labels-idx1-ubyte.gz\")\n",
    "\n",
    "train_softmax(X_tr, y_tr, X_te, y_te, epochs=10, lr=0.2, batch=100)"
   ],
   "id": "2a7bf41f4f1f5211",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "| Epoch | Train Loss | Train Err | Test Loss | Test Err |\n",
      "|     0 |    0.35134 |   0.10182 |   0.33588 |  0.09400 |\n",
      "|     1 |    0.32142 |   0.09268 |   0.31086 |  0.08730 |\n",
      "|     2 |    0.30802 |   0.08795 |   0.30097 |  0.08550 |\n",
      "|     3 |    0.29987 |   0.08532 |   0.29558 |  0.08370 |\n",
      "|     4 |    0.29415 |   0.08323 |   0.29215 |  0.08230 |\n",
      "|     5 |    0.28981 |   0.08182 |   0.28973 |  0.08090 |\n",
      "|     6 |    0.28633 |   0.08085 |   0.28793 |  0.08080 |\n",
      "|     7 |    0.28345 |   0.07997 |   0.28651 |  0.08040 |\n",
      "|     8 |    0.28100 |   0.07923 |   0.28537 |  0.08010 |\n",
      "|     9 |    0.27887 |   0.07847 |   0.28442 |  0.07970 |\n"
     ]
    }
   ],
   "execution_count": 1
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 在1个隐藏层的神经网络上进行小批量梯度下降",
   "id": "6e72acd86987b24e"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "现在已经为线性分类器编写了SGD，现在考虑一个简单的两层神经网络的情况。具体来说，对于输入 $ x \\in \\mathbb{R}^n $，考虑一个形式如下的两层神经网络（无偏置项）：\n",
    "\n",
    "$ z = W_2^T ReLU(W_1^T x) $\n",
    "\n",
    "其中 $ W_1 \\in \\mathbb{R}^{n \\times d} $ 和 $ W_2 \\in \\mathbb{R}^{d \\times k} $ 表示网络的权重（具有 $ d $ 维隐藏单元），而 $ z \\in \\mathbb{R}^k $ 表示网络输出的对数几率。我们再次使用 softmax/交叉熵损失，这意味着我们要解决以下优化问题：\n",
    "\n",
    "$\\text{minimize } \\frac{1}{W_1, W_2} \\sum_{i=1}^m \\ell_{\\text{softmax}}(W_2^T ReLU(W_1^T x^{(i)}), y^{(i)})$\n",
    "\n",
    "或者，使用矩阵 $ X \\in \\mathbb{R}^{m \\times n} $ 来描述批量形式，这也可以写成：\n",
    "\n",
    "$\\text{minimize } \\ell_{\\text{softmax}}(ReLU(XW_1)W_2, y)$\n",
    "\n",
    "使用链式法则，可以推导出该网络的反向传播更新（为了便于实现，这里提供最终形式）。具体来说，令：\n",
    "\n",
    "$Z_1 \\in \\mathbb{R}^{m \\times d} = ReLU(XW_1)$\n",
    "\n",
    "$G_2 \\in \\mathbb{R}^{m \\times k} = \\text{normalize}(\\exp(Z_1 W_2)) - I_y$\n",
    "\n",
    "$G_1 \\in \\mathbb{R}^{m \\times d} = 1\\{Z_1 > 0\\} \\circ (G_2 W_2^T)$\n",
    "\n",
    "其中 $ 1\\{Z_1 > 0\\} $ 是一个二进制矩阵，其条目根据 $ Z_1 $ 中的每个项是否严格为正而等于零或一，而 $\\circ$ 表示逐元素乘法。那么目标的梯度由下式给出：\n",
    "\n",
    "$\\nabla_{W_1} \\ell_{\\text{softmax}}(ReLU(XW_1)W_2, y) = \\frac{1}{m} X^T G_1$\n",
    "\n",
    "$\\nabla_{W_2} \\ell_{\\text{softmax}}(ReLU(XW_1)W_2, y) = \\frac{1}{m} Z_1^T G_2$\n",
    "\n",
    "**注意：** 如果这些精确方程的细节对你来说有点神秘，不必太担心。这些只是两层ReLU网络的标准反向传播方程：$ Z_1 $ 项只是计算\"前向\"传播，而 $ G_2 $ 和 $ G_1 $ 项表示反向传播。但是更新的精确形式可能会因你使用的神经网络符号、制定损失函数的具体方式、是否之前以矩阵形式推导过这些等因素而有所不同。（毕竟，在某种程度上，深度学习系统（比如tensorflow）的整个重点是我们不需要费心进行这些手动计算）。\n"
   ],
   "id": "f6a35e81834a15a2"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "使用这些梯度，现在在 src/simple_nn.py 文件中编写 nn_epoch() 函数。与上一个问题一样，你的解决方案应该修改 W1 和 W2 数组。实现该函数后，运行以下测试。请务必使用上述表达式所示的矩阵运算来实现该函数：这比尝试使用循环更快、更高效（并且所需的代码也更少）。\n",
    "\n",
    "实现完成后，去命令运行单元测试：python -m pytest -k \"nn_epoch\""
   ],
   "id": "53c19e9d5f2d9c32"
  },
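  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "Put together, the forward pass and the $G_2$/$G_1$ terms above give an epoch loop of roughly the following shape (an illustrative sketch under assumed conventions, with a made-up name, 0-indexed labels, and float arrays updated in place; the docstring of nn_epoch() is authoritative):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def nn_epoch_sketch(X, y, W1, W2, lr=0.1, batch=100):\n",
    "    # One epoch of minibatch SGD on the two-layer ReLU network;\n",
    "    # W1 (n x d) and W2 (d x k) are updated in place.\n",
    "    for i in range(0, X.shape[0], batch):\n",
    "        Xb, yb = X[i:i + batch], y[i:i + batch]\n",
    "        m = Xb.shape[0]\n",
    "        Z1 = np.maximum(Xb @ W1, 0)          # Z1 = ReLU(X W1)\n",
    "        G2 = np.exp(Z1 @ W2)\n",
    "        G2 /= G2.sum(axis=1, keepdims=True)\n",
    "        G2[np.arange(m), yb] -= 1.0          # G2 = normalize(exp(Z1 W2)) - I_y\n",
    "        G1 = (Z1 > 0) * (G2 @ W2.T)          # G1 = 1{Z1 > 0} o (G2 W2^T)\n",
    "        W1 -= (lr / m) * (Xb.T @ G1)\n",
    "        W2 -= (lr / m) * (Z1.T @ G2)\n",
    "```\n",
    "\n",
    "Note that $Z_1$ is computed once and reused both for the ReLU mask in $G_1$ and for the $W_2$ gradient, which is exactly why the matrix form needs so little code."
   ],
   "id": "e0f7b5d24a6c8d93"
  },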
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "#### 训练神经网络",
   "id": "ff593cbba822cafb"
  },
  {
   "metadata": {
    "jupyter": {
     "is_executing": true
    },
    "ExecuteTime": {
     "start_time": "2025-09-02T10:12:53.863204Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import sys\n",
    "\n",
    "\n",
    "import importlib\n",
    "import simple_nn\n",
    "importlib.reload(simple_nn) # 重新载入simple_nn， 防止刚才的训练代码产生了缓存，影响了simple_nn\n",
    "\n",
    "sys.path.append(\"src/\")\n",
    "from simple_nn import train_nn, parse_mnist\n",
    "\n",
    "X_tr, y_tr = parse_mnist(\"data/train-images-idx3-ubyte.gz\",\n",
    "                         \"data/train-labels-idx1-ubyte.gz\")\n",
    "X_te, y_te = parse_mnist(\"data/t10k-images-idx3-ubyte.gz\",\n",
    "                         \"data/t10k-labels-idx1-ubyte.gz\")\n",
    "train_nn(X_tr, y_tr, X_te, y_te, hidden_dim=400, epochs=20, lr=0.2)"
   ],
   "id": "3f41ff4fe2309f37",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "| Epoch | Train Loss | Train Err | Test Loss | Test Err |\n",
      "|     0 |    0.15324 |   0.04697 |   0.16306 |  0.04920 |\n",
      "|     1 |    0.09894 |   0.02947 |   0.11643 |  0.03590 |\n",
      "|     2 |    0.07388 |   0.02117 |   0.09733 |  0.03140 |\n",
      "|     3 |    0.05906 |   0.01682 |   0.08741 |  0.02900 |\n",
      "|     4 |    0.04810 |   0.01347 |   0.08068 |  0.02610 |\n",
      "|     5 |    0.04017 |   0.01067 |   0.07667 |  0.02480 |\n",
      "|     6 |    0.03467 |   0.00910 |   0.07428 |  0.02380 |\n",
      "|     7 |    0.03052 |   0.00790 |   0.07278 |  0.02330 |\n"
     ]
    }
   ],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "作为参考，我的实现花了30多秒训练，最终在mnist的测试集达到了1.93%的错误率，只用了大概20多行代码",
   "id": "24c53df020c227d7"
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
