{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# 05. Your First Neural Network Image Classifier\n",
        "\n",
        "## Learning Objectives\n",
        "- Understand the basic structure of a neural network\n",
        "- Learn about fully connected (dense) layers\n",
        "- Understand forward propagation and backpropagation\n",
        "- Perform handwritten digit recognition on the MNIST dataset\n",
        "- Learn model training, validation, and testing\n",
        "- Visualize the training process and results\n",
        "\n",
        "## What Is a Neural Network?\n",
        "\n",
        "A neural network is a computational model built from many neurons (nodes), loosely inspired by how the human brain works. Each neuron receives inputs, computes a weighted sum, and passes the result through an activation function to produce an output.\n",
        "\n",
        "**Basic structure:**\n",
        "- **Input layer**: receives the raw data\n",
        "- **Hidden layers**: extract and transform features\n",
        "- **Output layer**: produces the final prediction\n",
        "\n",
        "**Forward propagation:**\n",
        "$$z^{(l)} = W^{(l)}a^{(l-1)} + b^{(l)}$$\n",
        "$$a^{(l)} = \\sigma(z^{(l)})$$\n",
        "\n",
        "where:\n",
        "- $W^{(l)}$ is the weight matrix of layer $l$\n",
        "- $b^{(l)}$ is the bias vector of layer $l$\n",
        "- $\\sigma$ is the activation function\n",
        "- $a^{(l)}$ is the activation of layer $l$\n"
      ]
    },
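    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The forward-propagation formulas above can be sketched in a few lines of NumPy. This is an illustrative toy, not the model built later in this notebook: the layer sizes and the sigmoid activation are assumptions chosen just for the demo.\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def sigmoid(z):\n",
        "    return 1 / (1 + np.exp(-z))\n",
        "\n",
        "rng = np.random.default_rng(0)\n",
        "a0 = rng.normal(size=(784, 1))           # input activation a^(0)\n",
        "W1 = rng.normal(size=(128, 784)) * 0.01  # weight matrix W^(1)\n",
        "b1 = np.zeros((128, 1))                  # bias vector b^(1)\n",
        "\n",
        "z1 = W1 @ a0 + b1   # z^(1) = W^(1) a^(0) + b^(1)\n",
        "a1 = sigmoid(z1)    # a^(1) = sigma(z^(1))\n",
        "print(a1.shape)     # (128, 1)\n",
        "```\n"
      ]
    },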
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.optim as optim\n",
        "import torch.nn.functional as F\n",
        "from torch.utils.data import DataLoader, TensorDataset\n",
        "import torchvision\n",
        "import torchvision.transforms as transforms\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "import seaborn as sns\n",
        "from sklearn.metrics import classification_report, confusion_matrix\n",
        "import time\n",
        "from tqdm import tqdm\n",
        "\n",
        "# Matplotlib font settings (SimHei renders CJK glyphs; harmless to skip on systems without it)\n",
        "plt.rcParams['font.sans-serif'] = ['SimHei']\n",
        "plt.rcParams['axes.unicode_minus'] = False\n",
        "\n",
        "# Set random seeds for reproducibility\n",
        "torch.manual_seed(42)\n",
        "np.random.seed(42)\n",
        "\n",
        "print(f\"PyTorch version: {torch.__version__}\")\n",
        "print(f\"CUDA available: {torch.cuda.is_available()}\")\n",
        "\n",
        "# Select the device\n",
        "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
        "print(f\"Using device: {device}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 1. Data Preparation - The MNIST Handwritten Digit Dataset\n",
        "\n",
        "MNIST is a classic handwritten digit recognition dataset with 60,000 training samples and 10,000 test samples; each sample is a 28x28-pixel grayscale image.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Define the data transforms\n",
        "transform = transforms.Compose([\n",
        "    transforms.ToTensor(),  # convert to a tensor and scale to [0, 1]\n",
        "    transforms.Normalize((0.5,), (0.5,))  # normalize to [-1, 1]\n",
        "])\n",
        "\n",
        "# Download and load the MNIST dataset\n",
        "print(\"Downloading the MNIST dataset...\")\n",
        "train_dataset = torchvision.datasets.MNIST(\n",
        "    root='./data', \n",
        "    train=True, \n",
        "    download=True, \n",
        "    transform=transform\n",
        ")\n",
        "\n",
        "test_dataset = torchvision.datasets.MNIST(\n",
        "    root='./data', \n",
        "    train=False, \n",
        "    download=True, \n",
        "    transform=transform\n",
        ")\n",
        "\n",
        "# Create the data loaders\n",
        "batch_size = 64\n",
        "train_loader = DataLoader(\n",
        "    train_dataset, \n",
        "    batch_size=batch_size, \n",
        "    shuffle=True,\n",
        "    num_workers=2\n",
        ")\n",
        "\n",
        "test_loader = DataLoader(\n",
        "    test_dataset, \n",
        "    batch_size=batch_size, \n",
        "    shuffle=False,\n",
        "    num_workers=2\n",
        ")\n",
        "\n",
        "print(f\"Training set size: {len(train_dataset)}\")\n",
        "print(f\"Test set size: {len(test_dataset)}\")\n",
        "print(f\"Batch size: {batch_size}\")\n",
        "print(f\"Training batches: {len(train_loader)}\")\n",
        "print(f\"Test batches: {len(test_loader)}\")\n",
        "\n",
        "# Inspect the data shapes\n",
        "images, labels = next(iter(train_loader))\n",
        "print(f\"Image tensor shape: {images.shape}\")\n",
        "print(f\"Label tensor shape: {labels.shape}\")\n",
        "print(f\"Pixel value range: [{images.min():.3f}, {images.max():.3f}]\")\n",
        "\n",
        "# Visualize a few samples\n",
        "fig, axes = plt.subplots(2, 5, figsize=(12, 6))\n",
        "fig.suptitle('MNIST handwritten digit samples', fontsize=16)\n",
        "\n",
        "for i in range(10):\n",
        "    row = i // 5\n",
        "    col = i % 5\n",
        "    \n",
        "    # Un-normalize the image for display\n",
        "    img = images[i].squeeze()\n",
        "    img = (img + 1) / 2  # map from [-1, 1] back to [0, 1]\n",
        "    \n",
        "    axes[row, col].imshow(img, cmap='gray')\n",
        "    axes[row, col].set_title(f'Label: {labels[i].item()}')\n",
        "    axes[row, col].axis('off')\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# Class distribution (read the labels directly from the dataset tensors;\n",
        "# this avoids iterating and transforming every image just to get its label)\n",
        "train_labels = train_dataset.targets.numpy()\n",
        "test_labels = test_dataset.targets.numpy()\n",
        "\n",
        "print(\"\\nClass distribution:\")\n",
        "print(\"Training set:\", np.bincount(train_labels))\n",
        "print(\"Test set:\", np.bincount(test_labels))\n",
        "\n",
        "# Visualize the class distribution\n",
        "plt.figure(figsize=(12, 4))\n",
        "\n",
        "plt.subplot(1, 2, 1)\n",
        "plt.hist(train_labels, bins=10, alpha=0.7, color='blue', edgecolor='black')\n",
        "plt.title('Training set class distribution')\n",
        "plt.xlabel('Digit')\n",
        "plt.ylabel('Sample count')\n",
        "plt.xticks(range(10))\n",
        "\n",
        "plt.subplot(1, 2, 2)\n",
        "plt.hist(test_labels, bins=10, alpha=0.7, color='red', edgecolor='black')\n",
        "plt.title('Test set class distribution')\n",
        "plt.xlabel('Digit')\n",
        "plt.ylabel('Sample count')\n",
        "plt.xticks(range(10))\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 2. Building the Neural Network Models\n",
        "\n",
        "Now let's build a multilayer perceptron (MLP) to classify handwritten digits. We'll start with a simple network and then increase its depth.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# A simple neural network\n",
        "class SimpleMLP(nn.Module):\n",
        "    \"\"\"A simple multilayer perceptron.\"\"\"\n",
        "    \n",
        "    def __init__(self, input_size=784, hidden_size=128, num_classes=10):\n",
        "        super().__init__()\n",
        "        \n",
        "        # Network layers\n",
        "        self.fc1 = nn.Linear(input_size, hidden_size)  # input -> hidden\n",
        "        self.fc2 = nn.Linear(hidden_size, num_classes)  # hidden -> output\n",
        "        \n",
        "        # Activation and regularization\n",
        "        self.relu = nn.ReLU()\n",
        "        self.dropout = nn.Dropout(0.2)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        # Flatten the input images: (batch_size, 1, 28, 28) -> (batch_size, 784)\n",
        "        x = x.view(x.size(0), -1)\n",
        "        \n",
        "        # Forward pass\n",
        "        x = self.fc1(x)      # linear transformation\n",
        "        x = self.relu(x)     # ReLU activation\n",
        "        x = self.dropout(x)  # dropout regularization\n",
        "        \n",
        "        x = self.fc2(x)      # output layer (raw logits)\n",
        "        \n",
        "        return x\n",
        "\n",
        "# A deeper neural network\n",
        "class DeepMLP(nn.Module):\n",
        "    \"\"\"A deeper multilayer perceptron.\"\"\"\n",
        "    \n",
        "    def __init__(self, input_size=784, hidden_sizes=[512, 256, 128], num_classes=10):\n",
        "        super().__init__()\n",
        "        \n",
        "        # Build the hidden layers\n",
        "        layers = []\n",
        "        prev_size = input_size\n",
        "        \n",
        "        for hidden_size in hidden_sizes:\n",
        "            layers.extend([\n",
        "                nn.Linear(prev_size, hidden_size),\n",
        "                nn.ReLU(),\n",
        "                nn.Dropout(0.3)\n",
        "            ])\n",
        "            prev_size = hidden_size\n",
        "        \n",
        "        # Output layer\n",
        "        layers.append(nn.Linear(prev_size, num_classes))\n",
        "        \n",
        "        # Combine all layers\n",
        "        self.network = nn.Sequential(*layers)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        # Flatten the input\n",
        "        x = x.view(x.size(0), -1)\n",
        "        return self.network(x)\n",
        "\n",
        "# Create model instances\n",
        "simple_model = SimpleMLP(input_size=784, hidden_size=128, num_classes=10)\n",
        "deep_model = DeepMLP(input_size=784, hidden_sizes=[512, 256, 128], num_classes=10)\n",
        "\n",
        "# Move the models to the device\n",
        "simple_model = simple_model.to(device)\n",
        "deep_model = deep_model.to(device)\n",
        "\n",
        "print(\"Model architectures:\")\n",
        "print(\"\\nSimple MLP:\")\n",
        "print(simple_model)\n",
        "\n",
        "print(\"\\nDeep MLP:\")\n",
        "print(deep_model)\n",
        "\n",
        "# Count trainable parameters\n",
        "def count_parameters(model):\n",
        "    return sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
        "\n",
        "simple_params = count_parameters(simple_model)\n",
        "deep_params = count_parameters(deep_model)\n",
        "\n",
        "print(f\"\\nParameter counts:\")\n",
        "print(f\"Simple MLP: {simple_params:,} parameters\")\n",
        "print(f\"Deep MLP: {deep_params:,} parameters\")\n",
        "\n",
        "# Test a forward pass\n",
        "sample_input = torch.randn(4, 1, 28, 28).to(device)  # a batch of 4 samples\n",
        "\n",
        "with torch.no_grad():\n",
        "    simple_output = simple_model(sample_input)\n",
        "    deep_output = deep_model(sample_input)\n",
        "\n",
        "print(f\"\\nForward-pass test:\")\n",
        "print(f\"Input shape: {sample_input.shape}\")\n",
        "print(f\"Simple MLP output shape: {simple_output.shape}\")\n",
        "print(f\"Deep MLP output shape: {deep_output.shape}\")\n",
        "\n",
        "# Visualize the network structures\n",
        "def visualize_model_structure():\n",
        "    \"\"\"Draw simple box diagrams of both networks.\"\"\"\n",
        "    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n",
        "    \n",
        "    # Simple MLP\n",
        "    layers_simple = ['Input\\n(784)', 'Hidden\\n(128)', 'Output\\n(10)']\n",
        "    connections_simple = [(0, 1), (1, 2)]\n",
        "    \n",
        "    ax1.set_xlim(-0.5, 2.5)\n",
        "    ax1.set_ylim(-0.5, 2.5)\n",
        "    \n",
        "    for i, layer in enumerate(layers_simple):\n",
        "        ax1.text(i, 1, layer, ha='center', va='center', \n",
        "                bbox=dict(boxstyle=\"round,pad=0.3\", facecolor=\"lightblue\"),\n",
        "                fontsize=12, fontweight='bold')\n",
        "    \n",
        "    for start, end in connections_simple:\n",
        "        ax1.arrow(start, 0.7, end-start, 0, head_width=0.05, head_length=0.05, \n",
        "                 fc='red', ec='red')\n",
        "    \n",
        "    ax1.set_title('Simple MLP', fontsize=14, fontweight='bold')\n",
        "    ax1.axis('off')\n",
        "    \n",
        "    # Deep MLP\n",
        "    layers_deep = ['Input\\n(784)', 'Hidden 1\\n(512)', 'Hidden 2\\n(256)', \n",
        "                   'Hidden 3\\n(128)', 'Output\\n(10)']\n",
        "    connections_deep = [(0, 1), (1, 2), (2, 3), (3, 4)]\n",
        "    \n",
        "    ax2.set_xlim(-0.5, 4.5)\n",
        "    ax2.set_ylim(-0.5, 2.5)\n",
        "    \n",
        "    for i, layer in enumerate(layers_deep):\n",
        "        ax2.text(i, 1, layer, ha='center', va='center', \n",
        "                bbox=dict(boxstyle=\"round,pad=0.3\", facecolor=\"lightgreen\"),\n",
        "                fontsize=10, fontweight='bold')\n",
        "    \n",
        "    for start, end in connections_deep:\n",
        "        ax2.arrow(start, 0.7, end-start, 0, head_width=0.05, head_length=0.05, \n",
        "                 fc='red', ec='red')\n",
        "    \n",
        "    ax2.set_title('Deep MLP', fontsize=14, fontweight='bold')\n",
        "    ax2.axis('off')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "visualize_model_structure()\n"
      ]
    },
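    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Note that both models return raw logits: there is no softmax in `forward`. That is deliberate, because `nn.CrossEntropyLoss` (used in the training code below) applies log-softmax internally. A minimal sketch of the equivalence (the tensor values here are arbitrary, chosen only for illustration):\n",
        "\n",
        "```python\n",
        "import torch\n",
        "import torch.nn.functional as F\n",
        "\n",
        "logits = torch.tensor([[2.0, 0.5, -1.0]])  # raw model output for one sample\n",
        "target = torch.tensor([0])                 # true class index\n",
        "\n",
        "# cross_entropy == nll_loss applied to log_softmax(logits)\n",
        "loss_a = F.cross_entropy(logits, target)\n",
        "loss_b = F.nll_loss(F.log_softmax(logits, dim=1), target)\n",
        "print(torch.allclose(loss_a, loss_b))  # True\n",
        "```\n",
        "\n",
        "Applying a softmax in `forward` and then passing the result to `nn.CrossEntropyLoss` is a common bug: the loss would be computed on doubly-normalized values.\n"
      ]
    },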
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 3. Training and Evaluation Functions\n",
        "\n",
        "Next, let's define the training and evaluation functions that will train the models and monitor their performance.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Training function\n",
        "def train_model(model, train_loader, criterion, optimizer, device, epoch):\n",
        "    \"\"\"Train for one epoch.\"\"\"\n",
        "    model.train()  # switch to training mode\n",
        "    running_loss = 0.0\n",
        "    correct = 0\n",
        "    total = 0\n",
        "    \n",
        "    # Show a tqdm progress bar\n",
        "    pbar = tqdm(train_loader, desc=f'Epoch {epoch+1}')\n",
        "    \n",
        "    for batch_idx, (data, target) in enumerate(pbar):\n",
        "        # Move the batch to the device\n",
        "        data, target = data.to(device), target.to(device)\n",
        "        \n",
        "        # Zero the gradients\n",
        "        optimizer.zero_grad()\n",
        "        \n",
        "        # Forward pass\n",
        "        output = model(data)\n",
        "        loss = criterion(output, target)\n",
        "        \n",
        "        # Backward pass and parameter update\n",
        "        loss.backward()\n",
        "        optimizer.step()\n",
        "        \n",
        "        # Accumulate statistics\n",
        "        running_loss += loss.item()\n",
        "        _, predicted = torch.max(output, 1)\n",
        "        total += target.size(0)\n",
        "        correct += (predicted == target).sum().item()\n",
        "        \n",
        "        # Update the progress bar\n",
        "        pbar.set_postfix({\n",
        "            'Loss': f'{loss.item():.4f}',\n",
        "            'Acc': f'{100.*correct/total:.2f}%'\n",
        "        })\n",
        "    \n",
        "    epoch_loss = running_loss / len(train_loader)\n",
        "    epoch_acc = 100. * correct / total\n",
        "    \n",
        "    return epoch_loss, epoch_acc\n",
        "\n",
        "# Evaluation function\n",
        "def evaluate_model(model, test_loader, criterion, device):\n",
        "    \"\"\"Evaluate the model on a dataset.\"\"\"\n",
        "    model.eval()  # switch to evaluation mode\n",
        "    test_loss = 0.0\n",
        "    correct = 0\n",
        "    total = 0\n",
        "    all_predictions = []\n",
        "    all_targets = []\n",
        "    \n",
        "    with torch.no_grad():  # no gradients needed; saves memory\n",
        "        for data, target in tqdm(test_loader, desc='Evaluating'):\n",
        "            data, target = data.to(device), target.to(device)\n",
        "            \n",
        "            output = model(data)\n",
        "            loss = criterion(output, target)\n",
        "            \n",
        "            test_loss += loss.item()\n",
        "            _, predicted = torch.max(output, 1)\n",
        "            total += target.size(0)\n",
        "            correct += (predicted == target).sum().item()\n",
        "            \n",
        "            # Collect predictions for later analysis\n",
        "            all_predictions.extend(predicted.cpu().numpy())\n",
        "            all_targets.extend(target.cpu().numpy())\n",
        "    \n",
        "    test_loss /= len(test_loader)\n",
        "    test_acc = 100. * correct / total\n",
        "    \n",
        "    return test_loss, test_acc, all_predictions, all_targets\n",
        "\n",
        "# Full training loop\n",
        "def train_complete_model(model, train_loader, test_loader, num_epochs=10, \n",
        "                        learning_rate=0.001, model_name=\"Model\"):\n",
        "    \"\"\"Run the full training loop.\"\"\"\n",
        "    \n",
        "    # Loss function and optimizer\n",
        "    criterion = nn.CrossEntropyLoss()\n",
        "    optimizer = optim.Adam(model.parameters(), lr=learning_rate)\n",
        "    \n",
        "    # Training history\n",
        "    train_losses = []\n",
        "    train_accuracies = []\n",
        "    test_losses = []\n",
        "    test_accuracies = []\n",
        "    \n",
        "    print(f\"Starting training: {model_name}\")\n",
        "    print(f\"Settings: Epochs={num_epochs}, LR={learning_rate}\")\n",
        "    print(\"=\" * 60)\n",
        "    \n",
        "    start_time = time.time()\n",
        "    \n",
        "    for epoch in range(num_epochs):\n",
        "        # Train\n",
        "        train_loss, train_acc = train_model(\n",
        "            model, train_loader, criterion, optimizer, device, epoch\n",
        "        )\n",
        "        \n",
        "        # Evaluate\n",
        "        test_loss, test_acc, _, _ = evaluate_model(\n",
        "            model, test_loader, criterion, device\n",
        "        )\n",
        "        \n",
        "        # Record history\n",
        "        train_losses.append(train_loss)\n",
        "        train_accuracies.append(train_acc)\n",
        "        test_losses.append(test_loss)\n",
        "        test_accuracies.append(test_acc)\n",
        "        \n",
        "        # Print the results\n",
        "        print(f'Epoch {epoch+1:2d}/{num_epochs}: '\n",
        "              f'Train Loss: {train_loss:.4f}, Train Acc: {train_acc:.2f}% | '\n",
        "              f'Test Loss: {test_loss:.4f}, Test Acc: {test_acc:.2f}%')\n",
        "    \n",
        "    training_time = time.time() - start_time\n",
        "    print(f\"\\n{model_name} training finished!\")\n",
        "    print(f\"Total training time: {training_time:.2f}s\")\n",
        "    print(f\"Final test accuracy: {test_accuracies[-1]:.2f}%\")\n",
        "    \n",
        "    return {\n",
        "        'train_losses': train_losses,\n",
        "        'train_accuracies': train_accuracies,\n",
        "        'test_losses': test_losses,\n",
        "        'test_accuracies': test_accuracies,\n",
        "        'training_time': training_time,\n",
        "        'final_test_acc': test_accuracies[-1]\n",
        "    }\n",
        "\n",
        "# Visualize training results\n",
        "def plot_training_results(results, model_name):\n",
        "    \"\"\"Plot the loss and accuracy history of one training run.\"\"\"\n",
        "    fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 10))\n",
        "    \n",
        "    epochs = range(1, len(results['train_losses']) + 1)\n",
        "    \n",
        "    # Loss curves\n",
        "    ax1.plot(epochs, results['train_losses'], 'b-', label='Train loss', linewidth=2)\n",
        "    ax1.plot(epochs, results['test_losses'], 'r-', label='Test loss', linewidth=2)\n",
        "    ax1.set_title(f'{model_name} - Loss')\n",
        "    ax1.set_xlabel('Epoch')\n",
        "    ax1.set_ylabel('Loss')\n",
        "    ax1.legend()\n",
        "    ax1.grid(True, alpha=0.3)\n",
        "    \n",
        "    # Accuracy curves\n",
        "    ax2.plot(epochs, results['train_accuracies'], 'b-', label='Train accuracy', linewidth=2)\n",
        "    ax2.plot(epochs, results['test_accuracies'], 'r-', label='Test accuracy', linewidth=2)\n",
        "    ax2.set_title(f'{model_name} - Accuracy')\n",
        "    ax2.set_xlabel('Epoch')\n",
        "    ax2.set_ylabel('Accuracy (%)')\n",
        "    ax2.legend()\n",
        "    ax2.grid(True, alpha=0.3)\n",
        "    \n",
        "    # Train vs. test loss\n",
        "    ax3.scatter(results['train_losses'], results['test_losses'], \n",
        "               c=epochs, cmap='viridis', s=50, alpha=0.7)\n",
        "    ax3.plot([0, max(max(results['train_losses']), max(results['test_losses']))], \n",
        "             [0, max(max(results['train_losses']), max(results['test_losses']))], \n",
        "             'k--', alpha=0.5)\n",
        "    ax3.set_title(f'{model_name} - Train vs. test loss')\n",
        "    ax3.set_xlabel('Train loss')\n",
        "    ax3.set_ylabel('Test loss')\n",
        "    ax3.grid(True, alpha=0.3)\n",
        "    \n",
        "    # Train vs. test accuracy\n",
        "    ax4.scatter(results['train_accuracies'], results['test_accuracies'], \n",
        "               c=epochs, cmap='viridis', s=50, alpha=0.7)\n",
        "    ax4.plot([0, 100], [0, 100], 'k--', alpha=0.5)\n",
        "    ax4.set_title(f'{model_name} - Train vs. test accuracy')\n",
        "    ax4.set_xlabel('Train accuracy (%)')\n",
        "    ax4.set_ylabel('Test accuracy (%)')\n",
        "    ax4.grid(True, alpha=0.3)\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "print(\"Training and evaluation functions defined!\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 4. Training the Simple MLP\n",
        "\n",
        "Now let's train our first neural network!\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Train the simple MLP\n",
        "print(\"Training the simple MLP...\")\n",
        "simple_results = train_complete_model(\n",
        "    model=simple_model,\n",
        "    train_loader=train_loader,\n",
        "    test_loader=test_loader,\n",
        "    num_epochs=10,\n",
        "    learning_rate=0.001,\n",
        "    model_name=\"Simple MLP\"\n",
        ")\n",
        "\n",
        "# Visualize the training results\n",
        "plot_training_results(simple_results, \"Simple MLP\")\n",
        "\n",
        "# Collect the final predictions\n",
        "print(\"\\nCollecting final predictions from the simple MLP...\")\n",
        "_, _, simple_predictions, simple_targets = evaluate_model(\n",
        "    simple_model, test_loader, nn.CrossEntropyLoss(), device\n",
        ")\n",
        "\n",
        "# Compute the confusion matrix\n",
        "simple_cm = confusion_matrix(simple_targets, simple_predictions)\n",
        "\n",
        "# Plot the confusion matrix\n",
        "plt.figure(figsize=(10, 8))\n",
        "sns.heatmap(simple_cm, annot=True, fmt='d', cmap='Blues',\n",
        "            xticklabels=range(10), yticklabels=range(10))\n",
        "plt.title('Simple MLP - Confusion Matrix')\n",
        "plt.xlabel('Predicted label')\n",
        "plt.ylabel('True label')\n",
        "plt.show()\n",
        "\n",
        "# Print the classification report\n",
        "print(\"\\nSimple MLP classification report:\")\n",
        "print(classification_report(simple_targets, simple_predictions, \n",
        "                          target_names=[f'Digit {i}' for i in range(10)]))\n",
        "\n",
        "# Show some misclassified samples\n",
        "def show_prediction_errors(model, test_loader, device, num_errors=10):\n",
        "    \"\"\"Display misclassified test samples.\"\"\"\n",
        "    model.eval()\n",
        "    errors = []\n",
        "    \n",
        "    with torch.no_grad():\n",
        "        for data, target in test_loader:\n",
        "            data, target = data.to(device), target.to(device)\n",
        "            output = model(data)\n",
        "            _, predicted = torch.max(output, 1)\n",
        "            \n",
        "            # Keep the misclassified samples\n",
        "            wrong_mask = predicted != target\n",
        "            if wrong_mask.any():\n",
        "                wrong_data = data[wrong_mask]\n",
        "                wrong_targets = target[wrong_mask]\n",
        "                wrong_predictions = predicted[wrong_mask]\n",
        "                \n",
        "                for i in range(len(wrong_data)):\n",
        "                    errors.append({\n",
        "                        'image': wrong_data[i].cpu(),\n",
        "                        'true': wrong_targets[i].cpu().item(),\n",
        "                        'pred': wrong_predictions[i].cpu().item()\n",
        "                    })\n",
        "                    \n",
        "                    if len(errors) >= num_errors:\n",
        "                        break\n",
        "            \n",
        "            if len(errors) >= num_errors:\n",
        "                break\n",
        "    \n",
        "    # Visualize the errors (guard against collecting fewer than requested)\n",
        "    fig, axes = plt.subplots(2, 5, figsize=(15, 6))\n",
        "    fig.suptitle('Misclassified samples', fontsize=16)\n",
        "    \n",
        "    for i in range(min(len(errors), 10)):\n",
        "        row = i // 5\n",
        "        col = i % 5\n",
        "        \n",
        "        img = errors[i]['image'].squeeze()\n",
        "        img = (img + 1) / 2  # un-normalize\n",
        "        \n",
        "        axes[row, col].imshow(img, cmap='gray')\n",
        "        axes[row, col].set_title(f'True: {errors[i][\"true\"]}, Pred: {errors[i][\"pred\"]}')\n",
        "        axes[row, col].axis('off')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "print(\"\\nMisclassified samples from the simple MLP:\")\n",
        "show_prediction_errors(simple_model, test_loader, device)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 5. Training the Deep MLP\n",
        "\n",
        "Now let's train the deeper network and see whether it performs better.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Train the deep MLP\n",
        "print(\"Training the deep MLP...\")\n",
        "deep_results = train_complete_model(\n",
        "    model=deep_model,\n",
        "    train_loader=train_loader,\n",
        "    test_loader=test_loader,\n",
        "    num_epochs=10,\n",
        "    learning_rate=0.001,\n",
        "    model_name=\"Deep MLP\"\n",
        ")\n",
        "\n",
        "# Visualize the training results\n",
        "plot_training_results(deep_results, \"Deep MLP\")\n",
        "\n",
        "# Collect the final predictions\n",
        "print(\"\\nCollecting final predictions from the deep MLP...\")\n",
        "_, _, deep_predictions, deep_targets = evaluate_model(\n",
        "    deep_model, test_loader, nn.CrossEntropyLoss(), device\n",
        ")\n",
        "\n",
        "# Compute the confusion matrix\n",
        "deep_cm = confusion_matrix(deep_targets, deep_predictions)\n",
        "\n",
        "# Plot the confusion matrix\n",
        "plt.figure(figsize=(10, 8))\n",
        "sns.heatmap(deep_cm, annot=True, fmt='d', cmap='Greens',\n",
        "            xticklabels=range(10), yticklabels=range(10))\n",
        "plt.title('Deep MLP - Confusion Matrix')\n",
        "plt.xlabel('Predicted label')\n",
        "plt.ylabel('True label')\n",
        "plt.show()\n",
        "\n",
        "# Print the classification report\n",
        "print(\"\\nDeep MLP classification report:\")\n",
        "print(classification_report(deep_targets, deep_predictions, \n",
        "                          target_names=[f'Digit {i}' for i in range(10)]))\n",
        "\n",
        "print(\"\\nMisclassified samples from the deep MLP:\")\n",
        "show_prediction_errors(deep_model, test_loader, device)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 6. Model Comparison\n",
        "\n",
        "Let's compare the two models and analyze their trade-offs.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Model comparison\n",
        "print(\"=\" * 80)\n",
        "print(\"Model performance comparison\")\n",
        "print(\"=\" * 80)\n",
        "\n",
        "# Comparison table\n",
        "comparison_data = {\n",
        "    'Model': ['Simple MLP', 'Deep MLP'],\n",
        "    'Parameters': [simple_params, deep_params],\n",
        "    'Final test accuracy': [simple_results['final_test_acc'], deep_results['final_test_acc']],\n",
        "    'Training time (s)': [simple_results['training_time'], deep_results['training_time']],\n",
        "    'Final train loss': [simple_results['train_losses'][-1], deep_results['train_losses'][-1]],\n",
        "    'Final test loss': [simple_results['test_losses'][-1], deep_results['test_losses'][-1]]\n",
        "}\n",
        "\n",
        "print(f\"{'Metric':<22} {'Simple MLP':<15} {'Deep MLP':<15} {'Diff':<15}\")\n",
        "print(\"-\" * 67)\n",
        "\n",
        "for metric, values in comparison_data.items():\n",
        "    if metric == 'Model':\n",
        "        continue\n",
        "    simple_val = values[0]\n",
        "    deep_val = values[1]\n",
        "    diff = deep_val - simple_val\n",
        "    \n",
        "    if metric == 'Parameters':\n",
        "        print(f\"{metric:<22} {simple_val:<15,} {deep_val:<15,} {diff:<15,}\")\n",
        "    elif metric == 'Training time (s)':\n",
        "        print(f\"{metric:<22} {simple_val:<15.2f} {deep_val:<15.2f} {diff:<15.2f}\")\n",
        "    else:\n",
        "        print(f\"{metric:<22} {simple_val:<15.4f} {deep_val:<15.4f} {diff:<15.4f}\")\n",
        "\n",
        "# Visual comparison\n",
        "fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 12))\n",
        "\n",
        "# 1. Accuracy\n",
        "models = ['Simple MLP', 'Deep MLP']\n",
        "train_accs = [simple_results['train_accuracies'][-1], deep_results['train_accuracies'][-1]]\n",
        "test_accs = [simple_results['test_accuracies'][-1], deep_results['test_accuracies'][-1]]\n",
        "\n",
        "x = np.arange(len(models))\n",
        "width = 0.35\n",
        "\n",
        "ax1.bar(x - width/2, train_accs, width, label='Train accuracy', alpha=0.8, color='skyblue')\n",
        "ax1.bar(x + width/2, test_accs, width, label='Test accuracy', alpha=0.8, color='lightcoral')\n",
        "ax1.set_title('Accuracy comparison')\n",
        "ax1.set_ylabel('Accuracy (%)')\n",
        "ax1.set_xticks(x)\n",
        "ax1.set_xticklabels(models)\n",
        "ax1.legend()\n",
        "ax1.grid(True, alpha=0.3)\n",
        "\n",
        "# Value labels\n",
        "for i, (train_acc, test_acc) in enumerate(zip(train_accs, test_accs)):\n",
        "    ax1.text(i - width/2, train_acc + 0.5, f'{train_acc:.2f}%', ha='center', va='bottom')\n",
        "    ax1.text(i + width/2, test_acc + 0.5, f'{test_acc:.2f}%', ha='center', va='bottom')\n",
        "\n",
        "# 2. Loss\n",
        "train_losses = [simple_results['train_losses'][-1], deep_results['train_losses'][-1]]\n",
        "test_losses = [simple_results['test_losses'][-1], deep_results['test_losses'][-1]]\n",
        "\n",
        "ax2.bar(x - width/2, train_losses, width, label='Train loss', alpha=0.8, color='lightgreen')\n",
        "ax2.bar(x + width/2, test_losses, width, label='Test loss', alpha=0.8, color='orange')\n",
        "ax2.set_title('Loss comparison')\n",
        "ax2.set_ylabel('Loss')\n",
        "ax2.set_xticks(x)\n",
        "ax2.set_xticklabels(models)\n",
        "ax2.legend()\n",
        "ax2.grid(True, alpha=0.3)\n",
        "\n",
        "# Value labels\n",
        "for i, (train_loss, test_loss) in enumerate(zip(train_losses, test_losses)):\n",
        "    ax2.text(i - width/2, train_loss + 0.01, f'{train_loss:.4f}', ha='center', va='bottom')\n",
        "    ax2.text(i + width/2, test_loss + 0.01, f'{test_loss:.4f}', ha='center', va='bottom')\n",
        "\n",
        "# 3. Training time\n",
        "training_times = [simple_results['training_time'], deep_results['training_time']]\n",
        "ax3.bar(models, training_times, alpha=0.8, color=['purple', 'brown'])\n",
        "ax3.set_title('Training time comparison')\n",
        "ax3.set_ylabel('Time (s)')\n",
        "ax3.grid(True, alpha=0.3)\n",
        "\n",
        "# Value labels\n",
        "for i, time_val in enumerate(training_times):\n",
        "    ax3.text(i, time_val + 1, f'{time_val:.1f}s', ha='center', va='bottom')\n",
        "\n",
        "# 4. Parameter counts\n",
        "param_counts = [simple_params, deep_params]\n",
        "ax4.bar(models, param_counts, alpha=0.8, color=['red', 'blue'])\n",
        "ax4.set_title('Parameter count comparison')\n",
        "ax4.set_ylabel('Parameters')\n",
        "ax4.set_yscale('log')  # log scale\n",
        "ax4.grid(True, alpha=0.3)\n",
        "\n",
        "# Value labels\n",
        "for i, count in enumerate(param_counts):\n",
        "    ax4.text(i, count * 1.2, f'{count:,}', ha='center', va='bottom')\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# Training curves side by side\n",
        "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n",
        "\n",
        "# Loss curves\n",
        "epochs = range(1, len(simple_results['train_losses']) + 1)\n",
        "ax1.plot(epochs, simple_results['train_losses'], 'b-', label='Simple MLP - train', linewidth=2)\n",
        "ax1.plot(epochs, simple_results['test_losses'], 'b--', label='Simple MLP - test', linewidth=2)\n",
        "ax1.plot(epochs, deep_results['train_losses'], 'r-', label='Deep MLP - train', linewidth=2)\n",
        "ax1.plot(epochs, deep_results['test_losses'], 'r--', label='Deep MLP - test', linewidth=2)\n",
        "ax1.set_title('Loss during training')\n",
        "ax1.set_xlabel('Epoch')\n",
        "ax1.set_ylabel('Loss')\n",
        "ax1.legend()\n",
        "ax1.grid(True, alpha=0.3)\n",
        "\n",
        "# Accuracy curves\n",
        "ax2.plot(epochs, simple_results['train_accuracies'], 'b-', label='Simple MLP - train', linewidth=2)\n",
        "ax2.plot(epochs, simple_results['test_accuracies'], 'b--', label='Simple MLP - test', linewidth=2)\n",
        "ax2.plot(epochs, deep_results['train_accuracies'], 'r-', label='Deep MLP - train', linewidth=2)\n",
        "ax2.plot(epochs, deep_results['test_accuracies'], 'r--', label='Deep MLP - test', linewidth=2)\n",
        "ax2.set_title('Accuracy during training')\n",
        "ax2.set_xlabel('Epoch')\n",
        "ax2.set_ylabel('Accuracy (%)')\n",
        "ax2.legend()\n",
        "ax2.grid(True, alpha=0.3)\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# Summary\n",
        "print(\"\\n\" + \"=\" * 80)\n",
        "print(\"Summary\")\n",
        "print(\"=\" * 80)\n",
        "\n",
        "print(\"1. Performance:\")\n",
        "if deep_results['final_test_acc'] > simple_results['final_test_acc']:\n",
        "    improvement = deep_results['final_test_acc'] - simple_results['final_test_acc']\n",
        "    print(f\"   - The deep MLP improves test accuracy by {improvement:.2f} percentage points\")\n",
        "else:\n",
        "    print(\"   - The simple and deep MLPs perform similarly\")\n",
        "\n",
        "print(\"2. Efficiency:\")\n",
        "if deep_results['training_time'] > simple_results['training_time']:\n",
        "    time_ratio = deep_results['training_time'] / simple_results['training_time']\n",
        "    print(f\"   - The deep MLP takes {time_ratio:.1f}x as long to train as the simple MLP\")\n",
        "else:\n",
        "    print(\"   - Both models train in about the same time\")\n",
        "\n",
        "print(\"3. Complexity:\")\n",
        "param_ratio = deep_params / simple_params\n",
        "print(f\"   - The deep MLP has {param_ratio:.1f}x as many parameters as the simple MLP\")\n",
        "\n",
        "print(\"4. 过拟合分析:\")\n",
        "simple_overfitting = simple_results['train_accuracies'][-1] - simple_results['test_accuracies'][-1]\n",
        "deep_overfitting = deep_results['train_accuracies'][-1] - deep_results['test_accuracies'][-1]\n",
        "\n",
        "if deep_overfitting > simple_overfitting:\n",
        "    print(f\"   - 深度MLP存在更严重的过拟合 ({deep_overfitting:.2f}% vs {simple_overfitting:.2f}%)\")\n",
        "else:\n",
        "    print(f\"   - 两个模型过拟合程度相近\")\n",
        "\n",
        "print(\"5. 建议:\")\n",
        "if deep_results['final_test_acc'] > simple_results['final_test_acc'] + 1:\n",
        "    print(\"   - 深度MLP性能更好，建议使用深度模型\")\n",
        "elif simple_results['training_time'] < deep_results['training_time'] * 0.5:\n",
        "    print(\"   - 简单MLP训练更快，如果对性能要求不高可以使用简单模型\")\n",
        "else:\n",
        "    print(\"   - 两个模型各有优势，可根据具体需求选择\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 7. 模型保存和加载\n",
        "\n",
        "学会如何保存训练好的模型，以便后续使用。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 模型保存和加载\n",
        "import os\n",
        "\n",
        "# 创建模型保存目录\n",
        "model_dir = './saved_models'\n",
        "os.makedirs(model_dir, exist_ok=True)\n",
        "\n",
        "# 保存模型\n",
        "def save_model(model, model_name, results):\n",
        "    \"\"\"保存模型和训练结果\"\"\"\n",
        "    # 保存模型状态字典\n",
        "    model_path = os.path.join(model_dir, f'{model_name}_state_dict.pth')\n",
        "    torch.save(model.state_dict(), model_path)\n",
        "    \n",
        "    # 保存完整模型\n",
        "    full_model_path = os.path.join(model_dir, f'{model_name}_full.pth')\n",
        "    torch.save(model, full_model_path)\n",
        "    \n",
        "    # 保存训练结果\n",
        "    results_path = os.path.join(model_dir, f'{model_name}_results.pth')\n",
        "    torch.save(results, results_path)\n",
        "    \n",
        "    print(f\"模型已保存到:\")\n",
        "    print(f\"  - 状态字典: {model_path}\")\n",
        "    print(f\"  - 完整模型: {full_model_path}\")\n",
        "    print(f\"  - 训练结果: {results_path}\")\n",
        "\n",
        "# 加载模型\n",
        "def load_model(model_class, model_name, **kwargs):\n",
        "    \"\"\"加载模型\"\"\"\n",
        "    # 加载状态字典\n",
        "    state_dict_path = os.path.join(model_dir, f'{model_name}_state_dict.pth')\n",
        "    results_path = os.path.join(model_dir, f'{model_name}_results.pth')\n",
        "    \n",
        "    if os.path.exists(state_dict_path):\n",
        "        # 创建模型实例\n",
        "        model = model_class(**kwargs)\n",
        "        model.load_state_dict(torch.load(state_dict_path, map_location=device))\n",
        "        model = model.to(device)\n",
        "        \n",
        "        # 加载训练结果\n",
        "        if os.path.exists(results_path):\n",
        "            results = torch.load(results_path, map_location='cpu')\n",
        "        else:\n",
        "            results = None\n",
        "            \n",
        "        print(f\"模型 {model_name} 加载成功!\")\n",
        "        return model, results\n",
        "    else:\n",
        "        print(f\"模型文件 {state_dict_path} 不存在!\")\n",
        "        return None, None\n",
        "\n",
        "# 保存训练好的模型\n",
        "print(\"保存简单MLP模型...\")\n",
        "save_model(simple_model, 'simple_mlp', simple_results)\n",
        "\n",
        "print(\"\\n保存深度MLP模型...\")\n",
        "save_model(deep_model, 'deep_mlp', deep_results)\n",
        "\n",
        "# 测试模型加载\n",
        "print(\"\\n测试模型加载...\")\n",
        "loaded_simple_model, loaded_simple_results = load_model(\n",
        "    SimpleMLP, 'simple_mlp', \n",
        "    input_size=784, hidden_size=128, num_classes=10\n",
        ")\n",
        "\n",
        "loaded_deep_model, loaded_deep_results = load_model(\n",
        "    DeepMLP, 'deep_mlp',\n",
        "    input_size=784, hidden_sizes=[512, 256, 128], num_classes=10\n",
        ")\n",
        "\n",
        "# 验证加载的模型\n",
        "if loaded_simple_model is not None:\n",
        "    print(\"\\n验证简单MLP模型...\")\n",
        "    loaded_simple_model.eval()\n",
        "    with torch.no_grad():\n",
        "        # 测试一个批次\n",
        "        sample_batch = next(iter(test_loader))\n",
        "        images, labels = sample_batch\n",
        "        images, labels = images.to(device), labels.to(device)\n",
        "        \n",
        "        # 原始模型预测\n",
        "        original_output = simple_model(images)\n",
        "        _, original_pred = torch.max(original_output, 1)\n",
        "        \n",
        "        # 加载模型预测\n",
        "        loaded_output = loaded_simple_model(images)\n",
        "        _, loaded_pred = torch.max(loaded_output, 1)\n",
        "        \n",
        "        # 比较预测结果\n",
        "        predictions_match = torch.equal(original_pred, loaded_pred)\n",
        "        print(f\"预测结果是否一致: {predictions_match}\")\n",
        "        \n",
        "        if predictions_match:\n",
        "            print(\"✅ 模型加载验证成功!\")\n",
        "        else:\n",
        "            print(\"❌ 模型加载验证失败!\")\n",
        "\n",
        "# 模型推理函数\n",
        "def predict_single_image(model, image_tensor, class_names=None):\n",
        "    \"\"\"对单张图像进行预测\"\"\"\n",
        "    model.eval()\n",
        "    \n",
        "    with torch.no_grad():\n",
        "        # 确保输入是正确的形状\n",
        "        if image_tensor.dim() == 3:\n",
        "            image_tensor = image_tensor.unsqueeze(0)  # 添加批次维度\n",
        "        \n",
        "        image_tensor = image_tensor.to(device)\n",
        "        \n",
        "        # 前向传播\n",
        "        output = model(image_tensor)\n",
        "        probabilities = F.softmax(output, dim=1)\n",
        "        confidence, predicted = torch.max(probabilities, 1)\n",
        "        \n",
        "        predicted_class = predicted.item()\n",
        "        confidence_score = confidence.item()\n",
        "        \n",
        "        if class_names is None:\n",
        "            class_names = [f'数字 {i}' for i in range(10)]\n",
        "        \n",
        "        return {\n",
        "            'predicted_class': predicted_class,\n",
        "            'class_name': class_names[predicted_class],\n",
        "            'confidence': confidence_score,\n",
        "            'all_probabilities': probabilities.cpu().numpy()[0]\n",
        "        }\n",
        "\n",
        "# 测试单张图像预测\n",
        "print(\"\\n测试单张图像预测...\")\n",
        "sample_image, sample_label = next(iter(test_loader))\n",
        "test_image = sample_image[0]  # 取第一张图像\n",
        "true_label = sample_label[0].item()\n",
        "\n",
        "# 使用简单MLP预测\n",
        "prediction = predict_single_image(simple_model, test_image)\n",
        "print(f\"真实标签: {true_label}\")\n",
        "print(f\"预测标签: {prediction['predicted_class']}\")\n",
        "print(f\"预测类别: {prediction['class_name']}\")\n",
        "print(f\"置信度: {prediction['confidence']:.4f}\")\n",
        "\n",
        "# 可视化预测结果\n",
        "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))\n",
        "\n",
        "# 显示图像\n",
        "img = test_image.squeeze()\n",
        "img = (img + 1) / 2  # 反归一化\n",
        "ax1.imshow(img, cmap='gray')\n",
        "ax1.set_title(f'测试图像\\n真实: {true_label}, 预测: {prediction[\"predicted_class\"]}')\n",
        "ax1.axis('off')\n",
        "\n",
        "# 显示预测概率\n",
        "class_names = [f'数字 {i}' for i in range(10)]\n",
        "probabilities = prediction['all_probabilities']\n",
        "bars = ax2.bar(class_names, probabilities, alpha=0.7, color='skyblue')\n",
        "ax2.set_title('预测概率分布')\n",
        "ax2.set_ylabel('概率')\n",
        "ax2.set_ylim(0, 1)\n",
        "\n",
        "# 高亮预测结果\n",
        "bars[prediction['predicted_class']].set_color('red')\n",
        "bars[prediction['predicted_class']].set_alpha(1.0)\n",
        "\n",
        "# 添加数值标签\n",
        "for i, (bar, prob) in enumerate(zip(bars, probabilities)):\n",
        "    height = bar.get_height()\n",
        "    ax2.text(bar.get_x() + bar.get_width()/2., height + 0.01,\n",
        "             f'{prob:.3f}', ha='center', va='bottom', fontsize=8)\n",
        "\n",
        "plt.xticks(rotation=45)\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "print(f\"\\n模型保存和加载功能演示完成!\")\n",
        "print(f\"保存的模型文件位于: {model_dir}\")\n"
      ]
    },
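    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "为配合\"模型部署准备\"这一目标，下面补充一个用 TorchScript 导出模型的最小示例草图（假设 `simple_model` 已训练完成；文件名 `simple_mlp_scripted.pt` 为本示例自拟）。导出后的模型加载时不再依赖 `SimpleMLP` 的 Python 类定义，更便于部署。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# TorchScript 导出示例（最小草图，假设 simple_model 已训练完成）\n",
        "scripted_path = os.path.join(model_dir, 'simple_mlp_scripted.pt')\n",
        "\n",
        "simple_model.eval()\n",
        "# torch.jit.script 将模型编译为不依赖 Python 类定义的可序列化形式\n",
        "scripted_model = torch.jit.script(simple_model)\n",
        "scripted_model.save(scripted_path)\n",
        "\n",
        "# 加载时无需 SimpleMLP 类定义\n",
        "restored = torch.jit.load(scripted_path, map_location=device)\n",
        "restored.eval()\n",
        "\n",
        "# 用一个批次验证两个模型输出一致\n",
        "with torch.no_grad():\n",
        "    images, _ = next(iter(test_loader))\n",
        "    images = images.to(device)\n",
        "    same = torch.allclose(simple_model(images), restored(images))\n",
        "print(f\"TorchScript 模型输出与原模型一致: {same}\")\n"
      ]
    },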
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 8. 总结\n",
        "\n",
        "恭喜！你已经成功完成了第一个神经网络图像分类器的构建和训练。让我们总结一下学到的重要概念：\n",
        "\n",
        "### 🎯 学习成果\n",
        "\n",
        "1. **神经网络基础**\n",
        "   - 理解了多层感知机（MLP）的结构\n",
        "   - 掌握了前向传播和反向传播\n",
        "   - 学会了激活函数（ReLU）和正则化（Dropout）\n",
        "\n",
        "2. **数据处理**\n",
        "   - MNIST手写数字数据集的加载和预处理\n",
        "   - 数据标准化和批次处理\n",
        "   - 数据可视化技术\n",
        "\n",
        "3. **模型训练**\n",
        "   - 完整的训练循环实现\n",
        "   - 损失函数（交叉熵）和优化器（Adam）\n",
        "   - 训练过程监控和可视化\n",
        "\n",
        "4. **模型评估**\n",
        "   - 准确率、损失等性能指标\n",
        "   - 混淆矩阵和分类报告\n",
        "   - 错误样本分析\n",
        "\n",
        "5. **模型管理**\n",
        "   - 模型保存和加载\n",
        "   - 单张图像预测\n",
        "   - 模型部署准备\n",
        "\n",
        "### 🔍 关键发现\n",
        "\n",
        "- **简单vs深度模型**: 深度模型通常有更好的表达能力，但也更容易过拟合\n",
        "- **训练监控**: 实时监控训练过程对于调试和优化非常重要\n",
        "- **数据预处理**: 正确的数据预处理对模型性能有重要影响\n",
        "- **模型保存**: 学会保存和加载模型是实际应用的基础\n",
        "\n",
        "### 🚀 下一步学习建议\n",
        "\n",
        "1. **卷积神经网络（CNN）**: 学习更适合图像处理的网络结构\n",
        "2. **数据增强**: 通过数据增强提高模型泛化能力\n",
        "3. **超参数调优**: 学习如何优化学习率、批次大小等超参数\n",
        "4. **更复杂的数据集**: 尝试CIFAR-10、ImageNet等更复杂的数据集\n",
        "\n",
        "现在你已经掌握了神经网络的基础知识，可以继续探索更高级的深度学习技术了！🎉\n"
      ]
    }
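    ,
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### 附：CNN 结构预览\n",
        "\n",
        "作为上面\"下一步学习建议\"中 CNN 的一个预告，下面给出一个面向 MNIST 的极简卷积网络结构草图（仅定义结构并检查输出形状，不做训练；`TinyCNN` 为本示例自拟的名称）。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 极简 CNN 结构草图（仅作预览，未训练）\n",
        "class TinyCNN(nn.Module):\n",
        "    def __init__(self, num_classes=10):\n",
        "        super().__init__()\n",
        "        self.features = nn.Sequential(\n",
        "            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x28x28 -> 16x28x28\n",
        "            nn.ReLU(),\n",
        "            nn.MaxPool2d(2),                              # -> 16x14x14\n",
        "            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x14x14\n",
        "            nn.ReLU(),\n",
        "            nn.MaxPool2d(2),                              # -> 32x7x7\n",
        "        )\n",
        "        self.classifier = nn.Linear(32 * 7 * 7, num_classes)\n",
        "\n",
        "    def forward(self, x):\n",
        "        x = self.features(x)\n",
        "        x = x.flatten(1)\n",
        "        return self.classifier(x)\n",
        "\n",
        "tiny_cnn = TinyCNN()\n",
        "dummy = torch.randn(4, 1, 28, 28)  # 模拟一个批次的 MNIST 图像\n",
        "print(f\"输出形状: {tiny_cnn(dummy).shape}\")\n"
      ]
    }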
  ],
  "metadata": {
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
