{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# 06. 卷积神经网络（CNN）\n",
        "\n",
        "## 学习目标\n",
        "- 理解卷积神经网络的基本原理\n",
        "- 学习卷积层、池化层、全连接层的作用\n",
        "- 掌握CNN在图像分类中的应用\n",
        "- 实现经典的CNN架构（LeNet、AlexNet风格）\n",
        "- 学习数据增强技术\n",
        "- 对比CNN与MLP的性能差异\n",
        "\n",
        "## 什么是卷积神经网络？\n",
        "\n",
        "卷积神经网络（CNN）是一种专门用于处理具有网格结构数据的深度学习模型，特别适合图像处理任务。\n",
        "\n",
        "### CNN的核心组件\n",
        "\n",
        "1. **卷积层（Convolutional Layer）**\n",
        "   - 使用卷积核（滤波器）提取局部特征\n",
        "   - 参数共享，大大减少参数量\n",
        "   - 保持空间结构信息\n",
        "\n",
        "2. **池化层（Pooling Layer）**\n",
        "   - 降低特征图的空间维度\n",
        "   - 减少计算量和过拟合\n",
        "   - 增强模型的平移不变性\n",
        "\n",
        "3. **全连接层（Fully Connected Layer）**\n",
        "   - 将提取的特征映射到最终输出\n",
        "   - 进行最终的分类决策\n",
        "\n",
        "### 卷积操作\n",
        "\n",
        "**数学公式：**\n",
        "$$(f * g)(t) = \\int_{-\\infty}^{\\infty} f(\\tau)g(t-\\tau)d\\tau$$\n",
        "\n",
        "**离散卷积：**\n",
        "$$(f * g)[n] = \\sum_{m} f[m]g[n-m]$$\n",
        "\n",
        "**2D卷积（图像）：**\n",
        "$$(I * K)(i,j) = \\sum_{m}\\sum_{n} I(m,n)K(i-m,j-n)$$\n",
        "\n",
        "其中：\n",
        "- $I$ 是输入图像\n",
        "- $K$ 是卷积核\n",
        "- $(i,j)$ 是输出位置\n",
        "\n",
        "> 注意：深度学习框架（包括PyTorch的 `nn.Conv2d`）实际实现的是**互相关**（cross-correlation），即不翻转卷积核：$\\sum_{m}\\sum_{n} I(i+m, j+n)K(m,n)$。由于卷积核权重是学习得到的，两者在效果上等价。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.optim as optim\n",
        "import torch.nn.functional as F\n",
        "from torch.utils.data import DataLoader\n",
        "import torchvision\n",
        "import torchvision.transforms as transforms\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "import seaborn as sns\n",
        "from sklearn.metrics import classification_report, confusion_matrix\n",
        "import time\n",
        "from tqdm import tqdm\n",
        "import warnings\n",
        "warnings.filterwarnings('ignore')\n",
        "\n",
        "# 设置中文字体（SimHei不可用时回退到其他常见中文字体）\n",
        "plt.rcParams['font.sans-serif'] = ['SimHei', 'Microsoft YaHei', 'Arial Unicode MS', 'sans-serif']\n",
        "plt.rcParams['axes.unicode_minus'] = False\n",
        "\n",
        "# 设置随机种子（保证结果可复现）\n",
        "torch.manual_seed(42)\n",
        "np.random.seed(42)\n",
        "if torch.cuda.is_available():\n",
        "    torch.cuda.manual_seed_all(42)\n",
        "\n",
        "print(f\"PyTorch版本: {torch.__version__}\")\n",
        "print(f\"CUDA可用: {torch.cuda.is_available()}\")\n",
        "\n",
        "# 设置设备\n",
        "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
        "print(f\"使用设备: {device}\")\n"
      ]
    },
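    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 补充示例：用 numpy 验证上面的离散卷积公式 (f*g)[n] = Σ_m f[m]g[n-m]\n",
        "# （np.convolve 计算的是真正的卷积，会翻转第二个序列）\n",
        "f_sig = np.array([1.0, 2.0, 3.0])\n",
        "g_sig = np.array([0.0, 1.0, 0.5])\n",
        "print('np.convolve(f, g) =', np.convolve(f_sig, g_sig))  # [0., 1., 2.5, 4., 1.5]\n"
      ]
    },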
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 1. 卷积操作可视化\n",
        "\n",
        "首先让我们直观地理解卷积操作是如何工作的。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 创建示例图像和卷积核\n",
        "def create_sample_image():\n",
        "    \"\"\"创建一个简单的示例图像\"\"\"\n",
        "    # 创建一个8x8的图像，包含一些模式\n",
        "    image = np.zeros((8, 8))\n",
        "    \n",
        "    # 添加一些特征\n",
        "    image[1:3, 1:3] = 1  # 左上角方块\n",
        "    image[1:3, 5:7] = 1  # 右上角方块\n",
        "    image[5:7, 1:3] = 1  # 左下角方块\n",
        "    image[5:7, 5:7] = 1  # 右下角方块\n",
        "    \n",
        "    # 添加一些线条\n",
        "    image[3, :] = 0.5  # 水平线\n",
        "    image[:, 3] = 0.5  # 垂直线\n",
        "    \n",
        "    return image\n",
        "\n",
        "def create_edge_detection_kernel():\n",
        "    \"\"\"创建边缘检测卷积核\"\"\"\n",
        "    # Sobel边缘检测核\n",
        "    sobel_x = np.array([[-1, 0, 1],\n",
        "                        [-2, 0, 2],\n",
        "                        [-1, 0, 1]])\n",
        "    \n",
        "    sobel_y = np.array([[-1, -2, -1],\n",
        "                        [ 0,  0,  0],\n",
        "                        [ 1,  2,  1]])\n",
        "    \n",
        "    return sobel_x, sobel_y\n",
        "\n",
        "def create_blur_kernel():\n",
        "    \"\"\"创建模糊卷积核\"\"\"\n",
        "    # 3x3平均模糊核\n",
        "    blur = np.ones((3, 3)) / 9\n",
        "    return blur\n",
        "\n",
        "def create_sharpen_kernel():\n",
        "    \"\"\"创建锐化卷积核\"\"\"\n",
        "    sharpen = np.array([[ 0, -1,  0],\n",
        "                        [-1,  5, -1],\n",
        "                        [ 0, -1,  0]])\n",
        "    return sharpen\n",
        "\n",
        "# 手动实现卷积操作（与PyTorch的约定一致：不翻转卷积核，严格来说是互相关）\n",
        "def manual_conv2d(image, kernel, padding=0, stride=1):\n",
        "    \"\"\"手动实现2D卷积（实际为互相关，与深度学习框架的实现一致）\"\"\"\n",
        "    # 获取输入和核的尺寸\n",
        "    img_h, img_w = image.shape\n",
        "    kernel_h, kernel_w = kernel.shape\n",
        "    \n",
        "    # 计算输出尺寸\n",
        "    out_h = (img_h + 2*padding - kernel_h) // stride + 1\n",
        "    out_w = (img_w + 2*padding - kernel_w) // stride + 1\n",
        "    \n",
        "    # 添加padding\n",
        "    if padding > 0:\n",
        "        padded_image = np.pad(image, padding, mode='constant', constant_values=0)\n",
        "    else:\n",
        "        padded_image = image\n",
        "    \n",
        "    # 初始化输出\n",
        "    output = np.zeros((out_h, out_w))\n",
        "    \n",
        "    # 执行卷积\n",
        "    for i in range(out_h):\n",
        "        for j in range(out_w):\n",
        "            # 计算卷积窗口\n",
        "            start_i = i * stride\n",
        "            start_j = j * stride\n",
        "            end_i = start_i + kernel_h\n",
        "            end_j = start_j + kernel_w\n",
        "            \n",
        "            # 提取窗口并计算卷积\n",
        "            window = padded_image[start_i:end_i, start_j:end_j]\n",
        "            output[i, j] = np.sum(window * kernel)\n",
        "    \n",
        "    return output\n",
        "\n",
        "# 创建示例数据\n",
        "sample_image = create_sample_image()\n",
        "sobel_x, sobel_y = create_edge_detection_kernel()\n",
        "blur_kernel = create_blur_kernel()\n",
        "sharpen_kernel = create_sharpen_kernel()\n",
        "\n",
        "# 执行卷积操作\n",
        "conv_sobel_x = manual_conv2d(sample_image, sobel_x)\n",
        "conv_sobel_y = manual_conv2d(sample_image, sobel_y)\n",
        "conv_blur = manual_conv2d(sample_image, blur_kernel)\n",
        "conv_sharpen = manual_conv2d(sample_image, sharpen_kernel)\n",
        "\n",
        "# 可视化结果：第一行为原图与各卷积结果，第二行为对应的卷积核\n",
        "fig, axes = plt.subplots(2, 5, figsize=(20, 8))\n",
        "\n",
        "# 原始图像\n",
        "axes[0, 0].imshow(sample_image, cmap='gray')\n",
        "axes[0, 0].set_title('原始图像')\n",
        "axes[1, 0].set_visible(False)  # 原图下方无对应卷积核，留空\n",
        "\n",
        "# Sobel X边缘检测\n",
        "axes[0, 1].imshow(conv_sobel_x, cmap='gray')\n",
        "axes[0, 1].set_title('Sobel X边缘检测')\n",
        "axes[1, 1].imshow(sobel_x, cmap='RdBu', vmin=-2, vmax=2)\n",
        "axes[1, 1].set_title('Sobel X核')\n",
        "\n",
        "# Sobel Y边缘检测\n",
        "axes[0, 2].imshow(conv_sobel_y, cmap='gray')\n",
        "axes[0, 2].set_title('Sobel Y边缘检测')\n",
        "axes[1, 2].imshow(sobel_y, cmap='RdBu', vmin=-2, vmax=2)\n",
        "axes[1, 2].set_title('Sobel Y核')\n",
        "\n",
        "# 模糊效果\n",
        "axes[0, 3].imshow(conv_blur, cmap='gray')\n",
        "axes[0, 3].set_title('模糊效果')\n",
        "axes[1, 3].imshow(blur_kernel, cmap='gray')\n",
        "axes[1, 3].set_title('模糊核')\n",
        "\n",
        "# 锐化效果\n",
        "axes[0, 4].imshow(conv_sharpen, cmap='gray')\n",
        "axes[0, 4].set_title('锐化效果')\n",
        "axes[1, 4].imshow(sharpen_kernel, cmap='RdBu', vmin=-1, vmax=5)\n",
        "axes[1, 4].set_title('锐化核')\n",
        "\n",
        "for ax in axes.flatten():\n",
        "    ax.axis('off')\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# 使用PyTorch验证我们的实现\n",
        "print(\"使用PyTorch验证卷积操作...\")\n",
        "\n",
        "# 转换为PyTorch张量\n",
        "image_tensor = torch.FloatTensor(sample_image).unsqueeze(0).unsqueeze(0)  # [1, 1, H, W]\n",
        "sobel_x_tensor = torch.FloatTensor(sobel_x).unsqueeze(0).unsqueeze(0)    # [1, 1, H, W]\n",
        "\n",
        "# 使用PyTorch卷积\n",
        "pytorch_conv = F.conv2d(image_tensor, sobel_x_tensor, padding=0, stride=1)\n",
        "pytorch_result = pytorch_conv.squeeze().numpy()\n",
        "\n",
        "# 比较结果\n",
        "print(f\"手动实现结果形状: {conv_sobel_x.shape}\")\n",
        "print(f\"PyTorch实现结果形状: {pytorch_result.shape}\")\n",
        "print(f\"结果是否一致: {np.allclose(conv_sobel_x, pytorch_result, atol=1e-6)}\")\n",
        "\n",
        "# 可视化卷积过程\n",
        "def visualize_convolution_process():\n",
        "    \"\"\"可视化卷积过程\"\"\"\n",
        "    # 创建一个简单的3x3图像\n",
        "    small_image = np.array([[1, 2, 3],\n",
        "                            [4, 5, 6],\n",
        "                            [7, 8, 9]])\n",
        "    \n",
        "    # 创建一个简单的2x2卷积核\n",
        "    small_kernel = np.array([[1, 0],\n",
        "                             [0, -1]])\n",
        "    \n",
        "    # 执行卷积\n",
        "    result = manual_conv2d(small_image, small_kernel)\n",
        "    \n",
        "    fig, axes = plt.subplots(1, 3, figsize=(12, 4))\n",
        "    \n",
        "    # 显示输入图像\n",
        "    im1 = axes[0].imshow(small_image, cmap='viridis')\n",
        "    axes[0].set_title('输入图像 (3×3)')\n",
        "    axes[0].axis('off')\n",
        "    \n",
        "    # 添加数值标注\n",
        "    for i in range(3):\n",
        "        for j in range(3):\n",
        "            axes[0].text(j, i, str(small_image[i, j]), ha='center', va='center', \n",
        "                        color='white', fontweight='bold')\n",
        "    \n",
        "    # 显示卷积核\n",
        "    im2 = axes[1].imshow(small_kernel, cmap='RdBu', vmin=-1, vmax=1)\n",
        "    axes[1].set_title('卷积核 (2×2)')\n",
        "    axes[1].axis('off')\n",
        "    \n",
        "    # 添加数值标注\n",
        "    for i in range(2):\n",
        "        for j in range(2):\n",
        "            axes[1].text(j, i, str(small_kernel[i, j]), ha='center', va='center', \n",
        "                        color='white', fontweight='bold')\n",
        "    \n",
        "    # 显示输出\n",
        "    im3 = axes[2].imshow(result, cmap='viridis')\n",
        "    axes[2].set_title('输出特征图 (2×2)')\n",
        "    axes[2].axis('off')\n",
        "    \n",
        "    # 添加数值标注\n",
        "    for i in range(2):\n",
        "        for j in range(2):\n",
        "            axes[2].text(j, i, f'{result[i, j]:.0f}', ha='center', va='center', \n",
        "                        color='white', fontweight='bold')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "    \n",
        "    print(\"卷积计算过程:\")\n",
        "    print(f\"输入: {small_image}\")\n",
        "    print(f\"卷积核: {small_kernel}\")\n",
        "    print(f\"输出: {result}\")\n",
        "    \n",
        "    # 详细计算过程\n",
        "    print(\"\\n详细计算过程:\")\n",
        "    for i in range(2):\n",
        "        for j in range(2):\n",
        "            window = small_image[i:i+2, j:j+2]\n",
        "            conv_result = np.sum(window * small_kernel)\n",
        "            print(f\"位置({i},{j}): 窗口{window} × 核{small_kernel} = {conv_result}\")\n",
        "\n",
        "visualize_convolution_process()\n"
      ]
    },
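    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 补充示例：卷积输出尺寸公式（假设方形输入与方形卷积核）\n",
        "# O = floor((I + 2P - K) / S) + 1\n",
        "def conv_output_size(i, k, p=0, s=1):\n",
        "    \"\"\"按公式计算卷积输出边长：输入i、核k、padding p、stride s\"\"\"\n",
        "    return (i + 2 * p - k) // s + 1\n",
        "\n",
        "x = torch.randn(1, 1, 28, 28)\n",
        "w = torch.randn(1, 1, 3, 3)\n",
        "for p, s in [(0, 1), (1, 1), (1, 2)]:\n",
        "    out = F.conv2d(x, w, padding=p, stride=s)\n",
        "    print(f'padding={p}, stride={s}: 实际输出 {out.shape[-1]}, 公式预测 {conv_output_size(28, 3, p, s)}')\n"
      ]
    },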
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 2. 数据准备和增强\n",
        "\n",
        "现在让我们准备MNIST数据，并添加数据增强技术来提高模型性能。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 定义数据变换（无增强）\n",
        "transform_basic = transforms.Compose([\n",
        "    transforms.ToTensor(),\n",
        "    transforms.Normalize((0.5,), (0.5,))\n",
        "])\n",
        "\n",
        "# 定义数据变换（带增强）\n",
        "transform_augmented = transforms.Compose([\n",
        "    transforms.RandomRotation(10),  # 随机旋转±10度\n",
        "    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # 随机平移\n",
        "    transforms.ToTensor(),\n",
        "    transforms.Normalize((0.5,), (0.5,))\n",
        "])\n",
        "\n",
        "# 下载并加载MNIST数据集\n",
        "print(\"正在下载MNIST数据集...\")\n",
        "train_dataset_basic = torchvision.datasets.MNIST(\n",
        "    root='./data', \n",
        "    train=True, \n",
        "    download=True, \n",
        "    transform=transform_basic\n",
        ")\n",
        "\n",
        "train_dataset_augmented = torchvision.datasets.MNIST(\n",
        "    root='./data', \n",
        "    train=True, \n",
        "    download=True, \n",
        "    transform=transform_augmented\n",
        ")\n",
        "\n",
        "test_dataset = torchvision.datasets.MNIST(\n",
        "    root='./data', \n",
        "    train=False, \n",
        "    download=True, \n",
        "    transform=transform_basic\n",
        ")\n",
        "\n",
        "# 创建数据加载器（Windows或交互环境下如遇多进程问题，可将num_workers设为0）\n",
        "batch_size = 64\n",
        "train_loader_basic = DataLoader(\n",
        "    train_dataset_basic, \n",
        "    batch_size=batch_size, \n",
        "    shuffle=True,\n",
        "    num_workers=2\n",
        ")\n",
        "\n",
        "train_loader_augmented = DataLoader(\n",
        "    train_dataset_augmented, \n",
        "    batch_size=batch_size, \n",
        "    shuffle=True,\n",
        "    num_workers=2\n",
        ")\n",
        "\n",
        "test_loader = DataLoader(\n",
        "    test_dataset, \n",
        "    batch_size=batch_size, \n",
        "    shuffle=False,\n",
        "    num_workers=2\n",
        ")\n",
        "\n",
        "print(f\"训练集大小: {len(train_dataset_basic)}\")\n",
        "print(f\"测试集大小: {len(test_dataset)}\")\n",
        "print(f\"批次大小: {batch_size}\")\n",
        "\n",
        "# 可视化数据增强效果\n",
        "def visualize_data_augmentation():\n",
        "    \"\"\"可视化数据增强效果\"\"\"\n",
        "    # 获取一些样本\n",
        "    sample_indices = [0, 1, 2, 3, 4]\n",
        "    \n",
        "    fig, axes = plt.subplots(2, 5, figsize=(15, 6))\n",
        "    fig.suptitle('数据增强效果对比', fontsize=16)\n",
        "    \n",
        "    for i, idx in enumerate(sample_indices):\n",
        "        # 原始图像\n",
        "        original_image, label = train_dataset_basic[idx]\n",
        "        original_img = original_image.squeeze()\n",
        "        original_img = (original_img + 1) / 2  # 反归一化\n",
        "        \n",
        "        axes[0, i].imshow(original_img, cmap='gray')\n",
        "        axes[0, i].set_title(f'原始图像\\n标签: {label}')\n",
        "        axes[0, i].axis('off')\n",
        "        \n",
        "        # 增强后的图像\n",
        "        augmented_image, _ = train_dataset_augmented[idx]\n",
        "        augmented_img = augmented_image.squeeze()\n",
        "        augmented_img = (augmented_img + 1) / 2  # 反归一化\n",
        "        \n",
        "        axes[1, i].imshow(augmented_img, cmap='gray')\n",
        "        axes[1, i].set_title(f'增强后图像\\n标签: {label}')\n",
        "        axes[1, i].axis('off')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "visualize_data_augmentation()\n",
        "\n",
        "# 显示多个增强样本\n",
        "def show_multiple_augmentations():\n",
        "    \"\"\"显示同一张图像的多个增强版本\"\"\"\n",
        "    # 获取一张图像\n",
        "    original_image, label = train_dataset_basic[0]\n",
        "    \n",
        "    # 创建多个增强版本\n",
        "    augmented_images = []\n",
        "    for _ in range(8):\n",
        "        aug_image, _ = train_dataset_augmented[0]\n",
        "        augmented_images.append(aug_image)\n",
        "    \n",
        "    fig, axes = plt.subplots(2, 4, figsize=(12, 6))\n",
        "    fig.suptitle(f'同一张图像的多个增强版本 (标签: {label})', fontsize=14)\n",
        "    \n",
        "    for i, aug_img in enumerate(augmented_images):\n",
        "        row = i // 4\n",
        "        col = i % 4\n",
        "        \n",
        "        img = aug_img.squeeze()\n",
        "        img = (img + 1) / 2  # 反归一化\n",
        "        \n",
        "        axes[row, col].imshow(img, cmap='gray')\n",
        "        axes[row, col].set_title(f'增强版本 {i+1}')\n",
        "        axes[row, col].axis('off')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "show_multiple_augmentations()\n",
        "\n",
        "# 分析数据增强对像素分布的影响\n",
        "def analyze_pixel_distribution():\n",
        "    \"\"\"分析数据增强对像素分布的影响\"\"\"\n",
        "    # 收集一些样本的像素值\n",
        "    original_pixels = []\n",
        "    augmented_pixels = []\n",
        "    \n",
        "    for i in range(100):  # 取100个样本\n",
        "        orig_img, _ = train_dataset_basic[i]\n",
        "        aug_img, _ = train_dataset_augmented[i]\n",
        "        \n",
        "        original_pixels.extend(orig_img.flatten().numpy())\n",
        "        augmented_pixels.extend(aug_img.flatten().numpy())\n",
        "    \n",
        "    # 可视化像素分布\n",
        "    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))\n",
        "    \n",
        "    ax1.hist(original_pixels, bins=50, alpha=0.7, color='blue', label='原始图像')\n",
        "    ax1.hist(augmented_pixels, bins=50, alpha=0.7, color='red', label='增强图像')\n",
        "    ax1.set_title('像素值分布对比')\n",
        "    ax1.set_xlabel('像素值')\n",
        "    ax1.set_ylabel('频次')\n",
        "    ax1.legend()\n",
        "    ax1.grid(True, alpha=0.3)\n",
        "    \n",
        "    # 统计信息\n",
        "    print(\"像素值统计:\")\n",
        "    print(f\"原始图像 - 均值: {np.mean(original_pixels):.4f}, 标准差: {np.std(original_pixels):.4f}\")\n",
        "    print(f\"增强图像 - 均值: {np.mean(augmented_pixels):.4f}, 标准差: {np.std(augmented_pixels):.4f}\")\n",
        "    \n",
        "    # 显示像素值范围\n",
        "    ax2.boxplot([original_pixels, augmented_pixels], labels=['原始图像', '增强图像'])\n",
        "    ax2.set_title('像素值分布箱线图')\n",
        "    ax2.set_ylabel('像素值')\n",
        "    ax2.grid(True, alpha=0.3)\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "analyze_pixel_distribution()\n"
      ]
    },
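    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 补充说明：Normalize((0.5,), (0.5,)) 即 x' = (x - 0.5) / 0.5，把 [0,1] 线性映射到 [-1,1]；\n",
        "# 这并非MNIST的真实统计量（常用的均值/标准差约为 0.1307/0.3081）。\n",
        "# 下面用随机张量验证该映射以及前文可视化代码使用的反归一化 (x' + 1) / 2。\n",
        "x = torch.rand(1, 28, 28)      # 模拟 ToTensor 之后 [0,1] 范围的图像\n",
        "x_norm = (x - 0.5) / 0.5\n",
        "x_back = (x_norm + 1) / 2      # 反归一化\n",
        "print(f'归一化后范围: [{x_norm.min():.3f}, {x_norm.max():.3f}]')\n",
        "print(f'反归一化还原: {torch.allclose(x, x_back, atol=1e-6)}')\n"
      ]
    },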
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 3. 构建CNN模型\n",
        "\n",
        "现在让我们构建几个不同复杂度的CNN模型，从简单的LeNet风格到更深的网络。\n"
      ]
    },
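    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 补充示例：MaxPool2d(2, 2) 的尺寸变化按 floor(I/2) 向下取整，\n",
        "# 因此 28 -> 14 -> 7 -> 3 -> 1，这决定了各模型全连接层的输入维度。\n",
        "pool = nn.MaxPool2d(2, 2)\n",
        "size = 28\n",
        "for _ in range(4):\n",
        "    x = torch.randn(1, 1, size, size)\n",
        "    out_size = pool(x).shape[-1]\n",
        "    print(f'{size}×{size} -> {out_size}×{out_size}')\n",
        "    size = out_size\n"
      ]
    },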
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 简单的CNN模型（LeNet风格）\n",
        "class SimpleCNN(nn.Module):\n",
        "    \"\"\"简单的CNN模型，类似LeNet\"\"\"\n",
        "    \n",
        "    def __init__(self, num_classes=10):\n",
        "        super(SimpleCNN, self).__init__()\n",
        "        \n",
        "        # 卷积层\n",
        "        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)  # 28x28 -> 28x28\n",
        "        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)  # 28x28 -> 28x28\n",
        "        \n",
        "        # 池化层\n",
        "        self.pool = nn.MaxPool2d(2, 2)  # 28x28 -> 14x14 -> 7x7\n",
        "        \n",
        "        # 全连接层\n",
        "        self.fc1 = nn.Linear(64 * 7 * 7, 128)\n",
        "        self.fc2 = nn.Linear(128, num_classes)\n",
        "        \n",
        "        # Dropout\n",
        "        self.dropout = nn.Dropout(0.5)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        # 第一个卷积块\n",
        "        x = self.pool(F.relu(self.conv1(x)))  # 28x28 -> 14x14\n",
        "        x = self.pool(F.relu(self.conv2(x)))  # 14x14 -> 7x7\n",
        "        \n",
        "        # 展平（保留batch维度）\n",
        "        x = x.view(x.size(0), -1)\n",
        "        \n",
        "        # 全连接层\n",
        "        x = F.relu(self.fc1(x))\n",
        "        x = self.dropout(x)\n",
        "        x = self.fc2(x)\n",
        "        \n",
        "        return x\n",
        "\n",
        "# 中等复杂度的CNN模型\n",
        "class MediumCNN(nn.Module):\n",
        "    \"\"\"中等复杂度的CNN模型\"\"\"\n",
        "    \n",
        "    def __init__(self, num_classes=10):\n",
        "        super(MediumCNN, self).__init__()\n",
        "        \n",
        "        # 卷积层\n",
        "        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)\n",
        "        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)\n",
        "        self.conv3 = nn.Conv2d(64, 128, kernel_size=3, padding=1)\n",
        "        \n",
        "        # 池化层\n",
        "        self.pool = nn.MaxPool2d(2, 2)\n",
        "        \n",
        "        # 全连接层\n",
        "        self.fc1 = nn.Linear(128 * 3 * 3, 256)\n",
        "        self.fc2 = nn.Linear(256, 128)\n",
        "        self.fc3 = nn.Linear(128, num_classes)\n",
        "        \n",
        "        # Dropout\n",
        "        self.dropout = nn.Dropout(0.5)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        # 卷积块1\n",
        "        x = F.relu(self.conv1(x))\n",
        "        x = self.pool(x)  # 28x28 -> 14x14\n",
        "        \n",
        "        # 卷积块2\n",
        "        x = F.relu(self.conv2(x))\n",
        "        x = self.pool(x)  # 14x14 -> 7x7\n",
        "        \n",
        "        # 卷积块3\n",
        "        x = F.relu(self.conv3(x))\n",
        "        x = self.pool(x)  # 7x7 -> 3x3\n",
        "        \n",
        "        # 展平（保留batch维度）\n",
        "        x = x.view(x.size(0), -1)\n",
        "        \n",
        "        # 全连接层\n",
        "        x = F.relu(self.fc1(x))\n",
        "        x = self.dropout(x)\n",
        "        x = F.relu(self.fc2(x))\n",
        "        x = self.dropout(x)\n",
        "        x = self.fc3(x)\n",
        "        \n",
        "        return x\n",
        "\n",
        "# 深度CNN模型\n",
        "class DeepCNN(nn.Module):\n",
        "    \"\"\"深度CNN模型\"\"\"\n",
        "    \n",
        "    def __init__(self, num_classes=10):\n",
        "        super(DeepCNN, self).__init__()\n",
        "        \n",
        "        # 卷积层\n",
        "        self.conv1 = nn.Conv2d(1, 64, kernel_size=3, padding=1)\n",
        "        self.conv2 = nn.Conv2d(64, 64, kernel_size=3, padding=1)\n",
        "        self.conv3 = nn.Conv2d(64, 128, kernel_size=3, padding=1)\n",
        "        self.conv4 = nn.Conv2d(128, 128, kernel_size=3, padding=1)\n",
        "        self.conv5 = nn.Conv2d(128, 256, kernel_size=3, padding=1)\n",
        "        \n",
        "        # 池化层\n",
        "        self.pool = nn.MaxPool2d(2, 2)\n",
        "        \n",
        "        # 全连接层\n",
        "        self.fc1 = nn.Linear(256 * 1 * 1, 512)\n",
        "        self.fc2 = nn.Linear(512, 256)\n",
        "        self.fc3 = nn.Linear(256, num_classes)\n",
        "        \n",
        "        # Dropout\n",
        "        self.dropout = nn.Dropout(0.5)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        # 卷积块1\n",
        "        x = F.relu(self.conv1(x))\n",
        "        x = F.relu(self.conv2(x))\n",
        "        x = self.pool(x)  # 28x28 -> 14x14\n",
        "        \n",
        "        # 卷积块2\n",
        "        x = F.relu(self.conv3(x))\n",
        "        x = F.relu(self.conv4(x))\n",
        "        x = self.pool(x)  # 14x14 -> 7x7\n",
        "        \n",
        "        # 卷积块3（池化两次，才能得到fc1期望的256×1×1）\n",
        "        x = F.relu(self.conv5(x))\n",
        "        x = self.pool(x)  # 7x7 -> 3x3\n",
        "        x = self.pool(x)  # 3x3 -> 1x1\n",
        "        \n",
        "        # 展平（保留batch维度）\n",
        "        x = x.view(x.size(0), -1)\n",
        "        \n",
        "        # 全连接层\n",
        "        x = F.relu(self.fc1(x))\n",
        "        x = self.dropout(x)\n",
        "        x = F.relu(self.fc2(x))\n",
        "        x = self.dropout(x)\n",
        "        x = self.fc3(x)\n",
        "        \n",
        "        return x\n",
        "\n",
        "# 创建模型实例\n",
        "simple_cnn = SimpleCNN(num_classes=10)\n",
        "medium_cnn = MediumCNN(num_classes=10)\n",
        "deep_cnn = DeepCNN(num_classes=10)\n",
        "\n",
        "# 将模型移动到设备\n",
        "simple_cnn = simple_cnn.to(device)\n",
        "medium_cnn = medium_cnn.to(device)\n",
        "deep_cnn = deep_cnn.to(device)\n",
        "\n",
        "print(\"CNN模型结构:\")\n",
        "print(\"\\n简单CNN:\")\n",
        "print(simple_cnn)\n",
        "\n",
        "print(\"\\n中等CNN:\")\n",
        "print(medium_cnn)\n",
        "\n",
        "print(\"\\n深度CNN:\")\n",
        "print(deep_cnn)\n",
        "\n",
        "# 计算模型参数数量\n",
        "def count_parameters(model):\n",
        "    return sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
        "\n",
        "simple_params = count_parameters(simple_cnn)\n",
        "medium_params = count_parameters(medium_cnn)\n",
        "deep_params = count_parameters(deep_cnn)\n",
        "\n",
        "print(f\"\\n参数数量:\")\n",
        "print(f\"简单CNN: {simple_params:,} 参数\")\n",
        "print(f\"中等CNN: {medium_params:,} 参数\")\n",
        "print(f\"深度CNN: {deep_params:,} 参数\")\n",
        "\n",
        "# 测试模型前向传播\n",
        "sample_input = torch.randn(4, 1, 28, 28).to(device)\n",
        "\n",
        "with torch.no_grad():\n",
        "    simple_output = simple_cnn(sample_input)\n",
        "    medium_output = medium_cnn(sample_input)\n",
        "    deep_output = deep_cnn(sample_input)\n",
        "\n",
        "print(f\"\\n前向传播测试:\")\n",
        "print(f\"输入形状: {sample_input.shape}\")\n",
        "print(f\"简单CNN输出形状: {simple_output.shape}\")\n",
        "print(f\"中等CNN输出形状: {medium_output.shape}\")\n",
        "print(f\"深度CNN输出形状: {deep_output.shape}\")\n",
        "\n",
        "# 可视化CNN架构\n",
        "def visualize_cnn_architecture():\n",
        "    \"\"\"可视化CNN架构\"\"\"\n",
        "    fig, axes = plt.subplots(1, 3, figsize=(18, 6))\n",
        "    \n",
        "    # 简单CNN架构\n",
        "    simple_layers = ['输入\\n(1×28×28)', 'Conv1\\n(32×28×28)', 'Pool\\n(32×14×14)', \n",
        "                    'Conv2\\n(64×14×14)', 'Pool\\n(64×7×7)', 'FC1\\n(128)', 'FC2\\n(10)']\n",
        "    simple_connections = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]\n",
        "    \n",
        "    ax1 = axes[0]\n",
        "    ax1.set_xlim(-0.5, 6.5)\n",
        "    ax1.set_ylim(-0.5, 2.5)\n",
        "    \n",
        "    for i, layer in enumerate(simple_layers):\n",
        "        ax1.text(i, 1, layer, ha='center', va='center', \n",
        "                bbox=dict(boxstyle=\"round,pad=0.3\", facecolor=\"lightblue\"),\n",
        "                fontsize=10, fontweight='bold')\n",
        "    \n",
        "    for start, end in simple_connections:\n",
        "        ax1.arrow(start, 0.7, end-start, 0, head_width=0.05, head_length=0.05, \n",
        "                 fc='red', ec='red')\n",
        "    \n",
        "    ax1.set_title('简单CNN架构', fontsize=14, fontweight='bold')\n",
        "    ax1.axis('off')\n",
        "    \n",
        "    # 中等CNN架构\n",
        "    medium_layers = ['输入\\n(1×28×28)', 'Conv1\\n(32×28×28)', 'Pool\\n(32×14×14)', \n",
        "                    'Conv2\\n(64×14×14)', 'Pool\\n(64×7×7)', 'Conv3\\n(128×7×7)', \n",
        "                    'Pool\\n(128×3×3)', 'FC1\\n(256)', 'FC2\\n(128)', 'FC3\\n(10)']\n",
        "    medium_connections = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9)]\n",
        "    \n",
        "    ax2 = axes[1]\n",
        "    ax2.set_xlim(-0.5, 9.5)\n",
        "    ax2.set_ylim(-0.5, 2.5)\n",
        "    \n",
        "    for i, layer in enumerate(medium_layers):\n",
        "        ax2.text(i, 1, layer, ha='center', va='center', \n",
        "                bbox=dict(boxstyle=\"round,pad=0.3\", facecolor=\"lightgreen\"),\n",
        "                fontsize=9, fontweight='bold')\n",
        "    \n",
        "    for start, end in medium_connections:\n",
        "        ax2.arrow(start, 0.7, end-start, 0, head_width=0.05, head_length=0.05, \n",
        "                 fc='red', ec='red')\n",
        "    \n",
        "    ax2.set_title('中等CNN架构', fontsize=14, fontweight='bold')\n",
        "    ax2.axis('off')\n",
        "    \n",
        "    # 深度CNN架构\n",
        "    deep_layers = ['输入\\n(1×28×28)', 'Conv1\\n(64×28×28)', 'Conv2\\n(64×28×28)', \n",
        "                  'Pool\\n(64×14×14)', 'Conv3\\n(128×14×14)', 'Conv4\\n(128×14×14)', \n",
        "                  'Pool\\n(128×7×7)', 'Conv5\\n(256×7×7)', 'Pool\\n(256×1×1)', \n",
        "                  'FC1\\n(512)', 'FC2\\n(256)', 'FC3\\n(10)']\n",
        "    deep_connections = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9), (9, 10), (10, 11)]\n",
        "    \n",
        "    ax3 = axes[2]\n",
        "    ax3.set_xlim(-0.5, 11.5)\n",
        "    ax3.set_ylim(-0.5, 2.5)\n",
        "    \n",
        "    for i, layer in enumerate(deep_layers):\n",
        "        ax3.text(i, 1, layer, ha='center', va='center', \n",
        "                bbox=dict(boxstyle=\"round,pad=0.3\", facecolor=\"lightcoral\"),\n",
        "                fontsize=8, fontweight='bold')\n",
        "    \n",
        "    for start, end in deep_connections:\n",
        "        ax3.arrow(start, 0.7, end-start, 0, head_width=0.05, head_length=0.05, \n",
        "                 fc='red', ec='red')\n",
        "    \n",
        "    ax3.set_title('深度CNN架构', fontsize=14, fontweight='bold')\n",
        "    ax3.axis('off')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "visualize_cnn_architecture()\n",
        "\n",
        "# 分析特征图尺寸变化\n",
        "def analyze_feature_map_sizes():\n",
        "    \"\"\"分析特征图尺寸变化\"\"\"\n",
        "    print(\"特征图尺寸变化分析:\")\n",
        "    print(\"=\" * 50)\n",
        "    \n",
        "    # 简单CNN\n",
        "    print(\"简单CNN:\")\n",
        "    print(\"输入: 1×28×28\")\n",
        "    print(\"Conv1: 32×28×28\")\n",
        "    print(\"Pool1: 32×14×14\")\n",
        "    print(\"Conv2: 64×14×14\")\n",
        "    print(\"Pool2: 64×7×7\")\n",
        "    print(\"Flatten: 64×7×7 = 3136\")\n",
        "    print(\"FC1: 128\")\n",
        "    print(\"FC2: 10\")\n",
        "    print()\n",
        "    \n",
        "    # 中等CNN\n",
        "    print(\"中等CNN:\")\n",
        "    print(\"输入: 1×28×28\")\n",
        "    print(\"Conv1: 32×28×28\")\n",
        "    print(\"Pool1: 32×14×14\")\n",
        "    print(\"Conv2: 64×14×14\")\n",
        "    print(\"Pool2: 64×7×7\")\n",
        "    print(\"Conv3: 128×7×7\")\n",
        "    print(\"Pool3: 128×3×3\")\n",
        "    print(\"Flatten: 128×3×3 = 1152\")\n",
        "    print(\"FC1: 256\")\n",
        "    print(\"FC2: 128\")\n",
        "    print(\"FC3: 10\")\n",
        "    print()\n",
        "    \n",
        "    # 深度CNN\n",
        "    print(\"深度CNN:\")\n",
        "    print(\"输入: 1×28×28\")\n",
        "    print(\"Conv1: 64×28×28\")\n",
        "    print(\"Conv2: 64×28×28\")\n",
        "    print(\"Pool1: 64×14×14\")\n",
        "    print(\"Conv3: 128×14×14\")\n",
        "    print(\"Conv4: 128×14×14\")\n",
        "    print(\"Pool2: 128×7×7\")\n",
        "    print(\"Conv5: 256×7×7\")\n",
        "    print(\"Pool3: 256×1×1\")\n",
        "    print(\"Flatten: 256×1×1 = 256\")\n",
        "    print(\"FC1: 512\")\n",
        "    print(\"FC2: 256\")\n",
        "    print(\"FC3: 10\")\n",
        "\n",
        "analyze_feature_map_sizes()\n"
      ]
    },
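    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 补充示例：卷积层参数量公式 (k×k×C_in + 1)×C_out（含偏置），\n",
        "# 与 nn.Conv2d 的实际参数量对比，说明参数共享为何使CNN比全连接层更省参数。\n",
        "def conv_params(c_in, c_out, k):\n",
        "    \"\"\"按公式计算Conv2d参数量（含偏置）\"\"\"\n",
        "    return (k * k * c_in + 1) * c_out\n",
        "\n",
        "conv = nn.Conv2d(1, 32, kernel_size=3, padding=1)\n",
        "actual = sum(p.numel() for p in conv.parameters())\n",
        "print(f'公式计算: {conv_params(1, 32, 3)}, 实际参数量: {actual}')  # 均为320\n"
      ]
    },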
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 4. 训练函数和评估函数\n",
        "\n",
        "现在让我们定义训练和评估函数，这些函数将帮助我们训练CNN模型并监控性能。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 训练函数\n",
        "def train_cnn_model(model, train_loader, criterion, optimizer, device, epoch):\n",
        "    \"\"\"训练CNN模型一个epoch\"\"\"\n",
        "    model.train()\n",
        "    running_loss = 0.0\n",
        "    correct = 0\n",
        "    total = 0\n",
        "    \n",
        "    pbar = tqdm(train_loader, desc=f'Epoch {epoch+1}')\n",
        "    \n",
        "    for batch_idx, (data, target) in enumerate(pbar):\n",
        "        data, target = data.to(device), target.to(device)\n",
        "        \n",
        "        optimizer.zero_grad()\n",
        "        output = model(data)\n",
        "        loss = criterion(output, target)\n",
        "        loss.backward()\n",
        "        optimizer.step()\n",
        "        \n",
        "        running_loss += loss.item()\n",
        "        _, predicted = torch.max(output.data, 1)\n",
        "        total += target.size(0)\n",
        "        correct += (predicted == target).sum().item()\n",
        "        \n",
        "        pbar.set_postfix({\n",
        "            'Loss': f'{loss.item():.4f}',\n",
        "            'Acc': f'{100.*correct/total:.2f}%'\n",
        "        })\n",
        "    \n",
        "    epoch_loss = running_loss / len(train_loader)\n",
        "    epoch_acc = 100. * correct / total\n",
        "    \n",
        "    return epoch_loss, epoch_acc\n",
        "\n",
        "# 评估函数\n",
        "def evaluate_cnn_model(model, test_loader, criterion, device):\n",
        "    \"\"\"评估CNN模型\"\"\"\n",
        "    model.eval()\n",
        "    test_loss = 0.0\n",
        "    correct = 0\n",
        "    total = 0\n",
        "    all_predictions = []\n",
        "    all_targets = []\n",
        "    \n",
        "    with torch.no_grad():\n",
        "        for data, target in tqdm(test_loader, desc='Evaluating'):\n",
        "            data, target = data.to(device), target.to(device)\n",
        "            \n",
        "            output = model(data)\n",
        "            loss = criterion(output, target)\n",
        "            \n",
        "            test_loss += loss.item()\n",
        "            _, predicted = torch.max(output.data, 1)\n",
        "            total += target.size(0)\n",
        "            correct += (predicted == target).sum().item()\n",
        "            \n",
        "            all_predictions.extend(predicted.cpu().numpy())\n",
        "            all_targets.extend(target.cpu().numpy())\n",
        "    \n",
        "    test_loss /= len(test_loader)\n",
        "    test_acc = 100. * correct / total\n",
        "    \n",
        "    return test_loss, test_acc, all_predictions, all_targets\n",
        "\n",
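        "# 小示例：上面两个函数里的 torch.max(output, 1) 沿类别维返回 (最大值, 索引)，\n",
        "# 索引即预测类别（这里的 logits 是演示用的假数据）\n",
        "_demo_logits = torch.tensor([[0.1, 2.0, -1.0], [3.0, 0.0, 0.5]])\n",
        "_, _demo_pred = torch.max(_demo_logits, 1)\n",
        "assert _demo_pred.tolist() == [1, 0]\n",
        "\n",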
        "# 完整的训练循环\n",
        "def train_complete_cnn(model, train_loader, test_loader, num_epochs=10, \n",
        "                      learning_rate=0.001, model_name=\"CNN\"):\n",
        "    \"\"\"完整的CNN训练循环\"\"\"\n",
        "    \n",
        "    criterion = nn.CrossEntropyLoss()\n",
        "    optimizer = optim.Adam(model.parameters(), lr=learning_rate)\n",
        "    \n",
        "    train_losses = []\n",
        "    train_accuracies = []\n",
        "    test_losses = []\n",
        "    test_accuracies = []\n",
        "    \n",
        "    print(f\"开始训练 {model_name}...\")\n",
        "    print(f\"训练参数: Epochs={num_epochs}, LR={learning_rate}\")\n",
        "    print(\"=\" * 60)\n",
        "    \n",
        "    start_time = time.time()\n",
        "    \n",
        "    for epoch in range(num_epochs):\n",
        "        train_loss, train_acc = train_cnn_model(\n",
        "            model, train_loader, criterion, optimizer, device, epoch\n",
        "        )\n",
        "        \n",
        "        test_loss, test_acc, _, _ = evaluate_cnn_model(\n",
        "            model, test_loader, criterion, device\n",
        "        )\n",
        "        \n",
        "        train_losses.append(train_loss)\n",
        "        train_accuracies.append(train_acc)\n",
        "        test_losses.append(test_loss)\n",
        "        test_accuracies.append(test_acc)\n",
        "        \n",
        "        print(f'Epoch {epoch+1:2d}/{num_epochs}: '\n",
        "              f'Train Loss: {train_loss:.4f}, Train Acc: {train_acc:.2f}% | '\n",
        "              f'Test Loss: {test_loss:.4f}, Test Acc: {test_acc:.2f}%')\n",
        "    \n",
        "    training_time = time.time() - start_time\n",
        "    print(f\"\\n{model_name} 训练完成!\")\n",
        "    print(f\"总训练时间: {training_time:.2f}秒\")\n",
        "    print(f\"最终测试准确率: {test_accuracies[-1]:.2f}%\")\n",
        "    \n",
        "    return {\n",
        "        'train_losses': train_losses,\n",
        "        'train_accuracies': train_accuracies,\n",
        "        'test_losses': test_losses,\n",
        "        'test_accuracies': test_accuracies,\n",
        "        'training_time': training_time,\n",
        "        'final_test_acc': test_accuracies[-1]\n",
        "    }\n",
        "\n",
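        "# 提示：nn.CrossEntropyLoss 直接接收未归一化的 logits（内部做 log-softmax），\n",
        "# 模型输出层不要再手动加 softmax（小的演示性检查）\n",
        "_demo_ce = nn.CrossEntropyLoss()\n",
        "_demo_loss = _demo_ce(torch.tensor([[10.0, 0.0]]), torch.tensor([0]))\n",
        "assert _demo_loss.item() < 0.01  # 正确类的 logit 远大于其他类时，损失接近 0\n",
        "\n",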
        "# 可视化训练结果\n",
        "def plot_cnn_training_results(results, model_name):\n",
        "    \"\"\"可视化CNN训练结果\"\"\"\n",
        "    fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 10))\n",
        "    \n",
        "    epochs = range(1, len(results['train_losses']) + 1)\n",
        "    \n",
        "    # 损失曲线\n",
        "    ax1.plot(epochs, results['train_losses'], 'b-', label='训练损失', linewidth=2)\n",
        "    ax1.plot(epochs, results['test_losses'], 'r-', label='测试损失', linewidth=2)\n",
        "    ax1.set_title(f'{model_name} - 损失变化')\n",
        "    ax1.set_xlabel('Epoch')\n",
        "    ax1.set_ylabel('损失')\n",
        "    ax1.legend()\n",
        "    ax1.grid(True, alpha=0.3)\n",
        "    \n",
        "    # 准确率曲线\n",
        "    ax2.plot(epochs, results['train_accuracies'], 'b-', label='训练准确率', linewidth=2)\n",
        "    ax2.plot(epochs, results['test_accuracies'], 'r-', label='测试准确率', linewidth=2)\n",
        "    ax2.set_title(f'{model_name} - 准确率变化')\n",
        "    ax2.set_xlabel('Epoch')\n",
        "    ax2.set_ylabel('准确率 (%)')\n",
        "    ax2.legend()\n",
        "    ax2.grid(True, alpha=0.3)\n",
        "    \n",
        "    # 训练vs测试损失对比\n",
        "    ax3.scatter(results['train_losses'], results['test_losses'], \n",
        "               c=epochs, cmap='viridis', s=50, alpha=0.7)\n",
        "    ax3.plot([0, max(max(results['train_losses']), max(results['test_losses']))], \n",
        "             [0, max(max(results['train_losses']), max(results['test_losses']))], \n",
        "             'k--', alpha=0.5)\n",
        "    ax3.set_title(f'{model_name} - 训练vs测试损失')\n",
        "    ax3.set_xlabel('训练损失')\n",
        "    ax3.set_ylabel('测试损失')\n",
        "    ax3.grid(True, alpha=0.3)\n",
        "    \n",
        "    # 训练vs测试准确率对比\n",
        "    ax4.scatter(results['train_accuracies'], results['test_accuracies'], \n",
        "               c=epochs, cmap='viridis', s=50, alpha=0.7)\n",
        "    ax4.plot([0, 100], [0, 100], 'k--', alpha=0.5)\n",
        "    ax4.set_title(f'{model_name} - 训练vs测试准确率')\n",
        "    ax4.set_xlabel('训练准确率 (%)')\n",
        "    ax4.set_ylabel('测试准确率 (%)')\n",
        "    ax4.grid(True, alpha=0.3)\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "print(\"CNN训练和评估函数定义完成!\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 5. 训练CNN模型\n",
        "\n",
        "现在让我们开始训练我们的CNN模型！\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 训练简单CNN模型\n",
        "print(\"开始训练简单CNN模型...\")\n",
        "simple_cnn_results = train_complete_cnn(\n",
        "    model=simple_cnn,\n",
        "    train_loader=train_loader_augmented,  # 使用数据增强\n",
        "    test_loader=test_loader,\n",
        "    num_epochs=10,\n",
        "    learning_rate=0.001,\n",
        "    model_name=\"简单CNN\"\n",
        ")\n",
        "\n",
        "# 可视化简单CNN的训练结果\n",
        "plot_cnn_training_results(simple_cnn_results, \"简单CNN\")\n",
        "\n",
        "# 获取最终预测结果\n",
        "print(\"\\n获取简单CNN的最终预测结果...\")\n",
        "_, _, simple_cnn_predictions, simple_cnn_targets = evaluate_cnn_model(\n",
        "    simple_cnn, test_loader, nn.CrossEntropyLoss(), device\n",
        ")\n",
        "\n",
        "# 计算混淆矩阵\n",
        "simple_cnn_cm = confusion_matrix(simple_cnn_targets, simple_cnn_predictions)\n",
        "\n",
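        "# 提醒：sklearn 的 confusion_matrix 行对应真实标签、列对应预测标签（小示例）\n",
        "assert (confusion_matrix([0, 1, 1], [0, 1, 0]) == np.array([[1, 0], [1, 1]])).all()\n",
        "\n",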
        "# 可视化混淆矩阵\n",
        "plt.figure(figsize=(10, 8))\n",
        "sns.heatmap(simple_cnn_cm, annot=True, fmt='d', cmap='Blues',\n",
        "            xticklabels=range(10), yticklabels=range(10))\n",
        "plt.title('简单CNN - 混淆矩阵')\n",
        "plt.xlabel('预测标签')\n",
        "plt.ylabel('真实标签')\n",
        "plt.show()\n",
        "\n",
        "# 打印分类报告\n",
        "print(\"\\n简单CNN分类报告:\")\n",
        "print(classification_report(simple_cnn_targets, simple_cnn_predictions, \n",
        "                          target_names=[f'数字 {i}' for i in range(10)]))\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 训练深度CNN模型\n",
        "print(\"开始训练深度CNN模型...\")\n",
        "deep_cnn_results = train_complete_cnn(\n",
        "    model=deep_cnn,\n",
        "    train_loader=train_loader_augmented,  # 使用数据增强\n",
        "    test_loader=test_loader,\n",
        "    num_epochs=10,\n",
        "    learning_rate=0.001,\n",
        "    model_name=\"深度CNN\"\n",
        ")\n",
        "\n",
        "# 可视化深度CNN的训练结果\n",
        "plot_cnn_training_results(deep_cnn_results, \"深度CNN\")\n",
        "\n",
        "# 获取最终预测结果\n",
        "print(\"\\n获取深度CNN的最终预测结果...\")\n",
        "_, _, deep_cnn_predictions, deep_cnn_targets = evaluate_cnn_model(\n",
        "    deep_cnn, test_loader, nn.CrossEntropyLoss(), device\n",
        ")\n",
        "\n",
        "# 计算混淆矩阵\n",
        "deep_cnn_cm = confusion_matrix(deep_cnn_targets, deep_cnn_predictions)\n",
        "\n",
        "# 可视化混淆矩阵\n",
        "plt.figure(figsize=(10, 8))\n",
        "sns.heatmap(deep_cnn_cm, annot=True, fmt='d', cmap='Greens',\n",
        "            xticklabels=range(10), yticklabels=range(10))\n",
        "plt.title('深度CNN - 混淆矩阵')\n",
        "plt.xlabel('预测标签')\n",
        "plt.ylabel('真实标签')\n",
        "plt.show()\n",
        "\n",
        "# 打印分类报告\n",
        "print(\"\\n深度CNN分类报告:\")\n",
        "print(classification_report(deep_cnn_targets, deep_cnn_predictions, \n",
        "                          target_names=[f'数字 {i}' for i in range(10)]))\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 6. CNN vs MLP性能对比\n",
        "\n",
        "现在让我们对比CNN和之前训练的MLP模型的性能差异。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 为了对比，我们需要重新训练一个MLP模型（使用相同的数据增强）\n",
        "\n",
        "# 重新定义MLP模型（与之前相同，直接使用已导入的 torch.nn）\n",
        "class SimpleMLP(nn.Module):\n",
        "    def __init__(self, input_size=784, hidden_size=128, num_classes=10):\n",
        "        super(SimpleMLP, self).__init__()\n",
        "        self.fc1 = nn.Linear(input_size, hidden_size)\n",
        "        self.fc2 = nn.Linear(hidden_size, num_classes)\n",
        "        self.relu = nn.ReLU()\n",
        "        self.dropout = nn.Dropout(0.2)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        x = x.view(x.size(0), -1)\n",
        "        x = self.fc1(x)\n",
        "        x = self.relu(x)\n",
        "        x = self.dropout(x)\n",
        "        x = self.fc2(x)\n",
        "        return x\n",
        "\n",
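        "# 形状自检：MLP 在 forward 里先把 1×28×28 图像展平为 784 维向量，再映射到 10 类\n",
        "# （随机输入仅作演示）\n",
        "_demo_out = SimpleMLP()(torch.randn(2, 1, 28, 28))\n",
        "assert _demo_out.shape == (2, 10)\n",
        "\n",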
        "# 创建MLP模型\n",
        "mlp_model = SimpleMLP().to(device)\n",
        "\n",
        "# 训练MLP模型\n",
        "print(\"开始训练MLP模型进行对比...\")\n",
        "mlp_results = train_complete_cnn(\n",
        "    model=mlp_model,\n",
        "    train_loader=train_loader_augmented,  # 使用相同的数据增强\n",
        "    test_loader=test_loader,\n",
        "    num_epochs=10,\n",
        "    learning_rate=0.001,\n",
        "    model_name=\"MLP\"\n",
        ")\n",
        "\n",
        "# 获取MLP预测结果\n",
        "_, _, mlp_predictions, mlp_targets = evaluate_cnn_model(\n",
        "    mlp_model, test_loader, nn.CrossEntropyLoss(), device\n",
        ")\n",
        "\n",
        "# 模型性能对比\n",
        "print(\"=\" * 80)\n",
        "print(\"CNN vs MLP 性能对比分析\")\n",
        "print(\"=\" * 80)\n",
        "\n",
        "# 性能对比表\n",
        "comparison_data = {\n",
        "    '模型': ['简单CNN', '深度CNN', 'MLP'],\n",
        "    '参数数量': [simple_params, deep_params, count_parameters(mlp_model)],\n",
        "    '最终测试准确率': [simple_cnn_results['final_test_acc'], deep_cnn_results['final_test_acc'], mlp_results['final_test_acc']],\n",
        "    '训练时间(秒)': [simple_cnn_results['training_time'], deep_cnn_results['training_time'], mlp_results['training_time']],\n",
        "    '最终训练损失': [simple_cnn_results['train_losses'][-1], deep_cnn_results['train_losses'][-1], mlp_results['train_losses'][-1]],\n",
        "    '最终测试损失': [simple_cnn_results['test_losses'][-1], deep_cnn_results['test_losses'][-1], mlp_results['test_losses'][-1]]\n",
        "}\n",
        "\n",
        "print(f\"{'指标':<20} {'简单CNN':<15} {'深度CNN':<15} {'MLP':<15}\")\n",
        "print(\"-\" * 65)\n",
        "\n",
        "for metric, values in comparison_data.items():\n",
        "    if metric == '模型':\n",
        "        continue\n",
        "    simple_cnn_val = values[0]\n",
        "    deep_cnn_val = values[1]\n",
        "    mlp_val = values[2]\n",
        "    \n",
        "    if metric == '参数数量':\n",
        "        print(f\"{metric:<20} {simple_cnn_val:<15,} {deep_cnn_val:<15,} {mlp_val:<15,}\")\n",
        "    elif metric == '训练时间(秒)':\n",
        "        print(f\"{metric:<20} {simple_cnn_val:<15.2f} {deep_cnn_val:<15.2f} {mlp_val:<15.2f}\")\n",
        "    else:\n",
        "        print(f\"{metric:<20} {simple_cnn_val:<15.4f} {deep_cnn_val:<15.4f} {mlp_val:<15.4f}\")\n",
        "\n",
        "# 可视化对比\n",
        "fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 12))\n",
        "\n",
        "# 1. 准确率对比\n",
        "models = ['简单CNN', '深度CNN', 'MLP']\n",
        "train_accs = [simple_cnn_results['train_accuracies'][-1], deep_cnn_results['train_accuracies'][-1], mlp_results['train_accuracies'][-1]]\n",
        "test_accs = [simple_cnn_results['test_accuracies'][-1], deep_cnn_results['test_accuracies'][-1], mlp_results['test_accuracies'][-1]]\n",
        "\n",
        "x = np.arange(len(models))\n",
        "width = 0.35\n",
        "\n",
        "ax1.bar(x - width/2, train_accs, width, label='训练准确率', alpha=0.8, color='skyblue')\n",
        "ax1.bar(x + width/2, test_accs, width, label='测试准确率', alpha=0.8, color='lightcoral')\n",
        "ax1.set_title('模型准确率对比')\n",
        "ax1.set_ylabel('准确率 (%)')\n",
        "ax1.set_xticks(x)\n",
        "ax1.set_xticklabels(models)\n",
        "ax1.legend()\n",
        "ax1.grid(True, alpha=0.3)\n",
        "\n",
        "# 添加数值标签\n",
        "for i, (train_acc, test_acc) in enumerate(zip(train_accs, test_accs)):\n",
        "    ax1.text(i - width/2, train_acc + 0.5, f'{train_acc:.2f}%', ha='center', va='bottom')\n",
        "    ax1.text(i + width/2, test_acc + 0.5, f'{test_acc:.2f}%', ha='center', va='bottom')\n",
        "\n",
        "# 2. 损失对比\n",
        "train_losses = [simple_cnn_results['train_losses'][-1], deep_cnn_results['train_losses'][-1], mlp_results['train_losses'][-1]]\n",
        "test_losses = [simple_cnn_results['test_losses'][-1], deep_cnn_results['test_losses'][-1], mlp_results['test_losses'][-1]]\n",
        "\n",
        "ax2.bar(x - width/2, train_losses, width, label='训练损失', alpha=0.8, color='lightgreen')\n",
        "ax2.bar(x + width/2, test_losses, width, label='测试损失', alpha=0.8, color='orange')\n",
        "ax2.set_title('模型损失对比')\n",
        "ax2.set_ylabel('损失值')\n",
        "ax2.set_xticks(x)\n",
        "ax2.set_xticklabels(models)\n",
        "ax2.legend()\n",
        "ax2.grid(True, alpha=0.3)\n",
        "\n",
        "# 添加数值标签\n",
        "for i, (train_loss, test_loss) in enumerate(zip(train_losses, test_losses)):\n",
        "    ax2.text(i - width/2, train_loss + 0.01, f'{train_loss:.4f}', ha='center', va='bottom')\n",
        "    ax2.text(i + width/2, test_loss + 0.01, f'{test_loss:.4f}', ha='center', va='bottom')\n",
        "\n",
        "# 3. 训练时间对比\n",
        "training_times = [simple_cnn_results['training_time'], deep_cnn_results['training_time'], mlp_results['training_time']]\n",
        "ax3.bar(models, training_times, alpha=0.8, color=['purple', 'brown', 'green'])\n",
        "ax3.set_title('训练时间对比')\n",
        "ax3.set_ylabel('时间 (秒)')\n",
        "ax3.grid(True, alpha=0.3)\n",
        "\n",
        "# 添加数值标签\n",
        "for i, time_val in enumerate(training_times):\n",
        "    ax3.text(i, time_val + 1, f'{time_val:.1f}s', ha='center', va='bottom')\n",
        "\n",
        "# 4. 参数数量对比\n",
        "param_counts = [simple_params, deep_params, count_parameters(mlp_model)]\n",
        "ax4.bar(models, param_counts, alpha=0.8, color=['red', 'blue', 'orange'])\n",
        "ax4.set_title('模型参数数量对比')\n",
        "ax4.set_ylabel('参数数量')\n",
        "ax4.set_yscale('log')  # 使用对数坐标\n",
        "ax4.grid(True, alpha=0.3)\n",
        "\n",
        "# 添加数值标签\n",
        "for i, count in enumerate(param_counts):\n",
        "    ax4.text(i, count * 1.2, f'{count:,}', ha='center', va='bottom')\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# 训练过程对比\n",
        "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n",
        "\n",
        "# 损失曲线对比\n",
        "epochs = range(1, len(simple_cnn_results['train_losses']) + 1)\n",
        "ax1.plot(epochs, simple_cnn_results['train_losses'], 'b-', label='简单CNN-训练', linewidth=2)\n",
        "ax1.plot(epochs, simple_cnn_results['test_losses'], 'b--', label='简单CNN-测试', linewidth=2)\n",
        "ax1.plot(epochs, deep_cnn_results['train_losses'], 'r-', label='深度CNN-训练', linewidth=2)\n",
        "ax1.plot(epochs, deep_cnn_results['test_losses'], 'r--', label='深度CNN-测试', linewidth=2)\n",
        "ax1.plot(epochs, mlp_results['train_losses'], 'g-', label='MLP-训练', linewidth=2)\n",
        "ax1.plot(epochs, mlp_results['test_losses'], 'g--', label='MLP-测试', linewidth=2)\n",
        "ax1.set_title('训练过程损失对比')\n",
        "ax1.set_xlabel('Epoch')\n",
        "ax1.set_ylabel('损失')\n",
        "ax1.legend()\n",
        "ax1.grid(True, alpha=0.3)\n",
        "\n",
        "# 准确率曲线对比\n",
        "ax2.plot(epochs, simple_cnn_results['train_accuracies'], 'b-', label='简单CNN-训练', linewidth=2)\n",
        "ax2.plot(epochs, simple_cnn_results['test_accuracies'], 'b--', label='简单CNN-测试', linewidth=2)\n",
        "ax2.plot(epochs, deep_cnn_results['train_accuracies'], 'r-', label='深度CNN-训练', linewidth=2)\n",
        "ax2.plot(epochs, deep_cnn_results['test_accuracies'], 'r--', label='深度CNN-测试', linewidth=2)\n",
        "ax2.plot(epochs, mlp_results['train_accuracies'], 'g-', label='MLP-训练', linewidth=2)\n",
        "ax2.plot(epochs, mlp_results['test_accuracies'], 'g--', label='MLP-测试', linewidth=2)\n",
        "ax2.set_title('训练过程准确率对比')\n",
        "ax2.set_xlabel('Epoch')\n",
        "ax2.set_ylabel('准确率 (%)')\n",
        "ax2.legend()\n",
        "ax2.grid(True, alpha=0.3)\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# 分析总结\n",
        "print(\"\\n\" + \"=\" * 80)\n",
        "print(\"CNN vs MLP 分析总结\")\n",
        "print(\"=\" * 80)\n",
        "\n",
        "print(\"1. 性能分析:\")\n",
        "best_cnn_acc = max(simple_cnn_results['final_test_acc'], deep_cnn_results['final_test_acc'])\n",
        "mlp_acc = mlp_results['final_test_acc']\n",
        "improvement = best_cnn_acc - mlp_acc\n",
        "print(f\"   - 最佳CNN相对MLP的测试准确率差值: {improvement:+.2f}%\")\n",
        "\n",
        "print(\"2. 效率分析:\")\n",
        "fastest_time = min(training_times)\n",
        "slowest_time = max(training_times)\n",
        "print(f\"   - 最快模型训练时间: {fastest_time:.1f}秒\")\n",
        "print(f\"   - 最慢模型训练时间: {slowest_time:.1f}秒\")\n",
        "print(f\"   - 时间差异: {slowest_time/fastest_time:.1f}倍\")\n",
        "\n",
        "print(\"3. 复杂度分析:\")\n",
        "min_params = min(param_counts)\n",
        "max_params = max(param_counts)\n",
        "print(f\"   - 最少参数: {min_params:,}\")\n",
        "print(f\"   - 最多参数: {max_params:,}\")\n",
        "print(f\"   - 参数差异: {max_params/min_params:.1f}倍\")\n",
        "\n",
        "print(\"4. 过拟合分析:\")\n",
        "simple_cnn_overfitting = simple_cnn_results['train_accuracies'][-1] - simple_cnn_results['test_accuracies'][-1]\n",
        "mlp_overfitting = mlp_results['train_accuracies'][-1] - mlp_results['test_accuracies'][-1]\n",
        "print(f\"   - 简单CNN过拟合程度(训练-测试准确率差): {simple_cnn_overfitting:.2f}%\")\n",
        "print(f\"   - MLP过拟合程度(训练-测试准确率差): {mlp_overfitting:.2f}%\")\n",
        "\n",
        "print(\"5. 建议:\")\n",
        "if best_cnn_acc > mlp_acc + 2:\n",
        "    print(\"   - CNN在图像分类任务上表现明显更好，建议使用CNN\")\n",
        "elif mlp_acc > best_cnn_acc + 1:\n",
        "    print(\"   - MLP在这个简单任务上表现更好，可能数据增强对MLP更有效\")\n",
        "else:\n",
        "    print(\"   - CNN和MLP性能相近，可根据具体需求选择\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 7. 特征图可视化\n",
        "\n",
        "CNN的一个重要特点是能够可视化学习到的特征图，让我们看看CNN是如何\"看\"图像的。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 特征图可视化函数\n",
        "def visualize_feature_maps(model, image, layer_name, device):\n",
        "    \"\"\"可视化CNN的特征图\"\"\"\n",
        "    model.eval()\n",
        "    \n",
        "    # 创建钩子函数来捕获特征图\n",
        "    feature_maps = {}\n",
        "    \n",
        "    def hook_fn(module, input, output):\n",
        "        feature_maps[layer_name] = output.detach()\n",
        "    \n",
        "    # 注册钩子\n",
        "    if layer_name == 'conv1':\n",
        "        hook = model.conv1.register_forward_hook(hook_fn)\n",
        "    elif layer_name == 'conv2':\n",
        "        hook = model.conv2.register_forward_hook(hook_fn)\n",
        "    else:\n",
        "        print(f\"不支持的层: {layer_name}\")\n",
        "        return None\n",
        "    \n",
        "    # 前向传播\n",
        "    with torch.no_grad():\n",
        "        image = image.unsqueeze(0).to(device)  # 添加批次维度\n",
        "        _ = model(image)\n",
        "    \n",
        "    # 移除钩子\n",
        "    hook.remove()\n",
        "    \n",
        "    # 获取特征图\n",
        "    if layer_name in feature_maps:\n",
        "        return feature_maps[layer_name].squeeze(0).cpu().numpy()\n",
        "    return None\n",
        "\n",
        "# 获取一些测试图像\n",
        "sample_images = []\n",
        "sample_labels = []\n",
        "for i in range(5):\n",
        "    image, label = test_dataset[i]\n",
        "    sample_images.append(image)\n",
        "    sample_labels.append(label)\n",
        "\n",
        "# 可视化第一个样本的特征图\n",
        "test_image = sample_images[0]\n",
        "test_label = sample_labels[0]\n",
        "\n",
        "print(f\"可视化图像标签: {test_label}\")\n",
        "\n",
        "# 获取不同层的特征图\n",
        "conv1_features = visualize_feature_maps(simple_cnn, test_image, 'conv1', device)\n",
        "conv2_features = visualize_feature_maps(simple_cnn, test_image, 'conv2', device)\n",
        "\n",
        "# 可视化原始图像\n",
        "plt.figure(figsize=(20, 12))\n",
        "\n",
        "# 原始图像\n",
        "plt.subplot(3, 1, 1)\n",
        "img = test_image.squeeze()\n",
        "img = (img + 1) / 2  # 反归一化\n",
        "plt.imshow(img, cmap='gray')\n",
        "plt.title(f'原始图像 (标签: {test_label})')\n",
        "plt.axis('off')\n",
        "\n",
        "# Conv1特征图\n",
        "if conv1_features is not None:\n",
        "    plt.subplot(3, 1, 2)\n",
        "    # 显示前16个特征图\n",
        "    n_features = min(16, conv1_features.shape[0])\n",
        "    fig_width = 4\n",
        "    fig_height = 4\n",
        "    \n",
        "    # 创建网格显示特征图\n",
        "    feature_grid = np.zeros((fig_height * conv1_features.shape[1], \n",
        "                           fig_width * conv1_features.shape[2]))\n",
        "    \n",
        "    for i in range(n_features):\n",
        "        row = i // fig_width\n",
        "        col = i % fig_width\n",
        "        start_row = row * conv1_features.shape[1]\n",
        "        end_row = start_row + conv1_features.shape[1]\n",
        "        start_col = col * conv1_features.shape[2]\n",
        "        end_col = start_col + conv1_features.shape[2]\n",
        "        \n",
        "        # 归一化特征图到[0,1]\n",
        "        feature_map = conv1_features[i]\n",
        "        feature_map = (feature_map - feature_map.min()) / (feature_map.max() - feature_map.min() + 1e-8)\n",
        "        feature_grid[start_row:end_row, start_col:end_col] = feature_map\n",
        "    \n",
        "    plt.imshow(feature_grid, cmap='viridis')\n",
        "    plt.title(f'Conv1特征图 (前{n_features}个通道)')\n",
        "    plt.axis('off')\n",
        "\n",
        "# Conv2特征图\n",
        "if conv2_features is not None:\n",
        "    plt.subplot(3, 1, 3)\n",
        "    # 显示前16个特征图\n",
        "    n_features = min(16, conv2_features.shape[0])\n",
        "    fig_width = 4\n",
        "    fig_height = 4\n",
        "    \n",
        "    # 创建网格显示特征图\n",
        "    feature_grid = np.zeros((fig_height * conv2_features.shape[1], \n",
        "                           fig_width * conv2_features.shape[2]))\n",
        "    \n",
        "    for i in range(n_features):\n",
        "        row = i // fig_width\n",
        "        col = i % fig_width\n",
        "        start_row = row * conv2_features.shape[1]\n",
        "        end_row = start_row + conv2_features.shape[1]\n",
        "        start_col = col * conv2_features.shape[2]\n",
        "        end_col = start_col + conv2_features.shape[2]\n",
        "        \n",
        "        # 归一化特征图到[0,1]\n",
        "        feature_map = conv2_features[i]\n",
        "        feature_map = (feature_map - feature_map.min()) / (feature_map.max() - feature_map.min() + 1e-8)\n",
        "        feature_grid[start_row:end_row, start_col:end_col] = feature_map\n",
        "    \n",
        "    plt.imshow(feature_grid, cmap='viridis')\n",
        "    plt.title(f'Conv2特征图 (前{n_features}个通道)')\n",
        "    plt.axis('off')\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# 可视化多个样本的特征图\n",
        "def visualize_multiple_samples():\n",
        "    \"\"\"可视化多个样本的特征图\"\"\"\n",
        "    fig, axes = plt.subplots(3, 5, figsize=(20, 12))\n",
        "    \n",
        "    for i in range(5):\n",
        "        # 原始图像\n",
        "        img = sample_images[i].squeeze()\n",
        "        img = (img + 1) / 2\n",
        "        axes[0, i].imshow(img, cmap='gray')\n",
        "        axes[0, i].set_title(f'样本{i+1} (标签: {sample_labels[i]})')\n",
        "        axes[0, i].axis('off')\n",
        "        \n",
        "        # Conv1特征图（选择第一个通道）\n",
        "        conv1_features = visualize_feature_maps(simple_cnn, sample_images[i], 'conv1', device)\n",
        "        if conv1_features is not None:\n",
        "            feature_map = conv1_features[0]  # 选择第一个通道\n",
        "            feature_map = (feature_map - feature_map.min()) / (feature_map.max() - feature_map.min() + 1e-8)\n",
        "            axes[1, i].imshow(feature_map, cmap='viridis')\n",
        "            axes[1, i].set_title(f'Conv1-通道0')\n",
        "            axes[1, i].axis('off')\n",
        "        \n",
        "        # Conv2特征图（选择第一个通道）\n",
        "        conv2_features = visualize_feature_maps(simple_cnn, sample_images[i], 'conv2', device)\n",
        "        if conv2_features is not None:\n",
        "            feature_map = conv2_features[0]  # 选择第一个通道\n",
        "            feature_map = (feature_map - feature_map.min()) / (feature_map.max() - feature_map.min() + 1e-8)\n",
        "            axes[2, i].imshow(feature_map, cmap='viridis')\n",
        "            axes[2, i].set_title(f'Conv2-通道0')\n",
        "            axes[2, i].axis('off')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "visualize_multiple_samples()\n",
        "\n",
        "# 分析特征图的统计信息\n",
        "def analyze_feature_maps():\n",
        "    \"\"\"分析特征图的统计信息\"\"\"\n",
        "    print(\"特征图统计分析:\")\n",
        "    print(\"=\" * 50)\n",
        "    \n",
        "    # 分析Conv1特征图\n",
        "    conv1_features = visualize_feature_maps(simple_cnn, test_image, 'conv1', device)\n",
        "    if conv1_features is not None:\n",
        "        print(f\"Conv1特征图形状: {conv1_features.shape}\")\n",
        "        print(f\"Conv1特征图统计:\")\n",
        "        print(f\"  - 均值: {conv1_features.mean():.4f}\")\n",
        "        print(f\"  - 标准差: {conv1_features.std():.4f}\")\n",
        "        print(f\"  - 最小值: {conv1_features.min():.4f}\")\n",
        "        print(f\"  - 最大值: {conv1_features.max():.4f}\")\n",
        "        print()\n",
        "    \n",
        "    # 分析Conv2特征图\n",
        "    conv2_features = visualize_feature_maps(simple_cnn, test_image, 'conv2', device)\n",
        "    if conv2_features is not None:\n",
        "        print(f\"Conv2特征图形状: {conv2_features.shape}\")\n",
        "        print(f\"Conv2特征图统计:\")\n",
        "        print(f\"  - 均值: {conv2_features.mean():.4f}\")\n",
        "        print(f\"  - 标准差: {conv2_features.std():.4f}\")\n",
        "        print(f\"  - 最小值: {conv2_features.min():.4f}\")\n",
        "        print(f\"  - 最大值: {conv2_features.max():.4f}\")\n",
        "        print()\n",
        "    \n",
        "    # 可视化特征图分布\n",
        "    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))\n",
        "    \n",
        "    if conv1_features is not None:\n",
        "        ax1.hist(conv1_features.flatten(), bins=50, alpha=0.7, color='blue')\n",
        "        ax1.set_title('Conv1特征图值分布')\n",
        "        ax1.set_xlabel('特征值')\n",
        "        ax1.set_ylabel('频次')\n",
        "        ax1.grid(True, alpha=0.3)\n",
        "    \n",
        "    if conv2_features is not None:\n",
        "        ax2.hist(conv2_features.flatten(), bins=50, alpha=0.7, color='red')\n",
        "        ax2.set_title('Conv2特征图值分布')\n",
        "        ax2.set_xlabel('特征值')\n",
        "        ax2.set_ylabel('频次')\n",
        "        ax2.grid(True, alpha=0.3)\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "analyze_feature_maps()\n",
        "\n",
        "# Visualize convolution kernels\n",
        "def visualize_conv_kernels():\n",
        "    \"\"\"Visualize the learned convolution kernels\"\"\"\n",
        "    print(\"Convolution kernel visualization:\")\n",
        "    print(\"=\" * 30)\n",
        "    \n",
        "    # Get the kernel weights\n",
        "    conv1_weights = simple_cnn.conv1.weight.data.cpu().numpy()\n",
        "    conv2_weights = simple_cnn.conv2.weight.data.cpu().numpy()\n",
        "    \n",
        "    print(f\"Conv1 kernel shape: {conv1_weights.shape}\")\n",
        "    print(f\"Conv2 kernel shape: {conv2_weights.shape}\")\n",
        "    \n",
        "    # Visualize the Conv1 kernels\n",
        "    fig, axes = plt.subplots(4, 8, figsize=(16, 8))\n",
        "    fig.suptitle('Conv1 kernels (first 32)', fontsize=16)\n",
        "    \n",
        "    for i in range(min(32, conv1_weights.shape[0])):\n",
        "        row = i // 8\n",
        "        col = i % 8\n",
        "        \n",
        "        kernel = conv1_weights[i, 0]  # first input channel\n",
        "        im = axes[row, col].imshow(kernel, cmap='RdBu', vmin=kernel.min(), vmax=kernel.max())\n",
        "        axes[row, col].set_title(f'Kernel {i}')\n",
        "        axes[row, col].axis('off')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "    \n",
        "    # Visualize the Conv2 kernels (first few only)\n",
        "    fig, axes = plt.subplots(2, 4, figsize=(12, 6))\n",
        "    fig.suptitle('Conv2 kernels (first 8)', fontsize=16)\n",
        "    \n",
        "    for i in range(min(8, conv2_weights.shape[0])):\n",
        "        row = i // 4\n",
        "        col = i % 4\n",
        "        \n",
        "        # Show the kernel for the first input channel\n",
        "        kernel = conv2_weights[i, 0]\n",
        "        im = axes[row, col].imshow(kernel, cmap='RdBu', vmin=kernel.min(), vmax=kernel.max())\n",
        "        axes[row, col].set_title(f'Kernel {i}')\n",
        "        axes[row, col].axis('off')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "visualize_conv_kernels()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 8. Summary\n",
        "\n",
        "Congratulations! You have worked through the full convolutional neural network (CNN) tutorial. Let's recap the key concepts:\n",
        "\n",
        "### 🎯 What you learned\n",
        "\n",
        "1. **Understanding convolution**\n",
        "   - Implemented the 2D convolution operation by hand\n",
        "   - Saw the effects of different kernels (edge detection, blurring, sharpening)\n",
        "   - Worked through the math and mechanics of convolution\n",
        "\n",
        "2. **CNN architecture design**\n",
        "   - Built three CNN models of increasing complexity\n",
        "   - Learned the roles of convolutional, pooling, and fully connected layers\n",
        "   - Learned how feature map sizes change through the network\n",
        "\n",
        "3. **Data augmentation**\n",
        "   - Learned why data augmentation matters\n",
        "   - Applied augmentations such as rotation and translation\n",
        "   - Analyzed how augmentation affects model performance\n",
        "\n",
        "4. **Model training and evaluation**\n",
        "   - Ran a complete CNN training pipeline\n",
        "   - Monitored and visualized the training process\n",
        "   - Analyzed and compared performance metrics\n",
        "\n",
        "5. **Feature map visualization**\n",
        "   - Used forward hooks to capture intermediate-layer features\n",
        "   - Visualized feature maps from different layers\n",
        "   - Analyzed the patterns learned by the convolution kernels\n",
        "\n",
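        "The forward-hook technique used for feature capture can be sketched minimally like this (a toy model is used here, not the notebook's `simple_cnn`; all names are illustrative):\n",
        "\n",
        "```python\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "features = {}\n",
        "\n",
        "def make_hook(name):\n",
        "    # Store the layer's output every time the forward pass runs\n",
        "    def hook(module, inputs, output):\n",
        "        features[name] = output.detach()\n",
        "    return hook\n",
        "\n",
        "toy = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU())\n",
        "handle = toy[0].register_forward_hook(make_hook('conv'))\n",
        "_ = toy(torch.randn(1, 1, 8, 8))\n",
        "handle.remove()  # detach the hook once the features are captured\n",
        "print(features['conv'].shape)  # torch.Size([1, 4, 8, 8])\n",
        "```\n",
        "\n",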
        "### 🔍 Key findings\n",
        "\n",
        "- **CNN vs MLP**: CNNs usually outperform MLPs on image tasks because they preserve spatial structure\n",
        "- **Data augmentation**: Effective augmentation significantly improves generalization\n",
        "- **Feature learning**: CNNs automatically learn hierarchical feature representations\n",
        "- **Parameter efficiency**: Weight sharing lets CNNs get by with far fewer parameters\n",
        "\n",
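        "The parameter-efficiency point is easy to verify. This sketch (with illustrative layer sizes) compares a small conv layer against a dense layer producing the same number of outputs from a flattened 28x28 input:\n",
        "\n",
        "```python\n",
        "import torch.nn as nn\n",
        "\n",
        "# 3x3 conv, 1 -> 32 channels: 32*1*3*3 weights + 32 biases = 320 parameters\n",
        "conv = nn.Conv2d(1, 32, kernel_size=3)\n",
        "n_conv = sum(p.numel() for p in conv.parameters())\n",
        "\n",
        "# Dense layer mapping a flattened 28x28 input to the same 32x26x26 output\n",
        "fc = nn.Linear(28 * 28, 32 * 26 * 26)\n",
        "n_fc = sum(p.numel() for p in fc.parameters())\n",
        "\n",
        "print(n_conv)  # 320\n",
        "print(n_fc)    # 16981120\n",
        "```\n",
        "\n",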
        "### 🚀 Where to go next\n",
        "\n",
        "1. **Deeper CNN architectures**: ResNet, DenseNet, EfficientNet, etc.\n",
        "2. **Computer vision tasks**: object detection, semantic segmentation, image generation\n",
        "3. **Transfer learning**: fine-tuning pretrained models\n",
        "4. **Harder datasets**: CIFAR-10, ImageNet, etc.\n",
        "\n",
        "You now have the core CNN concepts and implementation skills, and you're ready to explore more advanced deep learning techniques! 🎉\n"
      ]
    }
  ],
  "metadata": {
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
