{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# 08. Advanced Topics: Transformers and GANs\n",
        "\n",
        "## Learning Objectives\n",
        "- Understand the core concepts of the Transformer architecture\n",
        "- Learn the principles and implementation of the attention mechanism\n",
        "- Grasp the basic principles of generative adversarial networks (GANs)\n",
        "- Implement a simple Transformer model\n",
        "- Build a basic GAN model\n",
        "- Explore the frontiers of modern deep learning\n",
        "\n",
        "## What Is a Transformer?\n",
        "\n",
        "The Transformer is a revolutionary neural network architecture proposed by Vaswani et al. in 2017. It is based entirely on attention and dispenses with the recurrent and convolutional structures of earlier models.\n",
        "\n",
        "### Core Components of the Transformer\n",
        "\n",
        "1. **Self-Attention**\n",
        "   - Lets the model attend to different positions of the input sequence\n",
        "   - Computes the relationship between every position and every other position\n",
        "\n",
        "2. **Multi-Head Attention**\n",
        "   - Runs several attention mechanisms in parallel\n",
        "   - Captures different kinds of relationships\n",
        "\n",
        "3. **Positional Encoding**\n",
        "   - Adds position information to every position in the sequence\n",
        "   - Needed because the Transformer has no recurrence, so order must be encoded explicitly\n",
        "\n",
        "4. **Feed-Forward Network**\n",
        "   - Applies the same fully connected network to each position independently\n",
        "\n",
        "### The Mathematics of Attention\n",
        "\n",
        "**Scaled dot-product attention:**\n",
        "$$\\mathrm{Attention}(Q,K,V) = \\mathrm{softmax}\\left(\\frac{QK^T}{\\sqrt{d_k}}\\right)V$$\n",
        "\n",
        "where:\n",
        "- $Q$: query matrix\n",
        "- $K$: key matrix\n",
        "- $V$: value matrix\n",
        "- $d_k$: dimensionality of the keys\n",
        "\n",
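        "The formula above can be checked numerically with a minimal pure-Python sketch (a toy 2-token, 2-dimensional example; the full PyTorch implementation appears later in this notebook):\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "def softmax(xs):\n",
        "    # Subtract the max for numerical stability\n",
        "    m = max(xs)\n",
        "    exps = [math.exp(x - m) for x in xs]\n",
        "    s = sum(exps)\n",
        "    return [e / s for e in exps]\n",
        "\n",
        "def attention(Q, K, V):\n",
        "    d_k = len(K[0])\n",
        "    # scores = Q K^T / sqrt(d_k)\n",
        "    scores = [[sum(q[i] * k[i] for i in range(d_k)) / math.sqrt(d_k)\n",
        "               for k in K] for q in Q]\n",
        "    weights = [softmax(row) for row in scores]\n",
        "    # output = weights @ V\n",
        "    output = [[sum(w[j] * V[j][i] for j in range(len(V)))\n",
        "               for i in range(len(V[0]))] for w in weights]\n",
        "    return output, weights\n",
        "\n",
        "Q = [[1.0, 0.0], [0.0, 1.0]]\n",
        "K = [[1.0, 0.0], [0.0, 1.0]]\n",
        "V = [[1.0, 2.0], [3.0, 4.0]]\n",
        "out, w = attention(Q, K, V)\n",
        "```\n",
        "\n",
        "Each row of the attention weights sums to 1, so each output row is a convex combination of the rows of $V$.\n",
        "\n",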
        "## What Is a GAN?\n",
        "\n",
        "The generative adversarial network (GAN), proposed by Goodfellow et al. in 2014, is a generative model.\n",
        "\n",
        "### The Core Idea of GANs\n",
        "\n",
        "A GAN consists of two neural networks:\n",
        "1. **Generator**: learns to produce fake data\n",
        "2. **Discriminator**: learns to tell real data from fake\n",
        "\n",
        "The two networks improve through this adversarial game until the generator produces realistic data.\n",
        "\n",
        "### GAN Loss Functions\n",
        "\n",
        "**Generator loss (non-saturating form):**\n",
        "$$L_G = -\\mathbb{E}_{z \\sim p_z(z)}[\\log D(G(z))]$$\n",
        "\n",
        "**Discriminator loss:**\n",
        "$$L_D = -\\mathbb{E}_{x \\sim p_{data}(x)}[\\log D(x)] - \\mathbb{E}_{z \\sim p_z(z)}[\\log(1-D(G(z)))]$$\n"
      ]
    },
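    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The loss formulas above can be checked numerically with a minimal pure-Python sketch; the discriminator outputs 0.9 and 0.2 are hypothetical values chosen only for illustration:\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "# Hypothetical discriminator outputs (illustration only)\n",
        "d_real = 0.9   # D(x) on a real sample\n",
        "d_fake = 0.2   # D(G(z)) on a generated sample\n",
        "\n",
        "# Discriminator loss: -log D(x) - log(1 - D(G(z)))\n",
        "loss_d = -math.log(d_real) - math.log(1.0 - d_fake)\n",
        "\n",
        "# Non-saturating generator loss: -log D(G(z))\n",
        "loss_g = -math.log(d_fake)\n",
        "```\n",
        "\n",
        "Here the generator loss is large (about 1.61) because the discriminator confidently rejects the fake sample; as $D(G(z))$ rises toward 1, $L_G$ falls toward 0.\n"
      ]
    },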
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.optim as optim\n",
        "import torch.nn.functional as F\n",
        "from torch.utils.data import DataLoader, TensorDataset\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "import seaborn as sns\n",
        "from sklearn.model_selection import train_test_split\n",
        "from sklearn.metrics import accuracy_score, classification_report\n",
        "import math\n",
        "import time\n",
        "from tqdm import tqdm\n",
        "import warnings\n",
        "warnings.filterwarnings('ignore')\n",
        "\n",
        "# Font settings (SimHei covers CJK glyphs, kept for portability of plot labels)\n",
        "plt.rcParams['font.sans-serif'] = ['SimHei']\n",
        "plt.rcParams['axes.unicode_minus'] = False\n",
        "\n",
        "# Fix the random seeds for reproducibility\n",
        "torch.manual_seed(42)\n",
        "np.random.seed(42)\n",
        "\n",
        "print(f\"PyTorch version: {torch.__version__}\")\n",
        "print(f\"CUDA available: {torch.cuda.is_available()}\")\n",
        "\n",
        "# Select the device\n",
        "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
        "print(f\"Using device: {device}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 1. Implementing the Attention Mechanism\n",
        "\n",
        "Let's start by implementing the most basic attention mechanism.\n"
      ]
    },
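    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The code cell below also implements sinusoidal positional encoding. Its closed form can be previewed with a few lines of plain Python (a sketch of the standard formula; `d_model = 8` is an arbitrary small choice for illustration):\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "def positional_encoding(pos, d_model):\n",
        "    # PE(pos, 2i)   = sin(pos / 10000^(2i/d_model))\n",
        "    # PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))\n",
        "    pe = [0.0] * d_model\n",
        "    for i in range(0, d_model, 2):\n",
        "        angle = pos / (10000 ** (i / d_model))\n",
        "        pe[i] = math.sin(angle)\n",
        "        if i + 1 < d_model:\n",
        "            pe[i + 1] = math.cos(angle)\n",
        "    return pe\n",
        "\n",
        "pe0 = positional_encoding(0, 8)   # position 0: sin terms are 0, cos terms are 1\n",
        "pe1 = positional_encoding(1, 8)\n",
        "```\n",
        "\n",
        "The wavelength grows with the dimension index, which is what produces the striped pattern in the visualization later in this section.\n"
      ]
    },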
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Scaled dot-product attention\n",
        "class Attention(nn.Module):\n",
        "    \"\"\"Scaled dot-product attention.\"\"\"\n",
        "    \n",
        "    def __init__(self, d_model):\n",
        "        super(Attention, self).__init__()\n",
        "        self.d_model = d_model\n",
        "        self.scale = math.sqrt(d_model)\n",
        "    \n",
        "    def forward(self, query, key, value, mask=None):\n",
        "        \"\"\"\n",
        "        Forward pass.\n",
        "        Args:\n",
        "            query: [..., seq_len, d_k] (e.g. [batch, heads, seq_len, d_k])\n",
        "            key:   [..., seq_len, d_k]\n",
        "            value: [..., seq_len, d_k]\n",
        "            mask:  broadcastable to [..., seq_len, seq_len], optional\n",
        "        \"\"\"\n",
        "        # Attention scores: Q K^T / sqrt(d_k)\n",
        "        scores = torch.matmul(query, key.transpose(-2, -1)) / self.scale\n",
        "        \n",
        "        # Apply the mask, if given (positions with mask == 0 are blocked)\n",
        "        if mask is not None:\n",
        "            scores = scores.masked_fill(mask == 0, -1e9)\n",
        "        \n",
        "        # Normalize the scores into attention weights\n",
        "        attention_weights = F.softmax(scores, dim=-1)\n",
        "        \n",
        "        # Weighted sum of the values\n",
        "        output = torch.matmul(attention_weights, value)\n",
        "        \n",
        "        return output, attention_weights\n",
        "\n",
        "# Multi-head attention\n",
        "class MultiHeadAttention(nn.Module):\n",
        "    \"\"\"Multi-head attention.\"\"\"\n",
        "    \n",
        "    def __init__(self, d_model, num_heads):\n",
        "        super(MultiHeadAttention, self).__init__()\n",
        "        assert d_model % num_heads == 0\n",
        "        \n",
        "        self.d_model = d_model\n",
        "        self.num_heads = num_heads\n",
        "        self.d_k = d_model // num_heads\n",
        "        \n",
        "        # Linear projections\n",
        "        self.W_q = nn.Linear(d_model, d_model)\n",
        "        self.W_k = nn.Linear(d_model, d_model)\n",
        "        self.W_v = nn.Linear(d_model, d_model)\n",
        "        self.W_o = nn.Linear(d_model, d_model)\n",
        "        \n",
        "        self.attention = Attention(self.d_k)\n",
        "    \n",
        "    def forward(self, query, key, value, mask=None):\n",
        "        batch_size, seq_len, d_model = query.size()\n",
        "        \n",
        "        # Project and reshape into heads: [batch, heads, seq_len, d_k]\n",
        "        Q = self.W_q(query).view(batch_size, seq_len, self.num_heads, self.d_k).transpose(1, 2)\n",
        "        K = self.W_k(key).view(batch_size, seq_len, self.num_heads, self.d_k).transpose(1, 2)\n",
        "        V = self.W_v(value).view(batch_size, seq_len, self.num_heads, self.d_k).transpose(1, 2)\n",
        "        \n",
        "        # Apply attention within each head\n",
        "        attention_output, attention_weights = self.attention(Q, K, V, mask)\n",
        "        \n",
        "        # Concatenate the heads back together\n",
        "        attention_output = attention_output.transpose(1, 2).contiguous().view(\n",
        "            batch_size, seq_len, d_model\n",
        "        )\n",
        "        \n",
        "        # Final linear projection\n",
        "        output = self.W_o(attention_output)\n",
        "        \n",
        "        return output, attention_weights\n",
        "\n",
        "# Positional encoding\n",
        "class PositionalEncoding(nn.Module):\n",
        "    \"\"\"Sinusoidal positional encoding (batch-first).\"\"\"\n",
        "    \n",
        "    def __init__(self, d_model, max_length=5000):\n",
        "        super(PositionalEncoding, self).__init__()\n",
        "        \n",
        "        pe = torch.zeros(max_length, d_model)\n",
        "        position = torch.arange(0, max_length, dtype=torch.float).unsqueeze(1)\n",
        "        \n",
        "        div_term = torch.exp(torch.arange(0, d_model, 2).float() * \n",
        "                           (-math.log(10000.0) / d_model))\n",
        "        \n",
        "        pe[:, 0::2] = torch.sin(position * div_term)\n",
        "        pe[:, 1::2] = torch.cos(position * div_term)\n",
        "        pe = pe.unsqueeze(0)  # [1, max_length, d_model], batch dimension first\n",
        "        \n",
        "        self.register_buffer('pe', pe)\n",
        "    \n",
        "    def forward(self, x):\n",
        "        # x: [batch, seq_len, d_model]; slice the encoding to the sequence length\n",
        "        return x + self.pe[:, :x.size(1), :]\n",
        "\n",
        "# Position-wise feed-forward network\n",
        "class FeedForward(nn.Module):\n",
        "    \"\"\"Position-wise feed-forward network.\"\"\"\n",
        "    \n",
        "    def __init__(self, d_model, d_ff, dropout=0.1):\n",
        "        super(FeedForward, self).__init__()\n",
        "        self.linear1 = nn.Linear(d_model, d_ff)\n",
        "        self.linear2 = nn.Linear(d_ff, d_model)\n",
        "        self.dropout = nn.Dropout(dropout)\n",
        "    \n",
        "    def forward(self, x):\n",
        "        return self.linear2(self.dropout(F.relu(self.linear1(x))))\n",
        "\n",
        "# Test the attention mechanism\n",
        "def test_attention():\n",
        "    \"\"\"Test the multi-head attention mechanism.\"\"\"\n",
        "    print(\"Testing the attention mechanism...\")\n",
        "    \n",
        "    # Hyperparameters\n",
        "    batch_size = 2\n",
        "    seq_len = 5\n",
        "    d_model = 64\n",
        "    num_heads = 8\n",
        "    \n",
        "    # Create test data\n",
        "    x = torch.randn(batch_size, seq_len, d_model)\n",
        "    \n",
        "    # Create the multi-head attention module\n",
        "    mha = MultiHeadAttention(d_model, num_heads)\n",
        "    \n",
        "    # Forward pass (self-attention: query = key = value = x)\n",
        "    output, attention_weights = mha(x, x, x)\n",
        "    \n",
        "    print(f\"Input shape: {x.shape}\")\n",
        "    print(f\"Output shape: {output.shape}\")\n",
        "    print(f\"Attention weight shape: {attention_weights.shape}\")\n",
        "    \n",
        "    # Visualize the attention weights\n",
        "    plt.figure(figsize=(10, 6))\n",
        "    \n",
        "    # First head of the first sample\n",
        "    attention_matrix = attention_weights[0, 0].detach().numpy()\n",
        "    \n",
        "    plt.subplot(1, 2, 1)\n",
        "    plt.imshow(attention_matrix, cmap='Blues')\n",
        "    plt.title('Attention Weight Matrix')\n",
        "    plt.xlabel('Key position')\n",
        "    plt.ylabel('Query position')\n",
        "    plt.colorbar()\n",
        "    \n",
        "    # Distribution of the attention weights\n",
        "    plt.subplot(1, 2, 2)\n",
        "    plt.hist(attention_matrix.flatten(), bins=20, alpha=0.7, color='skyblue')\n",
        "    plt.title('Attention Weight Distribution')\n",
        "    plt.xlabel('Weight value')\n",
        "    plt.ylabel('Count')\n",
        "    plt.grid(True, alpha=0.3)\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "    \n",
        "    return mha, attention_weights\n",
        "\n",
        "# Run the test\n",
        "mha, attention_weights = test_attention()\n",
        "\n",
        "# Positional encoding test\n",
        "def test_positional_encoding():\n",
        "    \"\"\"Test the positional encoding.\"\"\"\n",
        "    print(\"\\nTesting the positional encoding...\")\n",
        "    \n",
        "    d_model = 64\n",
        "    seq_len = 20\n",
        "    \n",
        "    # Create the positional encoding\n",
        "    pos_encoding = PositionalEncoding(d_model, max_length=100)\n",
        "    \n",
        "    # Create a batch-first test input: [batch, seq_len, d_model]\n",
        "    x = torch.randn(1, seq_len, d_model)\n",
        "    \n",
        "    # Add the positional encoding\n",
        "    x_with_pos = pos_encoding(x)\n",
        "    \n",
        "    print(f\"Input shape: {x.shape}\")\n",
        "    print(f\"Shape after positional encoding: {x_with_pos.shape}\")\n",
        "    \n",
        "    # Visualize the positional encoding\n",
        "    plt.figure(figsize=(12, 8))\n",
        "    \n",
        "    # Extract the encoding values: [seq_len, d_model]\n",
        "    pe = pos_encoding.pe[0, :seq_len, :].numpy()\n",
        "    \n",
        "    plt.imshow(pe.T, cmap='RdYlBu', aspect='auto')\n",
        "    plt.title('Positional Encoding')\n",
        "    plt.xlabel('Position')\n",
        "    plt.ylabel('Dimension')\n",
        "    plt.colorbar(label='Encoding value')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "    \n",
        "    return pos_encoding\n",
        "\n",
        "pos_encoding = test_positional_encoding()\n"
      ]
    },
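    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The `Attention` class above accepts an optional `mask` and fills blocked positions with `-1e9` before the softmax, which drives their weights to (nearly) zero. A minimal pure-Python illustration of that masking step on a single row of scores:\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "def softmax(xs):\n",
        "    # Subtract the max for numerical stability\n",
        "    m = max(xs)\n",
        "    exps = [math.exp(x - m) for x in xs]\n",
        "    s = sum(exps)\n",
        "    return [e / s for e in exps]\n",
        "\n",
        "scores = [1.0, 2.0, 3.0]\n",
        "# A causal mask for the first query position: only position 0 is visible\n",
        "mask = [1, 0, 0]\n",
        "masked = [s if m == 1 else -1e9 for s, m in zip(scores, mask)]\n",
        "weights = softmax(masked)\n",
        "```\n",
        "\n",
        "After the softmax, essentially all of the weight lands on the single unmasked position, regardless of the raw scores of the masked ones.\n"
      ]
    },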
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 2. Building the Transformer Model\n",
        "\n",
        "Now let's assemble a complete Transformer model.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Transformer encoder layer\n",
        "class TransformerEncoderLayer(nn.Module):\n",
        "    \"\"\"A single Transformer encoder layer.\"\"\"\n",
        "    \n",
        "    def __init__(self, d_model, num_heads, d_ff, dropout=0.1):\n",
        "        super(TransformerEncoderLayer, self).__init__()\n",
        "        \n",
        "        self.self_attention = MultiHeadAttention(d_model, num_heads)\n",
        "        self.feed_forward = FeedForward(d_model, d_ff, dropout)\n",
        "        \n",
        "        self.norm1 = nn.LayerNorm(d_model)\n",
        "        self.norm2 = nn.LayerNorm(d_model)\n",
        "        self.dropout = nn.Dropout(dropout)\n",
        "    \n",
        "    def forward(self, x, mask=None):\n",
        "        # Self-attention + residual connection + layer norm\n",
        "        attn_output, attention_weights = self.self_attention(x, x, x, mask)\n",
        "        x = self.norm1(x + self.dropout(attn_output))\n",
        "        \n",
        "        # Feed-forward network + residual connection + layer norm\n",
        "        ff_output = self.feed_forward(x)\n",
        "        x = self.norm2(x + self.dropout(ff_output))\n",
        "        \n",
        "        return x, attention_weights\n",
        "\n",
        "# The full Transformer model\n",
        "class TransformerModel(nn.Module):\n",
        "    \"\"\"A simplified Transformer (encoder only) for classification.\"\"\"\n",
        "    \n",
        "    def __init__(self, vocab_size, d_model, num_heads, num_layers, d_ff, max_length, num_classes, dropout=0.1):\n",
        "        super(TransformerModel, self).__init__()\n",
        "        \n",
        "        self.d_model = d_model\n",
        "        self.embedding = nn.Embedding(vocab_size, d_model)\n",
        "        self.pos_encoding = PositionalEncoding(d_model, max_length)\n",
        "        \n",
        "        self.encoder_layers = nn.ModuleList([\n",
        "            TransformerEncoderLayer(d_model, num_heads, d_ff, dropout)\n",
        "            for _ in range(num_layers)\n",
        "        ])\n",
        "        \n",
        "        self.dropout = nn.Dropout(dropout)\n",
        "        self.classifier = nn.Linear(d_model, num_classes)\n",
        "        \n",
        "    def forward(self, x, mask=None):\n",
        "        # Embedding (scaled by sqrt(d_model)) + positional encoding\n",
        "        x = self.embedding(x) * math.sqrt(self.d_model)\n",
        "        x = self.pos_encoding(x)\n",
        "        x = self.dropout(x)\n",
        "        \n",
        "        # Pass through the encoder layers\n",
        "        attention_weights = []\n",
        "        for layer in self.encoder_layers:\n",
        "            x, attn_weights = layer(x, mask)\n",
        "            attention_weights.append(attn_weights)\n",
        "        \n",
        "        # Global average pooling over the sequence\n",
        "        x = x.mean(dim=1)\n",
        "        \n",
        "        # Classification head\n",
        "        output = self.classifier(x)\n",
        "        \n",
        "        return output, attention_weights\n",
        "\n",
        "# Create a tiny text-classification dataset\n",
        "def create_simple_dataset():\n",
        "    \"\"\"Create a tiny text-classification dataset.\"\"\"\n",
        "    \n",
        "    # A small vocabulary\n",
        "    vocab = {\n",
        "        '<PAD>': 0, '<UNK>': 1,\n",
        "        'good': 2, 'bad': 3, 'great': 4, 'terrible': 5,\n",
        "        'movie': 6, 'book': 7, 'food': 8, 'service': 9,\n",
        "        'love': 10, 'hate': 11, 'like': 12, 'dislike': 13,\n",
        "        'amazing': 14, 'awful': 15, 'excellent': 16, 'poor': 17\n",
        "    }\n",
        "    \n",
        "    # Example sentences\n",
        "    sentences = [\n",
        "        \"good movie love it\",\n",
        "        \"bad movie hate it\",\n",
        "        \"great book amazing\",\n",
        "        \"terrible food awful\",\n",
        "        \"excellent service love\",\n",
        "        \"poor service dislike\",\n",
        "        \"good food like it\",\n",
        "        \"bad book terrible\",\n",
        "        \"great movie amazing\",\n",
        "        \"awful service hate\"\n",
        "    ]\n",
        "    \n",
        "    # Labels (0: positive, 1: negative)\n",
        "    labels = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]\n",
        "    \n",
        "    # Convert text to index sequences\n",
        "    def text_to_indices(text, vocab, max_length=10):\n",
        "        words = text.split()\n",
        "        indices = [vocab.get(word, vocab['<UNK>']) for word in words]\n",
        "        \n",
        "        # Pad or truncate to max_length\n",
        "        if len(indices) < max_length:\n",
        "            indices.extend([vocab['<PAD>']] * (max_length - len(indices)))\n",
        "        else:\n",
        "            indices = indices[:max_length]\n",
        "        \n",
        "        return indices\n",
        "    \n",
        "    # Process every sentence\n",
        "    X = [text_to_indices(sentence, vocab) for sentence in sentences]\n",
        "    y = labels\n",
        "    \n",
        "    # Also return the raw sentences so they can be displayed later\n",
        "    return X, y, vocab, sentences\n",
        "\n",
        "# Create the dataset\n",
        "X, y, vocab, sentences = create_simple_dataset()\n",
        "vocab_size = len(vocab)\n",
        "max_length = 10\n",
        "num_classes = 2\n",
        "\n",
        "print(f\"Vocabulary size: {vocab_size}\")\n",
        "print(f\"Dataset size: {len(X)}\")\n",
        "print(f\"Maximum sequence length: {max_length}\")\n",
        "\n",
        "# Show the dataset\n",
        "print(\"\\nDataset examples:\")\n",
        "for i, (sentence, label) in enumerate(zip(sentences, y)):\n",
        "    print(f\"{i+1}. {sentence} -> {['positive', 'negative'][label]}\")\n",
        "\n",
        "# Convert to tensors\n",
        "X_tensor = torch.tensor(X, dtype=torch.long)\n",
        "y_tensor = torch.tensor(y, dtype=torch.long)\n",
        "\n",
        "# Build the data loader\n",
        "dataset = TensorDataset(X_tensor, y_tensor)\n",
        "dataloader = DataLoader(dataset, batch_size=4, shuffle=True)\n",
        "\n",
        "# Build the Transformer model\n",
        "d_model = 64\n",
        "num_heads = 8\n",
        "num_layers = 2\n",
        "d_ff = 256\n",
        "\n",
        "transformer = TransformerModel(\n",
        "    vocab_size=vocab_size,\n",
        "    d_model=d_model,\n",
        "    num_heads=num_heads,\n",
        "    num_layers=num_layers,\n",
        "    d_ff=d_ff,\n",
        "    max_length=max_length,\n",
        "    num_classes=num_classes\n",
        ")\n",
        "\n",
        "print(f\"\\nNumber of Transformer parameters: {sum(p.numel() for p in transformer.parameters())}\")\n",
        "\n",
        "# Test the model\n",
        "def test_transformer():\n",
        "    \"\"\"Test the Transformer model.\"\"\"\n",
        "    print(\"\\nTesting the Transformer model...\")\n",
        "    \n",
        "    # Create a test input: 'good movie love' + <UNK> + padding\n",
        "    test_input = torch.tensor([[2, 6, 10, 1, 0, 0, 0, 0, 0, 0]], dtype=torch.long)\n",
        "    \n",
        "    # Forward pass\n",
        "    output, attention_weights = transformer(test_input)\n",
        "    \n",
        "    print(f\"Input shape: {test_input.shape}\")\n",
        "    print(f\"Output shape: {output.shape}\")\n",
        "    print(f\"Number of attention weight tensors: {len(attention_weights)}\")\n",
        "    print(f\"Output probabilities: {F.softmax(output, dim=-1)}\")\n",
        "    \n",
        "    # Visualize the attention weights\n",
        "    plt.figure(figsize=(15, 5))\n",
        "    \n",
        "    for i, attn_weights in enumerate(attention_weights):\n",
        "        plt.subplot(1, len(attention_weights), i+1)\n",
        "        \n",
        "        # First head of the first sample\n",
        "        attention_matrix = attn_weights[0, 0].detach().numpy()\n",
        "        \n",
        "        plt.imshow(attention_matrix, cmap='Blues')\n",
        "        plt.title(f'Encoder Layer {i+1} Attention')\n",
        "        plt.xlabel('Key position')\n",
        "        plt.ylabel('Query position')\n",
        "        plt.colorbar()\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "    \n",
        "    return output, attention_weights\n",
        "\n",
        "output, attention_weights = test_transformer()\n",
        "\n",
        "# Train the Transformer model\n",
        "def train_transformer(model, dataloader, num_epochs=50, learning_rate=0.001):\n",
        "    \"\"\"Train the Transformer model.\"\"\"\n",
        "    model = model.to(device)\n",
        "    criterion = nn.CrossEntropyLoss()\n",
        "    optimizer = optim.Adam(model.parameters(), lr=learning_rate)\n",
        "    \n",
        "    train_losses = []\n",
        "    train_accuracies = []\n",
        "    \n",
        "    print(\"Training the Transformer model...\")\n",
        "    print(\"=\" * 50)\n",
        "    \n",
        "    for epoch in range(num_epochs):\n",
        "        model.train()\n",
        "        total_loss = 0\n",
        "        correct = 0\n",
        "        total = 0\n",
        "        \n",
        "        for batch_idx, (data, target) in enumerate(dataloader):\n",
        "            data, target = data.to(device), target.to(device)\n",
        "            \n",
        "            optimizer.zero_grad()\n",
        "            output, _ = model(data)\n",
        "            loss = criterion(output, target)\n",
        "            loss.backward()\n",
        "            optimizer.step()\n",
        "            \n",
        "            total_loss += loss.item()\n",
        "            pred = output.argmax(dim=1, keepdim=True)\n",
        "            correct += pred.eq(target.view_as(pred)).sum().item()\n",
        "            total += target.size(0)\n",
        "        \n",
        "        avg_loss = total_loss / len(dataloader)\n",
        "        accuracy = 100. * correct / total\n",
        "        \n",
        "        train_losses.append(avg_loss)\n",
        "        train_accuracies.append(accuracy)\n",
        "        \n",
        "        if (epoch + 1) % 10 == 0:\n",
        "            print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {avg_loss:.4f}, Accuracy: {accuracy:.2f}%')\n",
        "    \n",
        "    print(\"Training finished!\")\n",
        "    return train_losses, train_accuracies\n",
        "\n",
        "# Train the model\n",
        "train_losses, train_accuracies = train_transformer(transformer, dataloader, num_epochs=100, learning_rate=0.001)\n",
        "\n",
        "# Visualize the training curves\n",
        "plt.figure(figsize=(12, 5))\n",
        "\n",
        "plt.subplot(1, 2, 1)\n",
        "plt.plot(train_losses, 'b-', linewidth=2)\n",
        "plt.title('Transformer Training Loss')\n",
        "plt.xlabel('Epoch')\n",
        "plt.ylabel('Loss')\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "plt.subplot(1, 2, 2)\n",
        "plt.plot(train_accuracies, 'r-', linewidth=2)\n",
        "plt.title('Transformer Training Accuracy')\n",
        "plt.xlabel('Epoch')\n",
        "plt.ylabel('Accuracy (%)')\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# Evaluate model performance\n",
        "def evaluate_transformer(model, dataloader):\n",
        "    \"\"\"Evaluate the Transformer model.\"\"\"\n",
        "    model.eval()\n",
        "    all_preds = []\n",
        "    all_targets = []\n",
        "    \n",
        "    with torch.no_grad():\n",
        "        for data, target in dataloader:\n",
        "            data, target = data.to(device), target.to(device)\n",
        "            output, _ = model(data)\n",
        "            pred = output.argmax(dim=1)\n",
        "            all_preds.extend(pred.cpu().numpy())\n",
        "            all_targets.extend(target.cpu().numpy())\n",
        "    \n",
        "    return all_preds, all_targets\n",
        "\n",
        "# Evaluate the model\n",
        "preds, targets = evaluate_transformer(transformer, dataloader)\n",
        "accuracy = accuracy_score(targets, preds)\n",
        "\n",
        "print(f\"\\nFinal Transformer accuracy: {accuracy:.4f}\")\n",
        "print(\"\\nClassification report:\")\n",
        "print(classification_report(targets, preds, target_names=['positive', 'negative']))\n"
      ]
    },
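    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "One simplification in the model above: it attends to `<PAD>` positions as if they were real tokens. A common refinement is a padding mask built from the token ids. A sketch (`PAD_ID = 0` matches the vocabulary above; before being passed to the attention layers, the mask would still need batch and head dimensions):\n",
        "\n",
        "```python\n",
        "# Build a [seq_len, seq_len] padding mask from token ids (0 = <PAD>).\n",
        "# Entry (i, j) is 1 when key position j holds a real token, else 0,\n",
        "# so every query ignores padded keys.\n",
        "PAD_ID = 0\n",
        "token_ids = [2, 6, 10, 1, 0, 0, 0, 0, 0, 0]   # 'good movie love <UNK>' + padding\n",
        "key_is_real = [1 if t != PAD_ID else 0 for t in token_ids]\n",
        "mask = [key_is_real[:] for _ in token_ids]\n",
        "```\n",
        "\n",
        "With `masked_fill(mask == 0, -1e9)` as in the `Attention` class, the padded columns then receive (nearly) zero attention weight.\n"
      ]
    },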
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 3. Implementing a Generative Adversarial Network (GAN)\n",
        "\n",
        "Now let's implement a basic GAN and train it on a simple 2-D point dataset.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# GAN generator\n",
        "class Generator(nn.Module):\n",
        "    \"\"\"GAN generator.\"\"\"\n",
        "    \n",
        "    def __init__(self, noise_dim, output_dim):\n",
        "        super(Generator, self).__init__()\n",
        "        \n",
        "        self.noise_dim = noise_dim\n",
        "        self.output_dim = output_dim\n",
        "        \n",
        "        # Generator network\n",
        "        self.network = nn.Sequential(\n",
        "            nn.Linear(noise_dim, 256),\n",
        "            nn.ReLU(),\n",
        "            nn.Linear(256, 512),\n",
        "            nn.ReLU(),\n",
        "            nn.Linear(512, 1024),\n",
        "            nn.ReLU(),\n",
        "            nn.Linear(1024, output_dim),\n",
        "            nn.Tanh()  # squashes the output to [-1, 1]\n",
        "        )\n",
        "    \n",
        "    def forward(self, noise):\n",
        "        return self.network(noise)\n",
        "\n",
        "# GAN discriminator\n",
        "class Discriminator(nn.Module):\n",
        "    \"\"\"GAN discriminator.\"\"\"\n",
        "    \n",
        "    def __init__(self, input_dim):\n",
        "        super(Discriminator, self).__init__()\n",
        "        \n",
        "        self.input_dim = input_dim\n",
        "        \n",
        "        # Discriminator network\n",
        "        self.network = nn.Sequential(\n",
        "            nn.Linear(input_dim, 1024),\n",
        "            nn.LeakyReLU(0.2),\n",
        "            nn.Dropout(0.3),\n",
        "            nn.Linear(1024, 512),\n",
        "            nn.LeakyReLU(0.2),\n",
        "            nn.Dropout(0.3),\n",
        "            nn.Linear(512, 256),\n",
        "            nn.LeakyReLU(0.2),\n",
        "            nn.Dropout(0.3),\n",
        "            nn.Linear(256, 1),\n",
        "            nn.Sigmoid()  # outputs a probability in [0, 1]\n",
        "        )\n",
        "    \n",
        "    def forward(self, x):\n",
        "        return self.network(x)\n",
        "\n",
        "# Create a simple 2-D dataset\n",
        "def create_2d_data(n_samples=1000):\n",
        "    \"\"\"Create a simple 2-D ring dataset for GAN training.\"\"\"\n",
        "    # Sample points in an annulus; the radius range keeps every point\n",
        "    # inside [-1, 1], matching the generator's Tanh output range\n",
        "    theta = np.random.uniform(0, 2*np.pi, n_samples)\n",
        "    r = np.random.uniform(0.4, 0.9, n_samples)\n",
        "    \n",
        "    x = r * np.cos(theta)\n",
        "    y = r * np.sin(theta)\n",
        "    \n",
        "    data = np.column_stack([x, y])\n",
        "    return data\n",
        "\n",
        "# Create the data\n",
        "real_data = create_2d_data(1000)\n",
        "print(f\"Real data shape: {real_data.shape}\")\n",
        "\n",
        "# Visualize the real data\n",
        "plt.figure(figsize=(8, 6))\n",
        "plt.scatter(real_data[:, 0], real_data[:, 1], alpha=0.6, s=20)\n",
        "plt.title('Real Data Distribution')\n",
        "plt.xlabel('X')\n",
        "plt.ylabel('Y')\n",
        "plt.grid(True, alpha=0.3)\n",
        "plt.axis('equal')\n",
        "plt.show()\n",
        "\n",
        "# Create the GAN models\n",
        "noise_dim = 10\n",
        "data_dim = 2\n",
        "\n",
        "generator = Generator(noise_dim, data_dim)\n",
        "discriminator = Discriminator(data_dim)\n",
        "\n",
        "print(f\"Generator parameter count: {sum(p.numel() for p in generator.parameters())}\")\n",
        "print(f\"Discriminator parameter count: {sum(p.numel() for p in discriminator.parameters())}\")\n",
        "\n",
        "# Test the generator\n",
        "def test_generator():\n",
        "    \"\"\"Test the (untrained) generator.\"\"\"\n",
        "    print(\"\\nTesting the generator...\")\n",
        "    \n",
        "    # Sample random noise\n",
        "    noise = torch.randn(100, noise_dim)\n",
        "    \n",
        "    # Generate fake data\n",
        "    fake_data = generator(noise)\n",
        "    fake_data = fake_data.detach().numpy()\n",
        "    \n",
        "    print(f\"Noise shape: {noise.shape}\")\n",
        "    print(f\"Generated data shape: {fake_data.shape}\")\n",
        "    \n",
        "    # Visualize the generated data\n",
        "    plt.figure(figsize=(12, 5))\n",
        "    \n",
        "    plt.subplot(1, 2, 1)\n",
        "    plt.scatter(real_data[:, 0], real_data[:, 1], alpha=0.6, s=20, label='Real data')\n",
        "    plt.title('Real Data')\n",
        "    plt.xlabel('X')\n",
        "    plt.ylabel('Y')\n",
        "    plt.legend()\n",
        "    plt.grid(True, alpha=0.3)\n",
        "    plt.axis('equal')\n",
        "    \n",
        "    plt.subplot(1, 2, 2)\n",
        "    plt.scatter(fake_data[:, 0], fake_data[:, 1], alpha=0.6, s=20, color='red', label='Generated data')\n",
        "    plt.title('Generated Data (Before Training)')\n",
        "    plt.xlabel('X')\n",
        "    plt.ylabel('Y')\n",
        "    plt.legend()\n",
        "    plt.grid(True, alpha=0.3)\n",
        "    plt.axis('equal')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "    \n",
        "    return fake_data\n",
        "\n",
        "fake_data = test_generator()\n",
        "\n",
        "# GAN training loop\n",
        "def train_gan(generator, discriminator, real_data, num_epochs=1000, batch_size=64, lr=0.0002):\n",
        "    \"\"\"Train the GAN (one random batch per epoch).\"\"\"\n",
        "    \n",
        "    # Optimizers\n",
        "    g_optimizer = optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))\n",
        "    d_optimizer = optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999))\n",
        "    \n",
        "    # Loss function\n",
        "    criterion = nn.BCELoss()\n",
        "    \n",
        "    # Training history\n",
        "    g_losses = []\n",
        "    d_losses = []\n",
        "    \n",
        "    print(\"Training the GAN...\")\n",
        "    print(\"=\" * 50)\n",
        "    \n",
        "    for epoch in range(num_epochs):\n",
        "        # --- Train the discriminator ---\n",
        "        d_optimizer.zero_grad()\n",
        "        \n",
        "        # Real samples\n",
        "        real_batch = torch.FloatTensor(real_data[np.random.randint(0, len(real_data), batch_size)])\n",
        "        real_labels = torch.ones(batch_size, 1)\n",
        "        \n",
        "        # Fake samples\n",
        "        noise = torch.randn(batch_size, noise_dim)\n",
        "        fake_batch = generator(noise)\n",
        "        fake_labels = torch.zeros(batch_size, 1)\n",
        "        \n",
        "        # Discriminator loss (detach so this step does not update the generator)\n",
        "        real_output = discriminator(real_batch)\n",
        "        fake_output = discriminator(fake_batch.detach())\n",
        "        \n",
        "        d_real_loss = criterion(real_output, real_labels)\n",
        "        d_fake_loss = criterion(fake_output, fake_labels)\n",
        "        d_loss = d_real_loss + d_fake_loss\n",
        "        \n",
        "        d_loss.backward()\n",
        "        d_optimizer.step()\n",
        "        \n",
        "        # --- Train the generator ---\n",
        "        g_optimizer.zero_grad()\n",
        "        \n",
        "        # Generator loss: push the discriminator to call the fakes real\n",
        "        fake_output = discriminator(fake_batch)\n",
        "        g_loss = criterion(fake_output, real_labels)\n",
        "        \n",
        "        g_loss.backward()\n",
        "        g_optimizer.step()\n",
        "        \n",
        "        # Record the losses\n",
        "        g_losses.append(g_loss.item())\n",
        "        d_losses.append(d_loss.item())\n",
        "        \n",
        "        # Report progress\n",
        "        if (epoch + 1) % 100 == 0:\n",
        "            print(f'Epoch [{epoch+1}/{num_epochs}], D Loss: {d_loss.item():.4f}, G Loss: {g_loss.item():.4f}')\n",
        "    \n",
        "    print(\"Training finished!\")\n",
        "    return g_losses, d_losses\n",
        "\n",
        "# Train the GAN\n",
        "g_losses, d_losses = train_gan(generator, discriminator, real_data, num_epochs=1000)\n",
        "\n",
        "# Visualize the training process\n",
        "plt.figure(figsize=(12, 5))\n",
        "\n",
        "plt.subplot(1, 2, 1)\n",
        "plt.plot(g_losses, label='Generator loss', alpha=0.7)\n",
        "plt.plot(d_losses, label='Discriminator loss', alpha=0.7)\n",
        "plt.title('GAN Training Loss')\n",
        "plt.xlabel('Epoch')\n",
        "plt.ylabel('Loss')\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# Generate final samples\n",
        "generator.eval()\n",
        "with torch.no_grad():\n",
        "    noise = torch.randn(1000, noise_dim)\n",
        "    final_fake_data = generator(noise).numpy()\n",
        "\n",
        "plt.subplot(1, 2, 2)\n",
        "plt.scatter(real_data[:, 0], real_data[:, 1], alpha=0.6, s=20, label='Real data')\n",
        "plt.scatter(final_fake_data[:, 0], final_fake_data[:, 1], alpha=0.6, s=20, color='red', label='Generated data')\n",
        "plt.title('Data Distribution After Training')\n",
        "plt.xlabel('X')\n",
        "plt.ylabel('Y')\n",
        "plt.legend()\n",
        "plt.grid(True, alpha=0.3)\n",
        "plt.axis('equal')\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n",
        "\n",
        "# 分析生成质量\n",
        "def analyze_generation_quality():\n",
        "    \"\"\"分析生成质量\"\"\"\n",
        "    print(\"\\n生成质量分析:\")\n",
        "    print(\"=\" * 30)\n",
        "    \n",
        "    # 计算统计量\n",
        "    real_mean = np.mean(real_data, axis=0)\n",
        "    fake_mean = np.mean(final_fake_data, axis=0)\n",
        "    \n",
        "    real_std = np.std(real_data, axis=0)\n",
        "    fake_std = np.std(final_fake_data, axis=0)\n",
        "    \n",
        "    print(f\"真实数据均值: {real_mean}\")\n",
        "    print(f\"生成数据均值: {fake_mean}\")\n",
        "    print(f\"真实数据标准差: {real_std}\")\n",
        "    print(f\"生成数据标准差: {fake_std}\")\n",
        "    \n",
        "    # 计算距离\n",
        "    mean_distance = np.linalg.norm(real_mean - fake_mean)\n",
        "    std_distance = np.linalg.norm(real_std - fake_std)\n",
        "    \n",
        "    print(f\"均值距离: {mean_distance:.4f}\")\n",
        "    print(f\"标准差距离: {std_distance:.4f}\")\n",
        "    \n",
        "    # 可视化分布对比\n",
        "    plt.figure(figsize=(15, 5))\n",
        "    \n",
        "    # X坐标分布\n",
        "    plt.subplot(1, 3, 1)\n",
        "    plt.hist(real_data[:, 0], bins=30, alpha=0.7, label='真实数据', density=True)\n",
        "    plt.hist(final_fake_data[:, 0], bins=30, alpha=0.7, label='生成数据', density=True)\n",
        "    plt.title('X坐标分布')\n",
        "    plt.xlabel('X值')\n",
        "    plt.ylabel('密度')\n",
        "    plt.legend()\n",
        "    plt.grid(True, alpha=0.3)\n",
        "    \n",
        "    # Y坐标分布\n",
        "    plt.subplot(1, 3, 2)\n",
        "    plt.hist(real_data[:, 1], bins=30, alpha=0.7, label='真实数据', density=True)\n",
        "    plt.hist(final_fake_data[:, 1], bins=30, alpha=0.7, label='生成数据', density=True)\n",
        "    plt.title('Y坐标分布')\n",
        "    plt.xlabel('Y值')\n",
        "    plt.ylabel('密度')\n",
        "    plt.legend()\n",
        "    plt.grid(True, alpha=0.3)\n",
        "    \n",
        "    # 半径分布\n",
        "    real_radius = np.sqrt(real_data[:, 0]**2 + real_data[:, 1]**2)\n",
        "    fake_radius = np.sqrt(final_fake_data[:, 0]**2 + final_fake_data[:, 1]**2)\n",
        "    \n",
        "    plt.subplot(1, 3, 3)\n",
        "    plt.hist(real_radius, bins=30, alpha=0.7, label='真实数据', density=True)\n",
        "    plt.hist(fake_radius, bins=30, alpha=0.7, label='生成数据', density=True)\n",
        "    plt.title('半径分布')\n",
        "    plt.xlabel('半径')\n",
        "    plt.ylabel('密度')\n",
        "    plt.legend()\n",
        "    plt.grid(True, alpha=0.3)\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "analyze_generation_quality()\n"
      ]
    },
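    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### 补充：GAN训练稳定化技巧示例\n",
        "\n",
        "GAN训练容易不稳定（如模式崩溃、判别器过强）。下面是一个**假设性**的最小示例，演示常见的单侧标签平滑（one-sided label smoothing）：把真实样本的标签从1.0降到0.9，使判别器的过度自信反而受到惩罚。示例中的变量名均为自拟，并非上文训练循环的一部分。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "# 单侧标签平滑：仅平滑真实标签（用0.9），假标签仍为0\n",
        "criterion_demo = nn.BCELoss()\n",
        "batch = 4\n",
        "smooth_real_labels = torch.full((batch, 1), 0.9)\n",
        "\n",
        "# 假设判别器对真实样本输出了非常自信的概率0.99\n",
        "confident_output = torch.full((batch, 1), 0.99)\n",
        "\n",
        "# 对比：硬标签下损失接近0；平滑标签下过度自信会带来明显更大的损失\n",
        "print('硬标签(1.0)损失:', criterion_demo(confident_output, torch.ones(batch, 1)).item())\n",
        "print('平滑标签(0.9)损失:', criterion_demo(confident_output, smooth_real_labels).item())\n"
      ]
    },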
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 4. 总结和扩展\n",
        "\n",
        "让我们总结一下学到的内容，并探讨一些扩展方向。\n"
      ]
    },
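    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### 补充：缩放点积注意力的最小实现\n",
        "\n",
        "作为总结的补充，下面用几行PyTorch直接实现教程开头给出的公式 $Attention(Q,K,V) = softmax(\\frac{QK^T}{\\sqrt{d_k}})V$。这只是一个使用随机$Q,K,V$的示意性草稿，不对应前文的具体模型实现。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import math\n",
        "\n",
        "import torch\n",
        "import torch.nn.functional as F\n",
        "\n",
        "def scaled_dot_product_attention(Q, K, V):\n",
        "    \"\"\"Attention(Q,K,V) = softmax(QK^T / sqrt(d_k)) V\"\"\"\n",
        "    d_k = Q.size(-1)\n",
        "    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)  # (batch, seq_q, seq_k)\n",
        "    weights = F.softmax(scores, dim=-1)  # 每个查询位置的权重之和为1\n",
        "    return weights @ V, weights\n",
        "\n",
        "Q = torch.randn(2, 5, 8)  # (batch, seq_len, d_k)\n",
        "K = torch.randn(2, 5, 8)\n",
        "V = torch.randn(2, 5, 8)\n",
        "output, attn = scaled_dot_product_attention(Q, K, V)\n",
        "print('输出形状:', output.shape)      # torch.Size([2, 5, 8])\n",
        "print('注意力权重形状:', attn.shape)  # torch.Size([2, 5, 5])\n",
        "print('权重行和:', attn.sum(dim=-1)[0, 0].item())  # 约等于 1.0\n"
      ]
    },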
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 总结和扩展\n",
        "print(\"🎉 高级主题教程总结\")\n",
        "print(\"=\" * 60)\n",
        "\n",
        "print(\"\\n📚 本教程涵盖的内容:\")\n",
        "print(\"1. 注意力机制\")\n",
        "print(\"   - 缩放点积注意力\")\n",
        "print(\"   - 多头注意力机制\")\n",
        "print(\"   - 位置编码\")\n",
        "print(\"   - 前馈网络\")\n",
        "\n",
        "print(\"\\n2. Transformer模型\")\n",
        "print(\"   - 编码器层实现\")\n",
        "print(\"   - 完整的Transformer架构\")\n",
        "print(\"   - 文本分类应用\")\n",
        "print(\"   - 注意力权重可视化\")\n",
        "\n",
        "print(\"\\n3. 生成对抗网络（GAN）\")\n",
        "print(\"   - 生成器和判别器设计\")\n",
        "print(\"   - 对抗训练过程\")\n",
        "print(\"   - 数据生成和评估\")\n",
        "print(\"   - 生成质量分析\")\n",
        "\n",
        "print(\"\\n🔍 关键发现:\")\n",
        "print(\"- Transformer摒弃循环结构，注意力机制可对整个序列并行计算\")\n",
        "print(\"- 多头注意力能够捕捉不同类型的关系\")\n",
        "print(\"- GAN通过对抗训练学习数据分布\")\n",
        "print(\"- 位置编码为序列提供位置信息\")\n",
        "\n",
        "print(\"\\n🚀 扩展方向:\")\n",
        "print(\"1. Transformer扩展\")\n",
        "print(\"   - 解码器层实现\")\n",
        "print(\"   - 完整的编码器-解码器架构\")\n",
        "print(\"   - 预训练模型（BERT, GPT）\")\n",
        "print(\"   - 注意力机制变体（稀疏注意力、线性注意力）\")\n",
        "\n",
        "print(\"\\n2. GAN扩展\")\n",
        "print(\"   - 条件GAN（CGAN）\")\n",
        "print(\"   - 深度卷积GAN（DCGAN）\")\n",
        "print(\"   - Wasserstein GAN（WGAN）\")\n",
        "print(\"   - 渐进式GAN（Progressive GAN）\")\n",
        "\n",
        "print(\"\\n3. 应用领域\")\n",
        "print(\"   - 自然语言处理（机器翻译、文本生成）\")\n",
        "print(\"   - 计算机视觉（图像生成、超分辨率）\")\n",
        "print(\"   - 语音处理（语音合成、语音识别）\")\n",
        "print(\"   - 推荐系统（序列推荐、生成推荐）\")\n",
        "\n",
        "print(\"\\n4. 技术改进\")\n",
        "print(\"   - 训练稳定性优化\")\n",
        "print(\"   - 模型压缩和加速\")\n",
        "print(\"   - 多模态学习\")\n",
        "print(\"   - 联邦学习\")\n",
        "\n",
        "# 创建技术对比图\n",
        "def create_technology_comparison():\n",
        "    \"\"\"创建技术对比图\"\"\"\n",
        "    fig, ax = plt.subplots(figsize=(14, 10))\n",
        "    \n",
        "    # 定义技术栈\n",
        "    technologies = {\n",
        "        '传统方法': ['线性回归', '逻辑回归', '朴素贝叶斯', 'SVM'],\n",
        "        '神经网络': ['MLP', 'CNN', 'RNN', 'LSTM'],\n",
        "        '注意力机制': ['Self-Attention', 'Multi-Head', 'Transformer', 'BERT'],\n",
        "        '生成模型': ['VAE', 'GAN', 'Flow-based', 'Diffusion'],\n",
        "        '预训练模型': ['Word2Vec', 'GloVe', 'BERT', 'GPT']\n",
        "    }\n",
        "    \n",
        "    # 绘制技术栈\n",
        "    y_positions = [4, 3, 2, 1, 0]\n",
        "    colors = ['lightblue', 'lightgreen', 'lightyellow', 'lightcoral', 'lightpink']\n",
        "    \n",
        "    for i, (category, tech_list) in enumerate(technologies.items()):\n",
        "        y_pos = y_positions[i]\n",
        "        color = colors[i]\n",
        "        \n",
        "        # 绘制类别框\n",
        "        rect = plt.Rectangle((0, y_pos-0.3), 12, 0.6, \n",
        "                           facecolor=color, alpha=0.7, edgecolor='black')\n",
        "        ax.add_patch(rect)\n",
        "        \n",
        "        # 添加类别标签\n",
        "        ax.text(6, y_pos, category, ha='center', va='center', \n",
        "               fontsize=12, fontweight='bold')\n",
        "        \n",
        "        # 添加技术标签\n",
        "        for j, tech in enumerate(tech_list):\n",
        "            x_pos = 1 + j * 2.5\n",
        "            ax.text(x_pos, y_pos, tech, ha='center', va='center', \n",
        "                   fontsize=10, bbox=dict(boxstyle=\"round,pad=0.2\", facecolor='white', alpha=0.8))\n",
        "        \n",
        "        # 绘制箭头\n",
        "        if i < len(technologies) - 1:\n",
        "            ax.arrow(6, y_pos-0.3, 0, -0.4, head_width=0.3, head_length=0.1, \n",
        "                    fc='black', ec='black')\n",
        "    \n",
        "    ax.set_xlim(0, 12)\n",
        "    ax.set_ylim(-0.5, 4.5)\n",
        "    ax.set_title('深度学习技术发展历程', fontsize=16, fontweight='bold')\n",
        "    ax.axis('off')\n",
        "    \n",
        "    # 添加当前进度标记\n",
        "    current_level = 2  # 注意力机制级别\n",
        "    ax.text(11, y_positions[current_level], '✓ 已完成', \n",
        "           ha='center', va='center', fontsize=10, \n",
        "           bbox=dict(boxstyle=\"round,pad=0.3\", facecolor='green', alpha=0.7))\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "create_technology_comparison()\n",
        "\n",
        "# 性能对比分析\n",
        "def performance_comparison():\n",
        "    \"\"\"性能对比分析\"\"\"\n",
        "    print(\"\\n📊 性能对比分析:\")\n",
        "    print(\"=\" * 40)\n",
        "    \n",
        "    # 模拟性能数据（仅为示意的假设数值，并非真实基准测试结果）\n",
        "    models = ['线性回归', 'MLP', 'CNN', 'RNN', 'Transformer', 'BERT']\n",
        "    accuracy = [0.75, 0.82, 0.88, 0.85, 0.92, 0.95]\n",
        "    training_time = [1, 5, 15, 20, 30, 100]  # 相对时间\n",
        "    model_size = [1, 5, 10, 15, 50, 200]  # 相对大小\n",
        "    \n",
        "    # 创建对比图\n",
        "    fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(18, 6))\n",
        "    \n",
        "    # 准确率对比\n",
        "    bars1 = ax1.bar(models, accuracy, color='skyblue', alpha=0.7)\n",
        "    ax1.set_title('模型准确率对比')\n",
        "    ax1.set_ylabel('准确率')\n",
        "    ax1.set_ylim(0, 1)\n",
        "    ax1.tick_params(axis='x', rotation=45)\n",
        "    ax1.grid(True, alpha=0.3)\n",
        "    \n",
        "    for bar, acc in zip(bars1, accuracy):\n",
        "        ax1.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.01, \n",
        "                f'{acc:.2f}', ha='center', va='bottom')\n",
        "    \n",
        "    # 训练时间对比\n",
        "    bars2 = ax2.bar(models, training_time, color='lightcoral', alpha=0.7)\n",
        "    ax2.set_title('训练时间对比（相对）')\n",
        "    ax2.set_ylabel('训练时间')\n",
        "    ax2.tick_params(axis='x', rotation=45)\n",
        "    ax2.grid(True, alpha=0.3)\n",
        "    \n",
        "    for bar, t in zip(bars2, training_time):  # 用 t 作循环变量，避免遮蔽 time 模块\n",
        "        ax2.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 1, \n",
        "                f'{t}x', ha='center', va='bottom')\n",
        "    \n",
        "    # 模型大小对比\n",
        "    bars3 = ax3.bar(models, model_size, color='lightgreen', alpha=0.7)\n",
        "    ax3.set_title('模型大小对比（相对）')\n",
        "    ax3.set_ylabel('模型大小')\n",
        "    ax3.tick_params(axis='x', rotation=45)\n",
        "    ax3.grid(True, alpha=0.3)\n",
        "    \n",
        "    for bar, size in zip(bars3, model_size):\n",
        "        ax3.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 2, \n",
        "                f'{size}x', ha='center', va='bottom')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "performance_comparison()\n",
        "\n",
        "# 实践建议\n",
        "print(\"\\n💡 实践建议:\")\n",
        "print(\"1. 从简单开始\")\n",
        "print(\"   - 先理解基础概念\")\n",
        "print(\"   - 实现简单版本\")\n",
        "print(\"   - 逐步增加复杂度\")\n",
        "\n",
        "print(\"\\n2. 动手实践\")\n",
        "print(\"   - 尝试不同的数据集\")\n",
        "print(\"   - 调整超参数\")\n",
        "print(\"   - 可视化中间结果\")\n",
        "\n",
        "print(\"\\n3. 学习资源\")\n",
        "print(\"   - 阅读原始论文\")\n",
        "print(\"   - 查看开源实现\")\n",
        "print(\"   - 参与社区讨论\")\n",
        "\n",
        "print(\"\\n4. 项目实践\")\n",
        "print(\"   - 选择感兴趣的应用\")\n",
        "print(\"   - 从端到端实现\")\n",
        "print(\"   - 分享和展示成果\")\n",
        "\n",
        "print(\"\\n🎯 下一步学习建议:\")\n",
        "print(\"1. 深入学习Transformer\")\n",
        "print(\"   - 实现完整的编码器-解码器架构\")\n",
        "print(\"   - 学习预训练技术\")\n",
        "print(\"   - 探索注意力机制变体\")\n",
        "\n",
        "print(\"\\n2. 探索生成模型\")\n",
        "print(\"   - 学习变分自编码器（VAE）\")\n",
        "print(\"   - 了解扩散模型（Diffusion Models）\")\n",
        "print(\"   - 实践条件生成\")\n",
        "\n",
        "print(\"\\n3. 多模态学习\")\n",
        "print(\"   - 图像-文本联合学习\")\n",
        "print(\"   - 跨模态检索\")\n",
        "print(\"   - 多模态生成\")\n",
        "\n",
        "print(\"\\n4. 实际应用\")\n",
        "print(\"   - 构建端到端系统\")\n",
        "print(\"   - 模型部署和优化\")\n",
        "print(\"   - 生产环境实践\")\n",
        "\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "print(\"恭喜你完成了高级主题教程！🎉\")\n",
        "print(\"你已经掌握了Transformer和GAN的核心概念和实现方法。\")\n",
        "print(\"这些技术是现代深度学习的重要基础，继续探索更多可能性吧！\")\n"
      ]
    }
  ],
  "metadata": {
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
