{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# 07. NLP Basics: Bag of Words and CBOW\n",
        "\n",
        "## Learning Objectives\n",
        "- Understand the basic concepts of natural language processing\n",
        "- Learn text preprocessing techniques\n",
        "- Implement the Bag of Words model\n",
        "- Learn the principles and implementation of the Continuous Bag of Words (CBOW) model\n",
        "- Build a text classification task\n",
        "- Compare the performance of different text representations\n",
        "\n",
        "## What is Natural Language Processing (NLP)?\n",
        "\n",
        "Natural language processing is a major branch of artificial intelligence that aims to enable computers to understand, process, and generate human language.\n",
        "\n",
        "### Main NLP Tasks\n",
        "\n",
        "1. **Text classification**: assigning texts to categories\n",
        "2. **Sentiment analysis**: determining the emotional polarity of a text\n",
        "3. **Machine translation**: translating one language into another\n",
        "4. **Question answering**: answering questions posed by users\n",
        "5. **Text generation**: automatically producing text\n",
        "\n",
        "### Text Representation Methods\n",
        "\n",
        "1. **Bag of Words**\n",
        "   - Represents a text as a multiset of its words\n",
        "   - Ignores word order and grammatical structure\n",
        "   - A simple but effective baseline\n",
        "\n",
        "2. **Word embeddings**\n",
        "   - Map words into a dense vector space\n",
        "   - Capture semantic relationships between words\n",
        "   - CBOW is one way to train word embeddings\n",
        "\n",
        "### How CBOW Works\n",
        "\n",
        "The Continuous Bag of Words model predicts the center word from its context words:\n",
        "\n",
        "**Formula:**\n",
        "$$P(w_t | w_{t-c}, ..., w_{t-1}, w_{t+1}, ..., w_{t+c}) = \\frac{\\exp(v_{w_t}^T \\cdot h)}{\\sum_{w \\in V} \\exp(v_w^T \\cdot h)}$$\n",
        "\n",
        "where:\n",
        "- $w_t$ is the center word\n",
        "- $w_{t-c}, ..., w_{t+c}$ are the context words\n",
        "- $h$ is the average of the context word vectors\n",
        "- $v_w$ is the vector representation of word $w$\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.optim as optim\n",
        "import torch.nn.functional as F\n",
        "from torch.utils.data import DataLoader, TensorDataset\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "import seaborn as sns\n",
        "from sklearn.model_selection import train_test_split\n",
        "from sklearn.metrics import classification_report, confusion_matrix, accuracy_score\n",
        "from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\n",
        "from collections import Counter, defaultdict\n",
        "import re\n",
        "import string\n",
        "import time\n",
        "from tqdm import tqdm\n",
        "import warnings\n",
        "warnings.filterwarnings('ignore')\n",
        "\n",
        "# Configure a CJK-capable font (harmless if plots use only ASCII labels)\n",
        "plt.rcParams['font.sans-serif'] = ['SimHei']\n",
        "plt.rcParams['axes.unicode_minus'] = False\n",
        "\n",
        "# Set random seeds for reproducibility\n",
        "torch.manual_seed(42)\n",
        "np.random.seed(42)\n",
        "\n",
        "print(f\"PyTorch version: {torch.__version__}\")\n",
        "print(f\"CUDA available: {torch.cuda.is_available()}\")\n",
        "\n",
        "# Select device\n",
        "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
        "print(f\"Using device: {device}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 1. Text Preprocessing\n",
        "\n",
        "First, let's learn how to preprocess text data, an essential first step in NLP tasks.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Text preprocessing utilities\n",
        "class TextPreprocessor:\n",
        "    \"\"\"Simple text preprocessor\"\"\"\n",
        "    \n",
        "    def __init__(self, remove_punctuation=True, to_lowercase=True, remove_stopwords=True):\n",
        "        self.remove_punctuation = remove_punctuation\n",
        "        self.to_lowercase = to_lowercase\n",
        "        self.remove_stopwords = remove_stopwords\n",
        "        \n",
        "        # A small list of English stopwords\n",
        "        self.stopwords = {\n",
        "            'a', 'an', 'and', 'are', 'as', 'at', 'be', 'by', 'for', 'from',\n",
        "            'has', 'he', 'in', 'is', 'it', 'its', 'of', 'on', 'that', 'the',\n",
        "            'to', 'was', 'will', 'with', 'i', 'you', 'we', 'they', 'this',\n",
        "            'these', 'those', 'have', 'had', 'do', 'does', 'did', 'can',\n",
        "            'could', 'would', 'should', 'may', 'might', 'must', 'shall'\n",
        "        }\n",
        "    \n",
        "    def clean_text(self, text):\n",
        "        \"\"\"Clean raw text\"\"\"\n",
        "        if self.to_lowercase:\n",
        "            text = text.lower()\n",
        "        \n",
        "        if self.remove_punctuation:\n",
        "            # Remove punctuation, keeping letters, digits, and whitespace\n",
        "            text = re.sub(r'[^a-zA-Z0-9\\s]', '', text)\n",
        "        \n",
        "        # Collapse repeated whitespace\n",
        "        text = re.sub(r'\\s+', ' ', text).strip()\n",
        "        \n",
        "        return text\n",
        "    \n",
        "    def tokenize(self, text):\n",
        "        \"\"\"Whitespace tokenization\"\"\"\n",
        "        return text.split()\n",
        "    \n",
        "    def remove_stopwords_func(self, tokens):\n",
        "        \"\"\"Filter out stopwords\"\"\"\n",
        "        if self.remove_stopwords:\n",
        "            return [token for token in tokens if token not in self.stopwords]\n",
        "        return tokens\n",
        "    \n",
        "    def preprocess(self, text):\n",
        "        \"\"\"Full pipeline: clean, tokenize, remove stopwords\"\"\"\n",
        "        text = self.clean_text(text)\n",
        "        tokens = self.tokenize(text)\n",
        "        tokens = self.remove_stopwords_func(tokens)\n",
        "        return tokens\n",
        "\n",
        "# Create a preprocessor instance\n",
        "preprocessor = TextPreprocessor()\n",
        "\n",
        "# Sample texts\n",
        "sample_texts = [\n",
        "    \"This is a great movie! I love it so much.\",\n",
        "    \"The weather is terrible today. I hate rain.\",\n",
        "    \"Machine learning is fascinating and powerful.\",\n",
        "    \"I don't like this restaurant. The food is bad.\",\n",
        "    \"Python programming is fun and easy to learn.\"\n",
        "]\n",
        "\n",
        "print(\"Text preprocessing examples:\")\n",
        "print(\"=\" * 50)\n",
        "\n",
        "for i, text in enumerate(sample_texts):\n",
        "    print(f\"Original text {i+1}: {text}\")\n",
        "    processed = preprocessor.preprocess(text)\n",
        "    print(f\"Processed: {processed}\")\n",
        "    print()\n",
        "\n",
        "# Visualize the effect of preprocessing\n",
        "def visualize_preprocessing():\n",
        "    \"\"\"Show texts before and after preprocessing\"\"\"\n",
        "    fig, axes = plt.subplots(2, 3, figsize=(18, 10))\n",
        "    \n",
        "    # Original texts\n",
        "    for i, text in enumerate(sample_texts[:3]):\n",
        "        axes[0, i].text(0.1, 0.5, text, fontsize=12, wrap=True, \n",
        "                       bbox=dict(boxstyle=\"round,pad=0.3\", facecolor=\"lightblue\"))\n",
        "        axes[0, i].set_title(f'Original text {i+1}')\n",
        "        axes[0, i].axis('off')\n",
        "    \n",
        "    # Processed texts\n",
        "    for i, text in enumerate(sample_texts[:3]):\n",
        "        processed = preprocessor.preprocess(text)\n",
        "        processed_text = ' '.join(processed)\n",
        "        axes[1, i].text(0.1, 0.5, processed_text, fontsize=12, wrap=True,\n",
        "                       bbox=dict(boxstyle=\"round,pad=0.3\", facecolor=\"lightgreen\"))\n",
        "        axes[1, i].set_title(f'Processed text {i+1}')\n",
        "        axes[1, i].axis('off')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "visualize_preprocessing()\n",
        "\n",
        "# Vocabulary statistics\n",
        "def analyze_vocabulary():\n",
        "    \"\"\"Compute and plot word frequency statistics\"\"\"\n",
        "    all_tokens = []\n",
        "    for text in sample_texts:\n",
        "        tokens = preprocessor.preprocess(text)\n",
        "        all_tokens.extend(tokens)\n",
        "    \n",
        "    # Word frequency counts\n",
        "    word_freq = Counter(all_tokens)\n",
        "    \n",
        "    print(\"Word frequency statistics:\")\n",
        "    print(\"=\" * 30)\n",
        "    for word, freq in word_freq.most_common(10):\n",
        "        print(f\"{word}: {freq}\")\n",
        "    \n",
        "    # Plot the most common words\n",
        "    words, freqs = zip(*word_freq.most_common(10))\n",
        "    \n",
        "    plt.figure(figsize=(12, 6))\n",
        "    plt.bar(words, freqs, color='skyblue', alpha=0.7)\n",
        "    plt.title('Word Frequencies')\n",
        "    plt.xlabel('Word')\n",
        "    plt.ylabel('Frequency')\n",
        "    plt.xticks(rotation=45)\n",
        "    plt.grid(True, alpha=0.3)\n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "    \n",
        "    return word_freq\n",
        "\n",
        "word_freq = analyze_vocabulary()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 2. Preparing the Data\n",
        "\n",
        "Next, let's create a small text classification dataset to demonstrate Bag of Words and CBOW.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Build a small text classification dataset\n",
        "def create_text_dataset():\n",
        "    \"\"\"Create a three-class text classification dataset\"\"\"\n",
        "    \n",
        "    # Positive-sentiment texts\n",
        "    positive_texts = [\n",
        "        \"This movie is absolutely amazing and wonderful!\",\n",
        "        \"I love this product, it's fantastic and great!\",\n",
        "        \"The service was excellent and outstanding!\",\n",
        "        \"This book is incredible and inspiring!\",\n",
        "        \"The food here is delicious and perfect!\",\n",
        "        \"I had a wonderful time at this place!\",\n",
        "        \"This is the best experience ever!\",\n",
        "        \"The quality is superb and amazing!\",\n",
        "        \"I'm so happy with this purchase!\",\n",
        "        \"This is exactly what I was looking for!\",\n",
        "        \"The staff is friendly and helpful!\",\n",
        "        \"I would definitely recommend this to others!\",\n",
        "        \"This is a fantastic opportunity!\",\n",
        "        \"The results exceeded my expectations!\",\n",
        "        \"I'm thrilled with the outcome!\",\n",
        "        \"This is a brilliant solution!\",\n",
        "        \"The performance is outstanding!\",\n",
        "        \"I'm impressed with the quality!\",\n",
        "        \"This is a remarkable achievement!\",\n",
        "        \"The experience was unforgettable!\"\n",
        "    ]\n",
        "    \n",
        "    # Negative-sentiment texts\n",
        "    negative_texts = [\n",
        "        \"This movie is terrible and awful!\",\n",
        "        \"I hate this product, it's horrible!\",\n",
        "        \"The service was poor and disappointing!\",\n",
        "        \"This book is boring and uninteresting!\",\n",
        "        \"The food here is disgusting and bad!\",\n",
        "        \"I had a terrible time at this place!\",\n",
        "        \"This is the worst experience ever!\",\n",
        "        \"The quality is poor and disappointing!\",\n",
        "        \"I'm so disappointed with this purchase!\",\n",
        "        \"This is not what I was expecting!\",\n",
        "        \"The staff is rude and unhelpful!\",\n",
        "        \"I would never recommend this to anyone!\",\n",
        "        \"This is a terrible opportunity!\",\n",
        "        \"The results were below my expectations!\",\n",
        "        \"I'm frustrated with the outcome!\",\n",
        "        \"This is a terrible solution!\",\n",
        "        \"The performance is disappointing!\",\n",
        "        \"I'm disappointed with the quality!\",\n",
        "        \"This is a poor achievement!\",\n",
        "        \"The experience was forgettable!\"\n",
        "    ]\n",
        "    \n",
        "    # Neutral / technical texts\n",
        "    neutral_texts = [\n",
        "        \"The system processes data efficiently and accurately.\",\n",
        "        \"This algorithm uses machine learning techniques.\",\n",
        "        \"The software provides various features and functions.\",\n",
        "        \"The database stores information in structured format.\",\n",
        "        \"The application runs on multiple operating systems.\",\n",
        "        \"The network connects different devices and computers.\",\n",
        "        \"The program executes commands and operations.\",\n",
        "        \"The interface displays data and results.\",\n",
        "        \"The system manages resources and memory.\",\n",
        "        \"The software handles errors and exceptions.\",\n",
        "        \"The application supports different file formats.\",\n",
        "        \"The program processes input and generates output.\",\n",
        "        \"The system maintains logs and records.\",\n",
        "        \"The software provides configuration options.\",\n",
        "        \"The application includes documentation and help.\",\n",
        "        \"The program uses standard protocols and methods.\",\n",
        "        \"The system implements security measures.\",\n",
        "        \"The software provides backup and recovery.\",\n",
        "        \"The application supports user authentication.\",\n",
        "        \"The program includes testing and validation.\"\n",
        "    ]\n",
        "    \n",
        "    # Combine texts and labels\n",
        "    texts = positive_texts + negative_texts + neutral_texts\n",
        "    labels = [0] * len(positive_texts) + [1] * len(negative_texts) + [2] * len(neutral_texts)\n",
        "    \n",
        "    return texts, labels\n",
        "\n",
        "# Create the dataset\n",
        "texts, labels = create_text_dataset()\n",
        "\n",
        "print(f\"Dataset size: {len(texts)}\")\n",
        "print(f\"Class distribution: {Counter(labels)}\")\n",
        "print(f\"Class names: {['positive', 'negative', 'neutral']}\")\n",
        "\n",
        "# Show two samples from each class (the classes are stored in contiguous blocks of 20)\n",
        "print(\"\\nDataset samples:\")\n",
        "print(\"=\" * 50)\n",
        "for i in [0, 1, 20, 21, 40, 41]:\n",
        "    category = ['positive', 'negative', 'neutral'][labels[i]]\n",
        "    print(f\"Class: {category}\")\n",
        "    print(f\"Text: {texts[i]}\")\n",
        "    print()\n",
        "\n",
        "# Train/test split\n",
        "X_train, X_test, y_train, y_test = train_test_split(\n",
        "    texts, labels, test_size=0.2, random_state=42, stratify=labels\n",
        ")\n",
        "\n",
        "print(f\"Training set size: {len(X_train)}\")\n",
        "print(f\"Test set size: {len(X_test)}\")\n",
        "print(f\"Training class distribution: {Counter(y_train)}\")\n",
        "print(f\"Test class distribution: {Counter(y_test)}\")\n",
        "\n",
        "# Visualize the dataset distribution\n",
        "def visualize_dataset_distribution():\n",
        "    \"\"\"Plot the class distribution of the full, train, and test sets\"\"\"\n",
        "    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n",
        "    \n",
        "    # Overall distribution\n",
        "    categories = ['positive', 'negative', 'neutral']\n",
        "    counts = [Counter(labels)[i] for i in range(3)]\n",
        "    colors = ['lightgreen', 'lightcoral', 'lightblue']\n",
        "    \n",
        "    ax1.bar(categories, counts, color=colors, alpha=0.7)\n",
        "    ax1.set_title('Overall Dataset Distribution')\n",
        "    ax1.set_ylabel('Number of samples')\n",
        "    ax1.grid(True, alpha=0.3)\n",
        "    \n",
        "    # Add value labels\n",
        "    for i, count in enumerate(counts):\n",
        "        ax1.text(i, count + 0.5, str(count), ha='center', va='bottom')\n",
        "    \n",
        "    # Train vs. test distribution\n",
        "    train_counts = [Counter(y_train)[i] for i in range(3)]\n",
        "    test_counts = [Counter(y_test)[i] for i in range(3)]\n",
        "    \n",
        "    x = np.arange(len(categories))\n",
        "    width = 0.35\n",
        "    \n",
        "    ax2.bar(x - width/2, train_counts, width, label='Train', alpha=0.7, color='skyblue')\n",
        "    ax2.bar(x + width/2, test_counts, width, label='Test', alpha=0.7, color='orange')\n",
        "    \n",
        "    ax2.set_title('Train/Test Distribution')\n",
        "    ax2.set_ylabel('Number of samples')\n",
        "    ax2.set_xticks(x)\n",
        "    ax2.set_xticklabels(categories)\n",
        "    ax2.legend()\n",
        "    ax2.grid(True, alpha=0.3)\n",
        "    \n",
        "    # Add value labels\n",
        "    for i, (train_count, test_count) in enumerate(zip(train_counts, test_counts)):\n",
        "        ax2.text(i - width/2, train_count + 0.5, str(train_count), ha='center', va='bottom')\n",
        "        ax2.text(i + width/2, test_count + 0.5, str(test_count), ha='center', va='bottom')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "visualize_dataset_distribution()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 3. Bag of Words\n",
        "\n",
        "Now let's implement the Bag of Words model, the most basic text representation.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Bag of Words implementation\n",
        "class BagOfWords:\n",
        "    \"\"\"A simple Bag of Words vectorizer\"\"\"\n",
        "    \n",
        "    def __init__(self, max_features=None, min_df=1, max_df=1.0):\n",
        "        self.max_features = max_features\n",
        "        self.min_df = min_df\n",
        "        self.max_df = max_df\n",
        "        self.vocabulary_ = {}\n",
        "        self.idf_ = {}\n",
        "        self.feature_names_ = []\n",
        "    \n",
        "    def fit(self, texts):\n",
        "        \"\"\"Build the vocabulary from the training texts\"\"\"\n",
        "        # Preprocess all texts\n",
        "        processed_texts = []\n",
        "        for text in texts:\n",
        "            tokens = preprocessor.preprocess(text)\n",
        "            processed_texts.append(tokens)\n",
        "        \n",
        "        # Count term and document frequencies\n",
        "        word_counts = Counter()\n",
        "        doc_counts = defaultdict(int)\n",
        "        \n",
        "        for tokens in processed_texts:\n",
        "            unique_tokens = set(tokens)\n",
        "            for token in unique_tokens:\n",
        "                doc_counts[token] += 1\n",
        "            word_counts.update(tokens)\n",
        "        \n",
        "        # Filter the vocabulary by document frequency. Following sklearn's\n",
        "        # convention, an int threshold is an absolute document count while a\n",
        "        # float threshold is a proportion of documents. (Comparing an int\n",
        "        # min_df directly against doc_freq / n_docs would demand that every\n",
        "        # word appear in all documents.)\n",
        "        n_docs = len(processed_texts)\n",
        "        min_count = self.min_df if isinstance(self.min_df, int) else self.min_df * n_docs\n",
        "        max_count = self.max_df if isinstance(self.max_df, int) else self.max_df * n_docs\n",
        "        filtered_words = []\n",
        "        \n",
        "        for word, count in word_counts.items():\n",
        "            doc_freq = doc_counts[word]\n",
        "            if min_count <= doc_freq <= max_count:\n",
        "                filtered_words.append((word, count))\n",
        "        \n",
        "        # Sort by frequency and cap the number of features\n",
        "        filtered_words.sort(key=lambda x: x[1], reverse=True)\n",
        "        if self.max_features:\n",
        "            filtered_words = filtered_words[:self.max_features]\n",
        "        \n",
        "        # Build the vocabulary mapping\n",
        "        self.vocabulary_ = {word: idx for idx, (word, _) in enumerate(filtered_words)}\n",
        "        self.feature_names_ = [word for word, _ in filtered_words]\n",
        "        \n",
        "        # Compute IDF values (stored for optional TF-IDF weighting)\n",
        "        for word in self.vocabulary_:\n",
        "            doc_freq = doc_counts[word]\n",
        "            self.idf_[word] = np.log(n_docs / doc_freq)\n",
        "        \n",
        "        return self\n",
        "    \n",
        "    def transform(self, texts):\n",
        "        \"\"\"Convert texts into count vectors\"\"\"\n",
        "        if not self.vocabulary_:\n",
        "            raise ValueError(\"Model not fitted yet; call fit first\")\n",
        "        \n",
        "        # Preprocess the texts\n",
        "        processed_texts = []\n",
        "        for text in texts:\n",
        "            tokens = preprocessor.preprocess(text)\n",
        "            processed_texts.append(tokens)\n",
        "        \n",
        "        # Build the count matrix\n",
        "        n_features = len(self.vocabulary_)\n",
        "        n_docs = len(processed_texts)\n",
        "        X = np.zeros((n_docs, n_features))\n",
        "        \n",
        "        for doc_idx, tokens in enumerate(processed_texts):\n",
        "            word_counts = Counter(tokens)\n",
        "            for word, count in word_counts.items():\n",
        "                if word in self.vocabulary_:\n",
        "                    word_idx = self.vocabulary_[word]\n",
        "                    X[doc_idx, word_idx] = count\n",
        "        \n",
        "        return X\n",
        "    \n",
        "    def fit_transform(self, texts):\n",
        "        \"\"\"Fit, then transform\"\"\"\n",
        "        return self.fit(texts).transform(texts)\n",
        "\n",
        "# Create and fit the Bag of Words model\n",
        "print(\"Fitting Bag of Words model...\")\n",
        "bow_model = BagOfWords(max_features=100, min_df=1, max_df=1.0)\n",
        "X_train_bow = bow_model.fit_transform(X_train)\n",
        "X_test_bow = bow_model.transform(X_test)\n",
        "\n",
        "print(f\"Number of features: {X_train_bow.shape[1]}\")\n",
        "print(f\"Training set shape: {X_train_bow.shape}\")\n",
        "print(f\"Test set shape: {X_test_bow.shape}\")\n",
        "\n",
        "# Show part of the vocabulary\n",
        "print(f\"\\nVocabulary (first 20 words):\")\n",
        "for i, word in enumerate(bow_model.feature_names_[:20]):\n",
        "    print(f\"{i}: {word}\")\n",
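        "\n",
        "# Note: fit() also stores IDF values in bow_model.idf_, although transform()\n",
        "# returns raw counts. As an illustrative sketch (not part of the original\n",
        "# pipeline), those stored values can reweight the counts into TF-IDF scores\n",
        "# by broadcasting the per-feature IDF across the columns of the count matrix:\n",
        "idf_vector = np.array([bow_model.idf_[w] for w in bow_model.feature_names_])\n",
        "X_train_tfidf = X_train_bow * idf_vector  # broadcasts IDF across columns\n",
        "print(f\"TF-IDF matrix shape: {X_train_tfidf.shape}\")\n",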
        "\n",
        "# Visualize Bag of Words vectors\n",
        "def visualize_bow_vectors():\n",
        "    \"\"\"Plot the non-zero entries of a few Bag of Words vectors\"\"\"\n",
        "    # Pick a few samples to visualize\n",
        "    sample_indices = [0, 1, 2]\n",
        "    \n",
        "    fig, axes = plt.subplots(len(sample_indices), 1, figsize=(15, 8))\n",
        "    if len(sample_indices) == 1:\n",
        "        axes = [axes]\n",
        "    \n",
        "    for i, idx in enumerate(sample_indices):\n",
        "        # Extract the non-zero features\n",
        "        vector = X_train_bow[idx]\n",
        "        non_zero_indices = np.where(vector > 0)[0]\n",
        "        non_zero_values = vector[non_zero_indices]\n",
        "        non_zero_words = [bow_model.feature_names_[j] for j in non_zero_indices]\n",
        "        \n",
        "        # Bar chart of word counts\n",
        "        axes[i].bar(range(len(non_zero_words)), non_zero_values, alpha=0.7)\n",
        "        axes[i].set_title(f'Sample {idx+1}: {X_train[idx][:50]}...')\n",
        "        axes[i].set_xlabel('Word')\n",
        "        axes[i].set_ylabel('Count')\n",
        "        axes[i].set_xticks(range(len(non_zero_words)))\n",
        "        axes[i].set_xticklabels(non_zero_words, rotation=45)\n",
        "        axes[i].grid(True, alpha=0.3)\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "visualize_bow_vectors()\n",
        "\n",
        "# Analyze Bag of Words features\n",
        "def analyze_bow_features():\n",
        "    \"\"\"Summarize feature frequencies and matrix sparsity\"\"\"\n",
        "    # Aggregate feature statistics\n",
        "    feature_sums = X_train_bow.sum(axis=0)\n",
        "    feature_means = X_train_bow.mean(axis=0)\n",
        "    \n",
        "    # Most frequent features, in descending order\n",
        "    top_features_idx = np.argsort(feature_sums)[-20:][::-1]\n",
        "    top_features = [bow_model.feature_names_[i] for i in top_features_idx]\n",
        "    top_counts = feature_sums[top_features_idx]\n",
        "    \n",
        "    print(\"Top 20 most frequent features:\")\n",
        "    print(\"=\" * 40)\n",
        "    for word, count in zip(top_features, top_counts):\n",
        "        print(f\"{word}: {count}\")\n",
        "    \n",
        "    # Plot the feature frequencies\n",
        "    plt.figure(figsize=(12, 6))\n",
        "    plt.bar(range(len(top_features)), top_counts, alpha=0.7, color='skyblue')\n",
        "    plt.title('Top 20 Most Frequent Features')\n",
        "    plt.xlabel('Feature')\n",
        "    plt.ylabel('Total count')\n",
        "    plt.xticks(range(len(top_features)), top_features, rotation=45)\n",
        "    plt.grid(True, alpha=0.3)\n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "    \n",
        "    # Sparsity analysis\n",
        "    total_elements = X_train_bow.size\n",
        "    non_zero_elements = np.count_nonzero(X_train_bow)\n",
        "    sparsity = 1 - (non_zero_elements / total_elements)\n",
        "    \n",
        "    print(f\"\\nBag of Words matrix sparsity:\")\n",
        "    print(f\"Total elements: {total_elements}\")\n",
        "    print(f\"Non-zero elements: {non_zero_elements}\")\n",
        "    print(f\"Sparsity: {sparsity:.4f} ({sparsity*100:.2f}%)\")\n",
        "\n",
        "analyze_bow_features()\n",
        "\n",
        "# Classification with Bag of Words features\n",
        "from sklearn.linear_model import LogisticRegression\n",
        "from sklearn.naive_bayes import MultinomialNB\n",
        "from sklearn.svm import SVC\n",
        "\n",
        "# Train several classifiers\n",
        "classifiers = {\n",
        "    'Logistic Regression': LogisticRegression(random_state=42, max_iter=1000),\n",
        "    'Naive Bayes': MultinomialNB(),\n",
        "    'SVM': SVC(random_state=42, kernel='linear')\n",
        "}\n",
        "\n",
        "bow_results = {}\n",
        "\n",
        "print(\"\\nBag of Words classification results:\")\n",
        "print(\"=\" * 50)\n",
        "\n",
        "for name, classifier in classifiers.items():\n",
        "    # Train\n",
        "    classifier.fit(X_train_bow, y_train)\n",
        "    \n",
        "    # Predict\n",
        "    y_pred = classifier.predict(X_test_bow)\n",
        "    \n",
        "    # Evaluate\n",
        "    accuracy = accuracy_score(y_test, y_pred)\n",
        "    bow_results[name] = accuracy\n",
        "    \n",
        "    print(f\"{name}: {accuracy:.4f}\")\n",
        "\n",
        "# Plot the classification results\n",
        "plt.figure(figsize=(10, 6))\n",
        "classifier_names = list(bow_results.keys())\n",
        "accuracies = list(bow_results.values())\n",
        "\n",
        "bars = plt.bar(classifier_names, accuracies, alpha=0.7, color=['skyblue', 'lightgreen', 'lightcoral'])\n",
        "plt.title('Classifier Accuracy on Bag of Words Features')\n",
        "plt.ylabel('Accuracy')\n",
        "plt.ylim(0, 1)\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# Add value labels\n",
        "for bar, acc in zip(bars, accuracies):\n",
        "    plt.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.01, \n",
        "             f'{acc:.3f}', ha='center', va='bottom')\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 4. Continuous Bag of Words (CBOW)\n",
        "\n",
        "Now let's implement the CBOW model, a more advanced representation that learns semantic relationships between words.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# CBOW model implementation\n",
        "class CBOWModel(nn.Module):\n",
        "    \"\"\"Continuous Bag of Words model\"\"\"\n",
        "    \n",
        "    def __init__(self, vocab_size, embedding_dim, context_size):\n",
        "        super(CBOWModel, self).__init__()\n",
        "        self.vocab_size = vocab_size\n",
        "        self.embedding_dim = embedding_dim\n",
        "        self.context_size = context_size\n",
        "        \n",
        "        # Input embedding layer\n",
        "        self.embeddings = nn.Embedding(vocab_size, embedding_dim)\n",
        "        \n",
        "        # Projection layers\n",
        "        self.linear1 = nn.Linear(embedding_dim, 128)\n",
        "        self.linear2 = nn.Linear(128, vocab_size)\n",
        "        \n",
        "        # Dropout\n",
        "        self.dropout = nn.Dropout(0.1)\n",
        "    \n",
        "    def forward(self, context):\n",
        "        \"\"\"\n",
        "        Forward pass.\n",
        "        context: [batch_size, 2*context_size] indices of the context words\n",
        "        \"\"\"\n",
        "        # Look up the context word embeddings\n",
        "        embeds = self.embeddings(context)  # [batch_size, 2*context_size, embedding_dim]\n",
        "        \n",
        "        # Average pooling over the context\n",
        "        embeds = embeds.mean(dim=1)  # [batch_size, embedding_dim]\n",
        "        \n",
        "        # Project to vocabulary logits\n",
        "        out = F.relu(self.linear1(embeds))\n",
        "        out = self.dropout(out)\n",
        "        out = self.linear2(out)\n",
        "        \n",
        "        return out\n",
        "\n",
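        "# A quick shape sanity check on a throwaway model with illustrative\n",
        "# (hypothetical) sizes: averaging the context embeddings collapses the\n",
        "# window dimension, so the output has one logit per vocabulary word.\n",
        "_demo_model = CBOWModel(vocab_size=10, embedding_dim=8, context_size=2)\n",
        "_demo_out = _demo_model(torch.randint(0, 10, (4, 4)))  # batch of 4, 2*2 context words each\n",
        "print(f\"Demo output shape: {tuple(_demo_out.shape)}\")  # (batch_size, vocab_size)\n",
        "\n",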
        "# Build CBOW training data\n",
        "def create_cbow_data(texts, context_size=2):\n",
        "    \"\"\"Create (context, target) training pairs for CBOW\"\"\"\n",
        "    # Preprocess all texts\n",
        "    processed_texts = []\n",
        "    for text in texts:\n",
        "        tokens = preprocessor.preprocess(text)\n",
        "        if len(tokens) > 2 * context_size:  # keep texts long enough for a full window\n",
        "            processed_texts.append(tokens)\n",
        "    \n",
        "    # Build the vocabulary\n",
        "    all_tokens = []\n",
        "    for tokens in processed_texts:\n",
        "        all_tokens.extend(tokens)\n",
        "    \n",
        "    word_counts = Counter(all_tokens)\n",
        "    vocab = {word: idx for idx, (word, _) in enumerate(word_counts.most_common())}\n",
        "    vocab_size = len(vocab)\n",
        "    \n",
        "    # Create the training pairs\n",
        "    contexts = []\n",
        "    targets = []\n",
        "    \n",
        "    for tokens in processed_texts:\n",
        "        for i in range(context_size, len(tokens) - context_size):\n",
        "            # Context words\n",
        "            context = []\n",
        "            for j in range(i - context_size, i + context_size + 1):\n",
        "                if j != i:  # skip the center word\n",
        "                    context.append(vocab[tokens[j]])\n",
        "            \n",
        "            # Center (target) word\n",
        "            target = vocab[tokens[i]]\n",
        "            \n",
        "            contexts.append(context)\n",
        "            targets.append(target)\n",
        "    \n",
        "    return contexts, targets, vocab, vocab_size\n",
        "\n",
        "# Create the CBOW training data\n",
        "print(\"Creating CBOW training data...\")\n",
        "contexts, targets, vocab, vocab_size = create_cbow_data(texts, context_size=2)\n",
        "\n",
        "print(f\"Vocabulary size: {vocab_size}\")\n",
        "print(f\"Number of training pairs: {len(contexts)}\")\n",
        "print(f\"Context window size: 2\")\n",
        "\n",
        "# Show a few training pairs, using an inverse index for fast lookup\n",
        "inv_vocab = {idx: word for word, idx in vocab.items()}\n",
        "print(\"\\nCBOW training pair examples:\")\n",
        "print(\"=\" * 50)\n",
        "for i in range(5):\n",
        "    context_words = [inv_vocab[idx] for idx in contexts[i]]\n",
        "    target_word = inv_vocab[targets[i]]\n",
        "    print(f\"Context: {context_words} -> Target: {target_word}\")\n",
        "\n",
        "# Convert to PyTorch tensors\n",
        "contexts_tensor = torch.tensor(contexts, dtype=torch.long)\n",
        "targets_tensor = torch.tensor(targets, dtype=torch.long)\n",
        "\n",
        "# Build the DataLoader\n",
        "dataset = TensorDataset(contexts_tensor, targets_tensor)\n",
        "dataloader = DataLoader(dataset, batch_size=32, shuffle=True)\n",
        "\n",
        "print(f\"\\nDataLoader info:\")\n",
        "print(f\"Number of batches: {len(dataloader)}\")\n",
        "print(f\"Batch size: 32\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# 训练CBOW模型\n",
        "def train_cbow_model(model, dataloader, num_epochs=50, learning_rate=0.001):\n",
        "    \"\"\"训练CBOW模型\"\"\"\n",
        "    model = model.to(device)\n",
        "    criterion = nn.CrossEntropyLoss()\n",
        "    optimizer = optim.Adam(model.parameters(), lr=learning_rate)\n",
        "    \n",
        "    train_losses = []\n",
        "    \n",
        "    print(\"开始训练CBOW模型...\")\n",
        "    print(\"=\" * 50)\n",
        "    \n",
        "    for epoch in range(num_epochs):\n",
        "        model.train()\n",
        "        total_loss = 0\n",
        "        \n",
        "        for batch_idx, (contexts, targets) in enumerate(dataloader):\n",
        "            contexts, targets = contexts.to(device), targets.to(device)\n",
        "            \n",
        "            optimizer.zero_grad()\n",
        "            outputs = model(contexts)\n",
        "            loss = criterion(outputs, targets)\n",
        "            loss.backward()\n",
        "            optimizer.step()\n",
        "            \n",
        "            total_loss += loss.item()\n",
        "        \n",
        "        avg_loss = total_loss / len(dataloader)\n",
        "        train_losses.append(avg_loss)\n",
        "        \n",
        "        if (epoch + 1) % 10 == 0:\n",
        "            print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {avg_loss:.4f}')\n",
        "    \n",
        "    print(\"Training finished!\")\n",
        "    return train_losses\n",
        "\n",
        "# Create and train the CBOW model\n",
        "embedding_dim = 50\n",
        "context_size = 2\n",
        "\n",
        "cbow_model = CBOWModel(vocab_size, embedding_dim, context_size)\n",
        "print(f\"Number of CBOW model parameters: {sum(p.numel() for p in cbow_model.parameters())}\")\n",
        "\n",
        "# Train the model\n",
        "train_losses = train_cbow_model(cbow_model, dataloader, num_epochs=100, learning_rate=0.001)\n",
        "\n",
        "# Visualize the training loss\n",
        "plt.figure(figsize=(10, 6))\n",
        "plt.plot(train_losses, 'b-', linewidth=2)\n",
        "plt.title('CBOW Training Loss')\n",
        "plt.xlabel('Epoch')\n",
        "plt.ylabel('Loss')\n",
        "plt.grid(True, alpha=0.3)\n",
        "plt.show()\n",
        "\n",
        "# Extract the learned word embeddings\n",
        "def get_word_embeddings(model, vocab):\n",
        "    \"\"\"Return the trained word embeddings\"\"\"\n",
        "    model.eval()\n",
        "    embeddings = model.embeddings.weight.data.cpu().numpy()\n",
        "    \n",
        "    word_embeddings = {}\n",
        "    for word, idx in vocab.items():\n",
        "        word_embeddings[word] = embeddings[idx]\n",
        "    \n",
        "    return word_embeddings, embeddings\n",
        "\n",
        "# Get the embeddings\n",
        "word_embeddings, all_embeddings = get_word_embeddings(cbow_model, vocab)\n",
        "\n",
        "print(f\"Embedding matrix shape: {all_embeddings.shape}\")\n",
        "print(f\"Number of words: {len(word_embeddings)}\")\n",
        "\n",
        "# Visualize the embeddings (reduce to 2D with PCA or t-SNE)\n",
        "from sklearn.decomposition import PCA\n",
        "from sklearn.manifold import TSNE\n",
        "\n",
        "def visualize_word_embeddings(embeddings, vocab, method='PCA', n_components=2, n_words=50):\n",
        "    \"\"\"Visualize word embeddings in 2D\"\"\"\n",
        "    # Pick the most frequent words, keeping only those in the vocabulary\n",
        "    # so that the word list and the plotted points stay aligned\n",
        "    word_counts = Counter()\n",
        "    for text in texts:\n",
        "        tokens = preprocessor.preprocess(text)\n",
        "        word_counts.update(tokens)\n",
        "    \n",
        "    top_words = [word for word, _ in word_counts.most_common(n_words) if word in vocab]\n",
        "    top_indices = [vocab[word] for word in top_words]\n",
        "    top_embeddings = embeddings[top_indices]\n",
        "    \n",
        "    # Dimensionality reduction\n",
        "    if method == 'PCA':\n",
        "        reducer = PCA(n_components=n_components)\n",
        "    else:\n",
        "        reducer = TSNE(n_components=n_components, random_state=42)\n",
        "    \n",
        "    reduced_embeddings = reducer.fit_transform(top_embeddings)\n",
        "    \n",
        "    # Plot\n",
        "    plt.figure(figsize=(12, 8))\n",
        "    scatter = plt.scatter(reduced_embeddings[:, 0], reduced_embeddings[:, 1], \n",
        "                         alpha=0.7, s=100, c=range(len(top_words)), cmap='viridis')\n",
        "    \n",
        "    # Label each point with its word\n",
        "    for i, word in enumerate(top_words):\n",
        "        plt.annotate(word, (reduced_embeddings[i, 0], reduced_embeddings[i, 1]), \n",
        "                    xytext=(5, 5), textcoords='offset points', fontsize=8)\n",
        "    \n",
        "    plt.title(f'Word Embedding Visualization ({method})')\n",
        "    plt.xlabel(f'{method} Component 1')\n",
        "    plt.ylabel(f'{method} Component 2')\n",
        "    plt.colorbar(scatter)\n",
        "    plt.grid(True, alpha=0.3)\n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "    \n",
        "    return reduced_embeddings\n",
        "\n",
        "# Visualize the embeddings\n",
        "print(\"Visualizing word embeddings...\")\n",
        "reduced_embeddings = visualize_word_embeddings(all_embeddings, vocab, method='PCA', n_words=30)\n",
        "\n",
        "# Analyze similar words\n",
        "def find_similar_words(word, word_embeddings, vocab, top_k=5):\n",
        "    \"\"\"Find the words most similar to a given word\"\"\"\n",
        "    if word not in word_embeddings:\n",
        "        print(f\"Word '{word}' is not in the vocabulary\")\n",
        "        return\n",
        "    \n",
        "    target_embedding = word_embeddings[word]\n",
        "    similarities = {}\n",
        "    \n",
        "    for other_word, other_embedding in word_embeddings.items():\n",
        "        if other_word != word:\n",
        "            # Cosine similarity\n",
        "            similarity = np.dot(target_embedding, other_embedding) / (\n",
        "                np.linalg.norm(target_embedding) * np.linalg.norm(other_embedding)\n",
        "            )\n",
        "            similarities[other_word] = similarity\n",
        "    \n",
        "    # Keep the top-k most similar words\n",
        "    similar_words = sorted(similarities.items(), key=lambda x: x[1], reverse=True)[:top_k]\n",
        "    \n",
        "    print(f\"Words most similar to '{word}':\")\n",
        "    for w, sim in similar_words:  # avoid shadowing the `word` argument\n",
        "        print(f\"  {w}: {sim:.4f}\")\n",
        "    \n",
        "    return similar_words\n",
        "\n",
        "# Try the similarity lookup on a few test words\n",
        "test_words = ['movie', 'great', 'terrible', 'system', 'data']\n",
        "print(\"\\nSimilar-word analysis:\")\n",
        "print(\"=\" * 50)\n",
        "\n",
        "for word in test_words:\n",
        "    if word in vocab:\n",
        "        find_similar_words(word, word_embeddings, vocab)\n",
        "        print()\n"
      ]
    },
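    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Cosine similarity, as used in `find_similar_words` above, depends only on vector direction, not magnitude. A quick numeric check on hand-picked toy vectors (illustrative values, not learned embeddings):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "\n",
        "def cosine_similarity(a, b):\n",
        "    \"\"\"Cosine of the angle between two vectors.\"\"\"\n",
        "    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))\n",
        "\n",
        "a = np.array([1.0, 2.0, 0.0])\n",
        "print(cosine_similarity(a, 2 * a))                       # parallel -> 1.0\n",
        "print(cosine_similarity(a, np.array([-2.0, 1.0, 0.0])))  # orthogonal -> 0.0\n"
      ]
    },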
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 5. Text Classification with Word Embeddings\n",
        "\n",
        "Now let's classify texts with the trained word embeddings and compare the results against the bag-of-words model.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Text classification with word embeddings\n",
        "class EmbeddingClassifier(nn.Module):\n",
        "    \"\"\"Classifier built on top of pre-trained word embeddings\"\"\"\n",
        "    \n",
        "    def __init__(self, embedding_weights, num_classes, hidden_dim=128):\n",
        "        super(EmbeddingClassifier, self).__init__()\n",
        "        \n",
        "        vocab_size, embedding_dim = embedding_weights.shape\n",
        "        \n",
        "        # Use the pre-trained embeddings\n",
        "        self.embeddings = nn.Embedding(vocab_size, embedding_dim)\n",
        "        self.embeddings.weight.data.copy_(torch.from_numpy(embedding_weights))\n",
        "        self.embeddings.weight.requires_grad = False  # freeze the embedding layer\n",
        "        \n",
        "        # Classification head\n",
        "        self.classifier = nn.Sequential(\n",
        "            nn.Linear(embedding_dim, hidden_dim),\n",
        "            nn.ReLU(),\n",
        "            nn.Dropout(0.3),\n",
        "            nn.Linear(hidden_dim, hidden_dim // 2),\n",
        "            nn.ReLU(),\n",
        "            nn.Dropout(0.3),\n",
        "            nn.Linear(hidden_dim // 2, num_classes)\n",
        "        )\n",
        "    \n",
        "    def forward(self, x):\n",
        "        # Look up the embeddings\n",
        "        embeds = self.embeddings(x)  # [batch_size, seq_len, embedding_dim]\n",
        "        \n",
        "        # Mean pooling over the sequence dimension\n",
        "        pooled = embeds.mean(dim=1)  # [batch_size, embedding_dim]\n",
        "        \n",
        "        # Classify\n",
        "        output = self.classifier(pooled)\n",
        "        return output\n",
        "\n",
        "# Prepare the training data\n",
        "def prepare_embedding_data(texts, labels, vocab, max_length=20):\n",
        "    \"\"\"Convert texts to fixed-length index sequences\"\"\"\n",
        "    processed_texts = []\n",
        "    processed_labels = []\n",
        "    \n",
        "    for text, label in zip(texts, labels):\n",
        "        tokens = preprocessor.preprocess(text)\n",
        "        \n",
        "        # Map tokens to vocabulary indices\n",
        "        indices = []\n",
        "        for token in tokens:\n",
        "            if token in vocab:\n",
        "                indices.append(vocab[token])\n",
        "        \n",
        "        if len(indices) > 0:  # keep only texts with at least one in-vocabulary token\n",
        "            # Truncate or pad to a fixed length\n",
        "            if len(indices) > max_length:\n",
        "                indices = indices[:max_length]\n",
        "            else:\n",
        "                # Pad with 0; note that index 0 is also a real word here,\n",
        "                # so a dedicated padding index would be cleaner\n",
        "                indices.extend([0] * (max_length - len(indices)))\n",
        "            \n",
        "            processed_texts.append(indices)\n",
        "            processed_labels.append(label)\n",
        "    \n",
        "    return np.array(processed_texts), np.array(processed_labels)\n",
        "\n",
        "# Prepare the data\n",
        "X_train_emb, y_train_emb = prepare_embedding_data(X_train, y_train, vocab)\n",
        "X_test_emb, y_test_emb = prepare_embedding_data(X_test, y_test, vocab)\n",
        "\n",
        "print(f\"Embedding train set shape: {X_train_emb.shape}\")\n",
        "print(f\"Embedding test set shape: {X_test_emb.shape}\")\n",
        "\n",
        "# Convert to PyTorch tensors\n",
        "X_train_tensor = torch.tensor(X_train_emb, dtype=torch.long)\n",
        "y_train_tensor = torch.tensor(y_train_emb, dtype=torch.long)\n",
        "X_test_tensor = torch.tensor(X_test_emb, dtype=torch.long)\n",
        "y_test_tensor = torch.tensor(y_test_emb, dtype=torch.long)\n",
        "\n",
        "# Create the data loaders\n",
        "train_dataset = TensorDataset(X_train_tensor, y_train_tensor)\n",
        "test_dataset = TensorDataset(X_test_tensor, y_test_tensor)\n",
        "\n",
        "train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)\n",
        "test_loader = DataLoader(test_dataset, batch_size=16, shuffle=False)\n",
        "\n",
        "# Create the embedding-based classifier\n",
        "embedding_classifier = EmbeddingClassifier(all_embeddings, num_classes=3)\n",
        "embedding_classifier = embedding_classifier.to(device)\n",
        "\n",
        "print(f\"Number of classifier parameters: {sum(p.numel() for p in embedding_classifier.parameters())}\")\n",
        "\n",
        "# Training function\n",
        "def train_embedding_classifier(model, train_loader, test_loader, num_epochs=50, learning_rate=0.001):\n",
        "    \"\"\"Train the embedding-based classifier\"\"\"\n",
        "    criterion = nn.CrossEntropyLoss()\n",
        "    optimizer = optim.Adam(model.parameters(), lr=learning_rate)\n",
        "    \n",
        "    train_losses = []\n",
        "    train_accuracies = []\n",
        "    test_accuracies = []\n",
        "    \n",
        "    print(\"Training the embedding classifier...\")\n",
        "    print(\"=\" * 50)\n",
        "    \n",
        "    for epoch in range(num_epochs):\n",
        "        # Training phase\n",
        "        model.train()\n",
        "        total_loss = 0\n",
        "        correct = 0\n",
        "        total = 0\n",
        "        \n",
        "        for batch_idx, (data, target) in enumerate(train_loader):\n",
        "            data, target = data.to(device), target.to(device)\n",
        "            \n",
        "            optimizer.zero_grad()\n",
        "            output = model(data)\n",
        "            loss = criterion(output, target)\n",
        "            loss.backward()\n",
        "            optimizer.step()\n",
        "            \n",
        "            total_loss += loss.item()\n",
        "            pred = output.argmax(dim=1, keepdim=True)\n",
        "            correct += pred.eq(target.view_as(pred)).sum().item()\n",
        "            total += target.size(0)\n",
        "        \n",
        "        train_loss = total_loss / len(train_loader)\n",
        "        train_acc = 100. * correct / total\n",
        "        train_losses.append(train_loss)\n",
        "        train_accuracies.append(train_acc)\n",
        "        \n",
        "        # Evaluation phase\n",
        "        model.eval()\n",
        "        test_correct = 0\n",
        "        test_total = 0\n",
        "        \n",
        "        with torch.no_grad():\n",
        "            for data, target in test_loader:\n",
        "                data, target = data.to(device), target.to(device)\n",
        "                output = model(data)\n",
        "                pred = output.argmax(dim=1, keepdim=True)\n",
        "                test_correct += pred.eq(target.view_as(pred)).sum().item()\n",
        "                test_total += target.size(0)\n",
        "        \n",
        "        test_acc = 100. * test_correct / test_total\n",
        "        test_accuracies.append(test_acc)\n",
        "        \n",
        "        if (epoch + 1) % 10 == 0:\n",
        "            print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {train_loss:.4f}, '\n",
        "                  f'Train Acc: {train_acc:.2f}%, Test Acc: {test_acc:.2f}%')\n",
        "    \n",
        "    print(\"Training finished!\")\n",
        "    return train_losses, train_accuracies, test_accuracies\n",
        "\n",
        "# Train the model\n",
        "train_losses, train_accs, test_accs = train_embedding_classifier(\n",
        "    embedding_classifier, train_loader, test_loader, num_epochs=100, learning_rate=0.001\n",
        ")\n",
        "\n",
        "# Visualize the training results\n",
        "def plot_embedding_training_results(train_losses, train_accs, test_accs):\n",
        "    \"\"\"Plot loss and accuracy curves for the embedding classifier\"\"\"\n",
        "    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))\n",
        "    \n",
        "    # Loss curve\n",
        "    ax1.plot(train_losses, 'b-', linewidth=2, label='Train loss')\n",
        "    ax1.set_title('Embedding Classifier Training Loss')\n",
        "    ax1.set_xlabel('Epoch')\n",
        "    ax1.set_ylabel('Loss')\n",
        "    ax1.grid(True, alpha=0.3)\n",
        "    ax1.legend()\n",
        "    \n",
        "    # Accuracy curves\n",
        "    ax2.plot(train_accs, 'b-', linewidth=2, label='Train accuracy')\n",
        "    ax2.plot(test_accs, 'r-', linewidth=2, label='Test accuracy')\n",
        "    ax2.set_title('Embedding Classifier Accuracy')\n",
        "    ax2.set_xlabel('Epoch')\n",
        "    ax2.set_ylabel('Accuracy (%)')\n",
        "    ax2.grid(True, alpha=0.3)\n",
        "    ax2.legend()\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "plot_embedding_training_results(train_losses, train_accs, test_accs)\n",
        "\n",
        "# Evaluate the embedding classifier\n",
        "def evaluate_embedding_classifier(model, test_loader):\n",
        "    \"\"\"Collect predictions and targets on the test set\"\"\"\n",
        "    model.eval()\n",
        "    all_preds = []\n",
        "    all_targets = []\n",
        "    \n",
        "    with torch.no_grad():\n",
        "        for data, target in test_loader:\n",
        "            data, target = data.to(device), target.to(device)\n",
        "            output = model(data)\n",
        "            pred = output.argmax(dim=1)\n",
        "            all_preds.extend(pred.cpu().numpy())\n",
        "            all_targets.extend(target.cpu().numpy())\n",
        "    \n",
        "    return all_preds, all_targets\n",
        "\n",
        "# Evaluate the model\n",
        "embedding_preds, embedding_targets = evaluate_embedding_classifier(embedding_classifier, test_loader)\n",
        "embedding_accuracy = accuracy_score(embedding_targets, embedding_preds)\n",
        "\n",
        "print(f\"Embedding classifier accuracy: {embedding_accuracy:.4f}\")\n",
        "\n",
        "# Classification report\n",
        "print(\"\\nEmbedding classifier classification report:\")\n",
        "print(\"=\" * 50)\n",
        "print(classification_report(embedding_targets, embedding_preds, \n",
        "                          target_names=['Positive', 'Negative', 'Neutral']))\n",
        "\n",
        "# Confusion matrix\n",
        "plt.figure(figsize=(8, 6))\n",
        "cm = confusion_matrix(embedding_targets, embedding_preds)\n",
        "sns.heatmap(cm, annot=True, fmt='d', cmap='Blues', \n",
        "            xticklabels=['Positive', 'Negative', 'Neutral'],\n",
        "            yticklabels=['Positive', 'Negative', 'Neutral'])\n",
        "plt.title('Embedding Classifier Confusion Matrix')\n",
        "plt.xlabel('Predicted label')\n",
        "plt.ylabel('True label')\n",
        "plt.show()\n"
      ]
    },
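    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "One caveat about the classifier above: `embeds.mean(dim=1)` averages over every position, including the zero-padding. A masked mean is a common fix. The sketch below assumes a dedicated padding index `PAD_IDX = 0`; in this notebook index 0 is also a real word, so the vocabulary would need a reserved slot first:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "\n",
        "PAD_IDX = 0  # assumption: index 0 is reserved for padding\n",
        "\n",
        "def masked_mean(embeds, token_ids):\n",
        "    \"\"\"Average embeddings over non-padding positions only.\"\"\"\n",
        "    mask = (token_ids != PAD_IDX).unsqueeze(-1).float()  # [batch, seq, 1]\n",
        "    summed = (embeds * mask).sum(dim=1)                  # [batch, dim]\n",
        "    counts = mask.sum(dim=1).clamp(min=1.0)              # avoid division by zero\n",
        "    return summed / counts\n",
        "\n",
        "# Two sequences of length 4; the second one has two padded positions\n",
        "token_ids = torch.tensor([[3, 7, 2, 5], [4, 9, 0, 0]])\n",
        "embeds = torch.ones(2, 4, 8)  # dummy embeddings\n",
        "print(masked_mean(embeds, token_ids).shape)\n"
      ]
    },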
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 6. Method Comparison\n",
        "\n",
        "Now let's compare the performance of the bag-of-words model and the CBOW word embeddings.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Method comparison\n",
        "def compare_methods():\n",
        "    \"\"\"Compare the bag-of-words and CBOW embedding approaches\"\"\"\n",
        "    \n",
        "    # Collect the results\n",
        "    results = {\n",
        "        'Method': ['BoW + Logistic Regression', 'BoW + Naive Bayes', 'BoW + SVM', 'CBOW embeddings + Neural Network'],\n",
        "        'Accuracy': [\n",
        "            bow_results['Logistic Regression'],\n",
        "            bow_results['Naive Bayes'], \n",
        "            bow_results['SVM'],\n",
        "            embedding_accuracy\n",
        "        ],\n",
        "        'Feature dimension': [\n",
        "            X_train_bow.shape[1],\n",
        "            X_train_bow.shape[1],\n",
        "            X_train_bow.shape[1],\n",
        "            embedding_dim\n",
        "        ],\n",
        "        'Model complexity': [\n",
        "            'Low',\n",
        "            'Low',\n",
        "            'Medium',\n",
        "            'High'\n",
        "        ]\n",
        "    }\n",
        "    \n",
        "    # Build the comparison table\n",
        "    import pandas as pd\n",
        "    df = pd.DataFrame(results)\n",
        "    \n",
        "    print(\"Method comparison:\")\n",
        "    print(\"=\" * 80)\n",
        "    print(df.to_string(index=False))\n",
        "    \n",
        "    # Plot the comparison\n",
        "    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n",
        "    \n",
        "    # Accuracy comparison\n",
        "    methods = results['Method']\n",
        "    accuracies = results['Accuracy']\n",
        "    colors = ['skyblue', 'lightgreen', 'lightcoral', 'gold']\n",
        "    \n",
        "    bars1 = ax1.bar(range(len(methods)), accuracies, color=colors, alpha=0.7)\n",
        "    ax1.set_title('Accuracy by Method')\n",
        "    ax1.set_ylabel('Accuracy')\n",
        "    ax1.set_xticks(range(len(methods)))\n",
        "    ax1.set_xticklabels([m.split(' + ')[0] for m in methods], rotation=45)\n",
        "    ax1.set_ylim(0, 1)\n",
        "    ax1.grid(True, alpha=0.3)\n",
        "    \n",
        "    # Value labels\n",
        "    for bar, acc in zip(bars1, accuracies):\n",
        "        ax1.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.01, \n",
        "                f'{acc:.3f}', ha='center', va='bottom')\n",
        "    \n",
        "    # Feature-dimension comparison\n",
        "    dimensions = results['Feature dimension']\n",
        "    bars2 = ax2.bar(range(len(methods)), dimensions, color=colors, alpha=0.7)\n",
        "    ax2.set_title('Feature Dimension by Method')\n",
        "    ax2.set_ylabel('Feature dimension')\n",
        "    ax2.set_xticks(range(len(methods)))\n",
        "    ax2.set_xticklabels([m.split(' + ')[0] for m in methods], rotation=45)\n",
        "    ax2.grid(True, alpha=0.3)\n",
        "    \n",
        "    # Value labels\n",
        "    for bar, dim in zip(bars2, dimensions):\n",
        "        ax2.text(bar.get_x() + bar.get_width()/2, bar.get_height() + max(dimensions)*0.01, \n",
        "                f'{dim}', ha='center', va='bottom')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "    \n",
        "    return df\n",
        "\n",
        "# Run the comparison\n",
        "comparison_df = compare_methods()\n",
        "\n",
        "# Detailed analysis\n",
        "print(\"\\nDetailed analysis:\")\n",
        "print(\"=\" * 50)\n",
        "\n",
        "print(\"1. Bag-of-words model:\")\n",
        "print(\"   - Pros: simple and intuitive, computationally efficient, easy to understand and implement\")\n",
        "print(\"   - Cons: ignores word order, cannot capture semantic relations, features are high-dimensional and sparse\")\n",
        "print(\"   - Best for: small datasets, quick prototyping, scenarios with modest accuracy requirements\")\n",
        "\n",
        "print(\"\\n2. CBOW word embeddings:\")\n",
        "print(\"   - Pros: capture semantic relations, features are low-dimensional and dense, generalize well\")\n",
        "print(\"   - Cons: longer training time, need more data, higher model complexity\")\n",
        "print(\"   - Best for: large datasets, tasks that need semantic understanding, higher accuracy requirements\")\n",
        "\n",
        "print(\"\\n3. Performance:\")\n",
        "best_bow = max(bow_results.values())\n",
        "print(f\"   - Best bag-of-words accuracy: {best_bow:.4f}\")\n",
        "print(f\"   - CBOW embedding accuracy: {embedding_accuracy:.4f}\")\n",
        "print(f\"   - Relative change: {((embedding_accuracy - best_bow) / best_bow * 100):.2f}%\")\n",
        "\n",
        "# Visualize the feature spaces\n",
        "def visualize_feature_spaces():\n",
        "    \"\"\"Visualize the feature spaces of both methods\"\"\"\n",
        "    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n",
        "    \n",
        "    # Bag-of-words feature space (first two principal components)\n",
        "    from sklearn.decomposition import PCA\n",
        "    \n",
        "    # PCA on the bag-of-words features\n",
        "    pca_bow = PCA(n_components=2)\n",
        "    X_bow_pca = pca_bow.fit_transform(X_train_bow)\n",
        "    \n",
        "    scatter1 = ax1.scatter(X_bow_pca[:, 0], X_bow_pca[:, 1], \n",
        "                          c=y_train, cmap='viridis', alpha=0.7, s=50)\n",
        "    ax1.set_title('Bag-of-Words Feature Space (PCA)')\n",
        "    ax1.set_xlabel(f'PC1 ({pca_bow.explained_variance_ratio_[0]:.2%} variance)')\n",
        "    ax1.set_ylabel(f'PC2 ({pca_bow.explained_variance_ratio_[1]:.2%} variance)')\n",
        "    ax1.grid(True, alpha=0.3)\n",
        "    \n",
        "    # Embedding feature space:\n",
        "    # use the average word embedding as the document representation\n",
        "    doc_embeddings = []\n",
        "    for text in X_train:\n",
        "        tokens = preprocessor.preprocess(text)\n",
        "        embeddings = []\n",
        "        for token in tokens:\n",
        "            if token in word_embeddings:\n",
        "                embeddings.append(word_embeddings[token])\n",
        "        \n",
        "        if embeddings:\n",
        "            doc_emb = np.mean(embeddings, axis=0)\n",
        "        else:\n",
        "            doc_emb = np.zeros(embedding_dim)\n",
        "        doc_embeddings.append(doc_emb)\n",
        "    \n",
        "    doc_embeddings = np.array(doc_embeddings)\n",
        "    \n",
        "    # PCA on the document embeddings\n",
        "    pca_emb = PCA(n_components=2)\n",
        "    X_emb_pca = pca_emb.fit_transform(doc_embeddings)\n",
        "    \n",
        "    scatter2 = ax2.scatter(X_emb_pca[:, 0], X_emb_pca[:, 1], \n",
        "                          c=y_train, cmap='viridis', alpha=0.7, s=50)\n",
        "    ax2.set_title('Embedding Feature Space (PCA)')\n",
        "    ax2.set_xlabel(f'PC1 ({pca_emb.explained_variance_ratio_[0]:.2%} variance)')\n",
        "    ax2.set_ylabel(f'PC2 ({pca_emb.explained_variance_ratio_[1]:.2%} variance)')\n",
        "    ax2.grid(True, alpha=0.3)\n",
        "    \n",
        "    # Colorbars\n",
        "    plt.colorbar(scatter1, ax=ax1, label='Class')\n",
        "    plt.colorbar(scatter2, ax=ax2, label='Class')\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "    \n",
        "    # Return the document embeddings so they can be reused below\n",
        "    # (previously they were local to this function but used afterwards)\n",
        "    return doc_embeddings\n",
        "\n",
        "doc_embeddings = visualize_feature_spaces()\n",
        "\n",
        "# Quantify how separable each feature space is\n",
        "def calculate_separability(X, y):\n",
        "    \"\"\"Compute separability measures for a labeled feature space\"\"\"\n",
        "    from sklearn.metrics import silhouette_score\n",
        "    \n",
        "    # Silhouette coefficient as a clustering-quality measure\n",
        "    silhouette = silhouette_score(X, y)\n",
        "    \n",
        "    # Average inter-class and intra-class distances\n",
        "    unique_classes = np.unique(y)\n",
        "    inter_class_distances = []\n",
        "    intra_class_distances = []\n",
        "    \n",
        "    for i, class1 in enumerate(unique_classes):\n",
        "        class1_data = X[y == class1]\n",
        "        class1_center = np.mean(class1_data, axis=0)\n",
        "        \n",
        "        # Intra-class distance: mean distance to the class centroid\n",
        "        intra_dist = np.mean([np.linalg.norm(x - class1_center) for x in class1_data])\n",
        "        intra_class_distances.append(intra_dist)\n",
        "        \n",
        "        # Inter-class distance: distance between class centroids\n",
        "        for j, class2 in enumerate(unique_classes):\n",
        "            if i < j:\n",
        "                class2_data = X[y == class2]\n",
        "                class2_center = np.mean(class2_data, axis=0)\n",
        "                inter_dist = np.linalg.norm(class1_center - class2_center)\n",
        "                inter_class_distances.append(inter_dist)\n",
        "    \n",
        "    avg_inter_dist = np.mean(inter_class_distances)\n",
        "    avg_intra_dist = np.mean(intra_class_distances)\n",
        "    separability_ratio = avg_inter_dist / avg_intra_dist\n",
        "    \n",
        "    return silhouette, separability_ratio\n",
        "\n",
        "# Compute separability for both feature spaces\n",
        "bow_silhouette, bow_separability = calculate_separability(X_train_bow, y_train)\n",
        "emb_silhouette, emb_separability = calculate_separability(doc_embeddings, y_train)\n",
        "\n",
        "print(\"\\nFeature-space separability:\")\n",
        "print(f\"Bag of words - silhouette: {bow_silhouette:.4f}, separability ratio: {bow_separability:.4f}\")\n",
        "print(f\"Embeddings - silhouette: {emb_silhouette:.4f}, separability ratio: {emb_separability:.4f}\")\n",
        "\n",
        "if emb_silhouette > bow_silhouette:\n",
        "    print(\"The embedding features are more separable\")\n",
        "else:\n",
        "    print(\"The bag-of-words features are more separable\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 7. Summary and Extensions\n",
        "\n",
        "Let's wrap up what we've learned and look at some directions for further study.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Summary and extensions\n",
        "print(\"🎉 NLP Basics Tutorial Summary\")\n",
        "print(\"=\" * 60)\n",
        "\n",
        "print(\"\\n📚 What this tutorial covered:\")\n",
        "print(\"1. Text preprocessing\")\n",
        "print(\"   - Text cleaning, tokenization, stop-word removal\")\n",
        "print(\"   - Word-frequency statistics and visualization\")\n",
        "\n",
        "print(\"\\n2. Bag of Words\")\n",
        "print(\"   - Bag-of-words model implemented from scratch\")\n",
        "print(\"   - Feature extraction and vectorization\")\n",
        "print(\"   - Performance comparison across classifiers\")\n",
        "\n",
        "print(\"\\n3. Continuous Bag of Words (CBOW)\")\n",
        "print(\"   - CBOW theory and implementation\")\n",
        "print(\"   - Word-embedding training and visualization\")\n",
        "print(\"   - Similar-word analysis\")\n",
        "\n",
        "print(\"\\n4. Text classification\")\n",
        "print(\"   - Embedding-based classifier\")\n",
        "print(\"   - Training and evaluation workflow\")\n",
        "print(\"   - Performance comparison\")\n",
        "\n",
        "print(\"\\n5. Method comparison\")\n",
        "print(\"   - Accuracy, feature dimension, and complexity\")\n",
        "print(\"   - Feature-space visualization\")\n",
        "print(\"   - Separability analysis\")\n",
        "\n",
        "print(\"\\n🔍 Key findings:\")\n",
        "print(f\"- Best bag-of-words accuracy: {max(bow_results.values()):.4f}\")\n",
        "print(f\"- CBOW embedding accuracy: {embedding_accuracy:.4f}\")\n",
        "print(\"- Word embeddings capture semantics better\")\n",
        "print(\"- Bag of words is more computationally efficient\")\n",
        "\n",
        "print(\"\\n🚀 Directions to explore:\")\n",
        "print(\"1. Advanced word embeddings\")\n",
        "print(\"   - Word2Vec (Skip-gram)\")\n",
        "print(\"   - GloVe (Global Vectors)\")\n",
        "print(\"   - FastText\")\n",
        "print(\"   - Pre-trained embeddings (Word2Vec, GloVe)\")\n",
        "\n",
        "print(\"\\n2. Deep learning models\")\n",
        "print(\"   - Recurrent networks (RNN, LSTM, GRU)\")\n",
        "print(\"   - Convolutional networks (CNN)\")\n",
        "print(\"   - Transformer models\")\n",
        "print(\"   - Pre-trained models such as BERT\")\n",
        "\n",
        "print(\"\\n3. Better preprocessing\")\n",
        "print(\"   - Stemming and lemmatization\")\n",
        "print(\"   - Named entity recognition\")\n",
        "print(\"   - Part-of-speech tagging\")\n",
        "print(\"   - Syntactic parsing\")\n",
        "\n",
        "print(\"\\n4. More evaluation metrics\")\n",
        "print(\"   - Precision, recall, F1 score\")\n",
        "print(\"   - Confusion-matrix analysis\")\n",
        "print(\"   - ROC curves and AUC\")\n",
        "print(\"   - Cross-validation\")\n",
        "\n",
        "print(\"\\n5. Real-world applications\")\n",
        "print(\"   - Sentiment analysis\")\n",
        "print(\"   - Spam detection\")\n",
        "print(\"   - News classification\")\n",
        "print(\"   - Product-review analysis\")\n",
        "\n",
        "# Draw a learning-path diagram\n",
        "def create_learning_path():\n",
        "    \"\"\"Draw an NLP learning-path diagram\"\"\"\n",
        "    fig, ax = plt.subplots(figsize=(14, 10))\n",
        "    \n",
        "    # Levels of the learning path\n",
        "    levels = [\n",
        "        \"Fundamentals\",\n",
        "        \"Text Preprocessing\", \n",
        "        \"Classical Methods\",\n",
        "        \"Word Embeddings\",\n",
        "        \"Deep Learning\",\n",
        "        \"Pre-trained Models\"\n",
        "    ]\n",
        "    \n",
        "    topics = [\n",
        "        [\"NLP basics\", \"Text representation\"],\n",
        "        [\"Tokenization\", \"Cleaning\", \"Normalization\"],\n",
        "        [\"Bag of Words\", \"TF-IDF\", \"Naive Bayes\"],\n",
        "        [\"Word2Vec\", \"CBOW\", \"Skip-gram\", \"GloVe\"],\n",
        "        [\"RNN\", \"LSTM\", \"CNN\", \"Attention\"],\n",
        "        [\"BERT\", \"GPT\", \"Transformer\", \"Pre-training\"]\n",
        "    ]\n",
        "    \n",
        "    # Draw the path from top to bottom\n",
        "    y_positions = [5, 4, 3, 2, 1, 0]\n",
        "    colors = ['lightblue', 'lightgreen', 'lightyellow', 'lightcoral', 'lightpink', 'lightgray']\n",
        "    \n",
        "    for i, (level, topic_list, y_pos, color) in enumerate(zip(levels, topics, y_positions, colors)):\n",
        "        # Level box\n",
        "        rect = plt.Rectangle((0, y_pos-0.3), 12, 0.6, \n",
        "                           facecolor=color, alpha=0.7, edgecolor='black')\n",
        "        ax.add_patch(rect)\n",
        "        \n",
        "        # Level label\n",
        "        ax.text(6, y_pos, level, ha='center', va='center', \n",
        "               fontsize=12, fontweight='bold')\n",
        "        \n",
        "        # Topic labels\n",
        "        for j, topic in enumerate(topic_list):\n",
        "            x_pos = 1 + j * 2.5\n",
        "            ax.text(x_pos, y_pos, topic, ha='center', va='center', \n",
        "                   fontsize=10, bbox=dict(boxstyle=\"round,pad=0.2\", facecolor='white', alpha=0.8))\n",
        "        \n",
        "        # Arrow to the next level\n",
        "        if i < len(levels) - 1:\n",
        "            ax.arrow(6, y_pos-0.3, 0, -0.4, head_width=0.3, head_length=0.1, \n",
        "                    fc='black', ec='black')\n",
        "    \n",
        "    ax.set_xlim(0, 12)\n",
        "    ax.set_ylim(-0.5, 5.5)\n",
        "    ax.set_title('NLP Learning Path', fontsize=16, fontweight='bold')\n",
        "    ax.axis('off')\n",
        "    \n",
        "    # Mark the current progress (the word-embeddings level)\n",
        "    current_level = 3\n",
        "    ax.text(11, y_positions[current_level], '✓ Done', \n",
        "           ha='center', va='center', fontsize=10, \n",
        "           bbox=dict(boxstyle=\"round,pad=0.3\", facecolor='green', alpha=0.7))\n",
        "    \n",
        "    plt.tight_layout()\n",
        "    plt.show()\n",
        "\n",
        "create_learning_path()\n",
        "\n",
        "# Practical tips\n",
        "print(\"\\n💡 Practical tips:\")\n",
        "print(\"1. Try different datasets\")\n",
        "print(\"   - Use larger text corpora\")\n",
        "print(\"   - Try texts from different domains (news, social media, academic papers)\")\n",
        "print(\"   - Handle multilingual text\")\n",
        "\n",
        "print(\"\\n2. Tune hyperparameters\")\n",
        "print(\"   - Adjust the embedding dimension\")\n",
        "print(\"   - Change the context window size\")\n",
        "print(\"   - Try different learning rates\")\n",
        "\n",
        "print(\"\\n3. Build more features\")\n",
        "print(\"   - Add word-vector visualizations\")\n",
        "print(\"   - Implement word-analogy tasks\")\n",
        "print(\"   - Build a word-similarity calculator\")\n",
        "\n",
        "print(\"\\n4. Optimize performance\")\n",
        "print(\"   - Train on a GPU\")\n",
        "print(\"   - Optimize batching\")\n",
        "print(\"   - Add early stopping\")\n",
        "\n",
        "print(\"\\n🎯 Suggested next steps:\")\n",
        "print(\"1. Learn the Word2Vec Skip-gram model\")\n",
        "print(\"2. Explore pre-trained embeddings such as GloVe\")\n",
        "print(\"3. Study recurrent networks (RNN/LSTM)\")\n",
        "print(\"4. Learn about attention and the Transformer\")\n",
        "print(\"5. Practice with pre-trained models such as BERT\")\n",
        "\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "print(\"Congratulations on finishing the NLP basics tutorial! 🎉\")\n",
        "print(\"You now know the core ideas behind bag-of-words and CBOW and how to implement them.\")\n",
        "print(\"Keep exploring more advanced NLP techniques!\")\n"
      ]
    }
  ],
  "metadata": {
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
