{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "bba5a16c",
   "metadata": {
    "origin_pos": 0
   },
   "source": [
     "# Sentiment Analysis and the Dataset\n",
     "`sec_sentiment`\n",
     "\n",
     "With the rapid growth of online social media and review platforms, a wealth of opinionated data has been recorded, bearing great potential for supporting decision-making processes.\n",
     "*Sentiment analysis* studies the sentiment “hidden” in people's text, such as product reviews, blog comments, and forum discussions.\n",
     "It enjoys wide applications in fields as diverse as politics (e.g., analysis of public sentiment toward policies), finance (e.g., analysis of market sentiment), and marketing (e.g., product research and brand management).\n",
     "\n",
     "Since sentiment can be categorized as discrete polarities or scales (e.g., positive and negative), we can regard sentiment analysis as a text classification task, which transforms a variable-length text sequence into a fixed-length text category.\n",
     "We will use Stanford's [large movie review dataset](https://ai.stanford.edu/~amaas/data/sentiment/) for sentiment analysis. It consists of a training set and a test set, each containing 25,000 movie reviews downloaded from IMDb. In both datasets, there are equal numbers of “positive” and “negative” labels, indicating different sentiment polarities.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "00eff05f-cd92-4f7c-a1f5-5f5b7f5a29a0",
   "metadata": {},
   "outputs": [],
   "source": [
     "import os\n",
     "import sys\n",
     "\n",
     "# Add the parent directory to sys.path so that local modules can be imported\n",
     "parent_directory = os.path.dirname(os.getcwd())\n",
     "if parent_directory not in sys.path:\n",
     "    sys.path.append(parent_directory)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "7822039c",
   "metadata": {
    "origin_pos": 2,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import torch\n",
    "from torch import nn\n",
    "from d2l import torch as d2l"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76c1daa2",
   "metadata": {
    "origin_pos": 4
   },
   "source": [
     "## Reading the Dataset\n",
     "\n",
     "First, download and extract the IMDb review dataset into the path `/aclImdb`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "831081fb",
   "metadata": {
    "origin_pos": 5,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "开始下载 aclImdb 文件...\n",
      "文件 ../../DataSet/aclImdb_v1.tar.gz 已存在，跳过下载。\n",
      "文件 ../../DataSet/aclImdb_v1.tar.gz 验证通过，命中缓存\n",
      "下载完成，文件保存为 ../../DataSet/aclImdb_v1.tar.gz\n",
      "开始解压文件，文件扩展名: .gz\n",
      "检测到tar文件，开始解压...\n",
      "文件已解压到 ../../DataSet\n"
     ]
    }
   ],
   "source": [
    "#@save\n",
    "d2l.DATA_HUB['aclImdb'] = (\n",
    "    'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz',\n",
    "    '01ada507287d82875905620988597833ad4e0903')\n",
    "\n",
    "data_dir = d2l.download_extract('aclImdb', 'aclImdb')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a376611c",
   "metadata": {
    "origin_pos": 6
   },
   "source": [
     "Next, read the training and test datasets. Each example is a review together with its label: 1 for “positive” and 0 for “negative”.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "4d08a828",
   "metadata": {
    "origin_pos": 7,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "# of training examples: 25000\n",
       "label: 1 review: For a movie that gets no respect there sure are a lot of mem\n",
       "label: 1 review: Bizarre horror movie filled with famous faces but stolen by \n",
       "label: 1 review: A solid, if unremarkable film. Matthau, as Einstein, was won\n"
      ]
     }
    ],
    "source": [
     "#@save\n",
     "def read_imdb(data_dir, is_train):\n",
     "    \"\"\"Read the text sequences and labels of the IMDb review dataset.\n",
     "\n",
     "    Args:\n",
     "    - data_dir: root directory where the dataset is stored.\n",
     "    - is_train: True to read the training set, False to read the test set.\n",
     "\n",
     "    Returns:\n",
     "    - data: list of review texts.\n",
     "    - labels: list of labels; 1 for positive reviews, 0 for negative ones.\n",
     "    \"\"\"\n",
     "    data, labels = [], []\n",
     "    # Process the two label folders: 'pos' (positive) and 'neg' (negative)\n",
     "    for label in ('pos', 'neg'):\n",
     "        # Pick the train/test subdirectory depending on is_train\n",
     "        folder_name = os.path.join(data_dir, 'train' if is_train else 'test', label)\n",
     "        # Iterate over every review file in the folder\n",
     "        for file in os.listdir(folder_name):\n",
     "            with open(os.path.join(folder_name, file), 'rb') as f:\n",
     "                # Decode as UTF-8 and drop newline characters\n",
     "                review = f.read().decode('utf-8').replace('\\n', '')\n",
     "                data.append(review)\n",
     "                # 'pos' reviews get label 1, 'neg' reviews get label 0\n",
     "                labels.append(1 if label == 'pos' else 0)\n",
     "    return data, labels\n",
     "\n",
     "\n",
     "# Read the training split of the IMDb dataset\n",
     "train_data = read_imdb(data_dir, is_train=True)\n",
     "print('# of training examples:', len(train_data[0]))\n",
     "\n",
     "# Show the first three reviews and their labels\n",
     "for x, y in zip(train_data[0][:3], train_data[1][:3]):\n",
     "    print('label:', y, 'review:', x[0:60])  # first 60 characters only\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "35e114e6",
   "metadata": {
    "origin_pos": 8
   },
   "source": [
     "## Preprocessing the Dataset\n",
     "\n",
     "Treating each word as a token and filtering out words that appear fewer than 5 times, we create a vocabulary from the training dataset.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "b833b646",
   "metadata": {
    "origin_pos": 9,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [],
   "source": [
    "train_tokens = d2l.tokenize(train_data[0], token='word')\n",
    "vocab = d2l.Vocab(train_tokens, min_freq=5, reserved_tokens=['<pad>'])"
   ]
  },
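  {
   "cell_type": "markdown",
   "id": "f2a91c3e",
   "metadata": {},
   "source": [
    "To make the effect of `min_freq` concrete, here is a minimal pure-Python sketch of this kind of frequency filtering (standard library only, on hypothetical toy data; an illustration of the idea, not the actual `d2l.Vocab` implementation).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f2a91c3f",
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "# Toy corpus: three tokenized 'reviews' (hypothetical data for illustration)\n",
    "toy_tokens = [['good', 'movie'], ['good', 'plot'], ['good', 'good', 'bad']]\n",
    "counter = Counter(tok for line in toy_tokens for tok in line)\n",
    "\n",
    "min_freq = 2\n",
    "# Reserved tokens come first; then tokens with frequency >= min_freq\n",
    "idx_to_token = ['<unk>', '<pad>'] + [\n",
    "    tok for tok, freq in counter.most_common() if freq >= min_freq]\n",
    "token_to_idx = {tok: idx for idx, tok in enumerate(idx_to_token)}\n",
    "\n",
    "# 'movie', 'plot' and 'bad' appear fewer than 2 times, so they get no index\n",
    "print(token_to_idx)  # {'<unk>': 0, '<pad>': 1, 'good': 2}\n"
   ]
  },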
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "bff31545-4684-426b-81b1-ef517022535a",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Tokenize the reviews with d2l.tokenize; token='word' splits each review\n",
     "# into a list of words, e.g. \"This is great\" -> [\"This\", \"is\", \"great\"],\n",
     "# so train_tokens is a list of token lists, one per review.\n",
     "train_tokens = d2l.tokenize(train_data[0], token='word')\n",
     "\n",
     "# Build the vocabulary with d2l.Vocab:\n",
     "# - min_freq=5 keeps only words that appear at least 5 times in the training\n",
     "#   data; rarer words get no index of their own and are treated as the\n",
     "#   unknown token instead, which limits the impact of low-frequency words.\n",
     "# - reserved_tokens=['<pad>'] reserves a padding token, used later to pad\n",
     "#   short reviews so that all sequences have the same length.\n",
     "# The resulting vocab maps each kept token to a unique integer index.\n",
     "vocab = d2l.Vocab(train_tokens, min_freq=5, reserved_tokens=['<pad>'])"
   ]
  },
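  {
   "cell_type": "markdown",
   "id": "c7d04a1e",
   "metadata": {},
   "source": [
    "The reserved `<pad>` token is what lets us turn variable-length reviews into fixed-length sequences: short sequences are padded and long ones truncated. Here is a minimal hand-written sketch of this step (d2l ships a similar helper; this version is only for illustration):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c7d04a1f",
   "metadata": {},
   "outputs": [],
   "source": [
    "def truncate_pad(line, num_steps, padding_token):\n",
    "    \"\"\"Truncate or pad a sequence of token indices to exactly num_steps.\"\"\"\n",
    "    if len(line) > num_steps:\n",
    "        return line[:num_steps]  # truncate sequences that are too long\n",
    "    # pad sequences that are too short\n",
    "    return line + [padding_token] * (num_steps - len(line))\n",
    "\n",
    "print(truncate_pad([7, 8, 9], 5, 0))           # [7, 8, 9, 0, 0]\n",
    "print(truncate_pad([1, 2, 3, 4, 5, 6], 5, 0))  # [1, 2, 3, 4, 5]\n"
   ]
  },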
  {
   "cell_type": "markdown",
   "id": "6592cc46",
   "metadata": {
    "origin_pos": 10
   },
   "source": [
     "After tokenization, let us plot the histogram of review lengths in tokens.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "ca2ed7c7",
   "metadata": {
    "origin_pos": 11,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "data": {
      "image/svg+xml": [
       "<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"no\"?>\n",
       "<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n",
       "  \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n",
       "<svg xmlns:xlink=\"http://www.w3.org/1999/xlink\" width=\"255.828125pt\" height=\"183.35625pt\" viewBox=\"0 0 255.828125 183.35625\" xmlns=\"http://www.w3.org/2000/svg\" version=\"1.1\">\n",
       " <metadata>\n",
       "  <rdf:RDF xmlns:dc=\"http://purl.org/dc/elements/1.1/\" xmlns:cc=\"http://creativecommons.org/ns#\" xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\">\n",
       "   <cc:Work>\n",
       "    <dc:type rdf:resource=\"http://purl.org/dc/dcmitype/StillImage\"/>\n",
       "    <dc:date>2024-11-26T16:27:50.279040</dc:date>\n",
       "    <dc:format>image/svg+xml</dc:format>\n",
       "    <dc:creator>\n",
       "     <cc:Agent>\n",
       "      <dc:title>Matplotlib v3.9.2, https://matplotlib.org/</dc:title>\n",
       "     </cc:Agent>\n",
       "    </dc:creator>\n",
       "   </cc:Work>\n",
       "  </rdf:RDF>\n",
       " </metadata>\n",
       " <defs>\n",
       "  <style type=\"text/css\">*{stroke-linejoin: round; stroke-linecap: butt}</style>\n",
       " </defs>\n",
       " <g id=\"figure_1\">\n",
       "  <g id=\"patch_1\">\n",
       "   <path d=\"M 0 183.35625 \n",
       "L 255.828125 183.35625 \n",
       "L 255.828125 0 \n",
       "L 0 0 \n",
       "z\n",
       "\" style=\"fill: #ffffff\"/>\n",
       "  </g>\n",
       "  <g id=\"axes_1\">\n",
       "   <g id=\"patch_2\">\n",
       "    <path d=\"M 53.328125 145.8 \n",
       "L 248.628125 145.8 \n",
       "L 248.628125 7.2 \n",
       "L 53.328125 7.2 \n",
       "z\n",
       "\" style=\"fill: #ffffff\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_3\">\n",
       "    <path d=\"M 62.205398 145.8 \n",
       "L 71.549895 145.8 \n",
       "L 71.549895 135.096774 \n",
       "L 62.205398 135.096774 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_4\">\n",
       "    <path d=\"M 71.549895 145.8 \n",
       "L 80.894393 145.8 \n",
       "L 80.894393 99.870968 \n",
       "L 71.549895 99.870968 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_5\">\n",
       "    <path d=\"M 80.894393 145.8 \n",
       "L 90.238891 145.8 \n",
       "L 90.238891 13.8 \n",
       "L 80.894393 13.8 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_6\">\n",
       "    <path d=\"M 90.238891 145.8 \n",
       "L 99.583388 145.8 \n",
       "L 99.583388 52.23871 \n",
       "L 90.238891 52.23871 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_7\">\n",
       "    <path d=\"M 99.583388 145.8 \n",
       "L 108.927886 145.8 \n",
       "L 108.927886 91.277419 \n",
       "L 99.583388 91.277419 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_8\">\n",
       "    <path d=\"M 108.927886 145.8 \n",
       "L 118.272383 145.8 \n",
       "L 118.272383 110.032258 \n",
       "L 108.927886 110.032258 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_9\">\n",
       "    <path d=\"M 118.272383 145.8 \n",
       "L 127.616881 145.8 \n",
       "L 127.616881 119.090323 \n",
       "L 118.272383 119.090323 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_10\">\n",
       "    <path d=\"M 127.616881 145.8 \n",
       "L 136.961379 145.8 \n",
       "L 136.961379 126.348387 \n",
       "L 127.616881 126.348387 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_11\">\n",
       "    <path d=\"M 136.961379 145.8 \n",
       "L 146.305876 145.8 \n",
       "L 146.305876 131.109677 \n",
       "L 136.961379 131.109677 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_12\">\n",
       "    <path d=\"M 146.305876 145.8 \n",
       "L 155.650374 145.8 \n",
       "L 155.650374 134.554839 \n",
       "L 146.305876 134.554839 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_13\">\n",
       "    <path d=\"M 155.650374 145.8 \n",
       "L 164.994871 145.8 \n",
       "L 164.994871 137.341935 \n",
       "L 155.650374 137.341935 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_14\">\n",
       "    <path d=\"M 164.994871 145.8 \n",
       "L 174.339369 145.8 \n",
       "L 174.339369 139.045161 \n",
       "L 164.994871 139.045161 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_15\">\n",
       "    <path d=\"M 174.339369 145.8 \n",
       "L 183.683867 145.8 \n",
       "L 183.683867 140.825806 \n",
       "L 174.339369 140.825806 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_16\">\n",
       "    <path d=\"M 183.683867 145.8 \n",
       "L 193.028364 145.8 \n",
       "L 193.028364 141.793548 \n",
       "L 183.683867 141.793548 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_17\">\n",
       "    <path d=\"M 193.028364 145.8 \n",
       "L 202.372862 145.8 \n",
       "L 202.372862 142.432258 \n",
       "L 193.028364 142.432258 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_18\">\n",
       "    <path d=\"M 202.372862 145.8 \n",
       "L 211.717359 145.8 \n",
       "L 211.717359 143.225806 \n",
       "L 202.372862 143.225806 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_19\">\n",
       "    <path d=\"M 211.717359 145.8 \n",
       "L 221.061857 145.8 \n",
       "L 221.061857 143.554839 \n",
       "L 211.717359 143.554839 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_20\">\n",
       "    <path d=\"M 221.061857 145.8 \n",
       "L 230.406355 145.8 \n",
       "L 230.406355 144.154839 \n",
       "L 221.061857 144.154839 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_21\">\n",
       "    <path d=\"M 230.406355 145.8 \n",
       "L 239.750852 145.8 \n",
       "L 239.750852 144.348387 \n",
       "L 230.406355 144.348387 \n",
       "z\n",
       "\" clip-path=\"url(#p93ac632028)\" style=\"fill: #1f77b4\"/>\n",
       "   </g>\n",
       "   <g id=\"matplotlib.axis_1\">\n",
       "    <g id=\"xtick_1\">\n",
       "     <g id=\"line2d_1\">\n",
       "      <defs>\n",
       "       <path id=\"m92b5df5366\" d=\"M 0 0 \n",
       "L 0 3.5 \n",
       "\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
       "      </defs>\n",
       "      <g>\n",
       "       <use xlink:href=\"#m92b5df5366\" x=\"62.205398\" y=\"145.8\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "     <g id=\"text_1\">\n",
       "      <!-- 0 -->\n",
       "      <g transform=\"translate(59.024148 160.398438) scale(0.1 -0.1)\">\n",
       "       <defs>\n",
       "        <path id=\"DejaVuSans-30\" d=\"M 2034 4250 \n",
       "Q 1547 4250 1301 3770 \n",
       "Q 1056 3291 1056 2328 \n",
       "Q 1056 1369 1301 889 \n",
       "Q 1547 409 2034 409 \n",
       "Q 2525 409 2770 889 \n",
       "Q 3016 1369 3016 2328 \n",
       "Q 3016 3291 2770 3770 \n",
       "Q 2525 4250 2034 4250 \n",
       "z\n",
       "M 2034 4750 \n",
       "Q 2819 4750 3233 4129 \n",
       "Q 3647 3509 3647 2328 \n",
       "Q 3647 1150 3233 529 \n",
       "Q 2819 -91 2034 -91 \n",
       "Q 1250 -91 836 529 \n",
       "Q 422 1150 422 2328 \n",
       "Q 422 3509 836 4129 \n",
       "Q 1250 4750 2034 4750 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       </defs>\n",
       "       <use xlink:href=\"#DejaVuSans-30\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "    </g>\n",
       "    <g id=\"xtick_2\">\n",
       "     <g id=\"line2d_2\">\n",
       "      <g>\n",
       "       <use xlink:href=\"#m92b5df5366\" x=\"99.583388\" y=\"145.8\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "     <g id=\"text_2\">\n",
       "      <!-- 200 -->\n",
       "      <g transform=\"translate(90.039638 160.398438) scale(0.1 -0.1)\">\n",
       "       <defs>\n",
       "        <path id=\"DejaVuSans-32\" d=\"M 1228 531 \n",
       "L 3431 531 \n",
       "L 3431 0 \n",
       "L 469 0 \n",
       "L 469 531 \n",
       "Q 828 903 1448 1529 \n",
       "Q 2069 2156 2228 2338 \n",
       "Q 2531 2678 2651 2914 \n",
       "Q 2772 3150 2772 3378 \n",
       "Q 2772 3750 2511 3984 \n",
       "Q 2250 4219 1831 4219 \n",
       "Q 1534 4219 1204 4116 \n",
       "Q 875 4013 500 3803 \n",
       "L 500 4441 \n",
       "Q 881 4594 1212 4672 \n",
       "Q 1544 4750 1819 4750 \n",
       "Q 2544 4750 2975 4387 \n",
       "Q 3406 4025 3406 3419 \n",
       "Q 3406 3131 3298 2873 \n",
       "Q 3191 2616 2906 2266 \n",
       "Q 2828 2175 2409 1742 \n",
       "Q 1991 1309 1228 531 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       </defs>\n",
       "       <use xlink:href=\"#DejaVuSans-32\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"63.623047\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"127.246094\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "    </g>\n",
       "    <g id=\"xtick_3\">\n",
       "     <g id=\"line2d_3\">\n",
       "      <g>\n",
       "       <use xlink:href=\"#m92b5df5366\" x=\"136.961379\" y=\"145.8\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "     <g id=\"text_3\">\n",
       "      <!-- 400 -->\n",
       "      <g transform=\"translate(127.417629 160.398438) scale(0.1 -0.1)\">\n",
       "       <defs>\n",
       "        <path id=\"DejaVuSans-34\" d=\"M 2419 4116 \n",
       "L 825 1625 \n",
       "L 2419 1625 \n",
       "L 2419 4116 \n",
       "z\n",
       "M 2253 4666 \n",
       "L 3047 4666 \n",
       "L 3047 1625 \n",
       "L 3713 1625 \n",
       "L 3713 1100 \n",
       "L 3047 1100 \n",
       "L 3047 0 \n",
       "L 2419 0 \n",
       "L 2419 1100 \n",
       "L 313 1100 \n",
       "L 313 1709 \n",
       "L 2253 4666 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       </defs>\n",
       "       <use xlink:href=\"#DejaVuSans-34\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"63.623047\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"127.246094\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "    </g>\n",
       "    <g id=\"xtick_4\">\n",
       "     <g id=\"line2d_4\">\n",
       "      <g>\n",
       "       <use xlink:href=\"#m92b5df5366\" x=\"174.339369\" y=\"145.8\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "     <g id=\"text_4\">\n",
       "      <!-- 600 -->\n",
       "      <g transform=\"translate(164.795619 160.398438) scale(0.1 -0.1)\">\n",
       "       <defs>\n",
       "        <path id=\"DejaVuSans-36\" d=\"M 2113 2584 \n",
       "Q 1688 2584 1439 2293 \n",
       "Q 1191 2003 1191 1497 \n",
       "Q 1191 994 1439 701 \n",
       "Q 1688 409 2113 409 \n",
       "Q 2538 409 2786 701 \n",
       "Q 3034 994 3034 1497 \n",
       "Q 3034 2003 2786 2293 \n",
       "Q 2538 2584 2113 2584 \n",
       "z\n",
       "M 3366 4563 \n",
       "L 3366 3988 \n",
       "Q 3128 4100 2886 4159 \n",
       "Q 2644 4219 2406 4219 \n",
       "Q 1781 4219 1451 3797 \n",
       "Q 1122 3375 1075 2522 \n",
       "Q 1259 2794 1537 2939 \n",
       "Q 1816 3084 2150 3084 \n",
       "Q 2853 3084 3261 2657 \n",
       "Q 3669 2231 3669 1497 \n",
       "Q 3669 778 3244 343 \n",
       "Q 2819 -91 2113 -91 \n",
       "Q 1303 -91 875 529 \n",
       "Q 447 1150 447 2328 \n",
       "Q 447 3434 972 4092 \n",
       "Q 1497 4750 2381 4750 \n",
       "Q 2619 4750 2861 4703 \n",
       "Q 3103 4656 3366 4563 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       </defs>\n",
       "       <use xlink:href=\"#DejaVuSans-36\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"63.623047\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"127.246094\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "    </g>\n",
       "    <g id=\"xtick_5\">\n",
       "     <g id=\"line2d_5\">\n",
       "      <g>\n",
       "       <use xlink:href=\"#m92b5df5366\" x=\"211.717359\" y=\"145.8\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "     <g id=\"text_5\">\n",
       "      <!-- 800 -->\n",
       "      <g transform=\"translate(202.173609 160.398438) scale(0.1 -0.1)\">\n",
       "       <defs>\n",
       "        <path id=\"DejaVuSans-38\" d=\"M 2034 2216 \n",
       "Q 1584 2216 1326 1975 \n",
       "Q 1069 1734 1069 1313 \n",
       "Q 1069 891 1326 650 \n",
       "Q 1584 409 2034 409 \n",
       "Q 2484 409 2743 651 \n",
       "Q 3003 894 3003 1313 \n",
       "Q 3003 1734 2745 1975 \n",
       "Q 2488 2216 2034 2216 \n",
       "z\n",
       "M 1403 2484 \n",
       "Q 997 2584 770 2862 \n",
       "Q 544 3141 544 3541 \n",
       "Q 544 4100 942 4425 \n",
       "Q 1341 4750 2034 4750 \n",
       "Q 2731 4750 3128 4425 \n",
       "Q 3525 4100 3525 3541 \n",
       "Q 3525 3141 3298 2862 \n",
       "Q 3072 2584 2669 2484 \n",
       "Q 3125 2378 3379 2068 \n",
       "Q 3634 1759 3634 1313 \n",
       "Q 3634 634 3220 271 \n",
       "Q 2806 -91 2034 -91 \n",
       "Q 1263 -91 848 271 \n",
       "Q 434 634 434 1313 \n",
       "Q 434 1759 690 2068 \n",
       "Q 947 2378 1403 2484 \n",
       "z\n",
       "M 1172 3481 \n",
       "Q 1172 3119 1398 2916 \n",
       "Q 1625 2713 2034 2713 \n",
       "Q 2441 2713 2670 2916 \n",
       "Q 2900 3119 2900 3481 \n",
       "Q 2900 3844 2670 4047 \n",
       "Q 2441 4250 2034 4250 \n",
       "Q 1625 4250 1398 4047 \n",
       "Q 1172 3844 1172 3481 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       </defs>\n",
       "       <use xlink:href=\"#DejaVuSans-38\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"63.623047\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"127.246094\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "    </g>\n",
       "    <g id=\"text_6\">\n",
       "     <!-- # tokens per review -->\n",
       "     <g transform=\"translate(100.597656 174.076563) scale(0.1 -0.1)\">\n",
       "      <defs>\n",
       "       <path id=\"DejaVuSans-23\" d=\"M 3272 2816 \n",
       "L 2363 2816 \n",
       "L 2100 1772 \n",
       "L 3016 1772 \n",
       "L 3272 2816 \n",
       "z\n",
       "M 2803 4594 \n",
       "L 2478 3297 \n",
       "L 3391 3297 \n",
       "L 3719 4594 \n",
       "L 4219 4594 \n",
       "L 3897 3297 \n",
       "L 4872 3297 \n",
       "L 4872 2816 \n",
       "L 3775 2816 \n",
       "L 3519 1772 \n",
       "L 4513 1772 \n",
       "L 4513 1294 \n",
       "L 3397 1294 \n",
       "L 3072 0 \n",
       "L 2572 0 \n",
       "L 2894 1294 \n",
       "L 1978 1294 \n",
       "L 1656 0 \n",
       "L 1153 0 \n",
       "L 1478 1294 \n",
       "L 494 1294 \n",
       "L 494 1772 \n",
       "L 1594 1772 \n",
       "L 1856 2816 \n",
       "L 850 2816 \n",
       "L 850 3297 \n",
       "L 1978 3297 \n",
       "L 2297 4594 \n",
       "L 2803 4594 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-20\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-74\" d=\"M 1172 4494 \n",
       "L 1172 3500 \n",
       "L 2356 3500 \n",
       "L 2356 3053 \n",
       "L 1172 3053 \n",
       "L 1172 1153 \n",
       "Q 1172 725 1289 603 \n",
       "Q 1406 481 1766 481 \n",
       "L 2356 481 \n",
       "L 2356 0 \n",
       "L 1766 0 \n",
       "Q 1100 0 847 248 \n",
       "Q 594 497 594 1153 \n",
       "L 594 3053 \n",
       "L 172 3053 \n",
       "L 172 3500 \n",
       "L 594 3500 \n",
       "L 594 4494 \n",
       "L 1172 4494 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-6f\" d=\"M 1959 3097 \n",
       "Q 1497 3097 1228 2736 \n",
       "Q 959 2375 959 1747 \n",
       "Q 959 1119 1226 758 \n",
       "Q 1494 397 1959 397 \n",
       "Q 2419 397 2687 759 \n",
       "Q 2956 1122 2956 1747 \n",
       "Q 2956 2369 2687 2733 \n",
       "Q 2419 3097 1959 3097 \n",
       "z\n",
       "M 1959 3584 \n",
       "Q 2709 3584 3137 3096 \n",
       "Q 3566 2609 3566 1747 \n",
       "Q 3566 888 3137 398 \n",
       "Q 2709 -91 1959 -91 \n",
       "Q 1206 -91 779 398 \n",
       "Q 353 888 353 1747 \n",
       "Q 353 2609 779 3096 \n",
       "Q 1206 3584 1959 3584 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-6b\" d=\"M 581 4863 \n",
       "L 1159 4863 \n",
       "L 1159 1991 \n",
       "L 2875 3500 \n",
       "L 3609 3500 \n",
       "L 1753 1863 \n",
       "L 3688 0 \n",
       "L 2938 0 \n",
       "L 1159 1709 \n",
       "L 1159 0 \n",
       "L 581 0 \n",
       "L 581 4863 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-65\" d=\"M 3597 1894 \n",
       "L 3597 1613 \n",
       "L 953 1613 \n",
       "Q 991 1019 1311 708 \n",
       "Q 1631 397 2203 397 \n",
       "Q 2534 397 2845 478 \n",
       "Q 3156 559 3463 722 \n",
       "L 3463 178 \n",
       "Q 3153 47 2828 -22 \n",
       "Q 2503 -91 2169 -91 \n",
       "Q 1331 -91 842 396 \n",
       "Q 353 884 353 1716 \n",
       "Q 353 2575 817 3079 \n",
       "Q 1281 3584 2069 3584 \n",
       "Q 2775 3584 3186 3129 \n",
       "Q 3597 2675 3597 1894 \n",
       "z\n",
       "M 3022 2063 \n",
       "Q 3016 2534 2758 2815 \n",
       "Q 2500 3097 2075 3097 \n",
       "Q 1594 3097 1305 2825 \n",
       "Q 1016 2553 972 2059 \n",
       "L 3022 2063 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-6e\" d=\"M 3513 2113 \n",
       "L 3513 0 \n",
       "L 2938 0 \n",
       "L 2938 2094 \n",
       "Q 2938 2591 2744 2837 \n",
       "Q 2550 3084 2163 3084 \n",
       "Q 1697 3084 1428 2787 \n",
       "Q 1159 2491 1159 1978 \n",
       "L 1159 0 \n",
       "L 581 0 \n",
       "L 581 3500 \n",
       "L 1159 3500 \n",
       "L 1159 2956 \n",
       "Q 1366 3272 1645 3428 \n",
       "Q 1925 3584 2291 3584 \n",
       "Q 2894 3584 3203 3211 \n",
       "Q 3513 2838 3513 2113 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-73\" d=\"M 2834 3397 \n",
       "L 2834 2853 \n",
       "Q 2591 2978 2328 3040 \n",
       "Q 2066 3103 1784 3103 \n",
       "Q 1356 3103 1142 2972 \n",
       "Q 928 2841 928 2578 \n",
       "Q 928 2378 1081 2264 \n",
       "Q 1234 2150 1697 2047 \n",
       "L 1894 2003 \n",
       "Q 2506 1872 2764 1633 \n",
       "Q 3022 1394 3022 966 \n",
       "Q 3022 478 2636 193 \n",
       "Q 2250 -91 1575 -91 \n",
       "Q 1294 -91 989 -36 \n",
       "Q 684 19 347 128 \n",
       "L 347 722 \n",
       "Q 666 556 975 473 \n",
       "Q 1284 391 1588 391 \n",
       "Q 1994 391 2212 530 \n",
       "Q 2431 669 2431 922 \n",
       "Q 2431 1156 2273 1281 \n",
       "Q 2116 1406 1581 1522 \n",
       "L 1381 1569 \n",
       "Q 847 1681 609 1914 \n",
       "Q 372 2147 372 2553 \n",
       "Q 372 3047 722 3315 \n",
       "Q 1072 3584 1716 3584 \n",
       "Q 2034 3584 2315 3537 \n",
       "Q 2597 3491 2834 3397 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-70\" d=\"M 1159 525 \n",
       "L 1159 -1331 \n",
       "L 581 -1331 \n",
       "L 581 3500 \n",
       "L 1159 3500 \n",
       "L 1159 2969 \n",
       "Q 1341 3281 1617 3432 \n",
       "Q 1894 3584 2278 3584 \n",
       "Q 2916 3584 3314 3078 \n",
       "Q 3713 2572 3713 1747 \n",
       "Q 3713 922 3314 415 \n",
       "Q 2916 -91 2278 -91 \n",
       "Q 1894 -91 1617 61 \n",
       "Q 1341 213 1159 525 \n",
       "z\n",
       "M 3116 1747 \n",
       "Q 3116 2381 2855 2742 \n",
       "Q 2594 3103 2138 3103 \n",
       "Q 1681 3103 1420 2742 \n",
       "Q 1159 2381 1159 1747 \n",
       "Q 1159 1113 1420 752 \n",
       "Q 1681 391 2138 391 \n",
       "Q 2594 391 2855 752 \n",
       "Q 3116 1113 3116 1747 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-72\" d=\"M 2631 2963 \n",
       "Q 2534 3019 2420 3045 \n",
       "Q 2306 3072 2169 3072 \n",
       "Q 1681 3072 1420 2755 \n",
       "Q 1159 2438 1159 1844 \n",
       "L 1159 0 \n",
       "L 581 0 \n",
       "L 581 3500 \n",
       "L 1159 3500 \n",
       "L 1159 2956 \n",
       "Q 1341 3275 1631 3429 \n",
       "Q 1922 3584 2338 3584 \n",
       "Q 2397 3584 2469 3576 \n",
       "Q 2541 3569 2628 3553 \n",
       "L 2631 2963 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-76\" d=\"M 191 3500 \n",
       "L 800 3500 \n",
       "L 1894 563 \n",
       "L 2988 3500 \n",
       "L 3597 3500 \n",
       "L 2284 0 \n",
       "L 1503 0 \n",
       "L 191 3500 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-69\" d=\"M 603 3500 \n",
       "L 1178 3500 \n",
       "L 1178 0 \n",
       "L 603 0 \n",
       "L 603 3500 \n",
       "z\n",
       "M 603 4863 \n",
       "L 1178 4863 \n",
       "L 1178 4134 \n",
       "L 603 4134 \n",
       "L 603 4863 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-77\" d=\"M 269 3500 \n",
       "L 844 3500 \n",
       "L 1563 769 \n",
       "L 2278 3500 \n",
       "L 2956 3500 \n",
       "L 3675 769 \n",
       "L 4391 3500 \n",
       "L 4966 3500 \n",
       "L 4050 0 \n",
       "L 3372 0 \n",
       "L 2619 2869 \n",
       "L 1863 0 \n",
       "L 1184 0 \n",
       "L 269 3500 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "      </defs>\n",
       "      <use xlink:href=\"#DejaVuSans-23\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-20\" x=\"83.789062\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-74\" x=\"115.576172\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-6f\" x=\"154.785156\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-6b\" x=\"215.966797\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-65\" x=\"270.251953\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-6e\" x=\"331.775391\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-73\" x=\"395.154297\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-20\" x=\"447.253906\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-70\" x=\"479.041016\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-65\" x=\"542.517578\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-72\" x=\"604.041016\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-20\" x=\"645.154297\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-72\" x=\"676.941406\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-65\" x=\"715.804688\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-76\" x=\"777.328125\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-69\" x=\"836.507812\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-65\" x=\"864.291016\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-77\" x=\"925.814453\"/>\n",
       "     </g>\n",
       "    </g>\n",
       "   </g>\n",
       "   <g id=\"matplotlib.axis_2\">\n",
       "    <g id=\"ytick_1\">\n",
       "     <g id=\"line2d_6\">\n",
       "      <defs>\n",
       "       <path id=\"mc76c273534\" d=\"M 0 0 \n",
       "L -3.5 0 \n",
       "\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
       "      </defs>\n",
       "      <g>\n",
       "       <use xlink:href=\"#mc76c273534\" x=\"53.328125\" y=\"145.8\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "     <g id=\"text_7\">\n",
       "      <!-- 0 -->\n",
       "      <g transform=\"translate(39.965625 149.599219) scale(0.1 -0.1)\">\n",
       "       <use xlink:href=\"#DejaVuSans-30\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "    </g>\n",
       "    <g id=\"ytick_2\">\n",
       "     <g id=\"line2d_7\">\n",
       "      <g>\n",
       "       <use xlink:href=\"#mc76c273534\" x=\"53.328125\" y=\"107.090323\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "     <g id=\"text_8\">\n",
       "      <!-- 2000 -->\n",
       "      <g transform=\"translate(20.878125 110.889541) scale(0.1 -0.1)\">\n",
       "       <use xlink:href=\"#DejaVuSans-32\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"63.623047\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"127.246094\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"190.869141\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "    </g>\n",
       "    <g id=\"ytick_3\">\n",
       "     <g id=\"line2d_8\">\n",
       "      <g>\n",
       "       <use xlink:href=\"#mc76c273534\" x=\"53.328125\" y=\"68.380645\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "     <g id=\"text_9\">\n",
       "      <!-- 4000 -->\n",
       "      <g transform=\"translate(20.878125 72.179864) scale(0.1 -0.1)\">\n",
       "       <use xlink:href=\"#DejaVuSans-34\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"63.623047\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"127.246094\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"190.869141\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "    </g>\n",
       "    <g id=\"ytick_4\">\n",
       "     <g id=\"line2d_9\">\n",
       "      <g>\n",
       "       <use xlink:href=\"#mc76c273534\" x=\"53.328125\" y=\"29.670968\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "     <g id=\"text_10\">\n",
       "      <!-- 6000 -->\n",
       "      <g transform=\"translate(20.878125 33.470186) scale(0.1 -0.1)\">\n",
       "       <use xlink:href=\"#DejaVuSans-36\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"63.623047\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"127.246094\"/>\n",
       "       <use xlink:href=\"#DejaVuSans-30\" x=\"190.869141\"/>\n",
       "      </g>\n",
       "     </g>\n",
       "    </g>\n",
       "    <g id=\"text_11\">\n",
       "     <!-- count -->\n",
       "     <g transform=\"translate(14.798437 90.60625) rotate(-90) scale(0.1 -0.1)\">\n",
       "      <defs>\n",
       "       <path id=\"DejaVuSans-63\" d=\"M 3122 3366 \n",
       "L 3122 2828 \n",
       "Q 2878 2963 2633 3030 \n",
       "Q 2388 3097 2138 3097 \n",
       "Q 1578 3097 1268 2742 \n",
       "Q 959 2388 959 1747 \n",
       "Q 959 1106 1268 751 \n",
       "Q 1578 397 2138 397 \n",
       "Q 2388 397 2633 464 \n",
       "Q 2878 531 3122 666 \n",
       "L 3122 134 \n",
       "Q 2881 22 2623 -34 \n",
       "Q 2366 -91 2075 -91 \n",
       "Q 1284 -91 818 406 \n",
       "Q 353 903 353 1747 \n",
       "Q 353 2603 823 3093 \n",
       "Q 1294 3584 2113 3584 \n",
       "Q 2378 3584 2631 3529 \n",
       "Q 2884 3475 3122 3366 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "       <path id=\"DejaVuSans-75\" d=\"M 544 1381 \n",
       "L 544 3500 \n",
       "L 1119 3500 \n",
       "L 1119 1403 \n",
       "Q 1119 906 1312 657 \n",
       "Q 1506 409 1894 409 \n",
       "Q 2359 409 2629 706 \n",
       "Q 2900 1003 2900 1516 \n",
       "L 2900 3500 \n",
       "L 3475 3500 \n",
       "L 3475 0 \n",
       "L 2900 0 \n",
       "L 2900 538 \n",
       "Q 2691 219 2414 64 \n",
       "Q 2138 -91 1772 -91 \n",
       "Q 1169 -91 856 284 \n",
       "Q 544 659 544 1381 \n",
       "z\n",
       "M 1991 3584 \n",
       "L 1991 3584 \n",
       "z\n",
       "\" transform=\"scale(0.015625)\"/>\n",
       "      </defs>\n",
       "      <use xlink:href=\"#DejaVuSans-63\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-6f\" x=\"54.980469\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-75\" x=\"116.162109\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-6e\" x=\"179.541016\"/>\n",
       "      <use xlink:href=\"#DejaVuSans-74\" x=\"242.919922\"/>\n",
       "     </g>\n",
       "    </g>\n",
       "   </g>\n",
       "   <g id=\"patch_22\">\n",
       "    <path d=\"M 53.328125 145.8 \n",
       "L 53.328125 7.2 \n",
       "\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_23\">\n",
       "    <path d=\"M 248.628125 145.8 \n",
       "L 248.628125 7.2 \n",
       "\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_24\">\n",
       "    <path d=\"M 53.328125 145.8 \n",
       "L 248.628125 145.8 \n",
       "\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
       "   </g>\n",
       "   <g id=\"patch_25\">\n",
       "    <path d=\"M 53.328125 7.2 \n",
       "L 248.628125 7.2 \n",
       "\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
       "   </g>\n",
       "  </g>\n",
       " </g>\n",
       " <defs>\n",
       "  <clipPath id=\"p93ac632028\">\n",
       "   <rect x=\"53.328125\" y=\"7.2\" width=\"195.3\" height=\"138.6\"/>\n",
       "  </clipPath>\n",
       " </defs>\n",
       "</svg>\n"
      ],
      "text/plain": [
       "<Figure size 350x250 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# 设置绘图的大小\n",
    "d2l.set_figsize()\n",
    "\"\"\"\n",
    "该行代码调用了 d2l.set_figsize 方法来设置绘图的默认大小。\n",
    "在 d2l 库中，`set_figsize` 是一个用于设置图表大小的辅助函数。\n",
    "默认情况下，它将图表的大小设置为 3.5×2.5 英寸（对应本单元输出中的 `Figure size 350x250`），适合显示在 Jupyter Notebook 中。\n",
    "\n",
    "它可以不带参数调用；如果需要调整图形大小，可以传入 `figsize` 参数，例如：`d2l.set_figsize(figsize=(6, 4))`。\n",
    "\"\"\"\n",
    "\n",
    "# 设置x轴标签\n",
    "d2l.plt.xlabel('# tokens per review')\n",
    "\"\"\"\n",
    "该行代码设置了图表的 x 轴标签为 \"# tokens per review\"。\n",
    "这意味着 x 轴代表每条评论的 token 数量，即分词后评论的长度。\n",
    "\n",
    "在文本分类任务中，评论的单词数可以帮助我们了解文本数据的长度分布，并可作为模型输入的一个特征。\n",
    "\"\"\"\n",
    "\n",
    "# 设置y轴标签\n",
    "d2l.plt.ylabel('count')\n",
    "\"\"\"\n",
    "该行代码设置了图表的 y 轴标签为 \"count\"。\n",
    "这意味着 y 轴表示每个 token 数量区间内，包含多少条评论。\n",
    "例如，如果某个区间是 `[0, 50)`，y 轴的值就是包含 0 到 50 个 token 的评论数量。\n",
    "\"\"\"\n",
    "\n",
    "# 绘制直方图，展示每条评论的 token 数量分布\n",
    "d2l.plt.hist([len(line) for line in train_tokens], bins=range(0, 1000, 50))\n",
    "\"\"\"\n",
    "该行代码绘制了一个直方图，展示每条评论的 token 数量分布。\n",
    "\n",
    "详细解释：\n",
    "- `[len(line) for line in train_tokens]`：这是一个列表推导式，它计算了每条评论（即每个 token 列表）中的 token 数量。`train_tokens` 是一个包含所有训练集评论的分词列表，每个 `line` 表示一条评论的 token 列表。因此，`len(line)` 就是每条评论的单词数量。\n",
    "- `bins=range(0, 1000, 50)`：`bins` 参数给出直方图各区间的边界。`range(0, 1000, 50)` 生成 0, 50, 100, …, 950 这些边界，因此共有 19 个宽度为 50 的区间，分别统计 token 数量落在 0 到 50、50 到 100 等区间内的评论数量（token 数超过 950 的评论不会被计入）。\n",
    "- 该直方图的 y 轴显示每个区间内的评论数量，x 轴显示该区间的 token 数量。\n",
    "\n",
    "返回值：`plt.hist` 实际上会返回各区间的计数、区间边界等信息，这里我们忽略返回值，直方图直接绘制在当前绘图环境中。\n",
    "\"\"\"\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4b5faa2c",
   "metadata": {
    "origin_pos": 12
   },
   "source": [
    "正如我们所料，评论的长度各不相同。为了每次处理一小批量这样的评论，我们通过截断和填充将每个评论的长度设置为500。这类似于 :numref:`sec_machine_translation`中对机器翻译数据集的预处理步骤。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "2d5d1601",
   "metadata": {
    "origin_pos": 13,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([25000, 500])\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'\\n这行代码打印了 `train_features` 张量的形状，以便我们检查数据的维度。\\n`train_features` 是一个二维张量，其中每行表示一条评论，评论的长度为 500（由 `num_steps` 设置）。\\n- 如果训练数据中有 N 条评论，`train_features` 的形状将为 `(N, 500)`，即每条评论包含 500 个 token 索引。\\n- 这行代码用于确认所有评论已正确转换为固定长度的向量。\\n'"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 设置序列长度\n",
    "num_steps = 500\n",
    "\"\"\"\n",
    "该行代码设置了序列的最大长度为 500。\n",
    "在处理文本数据时，我们通常会对每条评论进行统一的长度调整，以便在模型中进行批量处理。\n",
    "如果评论的长度超过了 `num_steps`，它将被截断；如果评论的长度小于 `num_steps`，它将被填充。\n",
    "\n",
    "这个参数决定了每条评论将被裁剪或填充到的长度（500个 token）。\n",
    "在实际操作中，500 是一个常见的选择，但根据数据的特点，可以选择不同的值。\n",
    "\"\"\"\n",
    "\n",
    "# 使用 d2l.truncate_pad 截断或填充评论，将其转换为固定长度的序列\n",
    "train_features = torch.tensor([d2l.truncate_pad(\n",
    "    vocab[line], num_steps, vocab['<pad>']) for line in train_tokens])\n",
    "\"\"\"\n",
    "该行代码对每条评论的 token 数量进行裁剪或填充，确保所有评论都具有相同的长度（500个 token）。\n",
    "使用 `d2l.truncate_pad` 函数来对评论进行处理，具体解释如下：\n",
    "\n",
    "- `vocab[line]`：首先，我们使用词汇表 `vocab` 将每条评论的单词（tokens）转换为对应的索引。例如，`vocab[line]` 返回每条评论中每个 token 对应的索引列表。\n",
    "- `num_steps`：这表示目标序列的最大长度。所有评论都将被处理为长度为 `num_steps`（在这里是 500）的一致长度。\n",
    "- `vocab['<pad>']`：这是用于填充的特殊 token，表示 `<pad>` 在词汇表中的索引。对于长度不足 `num_steps` 的评论，`<pad>` 将用于填充，直到达到 `num_steps` 的长度。\n",
    "\n",
    "结果：\n",
    "- 对于每条评论，`d2l.truncate_pad` 函数会根据评论的实际长度来决定：\n",
    "  - 如果评论的长度超过了 500，它将被截断。\n",
    "  - 如果评论的长度小于 500，它将被填充到 500。\n",
    "- 最终，`train_features` 是一个包含每条评论的张量，每个评论的长度为 `num_steps`（500），所有评论将统一为同样长度的张量。\n",
    "\n",
    "此操作的输出是一个 `torch.Tensor`，每条评论是一个固定长度的向量，评论中的每个单词都被转换成词汇表中的索引。\n",
    "\"\"\"\n",
    "\n",
    "# 打印处理后的数据形状\n",
    "print(train_features.shape)\n",
    "\n",
    "\"\"\"\n",
    "这行代码打印了 `train_features` 张量的形状，以便我们检查数据的维度。\n",
    "`train_features` 是一个二维张量，其中每行表示一条评论，评论的长度为 500（由 `num_steps` 设置）。\n",
    "- 如果训练数据中有 N 条评论，`train_features` 的形状将为 `(N, 500)`，即每条评论包含 500 个 token 索引。\n",
    "- 这行代码用于确认所有评论已正确转换为固定长度的向量。\n",
    "\"\"\"\n"
   ]
  },
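  {
   "cell_type": "markdown",
   "id": "truncate-pad-demo-md",
   "metadata": {},
   "source": [
    "作为补充，下面用一个极小的纯 Python 例子示意“截断与填充”的语义（假设性示意：`truncate_pad_demo` 是本文自定义的演示函数，逻辑与上面 `d2l.truncate_pad` 的截断、填充行为一致，但并非 d2l 源码）。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "truncate-pad-demo-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "def truncate_pad_demo(line, num_steps, padding_token):\n",
    "    \"\"\"把 token 索引列表统一到 num_steps 长度：超长截断，不足填充（演示用）\"\"\"\n",
    "    if len(line) > num_steps:\n",
    "        return line[:num_steps]  # 截断\n",
    "    return line + [padding_token] * (num_steps - len(line))  # 填充\n",
    "\n",
    "# 长度不足：用 padding_token 补齐到 5\n",
    "print(truncate_pad_demo([1, 2, 3], 5, 0))\n",
    "# 长度超出：截断到 5\n",
    "print(truncate_pad_demo([1, 2, 3, 4, 5, 6], 5, 0))\n"
   ]
  },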
  {
   "cell_type": "markdown",
   "id": "dca33759",
   "metadata": {
    "origin_pos": 14
   },
   "source": [
    "## 创建数据迭代器\n",
    "\n",
    "现在我们可以创建数据迭代器了。在每次迭代中，都会返回一小批量样本。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "454154e6",
   "metadata": {
    "origin_pos": 16,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "X: torch.Size([64, 500]) , y: torch.Size([64])\n",
      "小批量数目： 391\n"
     ]
    },
    {
     "data": {
      "text/plain": [
        "'\\n该行代码打印了训练集中小批量的数量。`len(train_iter)` 返回数据集被划分成的小批量的数量。\\n\\n默认情况下 `DataLoader` 不丢弃最后一个不满的批次（`drop_last=False`），因此小批量数量为样本数除以批量大小后向上取整：25000 条评论、每批 64 条时，`ceil(25000 / 64) = 391`。\\n'"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 使用 d2l.load_array 加载数据并创建数据迭代器\n",
    "train_iter = d2l.load_array((train_features, torch.tensor(train_data[1])), 64)\n",
    "\"\"\"\n",
    "该行代码调用了 `d2l.load_array` 函数，将训练数据和标签加载为一个小批量数据迭代器 `train_iter`。\n",
    "详细解释：\n",
    "- `train_features`：这是一个二维张量，包含每条评论的 token 序列，大小为 `(N, num_steps)`，其中 N 是评论的数量，`num_steps` 是每条评论的最大长度（500）。\n",
    "- `torch.tensor(train_data[1])`：`train_data[1]` 是训练集中的标签，通常是 0 或 1（表示负面或正面评论）。我们将其转换为 PyTorch 张量。\n",
    "- `(train_features, torch.tensor(train_data[1]))`：这里将 `train_features` 和标签组合成一个元组，表示输入数据和标签。每个元素都是一个张量，分别表示评论的 token 序列和对应的标签。\n",
    "- `64`：这是小批量的大小，表示每个小批量包含 64 条评论。小批量训练有助于提高训练效率并减少内存消耗。\n",
    "\n",
    "`d2l.load_array` 将返回一个迭代器 `train_iter`，它可以按小批量返回数据。在每个批次中，返回的 `X` 是评论的 token 索引序列，`y` 是对应的标签（0 或 1）。\n",
    "\n",
    "\"\"\"\n",
    "\n",
    "# 迭代并打印一个小批量的数据\n",
    "for X, y in train_iter:\n",
    "    print('X:', X.shape, ', y:', y.shape)\n",
    "    break\n",
    "\"\"\"\n",
    "这段代码循环遍历 `train_iter` 迭代器，取出第一个小批量的数据并打印出其形状。\n",
    "- `X` 是输入数据（即评论的 token 索引），其形状应该是 `(64, 500)`，表示有 64 条评论，每条评论的长度为 500（由 `num_steps` 决定）。\n",
    "- `y` 是标签（即每条评论的分类标签），其形状应该是 `(64,)`，表示有 64 个标签。\n",
    "\n",
    "在此代码中，我们只打印出第一个小批量的数据，因此使用 `break` 语句来中止循环。\n",
    "\"\"\"\n",
    "\n",
    "# 打印小批量数目\n",
    "print('小批量数目：', len(train_iter))\n",
    "\"\"\"\n",
    "该行代码打印了训练集中小批量的数量。`len(train_iter)` 返回数据集被划分成的小批量的数量。\n",
    "\n",
    "默认情况下 `DataLoader` 不丢弃最后一个不满的批次（`drop_last=False`），因此小批量数量为样本数除以批量大小后向上取整：25000 条评论、每批 64 条时，`ceil(25000 / 64) = 391`，与上面打印的结果一致。\n",
    "\"\"\"\n"
   ]
  },
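  {
   "cell_type": "markdown",
   "id": "batch-count-check-md",
   "metadata": {},
   "source": [
    "上面打印的小批量数目 391 可以直接验证：`DataLoader` 默认不丢弃最后一个不满的批次（`drop_last=False`），因此批次数等于样本数除以批量大小后向上取整（这里最后一个批次只含 40 条评论）。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "batch-count-check-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "num_reviews, batch_size = 25000, 64\n",
    "# 向上取整：390 个满批次 + 1 个只含 40 条评论的批次\n",
    "print(math.ceil(num_reviews / batch_size))\n"
   ]
  },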
  {
   "cell_type": "markdown",
   "id": "42b492d4",
   "metadata": {
    "origin_pos": 18
   },
   "source": [
    "## 整合代码\n",
    "\n",
    "最后，我们将上述步骤封装到`load_data_imdb`函数中。它返回训练和测试数据迭代器以及IMDb评论数据集的词表。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "8dd551a9",
   "metadata": {
    "origin_pos": 20,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [],
   "source": [
    "#@save\n",
    "def load_data_imdb(batch_size, num_steps=500):\n",
    "    \"\"\"返回数据迭代器和IMDb评论数据集的词表\"\"\"\n",
    "    \n",
    "    # 下载并解压 IMDb 数据集\n",
    "    data_dir = d2l.download_extract('aclImdb', 'aclImdb')\n",
    "    \"\"\"\n",
    "    下载并解压 IMDb 数据集，如果已经下载则跳过。返回数据集的存储路径 `data_dir`。\n",
    "    \"\"\"\n",
    "    \n",
    "    # 读取训练集和测试集的数据（文本和标签）\n",
    "    train_data = read_imdb(data_dir, True)\n",
    "    test_data = read_imdb(data_dir, False)\n",
    "    \"\"\"\n",
    "    读取 IMDb 数据集的训练集和测试集数据。`train_data` 是训练集文本和标签，`test_data` 是测试集文本和标签。\n",
    "    \"\"\"\n",
    "\n",
    "    # 对文本进行分词处理\n",
    "    train_tokens = d2l.tokenize(train_data[0], token='word')\n",
    "    test_tokens = d2l.tokenize(test_data[0], token='word')\n",
    "    \"\"\"\n",
    "    将训练集和测试集的评论文本进行分词处理，返回分词后的 token 列表。\n",
    "    这里使用 `d2l.tokenize` 函数将每条评论转换为词汇（单词）列表。\n",
    "    \"\"\"\n",
    "\n",
    "    # 创建词汇表，最小频率为 5\n",
    "    vocab = d2l.Vocab(train_tokens, min_freq=5)\n",
    "    \"\"\"\n",
    "    使用训练集的 token 构建词汇表 `vocab`。`min_freq=5` 表示只保留出现次数不少于 5 次的词，\n",
    "    过滤低频词可以减小词表规模并降低噪声。\n",
    "    \"\"\"\n",
    "\n",
    "    # 对训练集和测试集的 token 序列进行截断或填充\n",
    "    train_features = torch.tensor([d2l.truncate_pad(\n",
    "        vocab[line], num_steps, vocab['<pad>']) for line in train_tokens])\n",
    "    test_features = torch.tensor([d2l.truncate_pad(\n",
    "        vocab[line], num_steps, vocab['<pad>']) for line in test_tokens])\n",
    "    \"\"\"\n",
    "    使用 `d2l.truncate_pad` 将训练集和测试集的评论 token 列表统一为指定的长度 `num_steps`。\n",
    "    如果评论的长度小于 `num_steps`，则使用 `<pad>` 填充；如果大于 `num_steps`，则进行截断。\n",
    "    结果是 `train_features` 和 `test_features` 分别是训练集和测试集的评论特征矩阵。\n",
    "    \"\"\"\n",
    "\n",
    "    # 创建训练集和测试集的数据迭代器\n",
    "    train_iter = d2l.load_array((train_features, torch.tensor(train_data[1])),\n",
    "                                batch_size)\n",
    "    test_iter = d2l.load_array((test_features, torch.tensor(test_data[1])),\n",
    "                               batch_size, is_train=False)\n",
    "    \"\"\"\n",
    "    使用 `d2l.load_array` 函数将训练集和测试集的特征及标签封装成数据迭代器。\n",
    "    `batch_size` 控制每个小批量的数据条数；`is_train=False` 表示测试集迭代器不打乱数据顺序（测试时无需 shuffle）。\n",
    "    \"\"\"\n",
    "\n",
    "    # 返回训练集迭代器、测试集迭代器和词汇表\n",
    "    # 迭代器可以在训练过程中按小批量加载数据，词汇表用于将文本转化为索引\n",
    "    return train_iter, test_iter, vocab\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ead6677a",
   "metadata": {
    "origin_pos": 22
   },
   "source": [
    "## 小结\n",
    "\n",
    "* 情感分析研究人们在文本中表达的情感，它可以被看作一个文本分类问题：将可变长度的文本序列转换为固定长度的文本类别。\n",
    "* 经过预处理后，我们可以使用词表将IMDb评论数据集加载到数据迭代器中。\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0a0b32b5",
   "metadata": {
    "origin_pos": 24,
    "tab": [
     "pytorch"
    ]
   },
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "d2l-zh-312",
   "language": "python",
   "name": "d2l-zh-312"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.7"
  },
  "required_libs": []
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
