{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Dataset\n",
     "First, copy over the setup commands from the previous notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/ubuntu/ssk/MoEResearch/MoEc_model/notebooks\n"
     ]
    }
   ],
   "source": [
    "!pwd"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys\n",
    "sys.path.append(\"../fairseq\")\n",
    "sys.path.append(\"../\")\n",
    "import fairseq\n",
     "import unilm  # importing this registers the authors' custom models and tasks as a side effect\n",
    "from fairseq import (\n",
    "    checkpoint_utils,\n",
    "    options,\n",
    "    quantization_utils,\n",
    "    tasks,\n",
    "    utils,\n",
    ")\n",
    "from fairseq.dataclass.utils import convert_namespace_to_omegaconf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [],
   "source": [
     "arguments = [\n",
     "    \"../fairseq/data-bin/wmt17_en_de\",\n",
     "    \"--arch\", \"gdmoe_wmt_en_de\",\n",
     "    \"--encoder-moe-layers\", \"3\",\n",
     "    \"--decoder-moe-layers\", \"3\",\n",
     "    \"--moe-top1-expert\",\n",
     "    \"--moe-sublayers\", \"3\",\n",
     "    \"--moe-expert-count\", \"64\",\n",
     "    \"--moe-gating-use-fp32\",\n",
     "    \"--tmoe-routing-dim-reduction\",\n",
     "    \"--tmoe-routing-dim\", \"32\",\n",
     "    \"--tmoe-routing-hard-cosine\",\n",
     "    \"--moe-activation-dropout\", \"0.0\",\n",
     "    \"--moe-dropout\", \"0.0\",\n",
     "    \"--max-tokens\", \"4096\",\n",
     "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [],
   "source": [
    "parser = options.get_training_parser()\n",
     "args = options.parse_args_and_arch(parser, input_args=arguments)\n",
    "cfg = convert_namespace_to_omegaconf(args)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "- Here we use the translation task as the example; the code is in `fairseq/fairseq/tasks/translation.py`.\n",
     "- There, the task is defined like this:\n",
    "    ```python\n",
    "    @register_task(\"translation\", dataclass=TranslationConfig)\n",
    "    class TranslationTask(FairseqTask):\n",
    "        \"\"\"\n",
    "        Translate from one (source) language to another (target) language.\n",
    "\n",
    "        Args:\n",
    "            src_dict (~fairseq.data.Dictionary): dictionary for the source language\n",
    "            tgt_dict (~fairseq.data.Dictionary): dictionary for the target language\n",
    "\n",
    "        .. note::\n",
    "\n",
    "            The translation task is compatible with :mod:`fairseq-train`,\n",
    "            :mod:`fairseq-generate` and :mod:`fairseq-interactive`.\n",
    "        \"\"\"\n",
    "    ```\n",
     "    `TranslationTask` inherits from `FairseqTask`, which lives in `fairseq/fairseq/tasks/fairseq_task.py`.\n",
     "\n",
     "- Looking at `fairseq/fairseq/tasks/fairseq_task.py`, a task defines most of the behaviors of a training run, including but not limited to:\n",
     "    - building the model\n",
     "    - loading the data\n",
     "    - building the optimizer and the criterion (loss function)\n",
     "    - the train/valid/test steps\n",
     "    - ...\n",
     "\n",
     "- Below we focus on the data-loading part.\n",
     "\n",
     "Around line 88 of `fairseq/fairseq_cli/train.py`, i.e. the training entry point, after the model is built the data needs to be loaded:"
   ]
  },
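  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `@register_task(...)` decorator seen above is a registry pattern. Here is a minimal sketch of the idea (toy names, not fairseq's actual implementation):\n",
    "\n",
    "```python\n",
    "# Toy registry illustrating how @register_task works conceptually.\n",
    "TASK_REGISTRY = {}\n",
    "\n",
    "def register_task(name):\n",
    "    def decorator(cls):\n",
    "        TASK_REGISTRY[name] = cls  # importing the defining module populates the registry\n",
    "        return cls\n",
    "    return decorator\n",
    "\n",
    "@register_task('translation')\n",
    "class ToyTranslationTask:\n",
    "    pass\n",
    "\n",
    "print(TASK_REGISTRY['translation'])\n",
    "```\n",
    "\n",
    "This is also why the earlier `import unilm` matters: merely importing the module runs its decorators and registers the custom models and tasks, after which `setup_task` can look them up by name."
   ]
  },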
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [],
   "source": [
     "task: tasks.FairseqTask = tasks.setup_task(cfg.task)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [],
   "source": [
    "task.load_dataset(split=\"train\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Hm, nothing happens and nothing is returned. Checking the source in `fairseq/fairseq/tasks/translation.py`:\n",
    "\n",
    "\n",
    "```python\n",
    "    def load_dataset(self, split, epoch=1, combine=False, **kwargs):\n",
    "        \"\"\"Load a given dataset split.\n",
    "\n",
    "        Args:\n",
    "            split (str): name of the split (e.g., train, valid, test)\n",
    "        \"\"\"\n",
    "        paths = utils.split_paths(self.cfg.data)\n",
    "        assert len(paths) > 0\n",
    "        if split != self.cfg.train_subset:\n",
    "            # if not training data set, use the first shard for valid and test\n",
    "            paths = paths[:1]\n",
    "        data_path = paths[(epoch - 1) % len(paths)]\n",
    "\n",
    "        # infer langcode\n",
    "        src, tgt = self.cfg.source_lang, self.cfg.target_lang\n",
    "\n",
    "        self.datasets[split] = load_langpair_dataset(\n",
    "            data_path,\n",
    "            split,\n",
    "            src,\n",
    "            self.src_dict,\n",
    "            tgt,\n",
    "            self.tgt_dict,\n",
    "            combine=combine,\n",
    "            dataset_impl=self.cfg.dataset_impl,\n",
    "            upsample_primary=self.cfg.upsample_primary,\n",
    "            left_pad_source=self.cfg.left_pad_source,\n",
    "            left_pad_target=self.cfg.left_pad_target,\n",
    "            max_source_positions=self.cfg.max_source_positions,\n",
    "            max_target_positions=self.cfg.max_target_positions,\n",
    "            load_alignments=self.cfg.load_alignments,\n",
    "            truncate_source=self.cfg.truncate_source,\n",
    "            num_buckets=self.cfg.num_batch_buckets,\n",
    "            shuffle=(split != \"test\"),\n",
    "            pad_to_multiple=self.cfg.required_seq_len_multiple,\n",
    "        )\n",
    "```\n",
    "\n",
     "So the dataset is stored in `self.datasets[split]` on the task itself."
   ]
  },
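  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One detail worth noting: `data_path = paths[(epoch - 1) % len(paths)]` round-robins over sharded data directories as epochs advance (epochs are 1-indexed). A quick sketch with hypothetical shard names:\n",
    "\n",
    "```python\n",
    "# Round-robin shard selection, as in translation.py's load_dataset\n",
    "paths = ['shard0', 'shard1', 'shard2']\n",
    "for epoch in [1, 2, 3, 4]:\n",
    "    # epoch 1 -> shard0, epoch 2 -> shard1, ..., epoch 4 wraps back to shard0\n",
    "    print(epoch, paths[(epoch - 1) % len(paths)])\n",
    "```\n",
    "\n",
    "For the validation and test splits the code keeps only `paths[:1]`, so they always come from the first shard."
   ]
  },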
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "('en', 'de')"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "task.cfg.source_lang, task.cfg.target_lang"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<fairseq.data.language_pair_dataset.LanguagePairDataset at 0x7f2e8e0c9a30>"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "task.datasets[\"train\"]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "- So it is a `fairseq.data.language_pair_dataset.LanguagePairDataset` object; let's look at some of its attributes.\n",
     "- The source file is `fairseq/fairseq/data/language_pair_dataset.py`.\n",
     "- The file is fairly involved, so let's focus on the `__getitem__` method:\n",
     "    ```python\n",
     "    example = {\n",
     "        \"id\": index,\n",
     "        \"source\": src_item,\n",
     "        \"target\": tgt_item,\n",
     "    }\n",
     "    if self.align_dataset is not None:\n",
     "        example[\"alignment\"] = self.align_dataset[index]\n",
     "    if self.constraints is not None:\n",
     "        example[\"constraints\"] = self.constraints[index]\n",
     "    return example\n",
     "    ```\n",
     "    It returns a dict with the sample ID, the source sentence, and the target sentence (we can ignore the two `if` branches here).\n",
     "- Let's see what a sample actually looks like."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "A full training sample:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'id': 0,\n",
       " 'source': tensor([   21,  5540,  7650,     4,  1968,     7,     4,    42,    85, 13246,\n",
       "          5309,    15,  3996,  1672,  1266,  1679,     5,     8,    21,    50,\n",
       "            72,   495,   300,     9,   468,    30,    11,  1532,    71,   148,\n",
       "            10,     4,   309,    13,    30,  4876,    11,  2619,  8062,  1309,\n",
       "           571,     6,     2]),\n",
       " 'target': tensor([   54, 15273,     6,    89,  5195,     4,    29,  4988,  1214, 21666,\n",
       "         21202, 15054,    19,    87,   257,    14,  1985,  1189,     4,  4200,\n",
       "           124,  2937,   415,  8190,    63,  3981,  6096,     8,   634,     4,\n",
       "            70,    24,  3111,  8818,   798,     5,     2])}"
      ]
     },
     "execution_count": 45,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "task.datasets[\"train\"][0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The length: about 3.96 million samples, which is a lot; for comparison, CIFAR-10/100 have 50,000 training images and TinyImageNet has 100,000."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "3961179"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "len(task.datasets[\"train\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Note that the source and target sequences have different lengths:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(43, 37)"
      ]
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(task.datasets[\"train\"][0][\"source\"]),len(task.datasets[\"train\"][0][\"target\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Under the hood, the raw text was converted into IDs with Byte Pair Encoding (BPE), a subword scheme rather than a traditional word-level vocabulary; for details see https://zhuanlan.zhihu.com/p/424631681\n",
     "\n",
     "Calling `task.src_dict.string` converts the BPE-encoded source back into the original English sentence:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'I declare resumed the session of the European Parliament adjour@@ ned on Friday 17 December 1999 , and I would like once again to wish you a happy new year in the hope that you enjoyed a pleasant fes@@ tive period .'"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "task.src_dict.string(task.datasets[\"train\"][0][\"source\"])"
   ]
  },
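  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `@@ ` markers in the output are BPE continuation markers: `string()` maps each ID back to its subword symbol and joins them with spaces, and stripping the `@@ ` sequences restores whole words. A minimal sketch of that detokenization step:\n",
    "\n",
    "```python\n",
    "# fairseq-style BPE detokenization: '@@ ' marks a word split across subwords\n",
    "subwords = ['adjour@@', 'ned', 'on', 'Friday']\n",
    "sentence = ' '.join(subwords).replace('@@ ', '')\n",
    "print(sentence)  # -> 'adjourned on Friday'\n",
    "```"
   ]
  },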
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Likewise, the same method converts the BPE-encoded target back into the original German sentence (the German translation of the English sentence above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Ich erkläre die am Freitag , dem 17. Dezember unterbro@@ chene Sitzungsperiode des Europäischen Parlaments für wieder@@ aufgenommen , wünsche Ihnen nochmals alles Gute zum Jahres@@ wechsel und hoffe , daß Sie schöne Ferien hatten .'"
      ]
     },
     "execution_count": 49,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "task.tgt_dict.string(task.datasets[\"train\"][0][\"target\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "A few more examples:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'id': 1,\n",
       " 'source': tensor([ 1521,     5,    23,    30,    31,    26,   831,     5,     4, 13466,\n",
       "           213,    63, 11884,  5925,    63,  2377,     9,  9699,  5558,     5,\n",
       "           202,     4,    89,    10,    11,   203,     7,    93,  4219,    11,\n",
       "          1227,     7,   725,  4140,    13,  2359,   106, 13466,  1748,     6,\n",
       "             2]),\n",
       " 'target': tensor([  262,    24,  2220,  1315,     4,    13,     7,   146,  9676, 15650,\n",
       "            31, 18565,   160, 22462,    15, 22508,    31,    21, 14263,     5,\n",
       "           699,    35,   287,  2180,   148,   114,  1529,    12, 12050,  9972,\n",
       "          1360,     5,     2])}"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "IDX = 1\n",
    "task.datasets[\"train\"][IDX]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Although , as you will have seen , the dread@@ ed &apos; millennium bug &apos; failed to materi@@ alise , still the people in a number of countries suffered a series of natural disasters that truly were dread@@ ful .'"
      ]
     },
     "execution_count": 52,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "task.src_dict.string(task.datasets[\"train\"][IDX][\"source\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Wie Sie feststellen konnten , ist der ge@@ für@@ chtete &quot; Mill@@ en@@ ium @-@ Bug &quot; nicht eingetreten . Doch sind Bürger einiger unserer Mitgliedstaaten Opfer von schrecklichen Naturkatastrophen geworden .'"
      ]
     },
     "execution_count": 53,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "task.tgt_dict.string(task.datasets[\"train\"][IDX][\"target\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "If you are curious about `task.tgt_dict.string`, see `fairseq/fairseq/data/dictionary.py`: it is essentially a symbol table mapping subword tokens to integer IDs, plus a few helper operations."
   ]
  }
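  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough mental model (toy code, not the real class), `Dictionary` behaves like this:\n",
    "\n",
    "```python\n",
    "# Minimal sketch of a fairseq-style Dictionary: subword symbols <-> integer IDs\n",
    "class ToyDictionary:\n",
    "    def __init__(self, symbols):\n",
    "        self.symbols = list(symbols)\n",
    "        self.indices = {s: i for i, s in enumerate(self.symbols)}\n",
    "\n",
    "    def index(self, sym):\n",
    "        return self.indices[sym]\n",
    "\n",
    "    def string(self, ids):\n",
    "        return ' '.join(self.symbols[i] for i in ids)\n",
    "\n",
    "d = ToyDictionary(['<pad>', '</s>', 'hello', 'world'])\n",
    "print(d.index('hello'))     # -> 2\n",
    "print(d.string([2, 3, 1]))  # -> 'hello world </s>'\n",
    "```\n",
    "\n",
    "The real class additionally handles special symbols (pad/eos/unk), frequency counts, and options such as the `@@ `-stripping seen above."
   ]
  }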
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "moe",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
