{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Model Inputs and Outputs\n",
    "Having covered the data, let's look at the model's inputs and outputs:\n",
    "\n",
    "- What does the model take as input, and what does it output?\n",
    "- How is the loss computed?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/ubuntu/ssk/MoEResearch/MoEc_model/notebooks\n"
     ]
    }
   ],
   "source": [
    "!pwd"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# This cell can only be run once; running it again raises an error\n",
    "import torch.distributed as dist\n",
    "dist.init_process_group(backend='nccl', init_method='tcp://localhost:23456', rank=0, world_size=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys\n",
    "sys.path.append(\"../fairseq\")\n",
    "sys.path.append(\"../\")\n",
    "import fairseq\n",
    "import unilm # importing unilm registers the authors' custom models with fairseq\n",
    "from fairseq import (\n",
    "    checkpoint_utils,\n",
    "    options,\n",
    "    quantization_utils,\n",
    "    tasks,\n",
    "    utils,\n",
    ")\n",
    "from fairseq.dataclass.utils import convert_namespace_to_omegaconf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "arguments = [\n",
    "    \"../fairseq/data-bin/wmt17_en_de\",\n",
    "    \"--arch\", \"gdmoe_wmt_en_de\",\n",
    "    \"--encoder-moe-layers\", \"3\",\n",
    "    \"--decoder-moe-layers\", \"3\",\n",
    "    \"--moe-top1-expert\",\n",
    "    \"--moe-sublayers\", \"3\",\n",
    "    \"--moe-expert-count\", \"4\",\n",
    "    \"--moe-gating-use-fp32\",\n",
    "    \"--tmoe-routing-dim-reduction\",\n",
    "    \"--tmoe-routing-dim\", \"32\",\n",
    "    \"--tmoe-routing-hard-cosine\",\n",
    "    \"--moe-activation-dropout\", \"0.0\",\n",
    "    \"--moe-dropout\", \"0.0\",\n",
    "    \"--max-tokens\", \"4096\",\n",
    "    \"--optimizer\", \"adam\", \"--adam-betas\", '(0.9, 0.98)', \"--adam-eps\", \"1e-06\",\n",
    "    \"--max-source-positions\", \"256\",\n",
    "    \"--max-target-positions\", \"256\",\n",
    "    \"--max-update\", \"32000\",\n",
    "    \"--update-freq\", \"16\",\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "parser = options.get_training_parser()\n",
    "args = options.parse_args_and_arch(parser, input_args=arguments)\n",
    "cfg = convert_namespace_to_omegaconf(args)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "task: tasks.FairseqTask = tasks.setup_task(cfg.task)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = task.build_model(cfg.model)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "criterion = task.build_criterion(cfg.criterion)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The criterion is cross-entropy loss. As `fairseq/fairseq/criterions/cross_entropy.py` shows, `CrossEntropyCriterion` is just a thin wrapper that ultimately calls `torch.nn.functional.nll_loss` on log-probabilities."
   ]
  },
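  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Written out, that decomposition is a standard identity (not fairseq-specific): for a logit vector $z$ and target index $y$,\n",
    "\n",
    "$$\\mathrm{CE}(z, y) = -\\log \\mathrm{softmax}(z)_y = -z_y + \\log \\sum_j e^{z_j},$$\n",
    "\n",
    "so applying `log_softmax` to the decoder output and then taking the negative log-probability of the target token (`nll_loss`) computes exactly the cross-entropy."
   ]
  },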
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "CrossEntropyCriterion()"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "criterion"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "task.load_dataset(split=\"train\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's try calling the loss function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_hat = task.datasets[\"train\"][0][\"source\"]\n",
    "y_true = task.datasets[\"train\"][0][\"target\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Commented out because it raises an error:\n",
    "# criterion.forward(y_hat,y_true)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It errors out, so let's look at the source instead:\n",
    "```python\n",
    "    def forward(self, model, sample, reduce=True):\n",
    "        \"\"\"Compute the loss for the given sample.\n",
    "\n",
    "        Returns a tuple with three elements:\n",
    "        1) the loss\n",
    "        2) the sample size, which is used as the denominator for the gradient\n",
    "        3) logging outputs to display while training\n",
    "        \"\"\"\n",
    "        net_output = model(**sample[\"net_input\"])\n",
    "        loss, _ = self.compute_loss(model, net_output, sample, reduce=reduce)\n",
    "        sample_size = (\n",
    "            sample[\"target\"].size(0) if self.sentence_avg else sample[\"ntokens\"]\n",
    "        )\n",
    "        logging_output = {\n",
    "            \"loss\": loss.data,\n",
    "            \"ntokens\": sample[\"ntokens\"],\n",
    "            \"nsentences\": sample[\"target\"].size(0),\n",
    "            \"sample_size\": sample_size,\n",
    "        }\n",
    "        return loss, sample_size, logging_output\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Unlike native PyTorch losses, this criterion's `forward` does not take `(y_pred, y_true)`.\n",
    "\n",
    "- Its signature is `(model: nn.Module, sample: dict, reduce=True)`.\n",
    "\n",
    "- So before we can call it, we need to understand how the DataLoader builds a `sample`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following code is excerpted from line 155 of `fairseq/fairseq_cli/train.py`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "from fairseq.trainer import Trainer\n",
    "# build the trainer\n",
    "trainer = Trainer(cfg, task, model, criterion, quantizer=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load the latest checkpoint if one is available and restore the\n",
    "# corresponding train iterator\n",
    "# builds the DataLoader (epoch iterator)\n",
    "extra_state, epoch_itr = checkpoint_utils.load_checkpoint(\n",
    "    cfg.checkpoint,\n",
    "    trainer,\n",
    "    # don't cache epoch iterators for sharded datasets\n",
    "    disable_iterator_cache=task.has_sharded_data(\"train\"),\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we can see, the DataLoader wraps the data into batches with the following fields:\n",
    "1. `id`: the index of each batch element in the original dataset\n",
    "2. `nsentences`: the number of sentences in the batch\n",
    "3. `ntokens`: the total number of target tokens in the batch\n",
    "4. `net_input`: the network input, which the loss function later unpacks into the model: `net_output = model(**sample[\"net_input\"])` (see the cell above)\n",
    "5. `target`: the ground truth against which the cross-entropy loss is computed\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "dict_keys(['id', 'nsentences', 'ntokens', 'net_input', 'target'])"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "epoch_itr.first_batch.keys()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'id': tensor([ 289496,  378279,  406693,  ...,  669571,  710074, 1739461]),\n",
       " 'nsentences': 2048,\n",
       " 'ntokens': 4096,\n",
       " 'net_input': {'src_tokens': tensor([[   6,    2],\n",
       "          [   6,    2],\n",
       "          [   6,    2],\n",
       "          ...,\n",
       "          [   6,    2],\n",
       "          [   6,    2],\n",
       "          [3463,    2]]),\n",
       "  'src_lengths': tensor([2, 2, 2,  ..., 2, 2, 2]),\n",
       "  'prev_output_tokens': tensor([[   2,    5],\n",
       "          [   2,    5],\n",
       "          [   2,    5],\n",
       "          ...,\n",
       "          [   2,    5],\n",
       "          [   2,    5],\n",
       "          [   2, 2962]])},\n",
       " 'target': tensor([[   5,    2],\n",
       "         [   5,    2],\n",
       "         [   5,    2],\n",
       "         ...,\n",
       "         [   5,    2],\n",
       "         [   5,    2],\n",
       "         [2962,    2]])}"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "batch_data = epoch_itr.first_batch\n",
    "batch_data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we can see, the batch size is 2048:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "2048"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(batch_data[\"id\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And each sample in this batch has been trimmed to just 2 tokens (2048 sentences * 2 tokens = 4096, matching `--max-tokens`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(2)"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "max(batch_data[\"net_input\"]['src_lengths'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now feed the batch into the model and see what it outputs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "GDMoEModel(\n",
       "  (encoder): TransformerEncoderMoEBase(\n",
       "    (dropout_module): FairseqDropout()\n",
       "    (embed_tokens): Embedding(40360, 1024, padding_idx=1)\n",
       "    (embed_positions): SinusoidalPositionalEmbedding()\n",
       "    (layers): ModuleList(\n",
       "      (0-2): 3 x TransformerEncoderLayerBase(\n",
       "        (self_attn): MultiheadAttention(\n",
       "          (dropout_module): FairseqDropout()\n",
       "          (k_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (v_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (q_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (out_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "        )\n",
       "        (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "        (dropout_module): FairseqDropout()\n",
       "        (activation_dropout_module): FairseqDropout()\n",
       "        (fc1): Linear(in_features=1024, out_features=4096, bias=True)\n",
       "        (fc2): Linear(in_features=4096, out_features=1024, bias=True)\n",
       "        (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "      )\n",
       "      (3): UniLMMoeLayer(\n",
       "        (dropout_module): FairseqDropout()\n",
       "        (self_attn): MultiheadAttention(\n",
       "          (dropout_module): FairseqDropout()\n",
       "          (k_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (v_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (q_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (out_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "        )\n",
       "        (moe_layer): MOELayer(\n",
       "          (gate): Top1Gate(\n",
       "            (wg_reduction): Linear(in_features=1024, out_features=32, bias=False)\n",
       "            (wg): Linear(in_features=32, out_features=4, bias=False)\n",
       "          )\n",
       "          (experts): ModuleList(\n",
       "            (0-3): 4 x NFeedForwardNetwork(\n",
       "              (expert_network): ModuleList(\n",
       "                (0-2): 3 x MoESublayer(\n",
       "                  (norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "                  (dropout_module): FairseqDropout()\n",
       "                  (activation_dropout_module): FairseqDropout()\n",
       "                  (fc1): Linear(in_features=1024, out_features=4096, bias=True)\n",
       "                  (fc2): Linear(in_features=4096, out_features=1024, bias=True)\n",
       "                )\n",
       "              )\n",
       "            )\n",
       "          )\n",
       "        )\n",
       "        (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "        (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "      )\n",
       "      (4-5): 2 x TransformerEncoderLayerBase(\n",
       "        (self_attn): MultiheadAttention(\n",
       "          (dropout_module): FairseqDropout()\n",
       "          (k_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (v_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (q_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (out_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "        )\n",
       "        (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "        (dropout_module): FairseqDropout()\n",
       "        (activation_dropout_module): FairseqDropout()\n",
       "        (fc1): Linear(in_features=1024, out_features=4096, bias=True)\n",
       "        (fc2): Linear(in_features=4096, out_features=1024, bias=True)\n",
       "        (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "      )\n",
       "    )\n",
       "  )\n",
       "  (decoder): TransformerDecoderMoEBase(\n",
       "    (dropout_module): FairseqDropout()\n",
       "    (embed_tokens): Embedding(42720, 1024, padding_idx=1)\n",
       "    (embed_positions): SinusoidalPositionalEmbedding()\n",
       "    (layers): ModuleList(\n",
       "      (0-2): 3 x TransformerDecoderLayerBase(\n",
       "        (dropout_module): FairseqDropout()\n",
       "        (self_attn): MultiheadAttention(\n",
       "          (dropout_module): FairseqDropout()\n",
       "          (k_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (v_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (q_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (out_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "        )\n",
       "        (activation_dropout_module): FairseqDropout()\n",
       "        (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "        (encoder_attn): MultiheadAttention(\n",
       "          (dropout_module): FairseqDropout()\n",
       "          (k_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (v_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (q_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (out_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "        )\n",
       "        (encoder_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "        (fc1): Linear(in_features=1024, out_features=4096, bias=True)\n",
       "        (fc2): Linear(in_features=4096, out_features=1024, bias=True)\n",
       "        (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "      )\n",
       "      (3): TransformerDecoderLayerMoEBase(\n",
       "        (dropout_module): FairseqDropout()\n",
       "        (self_attn): MultiheadAttention(\n",
       "          (dropout_module): FairseqDropout()\n",
       "          (k_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (v_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (q_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (out_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "        )\n",
       "        (activation_dropout_module): FairseqDropout()\n",
       "        (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "        (encoder_attn): MultiheadAttention(\n",
       "          (dropout_module): FairseqDropout()\n",
       "          (k_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (v_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (q_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (out_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "        )\n",
       "        (encoder_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "        (moe_layer): MOELayer(\n",
       "          (gate): Top1Gate(\n",
       "            (wg_reduction): Linear(in_features=1024, out_features=32, bias=False)\n",
       "            (wg): Linear(in_features=32, out_features=4, bias=False)\n",
       "          )\n",
       "          (experts): ModuleList(\n",
       "            (0-3): 4 x NFeedForwardNetwork(\n",
       "              (expert_network): ModuleList(\n",
       "                (0-2): 3 x MoESublayer(\n",
       "                  (norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "                  (dropout_module): FairseqDropout()\n",
       "                  (activation_dropout_module): FairseqDropout()\n",
       "                  (fc1): Linear(in_features=1024, out_features=4096, bias=True)\n",
       "                  (fc2): Linear(in_features=4096, out_features=1024, bias=True)\n",
       "                )\n",
       "              )\n",
       "            )\n",
       "          )\n",
       "        )\n",
       "        (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "      )\n",
       "      (4-5): 2 x TransformerDecoderLayerBase(\n",
       "        (dropout_module): FairseqDropout()\n",
       "        (self_attn): MultiheadAttention(\n",
       "          (dropout_module): FairseqDropout()\n",
       "          (k_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (v_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (q_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (out_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "        )\n",
       "        (activation_dropout_module): FairseqDropout()\n",
       "        (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "        (encoder_attn): MultiheadAttention(\n",
       "          (dropout_module): FairseqDropout()\n",
       "          (k_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (v_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (q_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "          (out_proj): Linear(in_features=1024, out_features=1024, bias=True)\n",
       "        )\n",
       "        (encoder_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "        (fc1): Linear(in_features=1024, out_features=4096, bias=True)\n",
       "        (fc2): Linear(in_features=4096, out_features=1024, bias=True)\n",
       "        (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\n",
       "      )\n",
       "    )\n",
       "    (output_projection): Linear(in_features=1024, out_features=42720, bias=False)\n",
       "  )\n",
       ")"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.cuda()"
   ]
  },
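  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The printout above shows the gate's shape: `wg_reduction` projects tokens 1024 -> 32 (`--tmoe-routing-dim`), `wg` scores the 4 experts (`--moe-expert-count`), and the top-1 expert is selected (`--moe-top1-expert`). A dependency-free sketch of that routing step on toy dimensions (names and sizes here are illustrative, not fairseq's implementation):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def top1_gate(x, w_reduce, w_gate):\n",
    "    # reduce the token embedding, score each expert, route to the argmax\n",
    "    h = [sum(w * xi for w, xi in zip(row, x)) for row in w_reduce]     # dim reduction\n",
    "    logits = [sum(w * hi for w, hi in zip(row, h)) for row in w_gate]  # expert scores\n",
    "    m = max(logits)\n",
    "    exps = [math.exp(z - m) for z in logits]\n",
    "    probs = [e / sum(exps) for e in exps]                              # softmax\n",
    "    expert = max(range(len(probs)), key=probs.__getitem__)             # top-1 expert\n",
    "    return expert, probs[expert]\n",
    "\n",
    "# toy token of dim 2, routed over 3 experts via a 2-dim routing space\n",
    "expert, prob = top1_gate([1.0, 0.0],\n",
    "                         w_reduce=[[1.0, 0.0], [0.0, 1.0]],\n",
    "                         w_gate=[[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])\n",
    "```"
   ]
  },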
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Following the code above, move the batch to the GPU and feed it into the network:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "# move every tensor in net_input onto the GPU\n",
    "for key, value in batch_data[\"net_input\"].items():\n",
    "    if isinstance(value, torch.Tensor):\n",
    "        batch_data[\"net_input\"][key] = value.to(\"cuda\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "output = model.forward(**batch_data[\"net_input\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "output"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What exactly is in this output? Back to the source:\n",
    "\n",
    "- The `gdmoe_wmt_en_de` architecture uses `GDMoEModel`, whose base class is `TransformerModelBase`; the `forward` function is implemented there.\n",
    "- See `fairseq/fairseq/models/transformer/transformer_base.py`:\n",
    "  ```python\n",
    "  def forward(\n",
    "        self,\n",
    "        src_tokens,\n",
    "        src_lengths,\n",
    "        prev_output_tokens,\n",
    "        return_all_hiddens: bool = True,\n",
    "        features_only: bool = False,\n",
    "        alignment_layer: Optional[int] = None,\n",
    "        alignment_heads: Optional[int] = None,\n",
    "    ):\n",
    "        \"\"\"\n",
    "        Run the forward pass for an encoder-decoder model.\n",
    "\n",
    "        Copied from the base class, but without ``**kwargs``,\n",
    "        which are not supported by TorchScript.\n",
    "        \"\"\"\n",
    "        encoder_out = self.encoder(\n",
    "            src_tokens, src_lengths=src_lengths, return_all_hiddens=return_all_hiddens\n",
    "        )\n",
    "        decoder_out = self.decoder(\n",
    "            prev_output_tokens,\n",
    "            encoder_out=encoder_out,\n",
    "            features_only=features_only,\n",
    "            alignment_layer=alignment_layer,\n",
    "            alignment_heads=alignment_heads,\n",
    "            src_lengths=src_lengths,\n",
    "            return_all_hiddens=return_all_hiddens,\n",
    "        )\n",
    "        return decoder_out\n",
    "  ```\n",
    "\n",
    "- That still does not tell us the return format, so we step into the decoder: the `TransformerDecoderMoEBase` class in `unilm/models/gdmoe_legacy.py`:\n",
    "    ```python\n",
    "    # line 375\n",
    "    # append encoder moe loss into l_aux\n",
    "    if 'l_aux' in encoder_out:\n",
    "        l_aux.extend(encoder_out['l_aux'])\n",
    "    return x, {\"attn\": [attn], \"inner_states\": inner_states, \"l_aux\": l_aux}\n",
    "    ```\n",
    "\n",
    "- Now we know: the model returns two things, the raw Transformer output and a dictionary (attention, inner states, and the MoE auxiliary losses `l_aux`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "x, dic = output"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we can see, the output $x$ has shape `[batch_size, seq_length, tgt_vocab_size]`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([2048, 2, 42720])"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "42720"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(task.target_dictionary)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([2048, 2])"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "batch_data[\"target\"].shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Now we are back on familiar ground: computing the cross-entropy loss.\n",
    "- We have prediction: `torch.Size([2048, 2, 42720])` and target: `torch.Size([2048, 2])`.\n",
    "- There are variants such as label-smoothed cross-entropy, but they all follow the same basic logic."
   ]
  },
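  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Conceptually, the criterion flattens both tensors so that every token position becomes one classification problem: prediction `[B, T, V]` -> `[B*T, V]`, target `[B, T]` -> `[B*T]`, then sums the per-token negative log-likelihoods. A stdlib-only sketch of that logic on toy nested lists (illustrative only, not fairseq code; padding handling omitted):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def sequence_ce(batch_logits, batch_targets):\n",
    "    # sum token-level cross-entropy over a [B, T, V] batch of nested lists\n",
    "    total = 0.0\n",
    "    for sent_logits, sent_targets in zip(batch_logits, batch_targets):  # over B\n",
    "        for logits, tgt in zip(sent_logits, sent_targets):              # over T\n",
    "            m = max(logits)\n",
    "            lse = m + math.log(sum(math.exp(z - m) for z in logits))    # log-sum-exp\n",
    "            total += lse - logits[tgt]                                  # -log softmax(z)[tgt]\n",
    "    return total\n",
    "\n",
    "# toy batch: B=2 sentences, T=2 tokens, V=3 vocabulary entries\n",
    "logits = [[[2.0, 0.0, 0.0], [0.0, 2.0, 0.0]],\n",
    "          [[0.0, 0.0, 2.0], [2.0, 0.0, 0.0]]]\n",
    "targets = [[0, 1], [2, 0]]\n",
    "loss = sequence_ce(logits, targets)\n",
    "```"
   ]
  },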
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Returning to the original Trainer logic: what ultimately gets called is the `train_step` function at line 469 of `fairseq/fairseq/tasks/fairseq_task.py`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "batch_data[\"target\"]=batch_data[\"target\"].to(\"cuda\")\n",
    "loss, sample_size, logging_output = criterion(model, batch_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(46929.3516, device='cuda:0', grad_fn=<NllLossBackward0>)"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "loss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "4096"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sample_size"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'loss': tensor(46929.3516, device='cuda:0'),\n",
       " 'ntokens': 4096,\n",
       " 'nsentences': 2048,\n",
       " 'sample_size': 4096}"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "logging_output"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With that, we have completed one full loss computation. The optimizer is outside our scope here, but note that the authors use several techniques:\n",
    "- the Adam optimizer, with `--adam-betas '(0.9, 0.98)' --adam-eps 1e-06`\n",
    "- label-smoothed cross-entropy\n",
    "- fp16 (low-precision) training\n",
    "- gradient clipping via `--clip-norm 0.1`\n",
    "- the `inverse_sqrt` learning-rate schedule\n",
    "- warm-up and cool-down\n",
    "\n",
    "It is fine not to know these yet; they can be learned along the way."
   ]
  },
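  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As one example of the above, an inverse-square-root schedule with warm-up has a simple shape: the learning rate ramps up linearly for `warmup_steps` updates, then decays proportionally to $1/\\sqrt{step}$. A generic sketch (illustrative constants, not fairseq's exact implementation):\n",
    "\n",
    "```python\n",
    "def inverse_sqrt_lr(step, base_lr=5e-4, warmup_steps=4000):\n",
    "    # linear warm-up to base_lr, then decay proportional to 1/sqrt(step)\n",
    "    if step < warmup_steps:\n",
    "        return base_lr * step / warmup_steps\n",
    "    return base_lr * (warmup_steps ** 0.5) * (step ** -0.5)\n",
    "```\n",
    "\n",
    "At `step = warmup_steps` the two branches meet at `base_lr`, and by `4 * warmup_steps` the rate has halved."
   ]
  },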
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Our own task is actually simple: we only need to modify the MoE. The next notebook explains how."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "moe",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
