{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# MOE模型简单示例\n",
    "\n",
    "我们这里不讨论Mixtral 8x7B这种大模型中使用的MOE技术，而是我们编写一个简单的，可以应用在任何任务中的自定义MOE，通过代码我们可以了解MOE的工作原理，这样对我们理解MOE在大模型中的工作方式是非常有帮助的。\n",
    "\n",
    "下面我们将一段一段地介绍PyTorch的代码实现。\n",
    "\n",
    "## 1. 定义专家模型\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from   tqdm        import tqdm\n",
    "import torch.nn    as nn\n",
    "import torch.optim as optim\n",
    "from   typing      import List, Tuple\n",
    "\n",
    "#! 定义专家模型\n",
    "class Expert(nn.Module): \n",
    "    def __init__(self, input_dim, hidden_dim, output_dim): \n",
    "        super(Expert, self).__init__() \n",
    "        self.layer1 = nn.Linear(input_dim,  hidden_dim) \n",
    "        self.layer2 = nn.Linear(hidden_dim, output_dim) \n",
    "\n",
    "    def forward(self, x): \n",
    "        x = torch.relu(self.layer1(x)) \n",
    "        return torch.softmax(self.layer2(x), dim=1)"
   ]
  },
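  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check (illustrative, not part of the original model): since each expert ends in `softmax`, every row of its output is a probability distribution over the classes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#! Sanity check (illustrative): expert outputs are per-row probability distributions\n",
    "_expert = Expert(input_dim=4, hidden_dim=32, output_dim=3)\n",
    "_probs  = _expert(torch.randn(8, 4))\n",
    "print(_probs.shape)        # torch.Size([8, 3])\n",
    "print(_probs.sum(dim=1))   # every entry is ~1.0"
   ]
  },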
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. 定义门控模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "#! 定义门控模型\n",
    "class Gating(nn.Module):\n",
    "    def __init__(self, input_dim, num_experts, dropout=0.1) -> None:\n",
    "        super(Gating, self).__init__()\n",
    "        self.layer1      = nn.Linear(input_dim, 128) \n",
    "        self.dropout     = nn.Dropout(dropout) \n",
    "        self.layer2      = nn.Linear(128, 256) \n",
    "        self.leaky_relu  = nn.LeakyReLU() \n",
    "        self.layer3      = nn.Linear(256, 128) \n",
    "        self.layer4      = nn.Linear(128, num_experts) \n",
    "        \n",
    "    \n",
    "    def forward(self, x)->torch.Tensor:\n",
    "        \"\"\"\n",
    "        @brief 门控模型更复杂，有三个线性层和dropout层用于正则化以防止过拟合。它使用ReLU和LeakyReLU激活\n",
    "               函数引入非线性。最后一层的输出大小等于专家的数量，并对这些输出应用softmax函数。输出权重，这\n",
    "               样可以将专家的输出与之结合。\n",
    "\n",
    "                说明：其实门控网络，或者叫路由网络是MOE中最复杂的部分，因为它涉及到控制输入到那个专家模型，\n",
    "                所以门控网络也有很多个设计方案，例如（如果我没记错的话）Mixtral 8x7B 只是取了8个专家中的\n",
    "                top2。所以我们这里不详细讨论各种方案，只是介绍其基本原理和代码实现。\n",
    "        @param x torch.Tensor 输入张量\n",
    "        \"\"\"\n",
    "        x = self.dropout(torch.relu(self.layer1(x)))\n",
    "        x = self.dropout(self.leaky_relu(self.layer2(x)))\n",
    "        x = self.dropout(self.leaky_relu(self.layer3(x)))\n",
    "        return torch.softmax(self.layer4(x), dim=1)"
   ]
  },
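  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The note in the docstring mentions top-k routing (Mixtral 8x7B reportedly keeps only the top-2 of its 8 experts per token). A minimal sketch of that idea with dummy gate scores (illustrative only; the model in this notebook uses dense softmax weights instead):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#! Top-2 routing sketch (illustrative only, not used by the MOE below)\n",
    "scores     = torch.randn(5, 8)                   # dummy gate scores: 5 samples, 8 experts\n",
    "topv, topi = scores.topk(2, dim=1)               # keep the 2 largest scores per sample\n",
    "sparse_w   = torch.zeros_like(scores)\n",
    "sparse_w.scatter_(1, topi, torch.softmax(topv, dim=1))  # renormalize over the kept experts\n",
    "print(sparse_w)    # each row: two nonzero weights summing to 1"
   ]
  },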
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. 定义MOE模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "#! 完整的MOE模型\n",
    "class MOE(nn.Module):\n",
    "    def __init__(self, trained_experts: List[Expert]):\n",
    "        super(MOE, self).__init__()\n",
    "        self.experts = nn.ModuleList(trained_experts)\n",
    "        num_experts  = len(trained_experts)\n",
    "        # 假设所有专家具有相同的输入维度\n",
    "        input_dim    = trained_experts[0].layer1.in_features\n",
    "        self.gating  = Gating(input_dim, num_experts)\n",
    "    \n",
    "    def forward(self, x)->torch.Tensor:\n",
    "        weights = self.gating(x)\n",
    "        outputs = torch.stack([expert(x) for expert in self.experts], dim=2)\n",
    "        weights = weights.unsqueeze(1).expand_as(outputs) \n",
    "        return torch.sum(outputs * weights, dim=2)    # 多个专家的加权求和作为结果输出\n",
    "        "
   ]
  },
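  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the broadcasting in `forward` concrete, here is a shape walkthrough with dummy tensors (the dimensions are chosen for illustration only):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#! Shape walkthrough of the MOE combination (dummy tensors, illustrative only)\n",
    "batch, classes, n_exp = 5, 3, 3\n",
    "outputs  = torch.rand(batch, classes, n_exp)               # stacked expert outputs: (batch, classes, experts)\n",
    "weights  = torch.softmax(torch.rand(batch, n_exp), dim=1)  # gating weights: (batch, experts)\n",
    "weights  = weights.unsqueeze(1).expand_as(outputs)         # (batch, experts) -> (batch, classes, experts)\n",
    "combined = torch.sum(outputs * weights, dim=2)             # weighted sum over experts: (batch, classes)\n",
    "print(combined.shape)    # torch.Size([5, 3])"
   ]
  },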
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. 自定义数据集\n",
    "\n",
    "这段代码创建了一个合成数据集，其中包含三个类标签——0、1和2。基于类标签对特征进行操作，从而在数据中引入一些模型可以学习的结构。\n",
    "\n",
    "数据被分成针对个别专家的训练集、MoE模型和测试集。我们确保专家模型是在一个子集上训练的，这样第一个专家在标签0和1上得到很好的训练，第二个专家在标签1和2上得到更好的训练，第三个专家看到更多的标签2和0。我们期望的结果是：虽然每个专家对标签0、1和2的分类准确率都不令人满意，但通过结合三位专家的决策，MoE将表现出色。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([1999, 4]) \n",
      " torch.Size([500, 4]) \n",
      " torch.Size([1642, 4]) \n",
      " torch.Size([1642, 4]) \n",
      " torch.Size([1642, 4])\n"
     ]
    }
   ],
   "source": [
    "#! 数据集生成\n",
    "num_samples = 5000 \n",
    "input_dim   = 4 \n",
    "hidden_dim  = 32 \n",
    "\n",
    "# Generate equal numbers of labels 0, 1, and 2 \n",
    "y_data = torch.cat([ \n",
    "    torch.zeros(num_samples // 3), \n",
    "    torch.ones(num_samples // 3), \n",
    "    torch.full((num_samples - 2 * (num_samples // 3),), 2)\n",
    "]).long() \n",
    "\n",
    "# print(y_data.shape)       # torch.Size([5000])\n",
    "x_data = torch.randn(num_samples, input_dim) \n",
    "# print(x_data.shape)       # torch.Size([5000, 4])\n",
    "for i in range(num_samples): \n",
    "    if y_data[i] == 0: \n",
    "        x_data[i, 0] += 1  # Making x[0] more positive \n",
    "    elif y_data[i] == 1: \n",
    "        x_data[i, 1] -= 1  # Making x[1] more negative \n",
    "    elif y_data[i] == 2: \n",
    "        x_data[i, 0] -= 1  # Making x[0] more negative \n",
    "        \n",
    "indices = torch.randperm(num_samples)    # 将数据打乱\n",
    "x_data  = x_data[indices] \n",
    "y_data  = y_data[indices] \n",
    "# print(y_data.bincount() )              # 标签分布，直方图统计\n",
    "shuffled_indices = torch.randperm(num_samples) \n",
    "x_data  = x_data[shuffled_indices] \n",
    "y_data  = y_data[shuffled_indices] \n",
    "\n",
    "x_train_experts = x_data[:int(num_samples/2)]                     # 使用前半部分数据用来训练各个专家模型\n",
    "y_train_experts = y_data[:int(num_samples/2)] \n",
    "\n",
    "mask_expert1 = (y_train_experts == 0) | (y_train_experts == 1) \n",
    "mask_expert2 = (y_train_experts == 1) | (y_train_experts == 2) \n",
    "mask_expert3 = (y_train_experts == 0) | (y_train_experts == 2) \n",
    "\n",
    "num_samples_per_expert = min(mask_expert1.sum(), mask_expert2.sum(), mask_expert3.sum()) \n",
    "\n",
    "#! 给各个专家模型选择训练数据\n",
    "x_expert1 = x_train_experts[mask_expert1][:num_samples_per_expert] \n",
    "y_expert1 = y_train_experts[mask_expert1][:num_samples_per_expert] \n",
    "\n",
    "x_expert2 = x_train_experts[mask_expert2][:num_samples_per_expert] \n",
    "y_expert2 = y_train_experts[mask_expert2][:num_samples_per_expert] \n",
    "\n",
    "x_expert3 = x_train_experts[mask_expert3][:num_samples_per_expert] \n",
    "y_expert3 = y_train_experts[mask_expert3][:num_samples_per_expert]\n",
    "\n",
    "############################################################\n",
    "#! 另一部分数据用来训练MOE模型和测试\n",
    "x_remaining = x_data[int(num_samples/2)+1:] \n",
    "y_remaining = y_data[int(num_samples/2)+1:] \n",
    "\n",
    "split       = int(0.8 * len(x_remaining)) \n",
    "x_train_moe = x_remaining[:split] \n",
    "y_train_moe = y_remaining[:split] \n",
    "\n",
    "x_test = x_remaining[split:] \n",
    "y_test = y_remaining[split:] \n",
    "\n",
    "print(x_train_moe.shape,\"\\n\", x_test.shape,\"\\n\", \n",
    "      x_expert1.shape,\"\\n\", \n",
    "      x_expert2.shape,\"\\n\", x_expert3.shape)"
   ]
  },
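  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick check (illustrative): the masks above really do restrict each expert's training labels to two of the three classes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#! Sanity check (illustrative): each expert only sees two of the three labels\n",
    "print(torch.unique(y_expert1))   # tensor([0, 1])\n",
    "print(torch.unique(y_expert2))   # tensor([1, 2])\n",
    "print(torch.unique(y_expert3))   # tensor([0, 2])"
   ]
  },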
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. 模型训练"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "开始训练专家模型1...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 500/500 [00:00<00:00, 628.53it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "开始训练专家模型2...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 500/500 [00:00<00:00, 540.85it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "开始训练专家模型3...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 500/500 [00:00<00:00, 782.69it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "开始训练MOE模型...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 500/500 [00:07<00:00, 63.81it/s]\n"
     ]
    }
   ],
   "source": [
    "#! 模型初始化和训练设置:\n",
    "# Define hidden dimension \n",
    "output_dim = 3 \n",
    "hidden_dim = 32 \n",
    "\n",
    "epochs = 500 \n",
    "learning_rate = 0.001 \n",
    "\n",
    "\n",
    "# Instantiate the experts \n",
    "expert1 = Expert(input_dim, hidden_dim, output_dim) \n",
    "expert2 = Expert(input_dim, hidden_dim, output_dim) \n",
    "expert3 = Expert(input_dim, hidden_dim, output_dim) \n",
    "\n",
    "# Set up loss \n",
    "criterion = nn.CrossEntropyLoss() \n",
    "\n",
    "# Optimizers for experts \n",
    "optimizer_expert1 = optim.Adam(expert1.parameters(), lr=learning_rate) \n",
    "optimizer_expert2 = optim.Adam(expert2.parameters(), lr=learning_rate) \n",
    "optimizer_expert3 = optim.Adam(expert3.parameters(), lr=learning_rate)\n",
    "\n",
    "#! 训练expert 1模型 \n",
    "print(\"开始训练专家模型1...\")\n",
    "for epoch in tqdm(range(epochs)): \n",
    "    optimizer_expert1.zero_grad() \n",
    "    outputs_expert1 = expert1(x_expert1) \n",
    "    loss_expert1    = criterion(outputs_expert1, y_expert1) \n",
    "    loss_expert1.backward() \n",
    "    optimizer_expert1.step() \n",
    "\n",
    "#! 训练expert 2模型\n",
    "print(\"开始训练专家模型2...\")\n",
    "for epoch in tqdm(range(epochs)): \n",
    "    optimizer_expert2.zero_grad() \n",
    "    outputs_expert2 = expert2(x_expert2) \n",
    "    loss_expert2 = criterion(outputs_expert2, y_expert2) \n",
    "    loss_expert2.backward() \n",
    "    optimizer_expert2.step() \n",
    "\n",
    "#! 训练expert 3模型\n",
    "print(\"开始训练专家模型3...\")\n",
    "for epoch in tqdm(range(epochs)): \n",
    "    optimizer_expert3.zero_grad() \n",
    "    outputs_expert3 = expert3(x_expert3) \n",
    "    loss_expert3 = criterion(outputs_expert3, y_expert3) \n",
    "    loss_expert3.backward()\n",
    "    \n",
    "# Create the MoE model with the trained experts \n",
    "moe_model = MOE([expert1, expert2, expert3]) \n",
    "\n",
    "#! 4. 训练MOE模型 \n",
    "print(\"开始训练MOE模型...\")\n",
    "optimizer_moe = optim.Adam(moe_model.parameters(), lr=learning_rate) \n",
    "for epoch in tqdm(range(epochs)): \n",
    "    optimizer_moe.zero_grad() \n",
    "    outputs_moe = moe_model(x_train_moe) \n",
    "    loss_moe = criterion(outputs_moe, y_train_moe) \n",
    "    loss_moe.backward() \n",
    "    optimizer_moe.step()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6. 精度验证"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "开始模型精度评估...\n",
      "Expert 1 Accuracy: 0.394\n",
      "Expert 2 Accuracy: 0.498\n",
      "Expert 3 Accuracy: 0.508\n",
      "Mixture of Experts Accuracy: 0.67\n"
     ]
    }
   ],
   "source": [
    "def evaluate(model, x, y):   # 评估函数\n",
    "    with torch.no_grad(): \n",
    "        outputs = model(x) \n",
    "        _, predicted = torch.max(outputs, 1) \n",
    "        correct = (predicted == y).sum().item() \n",
    "        accuracy = correct / len(y) \n",
    "    return accuracy\n",
    "\n",
    "print(\"开始模型精度评估...\")\n",
    "accuracy_expert1 = evaluate(expert1, x_test, y_test) \n",
    "accuracy_expert2 = evaluate(expert2, x_test, y_test) \n",
    "accuracy_expert3 = evaluate(expert3, x_test, y_test) \n",
    "accuracy_moe     = evaluate(moe_model, x_test, y_test) \n",
    "\n",
    "print(\"Expert 1 Accuracy:\", accuracy_expert1) \n",
    "print(\"Expert 2 Accuracy:\", accuracy_expert2) \n",
    "print(\"Expert 3 Accuracy:\", accuracy_expert3) \n",
    "print(\"Mixture of Experts Accuracy:\", accuracy_moe) "
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
