{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building a VisionTransformer with MoE layers replacing the FFN (feed-forward) layers\n",
    "I recently came across a nicely written MoE implementation, https://github.com/lucidrains/mixture-of-experts. It is fast, though not the easiest to follow.\n",
    "\n",
    "Rather than reinventing it, the code has been copied into models/mixture_of_experts.py."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys\n",
    "sys.path.append(\"../\")\n",
    "import torch as t\n",
    "import torch.nn as nn\n",
    "from models.mixture_of_experts import MoE\n",
    "\n",
    "moe = MoE(\n",
    "    dim = 512,\n",
    "    num_experts = 16,               # increase the experts (# parameters) of your model without increasing computation\n",
    "    hidden_dim = 512 * 4,           # size of hidden dimension in each expert, defaults to 4 * dimension\n",
    "    activation = nn.LeakyReLU,      # use your preferred activation, will default to GELU\n",
    "    second_policy_train = 'random', # in top_2 gating, policy for whether to use a second-place expert\n",
    "    second_policy_eval = 'random',  # all (always) | none (never) | threshold (if gate value > the given threshold) | random (if gate value > threshold * random_uniform(0, 1))\n",
    "    second_threshold_train = 0.2,\n",
    "    second_threshold_eval = 0.2,\n",
    "    capacity_factor_train = 1.25,   # experts have fixed capacity per batch. we need some extra capacity in case gating is not perfectly balanced.\n",
    "    capacity_factor_eval = 2.,      # capacity_factor_* should be set to a value >=1\n",
    "    loss_coef = 1e-2                # multiplier on the auxiliary expert-balancing loss\n",
    ")"
   ]
  },
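  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check on the comment about increasing parameters without increasing computation: with 16 experts the MoE stores 16 full FFNs, but top-2 gating routes each token through at most 2 of them. A rough count, ignoring the gating weights and any biases:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dim, num_experts, hidden_dim = 512, 16, 512 * 4\n",
    "params_per_expert = dim * hidden_dim + hidden_dim * dim  # w1 and w2 of one expert\n",
    "total_params = num_experts * params_per_expert           # parameters stored\n",
    "active_params = 2 * params_per_expert                    # parameters used per token with top-2 gating\n",
    "total_params, active_params  # (33554432, 4194304)"
   ]
  },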
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check the output. The input shape is [batch_size, sequence_length, embedding_dimension]; in a VisionTransformer this corresponds to [batch_size, num_patches, patch_dimension]."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "output, aux_loss = moe(t.randn(4, 16, 512))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([4, 16, 512])"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "output.shape # should be 4,16,512"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(0.0210, grad_fn=<MulBackward0>)"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "aux_loss # should be a scalar"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Inspect the parameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "MoE(\n",
       "  (gate): Top2Gating()\n",
       "  (experts): Experts(\n",
       "    (act): LeakyReLU(negative_slope=0.01)\n",
       "  )\n",
       ")"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "moe"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Param Name:gate.w_gating shape:torch.Size([512, 16])\n",
      "Param Name:experts.w1 shape:torch.Size([16, 512, 2048])\n",
      "Param Name:experts.w2 shape:torch.Size([16, 2048, 512])\n"
     ]
    }
   ],
   "source": [
    "for name,param in moe.named_parameters():\n",
    "    print(\"Param Name:{} shape:{}\".format(name,param.shape))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compared with the implementation in moe_test.ipynb, this version simply adds an extra expert dimension to the MoE parameters, so all experts are stored and applied as one batched tensor."
   ]
  },
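  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The benefit of this batched layout is that all experts can be applied in a single einsum rather than a Python loop over experts. A minimal sketch of the idea, using random weights with the same shapes as the experts.w1 and experts.w2 printout above (the gating/dispatch step that routes tokens to experts is omitted):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch as t\n",
    "\n",
    "e, dim, hidden = 16, 512, 2048\n",
    "w1 = t.randn(e, dim, hidden)  # same shape as experts.w1\n",
    "w2 = t.randn(e, hidden, dim)  # same shape as experts.w2\n",
    "x = t.randn(e, 8, dim)        # tokens already dispatched: [experts, capacity, dim]\n",
    "h = t.einsum('ecd,edh->ech', x, w1).relu()  # all 16 experts in one batched matmul\n",
    "y = t.einsum('ech,ehd->ecd', h, w2)\n",
    "y.shape  # torch.Size([16, 8, 512])"
   ]
  },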
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building the MoE ViT\n",
    "\n",
    "1. Replace every feed-forward (FFN) layer with this MoE layer (the classifier head is left unchanged).\n",
    "2. Rewrite the forward function so that the auxiliary loss from each MoE layer is kept and returned as part of the forward output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "import timm\n",
    "\n",
    "vit = timm.models.vision_transformer.vit_base_patch16_224()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We use the most basic ViT, plain and without any special features.\n",
    "\n",
    "The ViT-Base configuration is:\n",
    "```python\n",
    "dict(patch_size=16, embed_dim=768, depth=12, num_heads=12)\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "VisionTransformer(\n",
       "  (patch_embed): PatchEmbed(\n",
       "    (proj): Conv2d(3, 768, kernel_size=(16, 16), stride=(16, 16))\n",
       "    (norm): Identity()\n",
       "  )\n",
       "  (pos_drop): Dropout(p=0.0, inplace=False)\n",
       "  (patch_drop): Identity()\n",
       "  (norm_pre): Identity()\n",
       "  (blocks): Sequential(\n",
       "    (0): Block(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (1): Block(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (2): Block(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (3): Block(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (4): Block(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (5): Block(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (6): Block(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (7): Block(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (8): Block(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (9): Block(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (10): Block(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (11): Block(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "  )\n",
       "  (norm): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "  (fc_norm): Identity()\n",
       "  (head_drop): Dropout(p=0.0, inplace=False)\n",
       "  (head): Linear(in_features=768, out_features=1000, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "vit"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Looking at its blocks in detail, every FFN inside the blocks needs to be replaced with an MoE layer, which means modifying:\n",
    "1. the forward function of the VisionTransformer class\n",
    "2. the forward function of Block"
   ]
  },
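  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch of the planned change (hypothetical code, not part of timm): wrap each Block so that its mlp is the MoE layer and its forward also returns the auxiliary loss. The MoEBlock name and the assumption that the MoE returns an (output, aux_loss) pair mirror the cells above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch.nn as nn\n",
    "\n",
    "class MoEBlock(nn.Module):\n",
    "    # hypothetical wrapper: a timm Block whose Mlp has been swapped for an MoE\n",
    "    def __init__(self, block, moe):\n",
    "        super().__init__()\n",
    "        self.block = block\n",
    "        self.block.mlp = moe  # replace the FFN with the MoE layer\n",
    "\n",
    "    def forward(self, x):\n",
    "        b = self.block\n",
    "        x = x + b.drop_path1(b.ls1(b.attn(b.norm1(x))))\n",
    "        y, aux_loss = b.mlp(b.norm2(x))  # MoE returns (output, aux_loss)\n",
    "        x = x + b.drop_path2(b.ls2(y))\n",
    "        return x, aux_loss"
   ]
  },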
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Sequential(\n",
       "  (0): Block(\n",
       "    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (attn): Attention(\n",
       "      (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "      (q_norm): Identity()\n",
       "      (k_norm): Identity()\n",
       "      (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "      (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "      (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls1): Identity()\n",
       "    (drop_path1): Identity()\n",
       "    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (mlp): Mlp(\n",
       "      (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "      (act): GELU(approximate='none')\n",
       "      (drop1): Dropout(p=0.0, inplace=False)\n",
       "      (norm): Identity()\n",
       "      (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "      (drop2): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls2): Identity()\n",
       "    (drop_path2): Identity()\n",
       "  )\n",
       "  (1): Block(\n",
       "    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (attn): Attention(\n",
       "      (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "      (q_norm): Identity()\n",
       "      (k_norm): Identity()\n",
       "      (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "      (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "      (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls1): Identity()\n",
       "    (drop_path1): Identity()\n",
       "    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (mlp): Mlp(\n",
       "      (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "      (act): GELU(approximate='none')\n",
       "      (drop1): Dropout(p=0.0, inplace=False)\n",
       "      (norm): Identity()\n",
       "      (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "      (drop2): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls2): Identity()\n",
       "    (drop_path2): Identity()\n",
       "  )\n",
       "  (2): Block(\n",
       "    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (attn): Attention(\n",
       "      (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "      (q_norm): Identity()\n",
       "      (k_norm): Identity()\n",
       "      (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "      (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "      (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls1): Identity()\n",
       "    (drop_path1): Identity()\n",
       "    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (mlp): Mlp(\n",
       "      (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "      (act): GELU(approximate='none')\n",
       "      (drop1): Dropout(p=0.0, inplace=False)\n",
       "      (norm): Identity()\n",
       "      (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "      (drop2): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls2): Identity()\n",
       "    (drop_path2): Identity()\n",
       "  )\n",
       "  (3): Block(\n",
       "    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (attn): Attention(\n",
       "      (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "      (q_norm): Identity()\n",
       "      (k_norm): Identity()\n",
       "      (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "      (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "      (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls1): Identity()\n",
       "    (drop_path1): Identity()\n",
       "    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (mlp): Mlp(\n",
       "      (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "      (act): GELU(approximate='none')\n",
       "      (drop1): Dropout(p=0.0, inplace=False)\n",
       "      (norm): Identity()\n",
       "      (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "      (drop2): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls2): Identity()\n",
       "    (drop_path2): Identity()\n",
       "  )\n",
       "  (4): Block(\n",
       "    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (attn): Attention(\n",
       "      (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "      (q_norm): Identity()\n",
       "      (k_norm): Identity()\n",
       "      (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "      (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "      (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls1): Identity()\n",
       "    (drop_path1): Identity()\n",
       "    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (mlp): Mlp(\n",
       "      (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "      (act): GELU(approximate='none')\n",
       "      (drop1): Dropout(p=0.0, inplace=False)\n",
       "      (norm): Identity()\n",
       "      (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "      (drop2): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls2): Identity()\n",
       "    (drop_path2): Identity()\n",
       "  )\n",
       "  (5): Block(\n",
       "    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (attn): Attention(\n",
       "      (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "      (q_norm): Identity()\n",
       "      (k_norm): Identity()\n",
       "      (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "      (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "      (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls1): Identity()\n",
       "    (drop_path1): Identity()\n",
       "    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (mlp): Mlp(\n",
       "      (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "      (act): GELU(approximate='none')\n",
       "      (drop1): Dropout(p=0.0, inplace=False)\n",
       "      (norm): Identity()\n",
       "      (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "      (drop2): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls2): Identity()\n",
       "    (drop_path2): Identity()\n",
       "  )\n",
       "  (6): Block(\n",
       "    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (attn): Attention(\n",
       "      (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "      (q_norm): Identity()\n",
       "      (k_norm): Identity()\n",
       "      (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "      (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "      (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls1): Identity()\n",
       "    (drop_path1): Identity()\n",
       "    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (mlp): Mlp(\n",
       "      (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "      (act): GELU(approximate='none')\n",
       "      (drop1): Dropout(p=0.0, inplace=False)\n",
       "      (norm): Identity()\n",
       "      (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "      (drop2): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls2): Identity()\n",
       "    (drop_path2): Identity()\n",
       "  )\n",
       "  (7): Block(\n",
       "    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (attn): Attention(\n",
       "      (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "      (q_norm): Identity()\n",
       "      (k_norm): Identity()\n",
       "      (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "      (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "      (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls1): Identity()\n",
       "    (drop_path1): Identity()\n",
       "    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (mlp): Mlp(\n",
       "      (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "      (act): GELU(approximate='none')\n",
       "      (drop1): Dropout(p=0.0, inplace=False)\n",
       "      (norm): Identity()\n",
       "      (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "      (drop2): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls2): Identity()\n",
       "    (drop_path2): Identity()\n",
       "  )\n",
       "  (8): Block(\n",
       "    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (attn): Attention(\n",
       "      (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "      (q_norm): Identity()\n",
       "      (k_norm): Identity()\n",
       "      (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "      (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "      (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls1): Identity()\n",
       "    (drop_path1): Identity()\n",
       "    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (mlp): Mlp(\n",
       "      (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "      (act): GELU(approximate='none')\n",
       "      (drop1): Dropout(p=0.0, inplace=False)\n",
       "      (norm): Identity()\n",
       "      (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "      (drop2): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls2): Identity()\n",
       "    (drop_path2): Identity()\n",
       "  )\n",
       "  (9): Block(\n",
       "    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (attn): Attention(\n",
       "      (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "      (q_norm): Identity()\n",
       "      (k_norm): Identity()\n",
       "      (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "      (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "      (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls1): Identity()\n",
       "    (drop_path1): Identity()\n",
       "    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (mlp): Mlp(\n",
       "      (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "      (act): GELU(approximate='none')\n",
       "      (drop1): Dropout(p=0.0, inplace=False)\n",
       "      (norm): Identity()\n",
       "      (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "      (drop2): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls2): Identity()\n",
       "    (drop_path2): Identity()\n",
       "  )\n",
       "  (10): Block(\n",
       "    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (attn): Attention(\n",
       "      (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "      (q_norm): Identity()\n",
       "      (k_norm): Identity()\n",
       "      (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "      (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "      (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls1): Identity()\n",
       "    (drop_path1): Identity()\n",
       "    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (mlp): Mlp(\n",
       "      (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "      (act): GELU(approximate='none')\n",
       "      (drop1): Dropout(p=0.0, inplace=False)\n",
       "      (norm): Identity()\n",
       "      (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "      (drop2): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls2): Identity()\n",
       "    (drop_path2): Identity()\n",
       "  )\n",
       "  (11): Block(\n",
       "    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (attn): Attention(\n",
       "      (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "      (q_norm): Identity()\n",
       "      (k_norm): Identity()\n",
       "      (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "      (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "      (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls1): Identity()\n",
       "    (drop_path1): Identity()\n",
       "    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "    (mlp): Mlp(\n",
       "      (fc1): Linear(in_features=768, out_features=3072, bias=True)\n",
       "      (act): GELU(approximate='none')\n",
       "      (drop1): Dropout(p=0.0, inplace=False)\n",
       "      (norm): Identity()\n",
       "      (fc2): Linear(in_features=3072, out_features=768, bias=True)\n",
       "      (drop2): Dropout(p=0.0, inplace=False)\n",
       "    )\n",
       "    (ls2): Identity()\n",
       "    (drop_path2): Identity()\n",
       "  )\n",
       ")"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "vit.blocks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "修改完毕的东西在models/vit_moe.py里头，下面是具体修改思路：\n",
    "\n",
    "这是ViT的init函数\n",
    "```python\n",
    "def __init__(\n",
    "            self,\n",
    "            img_size: Union[int, Tuple[int, int]] = 224,\n",
    "            patch_size: Union[int, Tuple[int, int]] = 16,\n",
    "            in_chans: int = 3,\n",
    "            num_classes: int = 1000,\n",
    "            global_pool: str = 'token',\n",
    "            embed_dim: int = 768,\n",
    "            depth: int = 12,\n",
    "            num_heads: int = 12,\n",
    "            mlp_ratio: float = 4.,\n",
    "            qkv_bias: bool = True,\n",
    "            qk_norm: bool = False,\n",
    "            init_values: Optional[float] = None,\n",
    "            class_token: bool = True,\n",
    "            no_embed_class: bool = False,\n",
    "            reg_tokens: int = 0,\n",
    "            pre_norm: bool = False,\n",
    "            fc_norm: Optional[bool] = None,\n",
    "            dynamic_img_size: bool = False,\n",
    "            dynamic_img_pad: bool = False,\n",
    "            drop_rate: float = 0.,\n",
    "            pos_drop_rate: float = 0.,\n",
    "            patch_drop_rate: float = 0.,\n",
    "            proj_drop_rate: float = 0.,\n",
    "            attn_drop_rate: float = 0.,\n",
    "            drop_path_rate: float = 0.,\n",
    "            weight_init: str = '',\n",
    "            embed_layer: Callable = PatchEmbed,\n",
    "            norm_layer: Optional[LayerType] = None,\n",
    "            act_layer: Optional[LayerType] = None,\n",
    "            block_fn: Type[nn.Module] = Block,\n",
    "            mlp_layer: Type[nn.Module] = Mlp,\n",
    "    ):\n",
    "```\n",
    "\n",
    "我们看到`mlp_layer`参数了，可以指定我们的MoE FFN作为MLPLayer，然后需要修改block的forward函数，挺麻烦的，不如自定义一个Block，然后再修改ViT的forward函数"
   ]
  },
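  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The custom-Block idea can be sketched as follows (a hypothetical sketch with made-up names; the actual implementation is in models/vit_moe.py):\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class MoEBlock(nn.Module):\n",
    "    # Hypothetical sketch: a Transformer block whose MLP slot holds an MoE\n",
    "    # and whose forward also returns the gating auxiliary loss.\n",
    "    def __init__(self, dim, attn, moe):\n",
    "        super().__init__()\n",
    "        self.norm1 = nn.LayerNorm(dim)\n",
    "        self.norm2 = nn.LayerNorm(dim)\n",
    "        self.attn = attn  # any module mapping (B, N, dim) -> (B, N, dim)\n",
    "        self.mlp = moe    # must return (output, aux_loss)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = x + self.attn(self.norm1(x))\n",
    "        mlp_out, aux_loss = self.mlp(self.norm2(x))\n",
    "        return x + mlp_out, aux_loss\n",
    "\n",
    "# The modified ViT forward then collects one aux loss per block:\n",
    "def forward_blocks(blocks, x):\n",
    "    aux_losses = []\n",
    "    for blk in blocks:\n",
    "        x, aux = blk(x)\n",
    "        aux_losses.append(aux)\n",
    "    return x, aux_losses\n",
    "```"
   ]
  },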
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "测试一哈"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "from models.vit_moe import ViTMoE\n",
    "\n",
    "vit_moe = ViTMoE(num_experts=8)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "logits, aux_losses = vit_moe.forward(t.randn(32,3,224,224))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([32, 1000])"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "logits.shape # must be 32*1000 1000classes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[tensor(0.0105, grad_fn=<MulBackward0>),\n",
       " tensor(0.0116, grad_fn=<MulBackward0>),\n",
       " tensor(0.0116, grad_fn=<MulBackward0>),\n",
       " tensor(0.0121, grad_fn=<MulBackward0>),\n",
       " tensor(0.0123, grad_fn=<MulBackward0>),\n",
       " tensor(0.0123, grad_fn=<MulBackward0>),\n",
       " tensor(0.0131, grad_fn=<MulBackward0>),\n",
       " tensor(0.0129, grad_fn=<MulBackward0>),\n",
       " tensor(0.0135, grad_fn=<MulBackward0>),\n",
       " tensor(0.0136, grad_fn=<MulBackward0>),\n",
       " tensor(0.0139, grad_fn=<MulBackward0>),\n",
       " tensor(0.0134, grad_fn=<MulBackward0>)]"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "aux_losses"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "sum(aux_losses).backward() "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Param Name:cls_token shape:torch.Size([1, 1, 768])\n",
      "Param Name:pos_embed shape:torch.Size([1, 197, 768])\n",
      "Param Name:patch_embed.proj.weight shape:torch.Size([768, 3, 16, 16])\n",
      "Param Name:patch_embed.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.0.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.0.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.0.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.0.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.0.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.0.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.0.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.0.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.0.mlp.gate.w_gating shape:torch.Size([768, 8])\n",
      "Param Name:blocks.0.mlp.experts.w1 shape:torch.Size([8, 768, 3072])\n",
      "Param Name:blocks.0.mlp.experts.w2 shape:torch.Size([8, 3072, 768])\n",
      "Param Name:blocks.1.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.1.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.1.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.1.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.1.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.1.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.1.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.1.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.1.mlp.gate.w_gating shape:torch.Size([768, 8])\n",
      "Param Name:blocks.1.mlp.experts.w1 shape:torch.Size([8, 768, 3072])\n",
      "Param Name:blocks.1.mlp.experts.w2 shape:torch.Size([8, 3072, 768])\n",
      "Param Name:blocks.2.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.2.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.2.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.2.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.2.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.2.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.2.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.2.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.2.mlp.gate.w_gating shape:torch.Size([768, 8])\n",
      "Param Name:blocks.2.mlp.experts.w1 shape:torch.Size([8, 768, 3072])\n",
      "Param Name:blocks.2.mlp.experts.w2 shape:torch.Size([8, 3072, 768])\n",
      "Param Name:blocks.3.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.3.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.3.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.3.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.3.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.3.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.3.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.3.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.3.mlp.gate.w_gating shape:torch.Size([768, 8])\n",
      "Param Name:blocks.3.mlp.experts.w1 shape:torch.Size([8, 768, 3072])\n",
      "Param Name:blocks.3.mlp.experts.w2 shape:torch.Size([8, 3072, 768])\n",
      "Param Name:blocks.4.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.4.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.4.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.4.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.4.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.4.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.4.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.4.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.4.mlp.gate.w_gating shape:torch.Size([768, 8])\n",
      "Param Name:blocks.4.mlp.experts.w1 shape:torch.Size([8, 768, 3072])\n",
      "Param Name:blocks.4.mlp.experts.w2 shape:torch.Size([8, 3072, 768])\n",
      "Param Name:blocks.5.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.5.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.5.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.5.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.5.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.5.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.5.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.5.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.5.mlp.gate.w_gating shape:torch.Size([768, 8])\n",
      "Param Name:blocks.5.mlp.experts.w1 shape:torch.Size([8, 768, 3072])\n",
      "Param Name:blocks.5.mlp.experts.w2 shape:torch.Size([8, 3072, 768])\n",
      "Param Name:blocks.6.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.6.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.6.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.6.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.6.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.6.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.6.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.6.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.6.mlp.gate.w_gating shape:torch.Size([768, 8])\n",
      "Param Name:blocks.6.mlp.experts.w1 shape:torch.Size([8, 768, 3072])\n",
      "Param Name:blocks.6.mlp.experts.w2 shape:torch.Size([8, 3072, 768])\n",
      "Param Name:blocks.7.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.7.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.7.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.7.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.7.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.7.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.7.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.7.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.7.mlp.gate.w_gating shape:torch.Size([768, 8])\n",
      "Param Name:blocks.7.mlp.experts.w1 shape:torch.Size([8, 768, 3072])\n",
      "Param Name:blocks.7.mlp.experts.w2 shape:torch.Size([8, 3072, 768])\n",
      "Param Name:blocks.8.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.8.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.8.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.8.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.8.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.8.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.8.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.8.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.8.mlp.gate.w_gating shape:torch.Size([768, 8])\n",
      "Param Name:blocks.8.mlp.experts.w1 shape:torch.Size([8, 768, 3072])\n",
      "Param Name:blocks.8.mlp.experts.w2 shape:torch.Size([8, 3072, 768])\n",
      "Param Name:blocks.9.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.9.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.9.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.9.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.9.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.9.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.9.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.9.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.9.mlp.gate.w_gating shape:torch.Size([768, 8])\n",
      "Param Name:blocks.9.mlp.experts.w1 shape:torch.Size([8, 768, 3072])\n",
      "Param Name:blocks.9.mlp.experts.w2 shape:torch.Size([8, 3072, 768])\n",
      "Param Name:blocks.10.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.10.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.10.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.10.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.10.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.10.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.10.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.10.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.10.mlp.gate.w_gating shape:torch.Size([768, 8])\n",
      "Param Name:blocks.10.mlp.experts.w1 shape:torch.Size([8, 768, 3072])\n",
      "Param Name:blocks.10.mlp.experts.w2 shape:torch.Size([8, 3072, 768])\n",
      "Param Name:blocks.11.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.11.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.11.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.11.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.11.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.11.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.11.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.11.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.11.mlp.gate.w_gating shape:torch.Size([768, 8])\n",
      "Param Name:blocks.11.mlp.experts.w1 shape:torch.Size([8, 768, 3072])\n",
      "Param Name:blocks.11.mlp.experts.w2 shape:torch.Size([8, 3072, 768])\n",
      "Param Name:norm.weight shape:torch.Size([768])\n",
      "Param Name:norm.bias shape:torch.Size([768])\n",
      "Param Name:head.weight shape:torch.Size([1000, 768])\n",
      "Param Name:head.bias shape:torch.Size([1000])\n"
     ]
    }
   ],
   "source": [
    "for name,param in vit_moe.named_parameters():\n",
    "    print(\"Param Name:{} shape:{}\".format(name,param.shape))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "可以看到有12个层的Gate的loss，这个应该被加入到总的Loss当中去优化。\n",
    "\n",
    "具体使用方法就是：\n",
    "```python\n",
    "loss_fn = nn.CrossEntropyLoss()\n",
    "\n",
    "for samples,labels in trainLoader:\n",
    "    logits,aux_losses = vit_moe.forward(sample)\n",
    "    ce_loss = loss_fn(logits,labels)\n",
    "\n",
    "    total_loss = ce_loss+ sum(aux_losses)\n",
    "\n",
    "    loss.backward()\n",
    "\n",
    "```\n",
    "\n",
    "---"
   ]
  },
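  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A more complete training step also needs the optimizer calls. A runnable sketch (the model, optimizer, and data here are placeholders standing in for vit_moe and a real DataLoader):\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# Placeholder model: any module returning (logits, list_of_aux_losses).\n",
    "class TinyMoEModel(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.fc = nn.Linear(8, 4)\n",
    "    def forward(self, x):\n",
    "        return self.fc(x), [torch.tensor(0.01, requires_grad=True)]\n",
    "\n",
    "model = TinyMoEModel()\n",
    "optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)\n",
    "loss_fn = nn.CrossEntropyLoss()\n",
    "\n",
    "samples = torch.randn(32, 8)\n",
    "labels = torch.randint(0, 4, (32,))\n",
    "\n",
    "optimizer.zero_grad()\n",
    "logits, aux_losses = model(samples)\n",
    "total_loss = loss_fn(logits, labels) + sum(aux_losses)\n",
    "total_loss.backward()\n",
    "optimizer.step()\n",
    "```"
   ]
  },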
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "关于大邹老师的ShareParameterMoE:\n",
    "\n",
    "已经实现，下面是测试"
   ]
  },
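  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the shared-parameter idea (hypothetical internals, inferred from the dim/share_dim/hidden_dim arguments; the real code is in models/mixture_of_experts.py):\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class ShareParamSketch(nn.Module):\n",
    "    # Hypothetical sketch: split the feature dim into a shared part (a dense\n",
    "    # MLP applied to every token) and a routed part (sent through an MoE),\n",
    "    # then concatenate the two outputs back to the full dim.\n",
    "    def __init__(self, dim, share_dim, hidden_dim, moe):\n",
    "        super().__init__()\n",
    "        self.share_dim = share_dim\n",
    "        self.routed_dim = dim - share_dim\n",
    "        self.mlp = nn.Sequential(\n",
    "            nn.Linear(share_dim, hidden_dim),\n",
    "            nn.GELU(),\n",
    "            nn.Linear(hidden_dim, share_dim),\n",
    "        )\n",
    "        self.moe = moe  # operates on the remaining routed features\n",
    "\n",
    "    def forward(self, x):\n",
    "        shared, routed = x.split([self.share_dim, self.routed_dim], dim=-1)\n",
    "        moe_out, aux_loss = self.moe(routed)\n",
    "        return torch.cat([self.mlp(shared), moe_out], dim=-1), aux_loss\n",
    "```"
   ]
  },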
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "from models.mixture_of_experts import MoEShareParam\n",
    "\n",
    "moe_shareparam = MoEShareParam(dim=512,share_dim=256,hidden_dim=256*3,num_experts=16)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "outputs, aux_loss = moe_shareparam.forward(inputs=t.randn((32,6,512)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([32, 6, 512])"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "outputs.shape # should be 32 6 512"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(0.0349, grad_fn=<MulBackward0>)"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "aux_loss # should be a scalar"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Param Name:mlp.0.weight shape:torch.Size([768, 256])\n",
      "Param Name:mlp.0.bias shape:torch.Size([768])\n",
      "Param Name:mlp.2.weight shape:torch.Size([256, 768])\n",
      "Param Name:mlp.2.bias shape:torch.Size([256])\n",
      "Param Name:gate.w_gating shape:torch.Size([256, 16])\n",
      "Param Name:experts.w1 shape:torch.Size([16, 256, 768])\n",
      "Param Name:experts.w2 shape:torch.Size([16, 768, 256])\n"
     ]
    }
   ],
   "source": [
    "for name,param in moe_shareparam.named_parameters():\n",
    "    print(\"Param Name:{} shape:{}\".format(name,param.shape))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "可以看到已经将512分为2个部分了。\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "剩下的就是训练了，看看显存占用吧家人们"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wed Nov 29 20:01:40 2023       \n",
      "+---------------------------------------------------------------------------------------+\n",
      "| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |\n",
      "|-----------------------------------------+----------------------+----------------------+\n",
      "| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |\n",
      "| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |\n",
      "|                                         |                      |               MIG M. |\n",
      "|=========================================+======================+======================|\n",
      "|   0  NVIDIA GeForce RTX 4090        Off | 00000000:41:00.0 Off |                  Off |\n",
      "|  0%   31C    P8              25W / 450W |     18MiB / 24564MiB |      0%      Default |\n",
      "|                                         |                      |                  N/A |\n",
      "+-----------------------------------------+----------------------+----------------------+\n",
      "|   1  NVIDIA GeForce RTX 4090        Off | 00000000:43:00.0 Off |                  Off |\n",
      "| 30%   52C    P2             287W / 450W |  12788MiB / 24564MiB |     84%      Default |\n",
      "|                                         |                      |                  N/A |\n",
      "+-----------------------------------------+----------------------+----------------------+\n",
      "                                                                                         \n",
      "+---------------------------------------------------------------------------------------+\n",
      "| Processes:                                                                            |\n",
      "|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |\n",
      "|        ID   ID                                                             Usage      |\n",
      "|=======================================================================================|\n",
      "+---------------------------------------------------------------------------------------+\n"
     ]
    }
   ],
   "source": [
    "!nvidia-smi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ViTMoE(\n",
       "  (patch_embed): PatchEmbed(\n",
       "    (proj): Conv2d(3, 768, kernel_size=(16, 16), stride=(16, 16))\n",
       "    (norm): Identity()\n",
       "  )\n",
       "  (pos_drop): Dropout(p=0.0, inplace=False)\n",
       "  (patch_drop): Identity()\n",
       "  (norm_pre): Identity()\n",
       "  (blocks): ModuleList(\n",
       "    (0-11): 12 x MoEBlock(\n",
       "      (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=768, out_features=2304, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=768, out_features=768, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): MoE(\n",
       "        (gate): Top2Gating()\n",
       "        (experts): Experts(\n",
       "          (act): GELU(approximate='none')\n",
       "        )\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "  )\n",
       "  (norm): LayerNorm((768,), eps=1e-06, elementwise_affine=True)\n",
       "  (fc_norm): Identity()\n",
       "  (head_drop): Dropout(p=0.0, inplace=False)\n",
       "  (head): Linear(in_features=768, out_features=1000, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "vit_moe.cuda()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wed Nov 29 20:01:41 2023       \n",
      "+---------------------------------------------------------------------------------------+\n",
      "| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |\n",
      "|-----------------------------------------+----------------------+----------------------+\n",
      "| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |\n",
      "| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |\n",
      "|                                         |                      |               MIG M. |\n",
      "|=========================================+======================+======================|\n",
      "|   0  NVIDIA GeForce RTX 4090        Off | 00000000:41:00.0 Off |                  Off |\n",
      "|  0%   32C    P2              61W / 450W |   3982MiB / 24564MiB |      0%      Default |\n",
      "|                                         |                      |                  N/A |\n",
      "+-----------------------------------------+----------------------+----------------------+\n",
      "|   1  NVIDIA GeForce RTX 4090        Off | 00000000:43:00.0 Off |                  Off |\n",
      "| 30%   52C    P2             290W / 450W |  12788MiB / 24564MiB |     85%      Default |\n",
      "|                                         |                      |                  N/A |\n",
      "+-----------------------------------------+----------------------+----------------------+\n",
      "                                                                                         \n",
      "+---------------------------------------------------------------------------------------+\n",
      "| Processes:                                                                            |\n",
      "|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |\n",
      "|        ID   ID                                                             Usage      |\n",
      "|=======================================================================================|\n",
      "+---------------------------------------------------------------------------------------+\n"
     ]
    }
   ],
   "source": [
    "!nvidia-smi"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "光模型就占了8个G 很难想等会训练起来要多少G，估计会OOM（out of memory）,到时候别用ViTBase了 用ViTSmall"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "VisionTransformer(\n",
       "  (patch_embed): PatchEmbed(\n",
       "    (proj): Conv2d(3, 384, kernel_size=(16, 16), stride=(16, 16))\n",
       "    (norm): Identity()\n",
       "  )\n",
       "  (pos_drop): Dropout(p=0.0, inplace=False)\n",
       "  (patch_drop): Identity()\n",
       "  (norm_pre): Identity()\n",
       "  (blocks): Sequential(\n",
       "    (0): Block(\n",
       "      (norm1): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=384, out_features=1152, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=384, out_features=384, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=384, out_features=1536, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=1536, out_features=384, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (1): Block(\n",
       "      (norm1): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=384, out_features=1152, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=384, out_features=384, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=384, out_features=1536, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=1536, out_features=384, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (2): Block(\n",
       "      (norm1): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=384, out_features=1152, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=384, out_features=384, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=384, out_features=1536, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=1536, out_features=384, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (3): Block(\n",
       "      (norm1): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=384, out_features=1152, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=384, out_features=384, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=384, out_features=1536, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=1536, out_features=384, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (4): Block(\n",
       "      (norm1): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=384, out_features=1152, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=384, out_features=384, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=384, out_features=1536, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=1536, out_features=384, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (5): Block(\n",
       "      (norm1): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=384, out_features=1152, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=384, out_features=384, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=384, out_features=1536, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=1536, out_features=384, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (6): Block(\n",
       "      (norm1): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=384, out_features=1152, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=384, out_features=384, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=384, out_features=1536, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=1536, out_features=384, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (7): Block(\n",
       "      (norm1): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=384, out_features=1152, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=384, out_features=384, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=384, out_features=1536, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=1536, out_features=384, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (8): Block(\n",
       "      (norm1): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=384, out_features=1152, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=384, out_features=384, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=384, out_features=1536, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=1536, out_features=384, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (9): Block(\n",
       "      (norm1): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=384, out_features=1152, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=384, out_features=384, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=384, out_features=1536, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=1536, out_features=384, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (10): Block(\n",
       "      (norm1): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=384, out_features=1152, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=384, out_features=384, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=384, out_features=1536, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=1536, out_features=384, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "    (11): Block(\n",
       "      (norm1): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (attn): Attention(\n",
       "        (qkv): Linear(in_features=384, out_features=1152, bias=True)\n",
       "        (q_norm): Identity()\n",
       "        (k_norm): Identity()\n",
       "        (attn_drop): Dropout(p=0.0, inplace=False)\n",
       "        (proj): Linear(in_features=384, out_features=384, bias=True)\n",
       "        (proj_drop): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls1): Identity()\n",
       "      (drop_path1): Identity()\n",
       "      (norm2): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "      (mlp): Mlp(\n",
       "        (fc1): Linear(in_features=384, out_features=1536, bias=True)\n",
       "        (act): GELU(approximate='none')\n",
       "        (drop1): Dropout(p=0.0, inplace=False)\n",
       "        (norm): Identity()\n",
       "        (fc2): Linear(in_features=1536, out_features=384, bias=True)\n",
       "        (drop2): Dropout(p=0.0, inplace=False)\n",
       "      )\n",
       "      (ls2): Identity()\n",
       "      (drop_path2): Identity()\n",
       "    )\n",
       "  )\n",
       "  (norm): LayerNorm((384,), eps=1e-06, elementwise_affine=True)\n",
       "  (fc_norm): Identity()\n",
       "  (head_drop): Dropout(p=0.0, inplace=False)\n",
       "  (head): Linear(in_features=384, out_features=1000, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "timm.models.vision_transformer.vit_small_patch16_224()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Test the shared-parameter (share Param) variant of the MoE"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "from models.vit_shareparam import ViTMoEShareParam\n",
    "moe_vit_shareparam = ViTMoEShareParam(share_dim=384)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Param Name:cls_token shape:torch.Size([1, 1, 768])\n",
      "Param Name:pos_embed shape:torch.Size([1, 197, 768])\n",
      "Param Name:patch_embed.proj.weight shape:torch.Size([768, 3, 16, 16])\n",
      "Param Name:patch_embed.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.0.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.0.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.0.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.0.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.0.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.0.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.0.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.0.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.0.mlp.mlp.0.weight shape:torch.Size([3072, 384])\n",
      "Param Name:blocks.0.mlp.mlp.0.bias shape:torch.Size([3072])\n",
      "Param Name:blocks.0.mlp.mlp.2.weight shape:torch.Size([384, 3072])\n",
      "Param Name:blocks.0.mlp.mlp.2.bias shape:torch.Size([384])\n",
      "Param Name:blocks.0.mlp.gate.w_gating shape:torch.Size([384, 16])\n",
      "Param Name:blocks.0.mlp.experts.w1 shape:torch.Size([16, 384, 3072])\n",
      "Param Name:blocks.0.mlp.experts.w2 shape:torch.Size([16, 3072, 384])\n",
      "Param Name:blocks.1.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.1.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.1.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.1.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.1.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.1.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.1.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.1.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.1.mlp.mlp.0.weight shape:torch.Size([3072, 384])\n",
      "Param Name:blocks.1.mlp.mlp.0.bias shape:torch.Size([3072])\n",
      "Param Name:blocks.1.mlp.mlp.2.weight shape:torch.Size([384, 3072])\n",
      "Param Name:blocks.1.mlp.mlp.2.bias shape:torch.Size([384])\n",
      "Param Name:blocks.1.mlp.gate.w_gating shape:torch.Size([384, 16])\n",
      "Param Name:blocks.1.mlp.experts.w1 shape:torch.Size([16, 384, 3072])\n",
      "Param Name:blocks.1.mlp.experts.w2 shape:torch.Size([16, 3072, 384])\n",
      "Param Name:blocks.2.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.2.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.2.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.2.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.2.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.2.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.2.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.2.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.2.mlp.mlp.0.weight shape:torch.Size([3072, 384])\n",
      "Param Name:blocks.2.mlp.mlp.0.bias shape:torch.Size([3072])\n",
      "Param Name:blocks.2.mlp.mlp.2.weight shape:torch.Size([384, 3072])\n",
      "Param Name:blocks.2.mlp.mlp.2.bias shape:torch.Size([384])\n",
      "Param Name:blocks.2.mlp.gate.w_gating shape:torch.Size([384, 16])\n",
      "Param Name:blocks.2.mlp.experts.w1 shape:torch.Size([16, 384, 3072])\n",
      "Param Name:blocks.2.mlp.experts.w2 shape:torch.Size([16, 3072, 384])\n",
      "Param Name:blocks.3.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.3.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.3.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.3.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.3.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.3.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.3.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.3.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.3.mlp.mlp.0.weight shape:torch.Size([3072, 384])\n",
      "Param Name:blocks.3.mlp.mlp.0.bias shape:torch.Size([3072])\n",
      "Param Name:blocks.3.mlp.mlp.2.weight shape:torch.Size([384, 3072])\n",
      "Param Name:blocks.3.mlp.mlp.2.bias shape:torch.Size([384])\n",
      "Param Name:blocks.3.mlp.gate.w_gating shape:torch.Size([384, 16])\n",
      "Param Name:blocks.3.mlp.experts.w1 shape:torch.Size([16, 384, 3072])\n",
      "Param Name:blocks.3.mlp.experts.w2 shape:torch.Size([16, 3072, 384])\n",
      "Param Name:blocks.4.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.4.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.4.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.4.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.4.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.4.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.4.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.4.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.4.mlp.mlp.0.weight shape:torch.Size([3072, 384])\n",
      "Param Name:blocks.4.mlp.mlp.0.bias shape:torch.Size([3072])\n",
      "Param Name:blocks.4.mlp.mlp.2.weight shape:torch.Size([384, 3072])\n",
      "Param Name:blocks.4.mlp.mlp.2.bias shape:torch.Size([384])\n",
      "Param Name:blocks.4.mlp.gate.w_gating shape:torch.Size([384, 16])\n",
      "Param Name:blocks.4.mlp.experts.w1 shape:torch.Size([16, 384, 3072])\n",
      "Param Name:blocks.4.mlp.experts.w2 shape:torch.Size([16, 3072, 384])\n",
      "Param Name:blocks.5.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.5.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.5.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.5.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.5.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.5.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.5.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.5.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.5.mlp.mlp.0.weight shape:torch.Size([3072, 384])\n",
      "Param Name:blocks.5.mlp.mlp.0.bias shape:torch.Size([3072])\n",
      "Param Name:blocks.5.mlp.mlp.2.weight shape:torch.Size([384, 3072])\n",
      "Param Name:blocks.5.mlp.mlp.2.bias shape:torch.Size([384])\n",
      "Param Name:blocks.5.mlp.gate.w_gating shape:torch.Size([384, 16])\n",
      "Param Name:blocks.5.mlp.experts.w1 shape:torch.Size([16, 384, 3072])\n",
      "Param Name:blocks.5.mlp.experts.w2 shape:torch.Size([16, 3072, 384])\n",
      "Param Name:blocks.6.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.6.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.6.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.6.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.6.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.6.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.6.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.6.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.6.mlp.mlp.0.weight shape:torch.Size([3072, 384])\n",
      "Param Name:blocks.6.mlp.mlp.0.bias shape:torch.Size([3072])\n",
      "Param Name:blocks.6.mlp.mlp.2.weight shape:torch.Size([384, 3072])\n",
      "Param Name:blocks.6.mlp.mlp.2.bias shape:torch.Size([384])\n",
      "Param Name:blocks.6.mlp.gate.w_gating shape:torch.Size([384, 16])\n",
      "Param Name:blocks.6.mlp.experts.w1 shape:torch.Size([16, 384, 3072])\n",
      "Param Name:blocks.6.mlp.experts.w2 shape:torch.Size([16, 3072, 384])\n",
      "Param Name:blocks.7.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.7.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.7.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.7.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.7.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.7.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.7.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.7.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.7.mlp.mlp.0.weight shape:torch.Size([3072, 384])\n",
      "Param Name:blocks.7.mlp.mlp.0.bias shape:torch.Size([3072])\n",
      "Param Name:blocks.7.mlp.mlp.2.weight shape:torch.Size([384, 3072])\n",
      "Param Name:blocks.7.mlp.mlp.2.bias shape:torch.Size([384])\n",
      "Param Name:blocks.7.mlp.gate.w_gating shape:torch.Size([384, 16])\n",
      "Param Name:blocks.7.mlp.experts.w1 shape:torch.Size([16, 384, 3072])\n",
      "Param Name:blocks.7.mlp.experts.w2 shape:torch.Size([16, 3072, 384])\n",
      "Param Name:blocks.8.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.8.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.8.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.8.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.8.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.8.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.8.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.8.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.8.mlp.mlp.0.weight shape:torch.Size([3072, 384])\n",
      "Param Name:blocks.8.mlp.mlp.0.bias shape:torch.Size([3072])\n",
      "Param Name:blocks.8.mlp.mlp.2.weight shape:torch.Size([384, 3072])\n",
      "Param Name:blocks.8.mlp.mlp.2.bias shape:torch.Size([384])\n",
      "Param Name:blocks.8.mlp.gate.w_gating shape:torch.Size([384, 16])\n",
      "Param Name:blocks.8.mlp.experts.w1 shape:torch.Size([16, 384, 3072])\n",
      "Param Name:blocks.8.mlp.experts.w2 shape:torch.Size([16, 3072, 384])\n",
      "Param Name:blocks.9.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.9.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.9.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.9.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.9.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.9.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.9.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.9.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.9.mlp.mlp.0.weight shape:torch.Size([3072, 384])\n",
      "Param Name:blocks.9.mlp.mlp.0.bias shape:torch.Size([3072])\n",
      "Param Name:blocks.9.mlp.mlp.2.weight shape:torch.Size([384, 3072])\n",
      "Param Name:blocks.9.mlp.mlp.2.bias shape:torch.Size([384])\n",
      "Param Name:blocks.9.mlp.gate.w_gating shape:torch.Size([384, 16])\n",
      "Param Name:blocks.9.mlp.experts.w1 shape:torch.Size([16, 384, 3072])\n",
      "Param Name:blocks.9.mlp.experts.w2 shape:torch.Size([16, 3072, 384])\n",
      "Param Name:blocks.10.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.10.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.10.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.10.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.10.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.10.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.10.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.10.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.10.mlp.mlp.0.weight shape:torch.Size([3072, 384])\n",
      "Param Name:blocks.10.mlp.mlp.0.bias shape:torch.Size([3072])\n",
      "Param Name:blocks.10.mlp.mlp.2.weight shape:torch.Size([384, 3072])\n",
      "Param Name:blocks.10.mlp.mlp.2.bias shape:torch.Size([384])\n",
      "Param Name:blocks.10.mlp.gate.w_gating shape:torch.Size([384, 16])\n",
      "Param Name:blocks.10.mlp.experts.w1 shape:torch.Size([16, 384, 3072])\n",
      "Param Name:blocks.10.mlp.experts.w2 shape:torch.Size([16, 3072, 384])\n",
      "Param Name:blocks.11.norm1.weight shape:torch.Size([768])\n",
      "Param Name:blocks.11.norm1.bias shape:torch.Size([768])\n",
      "Param Name:blocks.11.attn.qkv.weight shape:torch.Size([2304, 768])\n",
      "Param Name:blocks.11.attn.qkv.bias shape:torch.Size([2304])\n",
      "Param Name:blocks.11.attn.proj.weight shape:torch.Size([768, 768])\n",
      "Param Name:blocks.11.attn.proj.bias shape:torch.Size([768])\n",
      "Param Name:blocks.11.norm2.weight shape:torch.Size([768])\n",
      "Param Name:blocks.11.norm2.bias shape:torch.Size([768])\n",
      "Param Name:blocks.11.mlp.mlp.0.weight shape:torch.Size([3072, 384])\n",
      "Param Name:blocks.11.mlp.mlp.0.bias shape:torch.Size([3072])\n",
      "Param Name:blocks.11.mlp.mlp.2.weight shape:torch.Size([384, 3072])\n",
      "Param Name:blocks.11.mlp.mlp.2.bias shape:torch.Size([384])\n",
      "Param Name:blocks.11.mlp.gate.w_gating shape:torch.Size([384, 16])\n",
      "Param Name:blocks.11.mlp.experts.w1 shape:torch.Size([16, 384, 3072])\n",
      "Param Name:blocks.11.mlp.experts.w2 shape:torch.Size([16, 3072, 384])\n",
      "Param Name:norm.weight shape:torch.Size([768])\n",
      "Param Name:norm.bias shape:torch.Size([768])\n",
      "Param Name:head.weight shape:torch.Size([1000, 768])\n",
      "Param Name:head.bias shape:torch.Size([1000])\n"
     ]
    }
   ],
   "source": [
     "# list every parameter with its shape to verify the shared-MLP + expert layout\n",
     "for name, param in moe_vit_shareparam.named_parameters():\n",
     "    print(\"Param Name:{} shape:{}\".format(name, param.shape))"
   ]
  }
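  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The dump above shows that in each block, `mlp` holds both a dense MLP over the 384-dim shared subspace (`mlp.mlp.*`, weights [3072, 384] and [384, 3072]) and 16 experts (`experts.w1/w2`, [16, 384, 3072] and [16, 3072, 384]). A quick sanity check (a sketch added for illustration, reusing the `moe_vit_shareparam` built above) is to total the parameters and split the expert weights out from the dense ones:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# total parameter count, and the share contributed by the per-expert weights\n",
    "total = sum(p.numel() for p in moe_vit_shareparam.parameters())\n",
    "expert = sum(p.numel() for n, p in moe_vit_shareparam.named_parameters() if '.experts.' in n)\n",
    "print('total: {:.1f}M, expert: {:.1f}M, dense: {:.1f}M'.format(\n",
    "    total / 1e6, expert / 1e6, (total - expert) / 1e6))"
   ]
  }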
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "moe",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
