{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Building a Chinese LLM from Scratch | 🚀 Day 04"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Having finished **data preprocessing**, today we turn to the **model configuration**.\n",
     "\n",
     "The configuration files `litgpt` uses differ somewhat from those of `transformers`. Its repository provides sample `yaml` [configuration files](https://github.com/Lightning-AI/litgpt/tree/main/config_hub) for pretraining, which are mainly useful when you need a custom model.\n",
     "\n",
     "`litgpt` also ships with the configurations of a number of [existing models](https://github.com/Lightning-AI/litgpt/blob/main/litgpt/config.py) from `huggingface`, which can be used directly."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Training configuration file\n",
     "Below is the configuration file I defined for this run.\n",
     "\n",
     "It is fairly long, but it is listed in full; feel free to skip ahead to the explanations of the key parameters."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```yaml\n",
    "# The name of the model to pretrain. Choose from names in ``litgpt.config``. Mutually exclusive with\n",
    "# ``model_config``. (type: Optional[str], default: null)\n",
    "model_name: microstories\n",
    "\n",
     "# A ``litgpt.Config`` object to define the model architecture. Mutually exclusive with\n",
     "# ``model_name``. (type: Optional[Config], default: null)\n",
    "model_config:\n",
    "  name: microstories\n",
    "  hf_config: {}\n",
    "  scale_embeddings: false\n",
    "  block_size: 512\n",
    "  padded_vocab_size: 65024\n",
    "  vocab_size: 64798\n",
    "  n_layer: 6\n",
    "  n_head: 6\n",
    "  n_query_groups: 6\n",
    "  n_embd: 512\n",
    "  head_size: 48\n",
    "  rotary_percentage: 1.0\n",
    "  parallel_residual: false\n",
    "  bias: false\n",
    "  norm_class_name: RMSNorm\n",
    "  mlp_class_name: LLaMAMLP\n",
    "  intermediate_size: 768\n",
    "\n",
    "# Directory in which to save checkpoints and logs. If running in a Lightning Studio Job, look for it in\n",
    "# /teamspace/jobs/<job-name>/share. (type: <class 'Path'>, default: out/pretrain)\n",
    "out_dir: Chinese_LLM_From_Scratch/Experiments/Output/pretrain/microstories\n",
    "\n",
    "# The precision to use for pretraining. Possible choices: \"bf16-true\", \"bf16-mixed\", \"32-true\". (type: Optional[str], default: null)\n",
    "precision: bf16-mixed\n",
    "\n",
    "# Optional path to a checkpoint directory to initialize the model from.\n",
    "# Useful for continued pretraining. Mutually exclusive with ``resume``. (type: Optional[Path], default: null)\n",
    "initial_checkpoint_dir:\n",
    "\n",
    "# Path to a checkpoint directory to resume from in case training was interrupted, or ``True`` to resume\n",
    "# from the latest checkpoint in ``out_dir``. An error will be raised if no checkpoint is found. Passing\n",
    "# ``'auto'`` will resume from the latest checkpoint but not error if no checkpoint exists.\n",
    "# (type: Union[bool, Literal[\"auto\"], Path], default: False)\n",
    "resume: true\n",
    "\n",
    "# Data-related arguments. If not provided, the default is ``litgpt.data.TinyLlama``.\n",
    "data:\n",
    "  # TinyStories\n",
    "  class_path: litgpt.data.LitData\n",
    "  init_args:\n",
    "    data_path: Chinese_LLM_From_Scratch/Data/TinyStoriesChinese/processed_data\n",
    "    split_names:\n",
    "      - train\n",
    "      - val\n",
    "\n",
    "# Training-related arguments. See ``litgpt.args.TrainArgs`` for details\n",
    "train:\n",
    "  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)\n",
    "  save_interval: 1000\n",
    "\n",
    "  # Number of iterations between logging calls (type: int, default: 1)\n",
    "  log_interval: 1\n",
    "\n",
    "  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 512)\n",
    "  global_batch_size: 512\n",
    "\n",
    "  # Number of samples per data-parallel rank (type: int, default: 4)\n",
    "  micro_batch_size: 32\n",
    "\n",
    "  # Number of iterations with learning rate warmup active (type: int, default: 2000)\n",
    "  lr_warmup_steps: 1000\n",
    "\n",
    "  # Number of epochs to train on (type: Optional[int], default: null)\n",
    "  epochs:\n",
    "\n",
    "  # Total number of tokens to train on (type: Optional[int], default: 3000000000000)\n",
    "  max_tokens: 3000000000000\n",
    "\n",
    "  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)\n",
    "  max_steps:\n",
    "\n",
    "  # Limits the length of samples. Off by default (type: Optional[int], default: null)\n",
    "  max_seq_length: 512\n",
    "\n",
    "  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: False)\n",
    "  tie_embeddings: true\n",
    "\n",
    "  #   (type: Optional[float], default: 1.0)\n",
    "  max_norm: 1.0\n",
    "\n",
    "  #   (type: float, default: 4e-05)\n",
    "  min_lr: 0.0\n",
    "\n",
    "# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details\n",
    "eval:\n",
    "  # Number of optimizer steps between evaluation calls (type: int, default: 1000)\n",
    "  interval: 2000\n",
    "\n",
    "  # Number of tokens to generate (type: Optional[int], default: null)\n",
    "  max_new_tokens:\n",
    "\n",
    "  # Number of iterations (type: int, default: 100)\n",
    "  max_iters: 100\n",
    "\n",
    "  # Whether to evaluate on the validation set at the beginning of the training\n",
    "  initial_validation: false\n",
    "\n",
    "  # Whether to evaluate on the validation set at the end the training\n",
    "  final_validation: false\n",
    "\n",
    "# Optimizer-related arguments\n",
    "optimizer:\n",
    "  class_path: torch.optim.AdamW\n",
    "\n",
    "  init_args:\n",
    "    #   (type: float, default: 0.001)\n",
    "    lr: 0.0005\n",
    "\n",
    "    #   (type: float, default: 0.01)\n",
    "    weight_decay: 0.1\n",
    "\n",
    "    #   (type: tuple, default: (0.9,0.999))\n",
    "    betas:\n",
    "      - 0.9\n",
    "      - 0.95\n",
    "\n",
    "# How many devices/GPUs to use. Uses all GPUs by default. (type: Union[int, str], default: auto)\n",
    "devices: auto\n",
    "\n",
    "# How many nodes to use. (type: int, default: 1)\n",
    "num_nodes: 1\n",
    "\n",
    "# Optional path to the tokenizer dir that was used for preprocessing the dataset. Only some data\n",
    "# module require this. (type: Optional[Path], default: null)\n",
    "tokenizer_dir: Chinese_LLM_From_Scratch/References/chatglm3-6b\n",
    "\n",
    "# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: tensorboard)\n",
    "logger_name: wandb\n",
    "\n",
    "# The random seed to use for reproducibility. (type: int, default: 42)\n",
    "seed: 42\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### model_config"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```yaml\n",
    "model_config:\n",
    "  name: microstories\n",
    "  hf_config: {}\n",
    "  scale_embeddings: false\n",
    "  block_size: 512\n",
    "  padded_vocab_size: 65024\n",
    "  vocab_size: 64798\n",
    "  n_layer: 6\n",
    "  n_head: 6\n",
    "  n_query_groups: 6\n",
    "  n_embd: 512\n",
    "  head_size: 48\n",
    "  rotary_percentage: 1.0\n",
    "  parallel_residual: false\n",
    "  bias: false\n",
    "  norm_class_name: RMSNorm\n",
    "  mlp_class_name: LLaMAMLP\n",
    "  intermediate_size: 768\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "- `scale_embeddings` controls whether the embeddings are scaled.\n",
     "  \n",
     "  ![scale_embedding](https://erxuanyi-1257355350.cos.ap-beijing.myqcloud.com/scale_embedding.png)\n",
     "  \n",
     "  If it is `True`, the `forward` pass scales the `embedding` output. Note that this scaling is not the same as the scaling inside `self-attention`; don't confuse the two.\n",
     "  There has been quite a bit of discussion about whether this step is actually **necessary**; in practice it seems to make little difference, so it can be set to `False`.\n",
     "- `block_size` is the `transformer` context length, i.e. `max_seq_length`.\n",
     "- `padded_vocab_size` and `vocab_size` are taken directly from the `tokenizer`.\n",
     "- `n_layer` and `n_head` are both `6`, giving a `transformer` with `6` layers and `6` heads.\n",
     "- `n_query_groups` is `6`. This is the `GQA (Grouped-Query Attention)` parameter that controls how the `query` heads are grouped; when `n_query_groups` equals `n_head`, it reduces to `MHA (Multi-Head Attention)`. The figure below makes this intuitive:\n",
     "  \n",
     "  ![GQA_2](https://erxuanyi-1257355350.cos.ap-beijing.myqcloud.com/GQA_2.png)\n",
     "\n",
     "- The per-head size `head_size` is `48`, and `n_embd` is `512`.\n",
     "- `rotary_percentage` is `1.0`. This parameter belongs to `Rotary Position Embedding (RoPE)` and controls the fraction of each head's dimensions the rotary encoding is applied to; we won't go into the details here.\n",
     "- `parallel_residual` is `false`. The difference between a `parallel residual` and a `non-parallel residual` block is illustrated in this figure:\n",
     "  \n",
     "  ![parallel_residual](https://erxuanyi-1257355350.cos.ap-beijing.myqcloud.com/parallel_residual.png)\n",
     "- `bias` controls whether the `Linear` layers have a `bias` term; most current models set it to `false`.\n",
     "- `norm_class_name` is `RMSNorm` and `mlp_class_name` is `LLaMAMLP`; see the implementations in `litgpt`'s [`model.py`](https://github.com/Lightning-AI/litgpt/blob/main/litgpt/model.py#L30).\n",
     "- `intermediate_size` is `768`, the hidden size of the `MLP` mentioned above.\n",
     "\n",
     "With this configuration the model has roughly `44M` parameters, i.e. only about `0.044B`.\n",
     "\n",
     "According to Microsoft's [TinyStories](https://arxiv.org/pdf/2305.07759) paper, however, models in the `10-80M` range can already do quite well on a simple language task like short-story generation (they can still produce coherent text)."
   ]
  },
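  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the `44M` figure, the parameter count can be estimated by hand. The sketch below is illustrative rather than `litgpt`'s own code; the shapes assume the LLaMA-style block described above (no biases, tied embeddings, `RMSNorm`, gated `LLaMAMLP`):\n",
    "\n",
    "```python\n",
    "# Back-of-the-envelope parameter count for the config above (illustrative sketch).\n",
    "n_layer, n_head, n_query_groups = 6, 6, 6\n",
    "n_embd, head_size, intermediate_size = 512, 48, 768\n",
    "padded_vocab_size = 65024\n",
    "\n",
    "embedding = padded_vocab_size * n_embd  # shared with the LM head (tie_embeddings: true)\n",
    "qkv = n_embd * (n_head + 2 * n_query_groups) * head_size  # fused q/k/v projection\n",
    "attn_proj = (n_head * head_size) * n_embd  # n_head * head_size = 288, which need not equal n_embd\n",
    "mlp = 2 * n_embd * intermediate_size + intermediate_size * n_embd  # fc_1, fc_2, proj\n",
    "norms = 2 * n_embd  # two RMSNorm weight vectors per block\n",
    "\n",
    "total = embedding + n_layer * (qkv + attn_proj + mlp + norms) + n_embd  # + final norm\n",
    "print(f'{total / 1e6:.1f}M parameters')  # ~43.9M\n",
    "```"
   ]
  },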
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Other parameters\n",
     "\n",
     "The rest are common training parameters such as `batch_size`, `lr`, and `weight_decay`, which need no further explanation.\n",
     "\n",
     "For the `logger` I chose `wandb`, so the training metrics can be monitored directly on `wandb`.\n",
     "\n",
     "`data` is set to the path of the dataset preprocessed earlier (it also specifies the `litdata` class used to load the data).\n",
     "\n",
     "`tokenizer_dir` is the path to the `tokenizer` you picked or trained yourself."
   ]
  },
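  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One relationship in the `train` section is worth spelling out: the optimizer only steps once `global_batch_size` samples have been processed, so with `micro_batch_size: 32` each rank accumulates gradients over several forward/backward passes. A quick sketch of the arithmetic (assuming a single device here; with more devices the accumulation steps shrink proportionally):\n",
    "\n",
    "```python\n",
    "# Effective-batch arithmetic for the train section above (illustrative sketch).\n",
    "global_batch_size = 512  # samples per optimizer step across all ranks\n",
    "micro_batch_size = 32    # samples per forward/backward pass on one rank\n",
    "max_seq_length = 512\n",
    "devices = 1              # assumption for this sketch; 'auto' would use all GPUs\n",
    "\n",
    "# gradient-accumulation steps per device before each optimizer step\n",
    "accum_steps = global_batch_size // (micro_batch_size * devices)\n",
    "# tokens consumed per optimizer step (fixed-length samples)\n",
    "tokens_per_step = global_batch_size * max_seq_length\n",
    "\n",
    "print(accum_steps, tokens_per_step)  # 16 262144\n",
    "```"
   ]
  },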
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Launching training"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```bash\n",
    "litgpt pretrain --config Experiments/configs/microstories.yaml\n",
    "```\n",
     "The pretraining launch command is very simple: just point it at the configuration file above.\n",
     "\n",
     "If all goes well the model starts training, and the metrics can be followed on `wandb`.\n",
     "\n",
     "My model has in fact been training for a while; here are the charts from the run so far:\n",
    "\n",
    "![image](https://erxuanyi-1257355350.cos.ap-beijing.myqcloud.com/image.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Summary\n",
     "1. Walked through `litgpt`'s pretraining configuration file in detail.\n",
     "2. Explained the principles behind some of the important parameters along the way.\n",
     "3. Launched training."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "bigmodel",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
