{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 代码实现ppo"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "先把本教程中的mask忽略，加入了一些mask写的有点乱"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "trl代码中的对于ppo的实现\n",
    "https://github.com/huggingface/trl/blob/main/trl/trainer/ppo_trainer.py\n",
    "\n",
    "https://mp.weixin.qq.com/s/S72LO26IsZ8AED8sQKIWnQ\n",
    "\n",
    "讲了PPO  loss max https://zhuanlan.zhihu.com/p/28223597805\n",
    "\n",
    "https://zhuanlan.zhihu.com/p/677607581"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面为你解释这些参数的含义：\n",
    "\n",
    "### 模型架构相关参数\n",
    "1. **`vocab_size = 10`**\n",
    "词汇表的大小代表了模型能够识别的不同词汇的数量。举例来说，若你正在处理的是一个简单的数字文本任务，其中仅有 0 - 9 这 10 个数字，那么 `vocab_size` 就会被设定为 10。\n",
    "\n",
    "2. **`hidden_size = 128`**\n",
    "隐藏层的维度大小表明了模型中每个隐藏层神经元的数量。在神经网络里，隐藏层会对输入数据进行特征提取与转换。`hidden_size` 越大，模型所能学习到的特征就越复杂，不过这也会使计算量和内存需求增加。\n",
    "\n",
    "3. **`intermediate_size = 256`**\n",
    "在 Transformer 架构里，`intermediate_size` 指的是前馈神经网络（FFN）中间层的维度。FFN 一般由两个线性层构成，中间层的维度通常会比输入输出层的维度大，这样有助于模型学习到更丰富的特征。\n",
    "\n",
    "4. **`num_hidden_layers = 2`**\n",
    "隐藏层的数量意味着模型中堆叠的隐藏层的层数。层数越多，模型的表达能力就越强，能够学习到更复杂的模式，但同时也会增加过拟合的风险以及训练的难度。\n",
    "\n",
    "5. **`num_attention_heads = 4`**\n",
    "注意力头的数量是指在多头注意力机制中并行的注意力头的个数。多头注意力机制能够让模型从不同的表示子空间中捕捉特征，提升模型的表达能力。\n",
    "\n",
    "6. **`num_key_value_heads = 4`**\n",
    "键值对注意力头的数量在某些改进的注意力机制中会用到，它决定了用于计算键（key）和值（value）的注意力头的数量。在标准的多头注意力机制里，`num_key_value_heads` 通常和 `num_attention_heads` 相等。\n",
    "\n",
    "### 数据处理和生成相关参数\n",
    "7. **`batch_size = 5`**\n",
    "批量大小代表了在一次训练或者推理过程中同时处理的样本数量。使用较大的批量大小能够提升训练效率，但会增加内存的需求；而较小的批量大小则可以减少内存使用，但会使训练速度变慢。\n",
    "\n",
    "8. **`length_x = 5`**\n",
    "输入序列的长度指的是每个输入样本的长度。在处理文本时，它代表的是输入文本中词元（token）的数量。\n",
    "\n",
    "9. **`max_new_tokens = 5`**\n",
    "最大新生成的词元数量表示在文本生成任务中，模型最多可以生成的词元数量。例如在文本续写任务里，这个参数会限制模型生成的文本长度。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "vocab_size = 10   #当前教程实际使用的时候是词汇表实际大小\n",
    "hidden_size = 128\n",
    "intermediate_size = 256\n",
    "num_hidden_layers = 2\n",
    "num_attention_heads = 4\n",
    "batch_size = 3\n",
    "length_x = 5\n",
    "max_new_tokens = 5"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 初始化actor模型\n",
    "\n",
    "以GPT2为例，初始化模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/anaconda3/envs/llm/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "from transformers import GPT2Config, GPT2LMHeadModel\n",
    "\n",
    "torch.manual_seed(1)\n",
    "\n",
    "# 定义参数\n",
    "vocab_size = 10\n",
    "hidden_size = 128\n",
    "intermediate_size = 256\n",
    "num_hidden_layers = 2\n",
    "num_attention_heads = 4\n",
    "\n",
    "# 加载模型配置\n",
    "config = GPT2Config(\n",
    "    vocab_size=50257,\n",
    "    n_embd=hidden_size,\n",
    "    n_inner=intermediate_size,\n",
    "    n_layer=num_hidden_layers,\n",
    "    n_head=num_attention_heads\n",
    ")\n",
    "\n",
    "# 初始化 GPT - 2 模型\n",
    "model = GPT2LMHeadModel(config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## model generate\n",
    "\n",
    "主要看下inputs_ids和attention_mask的含义"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### inputs_ids\n",
    "\n",
    "input_ids：它是一个张量（tensor），表示文本被分词后每个词（token）对应的 ID。比如在第一行 [20015, 232, 25465, ...] 中，每个数字都是原文本中一个词被 GPT - 2 分词器转换后的唯一标识。不同模型的词表不同，这些 ID 对应的具体词汇也不一样。这里第一行可能对应一句中文文本分词结果，第二行 [14150, 257, 922, ...] 前半部分对应英文文本，后半部分 50256 一般是填充值 ，表示补齐固定长度。\n",
    "\n",
    "\n",
    "attention_mask：同样是张量，用于指示哪些位置是有效的词（值为 1），哪些位置是填充的（值为 0） 。比如第二行 [1, 1, 1, 1, 0, 0, 0, 0, 0, 0] 表示前 4 个词是有效输入，后面是填充的，模型在处理时会忽略填充位置。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "inputs_ids可以认为是要输入的文本经过tokenizer处理后的结果，而attention_mask则是用于指示哪些位置是有效的词（值为 1），哪些位置是填充的（值为 0） 。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'input_ids': tensor([[20015,   232, 25465, 25465, 36365,   242, 38834,   165,   242,   247],\n",
      "        [14150,   257,   922,  1110, 50256, 50256, 50256, 50256, 50256, 50256]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
      "        [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]])}\n"
     ]
    }
   ],
   "source": [
    "from transformers import GPT2Tokenizer\n",
    "import torch\n",
    "\n",
    "# 初始化 GPT - 2 分词器\n",
    "tokenizer = GPT2Tokenizer.from_pretrained('gpt2')\n",
    "# 设置padding token\n",
    "tokenizer.pad_token = tokenizer.eos_token  # 使用EOS token作为padding token\n",
    "\n",
    "# 输入文本\n",
    "inputs = ['今天天气不错', 'have a good day']\n",
    "\n",
    "# 对输入进行分词处理\n",
    "inputs = tokenizer(inputs, return_tensors='pt',padding=True, truncation=True)\n",
    "\n",
    "print(inputs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n",
      "Setting `pad_token_id` to `eos_token_id`:None for open-end generation.\n",
      "The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n",
      "A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[20015,   232, 25465, 25465, 36365,   242, 38834,   165,   242,   247,\n",
      "           247,   247,   247,   247,   247],\n",
      "        [14150,   257,   922,  1110, 50256, 50256, 50256, 50256, 50256, 50256,\n",
      "         50256, 50256, 50256, 50256, 50256]])\n"
     ]
    }
   ],
   "source": [
    "# Note: attention_mask is not passed here, which triggers the warnings in the output\n",
    "output_ids = model.generate(inputs['input_ids'], max_new_tokens=max_new_tokens)\n",
    "print(output_ids)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['今天天气不错�����', 'have a good day']\n"
     ]
    }
   ],
   "source": [
    "output_ids = tokenizer.batch_decode(output_ids, skip_special_tokens=True)\n",
    "print(output_ids)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "填充左边和右边会导致input_ids中padding_id的位置不一样，导致attention_mask中padding_id的位置不一样，导致模型在处理时会忽略填充位置。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n",
      "Setting `pad_token_id` to `eos_token_id`:None for open-end generation.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'input_ids': tensor([[20015,   232, 25465, 25465, 36365,   242, 38834,   165,   242,   247],\n",
      "        [50256, 50256, 50256, 50256, 50256, 50256, 14150,   257,   922,  1110]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
      "        [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]])}\n",
      "tensor([[20015,   232, 25465, 25465, 36365,   242, 38834,   165,   242,   247,\n",
      "           247,   247,   247,   247,   247],\n",
      "        [50256, 50256, 50256, 50256, 50256, 50256, 14150,   257,   922,  1110,\n",
      "          1110,  1110,  1110,  1110,  1110]])\n",
      "['今天天气不错�����', 'have a good day day day day day day']\n"
     ]
    }
   ],
   "source": [
    "tokenizer.padding_side = 'left'\n",
    "inputs = ['今天天气不错', 'have a good day']\n",
    "inputs = tokenizer(inputs, return_tensors='pt',padding=True, truncation=True)\n",
    "\n",
    "print(inputs)\n",
    "\n",
    "output_ids = model.generate(inputs['input_ids'], max_new_tokens=max_new_tokens)\n",
    "\n",
    "print(output_ids)\n",
    "\n",
    "output_ids = tokenizer.batch_decode(output_ids, skip_special_tokens=True)\n",
    "print(output_ids)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 现在开始正式讲rlhf流程"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 初始化reward model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "根据之前的定义，奖励模型可以从模型的输出中提取出最后一个token的隐藏状态，然后通过一个线性层计算奖励。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "假设batch_size = 2, sequence_length = 4\n",
    "input_ids = torch.tensor([\n",
    "    [1, 2, 3, 4],  # 第一个序列\n",
    "    [5, 6, 7, 8]   # 第二个序列\n",
    "])\n",
    "\n",
    "attention_mask = torch.tensor([\n",
    "    [1, 1, 1, 0],  # 第一个序列有效长度为3\n",
    "    [1, 1, 1, 1]   # 第二个序列有效长度为4\n",
    "])\n",
    "\n",
    "sequence_length = attention_mask.sum(dim=1).long() - 1\n",
    "\n",
    "结果: tensor([2, 3])\n",
    "\n",
    "第一个序列：3-1=2（索引从0开始）\n",
    "\n",
    "第二个序列：4-1=3\n",
    "\n",
    "batch_indices = torch.arange(batch_size)\n",
    "\n",
    "结果: tensor([0, 1])\n",
    "\n",
    "假设hidden_size = 2\n",
    "\n",
    "last_hidden_state = torch.tensor([\n",
    "    [[1.0, 1.1], [2.0, 2.1], [3.0, 3.1], [4.0, 4.1]],  # 第一个序列\n",
    "    [[5.0, 5.1], [6.0, 6.1], [7.0, 7.1], [8.0, 8.1]]   # 第二个序列\n",
    "])\n",
    "\n",
    "使用batch_indices和sequence_length提取\n",
    "\n",
    "result = last_hidden_state[batch_indices, sequence_length]\n",
    "\n",
    "结果: tensor([[3.0, 3.1],    # 第一个序列的第2个位置（索引从0开始）\n",
    "\n",
    "[8.0, 8.1]])   # 第二个序列的第3个位置"
   ]
  },
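  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The walkthrough above can be checked with a small runnable sketch (same assumed shapes and values):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "attention_mask = torch.tensor([[1, 1, 1, 0],\n",
    "                               [1, 1, 1, 1]])\n",
    "last_hidden_state = torch.tensor([\n",
    "    [[1.0, 1.1], [2.0, 2.1], [3.0, 3.1], [4.0, 4.1]],\n",
    "    [[5.0, 5.1], [6.0, 6.1], [7.0, 7.1], [8.0, 8.1]]])\n",
    "\n",
    "sequence_length = attention_mask.sum(dim=1).long() - 1  # tensor([2, 3])\n",
    "batch_indices = torch.arange(attention_mask.shape[0])   # tensor([0, 1])\n",
    "result = last_hidden_state[batch_indices, sequence_length]\n",
    "print(result)  # rows [3.0, 3.1] and [8.0, 8.1]\n",
    "```"
   ]
  },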
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "class GPTRewardModel(torch.nn.Module):\n",
    "    def __init__(self, gpt_model, reward_head):\n",
    "        super(GPTRewardModel, self).__init__()\n",
    "        self.gpt_model = gpt_model\n",
    "        self.reward_head = reward_head\n",
    "        \n",
    "    def forward(self, input_ids, attention_mask):\n",
    "        # 获取模型的输出\n",
    "        outputs = self.gpt_model(input_ids=input_ids, attention_mask=attention_mask)\n",
    "        # 通常取最后一个隐藏状态作为输出\n",
    "        last_hidden_state = outputs.hidden_states[-1]\n",
    "        batch_size = input_ids.shape[0]\n",
    "        # 确保sequence_length是long类型\n",
    "        sequence_length = attention_mask.sum(dim=1).long() - 1\n",
    "        # 使用torch.arange并确保在正确的设备上\n",
    "        batch_indices = torch.arange(batch_size, device=input_ids.device).long()\n",
    "        last_hidden_state = last_hidden_state[batch_indices, sequence_length]\n",
    "        print(f\"last_hidden_state shape: {last_hidden_state.shape}, sequence_length: {sequence_length.shape}\")\n",
    "        # 计算奖励\n",
    "        rewards = self.reward_head(last_hidden_state)\n",
    "        return rewards\n",
    "\n",
    "# 重新初始化模型\n",
    "model.config.output_hidden_states = True\n",
    "rm_model = GPTRewardModel(model, torch.nn.Linear(hidden_size, 1)) ## 这里的reward_head是一个线性层，将最后一个隐藏状态映射到奖励值"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[20015,   232, 25465, 25465, 36365,   242, 38834,   165,   242,   247],\n",
       "        [50256, 50256, 50256, 50256, 50256, 50256, 14150,   257,   922,  1110]])"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "inputs['input_ids']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "last_hidden_state shape: torch.Size([2, 128]), sequence_length: torch.Size([2])\n",
      "tensor([[-0.1647],\n",
      "        [-0.2839]], grad_fn=<AddmmBackward0>)\n"
     ]
    }
   ],
   "source": [
    "reward = rm_model(inputs['input_ids'], inputs['attention_mask'])\n",
    "print(reward)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 简化版ppo\n",
    "从以上过程可以看出，我们输入给模型的其实是input_ids和attention_mask，所以我们现在为了展示方便，构造一个没有实际意义的输入，输入给模型，然后输出奖励。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = torch.randint(0, vocab_size, (batch_size, length_x))\n",
    "response = torch.randint(0, vocab_size, (batch_size, length_x + max_new_tokens))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[5, 0, 0, 1, 0],\n",
      "        [4, 8, 1, 4, 1],\n",
      "        [9, 6, 7, 0, 5]])\n",
      "tensor([[4, 8, 5, 2, 9, 5, 5, 0, 6, 3],\n",
      "        [0, 3, 0, 4, 8, 2, 6, 4, 9, 3],\n",
      "        [2, 6, 7, 5, 0, 0, 3, 3, 4, 8]])\n"
     ]
    }
   ],
   "source": [
    "print(prompt)\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们希望让模型只关注response，所以对prompt对应的mask置为0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
      "        [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
      "        [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.]])\n"
     ]
    }
   ],
   "source": [
    "attention_mask = torch.ones(batch_size, length_x+max_new_tokens)\n",
    "attention_mask[:, :length_x] = 0\n",
    "print(attention_mask)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[1., 1., 1., 1., 1.],\n",
       "        [1., 1., 1., 1., 1.],\n",
       "        [1., 1., 1., 1., 1.]])"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt_attention_mask = torch.ones(batch_size, length_x)\n",
    "prompt_attention_mask"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "创建几个模型\n",
    "\n",
    "\n",
    "model_ref 和model的配置一样\n",
    "\n",
    "reward model和value model的配置大体一样\n",
    "\n",
    "value model的输出是所有token的隐藏状态所得到的value"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/anaconda3/envs/llm/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:774: UserWarning: `return_dict_in_generate` is NOT set to `True`, but `output_hidden_states` is. When `return_dict_in_generate` is not `True`, `output_hidden_states` is ignored.\n",
      "  warnings.warn(\n"
     ]
    }
   ],
   "source": [
    "# 初始化 GPT - 2 模型\n",
    "model_ref = GPT2LMHeadModel(config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "查看区别"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "GPT2LMHeadModel(\n",
      "  (transformer): GPT2Model(\n",
      "    (wte): Embedding(50257, 128)\n",
      "    (wpe): Embedding(1024, 128)\n",
      "    (drop): Dropout(p=0.1, inplace=False)\n",
      "    (h): ModuleList(\n",
      "      (0-1): 2 x GPT2Block(\n",
      "        (ln_1): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
      "        (attn): GPT2SdpaAttention(\n",
      "          (c_attn): Conv1D(nf=384, nx=128)\n",
      "          (c_proj): Conv1D(nf=128, nx=128)\n",
      "          (attn_dropout): Dropout(p=0.1, inplace=False)\n",
      "          (resid_dropout): Dropout(p=0.1, inplace=False)\n",
      "        )\n",
      "        (ln_2): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
      "        (mlp): GPT2MLP(\n",
      "          (c_fc): Conv1D(nf=256, nx=128)\n",
      "          (c_proj): Conv1D(nf=128, nx=256)\n",
      "          (act): NewGELUActivation()\n",
      "          (dropout): Dropout(p=0.1, inplace=False)\n",
      "        )\n",
      "      )\n",
      "    )\n",
      "    (ln_f): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
      "  )\n",
      "  (lm_head): Linear(in_features=128, out_features=50257, bias=False)\n",
      ")\n",
      "GPT2LMHeadModel(\n",
      "  (transformer): GPT2Model(\n",
      "    (wte): Embedding(50257, 128)\n",
      "    (wpe): Embedding(1024, 128)\n",
      "    (drop): Dropout(p=0.1, inplace=False)\n",
      "    (h): ModuleList(\n",
      "      (0-1): 2 x GPT2Block(\n",
      "        (ln_1): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
      "        (attn): GPT2SdpaAttention(\n",
      "          (c_attn): Conv1D(nf=384, nx=128)\n",
      "          (c_proj): Conv1D(nf=128, nx=128)\n",
      "          (attn_dropout): Dropout(p=0.1, inplace=False)\n",
      "          (resid_dropout): Dropout(p=0.1, inplace=False)\n",
      "        )\n",
      "        (ln_2): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
      "        (mlp): GPT2MLP(\n",
      "          (c_fc): Conv1D(nf=256, nx=128)\n",
      "          (c_proj): Conv1D(nf=128, nx=256)\n",
      "          (act): NewGELUActivation()\n",
      "          (dropout): Dropout(p=0.1, inplace=False)\n",
      "        )\n",
      "      )\n",
      "    )\n",
      "    (ln_f): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
      "  )\n",
      "  (lm_head): Linear(in_features=128, out_features=50257, bias=False)\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "print(model_ref)\n",
    "print(model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 初始化value model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "假设我们有以下维度的数据：\n",
    "\n",
    "last_hidden_state 的形状是 [batch_size, sequence_length, hidden_size]\n",
    "\n",
    "比如 [5, 10, 128]，表示批次大小为5，序列长度为10，隐藏层维度为128\n",
    "\n",
    "self.value_head 是一个线性层 Linear(hidden_size, 1)\n",
    "\n",
    "输入维度是128，输出维度是1\n",
    "\n",
    "处理过程：\n",
    "\n",
    "self.value_head(last_hidden_state) 的操作：\n",
    "\n",
    "输入: [5, 10, 128]\n",
    "\n",
    "输出: [5, 10, 1] # 线性层将最后一个维度从128转换为1\n",
    "\n",
    "[:, :, 0] 的操作：\n",
    "\n",
    "取最后一个维度的第0个元素\n",
    "\n",
    "结果形状变为: [5, 10]"
   ]
  },
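  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The shape bookkeeping above can be verified with a tiny sketch (random data, same assumed dimensions):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "value_head = torch.nn.Linear(128, 1)         # Linear(hidden_size, 1)\n",
    "last_hidden_state = torch.randn(5, 10, 128)  # [batch, seq_len, hidden]\n",
    "values = value_head(last_hidden_state)[:, :, 0]\n",
    "print(values.shape)  # torch.Size([5, 10])\n",
    "```"
   ]
  },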
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "class GPTValueModel(torch.nn.Module):\n",
    "    def __init__(self, gpt_model, value_head):\n",
    "        super().__init__()\n",
    "        self.gpt_model = gpt_model\n",
    "        self.value_head = value_head\n",
    "        \n",
    "    def forward(self, input_ids, attention_mask):\n",
    "        outputs = self.gpt_model(input_ids=input_ids, attention_mask=attention_mask)\n",
    "        last_hidden_state = outputs.hidden_states[-1]\n",
    "        values = self.value_head(last_hidden_state)[:, :, 0]\n",
    "        return values\n",
    "    \n",
    "model.config.output_hidden_states = True\n",
    "vm_model = GPTValueModel(model,torch.nn.Linear(hidden_size, 1))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "GPTRewardModel(\n",
      "  (gpt_model): GPT2LMHeadModel(\n",
      "    (transformer): GPT2Model(\n",
      "      (wte): Embedding(50257, 128)\n",
      "      (wpe): Embedding(1024, 128)\n",
      "      (drop): Dropout(p=0.1, inplace=False)\n",
      "      (h): ModuleList(\n",
      "        (0-1): 2 x GPT2Block(\n",
      "          (ln_1): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
      "          (attn): GPT2SdpaAttention(\n",
      "            (c_attn): Conv1D(nf=384, nx=128)\n",
      "            (c_proj): Conv1D(nf=128, nx=128)\n",
      "            (attn_dropout): Dropout(p=0.1, inplace=False)\n",
      "            (resid_dropout): Dropout(p=0.1, inplace=False)\n",
      "          )\n",
      "          (ln_2): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
      "          (mlp): GPT2MLP(\n",
      "            (c_fc): Conv1D(nf=256, nx=128)\n",
      "            (c_proj): Conv1D(nf=128, nx=256)\n",
      "            (act): NewGELUActivation()\n",
      "            (dropout): Dropout(p=0.1, inplace=False)\n",
      "          )\n",
      "        )\n",
      "      )\n",
      "      (ln_f): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
      "    )\n",
      "    (lm_head): Linear(in_features=128, out_features=50257, bias=False)\n",
      "  )\n",
      "  (reward_head): Linear(in_features=128, out_features=1, bias=True)\n",
      ")\n",
      "GPTValueModel(\n",
      "  (gpt_model): GPT2LMHeadModel(\n",
      "    (transformer): GPT2Model(\n",
      "      (wte): Embedding(50257, 128)\n",
      "      (wpe): Embedding(1024, 128)\n",
      "      (drop): Dropout(p=0.1, inplace=False)\n",
      "      (h): ModuleList(\n",
      "        (0-1): 2 x GPT2Block(\n",
      "          (ln_1): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
      "          (attn): GPT2SdpaAttention(\n",
      "            (c_attn): Conv1D(nf=384, nx=128)\n",
      "            (c_proj): Conv1D(nf=128, nx=128)\n",
      "            (attn_dropout): Dropout(p=0.1, inplace=False)\n",
      "            (resid_dropout): Dropout(p=0.1, inplace=False)\n",
      "          )\n",
      "          (ln_2): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
      "          (mlp): GPT2MLP(\n",
      "            (c_fc): Conv1D(nf=256, nx=128)\n",
      "            (c_proj): Conv1D(nf=128, nx=256)\n",
      "            (act): NewGELUActivation()\n",
      "            (dropout): Dropout(p=0.1, inplace=False)\n",
      "          )\n",
      "        )\n",
      "      )\n",
      "      (ln_f): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
      "    )\n",
      "    (lm_head): Linear(in_features=128, out_features=50257, bias=False)\n",
      "  )\n",
      "  (value_head): Linear(in_features=128, out_features=1, bias=True)\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "print(rm_model)\n",
    "print(vm_model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## ppo前向过程"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "创建几个model的函数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_response(model, prompt, max_new_tokens, attention_mask):\n",
    "    inputs = {'input_ids': prompt, 'attention_mask': attention_mask}  # ignore mask，好像不需要mask\n",
    "    y = model.generate(**inputs,\n",
    "                max_new_tokens=max_new_tokens,\n",
    "                # forced_eos_token_id=True\n",
    "                )\n",
    "    return y\n",
    "\n",
    "def get_reward(model, response, attention_mask):\n",
    "    inputs   = {'input_ids': response, 'attention_mask': attention_mask}  # ignore mask\n",
    "    y = model(inputs['input_ids'], inputs['attention_mask'])\n",
    "    return y\n",
    "\n",
    "def get_value(model, prompt, attention_mask):\n",
    "    inputs = {'input_ids': prompt, 'attention_mask': attention_mask}  # ignore mask\n",
    "    y = model(inputs['input_ids'], inputs['attention_mask'])\n",
    "    return y"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[5, 0, 0, 1, 0],\n",
       "        [4, 8, 1, 4, 1],\n",
       "        [9, 6, 7, 0, 5]])"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[4, 8, 5, 2, 9, 5, 5, 0, 6, 3],\n",
       "        [0, 3, 0, 4, 8, 2, 6, 4, 9, 3],\n",
       "        [2, 6, 7, 5, 0, 0, 3, 3, 4, 8]])"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[1., 1., 1., 1., 1.],\n",
       "        [1., 1., 1., 1., 1.],\n",
       "        [1., 1., 1., 1., 1.]])"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt_attention_mask"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "        [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "        [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.]])"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "attention_mask"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在这里就可以看到，ppo流程中的reward只是在最后一个token上得到的，但是我的value model要在每一个token上得到一个价值"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Setting `pad_token_id` to `eos_token_id`:None for open-end generation.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[    5,     0,     0,     1,     0,     0,     0,     0,     0,     0],\n",
      "        [    4,     8,     1,     4,     1, 10998, 10998, 10998, 10998, 10998],\n",
      "        [    9,     6,     7,     0,     5,     5,     5,     5,     5,     5]])\n",
      "last_hidden_state shape: torch.Size([3, 128]), sequence_length: torch.Size([3])\n",
      "tensor([[-0.4702],\n",
      "        [-1.0223],\n",
      "        [-0.6396]], grad_fn=<AddmmBackward0>)\n",
      "tensor([[ 0.1054, -0.1810, -0.2179, -0.4633, -0.1662,  0.0374, -0.7071, -0.7640,\n",
      "         -1.3427,  0.2779],\n",
      "        [ 0.0424, -0.0425, -1.1631, -0.1351,  0.2049,  0.0207, -0.9090,  0.4028,\n",
      "         -0.1427,  0.6911],\n",
      "        [ 0.1912, -0.2840,  0.1110,  0.6809, -0.4596, -0.1590, -0.2637, -0.3191,\n",
      "         -0.1446,  0.9440]], grad_fn=<SelectBackward0>)\n"
     ]
    }
   ],
   "source": [
    "print(get_response(model, prompt, max_new_tokens, prompt_attention_mask))\n",
    "print(get_reward(rm_model, response, attention_mask))\n",
    "print(get_value(vm_model, response, attention_mask))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "PPO 相关设置"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "封装几个ppo的model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "class PPOModels():\n",
    "    def __init__(self, model_actor, model_ref, model_rm, model_critic):\n",
    "        self.actor = model_actor\n",
    "        self.ref = model_ref\n",
    "        self.rm = model_rm\n",
    "        self.critic = model_critic\n",
    "\n",
    "\n",
    "model_ref.eval()\n",
    "rm_model.eval()\n",
    "models = PPOModels(model, model_ref, rm_model, vm_model)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "设置ppo的超参数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. ppo_epochs在每次策略更新时，PPO 算法对收集到的数据进行迭代训练的次数。\n",
    "\n",
    "2. mini_batch_size每个训练步骤中，从收集到的数据里选取的小批量数据的样本数量。\n",
    "\n",
    "3. epochs整个训练过程中，算法对所有收集到的数据进行完整遍历的次数。\n",
    "\n",
    "4. kl_ctlKL 散度惩罚项的系数，用于控制新旧策略之间的差异程度。\n",
    "\n",
    "5. vf_coef价值函数损失的系数，用于平衡策略损失和价值函数损失在总损失中的权重。\n",
    "\n",
    "6. lam广义优势估计（GAE）中的 \\(\\lambda\\) 参数，用于平衡优势估计的偏差和方差。\n",
    "\n",
    "7. gamma折扣因子，用于计算未来奖励的折现值，决定未来奖励在当前价值估计中的重要程度。\n",
    "\n",
    "8. cliprange_value价值函数裁剪范围的参数，用于限制价值函数更新的幅度"
   ]
  },
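  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch of how `gamma` and `lam` are used, here is a minimal generalized advantage estimation (GAE) loop over made-up per-token rewards and values; the actual training loop below may organize this differently:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "gamma, lam = 0.9, 0.9\n",
    "rewards = torch.tensor([0.0, 0.0, 1.0])  # reward arrives only at the last token\n",
    "values = torch.tensor([0.1, 0.2, 0.3])   # critic estimates, one per token\n",
    "\n",
    "advantages = torch.zeros(3)\n",
    "lastgaelam = 0.0\n",
    "for t in reversed(range(3)):\n",
    "    next_value = values[t + 1] if t < 2 else 0.0\n",
    "    delta = rewards[t] + gamma * next_value - values[t]  # TD error\n",
    "    lastgaelam = delta + gamma * lam * lastgaelam\n",
    "    advantages[t] = lastgaelam\n",
    "returns = advantages + values  # targets for the value loss\n",
    "print(advantages)\n",
    "```"
   ]
  },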
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "class PPOConfig():\n",
    "    def __init__(self):\n",
    "        self.ppo_epochs = 5\n",
    "        self.mini_batch_size = 2\n",
    "        self.epochs = 4\n",
    "        self.kl_ctl = 0.1\n",
    "        self.vf_coef = 0.1\n",
    "        self.lam = 0.9\n",
    "        self.gamma = 0.9\n",
    "        self.cliprange_value = 0.2\n",
    "\n",
    "    def __str__(self):\n",
    "        return f'ppo_epochs:{self.ppo_epochs}\\nmini_batch_size:{self.mini_batch_size}\\nepochs:{self.epochs}\\nkl_ctl:{self.kl_ctl}'\n",
    "\n",
    "\n",
    "ppo_config = PPOConfig()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在每一步中ppo都在干什么"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "首先要有个列表来记录每一步的采样"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "ppo_old_batchs = {\n",
    "    'prompt': None,\n",
    "    'response': None,\n",
    "    'mask': None,\n",
    "    'logprobs_ref': None,\n",
    "    'logprobs_old': None,\n",
    "    'logprobs': None,\n",
    "    'values_old': None,\n",
    "    'values': None,\n",
    "    'rewards': None,\n",
    "    'rewards_kl': None,\n",
    "    'loss': None,\n",
    "    'logits': None,\n",
    "}\n",
    "\n",
    "ppo_old_batchs['prompt'] = prompt\n",
    "ppo_old_batchs['response'] = response\n",
    "ppo_old_batchs['mask'] = attention_mask"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'prompt': tensor([[5, 0, 0, 1, 0],\n",
       "         [4, 8, 1, 4, 1],\n",
       "         [9, 6, 7, 0, 5]]),\n",
       " 'response': tensor([[4, 8, 5, 2, 9, 5, 5, 0, 6, 3],\n",
       "         [0, 3, 0, 4, 8, 2, 6, 4, 9, 3],\n",
       "         [2, 6, 7, 5, 0, 0, 3, 3, 4, 8]]),\n",
       " 'mask': tensor([[0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "         [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "         [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.]]),\n",
       " 'logprobs_ref': None,\n",
       " 'logprobs_old': None,\n",
       " 'logprobs': None,\n",
       " 'values_old': None,\n",
       " 'values': None,\n",
       " 'rewards': None,\n",
       " 'rewards_kl': None,\n",
       " 'loss': None,\n",
       " 'logits': None}"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ppo_old_batchs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Run a forward pass to get the per-token logprobs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Step 1: `logprobs = F.log_softmax(logits, dim=-1)` applies softmax to the logits and then takes the log.\n",
    "\n",
    "`torch.gather` collects values from a tensor by index.\n",
    "\n",
    "Suppose we have:\n",
    "\n",
    "logp.shape = [1, 5, 32]      # [batch_size, seq_len, vocab_size]\n",
    "\n",
    "labels.shape = [1, 5]        # [batch_size, seq_len]\n",
    "\n",
    "1. labels.unsqueeze(2)\n",
    "\n",
    "Add a dimension at the end:\n",
    "\n",
    "labels_expanded = labels.unsqueeze(2)   # shape becomes [1, 5, 1]\n",
    "\n",
    "2. torch.gather(logp, 2, labels_expanded)\n",
    "\n",
    "dim=2 means the values are gathered along the vocabulary dimension (the 3rd dim):\n",
    "\n",
    "gathered = torch.gather(logp, 2, labels_expanded)  # shape [1, 5, 1]\n",
    "\n",
    "3. squeeze(-1)\n",
    "\n",
    "Drop the last dimension:\n",
    "\n",
    "logpy = gathered.squeeze(-1)  # final shape [1, 5]"
   ]
  },
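  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The three steps above can be checked on a tiny standalone tensor (toy shapes, independent of the notebook's models):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "\n",
    "logits = torch.randn(1, 5, 32)         # [batch_size, seq_len, vocab_size]\n",
    "labels = torch.randint(0, 32, (1, 5))  # [batch_size, seq_len]\n",
    "\n",
    "logp = F.log_softmax(logits, dim=-1)                   # [1, 5, 32]\n",
    "gathered = torch.gather(logp, 2, labels.unsqueeze(2))  # [1, 5, 1]\n",
    "logpy = gathered.squeeze(-1)                           # [1, 5]\n",
    "\n",
    "# every entry is the logprob of the corresponding label token\n",
    "assert logpy.shape == (1, 5)\n",
    "assert torch.allclose(logpy[0, 2], logp[0, 2, labels[0, 2]])\n",
    "```"
   ]
  },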
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "inputs_ids shape: torch.Size([3, 10])\n",
      "logits shape: torch.Size([3, 10, 50257])\n",
      "logits shape: torch.Size([3, 10, 50257]), response shape: torch.Size([3, 10]), attention_mask shape: torch.Size([3, 10])\n",
      "all_token_logprobs shape: torch.Size([3, 10, 50257])\n",
      "gathered shape: torch.Size([3, 10, 1]), response shape: torch.Size([3, 10])\n",
      "response_logprobs shape: torch.Size([3, 10])\n",
      "\n",
      "\n",
      "inputs_ids shape: torch.Size([3, 10])\n",
      "logits shape: torch.Size([3, 10, 50257])\n",
      "logits shape: torch.Size([3, 10, 50257]), response shape: torch.Size([3, 10]), attention_mask shape: torch.Size([3, 10])\n",
      "all_token_logprobs shape: torch.Size([3, 10, 50257])\n",
      "gathered shape: torch.Size([3, 10, 1]), response shape: torch.Size([3, 10])\n",
      "response_logprobs shape: torch.Size([3, 10])\n",
      "\n",
      "\n",
      "inputs_ids shape: torch.Size([3, 10])\n",
      "logits shape: torch.Size([3, 10, 50257])\n",
      "logits shape: torch.Size([3, 10, 50257]), response shape: torch.Size([3, 10]), attention_mask shape: torch.Size([3, 10])\n",
      "all_token_logprobs shape: torch.Size([3, 10, 50257])\n",
      "gathered shape: torch.Size([3, 10, 1]), response shape: torch.Size([3, 10])\n",
      "response_logprobs shape: torch.Size([3, 10])\n",
      "torch.Size([3, 10])\n",
      "torch.Size([3, 10])\n",
      "torch.Size([3, 10])\n"
     ]
    }
   ],
   "source": [
    "import torch.nn.functional as F\n",
    "\n",
    "def get_logits(model, input_ids):\n",
    "    # run the model and return its logits\n",
    "    outputs = model(input_ids=input_ids)\n",
    "    print(f\"inputs_ids shape: {input_ids.shape}\")\n",
    "    logits = outputs.logits\n",
    "    print(f\"logits shape: {logits.shape}\")\n",
    "    return logits\n",
    "\n",
    "def get_logprobs(model, response, attention_mask):\n",
    "    # per-token logprobs of the sampled response\n",
    "    logits = get_logits(model, response)\n",
    "    print(f\"logits shape: {logits.shape}, response shape: {response.shape}, attention_mask shape: {attention_mask.shape}\")\n",
    "    # F.log_softmax() applies softmax and then takes the log\n",
    "    all_token_logprobs = F.log_softmax(logits, dim=-1)\n",
    "    print(f\"all_token_logprobs shape: {all_token_logprobs.shape}\")\n",
    "    # use torch.gather() to pick out the logprob of each response token\n",
    "    gathered = torch.gather(all_token_logprobs, 2, response.unsqueeze(2))\n",
    "    print(f\"gathered shape: {gathered.shape}, response shape: {response.shape}\")\n",
    "    # drop the last dimension\n",
    "    response_logprobs = gathered.squeeze(-1)\n",
    "    print(f\"response_logprobs shape: {response_logprobs.shape}\")\n",
    "    return response_logprobs\n",
    "\n",
    "logprobs_ref = get_logprobs(models.ref, ppo_old_batchs['response'], ppo_old_batchs['mask'])\n",
    "print('\\n')\n",
    "logprobs_old = get_logprobs(models.actor, ppo_old_batchs['response'], ppo_old_batchs['mask'])\n",
    "print('\\n')\n",
    "logprobs = get_logprobs(models.actor, ppo_old_batchs['response'], ppo_old_batchs['mask'])\n",
    "\n",
    "print(logprobs_ref.shape)\n",
    "print(logprobs_old.shape)\n",
    "print(logprobs.shape)   \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([3, 10])"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ -9.6364, -10.0382,  -9.4454,  -9.7810,  -9.3484,  -9.5437,  -9.6146,\n",
       "          -9.3174,  -9.8408,  -9.5032],\n",
       "        [ -9.6546,  -9.7166,  -9.7343,  -9.4578,  -9.8507,  -9.7604,  -9.8515,\n",
       "          -9.6053,  -9.3741,  -9.4720],\n",
       "        [ -9.8447, -10.2057,  -9.4921,  -9.7237,  -9.1873,  -9.4923,  -9.6284,\n",
       "          -9.9353,  -9.3172,  -9.8445]], grad_fn=<SqueezeBackward1>)"
      ]
     },
     "execution_count": 33,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "logprobs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compute the per-token KL penalty"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[-0.0130,  0.0095, -0.0262, -0.0021, -0.0283, -0.0148, -0.0134, -0.0258,\n",
      "          0.0089, -0.0307],\n",
      "        [-0.0315, -0.0049, -0.0047, -0.0323,  0.0020, -0.0178,  0.0170, -0.0316,\n",
      "         -0.0339, -0.0369],\n",
      "        [-0.0574,  0.0419, -0.0651, -0.0085, -0.0412, -0.0019, -0.0238,  0.0211,\n",
      "         -0.0333,  0.0152]], grad_fn=<MulBackward0>)\n"
     ]
    }
   ],
   "source": [
    "def get_kl(logprobs_ref, logprobs_old, kl_ctl):\n",
    "    kl = logprobs_ref - logprobs_old\n",
    "    kl = kl * kl_ctl\n",
    "    return kl\n",
    "\n",
    "kl = get_kl(logprobs_ref, logprobs_old, ppo_config.kl_ctl)\n",
    "print(kl)\n"
   ]
  },
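  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of the sign convention in get_kl above (made-up numbers): when the actor matches the reference the penalty is zero, and when the actor becomes more confident than the reference on a sampled token, the per-token reward turns negative:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "kl_ctl = 0.1\n",
    "logp_ref = torch.tensor([[-2.0, -2.0]])\n",
    "logp_old = torch.tensor([[-2.0, -1.0]])  # actor more confident on the 2nd token\n",
    "\n",
    "kl = kl_ctl * (logp_ref - logp_old)\n",
    "assert kl[0, 0] == 0.0  # identical logprobs -> no penalty\n",
    "assert kl[0, 1] < 0.0   # drift away from the reference -> negative reward\n",
    "```"
   ]
  },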
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compute the reward with the KL penalty folded in\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[-0.0130,  0.0095, -0.0262, -0.0021, -0.0283, -0.0148, -0.0134, -0.0258,\n",
      "          0.0089, -0.0307],\n",
      "        [-0.0315, -0.0049, -0.0047, -0.0323,  0.0020, -0.0178,  0.0170, -0.0316,\n",
      "         -0.0339, -0.0369],\n",
      "        [-0.0574,  0.0419, -0.0651, -0.0085, -0.0412, -0.0019, -0.0238,  0.0211,\n",
      "         -0.0333,  0.0152]], grad_fn=<MulBackward0>)\n",
      "last_hidden_state shape: torch.Size([3, 128]), sequence_length: torch.Size([3])\n",
      "tensor([[-0.7784],\n",
      "        [-0.9515],\n",
      "        [-0.9003]], grad_fn=<AddmmBackward0>)\n",
      "tensor([[-0.0130,  0.0095, -0.0262, -0.0021, -0.0283, -0.0148, -0.0134, -0.0258,\n",
      "          0.0089, -0.8090],\n",
      "        [-0.0315, -0.0049, -0.0047, -0.0323,  0.0020, -0.0178,  0.0170, -0.0316,\n",
      "         -0.0339, -0.9884],\n",
      "        [-0.0574,  0.0419, -0.0651, -0.0085, -0.0412, -0.0019, -0.0238,  0.0211,\n",
      "         -0.0333, -0.8852]], grad_fn=<CopySlices>)\n"
     ]
    }
   ],
   "source": [
    "def get_reward_with_kl(logprobs_ref, logprobs_old, kl_ctl, reward):\n",
    "    kl = logprobs_ref - logprobs_old\n",
    "    kl = kl * kl_ctl\n",
    "    # the scalar reward-model score is added onto the last token's KL penalty\n",
    "    kl[:, -1] += reward[:, 0]\n",
    "    return kl\n",
    "\n",
    "print(kl)\n",
    "rewards = get_reward(models.rm, ppo_old_batchs['response'], ppo_old_batchs['mask'])\n",
    "print(rewards)\n",
    "\n",
    "kl_reward = get_reward_with_kl(logprobs_ref, logprobs_old, ppo_config.kl_ctl, rewards)\n",
    "print(kl_reward)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [],
   "source": [
    "values = get_value(models.critic, ppo_old_batchs['response'], ppo_old_batchs['mask'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.1939, -0.0731, -0.0170, -0.4315,  0.0534, -0.2046, -0.6074, -0.7700,\n",
       "         -1.2505,  0.1553],\n",
       "        [ 0.0511, -0.2098, -0.8512, -0.1117,  0.2560, -0.0967, -0.9718,  0.2660,\n",
       "         -0.1777,  0.4735],\n",
       "        [ 0.2042, -0.6096, -0.0284,  0.2577, -0.3757, -0.3134, -0.5433, -0.2487,\n",
       "         -0.2369,  1.0747]], grad_fn=<SelectBackward0>)"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "values"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'prompt': tensor([[5, 0, 0, 1, 0],\n",
       "         [4, 8, 1, 4, 1],\n",
       "         [9, 6, 7, 0, 5]]),\n",
       " 'response': tensor([[4, 8, 5, 2, 9, 5, 5, 0, 6, 3],\n",
       "         [0, 3, 0, 4, 8, 2, 6, 4, 9, 3],\n",
       "         [2, 6, 7, 5, 0, 0, 3, 3, 4, 8]]),\n",
       " 'mask': tensor([[0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "         [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "         [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.]]),\n",
       " 'logprobs_ref': tensor([[ -9.7659,  -9.9431,  -9.7075,  -9.8018,  -9.6310,  -9.6916,  -9.7483,\n",
       "           -9.5755,  -9.7520,  -9.8097],\n",
       "         [ -9.9691,  -9.7657,  -9.7810,  -9.7806,  -9.8304,  -9.9382,  -9.6816,\n",
       "           -9.9212,  -9.7132,  -9.8413],\n",
       "         [-10.4189,  -9.7863, -10.1431,  -9.8084,  -9.5995,  -9.5113,  -9.8666,\n",
       "           -9.7238,  -9.6501,  -9.6926]], grad_fn=<SqueezeBackward1>),\n",
       " 'logprobs_old': tensor([[ -9.6364, -10.0382,  -9.4454,  -9.7810,  -9.3484,  -9.5437,  -9.6146,\n",
       "           -9.3174,  -9.8408,  -9.5032],\n",
       "         [ -9.6546,  -9.7166,  -9.7343,  -9.4578,  -9.8507,  -9.7604,  -9.8515,\n",
       "           -9.6053,  -9.3741,  -9.4720],\n",
       "         [ -9.8447, -10.2057,  -9.4921,  -9.7237,  -9.1873,  -9.4923,  -9.6284,\n",
       "           -9.9353,  -9.3172,  -9.8445]], grad_fn=<SqueezeBackward1>),\n",
       " 'logprobs': tensor([[ -9.6364, -10.0382,  -9.4454,  -9.7810,  -9.3484,  -9.5437,  -9.6146,\n",
       "           -9.3174,  -9.8408,  -9.5032],\n",
       "         [ -9.6546,  -9.7166,  -9.7343,  -9.4578,  -9.8507,  -9.7604,  -9.8515,\n",
       "           -9.6053,  -9.3741,  -9.4720],\n",
       "         [ -9.8447, -10.2057,  -9.4921,  -9.7237,  -9.1873,  -9.4923,  -9.6284,\n",
       "           -9.9353,  -9.3172,  -9.8445]], grad_fn=<SqueezeBackward1>),\n",
       " 'values_old': tensor([[ 0.1939, -0.0731, -0.0170, -0.4315,  0.0534, -0.2046, -0.6074, -0.7700,\n",
       "          -1.2505,  0.1553],\n",
       "         [ 0.0511, -0.2098, -0.8512, -0.1117,  0.2560, -0.0967, -0.9718,  0.2660,\n",
       "          -0.1777,  0.4735],\n",
       "         [ 0.2042, -0.6096, -0.0284,  0.2577, -0.3757, -0.3134, -0.5433, -0.2487,\n",
       "          -0.2369,  1.0747]], grad_fn=<SelectBackward0>),\n",
       " 'values': None,\n",
       " 'rewards': tensor([[-0.7784],\n",
       "         [-0.9515],\n",
       "         [-0.9003]], grad_fn=<AddmmBackward0>),\n",
       " 'rewards_kl': tensor([[-0.0130,  0.0095, -0.0262, -0.0021, -0.0283, -0.0148, -0.0134, -0.0258,\n",
       "           0.0089, -0.8090],\n",
       "         [-0.0315, -0.0049, -0.0047, -0.0323,  0.0020, -0.0178,  0.0170, -0.0316,\n",
       "          -0.0339, -0.9884],\n",
       "         [-0.0574,  0.0419, -0.0651, -0.0085, -0.0412, -0.0019, -0.0238,  0.0211,\n",
       "          -0.0333, -0.8852]], grad_fn=<CopySlices>),\n",
       " 'loss': None,\n",
       " 'logits': None}"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ppo_old_batchs['logprobs_ref'] = logprobs_ref\n",
    "ppo_old_batchs['logprobs_old'] = logprobs_old\n",
    "ppo_old_batchs['logprobs'] = logprobs\n",
    "ppo_old_batchs['values_old'] = values\n",
    "ppo_old_batchs['rewards'] = rewards\n",
    "ppo_old_batchs['rewards_kl'] = kl_reward\n",
    "\n",
    "ppo_old_batchs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Computing the loss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "rewards: a tensor of the reward received at each time step.\n",
    "\n",
    "mask: a mask tensor marking which time steps are valid (e.g. to handle terminal states).\n",
    "\n",
    "values: a tensor of the estimated state value at each time step.\n",
    "\n",
    "gamma: the discount factor used to discount future rewards, typically in [0, 1].\n",
    "\n",
    "lam: the GAE $\\lambda$ parameter, which trades off bias against variance, also in [0, 1]."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# The GAE formula in PPO\n",
    "\n",
    "In PPO (Proximal Policy Optimization), the advantage function and the value loss are the core components connecting value estimation to policy optimization.\n",
    "\n",
    "## Advantage Function\n",
    "\n",
    "The advantage function measures the **relative value** of taking a particular action in a given state:\n",
    "\n",
    "$$A(s_t, a_t) = Q(s_t, a_t) - V(s_t)$$\n",
    "\n",
    "$Q(s_t, a_t)$ is the state-action value function: the expected total discounted return obtained after taking action $a_t$ in state $s_t$.\n",
    "\n",
    "$V(s_t)$ is the state value function: the expected total discounted return obtained by following the current policy from state $s_t$ (i.e. the \"average return\").\n",
    "\n",
    "The advantage function essentially answers:\n",
    "\n",
    "\"How much better is choosing action $a_t$ in state $s_t$ than sampling an action from the current policy?\"\n",
    "\n",
    "If $A(s_t, a_t) > 0$: action $a_t$ is better than average and should be encouraged (the policy should raise its probability).\n",
    "\n",
    "If $A(s_t, a_t) < 0$: action $a_t$ is worse than average and should be discouraged (the policy should lower its probability).\n",
    "\n",
    "By turning \"absolute value\" into \"relative value\", the advantage reduces estimation bias (even if both $Q(s_t, a_t)$ and $V(s_t)$ carry errors, their difference can be more stable).\n",
    "\n",
    "In practice, Q and V are not directly available, so PPO typically estimates the advantage with GAE (Generalized Advantage Estimation).\n",
    "\n",
    "The temporal-difference (TD) residual used by GAE:\n",
    "\n",
    "$$\\delta_t = r_t + \\gamma V(s_{t+1}) - V(s_t)$$\n",
    "\n",
    "where $r_t$ is the reward at time step $t$, $\\gamma$ is the discount factor, and $V(s_t)$ is the value estimate of state $s_t$.\n",
    "\n",
    "The recursive form of the GAE advantage estimate:\n",
    "\n",
    "$$\\hat{A}_t = \\delta_t + \\gamma \\lambda \\hat{A}_{t+1}$$\n",
    "\n",
    "where $\\lambda$ is the GAE decay parameter ($0 \\leq \\lambda \\leq 1$)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_GAE(rewards, attention_mask, values, gamma, lam):\n",
    "    lastgae = 0  # holds the GAE estimate of the following (later) time step\n",
    "    advantages_reversed = []\n",
    "    response_len = rewards.shape[-1]\n",
    "\n",
    "    values = values * attention_mask\n",
    "    rewards = rewards * attention_mask\n",
    "\n",
    "    for t in reversed(range(response_len)):\n",
    "        nextvalues = values[:, t + 1] if t < response_len - 1 else 0.0\n",
    "        # TD error at step t: current reward plus discounted next-step value, minus current value\n",
    "        delta = rewards[:, t] + gamma * nextvalues - values[:, t]\n",
    "        # GAE recursion: A_t = delta_t + gamma * lam * A_{t+1}\n",
    "        lastgae = delta + gamma * lam * lastgae\n",
    "        advantages_reversed.append(lastgae)\n",
    "    # reverse the list so the advantages run in forward time order\n",
    "    advantages = torch.stack(advantages_reversed[::-1]).transpose(0, 1)\n",
    "    return advantages\n"
   ]
  },
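  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check (toy numbers, mask omitted), the backward recursion used in get_GAE matches the explicit discounted sum $\\hat{A}_t = \\sum_l (\\gamma\\lambda)^l \\delta_{t+l}$:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "gamma, lam = 0.95, 0.95\n",
    "rewards = torch.tensor([[0.0, 0.0, 1.0]])\n",
    "values = torch.tensor([[0.2, -0.1, 0.3]])\n",
    "T = rewards.shape[-1]\n",
    "\n",
    "# backward recursion, as in get_GAE\n",
    "lastgae, adv_rev = 0.0, []\n",
    "for t in reversed(range(T)):\n",
    "    nextv = values[:, t + 1] if t < T - 1 else 0.0\n",
    "    delta = rewards[:, t] + gamma * nextv - values[:, t]\n",
    "    lastgae = delta + gamma * lam * lastgae\n",
    "    adv_rev.append(lastgae)\n",
    "adv = torch.stack(adv_rev[::-1]).transpose(0, 1)\n",
    "\n",
    "# explicit sum over future TD errors\n",
    "deltas = []\n",
    "for t in range(T):\n",
    "    nextv = values[:, t + 1] if t < T - 1 else 0.0\n",
    "    deltas.append(rewards[:, t] + gamma * nextv - values[:, t])\n",
    "terms = []\n",
    "for t in range(T):\n",
    "    s = 0.0\n",
    "    for l in range(T - t):\n",
    "        s = s + (gamma * lam) ** l * deltas[t + l]\n",
    "    terms.append(s)\n",
    "adv2 = torch.stack(terms).transpose(0, 1)\n",
    "\n",
    "assert torch.allclose(adv, adv2)\n",
    "```"
   ]
  },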
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'prompt': tensor([[5, 0, 0, 1, 0],\n",
       "         [4, 8, 1, 4, 1],\n",
       "         [9, 6, 7, 0, 5]]),\n",
       " 'response': tensor([[4, 8, 5, 2, 9, 5, 5, 0, 6, 3],\n",
       "         [0, 3, 0, 4, 8, 2, 6, 4, 9, 3],\n",
       "         [2, 6, 7, 5, 0, 0, 3, 3, 4, 8]]),\n",
       " 'mask': tensor([[0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "         [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "         [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.]]),\n",
       " 'logprobs_ref': tensor([[ -9.7659,  -9.9431,  -9.7075,  -9.8018,  -9.6310,  -9.6916,  -9.7483,\n",
       "           -9.5755,  -9.7520,  -9.8097],\n",
       "         [ -9.9691,  -9.7657,  -9.7810,  -9.7806,  -9.8304,  -9.9382,  -9.6816,\n",
       "           -9.9212,  -9.7132,  -9.8413],\n",
       "         [-10.4189,  -9.7863, -10.1431,  -9.8084,  -9.5995,  -9.5113,  -9.8666,\n",
       "           -9.7238,  -9.6501,  -9.6926]], grad_fn=<SqueezeBackward1>),\n",
       " 'logprobs_old': tensor([[ -9.6364, -10.0382,  -9.4454,  -9.7810,  -9.3484,  -9.5437,  -9.6146,\n",
       "           -9.3174,  -9.8408,  -9.5032],\n",
       "         [ -9.6546,  -9.7166,  -9.7343,  -9.4578,  -9.8507,  -9.7604,  -9.8515,\n",
       "           -9.6053,  -9.3741,  -9.4720],\n",
       "         [ -9.8447, -10.2057,  -9.4921,  -9.7237,  -9.1873,  -9.4923,  -9.6284,\n",
       "           -9.9353,  -9.3172,  -9.8445]], grad_fn=<SqueezeBackward1>),\n",
       " 'logprobs': tensor([[ -9.6364, -10.0382,  -9.4454,  -9.7810,  -9.3484,  -9.5437,  -9.6146,\n",
       "           -9.3174,  -9.8408,  -9.5032],\n",
       "         [ -9.6546,  -9.7166,  -9.7343,  -9.4578,  -9.8507,  -9.7604,  -9.8515,\n",
       "           -9.6053,  -9.3741,  -9.4720],\n",
       "         [ -9.8447, -10.2057,  -9.4921,  -9.7237,  -9.1873,  -9.4923,  -9.6284,\n",
       "           -9.9353,  -9.3172,  -9.8445]], grad_fn=<SqueezeBackward1>),\n",
       " 'values_old': tensor([[ 0.1939, -0.0731, -0.0170, -0.4315,  0.0534, -0.2046, -0.6074, -0.7700,\n",
       "          -1.2505,  0.1553],\n",
       "         [ 0.0511, -0.2098, -0.8512, -0.1117,  0.2560, -0.0967, -0.9718,  0.2660,\n",
       "          -0.1777,  0.4735],\n",
       "         [ 0.2042, -0.6096, -0.0284,  0.2577, -0.3757, -0.3134, -0.5433, -0.2487,\n",
       "          -0.2369,  1.0747]], grad_fn=<SelectBackward0>),\n",
       " 'values': None,\n",
       " 'rewards': tensor([[-0.7784],\n",
       "         [-0.9515],\n",
       "         [-0.9003]], grad_fn=<AddmmBackward0>),\n",
       " 'rewards_kl': tensor([[-0.0130,  0.0095, -0.0262, -0.0021, -0.0283, -0.0148, -0.0134, -0.0258,\n",
       "           0.0089, -0.8090],\n",
       "         [-0.0315, -0.0049, -0.0047, -0.0323,  0.0020, -0.0178,  0.0170, -0.0316,\n",
       "          -0.0339, -0.9884],\n",
       "         [-0.0574,  0.0419, -0.0651, -0.0085, -0.0412, -0.0019, -0.0238,  0.0211,\n",
       "          -0.0333, -0.8852]], grad_fn=<CopySlices>),\n",
       " 'loss': None,\n",
       " 'logits': None}"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ppo_old_batchs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-0.2043, -0.2523, -0.3115, -0.3845, -0.4747, -0.3587, -0.0023,  0.1193,\n",
       "          0.6180, -0.9643],\n",
       "        [-0.1865, -0.2303, -0.2843, -0.3509, -0.4333, -0.4275,  0.4546, -0.9550,\n",
       "         -0.6142, -1.4619],\n",
       "        [-0.1640, -0.2025, -0.2500, -0.3087, -0.3811, -0.1223,  0.0682, -0.2809,\n",
       "         -0.4166, -1.9599]], grad_fn=<TransposeBackward0>)"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gae = get_GAE(ppo_old_batchs['rewards_kl'], ppo_old_batchs['mask'], ppo_old_batchs['values_old'], ppo_config.gamma, ppo_config.lam)\n",
    "gae\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compute the value loss\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "advantages: the estimated advantages, used to construct the return targets.\n",
    "\n",
    "values: the current value-function estimates.\n",
    "\n",
    "values_old: the old value-function estimates.\n",
    "\n",
    "mask: a mask tensor selecting which elements contribute to the loss.\n",
    "\n",
    "cliprange_value: the clipping range limiting how far the value function may move in one update."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "https://github.com/huggingface/trl/blob/26d86757a7c7e24e397ea44f57ecce6031dfac01/trl/trainer/ppo_trainer.py#L561C29-L567C30"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [],
   "source": [
    "def masked_mean(values: torch.Tensor, mask: torch.Tensor, axis = None) -> torch.Tensor:\n",
    "    \"\"\"Compute the mean of a tensor over the masked (valid) positions.\"\"\"\n",
    "    if axis is not None:\n",
    "        return (values * mask).sum(axis=axis) / mask.sum(axis=axis)\n",
    "    else:\n",
    "        return (values * mask).sum() / mask.sum()\n",
    "\n",
    "def get_value_loss(advantages, values, values_old, attention_mask, cliprange_value):\n",
    "    # return target = old value estimate + advantage estimate\n",
    "    # since A = Q - V, we have Q = V + A; returns plays the role of the target Q value\n",
    "    returns = values_old + advantages\n",
    "    advantages = advantages.detach()\n",
    "    # clip the new value estimates so they differ from values_old by at most cliprange_value\n",
    "    vpredclipped = torch.clamp(values, values_old - cliprange_value, values_old + cliprange_value)\n",
    "\n",
    "    vf_losses1 = torch.square(vpredclipped - returns)  # squared error of the clipped value estimate\n",
    "    vf_losses2 = torch.square(values - returns)  # squared error of the unclipped value estimate\n",
    "    vf_loss_max = torch.max(vf_losses1, vf_losses2)\n",
    "    vf_loss = 0.5 * masked_mean(vf_loss_max, attention_mask)\n",
    "    return vf_loss\n",
    "\n"
   ]
  },
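  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the clipping behaviour (toy numbers): with the return targets fixed at 0, a value prediction that moves beyond cliprange_value keeps its full unclipped penalty, because the elementwise max picks the larger (pessimistic) error:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "cliprange_value = 0.2\n",
    "values_old = torch.zeros(1, 3)\n",
    "returns = torch.zeros(1, 3)                # target values\n",
    "values = torch.tensor([[0.1, 0.5, -0.5]])  # 2nd and 3rd move past the clip range\n",
    "\n",
    "vpredclipped = torch.clamp(values, values_old - cliprange_value, values_old + cliprange_value)\n",
    "vf = 0.5 * torch.max((vpredclipped - returns) ** 2, (values - returns) ** 2)\n",
    "assert torch.allclose(vf, torch.tensor([[0.005, 0.125, 0.125]]))\n",
    "```"
   ]
  },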
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [],
   "source": [
    "# toy stand-in for the critic's updated predictions (old values shifted by 0.5)\n",
    "ppo_old_batchs['values'] = ppo_old_batchs['values_old'] + 0.5"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(0.6554, grad_fn=<MulBackward0>)"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "value_loss = get_value_loss(gae, ppo_old_batchs['values'], ppo_old_batchs['values_old'], ppo_old_batchs['mask'], ppo_config.cliprange_value)\n",
    "value_loss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compute the policy loss\n",
    "https://github.com/huggingface/trl/blob/26d86757a7c7e24e397ea44f57ecce6031dfac01/trl/trainer/ppo_trainer.py#L569-L574"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# PPO (Proximal Policy Optimization): core formulas\n",
    "\n",
    "The heart of PPO is updating the policy by jointly optimizing a policy loss and a value loss. The formulas below cover the policy side.\n",
    "\n",
    "## 1. Policy Loss\n",
    "\n",
    "### Core formulas\n",
    "\n",
    "The policy loss is built on importance sampling and a clipping mechanism:\n",
    "\n",
    "1. **Importance sampling ratio**  \n",
    "   $$\\text{ratio}_t = \\frac{\\pi_\\theta(a_t | s_t)}{\\pi_{\\theta_{\\text{old}}}(a_t | s_t)} = \\exp\\left(\\log \\pi_\\theta(a_t | s_t) - \\log \\pi_{\\theta_{\\text{old}}}(a_t | s_t)\\right)$$\n",
    "\n",
    "2. **Unclipped loss**  \n",
    "   $$L_1(\\theta) = -A_t \\cdot \\text{ratio}_t$$\n",
    "\n",
    "3. **Clipped loss**  \n",
    "   $$L_2(\\theta) = -A_t \\cdot \\text{clip}(\\text{ratio}_t, 1-\\epsilon, 1+\\epsilon)$$\n",
    "\n",
    "4. **Final policy loss**  \n",
    "   $$L_{\\text{policy}}(\\theta) = \\mathbb{E}\\left[ \\max(L_1(\\theta), L_2(\\theta)) \\right]$$\n",
    "\n",
    "where:\n",
    "- $A_t$ is the advantage estimate (the GAE result)\n",
    "- $\\epsilon$ is the clipping-range hyperparameter (typically 0.2)\n",
    "- $\\pi_\\theta$ is the current policy and $\\pi_{\\theta_{\\text{old}}}$ is the policy before the update\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_policy_loss(advantages, logprobs, logprobs_old, mask, cliprange):\n",
    "    # importance sampling ratio between the current and the old policy\n",
    "    ratio = torch.exp(logprobs - logprobs_old)\n",
    "    # clipped surrogate policy loss\n",
    "    pg_losses = -advantages * ratio\n",
    "    pg_losses2 = -advantages * torch.clamp(ratio, 1.0 - cliprange, 1.0 + cliprange)\n",
    "    pg_loss_max = torch.max(pg_losses, pg_losses2)\n",
    "    pg_loss = masked_mean(pg_loss_max, mask)\n",
    "    return pg_loss\n",
    "\n"
   ]
  },
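  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same elementwise max caps the incentive for large policy moves; a toy example with positive advantages:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "cliprange = 0.2\n",
    "adv = torch.tensor([[1.0, 1.0]])    # positive advantages\n",
    "ratio = torch.tensor([[1.5, 0.9]])  # 1st ratio lies outside [0.8, 1.2]\n",
    "\n",
    "pg_losses = -adv * ratio\n",
    "pg_losses2 = -adv * torch.clamp(ratio, 1.0 - cliprange, 1.0 + cliprange)\n",
    "pg = torch.max(pg_losses, pg_losses2)\n",
    "# the clipped term wins for ratio 1.5 (-1.2 > -1.5), capping the update incentive\n",
    "assert torch.allclose(pg, torch.tensor([[-1.2, -0.9]]))\n",
    "```"
   ]
  },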
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [],
   "source": [
    "# NOTE: trl uses a separate cliprange for the policy loss; this notebook reuses cliprange_value for simplicity\n",
    "pg_loss = get_policy_loss(gae, ppo_old_batchs['logprobs'], ppo_old_batchs['logprobs_old'], ppo_old_batchs['mask'], ppo_config.cliprange_value)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(0.4202, grad_fn=<DivBackward0>)"
      ]
     },
     "execution_count": 49,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pg_loss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compute the entropy\n",
    "https://github.com/huggingface/trl/blob/26d86757a7c7e24e397ea44f57ecce6031dfac01/trl/trainer/ppo_trainer.py#L582-L583"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In trl, the entropy does not enter the model's loss directly.\n",
    "\n",
    "The entropy is computed after the loss has been computed, backpropagated, and the parameters updated.\n",
    "\n",
    "It is recorded into the entropy_stats tensor for later statistics and logging, but is not used in the loss."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "inputs_ids shape: torch.Size([3, 10])\n",
      "logits shape: torch.Size([3, 10, 50257])\n"
     ]
    }
   ],
   "source": [
    "logits = get_logits(models.actor, ppo_old_batchs['response'])\n",
    "ppo_old_batchs['logits'] = logits"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Computing the entropy (entropy loss) in PPO\n",
    "\n",
    "The entropy measures how random the policy is; in PPO it is often added to the total loss to encourage exploration (here it is only logged).\n",
    "\n",
    "## Entropy function\n",
    "\n",
    "```python\n",
    "def get_entropy_loss(logits, mask):\n",
    "    # turn the logits into a probability distribution (softmax normalization)\n",
    "    prob_dist = torch.nn.functional.softmax(logits, dim=-1)\n",
    "    \n",
    "    # entropy: H(p) = -Σ(p_i * log(p_i))\n",
    "    # equivalently: log(Σ(exp(logits_i))) - Σ(p_i * logits_i)\n",
    "    entropy = torch.logsumexp(logits, dim=-1) - torch.sum(prob_dist * logits, dim=-1)\n",
    "    \n",
    "    return entropy\n",
    "\n",
    "# entropy of the old batch\n",
    "entropy = get_entropy_loss(ppo_old_batchs['logits'], ppo_old_batchs['mask'])\n",
    "entropy  # per-token entropy values\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "logits shape: torch.Size([3, 10, 50257]), mask shape: torch.Size([3, 10])\n",
      "prob_dist shape: torch.Size([3, 10, 50257]), logits shape: torch.Size([3, 10, 50257])\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "tensor([[10.7993, 10.7995, 10.7994, 10.7994, 10.7990, 10.7992, 10.7994, 10.7994,\n",
       "         10.7997, 10.7995],\n",
       "        [10.7995, 10.7994, 10.7994, 10.7996, 10.7995, 10.7992, 10.7994, 10.7995,\n",
       "         10.7993, 10.7996],\n",
       "        [10.7992, 10.7996, 10.7994, 10.7993, 10.7995, 10.7993, 10.7994, 10.7994,\n",
       "         10.7996, 10.7994]], grad_fn=<SubBackward0>)"
      ]
     },
     "execution_count": 52,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def get_entropy_loss(logits, mask):\n",
    "    prob_dist = torch.nn.functional.softmax(logits, dim=-1)\n",
    "    print(f\"prob_dist shape: {prob_dist.shape}, logits shape: {logits.shape}\")\n",
    "    # compute the entropy of the token distribution at each position\n",
    "    # torch.logsumexp gives log(sum(exp(logits))); subtract the expectation of the logits under prob_dist\n",
    "    # entropy formula: H(X) = log(sum(exp(logits))) - sum(prob_dist * logits)\n",
    "    \n",
    "    entropy = torch.logsumexp(logits, dim=-1) - torch.sum(prob_dist * logits, dim=-1)\n",
    "    return entropy\n",
    "print(f\"logits shape: {logits.shape}, mask shape: {ppo_old_batchs['mask'].shape}\")\n",
    "entropy = get_entropy_loss(ppo_old_batchs['logits'], ppo_old_batchs['mask'])\n",
    "entropy\n",
    "                                "
   ]
  },
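  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The values printed above (~10.80) sit just below log(50257) ≈ 10.825, the maximum entropy of a uniform distribution over the vocabulary, which is consistent with a nearly untrained actor. A standalone check of the formula:\n",
    "\n",
    "```python\n",
    "import math\n",
    "import torch\n",
    "\n",
    "vocab = 50257\n",
    "logits = torch.zeros(1, 2, vocab)  # uniform distribution over the vocab\n",
    "prob_dist = torch.softmax(logits, dim=-1)\n",
    "entropy = torch.logsumexp(logits, dim=-1) - torch.sum(prob_dist * logits, dim=-1)\n",
    "# a uniform distribution attains the maximum entropy log(vocab)\n",
    "assert torch.allclose(entropy, torch.full((1, 2), math.log(vocab)))\n",
    "```"
   ]
  },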
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [],
   "source": [
    "loss = pg_loss + ppo_config.vf_coef * value_loss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_loss(batchs, ppo_config):\n",
    "    # NOTE: trl computes GAE from the rollout (old) values; this cell passes the current values instead\n",
    "    gae = get_GAE(batchs['rewards_kl'],\n",
    "                  batchs['mask'],\n",
    "                  batchs['values'],\n",
    "                  ppo_config.gamma,\n",
    "                  ppo_config.lam)\n",
    "    value_loss = get_value_loss(gae,\n",
    "                             batchs['values'],\n",
    "                             batchs['values_old'],\n",
    "                             batchs['mask'],\n",
    "                             ppo_config.cliprange_value)\n",
    "    pg_loss = get_policy_loss(\n",
    "                              gae,\n",
    "                              batchs['logprobs'],\n",
    "                              batchs['logprobs_old'],\n",
    "                              batchs['mask'],\n",
    "                              ppo_config.cliprange_value)\n",
    "    # entropy is computed for logging only; it does not enter the loss\n",
    "    entropy = get_entropy_loss(batchs['logits'], batchs['mask'])\n",
    "    loss = pg_loss + ppo_config.vf_coef * value_loss\n",
    "    return loss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "prob_dist shape: torch.Size([3, 10, 50257]), logits shape: torch.Size([3, 10, 50257])\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "tensor(0.9609, grad_fn=<AddBackward0>)"
      ]
     },
     "execution_count": 55,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "loss = get_loss(ppo_old_batchs, ppo_config)\n",
    "loss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'prompt': tensor([[5, 0, 0, 1, 0],\n",
       "         [4, 8, 1, 4, 1],\n",
       "         [9, 6, 7, 0, 5]]),\n",
       " 'response': tensor([[4, 8, 5, 2, 9, 5, 5, 0, 6, 3],\n",
       "         [0, 3, 0, 4, 8, 2, 6, 4, 9, 3],\n",
       "         [2, 6, 7, 5, 0, 0, 3, 3, 4, 8]]),\n",
       " 'mask': tensor([[0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "         [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "         [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.]]),\n",
       " 'logprobs_ref': tensor([[ -9.7659,  -9.9431,  -9.7075,  -9.8018,  -9.6310,  -9.6916,  -9.7483,\n",
       "           -9.5755,  -9.7520,  -9.8097],\n",
       "         [ -9.9691,  -9.7657,  -9.7810,  -9.7806,  -9.8304,  -9.9382,  -9.6816,\n",
       "           -9.9212,  -9.7132,  -9.8413],\n",
       "         [-10.4189,  -9.7863, -10.1431,  -9.8084,  -9.5995,  -9.5113,  -9.8666,\n",
       "           -9.7238,  -9.6501,  -9.6926]], grad_fn=<SqueezeBackward1>),\n",
       " 'logprobs_old': tensor([[ -9.6364, -10.0382,  -9.4454,  -9.7810,  -9.3484,  -9.5437,  -9.6146,\n",
       "           -9.3174,  -9.8408,  -9.5032],\n",
       "         [ -9.6546,  -9.7166,  -9.7343,  -9.4578,  -9.8507,  -9.7604,  -9.8515,\n",
       "           -9.6053,  -9.3741,  -9.4720],\n",
       "         [ -9.8447, -10.2057,  -9.4921,  -9.7237,  -9.1873,  -9.4923,  -9.6284,\n",
       "           -9.9353,  -9.3172,  -9.8445]], grad_fn=<SqueezeBackward1>),\n",
       " 'logprobs': tensor([[ -9.6364, -10.0382,  -9.4454,  -9.7810,  -9.3484,  -9.5437,  -9.6146,\n",
       "           -9.3174,  -9.8408,  -9.5032],\n",
       "         [ -9.6546,  -9.7166,  -9.7343,  -9.4578,  -9.8507,  -9.7604,  -9.8515,\n",
       "           -9.6053,  -9.3741,  -9.4720],\n",
       "         [ -9.8447, -10.2057,  -9.4921,  -9.7237,  -9.1873,  -9.4923,  -9.6284,\n",
       "           -9.9353,  -9.3172,  -9.8445]], grad_fn=<SqueezeBackward1>),\n",
       " 'values_old': tensor([[ 0.1939, -0.0731, -0.0170, -0.4315,  0.0534, -0.2046, -0.6074, -0.7700,\n",
       "          -1.2505,  0.1553],\n",
       "         [ 0.0511, -0.2098, -0.8512, -0.1117,  0.2560, -0.0967, -0.9718,  0.2660,\n",
       "          -0.1777,  0.4735],\n",
       "         [ 0.2042, -0.6096, -0.0284,  0.2577, -0.3757, -0.3134, -0.5433, -0.2487,\n",
       "          -0.2369,  1.0747]], grad_fn=<SelectBackward0>),\n",
       " 'values': tensor([[ 0.6939,  0.4269,  0.4830,  0.0685,  0.5534,  0.2954, -0.1074, -0.2700,\n",
       "          -0.7505,  0.6553],\n",
       "         [ 0.5511,  0.2902, -0.3512,  0.3883,  0.7560,  0.4033, -0.4718,  0.7660,\n",
       "           0.3223,  0.9735],\n",
       "         [ 0.7042, -0.1096,  0.4716,  0.7577,  0.1243,  0.1866, -0.0433,  0.2513,\n",
       "           0.2631,  1.5747]], grad_fn=<AddBackward0>),\n",
       " 'rewards': tensor([[-0.7784],\n",
       "         [-0.9515],\n",
       "         [-0.9003]], grad_fn=<AddmmBackward0>),\n",
       " 'rewards_kl': tensor([[-0.0130,  0.0095, -0.0262, -0.0021, -0.0283, -0.0148, -0.0134, -0.0258,\n",
       "           0.0089, -0.8090],\n",
       "         [-0.0315, -0.0049, -0.0047, -0.0323,  0.0020, -0.0178,  0.0170, -0.0316,\n",
       "          -0.0339, -0.9884],\n",
       "         [-0.0574,  0.0419, -0.0651, -0.0085, -0.0412, -0.0019, -0.0238,  0.0211,\n",
       "          -0.0333, -0.8852]], grad_fn=<CopySlices>),\n",
       " 'loss': None,\n",
       " 'logits': tensor([[[-1.4843e-01, -3.8199e-01,  1.5566e-01,  ...,  6.0343e-01,\n",
       "           -3.5546e-01, -2.5944e-01],\n",
       "          [-1.6893e-01, -2.5384e-03, -8.4530e-03,  ...,  8.7142e-02,\n",
       "           -2.0942e-01, -8.3370e-02],\n",
       "          [-4.3086e-01,  5.5402e-02, -4.6384e-01,  ...,  9.1063e-02,\n",
       "           -8.1510e-02,  1.6532e-01],\n",
       "          ...,\n",
       "          [ 1.5315e+00, -8.8365e-02, -1.9262e-01,  ..., -2.3480e-01,\n",
       "            7.7313e-02, -1.3036e-02],\n",
       "          [-9.2542e-02, -2.2912e-01,  8.3747e-02,  ...,  2.8154e-03,\n",
       "           -1.3022e-01,  6.1364e-02],\n",
       "          [-4.0653e-01, -2.8789e-02, -1.5729e-01,  ...,  2.5900e-01,\n",
       "           -3.2773e-01, -1.3417e-01]],\n",
       " \n",
       "         [[ 1.1939e+00, -3.4385e-01,  1.8697e-01,  ...,  8.9561e-02,\n",
       "           -1.3423e-01, -5.1387e-05],\n",
       "          [ 1.3593e-01, -2.1616e-01,  1.7281e-01,  ...,  5.4955e-02,\n",
       "           -2.8100e-01, -9.6232e-02],\n",
       "          [ 1.1163e+00, -4.0199e-01, -5.8994e-02,  ..., -4.4124e-02,\n",
       "            8.6503e-02, -4.1281e-02],\n",
       "          ...,\n",
       "          [ 7.9734e-02, -4.3286e-01,  1.4872e-01,  ..., -5.1665e-03,\n",
       "           -7.4853e-02, -2.7805e-02],\n",
       "          [ 3.4729e-01, -2.8876e-01,  3.5831e-02,  ...,  1.3297e-01,\n",
       "           -8.0469e-03,  5.7139e-02],\n",
       "          [-3.4550e-01, -1.6689e-01, -1.2459e-01,  ...,  2.8532e-01,\n",
       "           -3.9113e-01, -1.1683e-01]],\n",
       " \n",
       "         [[ 6.7189e-03, -4.6148e-02,  1.0041e+00,  ...,  5.6802e-01,\n",
       "           -1.4841e-01, -1.4218e-01],\n",
       "          [-8.0866e-02, -2.3968e-01,  1.6320e-01,  ...,  6.1787e-02,\n",
       "            1.6179e-02,  2.5040e-01],\n",
       "          [-3.4248e-01, -1.3313e-01, -4.3621e-01,  ...,  3.2381e-01,\n",
       "            1.3221e-02,  5.6685e-02],\n",
       "          ...,\n",
       "          [ 3.4316e-01, -8.4548e-04, -3.4696e-01,  ..., -6.7568e-02,\n",
       "           -1.2948e-01, -1.6340e-01],\n",
       "          [-3.2091e-02, -6.8572e-01,  2.5836e-01,  ...,  2.4276e-01,\n",
       "           -1.0186e-01, -1.8865e-01],\n",
       "          [-4.6698e-01, -2.5016e-01, -1.1452e-01,  ...,  6.8086e-02,\n",
       "           -3.2970e-01, -7.7348e-02]]], grad_fn=<UnsafeViewBackward0>)}"
      ]
     },
     "execution_count": 56,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ppo_old_batchs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## PPO Training\n",
    "\n",
    "https://github.com/huggingface/trl/blob/26d86757a7c7e24e397ea44f57ecce6031dfac01/trl/trainer/ppo_trainer.py#L529-L538"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Split the full batch `ppo_batchs` into multiple mini-batches according to the specified `batch_size` and `mini_batch_size`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 88,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "def get_minibatch(ppo_batchs, batch_size, mini_batch_size):\n",
    "    # How many mini-batches are needed\n",
    "    step = batch_size // mini_batch_size\n",
    "    ppo_batchs_iter = []\n",
    "    \n",
    "    # Shuffle the indices so mini-batches differ between passes\n",
    "    b_inds = np.random.permutation(batch_size)\n",
    "    \n",
    "    # Slice out each mini-batch by index\n",
    "    for i in range(step):\n",
    "        start_idx = i * mini_batch_size\n",
    "        end_idx = start_idx + mini_batch_size\n",
    "        batch_inds = b_inds[start_idx:end_idx]\n",
    "        \n",
    "        # Build the current mini-batch; non-tensor entries pass through unchanged\n",
    "        mini_batch = {}\n",
    "        for key, value in ppo_batchs.items():\n",
    "            if value is not None and isinstance(value, torch.Tensor) and value.size(0) == batch_size:\n",
    "                mini_batch[key] = value[batch_inds]\n",
    "            else:\n",
    "                mini_batch[key] = value\n",
    "                \n",
    "        ppo_batchs_iter.append(mini_batch)\n",
    "    \n",
    "    return ppo_batchs_iter"
   ]
  },
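  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of the splitting logic (a hypothetical toy example, not part of the original pipeline): a batch of 4 samples split into mini-batches of 2. Tensor entries are sliced by the shuffled indices, while non-tensor entries such as `loss` pass through unchanged."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical toy example: every row ends up in exactly one mini-batch\n",
    "toy = {'prompt': torch.arange(8).reshape(4, 2), 'loss': None}\n",
    "minis = get_minibatch(toy, batch_size=4, mini_batch_size=2)\n",
    "len(minis), minis[0]['prompt'].shape"
   ]
  },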
  {
   "cell_type": "code",
   "execution_count": 74,
   "metadata": {},
   "outputs": [],
   "source": [
    "optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'prompt': tensor([[5, 0, 0, 1, 0],\n",
       "         [4, 8, 1, 4, 1],\n",
       "         [9, 6, 7, 0, 5],\n",
       "         [4, 8, 5, 2, 9],\n",
       "         [5, 5, 0, 6, 3]]),\n",
       " 'response': tensor([[0, 3, 0, 4, 8, 2, 6, 4, 9, 3],\n",
       "         [2, 6, 7, 5, 0, 0, 3, 3, 4, 8],\n",
       "         [0, 8, 8, 2, 6, 0, 6, 0, 5, 8],\n",
       "         [8, 1, 4, 6, 2, 7, 5, 5, 9, 5],\n",
       "         [7, 4, 9, 5, 6, 6, 6, 1, 9, 8]]),\n",
       " 'mask': tensor([[0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "         [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "         [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "         [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],\n",
       "         [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.]]),\n",
       " 'logprobs_ref': tensor([[ -9.7657,  -9.5145,  -9.7403,  -9.4521,  -9.8023,  -9.8455,  -9.8040,\n",
       "           -9.5040,  -9.9263,  -9.4373],\n",
       "         [-10.0543,  -9.8124,  -9.6533,  -9.7472,  -9.6888,  -9.7347,  -9.5207,\n",
       "           -9.2883,  -9.4406,  -9.7164],\n",
       "         [ -9.7657, -10.3167,  -9.8208,  -9.8356,  -9.5770,  -9.7337,  -9.7759,\n",
       "           -9.6341,  -9.4780,  -9.8436],\n",
       "         [-10.3158,  -9.5739,  -9.6799,  -9.8827, -10.0626,  -9.6075,  -9.7284,\n",
       "           -9.6707,  -9.9424,  -9.6236],\n",
       "         [ -9.8277,  -9.9490,  -9.4426,  -9.7313,  -9.5943,  -9.7917,  -9.6991,\n",
       "           -9.7685,  -9.9496,  -9.7640]], grad_fn=<SqueezeBackward1>),\n",
       " 'logprobs_old': tensor([[ -9.6546,  -9.7166,  -9.7343,  -9.4578,  -9.8507,  -9.7604,  -9.8515,\n",
       "           -9.6053,  -9.3741,  -9.4720],\n",
       "         [ -9.8447, -10.2057,  -9.4921,  -9.7237,  -9.1873,  -9.4923,  -9.6284,\n",
       "           -9.9353,  -9.3172,  -9.8445],\n",
       "         [ -9.6546, -10.1063,  -9.7419,  -9.6142,  -9.8585,  -9.5115,  -9.7855,\n",
       "           -9.2093,  -9.4475,  -9.7984],\n",
       "         [-10.0683,  -9.7843,  -9.6151,  -9.7731,  -9.4803,  -9.1821,  -9.4697,\n",
       "           -9.6959,  -9.3579,  -9.5344],\n",
       "         [ -9.6822,  -9.6050,  -9.5979,  -9.7321, -10.0195, -10.2095,  -9.9384,\n",
       "           -9.7428,  -9.4144,  -9.9008]], grad_fn=<SqueezeBackward1>),\n",
       " 'logprobs': tensor([[ -9.6546,  -9.7166,  -9.7343,  -9.4578,  -9.8507,  -9.7604,  -9.8515,\n",
       "           -9.6053,  -9.3741,  -9.4720],\n",
       "         [ -9.8447, -10.2057,  -9.4921,  -9.7237,  -9.1873,  -9.4923,  -9.6284,\n",
       "           -9.9353,  -9.3172,  -9.8445],\n",
       "         [ -9.6546, -10.1063,  -9.7419,  -9.6142,  -9.8585,  -9.5115,  -9.7855,\n",
       "           -9.2093,  -9.4475,  -9.7984],\n",
       "         [-10.0683,  -9.7843,  -9.6151,  -9.7731,  -9.4803,  -9.1821,  -9.4697,\n",
       "           -9.6959,  -9.3579,  -9.5344],\n",
       "         [ -9.6822,  -9.6050,  -9.5979,  -9.7321, -10.0195, -10.2095,  -9.9384,\n",
       "           -9.7428,  -9.4144,  -9.9008]], grad_fn=<SqueezeBackward1>),\n",
       " 'values_old': tensor([[ 1.2677,  0.5070,  0.9766, -0.4549,  0.5805, -0.4866,  0.5283, -0.2907,\n",
       "           0.0779, -0.1667],\n",
       "         [ 0.3226, -0.0667, -0.7088, -0.4413,  0.6490,  0.8188,  1.3689,  0.6129,\n",
       "           0.8584, -0.0860],\n",
       "         [ 1.2112,  0.0672,  0.4946, -0.7344,  0.5928,  0.8188,  1.0112,  0.7424,\n",
       "           1.3459, -0.0567],\n",
       "         [ 0.5810, -0.2458,  0.0620, -0.9607, -0.0040, -1.0716,  0.5418, -0.1127,\n",
       "          -0.0043, -0.3484],\n",
       "         [-0.4887, -0.2443, -0.6051, -0.6362,  0.2427, -0.0520,  0.6208,  0.1293,\n",
       "           0.1234, -0.2866]], grad_fn=<SelectBackward0>),\n",
       " 'values': tensor([[ 1.7677,  1.0070,  1.4766,  0.0451,  1.0805,  0.0134,  1.0283,  0.2093,\n",
       "           0.5779,  0.3333],\n",
       "         [ 0.8226,  0.4333, -0.2088,  0.0587,  1.1490,  1.3188,  1.8689,  1.1129,\n",
       "           1.3584,  0.4140],\n",
       "         [ 1.7112,  0.5672,  0.9946, -0.2344,  1.0928,  1.3188,  1.5112,  1.2424,\n",
       "           1.8459,  0.4433],\n",
       "         [ 1.0810,  0.2542,  0.5620, -0.4607,  0.4960, -0.5716,  1.0418,  0.3873,\n",
       "           0.4957,  0.1516],\n",
       "         [ 0.0113,  0.2557, -0.1051, -0.1362,  0.7427,  0.4480,  1.1208,  0.6293,\n",
       "           0.6234,  0.2134]], grad_fn=<AddBackward0>),\n",
       " 'rewards': tensor([[-0.9515],\n",
       "         [-0.9003],\n",
       "         [-1.3975],\n",
       "         [-1.6012],\n",
       "         [-1.6159]], grad_fn=<AddmmBackward0>),\n",
       " 'rewards_kl': tensor([[-1.1109e-02,  2.0212e-02, -5.9538e-04,  5.7230e-04,  4.8371e-03,\n",
       "          -8.5035e-03,  4.7553e-03,  1.0133e-02, -5.5222e-02, -9.4801e-01],\n",
       "         [-2.0961e-02,  3.9329e-02, -1.6113e-02, -2.3515e-03, -5.0153e-02,\n",
       "          -2.4239e-02,  1.0773e-02,  6.4699e-02, -1.2334e-02, -8.8754e-01],\n",
       "         [-1.1109e-02, -2.1040e-02, -7.8938e-03, -2.2135e-02,  2.8151e-02,\n",
       "          -2.2220e-02,  9.5730e-04, -4.2487e-02, -3.0542e-03, -1.4020e+00],\n",
       "         [-2.4752e-02,  2.1033e-02, -6.4787e-03, -1.0964e-02, -5.8229e-02,\n",
       "          -4.2533e-02, -2.5875e-02,  2.5274e-03, -5.8451e-02, -1.6101e+00],\n",
       "         [-1.4547e-02, -3.4400e-02,  1.5532e-02,  8.0109e-05,  4.2521e-02,\n",
       "           4.1776e-02,  2.3927e-02, -2.5682e-03, -5.3520e-02, -1.6023e+00]],\n",
       "        grad_fn=<CopySlices>),\n",
       " 'loss': None,\n",
       " 'logits': tensor([[[ 1.1939e+00, -3.4385e-01,  1.8697e-01,  ...,  8.9561e-02,\n",
       "           -1.3423e-01, -5.1387e-05],\n",
       "          [ 1.3593e-01, -2.1616e-01,  1.7281e-01,  ...,  5.4955e-02,\n",
       "           -2.8100e-01, -9.6232e-02],\n",
       "          [ 1.1163e+00, -4.0199e-01, -5.8994e-02,  ..., -4.4124e-02,\n",
       "            8.6503e-02, -4.1281e-02],\n",
       "          ...,\n",
       "          [ 7.9734e-02, -4.3286e-01,  1.4872e-01,  ..., -5.1665e-03,\n",
       "           -7.4853e-02, -2.7805e-02],\n",
       "          [ 3.4729e-01, -2.8876e-01,  3.5831e-02,  ...,  1.3297e-01,\n",
       "           -8.0469e-03,  5.7139e-02],\n",
       "          [-3.4550e-01, -1.6689e-01, -1.2459e-01,  ...,  2.8532e-01,\n",
       "           -3.9113e-01, -1.1683e-01]],\n",
       " \n",
       "         [[ 6.7189e-03, -4.6148e-02,  1.0041e+00,  ...,  5.6802e-01,\n",
       "           -1.4841e-01, -1.4218e-01],\n",
       "          [-8.0866e-02, -2.3968e-01,  1.6320e-01,  ...,  6.1787e-02,\n",
       "            1.6179e-02,  2.5040e-01],\n",
       "          [-3.4248e-01, -1.3313e-01, -4.3621e-01,  ...,  3.2381e-01,\n",
       "            1.3221e-02,  5.6685e-02],\n",
       "          ...,\n",
       "          [ 3.4316e-01, -8.4548e-04, -3.4696e-01,  ..., -6.7568e-02,\n",
       "           -1.2948e-01, -1.6340e-01],\n",
       "          [-3.2091e-02, -6.8572e-01,  2.5836e-01,  ...,  2.4276e-01,\n",
       "           -1.0186e-01, -1.8865e-01],\n",
       "          [-4.6698e-01, -2.5016e-01, -1.1452e-01,  ...,  6.8086e-02,\n",
       "           -3.2970e-01, -7.7348e-02]],\n",
       " \n",
       "         [[ 1.1939e+00, -3.4385e-01,  1.8697e-01,  ...,  8.9561e-02,\n",
       "           -1.3423e-01, -5.1387e-05],\n",
       "          [-1.0593e-01, -1.3282e-01,  2.0533e-01,  ..., -1.9474e-01,\n",
       "           -1.6972e-01,  4.7611e-02],\n",
       "          [-4.1880e-01,  1.8398e-02, -5.3639e-02,  ..., -1.0487e-02,\n",
       "           -1.2665e-01, -7.0815e-02],\n",
       "          ...,\n",
       "          [ 1.6399e+00, -2.6469e-01, -8.5538e-02,  ..., -2.8674e-01,\n",
       "            5.6738e-02,  8.3134e-02],\n",
       "          [ 1.7255e-01, -3.7670e-01, -3.0233e-01,  ..., -7.1360e-02,\n",
       "           -9.5127e-02,  4.1914e-01],\n",
       "          [-4.9126e-01, -2.2191e-01, -7.8555e-03,  ..., -5.6117e-03,\n",
       "           -3.6520e-01,  9.7580e-03]],\n",
       " \n",
       "         [[-1.4264e-01, -2.8157e-02,  2.0611e-01,  ...,  3.9266e-01,\n",
       "           -3.9834e-01, -2.0778e-01],\n",
       "          [-2.1525e-01,  1.0653e+00,  2.1692e-01,  ...,  1.1699e-01,\n",
       "            5.6338e-02, -1.0115e-01],\n",
       "          [-5.6471e-01, -2.6728e-01,  5.1792e-02,  ...,  2.3630e-01,\n",
       "           -8.6777e-02, -2.1680e-01],\n",
       "          ...,\n",
       "          [ 5.0904e-02,  6.5761e-02, -6.5508e-01,  ..., -3.1484e-01,\n",
       "            5.0776e-02,  3.6046e-01],\n",
       "          [ 2.1136e-01, -1.6706e-01, -4.8888e-02,  ...,  1.3312e-01,\n",
       "            2.5565e-03, -4.6409e-02],\n",
       "          [-5.2850e-01, -9.6140e-02, -4.2049e-01,  ...,  4.4030e-03,\n",
       "           -1.7598e-01,  2.3337e-01]],\n",
       " \n",
       "         [[-1.9936e-01, -1.6945e-01, -1.9695e-01,  ...,  5.2535e-01,\n",
       "           -1.7846e-01, -2.9423e-01],\n",
       "          [-3.5077e-01, -4.7752e-01,  2.0070e-01,  ...,  2.2220e-01,\n",
       "           -8.3356e-02, -2.5743e-01],\n",
       "          [-2.8892e-01,  3.0952e-02, -2.3381e-01,  ...,  1.5720e-01,\n",
       "            9.4805e-02, -8.3954e-02],\n",
       "          ...,\n",
       "          [ 6.9169e-02,  1.1067e+00, -2.3178e-01,  ...,  7.0888e-02,\n",
       "            2.1960e-01, -7.3331e-02],\n",
       "          [ 3.3634e-01, -3.3223e-01, -1.2819e-01,  ...,  1.1444e-01,\n",
       "            1.8477e-01, -8.0723e-02],\n",
       "          [-4.0076e-01, -2.4644e-01, -2.1143e-01,  ...,  1.5312e-02,\n",
       "           -1.8078e-01, -1.4051e-01]]], grad_fn=<UnsafeViewBackward0>)}"
      ]
     },
     "execution_count": 75,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ppo_old_batchs"
   ]
  },
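  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The part elided inside `ppo_train_step` recomputes per-token log-probs/values and turns per-token rewards (e.g. `rewards_kl`) into advantages. Below is a minimal GAE (Generalized Advantage Estimation) sketch with assumed hyperparameters `gamma` and `lam` — an illustrative helper, not the exact function used elsewhere in this notebook:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_advantages_sketch(values, rewards, gamma=1.0, lam=0.95):\n",
    "    # GAE computed backwards over the response tokens.\n",
    "    # values, rewards: (batch, seq_len); the value after the last token is 0.\n",
    "    advantages = torch.zeros_like(rewards)\n",
    "    last_gae = torch.zeros(rewards.size(0))\n",
    "    for t in reversed(range(rewards.size(1))):\n",
    "        next_value = values[:, t + 1] if t + 1 < rewards.size(1) else 0.0\n",
    "        delta = rewards[:, t] + gamma * next_value - values[:, t]\n",
    "        last_gae = delta + gamma * lam * last_gae\n",
    "        advantages[:, t] = last_gae\n",
    "    # Returns (the critic's regression target) are advantages + values\n",
    "    return advantages, advantages + values"
   ]
  },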
  {
   "cell_type": "code",
   "execution_count": 155,
   "metadata": {},
   "outputs": [],
   "source": [
    "def ppo_train_step(models, ppo_batchs, ppo_config, get_loss, optimizer):\n",
    "    losses = []\n",
    "\n",
    "    # Several PPO epochs over the same rollout batch\n",
    "    for i in range(ppo_config.ppo_epochs):\n",
    "        # Re-split into freshly shuffled mini-batches each epoch\n",
    "        ppo_batchs_iter = get_minibatch(\n",
    "            ppo_batchs, batch_size, ppo_config.mini_batch_size)\n",
    "\n",
    "        # Train on each mini-batch\n",
    "        for mini_batchs in ppo_batchs_iter:\n",
    "            optimizer.zero_grad()\n",
    "            # Recompute the intermediate results with the current policy\n",
    "            # instead of reusing the earlier computation graph\n",
    "            with torch.set_grad_enabled(True):\n",
    "                logits = get_logits(models.actor, mini_batchs['prompt'])\n",
    "                \"\"\"\n",
    "                omitted\n",
    "                \"\"\"\n",
    "\n",
    "                # Compute the PPO loss\n",
    "                loss = get_loss(mini_batchs, ppo_config)\n",
    "\n",
    "                # Backpropagate, then apply the update\n",
    "                loss.backward()\n",
    "            optimizer.step()\n",
    "\n",
    "            # Record the loss (detached so the graph can be freed)\n",
    "            losses.append(loss.detach())\n",
    "\n",
    "    # Store the per-step losses on the batch\n",
    "    ppo_batchs['loss'] = losses\n",
    "\n",
    "    print(losses)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "llm",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
