{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "12f4dced",
   "metadata": {},
   "source": [
    "## Q-函数估计中的\"最大化偏差\"问题--Double DQN\n",
    "\n",
    "### 其一\n",
    "在 DQN 中，我们使用深度神经网络来近似 Q 函数，即 $Q ( s , a )$ ，它表示在状态 $s$ 下采取动作 $a$ 时的预期回报。Q-Learning 算法本身是基于贝尔曼方程的，更新公式为:\n",
    "\n",
    "$$\n",
    "Q\\left(s_t, a_t\\right) \\leftarrow Q\\left(s_t, a_t\\right)+\\alpha\\left[r_t+\\gamma \\max _a Q\\left(s_{t+1}, a\\right)-Q\\left(s_t, a_t\\right)\\right]\n",
    "$$\n",
    "\n",
    "\n",
    "其中， $\\max _a Q\\left(s_{t+1}, a\\right)$ 是当前 Q 函数对下一状态所有可能动作的最大值估计。这个最大值是由 Q函数本身来估计的，但由于 $Q$ 函数是在每个时刻更新的，因此在面对估计误差时，DQN 会倾向于过高估计 $Q$ 值。\n",
    "\n",
    "### 其二\n",
    "自举问题,同一个模型会倾向于把值计算得更大或者更小,由于上一点得原因，普遍是偏大\n",
    "```py\n",
    "# 计算当前状态下的Q值\n",
    "            # gather(dim=1, index=action) 根据动作选择对应的Q值\n",
    "            value = model(state).gather(dim=1, index=action)\n",
    "\n",
    "            # 计算目标Q值（target）\n",
    "            with torch.no_grad():  # 不计算梯度\n",
    "                target = model(next_state)  # 获取下一个状态的Q值\n",
    "            # 取最大Q值作为目标\n",
    "            target = target.max(dim=1)[0].reshape(-1, 1)  # 取每个样本的最大Q值\n",
    "            # 计算目标值：reward + gamma * max(Q') * (1 - done)\n",
    "            target = target * 0.99 * (1 - over) + reward  # 使用折扣因子0.99\n",
    "```\n",
    "\n",
    "双模型使用两个不同的模型计算value和target,缓解了自举造成的过高估计."
   ]
  },
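  {
   "cell_type": "markdown",
   "id": "a0b1c2d3",
   "metadata": {},
   "source": [
    "A minimal numeric sketch of the bias (illustrative toy values, not part of the training code): the true Q-value of every action below is 0, yet the max over noisy estimates lands well above 0, while the double-estimator trick (one copy picks the action, the other evaluates it) stays near 0.\n",
    "```py\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "# true Q-values of 5 actions, all exactly 0\n",
    "true_q = torch.zeros(5)\n",
    "# two independent noisy estimators of the same values\n",
    "noisy_a = true_q + torch.randn(10_000, 5) * 0.5\n",
    "noisy_b = true_q + torch.randn(10_000, 5) * 0.5\n",
    "\n",
    "# single estimator: max over its own noisy values, biased upward\n",
    "single_max = noisy_a.max(dim=1)[0].mean()\n",
    "\n",
    "# double estimator: A picks the action, B evaluates it, roughly unbiased\n",
    "pick = noisy_a.argmax(dim=1, keepdim=True)\n",
    "double_est = noisy_b.gather(1, pick).mean()\n",
    "\n",
    "single_max, double_est\n",
    "```"
   ]
  },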
  {
   "cell_type": "code",
   "execution_count": 152,
   "id": "91162df8",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAASAAAADMCAYAAADTcn7NAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8pXeV/AAAACXBIWXMAAA9hAAAPYQGoP6dpAAATrUlEQVR4nO3dfWxTZd8H8G+7rR1jO50brnWuDeSWiAsv6oDtYJ5otDdTFyKyPFGDOg3BMDsizhBdgqiomcE/fMWRO1HwH8TMBI0LinPIiFoYTBfHgEUSTPcgbXl5erpN13Xt9fzhs6OVgXSUXu36/SSHcM51tf2da+t351xnZzUIIQSIiCQwyi6AiDIXA4iIpGEAEZE0DCAikoYBRETSMICISBoGEBFJwwAiImkYQEQkDQOIiKSRFkBbtmzBzJkzkZubi8rKSnR1dckqhYgkkRJAH3/8MRobG/HCCy/ghx9+wIIFC1BdXQ2/3y+jHCKSxCDjZtTKykosWrQI7777LgAgGo3Cbrdj7dq1eO6555JdDhFJkp3sFxwdHUV3dzeampr0bUajEU6nE263e8LHhEIhhEIhfT0ajeL8+fMoLi6GwWC46jUTUXyEEBgcHERpaSmMxoufaCU9gM6ePYtIJAKr1Rqz3Wq14vjx4xM+prm5GS+99FIyyiOiBBoYGEBZWdlF25MeQJPR1NSExsZGfV3TNDgcDgwMDEBRFImVEdFEgsEg7HY7CgoKLtkv6QE0Y8YMZGVlwefzxWz3+Xyw2WwTPsZsNsNsNl+wXVEUBhBRCvunKZKkXwUzmUyoqKhAR0eHvi0ajaKjowOqqia7HCKSSMopWGNjI+rq6rBw4UIsXrwYb775JoaHh/H444/LKIeIJJESQA888ADOnDmDjRs3wuv14uabb8aXX355wcQ0EU1tUn4P6EoFg0FYLBZomsY5IKIUdLnvUd4LRkTSMICISBoGEBFJwwAiImkYQEQkDQOIiKRhABGRNAwgIpKGAURE0jCAiEgaBhARScMAIiJpGEBEJA0DiIikYQARkTQMICKShgFERNIwgIhIGgYQEUnDACIiaRhARCQNA4iIpGEAEZE0DCAikoYBRETSMICISBoGEBFJwwAiImniDqD9+/dj2bJlKC0thcFgwKeffhrTLoTAxo0bcd1112HatGlwOp34+eefY/qcP38eK1euhKIoKCwsxKpVqzA0NHRFO0JE6SfuABoeHsaCBQuwZcuWCds3b96Mt99+G1u3bsXBgwcxffp0VFdXY2RkRO+zcuVK9PX1ob29HW1tbdi/fz+eeOKJye8FEaUncQUAiF27dunr0WhU2Gw28frrr+vbAoGAMJvN4qOPPhJCCHH06FEBQBw6dEjv88UXXwiDwSBOnTp1Wa+raZoAIDRNu5Lyiegqudz3aELngE6ePAmv1wun06lvs1gsqKyshNvtBgC43W4UFhZi4cKFeh+n0wmj0YiDBw9O+LyhUAjBYDBmIaL0l9AA8nq9AACr1Rqz3Wq16m1erxclJSUx7dnZ2SgqKtL7/F1zczMsFou+2O32RJZNRJKkxVWwpqYmaJqmLwMDA7JLIqIESGgA2Ww2AIDP54vZ7vP59DabzQa/3x/TPjY2hvPnz+t9/s5sNkNRlJiFiNJfQgNo1qxZsNls6Ojo0LcFg0EcPHgQqqoCAFRVRSAQQHd3t95n7969iEajqKysTGQ5RJTisuN9wNDQEE6cOKGvnzx5Ej09PSgqKoLD4cC6devwyiuvYPbs2Zg1axaef/55lJaWYvny5QCAm266CXfffTdWr16NrVu3IhwOo6GhAQ8++CBKS0sTtmNElAbivbz2zTffCAAXLHV1dUKIPy7FP//888JqtQqz2Szuuusu0d/fH/Mc586dEw899JDIz88XiqKIxx9/XAwODib8Eh8RyXG571GDEEJIzL9JCQaDsFgs0DSN80FEKehy36NpcRWMiKYmBhARScMAIiJp
GEBEJA0DiIikYQARkTQMICKShgFERNIwgIhIGgYQEUnDACIiaRhARCQNA4iIpGEAEZE0DCAikoYBRETSMICISBoGEBFJwwAiImkYQEQkTdwfy0N0NURGf0fAcwQQ0T82GAyw2Oci25wntzC6qhhAlBJGhwM4uW87RCQMADAYs1C+YgMDaIrjKRilhvT7dChKAAYQpQShf8YlZRIGEKUGHgFlJAYQpYQ0/IBeSgAGEKUIBlAmYgBRauARUEaKK4Cam5uxaNEiFBQUoKSkBMuXL0d/f39Mn5GREbhcLhQXFyM/Px+1tbXw+XwxfTweD2pqapCXl4eSkhKsX78eY2NjV743lLaEiPIgKAPFFUCdnZ1wuVw4cOAA2tvbEQ6HsXTpUgwPD+t9nn76aXz++edobW1FZ2cnfv31V6xYsUJvj0QiqKmpwejoKL7//nt8+OGH2L59OzZu3Ji4vaL0wyOgzCSugN/vFwBEZ2enEEKIQCAgcnJyRGtrq97n2LFjAoBwu91CCCF2794tjEaj8Hq9ep+WlhahKIoIhUKX9bqapgkAQtO0KymfUsjg6RPi0H/qRdfW1aJr62px6D9rxPDZ/5FdFk3S5b5Hr2gOSNM0AEBRUREAoLu7G+FwGE6nU+8zZ84cOBwOuN1uAIDb7ca8efNgtVr1PtXV1QgGg+jr65vwdUKhEILBYMxCU4vg+VdGmnQARaNRrFu3Drfddhvmzp0LAPB6vTCZTCgsLIzpa7Va4fV69T5/DZ/x9vG2iTQ3N8NiseiL3W6fbNmUqgR/ETETTTqAXC4Xjhw5gp07dyayngk1NTVB0zR9GRgYuOqvScklOAeUkSZ1M2pDQwPa2tqwf/9+lJWV6dttNhtGR0cRCARijoJ8Ph9sNpvep6urK+b5xq+Sjff5O7PZDLPZPJlSKV0wgDJSXEdAQgg0NDRg165d2Lt3L2bNmhXTXlFRgZycHHR0dOjb+vv74fF4oKoqAEBVVfT29sLv9+t92tvboSgKysvLr2RfKK0xgDJRXEdALpcLO3bswGeffYaCggJ9zsZisWDatGmwWCxYtWoVGhsbUVRUBEVRsHbtWqiqiqqqKgDA0qVLUV5ejkceeQSbN2+G1+vFhg0b4HK5eJSTwcT43wGijBJXALW0tAAA7rjjjpjt27Ztw2OPPQYAeOONN2A0GlFbW4tQKITq6mq89957et+srCy0tbWhvr4eqqpi+vTpqKurw6ZNm65sTyi98RQsIxlEGs7+BYNBWCwWaJoGRVFkl0MJEPAcwYk9WyCiEQB//kGyvOLrJVdGk3G571HeC0YpIu1+DlICMIAoNaTfgTglAAOIUkIazgRQAjCAKDUIwRDKQAwgSgkCvAyfiRhAlBJE9O8BZAAMUkqhJGIAUUoIBbx/fighgJw8Bdnm6RIromRgAFFKiP7/7/+MMxizYDDy23Oq41eYUpQBPAeb+hhAlJoM+j80hTGAKCUZYICB+TPlMYAoNRl4CpYJGECUupg/Ux4DiFITj4AyAgOIUpKB4ZMRGECUmgwGcBZ66mMAUcpi/Ex9DCBKTZwDyggMIEpRvBk1EzCAKCUZ/vIvTV0MIEpNnIDOCAwgSlEGGBhCUx4DiFITb0bNCAwgSkkGTkJnhLg+GZVossLhMIaHhy/aHhoZiVkfi0SgaRoMxom/Rc1mM6ZNm5bQGin5GECUFN999x0efvjhi7Y/dPts/Pd/zdbXf/rpJyx75hZEohN/UobL5UJTU1PC66TkYgBRUoRCIZw6deqi7drgdRgIzcHp0L9gyT6D4ZE9OHXq1EUDKBgMXq1SKYnimgNqaWnB/PnzoSgKFEWBqqr44osv9PaRkRG4XC4UFxcjPz8ftbW18Pl8Mc/h8XhQU1ODvLw8lJSUYP369RgbG0vM3lDaOh26AceGluB8+Hqc/H0Bjg8t5oelZoC4AqisrAyvvfYauru7cfjwYdx5552477770NfXBwB4+umn8fnnn6O1tRWdnZ349ddf
sWLFCv3xkUgENTU1GB0dxffff48PP/wQ27dvx8aNGxO7V5R2hiMWRPUDcgOGIoX8tPgMENcp2LJly2LWX331VbS0tODAgQMoKyvD+++/jx07duDOO+8EAGzbtg033XQTDhw4gKqqKnz11Vc4evQovv76a1itVtx88814+eWX8eyzz+LFF1+EyWRK3J5RWikxeZBrHMRINB9ZhjCuM/3Mz4vPAJOeA4pEImhtbcXw8DBUVUV3dzfC4TCcTqfeZ86cOXA4HHC73aiqqoLb7ca8efNgtVr1PtXV1aivr0dfXx9uueWWuGo4fvw48vPzJ7sLlEQej+eS7f0//4hweBPOjV6P/Oz/xdjQz5c8Ajp79iyOHj2a2CIpYYaGhi6rX9wB1NvbC1VVMTIygvz8fOzatQvl5eXo6emByWRCYWFhTH+r1Qqv1wsA8Hq9MeEz3j7edjGhUAihUEhfH5+A1DSN80dp4lKX4AGg54QXPScu/j3wd6FQCIFA4Aqroqvln77e4+IOoBtvvBE9PT3QNA2ffPIJ6urq0NnZGXeB8WhubsZLL710wfbKykooinJVX5sSY3BwMKHPd/3112PJkiUJfU5KnMu9Shn3b0KbTCbccMMNqKioQHNzMxYsWIC33noLNpsNo6OjF/xU8vl8sNlsAACbzXbBVbHx9fE+E2lqaoKmafoyMDAQb9lElIKu+FaMaDSKUCiEiooK5OTkoKOjQ2/r7++Hx+OBqqoAAFVV0dvbC7/fr/dpb2+HoigoLy+/6GuYzWb90v/4QkTpL65TsKamJtxzzz1wOBwYHBzEjh07sG/fPuzZswcWiwWrVq1CY2MjioqKoCgK1q5dC1VVUVVVBQBYunQpysvL8cgjj2Dz5s3wer3YsGEDXC4XzGbzVdlBIkpdcQWQ3+/Ho48+itOnT8NisWD+/PnYs2cP/v3vfwMA3njjDRiNRtTW1iIUCqG6uhrvvfee/visrCy0tbWhvr4eqqpi+vTpqKurw6ZNmxK7V5RysrOzE3rkyh9YU4NBiPT7ZYtgMAiLxQJN03g6liZGRkZw5syZhD1fQUHBBVdcKXVc7nuU94JRUuTm5sJut8sug1IM/x4QEUnDACIiaRhARCQNA4iIpGEAEZE0DCAikoYBRETSMICISBoGEBFJwwAiImkYQEQkDQOIiKRhABGRNAwgIpKGAURE0jCAiEgaBhARScMAIiJpGEBEJA0DiIikYQARkTQMICKShgFERNIwgIhIGgYQEUnDACIiaRhARCQNA4iIpGEAEZE0DCAikiZbdgGTIYQAAASDQcmVENFExt+b4+/Vi0nLADp37hwAwG63S66EiC5lcHAQFovlou1pGUBFRUUAAI/Hc8mdo1jBYBB2ux0DAwNQFEV2OWmBYzY5QggMDg6itLT0kv3SMoCMxj+mriwWC78pJkFRFI5bnDhm8bucgwNOQhORNAwgIpImLQPIbDbjhRdegNlsll1KWuG4xY9jdnUZxD9dJyMiukrS8giIiKYGBhARScMAIiJpGEBEJE1aBtCWLVswc+ZM5ObmorKyEl1dXbJLkqa5uRmLFi1CQUEBSkpKsHz5cvT398f0GRkZgcvlQnFxMfLz81FbWwufzxfTx+PxoKamBnl5eSgpKcH69esxNjaWzF2R5rXXXoPBYMC6dev0bRyzJBFpZufOncJkMokPPvhA9PX1idWrV4vCwkLh8/lklyZFdXW12LZtmzhy5Ijo6ekR9957r3A4HGJoaEjvs2bNGmG320VHR4c4fPiwqKqqEkuWLNHbx8bGxNy5c4XT6RQ//vij2L17t5gxY4ZoamqSsUtJ1dXVJWbOnCnmz58vnnrqKX07xyw50i6AFi9eLFwul74eiUREaWmpaG5ullhV6vD7/QKA6OzsFEIIEQgERE5OjmhtbdX7HDt2TAAQbrdbCCHE7t27hdFoFF6vV+/T0tIiFEURoVAouTuQRIODg2L27Nmivb1d3H777XoAccySJ61OwUZH
R9Hd3Q2n06lvMxqNcDqdcLvdEitLHZqmAfjzht3u7m6Ew+GYMZszZw4cDoc+Zm63G/PmzYPVatX7VFdXIxgMoq+vL4nVJ5fL5UJNTU3M2AAcs2RKq5tRz549i0gkEvNFBwCr1Yrjx49Lqip1RKNRrFu3Drfddhvmzp0LAPB6vTCZTCgsLIzpa7Va4fV69T4Tjel421S0c+dO/PDDDzh06NAFbRyz5EmrAKJLc7lcOHLkCL799lvZpaS0gYEBPPXUU2hvb0dubq7scjJaWp2CzZgxA1lZWRdcjfD5fLDZbJKqSg0NDQ1oa2vDN998g7KyMn27zWbD6OgoAoFATP+/jpnNZptwTMfbppru7m74/X7ceuutyM7ORnZ2Njo7O/H2228jOzsbVquVY5YkaRVAJpMJFRUV6Ojo0LdFo1F0dHRAVVWJlckjhEBDQwN27dqFvXv3YtasWTHtFRUVyMnJiRmz/v5+eDwefcxUVUVvby/8fr/ep729HYqioLy8PDk7kkR33XUXent70dPToy8LFy7EypUr9f9zzJJE9ix4vHbu3CnMZrPYvn27OHr0qHjiiSdEYWFhzNWITFJfXy8sFovYt2+fOH36tL789ttvep81a9YIh8Mh9u7dKw4fPixUVRWqqurt45eUly5dKnp6esSXX34prr322oy6pPzXq2BCcMySJe0CSAgh3nnnHeFwOITJZBKLFy8WBw4ckF2SNAAmXLZt26b3+f3338WTTz4prrnmGpGXlyfuv/9+cfr06Zjn+eWXX8Q999wjpk2bJmbMmCGeeeYZEQ6Hk7w38vw9gDhmycE/x0FE0qTVHBARTS0MICKShgFERNIwgIhIGgYQEUnDACIiaRhARCQNA4iIpGEAEZE0DCAikoYBRETSMICISJr/AxX3fSf8CH1lAAAAAElFTkSuQmCC",
      "text/plain": [
       "<Figure size 300x300 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import gymnasium as gym\n",
    "\n",
    "\n",
    "#定义环境\n",
    "class MyWrapper(gym.Wrapper):\n",
    "\n",
    "    def __init__(self):\n",
    "        env = gym.make('CartPole-v1', render_mode='rgb_array')\n",
    "        super().__init__(env)\n",
    "        self.env = env\n",
    "        self.step_n = 0\n",
    "\n",
    "    def reset(self):\n",
    "        state, _ = self.env.reset()\n",
    "        self.step_n = 0\n",
    "        return state\n",
    "\n",
    "    def step(self, action):\n",
    "        state, reward, terminated, truncated, info = self.env.step(action)\n",
    "        over = terminated or truncated\n",
    "\n",
    "        #限制最大步数\n",
    "        self.step_n += 1\n",
    "        if self.step_n >= 200:\n",
    "            over = True\n",
    "        \n",
    "        #没坚持到最后,扣分\n",
    "        if over and self.step_n < 200:\n",
    "            reward = -1000\n",
    "\n",
    "        return state, reward, over\n",
    "\n",
    "    #打印游戏图像\n",
    "    def show(self):\n",
    "        from matplotlib import pyplot as plt\n",
    "        plt.figure(figsize=(3, 3))\n",
    "        plt.imshow(self.env.render())\n",
    "        plt.show()\n",
    "\n",
    "\n",
    "env = MyWrapper()\n",
    "\n",
    "env.reset()\n",
    "\n",
    "env.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "37ecacbf",
   "metadata": {},
   "source": [
    "# 对决DQN-dueling\n",
    "\n",
    "## 1-书上的说法\n",
    "对于 Dueling DQN 中的公式 $Q_{\\eta, \\alpha, \\beta}(s, a)=V_{\\eta, \\alpha}(s)+A_{\\eta, \\beta}(s, a)$ ，它存在对于 $V$ 值和 $A$ 值建模不唯一性的问题。例如，对于同样的 $Q$ 值，如果将 $V$ 值加上任意大小的常数 $C$ ，再将所有 $A$ 值减去 $C$ ，则得到的 $Q$ 值依然不变，这就导致了训练的不稳定性。为了解决这一问题，Dueling DQN 强制最优动作的优势函数的实际输出为 0 ，即：\n",
    "\n",
    "$$\n",
    "Q_{\\eta, \\alpha, \\beta}(s, a)=V_{\\eta, \\alpha}(s)+A_{\\eta, \\beta}(s, a)-\\max _{a^{\\prime}} A_{\\eta, \\beta}\\left(s, a^{\\prime}\\right)\n",
    "$$\n",
    "\n",
    "\n",
    "此时 $V(s)=\\max _\\alpha Q(s, a)$ ，可以确保 $V$ 值建模的唯一性。在实现过程中，我们还可以用平均代替最大化操作，即：\n",
    "\n",
    "$$\n",
    "Q_{\\eta, \\alpha, \\beta}(s, a)=V_{\\eta, \\alpha}(s)+A_{\\eta, \\beta}(s, a)-\\frac{1}{| A |} \\sum_{a^{\\prime}} A_{\\eta, \\beta}\\left(s, a^{\\prime}\\right)\n",
    "$$\n",
    "\n",
    "## 2-公式简化理解\n",
    "\n",
    "每次都取最大价值action产生的误差，（很多其他书上将这一步称为**基线（base line）**）\n",
    "$$\n",
    "\\frac{1}{| A |} \\sum_{a^{\\prime}} A_{\\eta, \\beta}\\left(s, a^{\\prime}\\right)\n",
    "$$\n",
    "状态价值：\n",
    "$$\n",
    "V_{\\eta, \\alpha}(s)\n",
    "$$\n",
    "动作价值\n",
    "$$\n",
    "A_{\\eta, \\beta}(s, a)\n",
    "$$\n",
    "\n",
    "对标书上的公式：\n",
    "$$\n",
    "Q(\\text { state }, \\text { action })=\\text { state分数 }+ \\text { action分数 }- \\text { mean(action分数) }\n",
    "$$"
   ]
  },
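  {
   "cell_type": "markdown",
   "id": "b1c2d3e4",
   "metadata": {},
   "source": [
    "A tiny worked example of the mean-subtracted combination (made-up values, only to illustrate the formula): with $V(s)=1$ and advantages $[2, 4]$, the mean advantage is 3, so $Q = [1+2-3,\\ 1+4-3] = [0, 2]$.\n",
    "```py\n",
    "import torch\n",
    "\n",
    "value_state = torch.tensor([[1.0]])        # V(s), shape [batch, 1]\n",
    "value_action = torch.tensor([[2.0, 4.0]])  # A(s, a), shape [batch, actions]\n",
    "\n",
    "# the same combination the dueling model's forward() uses\n",
    "q = value_state + value_action - value_action.mean(dim=-1, keepdim=True)\n",
    "q  # tensor([[0., 2.]])\n",
    "```"
   ]
  },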
  {
   "cell_type": "code",
   "execution_count": 153,
   "id": "ecfbe912",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(Model(\n",
       "   (fc): Sequential(\n",
       "     (0): Linear(in_features=4, out_features=64, bias=True)\n",
       "     (1): ReLU()\n",
       "     (2): Linear(in_features=64, out_features=64, bias=True)\n",
       "     (3): ReLU()\n",
       "   )\n",
       "   (fc_action): Linear(in_features=64, out_features=2, bias=True)\n",
       "   (fc_state): Linear(in_features=64, out_features=1, bias=True)\n",
       " ),\n",
       " Model(\n",
       "   (fc): Sequential(\n",
       "     (0): Linear(in_features=4, out_features=64, bias=True)\n",
       "     (1): ReLU()\n",
       "     (2): Linear(in_features=64, out_features=64, bias=True)\n",
       "     (3): ReLU()\n",
       "   )\n",
       "   (fc_action): Linear(in_features=64, out_features=2, bias=True)\n",
       "   (fc_state): Linear(in_features=64, out_features=1, bias=True)\n",
       " ))"
      ]
     },
     "execution_count": 153,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "\n",
    "# #定义模型,评估状态下每个动作的价值\n",
    "model = torch.nn.Sequential(\n",
    "    torch.nn.Linear(4, 64),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(64, 64),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(64, 2),\n",
    ")\n",
    "\n",
    "#延迟更新的模型,用于计算target\n",
    "model_delay = torch.nn.Sequential(\n",
    "    torch.nn.Linear(4, 64),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(64, 64),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(64, 2),\n",
    ")\n",
    "####################Dueling DQN#######################################\n",
    "# class Model(torch.nn.Module):\n",
    "\n",
    "#     def __init__(self):\n",
    "#         super().__init__()\n",
    "\n",
    "#         self.fc = torch.nn.Sequential(\n",
    "#             torch.nn.Linear(4, 64),\n",
    "#             torch.nn.ReLU(),\n",
    "#             torch.nn.Linear(64, 64),\n",
    "#             torch.nn.ReLU(),\n",
    "#         )\n",
    "\n",
    "#         self.fc_action = torch.nn.Linear(64, 2)\n",
    "#         self.fc_state = torch.nn.Linear(64, 1)\n",
    "\n",
    "#     def forward(self, state):\n",
    "#         state = self.fc(state)\n",
    "\n",
    "#         #评估state的价值\n",
    "#         value_state = self.fc_state(state)\n",
    "\n",
    "#         #每个state下每个action的价值\n",
    "#         value_action = self.fc_action(state)\n",
    "\n",
    "#         #综合以上两者计算最终的价值,action去均值是为了数值稳定\n",
    "#         return value_state + value_action - value_action.mean(dim=-1,\n",
    "#                                                               keepdim=True)\n",
    "################################通常为现实环境训练需要加入noise#######################################\n",
    "# class Model(torch.nn.Module):\n",
    "\n",
    "#     def __init__(self):\n",
    "#         super().__init__()\n",
    "\n",
    "#         self.fc = torch.nn.Sequential(\n",
    "#             torch.nn.Linear(4, 64),\n",
    "#             torch.nn.ReLU(),\n",
    "#             torch.nn.Linear(64, 64),\n",
    "#             torch.nn.ReLU(),\n",
    "#         )\n",
    "\n",
    "#         #输出层参数的均值和标准差\n",
    "#         self.weight_mean = torch.nn.Parameter(torch.randn(64, 2))\n",
    "#         self.weight_std = torch.nn.Parameter(torch.randn(64, 2))\n",
    "\n",
    "#         self.bias_mean = torch.nn.Parameter(torch.randn(2))\n",
    "#         self.bias_std = torch.nn.Parameter(torch.randn(2))\n",
    "        \n",
    "#     def forward(self, state):\n",
    "#         state = self.fc(state)\n",
    "\n",
    "#         #正态分布投影,获取输出层的参数\n",
    "#         weight = self.weight_mean + torch.randn(64, 2) * self.weight_std\n",
    "#         bias = self.bias_mean + torch.randn(2) * self.bias_std\n",
    "\n",
    "#         #val模式下不需要随机性\n",
    "#         if not self.training:\n",
    "#             weight = self.weight_mean\n",
    "#             bias = self.bias_mean\n",
    "\n",
    "#         #计算输出\n",
    "#         return state.matmul(weight) + bias\n",
    "\n",
    "############################################################################\n",
    "model = Model()\n",
    "model_delay = Model()\n",
    "\n",
    "#复制参数\n",
    "model_delay.load_state_dict(model.state_dict())\n",
    "\n",
    "model, model_delay"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 154,
   "id": "84cbf0ff",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "-991.0"
      ]
     },
     "execution_count": 154,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from IPython import display\n",
    "import random\n",
    "\n",
    "\n",
    "#玩一局游戏并记录数据\n",
    "def play(show=False):\n",
    "    data = []\n",
    "    reward_sum = 0\n",
    "\n",
    "    state = env.reset()\n",
    "    over = False\n",
    "    while not over:\n",
    "        action = model(torch.FloatTensor(state).reshape(1, 4)).argmax().item()\n",
    "        if random.random() < 0.1:\n",
    "            action = env.action_space.sample()\n",
    "\n",
    "        next_state, reward, over = env.step(action)\n",
    "\n",
    "        data.append((state, action, reward, next_state, over))\n",
    "        reward_sum += reward\n",
    "\n",
    "        state = next_state\n",
    "\n",
    "        if show:\n",
    "            display.clear_output(wait=True)\n",
    "            env.show()\n",
    "\n",
    "    return data, reward_sum\n",
    "\n",
    "\n",
    "play()[-1]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aa2ba973",
   "metadata": {},
   "source": [
    "## 原始的pooling"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 155,
   "id": "7bfcfd21",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(206,\n",
       " (array([-0.04286215,  0.01233882,  0.03926238, -0.02857477], dtype=float32),\n",
       "  1,\n",
       "  1.0,\n",
       "  array([-0.04261538,  0.20687637,  0.03869089, -0.30861604], dtype=float32),\n",
       "  False))"
      ]
     },
     "execution_count": 155,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 定义一个数据池类，用于存储和管理从环境中收集的游戏数据\n",
    "class Pool:\n",
    "\n",
    "    def __init__(self):\n",
    "        # 初始化一个空的池，用来存储游戏的经验数据（状态、动作、奖励、下一个状态、游戏是否结束）\n",
    "        self.pool = []\n",
    "\n",
    "    def __len__(self):\n",
    "        # 返回数据池中数据的数量\n",
    "        return len(self.pool)\n",
    "\n",
    "    def __getitem__(self, i):\n",
    "        # 根据索引返回数据池中的第i条数据\n",
    "        return self.pool[i]\n",
    "\n",
    "    # 更新数据池\n",
    "    def update(self):\n",
    "        # 每次更新数据池时，至少要添加N条新的数据\n",
    "        old_len = len(self.pool)  # 记录更新前池中数据的数量\n",
    "        while len(self.pool) - old_len < 200:  # 每次添加至少200条新的数据\n",
    "            # 从play()函数中收集数据并扩展到数据池中\n",
    "            self.pool.extend(play()[0])\n",
    "\n",
    "        # 只保留最新的20000条数据\n",
    "        self.pool = self.pool[-20_000:]\n",
    "\n",
    "    # 从数据池中获取一个批次的样本\n",
    "    def sample(self):\n",
    "        # 从数据池中随机选择64条数据\n",
    "        data = random.sample(self.pool, 64)\n",
    "        \n",
    "        # 将每个数据项（状态、动作、奖励、下一个状态、是否结束）拆开并转换为Tensor\n",
    "        state = torch.FloatTensor([i[0] for i in data]).reshape(-1, 4)  # 状态，维度为[batch_size, 4]\n",
    "        action = torch.LongTensor([i[1] for i in data]).reshape(-1, 1)  # 动作，维度为[batch_size, 1]\n",
    "        reward = torch.FloatTensor([i[2] for i in data]).reshape(-1, 1)  # 奖励，维度为[batch_size, 1]\n",
    "        next_state = torch.FloatTensor([i[3] for i in data]).reshape(-1, 4)  # 下一个状态，维度为[batch_size, 4]\n",
    "        over = torch.LongTensor([i[4] for i in data]).reshape(-1, 1)  # 是否结束，维度为[batch_size, 1]\n",
    "\n",
    "        # 返回状态、动作、奖励、下一个状态和是否结束的批次\n",
    "        return state, action, reward, next_state, over\n",
    "\n",
    "\n",
    "# 创建一个 Pool 对象，用于存储游戏经验数据\n",
    "pool = Pool()\n",
    "\n",
    "# 更新数据池，收集新的数据\n",
    "pool.update()\n",
    "\n",
    "# 从数据池中随机获取一个批次的样本\n",
    "pool.sample()\n",
    "\n",
    "# 获取数据池的长度，和数据池中第一条数据的内容\n",
    "len(pool), pool[0]\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5b826dad",
   "metadata": {},
   "source": [
    "# 重新加权的pooling"
   ]
  },
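  {
   "cell_type": "markdown",
   "id": "c2d3e4f5",
   "metadata": {},
   "source": [
    "The sampling step of the reweighted pool rests on `clamp` plus `multinomial`; a small standalone sketch with made-up priorities: clamping keeps even low-priority samples reachable, and `multinomial(..., replacement=False)` draws distinct indices proportionally to the weights.\n",
    "```py\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "# made-up priorities for 4 stored transitions\n",
    "prob = torch.FloatTensor([0.01, 0.5, 1.0, 1.0]).clamp(0.1, 1.0)\n",
    "# the 0.01 entry was clamped up to 0.1, so it can still be drawn\n",
    "idx = prob.multinomial(num_samples=2, replacement=False)\n",
    "idx  # two distinct indices in [0, 4)\n",
    "```"
   ]
  },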
  {
   "cell_type": "code",
   "execution_count": 156,
   "id": "c49986c5",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n# 定义一个经验池类\\nclass Pool:\\n\\n    # 初始化方法，创建一个空的池子和一个概率表\\n    def __init__(self):\\n        self.pool = []  # 存储经验池数据，初始为空列表\\n       #1-- ################################################################################################################\\n        self.prob = []  # 存储每个样本的选择概率，初始为空列表\\n\\n    # 计算池子的长度，即池子中存储的经验数量\\n    def __len__(self):\\n        return len(self.pool)\\n\\n    # 通过索引获取池子中的样本\\n    def __getitem__(self, i):\\n        return self.pool[i]  # 返回指定索引位置的样本数据\\n\\n    # 更新动作池的方法\\n    def update(self):\\n        # 记录当前池子的大小\\n        old_len = len(self.pool)\\n        \\n        # 不断从环境中获取新的数据直到池子中新增不少于200条数据\\n        while len(self.pool) - old_len < 200:\\n         #  2 ################################################################################################################\\n            data = play()[0]  # 调用 play() 函数，获取一批新的数据，这里假定 play() 返回一个元组\\n            self.pool.extend(data)  # 将新获取的数据添加到池子中\\n            \\n            # 更新概率表，假设每条数据的初始选择概率都为 1.0\\n            self.prob.extend([1.0] * len(data))\\n        \\n        # 保证池子的大小不超过 20,000 条数据，保留最新的 20,000 条数据\\n        self.pool = self.pool[-20_000:]  # 保留池子中的最后 20,000 条数据\\n        self.prob = self.prob[-20_000:]  # 保留概率表中对应的最后 20,000 条概率数据\\n\\n    # 从池子中采样一批数据\\n    def sample(self):\\n        # 使用 torch.FloatTensor 创建一个张量，包含每个样本的概率\\n        # clamp(0.1, 1.0) 将概率限制在 0.1 到 1.0 之间\\n        # multinomial 用于从样本中按概率进行采样，\\n        # num_samples=64 表示每次采样 64 个样本\\n        # replacement=False 表示不允许重复采样\\n        idx = torch.FloatTensor(self.prob).clamp(0.1, 1.0).multinomial(\\n            num_samples=64, replacement=False)\\n        #####################################################################################################################\\n        # 根据采样得到的索引，从池子中取出对应的样本\\n        data = [self.pool[i] for i in idx]\\n\\n        # 将每个样本中的数据（状态、动作、奖励、下一个状态、结束标志）分别提取出来\\n        # 使用 torch.FloatTensor 将数据转换为张量并进行形状调整\\n        state = 
torch.FloatTensor([i[0] for i in data]).reshape(-1, 4)  # 提取状态信息，并转换为形状为 (-1, 4) 的张量\\n        action = torch.LongTensor([i[1] for i in data]).reshape(-1, 1)  # 提取动作信息，并转换为形状为 (-1, 1) 的张量\\n        reward = torch.FloatTensor([i[2] for i in data]).reshape(-1, 1)  # 提取奖励信息，并转换为形状为 (-1, 1) 的张量\\n        next_state = torch.FloatTensor([i[3] for i in data]).reshape(-1, 4)  # 提取下一个状态信息，并转换为形状为 (-1, 4) 的张量\\n        over = torch.LongTensor([i[4] for i in data]).reshape(-1, 1)  # 提取结束标志信息，并转换为形状为 (-1, 1) 的张量\\n\\n\\n       #3 #####################################################################################################################\\n        # 返回采样的 索引以及对应的状态、动作、奖励、下一个状态和结束标志\\n        return idx, state, action, reward, next_state, over\\n        #########################################################################################\\n\\n# 实例化一个 Pool 对象\\npool = Pool()\\n\\n# 更新动作池，获取新的经验数据并更新池子\\npool.update()\\n\\n# 从池子中采样一批数据\\npool.sample()\\n\\n# 打印池子的大小（即池子中存储的样本数量）以及池子中的第一个样本\\nlen(pool), pool[0]\\n'"
      ]
     },
     "execution_count": 156,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "'''\n",
    "# Prioritized replay pool: each sample carries a selection probability\n",
    "class Pool:\n",
    "\n",
    "    def __init__(self):\n",
    "        self.pool = []  # the stored transitions\n",
    "        # change 1: per-sample selection probabilities\n",
    "        self.prob = []\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.pool)\n",
    "\n",
    "    def __getitem__(self, i):\n",
    "        # return the i-th stored transition\n",
    "        return self.pool[i]\n",
    "\n",
    "    # refill the pool\n",
    "    def update(self):\n",
    "        old_len = len(self.pool)\n",
    "\n",
    "        # keep playing until at least 200 new transitions are added\n",
    "        while len(self.pool) - old_len < 200:\n",
    "            # change 2: every new sample starts with probability 1.0\n",
    "            data = play()[0]\n",
    "            self.pool.extend(data)\n",
    "            self.prob.extend([1.0] * len(data))\n",
    "\n",
    "        # keep only the newest 20,000 transitions and their probabilities\n",
    "        self.pool = self.pool[-20_000:]\n",
    "        self.prob = self.prob[-20_000:]\n",
    "\n",
    "    # draw one training batch from the pool\n",
    "    def sample(self):\n",
    "        # clamp the probabilities to [0.1, 1.0] so every sample stays\n",
    "        # reachable, then draw 64 distinct indices proportionally to them\n",
    "        idx = torch.FloatTensor(self.prob).clamp(0.1, 1.0).multinomial(\n",
    "            num_samples=64, replacement=False)\n",
    "        data = [self.pool[i] for i in idx]\n",
    "\n",
    "        # split the fields and convert them to tensors\n",
    "        state = torch.FloatTensor([i[0] for i in data]).reshape(-1, 4)\n",
    "        action = torch.LongTensor([i[1] for i in data]).reshape(-1, 1)\n",
    "        reward = torch.FloatTensor([i[2] for i in data]).reshape(-1, 1)\n",
    "        next_state = torch.FloatTensor([i[3] for i in data]).reshape(-1, 4)\n",
    "        over = torch.LongTensor([i[4] for i in data]).reshape(-1, 1)\n",
    "\n",
    "        # change 3: also return the sampled indices so the trainer can\n",
    "        # update each sample's priority afterwards\n",
    "        return idx, state, action, reward, next_state, over\n",
    "\n",
    "\n",
    "pool = Pool()\n",
    "\n",
    "pool.update()\n",
    "\n",
    "pool.sample()\n",
    "\n",
    "len(pool), pool[0]\n",
    "'''"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bd7b882d",
   "metadata": {},
   "source": [
    "## 注意，在更新参数时，model_delay不需要更新，"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 157,
   "id": "018c4b1e",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 415 -958.15\n",
      "100 20000 200.0\n",
      "200 20000 200.0\n",
      "300 20000 200.0\n",
      "400 20000 200.0\n",
      "500 20000 200.0\n",
      "600 20000 200.0\n",
      "700 20000 200.0\n",
      "800 20000 200.0\n",
      "900 20000 200.0\n"
     ]
    }
   ],
   "source": [
    "#训练\n",
    "def train():\n",
    "    model.train()   #model_delay没有在训练模式，不需要更新\n",
    "    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)\n",
    "    loss_fn = torch.nn.MSELoss()\n",
    "\n",
    "    #共更新N轮数据\n",
    "    for epoch in range(100):\n",
    "        pool.update()\n",
    "\n",
    "        #每次更新数据后,训练N次\n",
    "        for i in range(200):\n",
    "\n",
    "            #采样N条数据\n",
    "            state, action, reward, next_state, over = pool.sample() ############################# idx,\n",
    "            # idx,state, action, reward, next_state, over = pool.sample()\n",
    "            #计算value，gather(dim=1, index=action)：从模型的输出中根据 `action` 提取对应动作的 Q 值。\n",
    "            # dim=1表示在第二个维度（即动作维度）上进行操作，\n",
    "            value = model(state).gather(dim=1, index=action)\n",
    "\n",
    "            #计算target\n",
    "            with torch.no_grad():\n",
    "            # --------------------------未改进的target计算--------------------------------\n",
    "                target = model_delay(next_state)\n",
    "            target = target.max(dim=1)[0].reshape(-1, 1)    #原始论文中的计算方式\n",
    "            # ----------------------------------------------------------\n",
    "            # --------------------------改进的target计算--------------------------------\n",
    "            #使用原模型计算动作,使用延迟模型计算target,进一步缓解自举\n",
    "                # next_action = model(next_state).argmax(dim=1, keepdim=True)\n",
    "                # target = model_delay(next_state).gather(dim=1,\n",
    "                #                                         index=next_action)\n",
    "            # ----------------------------------------------------------\n",
    "\n",
    "            target = target * 0.99 * (1 - over) + reward\n",
    "            #########################pool-未加权####################\n",
    "\n",
    "            loss = loss_fn(value, target)\n",
    "            loss.backward()\n",
    "            optimizer.step()\n",
    "            optimizer.zero_grad()\n",
    "            ################################################\n",
    "\n",
    "            #4 ########################根据概率缩放权重####################################\n",
    "            #根据概率缩放loss\n",
    "            # r = torch.FloatTensor([pool.prob[i] for i in idx])\n",
    "            # r = (1 - r).clamp(0.1, 1.0).reshape(-1, 1)\n",
    "\n",
    "            # loss = loss_fn(value, target)\n",
    "            # (loss * r).mean(0).backward()\n",
    "            # optimizer.step()\n",
    "            # optimizer.zero_grad()\n",
    "\n",
    "            # #根据loss调整数据权重\n",
    "            # for i, j in zip(idx.tolist(),\n",
    "            #                 loss.abs().sigmoid().flatten().tolist()):\n",
    "            #     pool.prob[i] = j\n",
    "            ######################################################\n",
    "\n",
    "\n",
    "        #复制参数，每5个epoch同步一次，不能差距太远，但也需要有差距\n",
    "        if (epoch + 1) % 5 == 0:\n",
    "            model_delay.load_state_dict(model.state_dict())\n",
    "\n",
    "        if epoch % 50 == 0:\n",
    "            test_result = sum([play()[-1] for _ in range(20)]) / 20\n",
    "            print(epoch, len(pool), test_result)\n",
    "\n",
    "\n",
    "train()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 158,
   "id": "a101d0e6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAASAAAADMCAYAAADTcn7NAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8pXeV/AAAACXBIWXMAAA9hAAAPYQGoP6dpAAAThUlEQVR4nO3dfWxT570H8K+dxIa8HHtJFrtZYgVpvaMZL+0CJKe9uqvAJWsjVNb8se2iLkMIVOYgIBXaIrW0pZtcMV2xsUH4pwWkK0pvJmUTEdBlAcKdMATCohtCybo7ukQQ26VZjpNAbCd+7h9Tzq1LoHHi+LHx9yMdCT/Pz8e/c4i/OS9xYhBCCBARSWCU3QARpS8GEBFJwwAiImkYQEQkDQOIiKRhABGRNAwgIpKGAURE0jCAiEgaBhARSSMtgA4cOICysjIsWLAAlZWV6OzslNUKEUkiJYA++OADNDQ04I033sDVq1exfPlyVFdXw+/3y2iHiCQxyPgwamVlJVauXInf/OY3AIBIJILS0lJs27YNP/3pTxPdDhFJkpnoFwyFQujq6kJjY6M+ZjQa4XQ64fF4pn1OMBhEMBjUH0ciEQwNDaGgoAAGg2Heeyai2AghMDIyguLiYhiNDz7RSngA3blzB5OTk7DZbFHjNpsNN27cmPY5brcbb731ViLaI6I4GhgYQElJyQPnEx5As9HY2IiGhgb9saZpcDgcGBgYgKIoEjsjoukEAgGUlpYiLy/voXUJD6DCwkJkZGTA5/NFjft8Ptjt9mmfYzabYTab7xtXFIUBRJTEvuwSScLvgplMJlRUVKC9vV0fi0QiaG9vh6qqiW6HiCSScgrW0NCAuro6rFixAqtWrcIvf/lLjI2NYePGjTLaISJJpATQ9773PXz66afYvXs3vF4vnnzySZw+ffq+C9NE9GiT8nNAcxUIBGCxWKBpGq8BESWhmb5H+VkwIpKGAURE0jCAiEgaBhARScMAIiJpGEBEJA0DiIikYQARkTQMICKShgFERNIwgIhIGgYQEUnDACIiaRhARCQNA4iIpGEAEZE0DCAikoYBRETSMICISBoGEBFJwwAiImkYQEQkDQOIiKRhABGRNAwgIpKGAURE0jCAiEgaBhARSRNzAJ0/fx7r1q1DcXExDAYDfve730XNCyGwe/duPPbYY1i4cCGcTic+/vjjqJqhoSFs2LABiqLAarVi06ZNGB0dndOGEFHqiTmAxsbGsHz5chw4cGDa+b1792L//v04dOgQLl26hJycHFRXV2N8fFyv2bBhA3p7e9HW1obW1lacP38eW7Zsmf1WEFFqEnMAQLS0tOiPI5GIsNvt4he/+IU+Njw8LMxms3j//feFEEJcv35dABCXL1/Wa06dOiUMBoO4devWjF5X0zQBQGiaNpf2iWiezPQ9GtdrQDdv3oTX64XT6dTHLBYLKisr4fF4AAAejwdWqxUrVqzQa5xOJ4xGIy5dujTteoPBIAKBQNRCRKkvrgHk9XoBADabLWrcZrPpc16vF0VFRVHzmZmZyM/P12u+yO12w2Kx6EtpaWk82yYiSVLiLlhjYyM0TdOXgYEB2S0RURzENYDsdjsAwOfzRY37fD59zm63w+/3R81PTExgaGhIr/kis9kMRVGiFiJKfXENoEWLFsFut6O9vV0fCwQCuHTpElRVBQCoqorh4WF0dXXpNWfOnEEkEkFlZWU82yGiJJcZ6xNGR0fx17/+VX988+ZNdHd3Iz8/Hw6HAzt27MDPfvYzPP7441i0aBFef/11FBcXY/369QCAJ554At/5znewefNmHDp0COFwGPX19fj+97+P4uLiuG0YEaWAWG+vnT17VgC4b6mrqxNC/PNW/Ouvvy5sNpswm81izZo1oq+vL2odn332mfjBD34gcnNzhaIoYuPGjWJkZCTut/iISI6ZvkcNQgghMf9mJRAIwGKxQNM0Xg8iSkIzfY+mxF0wIno0MYCI
SBoGEBFJwwAiImkYQEQkDQOIiKRhABGRNAwgIpKGAURE0jCAiEgaBhARScMAIiJpGEBEJA0DiIikYQARkTQMICKShgFERNIwgIhIGgYQEUnDACIiaWL+szxE8RCZCGO4vwdiMqyP5X1tMUzZFoldUaIxgEiKyfA9/P2//xMT46P62L+8sJ0BlGZ4CkZJQ4iI7BYowRhAlDxS70/U0RwxgChppODfyKQ5YgBR8mAApR0GECUNAQZQumEAUfLgRei0E1MAud1urFy5Enl5eSgqKsL69evR19cXVTM+Pg6Xy4WCggLk5uaitrYWPp8vqqa/vx81NTXIzs5GUVERdu3ahYmJiblvDaU2noKlnZgCqKOjAy6XCxcvXkRbWxvC4TDWrl2LsbExvWbnzp04ceIEmpub0dHRgdu3b+Oll17S5ycnJ1FTU4NQKIQLFy7g6NGjOHLkCHbv3h2/raKUxIvQaUjMgd/vFwBER0eHEEKI4eFhkZWVJZqbm/Wajz76SAAQHo9HCCHEyZMnhdFoFF6vV69pamoSiqKIYDA4o9fVNE0AEJqmzaV9kih0VxNXj+wUnYc268udv1yS3RbFyUzfo3O6BqRpGgAgPz8fANDV1YVwOAyn06nXLF68GA6HAx6PBwDg8XiwdOlS2Gw2vaa6uhqBQAC9vb3Tvk4wGEQgEIha6NEjwGtA6WbWARSJRLBjxw4888wzWLJkCQDA6/XCZDLBarVG1dpsNni9Xr3m8+EzNT81Nx232w2LxaIvpaWls22bkhlPwdLOrAPI5XLh2rVrOH78eDz7mVZjYyM0TdOXgYGBeX9NkoABlHZm9WHU+vp6tLa24vz58ygpKdHH7XY7QqEQhoeHo46CfD4f7Ha7XtPZ2Rm1vqm7ZFM1X2Q2m2E2m2fTKqUQwQBKOzEdAQkhUF9fj5aWFpw5cwaLFi2Kmq+oqEBWVhba29v1sb6+PvT390NVVQCAqqro6emB3+/Xa9ra2qAoCsrLy+eyLZTqGEBpJ6YjIJfLhWPHjuH3v/898vLy9Gs2FosFCxcuhMViwaZNm9DQ0ID8/HwoioJt27ZBVVVUVVUBANauXYvy8nK8/PLL2Lt3L7xeL1577TW4XC4e5aQ5fho+/cQUQE1NTQCAZ599Nmr88OHD+NGPfgQA2LdvH4xGI2praxEMBlFdXY2DBw/qtRkZGWhtbcXWrVuhqipycnJQV1eHPXv2zG1L6BHAI6B0YxApeOIdCARgsVigaRoURZHdDs1C+F4A1/7rzahfSOb413+H7ZvPymuK4mam71F+FoySR+p9L6Q5YgBR8mAApR0GECUNXoROPwwgSiI8Ako3DCBKGil4P4TmiAFEkhgAgyFqREQmJfVCsjCASApjZhbMeYVRY+P/uC2pG5KFAUSSGGDIiP45WB4BpR8GEMlhMMAAw5fX0SONAURSGID7rgFR+mEAkUQMoHTHACJpDDwCSnsMIJLk/tvwlH4YQCQRAyjdMYBIDgNPwYgBRNLwFIwYQCQVAyjdMYBIGp6CEQOI5GH+pD0GEMlhMIAJRAwgksIAnoIRA4ikYgClOwYQycMjoLTHACJJDDwFo9j+MipRLEKhEO7evTvtnBARhMITX6gPY3h4+IHrW7hwIf989yOGAUTzpqWlBa+++uq0cwYDsP3F5fi3JV/Tx06fPoX/2OJ+4PrefvttbNy4Me59kjwMIJo3Y2NjuHXr1rRzBgAjY4tx894yfBp2oCDrFkbv/v2B9QAwOjr6wDlKTTFdA2pqasKyZcugKAoURYGqqjh16pQ+Pz4+DpfLhYKCAuTm5qK2thY+ny9qHf39/aipqUF2djaKioqwa9cuTExMfPGl6BEnANy8+0385e4qDIWL8fHdCvzt3nLZbVGCxRRAJSUleOedd9DV1YUrV65g9erVePHFF9Hb2wsA2LlzJ06cOIHm5mZ0dHTg9u3beOmll/TnT05OoqamBqFQCBcuXMDRo0dx5MgR7N69O75bRSlhdMIC
oX8JGjE6YZXZDkkQ0ynYunXroh7//Oc/R1NTEy5evIiSkhK8++67OHbsGFavXg0AOHz4MJ544glcvHgRVVVV+MMf/oDr16/jj3/8I2w2G5588km8/fbb+MlPfoI333wTJpMpfltGSc9m+huyDPcQFguQZQjiMfP/ym6JEmzW14AmJyfR3NyMsbExqKqKrq4uhMNhOJ1OvWbx4sVwOBzweDyoqqqCx+PB0qVLYbPZ9Jrq6mps3boVvb29eOqpp2Lq4caNG8jNzZ3tJtA8GxwcfOj8//Scx+CnPvwjbIc1y4/hO30Prfd6vbh+/Xo8W6R5MtPrdTEHUE9PD1RVxfj4OHJzc9HS0oLy8nJ0d3fDZDLBarVG1dtsNni9XgD//AL6fPhMzU/NPUgwGEQwGNQfBwIBAICmabx+lMQedAt+yoXeAaB3YMbrGx8ff+htekoeY2NjM6qLOYC+8Y1voLu7G5qm4be//S3q6urQ0dERc4OxcLvdeOutt+4br6yshKIo8/raNHs3btyI6/rKysrw9NNPx3WdND+mDhK+TMw/CW0ymfD1r38dFRUVcLvdWL58OX71q1/BbrcjFArd9x3K5/PBbrcDAOx2+313xaYeT9VMp7GxEZqm6cvAwMy/axJR8przRzEikQiCwSAqKiqQlZWF9vZ2fa6vrw/9/f1QVRUAoKoqenp64Pf79Zq2tjYoioLy8vIHvobZbNZv/U8tRJT6YjoFa2xsxPPPPw+Hw4GRkREcO3YM586dw4cffgiLxYJNmzahoaEB+fn5UBQF27Ztg6qqqKqqAgCsXbsW5eXlePnll7F37154vV689tprcLlc/BF7ojQUUwD5/X788Ic/xODgICwWC5YtW4YPP/wQzz33HABg3759MBqNqK2tRTAYRHV1NQ4ePKg/PyMjA62trdi6dStUVUVOTg7q6uqwZ8+e+G4VJQWTyRTXo1V+k3r0GIQQQnYTsQoEArBYLNA0jadjSWxsbAxDQ0NxW5/VakVeXl7c1kfzZ6bvUX4WjOZNTk4OcnJyZLdBSYy/D4iIpGEAEZE0DCAikoYBRETSMICISBoGEBFJwwAiImkYQEQkDQOIiKRhABGRNAwgIpKGAURE0jCAiEgaBhARScMAIiJpGEBEJA0DiIikYQARkTQMICKShgFERNIwgIhIGgYQEUnDACIiaRhARCQNA4iIpGEAEZE0DCAikoYBRETSMICISBoGEBFJkym7gdkQQgAAAoGA5E6IaDpT782p9+qDpGQAffbZZwCA0tJSyZ0Q0cOMjIzAYrE8cD4lAyg/Px8A0N/f/9CNo2iBQAClpaUYGBiAoiiy20kJ3GezI4TAyMgIiouLH1qXkgFkNP7z0pXFYuEXxSwoisL9FiPus9jN5OCAF6GJSBoGEBFJk5IBZDab8cYbb8BsNstuJaVwv8WO+2x+GcSX3ScjIponKXkERESPBgYQEUnDACIiaRhARCRNSgbQgQMHUFZWhgULFqCyshKdnZ2yW5LG7XZj5cqVyMvLQ1FREdavX4++vr6omvHxcbhcLhQUFCA3Nxe1tbXw+XxRNf39/aipqUF2djaKioqwa9cuTExMJHJTpHnnnXdgMBiwY8cOfYz7LEFEijl+/LgwmUzivffeE729vWLz5s3CarUKn88nuzUpqqurxeHDh8W1a9dEd3e3eOGFF4TD4RCjo6N6zSuvvCJKS0tFe3u7uHLliqiqqhJPP/20Pj8xMSGWLFkinE6n+POf/yxOnjwpCgsLRWNjo4xNSqjOzk5RVlYmli1bJrZv366Pc58lRsoF0KpVq4TL5dIfT05OiuLiYuF2uyV2lTz8fr8AIDo6OoQQQgwPD4usrCzR3Nys13z00UcCgPB4PEIIIU6ePCmMRqPwer16TVNTk1AURQSDwcRuQAKNjIyIxx9/XLS1tYlvf/vbegBxnyVOSp2ChUIhdHV1wel06mNGoxFOpxMej0diZ8lD0zQA//+B3a6uLoTD4ah9tnjx
YjgcDn2feTweLF26FDabTa+prq5GIBBAb29vArtPLJfLhZqamqh9A3CfJVJKfRj1zp07mJycjPpPBwCbzYYbN25I6ip5RCIR7NixA8888wyWLFkCAPB6vTCZTLBarVG1NpsNXq9Xr5lun07NPYqOHz+Oq1ev4vLly/fNcZ8lTkoFED2cy+XCtWvX8Kc//Ul2K0ltYGAA27dvR1tbGxYsWCC7nbSWUqdghYWFyMjIuO9uhM/ng91ul9RVcqivr0drayvOnj2LkpISfdxutyMUCmF4eDiq/vP7zG63T7tPp+YeNV1dXfD7/fjWt76FzMxMZGZmoqOjA/v370dmZiZsNhv3WYKkVACZTCZUVFSgvb1dH4tEImhvb4eqqhI7k0cIgfr6erS0tODMmTNYtGhR1HxFRQWysrKi9llfXx/6+/v1faaqKnp6euD3+/WatrY2KIqC8vLyxGxIAq1ZswY9PT3o7u7WlxUrVmDDhg36v7nPEkT2VfBYHT9+XJjNZnHkyBFx/fp1sWXLFmG1WqPuRqSTrVu3CovFIs6dOycGBwf15e7du3rNK6+8IhwOhzhz5oy4cuWKUFVVqKqqz0/dUl67dq3o7u4Wp0+fFl/96lfT6pby5++CCcF9ligpF0BCCPHrX/9aOBwOYTKZxKpVq8TFixdltyQNgGmXw4cP6zX37t0TP/7xj8VXvvIVkZ2dLb773e+KwcHBqPV88skn4vnnnxcLFy4UhYWF4tVXXxXhcDjBWyPPFwOI+ywx+Os4iEialLoGRESPFgYQEUnDACIiaRhARCQNA4iIpGEAEZE0DCAikoYBRETSMICISBoGEBFJwwAiImkYQEQkzf8BAPRwQ4QjVrMAAAAASUVORK5CYII=",
      "text/plain": [
       "<Figure size 300x300 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "200.0"
      ]
     },
     "execution_count": 152,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "play(True)[-1]"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "rl2024",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.17"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
