{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The state-value function:\n",
    "\n",
    "V(state) = sum over all actions -> prob(action) * Q(state, action)\n",
    "\n",
    "Rewriting this expression with an importance-sampling ratio gives:\n",
    "\n",
    "V(state) = sum over all actions -> prob_old(action) * \\[prob_new(action) / prob_old(action)\\] * Q(state, action)\n",
    "\n",
    "Initially the new and old probabilities can be taken as equal, but as the model is updated the new probabilities drift away.\n",
    "\n",
    "The Q(state, action) in this expression can be estimated with the Monte Carlo method.\n",
    "\n",
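    "In symbols (standard PPO notation, introduced here for clarity): with $\\pi_{\\theta}$ the current policy, $\\pi_{\\theta_{old}}$ the policy that collected the data, and $A$ the advantage (Q minus a baseline), the clipped surrogate objective maximized by the training code below is\n",
    "\n",
    "$$L = \\mathbb{E}\\left[\\min\\left(\\frac{\\pi_{\\theta}(a|s)}{\\pi_{\\theta_{old}}(a|s)}\\,A,\\ \\operatorname{clip}\\left(\\frac{\\pi_{\\theta}(a|s)}{\\pi_{\\theta_{old}}(a|s)},\\,1-\\epsilon,\\,1+\\epsilon\\right)A\\right)\\right]$$\n",
    "\n",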
    "Following policy-gradient theory, the state value is determined by the quality of the actions, so maximizing the V function yields the best action policy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAASAAAADMCAYAAADTcn7NAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAUfElEQVR4nO3dfWxT570H8K+dxE5CcpyGNPYiYoEEK4t46RognFZXm1qPjEbVWDNpm1CXVYjeMgeVZkJqrlqqdptSUWl92Sj8s0GnqmPKJLY1l5ZFgYY7YQikyxSSktvq0iUt2C6wHOeF+PV3/+hyWpNAcUj8xPj7kY5UP8/j4995Gn85Pi+2RUQEREQKWFUXQETZiwFERMowgIhIGQYQESnDACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKKAugPXv2YPHixcjPz0dNTQ26urpUlUJEiigJoD/84Q9oamrCs88+i/feew+rV69GbW0tgsGginKISBGLiptRa2pqsHbtWvz6178GACQSCVRWVmL79u146qmn0l0OESmSm+4XjEQi6O7uRnNzs9lmtVrh8Xjg8/mmfU44HEY4HDYfJxIJXLlyBQsXLoTFYpnzmokoNSKCkZERVFRUwGq9/gettAfQpUuXEI/H4XQ6k9qdTifOnTs37XNaWlrw3HPPpaM8IppFQ0NDWLRo0XX70x5AM9Hc3IympibzsWEYcLvdGBoagqZpCisjoumEQiFUVlaiuLj4huPSHkBlZWXIyclBIBBIag8EAnC5XNM+x263w263T2nXNI0BRDSPfdkhkrSfBbPZbKiurkZHR4fZlkgk0NHRAV3X010OESmk5CNYU1MTGhoasGbNGqxbtw4vv/wyxsbG8Oijj6ooh4gUURJA3//+9/Hpp59i165d8Pv9uPvuu/HOO+9MOTBNRLc3JdcB3apQKASHwwHDMHgMiGgeutn3KO8FIyJlGEBEpAwDiIiUYQARkTIMICJShgFERMowgIhIGQYQESnDACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKMICISBkGEBEpwwAiImUYQESkDAOIiJRhABGRMgwgIlKGAUREyjCAiEgZBhARKcMAIiJlGEBEpAwDiIiUYQARkTIpB9Dx48fx0EMPoaKiAhaLBX/605+S+kUEu3btwle+8hUUFBTA4/Hggw8+SBpz5coVbN68GZqmoaSkBFu2bMHo6OgtbQgRZZ6UA2hsbAyrV6/Gnj17pu3fvXs3Xn31Vezbtw+nTp3CggULUFtbi4mJCXPM5s2b0dfXh/b2drS1teH48eN47LHHZr4VRJSZ5BYAkEOHDpmPE4mEuFwuefHFF8224eFhsdvt8vvf/15ERPr7+wWAnD592hzz9ttvi8VikU8++eSmXtcwDAEghmHcSvlENEdu9j06q8eAzp8/D7/fD4/HY7Y5HA7U1NTA5/MBAHw+H0pKSrBmzRpzjMfjgdVqxalTp6ZdbzgcRigUSlqIKPPNagD5/X4AgNPpTGp3Op1mn9/vR3l5eVJ/bm4uSktLzTHXamlpgcPhMJfKysrZLJuIFMmIs2DNzc0wDMNchoaGVJdERLNgVgPI5XIBAAKBQFJ7IBAw+1wuF4LBYFJ/LBbDlStXzDHXstvt0DQtaSGizDerAbRkyRK4XC50dHSYbaFQCKdOnYKu6wAAXdcxPDyM7u5uc8zRo0eRSCRQU1Mzm+UQ0TyXm+oTRkdH8eGHH5qPz58/j56eHpSWlsLtdmPHjh34+c9/jmXLlmHJkiV45plnUFFRgU2bNgEAvva1r+Hb3/42tm7din379iEajaKxsRE/+MEPUFFRMWsbRkQZINXTa8eOHRMAU5aGhgYR+exU/DPPPCNOp1Psdrs88MADMjAwkLSOy5cvyw9/+EMpKioSTdPk0UcflZGRkVk/xUdEatzse9QiIqIw/2YkFArB4XDAMAweDyKah272PZoRZ8GI6PbE
ACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKMICISBkGEBEpwwAiImUYQESkDAOIiJRhABGRMgwgIlKGAUREyjCAiEgZBhARKcMAIiJlGEBEpAwDiIiUSflneYhmQyIWxfBgLyQeNduKK+6CbUGJuqIo7RhApEQ8ehX//J83EJsYNduW1noZQFmGH8Fo3khEJ1SXQGnGAKJ5Ix4Nqy6B0owBRIpYYLHmJLXEwmOKaiFVGECkhDXPjnyHM6nt6pWPFVVDqjCASAmLxQprnj25MfN+JZxuEQOIlLBYLLDm2lSXQYqlFEAtLS1Yu3YtiouLUV5ejk2bNmFgYCBpzMTEBLxeLxYuXIiioiLU19cjEAgkjRkcHERdXR0KCwtRXl6OnTt3IhaL3frWUOawWGHNyZvSLNwLyiopBVBnZye8Xi9OnjyJ9vZ2RKNRbNiwAWNjnx88fPLJJ/HWW2+htbUVnZ2duHDhAh5++GGzPx6Po66uDpFIBCdOnMDrr7+OAwcOYNeuXbO3VZQRrj0InYjHATCAsorcgmAwKACks7NTRESGh4clLy9PWltbzTHvv/++ABCfzyciIocPHxar1Sp+v98cs3fvXtE0TcLh8E29rmEYAkAMw7iV8kmx/zt2QLr2bTWXgf9+ReKxqOqyaBbc7Hv0lo4BGYYBACgtLQUAdHd3IxqNwuPxmGOWL18Ot9sNn88HAPD5fFi5ciWczs/PgNTW1iIUCqGvr2/a1wmHwwiFQkkL3X4SsQggCdVlUBrNOIASiQR27NiB++67DytWrAAA+P1+2Gw2lJSUJI11Op3w+/3mmC+Gz2T/ZN90Wlpa4HA4zKWysnKmZdM8YrEk//klYhFIggGUTWYcQF6vF2fPnsXBgwdns55pNTc3wzAMcxkaGprz16S5V7BwUdLjcOhTxHk7RlaZ0c2ojY2NaGtrw/Hjx7Fo0ed/RC6XC5FIBMPDw0l7QYFAAC6XyxzT1dWVtL7Js2STY65lt9tht9un7aPMlWsvTHoskgAPQmeXlPaARASNjY04dOgQjh49iiVLliT1V1dXIy8vDx0dHWbbwMAABgcHoes6AEDXdfT29iIYDJpj2tvboWkaqqqqbmVbKMNY8/JVl0CKpbQH5PV68eabb+LPf/4ziouLzWM2DocDBQUFcDgc2LJlC5qamlBaWgpN07B9+3bouo7169cDADZs2ICqqio88sgj2L17N/x+P55++ml4vV7u5WSZnGuvhAa4A5RlUgqgvXv3AgC++c1vJrXv378fP/7xjwEAL730EqxWK+rr6xEOh1FbW4vXXnvNHJuTk4O2tjZs27YNuq5jwYIFaGhowPPPP39rW0IZ59rrgCACScTVFENKWEQy79LTUCgEh8MBwzCgaZrqcmiGRi5+gHN/edF8bM2zo2pTMwpKKxRWRbPhZt+jvBeM5g8RxGMR1VVQGjGASCHLv5fPiAgSMX4pWTZhAJEyeYUO5OYXmY8lHsXEcOAGz6DbDQOIlMnJs0/5So4v/koG3f4YQKSMJScXlpycLx9Ity0GECljzcmFxTr1SpAMPDFLM8QAInUsVlgslqSmBK8DyioMIJpXEvxpnqzCAKJ5hafhswsDiNSxWKYcA4pNjIE3hGUPBhApY4Flym0X41c+5s/zZBEGEKljAXJsBcltDJ+swgAihSzTfyUHZQ0GECllzWUAZTMGECllyUk+CC2S+PdXs1I2YACRMhaLBZZr2iQeh8R5MWK2YADRvCKJGBIJ/kx3tmAAkVrX3ooRj0FivCM+WzCASKn8ElfSxYixqyFExofVFURpxQAipXJsBUl7QSIJ/jxzFmEAkVLWXPuUO+IpezCASKmcXNuU40CUPRhApNRn1wF9IYAEkAQ/gmULBhDNO3F+J1DWSOmXUYlmIhwO4+rVq9P2RcdHrrkBVTBq/AsYHr7u+goLC2Gz2a7bT5mDAURz7ne/+x2ee+65afvKtHy88p//gQJ7HgBAIPivp3bi6D8+vu76
Xn75ZXzve9+bk1opvRhANOdGR0fxySefTNsX+pcNH1+6CktpDa5EK3Bn3j9xR0H/dccDwPj4+FyVSmmW0jGgvXv3YtWqVdA0DZqmQdd1vP3222b/xMQEvF4vFi5ciKKiItTX1yMQSP6hucHBQdTV1aGwsBDl5eXYuXMnYjFeep+tItE4/nG5Ch+O34Mr0Qr873gNgrJadVmUJikF0KJFi/DCCy+gu7sbZ86cwf3334/vfOc76OvrAwA8+eSTeOutt9Da2orOzk5cuHABDz/8sPn8eDyOuro6RCIRnDhxAq+//joOHDiAXbt2ze5WUcZIJASXJ4ox+acosGIsXqK0JkqflALooYcewoMPPohly5bhq1/9Kn7xi1+gqKgIJ0+ehGEY+M1vfoNf/vKXuP/++1FdXY39+/fjxIkTOHnyJADgr3/9K/r7+/HGG2/g7rvvxsaNG/Gzn/0Me/bsQSQSmZMNpPktnhCUWgeQawkDENgsV+GynVddFqXJjI8BxeNxtLa2YmxsDLquo7u7G9FoFB6PxxyzfPlyuN1u+Hw+rF+/Hj6fDytXroTT6TTH1NbWYtu2bejr68PXv/71lGo4d+4cioqKvnwgKXXtx/AvSojgdPcROM6fx3CsHKV5fvgvnrvh+i5cuID+/v7ZLpNm0ejo6E2NSzmAent7oes6JiYmUFRUhEOHDqGqqgo9PT2w2WwoKSlJGu90OuH3+wEAfr8/KXwm+yf7riccDiMc/vzakFAoBAAwDIPHjzLA9U7BT+rs+QjARze9vvHxcQzf4DQ9qTc2NnZT41IOoLvuugs9PT0wDAN//OMf0dDQgM7OzpQLTEVLS8u0p3FramqgadqcvjbdulOnTs3q+pYuXYp77713VtdJs2tyJ+HLpHwltM1mw9KlS1FdXY2WlhasXr0ar7zyClwuFyKRyJR/mQKBAFwuFwDA5XJN2R2ffDw5ZjrNzc0wDMNchoaGUi2biOahW74VI5FIIBwOo7q6Gnl5eejo6DD7BgYGMDg4CF3XAQC6rqO3txfBYNAc097eDk3TUFVVdd3XsNvt5qn/yYWIMl9KH8Gam5uxceNGuN1ujIyM4M0338S7776LI0eOwOFwYMuWLWhqakJpaSk0TcP27duh6zrWr18PANiwYQOqqqrwyCOPYPfu3fD7/Xj66afh9Xpht/PXEYiyTUoBFAwG8aMf/QgXL16Ew+HAqlWrcOTIEXzrW98CALz00kuwWq2or69HOBxGbW0tXnvtNfP5OTk5aGtrw7Zt26DrOhYsWICGhgY8//zzs7tVNK9M7sHOFt4HdvuwiGTeT1GGQiE4HA4YhsGPYxlgZGRkVs9alZaWYsGCBbO2Ppp9N/se5b1gNOeKi4tRXFysugyah/h9QESkDAOIiJRhABGRMgwgIlKGAUREyjCAiEgZBhARKcMAIiJlGEBEpAwDiIiUYQARkTIMICJShgFERMowgIhIGQYQESnDACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKMICISBkGEBEpwwAiImUYQESkDAOIiJRhABGRMgwgIlKGAUREyjCAiEiZXNUFzISIAABCoZDiSohoOpPvzcn36vVkZABdvnwZAFBZWam4EiK6kZGRETgcjuv2Z2QAlZaWAgAGBwdvuHGULBQKobKyEkNDQ9A0TXU5GYFzNjMigpGREVRUVNxwXEYGkNX62aErh8PBP4oZ0DSN85YizlnqbmbngAehiUgZBhARKZORAWS32/Hss8/CbrerLiWjcN5SxzmbWxb5svNkRERzJCP3gIjo9sAAIiJlGEBEpAwDiIiUycgA2rNnDxYvXoz8/HzU1NSgq6tLdUnKtLS0YO3atSguLkZ5eTk2bdqEgYGBpDETExPwer1YuHAhioqKUF9fj0AgkDRmcHAQdXV1KCwsRHl5OXbu3IlYLJbOTVHmhRdegMViwY4dO8w2zlmaSIY5ePCg2Gw2+e1vfyt9fX2ydetWKSkpkUAgoLo0
JWpra2X//v1y9uxZ6enpkQcffFDcbreMjo6aYx5//HGprKyUjo4OOXPmjKxfv17uvfdesz8Wi8mKFSvE4/HI3//+dzl8+LCUlZVJc3Ozik1Kq66uLlm8eLGsWrVKnnjiCbOdc5YeGRdA69atE6/Xaz6Ox+NSUVEhLS0tCquaP4LBoACQzs5OEREZHh6WvLw8aW1tNce8//77AkB8Pp+IiBw+fFisVqv4/X5zzN69e0XTNAmHw+ndgDQaGRmRZcuWSXt7u3zjG98wA4hzlj4Z9REsEomgu7sbHo/HbLNarfB4PPD5fAormz8MwwDw+Q273d3diEajSXO2fPlyuN1uc858Ph9WrlwJp9NpjqmtrUUoFEJfX18aq08vr9eLurq6pLkBOGfplFE3o166dAnxeDzpfzoAOJ1OnDt3TlFV80cikcCOHTtw3333YcWKFQAAv98Pm82GkpKSpLFOpxN+v98cM92cTvbdjg4ePIj33nsPp0+fntLHOUufjAogujGv14uzZ8/ib3/7m+pS5rWhoSE88cQTaG9vR35+vupyslpGfQQrKytDTk7OlLMRgUAALpdLUVXzQ2NjI9ra2nDs2DEsWrTIbHe5XIhEIhgeHk4a/8U5c7lc087pZN/tpru7G8FgEPfccw9yc3ORm5uLzs5OvPrqq8jNzYXT6eScpUlGBZDNZkN1dTU6OjrMtkQigY6ODui6rrAydUQEjY2NOHToEI4ePYolS5Yk9VdXVyMvLy9pzgYGBjA4OGjOma7r6O3tRTAYNMe0t7dD0zRUVVWlZ0PS6IEHHkBvby96enrMZc2aNdi8ebP535yzNFF9FDxVBw8eFLvdLgcOHJD+/n557LHHpKSkJOlsRDbZtm2bOBwOeffdd+XixYvmMj4+bo55/PHHxe12y9GjR+XMmTOi67roum72T55S3rBhg/T09Mg777wjd955Z1adUv7iWTARzlm6ZFwAiYj86le/ErfbLTabTdatWycnT55UXZIyAKZd9u/fb465evWq/OQnP5E77rhDCgsL5bvf/a5cvHgxaT0fffSRbNy4UQoKCqSsrEx++tOfSjQaTfPWqHNtAHHO0oNfx0FEymTUMSAiur0wgIhIGQYQESnDACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKMICISBkGEBEpwwAiImX+HxdqylKRniSWAAAAAElFTkSuQmCC",
      "text/plain": [
       "<Figure size 300x300 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import gym\n",
    "\n",
    "\n",
    "# define the environment\n",
    "class MyWrapper(gym.Wrapper):\n",
    "\n",
    "    def __init__(self):\n",
    "        env = gym.make('CartPole-v1', render_mode='rgb_array')\n",
    "        super().__init__(env)\n",
    "        self.env = env\n",
    "        self.step_n = 0\n",
    "\n",
    "    def reset(self):\n",
    "        state, _ = self.env.reset()\n",
    "        self.step_n = 0\n",
    "        return state\n",
    "\n",
    "    def step(self, action):\n",
    "        state, reward, terminated, truncated, info = self.env.step(action)\n",
    "        over = terminated or truncated\n",
    "\n",
    "        # cap the maximum number of steps\n",
    "        self.step_n += 1\n",
    "        if self.step_n >= 200:\n",
    "            over = True\n",
    "\n",
    "        # if the episode ended before the step limit, apply a penalty\n",
    "        if over and self.step_n < 200:\n",
    "            reward = -1000\n",
    "\n",
    "        return state, reward, over\n",
    "\n",
    "    # render the current game frame\n",
    "    def show(self):\n",
    "        from matplotlib import pyplot as plt\n",
    "        plt.figure(figsize=(3, 3))\n",
    "        plt.imshow(self.env.render())\n",
    "        plt.show()\n",
    "\n",
    "\n",
    "env = MyWrapper()\n",
    "\n",
    "env.reset()\n",
    "\n",
    "env.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[0.5141, 0.4859],\n",
       "         [0.5217, 0.4783]], grad_fn=<SoftmaxBackward0>),\n",
       " tensor([[-0.3116],\n",
       "         [-0.1846]], grad_fn=<AddmmBackward0>))"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "\n",
    "# define the policy (action) and value networks\n",
    "model_action = torch.nn.Sequential(\n",
    "    torch.nn.Linear(4, 64),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(64, 64),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(64, 2),\n",
    "    torch.nn.Softmax(dim=1),\n",
    ")\n",
    "\n",
    "model_value = torch.nn.Sequential(\n",
    "    torch.nn.Linear(4, 64),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(64, 64),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(64, 1),\n",
    ")\n",
    "\n",
    "model_action(torch.randn(2, 4)), model_value(torch.randn(2, 4))"
   ]
  },
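  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an aside (illustrative only, not part of the training loop): sampling an action from the policy head can also be done with `torch.distributions.Categorical`, which is equivalent to the `random.choices` call used later in this notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# illustrative: draw one action from the policy's output distribution\n",
    "probs = model_action(torch.randn(1, 4))\n",
    "dist = torch.distributions.Categorical(probs=probs)\n",
    "dist.sample()"
   ]
  },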
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "-990.0"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from IPython import display\n",
    "import random\n",
    "\n",
    "import numpy as np\n",
    "\n",
    "\n",
    "# play one episode and record the trajectory\n",
    "def play(show=False):\n",
    "    state = []\n",
    "    action = []\n",
    "    reward = []\n",
    "    next_state = []\n",
    "    over = []\n",
    "\n",
    "    s = env.reset()\n",
    "    o = False\n",
    "    while not o:\n",
    "        # sample an action according to the policy probabilities\n",
    "        prob = model_action(torch.FloatTensor(s).reshape(1, 4))[0].tolist()\n",
    "        a = random.choices(range(2), weights=prob, k=1)[0]\n",
    "\n",
    "        ns, r, o = env.step(a)\n",
    "\n",
    "        state.append(s)\n",
    "        action.append(a)\n",
    "        reward.append(r)\n",
    "        next_state.append(ns)\n",
    "        over.append(o)\n",
    "\n",
    "        s = ns\n",
    "\n",
    "        if show:\n",
    "            display.clear_output(wait=True)\n",
    "            env.show()\n",
    "\n",
    "    # stack into a single ndarray first: converting a list of ndarrays is very slow\n",
    "    state = torch.FloatTensor(np.array(state)).reshape(-1, 4)\n",
    "    action = torch.LongTensor(action).reshape(-1, 1)\n",
    "    reward = torch.FloatTensor(reward).reshape(-1, 1)\n",
    "    next_state = torch.FloatTensor(np.array(next_state)).reshape(-1, 4)\n",
    "    over = torch.LongTensor(over).reshape(-1, 1)\n",
    "\n",
    "    return state, action, reward, next_state, over, reward.sum().item()\n",
    "\n",
    "\n",
    "state, action, reward, next_state, over, reward_sum = play()\n",
    "\n",
    "reward_sum"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "optimizer_action = torch.optim.Adam(model_action.parameters(), lr=1e-3)\n",
    "optimizer_value = torch.optim.Adam(model_value.parameters(), lr=1e-2)\n",
    "\n",
    "\n",
    "def requires_grad(model, value):\n",
    "    for param in model.parameters():\n",
    "        param.requires_grad_(value)"
   ]
  },
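  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check (illustrative, not part of training): `requires_grad` toggles gradient tracking on every parameter, which the two training functions below use to freeze one network while updating the other."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# freeze the value network, then unfreeze it again\n",
    "requires_grad(model_value, False)\n",
    "print(all(not p.requires_grad for p in model_value.parameters()))\n",
    "requires_grad(model_value, True)\n",
    "print(all(p.requires_grad for p in model_value.parameters()))"
   ]
  },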
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([11, 1])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def train_value(state, reward, next_state, over):\n",
    "    requires_grad(model_action, False)\n",
    "    requires_grad(model_value, True)\n",
    "\n",
    "    # compute the TD target\n",
    "    with torch.no_grad():\n",
    "        target = model_value(next_state)\n",
    "    target = target * 0.98 * (1 - over) + reward\n",
    "\n",
    "    # train 10 times on each batch of data\n",
    "    for _ in range(10):\n",
    "        # current value estimate\n",
    "        value = model_value(state)\n",
    "\n",
    "        loss = torch.nn.functional.mse_loss(value, target)\n",
    "        loss.backward()\n",
    "        optimizer_value.step()\n",
    "        optimizer_value.zero_grad()\n",
    "\n",
    "    # subtracting value removes the baseline, leaving the TD advantage\n",
    "    return (target - value).detach()\n",
    "\n",
    "\n",
    "value = train_value(state, reward, next_state, over)\n",
    "\n",
    "value.shape"
   ]
  },
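  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A note on the discounted sum in the next cell (standard notation, introduced here for clarity): accumulating the per-step TD advantages $\\delta_t$ with the factor $(0.98 \\cdot 0.95)^l$ is the generalized advantage estimator with discount $\\gamma=0.98$ and trace parameter $\\lambda=0.95$:\n",
    "\n",
    "$$A_t = \\sum_{l=0}^{T-t} (\\gamma\\lambda)^{l}\\,\\delta_{t+l}$$"
   ]
  },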
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "619.5274658203125"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def train_action(state, action, value):\n",
    "    requires_grad(model_action, True)\n",
    "    requires_grad(model_value, False)\n",
    "\n",
    "    # discounted cumulative sum of the advantages; a Monte Carlo-style estimate of Q(state,action) relative to the baseline\n",
    "    delta = []\n",
    "    for i in range(len(value)):\n",
    "        s = 0\n",
    "        for j in range(i, len(value)):\n",
    "            s += value[j] * (0.98 * 0.95)**(j - i)\n",
    "        delta.append(s)\n",
    "    delta = torch.cat(delta).reshape(-1, 1)\n",
    "\n",
    "    # action probabilities before the update\n",
    "    with torch.no_grad():\n",
    "        prob_old = model_action(state).gather(dim=1, index=action)\n",
    "\n",
    "    # train 10 times on each batch of data\n",
    "    for _ in range(10):\n",
    "        # action probabilities after the update\n",
    "        prob_new = model_action(state).gather(dim=1, index=action)\n",
    "\n",
    "        # ratio of the new to the old probabilities\n",
    "        ratio = prob_new / prob_old\n",
    "\n",
    "        # compute the clipped and unclipped losses and take the smaller of the two\n",
    "        surr1 = ratio * delta\n",
    "        surr2 = ratio.clamp(0.8, 1.2) * delta\n",
    "\n",
    "        loss = -torch.min(surr1, surr2).mean()\n",
    "\n",
    "        # update the parameters\n",
    "        loss.backward()\n",
    "        optimizer_action.step()\n",
    "        optimizer_action.zero_grad()\n",
    "\n",
    "    return loss.item()\n",
    "\n",
    "\n",
    "train_action(state, action, value)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 -127.68484497070312 -972.75\n"
     ]
    }
   ],
   "source": [
    "def train():\n",
    "    model_action.train()\n",
    "    model_value.train()\n",
    "\n",
    "    # train for N epochs\n",
    "    for epoch in range(100):\n",
    "        # each epoch plays at least 200 steps\n",
    "        steps = 0\n",
    "        while steps < 200:\n",
    "            state, action, reward, next_state, over, _ = play()\n",
    "            steps += len(state)\n",
    "\n",
    "            # train both networks\n",
    "            delta = train_value(state, reward, next_state, over)\n",
    "            loss = train_action(state, action, delta)\n",
    "\n",
    "        if epoch % 100 == 0:\n",
    "            test_result = sum([play()[-1] for _ in range(20)]) / 20\n",
    "            print(epoch, loss, test_result)\n",
    "\n",
    "\n",
    "train()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAASAAAADMCAYAAADTcn7NAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAT2klEQVR4nO3dfWxb5b0H8K+dxG7a5DhNstjkJlY7rVCivo20TQ/8sQm8ZiWq1hFdbVPFMlQV0TkVJbu9WiQoAjaFddLYYKX9h7VMExQyqeMSFViUQCpu3YYGgtKkzYUNlNw2tmlLjpO0cV78u39wc4ZpWvxWP3H6/UhH4jzP4+PfOdjfnnMe27GIiICISAGr6gKI6ObFACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKMICISBkGEBEpwwAiImWUBdC+ffuwZMkSLFiwAFVVVejs7FRVChEpoiSAXnnlFTQ0NODxxx/H+++/j9WrV6O6uhrBYFBFOUSkiEXFl1Grqqqwbt06/PGPfwQARCIRlJeXY+fOnfjlL3+Z7nKISJHsdD/hxMQEurq60NjYaLZZrVZ4PB74fL5ZHxMOhxEOh831SCSCS5cuoaioCBaL5YbXTETxERGMjIygtLQUVuu1L7TSHkAXLlzA9PQ0nE5nVLvT6cTZs2dnfUxTUxOeeOKJdJRHRCk0ODiIsrKya/anPYAS0djYiIaGBnPdMAy43W4MDg5C0zSFlRHRbEKhEMrLy5Gfn3/dcWkPoOLiYmRlZSEQCES1BwIBuFyuWR9jt9tht9uvatc0jQFENId93S2StM+C2Ww2VFZWoq2tzWyLRCJoa2uDruvpLoeIFFJyCdbQ0IC6ujqsXbsW69evx+9//3uMjY3hgQceUFEOESmiJIB+9KMf4bPPPsOePXvg9/uxZs0avPnmm1fdmCai+U3J54CSFQqF4HA4YBgG7wERzUGxvkf5XTAiUoYBRETKMICISBkGEBEpwwAiImUYQESkDAOIiJRhABGRMgwgIlKGAUREyjCAiEgZBhARKcMAIiJlGEBEpAwDiIiUYQARkTIMICJShgFERMowgIhIGQYQESnDACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKMICISBkGEBEpE3cAHTt2DJs3b0ZpaSksFgv+9re/RfWLCPbs2YNbbrkFubm58Hg8+Oijj6LGXLp0CVu3boWmaSgoKMC2bdswOjqa1I4QUeaJO4DGxsawevVq7Nu3b9b+vXv34tlnn8WBAwdw8uRJLFq0CNXV1RgfHzfHbN26Fb29vWhtbUVLSwuOHTuGBx98MPG9IKLMJEkAIEeOHDHXI5GIuFwu+e1vf2u2DQ8Pi91ul5dffllERPr6+gSAvPfee+aYN954QywWi5w7dy6m5zUMQwCIYRjJlE9EN0is79GU3gP65JNP4Pf74fF4zDaHw4Gqqir4fD4AgM/nQ0FBAdauXWuO8Xg8sFqtOHny5KzbDYfDCIVCUQsRZb6UBpDf7wcAOJ3OqHan02n2+f1+lJSURPVnZ2ejsLDQHPNVTU1NcDgc5lJeXp7KsolIkYyYBWtsbIRhGOYyODiouiQiSoGUBpDL5QIABAKBqPZAIGD2uVwuBIPBqP6pqSlcunTJHPNVdrsdmqZFLUSU+VIaQEuXLoXL5UJbW5vZFgqFcPLkSei6DgDQdR3Dw8Po6uoyx7S3tyMSiaCqqiqV5RDRHJcd7wNGR0fx8ccfm+uffPIJuru7UVhYCLfbjV27duFXv/oVli1bhqVLl+Kxxx5DaWkptmzZAgC4/fbb8f3vfx/bt2/HgQMHMDk5ifr6evz4xz9GaWlpynaMiDJAvNNrb7/9tgC4aqmrqxORL6biH3vsMXE6nWK32+Wee+6R/v7+qG1cvHhRfvKTn0heXp5omiYPPPCAjIyMpHyKj4jUiPU9ahERUZh/CQmFQnA4HDAMg/eDiOagWN+jGTELRkTzEwOI
iJRhABGRMgwgIlKGAUREyjCAiEgZBhARKcMAIiJlGEBEpAwDiIiUYQARkTIMICJShgFERMowgIhIGQYQESnDACIiZRhARKQMA4iIlGEAEZEyDCAiUibuP8tDN6/x4QDGPvvUXM+y5cLhXgGLhf+OUWIYQBQz43/7MPDfL5vrCxbfAu3fboclmwFEieErhxInggz8q040hzCAKHEiACKqq6AMxgCihAl4BkTJYQBR4kQA4RkQJY4BRInjPSBKEgOIEiaQ/78PRJSYuAKoqakJ69atQ35+PkpKSrBlyxb09/dHjRkfH4fX60VRURHy8vJQW1uLQCAQNWZgYAA1NTVYuHAhSkpKsHv3bkxNTSW/N5RevASjJMUVQB0dHfB6vThx4gRaW1sxOTmJjRs3YmxszBzzyCOP4PXXX0dzczM6Ojpw/vx53HfffWb/9PQ0ampqMDExgePHj+PFF1/EoUOHsGfPntTtFaWF8BKMkiVJCAaDAkA6OjpERGR4eFhycnKkubnZHHPmzBkBID6fT0REjh49KlarVfx+vzlm//79ommahMPhmJ7XMAwBIIZhJFM+xcnf0y6dB7abywd/3i3hkYuqy6I5KNb3aFL3gAzDAAAUFhYCALq6ujA5OQmPx2OOWb58OdxuN3w+HwDA5/Nh5cqVcDqd5pjq6mqEQiH09vbO+jzhcBihUChqobmAZ0CUnIQDKBKJYNeuXbjrrruwYsUKAIDf74fNZkNBQUHUWKfTCb/fb475cvjM9M/0zaapqQkOh8NcysvLEy2bkmCxWKLWJRKBRKYVVUPzQcIB5PV6cfr0aRw+fDiV9cyqsbERhmGYy+Dg4A1/Trqa3VECS1aOuT4VHsPkZZ6NUuIS+jJqfX09WlpacOzYMZSVlZntLpcLExMTGB4ejjoLCgQCcLlc5pjOzs6o7c3Mks2M+Sq73Q673Z5IqZRC1qwcWGCBedElEQhnwSgJcZ0BiQjq6+tx5MgRtLe3Y+nSpVH9lZWVyMnJQVtbm9nW39+PgYEB6LoOANB1HT09PQgGg+aY1tZWaJqGioqKZPaFbjSLFbB8/TCiWMV1BuT1evHSSy/htddeQ35+vnnPxuFwIDc3Fw6HA9u2bUNDQwMKCwuhaRp27twJXdexYcMGAMDGjRtRUVGB+++/H3v37oXf78ejjz4Kr9fLs5w5zmLl51YpteIKoP379wMAvvvd70a1Hzx4ED/72c8AAM888wysVitqa2sRDodRXV2N559/3hyblZWFlpYW7NixA7quY9GiRairq8OTTz6Z3J7QDffFD4/xFIhSxyIZOI8aCoXgcDhgGAY0TVNdzk3j8oVBnPmvvYhMhs222zb/B7TSWxVWRXNRrO9RnlNT7Kw8A6LUYgBRzPjbz5RqfEVRzCwW61UfRiRKBgOIYsdZMEoxvqIoZrwEo1TjK4pixml4SjUGEMXOYrk6fzLvUxw0hzCAKCki/DY8JY4BREmRCL+MSoljAFFy+G14SgIDiJLCHySjZDCAKCm8BKNkMIAoKbwJTclgAFFSeAZEyWAAUVJ4D4iSwQCi5HAWjJLAAKKk8BKMksEAorhYvvJdjEhkSlElNB8wgChmWTl22PKLotrGL51XVA3NBwwgip3FCuuX/jAhwDMgSg4DiOLD3wSiFOKriWJmsVj4t8EopfhqojhYvvhNIKIUYQBR7Cz8WVZKrbj+MirNf+FwGFeuXJm1TyJTmJqK/uTzxMQEhoeHr7m93Nxc/sltuiYGEEV59dVX0djYOGtfltWC//z3O7BumdNse+211/Dctqeuub3f/OY32Lp1a8rrpPmBAURRRkdHce7cuVn7rFYLRsZW4J9X1uDCZBmKcwYxOvaPa44HgLGxsRtVKs0DcV3Q79+/H6tWrYKmadA0Dbqu44033jD7x8fH4fV6UVRUhLy8PNTW1iIQCERtY2BgADU1NVi4cCFKSkqwe/duTE3xsyQZQYB/jK3E/1xei0uTpfjo8jp8emWl6qoo
g8UVQGVlZXj66afR1dWFU6dO4e6778YPfvAD9Pb2AgAeeeQRvP7662hubkZHRwfOnz+P++67z3z89PQ0ampqMDExgePHj+PFF1/EoUOHsGfPntTuFd0QIoLQpAMzLxuBFaPTBUproswWVwBt3rwZ9957L5YtW4Zbb70Vv/71r5GXl4cTJ07AMAy88MIL+N3vfoe7774blZWVOHjwII4fP44TJ04AAP7+97+jr68Pf/nLX7BmzRps2rQJTz31FPbt24eJiYkbsoOUOgLAmfMxcizjAAQ5liu4xf5P1WVRBkv4HtD09DSam5sxNjYGXdfR1dWFyclJeDwec8zy5cvhdrvh8/mwYcMG+Hw+rFy5Ek7nv25iVldXY8eOHejt7cW3v/3tuGo4e/Ys8vLyEt0FmoXf779u/wcftmNg6Bw+n3RicU4AFwL91x0/NDSEvr6+VJZIGWB0dDSmcXEHUE9PD3Rdx/j4OPLy8nDkyBFUVFSgu7sbNpsNBQUFUeOdTqf5ovb7/VHhM9M/03ct4XAY4XDYXA+FQgAAwzB4/yjFrjUFP+PdngEAA3Ft73rT9DQ/xTr5EHcA3Xbbbeju7oZhGPjrX/+Kuro6dHR0xF1gPJqamvDEE09c1V5VVQVN027oc99sPvzww5Ru75vf/CbuvPPOlG6T5r6Zk4SvE/fHWm02G771rW+hsrISTU1NWL16Nf7whz/A5XLN+qG0QCAAl8sFAHC5XFfNis2sz4yZTWNjIwzDMJfBwcF4yyaiOSjpz9VHIhGEw2FUVlYiJycHbW1tZl9/fz8GBgag6zoAQNd19PT0IBgMmmNaW1uhaRoqKiqu+Rx2u92c+p9ZiCjzxXUJ1tjYiE2bNsHtdmNkZAQvvfQS3nnnHbz11ltwOBzYtm0bGhoaUFhYCE3TsHPnTui6jg0bNgAANm7ciIqKCtx///3Yu3cv/H4/Hn30UXi9Xn5cn+gmFFcABYNB/PSnP8XQ0BAcDgdWrVqFt956C9/73vcAAM888wysVitqa2sRDodRXV2N559/3nx8VlYWWlpasGPHDui6jkWLFqGurg5PPvlkaveKEmaz2VJ6hmmz2VK2LZp/LCIiqouIVygUgsPhgGEYvBxLsdHRUXz++ecp297ixYv5UYmbUKzvUX4XjKLk5eUxMCht+OMuRKQMA4iIlGEAEZEyDCAiUoYBRETKMICISBkGEBEpwwAiImUYQESkDAOIiJRhABGRMgwgIlKGAUREyjCAiEgZBhARKcMAIiJlGEBEpAwDiIiUYQARkTIMICJShgFERMowgIhIGQYQESnDACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKMICISJls1QUkQkQAAKFQSHElRDSbmffmzHv1WjIygC5evAgAKC8vV1wJEV3PyMgIHA7HNfszMoAKCwsBAAMDA9fdOYoWCoVQXl6OwcFBaJqmupyMwGOWGBHByMgISktLrzsuIwPIav3i1pXD4eCLIgGapvG4xYnHLH6xnBzwJjQRKcMAIiJlMjKA7HY7Hn/8cdjtdtWlZBQet/jxmN1YFvm6eTIiohskI8+AiGh+YAARkTIMICJShgFERMpkZADt27cPS5YswYIFC1BVVYXOzk7VJSnT1NSEdevWIT8/HyUlJdiyZQv6+/ujxoyPj8Pr9aKoqAh5eXmora1FIBCIGjMwMICamhosXLgQJSUl2L17N6amptK5K8o8/fTTsFgs2LVrl9nGY5YmkmEOHz4sNptN/vSnP0lvb69s375dCgoKJBAIqC5Nierqajl48KCcPn1auru75d577xW32y2jo6PmmIceekjKy8ulra1NTp06JRs2bJA777zT7J+ampIVK1aIx+ORDz74QI4ePSrFxcXS2NioYpfSqrOzU5YsWSKrVq2Shx9+2GznMUuPjAug9evXi9frNdenp6eltLRUmpqaFFY1dwSDQQEgHR0dIiIyPDwsOTk50tzcbI45c+aMABCfzyciIkePHhWr
1Sp+v98cs3//ftE0TcLhcHp3II1GRkZk2bJl0traKt/5znfMAOIxS5+MugSbmJhAV1cXPB6P2Wa1WuHxeODz+RRWNncYhgHgX1/Y7erqwuTkZNQxW758Odxut3nMfD4fVq5cCafTaY6prq5GKBRCb29vGqtPL6/Xi5qamqhjA/CYpVNGfRn1woULmJ6ejvqfDgBOpxNnz55VVNXcEYlEsGvXLtx1111YsWIFAMDv98Nms6GgoCBqrNPphN/vN8fMdkxn+uajw4cP4/3338d77713VR+PWfpkVADR9Xm9Xpw+fRrvvvuu6lLmtMHBQTz88MNobW3FggULVJdzU8uoS7Di4mJkZWVdNRsRCATgcrkUVTU31NfXo6WlBW+//TbKysrMdpfLhYmJCQwPD0eN//Ixc7lcsx7Tmb75pqurC8FgEHfccQeys7ORnZ2Njo4OPPvss8jOzobT6eQxS5OMCiCbzYbKykq0tbWZbZFIBG1tbdB1XWFl6ogI6uvrceTIEbS3t2Pp0qVR/ZWVlcjJyYk6Zv39/RgYGDCPma7r6OnpQTAYNMe0trZC0zRUVFSkZ0fS6J577kFPTw+6u7vNZe3atdi6dav53zxmaaL6Lni8Dh8+LHa7XQ4dOiR9fX3y4IMPSkFBQdRsxM1kx44d4nA45J133pGhoSFzuXz5sjnmoYceErfbLe3t7XLq1CnRdV10XTf7Z6aUN27cKN3d3fLmm2/KN77xjZtqSvnLs2AiPGbpknEBJCLy3HPPidvtFpvNJuvXr5cTJ06oLkkZALMuBw8eNMdcuXJFfv7zn8vixYtl4cKF8sMf/lCGhoaitvPpp5/Kpk2bJDc3V4qLi+UXv/iFTE5Opnlv1PlqAPGYpQd/joOIlMmoe0BENL8wgIhIGQYQESnDACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKMICISBkGEBEpwwAiImX+D/F/nHu9N9/NAAAAAElFTkSuQmCC",
      "text/plain": [
       "<Figure size 300x300 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "200.0"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "play(True)[-1]"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "name": "第9章-策略梯度算法.ipynb",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
