{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The state-value function:\n",
    "\n",
    "V(state) = sum over all actions -> prob(action) * Q(state, action)\n",
    "\n",
    "Rewriting this sum with importance-sampling weights gives:\n",
    "\n",
    "V(state) = sum over all actions -> old_prob(action) * \\[new_prob(action) / old_prob(action)\\] * Q(state, action)\n",
    "\n",
    "Initially the new and old probabilities can be treated as equal, but as the model is updated the new probabilities drift away from the old ones.\n",
    "\n",
    "The Q(state, action) term can be estimated with the Monte Carlo method.\n",
    "\n",
    "By policy gradient theory, the state value is determined by the quality of the actions taken, so maximizing V yields the best action policy."
   ]
  },
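  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick numerical sanity check of the identity above (a sketch: the two-action probabilities and Q-values here are made up for illustration — the importance weights cancel, so both sums agree):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical example: old/new policy probabilities and Q-values for two actions\n",
    "p_old = [0.5, 0.5]\n",
    "p_new = [0.7, 0.3]\n",
    "q = [1.0, 2.0]\n",
    "\n",
    "# Expectation of Q directly under the new policy\n",
    "v_direct = sum(pn * qi for pn, qi in zip(p_new, q))\n",
    "\n",
    "# Same expectation written as an importance-weighted sum over the old policy\n",
    "v_is = sum(po * (pn / po) * qi for po, pn, qi in zip(p_old, p_new, q))\n",
    "\n",
    "v_direct, v_is"
   ]
  },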
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAASAAAADMCAYAAADTcn7NAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8ekN5oAAAACXBIWXMAAA9hAAAPYQGoP6dpAAARg0lEQVR4nO3da2xUVb/H8f/0NlB6s8W29mn7QCIRONy0QKnkRCOVioSI8EINwUo4ELEQLoZoEyiCmhI8CYpCfaPAG8Sn5qmEHkBrCyWGYqHYHCjQaCKHHqCtgL1Q6G1mnax1MvMwULCV0tVpv59ks7v3WjOzZ9P5zbrs6TiUUkoAwIIAGw8KABoBBMAaAgiANQQQAGsIIADWEEAArCGAAFhDAAGwhgACYA0BBGDwBdD27dtlxIgRMmTIEElNTZXy8nJbhwJgMAXQ119/LWvWrJENGzbIqVOnZOLEiZKRkSH19fU2DgeAJQ4bH0bVLZ4pU6bIZ599ZrbdbrckJSXJihUr5N133+3rwwFgSVBfP2B7e7tUVFRIdna2d19AQICkp6dLWVlZl7dpa2szi4cOrOvXr0tMTIw4HI4+OW4A3afbNc3NzZKQkGBe3/0mgK5evSoul0vi4uJ89uvt8+fPd3mb3Nxc2bhxYx8dIYDeUlNTI4mJif0ngP4K3VrSY0YejY2NkpycbJ5cRESE1WMDcLempiYzrBIeHi730+cBNHz4cAkMDJS6ujqf/Xo7Pj6+y9s4nU6z3EmHDwEE9F9/NkTS57NgISEhkpKSIsXFxT5jOno7LS2trw8HgEVWumC6O5WZmSmTJ0+WqVOnyscffywtLS2yaNEiG4cDYDAF0CuvvCK///675OTkSG1trUyaNEkOHTp018A0gIHNynVAvTHAFRkZaQajGQMC/Pc1ymfBAFhDAAGwhgACYA0BBMAaAgiANQQQAGsIIADWEEAArCGAAFhDAAGwhgACYA0BBMAaAgiANQQQAGsIIADWEEAArCGAAFhDAAGwhgACYA0BBMAaAgiANQQQAGsIIADWEEAArCGAAFhDAAGwhgACYA0BBMB/Aujo0aMyZ84cSUhIEIfDId9++61PuVJKcnJy5LHHHpOhQ4dKenq6/PLLLz51rl+/LgsWLDBfWh8VFSWLFy+WGzduPPizATCwA6ilpUUmTpwo27dv77J8y5Ytsm3bNvn888/lp59+kmHDhklGRoa0trZ66+jwqaqqkqKiIiksLDShtnTp0gd7JgD8j3oA+uYFBQXebbfbreLj49VHH33k3dfQ0KCcTqf66quvzPbZs2fN7U6cOOGtc/DgQeVwONSlS5e69biNjY3mPvQaQP/T3ddor44B/fbbb1JbW2u6XR6RkZGSmpoqZWVlZluvdbdr8uTJ3jq6fkBAgGkxdaWtrU2ampp8FgD+r1cDSIePFhcX57Nfb3vK9Do2NtanPCgoSKKjo7117pSbm2uCzLMkJSX15mEDsMQvZsGys7OlsbHRu9TU1Ng+JAD9LYDi4+PNuq6uzme/3vaU6XV9fb1PeWdnp5kZ89S5k9PpNDNmty8A/F+vBtDIkSNNiBQXF3v36fEaPbaTlpZmtvW6oaFBKioqvHVKSkrE7XabsSIAg0dQT2+gr9f59ddffQaeKysrzRhOcnKyrFq1Sj744AMZNWqUCaT169eba4bmzp1r6o8ZM0ZeeOEFWbJkiZmq7+jokOXLl8urr75q6gEYRHo6vXb48GEzvXbnkpmZ6Z2KX79+vYqLizPT7zNmzFDV1dU+93Ht2jX12muvqbCwMBUREaEWLVqkmpube32KD4Ad3X2NOvQ/4md0t07PhukBacaDAP99jfrFLBiAgYkAAmANAQTAGgIIgDUEEABrCCAA1hBAAKwhgABYQwABsIYAAmANAQTAGgIIgDUEEABrCCAA1hBAAKwhgABYQwABsIYAAmANAQTAGgIIgP98LQ/QW5RyS2NNlbjabnr3hT76
dxka1fUXVGLgIYBgjXK75X9/+qfcun7Juy/56VcIoEGELhgAawggANYQQACsIYAAWEMAAbCGAALgHwGUm5srU6ZMkfDwcImNjZW5c+dKdXW1T53W1lbJysqSmJgYCQsLk/nz50tdXZ1PnYsXL8rs2bMlNDTU3M/atWuls7Ozd54RgIEZQKWlpSZcjh8/LkVFRdLR0SEzZ86UlpYWb53Vq1fL/v37JT8/39S/fPmyzJs3z1vucrlM+LS3t8uxY8dk9+7dsmvXLsnJyendZwag/1MPoL6+Xum7KC0tNdsNDQ0qODhY5efne+ucO3fO1CkrKzPbBw4cUAEBAaq2ttZbJy8vT0VERKi2trZuPW5jY6O5T72G/3J1dqjT/3hPlX++xLvU/vcPtg8LvaC7r9EHGgNqbGw06+joaLOuqKgwraL09HRvndGjR0tycrKUlZWZbb0eP368xMXFeetkZGRIU1OTVFVVdfk4bW1tpvz2BYD/+8sB5Ha7ZdWqVTJ9+nQZN26c2VdbWyshISESFRXlU1eHjS7z1Lk9fDzlnrJ7jT1FRkZ6l6SkpL962AAGQgDpsaAzZ87I3r175WHLzs42rS3PUlNT89AfE0A//TDq8uXLpbCwUI4ePSqJiYne/fHx8WZwuaGhwacVpGfBdJmnTnl5uc/9eWbJPHXu5HQ6zQJgELeAlFImfAoKCqSkpERGjhzpU56SkiLBwcFSXFzs3aen6fW0e1pamtnW69OnT0t9fb23jp5Ri4iIkLFjxz74MwIwMFtAutu1Z88e2bdvn7kWyDNmo8dlhg4dataLFy+WNWvWmIFpHSorVqwwoTNt2jRTV0/b66BZuHChbNmyxdzHunXrzH3TygEGlx4FUF5enlk/++yzPvt37twpb7zxhvl569atEhAQYC5A1LNXeoZrx44d3rqBgYGm+7Zs2TITTMOGDZPMzEzZtGlT7zwjAH7Doefixc/oaXjd2tID0rqVBf/kdnXK2X9+eNcfJIsbP8PqcaHvXqN8FgyANQQQAGsIIADWEEAArCGAAFhDAAGwhgACYA0BBMAaAgiANQQQAGsIIADWEEAArCGAAFhDAAGwhgACYA0BBMAaAgiANQQQAGsIIADWEEAArCGAAFhDAAGwhgACYA0BBMA/vhkV6KmbN29Ke3t7l2XK7RKXy+2z79atW9LQ0HDP+wsLC5OgIH5tBwr+J/FQvffee7Jnz54uywIDHPKf/zFd/h77r2/OzN28Wf6r/EKX9fVXfu/bt0+efPLJh3a86FsEEB4q3Zq5dOlfX718u6DAAGlpC5RzLU9Ls+sRSXRWS0ND2T3r6wC6V2sKg2AMKC8vTyZMmGC+61kvaWlpcvDgQW95a2urZGVlSUxMjGkqz58/X+rq6nzu4+LFizJ79mwJDQ2V2NhYWbt2rXR2dvbeM4LfUMohZ1umy/+0/ptc7/ibVN34d/m9Pdn2YaG/BlBiYqJs3rxZKioq5OTJk/Lcc8/JSy+9JFVVVaZ89erVsn//fsnPz5fS0lK5fPmyzJs3z3t7l8tlwke/ix07dkx2794tu3btkpycnN5/Zuj3lDjkhusREXGYbZcEy01XuO3DQn/tgs2ZM8dn+8MPPzStouPHj5tw+uKLL0x/XweTtnPnThkzZowpnzZtmnz//fdy9uxZ+eGHHyQuLk4mTZok77//vrzzzjtmrCAkJKR3nx36NYfDLX9z/iJKRopbAmVYYKPEhFy2fVjwhzEg3ZrRLZ2WlhbTFdOtoo6ODklPT/fWGT16tCQnJ0tZWZkJIL0eP368CR+PjIwMWbZsmWlF9XRw8fz586arh/7rjz/+uGeZ2+2W42Vfict5Qm66IuXRkBr57cK5+97fhQsXJDycVlJ/d+PGjYcTQKdPnzaBo8d79Iu/oKBAxo4dK5WVlaYFExUV5VNfh01tba35Wa9vDx9PuafsXtra2szi0dTUZNaNjY2MH/Vz9xs0Vkqk8Nh5/VbS7ftrbm6+7zQ9+gfd
MHkoAfTEE0+YsNEv/m+++UYyMzPNeM/DlJubKxs3brxrf2pqqhkMR/915xvOg9ItaP3/jv7N00jo9SuhdSvn8ccfl5SUFBMMEydOlE8++UTi4+PNu92d7056FkyXaXp956yYZ9tTpyvZ2dkm8DxLTU1NTw8bwED8KIbux+vukQ6k4OBgKS4u9pZVV1ebaXfdZdP0Wnfh6uvrvXWKiopMK0Z34+7F6XR6p/49CwD/16MumG6JzJo1ywws6764nvE6cuSIfPfddxIZGSmLFy+WNWvWSHR0tAmJFStWmNDRA9DazJkzTdAsXLhQtmzZYsZ91q1bZ64d0iEDYHDpUQDplsvrr78uV65cMYGjL0rU4fP888+b8q1bt5qrVfUFiLpVpGe4duzY4b19YGCgFBYWmlkvHUzDhg0zY0ibNm3q/WeGfmHIkCG91mLVvz98DmxgcSil5yL8b4BLB6AeD6I71r9dv3692zMi3aGvnqe1PHBeo7yd4KHS3XG9AF3h7wEBsIYAAmANAQTAGgIIgDUEEABrCCAA1hBAAKwhgABYQwABsIYAAmANAQTAGgIIgDUEEABrCCAA1hBAAKwhgABYQwABsIYAAmANAQTAGgIIgDUEEABrCCAA1hBAAKwhgABYQwABsIYAAmANAQTAGgIIgDUEEABrCCAA1gSJH1JKmXVTU5PtQwHQBc9r0/NaHVABdO3aNbNOSkqyfSgA7qO5uVkiIyMHVgBFR0eb9cWLF+/75HD3u5IO7ZqaGomIiLB9OH6Bc/bX6JaPDp+EhIT71vPLAAoI+P+hKx0+/FL0nD5nnLee4Zz1XHcaBwxCA7CGAAJgjV8GkNPplA0bNpg1uo/z1nOcs4fLof5sngwAHhK/bAEBGBgIIADWEEAArCGAAFjjlwG0fft2GTFihAwZMkRSU1OlvLxcBqvc3FyZMmWKhIeHS2xsrMydO1eqq6t96rS2tkpWVpbExMRIWFiYzJ8/X+rq6nzq6KvKZ8+eLaGhoeZ+1q5dK52dnTIYbN68WRwOh6xatcq7j3PWR5Sf2bt3rwoJCVFffvmlqqqqUkuWLFFRUVGqrq5ODUYZGRlq586d6syZM6qyslK9+OKLKjk5Wd24ccNb580331RJSUmquLhYnTx5Uk2bNk09/fTT3vLOzk41btw4lZ6ern7++Wd14MABNXz4cJWdna0GuvLycjVixAg1YcIEtXLlSu9+zlnf8LsAmjp1qsrKyvJuu1wulZCQoHJzc60eV39RX1+vL6tQpaWlZruhoUEFBwer/Px8b51z586ZOmVlZWZbv3gCAgJUbW2tt05eXp6KiIhQbW1taqBqbm5Wo0aNUkVFReqZZ57xBhDnrO/4VResvb1dKioqJD093edzYXq7rKzM6rH1F42NjT4f2NXnq6Ojw+ecjR49WpKTk73nTK/Hjx8vcXFx3joZGRnmg5hVVVUyUOkulu5C3X5uNM5Z3/GrD6NevXpVXC6Xz3+6prfPnz8vg53b7TbjGNOnT5dx48aZfbW1tRISEiJRUVF3nTNd5qnT1Tn1lA1Ee/fulVOnTsmJEyfuKuOc9R2/CiD8+Tv6mTNn5Mcff7R9KP2a/tMaK1eulKKiIjORAXv8qgs2fPhwCQwMvGs2Qm/Hx8fLYLZ8+XIpLCyUw4cPS2Jione/Pi+669rQ0HDPc6bXXZ1TT9lAo7tY9fX18tRTT0lQUJBZSktLZdu2beZn3ZLhnPUNvwog3SxOSUmR4uJin26H3k5LS5PBSE8k6PApKCiQkpISGTlypE+5Pl/BwcE+50xP0+spZM850+vTp0+bF6WHbh3ov38zduxYGWhmzJhhnm9lZaV3mTx5sixYsMD7M+esjyg/nIZ3Op1q165d6uzZs2rp0qVmGv722YjBZNmyZSoyMlIdOXJEXblyxbvcvHnTZ0pZT82XlJSYKeW0tDSz3DmlPHPmTDOVf+jQIfXoo48Oqinl22fBNM5Z3/C7ANI+/fRT88uhrwfS0/LH
jx9Xg5V+D+lq0dcGedy6dUu99dZb6pFHHlGhoaHq5ZdfNiF1uwsXLqhZs2apoUOHmutZ3n77bdXR0aEGawBxzvoGf44DgDV+NQYEYGAhgABYQwABsIYAAmANAQTAGgIIgDUEEABrCCAA1hBAAKwhgABYQwABsIYAAiC2/B/n4zr6VG+2IAAAAABJRU5ErkJggg==",
      "text/plain": [
       "<Figure size 300x300 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import gym\n",
    "\n",
    "\n",
    "# Define the environment\n",
    "class MyWrapper(gym.Wrapper):\n",
    "\n",
    "    def __init__(self):\n",
    "        env = gym.make('CartPole-v1', render_mode='rgb_array')\n",
    "        super().__init__(env)\n",
    "        self.env = env\n",
    "        self.step_n = 0\n",
    "\n",
    "    def reset(self):\n",
    "        state, _ = self.env.reset()\n",
    "        self.step_n = 0\n",
    "        return state\n",
    "\n",
    "    def step(self, action):\n",
    "        state, reward, terminated, truncated, info = self.env.step(action)\n",
    "        over = terminated or truncated\n",
    "\n",
    "        # Cap the number of steps per episode\n",
    "        self.step_n += 1\n",
    "        if self.step_n >= 200:\n",
    "            over = True\n",
    "\n",
    "        # Penalize episodes that end before reaching the cap\n",
    "        if over and self.step_n < 200:\n",
    "            reward = -1000\n",
    "\n",
    "        return state, reward, over\n",
    "\n",
    "    # Render the current game frame\n",
    "    def show(self):\n",
    "        from matplotlib import pyplot as plt\n",
    "        plt.figure(figsize=(3, 3))\n",
    "        plt.imshow(self.env.render())\n",
    "        plt.show()\n",
    "\n",
    "\n",
    "env = MyWrapper()\n",
    "\n",
    "env.reset()\n",
    "\n",
    "env.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[0.5028, 0.4972],\n",
       "         [0.5123, 0.4877]], grad_fn=<SoftmaxBackward0>),\n",
       " tensor([[-0.0145],\n",
       "         [ 0.0464]], grad_fn=<AddmmBackward0>))"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "\n",
    "# Define the policy and value networks\n",
    "model_action = torch.nn.Sequential(\n",
    "    torch.nn.Linear(4, 64),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(64, 64),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(64, 2),\n",
    "    torch.nn.Softmax(dim=1),\n",
    ")\n",
    "\n",
    "model_value = torch.nn.Sequential(\n",
    "    torch.nn.Linear(4, 64),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(64, 64),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(64, 1),\n",
    ")\n",
    "\n",
    "model_action(torch.randn(2, 4)), model_value(torch.randn(2, 4))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "-979.0"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from IPython import display\n",
    "import numpy as np\n",
    "import random\n",
    "\n",
    "\n",
    "# Play one episode and record the transitions\n",
    "def play(show=False):\n",
    "    state = []\n",
    "    action = []\n",
    "    reward = []\n",
    "    next_state = []\n",
    "    over = []\n",
    "\n",
    "    s = env.reset()\n",
    "    o = False\n",
    "    while not o:\n",
    "        # Sample an action from the policy's probabilities\n",
    "        prob = model_action(torch.FloatTensor(s).reshape(1, 4))[0].tolist()\n",
    "        a = random.choices(range(2), weights=prob, k=1)[0]\n",
    "\n",
    "        ns, r, o = env.step(a)\n",
    "\n",
    "        state.append(s)\n",
    "        action.append(a)\n",
    "        reward.append(r)\n",
    "        next_state.append(ns)\n",
    "        over.append(o)\n",
    "\n",
    "        s = ns\n",
    "\n",
    "        if show:\n",
    "            display.clear_output(wait=True)\n",
    "            env.show()\n",
    "\n",
    "    # Stack into a single ndarray first: converting a list of ndarrays directly is very slow\n",
    "    state = torch.FloatTensor(np.array(state)).reshape(-1, 4)\n",
    "    action = torch.LongTensor(action).reshape(-1, 1)\n",
    "    reward = torch.FloatTensor(reward).reshape(-1, 1)\n",
    "    next_state = torch.FloatTensor(np.array(next_state)).reshape(-1, 4)\n",
    "    over = torch.LongTensor(over).reshape(-1, 1)\n",
    "\n",
    "    return state, action, reward, next_state, over, reward.sum().item()\n",
    "\n",
    "\n",
    "state, action, reward, next_state, over, reward_sum = play()\n",
    "\n",
    "reward_sum"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "optimizer_action = torch.optim.Adam(model_action.parameters(), lr=1e-3)\n",
    "optimizer_value = torch.optim.Adam(model_value.parameters(), lr=1e-2)\n",
    "\n",
    "\n",
    "def requires_grad(model, value):\n",
    "    for param in model.parameters():\n",
    "        param.requires_grad_(value)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([22, 1])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def train_value(state, reward, next_state, over):\n",
    "    requires_grad(model_action, False)\n",
    "    requires_grad(model_value, True)\n",
    "\n",
    "    # Compute the bootstrapped TD target\n",
    "    with torch.no_grad():\n",
    "        target = model_value(next_state)\n",
    "    target = target * 0.98 * (1 - over) + reward\n",
    "\n",
    "    # Train on each batch for 10 iterations\n",
    "    for _ in range(10):\n",
    "        # Current value estimates\n",
    "        value = model_value(state)\n",
    "\n",
    "        loss = torch.nn.functional.mse_loss(value, target)\n",
    "        loss.backward()\n",
    "        optimizer_value.step()\n",
    "        optimizer_value.zero_grad()\n",
    "\n",
    "    # Subtracting value acts as a baseline\n",
    "    return (target - value).detach()\n",
    "\n",
    "\n",
    "value = train_value(state, reward, next_state, over)\n",
    "\n",
    "value.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "481.4684753417969"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def train_action(state, action, value):\n",
    "    requires_grad(model_action, True)\n",
    "    requires_grad(model_value, False)\n",
    "\n",
    "    # Value of the current state, essentially Q(state, action), estimated here by the Monte Carlo method\n",
    "    delta = []\n",
    "    for i in range(len(value)):\n",
    "        s = 0\n",
    "        for j in range(i, len(value)):\n",
    "            s += value[j] * (0.98 * 0.95)**(j - i)\n",
    "        delta.append(s)\n",
    "    delta = torch.FloatTensor(delta).reshape(-1, 1)\n",
    "\n",
    "    # Action probabilities before the update\n",
    "    with torch.no_grad():\n",
    "        prob_old = model_action(state).gather(dim=1, index=action)\n",
    "\n",
    "    # Train on each batch for 10 iterations\n",
    "    for _ in range(10):\n",
    "        # Action probabilities after the update\n",
    "        prob_new = model_action(state).gather(dim=1, index=action)\n",
    "\n",
    "        # Change in the action probabilities\n",
    "        ratio = prob_new / prob_old\n",
    "\n",
    "        # Compute the clipped and unclipped losses and take the element-wise minimum\n",
    "        surr1 = ratio * delta\n",
    "        surr2 = ratio.clamp(0.8, 1.2) * delta\n",
    "\n",
    "        loss = -torch.min(surr1, surr2).mean()\n",
    "\n",
    "        # Update the parameters\n",
    "        loss.backward()\n",
    "        optimizer_action.step()\n",
    "        optimizer_action.zero_grad()\n",
    "\n",
    "    return loss.item()\n",
    "\n",
    "\n",
    "train_action(state, action, value)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 57.404075622558594 -978.6\n",
      "100 -18.518997192382812 200.0\n",
      "200 0.9326910972595215 200.0\n",
      "300 -0.06682347506284714 200.0\n",
      "400 -0.040946170687675476 200.0\n",
      "500 -0.3048292398452759 200.0\n",
      "600 -0.3876965045928955 200.0\n",
      "700 2.3514397144317627 200.0\n",
      "800 1.754111886024475 200.0\n",
      "900 25.258014678955078 200.0\n"
     ]
    }
   ],
   "source": [
    "def train():\n",
    "    model_action.train()\n",
    "    model_value.train()\n",
    "\n",
    "    # Train for N epochs\n",
    "    for epoch in range(1000):\n",
    "        # Play at least N steps per epoch\n",
    "        steps = 0\n",
    "        while steps < 200:\n",
    "            state, action, reward, next_state, over, _ = play()\n",
    "            steps += len(state)\n",
    "\n",
    "            # Train both models\n",
    "            delta = train_value(state, reward, next_state, over)\n",
    "            loss = train_action(state, action, delta)\n",
    "\n",
    "        if epoch % 100 == 0:\n",
    "            test_result = sum([play()[-1] for _ in range(20)]) / 20\n",
    "            print(epoch, loss, test_result)\n",
    "\n",
    "\n",
    "train()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAASAAAADMCAYAAADTcn7NAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjYuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/P9b71AAAACXBIWXMAAA9hAAAPYQGoP6dpAAAT7ElEQVR4nO3df0xb570/8LcN2IHAMYEUeyi4ye2qZig/1pGEnFX6bt/WDWu507Jwpa2KWlZF6W1ioqRMkYbUpkpuJaLsj2zdUvLH7pJKV10mJrGpiLZi0JJb1QmJM64ITdCimwqUxHZDxDHQYgz+3D8izuoGMgzEj23eL+lI8fM8Pv6cB593zg+MLSIiICJSwKq6ACJauhhARKQMA4iIlGEAEZEyDCAiUoYBRETKMICISBkGEBEpwwAiImUYQESkjLIAOnHiBFavXo1ly5ahsrIS3d3dqkohIkWUBNAf//hH1NfX4/XXX8elS5ewceNGVFVVIRQKqSiHiBSxqPgwamVlJTZv3ozf/va3AIBYLIaysjLs27cPv/jFL5JdDhEpkp3sF5yYmIDf70dDQ4PZZrVa4fF44PP5ZnxOJBJBJBIxH8diMdy5cwfFxcWwWCwPvGYiSoyIYGRkBKWlpbBaZz/RSnoA3b59G1NTU3A6nXHtTqcTV69enfE5jY2NOHz4cDLKI6JFNDg4iFWrVs3an/QAmo+GhgbU19ebjw3DgNvtxuDgIDRNU1gZEc0kHA6jrKwMBQUF9x2X9ABauXIlsrKyEAwG49qDwSBcLteMz7Hb7bDb7fe0a5rGACJKYf/sEknS74LZbDZUVFSgo6PDbIvFYujo6ICu68kuh4gUUnIKVl9fj9raWmzatAlbtmzBr371K4yNjeHFF19UUQ4RKaIkgH7yk5/g888/x6FDhxAIBPDtb38b77///j0Xpokosyn5PaCFCofDcDgcMAyD14CIUtBc91F+FoyIlGEAEZEyDCAiUoYBRETKMICISBkGEBEpwwAiImUYQESkDAOIiJRhABGRMgwgIlKGAUREyjCAiEgZBhARKcMAIiJlGEBEpAwDiIiUYQARkTIMICJShgFERMowgIhIGQYQESnDACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKJBxAZ8+exQ9/+EOUlpbCYrHgz3/+c1y/iODQoUP4xje+gdzcXHg8Hvz973+PG3Pnzh3s3LkTmqahsLAQu3btwujo6II2hIjST8IBNDY2ho0bN+LEiRMz9h87dgxvvvkmTp48ifPnz2P58uWoqqrC+Pi4OWbnzp3o6+tDe3s7WltbcfbsWbz00kvz3woiSk+yAACkpaXFfByLxcTlcskvf/lLs214eFjsdrv84Q9/EBGRTz/9VADIhQsXzDHvvfeeWCwWuXHjxpxe1zAMASCGYSykfCJ6QOa6jy7qNaDr168jEAjA4/GYbQ6HA5WVlfD5fAAAn8+HwsJCbNq0yRzj8XhgtVpx/vz5GdcbiUQQDofjFiJKf4saQIFAAADgdDrj2p1Op9kXCARQUlIS15+dnY2ioiJzzNc1NjbC4XCYS1lZ2WKWTUSKpMVdsIaGBhiGYS6Dg4OqSyKiRbCoAeRyuQAAwWAwrj0YDJp9LpcLoVAorn9ychJ37twxx3yd3W6HpmlxCxGlv0UNoDVr1sDlcqGjo8NsC4fDOH/+PHRdBwDouo7h4WH4/X5zTGdnJ2KxGCorKxezHCJKcdmJPmF0dBTXrl0zH1+/fh09PT0oKiqC2+3GgQMH8MYbb+DRRx/FmjVr8Nprr6G0tBTbt28HAHzrW9/CD37wA+zevRsnT55ENBpFXV0dfvrTn6K0tHTRNoyI0kCit9c+/PBDAXDPUltbKyJ3b8W/9tpr4nQ6xW63y1NPPSX9/f1x6xgaGpLnnntO8vPzRdM0efHFF2VkZGTRb/ERkRpz3UctIiIK829ewuEwHA4HDMPg9SCiFDTXfTQt7oIR
UWZiABGRMgwgIlKGAUREyjCAiEgZBhARKcMAIiJlGEBEpAwDiIiUYQARkTIMICJShgFERMowgIhIGQYQESnDACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKJPy1PJTaIiNDGA3842uTrNk2ONzrYc3ij5pSD9+VGWYsdB3/2/mf5uOc5SuwrvQxBhClJJ6CZTyBSEx1EUQzYgBlOhGAAUQpigG0BEiMAUSpiQGU4UTk7lEQUQpiAGU8XgOi1MUAynTCAKLUlVAANTY2YvPmzSgoKEBJSQm2b9+O/v7+uDHj4+Pwer0oLi5Gfn4+ampqEAwG48YMDAyguroaeXl5KCkpwcGDBzE5ObnwraGZMYAoRSUUQF1dXfB6vTh37hza29sRjUaxbds2jI2NmWNeeeUVvPvuu2hubkZXVxdu3ryJHTt2mP1TU1Oorq7GxMQEPvnkE7z99ts4ffo0Dh06tHhbRSYRuXsdiCgVyQKEQiEBIF1dXSIiMjw8LDk5OdLc3GyOuXLligAQn88nIiJtbW1itVolEAiYY5qamkTTNIlEInN6XcMwBIAYhrGQ8jPS0LUL0n1yt7n4T+2XL4ZuqC6Llpi57qMLugZkGAYAoKioCADg9/sRjUbh8XjMMWvXroXb7YbP5wMA+Hw+rF+/Hk6n0xxTVVWFcDiMvr6+GV8nEokgHA7HLTRHvAZEKWzeARSLxXDgwAE88cQTWLduHQAgEAjAZrOhsLAwbqzT6UQgEDDHfDV8pvun+2bS2NgIh8NhLmVlZfMtO/NZLHEPRQQSm1JUDNH9zTuAvF4vLl++jDNnzixmPTNqaGiAYRjmMjg4+MBfM13ZC4phzVlmPo5FxzExekdhRUSzm9cnFOvq6tDa2oqzZ89i1apVZrvL5cLExASGh4fjjoKCwSBcLpc5pru7O25903fJpsd8nd1uh91un0+pS47Fmg3LPUdBPAWj1JTQEZCIoK6uDi0tLejs7MSaNWvi+isqKpCTk4OOjg6zrb+/HwMDA9B1HQCg6zp6e3sRCoXMMe3t7dA0DeXl5QvZFgJgsVjvOQ0jSlUJHQF5vV688847+Mtf/oKCggLzmo3D4UBubi4cDgd27dqF+vp6FBUVQdM07Nu3D7quY+vWrQCAbdu2oby8HM8//zyOHTuGQCCAV199FV6vl0c5i8FqBcAAovSQUAA1NTUBAL7//e/HtZ86dQo/+9nPAADHjx+H1WpFTU0NIpEIqqqq8NZbb5ljs7Ky0Nraij179kDXdSxfvhy1tbU4cuTIwraEANw9Avr6KRhRqrKIpN9vqYXDYTgcDhiGAU3TVJeTUsbDn+NKSyMmx0fNtkee/ncU/UuFwqpoqZnrPsrPgmUYi4WnYJQ+GEAZhhehKZ0wgDKNxcLjH0obDKAMY7HyCIjSBwMo0/AaEKURBlCGme0aUBre7KQlgAGUaRg+lEYYQEsBPw1PKYoBtATww6iUqhhASwD/HhClKgbQEsAvJqRUxQBaAngKRqmKAbQU8BSMUhQDaAngKRilKgbQEiDCIyBKTQygJYBHQJSqGEBLAG/DU6piAGWge74VIzapqBKi+2MAZRirNRt2R0lc25d3biqqhuj+GECZxmKBNSsnrolHQJSqGEAZ6O7fhSZKfXynZiCLlT9WSg98p2YYi8UCWLJUl0E0JwygDMQjIEoXCX0zKqknIhgbG8Pk5MwXlkViiEbj+6LRSRiGMes6CwoKkJXFoyZKPgZQGtq7dy86Oztn7LNagL3/uh5Pbiwz2z7++L/xH97jM47Pzc3FX//6Vzz88MMPpFai+2EApaGhoSHcuHFjxj6LBfjcWI/e0f+HSGw5Hs7tw/j44Kzj8/LyZj2aInrQErpY0NTUhA0bNkDTNGiaBl3X8d5775n94+Pj8Hq9KC4uRn5+PmpqahAMBuPWMTAwgOrqauTl5aGkpAQHDx7kDrCIsrPtGNN24EbkMdyOluF/Rv4/hiedqssimlFCAbRq1SocPXoU
fr8fFy9exJNPPokf/ehH6OvrAwC88sorePfdd9Hc3Iyuri7cvHkTO3bsMJ8/NTWF6upqTExM4JNPPsHbb7+N06dP49ChQ4u7VUuY1ZqN5Voppr8bbFLsGI/lqS2KaDayQCtWrJDf/e53Mjw8LDk5OdLc3Gz2XblyRQCIz+cTEZG2tjaxWq0SCATMMU1NTaJpmkQikTm/pmEYAkAMw1ho+WknFovJs88+KwBmXKzWLHnhuQY58sY5OfxGtxw/2iaNe/9t1vF5eXly7do11ZtFGWau++i8rwFNTU2hubkZY2Nj0HUdfr8f0WgUHo/HHLN27Vq43W74fD5s3boVPp8P69evh9P5j1OCqqoq7NmzB319fXj88ccTquHq1avIz8+f7yakrdHR0Vn7YrEp9F36LwQDVxCJ5cFp+wxDdwKzjhcRXLt2DZFI5EGUSkvU/d6jX5VwAPX29kLXdYyPjyM/Px8tLS0oLy9HT08PbDYbCgsL48Y7nU4EAnd3gEAgEBc+0/3TfbOJRCJxO0g4HAYAGIaxJK8fRaPR+/b7+weB/sE5rUtEMDIyguHh4UWojOiusbGxOY1LOIAee+wx9PT0wDAM/OlPf0JtbS26uroSLjARjY2NOHz48D3tlZWV0DTtgb52qhERrFixYtHWZ7Va8fjjj+ORRx5ZtHUSTR8k/DMJ/8qszWbDN7/5TVRUVKCxsREbN27Er3/9a7hcLkxMTNzzP2kwGITL5QIAuFyue+6KTT+eHjOThoYGGIZhLoODc/vfnYhS24J/Zz8WiyESiaCiogI5OTno6Ogw+/r7+zEwMABd1wEAuq6jt7cXoVDIHNPe3g5N01BeXj7ra9jtdvPW//RCROkvoVOwhoYGPPPMM3C73RgZGcE777yDjz76CB988AEcDgd27dqF+vp6FBUVQdM07Nu3D7quY+vWrQCAbdu2oby8HM8//zyOHTuGQCCAV199FV6vF3a7/YFsIBGlroQCKBQK4YUXXsCtW7fgcDiwYcMGfPDBB3j66acBAMePH4fVakVNTQ0ikQiqqqrw1ltvmc/PyspCa2sr9uzZA13XsXz5ctTW1uLIkSOLu1UZLi8vb9GOAvPy8mDlh1dJEYuIiOoiEhUOh+FwOGAYxpI7HRMR3L59G+Pj44uyPovFApfLhexsfiqHFs9c91G+69KMxWLBQw89pLoMokXBY28iUoYBRETKMICISBkGEBEpwwAiImUYQESkDAOIiJRhABGRMgwgIlKGAUREyjCAiEgZBhARKcMAIiJlGEBEpAwDiIiUYQARkTIMICJShgFERMowgIhIGQYQESnDACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKMICISBkGEBEpwwAiImUYQESkTLbqAuZDRAAA4XBYcSVENJPpfXN6X51NWgbQ0NAQAKCsrExxJUR0PyMjI3A4HLP2p2UAFRUVAQAGBgbuu3EULxwOo6ysDIODg9A0TXU5aYFzNj8igpGREZSWlt53XFoGkNV699KVw+Hgm2IeNE3jvCWIc5a4uRwc8CI0ESnDACIiZdIygOx2O15//XXY7XbVpaQVzlviOGcPlkX+2X0yIqIHJC2PgIgoMzCAiEgZBhARKcMAIiJl0jKATpw4gdWrV2PZsmWorKxEd3e36pKUaWxsxObNm1FQUICSkhJs374d/f39cWPGx8fh9XpRXFyM/Px81NTUIBgMxo0ZGBhAdXU18vLyUFJSgoMHD2JycjKZm6LM0aNHYbFYcODAAbONc5YkkmbOnDkjNptNfv/730tfX5/s3r1bCgsLJRgMqi5NiaqqKjl16pRcvnxZenp65NlnnxW32y2jo6PmmJdfflnKysqko6NDLl68KFu3bpXvfve7Zv/k5KSsW7dOPB6P/O1vf5O2tjZZuXKlNDQ0qNikpOru7pbVq1fLhg0bZP/+/WY75yw50i6AtmzZIl6v13w8NTUlpaWl0tjYqLCq1BEKhQSAdHV1iYjI8PCw5OTk
SHNzsznmypUrAkB8Pp+IiLS1tYnVapVAIGCOaWpqEk3TJBKJJHcDkmhkZEQeffRRaW9vl+9973tmAHHOkietTsEmJibg9/vh8XjMNqvVCo/HA5/Pp7Cy1GEYBoB/fGDX7/cjGo3GzdnatWvhdrvNOfP5fFi/fj2cTqc5pqqqCuFwGH19fUmsPrm8Xi+qq6vj5gbgnCVTWn0Y9fbt25iamor7oQOA0+nE1atXFVWVOmKxGA4cOIAnnngC69atAwAEAgHYbDYUFhbGjXU6nQgEAuaYmeZ0ui8TnTlzBpcuXcKFCxfu6eOcJU9aBRDdn9frxeXLl/Hxxx+rLiWlDQ4OYv/+/Whvb8eyZctUl7OkpdUp2MqVK5GVlXXP3YhgMAiXy6WoqtRQV1eH1tZWfPjhh1i1apXZ7nK5MDExgeHh4bjxX50zl8s145xO92Uav9+PUCiE73znO8jOzkZ2dja6urrw5ptvIjs7G06nk3OWJGkVQDabDRUVFejo6DDbYrEYOjo6oOu6wsrUERHU1dWhpaUFnZ2dWLNmTVx/RUUFcnJy4uasv78fAwMD5pzpuo7e3l6EQiFzTHt7OzRNQ3l5eXI2JImeeuop9Pb2oqenx1w2bdqEnTt3mv/mnCWJ6qvgiTpz5ozY7XY5ffq0fPrpp/LSSy9JYWFh3N2IpWTPnj3icDjko48+klu3bpnLF198YY55+eWXxe12S2dnp1y8eFF0XRdd183+6VvK27Ztk56eHnn//ffloYceWlK3lL96F0yEc5YsaRdAIiK/+c1vxO12i81mky1btsi5c+dUl6QMgBmXU6dOmWO+/PJL2bt3r6xYsULy8vLkxz/+sdy6dStuPZ999pk888wzkpubKytXrpSf//znEo1Gk7w16nw9gDhnycE/x0FEyqTVNSAiyiwMICJShgFERMowgIhIGQYQESnDACIiZRhARKQMA4iIlGEAEZEyDCAiUoYBRETKMICISJn/A5p6q/Yfd4BiAAAAAElFTkSuQmCC\n",
      "text/plain": [
       "<Figure size 300x300 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "200.0"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "play(True)[-1]"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "name": "第9章-策略梯度算法.ipynb",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.21"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}