{
 "metadata": {
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.8-final"
  },
  "orig_nbformat": 2,
  "kernelspec": {
   "name": "python_defaultSpec_1599400687672",
   "display_name": "Python 3.7.8 64-bit ('venv': venv)"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2,
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Deep Q-Network\n",
     "\n",
     "Deep reinforcement learning applies deep learning to reinforcement learning.\n",
     "\n",
     "### DQN, Deep Q-learning\n",
     "The Q-learning update rule for the action-value function:\n",
     "$$\n",
     "    Q(s_t,a_t)=Q(s_t,a_t)+\\eta*(R_{t+1}+\\gamma\\max_aQ(s_{t+1},a)-Q(s_t,a_t))\n",
     "$$\n",
     "\n",
     "Repeating this update drives the values toward satisfying:\n",
     "$$\n",
     "    Q(s_t,a_t)=R_{t+1}+\\gamma\\max_aQ(s_{t+1},a)\n",
     "$$\n",
     "\n",
     ">For example:\n",
     ">\n",
     ">If action $a_t$ is taken in state $s_t$ at time $t$, the output-layer neuron's value is $Q(s_t,a_t)$\n",
     ">\n",
     ">The network is trained so that this output approaches $R_{t+1}+\\gamma\\max_aQ(s_{t+1},a)$,\n",
     ">using the squared-error function:\n",
     ">$$\n",
     ">    E(s_t,a_t)=(R_{t+1}+\\gamma\\max_aQ(s_{t+1},a)-Q(s_t,a_t))^2\n",
     ">$$\n",
     "\n",
     "Since state $s_{t+1}$ is actually obtained by taking action $a_t$ in state $s_t$, $\\max_aQ(s_{t+1},a)$ is computed by feeding state $s_{t+1}$ into the network\n",
     "\n",
     ">![Figure 5.1 DQN on the inverted-pendulum CartPole task](media\\5.1倒立摆中的DQN.jpg)"
   ]
  },
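   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a quick sanity check of the update rule above, one step of the tabular update can be sketched in plain Python (all the numbers here are made up for illustration):\n",
     "\n",
     "```python\n",
     "# one tabular Q-learning update: Q(s,a) += eta * (R + gamma * max_a' Q(s',a') - Q(s,a))\n",
     "eta, gamma = 0.1, 0.9   # hypothetical learning rate and discount factor\n",
     "q_sa = 1.0              # hypothetical current Q(s_t, a_t)\n",
     "r_next = 1.0            # hypothetical reward R_{t+1}\n",
     "max_q_next = 2.0        # hypothetical max_a Q(s_{t+1}, a)\n",
     "\n",
     "td_error = r_next + gamma * max_q_next - q_sa\n",
     "q_sa = q_sa + eta * td_error\n",
     "print(round(q_sa, 2))  # 1.18\n",
     "```\n",
     "\n",
     "DQN replaces the table entry with a network output and reduces the same TD error by gradient descent."
    ]
   },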
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Four key points\n",
     "To make DQN learn stably, four points need attention:\n",
     "\n",
     "1. Experience replay\n",
     "    Rather than learning from each step's content (experience) as it happens, DQN stores every step's content in a replay memory and trains the network on content drawn from it at random (replay). Each step's content is also called a transition\n",
     "2. Fixed Target Q-Network\n",
     "    + a main network that decides actions\n",
     "    + a target network that provides the action values used when computing the error function\n",
     "    + the target network is periodically overwritten by the main network (omitted in this brief demo)\n",
     "3. Reward clipping\n",
     "4. The Huber loss instead of the squared error (uses the absolute error when it exceeds 1)"
   ]
  },
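   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Point 4 can be illustrated directly: the Huber loss equals the squared error for small errors but grows only linearly beyond 1, so large errors cannot produce huge gradients. A minimal sketch of the scalar case (PyTorch's `smooth_l1_loss`, used later in this notebook, implements this):\n",
     "\n",
     "```python\n",
     "def huber(x):\n",
     "    # squared error for |x| <= 1, linear beyond that\n",
     "    return 0.5 * x ** 2 if abs(x) <= 1 else abs(x) - 0.5\n",
     "\n",
     "print(huber(0.5))  # 0.125, same as 0.5 * 0.5**2\n",
     "print(huber(4.0))  # 3.5, far smaller than 4.0**2 = 16\n",
     "```"
    ]
   },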
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
     "# 1. Packages\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "import gym"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
     "# 2. Animation\n",
     "from matplotlib import animation\n",
     "from IPython.display import HTML, display\n",
     "\n",
     "def display_frames_as_gif(frames):\n",
     "    \"\"\"\n",
     "    Displays a list of frames as a gif, with controls\n",
     "    \"\"\"\n",
     "    \n",
     "    plt.figure(figsize=(frames[0].shape[1]/72.0, frames[0].shape[0]/72.0), dpi=72)\n",
     "    patch = plt.imshow(frames[0])\n",
     "    plt.axis('off')\n",
     "    \n",
     "    def animate(i):\n",
     "        patch.set_data(frames[i])\n",
     "        return (patch,)  # return the updated artists\n",
     "        \n",
     "    anim = animation.FuncAnimation(plt.gcf(), animate, frames=len(frames), interval=50)\n",
     "    \n",
     "    anim.save('media/movie_cartpole_DQN.mp4')\n",
     "    return HTML(anim.to_jshtml())  # return an HTML object so the caller can display it"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": "tr(name_a='名称为A', value_b=100)\n100\n"
    }
   ],
   "source": [
     "# Next, a usage example of namedtuple (named tuple)\n",
     "# This code uses namedtuple\n",
     "# namedtuple stores values paired with field names,\n",
     "# which makes accessing values by field name convenient\n",
     "# Link given in the original book: https://docs.python.jp/3/library/collections.html#collections.namedtuple\n",
     "# Chinese documentation: https://docs.python.org/zh-cn/3/library/collections.html#collections.namedtuple\n",
     "# A usage example follows\n",
     "\n",
     "from collections import namedtuple\n",
     "\n",
     "Tr = namedtuple('tr', ('name_a', 'value_b'))\n",
     "Tr_object = Tr('名称为A', 100)\n",
     "\n",
     "print(Tr_object)  # prints: tr(name_a='名称为A', value_b=100)\n",
     "print(Tr_object.value_b)  # prints: 100\n",
     "\n",
     "# Tr_object was built with namedtuple\n",
     "# Field names: name_a, value_b\n",
     "# Each value can be accessed through its field name\n",
     "# Each step's transition will likewise be stored as a namedtuple,\n",
     "# so states and action values are easier to access when implementing DQN"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
     "# 3. Create the Transition namedtuple\n",
    "from collections import namedtuple\n",
    "\n",
    "Transition = namedtuple(\n",
    "    'Transition', ('state', 'action', 'next_state', 'reward')\n",
    ")\n",
    "\n",
     "# 4. Constants\n",
    "ENV = 'CartPole-v0'\n",
    "GAMMA = 0.9\n",
    "MAX_STEPS = 200\n",
    "NUM_EPISODES = 500"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
     "# 5. ReplayMemory stores the experience data\n",
     "'''\n",
     "To enable mini-batch learning, the memory class ReplayMemory stores experience data\n",
     "\n",
     "push saves the step's transition as experience\n",
     "sample randomly selects transitions\n",
     "len returns the number of transitions currently stored\n",
     "\n",
     "When the number of stored transitions exceeds the constant CAPACITY,\n",
     "the index wraps around to the front and old entries are overwritten\n",
     "'''\n",
     "import random\n",
     "\n",
     "class ReplayMemory:\n",
     "\n",
     "    def __init__(self, CAPACITY):\n",
     "        self.capacity = CAPACITY  # maximum length of memory below\n",
     "        self.memory = []  # stores past experiences\n",
     "        self.index = 0  # index at which to save next\n",
     "\n",
     "    def push(self, state, action, state_next, reward):\n",
     "        '''Save transition = (state, action, state_next, reward) in memory'''\n",
     "\n",
     "        if len(self.memory) < self.capacity:\n",
     "            self.memory.append(None)  # grow the list while memory is not yet full\n",
     "\n",
     "        # Save values paired with field names via the namedtuple Transition\n",
     "        self.memory[self.index] = Transition(state, action, state_next, reward)\n",
     "\n",
     "        self.index = (self.index + 1) % self.capacity  # advance the index\n",
     "\n",
     "    def sample(self, batch_size):\n",
     "        '''Randomly retrieve batch_size samples and return them'''\n",
     "        return random.sample(self.memory, batch_size)\n",
     "\n",
     "    def __len__(self):\n",
     "        '''Return the current length of memory'''\n",
     "        return len(self.memory)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Next comes the core of DQN, the Brain class:\n",
     "\n",
     ">In tabular (non-deep) Q-learning the Brain class holds a table; here it holds a neural network, used through a replay function and a decide_action function.\n",
     "\n",
     "+ replay fetches mini-batch data from the memory class,\n",
     "    learns the network's connection parameters,\n",
     "    and updates the Q-function\n",
     "+ decide_action follows the $\\epsilon$-greedy method and\n",
     "    returns either a randomly chosen action\n",
     "    or the index of the action with the highest Q-value in the current state"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
     "# The Brain class that runs DQN\n",
     "# The Q-function is defined as a deep network (rather than a table)\n",
     "\n",
     "# Packages\n",
     "\n",
     "import random\n",
     "import torch\n",
     "from torch import nn\n",
     "from torch import optim\n",
     "import torch.nn.functional as F\n",
     "\n",
     "# Constants\n",
     "BATCH_SIZE = 32\n",
     "CAPACITY = 10000\n",
     "\n",
     "class Brain:\n",
     "    def __init__(self, num_states, num_actions):\n",
     "        self.num_actions = num_actions  # CartPole's two actions\n",
     "\n",
     "        # Create the object that stores experience\n",
     "        self.memory = ReplayMemory(CAPACITY)\n",
     "\n",
     "        # Build a neural network\n",
     "        self.model = nn.Sequential()\n",
     "        self.model.add_module('fc1', nn.Linear(num_states, 32))\n",
     "        self.model.add_module('relu1', nn.ReLU())\n",
     "        self.model.add_module('fc2', nn.Linear(32, 32))\n",
     "        self.model.add_module('relu2', nn.ReLU())\n",
     "        self.model.add_module('fc3', nn.Linear(32, num_actions))\n",
     "\n",
     "        print(self.model)  # print the network's shape\n",
     "\n",
     "        # Set up the optimizer\n",
     "        self.optimizer = optim.Adam(self.model.parameters(), lr=0.0001)\n",
     "\n",
     "    def replay(self):\n",
     "        '''Learn the network's connection parameters via Experience Replay'''\n",
     "\n",
     "        # -----------------------------------------\n",
     "        # 1. Check the size of the replay memory\n",
     "        # -----------------------------------------\n",
     "        # 1.1 Do nothing while the memory is smaller than a mini-batch\n",
     "        if len(self.memory) < BATCH_SIZE:\n",
     "            return\n",
     "\n",
     "        # -----------------------------------------\n",
     "        # 2. Create the mini-batch\n",
     "        # -----------------------------------------\n",
     "        # 2.1 Fetch mini-batch data from the memory\n",
     "        transitions = self.memory.sample(BATCH_SIZE)\n",
     "\n",
     "        # 2.2 Regroup the samples into a mini-batch\n",
     "        # transitions holds BATCH_SIZE tuples of (state, action, state_next, reward),\n",
     "        # i.e. BATCH_SIZE * (state, action, state_next, reward);\n",
     "        # we want to turn this into mini-batch form, in other words\n",
     "        # (state*BATCH_SIZE, action*BATCH_SIZE, state_next*BATCH_SIZE, reward*BATCH_SIZE)\n",
     "        batch = Transition(*zip(*transitions))\n",
     "\n",
     "        # 2.3 Convert each variable into the mini-batch form\n",
     "        # e.g. each state has shape [torch.FloatTensor of size 1x4];\n",
     "        # convert to torch.FloatTensor of size BATCH_SIZE x 4\n",
     "        # cat means Concatenate\n",
     "        state_batch = torch.cat(batch.state)\n",
     "        action_batch = torch.cat(batch.action)\n",
     "        reward_batch = torch.cat(batch.reward)\n",
     "        # Collect only the entries whose next state exists:\n",
     "        non_final_next_states = torch.cat([s for s in batch.next_state\n",
     "                                           if s is not None])\n",
     "\n",
     "        # -----------------------------------------\n",
     "        # 3. Compute Q(s_t, a_t) as the teacher signal\n",
     "        # -----------------------------------------\n",
     "        # 3.1 Switch the network to inference mode\n",
     "        self.model.eval()\n",
     "\n",
     "        # 3.2 Compute the network's output Q(s_t, a_t)\n",
     "        # self.model(state_batch) outputs the left and right Q-values,\n",
     "        # of shape [torch.FloatTensor of size BATCH_SIZEx2].\n",
     "        # To get the Q-value of the action a_t actually taken here,\n",
     "        # take the index (left or right) of a_t stored in action_batch\n",
     "        # and use gather to pick the corresponding Q-value.\n",
     "        state_action_values = self.model(state_batch).gather(1, action_batch)\n",
     "\n",
     "        # 3.3 Compute max{Q(s_t+1, a)}.\n",
     "        # Careful with the next state s_t+1: the value is 0 when no next state exists\n",
     "\n",
     "        # Create a mask indicating whether CartPole is not yet done and has a next_state\n",
     "        non_final_mask = torch.ByteTensor(tuple(map(lambda s: s is not None,\n",
     "                                                    batch.next_state)))\n",
     "        # First set everything to 0\n",
     "        next_state_values = torch.zeros(BATCH_SIZE)\n",
     "\n",
     "        # Compute the maximum Q-value at the indices that have a next state\n",
     "        # max(1) returns the column-wise [value, index];\n",
     "        # take the Q-value part (index 0)\n",
     "        # and detach it from the graph\n",
     "        next_state_values[non_final_mask] = self.model(\n",
     "            non_final_next_states).max(1)[0].detach()\n",
     "\n",
     "        # 3.4 Compute the expected Q(s_t, a_t) from the Q-learning formula as the teacher signal\n",
     "        expected_state_action_values = reward_batch + GAMMA * next_state_values\n",
     "\n",
     "        # -----------------------------------------\n",
     "        # 4. Update the connection parameters\n",
     "        # -----------------------------------------\n",
     "        # 4.1 Switch to training mode\n",
     "        self.model.train()\n",
     "\n",
     "        # 4.2 Compute the loss (smooth_l1_loss is the Huber loss)\n",
     "        # expected_state_action_values has size [minibatch];\n",
     "        # unsqueeze makes it [minibatch x 1]\n",
     "        loss = F.smooth_l1_loss(state_action_values,\n",
     "                                expected_state_action_values.unsqueeze(1))\n",
     "\n",
     "        # 4.3 Update the connection parameters\n",
     "        self.optimizer.zero_grad()  # reset the gradients\n",
     "        loss.backward()  # backpropagate\n",
     "        self.optimizer.step()  # update the parameters\n",
     "\n",
     "    def decide_action(self, state, episode):\n",
     "        '''Decide the action for the current state'''\n",
     "        # ε-greedy: gradually favor the best action\n",
     "        epsilon = 0.5 * (1 / (episode + 1))\n",
     "\n",
     "        if epsilon <= np.random.uniform(0, 1):\n",
     "            self.model.eval()  # switch the network to inference mode\n",
     "            with torch.no_grad():\n",
     "                action = self.model(state).max(1)[1].view(1, 1)\n",
     "            # max(1)[1] gets the index of the network's largest output;\n",
     "            # .view(1,1) converts [torch.LongTensor of size 1] to size 1x1\n",
     "\n",
     "        else:\n",
     "            # Return an action (0 or 1) at random\n",
     "            action = torch.LongTensor(\n",
     "                [[random.randrange(self.num_actions)]])\n",
     "            # action has the form [torch.LongTensor of size 1x1]\n",
     "\n",
     "        return action\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
     "# The Agent class that acts on CartPole, a cart with a pole\n",
     "\n",
     "\n",
     "class Agent:\n",
     "    def __init__(self, num_states, num_actions):\n",
     "        '''Set the numbers of states and actions for the task'''\n",
     "        self.brain = Brain(num_states, num_actions)  # give the agent a brain that decides actions\n",
     "\n",
     "    def update_q_function(self):\n",
     "        '''Update the Q-function'''\n",
     "        self.brain.replay()\n",
     "\n",
     "    def get_action(self, state, episode):\n",
     "        '''Decide the action'''\n",
     "        action = self.brain.decide_action(state, episode)\n",
     "        return action\n",
     "\n",
     "    def memorize(self, state, action, state_next, reward):\n",
     "        '''Save state, action, state_next, reward in the replay memory'''\n",
     "        self.brain.memory.push(state, action, state_next, reward)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
     "# The Environment class that runs CartPole\n",
     "\n",
     "\n",
     "class Environment:\n",
     "\n",
     "    def __init__(self):\n",
     "        self.env = gym.make(ENV)  # set up the task\n",
     "        num_states = self.env.observation_space.shape[0]  # number of state variables for the task: 4\n",
     "        num_actions = self.env.action_space.n  # number of CartPole actions: 2\n",
     "        self.agent = Agent(num_states, num_actions)  # create the Agent that acts in the environment\n",
     "\n",
     "    def run(self):\n",
     "        '''Run the episodes'''\n",
     "        episode_10_list = np.zeros(10)  # steps the pole stayed up in the last 10 episodes, for the average\n",
     "        complete_episodes = 0  # count of consecutive episodes with 195+ steps\n",
     "        episode_final = False  # flag marking the final episode\n",
     "        frames = []  # variable that stores the images\n",
     "\n",
     "        for episode in range(NUM_EPISODES):  # loop up to the maximum number of episodes\n",
     "            observation = self.env.reset()  # initialize the environment\n",
     "\n",
     "            state = observation  # use the observation directly as the state\n",
     "            state = torch.from_numpy(state).type(\n",
     "                torch.FloatTensor)  # convert the NumPy array to a PyTorch tensor\n",
     "            state = torch.unsqueeze(state, 0)  # FloatTensor of size 4 -> size 1x4\n",
     "\n",
     "            for step in range(MAX_STEPS):  # loop for one episode\n",
     "\n",
     "                if episode_final is True:  # in the final episode, store each frame\n",
     "                    frames.append(self.env.render(mode='rgb_array'))\n",
     "\n",
     "                action = self.agent.get_action(state, episode)  # choose an action\n",
     "\n",
     "                # Execute action a_t to obtain s_{t+1} and the done flag\n",
     "                # .item() extracts the content of action\n",
     "                observation_next, _, done, _ = self.env.step(action.item())  # reward and info are unused, hence _\n",
     "\n",
     "                # Assign the reward; check whether the episode ended and whether a next state exists\n",
     "                if done:  # done when step reaches 200, or when the pole tilts past a threshold\n",
     "                    state_next = None  # there is no next state, so store None\n",
     "\n",
     "                    # Append to the step list of the most recent 10 episodes\n",
     "                    episode_10_list = np.hstack(\n",
     "                        (episode_10_list[1:], step + 1))\n",
     "\n",
     "                    if step < 195:\n",
     "                        reward = torch.FloatTensor(\n",
     "                            [-1.0])  # reward -1 for falling midway\n",
     "                        complete_episodes = 0  # reset the success streak\n",
     "                    else:\n",
     "                        reward = torch.FloatTensor([1.0])  # reward 1 for standing until the end\n",
     "                        complete_episodes = complete_episodes + 1  # extend the streak\n",
     "                else:\n",
     "                    reward = torch.FloatTensor([0.0])  # ordinary reward 0\n",
     "                    state_next = observation_next  # keep the observation as is\n",
     "                    state_next = torch.from_numpy(state_next).type(\n",
     "                        torch.FloatTensor)  # NumPy array -> PyTorch tensor\n",
     "                    state_next = torch.unsqueeze(state_next, 0)  # FloatTensor of size 4 -> size 1x4\n",
     "\n",
     "                # Add the experience to the replay memory\n",
     "                self.agent.memorize(state, action, state_next, reward)\n",
     "\n",
     "                # Experience Replay: update the Q-function\n",
     "                self.agent.update_q_function()\n",
     "\n",
     "                # Update the observation\n",
     "                state = state_next\n",
     "\n",
     "                # End-of-episode handling\n",
     "                if done:\n",
     "                    # '10 次试验平均 step 数' = average steps over the last 10 trials\n",
     "                    print('%d Episode: Finished after %d steps：10 次试验平均 step 数 = %.1lf' % (\n",
     "                        episode, step + 1, episode_10_list.mean()))\n",
     "                    break\n",
     "\n",
     "            if episode_final is True:\n",
     "                # Save and show the animation\n",
     "                self.env.close()  # close the environment\n",
     "                html = display_frames_as_gif(frames)\n",
     "                display(html)  # show the returned HTML object\n",
     "                break\n",
     "\n",
     "            # 10 consecutive successes\n",
     "            if complete_episodes >= 10:\n",
     "                print('10 轮连续成功')  # '10 consecutive successes'\n",
     "                episode_final = True  # mark the next episode as the final one"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": "Sequential(\n  (fc1): Linear(in_features=4, out_features=32, bias=True)\n  (relu1): ReLU()\n  (fc2): Linear(in_features=32, out_features=32, bias=True)\n  (relu2): ReLU()\n  (fc3): Linear(in_features=32, out_features=2, bias=True)\n)\n0 Episode: Finished after 11 steps：10 次试验平均 step 数 = 1.1\n1 Episode: Finished after 14 steps：10 次试验平均 step 数 = 2.5\n2 Episode: Finished after 10 steps：10 次试验平均 step 数 = 3.5\n3 Episode: Finished after 11 steps：10 次试验平均 step 数 = 4.6\n4 Episode: Finished after 14 steps：10 次试验平均 step 数 = 6.0\n5 Episode: Finished after 9 steps：10 次试验平均 step 数 = 6.9\n6 Episode: Finished after 8 steps：10 次试验平均 step 数 = 7.7\n7 Episode: Finished after 11 steps：10 次试验平均 step 数 = 8.8\n8 Episode: Finished after 10 steps：10 次试验平均 step 数 = 9.8\n9 Episode: Finished after 8 steps：10 次试验平均 step 数 = 10.6\n10 Episode: Finished after 12 steps：10 次试验平均 step 数 = 10.7\n11 Episode: Finished after 9 steps：10 次试验平均 step 数 = 10.2\n12 Episode: Finished after 10 steps：10 次试验平均 step 数 = 10.2\n13 Episode: Finished after 10 steps：10 次试验平均 step 数 = 10.1\n14 Episode: Finished after 10 steps：10 次试验平均 step 数 = 9.7\n15 Episode: Finished after 10 steps：10 次试验平均 step 数 = 9.8\n16 Episode: Finished after 11 steps：10 次试验平均 step 数 = 10.1\n17 Episode: Finished after 10 steps：10 次试验平均 step 数 = 10.0\n18 Episode: Finished after 10 steps：10 次试验平均 step 数 = 10.0\n19 Episode: Finished after 9 steps：10 次试验平均 step 数 = 10.1\n20 Episode: Finished after 9 steps：10 次试验平均 step 数 = 9.8\n21 Episode: Finished after 10 steps：10 次试验平均 step 数 = 9.9\n22 Episode: Finished after 9 steps：10 次试验平均 step 数 = 9.8\n23 Episode: Finished after 10 steps：10 次试验平均 step 数 = 9.8\n24 Episode: Finished after 9 steps：10 次试验平均 step 数 = 9.7\n25 Episode: Finished after 9 steps：10 次试验平均 step 数 = 9.6\n26 Episode: Finished after 11 steps：10 次试验平均 step 数 = 9.6\n27 Episode: Finished after 10 steps：10 次试验平均 step 数 = 9.6\n28 Episode: Finished after 9 steps：10 次试验平均 step 数 = 9.5\n29 Episode: Finished after 9 steps：10 次试验平均 step 数 
= 9.5\n30 Episode: Finished after 9 steps：10 次试验平均 step 数 = 9.5\n31 Episode: Finished after 11 steps：10 次试验平均 step 数 = 9.6\n32 Episode: Finished after 9 steps：10 次试验平均 step 数 = 9.6\n33 Episode: Finished after 11 steps：10 次试验平均 step 数 = 9.7\n34 Episode: Finished after 11 steps：10 次试验平均 step 数 = 9.9\n35 Episode: Finished after 9 steps：10 次试验平均 step 数 = 9.9\n36 Episode: Finished after 11 steps：10 次试验平均 step 数 = 9.9\n37 Episode: Finished after 11 steps：10 次试验平均 step 数 = 10.0\n38 Episode: Finished after 10 steps：10 次试验平均 step 数 = 10.1\n39 Episode: Finished after 11 steps：10 次试验平均 step 数 = 10.3\n40 Episode: Finished after 9 steps：10 次试验平均 step 数 = 10.3\n41 Episode: Finished after 8 steps：10 次试验平均 step 数 = 10.0\n42 Episode: Finished after 12 steps：10 次试验平均 step 数 = 10.3\n43 Episode: Finished after 10 steps：10 次试验平均 step 数 = 10.2\n44 Episode: Finished after 9 steps：10 次试验平均 step 数 = 10.0\n45 Episode: Finished after 12 steps：10 次试验平均 step 数 = 10.3\n46 Episode: Finished after 11 steps：10 次试验平均 step 数 = 10.3\n47 Episode: Finished after 9 steps：10 次试验平均 step 数 = 10.1\n48 Episode: Finished after 11 steps：10 次试验平均 step 数 = 10.2\n49 Episode: Finished after 13 steps：10 次试验平均 step 数 = 10.4\n50 Episode: Finished after 9 steps：10 次试验平均 step 数 = 10.4\n51 Episode: Finished after 9 steps：10 次试验平均 step 数 = 10.5\n52 Episode: Finished after 10 steps：10 次试验平均 step 数 = 10.3\n53 Episode: Finished after 10 steps：10 次试验平均 step 数 = 10.3\n54 Episode: Finished after 10 steps：10 次试验平均 step 数 = 10.4\n55 Episode: Finished after 9 steps：10 次试验平均 step 数 = 10.1\n56 Episode: Finished after 10 steps：10 次试验平均 step 数 = 10.0\n57 Episode: Finished after 12 steps：10 次试验平均 step 数 = 10.3\n58 Episode: Finished after 14 steps：10 次试验平均 step 数 = 10.6\n59 Episode: Finished after 13 steps：10 次试验平均 step 数 = 10.6\n60 Episode: Finished after 10 steps：10 次试验平均 step 数 = 10.7\n61 Episode: Finished after 10 steps：10 次试验平均 step 数 = 10.8\n62 Episode: Finished after 9 steps：10 次试验平均 step 数 = 10.7\n63 Episode: Finished after 9 
steps：10 次试验平均 step 数 = 10.6\n64 Episode: Finished after 14 steps：10 次试验平均 step 数 = 11.0\n65 Episode: Finished after 14 steps：10 次试验平均 step 数 = 11.5\n66 Episode: Finished after 12 steps：10 次试验平均 step 数 = 11.7\n67 Episode: Finished after 16 steps：10 次试验平均 step 数 = 12.1\n68 Episode: Finished after 14 steps：10 次试验平均 step 数 = 12.1\n69 Episode: Finished after 12 steps：10 次试验平均 step 数 = 12.0\n70 Episode: Finished after 13 steps：10 次试验平均 step 数 = 12.3\n71 Episode: Finished after 13 steps：10 次试验平均 step 数 = 12.6\n72 Episode: Finished after 14 steps：10 次试验平均 step 数 = 13.1\n73 Episode: Finished after 12 steps：10 次试验平均 step 数 = 13.4\n74 Episode: Finished after 15 steps：10 次试验平均 step 数 = 13.5\n75 Episode: Finished after 16 steps：10 次试验平均 step 数 = 13.7\n76 Episode: Finished after 17 steps：10 次试验平均 step 数 = 14.2\n77 Episode: Finished after 13 steps：10 次试验平均 step 数 = 13.9\n78 Episode: Finished after 15 steps：10 次试验平均 step 数 = 14.0\n79 Episode: Finished after 12 steps：10 次试验平均 step 数 = 14.0\n80 Episode: Finished after 11 steps：10 次试验平均 step 数 = 13.8\n81 Episode: Finished after 16 steps：10 次试验平均 step 数 = 14.1\n82 Episode: Finished after 14 steps：10 次试验平均 step 数 = 14.1\n83 Episode: Finished after 16 steps：10 次试验平均 step 数 = 14.5\n84 Episode: Finished after 21 steps：10 次试验平均 step 数 = 15.1\n85 Episode: Finished after 26 steps：10 次试验平均 step 数 = 16.1\n86 Episode: Finished after 14 steps：10 次试验平均 step 数 = 15.8\n87 Episode: Finished after 12 steps：10 次试验平均 step 数 = 15.7\n88 Episode: Finished after 20 steps：10 次试验平均 step 数 = 16.2\n89 Episode: Finished after 25 steps：10 次试验平均 step 数 = 17.5\n90 Episode: Finished after 24 steps：10 次试验平均 step 数 = 18.8\n91 Episode: Finished after 18 steps：10 次试验平均 step 数 = 19.0\n92 Episode: Finished after 9 steps：10 次试验平均 step 数 = 18.5\n93 Episode: Finished after 26 steps：10 次试验平均 step 数 = 19.5\n94 Episode: Finished after 9 steps：10 次试验平均 step 数 = 18.3\n95 Episode: Finished after 10 steps：10 次试验平均 step 数 = 16.7\n96 Episode: Finished after 9 steps：10 次试验平均 step 数 
= 16.2\n97 Episode: Finished after 9 steps：10 次试验平均 step 数 = 15.9\n98 Episode: Finished after 15 steps：10 次试验平均 step 数 = 15.4\n99 Episode: Finished after 14 steps：10 次试验平均 step 数 = 14.3\n100 Episode: Finished after 22 steps：10 次试验平均 step 数 = 14.1\n101 Episode: Finished after 41 steps：10 次试验平均 step 数 = 16.4\n102 Episode: Finished after 44 steps：10 次试验平均 step 数 = 19.9\n103 Episode: Finished after 12 steps：10 次试验平均 step 数 = 18.5\n104 Episode: Finished after 12 steps：10 次试验平均 step 数 = 18.8\n105 Episode: Finished after 17 steps：10 次试验平均 step 数 = 19.5\n106 Episode: Finished after 16 steps：10 次试验平均 step 数 = 20.2\n107 Episode: Finished after 18 steps：10 次试验平均 step 数 = 21.1\n108 Episode: Finished after 20 steps：10 次试验平均 step 数 = 21.6\n109 Episode: Finished after 22 steps：10 次试验平均 step 数 = 22.4\n110 Episode: Finished after 16 steps：10 次试验平均 step 数 = 21.8\n111 Episode: Finished after 14 steps：10 次试验平均 step 数 = 19.1\n112 Episode: Finished after 16 steps：10 次试验平均 step 数 = 16.3\n113 Episode: Finished after 13 steps：10 次试验平均 step 数 = 16.4\n114 Episode: Finished after 12 steps：10 次试验平均 step 数 = 16.4\n115 Episode: Finished after 18 steps：10 次试验平均 step 数 = 16.5\n116 Episode: Finished after 22 steps：10 次试验平均 step 数 = 17.1\n117 Episode: Finished after 12 steps：10 次试验平均 step 数 = 16.5\n118 Episode: Finished after 19 steps：10 次试验平均 step 数 = 16.4\n119 Episode: Finished after 13 steps：10 次试验平均 step 数 = 15.5\n120 Episode: Finished after 22 steps：10 次试验平均 step 数 = 16.1\n121 Episode: Finished after 21 steps：10 次试验平均 step 数 = 16.8\n122 Episode: Finished after 16 steps：10 次试验平均 step 数 = 16.8\n123 Episode: Finished after 21 steps：10 次试验平均 step 数 = 17.6\n124 Episode: Finished after 20 steps：10 次试验平均 step 数 = 18.4\n125 Episode: Finished after 24 steps：10 次试验平均 step 数 = 19.0\n126 Episode: Finished after 22 steps：10 次试验平均 step 数 = 19.0\n127 Episode: Finished after 22 steps：10 次试验平均 step 数 = 20.0\n128 Episode: Finished after 31 steps：10 次试验平均 step 数 = 21.2\n129 Episode: Finished after 21 steps：10 
次试验平均 step 数 = 22.0\n130 Episode: Finished after 42 steps：10 次试验平均 step 数 = 24.0\n131 Episode: Finished after 30 steps：10 次试验平均 step 数 = 24.9\n132 Episode: Finished after 30 steps：10 次试验平均 step 数 = 26.3\n133 Episode: Finished after 31 steps：10 次试验平均 step 数 = 27.3\n134 Episode: Finished after 32 steps：10 次试验平均 step 数 = 28.5\n135 Episode: Finished after 20 steps：10 次试验平均 step 数 = 28.1\n136 Episode: Finished after 27 steps：10 次试验平均 step 数 = 28.6\n137 Episode: Finished after 45 steps：10 次试验平均 step 数 = 30.9\n138 Episode: Finished after 23 steps：10 次试验平均 step 数 = 30.1\n139 Episode: Finished after 49 steps：10 次试验平均 step 数 = 32.9\n140 Episode: Finished after 41 steps：10 次试验平均 step 数 = 32.8\n141 Episode: Finished after 33 steps：10 次试验平均 step 数 = 33.1\n142 Episode: Finished after 42 steps：10 次试验平均 step 数 = 34.3\n143 Episode: Finished after 31 steps：10 次试验平均 step 数 = 34.3\n144 Episode: Finished after 34 steps：10 次试验平均 step 数 = 34.5\n145 Episode: Finished after 42 steps：10 次试验平均 step 数 = 36.7\n146 Episode: Finished after 62 steps：10 次试验平均 step 数 = 40.2\n147 Episode: Finished after 46 steps：10 次试验平均 step 数 = 40.3\n148 Episode: Finished after 41 steps：10 次试验平均 step 数 = 42.1\n149 Episode: Finished after 44 steps：10 次试验平均 step 数 = 41.6\n150 Episode: Finished after 42 steps：10 次试验平均 step 数 = 41.7\n151 Episode: Finished after 46 steps：10 次试验平均 step 数 = 43.0\n152 Episode: Finished after 59 steps：10 次试验平均 step 数 = 44.7\n153 Episode: Finished after 41 steps：10 次试验平均 step 数 = 45.7\n154 Episode: Finished after 35 steps：10 次试验平均 step 数 = 45.8\n155 Episode: Finished after 47 steps：10 次试验平均 step 数 = 46.3\n156 Episode: Finished after 42 steps：10 次试验平均 step 数 = 44.3\n157 Episode: Finished after 27 steps：10 次试验平均 step 数 = 42.4\n158 Episode: Finished after 34 steps：10 次试验平均 step 数 = 41.7\n159 Episode: Finished after 45 steps：10 次试验平均 step 数 = 41.8\n160 Episode: Finished after 44 steps：10 次试验平均 step 数 = 42.0\n161 Episode: Finished after 35 steps：10 次试验平均 step 数 = 40.9\n162 Episode: Finished 
after 36 steps：10 次试验平均 step 数 = 38.6\n163 Episode: Finished after 39 steps：10 次试验平均 step 数 = 38.4\n164 Episode: Finished after 28 steps：10 次试验平均 step 数 = 37.7\n165 Episode: Finished after 31 steps：10 次试验平均 step 数 = 36.1\n166 Episode: Finished after 33 steps：10 次试验平均 step 数 = 35.2\n167 Episode: Finished after 39 steps：10 次试验平均 step 数 = 36.4\n168 Episode: Finished after 35 steps：10 次试验平均 step 数 = 36.5\n169 Episode: Finished after 50 steps：10 次试验平均 step 数 = 37.0\n170 Episode: Finished after 35 steps：10 次试验平均 step 数 = 36.1\n171 Episode: Finished after 60 steps：10 次试验平均 step 数 = 38.6\n172 Episode: Finished after 58 steps：10 次试验平均 step 数 = 40.8\n173 Episode: Finished after 34 steps：10 次试验平均 step 数 = 40.3\n174 Episode: Finished after 35 steps：10 次试验平均 step 数 = 41.0\n175 Episode: Finished after 51 steps：10 次试验平均 step 数 = 43.0\n176 Episode: Finished after 65 steps：10 次试验平均 step 数 = 46.2\n177 Episode: Finished after 53 steps：10 次试验平均 step 数 = 47.6\n178 Episode: Finished after 98 steps：10 次试验平均 step 数 = 53.9\n179 Episode: Finished after 55 steps：10 次试验平均 step 数 = 54.4\n180 Episode: Finished after 49 steps：10 次试验平均 step 数 = 55.8\n181 Episode: Finished after 71 steps：10 次试验平均 step 数 = 56.9\n182 Episode: Finished after 41 steps：10 次试验平均 step 数 = 55.2\n183 Episode: Finished after 53 steps：10 次试验平均 step 数 = 57.1\n184 Episode: Finished after 49 steps：10 次试验平均 step 数 = 58.5\n185 Episode: Finished after 45 steps：10 次试验平均 step 数 = 57.9\n186 Episode: Finished after 82 steps：10 次试验平均 step 数 = 59.6\n187 Episode: Finished after 52 steps：10 次试验平均 step 数 = 59.5\n188 Episode: Finished after 66 steps：10 次试验平均 step 数 = 56.3\n189 Episode: Finished after 48 steps：10 次试验平均 step 数 = 55.6\n190 Episode: Finished after 62 steps：10 次试验平均 step 数 = 56.9\n191 Episode: Finished after 84 steps：10 次试验平均 step 数 = 58.2\n192 Episode: Finished after 88 steps：10 次试验平均 step 数 = 62.9\n193 Episode: Finished after 50 steps：10 次试验平均 step 数 = 62.6\n194 Episode: Finished after 67 steps：10 次试验平均 step 数 = 64.4\n195 
Episode: Finished after 53 steps：10 次试验平均 step 数 = 65.2\n196 Episode: Finished after 43 steps：10 次试验平均 step 数 = 61.3\n197 Episode: Finished after 80 steps：10 次试验平均 step 数 = 64.1\n198 Episode: Finished after 65 steps：10 次试验平均 step 数 = 64.0\n199 Episode: Finished after 92 steps：10 次试验平均 step 数 = 68.4\n200 Episode: Finished after 165 steps：10 次试验平均 step 数 = 78.7\n201 Episode: Finished after 69 steps：10 次试验平均 step 数 = 77.2\n202 Episode: Finished after 54 steps：10 次试验平均 step 数 = 73.8\n203 Episode: Finished after 94 steps：10 次试验平均 step 数 = 78.2\n204 Episode: Finished after 114 steps：10 次试验平均 step 数 = 82.9\n205 Episode: Finished after 200 steps：10 次试验平均 step 数 = 97.6\n206 Episode: Finished after 200 steps：10 次试验平均 step 数 = 113.3\n207 Episode: Finished after 55 steps：10 次试验平均 step 数 = 110.8\n208 Episode: Finished after 70 steps：10 次试验平均 step 数 = 111.3\n209 Episode: Finished after 200 steps：10 次试验平均 step 数 = 122.1\n210 Episode: Finished after 91 steps：10 次试验平均 step 数 = 114.7\n211 Episode: Finished after 71 steps：10 次试验平均 step 数 = 114.9\n212 Episode: Finished after 79 steps：10 次试验平均 step 数 = 117.4\n213 Episode: Finished after 75 steps：10 次试验平均 step 数 = 115.5\n214 Episode: Finished after 63 steps：10 次试验平均 step 数 = 110.4\n215 Episode: Finished after 84 steps：10 次试验平均 step 数 = 98.8\n216 Episode: Finished after 72 steps：10 次试验平均 step 数 = 86.0\n217 Episode: Finished after 70 steps：10 次试验平均 step 数 = 87.5\n218 Episode: Finished after 66 steps：10 次试验平均 step 数 = 87.1\n219 Episode: Finished after 65 steps：10 次试验平均 step 数 = 73.6\n220 Episode: Finished after 79 steps：10 次试验平均 step 数 = 72.4\n221 Episode: Finished after 90 steps：10 次试验平均 step 数 = 74.3\n222 Episode: Finished after 200 steps：10 次试验平均 step 数 = 86.4\n223 Episode: Finished after 152 steps：10 次试验平均 step 数 = 94.1\n224 Episode: Finished after 112 steps：10 次试验平均 step 数 = 99.0\n225 Episode: Finished after 161 steps：10 次试验平均 step 数 = 106.7\n226 Episode: Finished after 126 steps：10 次试验平均 step 数 = 112.1\n227 Episode: Finished after 
186 steps：10 次试验平均 step 数 = 123.7\n228 Episode: Finished after 109 steps：10 次试验平均 step 数 = 128.0\n229 Episode: Finished after 94 steps：10 次试验平均 step 数 = 130.9\n230 Episode: Finished after 95 steps：10 次试验平均 step 数 = 132.5\n231 Episode: Finished after 87 steps：10 次试验平均 step 数 = 132.2\n232 Episode: Finished after 114 steps：10 次试验平均 step 数 = 123.6\n233 Episode: Finished after 93 steps：10 次试验平均 step 数 = 117.7\n234 Episode: Finished after 116 steps：10 次试验平均 step 数 = 118.1\n235 Episode: Finished after 102 steps：10 次试验平均 step 数 = 112.2\n236 Episode: Finished after 102 steps：10 次试验平均 step 数 = 109.8\n237 Episode: Finished after 125 steps：10 次试验平均 step 数 = 103.7\n238 Episode: Finished after 200 steps：10 次试验平均 step 数 = 112.8\n239 Episode: Finished after 188 steps：10 次试验平均 step 数 = 122.2\n240 Episode: Finished after 194 steps：10 次试验平均 step 数 = 132.1\n241 Episode: Finished after 199 steps：10 次试验平均 step 数 = 143.3\n242 Episode: Finished after 170 steps：10 次试验平均 step 数 = 148.9\n243 Episode: Finished after 160 steps：10 次试验平均 step 数 = 155.6\n244 Episode: Finished after 184 steps：10 次试验平均 step 数 = 162.4\n245 Episode: Finished after 196 steps：10 次试验平均 step 数 = 171.8\n246 Episode: Finished after 200 steps：10 次试验平均 step 数 = 181.6\n247 Episode: Finished after 192 steps：10 次试验平均 step 数 = 188.3\n248 Episode: Finished after 200 steps：10 次试验平均 step 数 = 188.3\n249 Episode: Finished after 198 steps：10 次试验平均 step 数 = 189.3\n250 Episode: Finished after 177 steps：10 次试验平均 step 数 = 187.6\n251 Episode: Finished after 200 steps：10 次试验平均 step 数 = 187.7\n252 Episode: Finished after 180 steps：10 次试验平均 step 数 = 188.7\n253 Episode: Finished after 200 steps：10 次试验平均 step 数 = 192.7\n254 Episode: Finished after 200 steps：10 次试验平均 step 数 = 194.3\n255 Episode: Finished after 200 steps：10 次试验平均 step 数 = 194.7\n256 Episode: Finished after 200 steps：10 次试验平均 step 数 = 194.7\n257 Episode: Finished after 200 steps：10 次试验平均 step 数 = 195.5\n258 Episode: Finished after 200 steps：10 次试验平均 step 数 = 195.5\n259 Episode: 
Finished after 200 steps：10 次试验平均 step 数 = 195.7\n260 Episode: Finished after 200 steps：10 次试验平均 step 数 = 198.0\n261 Episode: Finished after 195 steps：10 次试验平均 step 数 = 197.5\n262 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.5\n263 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.5\n264 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.5\n265 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.5\n266 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.5\n267 Episode: Finished after 194 steps：10 次试验平均 step 数 = 198.9\n268 Episode: Finished after 180 steps：10 次试验平均 step 数 = 196.9\n269 Episode: Finished after 200 steps：10 次试验平均 step 数 = 196.9\n270 Episode: Finished after 193 steps：10 次试验平均 step 数 = 196.2\n271 Episode: Finished after 184 steps：10 次试验平均 step 数 = 195.1\n272 Episode: Finished after 200 steps：10 次试验平均 step 数 = 195.1\n273 Episode: Finished after 200 steps：10 次试验平均 step 数 = 195.1\n274 Episode: Finished after 195 steps：10 次试验平均 step 数 = 194.6\n275 Episode: Finished after 184 steps：10 次试验平均 step 数 = 193.0\n276 Episode: Finished after 184 steps：10 次试验平均 step 数 = 191.4\n277 Episode: Finished after 200 steps：10 次试验平均 step 数 = 192.0\n278 Episode: Finished after 200 steps：10 次试验平均 step 数 = 194.0\n279 Episode: Finished after 200 steps：10 次试验平均 step 数 = 194.0\n280 Episode: Finished after 200 steps：10 次试验平均 step 数 = 194.7\n281 Episode: Finished after 200 steps：10 次试验平均 step 数 = 196.3\n282 Episode: Finished after 200 steps：10 次试验平均 step 数 = 196.3\n283 Episode: Finished after 200 steps：10 次试验平均 step 数 = 196.3\n284 Episode: Finished after 192 steps：10 次试验平均 step 数 = 196.0\n285 Episode: Finished after 200 steps：10 次试验平均 step 数 = 197.6\n286 Episode: Finished after 199 steps：10 次试验平均 step 数 = 199.1\n287 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.1\n288 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.1\n289 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.1\n290 Episode: Finished after 200 steps：10 次试验平均 step 数 = 
199.1\n291 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.1\n292 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.1\n293 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.1\n294 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.9\n10 轮连续成功\n295 Episode: Finished after 200 steps：10 次试验平均 step 数 = 199.9\n"
    },
    {
     "output_type": "display_data",
     "data": {
      "text/plain": "<Figure size 600x400 with 1 Axes>",
      "image/svg+xml": "<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"no\"?>\r\n<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\r\n  \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\r\n<!-- Created with matplotlib (https://matplotlib.org/) -->\r\n<svg height=\"316.4pt\" version=\"1.1\" viewBox=\"0 0 467.4 316.4\" width=\"467.4pt\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\r\n <metadata>\r\n  <rdf:RDF xmlns:cc=\"http://creativecommons.org/ns#\" xmlns:dc=\"http://purl.org/dc/elements/1.1/\" xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\">\r\n   <cc:Work>\r\n    <dc:type rdf:resource=\"http://purl.org/dc/dcmitype/StillImage\"/>\r\n    <dc:date>2020-09-06T21:55:10.832058</dc:date>\r\n    <dc:format>image/svg+xml</dc:format>\r\n    <dc:creator>\r\n     <cc:Agent>\r\n      <dc:title>Matplotlib v3.3.1, https://matplotlib.org/</dc:title>\r\n     </cc:Agent>\r\n    </dc:creator>\r\n   </cc:Work>\r\n  </rdf:RDF>\r\n </metadata>\r\n <defs>\r\n  <style type=\"text/css\">*{stroke-linecap:butt;stroke-linejoin:round;}</style>\r\n </defs>\r\n <g id=\"figure_1\">\r\n  <g id=\"patch_1\">\r\n   <path d=\"M 0 316.4 \r\nL 467.4 316.4 \r\nL 467.4 0 \r\nL 0 0 \r\nz\r\n\" style=\"fill:none;\"/>\r\n  </g>\r\n  <g id=\"axes_1\">\r\n   <g clip-path=\"url(#p199eb5bca2)\">\r\n    <image height=\"302\" id=\"image3d72001863\" transform=\"scale(1 -1)translate(0 -302)\" width=\"453\" x=\"7.2\" 
xlink:href=\"data:image/png;base64,\r\niVBORw0KGgoAAAANSUhEUgAAAcUAAAEuCAYAAAD/QgnFAAAHUElEQVR4nO3czYtddx3H8e+5c5NJ0pnJTBMEG1trIUkfKFjUrlwkihsJCC50J9V/QMhfEVfZZW/AhTshuyBU2+Cm4CDBGaJoYhWJrYmTh0niPNzTzcdCydwUZ8o9PTev127ujzl8VvNm7j33NG3btgUA1KDrAQDweSGKABCiCAAhigAQoggAIYoAEKIIACGKABCiCAAhigAQoggAIYoAEKIIACGKABCiCAAhigAQoggAIYoAEKIIACGKABCiCAAhigAQoggAIYoAEKIIACGKABCiCAAhigAQoggAIYoAEKIIACGKABCiCAAhigAQoggAIYoAEKIIACGKABCiCAAhigAQoggAIYoAEKIIACGKABCiCAAhigAQoggAIYoAEKIIACGKABCiCAAhigAQoggAIYoAEKIIACGKABCiCAAhigAQoggAIYoAEKIIACGKABCiCAAx7HoAwNPk4sWLde7cua5nPObChQt16tSprmd0ThQBJujWrVu1urra9YzH3L9/v+sJnwvePgWAEEUACFEEgBBFAAhRBIAQRQAIUQSAEEUAiOGVK1e63gDw1Lh+/XrXE3a0srJSi4uLXc/o3PDy5ctdbwB4aly7dq3rCTtaXl72VJuqatq2bbseAfC0OH/+fJ09e/YzudagaWowaD7+eWt7tOtrXbp0qc6cOfNZzOo1zz4F6ImFQ7P1/BcW6u7ms7XdDuuH33qtvv3GV2o42Ki54Z368c9+VX/+x+2uZ/aaKAL0xJuvHKuzP3qrltdO18boYFVVvXe36pmZtXpj6e0aNM2nXIFP4+5TgB65+ejFj4P4P+vbi7X8n9O1vrXQ0arpIYoAPbE52l8bowM7nq1vL9ZWu3/Ci6aPKAL0xIPtuVrfOjz2/Dtff6m8gbo3ogjQE3+98cda+dN7Y07b+sGp16oZyOJeiCJAT9y6+7DW1j6oQW1/4vWZZqNeP3ylFva583Sv3H0K0CPH539fx+df+cRni/PDtfryM6v14JGvne+VKAL0zIn55a4nTC1RBJigpaWlOnny5K5//0nPJx0Mmjp54kSNdvEP49zc3K43TROPeQPokb+9+4v69+o7O54NhrP11bfOVzOYmfCq6eFGGwAIUQSAEEUACFEEgBBFAAhRBIAQRQAIUQTokWPf+F7tnz+649lo67914zc/n/Ci6SKKAD0yPDD3xC/nbz68O8E100cUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBemYw3Df+sB3VaHtrcmOmjCgC9Mzx7/507Nm9f16rD67+eoJrposoAvRM0zz5T3fbthNaMn1EEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUAXqmmdlXh194fez5+r/+UpsP7kxw0fQQRYCemdk3W0df/ubY8zvvX62N+7cnuGh6iCIAhCgCQIgiAIQoAkCIIgCEKAJAiCIAhCgCQIgiAIQoAkCIIgCEKAL00MEjz9f8cy+PPb/5h8vVjkYTXDQdRBGgh2bnj9TBZ58be37n/avVtqL4/xJFAAhRBIAQRQAIUQSAEEUACFEEgBBFAAhRBIAQRQAIUQSAEEUACFEE6KlmMDP2rN3eqr//7pcTXDMdRBGgp469+f06sPTFMadtPVq7OdE900AUAXqqGQyqqul6xlQRRQAIUQSAEEUACFEEgBBFAAhRBIAQRQAIUQSAEEUACFEEgBBFgB6bnT8y9my0tVGbD+9NcE3/iSJAj714+ic17vmnDz68UR+u/Hayg3pOFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIIZdDwBg75Ze+lotfOnVx14/dPSFDtb0V9O2bdv1CAB2p21HtXHvds3MHqrh7KGu5/SeKAJA+EwRAEIUASBEEQBCF
AEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASBEEQBCFAEgRBEAQhQBIEQRAEIUASA+Avlm2kyi5fVDAAAAAElFTkSuQmCC\" y=\"-7.2\"/>\r\n   </g>\r\n  </g>\r\n </g>\r\n <defs>\r\n  <clipPath id=\"p199eb5bca2\">\r\n   <rect height=\"302\" width=\"453\" x=\"7.2\" y=\"7.2\"/>\r\n  </clipPath>\r\n </defs>\r\n</svg>\r\n",
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAdMAAAE8CAYAAACb7Fv6AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/d3fzzAAAACXBIWXMAAAsTAAALEwEAmpwYAAAII0lEQVR4nO3dv2/Uhx3G8c/Zh4NtfiWElEYtpYnaClsoMlEXhFTURGydM1b9I7pVDExsUZSFqWJBrTpU6tKBDunQBdNMNE0gEkKtSIdAbWwXsI3v2wEJKcJnWj/mDr73em3cRz498uC37nTYnaZpCgDYvrFhDwCAl52YAkBITAEgJKYAEBJTAAiJKQCEus+4+38zAPBYp9/BK1MACIkpAITEFABCYgoAITEFgJCYAkBITAEgJKYAEBJTAAiJKQCExBQAQmIKACExBYCQmAJASEwBICSmABASUwAIiSkAhMQUAEJiCgAhMQWAkJgCQEhMASAkpgAQElMACIkpAITEFABCYgoAITEFgJCYAkBITAEgJKYAEBJTAAiJKQCExBQAQmIKACExBYCQmAJASEwBICSmABASUwAIiSkAhMQUAEJiCgAhMQWAkJgCQEhMASAkpgAQElMACIkpAITEFABCYgoAITEFgJCYAkBITAEgJKYAEBJTAAiJKQCExBQAQmIKACExBYCQmAJASEwBICSmABASUwAIiSkAhMQUAEJiCgAhMQWAkJgCQEhMASAkpgAQElMACIkpAITEFABCYgoAITEFgJCYAkBITAEgJKYAEBJTAAiJKQCExBQAQmIKACExBYCQmAJASEwBICSmABASUwAIiSkAhMQUAEJiCgAhMQWAkJgCQEhMASAkpgAQElMACIkpAITEFABCYgoAITEFgJCYAkBITAEgJKYAEBJTAAiJKQCExBQAQmIKACExBYCQmAJASEwBICSmABASUwAIiSkAhMQUAEJiCgAhMQWAkJgCQEhMASAkpgAQElMACIkpAITEFABCYgoAITEFgJCYAkBITAEgJKYAEBJTgJZqml6tLt2pR6v3hz2l9brDHgDA87Gx9rD+9ttf1atvnah935l56j71+pGaev3IEJa1j5gCtNzCzU9r4eanTz3+7Xd/JqY7xNu8ABASUwAIiSkAhMQUAEJiCgAhMQWAkJgCQEhMASAkpgAQElOAlrr1ya+rqtn0NnXoaB2a+clgB7WYmAK01Ory3b63se5E7ZrcO8A17SamABASUwAIiSkAhMQUAEJiCgAhMQWAkJgCQEhMASAkpgAQElMACIkpQAs1vV71+7287DwxBWih2/O/r4cL/+pz7dTuA4cHuqftxBSghZreRt9bZ7xb3z35wQDXtJ+YAkBITAEgJKYAEBJTAAiJKQCExBQAQmIKACExBYCQmAJASEwBICSmABASU4CWWV2+Ww/+/VXf+/4jx6vT8eN/J/luArTMg7v/rOWvvuh7P/zOmeqM+fG/k3w3ASAkpgAQElMACIkpAITEFABCYgoAITEFgJCYAkBITAEgJKYAEBJTAAiJKUCLbKyv1p0v/tL3vv/I8ZrY89oAF40GMQVokWZjve7941rf+/S33q5dU/sHuGg0iCkAhMQUAEJiCgAhMQWAkJgCQEhMASAkpgAQElMACIkpAITEFABCYgoAITEFaJGm6W1573Q6A1oyWsQUoEW+/ONHfW973/xRvXH8/QGuGR1iCtAivUfr/Y+dsRob7w5uzAgRUwAIiSkAhMQUAEJiCgAhMQWAkJgCQEhMASAkpgAQElMACIkpAITEFKAlHj1cqaa30fe+a3LfANeMFjEFaInbV/9Qa8t3Nr2NdV+po6d/PuBFo0NMASAkpgAQElMACIkpAITEFABCYgoAITEFgFB32AMAeOzixYt1/vz5bX/9L977Qb3/zpub3u7f/0/Nzs5Wr/n/n/fChQt1+vTpbe8aBWIK8IJYWFio69evb/vrF989VFWbx7TX
a+r6jRvV20ZNV1ZWtr1pVIgpQIvcWJ6rtd7uJ//e212s701/PsRFo0FMAVriy+UTNbl8ono1/uSx8c5aNVV1aPza8IaNAB9AAmiBg/sm68CBN74R0qqqjWairt07VUvrrw1p2WgQU4AWeOvobM388Md9rp363Z8/q2Y7nz7ifyKmAC0wNb5S0917fe9/+uvNktLnR0wBWmDX2FpNjD3c9DY9vljdztqAF40WMQVoicO7b9XE2INvPDY9vlhzr35S092lIa0aDT7NC9AC85/frl9++HEtrV+qjaZbH/x0tt6b+351x9Zqsu5Vr/Em7/MkpgAtsHR/tT679XVVfV1VVR9d+nt9/JvOk/ujjd6Qlo2GLWN69uzZQe0AGHlXrlzZsefqNU31Nnbm1eilS5dqfn5+R57rZXbu3Lm+ty1jeubMmR0fA8DmFhcX6/Lly8Oe8ZS5ubk6efLksGe80LaM6alTpwa1A2DkXb16ddgTNjUzM6MHz+DTvAAQElMACIkpAITEFABCYgoAITEFgJCYAkBITAEg5HfzArwgDh48WMeOHRv2jKfs2bNn2BNeeJ1m678k4M8MAMBjnX4Hb/MCQEhMASAkpgAQElMACIkpAITEFABCYgoAITEFgJCYAkBITAEgJKYAEBJTAAiJKQCExBQAQmIKACExBYCQmAJASEwBICSmABASUwAIiSkAhMQUAEJiCgAhMQWAkJgCQEhMASAkpgAQElMACIkpAITEFABCYgoAITEFgJCYAkBITAEgJKYAEBJTAAiJKQCExBQAQmIKACExBYCQmAJASEwBICSmABASUwAIiSkAhMQUAEJiCgAhMQWAkJgCQEhMASAkpgAQElMACIkpAITEFABCYgoAITEFgJCYAkBITAEgJKYAEBJTAAiJKQCExBQAQmIKAKHuM+6dgawAgJeYV6YAEBJTAAiJKQCExBQAQmIKACExBYDQfwE2fN+nWiwmswAAAABJRU5ErkJggg==\n"
     },
     "metadata": {
      "needs_background": "light"
     }
    }
   ],
   "source": [
    "# 主函数\n",
    "cartpole_env = Environment()\n",
    "cartpole_env.run()\n"
   ]
  }
 ]
}