{
 "metadata": {
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.8-final"
  },
  "orig_nbformat": 2,
  "kernelspec": {
   "name": "python_defaultSpec_1599636406003",
   "display_name": "Python 3.7.8 64-bit ('venv': venv)"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2,
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "DQN 有两种，2013 版和 2015 Nature 版\n",
    "第五章的小批量学习实现对应 2013 版\n",
    "\n",
    "2015版 采用 `目标 Q 网络` 以训练 `主 Q 网络`\n",
    "\n",
    "经过一定的训练步数， 目标 Q 网络 的旧参数 被 主 Q 网络 的新参数替换\n",
    "\n",
    "更新公式如下：\n",
    "$$\n",
    "    Q_m(s_t, a_t)=Q_m(s_t,a_t) + \\eta*(R_{t+1}+\\gamma\\max_a{Q_t(s_{t+1}, a)}-Q_m(s_t,a_t))\n",
    "$$\n",
    "这里，$Q_m$ 表示 主网络，$Q_t$ 表示目标网络\n",
    "\n",
    "DDQN 中使用：\n",
    "$$\n",
    "    a_m = \\arg \\max_a Q_m(s_{t+1},a)    \\\\\n",
    "    Q_m(s_t, a_t)=Q_m(s_t,a_t) + \\eta*(R_{t+1}+\\gamma Q_t(s_{t+1}, a_m)-Q_m(s_t,a_t))\n",
    "$$\n",
    "从主网络中获得在下一状态具有最高Q值的动作 $a_m$，并从目标网络中获得其 Q 值\n",
    "更加稳定\n"
   ]
  },
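  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The difference between the two targets can be seen in a toy numerical sketch (the Q-values below are made up for illustration):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Hypothetical Q-values for one next state with 2 actions.\n",
    "q_main_next = np.array([1.0, 3.0])    # main network estimates\n",
    "q_target_next = np.array([2.5, 2.0])  # target network estimates\n",
    "reward, gamma = 1.0, 0.9\n",
    "\n",
    "# DQN: the target network both selects and evaluates the action.\n",
    "dqn_target = reward + gamma * q_target_next.max()      # 1 + 0.9*2.5 = 3.25\n",
    "\n",
    "# DDQN: the main network selects, the target network evaluates.\n",
    "a_m = q_main_next.argmax()                             # action 1\n",
    "ddqn_target = reward + gamma * q_target_next[a_m]      # 1 + 0.9*2.0 = 2.8\n",
    "```\n",
    "\n",
    "When the two networks disagree about the best action, DDQN's target is no larger than DQN's, which is what damps the overestimation bias.\n"
   ]
  },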
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1. 包\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "import gym\n",
    "\n",
    "####################################\n",
    "'''\n",
    "# 2. 动画\n",
    "from JSAnimation.IPython_display import display_animation\n",
    "from matplotlib import animation\n",
    "from IPython.display import HTML, display\n",
    "\n",
    "def display_frames_as_gif(frames):\n",
    "    \"\"\"\n",
    "    Displays a list of frames as a gif, with controls\n",
    "    以gif格式显示关键帧列，具有控制\n",
    "    \"\"\"\n",
    "    \n",
    "    plt.figure(figsize=(frames[0].shape[1]/72.0, frames[0].shape[0]/72.0),dpi=72)\n",
    "    patch = plt.imshow(frames[0])\n",
    "    plt.axis('off')\n",
    "    \n",
    "    def animate(i):\n",
    "        img = patch.set_data(frames[i])\n",
    "        return img   ## *** return是必须要有的 ***\n",
    "        \n",
    "    anim = animation.FuncAnimation(plt.gcf(), animate, frames=len(frames), interval=50)\n",
    "    \n",
    "    anim.save('media/movie_cartpole_DQN.mp4')\n",
    "    ## display(display_animation(anim, default_mode='loop'))  ## *** delete ***\n",
    "    return HTML(anim.to_jshtml())  ## *** 返回一个HTML对象，以便被调用者显示。 ***\n",
    "\n",
    "'''\n",
    "################################################\n",
    "# 3. 生成 namedtuple\n",
    "from collections import namedtuple\n",
    "\n",
    "Transition = namedtuple(\n",
    "    'Transition', ('state', 'action', 'next_state', 'reward')\n",
    ")\n",
    "\n",
    "\n",
    "#####################################\n",
    "# 4. 常量\n",
    "ENV = 'CartPole-v0'\n",
    "GAMMA = 0.9\n",
    "MAX_STEPS = 200\n",
    "NUM_EPISODES = 500\n",
    "\n",
    "\n",
    "######################################\n",
    "# 5. ReplayMemory 存储经验数据\n",
    "'''\n",
    "为了实现小批量学习，定义内存类 ReplayMemory 来存储经验数据\n",
    "\n",
    "push 函数，用于保存该步骤中的 transition 作为经验\n",
    "sample 函数，随机选择 transition\n",
    "len 函数，返回当前存储的 transition 数\n",
    "\n",
    "如果存储的 transition 数大于常量 CAPACITY，则将索引返回到前面并覆盖旧内容\n",
    "'''\n",
    "class ReplayMemory:\n",
    "\n",
    "    def __init__(self, CAPACITY):\n",
    "        self.capacity = CAPACITY    # 下面 memory 的最大长度\n",
    "        self.memory = []    # 存储过往经验\n",
    "        self.index = 0  # 表示要保存的索引\n",
    "\n",
    "    def push(self, state, action, state_next, reward):\n",
    "        '''将 transition = (state, action, state_next, reward) 保存在存储器中'''\n",
    "\n",
    "        if len(self.memory) < self.capacity:\n",
    "            self.memory.append(None)  # 内存未满时添加\n",
    "\n",
    "        # 使用具名元组对象 Transition 将值和字段名称保存为一对\n",
    "        self.memory[self.index] = Transition(state, action, state_next, reward)\n",
    "\n",
    "        self.index = (self.index + 1) % self.capacity  # 索引加一\n",
    "\n",
    "    def sample(self, batch_size):\n",
    "        '''随机检索 Batch_size 大小的样本并返回'''\n",
    "        return random.sample(self.memory, batch_size)\n",
    "\n",
    "    def __len__(self):\n",
    "        '''返回当前 memory 长度'''\n",
    "        return len(self.memory)"
   ]
  },
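  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sampled list of `Transition` tuples is later transposed into a single batch-of-columns `Transition` via `Transition(*zip(*transitions))`. A minimal sketch with made-up values:\n",
    "\n",
    "```python\n",
    "from collections import namedtuple\n",
    "\n",
    "Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward'))\n",
    "\n",
    "# Two toy transitions standing in for a sampled mini-batch.\n",
    "transitions = [Transition(1, 0, 2, 0.0), Transition(2, 1, None, 1.0)]\n",
    "\n",
    "# zip(*transitions) transposes rows into columns; Transition(*...) re-wraps them,\n",
    "# so each field becomes a tuple holding the whole batch.\n",
    "batch = Transition(*zip(*transitions))\n",
    "# batch.state == (1, 2); batch.next_state == (2, None)\n",
    "```\n"
   ]
  },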
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ">第五章为了优先理解实现流程，编写了代码较长的函数replay而未进行拆分。\n",
    "首先，重构第5章中 DQN 的程序以缩短 Brain 类的 replay 函数，将三个部分分别进行函数话\n",
    "+ make_minibatch 创建小批量数据\n",
    "+ get_expected_state_action_values 获取监督信息 $Q(s_t,a_t)$\n",
    "+ update_main_q_network 更新连接参数\n",
    "\n",
    "```python\n",
    "    def replay(self):\n",
    "        ''' 经验回放学习网络的连接参数 '''\n",
    "\n",
    "        # 1. 检查内存大小\n",
    "        if len(self.memory) < BATCH_SIZE:\n",
    "            return\n",
    "\n",
    "        # 2. 创建小批量数据\n",
    "        self.batch, self.state_batch, self.action_batch, self.reward_batch, self.non_final_next_states = self.make_minibatch()\n",
    "\n",
    "        # 3. 获取 Q(s_t,a_t) 作为监督信息\n",
    "        self.expected_state_action_values = self.get_expected_state_action_values()\n",
    "\n",
    "        # 4. 更新连接参数\n",
    "        self.update_main_q_network()\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 建立深度学习网络\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "\n",
    "class Net(nn.Module):\n",
    "\n",
    "    def __init__(self, n_in, n_mid, n_out):\n",
    "        super(Net, self).__init__()\n",
    "        self.fc1 = nn.Linear(n_in, n_mid)\n",
    "        self.fc2 = nn.Linear(n_mid, n_mid)\n",
    "        self.fc3 = nn.Linear(n_mid, n_out)\n",
    "\n",
    "    def forward(self, x):\n",
    "        h1 = F.relu(self.fc1(x))\n",
    "        h2 = F.relu(self.fc2(h1))\n",
    "        output = self.fc3(h2)\n",
    "        return output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 执行 DQN 的 Brain 类\n",
    "# 将 Q 函数定义为深度学习网络（而非一个表格）\n",
    "\n",
    "# 包\n",
    "\n",
    "import random\n",
    "import torch\n",
    "from torch import nn\n",
    "from torch import optim\n",
    "import torch.nn.functional as F\n",
    "\n",
    "# 常量\n",
    "BATCH_SIZE = 32\n",
    "CAPACITY = 10000\n",
    "\n",
    "class Brain:\n",
    "    def __init__(self, num_states, num_actions):\n",
    "        self.num_actions = num_actions  # 取得 CartPole 的动作 数 2\n",
    "\n",
    "        # 创建存储经验的对象\n",
    "        self.memory = ReplayMemory(CAPACITY)\n",
    "\n",
    "        # 构建一个神经网络\n",
    "        n_in, n_mid, n_out = num_states, 32, num_actions\n",
    "        self.main_q_network = Net(n_in, n_mid, n_out)  # 使用 Net 类构建 主 Q 网络\n",
    "        self.target_q_network = Net(n_in, n_mid, n_out)  # 目标 Q 网络\n",
    "        print(self.main_q_network)  # 主网络形状\n",
    "\n",
    "        # 最优化方法的设定\n",
    "        self.optimizer = optim.Adam(self.main_q_network.parameters(), lr=0.0001)\n",
    "    \n",
    "    def replay(self):\n",
    "        ''' 经验回放学习网络的连接参数 '''\n",
    "\n",
    "        # 1. 检查内存大小\n",
    "        if len(self.memory) < BATCH_SIZE:\n",
    "            return\n",
    "\n",
    "        # 2. 创建小批量数据\n",
    "        self.batch, self.state_batch, self.action_batch, self.reward_batch, self.non_final_next_states = self.make_minibatch()\n",
    "\n",
    "        # 3. 获取 Q(s_t,a_t) 作为监督信息\n",
    "        self.expected_state_action_values = self.get_expected_state_action_values()\n",
    "\n",
    "        # 4. 更新连接参数\n",
    "        self.update_main_q_network()\n",
    "        \n",
    "    def decide_action(self, state, episode):\n",
    "        '''根据当前状态确定动作'''\n",
    "        # ε-贪婪法 逐步采用最佳动作\n",
    "        epsilon = 0.5 * (1 / (episode + 1))\n",
    "\n",
    "        if epsilon <= np.random.uniform(0, 1):\n",
    "            self.main_q_network.eval()  # 主网络切换到推理模式\n",
    "            with torch.no_grad():\n",
    "                action = self.main_q_network(state).max(1)[1].view(1, 1)\n",
    "            # 获得网络属猪最大值的索引 index = max(1)[1]\n",
    "            # .view(1,1) 将 [torch.LongTensor of size 1]　转换为 size 1x1\n",
    "        else:\n",
    "            # 随机返回 0,1\n",
    "            action = torch.LongTensor(\n",
    "                [[random.randrange(self.num_actions)]])  # 随机返回\n",
    "            # action 的形式为 [torch.LongTensor of size 1x1]\n",
    "\n",
    "        return action\n",
    "\n",
    "    def make_minibatch(self):\n",
    "        ''' 2. 创建小批量数据 '''\n",
    "\n",
    "        # 2.1 从经验池中取出小批量数据\n",
    "        transitions = self.memory.sample(BATCH_SIZE)\n",
    "\n",
    "        # 2.2 将每个变量转换为对应格式\n",
    "        # 得到的 transitions 存储了一个 BATCH_SIZE 大小的 (state, action, state_next, reward)\n",
    "        # 即：BATCH_SIZE * (state, action, state_next, reward)\n",
    "        # 想把它变成小批量数据，换句话说：\n",
    "        # 转为 (state*BATCH_SIZE, action*BATCH_SIZE, state_next*BATCH_SIZE, reward*BATCH_SIZE)\n",
    "        batch = Transition(*zip(*transitions))\n",
    "\n",
    "        # 2.3 将每个变量元素转化为对应于小批量的形式，使其成为网络处理的变量。\n",
    "        # 例如，在 state 状态下，\n",
    "        # BATCH_SIZE 个 [torch.FloatTensor of size 1*4]\n",
    "        # 转换为 [torch.FloatTensor of size BATCH_SIZE*4]\n",
    "        # cat 是指 Concatenates（连接）\n",
    "        state_batch = torch.cat(batch.state)\n",
    "        action_batch = torch.cat(batch.action)\n",
    "        reward_batch = torch.cat(batch.reward)\n",
    "        non_final_next_states = torch.cat([s for s in batch.next_state\n",
    "                                           if s is not None])\n",
    "\n",
    "        return batch, state_batch, action_batch, reward_batch, non_final_next_states\n",
    "\n",
    "    def get_expected_state_action_values(self):\n",
    "        ''' 3. 找到 Q(s_t,a_t) 值作为监督信息 '''\n",
    "\n",
    "        # 3.1 两个网络切换为推理模式\n",
    "        self.main_q_network.eval()\n",
    "        self.target_q_network.eval()\n",
    "\n",
    "        # 3.2 求取网络输出的 Q(s_t, a_t)\n",
    "        # self.model(state_batch)输出 向左 或者 向右 的 Q 值\n",
    "        # [torch.FloatTensor of size BATCH_SIZEx2]\n",
    "        # 为了求得于此处执行的动作 a_t 对应的 Q 值，\n",
    "        # 求取由 action_batch 执行的动作 a_t 是向右还是向左的 索引\n",
    "        # 用 gather 获得相应的 Q 值。\n",
    "        self.state_action_values = self.main_q_network(\n",
    "            self.state_batch).gather(1, self.action_batch)\n",
    "\n",
    "        # 3.3 求max{Q(s_t+1, a)}。\n",
    "        # 需要注意下一个状态s_t+1，不存在下一个状态时为 0\n",
    "\n",
    "         # 创建索引掩码以检查 cartple 是否未完场且具有 next_state\n",
    "        non_final_mask = torch.ByteTensor(tuple(map(lambda s: s is not None,\n",
    "                                                    self.batch.next_state)))\n",
    "        # 首先全部设置为 0\n",
    "        next_state_values = torch.zeros(BATCH_SIZE)\n",
    "        \n",
    "        # $a_m$\n",
    "        a_m = torch.zeros(BATCH_SIZE).type(torch.LongTensor)\n",
    "\n",
    "        # 从主网络中求取下一个状态中最大Q值的动作 a_m\n",
    "        # 最后的[1] 返回与该动作对应的索引 index\n",
    "        a_m[non_final_mask] = self.main_q_network(\n",
    "            self.non_final_next_states).detach().max(1)[1]\n",
    "\n",
    "        # 仅过滤具有下一个状态的，并将 size 32 转换为 size 32*1\n",
    "        a_m_non_final_next_states = a_m[non_final_mask].view(-1, 1)\n",
    "\n",
    "        # 从目标 Q 网络中找到具有下一状态的 index 的动作 a_m 的 Q 值\n",
    "        # 用 detach() 取出\n",
    "        # 用 squeeze()将size[minibatch×1]压缩为[minibatch]\n",
    "        next_state_values[non_final_mask] = self.target_q_network(\n",
    "            self.non_final_next_states).gather(1, a_m_non_final_next_states).detach().squeeze()\n",
    "\n",
    "        # 3.4 根据Q学习公式，求出 Q(s_t, a_t)作为监督信息\n",
    "        expected_state_action_values = self.reward_batch + GAMMA * next_state_values\n",
    "\n",
    "        return expected_state_action_values\n",
    "\n",
    "    def update_main_q_network(self):\n",
    "        ''' 4. 更新连接参数 '''\n",
    "\n",
    "        # 4.1 主网络训练模式\n",
    "        self.main_q_network.train()\n",
    "\n",
    "        # 4.2 计算损失函数（smooth_l1_loss 是 Huberloss）\n",
    "        # expected_state_action_values的size为[minbatch]，所以将其解压（unsquee）为[minibatch x 1] 。\n",
    "        loss = F.smooth_l1_loss(self.state_action_values,\n",
    "                                self.expected_state_action_values.unsqueeze(1))\n",
    "\n",
    "        # 4.3 更新连接参数\n",
    "        self.optimizer.zero_grad()  # 重置梯度\n",
    "        loss.backward()  # 计算反向传播\n",
    "        self.optimizer.step()  # 更新连接参数\n",
    "\n",
    "    def update_target_q_function(self):  # 添加 DDQN\n",
    "        ''' 让目标网络与主网络相同 '''\n",
    "        self.target_q_network.load_state_dict(self.main_q_network.state_dict())\n"
   ]
  },
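  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Steps 3.3-3.4 above can be sketched in NumPy with made-up Q-values (a batch of 4 where the 3rd transition ended the episode):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rewards = np.array([0.0, 0.0, -1.0, 0.0])\n",
    "non_final_mask = np.array([True, True, False, True])\n",
    "gamma = 0.9\n",
    "\n",
    "# Fake network outputs for the 3 non-final next states (2 actions each).\n",
    "q_main_next = np.array([[1.0, 2.0], [3.0, 0.5], [0.2, 0.4]])\n",
    "q_target_next = np.array([[1.5, 1.8], [2.5, 1.0], [0.3, 0.2]])\n",
    "\n",
    "# a_m: action chosen by the main network (argmax over the action axis).\n",
    "a_m = q_main_next.argmax(axis=1).reshape(-1, 1)  # shape (3, 1)\n",
    "\n",
    "# Evaluate the chosen actions with the target network (the NumPy analogue of gather).\n",
    "next_state_values = np.zeros(4)\n",
    "next_state_values[non_final_mask] = np.take_along_axis(q_target_next, a_m, axis=1).squeeze(1)\n",
    "\n",
    "targets = rewards + gamma * next_state_values  # [1.62, 2.25, -1.0, 0.18]\n",
    "```\n",
    "\n",
    "The ended transition keeps `next_state_values == 0`, so its target is just the reward.\n"
   ]
  },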
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "随着 Brain 类的改变，需要对 Agent 类进行微调\n",
    "+ 重构 update_target_q_function，在其中执行 Brain 类的函数 update_target_q_function\n",
    "+ 在 Environment 类的试验（episode）结束时，执行 Agent 类的函数 update_target_q_function\n",
    "+ 在这里，每两轮试验执行一次，将主网络复制到目标网络"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# CartPole 上运行的智能体(agent)类，带有杆的小车\n",
    "\n",
    "\n",
    "class Agent:\n",
    "    def __init__(self, num_states, num_actions):\n",
    "        '''设置任务状态和动作的数量'''\n",
    "        self.brain = Brain(num_states, num_actions)  # 为智能体生成大脑来确定动作\n",
    "    \n",
    "    def update_q_function(self):\n",
    "        '''更新 Q 函数'''\n",
    "        self.brain.replay()\n",
    "\n",
    "    def get_action(self, state, episode):\n",
    "        '''决定动作'''\n",
    "        action = self.brain.decide_action(state, episode)\n",
    "        return action\n",
    "\n",
    "    def memorize(self, state, action, state_next, reward):\n",
    "        '''将 state, action, state_next, reward 的内容保存在 memory 经验池中'''\n",
    "        self.brain.memory.push(state, action, state_next, reward)\n",
    "    \n",
    "    def update_target_q_function(self):\n",
    "        ''' 将目标网络更新到与主网络相同 '''\n",
    "        self.brain.update_target_q_function()\n",
    "\n",
    "\n",
    "# 执行 CartPole 的环境类\n",
    "\n",
    "\n",
    "class Environment:\n",
    "\n",
    "    def __init__(self):\n",
    "        self.env = gym.make(ENV)  # 设定任务\n",
    "        num_states = self.env.observation_space.shape[0]  # 获得任务的状态变量数 4\n",
    "        num_actions = self.env.action_space.n  # CartPole的动作数 2\n",
    "        self.agent = Agent(num_states, num_actions)  # 创建 Agent 在环境中执行动作\n",
    "\n",
    "        \n",
    "    def run(self):\n",
    "        '''执行'''\n",
    "        episode_10_list = np.zeros(10)  # 存储 10 次试验的连续站立步数，用于输出平均步数\n",
    "        complete_episodes = 0  # 连续 195 步以上统计\n",
    "        episode_final = False  # 最终尝试标记\n",
    "        frames = []  # 存储图像的变量\n",
    "\n",
    "        for episode in range(NUM_EPISODES):  # 重复试验次数\n",
    "            observation = self.env.reset()  # 环境初始化\n",
    "\n",
    "            state = observation  # 直接将观测值作为状态值使用\n",
    "            state = torch.from_numpy(state).type(\n",
    "                torch.FloatTensor)  # NumPy 变量转换为 PyTorch Tensor\n",
    "            state = torch.unsqueeze(state, 0)  # FloatTensor of size 4 转换为 size 1x4\n",
    "\n",
    "            for step in range(MAX_STEPS):  # 每 1 轮循环（1 episode）\n",
    "                # 不再绘制动画\n",
    "                action = self.agent.get_action(state, episode)  # 求取动作\n",
    "\n",
    "                # 通过执行动作 a_t 求 s_{t+1} 和 done 标志\n",
    "                # 从 action 中指定 .item() 并获取内容\n",
    "                observation_next, _, done, _ = self.env.step(action.item())  # reward 和 info不适用，所以用 _\n",
    "\n",
    "                # 给与奖励。对 episode是否结束以及是否有下一个状态进行判断\n",
    "                if done:  # 如果 step 不超过 200，或陪着如果倾斜超过某个角度\n",
    "                    state_next = None  # 没有下一个状态，存储 None\n",
    "\n",
    "                    # 添加到最近的 10 episode 的步数列表中\n",
    "                    episode_10_list = np.hstack(\n",
    "                        (episode_10_list[1:], step + 1))\n",
    "\n",
    "                    if step < 195:\n",
    "                        reward = torch.FloatTensor(\n",
    "                            [-1.0])  # 半途倒下，奖励 -1\n",
    "                        complete_episodes = 0  # 重置连续成功记录\n",
    "                    else:\n",
    "                        reward = torch.FloatTensor([1.0])  # 一直站立直到结束时奖励 1\n",
    "                        complete_episodes = complete_episodes + 1  # 更新连续记录\n",
    "                else:\n",
    "                    reward = torch.FloatTensor([0.0])  # 普通奖励 0\n",
    "                    state_next = observation_next  # 将状态设置为观察值\n",
    "                    state_next = torch.from_numpy(state_next).type(\n",
    "                        torch.FloatTensor)  # numpy 变量 --> PyTorch Tensor 变量\n",
    "                    state_next = torch.unsqueeze(state_next, 0)  # FloatTensor of size 4 扩展为 size 1x4\n",
    "\n",
    "                # 向经验池中添加经验\n",
    "                self.agent.memorize(state, action, state_next, reward)\n",
    "\n",
    "                # 经验回放 Experience Replay，更新 Q 函数\n",
    "                self.agent.update_q_function()\n",
    "\n",
    "                # 更新观测值\n",
    "                state = state_next\n",
    "\n",
    "                # 结束处理\n",
    "                if done:\n",
    "                    print('%d Episode: Finished after %d steps：10 次试验平均 step 数 = %.1lf' % (\n",
    "                        episode, step + 1, episode_10_list.mean()))\n",
    "\n",
    "                    # 使用 DDQN 添加，每 2 轮试验复制一次主网络到目标网络\n",
    "                    if(episode % 2 == 0):\n",
    "                        self.agent.update_target_q_function()\n",
    "                    break\n",
    "\n",
    "            if episode_final is True:\n",
    "                # 不再绘制动画\n",
    "                break\n",
    "\n",
    "            # 连续 10 轮成功\n",
    "            if complete_episodes >= 10:\n",
    "                print('10 轮连续成功')\n",
    "                episode_final = True  # 标记下一次为最终试验"
   ]
  },
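  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the epsilon-greedy schedule used by `decide_action` decays as `0.5 / (episode + 1)`, so exploration drops off quickly over the first few episodes:\n",
    "\n",
    "```python\n",
    "# Exploration probability for the first 5 episodes.\n",
    "eps = [0.5 * (1 / (episode + 1)) for episode in range(5)]\n",
    "# episodes 0..4 -> [0.5, 0.25, 0.1667, 0.125, 0.1]\n",
    "```\n"
   ]
  },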
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": "Net(\n  (fc1): Linear(in_features=4, out_features=32, bias=True)\n  (fc2): Linear(in_features=32, out_features=32, bias=True)\n  (fc3): Linear(in_features=32, out_features=2, bias=True)\n)\n0 Episode: Finished after 17 steps：10 次试验平均 step 数 = 1.7\n1 Episode: Finished after 13 steps：10 次试验平均 step 数 = 3.0\n2 Episode: Finished after 15 steps：10 次试验平均 step 数 = 4.5\n3 Episode: Finished after 11 steps：10 次试验平均 step 数 = 5.6\n4 Episode: Finished after 8 steps：10 次试验平均 step 数 = 6.4\n5 Episode: Finished after 10 steps：10 次试验平均 step 数 = 7.4\n6 Episode: Finished after 9 steps：10 次试验平均 step 数 = 8.3\n7 Episode: Finished after 10 steps：10 次试验平均 step 数 = 9.3\n8 Episode: Finished after 9 steps：10 次试验平均 step 数 = 10.2\n9 Episode: Finished after 10 steps：10 次试验平均 step 数 = 11.2\n10 Episode: Finished after 9 steps：10 次试验平均 step 数 = 10.4\n11 Episode: Finished after 8 steps：10 次试验平均 step 数 = 9.9\n12 Episode: Finished after 9 steps：10 次试验平均 step 数 = 9.3\n13 Episode: Finished after 9 steps：10 次试验平均 step 数 = 9.1\n14 Episode: Finished after 13 steps：10 次试验平均 step 数 = 9.6\n15 Episode: Finished after 16 steps：10 次试验平均 step 数 = 10.2\n16 Episode: Finished after 13 steps：10 次试验平均 step 数 = 10.6\n17 Episode: Finished after 25 steps：10 次试验平均 step 数 = 12.1\n18 Episode: Finished after 24 steps：10 次试验平均 step 数 = 13.6\n19 Episode: Finished after 20 steps：10 次试验平均 step 数 = 14.6\n20 Episode: Finished after 13 steps：10 次试验平均 step 数 = 15.0\n21 Episode: Finished after 12 steps：10 次试验平均 step 数 = 15.4\n22 Episode: Finished after 14 steps：10 次试验平均 step 数 = 15.9\n23 Episode: Finished after 13 steps：10 次试验平均 step 数 = 16.3\n24 Episode: Finished after 20 steps：10 次试验平均 step 数 = 17.0\n25 Episode: Finished after 17 steps：10 次试验平均 step 数 = 17.1\n26 Episode: Finished after 12 steps：10 次试验平均 step 数 = 17.0\n27 Episode: Finished after 9 steps：10 次试验平均 step 数 = 15.4\n28 Episode: Finished after 9 steps：10 次试验平均 step 数 = 13.9\n29 Episode: Finished after 18 steps：10 次试验平均 step 数 = 13.7\n30 Episode: Finished after 
45 steps：10 次试验平均 step 数 = 16.9\n31 Episode: Finished after 13 steps：10 次试验平均 step 数 = 17.0\n32 Episode: Finished after 39 steps：10 次试验平均 step 数 = 19.5\n33 Episode: Finished after 15 steps：10 次试验平均 step 数 = 19.7\n34 Episode: Finished after 32 steps：10 次试验平均 step 数 = 20.9\n35 Episode: Finished after 37 steps：10 次试验平均 step 数 = 22.9\n36 Episode: Finished after 29 steps：10 次试验平均 step 数 = 24.6\n37 Episode: Finished after 33 steps：10 次试验平均 step 数 = 27.0\n38 Episode: Finished after 21 steps：10 次试验平均 step 数 = 28.2\n39 Episode: Finished after 22 steps：10 次试验平均 step 数 = 28.6\n40 Episode: Finished after 22 steps：10 次试验平均 step 数 = 26.3\n41 Episode: Finished after 20 steps：10 次试验平均 step 数 = 27.0\n42 Episode: Finished after 18 steps：10 次试验平均 step 数 = 24.9\n43 Episode: Finished after 23 steps：10 次试验平均 step 数 = 25.7\n44 Episode: Finished after 20 steps：10 次试验平均 step 数 = 24.5\n45 Episode: Finished after 17 steps：10 次试验平均 step 数 = 22.5\n46 Episode: Finished after 19 steps：10 次试验平均 step 数 = 21.5\n47 Episode: Finished after 25 steps：10 次试验平均 step 数 = 20.7\n48 Episode: Finished after 20 steps：10 次试验平均 step 数 = 20.6\n49 Episode: Finished after 28 steps：10 次试验平均 step 数 = 21.2\n50 Episode: Finished after 17 steps：10 次试验平均 step 数 = 20.7\n51 Episode: Finished after 25 steps：10 次试验平均 step 数 = 21.2\n52 Episode: Finished after 23 steps：10 次试验平均 step 数 = 21.7\n53 Episode: Finished after 18 steps：10 次试验平均 step 数 = 21.2\n54 Episode: Finished after 16 steps：10 次试验平均 step 数 = 20.8\n55 Episode: Finished after 25 steps：10 次试验平均 step 数 = 21.6\n56 Episode: Finished after 16 steps：10 次试验平均 step 数 = 21.3\n57 Episode: Finished after 24 steps：10 次试验平均 step 数 = 21.2\n58 Episode: Finished after 24 steps：10 次试验平均 step 数 = 21.6\n59 Episode: Finished after 22 steps：10 次试验平均 step 数 = 21.0\n60 Episode: Finished after 23 steps：10 次试验平均 step 数 = 21.6\n61 Episode: Finished after 37 steps：10 次试验平均 step 数 = 22.8\n62 Episode: Finished after 38 steps：10 次试验平均 step 数 = 24.3\n63 Episode: Finished after 28 steps：10 次试验平均 
step 数 = 25.3\n64 Episode: Finished after 30 steps：10 次试验平均 step 数 = 26.7\n65 Episode: Finished after 37 steps：10 次试验平均 step 数 = 27.9\n66 Episode: Finished after 51 steps：10 次试验平均 step 数 = 31.4\n67 Episode: Finished after 42 steps：10 次试验平均 step 数 = 33.2\n68 Episode: Finished after 71 steps：10 次试验平均 step 数 = 37.9\n69 Episode: Finished after 43 steps：10 次试验平均 step 数 = 40.0\n70 Episode: Finished after 23 steps：10 次试验平均 step 数 = 40.0\n71 Episode: Finished after 26 steps：10 次试验平均 step 数 = 38.9\n72 Episode: Finished after 102 steps：10 次试验平均 step 数 = 45.3\n73 Episode: Finished after 27 steps：10 次试验平均 step 数 = 45.2\n74 Episode: Finished after 39 steps：10 次试验平均 step 数 = 46.1\n75 Episode: Finished after 41 steps：10 次试验平均 step 数 = 46.5\n76 Episode: Finished after 67 steps：10 次试验平均 step 数 = 48.1\n77 Episode: Finished after 88 steps：10 次试验平均 step 数 = 52.7\n78 Episode: Finished after 71 steps：10 次试验平均 step 数 = 52.7\n79 Episode: Finished after 64 steps：10 次试验平均 step 数 = 54.8\n80 Episode: Finished after 30 steps：10 次试验平均 step 数 = 55.5\n81 Episode: Finished after 28 steps：10 次试验平均 step 数 = 55.7\n82 Episode: Finished after 73 steps：10 次试验平均 step 数 = 52.8\n83 Episode: Finished after 45 steps：10 次试验平均 step 数 = 54.6\n84 Episode: Finished after 50 steps：10 次试验平均 step 数 = 55.7\n85 Episode: Finished after 111 steps：10 次试验平均 step 数 = 62.7\n86 Episode: Finished after 68 steps：10 次试验平均 step 数 = 62.8\n87 Episode: Finished after 200 steps：10 次试验平均 step 数 = 74.0\n88 Episode: Finished after 59 steps：10 次试验平均 step 数 = 72.8\n89 Episode: Finished after 200 steps：10 次试验平均 step 数 = 86.4\n90 Episode: Finished after 200 steps：10 次试验平均 step 数 = 103.4\n91 Episode: Finished after 63 steps：10 次试验平均 step 数 = 106.9\n92 Episode: Finished after 193 steps：10 次试验平均 step 数 = 118.9\n93 Episode: Finished after 74 steps：10 次试验平均 step 数 = 121.8\n94 Episode: Finished after 194 steps：10 次试验平均 step 数 = 136.2\n95 Episode: Finished after 200 steps：10 次试验平均 step 数 = 145.1\n96 Episode: Finished after 91 steps：10 次试验平均 step 
数 = 147.4\n97 Episode: Finished after 180 steps：10 次试验平均 step 数 = 145.4\n98 Episode: Finished after 99 steps：10 次试验平均 step 数 = 149.4\n99 Episode: Finished after 200 steps：10 次试验平均 step 数 = 149.4\n100 Episode: Finished after 158 steps：10 次试验平均 step 数 = 145.2\n101 Episode: Finished after 200 steps：10 次试验平均 step 数 = 158.9\n102 Episode: Finished after 200 steps：10 次试验平均 step 数 = 159.6\n103 Episode: Finished after 200 steps：10 次试验平均 step 数 = 172.2\n104 Episode: Finished after 200 steps：10 次试验平均 step 数 = 172.8\n105 Episode: Finished after 200 steps：10 次试验平均 step 数 = 172.8\n106 Episode: Finished after 200 steps：10 次试验平均 step 数 = 183.7\n107 Episode: Finished after 200 steps：10 次试验平均 step 数 = 185.7\n108 Episode: Finished after 200 steps：10 次试验平均 step 数 = 195.8\n109 Episode: Finished after 200 steps：10 次试验平均 step 数 = 195.8\n110 Episode: Finished after 200 steps：10 次试验平均 step 数 = 200.0\n10 轮连续成功\n111 Episode: Finished after 200 steps：10 次试验平均 step 数 = 200.0\n"
    }
   ],
   "source": [
    "cartpole_env = Environment()\n",
    "cartpole_env.run()"
   ]
  }
 ]
}