{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Wrappers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Very often, you will want to extend an environment's functionality in some\n",
    "generic way. For example, imagine an environment that gives you some\n",
    "observations, but you want to accumulate them in a buffer so you can\n",
    "provide the agent with the last N observations."
   ]
  },
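  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of that buffering idea (plain Python, independent of Gym; the class name `ObservationBuffer` is made up for illustration), a `collections.deque` with `maxlen=n` keeps exactly the last n items. In a real wrapper you would call this logic from an `observation()` override:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import collections\n",
    "\n",
    "class ObservationBuffer:\n",
    "    \"\"\"Hypothetical helper: keep the last n observations.\n",
    "    On the first observation, the buffer is padded with copies of it,\n",
    "    so the agent always sees exactly n observations.\"\"\"\n",
    "    def __init__(self, n):\n",
    "        self.buffer = collections.deque(maxlen=n)\n",
    "\n",
    "    def observe(self, obs):\n",
    "        if not self.buffer:\n",
    "            self.buffer.extend([obs] * self.buffer.maxlen)\n",
    "        else:\n",
    "            self.buffer.append(obs)\n",
    "        return list(self.buffer)\n",
    "\n",
    "buf = ObservationBuffer(3)\n",
    "print(buf.observe(1))  # [1, 1, 1]\n",
    "print(buf.observe(2))  # [1, 1, 2]\n",
    "print(buf.observe(3))  # [1, 2, 3]"
   ]
  },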
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Wrapper class inherits from the Env class. Its constructor takes a\n",
    "single argument: the instance of the Env class to be \"wrapped\"."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "ObservationWrapper: you need to redefine the parent's observation(obs)\n",
    "method. The obs argument is the observation from the wrapped environment,\n",
    "and the method should return the observation given to the agent.<br>\n",
    "RewardWrapper: exposes the reward(rew) method, which can modify the\n",
    "reward value given to the agent.<br>\n",
    "ActionWrapper: you need to override the action(act) method, which can\n",
    "modify the action the agent passes to the wrapped environment.<br>"
   ]
  },
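  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For instance, a RewardWrapper subclass only needs to override reward(rew). The sketch below assumes rescaling rewards is useful for the task at hand; the class name `ScaledRewardWrapper` and the `scale` parameter are hypothetical, made up for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import gym\n",
    "\n",
    "class ScaledRewardWrapper(gym.RewardWrapper):\n",
    "    \"\"\"Hypothetical example: multiply every reward by a constant factor.\"\"\"\n",
    "    def __init__(self, env, scale=0.1):\n",
    "        super().__init__(env)\n",
    "        self.scale = scale\n",
    "\n",
    "    def reward(self, rew):\n",
    "        # Called on every reward coming from the wrapped environment\n",
    "        return rew * self.scale"
   ]
  },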
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Replacing the agent's action with a random one with some probability, via the ActionWrapper class, is one way to implement exploration.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"Use a 10% probability of random actions\"\"\"\n",
    "import gym\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Random!!\n",
       "Total reward: 10.0\n"
     ]
    }
   ],
   "source": [
    "import gym\n",
    "from typing import TypeVar\n",
    "import random\n",
    "\n",
    "Action = TypeVar('Action')\n",
    "\n",
    "class RandomActionWrapper(gym.ActionWrapper):\n",
    "    \"\"\"With epsilon=0.1, a random action is executed 10% of the time;\n",
    "    the agent's original action passes through the remaining 90%.\"\"\"\n",
    "    def __init__(self, env, epsilon=0.1):  # epsilon is the probability of a random action\n",
    "        super().__init__(env)  # call the parent gym.ActionWrapper constructor, passing env\n",
    "        self.epsilon = epsilon\n",
    "\n",
    "    def action(self, action: Action) -> Action:\n",
    "        # The '-> Action' annotation says the return type matches the input:\n",
    "        # if the input is a discrete action (int), so is the return value.\n",
    "        if random.random() < self.epsilon:\n",
    "            print(\"Random!!\")\n",
    "            return self.env.action_space.sample()\n",
    "        return action\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    env = RandomActionWrapper(gym.make(\"CartPole-v0\"))\n",
    "    obs = env.reset()\n",
    "    total_reward = 0.0\n",
    "\n",
    "    while True:\n",
    "        obs, reward, done, _ = env.step(0)  # always take action 0\n",
    "        total_reward += reward\n",
    "        if done:\n",
    "            break\n",
    "    print(f\"Total reward: {total_reward}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Monitor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# \"\"\"\n",
    "# Another class worth noting is Monitor. It is implemented in the same\n",
    "# way as Wrapper: it can write the agent's performance information to a\n",
    "# file and, optionally, record the agent's actions on video. The only\n",
    "# differences are described below.\n",
    "# \"\"\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Functionality:<br>\n",
    "Records a video of each training episode and saves it to the recording directory.\n",
    "Records per-episode statistics such as reward and step count (producing recording/openaigym.episode_batch.json).<br>\n",
    "Parameters:<br>\n",
    "env: the environment object being wrapped.<br>\n",
    "\"recording\": the directory where the videos and logs are saved."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "import gym\n",
    "import gym.wrappers\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    env = gym.make(\"CartPole-v0\")\n",
    "\n",
    "    # env: the environment object being wrapped\n",
    "    # \"recording\": the directory where videos and logs are saved\n",
    "    env = gym.wrappers.Monitor(env, \"recording\")\n",
    "\n",
    "    # run one episode so the Monitor has something to record\n",
    "    obs = env.reset()\n",
    "    done = False\n",
    "    while not done:\n",
    "        obs, reward, done, _ = env.step(env.action_space.sample())\n",
    "    env.close()  # flush the video and statistics to disk"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
