{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "946aa78d",
   "metadata": {},
   "source": [
    "### Solving the Inverted Pendulum (CartPole) with DQN"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6546c403",
   "metadata": {},
   "source": [
    "Tabular methods can store only a limited number of states. For environments with a practically unbounded state space, such as Go or robot control, tabular methods run into limits in both storage and lookup efficiency. DQN removes this limitation by **using a neural network to approximate the Q-table**.\n",
    "\n",
    "At its core, DQN is still a Q-learning algorithm with the same update rule. To explore the environment better, it likewise uses an epsilon-greedy policy during training.\n",
    "\n",
    "On top of Q-learning, DQN introduces two tricks that make the Q-network updates more stable.\n",
    "\n",
    "* Experience Replay: store many transitions (s, a, r, s') in a replay buffer, then randomly sample a batch from it for training.\n",
    "\n",
    "* Fixed Q-Target: keep a copy of the Q-network (a Target-Q network with identical structure) and use it to compute the target Q values.\n"
   ]
  },
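  {
   "cell_type": "markdown",
   "id": "dqn-target-note",
   "metadata": {},
   "source": [
    "The target used by that update rule can be sketched in a few lines. This is a minimal illustration, not part of the notebook's classes (the helper name `td_target` is invented here): DQN regresses Q(s,a) onto r + gamma * max Q_target(s', a'), with bootstrapping switched off on terminal transitions.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dqn-target-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def td_target(reward, done, next_q_values, gamma=0.9):\n",
    "    # next_q_values: Q_target(s', a) for every action a\n",
    "    # done: 1.0 if the episode ended at s', which disables bootstrapping\n",
    "    return reward + (1.0 - done) * gamma * np.max(next_q_values)\n",
    "\n",
    "# non-terminal transition: bootstrap from max Q_target(s', a')\n",
    "print(td_target(1.0, 0.0, np.array([0.5, 2.0])))  # 1.0 + 0.9 * 2.0 = 2.8\n",
    "# terminal transition: the target is just the reward\n",
    "print(td_target(1.0, 1.0, np.array([0.5, 2.0])))  # 1.0\n"
   ]
  },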
  {
   "cell_type": "markdown",
   "id": "e0577e68",
   "metadata": {},
   "source": [
    "### 1. Prepare the Environment"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c40ab52c",
   "metadata": {},
   "source": [
    "gym is a widely used test suite for reinforcement learning; environments are created with make.\n",
    "\n",
    "An env exposes the reset, step, and render methods.\n",
    "\n",
    "\n",
    "**The inverted pendulum (CartPole) problem** \n",
    "\n",
    "Environment design:\n",
    "\n",
    "The CartPole environment has an infinite state space; a state is represented by a 4-dimensional vector.\n",
    "\n",
    "The 4 dimensions mean:\n",
    "\n",
    "* cart position: -2.4 ~ 2.4\n",
    "* cart velocity: -inf ~ inf\n",
    "* pole angle: about -0.418 ~ 0.418 (radians)\n",
    "* pole angular velocity: -inf ~ inf\n",
    "\n",
    "Agent design:\n",
    "\n",
    "The agent has 2 possible actions:\n",
    "\n",
    "* 0: push the cart to the left\n",
    "* 1: push the cart to the right\n",
    "\n",
    "Reward design:\n",
    "\n",
    "Each step the pole stays up earns a reward of +1. In CartPole-v0 the episode ends after 200 steps, so the maximum score is 200; in CartPole-v1, used below, the cap is 500 steps.\n",
    "\n",
    "The goal is to train an agent that keeps the pendulum balanced for as long as possible.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "cb8a6b3c-9070-4e12-bb7b-b488c3b46577",
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys \n",
    "sys.path.insert(0,\"../..\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "a129271d-c504-4a9e-b200-da1c80268d7b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from torchkeras import KerasModel "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "283986f4-3585-4e22-9ed7-f20a29916424",
   "metadata": {},
   "outputs": [],
   "source": [
    "import gym \n",
    "import numpy as np \n",
    "import pandas as pd \n",
    "import time\n",
    "import matplotlib\n",
    "import matplotlib.pyplot as plt\n",
    "from IPython import display\n",
    "\n",
    "print(\"gym.__version__=\",gym.__version__)\n",
    "\n",
    "\n",
    "%matplotlib inline\n",
    "\n",
    "# visualization helper\n",
    "def show_state(env, step, info=''):\n",
    "    plt.figure(num=10086,dpi=100)\n",
    "    plt.clf()\n",
    "    plt.imshow(env.render())\n",
    "    plt.title(\"step: %d %s\" % (step, info))\n",
    "    plt.axis('off')\n",
    "    display.clear_output(wait=True)\n",
    "    display.display(plt.gcf())\n",
    "    plt.close()\n",
    "    \n",
    "\n",
    "env = gym.make('CartPole-v1',render_mode=\"rgb_array\") # expect the final evaluation score > 180 after training\n",
    "env.reset()\n",
    "action_dim = env.action_space.n   # CartPole: 2 actions\n",
    "obs_shape = env.observation_space.shape   # CartPole: (4,)\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "519bc64e-66a9-40a4-81bb-1d85e2cb8bf8",
   "metadata": {},
   "outputs": [],
   "source": [
    "env.reset()\n",
    "done = False\n",
    "step = 0\n",
    "while not done:\n",
    "    \n",
    "    action = np.random.randint(0, 2) # random action 0 or 1 (the upper bound of randint is exclusive)\n",
    "    state,reward,done,truncated,info = env.step(action)\n",
    "    step+=1\n",
    "    print(state,reward)\n",
    "    time.sleep(1.0)\n",
    "    show_state(env,step=step)\n",
    "    #print('step {}: action {}, state {}, reward {}, done {}, truncated {}, info {}'.format(\\\n",
    "    #        step, action, state, reward, done, truncated,info))\n",
    "    \n",
    "display.clear_output(wait=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3bc6b673",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "86a5c44a",
   "metadata": {},
   "source": [
    "### 2. Define the Agent "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "09ff500b",
   "metadata": {},
   "source": [
    "The core idea of DQN is to use a neural network to approximate the Q-table.\n",
    "\n",
    "Model: the network architecture, responsible for fitting Q(s,a). It mainly implements the forward method.\n",
    "\n",
    "Agent: the agent, which learns and interacts with the environment; its inputs and outputs are numpy arrays. It provides sample (single-step sampling), predict (single-step prediction), predict_batch (batched prediction), compute_loss (loss computation), and sync_target (parameter synchronization).\n",
    "\n"
   ]
  },
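  {
   "cell_type": "markdown",
   "id": "egreedy-note",
   "metadata": {},
   "source": [
    "Before defining the full Agent, the epsilon-greedy scheme used by its `sample` method can be sketched on its own. This is a hypothetical standalone sketch (the helper name `choose` is invented here): act randomly with probability e_greed, otherwise act greedily, and decay e_greed toward a floor of 0.01 after every sampled step.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "egreedy-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def choose(q_values, e_greed):\n",
    "    if np.random.rand() < e_greed:\n",
    "        return np.random.randint(len(q_values))  # explore: random action\n",
    "    return int(np.argmax(q_values))              # exploit: best action\n",
    "\n",
    "e_greed, e_greed_decrement = 0.1, 0.001\n",
    "for _ in range(200):  # decay epsilon once per sampled step\n",
    "    e_greed = max(0.01, e_greed - e_greed_decrement)\n",
    "print(e_greed)  # 0.01: the floor is reached after ~90 decrements\n",
    "print(choose(np.array([0.2, 0.9]), 0.0))  # 1: greedy pick when exploration is off\n"
   ]
  },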
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e71d72fd",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch \n",
    "from torch import nn\n",
    "import torch.nn.functional as F\n",
    "import copy \n",
    "\n",
    "class Model(nn.Module):\n",
    "    def __init__(self, obs_dim, action_dim):\n",
    "        \n",
    "        # a 3-layer fully connected network\n",
    "        super(Model, self).__init__()\n",
    "        self.obs_dim = obs_dim\n",
    "        self.action_dim = action_dim \n",
    "        self.fc1 = nn.Linear(obs_dim,32)\n",
    "        self.fc2 = nn.Linear(32,16)\n",
    "        self.fc3 = nn.Linear(16,action_dim)\n",
    "\n",
    "    def forward(self, obs):\n",
    "        # input: state; output: Q values for all actions, [Q(s,a1), Q(s,a2), Q(s,a3)...]\n",
    "        x = self.fc1(obs)\n",
    "        x = torch.tanh(x)\n",
    "        x = self.fc2(x)\n",
    "        x = torch.tanh(x)\n",
    "        Q = self.fc3(x)\n",
    "        return Q\n",
    "    \n",
    "model = Model(4,2)\n",
    "model_target = copy.deepcopy(model)\n",
    "\n",
    "model.eval()\n",
    "model.forward(torch.tensor([[0.2,0.1,0.2,0.0],[0.3,0.5,0.2,0.6]]))\n",
    "\n",
    "model_target.eval() \n",
    "model_target.forward(torch.tensor([[0.2,0.1,0.2,0.0],[0.3,0.5,0.2,0.6]]))\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3a702800",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch \n",
    "from torch import nn \n",
    "import torch.nn.functional as F\n",
    "import numpy as np\n",
    "import copy \n",
    "\n",
    "class DQNAgent(nn.Module):\n",
    "    def __init__(self, model, \n",
    "        gamma=0.9,\n",
    "        e_greed=0.1,\n",
    "        e_greed_decrement=0.001\n",
    "        ):\n",
    "        super().__init__()\n",
    "        \n",
    "        self.model = model\n",
    "        self.target_model = copy.deepcopy(model)\n",
    "  \n",
    "        self.gamma = gamma # reward discount factor, typically between 0.9 and 0.999\n",
    "        \n",
    "        self.e_greed = e_greed  # probability of taking a random action (exploration)\n",
    "        self.e_greed_decrement = e_greed_decrement  # exploration is reduced gradually as training converges\n",
    "        \n",
    "        self.global_step = 0\n",
    "        self.update_target_steps = 200 # copy model's parameters into target_model every 200 training steps\n",
    "        \n",
    "        \n",
    "    def forward(self,obs):\n",
    "        return self.model(obs)\n",
    "    \n",
    "    @torch.no_grad()\n",
    "    def predict_batch(self, obs):\n",
    "        \"\"\" Use self.model to get [Q(s,a1),Q(s,a2),...]\n",
    "        \"\"\"\n",
    "        self.model.eval()\n",
    "        return self.forward(obs)\n",
    "    \n",
    "    \n",
    "    # single-step sampling (epsilon-greedy)    \n",
    "    def sample(self, obs):\n",
    "        sample = np.random.rand()  # random float in [0, 1)\n",
    "        if sample < self.e_greed:\n",
    "            action = np.random.randint(self.model.action_dim)  # explore: every action may be chosen\n",
    "        else:\n",
    "            action = self.predict(obs)  # exploit: choose the best action\n",
    "        self.e_greed = max(\n",
    "            0.01, self.e_greed - self.e_greed_decrement)  # reduce exploration as training converges\n",
    "        return action\n",
    "    \n",
    "    # single-step prediction   \n",
    "    def predict(self, obs):  # choose the best action\n",
    "        obs = np.expand_dims(obs, axis=0)\n",
    "        tensor = torch.tensor(obs,dtype=torch.float32).to(self.model.fc1.weight.device)\n",
    "        pred_Q = self.predict_batch(tensor)\n",
    "        action = torch.argmax(pred_Q,1,keepdim=True).cpu().numpy()  \n",
    "        action = np.squeeze(action)\n",
    "        return action\n",
    "    \n",
    "    \n",
    "    def sync_target(self):\n",
    "        \"\"\" Sync the parameters of self.model into self.target_model\n",
    "        \"\"\"\n",
    "        self.target_model.load_state_dict(self.model.state_dict())\n",
    "    \n",
    "\n",
    "    def compute_loss(self, obs, action, reward, next_obs, done):\n",
    "        \n",
    "        # sync the parameters of model and target_model every 200 training steps\n",
    "        if self.global_step % self.update_target_steps == 0:\n",
    "            self.sync_target()\n",
    "        self.global_step += 1\n",
    "        \n",
    "        \n",
    "        # get max Q' from target_model to compute target_Q (no gradients through the target)\n",
    "        self.target_model.eval()\n",
    "        with torch.no_grad():\n",
    "            next_pred_value = self.target_model(next_obs)\n",
    "            best_value = torch.max(next_pred_value, dim = 1,keepdim=True).values \n",
    "            target = reward.reshape((-1,1)) + (\n",
    "                torch.tensor(1.0) - done.reshape(-1,1)) * self.gamma * best_value\n",
    "        \n",
    "        #print(\"best_value\",best_value.shape)\n",
    "        #print(\"target\",target.shape)\n",
    "\n",
    "        # get the predicted Q values\n",
    "        self.model.train()\n",
    "        pred_value = self.model(obs)  \n",
    "        action_onehot = F.one_hot(action.reshape(-1),\n",
    "                num_classes = self.model.action_dim).float()\n",
    "        prediction = torch.sum(pred_value*action_onehot,dim= 1,keepdim=True)\n",
    "        \n",
    "        #print(\"pred_value\",pred_value.shape)\n",
    "        #print(\"action_onehot\",action_onehot.shape)\n",
    "        #print(\"prediction\",prediction.shape)\n",
    "        \n",
    "        # smooth L1 (Huber) loss between Q(s,a) and target_Q\n",
    "        loss = F.smooth_l1_loss(target,prediction)\n",
    "        return loss \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9cded035-1714-48bf-a772-ab94d1f9e831",
   "metadata": {},
   "outputs": [],
   "source": [
    "agent = DQNAgent(model,gamma=0.9,e_greed=0.1,\n",
    "                 e_greed_decrement=0.001) \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "eaf29876-7e8e-4f0a-b6f0-6bcb2b3fea65",
   "metadata": {},
   "outputs": [],
   "source": [
    "agent.predict_batch(torch.tensor([[2.0,3.0,4.0,2.0],[1.0,2.0,3.0,4.0]]))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7baa50e3",
   "metadata": {},
   "outputs": [],
   "source": [
    "loss = agent.compute_loss(torch.tensor([[2.0,3.0,4.0,2.0],[1.0,2.0,3.0,4.0],[1.0,2.0,3.0,4.0]]),\n",
    "          torch.tensor([[1],[0],[0]]),\n",
    "          torch.tensor([[1.0],[1.0],[1.0]]),\n",
    "         torch.tensor([[2.0,3.0,0.4,2.0],[1.0,2.0,3.0,4.0],[1.0,2.0,3.0,4.0]]),\n",
    "         torch.tensor(0.9))\n",
    "print(loss)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "109152db-ed87-4f35-80b1-61b99bd00c3f",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "93668346-2510-416b-b695-926b7a52074a",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "efcec74b",
   "metadata": {},
   "source": [
    "### 3. Train the Agent "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "840ee7f4",
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "import collections\n",
    "import numpy as np\n",
    "\n",
    "LEARN_FREQ = 5 # learning frequency: no need to learn on every step; accumulate some new experience first, which is more efficient\n",
    "MEMORY_SIZE = 2048    # size of the replay memory; larger uses more RAM\n",
    "MEMORY_WARMUP_SIZE = 512  # pre-fill the replay memory with some experience before training starts\n",
    "BATCH_SIZE = 128   # number of samples per learning step, drawn randomly from the replay memory\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4c2e7709",
   "metadata": {},
   "outputs": [],
   "source": [
    "# experience replay buffer\n",
    "class ReplayMemory(object):\n",
    "    def __init__(self, max_size):\n",
    "        self.buffer = collections.deque(maxlen=max_size)\n",
    "\n",
    "    # add one experience to the buffer\n",
    "    def append(self, exp):\n",
    "        self.buffer.append(exp)\n",
    "\n",
    "    # sample N experiences from the buffer\n",
    "    def sample(self, batch_size):\n",
    "        mini_batch = random.sample(self.buffer, batch_size)\n",
    "        obs_batch, action_batch, reward_batch, next_obs_batch, done_batch = [], [], [], [], []\n",
    "\n",
    "        for experience in mini_batch:\n",
    "            s, a, r, s_p, done = experience\n",
    "            obs_batch.append(s)\n",
    "            action_batch.append(a)\n",
    "            reward_batch.append(r)\n",
    "            next_obs_batch.append(s_p)\n",
    "            done_batch.append(done)\n",
    "\n",
    "        return np.array(obs_batch).astype('float32'), \\\n",
    "            np.array(action_batch).astype('int64'), np.array(reward_batch).astype('float32'),\\\n",
    "            np.array(next_obs_batch).astype('float32'), np.array(done_batch).astype('float32')\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.buffer)\n",
    "    "
   ]
  },
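  {
   "cell_type": "markdown",
   "id": "rpm-demo-note",
   "metadata": {},
   "source": [
    "The core of `ReplayMemory` is `random.sample` over a bounded `deque`. A quick self-contained sanity check of that mechanism with fake transitions (the values are illustrative only, not real environment data):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "rpm-demo",
   "metadata": {},
   "outputs": [],
   "source": [
    "import collections\n",
    "import random\n",
    "import numpy as np\n",
    "\n",
    "buffer = collections.deque(maxlen=100)  # oldest experiences are evicted automatically\n",
    "for i in range(10):\n",
    "    # fake transition: (obs, action, reward, next_obs, done)\n",
    "    buffer.append((np.zeros(4, 'float32'), i % 2, 1.0, np.ones(4, 'float32'), 0.0))\n",
    "\n",
    "mini_batch = random.sample(buffer, 4)  # 4 distinct transitions, in random order\n",
    "obs_batch = np.array([t[0] for t in mini_batch]).astype('float32')\n",
    "print(obs_batch.shape)  # (4, 4)\n",
    "print(len(buffer))      # 10\n"
   ]
  },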
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c5d2a816-2ae5-4395-bb48-c95460338f1b",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8ef47dae-5e61-4e69-9834-133e7fdf8c44",
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.utils.data import IterableDataset,DataLoader  \n",
    "class MyDataset(IterableDataset):\n",
    "    def __init__(self,env,agent,rpm,stage='train',size=200):\n",
    "        self.env = env\n",
    "        self.agent = agent \n",
    "        self.rpm = rpm if stage=='train' else None\n",
    "        self.stage = stage\n",
    "        self.size = size \n",
    "        \n",
    "    def __iter__(self):\n",
    "        obs,info = self.env.reset() # reset the environment and start a new episode\n",
    "        step = 0\n",
    "        batch_reward_true = [] # record the true rewards\n",
    "        while True:\n",
    "            step += 1\n",
    "            action = self.agent.sample(obs) \n",
    "            next_obs, reward, done, _, _ = self.env.step(action) # one interaction with the environment\n",
    "            batch_reward_true.append(reward)\n",
    "            \n",
    "            if self.stage=='train':\n",
    "                self.rpm.append((obs, action, reward, next_obs, float(done)))\n",
    "                if (len(self.rpm) > MEMORY_WARMUP_SIZE) and (step % LEARN_FREQ == 0):\n",
    "                    #yield batch_obs, batch_action, batch_reward, batch_next_obs,batch_done\n",
    "                    yield self.rpm.sample(BATCH_SIZE),sum(batch_reward_true)\n",
    "                    batch_reward_true.clear()\n",
    "            \n",
    "            else:\n",
    "                obs_batch = np.array([obs]).astype('float32')\n",
    "                action_batch = np.array([action]).astype('int64')\n",
    "                reward_batch = np.array([reward]).astype('float32')\n",
    "                next_obs_batch = np.array([next_obs]).astype('float32')\n",
    "                done_batch = np.array([float(done)]).astype('float32')\n",
    "                batch_data = obs_batch,action_batch,reward_batch,next_obs_batch,done_batch\n",
    "                yield batch_data,sum(batch_reward_true)\n",
    "                batch_reward_true.clear()\n",
    "            \n",
    "    \n",
    "            if self.stage =='train':\n",
    "                next_action = self.agent.sample(next_obs) # training: use the exploration policy\n",
    "            else:\n",
    "                next_action = self.agent.predict(next_obs) # validation: use the model's greedy prediction\n",
    " \n",
    "            action = next_action\n",
    "            obs = next_obs   \n",
    "\n",
    "            if done:\n",
    "                if self.stage=='train' and len(self.rpm)<MEMORY_WARMUP_SIZE: # make sure training happens at least once\n",
    "                    yield self.rpm.sample(len(self.rpm)),sum(batch_reward_true)\n",
    "                    batch_reward_true.clear()\n",
    "                    break\n",
    "                else:\n",
    "                    break\n",
    "    def __len__(self):\n",
    "        return self.size \n",
    "    \n",
    "\n",
    "env = gym.make('CartPole-v1') \n",
    "rpm = ReplayMemory(MEMORY_SIZE)\n",
    "\n",
    "ds_train = MyDataset(env,agent,rpm,stage='train',size=1000)\n",
    "ds_val = MyDataset(env,agent,rpm,stage='val',size=200)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e2c5f16c-85b3-4e5d-9a2e-d5a3765a8b78",
   "metadata": {},
   "outputs": [],
   "source": [
    "# pre-fill the ReplayMemory with warm-up data\n",
    "while len(ds_train.rpm)<MEMORY_WARMUP_SIZE:\n",
    "    for data in ds_train:\n",
    "        print(len(ds_train.rpm))\n",
    "        "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "53a73e72-ce44-4e47-8156-167a22e4c7f6",
   "metadata": {},
   "outputs": [],
   "source": [
    "def collate_fn(batch):\n",
    "    samples,rewards = [x[0] for x in batch],[x[-1] for x in batch] \n",
    "    samples = [torch.from_numpy(np.concatenate([x[j] for x in samples])) for j in range(5)] \n",
    "    rewards = torch.from_numpy(np.array([sum(rewards)]).astype('float32'))\n",
    "    return samples,rewards \n",
    "\n",
    "dl_train = DataLoader(ds_train,batch_size=1,collate_fn=collate_fn)\n",
    "dl_val = DataLoader(ds_val,batch_size=1,collate_fn=collate_fn)\n"
   ]
  },
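  {
   "cell_type": "markdown",
   "id": "collate-demo-note",
   "metadata": {},
   "source": [
    "The key step in `collate_fn` above is concatenating the per-item numpy batches along axis 0 before converting them to tensors. A self-contained sketch of that step with fake arrays (shapes only, not real transitions):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "collate-demo",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import torch\n",
    "\n",
    "# two fake observation batches, as two dataset items might yield them\n",
    "a = np.zeros((8, 4), dtype='float32')\n",
    "b = np.ones((3, 4), dtype='float32')\n",
    "\n",
    "merged = torch.from_numpy(np.concatenate([a, b]))  # stacked along axis 0\n",
    "print(merged.shape)  # torch.Size([11, 4])\n"
   ]
  },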
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c8980d2c-0ee9-404f-8d15-eb9ef2af6985",
   "metadata": {},
   "outputs": [],
   "source": [
    "for batch in dl_train:\n",
    "    break"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "70436e9a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys,datetime\n",
    "from tqdm import tqdm\n",
    "import numpy as np\n",
    "from accelerate import Accelerator\n",
    "from torchkeras import KerasModel\n",
    "import pandas as pd \n",
    "\n",
    "from copy import deepcopy\n",
    "\n",
    "class StepRunner:\n",
    "    def __init__(self, net, loss_fn, accelerator=None, stage = \"train\", metrics_dict = None, \n",
    "                 optimizer = None, lr_scheduler = None\n",
    "                 ):\n",
    "        self.net,self.loss_fn,self.metrics_dict,self.stage = net,loss_fn,metrics_dict,stage\n",
    "        self.optimizer,self.lr_scheduler = optimizer,lr_scheduler\n",
    "        self.accelerator = accelerator if accelerator is not None else Accelerator()\n",
    "    \n",
    "    def __call__(self, batch):\n",
    "        \n",
    "        samples,reward = batch\n",
    "        #torch_data = ([torch.from_numpy(x) for x in batch_data])\n",
    "        loss = self.net.compute_loss(*samples)\n",
    "        \n",
    "        #backward()\n",
    "        if self.optimizer is not None and self.stage==\"train\":\n",
    "            self.accelerator.backward(loss)\n",
    "            if self.accelerator.sync_gradients:\n",
    "                self.accelerator.clip_grad_norm_(self.net.parameters(), 1.0)\n",
    "            self.optimizer.step()\n",
    "            if self.lr_scheduler is not None:\n",
    "                self.lr_scheduler.step()\n",
    "            self.optimizer.zero_grad()\n",
    "                \n",
    "            \n",
    "        # losses (or plain metric)\n",
    "        step_losses = {self.stage+'_reward':reward.item(), \n",
    "                       self.stage+'_loss':loss.item()}\n",
    "        \n",
    "        #metrics (stateful metric)\n",
    "        step_metrics = {}\n",
    "        if self.stage==\"train\":\n",
    "            if self.optimizer is not None:\n",
    "                step_metrics['lr'] = self.optimizer.state_dict()['param_groups'][0]['lr']\n",
    "            else:\n",
    "                step_metrics['lr'] = 0.0\n",
    "        return step_losses,step_metrics\n",
    "    \n",
    "\n",
    "class EpochRunner:\n",
    "    def __init__(self,steprunner,quiet=False):\n",
    "        self.steprunner = steprunner\n",
    "        self.stage = steprunner.stage\n",
    "        self.accelerator = steprunner.accelerator\n",
    "        self.net = steprunner.net\n",
    "        self.quiet = quiet\n",
    "        \n",
    "    def __call__(self,dataloader):\n",
    "        dataloader.agent = self.net \n",
    "        n = dataloader.size  if hasattr(dataloader,'size') else len(dataloader)\n",
    "        loop = tqdm(enumerate(dataloader,start=1), \n",
    "                    total=n,\n",
    "                    file=sys.stdout,\n",
    "                    disable=not self.accelerator.is_local_main_process or self.quiet,\n",
    "                    ncols=100\n",
    "                   )\n",
    "        epoch_losses = {}\n",
    "        for step, batch in loop: \n",
    "            if step<n:\n",
    "                step_losses,step_metrics = self.steprunner(batch)   \n",
    "                step_log = dict(step_losses,**step_metrics)\n",
    "                for k,v in step_losses.items():\n",
    "                    epoch_losses[k] = epoch_losses.get(k,0.0)+v\n",
    "                loop.set_postfix(**step_log) \n",
    "            else:\n",
    "                break\n",
    "            \n",
    "        epoch_metrics = step_metrics\n",
    "        epoch_metrics.update({self.stage+\"_\"+name:metric_fn.compute().item() \n",
    "                         for name,metric_fn in self.steprunner.metrics_dict.items()})\n",
    "        epoch_losses = {k:v for k,v in epoch_losses.items()}\n",
    "        epoch_log = dict(epoch_losses,**epoch_metrics)\n",
    "        loop.set_postfix(**epoch_log)\n",
    "\n",
    "        for name,metric_fn in self.steprunner.metrics_dict.items():\n",
    "            metric_fn.reset()\n",
    "            \n",
    "        return epoch_log\n",
    "    \n",
    "KerasModel.StepRunner = StepRunner\n",
    "KerasModel.EpochRunner = EpochRunner \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "49957914",
   "metadata": {},
   "outputs": [],
   "source": [
    "keras_model = KerasModel(net= agent,loss_fn=None,\n",
    "        optimizer=torch.optim.Adam(agent.model.parameters(),lr=1e-2))\n",
    "\n",
    "dfhistory = keras_model.fit(train_data = dl_train,\n",
    "    val_data=dl_val,\n",
    "    epochs=600,\n",
    "    ckpt_path='checkpoint.pt',\n",
    "    patience=100,\n",
    "    monitor='val_reward',\n",
    "    mode='max',\n",
    "    callbacks=None,\n",
    "    plot= True,\n",
    "    cpu=True)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "42b48153",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "475e96d4",
   "metadata": {},
   "source": [
    "### 4. Evaluate the Agent "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "51a68f2d-785b-432a-a93c-7af94a7c63ff",
   "metadata": {},
   "outputs": [],
   "source": [
    "# evaluate the agent: run 2 episodes and average the total reward\n",
    "def evaluate(env, agent, render=False):\n",
    "    eval_reward = []\n",
    "    for i in range(2):\n",
    "        obs,info = env.reset()\n",
    "        episode_reward = 0\n",
    "        step=0\n",
    "        while step<300:\n",
    "            action = agent.predict(obs)  # greedy: always pick the best action\n",
    "            obs, reward, done, _, _ = env.step(action)\n",
    "            episode_reward += reward\n",
    "            if render:\n",
    "                show_state(env,step,info='reward='+str(episode_reward))\n",
    "            if done:\n",
    "                break\n",
    "            step+=1\n",
    "        eval_reward.append(episode_reward)\n",
    "    return np.mean(eval_reward)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2b2476b3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# visualize the trained agent as an animation\n",
    "env = gym.make('CartPole-v1',render_mode=\"rgb_array\") \n",
    "\n",
    "evaluate(env, agent, render=True)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "05fd6239",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "cd03379b",
   "metadata": {},
   "source": [
    "### 5. Save the Agent "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "211be5cc",
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.save(agent.state_dict(),'dqn_agent.pt')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "14fb3fe0",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7a7b8cb",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "jupytext": {
   "cell_metadata_filter": "-all",
   "formats": "md,ipynb",
   "main_language": "python",
   "notebook_metadata_filter": "-all"
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
