{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Configurations for Colab"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys\n",
    "IN_COLAB = \"google.colab\" in sys.modules\n",
    "\n",
    "if IN_COLAB:\n",
    "    !apt install python-opengl\n",
    "    !apt install ffmpeg\n",
    "    !apt install xvfb\n",
    "    !pip install PyVirtualDisplay==3.0\n",
    "    !pip install gymnasium==0.28.1\n",
    "    from pyvirtualdisplay import Display\n",
    "    \n",
    "    # Start virtual display\n",
    "    dis = Display(visible=0, size=(400, 400))\n",
    "    dis.start()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 04. Dueling Network\n",
    "\n",
    "[Z. Wang et al., \"Dueling Network Architectures for Deep Reinforcement Learning.\" arXiv preprint arXiv:1511.06581, 2015.](https://arxiv.org/pdf/1511.06581.pdf)\n",
    "\n",
    "The proposed network architecture, which is named *dueling architecture*, explicitly separates the representation of state values and (state-dependent) action advantages. \n",
    "\n",
    "![fig1](https://user-images.githubusercontent.com/14961526/60322956-c2f0b600-99bb-11e9-9ed4-443bd14bc3b0.png)\n",
    "\n",
    "The dueling network automatically produces separate estimates of the state value function and advantage function, without any extra supervision. Intuitively, the dueling architecture can learn which states are (or are not) valuable, without having to learn the effect of each action for each state. This is particularly useful in states where its actions do not affect the environment in any relevant way. \n",
    "\n",
    "The dueling architecture represents both the value $V(s)$ and advantage $A(s, a)$ functions with a single deep model whose output combines the two to produce a state-action value $Q(s, a)$. Unlike in advantage updating, the representation and algorithm are decoupled by construction.\n",
    "\n",
    "$$A^\\pi (s, a) = Q^\\pi (s, a) - V^\\pi (s).$$\n",
    "\n",
    "The value function $V$ measures the how good it is to be in a particular state $s$. The $Q$ function, however, measures the the value of choosing a particular action when in this state. Now, using the definition of advantage, we might be tempted to construct the aggregating module as follows:\n",
    "\n",
    "$$Q(s, a; \\theta, \\alpha, \\beta) = V (s; \\theta, \\beta) + A(s, a; \\theta, \\alpha),$$\n",
    "\n",
    "where $\\theta$ denotes the parameters of the convolutional layers, while $\\alpha$ and $\\beta$ are the parameters of the two streams of fully-connected layers.\n",
    "\n",
    "Unfortunately, the above equation is unidentifiable in the sense that given $Q$ we cannot recover $V$ and $A$ uniquely; for example, there are uncountable pairs of $V$ and $A$ that make $Q$ values to zero. To address this issue of identifiability, we can force the advantage function estimator to have zero advantage at the chosen action. That is, we let the last module of the network implement the forward mapping.\n",
    "\n",
    "$$\n",
    "Q(s, a; \\theta, \\alpha, \\beta) = V (s; \\theta, \\beta) + \\big( A(s, a; \\theta, \\alpha) - \\max_{a' \\in |\\mathcal{A}|} A(s, a'; \\theta, \\alpha) \\big).\n",
    "$$\n",
    "\n",
    "This formula guarantees that we can recover the unique $V$ and $A$, but the optimization is not so stable because the advantages have to compensate any change to the optimal action’s advantage. Due to the reason, an alternative module that replaces the max operator with an average is proposed:\n",
    "\n",
    "$$\n",
    "Q(s, a; \\theta, \\alpha, \\beta) = V (s; \\theta, \\beta) + \\big( A(s, a; \\theta, \\alpha) - \\frac{1}{|\\mathcal{A}|} \\sum_{a'} A(s, a'; \\theta, \\alpha) \\big).\n",
    "$$\n",
    "\n",
    "Unlike the max advantage form, in this formula, the advantages only need to change as fast as the mean, so it increases the stability of optimization."
   ]
  },
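  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the mean-subtracted aggregation (a minimal standalone sketch, not from the original paper), note that the centered advantages average to zero over actions, so the state value is recoverable as the mean of the $Q$-values:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "value = torch.randn(4, 1)      # V(s) for a batch of 4 states\n",
    "advantage = torch.randn(4, 2)  # A(s, a) for 2 actions\n",
    "\n",
    "# dueling aggregation with the mean-subtracted advantage stream\n",
    "q = value + advantage - advantage.mean(dim=-1, keepdim=True)\n",
    "\n",
    "# the centered advantages average to zero, so V is recoverable as the\n",
    "# mean of Q over actions -- the decomposition is identifiable\n",
    "assert torch.allclose(q.mean(dim=-1, keepdim=True), value, atol=1e-6)\n",
    "```"
   ]
  },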
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from typing import Dict, List, Tuple\n",
    "\n",
    "import gymnasium as gym\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "from IPython.display import clear_output\n",
    "from torch.nn.utils import clip_grad_norm_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Replay buffer\n",
    "\n",
    "Please see *01.dqn.ipynb* for detailed description."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ReplayBuffer:\n",
    "    \"\"\"A simple numpy replay buffer.\"\"\"\n",
    "\n",
    "    def __init__(self, obs_dim: int, size: int, batch_size: int = 32):\n",
    "        self.obs_buf = np.zeros([size, obs_dim], dtype=np.float32)\n",
    "        self.next_obs_buf = np.zeros([size, obs_dim], dtype=np.float32)\n",
    "        self.acts_buf = np.zeros([size], dtype=np.float32)\n",
    "        self.rews_buf = np.zeros([size], dtype=np.float32)\n",
    "        self.done_buf = np.zeros(size, dtype=np.float32)\n",
    "        self.max_size, self.batch_size = size, batch_size\n",
    "        self.ptr, self.size, = 0, 0\n",
    "\n",
    "    def store(\n",
    "        self,\n",
    "        obs: np.ndarray,\n",
    "        act: np.ndarray, \n",
    "        rew: float, \n",
    "        next_obs: np.ndarray, \n",
    "        done: bool,\n",
    "    ):\n",
    "        self.obs_buf[self.ptr] = obs\n",
    "        self.next_obs_buf[self.ptr] = next_obs\n",
    "        self.acts_buf[self.ptr] = act\n",
    "        self.rews_buf[self.ptr] = rew\n",
    "        self.done_buf[self.ptr] = done\n",
    "        self.ptr = (self.ptr + 1) % self.max_size\n",
    "        self.size = min(self.size + 1, self.max_size)\n",
    "\n",
    "    def sample_batch(self) -> Dict[str, np.ndarray]:\n",
    "        idxs = np.random.choice(self.size, size=self.batch_size, replace=False)\n",
    "        return dict(obs=self.obs_buf[idxs],\n",
    "                    next_obs=self.next_obs_buf[idxs],\n",
    "                    acts=self.acts_buf[idxs],\n",
    "                    rews=self.rews_buf[idxs],\n",
    "                    done=self.done_buf[idxs])\n",
    "\n",
    "    def __len__(self) -> int:\n",
    "        return self.size"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Dueling Network\n",
    "\n",
    "Carefully take a look at advantage and value layers separated from feature layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Network(nn.Module):\n",
    "    def __init__(self, in_dim: int, out_dim: int):\n",
    "        \"\"\"Initialization.\"\"\"\n",
    "        super(Network, self).__init__()\n",
    "\n",
    "        # set common feature layer\n",
    "        self.feature_layer = nn.Sequential(\n",
    "            nn.Linear(in_dim, 128), \n",
    "            nn.ReLU(),\n",
    "        )\n",
    "        \n",
    "        # set advantage layer\n",
    "        self.advantage_layer = nn.Sequential(\n",
    "            nn.Linear(128, 128),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(128, out_dim),\n",
    "        )\n",
    "\n",
    "        # set value layer\n",
    "        self.value_layer = nn.Sequential(\n",
    "            nn.Linear(128, 128),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(128, 1),\n",
    "        )\n",
    "\n",
    "    def forward(self, x: torch.Tensor) -> torch.Tensor:\n",
    "        \"\"\"Forward method implementation.\"\"\"\n",
    "        feature = self.feature_layer(x)\n",
    "        \n",
    "        value = self.value_layer(feature)\n",
    "        advantage = self.advantage_layer(feature)\n",
    "\n",
    "        q = value + advantage - advantage.mean(dim=-1, keepdim=True)\n",
    "        \n",
    "        return q"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## DQN + DuelingNet Agent (w/o Double-DQN & PER)\n",
    "\n",
    "Here is a summary of DQNAgent class.\n",
    "\n",
    "| Method           | Note                                                 |\n",
    "| ---              | ---                                                  |\n",
    "|select_action     | select an action from the input state.               |\n",
    "|step              | take an action and return the response of the env.   |\n",
    "|compute_dqn_loss  | return dqn loss.                                     |\n",
    "|update_model      | update the model by gradient descent.                |\n",
    "|target_hard_update| hard update from the local model to the target model.|\n",
    "|train             | train the agent during num_frames.                   |\n",
    "|test              | test the agent (1 episode).                          |\n",
    "|plot              | plot the training progresses.                        |\n",
    "\n",
    "\n",
    "Aside from the dueling network architecture, the authors suggest to use Double-DQN and Prioritized Experience Replay as extra components for better performance. However, we don't implement them to simplify the tutorial. There is only one diffrence between DQNAgent here and the one from *01.dqn.ipynb* and that is the usage of clip_grad_norm_ to prevent gradient exploding."
   ]
  },
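  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The effect of clip_grad_norm_ can be illustrated in isolation (a minimal standalone sketch; the layer and inputs are made up for the demo): after clipping, the total gradient norm never exceeds the given maximum.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch.nn.utils import clip_grad_norm_\n",
    "\n",
    "layer = torch.nn.Linear(4, 2)\n",
    "loss = (layer(torch.randn(8, 4) * 100) ** 2).sum()  # deliberately large loss\n",
    "loss.backward()\n",
    "\n",
    "clip_grad_norm_(layer.parameters(), max_norm=10.0)\n",
    "total_norm = torch.norm(\n",
    "    torch.stack([p.grad.norm() for p in layer.parameters()])\n",
    ")\n",
    "assert total_norm <= 10.0 + 1e-4  # gradients were rescaled to norm <= 10\n",
    "```"
   ]
  },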
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "class DQNAgent:\n",
    "    \"\"\"DQN Agent interacting with environment.\n",
    "    \n",
    "    Attribute:\n",
    "        env (gym.Env): openAI Gym environment\n",
    "        memory (ReplayBuffer): replay memory to store transitions\n",
    "        batch_size (int): batch size for sampling\n",
    "        epsilon (float): parameter for epsilon greedy policy\n",
    "        epsilon_decay (float): step size to decrease epsilon\n",
    "        max_epsilon (float): max value of epsilon\n",
    "        min_epsilon (float): min value of epsilon\n",
    "        target_update (int): period for target model's hard update\n",
    "        gamma (float): discount factor\n",
    "        dqn (Network): model to train and select actions\n",
    "        dqn_target (Network): target model to update\n",
    "        optimizer (torch.optim): optimizer for training dqn\n",
    "        transition (list): transition information including\n",
    "                           state, action, reward, next_state, done\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(\n",
    "        self, \n",
    "        env: gym.Env,\n",
    "        memory_size: int,\n",
    "        batch_size: int,\n",
    "        target_update: int,\n",
    "        epsilon_decay: float,\n",
    "        seed: int,\n",
    "        max_epsilon: float = 1.0,\n",
    "        min_epsilon: float = 0.1,\n",
    "        gamma: float = 0.99,\n",
    "    ):\n",
    "        \"\"\"Initialization.\n",
    "        \n",
    "        Args:\n",
    "            env (gym.Env): openAI Gym environment\n",
    "            memory_size (int): length of memory\n",
    "            batch_size (int): batch size for sampling\n",
    "            target_update (int): period for target model's hard update\n",
    "            epsilon_decay (float): step size to decrease epsilon\n",
    "            lr (float): learning rate\n",
    "            max_epsilon (float): max value of epsilon\n",
    "            min_epsilon (float): min value of epsilon\n",
    "            gamma (float): discount factor\n",
    "        \"\"\"\n",
    "        obs_dim = env.observation_space.shape[0]\n",
    "        action_dim = env.action_space.n\n",
    "        \n",
    "        self.env = env\n",
    "        self.memory = ReplayBuffer(obs_dim, memory_size, batch_size)\n",
    "        self.batch_size = batch_size\n",
    "        self.epsilon = max_epsilon\n",
    "        self.epsilon_decay = epsilon_decay\n",
    "        self.seed = seed\n",
    "        self.max_epsilon = max_epsilon\n",
    "        self.min_epsilon = min_epsilon\n",
    "        self.target_update = target_update\n",
    "        self.gamma = gamma\n",
    "        \n",
    "        # device: cpu / gpu\n",
    "        self.device = torch.device(\n",
    "            \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
    "        )\n",
    "        print(self.device)\n",
    "\n",
    "        # networks: dqn, dqn_target\n",
    "        self.dqn = Network(obs_dim, action_dim).to(self.device)\n",
    "        self.dqn_target = Network(obs_dim, action_dim).to(self.device)\n",
    "        self.dqn_target.load_state_dict(self.dqn.state_dict())\n",
    "        self.dqn_target.eval()\n",
    "        \n",
    "        # optimizer\n",
    "        self.optimizer = optim.Adam(self.dqn.parameters())\n",
    "\n",
    "        # transition to store in memory\n",
    "        self.transition = list()\n",
    "        \n",
    "        # mode: train / test\n",
    "        self.is_test = False\n",
    "\n",
    "    def select_action(self, state: np.ndarray) -> np.ndarray:\n",
    "        \"\"\"Select an action from the input state.\"\"\"\n",
    "        # epsilon greedy policy\n",
    "        if self.epsilon > np.random.random():\n",
    "            selected_action = self.env.action_space.sample()\n",
    "        else:\n",
    "            selected_action = self.dqn(\n",
    "                torch.FloatTensor(state).to(self.device)\n",
    "            ).argmax()\n",
    "            selected_action = selected_action.detach().cpu().numpy()\n",
    "        \n",
    "        if not self.is_test:\n",
    "            self.transition = [state, selected_action]\n",
    "        \n",
    "        return selected_action\n",
    "\n",
    "    def step(self, action: np.ndarray) -> Tuple[np.ndarray, np.float64, bool]:\n",
    "        \"\"\"Take an action and return the response of the env.\"\"\"\n",
    "        next_state, reward, terminated, truncated, _ = self.env.step(action)\n",
    "        done = terminated or truncated\n",
    "        \n",
    "        if not self.is_test:\n",
    "            self.transition += [reward, next_state, done]\n",
    "            self.memory.store(*self.transition)\n",
    "    \n",
    "        return next_state, reward, done\n",
    "\n",
    "    def update_model(self) -> torch.Tensor:\n",
    "        \"\"\"Update the model by gradient descent.\"\"\"\n",
    "        samples = self.memory.sample_batch()\n",
    "\n",
    "        loss = self._compute_dqn_loss(samples)\n",
    "\n",
    "        self.optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        # DuelingNet: we clip the gradients to have their norm less than or equal to 10.\n",
    "        clip_grad_norm_(self.dqn.parameters(), 10.0)\n",
    "        self.optimizer.step()\n",
    "\n",
    "        return loss.item()\n",
    "        \n",
    "    def train(self, num_frames: int, plotting_interval: int = 200):\n",
    "        \"\"\"Train the agent.\"\"\"\n",
    "        self.is_test = False\n",
    "        \n",
    "        state, _ = self.env.reset(seed=self.seed)\n",
    "        update_cnt = 0\n",
    "        epsilons = []\n",
    "        losses = []\n",
    "        scores = []\n",
    "        score = 0\n",
    "\n",
    "        for frame_idx in range(1, num_frames + 1):\n",
    "            action = self.select_action(state)\n",
    "            next_state, reward, done = self.step(action)\n",
    "\n",
    "            state = next_state\n",
    "            score += reward\n",
    "\n",
    "            # if episode ends\n",
    "            if done:\n",
    "                state, _ = self.env.reset(seed=self.seed)\n",
    "                scores.append(score)\n",
    "                score = 0\n",
    "\n",
    "            # if training is ready\n",
    "            if len(self.memory) >= self.batch_size:\n",
    "                loss = self.update_model()\n",
    "                losses.append(loss)\n",
    "                update_cnt += 1\n",
    "                \n",
    "                # linearly decrease epsilon\n",
    "                self.epsilon = max(\n",
    "                    self.min_epsilon, self.epsilon - (\n",
    "                        self.max_epsilon - self.min_epsilon\n",
    "                    ) * self.epsilon_decay\n",
    "                )\n",
    "                epsilons.append(self.epsilon)\n",
    "                \n",
    "                # if hard update is needed\n",
    "                if update_cnt % self.target_update == 0:\n",
    "                    self._target_hard_update()\n",
    "\n",
    "            # plotting\n",
    "            if frame_idx % plotting_interval == 0:\n",
    "                self._plot(frame_idx, scores, losses, epsilons)\n",
    "                \n",
    "        self.env.close()\n",
    "                \n",
    "    def test(self, video_folder: str) -> None:\n",
    "        \"\"\"Test the agent.\"\"\"\n",
    "        self.is_test = True\n",
    "        \n",
    "        # for recording a video\n",
    "        naive_env = self.env\n",
    "        self.env = gym.wrappers.RecordVideo(self.env, video_folder=video_folder)\n",
    "        \n",
    "        state, _ = self.env.reset(seed=self.seed)\n",
    "        done = False\n",
    "        score = 0\n",
    "        \n",
    "        while not done:\n",
    "            action = self.select_action(state)\n",
    "            next_state, reward, done = self.step(action)\n",
    "\n",
    "            state = next_state\n",
    "            score += reward\n",
    "        \n",
    "        print(\"score: \", score)\n",
    "        self.env.close()\n",
    "        \n",
    "        # reset\n",
    "        self.env = naive_env\n",
    "\n",
    "    def _compute_dqn_loss(self, samples: Dict[str, np.ndarray]) -> torch.Tensor:\n",
    "        \"\"\"Return dqn loss.\"\"\"\n",
    "        device = self.device  # for shortening the following lines\n",
    "        state = torch.FloatTensor(samples[\"obs\"]).to(device)\n",
    "        next_state = torch.FloatTensor(samples[\"next_obs\"]).to(device)\n",
    "        action = torch.LongTensor(samples[\"acts\"].reshape(-1, 1)).to(device)\n",
    "        reward = torch.FloatTensor(samples[\"rews\"].reshape(-1, 1)).to(device)\n",
    "        done = torch.FloatTensor(samples[\"done\"].reshape(-1, 1)).to(device)\n",
    "\n",
    "        # G_t   = r + gamma * v(s_{t+1})  if state != Terminal\n",
    "        #       = r                       otherwise\n",
    "        curr_q_value = self.dqn(state).gather(1, action)\n",
    "        next_q_value = self.dqn_target(next_state).max(\n",
    "            dim=1, keepdim=True\n",
    "        )[0].detach()\n",
    "        mask = 1 - done\n",
    "        target = (reward + self.gamma * next_q_value * mask).to(self.device)\n",
    "\n",
    "        # calculate dqn loss\n",
    "        loss = F.smooth_l1_loss(curr_q_value, target)\n",
    "\n",
    "        return loss\n",
    "\n",
    "    def _target_hard_update(self):\n",
    "        \"\"\"Hard update: target <- local.\"\"\"\n",
    "        self.dqn_target.load_state_dict(self.dqn.state_dict())\n",
    "                \n",
    "    def _plot(\n",
    "        self, \n",
    "        frame_idx: int, \n",
    "        scores: List[float], \n",
    "        losses: List[float], \n",
    "        epsilons: List[float],\n",
    "    ):\n",
    "        \"\"\"Plot the training progresses.\"\"\"\n",
    "        clear_output(True)\n",
    "        plt.figure(figsize=(20, 5))\n",
    "        plt.subplot(131)\n",
    "        plt.title('frame %s. score: %s' % (frame_idx, np.mean(scores[-10:])))\n",
    "        plt.plot(scores)\n",
    "        plt.subplot(132)\n",
    "        plt.title('loss')\n",
    "        plt.plot(losses)\n",
    "        plt.subplot(133)\n",
    "        plt.title('epsilons')\n",
    "        plt.plot(epsilons)\n",
    "        plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Environment\n",
    "\n",
    "You can see the [code](https://github.com/Farama-Foundation/Gymnasium/blob/main/gymnasium/envs/classic_control/cartpole.py) and [configurations](https://github.com/Farama-Foundation/Gymnasium/blob/main/gymnasium/envs/classic_control/cartpole.py#L91) of CartPole-v1 from Farama Gymnasium's repository."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# environment\n",
    "env = gym.make(\"CartPole-v1\", max_episode_steps=200, render_mode=\"rgb_array\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Set random seed"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "seed = 777\n",
    "\n",
    "def seed_torch(seed):\n",
    "    torch.manual_seed(seed)\n",
    "    if torch.backends.cudnn.enabled:\n",
    "        torch.cuda.manual_seed(seed)\n",
    "        torch.backends.cudnn.benchmark = False\n",
    "        torch.backends.cudnn.deterministic = True\n",
    "\n",
    "np.random.seed(seed)\n",
    "seed_torch(seed)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Initialize"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "cuda\n"
     ]
    }
   ],
   "source": [
    "# parameters\n",
    "num_frames = 10000\n",
    "memory_size = 1000\n",
    "batch_size = 32\n",
    "target_update = 100\n",
    "epsilon_decay = 1 / 2000\n",
    "\n",
    "# train\n",
    "agent = DQNAgent(env, memory_size, batch_size, target_update, epsilon_decay, seed)"
   ]
  },
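  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With these hyperparameters, the linear schedule in train decreases epsilon by (max_epsilon - min_epsilon) * epsilon_decay = 0.00045 per update, so it reaches its floor of 0.1 after 2,000 updates (a standalone re-derivation of the schedule, using the same defaults as above):\n",
    "\n",
    "```python\n",
    "max_epsilon, min_epsilon, epsilon_decay = 1.0, 0.1, 1 / 2000\n",
    "\n",
    "epsilon = max_epsilon\n",
    "for _ in range(2000):\n",
    "    epsilon = max(\n",
    "        min_epsilon, epsilon - (max_epsilon - min_epsilon) * epsilon_decay\n",
    "    )\n",
    "\n",
    "assert abs(epsilon - min_epsilon) < 1e-6  # floor reached after 2,000 updates\n",
    "```"
   ]
  },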
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent.train(num_frames)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Test\n",
    "\n",
    "Run the trained agent (1 episode)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Moviepy - Building video /Users/jinwoo.park/Repositories/rainbow-is-all-you-need/videos/dueling/rl-video-episode-0.mp4.\n",
      "Moviepy - Writing video /Users/jinwoo.park/Repositories/rainbow-is-all-you-need/videos/dueling/rl-video-episode-0.mp4\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "                                                                                                               "
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Moviepy - Done !\n",
      "Moviepy - video ready /Users/jinwoo.park/Repositories/rainbow-is-all-you-need/videos/dueling/rl-video-episode-0.mp4\n",
      "score:  200.0\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\r"
     ]
    }
   ],
   "source": [
    "video_folder=\"videos/dueling\"\n",
    "agent.test(video_folder=video_folder)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Render"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "        <video width=\"320\" height=\"240\" alt=\"test\" controls>\n",
       "        <source src=\"data:video/mp4;base64,AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1wNDEAAAAIZnJlZQAANR5tZGF0AAACsAYF//+s3EXpvebZSLeWLNgg2SPu73gyNjQgLSBjb3JlIDE2MSByMzAzME0gOGJkNmQyOCAtIEguMjY0L01QRUctNCBBVkMgY29kZWMgLSBDb3B5bGVmdCAyMDAzLTIwMjAgLSBodHRwOi8vd3d3LnZpZGVvbGFuLm9yZy94MjY0Lmh0bWwgLSBvcHRpb25zOiBjYWJhYz0xIHJlZj0zIGRlYmxvY2s9MTowOjAgYW5hbHlzZT0weDM6MHgxMTMgbWU9aGV4IHN1Ym1lPTcgcHN5PTEgcHN5X3JkPTEuMDA6MC4wMCBtaXhlZF9yZWY9MSBtZV9yYW5nZT0xNiBjaHJvbWFfbWU9MSB0cmVsbGlzPTEgOHg4ZGN0PTEgY3FtPTAgZGVhZHpvbmU9MjEsMTEgZmFzdF9wc2tpcD0xIGNocm9tYV9xcF9vZmZzZXQ9LTIgdGhyZWFkcz0xMiBsb29rYWhlYWRfdGhyZWFkcz0yIHNsaWNlZF90aHJlYWRzPTAgbnI9MCBkZWNpbWF0ZT0xIGludGVybGFjZWQ9MCBibHVyYXlfY29tcGF0PTAgY29uc3RyYWluZWRfaW50cmE9MCBiZnJhbWVzPTMgYl9weXJhbWlkPTIgYl9hZGFwdD0xIGJfYmlhcz0wIGRpcmVjdD0xIHdlaWdodGI9MSBvcGVuX2dvcD0wIHdlaWdodHA9MiBrZXlpbnQ9MjUwIGtleWludF9taW49MjUgc2NlbmVjdXQ9NDAgaW50cmFfcmVmcmVzaD0wIHJjX2xvb2thaGVhZD00MCByYz1jcmYgbWJ0cmVlPTEgY3JmPTIzLjAgcWNvbXA9MC42MCBxcG1pbj0wIHFwbWF4PTY5IHFwc3RlcD00IGlwX3JhdGlvPTEuNDAgYXE9MToxLjAwAIAAAAG9ZYiEACf//vWxfApqyfOKDOgyLuGXJMmutiLibQDAFR8gAAADAAA7Zb/FzICUal8AAAMAQkANQH0GUHmImKsVAleIOBmriAEHYF9QlOSkgF1OQnZHB8MeeXq5U0srzHINKc/aresEPjEeJuf22YYFInPRbE6vMCvnQCsJt8E68Dgefc1XlgvDGhgI5SHBtCC83rbKLnLp21lLxS03Ac+KAPbfSz6eth/m/pvoP8UbpOAACDy0udypnNArrc2HxVUOwWQ+kCj3gGXw9QNvyRCSuUrRSs0DGeEV/uDaVSjF1t0gX/reHvApmDi0AkH32wRb/fRcxBuF5Sa+DAHChZgBf4Y1QLWY79hKztXl5XJcEukDjW8vtr/aVlBIIoZF1p6qqW30dykZayOqP2TU40Om/V//XY9KYZEBR0BTMRMD9n1m2v6atmBqZNKC8YQrfytRwBXCAaMEMsJEIl8rx/hKm9lNDfag3qc0ypfhjKqsUElugAAmeTGShn5n/pECTIPjmmlG2be5uVD8YqFo1u+c75sLsv009hAQm+bKNfxEPSV6cvpgSz5pXs67uBU3dAlb3TCBeAAAAwAAAwAQ8QAAAFdBmiRsQr/+OEAAACbe4mmcuuaEYRgABv1tF7AiYckWTsCxY5H23czFvgeFsvpARD+PBXsFNjuWRCbvuSbGMsXYhAEED3S/wWe8HOu1eHzu0AisIsPnJtYAAAAiQZ5CeIR/AAADAzYP60YSjxItFcPWIj6X/QsNukyHcXa43QAAABQBnmF0R/8AAAMAtaFDrAQ2D42adgAAACgBnmNqR/8AAAUdNPXhX2ssfEXImZPdGKjug5Rd283fisuVLpPMzaVBAAAAYkGaaEmoQWiZTAhP//3xAAADAPJM0gg5eAFt43qyU58dYO0R70w3tT7OwZx0Z1SuJqgsNE/xLjgzVFO41bloSAQAKxqoMmEG
Q25lZCtq8X22ONhN75O0MaHMYVMGKYlMzMLzAAAAIkGehkURLCP/AAADAzkwebbivJbLIkR+8WkQErFL1qNH0k0AAAAQAZ6ldEf/AAADAAI66pS2UQAAABwBnqdqR/8AAA00n5fHA03Fa04koHJVhFwP38yTAAAAb0GarEmoQWyZTAhH//3hAAAEFpHEAP+IRAAE6n4UcNAom/tHdI2+B7N3QHc87+JAh81Qd7sieClp1iSv0slSQzm/pSvWSpvlj5RdaDZLUhbkYaRUNoPJeQXJiENLDhxpb+48BxwhM0eidxamlL72WAAAADJBnspFFSwj/wAAFrUrN2tTqopmgyLSFXzeTXqDABxT8ckeC7PUkmFsCR2Vi/WuaPXy4QAAAB0Bnul0R/8AACOr+Ft4Ep1oK81VTFa7vzgnTf5EqAAAABgBnutqR/8AAAMARX46a4qjuU5HuqxyH4MAAABPQZrvSahBbJlMCFf//jhAAAEM+vhkC9ZsP14ceKN5v23wrbf4hcZ3+viAY3uF9OOeaRBkvqgiIvvzqUE3WlSHC5cXQ/9C3VNwcOq6DTekIQAAAChBnw1FFSwj/wAAFrUrN046YT70YEcAPv31D2w0oKgCS8PzoSkYpvbNAAAAKAGfLmpH/wAABRlyAh51mOIkeVJzPg1Ec+zMBDOXYN3pKVbak87y2zEAAABLQZszSahBbJlMCE///fEAAAMAXzNEiyEnInB0AQFfJ5xJ5wJxACao/SZY7t8Zhxwqyw7mz7G0C6eoDbL6A/6RF0cohWIGotftHNAYAAAAJUGfUUUVLCP/AAADAzhBmEk7HvfLSKggALDulkCO7kMHGRuttmAAAAAaAZ9wdEf/AAAFHFJptSWmq4gz/LFL5TshPg0AAAAhAZ9yakf/AAAjrl4zgAlWcb/tCU/O2BulMEHFEI+nrMFbAAAAS0Gbd0moQWyZTAhP//3xAAADAF//kqREqGFWk/5nLJKGGMkkQwGN7Hpwqx4josPL1wA7J0pkayI8LA2nx4ctgS+XgsYJZ38LssyVXwAAACFBn5VFFSwj/wAAAwM4UIC6fcFWSov6/QOLeANS8bXd1t0AAAAcAZ+0dEf/AAAFH8cP48sTdUVnAGQoBTfiPj+2YAAAAB0Bn7ZqR/8AAAMB2s3xURYcvC+MEQH9ftF9Kcq2YQAAAGdBm7tJqEFsmUwIV//+OEAAAQ2ik3YFSt0vhldvClywIzKcyk0jrmDKvUD5i5iJcY0vWS27o7VAwdljeYiW8tkU6qMojAcpEAfg/kEf/8Cdt7m20CcI1MsIyQp45LAuzct/Vt1OdjzxAAAAM0Gf2UUVLCP/AAAWtSs3h7ouktADbaNIASwD10WUyQrNTB4LhbbaX00iXMG6cLjcG3i5yQAAABcBn/h0R/8AACOr+Ft4Ep1oI4gRnrHxLwAAABwBn/pqR/8AAAUecIVSNoCiqLuXGmWbFiNoo/4MAAAAJUGb/0moQWyZTAhP//3xAAADAPKW9wspb3tZ7sstOMAGPF7QImEAAAAhQZ4dRRUsI/8AAAMDObtn7D+wyDkXaOwupz4DUQLkYJ25AAAAGwGePHRH/wAABRxSabCFOXkUgTTU0ZaZgi5JgAAAAB0Bnj5qR/8AAAUZB3m6ylxQQy+MCF4ZH9HnkOYgIAAAAJJBmiNJqEFsmUwIT//98QAAAwKfAq02S9iEALDs3gDdjHTS57KGs4XFOeA7qOfgYoJ/Pv36j92m9UrlRGInfDw+GkZL6KYY+HIQSJY2peq8jsth6vkZeixPgNaJT8JsOcvAmyxa2b9oirOZFhgHCy+9TE0z3A9IyW+4J4G88RoOe0s3YR6YXKSrt1MY+4SZmqncWQAAACVBnkFFFSwj/wAAFrxDzbfUfJspnDYJaAXtfyQYvxD0Kz+mvtmAAAAAGwGeYHRH/wAABRxSabCFOXkUgTTU0ZaZgi5JgQAAABsBnmJqR/8AACOxzAlvpZNATOZwL4sBra5B
y4AAAABeQZpnSahBbJlMCE///fEAAAMA+BNQX46IA0vGDNDDhimrBfMLgAGh2OP0Pzgk5veJKDaDzY2axcx1lOAGCSMiaYJIZFkmyLSfX8qVrSBr+YUs9lXlWUitP2QtAN4oIQAAADRBnoVFFSwj/wAAFrMrtQYAedWScXpA3fshXwzQeJsWRko5RZnDLesZUlhsdAAJeylP9A7dAAAAIgGepHRH/wAAI8MXaKmk9CINobd3Onw5FPPPkOnxVRCgO3EAAAAfAZ6makf/AAAjvwQnEslpnH9lWNTvlWXDwABsiX07cQAAAFNBmqtJqEFsmUwIT//98QAAAwKgrB9tr4MK7f9jG42LZJCA4JINkvT5dl+5I2JzYjDUo3cAHt4+M/yXM/rBTVl2afWQiolmeWRfkzRUZv/Cx9DwgAAAAC9BnslFFSwj/wAAFsCbPsWXhAQX3c69VgKTyfs5lQW3Xe0t+DT5Os5jMxnuNNWFBAAAADMBnuh0R/8AACPDF2ips5XfGgBWXIvSR9dT7l04OLArW7hJ0c5oDEbGHambZx1ldrp2DBkAAAArAZ7qakf/AAAjvwQnYL+cvZkBAwzGSXHCDdN1dCJVC776O6rJQrOPB4cUEAAAAEtBmu9JqEFsmUwIT//98QAAAwKNcAgv72iIlUR4EizCFvJVuT01RoPO1jRVfFIyqhUvdDgQHV2D7vLurIB3tWwug15ejgDr6KAEmOYAAAAjQZ8NRRUsI/8AAAhr+ucbzS9cUUHoXVnUqBaSifh7v2RjwYEAAAA1AZ8sdEf/AAAirEJkoJmYvdeihOqPYtJKHaYQXZ9oc4yCreG0itmOmz9qP5/1HzHm1nA6SYEAAAA2AZ8uakf/AAANMj0bcAKxCRzcl9+MFSjDqMqvn+wACCyRH74OP1mJxehU0UUD2Gs0zDZNZVihAAAAPUGbM0moQWyZTAhH//3hAAAEE1vPw7vsQf1QAr+rSb9MEryxObh9oF1CkHIN2A8H+5K1HX4DgaIIaY43UEAAAAAoQZ9RRRUsI/8AABazR86v/J/f4stMbCAAJqCspGFpezx4LyLF/KA5cAAAAC0Bn3B0R/8AACPDPgIiSN2ClbAAsBUG319Gh80ZACArGPVsI1JhRkolWdAmUfEAAAAYAZ9yakf/AAAjsisHEoqm0wDeRWV1eKCAAAAAb0GbdkmoQWyZTAhH//3hAAAEE3OT1SNWwcHw4C+yiAOKFesyd3FvZlkzma8yxpMQHgY1HxsMy8ucD5S+waKJPxo/pQJTMfBcAtAkPAPaJY4gqX5q30EGHxNxxtznXOzuhgxKQRcPhg0tvtv+sDa5TgAAACVBn5RFFSwj/wAAFrxOAuSi/kmyvCkcPVbTfAqCUrPT/6T9WrbhAAAAKwGftWpH/wAAI78a+jXkGt3M2+0MnmqkSPEls4rVURViODFgLNehnoLRbcAAAABnQZu6SahBbJlMCEf//eEAAAQTW9ICM+O/R863rjmHBfvYcDSTEi8QBfz4Z+rU4n53SriJzjZglHZILBo8wDQmATz3awIhfHMTkqZx2ZC7k7WhB0X5oVzFHD+ItkdAl9sJIbqU8n4OCQAAACZBn9hFFSwj/wAAFruK0eZEXixKDkKCPxBumt1vVbHfPwEPZjbttwAAABoBn/d0R/8AACPDPg542E9dILEC81+v5EEUEAAAABsBn/lqR/8AACO/Gs573gm2YrEYALkAboGKYgMAAABYQZv+SahBbJlMCEf//eEAAAQzW7RjbWfpRk8J/KPuiR4uh3jwMwzMr+rK/Ml1xnFtYQ9fgEoWZrZ5Y3Oilh0sDtDBP9QAVNmjgU2H/8gtursivq+do4LeWAAAACpBnhxFFSwj/wAAF0NHtw3/ziVCoQAX9GOpIzzfOFFDBW5j3e88ZV7E4sEAAAAkAZ47dEf/AAAkwxdoqaVl0M4tqZLJq773AAXPn3Cp9wyZNKY1AAAAJQGePWpH/wAAJLHMCW+pb7Ma
wEpCZjCX4VAAc3O/lCd6Ug9pQ+AAAACHQZoiSahBbJlMCEf//eEAAAQzQHaSUBjdyZe8acfLT3OMlveM3Ch9wih/VzJ8G5zRAClRh96n1WeMsgxsBRCoCSeklxJTZJzkLWEE9cKTQZryCFCKYnI74zRIrcfuGQ5RsHLUdKJSQxArg1gNEXYyW4V/+feTp20HIPU5mvEM+Nfb3PFkRtlgAAAAK0GeQEUVLCP/AAAXS4qUZI1jHuHc6XIe8Q0rMUXI8R7a8sAlkcCr+wIrbMEAAAAyAZ5/dEf/AAAkq/hcLpYOOlSpCxqhfdaen8jmgMMixpwpbFCmz+rtgTpB93sEnwP7+2AAAAAoAZ5hakf/AAAkrl1U4dt5Qsqf3DJnP+0j6Q/3vkl+AFbfTUEeCgIO6QAAAItBmmZJqEFsmUwIR//94QAABDUT0DWov5eEm8eZzttBVDP1wbmVqhQLh+XMQdYYIiSXzxCjg2PiniM68+OJFT8Lv8s+zuAgTnib/6gYs6jNfrz9JMi7JDJFq//+YfpuMZZTzOijGjxXLZ+oPwksMbd9QX+qb+0hYuA0sFIPVjJrZg110Ap0sHe7iYWIAAAAKkGehEUVLCP/AAAXRSs6vvObHXgAPz3vygA/vC/B+bxKxmCDM5dSkt8VKwAAACsBnqN0R/8AACTAmd3pXxFuwnc7PUw3apOaeu34sCsekBEOGqZ+xSZySnAhAAAANwGepWpH/wAADcT42r1ZEBnfvZ6PQM9UZ8MylGxT1NVuqgCgBJeii1+OI7jnlpsNFT6uKN1H/UkAAADCQZqpSahBbJlMCEf//eEAAAMBnCJ7ZQApQu+BU8EjkSHAkwr4qMnmIPHeqGbxjqx669JSVTRyNrMXBlN0pk3ftjq4SvFgIzxcrPg1YZ1SibDWoGm3vhEsNFNpZItFFUkaOrrX+9jy+QVKAXnBVWtIEp9TYeoCyZrJoi1cItJcBRCeJI67g8dmbjnNJGOL2TYeeyp0Q4BOJz4tubyCs1qicDRc88shXJIg9EiUvnHPl2zj4svfIWthZlhRDU7+O7mMSBkAAAA5QZ7HRRUsI/8AAAivIm8alMukOnXqeuCiuiZSRu7GJj9eTF6fmOiKKZFJqWuSwPkzjcR+xBREf468AAAAQwGe6GpH/wAADXtFiihkI2KAE+IJPZQBZWAQ7ockSgA8qBfV4mFclXdTSrSXYhCPfGHdFIg6EADYnn0+w0YjX9h02pAAAADCQZrtSahBbJlMCE///fEAAAMCstV/a8BPxQ0w04CheB6bdRyIbZoACNc3EVycfIMekhb2Fv//jYG300cLmg0KEaF8ba0GUn+/3j6h8E9DMtacBwU1DGzLzVxuXlvHbhmeOzHhVKczh6QDRb+fR9G7E6/Z5/mxbWvq6I0EyPMcPUFYt1qX28MWprOkEALwaurFzOzf3k5JRWthmD4wHxTt1nM+GNg3/byp0LBpqBj3sDnhNlH2ah8EqentqWGX7mhcsrsAAABTQZ8LRRUsI/8AABdDR7cNGiDIDNHpGaOYi3MxS3w1d/D9kRTuCkWCxzD1/3KexMwzKFDqi/nIbD5cC/U7BGvZxSw/DrQALnuNdzC7d8VXzj0ONKYAAAA1AZ8qdEf/AAAjrD57KGG3wCEPLOyk3MOELXzXOxyHaxxoxqe4m40Pqg+E3qNFi49zOEGDr6YAAAAvAZ8sakf/AAAkvwQlkkBmduM8bxxX3R1wbW1lQyfYqw3eGFNmepma5ruEcQi958EAAABYQZsxSahBbJlMCE///fEAAAMCsOjJVp7LimGeZwnNWGuq8sELPdfb6bAl9yv3n5GzF/AkUnOgmJMAa3//BbCpJAHw3gTWANFhUsasrfA/jQvHGvOMhoFonQAAAEFBn09FFSwj/wAAF0Kr6LQKM7RJKPC6Gtz+z/7IKqSaq8z5UAc8G1lE1xVr2Uzsc5MZd6Va4B6f0AMDCZkLFB7acQAAACYBn250R/8AACTDF2ip
pSp6NwLEHZ8+0wvvoVmk+JMlZ3nSpT5WfAAAADcBn3BqR/8AACSuXjN+6LVhU4ZMVp70ta27O05XGdagfbrmYE6dgf0hffqYI1wa3pnJQE98U+6wAAAAqkGbdUmoQWyZTAhP//3xAAADArKlz8rkngE1y0XDvxVo53GyEgaQTXMTRSK1U+KJ2LW0x1Q2kQun3mcPcQtlUqLtUQGthqHrfIEltNOPnSP5zXyFHAPjmMj8IZNUt/FMZuK9AvVKl+8DLalRQ1iI19goOQAszf94nNe8OUamP1M+QnNTApv6mDFxyHfi1H6Vqp8M4tHX0JvwsUxVvmIC2Tap8BxUDbzcwfN/AAAAQEGfk0UVLCP/AAAXQqwvTReVe6aDAbUpDpOFmkSTjwj6lgzoVLoJoXvezBUAJLA5dfMLM7EhWTJtiSUy6Efk0WAAAAA1AZ+ydEf/AAAkqenbUN15LtYBLfwOHm3/2kzkyuprNeE25aFyZVMqqiDqiIg9lMBHFO4lHaAAAAA0AZ+0akf/AAAkkjIcHKEiBeEqqMACV6GRT+WIPAIdRaL0DSe5y5o3Y4/vecN4sRY9iECaGwAAAKpBm7lJqEFsmUwIT//98QAAAwLC6OCgJft6G1I75ptqbLhjSWl6c+GM6jtn6gWuFCMimGk8FuDdcBK7As6eNFi0ebJmx5/1J6pQdye0AuwDxM3Tn7o5BnitKmN96YOW5HilKY5w0KF8P2c7RmLVXNZCYpNcSo/QrMX/qyYHv77mXAqeT/5oO2T1i0JFUVf1lGDADyTsP5U2VEVGrfJHz292krrx5NefdV3BOAAAADFBn9dFFSwj/wAAF+l8w2An9bfZ/uW9Q4ocmhcfGcIkvJaN42Tnj3IIGRpcrzKYPB9NAAAAIAGf9nRH/wAAJKnq+NeAb3Xr6RzG9BjLBy2e7VVGcf0xAAAAPgGf+GpH/wAAJb71PhqfgVIABLFPKhCzN0gxzvNVB4Zh4MSCWKgBObNOy+knJbQjhUQlg8MC55q5bnIMfLpgAAAAkEGb/UmoQWyZTAhP//3xAAADAsO4ItEQAxsuk/Cko+BHTO6aXkHb/OtJxUmctVI2MA96i8wztS8qtK5oMdzgj1fRABvvuxfGeyyFGk5qXsowDjS7Ztrlb5fvUu15TU2h3oFK+BomR32fHSwkpPYAZWLZtcd6x0tnRgO+KJH+zRPAFmkhdr8W33EVMp4DED9ewQAAADJBnhtFFSwj/wAAF+CHtw3kQtZ/1511sUOww/w3SKDMnhkCyokMtYOnLaVIVxZvtVNqQAAAAC0Bnjp0R/8AACXDF2oEX1xfKw5fzJXri0Oy8qz0lSses/aUIaUGcu4tIAxMJuEAAAAgAZ48akf/AAAlscwJb6lvwUOpg2yPpmlEn4iqgmdm14EAAAB2QZohSahBbJlMCEf//eEAAARTW7PYu3dgT2dis41lAbMM3TCUW4gzm974jvokXGkw1ba/kr3QAFf2dn8esV4TioPAdj5RWUT1Qugtib//vx89wxv973BvJSY2uL36kFIK0qB89zwkSQzGwR6AQ681TbQZpftNUAAAAEhBnl9FFSwj/wAAF+CH+3FgowrXhbXoxNfgXQak0HwFpvMLgDRtdGreYu6Jwxq+dHb1Mls6gqrfkMAEtKbNSG69WG1w31iP5YAAAAAeAZ5+dEf/AAAlq/hbeBKdaClv1KPbO8TroEL6RkWBAAAAJAGeYGpH/wAAJbHoNHOgNn1fgTFzCuRGgLv9840cvi/5TlInpgAAAE5BmmVJqEFsmUwIT//98QAAAwLFGBFcAFco0TwEyJ8stE33ob9nyEuKnkEXSURRalCEku7W9wT5EOgpvRz2E0KJaB975cXVyLrqfJ4p+4EAAABAQZ6DRRUsI/8AABfgh7cN4IMW1UPSZtxO0uGysP2jttQ0vcpXl/9z4GjwqN2Bf2qnC5dOUgBNLCGloDp+M+eoLAAAACsBnqJ0R/8AACWp6mdq5WVSNnif
pIXWlCkbq5LOVJ6FM7YABNJiq5bMEdalAAAAIQGepGpH/wAAJb8EJZIkyaAophWzIgs9LmlJ8A9vFLBA/wAAAFtBmqlJqEFsmUwIR//94QAABFM/ksJ4b9CJFgp4xjzqPrMgsQ8VM1Mo5KtVKIg8Q4glQpnULMJYGebCdi2w5aO4MSgFC6fkZ+SXpKRnExgs3dumnNsYBxcs0OzhAAAANEGex0UVLCP/AAAX6Jti5IOQwf7/Zpw65rQwMcgFAAHHwxRBsp/dA4e9ykPSt49/SK09uV8AAAAjAZ7mdEf/AAAlwJnd6YNROttQ+lRwi75huxWJlgkYgC0X0wsAAAAnAZ7oakf/AAAlvwQn2H1mgcR9nTlX4slJFMNVJI5A/2AwCvbfAd1gAAAAfEGa7EmoQWyZTAj//IQAABDa3Y3CAT98Mx3Qe5e6m1+ndlI+3p9x/9dROEXkDSZWMs8wpLs2wkbuvelahHsFU+pJlxliGNBkdEfX/vesmf5VTgBn19l6mdbIt93iTuMklO/3Xg/oZiKlosW211Zu4O9ecDVizHD6mgTdqcEAAABAQZ8KRRUsI/8AABfinvPxzCvBozSWG/PhKY1B8XQqs2YdLpHSX/DP+Bod1MdOezTjreqx+kbvI1yeOAJU5A/HTgAAADEBnytqR/8AAA4mPjo3FLQr8KOgIKiF27Fn1mq03LZWDf+dxrCQ0hV4GYE2ppg2qPnwAAAAZ0GbLUmoQWyZTAhP//3xAAADArXJJkuuZOuCkAbxpR+7IgvjjYETn5Bjv89OeOAQha672i4MrSYwu+CL37A8qtr+BioNwxCzRN2s3GCfrNYN0Rq2313r8UD7wcnUVsEkxKQfXN6Yj6sAAACSQZtRSeEKUmUwIT/98QAAAwLEo0g6q30gDS8WxHwkCrUfN25lpqBFgMJ/hN1WCRwBD1I8cheEGSwBRXO2nHwBDFRn8pW10iJIy0MhRHRPjq5Z17TBhNzbG0u2q10d7p2mUFaLT4y3Xh27KyjC6fdEXoBxseRt4zAJNSVBjQeSSEdlFTwIa7RL5IilFElWEEk6wWUAAAAuQZ9vRTRMI/8AABfoys/50XEWpG7j3HO/2Oi44Ue8J5btYJtqj+QrglS9fnmC7wAAADcBn450R/8AACXDPpjRNEABXp6yOO17UPA7/8Rqpb4+z82VtWf5UVlfaZDPV9BTbSKRy+57/TPgAAAAJAGfkGpH/wAAJa5f+zXeTdMpL1Nj5O+qTyd61K5EtEoS6RI9wAAAAFxBm5VJqEFomUwIT//98QAAAwLYvAxZmAnvZKHjjIXiyzj8jTOT/bh/bYaQ+xek4xL60aaY8vlynk7KB05LjH2GWEIuyXT6gAjg6vO0NCGe87USsA6xihtSHrFO3QAAAEhBn7NFESwj/wAAGInde3dTMkAH8+P6l41ytTtpXVW9HQZXvejhEpKHkyHAdNRVv1OWgy95uyGabzXg8rDJZbomUEEr6bT+YeAAAAAnAZ/SdEf/AAAmwuzhgndQj0Avn3HN2powDRezsLnXUccZcSQsEvcHAAAALQGf1GpH/wAAJq5ey993m4UTiulgoIFryYZ7TAS5VRTPUi+aF7qd7vLWkpJt5QAAAHxBm9lJqEFsmUwIR//94QAABHcUmq+sseNUAcPdPd/EVJs86Twn6S+O5yrpMEDgf6EJmY5PcWZufTUyp/2JEJm4l5vgZcBim7BpLmXT/eDXdo7XD8oKurcZTtjpcgU365VFQ5owuG4iTG+shU0XHXXmWVawehBX6U0eHcCAAAAAJ0Gf90UVLCP/AAAYf+votDz+Vsf4RT8ruVl4EFDdSmE5gLTh3XIGpQAAAB4BnhZ0R/8AACap6ccThXQICrm0pPfQt9f4+MuY7oEAAAAfAZ4Yakf/AAAmrl4zgXRi3KG8AmxObtnEEgVkBrRqwAAAAJhBmh1JqEFsmUwIT//98QAAAwLU7TIVAMzGFYd5SyxZ0BNqY3n6dsiUCEc/
+RlF1SLXI+Kn38XjxncflRckRJ/ZCSJP/JMgSM7ciZ/XR6kdh1jiDgcl1Ad6VOVgd1Yn/4K8thuAd07SFYEDDmqF2BQunJ/x4ZXQ0+Sow26ik5ZOuo8Nykw3cSSXbN90ZOVWUYu3RVc2A27smQAAADhBnjtFFSwj/wAAGH/r6LRDINg/X9yGD0SHHVDwm3uavD6DhTQ6hr2mrcwCWmzjfqAagWvhgFOHcAAAACQBnlp0R/8AACaSurGIjHsoDiC13WKgL7hhlBUtWlBZUmY7EnEAAAAzAZ5cakf/AAAmrl4zgXrI/WgqlACWFr1bcoTywkTqjKkXAhFb6Apj2EpujY3Ztee7Y12VAAAAZkGaQUmoQWyZTAhH//3hAAAEc0hDod3xEUxE7/WGVj/VAo5+I3q/Vgv10r1hn2+QD2nySqU2rPgDH1saG83PVUskKHpdoOpE20Dsf3KpQr1Q2blre5zyXHJU4KK8dmWzDXunNcJrgAAAACtBnn9FFSwj/wAAGH/sL1OtEj+YUX/lt5iH/i6zW6b8Ucaz1qvfjzPAXrs+AAAAIgGennRH/wAAJsCZMmk0idUmPyzDmVZl8jDKeIkEC4pbU18AAAAjAZ6Aakf/AAAmrl/7OBGE4gx4RV2RMKAuQdi5whJasOxlPZUAAAB9QZqESahBbJlMCEf//eEAAASZWaK6KP+EPt9qI4neDpmV7hEHRNdASQh98eHJQXVgjlx9Zt6CMYnN9//mrVv/drhdiJ1LfLT7Bf0QWw3UIkqNEbTexXq+Wba5q07jYregMPuMJGagI4JRQvEmNWQpaguiQZlyHzatym2GnDsAAAA9QZ6iRRUsI/8AABkpfS5jWnKpI4QSaw+8eJSoB0H/LNVNzs+aN1Uy3MAFtMQgCzxVPduJPtO0BdkfYVd7gAAAADcBnsNqR/8AACfZhLPVQx9v1dIvNPZosS1cJLb1eG+vVGyBSwd6eVcjDCXmAXEAISi4dxuAcfghAAAAZ0GayEmoQWyZTAhH//3hAAAEk0gy3+lIShFG40gCFBFLr4iXJYy2zq0z945J2nvlldyZxztgtmFcTA5UGWwMytMfVlpdhokHCzssxi3O926Kuh0z4rcwfxIR4bkPjjAVaXmLU1ykGfEAAAArQZ7mRRUsI/8AABkoypRkjWMkZDVcEYYqgEPQtOvfm+XK+xOo0JvLSVcBtwAAACcBnwV0R/8AACfeX2ipnfFXCiT+Vbje2W2p30ASeg+JABogfdFLQ20AAAAqAZ8Hakf/AAAnxBhPaBXipX/3hxGrwQ8XcjD8SnauEOvaVbASGPlAw46wAAAANEGbDEmoQWyZTAhH//3hAAAEk0g/pjm87Dh0Qq8qjkh5oo4XwAiLL4YhweHJf3nRfNW7kGAAAAAoQZ8qRRUsI/8AABkf6+i0PNPANw6y/aUd+BOJvZGkDvICdeOLJyesWQAAACgBn0l0R/8AACfbVjjRVXdjoOBale01qclMnJE6rEQcWpUuVDdlWHiwAAAAKgGfS2pH/wAAJ6g7kl+UmrbKzsiaXJbysN/LxbQ3HCd1oM1KBPRZRTrFgAAAAE1Bm1BJqEFsmUwIR//94QAABJNIQ6GBmIbymQlTyihrHgH+vmF8FWeB6yA+FicKXBfn0Atub6U0gJgU9bZ9JBGrl4yIW/qG+pbM27tooQAAACpBn25FFSwj/wAAGR/sLzqp6hHFnklxwPWoyn2oAMnEkfceM+2yTapCBtkAAAAmAZ+NdEf/AAAn21Y402Pve3E8OjmziZeEWGyrbc+ajRGtTlzGmnEAAAAoAZ+Pakf/AAAnxVXs9AyCkFK+VjvN5J/dYlO4cYgRnU1CrUlzjPnDbAAAAFlBm5RJqEFsmUwIT//98QAAAwL6TRkOlm0P4mPnxmQAknk893cGMl7dlcmSGOOl5opJqIMWIP6lY9mbeh97pNvaiqjFrJP6PlmVBONOEL15vpQtVhApX6HTxAAAAClBn7JFFSwj/wAA
Gcl8w2AgW1ZcG5/Al4vWCYhQKVtvIwFGI86xO99z4QAAACABn9F0R/8AACfAWZcQAu2g0vKJy18SmToq5dUVCDZNZQAAACsBn9NqR/8AACj5bjisVL1oASXjPgx9iO+3Nht/B3YDOdDiQ/Snf+ZrAE3mAAAASEGb2EmoQWyZTAhP//3xAAADAvpNF9G1HMIhKYRGm/TbgAXKwFt59AT3O98ItUaFHzU769S7HQ+YrhMf2VRpQ833xI/reiRS1QAAACtBn/ZFFSwj/wAAGb/sHqMappy6mjJWDHCxzE5prYV4aXMFkTbJwJ6HRS1sAAAAJgGeFXRH/wAAKPtXJihRg3AJPdehEd36SMv9Ub9NDPQV+YbvyseBAAAAJQGeF2pH/wAAKOVT0zx9qtFP4icjPgrRmSspsER/Gr54y0JBxwkAAABnQZocSahBbJlMCE///fEAAAMC+k0a1s/IIJISAaHzD++Q5Z7BqWrM4x96W8PD5DmKVFyQ3Na+5sV4ZwC8z8Ek5sQsBAj0rjzbh/kLzwRTyn8iXxQ9mSVEqZYp3rs/8+kVzAtKxDDUdAAAAC5BnjpFFSwj/wAAGb/r6LQLNTTtu0Ep5WbTJK9HihAC3lyZKPyccLWltOjW8GmBAAAAKgGeWXRH/wAAKPtXJihRg2+8wcD9gYpLd1Lp3U577+pwG46LL1wjXJUcIAAAACYBnltqR/8AACjIOszhwzmpY0Bv7A6XR2a1Sz9tKQl3AdtB/p2JnwAAAEFBmkBJqEFsmUwIR//94QAABLNIQ6Hd9QZd5HEhLyO3u5ib5DmKUxGqd7VuwyW5LcAC6cOfXMq0IdtnP6SvVlJzywAAAD5Bnn5FFSwj/wAAGb/sLz2wa+ZIIfrI/LvbfSPUSAGDkABwOnkTv9U0NnkEPDO4cm769X8qLypS5Ik4aqGcCAAAACgBnp10R/8AACj7VyYnaAf+1u8bkpxaARytVL/zPrNUJJBt8SGrcnseAAAAJQGen2pH/wAAKOVV1jaLKnOO5/lZQsE+iBztF8lhl9NnqAmJqYEAAACrQZqESahBbJlMCE///fEAAAMDDk0X0bUBe0AI0JYTGsL1iSo/leVlp7dU1FX09hpoHSJww8Ij2Zz1cZ7v9pgSQISzI8I6Zn9LmeU7G6ma8G9u6R79Ady4AJtgmVJvR6UpUdMXK4bvy/0FdEAB9DACKPxbR+VW593L+n7VLxbzsMDJ15B24RsW7TcgStdowY3aq+q5J0PcPDUYOjkP1jWflSkWjsRoRdE2y0QEAAAATkGeokUVLCP/AAAaaXzDYCfQ8/euLRT+5o59DNz7DcgzTdUXOmw5hhtB4E8ZAF9oPmCVdmYEdVDmTp2nccthUGBcGSm/XEY5ROLduuP93QAAACYBnsF0R/8AACj7WAoiVzp+M7HzcDZHhsbWXeWKw6bxhn0QRpiNUAAAACMBnsNqR/8AACoCZCQrB4WY3S6dhcdmngzGUDmkqU/trNiNUQAAAGdBmshJqEFsmUwIT//98QAAAwMOSMfDzR5YQAbVqszawsIXXVdYT9rIRN4ijM8jFVdWtDRzWymBxBLkjS1Zf5S0p3TG+cBltgf+yH5QFV+wIe2ZquJGVTTm5O2zGR8OyiwcupNNMeLhAAAAJUGe5kUVLCP/AAAaX+weoxtHCYeFGZA9rumk+fLXrZhpHAXRMuEAAAAzAZ8FdEf/AAAqApNST1jg803T2h5PT4sSpbn+FX+XEv8x3Z5f9wmpdaKrwvINsub5TruBAAAAIgGfB2pH/wAAKgVT0zx6hmdc6eT3t1TPqnnX+KZ0AoKLGqAAAABoQZsMSahBbJlMCEf//eEAAATVFJtNage9WdINy3HSzSFqw/w9HNL9hVfv6SDBPqbvTYMnSQACAijuXlTSHZ2A1/P/WRW5wr+z0RpnRgQM78DxknLVgeVnB6U7fRblV5dmgdPWTIBR2QIAAAAqQZ8qRRUsI/8AABpfQ70LZJu9KPCMPK59
s3j2sgM1Ce/IiMJMu96G267hAAAAJAGfSXRH/wAAKhtXJihWybEhf57+duhlGOiFwXbHQ+SpJp71+AAAACsBn0tqR/8AACnoOszhF8e4g3iGsxV6ZiPyPp9rioZstrlQ530uATZSCBkgAAAAbkGbUEmoQWyZTAhP//3xAAADAw5Izo/VuVAAWTDZaTlNJ57V/JuNnhljf0CR1UzHddAHFn2gBYC3a2OJdVD6Rx/H2r2t/qG9xqpJENTL2PQwbsCzfjKJ3FBXWJ5kBuxXypz+dgBqY7DZAu2Qlv6BAAAAOUGfbkUVLCP/AAAaX+weoxtGx9BYmKchyP//NBGh0kl5bNKgqj2GWsdYNvQWeOGJ5PXqq6Ih5gS7gQAAACsBn410R/8AACoeX2CDR3koIvHiBtv0QKIlI78qxjqN/9YsV5aG19MImNOBAAAAJAGfj2pH/wAAKgVUEB4MBhynNmPZQcpGrbwSY5ml5WfQ3ln3pwAAAGlBm5RJqEFsmUwIT//98QAAAwMPTUEuAsqj+zgVUv+v0GNgP8Urru8F3ANhHMkcke+GGN0VVMqK/v8Lztispf4xBWLjXrgDmUkX4gef37Ye2j/9Ua9om8ezgbX4qrervoa/vdU5sG9vMQwAAAApQZ+yRRUsI/8AABpgiJEuOcBc/JcAn6gMeGogHZxZbIICAYEkB4Mhv4EAAAAdAZ/RdEf/AAAqG1gC+nrLJLgntTzQIL6c+8TbaFMAAAAkAZ/Takf/AAAqGaUOTelCXutDkZT9UIN+0c+Dvk86IKlZl62gAAAAWUGb2EmoQWyZTAhP//3xAAADAyJIx8PNZjVytJP2bk2/AofwA2+NNi5f+oi3sutD6GJyC95h/8m380iGQa/kFhD8Vwz9m/jJjCfcTXn1ZN+THaUgIhvO0wPlAAAANEGf9kUVLCP/AAAbAIf5C32E5ot4FCYfCk2GcAbTvMvQAO6mSIi2v2eM0UgKk7r8E7D0GMEAAAAkAZ4VdEf/AAArPlnm8rA+8+K55a8mH0gib+fGNkwC7c8GVdMfAAAAJAGeF2pH/wAAKzmEs9xKxH/chsplAILQqFEiAaJC4M9yOh/1qQAAADNBmhxJqEFsmUwIT//98QAAAwMjTUD/zGkJURgOqeA1GPw3kLyN4OdTFAHioqG5uTyxSvoAAAAhQZ46RRUsI/8AABsAh7cNuRwv5dDXsZLuUndJFUiAU4yxAAAAKwGeWXRH/wAAKztXJia1T3lwAC6GyZj414kUQb6Jhr7vswjWIwreZwXwMeAAAAAqAZ5bakf/AAArOYSz3EcBxngFwNQrU3+mY45xCj7iigzb4isBZkCPkPdtAAAAM0GaQEmoQWyZTAhH//3hAAAE+aQeTQ9BXa91B8U/pQLHxRgn+Rh4xKkg5ZvxIosM7xzLgQAAADpBnn5FFSwj/wAAGwCHtw37paGaspSz5ViJCIoT5m1GHEgH+vidAALZzrxJ0qwYczRxHXNlYLiX+f61AAAAJAGenXRH/wAAKztXJihWyauyT7OgZ/6gBLAidyQQ4F2WY/XWoAAAADMBnp9qR/8AACs5hLPcSJpUac1m48kePpvot6IqFuzhBMriv/k6GSV7XTh9ZA2peM8qcIEAAABLQZqDSahBbJlMCE///fEAAAMDIlbeTaEJn5qAFjJeU6QuSgbUIvQNzyE04wz+ZVOO5VvPf5y4j0a6VYd237m/NizGPbsJMRaJ1pi4AAAAM0GeoUUVLCP/AAAbCYPN+Jb6K3UUH7lFE+4oTCAriQMHbdIHG2jXCifUTsSSUaHwKDm+tQAAACEBnsJqR/8AACslU9M9RpTMGktVya71zqBwShp0nfuCUKYAAABCQZrHSahBbJlMCEf//eEAAAT1f6OhgaSTroAP33lD/5edQnzPlUQ8hy87RC1xG8yOlgUSRa98III6KYug2uRDibkRAAAAJ0Ge5UUVLCP/AAAbCMp+X8gmh019Oc5eqI80TBr0XAPnQmwI
MwK2gQAAAC4BnwR0R/8AACs+X2ips2PRaAEfG6UBnbHpfAIdd0spMUJuqowpR1tUSgIy6/FBAAAAHAGfBmpH/wAAKyVV7PUvqTWE8OQXS8KQOkNzxYEAAAA2QZsJSahBbJlMFEx//IQAABNMBLNbRjRkFgbDnfvYisUPt8OdXJMhKbisebnvV1PqW0dz38XcAAAAHQGfKGpH/wAAKynzBxM44IKV5/BBpcZ1xQDmlCmAAAAMc21vb3YAAABsbXZoZAAAAAAAAAAAAAAAAAAAA+gAAA/IAAEAAAEAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAuddHJhawAAAFx0a2hkAAAAAwAAAAAAAAAAAAAAAQAAAAAAAA/IAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAQAAAAAJYAAABkAAAAAAAJGVkdHMAAAAcZWxzdAAAAAAAAAABAAAPyAAAAgAAAQAAAAALFW1kaWEAAAAgbWRoZAAAAAAAAAAAAAAAAAAAMgAAAMoAVcQAAAAAAC1oZGxyAAAAAAAAAAB2aWRlAAAAAAAAAAAAAAAAVmlkZW9IYW5kbGVyAAAACsBtaW5mAAAAFHZtaGQAAAABAAAAAAAAAAAAAAAkZGluZgAAABxkcmVmAAAAAAAAAAEAAAAMdXJsIAAAAAEAAAqAc3RibAAAALBzdHNkAAAAAAAAAAEAAACgYXZjMQAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAJYAZAASAAAAEgAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABj//wAAADZhdmNDAWQAH//hABlnZAAfrNlAmDPl4QAAAwABAAADAGQPGDGWAQAGaOvjyyLA/fj4AAAAABRidHJ0AAAAAAAAaR4AAGkeAAAAGHN0dHMAAAAAAAAAAQAAAMoAAAEAAAAAFHN0c3MAAAAAAAAAAQAAAAEAAAYwY3R0cwAAAAAAAADEAAAAAQAAAgAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAEAAAAAAIAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABAAAAAACAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABAAAAAACAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAA
AgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAEAAAAAAIAAAEAAAAAAQAAAgAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABAAAAAACAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAUAAAAAAQAAAgAAAAABAAAAAAAAAAEAAAEAAAAAAQAABQAAAAABAAACAAAAAAEAAAAAAAAAAQAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAQAAAAAAgAAAQAAAAABAAAFAAAAAAEAAAIAAAAAAQAAAAAAAAABAAABAAAAAAEAAAMAAAAAAQAAAQAAAAAcc3RzYwAAAAAAAAABAAAAAQAAAMoAAAABAAADPHN0c3oAAAAAAAAAAAAAAMoAAAR1AAAAWwAAACYAAAAYAAAALAAAAGYAAAAmAAAAFAAAACAAAABzAAAANgAAACEAAAAcAAAAUwAAACwAAAAsAAAATwAAACkAAAAeAAAAJQAAAE8AAAAlAAAAIAAAACEAAABrAAAANwAAABsAAAAgAAAAKQAAACUAAAAfAAAAIQAAAJYAAAApAAAAHwAAAB8AAABiAAAAOAAAACYAAAAjAAAAVwAAADMAAAA3AAAALwAAAE8AAAAnAAAAOQAAADoAAABBAAAALAAAADEAAAAcAAAAcwAAACkAAAAvAAAAawAAACoAAAAeAAAAHwAAAFwAAAAuAAAAKAAAACkAAACLAAAALwAAADYAAAAsAAAAjwAAAC4AAAAvAAAAOwAAAMYAAAA9AAAARwAAAMYAAABXAAAAOQAAADMAAABcAAAARQAAACoAAAA7AAAArgAAAEQAAAA5AAAAOAAAAK4AAAA1AAAAJAAAAEIAAACUAAAANgAAADEAAAAkAAAAegAAAEwAAAAiAAAAKAAAAFIAAABEAAAALwAAACUAAABfAAAAOAAAACcAAAArAAAAgAAAAEQAAAA1AAAAawAAAJYAAAAyAAAAOwAAACgAAABgAAAATAAAACsAAAAxAAAAgAAAACsAAAAiAAAAIwAAAJwAAAA8AAAAKAAAADcAAABqAAAALwAAACYAAAAnAAAAgQAAAEEAAAA7AAAAawAAAC8AAAArAAAALgAAADgAAAAsAAAALAAA
AC4AAABRAAAALgAAACoAAAAsAAAAXQAAAC0AAAAkAAAALwAAAEwAAAAvAAAAKgAAACkAAABrAAAAMgAAAC4AAAAqAAAARQAAAEIAAAAsAAAAKQAAAK8AAABSAAAAKgAAACcAAABrAAAAKQAAADcAAAAmAAAAbAAAAC4AAAAoAAAALwAAAHIAAAA9AAAALwAAACgAAABtAAAALQAAACEAAAAoAAAAXQAAADgAAAAoAAAAKAAAADcAAAAlAAAALwAAAC4AAAA3AAAAPgAAACgAAAA3AAAATwAAADcAAAAlAAAARgAAACsAAAAyAAAAIAAAADoAAAAhAAAAFHN0Y28AAAAAAAAAAQAAADAAAABidWR0YQAAAFptZXRhAAAAAAAAACFoZGxyAAAAAAAAAABtZGlyYXBwbAAAAAAAAAAAAAAAAC1pbHN0AAAAJal0b28AAAAdZGF0YQAAAAEAAAAATGF2ZjU4Ljc2LjEwMA==\" type=\"video/mp4\"/>\n",
       "        </video>\n",
       "        "
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Played: videos/dueling/rl-video-episode-0.mp4\n"
     ]
    }
   ],
   "source": [
    "import base64\n",
    "import glob\n",
    "import io\n",
    "import os\n",
    "\n",
    "from IPython.display import HTML, display\n",
    "\n",
    "\n",
    "def ipython_show_video(path: str) -> None:\n",
    "    \"\"\"Show a video at `path` within IPython Notebook.\"\"\"\n",
    "    if not os.path.isfile(path):\n",
    "        raise FileNotFoundError(\"Cannot access: {}\".format(path))\n",
    "\n",
    "    # Read the file in binary mode; a context manager closes it afterwards.\n",
    "    with open(path, \"rb\") as video_file:\n",
    "        video = video_file.read()\n",
    "    encoded = base64.b64encode(video)\n",
    "\n",
    "    display(HTML(\n",
    "        data=\"\"\"\n",
    "        <video width=\"320\" height=\"240\" alt=\"test\" controls>\n",
    "        <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\"/>\n",
    "        </video>\n",
    "        \"\"\".format(encoded.decode(\"ascii\"))\n",
    "    ))\n",
    "\n",
    "\n",
    "def show_latest_video(video_folder: str) -> str:\n",
    "    \"\"\"Show the most recently recorded video from video folder.\"\"\"\n",
    "    list_of_files = glob.glob(os.path.join(video_folder, \"*.mp4\"))\n",
    "    if not list_of_files:\n",
    "        raise FileNotFoundError(\"No mp4 files found in: {}\".format(video_folder))\n",
    "    # getmtime (last modification) matches the most recently recorded video.\n",
    "    latest_file = max(list_of_files, key=os.path.getmtime)\n",
    "    ipython_show_video(latest_file)\n",
    "    return latest_file\n",
    "\n",
    "\n",
    "latest_file = show_latest_video(video_folder=video_folder)\n",
    "print(\"Played:\", latest_file)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "drl",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
