{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "RL_Tutorial_MLSS_2020",
      "provenance": [],
      "collapsed_sections": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "ULdrhOaVbsdO"
      },
      "source": [
        "# Reinforcement Learning Tutorial\n",
        "\n",
        "<a href=\"https://colab.research.google.com/github/feryal/rl_mlss_2020/blob/master/RL_Tutorial_MLSS_2020.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
        "\n",
        "> Contact us at feryal@google.com, mwhoffman@google.com & bshahr@google.com for any questions/comments :)\n",
        ">\n",
        ">Special thanks to Gheorghe Comanici, Diana Borsa & Nando de Freitas.\n",
        ">\n",
        ">This tutorial is based on the [EEML 2020 RL tutorial](https://github.com/eemlcommunity/PracticalSessions2020/tree/master/rl) and extended to also cover policy-based methods.\n",
        "\n",
        "\n",
        "\n",
        "\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Dv-846KxIqPD",
        "colab_type": "text"
      },
      "source": [
        "This tutorial covers a number of important reinforcement learning (RL) algorithms and is split into four parts:\n",
        "\n",
        "  0. refresher on environments and agents\n",
        "  1. value-based methods: tabular \n",
        "  2. value-based methods: function approximation \n",
        "  3. policy-based methods\n",
        "\n",
        "We will first guide you through the general interaction between RL agents and environments, where the agent's goal is to take actions that maximize returns. Next, we will implement SARSA and $\\color{green}Q$-learning for a simple grid-world environment. The core ideas behind the latter are then scaled to more complex MDPs through function approximation, for which we provide a short introduction to deep RL and the DQN algorithm. We will then switch to policy gradients, implement the REINFORCE algorithm, and train it on the grid-world environment. Finally, we will see that these agents can be interfaced with other environments such as CartPole."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ffeeXVm4AuZ6",
        "colab_type": "text"
      },
      "source": [
        "# Overview"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kT8rkxTUAZzL",
        "colab_type": "text"
      },
      "source": [
        "The agent interacts with the environment in a loop, as shown in the diagram below. The environment defines a set of <font color='blue'>**actions**</font> that an agent can take. The agent takes an action informed by the <font color='orange'>**observations**</font> it receives, and gets a <font color='green'>**reward**</font> from the environment after each action. The goal in RL is to find an agent whose actions maximize the total accumulated reward obtained from the environment. \n",
        "\n",
        "\n",
        "<center><img src=\"https://drive.google.com/uc?id=1KktLm5mdWx1ORotxeYCq1WcQHkXzRT4F\" width=\"500\" /></center>\n",
        "\n",
        "\n",
        "In the first part of the tutorial we focus on <font color='green'>**value-based methods**</font>, where agents maintain value estimates for all state-action pairs and use those estimates to choose actions that maximize that <font color='green'>**value**</font> (instead of maintaining a policy directly, as in <font color='blue'>**policy gradient methods**</font>). \n",
        "\n",
        "We represent the <font color='green'>**action-value function**</font> (otherwise known as the $\\color{green}Q$-function) associated with following/employing a policy $\\pi$ in a given MDP as:\n",
        "\n",
        "$$ \\color{green}Q^{\\color{blue}{\\pi}}(\\color{red}{s},\\color{blue}{a}) = \\mathbb{E}_{\\tau \\sim P^{\\color{blue}{\\pi}}} \\left[ \\sum_t \\gamma^t \\color{green}{r_t}| s_0=\\color{red}s,a_0=\\color{blue}{a} \\right]$$\n",
        "\n",
        "where $\\tau = \\{\\color{red}{s_0}, \\color{blue}{a_0}, \\color{green}{r_0}, \\color{red}{s_1}, \\color{blue}{a_1}, \\color{green}{r_1}, \\cdots \\}$\n",
        "\n",
        "\n",
        "Recall that efficient value estimations are based on the famous **_Bellman Expectation Equation_**:\n",
        "\n",
        "$$ \\color{green}Q^\\color{blue}{\\pi}(\\color{red}{s},\\color{blue}{a}) =    \\sum_{\\color{red}{s'}\\in \\color{red}{\\mathcal{S}}} \n",
        "\\color{purple}P(\\color{red}{s'} |\\color{red}{s},\\color{blue}{a})\n",
        "\\left(\n",
        "  \\color{green}{R}(\\color{red}{s},\\color{blue}{a}, \\color{red}{s'}) \n",
        "  + \\gamma \\color{green}V^\\color{blue}{\\pi}(\\color{red}{s'}) \n",
        "  \\right)\n",
        "$$\n",
        "\n",
        "where $\\color{green}V^\\color{blue}{\\pi}$ is the expected $\\color{green}Q^\\color{blue}{\\pi}$ value for a particular state, i.e. $\\color{green}V^\\color{blue}{\\pi}(\\color{red}{s}) = \\sum_{\\color{blue}{a} \\in \\color{blue}{\\mathcal{A}}} \\color{blue}{\\pi}(\\color{blue}{a} |\\color{red}{s}) \\color{green}Q^\\color{blue}{\\pi}(\\color{red}{s},\\color{blue}{a})$."
      ]
    },
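    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check of the identity above (a toy example, not part of the gridworld): suppose a state $\\color{red}{s}$ has two actions with $\\color{green}Q^\\color{blue}{\\pi}(\\color{red}{s}, \\color{blue}{a_1}) = 1$ and $\\color{green}Q^\\color{blue}{\\pi}(\\color{red}{s}, \\color{blue}{a_2}) = 3$, and the policy is uniform, i.e. $\\color{blue}{\\pi}(\\color{blue}{a}|\\color{red}{s}) = 0.5$ for both actions. Then\n",
        "\n",
        "$$\\color{green}V^\\color{blue}{\\pi}(\\color{red}{s}) = 0.5 \\cdot 1 + 0.5 \\cdot 3 = 2,$$\n",
        "\n",
        "whereas a policy that is greedy with respect to $\\color{green}Q^\\color{blue}{\\pi}$ would always pick $\\color{blue}{a_2}$ in $\\color{red}{s}$, giving a state value of $3$."
      ]
    },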
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "xaJxoatMhJ71"
      },
      "source": [
        "## Installation and imports"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "ovuCuHCC78Zu"
      },
      "source": [
        "1. [Acme](https://github.com/deepmind/acme) is a library of reinforcement learning (RL) agents and agent building blocks. Acme strives to expose simple, efficient, and readable agents that serve both as reference implementations of popular algorithms and as strong baselines, while still providing enough flexibility to do novel research. The design of Acme also attempts to provide multiple points of entry to the RL problem at differing levels of complexity.\n",
        "\n",
        "\n",
        "2. [Sonnet](https://github.com/deepmind/sonnet) is a simple neural network library for TensorFlow.\n",
        "\n",
        "3. [dm_env](https://github.com/deepmind/dm_env): the DeepMind Environment API, which will be covered in more detail in the [Environment subsection](https://colab.research.google.com/drive/1oKyyhOFAFSBTpVnmuOm9HXh5D5ekqhh5#scrollTo=I6KuVGSk4uc9) below."
      ]
    },
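    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a preview of the `dm_env` API (a minimal sketch; the actual environment and agent objects are built later in this notebook), every interaction follows the same pattern:\n",
        "\n",
        "```python\n",
        "timestep = environment.reset()           # TimeStep(FIRST, None, None, observation)\n",
        "while not timestep.last():\n",
        "  action = agent.select_action(timestep.observation)\n",
        "  timestep = environment.step(action)    # TimeStep(MID or LAST, reward, discount, observation)\n",
        "```"
      ]
    },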
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "colab_type": "code",
        "id": "KH3O0zcXUeun",
        "colab": {}
      },
      "source": [
        "#@title Install requirements  { form-width: \"30%\" }\n",
        "\n",
        "!pip install dm-acme\n",
        "!pip install dm-acme[reverb]\n",
        "!pip install dm-acme[tf]\n",
        "!pip install dm-acme[envs]\n",
        "!pip install dm-env\n",
        "!sudo apt-get install -y xvfb ffmpeg\n",
        "!pip install imageio\n",
        "\n",
        "from IPython.display import clear_output\n",
        "clear_output()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "colab_type": "code",
        "id": "HJ74Id-8MERq",
        "colab": {}
      },
      "source": [
        "#@title Import modules  { form-width: \"30%\" }\n",
        "\n",
        "import IPython\n",
        "from typing import Callable, Optional, Sequence\n",
        "\n",
        "import acme\n",
        "from acme import environment_loop\n",
        "from acme import specs\n",
        "from acme import wrappers\n",
        "from acme.utils import tree_utils\n",
        "from acme.agents.tf import dqn\n",
        "# from acme.utils import counting\n",
        "from acme.utils import loggers\n",
        "import base64\n",
        "import collections\n",
        "import dm_env\n",
        "import enum\n",
        "# import functools\n",
        "import gym\n",
        "# import io\n",
        "import imageio\n",
        "import itertools\n",
        "import matplotlib.pyplot as plt\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "import random\n",
        "import sonnet as snt\n",
        "import tensorflow.compat.v2 as tf\n",
        "tf.enable_v2_behavior()\n",
        "import tensorflow_probability as tfp\n",
        "import time\n",
        "\n",
        "import warnings\n",
        "warnings.filterwarnings('ignore')\n",
        "\n",
        "\n",
        "\n",
        "np.set_printoptions(precision=3, suppress=1)\n",
        "\n",
        "plt.style.use('seaborn-notebook')\n",
        "plt.style.use('seaborn-whitegrid')\n",
        "\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HeGPIOMkUTEn",
        "colab_type": "text"
      },
      "source": [
        "# Part 0: Environment & Agent"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "I6KuVGSk4uc9"
      },
      "source": [
        "## Environment\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UhZwB__DPcyM",
        "colab_type": "text"
      },
      "source": [
        "\n",
        "For this practical session we will focus on a **simple grid-world** environment, which consists of a 9 x 10 grid of wall and empty cells, depicted in black and white, respectively. The smiling agent starts from an initial location and needs to navigate to the goal square.\n",
        "\n",
        "<center>\n",
        "<img src=\"https://drive.google.com/uc?id=163QdCqrPybJVVO0NhDxpun5O0YZmCnsI\" width=\"500\" />\n",
        "</center>\n",
        "\n",
        " \n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "inIAhwLKuHKr",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Implement GridWorld { form-width: \"30%\" }\n",
        "\n",
        "\n",
        "class ObservationType(enum.IntEnum):\n",
        "  STATE_INDEX = enum.auto()\n",
        "  AGENT_ONEHOT = enum.auto()\n",
        "  GRID = enum.auto()\n",
        "  AGENT_GOAL_POS = enum.auto()\n",
        "\n",
        "\n",
        "class GridWorld(dm_env.Environment):\n",
        "\n",
        "  def __init__(self,\n",
        "               layout,\n",
        "               start_state,\n",
        "               goal_state=None,\n",
        "               observation_type=ObservationType.STATE_INDEX,\n",
        "               discount=0.9,\n",
        "               penalty_for_walls=-5,\n",
        "               reward_goal=10,\n",
        "               max_episode_length=None,\n",
        "               randomize_goals=False):\n",
        "    \"\"\"Build a grid environment.\n",
        "\n",
        "    Simple gridworld defined by a map layout, a start and a goal state.\n",
        "\n",
        "    Layout should be a NxN grid, containing:\n",
        "      * 0: empty\n",
        "      * -1: wall\n",
        "      * Any other positive value: value indicates reward; episode will terminate\n",
        "\n",
        "    Args:\n",
        "      layout: NxN array of numbers, indicating the layout of the environment.\n",
        "      start_state: Tuple (y, x) of starting location.\n",
        "      goal_state: Optional tuple (y, x) of goal location. Will be randomly\n",
        "        sampled once if None.\n",
        "      observation_type: Enum observation type to use. One of:\n",
        "        * ObservationType.STATE_INDEX: int32 index of agent occupied tile.\n",
        "        * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the \n",
        "          agent is and 0 elsewhere.\n",
        "        * ObservationType.GRID: NxNx3 float32 grid of feature channels. \n",
        "          First channel contains walls (1 if wall, 0 otherwise), second the \n",
        "          agent position (1 if agent, 0 otherwise) and third goal position\n",
        "          (1 if goal, 0 otherwise)\n",
        "        * ObservationType.AGENT_GOAL_POS: float32 tuple with \n",
        "          (agent_y, agent_x, goal_y, goal_x)\n",
        "      discount: Discounting factor included in all Timesteps.\n",
        "      penalty_for_walls: Reward added when hitting a wall (should be negative).\n",
        "      reward_goal: Reward added when finding the goal (should be positive).\n",
        "      max_episode_length: If set, will terminate an episode after this many \n",
        "        steps.\n",
        "      randomize_goals: If true, randomize goal at every episode.\n",
        "    \"\"\"\n",
        "    if observation_type not in ObservationType:\n",
        "      raise ValueError('observation_type should be an ObservationType instance.')\n",
        "    self._layout = np.array(layout)\n",
        "    self._start_state = start_state\n",
        "    self._state = self._start_state\n",
        "    self._number_of_states = np.prod(np.shape(self._layout))\n",
        "    self._discount = discount\n",
        "    self._penalty_for_walls = penalty_for_walls\n",
        "    self._reward_goal = reward_goal\n",
        "    self._observation_type = observation_type\n",
        "    self._layout_dims = self._layout.shape\n",
        "    self._max_episode_length = max_episode_length\n",
        "    self._num_episode_steps = 0\n",
        "    self._randomize_goals = randomize_goals\n",
        "    if goal_state is None:\n",
        "      # Randomly sample goal_state if not provided\n",
        "      goal_state = self._sample_goal()\n",
        "    self.goal_state = goal_state\n",
        "\n",
        "  def _sample_goal(self):\n",
        "    \"\"\"Randomly sample reachable non-starting state.\"\"\"\n",
        "    # Sample a new goal\n",
        "    n = 0\n",
        "    max_tries = 1e5\n",
        "    while n < max_tries:\n",
        "      goal_state = tuple(np.random.randint(d) for d in self._layout_dims)\n",
        "      if goal_state != self._state and self._layout[goal_state] == 0:\n",
        "        # Reachable state found!\n",
        "        return goal_state\n",
        "      n += 1\n",
        "    raise ValueError('Failed to sample a goal state.')\n",
        "\n",
        "  @property\n",
        "  def layout(self):\n",
        "    return self._layout\n",
        "\n",
        "  @property\n",
        "  def number_of_states(self):\n",
        "    return self._number_of_states\n",
        "\n",
        "  @property\n",
        "  def goal_state(self):\n",
        "    return self._goal_state\n",
        "  \n",
        "  @property\n",
        "  def start_state(self):\n",
        "    return self._start_state\n",
        "  \n",
        "  @property\n",
        "  def state(self):\n",
        "    return self._state\n",
        "\n",
        "  def set_state(self, x, y):\n",
        "    self._state = (y, x)\n",
        "\n",
        "  @goal_state.setter\n",
        "  def goal_state(self, new_goal):\n",
        "    if new_goal == self._state or self._layout[new_goal] < 0:\n",
        "      raise ValueError('This is not a valid goal!')\n",
        "    # Zero out any other goal\n",
        "    self._layout[self._layout > 0] = 0\n",
        "    # Setup new goal location\n",
        "    self._layout[new_goal] = self._reward_goal\n",
        "    self._goal_state = new_goal\n",
        "\n",
        "  def observation_spec(self):\n",
        "    if self._observation_type is ObservationType.AGENT_ONEHOT:\n",
        "      return specs.Array(\n",
        "          shape=self._layout_dims,\n",
        "          dtype=np.float32,\n",
        "          name='observation_agent_onehot')\n",
        "    elif self._observation_type is ObservationType.GRID:\n",
        "      return specs.Array(\n",
        "          shape=self._layout_dims + (3,),\n",
        "          dtype=np.float32,\n",
        "          name='observation_grid')\n",
        "    elif self._observation_type is ObservationType.AGENT_GOAL_POS:\n",
        "      return specs.Array(\n",
        "          shape=(4,), dtype=np.float32, name='observation_agent_goal_pos')\n",
        "    elif self._observation_type is ObservationType.STATE_INDEX:\n",
        "      return specs.DiscreteArray(\n",
        "          self._number_of_states, dtype=int, name='observation_state_index')\n",
        "\n",
        "  def action_spec(self):\n",
        "    return specs.DiscreteArray(4, dtype=int, name='action')\n",
        "\n",
        "  def get_obs(self):\n",
        "    if self._observation_type is ObservationType.AGENT_ONEHOT:\n",
        "      obs = np.zeros(self._layout.shape, dtype=np.float32)\n",
        "      # Place agent\n",
        "      obs[self._state] = 1\n",
        "      return obs\n",
        "    elif self._observation_type is ObservationType.GRID:\n",
        "      obs = np.zeros(self._layout.shape + (3,), dtype=np.float32)\n",
        "      obs[..., 0] = self._layout < 0\n",
        "      obs[self._state[0], self._state[1], 1] = 1\n",
        "      obs[self._goal_state[0], self._goal_state[1], 2] = 1\n",
        "      return obs\n",
        "    elif self._observation_type is ObservationType.AGENT_GOAL_POS:\n",
        "      return np.array(self._state + self._goal_state, dtype=np.float32)\n",
        "    elif self._observation_type is ObservationType.STATE_INDEX:\n",
        "      y, x = self._state\n",
        "      return y * self._layout.shape[1] + x\n",
        "\n",
        "  def reset(self):\n",
        "    self._state = self._start_state\n",
        "    self._num_episode_steps = 0\n",
        "    if self._randomize_goals:\n",
        "      self.goal_state = self._sample_goal()\n",
        "    return dm_env.TimeStep(\n",
        "        step_type=dm_env.StepType.FIRST,\n",
        "        reward=None,\n",
        "        discount=None,\n",
        "        observation=self.get_obs())\n",
        "\n",
        "  def step(self, action):\n",
        "    y, x = self._state\n",
        "\n",
        "    if action == 0:  # up\n",
        "      new_state = (y - 1, x)\n",
        "    elif action == 1:  # right\n",
        "      new_state = (y, x + 1)\n",
        "    elif action == 2:  # down\n",
        "      new_state = (y + 1, x)\n",
        "    elif action == 3:  # left\n",
        "      new_state = (y, x - 1)\n",
        "    else:\n",
        "      raise ValueError(\n",
        "          'Invalid action: {} is not 0, 1, 2, or 3.'.format(action))\n",
        "\n",
        "    new_y, new_x = new_state\n",
        "    step_type = dm_env.StepType.MID\n",
        "    if self._layout[new_y, new_x] == -1:  # wall\n",
        "      reward = self._penalty_for_walls\n",
        "      discount = self._discount\n",
        "      new_state = (y, x)\n",
        "    elif self._layout[new_y, new_x] == 0:  # empty cell\n",
        "      reward = 0.\n",
        "      discount = self._discount\n",
        "    else:  # a goal\n",
        "      reward = self._layout[new_y, new_x]\n",
        "      discount = 0.\n",
        "      new_state = self._start_state\n",
        "      step_type = dm_env.StepType.LAST\n",
        "\n",
        "    self._state = new_state\n",
        "    self._num_episode_steps += 1\n",
        "    if (self._max_episode_length is not None and\n",
        "        self._num_episode_steps >= self._max_episode_length):\n",
        "      step_type = dm_env.StepType.LAST\n",
        "    return dm_env.TimeStep(\n",
        "        step_type=step_type,\n",
        "        reward=np.float32(reward),\n",
        "        discount=discount,\n",
        "        observation=self.get_obs())\n",
        "\n",
        "  def plot_grid(self, add_start=True):\n",
        "    plt.figure(figsize=(4, 4))\n",
        "    plt.imshow(self._layout <= -1, interpolation='nearest')\n",
        "    ax = plt.gca()\n",
        "    ax.grid(0)\n",
        "    plt.xticks([])\n",
        "    plt.yticks([])\n",
        "    # Add start/goal\n",
        "    if add_start:\n",
        "      plt.text(\n",
        "          self._start_state[1],\n",
        "          self._start_state[0],\n",
        "          r'$\\mathbf{S}$',\n",
        "          fontsize=16,\n",
        "          ha='center',\n",
        "          va='center')\n",
        "    plt.text(\n",
        "        self._goal_state[1],\n",
        "        self._goal_state[0],\n",
        "        r'$\\mathbf{G}$',\n",
        "        fontsize=16,\n",
        "        ha='center',\n",
        "        va='center')\n",
        "    h, w = self._layout.shape\n",
        "    for y in range(h - 1):\n",
        "      plt.plot([-0.5, w - 0.5], [y + 0.5, y + 0.5], '-k', lw=2)\n",
        "    for x in range(w - 1):\n",
        "      plt.plot([x + 0.5, x + 0.5], [-0.5, h - 0.5], '-k', lw=2)\n",
        "      \n",
        "  def plot_state(self, return_rgb=False):\n",
        "    self.plot_grid(add_start=False)\n",
        "    # Add the agent location\n",
        "    plt.text(\n",
        "        self._state[1],\n",
        "        self._state[0],\n",
        "        u'😃',\n",
        "        fontname='symbola',\n",
        "        fontsize=18,\n",
        "        ha='center',\n",
        "        va='center',\n",
        "    )\n",
        "    if return_rgb:\n",
        "      fig = plt.gcf()\n",
        "      plt.axis('tight')\n",
        "      plt.subplots_adjust(0, 0, 1, 1, 0, 0)\n",
        "      fig.canvas.draw()\n",
        "      data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)\n",
        "      w, h = fig.canvas.get_width_height()\n",
        "      data = data.reshape((h, w, 3))\n",
        "      plt.close(fig)\n",
        "      return data\n",
        "\n",
        "  def plot_policy(self, policy):\n",
        "    action_names = [\n",
        "        r'$\\uparrow$', r'$\\rightarrow$', r'$\\downarrow$', r'$\\leftarrow$'\n",
        "    ]\n",
        "    self.plot_grid()\n",
        "    plt.title('Policy Visualization')\n",
        "    h, w = self._layout.shape\n",
        "    for y in range(h):\n",
        "      for x in range(w):\n",
        "        # if ((y, x) != self._start_state) and ((y, x) != self._goal_state):\n",
        "        if (y, x) != self._goal_state:\n",
        "          action_name = action_names[policy[y, x]]\n",
        "          plt.text(x, y, action_name, ha='center', va='center')\n",
        "\n",
        "  def plot_greedy_policy(self, q):\n",
        "    greedy_actions = np.argmax(q, axis=2)\n",
        "    self.plot_policy(greedy_actions)\n",
        "    \n",
        "\n",
        "def build_gridworld_task(task,\n",
        "                         discount=0.9,\n",
        "                         penalty_for_walls=-5,\n",
        "                         observation_type=ObservationType.STATE_INDEX,\n",
        "                         max_episode_length=200):\n",
        "  \"\"\"Construct a particular Gridworld layout with start/goal states.\n",
        "\n",
        "  Args:\n",
        "      task: string name of the task to use. One of {'simple', 'obstacle', \n",
        "        'random_goal'}.\n",
        "      discount: Discounting factor included in all Timesteps.\n",
        "      penalty_for_walls: Reward added when hitting a wall (should be negative).\n",
        "      observation_type: Enum observation type to use. One of:\n",
        "        * ObservationType.STATE_INDEX: int32 index of agent occupied tile.\n",
        "        * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the \n",
        "          agent is and 0 elsewhere.\n",
        "        * ObservationType.GRID: NxNx3 float32 grid of feature channels. \n",
        "          First channel contains walls (1 if wall, 0 otherwise), second the \n",
        "          agent position (1 if agent, 0 otherwise) and third goal position\n",
        "          (1 if goal, 0 otherwise)\n",
        "        * ObservationType.AGENT_GOAL_POS: float32 tuple with \n",
        "          (agent_y, agent_x, goal_y, goal_x).\n",
        "      max_episode_length: If set, will terminate an episode after this many \n",
        "        steps.\n",
        "  \"\"\"\n",
        "  tasks_specifications = {\n",
        "      'simple': {\n",
        "          'layout': [\n",
        "              [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1],\n",
        "              [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1],\n",
        "              [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1],\n",
        "          ],\n",
        "          'start_state': (2, 2),\n",
        "          'goal_state': (7, 2)\n",
        "      },\n",
        "      'obstacle': {\n",
        "          'layout': [\n",
        "              [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1],\n",
        "              [-1, 0, 0, 0, 0, 0, -1, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, -1, 0, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1],\n",
        "              [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1],\n",
        "          ],\n",
        "          'start_state': (2, 2),\n",
        "          'goal_state': (2, 8)\n",
        "      },\n",
        "      'random_goal': {\n",
        "          'layout': [\n",
        "              [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1],\n",
        "              [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1],\n",
        "              [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1],\n",
        "              [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1],\n",
        "          ],\n",
        "          'start_state': (2, 2),\n",
        "          # 'randomize_goals': True\n",
        "      },\n",
        "  }\n",
        "  return GridWorld(\n",
        "      discount=discount,\n",
        "      penalty_for_walls=penalty_for_walls,\n",
        "      observation_type=observation_type,\n",
        "      max_episode_length=max_episode_length,\n",
        "      **tasks_specifications[task])\n",
        "\n",
        "\n",
        "def setup_environment(environment):\n",
        "  \"\"\"Returns the environment and its spec.\"\"\"\n",
        "  \n",
        "  # Make sure the environment outputs single-precision floats.\n",
        "  environment = wrappers.SinglePrecisionWrapper(environment)\n",
        "\n",
        "  # Grab the spec of the environment.\n",
        "  environment_spec = specs.make_environment_spec(environment)\n",
        "\n",
        "  return environment, environment_spec"
      ],
      "execution_count": null,
      "outputs": []
    },
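    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A note on the `STATE_INDEX` encoding implemented in `get_obs` above: a cell `(y, x)` maps to the integer `y * width + x`, so in the 9 x 10 layouts used here the start state `(2, 2)` is observed as index `2 * 10 + 2 = 22`."
      ]
    },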
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZizdE9SQS-cN",
        "colab_type": "text"
      },
      "source": [
        "\n",
        "We will use two distinct tabular GridWorlds:\n",
        "* `simple`, where the goal is at the bottom left of the grid and little navigation is required.\n",
        "* `obstacle`, where the goal is behind an obstacle that the agent must avoid.\n",
        "\n",
        "You can visualize the grid worlds by running the cell below. \n",
        "\n",
        "Note that **S** indicates the start state and **G** indicates the goal. \n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7Xdnh3Odc63Q",
        "colab_type": "code",
        "cellView": "both",
        "colab": {}
      },
      "source": [
        "# @title Visualise GridWorlds { form-width: \"30%\" }\n",
        "\n",
        "# Instantiate two tabular environments, a simple task, and one that involves\n",
        "# the avoidance of an obstacle.\n",
        "simple_grid = build_gridworld_task(\n",
        "    task='simple', observation_type=ObservationType.GRID)\n",
        "obstacle_grid = build_gridworld_task(\n",
        "    task='obstacle', observation_type=ObservationType.GRID)\n",
        "\n",
        "# Plot them.\n",
        "simple_grid.plot_grid()\n",
        "plt.title('Simple')\n",
        "\n",
        "obstacle_grid.plot_grid()\n",
        "plt.title('Obstacle');"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RTsiWgDSCL7C",
        "colab_type": "text"
      },
      "source": [
        "\n",
        "In this environment, the agent has four possible <font color='blue'>**actions**</font>: `up`, `right`, `down`, and `left`. The <font color='green'>**reward**</font> is `-5` for bumping into a wall, `+10` for reaching the goal, and `0` otherwise. The episode ends when the agent reaches the goal (or after `max_episode_length` steps); the **discount** on continuing steps is $\\gamma = 0.9$. \n",
        "\n",
        "Before we start building an agent to interact with this environment, let's first look at the types of objects the environment either returns (e.g. <font color='orange'>**observations**</font>) or consumes (e.g. <font color='blue'>**actions**</font>). The `environment_spec` will show you the form of the <font color='orange'>**observations**</font>, <font color='green'>**rewards**</font> and **discounts** that the environment exposes, as well as the form of the <font color='blue'>**actions**</font> that can be taken.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "rmKop4FECVV6",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Note: setup_environment is implemented in the same cell as GridWorld.\n",
        "environment, environment_spec = setup_environment(simple_grid)\n",
        "\n",
        "print('actions:\\n', environment_spec.actions, '\\n')\n",
        "print('observations:\\n', environment_spec.observations, '\\n')\n",
        "print('rewards:\\n', environment_spec.rewards, '\\n')\n",
        "print('discounts:\\n', environment_spec.discounts, '\\n')"
      ],
      "execution_count": null,
      "outputs": []
    },
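    {
      "cell_type": "code",
      "metadata": {
        "colab": {}
      },
      "source": [
        "# A quick illustration (not needed by later cells): for a DiscreteArray action\n",
        "# spec, any integer in [0, num_values) is a valid action, so a uniformly\n",
        "# random agent could act as follows.\n",
        "random_action = np.random.randint(environment_spec.actions.num_values)\n",
        "print('random valid action:', random_action)"
      ],
      "execution_count": null,
      "outputs": []
    },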
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0VVTmep2UK6U",
        "colab_type": "text"
      },
      "source": [
        "\n",
        "We first reset the environment to its initial state by calling the `reset()` method, which returns the first observation. \n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "rHden9m9FNPK",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "environment.reset()\n",
        "environment.plot_state() "
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pXb7u9epFWnX",
        "colab_type": "text"
      },
      "source": [
        "Now we want to take an action to interact with the environment. We do this by passing a valid action to the `dm_env.Environment.step()` method which returns a `dm_env.TimeStep` namedtuple with fields `(step_type, reward, discount, observation)`.\n",
        "\n",
        "Let's take an action and visualise the resulting state of the grid-world. "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LY1eopIWFe95",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Pick an action and see the state changing\n",
        "action = 2 #@param [\"0\", \"1\", \"2\", \"3\"] {type:\"raw\"}\n",
        "\n",
        "action = int(action) \n",
        "timestep = environment.step(action)  # pytype: dm_env.TimeStep\n",
        "environment.plot_state()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "pSFDZPksEGpl",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Implement the run loop  { form-width: \"30%\" }\n",
        "\n",
        "def run_loop(\n",
        "    environment: dm_env.Environment,\n",
        "    agent: acme.Actor,\n",
        "    num_episodes: Optional[int] = None,\n",
        "    num_steps: Optional[int] = None,\n",
        "    logger_time_delta: float = .25,\n",
        "    label: str = 'training_loop',\n",
        "    log_loss: bool = False,\n",
        "):\n",
        "  \"\"\"Perform the run loop.\n",
        "\n",
        "  We are following the Acme run loop.\n",
        "\n",
        "  Run the environment loop for `num_episodes` episodes. Each episode is itself\n",
        "  a loop which interacts first with the environment to get an observation and\n",
        "  then give that observation to the agent in order to retrieve an action. Upon\n",
        "  termination of an episode a new episode will be started. If the number of\n",
        "  episodes is not given then this will interact with the environment\n",
        "  infinitely.\n",
        "\n",
        "  Args:\n",
        "    environment: dm_env.Environment used to generate trajectories.\n",
        "    agent: acme.Actor for selecting actions in the run loop.\n",
        "    num_episodes: number of episodes to run the loop for. If `None` (default),\n",
        "      runs without limit.\n",
        "    num_steps: number of steps to run the loop for. If `None` (default), runs\n",
        "      without limit.\n",
        "    logger_time_delta: time interval (in seconds) between consecutive logging\n",
        "      steps.\n",
        "    label: optional label used at logging steps.\n",
        "    log_loss: whether to accumulate the agent's `last_loss` over each episode\n",
        "      and log its per-step average.\n",
        "\n",
        "  Returns:\n",
        "    all_returns: list of undiscounted episode returns.\n",
        "  \"\"\"\n",
        "  logger = loggers.TerminalLogger(label=label, time_delta=logger_time_delta)\n",
        "  iterator = range(num_episodes) if num_episodes else itertools.count()\n",
        "  all_returns = []\n",
        "  \n",
        "  num_total_steps = 0\n",
        "  for episode in iterator:\n",
        "    # Reset any counts and start the environment.\n",
        "    start_time = time.time()\n",
        "    episode_steps = 0\n",
        "    episode_return = 0\n",
        "    episode_loss = 0\n",
        "\n",
        "    timestep = environment.reset()\n",
        "    \n",
        "    # Make the first observation.\n",
        "    agent.observe_first(timestep)\n",
        "\n",
        "    # Run an episode.\n",
        "    while not timestep.last():\n",
        "      # Generate an action from the agent's policy and step the environment.\n",
        "      action = agent.select_action(timestep.observation)\n",
        "      timestep = environment.step(action)\n",
        "\n",
        "      # Have the agent observe the timestep and let the agent update itself.\n",
        "      agent.observe(action, next_timestep=timestep)\n",
        "      agent.update()\n",
        "\n",
        "      # Book-keeping.\n",
        "      episode_steps += 1\n",
        "      num_total_steps += 1\n",
        "      episode_return += timestep.reward\n",
        "\n",
        "      if log_loss:\n",
        "        episode_loss += agent.last_loss\n",
        "\n",
        "      if num_steps is not None and num_total_steps >= num_steps:\n",
        "        break\n",
        "\n",
        "    # Collect the results and combine with counts.\n",
        "    steps_per_second = episode_steps / (time.time() - start_time)\n",
        "    result = {\n",
        "        'episode': episode,\n",
        "        'episode_length': episode_steps,\n",
        "        'episode_return': episode_return,\n",
        "    }\n",
        "    if log_loss:\n",
        "      result['loss_avg'] = episode_loss / episode_steps\n",
        "\n",
        "    all_returns.append(episode_return)\n",
        "\n",
        "    # Log the given results.\n",
        "    logger.write(result)\n",
        "    \n",
        "    if num_steps is not None and num_total_steps >= num_steps:\n",
        "      break\n",
        "\n",
        "  return all_returns"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_gatpjQ8QA_H",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Implement the evaluation loop { form-width: \"30%\" }\n",
        "\n",
        "def evaluate(environment: dm_env.Environment,\n",
        "             agent: acme.Actor,\n",
        "             evaluation_episodes: int):\n",
        "  frames = []\n",
        "\n",
        "  for episode in range(evaluation_episodes):\n",
        "    timestep = environment.reset()\n",
        "    episode_return = 0\n",
        "    steps = 0\n",
        "    while not timestep.last():\n",
        "      frames.append(environment.plot_state(return_rgb=True))\n",
        "\n",
        "      action = agent.select_action(timestep.observation)\n",
        "      timestep = environment.step(action)\n",
        "      steps += 1\n",
        "      episode_return += timestep.reward\n",
        "    print(\n",
        "        f'Episode {episode} ended with reward {episode_return} in {steps} steps'\n",
        "    )\n",
        "  return frames\n",
        "\n",
        "def display_video(frames: Sequence[np.ndarray],\n",
        "                  filename: str = 'temp.mp4',\n",
        "                  frame_rate: int = 12):\n",
        "  \"\"\"Save and display video.\"\"\"\n",
        "  # Write the frames to a video.\n",
        "  with imageio.get_writer(filename, fps=frame_rate) as video:\n",
        "    for frame in frames:\n",
        "      video.append_data(frame)\n",
        "\n",
        "  # Read video and display the video.\n",
        "  video = open(filename, 'rb').read()\n",
        "  b64_video = base64.b64encode(video)\n",
        "  video_tag = ('<video  width=\"320\" height=\"240\" controls alt=\"test\" '\n",
        "               'src=\"data:video/mp4;base64,{0}\">').format(b64_video.decode())\n",
        "  return IPython.display.HTML(video_tag)\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_0YgLdsi3kXw",
        "colab_type": "text"
      },
      "source": [
        "## Agent\n",
        "\n",
        "We will implement both tabular and function-approximation agents. The tabular agents are written in pure Python; for the function-approximation agents we will use TensorFlow (v2).\n",
        "\n",
        "All agents will share the same interface from the Acme `Actor`. Here we borrow a figure from Acme to show how this interaction occurs:\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4MAYbEvtJ1kT",
        "colab_type": "text"
      },
      "source": [
        "### Agent interface\n",
        "\n",
        "\n",
        "<center><img src=\"https://drive.google.com/uc?id=1T7FTpA9RgDYFkciDFZK4brNyURZN_ZGp\" width=\"500\" /></center>\n",
        "\n",
        "Each agent implements the following functions:\n",
        "\n",
        "```python\n",
        "class Agent(acme.Actor):\n",
        "  def __init__(self, number_of_actions, number_of_states, ...):\n",
        "    \"\"\"Provides the agent the number of actions and number of states.\"\"\"\n",
        "\n",
        "  def select_action(self, observation):\n",
        "    \"\"\"Generates actions from observations.\"\"\"\n",
        "\n",
        "  def observe_first(self, timestep):\n",
        "    \"\"\"Records the initial timestep in a trajectory.\"\"\"\n",
        "  \n",
        "  def observe(self, action, next_timestep):\n",
        "    \"\"\"Records the transition which occurred from taking an action.\"\"\"\n",
        "\n",
        "  def update(self):\n",
        "    \"\"\"Updates the agent's internals to potentially change its behavior.\"\"\"\n",
        "```\n",
        "\n",
        "Remarks on the `observe()` function:\n",
        "\n",
        "1. In the last method, the `next_timestep` provides the `reward`, `discount`, and `observation` that resulted from selecting `action`.\n",
        "\n",
        "2. The `next_timestep.step_type` will be either `MID` or `LAST` and should be used to determine whether this is the last observation in the episode.\n",
        "\n",
        "3. The `next_timestep.step_type` cannot be `FIRST`; such a timestep should only ever be given to `observe_first()`.\n"
      ]
    },
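    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a minimal sketch of remark 2 (assumptions: a stand-in namedtuple instead of the real `dm_env.TimeStep`, and simplified step-type constants), an agent can branch on the step type like this:\n",
        "\n",
        "```python\n",
        "import collections\n",
        "\n",
        "# Stand-in for dm_env.TimeStep, for illustration only.\n",
        "TimeStep = collections.namedtuple(\n",
        "    'TimeStep', ['step_type', 'reward', 'discount', 'observation'])\n",
        "\n",
        "MID, LAST = 1, 2  # simplified step-type constants (hypothetical values)\n",
        "\n",
        "def is_terminal(timestep):\n",
        "  \"\"\"True if this is the last transition of an episode.\"\"\"\n",
        "  return timestep.step_type == LAST\n",
        "\n",
        "ts = TimeStep(step_type=LAST, reward=10.0, discount=0.0, observation=42)\n",
        "print(is_terminal(ts))  # terminal transition: no bootstrapping needed\n",
        "```\n",
        "\n",
        "On a terminal transition the environment's discount is `0`, so any bootstrapped target collapses to the reward alone.\n"
      ]
    },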
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7XD9bXC3UCHd",
        "colab_type": "text"
      },
      "source": [
        "### Random Agent\n",
        "\n",
        "We can just choose actions randomly to move around this environment."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0lU-ybzz4Ng7",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Implementation  { form-width: \"30%\" }\n",
        "\n",
        "class RandomAgent(acme.Actor):\n",
        "\n",
        "  def __init__(self, environment_spec):\n",
        "    \"\"\"Gets the number of available actions from the environment spec.\"\"\"\n",
        "    self._num_actions = environment_spec.actions.num_values\n",
        "\n",
        "  def select_action(self, observation):\n",
        "    \"\"\"Selects an action uniformly at random.\"\"\"\n",
        "    return np.random.randint(self._num_actions)    \n",
        "    \n",
        "  def observe_first(self, timestep):\n",
        "    \"\"\"Does not record as the RandomAgent has no use for data.\"\"\"\n",
        "    pass\n",
        "\n",
        "  def observe(self, action, next_timestep):\n",
        "    \"\"\"Does not record as the RandomAgent has no use for data.\"\"\"\n",
        "    pass\n",
        "\n",
        "  def update(self):    \n",
        "    \"\"\"Does not update as the RandomAgent does not learn from data.\"\"\"\n",
        "    pass"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "oxjzoRO03jGH",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Visualisation { form-width: \"30%\" }\n",
        "\n",
        "# Create the agent by giving it the action space specification.\n",
        "agent = RandomAgent(environment_spec)\n",
        "\n",
        "# Run the agent in the evaluation loop, which returns the frames.\n",
        "frames = evaluate(environment, agent, evaluation_episodes=1)\n",
        "\n",
        "# Visualize the random agent's episode.\n",
        "display_video(frames)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YPc0CrguF4GV",
        "colab_type": "text"
      },
      "source": [
        "# Part 1 - Value-based methods: Model-free Tabular Agents\n",
        "\n",
        "The first set of exercises is based on the simple case where the number of states is small enough for our agents to maintain a table of values for every state they will ever encounter: hence the name _tabular_.\n",
        "\n",
        "We will cover two basic RL tabular algorithms:\n",
        "- **On-policy**: SARSA \n",
        "- **Off-policy**: $\\color{green}Q$-learning\n",
        "\n",
        "Tabular agents expose a property `q_values` which returns a matrix of $\\color{green}Q$-values\n",
        "of shape (`num_states`, `num_actions`).\n",
        "\n",
        "In particular, we will consider the case where the GridWorld has a fixed layout, and the goal is always at the same location, hence the state is fully determined by the location of the agent. As such, the <font color='orangered'>**observation**</font> from the environment is changed to be an integer corresponding to each one of the 90 locations on the grid. Notice the different observation specification below.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "zL8J6nVc2zlq",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Create the environment.\n",
        "grid = build_gridworld_task(\n",
        "    task='simple',\n",
        "    observation_type=ObservationType.STATE_INDEX,  # Notice the difference here.\n",
        "    max_episode_length=200)\n",
        "environment, environment_spec = setup_environment(grid)\n",
        "\n",
        "# Notice the difference between this observation specification and that above.\n",
        "print('observation specification:\\n', environment_spec.observations)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FA8FRfY-Dsth",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Implement helpers for value visualisation  { form-width: \"30%\" }\n",
        "\n",
        "map_from_action_to_subplot = lambda a: (2, 6, 8, 4)[a]\n",
        "map_from_action_to_name = lambda a: (\"up\", \"right\", \"down\", \"left\")[a]\n",
        "\n",
        "def plot_values(values, colormap='pink', vmin=-1, vmax=10):\n",
        "  plt.imshow(values, interpolation=\"nearest\", cmap=colormap, vmin=vmin, vmax=vmax)\n",
        "  plt.yticks([])\n",
        "  plt.xticks([])\n",
        "  plt.colorbar(ticks=[vmin, vmax])\n",
        "\n",
        "def plot_state_value(action_values, epsilon=0.1):\n",
        "  q = action_values\n",
        "  fig = plt.figure(figsize=(4, 4))\n",
        "  vmin = np.min(action_values)\n",
        "  vmax = np.max(action_values)\n",
        "  v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1)\n",
        "  plot_values(v, colormap='summer', vmin=vmin, vmax=vmax)\n",
        "  plt.title(\"$v(s)$\")\n",
        "\n",
        "def plot_action_values(action_values, epsilon=0.1):\n",
        "  q = action_values\n",
        "  fig = plt.figure(figsize=(8, 8))\n",
        "  fig.subplots_adjust(wspace=0.3, hspace=0.3)\n",
        "  vmin = np.min(action_values)\n",
        "  vmax = np.max(action_values)\n",
        "  dif = vmax - vmin\n",
        "  for a in [0, 1, 2, 3]:\n",
        "    plt.subplot(3, 3, map_from_action_to_subplot(a))\n",
        "    \n",
        "    plot_values(q[..., a], vmin=vmin - 0.05*dif, vmax=vmax + 0.05*dif)\n",
        "    action_name = map_from_action_to_name(a)\n",
        "    plt.title(r\"$q(s, \\mathrm{\" + action_name + r\"})$\")\n",
        "    \n",
        "  plt.subplot(3, 3, 5)\n",
        "  v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1)\n",
        "  plot_values(v, colormap='summer', vmin=vmin, vmax=vmax)\n",
        "  plt.title(\"$v(s)$\")\n",
        "      \n",
        "\n",
        "def smooth(x, window=10):\n",
        "  return x[:window*(len(x)//window)].reshape(len(x)//window, window).mean(axis=1)\n",
        "  \n",
        "def plot_stats(stats, window=10):\n",
        "  plt.figure(figsize=(16,4))\n",
        "  plt.subplot(121)\n",
        "  xline = range(0, len(stats.episode_lengths), window)\n",
        "  plt.plot(xline, smooth(stats.episode_lengths, window=window))\n",
        "  plt.ylabel('Episode Length')\n",
        "  plt.xlabel('Episode Count')\n",
        "  plt.subplot(122)\n",
        "  plt.plot(xline, smooth(stats.episode_rewards, window=window))\n",
        "  plt.ylabel('Episode Return')\n",
        "  plt.xlabel('Episode Count')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EcrhrNnIr3kX",
        "colab_type": "text"
      },
      "source": [
        "## 1.1 On-policy control: SARSA Agent\n",
        "In this section, we focus on control RL algorithms, which perform **evaluation** and **improvement** of the policy synchronously; that is, the policy being evaluated improves as the agent uses it to interact with the environment.\n",
        "\n",
        "\n",
        "The first algorithm we will look at is SARSA. This is an **on-policy algorithm**, i.e. data collection is done with the very policy we are trying to optimize. \n",
        "\n",
        "As discussed during lectures, a greedy policy with respect to a given $\\color{Green}Q$ fails to explore the environment as needed; we will use instead an $\\epsilon$-greedy policy with respect to $\\color{Green}Q$.\n",
        "\n",
        "### SARSA Algorithm\n",
        "\n",
        "**Input:**\n",
        "- $\\epsilon \\in (0, 1)$ the probability of taking a random action, and\n",
        "- $\\alpha > 0$ the step size, also known as learning rate.\n",
        "\n",
        "**Initialize:** $\\color{green}Q(\\color{red}{s}, \\color{blue}{a})$ for all $\\color{red}{s}$ ∈ $\\mathcal{\\color{red}S}$ and $\\color{blue}a$ ∈ $\\mathcal{\\color{blue}A}$\n",
        "\n",
        "**Loop forever:**\n",
        "\n",
        "1. Get $\\color{red}s \\gets{}$current (non-terminal) state\n",
        " \n",
        "2. Select $\\color{blue}a \\gets{} \\text{epsilon_greedy}(\\color{green}Q(\\color{red}s, \\cdot))$\n",
        " \n",
        "3. Step in the environment by passing the selected action $\\color{blue}a$\n",
        "\n",
        "4. Observe resulting reward $\\color{green}r$, discount $\\gamma$, and state $\\color{red}{s'}$\n",
        "\n",
        "5. Compute TD error: $\\Delta \\color{green}Q \\gets \n",
        "\\color{green}r + \\gamma \\color{green}Q(\\color{red}{s'}, \\color{blue}{a'}) − \\color{green}Q(\\color{red}s, \\color{blue}a)$, <br> where $\\color{blue}{a'} \\gets \\text{epsilon_greedy}(\\color{green}Q(\\color{red}{s'}, \\cdot))$\n",
        "\n",
        "6. Update $\\color{green}Q(\\color{red}s, \\color{blue}a) \\gets \\color{green}Q(\\color{red}s, \\color{blue}a) + \\alpha \\Delta \\color{green}Q$\n"
      ]
    },
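    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick worked example (with made-up numbers): suppose the agent takes a step that bumps into a wall, so $\\color{green}r = -5$ and $\\gamma = 0.9$, the current estimate is $\\color{green}Q(\\color{red}s, \\color{blue}a) = 1$, and the $\\epsilon$-greedy sample at the next state gives $\\color{green}Q(\\color{red}{s'}, \\color{blue}{a'}) = 2$. Then\n",
        "\n",
        "$$\\Delta \\color{green}Q = -5 + 0.9 \\times 2 - 1 = -4.2,$$\n",
        "\n",
        "and with step size $\\alpha = 0.1$ the table entry is updated to $\\color{green}Q(\\color{red}s, \\color{blue}a) = 1 + 0.1 \\times (-4.2) = 0.58$.\n"
      ]
    },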
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "r7oR18EUI-QG",
        "colab_type": "text"
      },
      "source": [
        "### Implement epsilon-greedy"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xNfVHzosN2P0",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "# @title **[Coding task]** Epsilon-greedy policy { form-width: \"30%\" }\n",
        "\n",
        "def epsilon_greedy(\n",
        "    q_values_at_s: np.ndarray,  # Q-values in state s: Q(s, :).\n",
        "    epsilon: float = 0.1,  # Probability of taking a random action.\n",
        "):\n",
        "  \"\"\"Return an epsilon-greedy action sample.\"\"\"\n",
        "  pass"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XWqlIWbwN7Mk",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "# @title **[Solution]** Epsilon-greedy policy { form-width: \"30%\" }\n",
        "\n",
        "def epsilon_greedy(\n",
        "    q_values_at_s: np.ndarray,  # Q-values in state s: Q(s, :).\n",
        "    epsilon: float = 0.1,  # Probability of taking a random action.\n",
        "):\n",
        "  \"\"\"Return an epsilon-greedy action sample.\"\"\"\n",
        "  if epsilon < np.random.random():\n",
        "    # Greedy: Pick action with the largest Q-value.\n",
        "    return np.argmax(q_values_at_s)\n",
        "  else:\n",
        "    # Get the number of actions from the size of the given vector of Q-values.\n",
        "    num_actions = np.array(q_values_at_s).shape[-1]\n",
        "    return np.random.randint(num_actions)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1bgnm8JsJFNC",
        "colab_type": "text"
      },
      "source": [
        "### Implement SARSA"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7bmAV4Kcr7Zz",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title **[Coding task]** SARSA Agent  { form-width: \"30%\" }\n",
        "\n",
        "class SarsaAgent(acme.Actor):\n",
        "\n",
        "  def __init__(self,\n",
        "               environment_spec: specs.EnvironmentSpec,\n",
        "               epsilon: float,\n",
        "               step_size: float = 0.1\n",
        "               ):\n",
        "    \n",
        "    # Get number of states and actions from the environment spec.\n",
        "    self._num_states = environment_spec.observations.num_values\n",
        "    self._num_actions = environment_spec.actions.num_values\n",
        "\n",
        "    # Create the table of Q-values, all initialized at zero.\n",
        "    self._q = np.zeros((self._num_states, self._num_actions))\n",
        "\n",
        "    # Store algorithm hyper-parameters.\n",
        "    self._step_size = step_size\n",
        "    self._epsilon = epsilon\n",
        "\n",
        "    # Containers you may find useful.\n",
        "    self._state = None\n",
        "    self._action = None\n",
        "    self._next_state = None\n",
        "    \n",
        "  @property\n",
        "  def q_values(self):\n",
        "    return self._q\n",
        "\n",
        "  def select_action(self, observation):\n",
        "    return epsilon_greedy(self._q[observation], self._epsilon)\n",
        "    \n",
        "  def observe_first(self, timestep):\n",
        "    # Set current state.\n",
        "    self._state = timestep.observation\n",
        "\n",
        "  def observe(self, action, next_timestep):\n",
        "    # Unpacking the timestep to lighten notation.\n",
        "    s = self._state\n",
        "    a = action\n",
        "    r = next_timestep.reward\n",
        "    g = next_timestep.discount\n",
        "    next_s = next_timestep.observation\n",
        "    \n",
        "    # ============ YOUR CODE HERE =============\n",
        "    # Compute the on-policy Q-value update.\n",
        "    # self._td_error =\n",
        "    pass\n",
        "\n",
        "  def update(self):\n",
        "    # ============ YOUR CODE HERE =============\n",
        "    # Update the Q-value table.\n",
        "    pass"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "JtlH1tU7sCEm",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title **[Solution]** SARSA Agent { form-width: \"30%\" }\n",
        "\n",
        "class SarsaAgent(acme.Actor):\n",
        "\n",
        "  def __init__(self,\n",
        "               environment_spec: specs.EnvironmentSpec,\n",
        "               epsilon: float,\n",
        "               step_size: float = 0.1\n",
        "               ):\n",
        "    \n",
        "    # Get number of states and actions from the environment spec.\n",
        "    self._num_states = environment_spec.observations.num_values\n",
        "    self._num_actions = environment_spec.actions.num_values\n",
        "\n",
        "    # Create the table of Q-values, all initialized at zero.\n",
        "    self._q = np.zeros((self._num_states, self._num_actions))\n",
        "\n",
        "    # Store algorithm hyper-parameters.\n",
        "    self._step_size = step_size\n",
        "    self._epsilon = epsilon\n",
        "\n",
        "    # Containers you may find useful.\n",
        "    self._state = None\n",
        "    self._action = None\n",
        "    self._next_state = None\n",
        "    \n",
        "  @property\n",
        "  def q_values(self):\n",
        "    return self._q\n",
        "\n",
        "  def select_action(self, observation):\n",
        "    return epsilon_greedy(self._q[observation], self._epsilon)\n",
        "    \n",
        "  def observe_first(self, timestep):\n",
        "    # Set current state.\n",
        "    self._state = timestep.observation\n",
        "\n",
        "  def observe(self, action, next_timestep):\n",
        "    # Unpacking the timestep to lighten notation.\n",
        "    s = self._state\n",
        "    a = action\n",
        "    r = next_timestep.reward\n",
        "    g = next_timestep.discount\n",
        "    next_s = next_timestep.observation\n",
        "\n",
        "    # Compute the action that would be taken from the next state.\n",
        "    next_a = self.select_action(next_s)\n",
        "    \n",
        "    # Compute the on-policy Q-value update.\n",
        "    self._action = a\n",
        "    self._next_state = next_s\n",
        "    self._td_error = r + g * self._q[next_s, next_a] - self._q[s, a]\n",
        "\n",
        "  def update(self):\n",
        "    # Optional unpacking to lighten notation.\n",
        "    s = self._state\n",
        "    a = self._action\n",
        "\n",
        "    # Update the Q-value table value at (s, a).\n",
        "    self._q[s, a] += self._step_size * self._td_error\n",
        "\n",
        "    # Update the current state.\n",
        "    self._state = self._next_state"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "K8eBOcXZu1fM",
        "colab_type": "text"
      },
      "source": [
        "### **Task**: Run your SARSA agent on the `obstacle` environment\n",
        "\n",
        "This environment is similar to the Cliff-walking example from [Sutton & Barto](http://incompleteideas.net/book/RLbook2018.pdf) and allows us to see the different policies learned by on-policy vs off-policy methods."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xKYEB2d2uGaa",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "num_steps = 1e5 #@param {type:\"number\"}\n",
        "num_steps = int(num_steps)\n",
        "\n",
        "# Create the environment.\n",
        "grid = build_gridworld_task(task='obstacle')\n",
        "environment, environment_spec = setup_environment(grid)\n",
        "\n",
        "# Create the agent.\n",
        "agent = SarsaAgent(environment_spec, epsilon=0.1, step_size=0.1)\n",
        "\n",
        "# Run the experiment; run_loop returns the per-episode returns.\n",
        "returns = run_loop(environment=environment, agent=agent, num_steps=num_steps)\n",
        "print('AFTER {0:,} STEPS ...'.format(num_steps))\n",
        "\n",
        "# Get the Q-values and reshape them to recover grid-like structure of states.\n",
        "q_values = agent.q_values\n",
        "grid_shape = grid.layout.shape\n",
        "q_values = q_values.reshape([*grid_shape, -1])\n",
        "\n",
        "# Visualize the value and Q-value tables.\n",
        "plot_action_values(q_values, epsilon=1.)\n",
        "\n",
        "# Visualize the greedy policy.\n",
        "environment.plot_greedy_policy(q_values)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pFGX_zGcvb8D",
        "colab_type": "text"
      },
      "source": [
        "## 1.2 Off-policy control: Q-learning Agent\n",
        "\n",
        "Reminder: $\\color{green}Q$-learning is a very powerful and general algorithm that enables control (finding the optimal policy/value function) both on- and off-policy.\n",
        "\n",
        "**Initialize** $\\color{green}Q(\\color{red}{s}, \\color{blue}{a})$ for all $\\color{red}{s} \\in \\color{red}{\\mathcal{S}}$ and $\\color{blue}{a} \\in \\color{blue}{\\mathcal{A}}$\n",
        "\n",
        "**Loop forever**:\n",
        "\n",
        "1. Get $\\color{red}{s} \\gets{}$current (non-terminal) state\n",
        " \n",
        "2. Select $\\color{blue}{a} \\gets{} \\text{behaviour_policy}(\\color{red}{s})$\n",
        " \n",
        "3. Step in the environment by passing the selected action $\\color{blue}{a}$\n",
        "\n",
        "4. Observe resulting reward $\\color{green}{r}$, discount $\\gamma$, and state $\\color{red}{s'}$\n",
        "\n",
        "5. Compute the TD error: $\\Delta \\color{green}Q \\gets \\color{green}{r} + \\gamma \\color{green}Q(\\color{red}{s'}, \\color{blue}{a'}) − \\color{green}Q(\\color{red}{s}, \\color{blue}{a})$, <br>\n",
        "where $\\color{blue}{a'} \\gets \\arg\\max_{\\color{blue}{\\mathcal A}} \\color{green}Q(\\color{red}{s'}, \\cdot)$\n",
        "\n",
        "6. Update $\\color{green}Q(\\color{red}{s}, \\color{blue}{a}) \\gets \\color{green}Q(\\color{red}{s}, \\color{blue}{a}) + \\alpha \\Delta \\color{green}Q$\n",
        "\n",
        "Notice that the actions $\\color{blue}{a}$ and $\\color{blue}{a'}$ are not selected with the same policy, which is why this algorithm is **off-policy**."
      ]
    },
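    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see the difference from SARSA concretely (with made-up numbers): suppose $\\color{green}r = -5$, $\\gamma = 0.9$, and the Q-values at the next state are $\\color{green}Q(\\color{red}{s'}, \\cdot) = [2, 4, 0, 1]$. SARSA bootstraps from whichever $\\color{blue}{a'}$ its $\\epsilon$-greedy policy happens to sample (e.g. the value $2$), whereas $\\color{green}Q$-learning always bootstraps from the maximum, giving the target $-5 + 0.9 \\times 4 = -1.4$ regardless of which action the behaviour policy actually takes next.\n"
      ]
    },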
    {
      "cell_type": "code",
      "metadata": {
        "id": "I6s820jAwoVA",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title **[Coding task]** Q-Learning Agent  { form-width: \"30%\" }\n",
        "\n",
        "QValues = np.ndarray\n",
        "Action = int\n",
        "# A value-based policy takes the Q-values at a state and returns an action.\n",
        "ValueBasedPolicy = Callable[[QValues], Action]\n",
        "\n",
        "class QLearningAgent(acme.Actor):\n",
        "\n",
        "  def __init__(self,\n",
        "               environment_spec: specs.EnvironmentSpec,\n",
        "               behaviour_policy: ValueBasedPolicy,\n",
        "               step_size: float = 0.1,\n",
        "               ):\n",
        "    \n",
        "    # Get number of states and actions from the environment spec.\n",
        "    self._num_states = environment_spec.observations.num_values\n",
        "    self._num_actions = environment_spec.actions.num_values\n",
        "\n",
        "    # Create the table of Q-values, all initialized at zero.\n",
        "    self._q = np.zeros((self._num_states, self._num_actions))\n",
        "\n",
        "    # Store algorithm hyper-parameters.\n",
        "    self._step_size = step_size\n",
        "\n",
        "    # Store behavior policy.\n",
        "    self._behaviour_policy = behaviour_policy\n",
        "\n",
        "    # Containers you may find useful.\n",
        "    self._state = None\n",
        "    self._action = None\n",
        "    self._next_state = None\n",
        "    \n",
        "  @property\n",
        "  def q_values(self):\n",
        "    return self._q\n",
        "\n",
        "  def select_action(self, observation):\n",
        "    return self._behaviour_policy(self._q[observation])\n",
        "    \n",
        "  def observe_first(self, timestep):\n",
        "    self._state = timestep.observation\n",
        "\n",
        "  def observe(self, action, next_timestep):\n",
        "    s = self._state\n",
        "    a = action\n",
        "    r = next_timestep.reward\n",
        "    g = next_timestep.discount\n",
        "    next_s = next_timestep.observation\n",
        "    \n",
        "    # ============ YOUR CODE HERE =============\n",
        "    # Compute the TD error.\n",
        "    # self._td_error =\n",
        "    pass\n",
        "\n",
        "  def update(self):\n",
        "    # ============ YOUR CODE HERE =============\n",
        "    pass"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ak1T5PNV8Pbk",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title **[Solution]** Q-Learning Agent { form-width: \"30%\" }\n",
        "\n",
        "QValues = np.ndarray\n",
        "Action = int\n",
        "# A policy takes an observation and returns an action.\n",
        "ValueBasedPolicy = Callable[[QValues], Action]\n",
        "\n",
        "class QLearningAgent(acme.Actor):\n",
        "\n",
        "  def __init__(self,\n",
        "               environment_spec: specs.EnvironmentSpec,\n",
        "               behaviour_policy: ValueBasedPolicy,\n",
        "               step_size: float = 0.1,\n",
        "               ):\n",
        "\n",
        "    # Get number of states and actions from the environment spec.\n",
        "    self._num_states = environment_spec.observations.num_values\n",
        "    self._num_actions = environment_spec.actions.num_values\n",
        "\n",
        "    # Create the table of Q-values, all initialized at zero.\n",
        "    self._q = np.zeros((self._num_states, self._num_actions))\n",
        "\n",
        "    # Store algorithm hyper-parameters.\n",
        "    self._step_size = step_size\n",
        "\n",
        "    # Store behavior policy.\n",
        "    self._behaviour_policy = behaviour_policy\n",
        "\n",
        "    # Containers you may find useful.\n",
        "    self._state = None\n",
        "    self._action = None\n",
        "    self._next_state = None\n",
        "    \n",
        "  @property\n",
        "  def q_values(self):\n",
        "    return self._q\n",
        "\n",
        "  def select_action(self, observation):\n",
        "    return self._behaviour_policy(self._q[observation])\n",
        "    \n",
        "  def observe_first(self, timestep):\n",
        "    # Set current state.\n",
        "    self._state = timestep.observation\n",
        "\n",
        "  def observe(self, action, next_timestep):\n",
        "    # Unpacking the timestep to lighten notation.\n",
        "    s = self._state\n",
        "    a = action\n",
        "    r = next_timestep.reward\n",
        "    g = next_timestep.discount\n",
        "    next_s = next_timestep.observation\n",
        "    \n",
        "    # Compute the TD error.\n",
        "    self._action = a\n",
        "    self._next_state = next_s\n",
        "    self._td_error = r + g * np.max(self._q[next_s]) - self._q[s, a]\n",
        "\n",
        "  def update(self):\n",
        "    # Optional unpacking to lighten notation.\n",
        "    s = self._state\n",
        "    a = self._action\n",
        "\n",
        "    # Update the Q-value table value at (s, a).\n",
        "    self._q[s, a] += self._step_size * self._td_error\n",
        "\n",
        "    # Update the current state.\n",
        "    self._state = self._next_state"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2RqdV3rjwcAh",
        "colab_type": "text"
      },
      "source": [
        "### **Task 1**: Run your Q-learning agent on `obstacle`\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LL4PgT-jwi3-",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "epsilon = 1.  #@param {type:\"number\"} \n",
        "num_steps = 1e5  #@param {type:\"number\"}\n",
        "num_steps = int(num_steps)\n",
        "\n",
        "# environment\n",
        "grid = build_gridworld_task(task='obstacle')\n",
        "environment, environment_spec = setup_environment(grid)\n",
        "\n",
        "# behavior policy\n",
        "behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon)\n",
        "\n",
        "# agent\n",
        "agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1)\n",
        "\n",
        "# run experiment and get the value functions from agent\n",
        "returns = run_loop(environment=environment, agent=agent, num_steps=num_steps)\n",
        "\n",
        "# get the q-values\n",
        "q = agent.q_values.reshape(grid.layout.shape + (4,))\n",
        "\n",
        "# visualize value functions\n",
        "print('AFTER {:,} STEPS ...'.format(num_steps))\n",
        "plot_action_values(q, epsilon=0)\n",
        "\n",
        "# visualise the greedy policy\n",
        "grid.plot_greedy_policy(q)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cMk2ArG-weg_",
        "colab_type": "text"
      },
      "source": [
        "### **Task 2:** Experiment with different levels of 'greediness'\n",
        "* The default was $\\epsilon=1$; what does this correspond to?\n",
        "* Try also $\\epsilon =0.1, 0.5$. What do you observe? Does the behaviour policy affect the training in any way?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xY7wxgfkWIxr",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "epsilon = 0.1  #@param {type:\"number\"} \n",
        "num_steps = 1e5  #@param {type:\"number\"}\n",
        "num_steps = int(num_steps)\n",
        "\n",
        "# environment\n",
        "grid = build_gridworld_task(task='obstacle')\n",
        "environment, environment_spec = setup_environment(grid)\n",
        "\n",
        "# behavior policy\n",
        "behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon)\n",
        "\n",
        "# agent\n",
        "agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1)\n",
        "\n",
        "# run experiment and get the value functions from agent\n",
        "returns = run_loop(environment=environment, agent=agent, num_steps=num_steps)\n",
        "\n",
        "# get the q-values\n",
        "q = agent.q_values.reshape(grid.layout.shape + (4,))\n",
        "\n",
        "# visualize value functions\n",
        "print('AFTER {:,} STEPS ...'.format(num_steps))\n",
        "plot_action_values(q, epsilon=epsilon)\n",
        "\n",
        "# visualise the greedy policy\n",
        "grid.plot_greedy_policy(q)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Lqg1n48y81ei",
        "colab_type": "text"
      },
      "source": [
        "## 1.3 **[Try this later!]** Experience Replay\n",
        "\n",
        "Implement an agent that uses **Experience Replay** to learn action values, at each step:\n",
        "* select actions using the behaviour policy\n",
        "* accumulate the observed transitions $(\\color{red}s, \\color{blue}a, \\color{green}r, \\gamma, \\color{red}s')$ in a *replay buffer*,\n",
        "* apply an online $\\color{green}Q$-learning update\n",
        "* apply multiple $\\color{green}Q$-learning updates based on transitions sampled from the *replay buffer* (in addition to the online updates).\n",
        "\n",
        "\n",
        "**Initialize:** $\\color{green}Q(\\color{red}s, \\color{blue}a)$ for all $\\color{red}{s} ∈ \\mathcal{\\color{red}S}$ and $\\color{blue}a ∈ \\mathcal{\\color{blue}A}$\n",
        "\n",
        "**Loop forever:**\n",
        "\n",
        "1. Get $\\color{red}{s} \\gets{}$current (non-terminal) state\n",
        " \n",
        "2. Select $\\color{blue}{a} \\gets{}  \\text{behaviour_policy}(\\color{red}{s})$\n",
        " \n",
        "3. Step in the environment by passing the chosen action $\\color{blue}{a}$\n",
        "\n",
        "4. Observe resulting reward $\\color{green}{r}$, discount $\\gamma$, and state $\\color{red}{s'}$\n",
        "\n",
        "5. Apply online $\\color{green}Q$-learning update<br>\n",
        "$\\color{green}Q(\\color{red}{s}, \\color{blue}{a}) \\gets \\color{green}Q(\\color{red}{s}, \\color{blue}{a}) + \\alpha (\\color{green}{r} + \\gamma \\max_\\color{blue}{a'} \\color{green}Q(\\color{red}{s'}, \\color{blue}{a'}) − \\color{green}Q(\\color{red}{s}, \\color{blue}{a}))$\n",
        "\n",
        "6. Add transition $(\\color{red}{s}, \\color{blue}{a}, \\color{green}{r}, \\gamma, \\color{red}{s'})$ to the replay buffer\n",
        "\n",
        "7. Repeat $n$ times:\n",
        "\n",
        "  1. Sample $(\\color{red}{s}, \\color{blue}{a}, \\color{green}{r}, \\gamma, \\color{red}{s'}) \\gets \\text{ReplayBuffer}.\\text{sample_transition}()$\n",
        "  \n",
        "  2. $\\color{green}Q(\\color{red}{s}, \\color{blue}{a}) \\gets \\color{green}Q(\\color{red}{s}, \\color{blue}{a}) + \\alpha (\\color{green}{r} + \\gamma \\max_\\color{blue}{a'} \\color{green}Q(\\color{red}{s'}, \\color{blue}{a'}) − \\color{green}Q(\\color{red}{s}, \\color{blue}{a}))$"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ietFnV739JwD",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title **[Coding task]** Q-learning with replay { form-width: \"30%\" }\n",
        "\n",
        "class ReplayQLearningAgent(acme.Actor):\n",
        "\n",
        "  def __init__(\n",
        "      self,\n",
        "      environment_spec: specs.EnvironmentSpec,\n",
        "      behaviour_policy: ValueBasedPolicy, \n",
        "      num_offline_updates: int = 0,\n",
        "      step_size: float = 0.1\n",
        "  ): \n",
        "\n",
        "    # Get number of states and actions from the environment spec.\n",
        "    self._num_states = environment_spec.observations.num_values\n",
        "    self._num_actions = environment_spec.actions.num_values\n",
        "\n",
        "    # Create the table of Q-values, all initialized at zero.\n",
        "    self._q = np.zeros((self._num_states, self._num_actions))\n",
        "\n",
        "    # Store algorithm hyper-parameters.\n",
        "    self._step_size = step_size\n",
        "    self._num_offline_updates = num_offline_updates\n",
        "\n",
        "    # Store behavior policy.\n",
        "    self._behaviour_policy = behaviour_policy\n",
        "\n",
        "    # Containers you may find useful.\n",
        "    self._state = None\n",
        "    self._action = None\n",
        "    self._next_state = None\n",
        "\n",
        "    # Create a container for experiences.\n",
        "    self._replay_buffer = []\n",
        "    \n",
        "  @property\n",
        "  def q_values(self):\n",
        "    return self._q\n",
        "\n",
        "  def select_action(self, observation):\n",
        "    return self._behaviour_policy(self._q[observation])\n",
        "    \n",
        "  def observe_first(self, timestep):\n",
        "    self._state = timestep.observation\n",
        "\n",
        "  def observe(self, action, next_timestep):\n",
        "    # Unpacking the timestep to lighten notation.\n",
        "    s = self._state\n",
        "    a = action\n",
        "    r = next_timestep.reward\n",
        "    g = next_timestep.discount\n",
        "    next_s = next_timestep.observation\n",
        "\n",
        "    # Compute the TD error.\n",
        "    self._action = a\n",
        "    self._next_state = next_s\n",
        "    self._td_error = r + g * np.max(self._q[next_s]) - self._q[s, a]\n",
        "\n",
        "    if self._num_offline_updates > 0:\n",
        "      # ============ YOUR CODE HERE =============\n",
        "      # Update replay buffer.\n",
        "      pass\n",
        "\n",
        "  def update(self):\n",
        "    # Optional unpacking to lighten notation.\n",
        "    s = self._state\n",
        "    a = self._action\n",
        "\n",
        "    # Update the Q-value table value at (s, a).\n",
        "    self._q[s, a] += self._step_size * self._td_error\n",
        "\n",
        "    # Update the current state.\n",
        "    self._state = self._next_state\n",
        "\n",
        "    # Perform offline Q-value updates.\n",
        "    # ============ YOUR CODE HERE ============="
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "I6Lunsx1-kmf",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title **[Solution]**  Q-learning with replay { form-width: \"30%\" }\n",
        "\n",
        "class ReplayQLearningAgent(acme.Actor):\n",
        "\n",
        "  def __init__(\n",
        "      self,\n",
        "      environment_spec: specs.EnvironmentSpec,\n",
        "      behaviour_policy: ValueBasedPolicy, \n",
        "      num_offline_updates: int = 0,\n",
        "      step_size: float = 0.1\n",
        "  ): \n",
        "\n",
        "    # Get number of states and actions from the environment spec.\n",
        "    self._num_states = environment_spec.observations.num_values\n",
        "    self._num_actions = environment_spec.actions.num_values\n",
        "\n",
        "    # Create the table of Q-values, all initialized at zero.\n",
        "    self._q = np.zeros((self._num_states, self._num_actions))\n",
        "\n",
        "    # Store algorithm hyper-parameters.\n",
        "    self._step_size = step_size\n",
        "    self._num_offline_updates = num_offline_updates\n",
        "\n",
        "    # Store behavior policy.\n",
        "    self._behaviour_policy = behaviour_policy\n",
        "\n",
        "    # Containers you may find useful.\n",
        "    self._state = None\n",
        "    self._action = None\n",
        "    self._next_state = None\n",
        "\n",
        "    # Create a container for experiences.\n",
        "    self._replay_buffer = []\n",
        "    \n",
        "  @property\n",
        "  def q_values(self):\n",
        "    return self._q\n",
        "\n",
        "  def select_action(self, observation):\n",
        "    return self._behaviour_policy(self._q[observation])\n",
        "    \n",
        "  def observe_first(self, timestep):\n",
        "    self._state = timestep.observation\n",
        "\n",
        "  def observe(self, action, next_timestep):\n",
        "    # Unpacking the timestep to lighten notation.\n",
        "    s = self._state\n",
        "    a = action\n",
        "    r = next_timestep.reward\n",
        "    g = next_timestep.discount\n",
        "    next_s = next_timestep.observation\n",
        "\n",
        "    # Compute the TD error.\n",
        "    self._action = a\n",
        "    self._next_state = next_s\n",
        "    self._td_error = r + g * np.max(self._q[next_s]) - self._q[s, a]\n",
        "\n",
        "    if self._num_offline_updates > 0:\n",
        "      self._replay_buffer.append((s, a, r, g, next_s))\n",
        "\n",
        "  def update(self):\n",
        "    # Optional unpacking to lighten notation.\n",
        "    s = self._state\n",
        "    a = self._action\n",
        "\n",
        "    # Update the Q-value table value at (s, a).\n",
        "    self._q[s, a] += self._step_size * self._td_error\n",
        "\n",
        "    # Update the current state.\n",
        "    self._state = self._next_state\n",
        "\n",
        "    # Perform offline Q-value updates.\n",
        "    if len(self._replay_buffer) > self._num_offline_updates:\n",
        "      for i in range(self._num_offline_updates):\n",
        "        # Randomly sample from the replay buffer.\n",
        "        idx = np.random.randint(0, len(self._replay_buffer))\n",
        "        s, a, r, g, next_s = self._replay_buffer[idx]\n",
        "\n",
        "        # Compute TD error of sampled transition.\n",
        "        td_error = r + g * np.max(self._q[next_s]) - self._q[s, a]\n",
        "\n",
        "        # Perform an offline update.\n",
        "        self._q[s, a] += self._step_size * td_error"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "k3J6CE2M_AdF",
        "colab_type": "text"
      },
      "source": [
        "#### **Task**: Compare Q-learning with/without experience replay\n",
        "\n",
        "Use a small number of training steps (e.g. `num_steps = 1e3`) and vary `num_offline_updates` between `0` and `30`."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "9yLCXKBH_F0j",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "num_offline_updates = 20  # @param {type:\"integer\"}\n",
        "epsilon = 1.  #@param {type:\"number\"} \n",
        "num_steps = 1e3  # @param {type: \"number\"}\n",
        "num_steps = int(num_steps)\n",
        "\n",
        "# Create the environment.\n",
        "grid = build_gridworld_task(task='obstacle')\n",
        "environment, environment_spec = setup_environment(grid)\n",
        "\n",
        "# behavior policy\n",
        "behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon)\n",
        "\n",
        "agent = ReplayQLearningAgent(\n",
        "    environment_spec,\n",
        "    behaviour_policy=behavior_policy,\n",
        "    num_offline_updates=num_offline_updates,\n",
        "    step_size=0.1)\n",
        "\n",
        "# Run experiment and get the value functions from agent.\n",
        "returns = run_loop(environment=environment, agent=agent, num_steps=num_steps)\n",
        "\n",
        "# Plot values and policy.\n",
        "q = agent.q_values.reshape(grid.layout.shape + (4,))\n",
        "plot_action_values(q)\n",
        "grid.plot_greedy_policy(q)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Rkn2ud_0Pn2o",
        "colab_type": "text"
      },
      "source": [
        "# Part 2 - Value-based methods: function approximation\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yxqnvCLoe3KU",
        "colab_type": "text"
      },
      "source": [
        "<center>\n",
        "<img src=\"https://drive.google.com/uc?id=1XIj68U3eB1bKYfIEHAcVbfwobmMYQQ4X\" width=\"500\" />\n",
        "</center>\n",
        "\n",
        "So far we have only considered look-up tables for value functions: every state-action pair $(\\color{red}{s}, \\color{blue}{a})$ had its own entry in our $\\color{green}Q$-table. This is possible here because the number of states equals the number of cells in the grid. But it does not scale to situations where, say, the goal location changes or the obstacles are in different locations at every episode (consider how big the table would need to be then).\n",
        "\n",
        "An example (not covered in this tutorial) is ATARI from pixels, where the number of possible frames an agent can see is exponential in the number of pixels on the screen.\n",
        "\n",
        "<center><img width=\"200\" alt=\"portfolio_view\" src=\"https://miro.medium.com/max/1760/1*XyIpmXXAjbXerDzmGQL1yA.gif\"></center>\n",
        "\n",
        "But what we **really** want is to be able to *compute* the Q-value when fed a particular $(\\color{red}{s}, \\color{blue}{a})$ pair. So if we had a function to do this work instead of keeping a big table, we would get around this problem.\n",
        "\n",
        "To address this, we can use **function approximation** as a way to generalize Q-values over some representation of the very large state space, and **train** the approximator to output the values it should. In this section, we will explore $\\color{green}Q$-learning with function approximation, which (although it has been theoretically proven to diverge for some degenerate MDPs) can yield impressive results in very large environments. In particular, we will look at [Neural Fitted Q (NFQ) Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) and [Deep Q-Networks (DQN)](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf).\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RNvwEXNlhXlq",
        "colab_type": "text"
      },
      "source": [
        "### Quick recap on replay\n",
        "\n",
        "An important property of off-policy methods like $\\color{green}Q$-learning is that they involve two policies: one for exploration and one that is being optimized (via the $\\color{green}Q$-function updates). This means that we can generate data from the **behavior** policy and insert that data into some form of data storage---usually referred to as **replay**.\n",
        "\n",
        "In order to optimize the $\\color{green}Q$-function we can then sample data from the replay <font color='purple'>**dataset**</font> and use that data to perform an update. An illustration of this learning loop is shown below.\n",
        "\n",
        "<center><img src=\"https://drive.google.com/uc?id=1ivTQBHWkYi_J9vWwXFd2sSWg5f2TB5T-\" width=\"400\" /></center> \n",
        "\n",
        "In the next section we will show how to implement a simple replay buffer. This can be as simple as a Python list containing transition data. In more complicated scenarios we might want a more performance-tuned variant, we might need to worry about how large the replay buffer is and what to do when it is full, and we might want to sample from replay in different ways. But a simple Python list can go a surprisingly long way."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a8mxBAuzhq_r",
        "colab_type": "text"
      },
      "source": [
        "## 2.0 Implement a simple replay buffer"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1wWxdc7ZM-pr",
        "colab_type": "code",
        "cellView": "both",
        "colab": {}
      },
      "source": [
        "#@title simple replay buffer  { form-width: \"30%\" }\n",
        "\n",
        "# Create a convenient container for the SARS tuples required by deep RL agents.\n",
        "Transitions = collections.namedtuple(\n",
        "    'Transitions', ['state', 'action', 'reward', 'discount', 'next_state'])\n",
        "\n",
        "class ReplayBuffer(object):\n",
        "  \"\"\"A simple Python replay buffer.\"\"\"\n",
        "\n",
        "  def __init__(self, capacity: int = None):\n",
        "    self.buffer = collections.deque(maxlen=capacity)\n",
        "    self._prev_state = None\n",
        "\n",
        "  def add_first(self, initial_timestep: dm_env.TimeStep):\n",
        "    self._prev_state = initial_timestep.observation\n",
        "\n",
        "  def add(self, action: int, timestep: dm_env.TimeStep):\n",
        "    transition = Transitions(\n",
        "        state=self._prev_state,\n",
        "        action=action,\n",
        "        reward=timestep.reward,\n",
        "        discount=timestep.discount,\n",
        "        next_state=timestep.observation,\n",
        "    )\n",
        "    self.buffer.append(transition)\n",
        "    self._prev_state = timestep.observation\n",
        "\n",
        "  def sample(self, batch_size: int) -> Transitions:\n",
        "    # Sample a random batch of Transitions as a list.\n",
        "    batch_as_list = random.sample(self.buffer, batch_size)\n",
        "\n",
        "    # Convert the list of `batch_size` Transitions into a single Transitions\n",
        "    # object where each field has `batch_size` stacked fields.\n",
        "    return tree_utils.stack_sequence_fields(batch_as_list)\n",
        "  \n",
        "  def flush(self) -> Transitions:\n",
        "    entire_buffer = tree_utils.stack_sequence_fields(self.buffer)\n",
        "    self.buffer.clear()\n",
        "    return entire_buffer\n",
        "\n",
        "  def is_ready(self, batch_size: int) -> bool:\n",
        "    return batch_size <= len(self.buffer)"
      ],
      "execution_count": null,
      "outputs": []
    },
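    {
      "cell_type": "code",
      "metadata": {
        "id": "RBufUsageSketch01",
        "colab_type": "code",
        "cellView": "both",
        "colab": {}
      },
      "source": [
        "#@title quick replay buffer usage sketch  { form-width: \"30%\" }\n",
        "\n",
        "# A small usage sketch of the buffer above. The observations, actions and\n",
        "# rewards here are made-up toy values, purely for illustration.\n",
        "demo_buffer = ReplayBuffer(capacity=100)\n",
        "\n",
        "# Record the first observation, then add a couple of transitions.\n",
        "demo_buffer.add_first(dm_env.restart(observation=0))\n",
        "demo_buffer.add(action=1, timestep=dm_env.transition(reward=0., observation=3))\n",
        "demo_buffer.add(action=2, timestep=dm_env.transition(reward=1., observation=7))\n",
        "\n",
        "# Once enough transitions have accumulated, sample a stacked batch: each field\n",
        "# of the resulting Transitions object has a leading batch dimension.\n",
        "if demo_buffer.is_ready(batch_size=2):\n",
        "  batch = demo_buffer.sample(batch_size=2)\n",
        "  print(batch.state, batch.action, batch.reward)"
      ],
      "execution_count": null,
      "outputs": []
    },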
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OTAYVPnaJN0t",
        "colab_type": "text"
      },
      "source": [
        "## 2.1 NFQ agent"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-omtUOQCS8VI",
        "colab_type": "text"
      },
      "source": [
        "[Neural Fitted Q Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) was one of the first papers to demonstrate how to leverage recent advances in Deep Learning to approximate the Q-value by a neural network.$^1$\n",
        "In other words, the values $\\color{green}Q(\\color{red}{s}, \\color{blue}{a})$ are approximated by the outputs of a neural network $\\color{green}{Q_w}(\\color{red}{s}, \\color{blue}{a})$, one for each possible action $\\color{blue}{a} \\in \\color{blue}{\\mathcal{A}}$.$^2$\n",
        "\n",
        "When introducing function approximations, and neural networks in particular, we need to have a loss to optimize. But looking back at the tabular setting above, you can see that we already have some notion of error: the **TD error**.\n",
        "\n",
        "By training our neural network to output values that *minimize the TD error*, we will also approximately satisfy the Bellman optimality equation, which is a sufficient condition for obtaining an optimal policy.\n",
        "Thanks to automatic differentiation, we can simply write the TD error as a loss, e.g. an $\\ell^2$ loss (though other losses would work too):\n",
        "\n",
        "$$L(\\color{green}w) = \\mathbb{E}\\left[ \\left( \\color{green}{r} + \\gamma \\max_\\color{blue}{a'} \\color{green}{Q_w}(\\color{red}{s'}, \\color{blue}{a'}) − \\color{green}{Q_w}(\\color{red}{s}, \\color{blue}{a})  \\right)^2\\right].$$\n",
        "\n",
        "Then we can compute the gradient with respect to the parameters of the neural network and improve our Q-value approximation incrementally.\n",
        "\n",
        "NFQ builds on $\\color{green}Q$-learning, but if one were to update the Q-values online directly, training would be unstable and very slow.\n",
        "Instead, NFQ uses a replay buffer, similar to what you just implemented above, to update the Q-value in a batched setting.\n",
        "\n",
        "When it was introduced, NFQ was also entirely off-policy, using a uniformly random policy to collect data, which made it prone to instability when applied to more complex environments (e.g. when the inputs are pixels or the tasks are longer and more complicated).\n",
        "But it is a good stepping stone to the more complex agents used today. Here, we will look at a slightly different and modernised implementation of NFQ.\n",
        "\n",
        "<br />\n",
        "\n",
        "---\n",
        "\n",
        "<sub>$^1$ If you read the NFQ paper, they use a \"control\" notation, where there is a \"cost to minimize\", instead of \"rewards to maximize\", so don't be surprised if signs/max/min do not correspond.</sub>\n",
        "\n",
        "<sub>$^2$ We could feed it $\\color{blue}{a}$ as well and ask $Q_w$ for a single scalar value, but given we have a fixed number of actions and we usually need to take an $\\arg\\max$ over them, it's easiest to just output them all in one pass.</sub>"
      ]
    },
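    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SemiGradNote01",
        "colab_type": "text"
      },
      "source": [
        "A note on how this loss is used in practice: the bootstrapped target $\\color{green}{r} + \\gamma \\max_\\color{blue}{a'} \\color{green}{Q_w}(\\color{red}{s'}, \\color{blue}{a'})$ is treated as a fixed constant when differentiating (the so-called *semi-gradient*), so only $\\color{green}{Q_w}(\\color{red}{s}, \\color{blue}{a})$ is differentiated through:\n",
        "\n",
        "$$\\nabla_{\\color{green}w} L(\\color{green}w) = -2\\, \\mathbb{E}\\left[ \\left( \\color{green}{r} + \\gamma \\max_\\color{blue}{a'} \\color{green}{Q_w}(\\color{red}{s'}, \\color{blue}{a'}) − \\color{green}{Q_w}(\\color{red}{s}, \\color{blue}{a}) \\right) \\nabla_{\\color{green}w} \\color{green}{Q_w}(\\color{red}{s}, \\color{blue}{a}) \\right].$$\n",
        "\n",
        "Gradient descent therefore moves $\\color{green}{Q_w}(\\color{red}{s}, \\color{blue}{a})$ towards the target, directly generalizing the tabular TD update. In the implementation, this is achieved by computing the target outside the gradient tape."
      ]
    },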
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "cellView": "form",
        "id": "CSB6SG-ZeUwU",
        "colab": {}
      },
      "source": [
        "#@title **[Coding task]** NFQ Agent  { form-width: \"30%\" }\n",
        "\n",
        "class NeuralFittedQAgent(acme.Actor):\n",
        "\n",
        "  def __init__(self,\n",
        "               environment_spec: specs.EnvironmentSpec,\n",
        "               q_network: snt.Module,\n",
        "               replay_capacity: int = 100_000,\n",
        "               epsilon: float = 0.1,\n",
        "               batch_size: int = 1,\n",
        "               learning_rate: float = 3e-4):\n",
        "\n",
        "    # Store agent hyperparameters and network.\n",
        "    self._num_actions = environment_spec.actions.num_values\n",
        "    self._epsilon = epsilon\n",
        "    self._batch_size = batch_size\n",
        "    self._q_network = q_network\n",
        "\n",
        "    # Container for the computed loss (see run_loop implementation above).\n",
        "    self.last_loss = 0.0\n",
        "\n",
        "    # Create the replay buffer.\n",
        "    self._replay_buffer = ReplayBuffer(replay_capacity)\n",
        "\n",
        "    # Initialize network by feeding a dummy (batched) observation.\n",
        "    dummy_observation = environment_spec.observations.generate_value()\n",
        "    _ = self._q_network(dummy_observation[None, ...])\n",
        "\n",
        "    # Setup optimizer that will train the network to minimize the loss.\n",
        "    self._optimizer = snt.optimizers.Adam(learning_rate)\n",
        "\n",
        "  def select_action(self, observation):\n",
        "    # Compute Q-values.\n",
        "    # Sonnet requires a batch dimension, which we squeeze out right after.\n",
        "    q_values = self._q_network(observation[None, ...])  # Adds batch dimension.\n",
        "    q_values = tf.squeeze(q_values, axis=0)  # Removes batch dimension.\n",
        "\n",
        "    # Select epsilon-greedy action.\n",
        "    if self._epsilon < tf.random.uniform(shape=()):\n",
        "      action = tf.argmax(q_values, axis=-1)\n",
        "    else:\n",
        "      action = tf.random.uniform(\n",
        "          shape=(), maxval=self._num_actions, dtype=tf.int32)\n",
        "\n",
        "    return action\n",
        "\n",
        "  def q_values(self, observation):\n",
        "    q_values = self._q_network(observation[None, ...])\n",
        "    return tf.squeeze(q_values, axis=0)\n",
        "\n",
        "  def update(self):\n",
        "\n",
        "    if not self._replay_buffer.is_ready(self._batch_size):\n",
        "      # If the replay buffer is not ready to sample from, do nothing.\n",
        "      return\n",
        "    \n",
        "    # Sample a minibatch of transitions from experience replay.\n",
        "    transitions = self._replay_buffer.sample(self._batch_size)\n",
        "\n",
        "    # Optionally unpack the transitions to lighten notation.\n",
        "    # Note: each of these tensors will be of shape [batch_size, ...].\n",
        "    s = transitions.state\n",
        "    a = transitions.action\n",
        "    r = transitions.reward\n",
        "    d = transitions.discount\n",
        "    next_s = transitions.next_state\n",
        "\n",
        "    # Compute the Q-values at next states in the transitions.\n",
        "    q_next_s = self._q_network(next_s)  # Shape [batch_size, num_actions].\n",
        "    max_q_next_s = tf.reduce_max(q_next_s, axis=-1)  # Shape [batch_size].\n",
        "\n",
        "    # Compute the TD target (no gradient should flow through it).\n",
        "    target_q_value = r + d * max_q_next_s\n",
        "\n",
        "    # Note: the following computation must happen inside TensorFlow's gradient\n",
        "    # tape so that we can differentiate the resulting loss with respect to the\n",
        "    # q_network's trainable variables.\n",
        "    with tf.GradientTape() as tape:\n",
        "\n",
        "      # Compute the Q-values at original state.\n",
        "      q_s = self._q_network(s)\n",
        "\n",
        "      # Gather the Q-value corresponding to each action in the batch.\n",
        "      q_s_a = tf.gather(q_s, a, axis=-1, batch_dims=1)\n",
        "\n",
        "      # ============ YOUR CODE HERE =============\n",
        "      # Compute the TD errors.\n",
        "      td_error = ...\n",
        "\n",
        "      # Average the squared TD errors over the entire batch (axis=0).\n",
        "      loss = ...\n",
        "\n",
        "    # Compute the gradients of the loss with respect to the q_network variables.\n",
        "    gradients = tape.gradient(loss, self._q_network.trainable_variables)\n",
        "\n",
        "    # Apply the gradient update.\n",
        "    self._optimizer.apply(gradients, self._q_network.trainable_variables)\n",
        "\n",
        "    # Store the loss for logging purposes (see run_loop implementation above).\n",
        "    self.last_loss = loss.numpy()\n",
        "\n",
        "  def observe_first(self, timestep: dm_env.TimeStep):\n",
        "    self._replay_buffer.add_first(timestep)\n",
        "\n",
        "  def observe(self, action: int, next_timestep: dm_env.TimeStep):\n",
        "    self._replay_buffer.add(action, next_timestep)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "cellView": "form",
        "id": "MLYUDq7iLuTS",
        "colab": {}
      },
      "source": [
        "#@title **[Solution]** NFQ Agent  { form-width: \"30%\" }\n",
        "\n",
        "# Create a convenient container for the SARS tuples required by NFQ.\n",
        "Transitions = collections.namedtuple(\n",
        "    'Transitions', ['state', 'action', 'reward', 'discount', 'next_state'])\n",
        "\n",
        "class NeuralFittedQAgent(acme.Actor):\n",
        "\n",
        "  def __init__(self,\n",
        "               environment_spec: specs.EnvironmentSpec,\n",
        "               q_network: snt.Module,\n",
        "               replay_capacity: int = 100_000,\n",
        "               epsilon: float = 0.1,\n",
        "               batch_size: int = 1,\n",
        "               learning_rate: float = 3e-4):\n",
        "\n",
        "    # Store agent hyperparameters and network.\n",
        "    self._num_actions = environment_spec.actions.num_values\n",
        "    self._epsilon = epsilon\n",
        "    self._batch_size = batch_size\n",
        "    self._q_network = q_network\n",
        "\n",
        "    # Container for the computed loss (see run_loop implementation above).\n",
        "    self.last_loss = 0.0\n",
        "\n",
        "    # Create the replay buffer.\n",
        "    self._replay_buffer = ReplayBuffer(replay_capacity)\n",
        "\n",
        "    # Initialize network by feeding a dummy (batched) observation.\n",
        "    dummy_observation = environment_spec.observations.generate_value()\n",
        "    _ = self._q_network(dummy_observation[None, ...])\n",
        "\n",
        "    # Set up the optimizer that will train the network to minimize the loss.\n",
        "    self._optimizer = snt.optimizers.Adam(learning_rate)\n",
        "\n",
        "  def select_action(self, observation):\n",
        "    # Compute Q-values.\n",
        "    # Sonnet requires a batch dimension, which we squeeze out right after.\n",
        "    q_values = self._q_network(observation[None, ...])  # Adds batch dimension.\n",
        "    q_values = tf.squeeze(q_values, axis=0)  # Removes batch dimension.\n",
        "\n",
        "    # Select epsilon-greedy action.\n",
        "    if self._epsilon < tf.random.uniform(shape=()):\n",
        "      action = tf.argmax(q_values, axis=-1)\n",
        "    else:\n",
        "      action = tf.random.uniform(\n",
        "          shape=(), maxval=self._num_actions, dtype=tf.int32)\n",
        "\n",
        "    return action\n",
        "\n",
        "  def q_values(self, observation):\n",
        "    q_values = self._q_network(observation[None, ...])\n",
        "    return tf.squeeze(q_values, axis=0)\n",
        "\n",
        "  def update(self):\n",
        "\n",
        "    if not self._replay_buffer.is_ready(self._batch_size):\n",
        "      # If the replay buffer is not ready to sample from, do nothing.\n",
        "      return\n",
        "    \n",
        "    # Sample a minibatch of transitions from experience replay.\n",
        "    transitions = self._replay_buffer.sample(self._batch_size)\n",
        "\n",
        "    # Optionally unpack the transitions to lighten notation.\n",
        "    # Note: each of these tensors will be of shape [batch_size, ...].\n",
        "    s = transitions.state\n",
        "    a = transitions.action\n",
        "    r = transitions.reward\n",
        "    d = transitions.discount\n",
        "    next_s = transitions.next_state\n",
        "\n",
        "    # Compute the Q-values at next states in the transitions.\n",
        "    q_next_s = self._q_network(next_s)  # Shape [batch_size, num_actions].\n",
        "    max_q_next_s = tf.reduce_max(q_next_s, axis=-1)  # Shape [batch_size].\n",
        "\n",
        "    # Compute the TD target (no gradient should flow through it).\n",
        "    target_q_value = r + d * max_q_next_s\n",
        "\n",
        "    # Note: the following computation must happen inside TensorFlow's gradient\n",
        "    # tape so that we can differentiate the resulting loss with respect to the\n",
        "    # q_network's trainable variables.\n",
        "    with tf.GradientTape() as tape:\n",
        "\n",
        "      # Compute the Q-values at original state.\n",
        "      q_s = self._q_network(s)\n",
        "\n",
        "      # Gather the Q-value corresponding to each action in the batch.\n",
        "      q_s_a = tf.gather(q_s, a, axis=-1, batch_dims=1)\n",
        "\n",
        "      # Compute the TD errors.\n",
        "      td_error = target_q_value - q_s_a\n",
        "\n",
        "      # Average the squared TD errors over the entire batch (axis=0).\n",
        "      loss = 0.5 * tf.reduce_mean(td_error ** 2, axis=0)\n",
        "\n",
        "    # Compute the gradients of the loss with respect to the q_network variables.\n",
        "    gradients = tape.gradient(loss, self._q_network.trainable_variables)\n",
        "\n",
        "    # Apply the gradient update.\n",
        "    self._optimizer.apply(gradients, self._q_network.trainable_variables)\n",
        "\n",
        "    # Store the loss for logging purposes (see run_loop implementation above).\n",
        "    self.last_loss = loss.numpy()\n",
        "\n",
        "  def observe_first(self, timestep: dm_env.TimeStep):\n",
        "    self._replay_buffer.add_first(timestep)\n",
        "\n",
        "  def observe(self, action: int, next_timestep: dm_env.TimeStep):\n",
        "    self._replay_buffer.add(action, next_timestep)"
      ],
      "execution_count": null,
      "outputs": []
    },
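    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GatherDemoMd",
        "colab_type": "text"
      },
      "source": [
        "As a quick sanity check (a toy example with made-up numbers, not part of the agent): `tf.gather` with `batch_dims=1` is what selects, for each row of the batch, the single Q-value corresponding to the action taken in that row."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GatherDemoCode",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import tensorflow as tf\n",
        "\n",
        "# Toy batch of Q-values: 3 states with 2 actions each (made-up numbers).\n",
        "q_s = tf.constant([[1., 2.], [3., 4.], [5., 6.]])\n",
        "# One action index per batch row.\n",
        "a = tf.constant([1, 0, 1])\n",
        "\n",
        "# For each row i, select q_s[i, a[i]].\n",
        "q_s_a = tf.gather(q_s, a, axis=-1, batch_dims=1)\n",
        "print(q_s_a)  # [2. 3. 6.]"
      ],
      "execution_count": null,
      "outputs": []
    },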
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MQoI1y88Mfsz",
        "colab_type": "text"
      },
      "source": [
        "### **Task: Train a NFQ agent**\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "g7QmF3UGgYJa",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Training the NFQ Agent.  { form-width: \"30%\" }\n",
        "epsilon = 0.5  # @param {type:\"number\"}\n",
        "\n",
        "max_episode_length = 200\n",
        "\n",
        "# Create the environment.\n",
        "grid = build_gridworld_task(\n",
        "    task='simple',\n",
        "    observation_type=ObservationType.AGENT_GOAL_POS,\n",
        "    max_episode_length=max_episode_length)\n",
        "environment, environment_spec = setup_environment(grid)\n",
        "\n",
        "# Define the neural function approximator (aka Q network).\n",
        "q_network = snt.Sequential([\n",
        "    snt.nets.MLP([50, 50, environment_spec.actions.num_values])\n",
        "])\n",
        "\n",
        "# Build the trainable Q-learning agent\n",
        "agent = NeuralFittedQAgent(\n",
        "    environment_spec,\n",
        "    q_network,\n",
        "    epsilon=epsilon,\n",
        "    replay_capacity=100_000,\n",
        "    batch_size=10,\n",
        "    learning_rate=1e-3)\n",
        "\n",
        "returns = run_loop(\n",
        "    environment=environment,\n",
        "    agent=agent,\n",
        "    num_episodes=100,\n",
        "    logger_time_delta=1.,\n",
        "    log_loss=True)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YWbMwjdgmxGe",
        "colab_type": "text"
      },
      "source": [
        "### Evaluate the policy it learned"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bZM2TNJ0PB6F",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Evaluating the agent.  { form-width: \"30%\" }\n",
        "\n",
        "# Temporarily change epsilon to be more greedy; remember to change it back.\n",
        "agent._epsilon = 0.05\n",
        "\n",
        "# Record a few episodes.\n",
        "frames = evaluate(environment, agent, evaluation_episodes=5)\n",
        "\n",
        "# Change epsilon back.\n",
        "agent._epsilon = epsilon\n",
        "\n",
        "# Display the video of the episodes.\n",
        "display_video(frames, frame_rate=6)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "vYmDVoZ4sDjJ",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Visualise the learned Q values\n",
        "\n",
        "# Evaluate the policy for every state, similar to tabular agents above.\n",
        "\n",
        "environment.reset()\n",
        "pi = np.zeros(grid._layout_dims, dtype=np.int32)\n",
        "q = np.zeros(grid._layout_dims + (4,))\n",
        "for y in range(grid._layout_dims[0]):\n",
        "  for x in range(grid._layout_dims[1]):\n",
        "    # Hack observation to see what the Q-network would output at that point.\n",
        "    environment.set_state(x, y)\n",
        "    obs = environment.get_obs()\n",
        "    q[y, x] = np.asarray(agent.q_values(obs))\n",
        "    pi[y, x] = np.asarray(agent.select_action(obs))\n",
        "    \n",
        "plot_action_values(q)\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gp9D-vWJpN23",
        "colab_type": "text"
      },
      "source": [
        "### Compare the greedy and behaviour ($\\epsilon$-greedy) policies\n",
        "\n",
        "Notice that the behaviour policy occasionally flips arrows to random directions: with probability $\\epsilon$ it takes a uniformly random action."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ek3MNCu0LGBE",
        "colab_type": "code",
        "cellView": "both",
        "colab": {}
      },
      "source": [
        "environment.plot_greedy_policy(q)\n",
        "plt.title('Greedy policy using the learnt Q-values')\n",
        "\n",
        "environment.plot_policy(pi)\n",
        "plt.title(\"Policy using the agent's behaviour policy\");"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Clv_QlpgoY1J",
        "colab_type": "text"
      },
      "source": [
        "## 2.2 DQN from pixels"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gjR8zkBdjIrB",
        "colab_type": "text"
      },
      "source": [
        "\n",
        "<center><img src=\"https://media.springernature.com/full/springer-static/image/art%3A10.1038%2Fnature14236/MediaObjects/41586_2015_Article_BFnature14236_Fig1_HTML.jpg\" width=\"500\" /></center> \n",
        "\n",
        "In this subsection, we will look at a more advanced deep RL agent based on the publication [Playing Atari with Deep Reinforcement Learning](https://deepmind.com/research/publications/playing-atari-deep-reinforcement-learning), which introduced the first deep learning model to successfully learn control policies directly from high-dimensional pixel inputs using RL.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NbHdPc-nxO2j",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Create the environment with pixel observations\n",
        "\n",
        "grid = build_gridworld_task(\n",
        "    task='simple', \n",
        "    observation_type=ObservationType.GRID,\n",
        "    max_episode_length=200)\n",
        "environment, environment_spec = setup_environment(grid)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "id": "3Jcjk1w6oHVX",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Create an Acme DQN agent  { form-width: \"30%\" }\n",
        "\n",
        "epsilon = 0.25  # @param {type: \"number\"}\n",
        "\n",
        "# Build the agent's network.\n",
        "q_network = snt.Sequential([\n",
        "    snt.Conv2D(32, kernel_shape=[4,4], stride=[2,2], padding='VALID'),\n",
        "    tf.nn.relu,\n",
        "    snt.Conv2D(64, kernel_shape=[3,3], stride=[1,1], padding='VALID'),\n",
        "    tf.nn.relu,\n",
        "    snt.Flatten(),\n",
        "    snt.nets.MLP([50, 50, environment_spec.actions.num_values])\n",
        "])\n",
        "\n",
        "# Use the DQN agent implementation from Acme.\n",
        "agent = dqn.DQN(\n",
        "    environment_spec=environment_spec,\n",
        "    network=q_network,\n",
        "    batch_size=10,\n",
        "    samples_per_insert=2,\n",
        "    epsilon=epsilon,\n",
        "    min_replay_size=10,)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0dHDdPDr3QxI",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "# @title Run a training loop  { form-width: \"30%\" }\n",
        "# Rerun this cell until the agent has learned the given task.\n",
        "\n",
        "# Train for `num_episodes` episodes.\n",
        "returns = run_loop(\n",
        "    environment=environment,\n",
        "    agent=agent,\n",
        "    num_episodes=300,\n",
        "    num_steps=100_000)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "8ksVITeN5_Vq",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "# @title Visualise the learned Q values { form-width: \"30%\" }\n",
        "\n",
        "# Evaluate the policy for every state, similar to tabular agents above.\n",
        "pi = np.zeros(grid._layout_dims, dtype=np.int32)\n",
        "q = np.zeros(grid._layout_dims + (4,))\n",
        "for y in range(grid._layout_dims[0]):\n",
        "  for x in range(grid._layout_dims[1]):\n",
        "    # Hack observation to see what the Q-network would output at that point.\n",
        "    environment.set_state(x, y)\n",
        "    obs = environment.get_obs()\n",
        "    q[y, x] = np.asarray(agent._learner._network(np.expand_dims(obs, axis=0)))\n",
        "    pi[y, x] = np.asarray(agent.select_action(obs))\n",
        "    \n",
        "plot_action_values(q)\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "6PQaQej4LsU-",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Compare the greedy policy with the agent's policy { form-width: \"30%\" }\n",
        "\n",
        "environment.plot_greedy_policy(q)\n",
        "plt.title('Greedy policy using the learnt Q-values')\n",
        "\n",
        "environment.plot_policy(pi)\n",
        "plt.title(\"Policy using the agent's epsilon-greedy policy\");"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hEavg5tydOqk",
        "colab_type": "text"
      },
      "source": [
        "# Part 3: Policy Gradients\n",
        "\n",
        "Now we will turn to policy gradient methods. Rather than defining the policy in terms of a value function, i.e. $\\color{blue}\\pi(\\color{red}s) = \\arg\\max_{\\color{blue}a}\\color{green}Q(\\color{red}s, \\color{blue}a)$, we will directly parameterize the policy and write it as the distribution\n",
        "\n",
        "$$\\color{blue}a \\sim \\color{blue}\\pi_{\\theta}(\\color{blue}a|\\color{red}s).$$\n",
        "\n",
        "Here $\\theta$ represents the parameters of the policy.\n",
        "\n",
        "One convenient way to represent the conditional distribution above is as a function that takes a state $\\color{red}s$ and returns a distribution over actions $\\color{blue}a$. Exactly as we did above we will write this function as a `sonnet` module, but now we will assume the module returns a `Distribution` object as defined by `tensorflow_probability.distributions`. An instance of this object can be sampled from by calling the `sample` method, and we will show how to construct this module below.\n",
        "\n",
        "Defined below is an agent using the same interface as above which implements the REINFORCE algorithm. Recall from the lecture notes that this algorithm computes the policy gradient:\n",
        "\n",
        "$$\n",
        "\\nabla J(\\theta) \n",
        "= \\mathbb{E}\n",
        "\\left[\n",
        "  \\sum_{t=0}^T \\color{green} G_t \n",
        "  \\nabla\\log\\color{blue}\\pi_\\theta(\\color{blue}{a_t}|\\color{red}{s_t})\n",
        "\\right]\n",
        "$$\n",
        "\n",
        "where $\\color{green} G_t$ is the discounted sum of future rewards from time $t$, defined as\n",
        "\n",
        "$$\n",
        "\\color{green} G_t \n",
        "= \\sum_{n=t}^T \\gamma^{n-t} \n",
        "\\color{green} R(\\color{red}{s_n}, \\color{blue}{a_n}, \\color{red}{s_{n+1}}).\n",
        "$$\n",
        "\n",
        "The algorithm below will collect the state, action, and reward data in its buffer until it reaches a full trajectory. It will then update its policy given the above gradient (and the Adam optimizer)."
      ]
    },
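    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RewardToGoMd",
        "colab_type": "text"
      },
      "source": [
        "As a minimal sketch of the reward-to-go computation (on a made-up three-step episode), we can iterate over the rewards backwards, accumulating $G_t = r_t + \\gamma G_{t+1}$. The agent below does the same thing with `tf.scan`, additionally folding in the environment's termination discount."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "RewardToGoCode",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Made-up episode: three rewards and a discount of 0.9.\n",
        "rewards = [1.0, 0.0, 2.0]\n",
        "gamma = 0.9\n",
        "\n",
        "# Accumulate G_t = r_t + gamma * G_{t+1}, iterating backwards in time.\n",
        "G = 0.0\n",
        "returns = []\n",
        "for r in reversed(rewards):\n",
        "  G = r + gamma * G\n",
        "  returns.append(G)\n",
        "returns.reverse()\n",
        "\n",
        "print(returns)  # [2.62, 1.8, 2.0] (up to float precision)."
      ],
      "execution_count": null,
      "outputs": []
    },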
    {
      "cell_type": "code",
      "metadata": {
        "id": "0HdOC80fG_b_",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "# @title Policy gradient agent: REINFORCE\n",
        "\n",
        "class PolicyGradientAgent(acme.Actor):\n",
        "  \"\"\"Implements a vanilla policy gradient agent.\"\"\"\n",
        "  \n",
        "  def __init__(\n",
        "      self,\n",
        "      # The policy network should output a tfp.distributions.Distribution.\n",
        "      policy_network: snt.Module,\n",
        "      discount: float = 0.9,\n",
        "      learning_rate: float = 1e-3,\n",
        "  ):\n",
        "\n",
        "    # Store the policy neural network and the agent's discount.\n",
        "    self.policy_network = policy_network\n",
        "    self.discount = discount\n",
        "\n",
        "    # Create the replay buffer to store transitions.\n",
        "    self._replay_buffer = ReplayBuffer()\n",
        "\n",
        "    # Flag to update agent, set to False until we've seen a full episode.\n",
        "    self._should_update_agent = False\n",
        "\n",
        "    # Create the optimizer that will be used to minimize the loss.   \n",
        "    self.optimizer = snt.optimizers.Adam(learning_rate)\n",
        "\n",
        "  def select_action(self, observation):\n",
        "    # Pass the observation through the network to get the action distribution.\n",
        "    action_distribution = self.policy_network(observation[None, ...])\n",
        "\n",
        "    # Sample a single action.\n",
        "    action = action_distribution.sample()\n",
        "\n",
        "    # Convert to numpy and squeeze out the added batch dimension.\n",
        "    action = action.numpy()\n",
        "    return np.squeeze(action, axis=0)\n",
        "  \n",
        "  def observe(self, action, next_timestep: dm_env.TimeStep):\n",
        "    self._replay_buffer.add(action, next_timestep)\n",
        "\n",
        "    # If the transition lands in a terminal state, flag the agent to be updated.\n",
        "    self._should_update_agent = next_timestep.last()\n",
        "  \n",
        "  def observe_first(self, timestep: dm_env.TimeStep):\n",
        "    self._replay_buffer.add_first(timestep)\n",
        "  \n",
        "  def update(self):\n",
        "    if not self._should_update_agent:\n",
        "      return\n",
        "\n",
        "    # Get transitions from the buffer and clear it.\n",
        "    transitions = self._replay_buffer.flush()\n",
        "\n",
        "    # Helper function to compute the reward-to-go.\n",
        "    def sum_discounted_rewards(partial_sum, discount_and_reward):\n",
        "      # Unpack the environment discount and reward.\n",
        "      discount, reward = discount_and_reward\n",
        "      # We also need to multiply by the agent's discount along the way.\n",
        "      return reward + self.discount * discount * partial_sum\n",
        "\n",
        "    # Compute the reward-to-go.\n",
        "    G_t = tf.scan(sum_discounted_rewards,\n",
        "                  (transitions.discount, transitions.reward),\n",
        "                  reverse=True,\n",
        "                  initializer=0.0)\n",
        "    \n",
        "    with tf.GradientTape() as tape:\n",
        "      # Compute the action distribution and log-probabilities of taken actions.\n",
        "      action_distribution = self.policy_network(transitions.state)\n",
        "      logprobs = action_distribution.log_prob(transitions.action)\n",
        "\n",
        "      # Compute the policy gradient loss.\n",
        "      loss = -tf.reduce_sum(G_t * logprobs, axis=0)\n",
        "\n",
        "    # Compute and apply the gradients.  \n",
        "    gradients = tape.gradient(loss, self.policy_network.trainable_variables)\n",
        "    self.optimizer.apply(gradients, self.policy_network.trainable_variables)\n",
        "\n",
        "    # Reset the update flag.\n",
        "    self._should_update_agent = False"
      ],
      "execution_count": null,
      "outputs": []
    },
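    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LogProbDemoMd",
        "colab_type": "text"
      },
      "source": [
        "To make the loss above concrete, here is a small `numpy` sketch (with made-up logits) of what `Categorical(logits).log_prob(action)` computes: the log of the softmax probability assigned to the chosen action. Minimizing $-G_t \\log\\pi_\\theta(a_t|s_t)$ therefore pushes up the probability of actions that were followed by high returns."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LogProbDemoCode",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import numpy as np\n",
        "\n",
        "# Made-up logits over 4 actions.\n",
        "logits = np.array([0.5, 1.0, -0.3, 0.1])\n",
        "\n",
        "# Softmax (subtracting the max for numerical stability).\n",
        "probs = np.exp(logits - logits.max())\n",
        "probs /= probs.sum()\n",
        "\n",
        "# Log-probability of a chosen action, as Categorical.log_prob would return it.\n",
        "action = 1\n",
        "log_prob = np.log(probs[action])\n",
        "print(probs.round(3), log_prob)"
      ],
      "execution_count": null,
      "outputs": []
    },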
    {
      "cell_type": "code",
      "metadata": {
        "id": "WFIxyoEgHjlP",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Training the policy gradient agent\n",
        "\n",
        "num_steps = 1e5  # @param {type: \"number\"}\n",
        "\n",
        "# Get number of available actions.\n",
        "num_actions = environment_spec.actions.num_values\n",
        "\n",
        "# Build the agent's network.\n",
        "policy_network = snt.Sequential([     \n",
        "    snt.Conv2D(32, kernel_shape=[4,4], stride=[2,2], padding='VALID'),\n",
        "    tf.nn.relu,\n",
        "    snt.Conv2D(64, kernel_shape=[3,3], stride=[1,1], padding='VALID'),\n",
        "    tf.nn.relu,\n",
        "    snt.Flatten(),\n",
        "    snt.nets.MLP([50, 50, num_actions]),\n",
        "    # This final layer outputs a categorical distribution with `num_actions`\n",
        "    # possible outcomes. This distribution implements the\n",
        "    # tfp.distributions.Distribution interface.\n",
        "    tfp.distributions.Categorical,\n",
        "])\n",
        "\n",
        "# Create the agent.\n",
        "agent = PolicyGradientAgent(policy_network)\n",
        "\n",
        "# Create the environment with pixel observations\n",
        "grid = build_gridworld_task(\n",
        "    task='simple', \n",
        "    observation_type=ObservationType.GRID,\n",
        "    max_episode_length=200)\n",
        "environment, environment_spec = setup_environment(grid)\n",
        "\n",
        "# Run a training loop\n",
        "returns = run_loop(environment, agent, num_episodes=500, num_steps=num_steps)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "dDmLcICc98Z8",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Visualise the training curve { form-width: \"30%\" }\n",
        "\n",
        "# Compute rolling average over returns\n",
        "returns_avg = pd.Series(returns).rolling(10, center=True).mean()\n",
        "\n",
        "plt.figure(figsize=(8, 5))\n",
        "plt.plot(range(len(returns)), returns_avg)\n",
        "plt.xlabel('Episodes')\n",
        "plt.ylabel('Episode return');"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "na7yQ4IQVrkq",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "# @title Visualise learned policy\n",
        "\n",
        "pi = np.zeros(grid._layout_dims, dtype=np.int32)\n",
        "logits = np.zeros(grid._layout_dims + (num_actions,))\n",
        "\n",
        "for y in range(grid._layout_dims[0]):\n",
        "  for x in range(grid._layout_dims[1]):\n",
        "    # Hack observation to see what the policy network would output at that point.\n",
        "    environment.set_state(x, y)\n",
        "    obs = environment.get_obs()\n",
        "    # We can reuse the policy network handle we passed to the agent.\n",
        "    logits[y, x] = policy_network(obs[None, ...]).logits.numpy()\n",
        "    pi[y, x] = np.asarray(agent.select_action(obs))\n",
        "\n",
        "environment.plot_greedy_policy(logits)\n",
        "plt.title('Greedy policy maximizing policy logits')\n",
        "\n",
        "environment.plot_policy(pi)\n",
        "plt.title(\"Policy sampling from a categorical\");"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5S7HQvITeXRj",
        "colab_type": "text"
      },
      "source": [
        "One of the strengths of policy search methods is how easily they can be applied to continuous control problems. In this tutorial we will continue to focus on discrete-action environments in order to show the distinctions between policy-based and value-based methods. However, for an example of continuous-action problems see the [Acme tutorial](https://github.com/deepmind/acme/blob/master/examples/tutorial.ipynb)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iJi7LDrn0eO4",
        "colab_type": "text"
      },
      "source": [
        "# Part 4: Agents on the Gym Cartpole environment\n",
        "\n",
        "Here we show that you can apply what you learned to other environments such as Cartpole in [Gym](https://gym.openai.com/).\n",
        "\n",
        "\n",
        "<center><img src=\"https://user-images.githubusercontent.com/10624937/42135683-dde5c6f0-7d13-11e8-90b1-8770df3e40cf.gif\" height=\"250\" /></center>\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EdQ80rqfjlao",
        "colab_type": "text"
      },
      "source": [
        "## DQN on Cartpole"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DIERzZVk0xIh",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Construct the agent and run the training loop { form-width: \"30%\" }\n",
        "\n",
        "num_episodes = 500  # @param {type: \"number\"}\n",
        "epsilon = 0.1  # @param {type: \"number\"}\n",
        "learning_rate = 1e-3  # @param {type: \"number\"}\n",
        "\n",
        "# Create the environment.\n",
        "environment = wrappers.gym_wrapper.GymWrapper(gym.make('CartPole-v0'))\n",
        "environment = wrappers.SinglePrecisionWrapper(environment)\n",
        "\n",
        "# Get the environment's specification.\n",
        "environment_spec = acme.specs.make_environment_spec(environment)\n",
        "\n",
        "# Build the agent's network.\n",
        "q_network = snt.Sequential([\n",
        "    snt.Flatten(),\n",
        "    snt.nets.MLP([100, environment_spec.actions.num_values])\n",
        "])\n",
        "\n",
        "# Create the agent.\n",
        "agent = dqn.DQN(\n",
        "    environment_spec=environment_spec,\n",
        "    network=q_network,\n",
        "    batch_size=64,\n",
        "    epsilon=epsilon,\n",
        "    learning_rate=learning_rate,\n",
        "    min_replay_size=100)\n",
        "\n",
        "# Train the agent.\n",
        "returns = run_loop(environment=environment, agent=agent, num_episodes=num_episodes, \n",
        "         logger_time_delta=0.25)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "wm6Z8nsOC_nW",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "#@title Visualise the training curve { form-width: \"30%\" }\n",
        "\n",
        "# Compute rolling average over returns\n",
        "returns_avg = pd.Series(returns).rolling(10, center=True).mean()\n",
        "\n",
        "plt.figure(figsize=(8, 5))\n",
        "plt.plot(range(len(returns)), returns_avg)\n",
        "plt.xlabel('Episodes')\n",
        "plt.ylabel('Episode return');"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nRBRHYWEj2Sw",
        "colab_type": "text"
      },
      "source": [
        "## Run Policy Gradients Agent on Cartpole"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qiwVyvtmj-fw",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Construct the agent and run the training loop { form-width: \"30%\" }\n",
        "\n",
        "num_episodes = 500  # @param {type: \"number\"}\n",
        "discount = 0.99  # @param {type: \"number\"}\n",
        "learning_rate = 1e-3  # @param {type: \"number\"}\n",
        "\n",
        "# Create the environment.\n",
        "environment = wrappers.gym_wrapper.GymWrapper(gym.make('CartPole-v0'))\n",
        "environment = wrappers.SinglePrecisionWrapper(environment)\n",
        "\n",
        "# Get the environment's specification.\n",
        "environment_spec = acme.specs.make_environment_spec(environment)\n",
        "\n",
        "# Build the agent's network.\n",
        "policy_network = snt.Sequential([\n",
        "    snt.nets.MLP([100, environment_spec.actions.num_values]),\n",
        "    tfp.distributions.Categorical,\n",
        "])\n",
        "\n",
        "# Create the agent.\n",
        "agent = PolicyGradientAgent(\n",
        "    policy_network=policy_network,\n",
        "    discount=discount,\n",
        "    learning_rate=learning_rate)\n",
        "\n",
        "# Train the agent.\n",
        "returns = run_loop(environment=environment, agent=agent, num_episodes=num_episodes, \n",
        "                   logger_time_delta=0.25)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ugI1aT_mD4c7",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Visualise the training curve { form-width: \"30%\" }\n",
        "\n",
        "# Compute rolling average over returns\n",
        "returns_avg = pd.Series(returns).rolling(10, center=True).mean()\n",
        "\n",
        "plt.figure(figsize=(8, 5))\n",
        "plt.plot(range(len(returns)), returns_avg)\n",
        "plt.xlabel('Episodes')\n",
        "plt.ylabel('Episode return');"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NztsdECqUY4Y",
        "colab_type": "text"
      },
      "source": [
        "# Why are the agents structured this way?\n",
        "\n",
        "You might wonder why the agents we've presented are structured this way, i.e. why we make the distinction between an Actor, an Agent, and a Learner. As implemented above, everything could be put into the `Actor` structure, so why separate them?\n",
        "\n",
        "In Acme we have done this specifically to enable distributed training, where a single Learner is paired with multiple Actors that generate data in parallel. This is illustrated in the following diagram:\n",
        "\n",
        "<center><img src=\"https://drive.google.com/uc?id=1tKjwIKUEKtPOe1REdCfq0NuITsUfVATP\" width=\"400\" /></center> \n",
        "\n",
        "For more information, see the [Acme white paper](https://arxiv.org/abs/2006.00979). While we have not yet open-sourced the distributed versions of these agents (we're working on it!), this design lets us release single-process versions that use exactly the same acting and learning code---they are just constrained to a single process.\n",
        "\n",
        "<center><img src=\"https://drive.google.com/uc?id=1rfYbmVwS_E2DaCAX5iG_fDnSCoTiR6e3\" width=\"400\" /></center> \n",
        "\n",
        "This also allows us to attack offline RL (also known as batch RL) problems, where no new data is generated and the agent can only learn from a fixed dataset. For examples, see the work in [RL Unplugged](https://github.com/deepmind/deepmind-research/tree/master/rl_unplugged). In Acme, tackling these problems should be as easy as instantiating a Learner on its own.\n",
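        "\n",
        "To make the split concrete, here is a minimal sketch in plain Python (the class names and method bodies are hypothetical placeholders, not Acme's actual API): the `Learner` owns the parameters and the update rule, while each `Actor` only selects actions against those parameters, so many actors can feed a single learner.\n",
        "\n",
        "```python\n",
        "class Learner:\n",
        "    '''Owns the parameters and applies updates from batches of experience.'''\n",
        "    def __init__(self):\n",
        "        self.params = {'step': 0}\n",
        "\n",
        "    def step(self, batch):\n",
        "        # Apply one (placeholder) gradient update from a batch of experience.\n",
        "        self.params['step'] += 1\n",
        "\n",
        "\n",
        "class Actor:\n",
        "    '''Selects actions using the learner's current parameters.'''\n",
        "    def __init__(self, learner):\n",
        "        self._learner = learner\n",
        "\n",
        "    def select_action(self, observation):\n",
        "        # Act according to the current parameters (placeholder policy).\n",
        "        return 0\n",
        "\n",
        "\n",
        "# Several actors can share one learner; running them in separate\n",
        "# processes is exactly what turns this into the distributed setting above.\n",
        "learner = Learner()\n",
        "actors = [Actor(learner) for _ in range(4)]\n",
        "```\n",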
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rzqrYxAtH11S",
        "colab_type": "text"
      },
      "source": [
        "# Want to learn more?\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "odBz1OO0JIXY",
        "colab_type": "text"
      },
      "source": [
        "Books and lecture notes:\n",
        "*   [Reinforcement Learning: An Introduction by Sutton & Barto](http://incompleteideas.net/book/RLbook2018.pdf)\n",
        "*   [Algorithms for Reinforcement Learning by Csaba Szepesvári](https://sites.ualberta.ca/~szepesva/papers/RLAlgsInMDPs.pdf)\n",
        "\n",
        "Lectures and courses:\n",
        "*   [RL Course by David Silver](https://www.youtube.com/playlist?list=PLzuuYNsE1EZAXYR4FJ75jcJseBmo4KQ9-)\n",
        "*   [Reinforcement Learning Course | UCL & DeepMind](https://www.youtube.com/playlist?list=PLqYmG7hTraZBKeNJ-JE_eyJHZ7XgBoAyb)\n",
        "*   [Emma Brunskill's Stanford RL Course](https://www.youtube.com/playlist?list=PLoROMvodv4rOSOPzutgyCTapiGlY2Nd8u)\n",
        "*   [RL Course on Coursera by Martha White & Adam White](https://www.coursera.org/specializations/reinforcement-learning)\n",
        "\n",
        "More practical:\n",
        "*   [Spinning Up in Deep RL by Josh Achiam](https://spinningup.openai.com/en/latest/)\n",
        "*   [Acme white paper](https://arxiv.org/abs/2006.00979) & [Colab tutorial](https://github.com/deepmind/acme/blob/master/examples/tutorial.ipynb)\n",
        "\n",
        "<br>\n",
        "\n",
        "This Colab is based on the [EEML 2020 RL practical](https://colab.research.google.com/github/eemlcommunity/PracticalSessions2020/blob/master/rl/EEML2020_RL_Tutorial.ipynb) by Feryal Behbahani & Gheorghe Comanici. If you are interested in JAX, you should try that Colab :)\n",
        "\n",
        "\n"
      ]
    }
  ]
}