{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "c09ada11-d3a6-4837-b4d0-9657106a54b5",
   "metadata": {},
   "source": [
    "# RLSS2023 - DQN Tutorial: Deep Q-Network (DQN)\n",
    "\n",
    "## Part II: DQN Update and Training Loop\n",
    "\n",
    "Website: https://rlsummerschool.com/\n",
    "\n",
    "Github repository: https://github.com/araffin/rlss23-dqn-tutorial\n",
    "\n",
    "Gymnasium documentation: https://gymnasium.farama.org/\n",
    "\n",
    "<div>\n",
    "    <img src=\"https://araffin.github.io/slides/dqn-tutorial/images/dqn/dqn.png\" width=\"800\"/>\n",
    "</div>\n",
    "\n",
    "### Introduction\n",
    "\n",
    "In this notebook, you will finish the implementation of the [Deep Q-Network (DQN)](https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html) algorithm (started in part I) by implementing the training loop and the DQN gradient update."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7fe1a9aa-5735-4614-9a76-031656397899",
   "metadata": {},
   "outputs": [],
   "source": [
    "# for autoformatting\n",
    "# !pip install jupyter-black\n",
    "# %load_ext jupyter_black"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8188798b-daf5-43a7-91ec-a7a922bc2034",
   "metadata": {},
   "source": [
    "### Install Dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0b55494c-fff2-4459-87e1-e7399afd56d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install git+https://github.com/araffin/rlss23-dqn-tutorial/ --upgrade"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "600ca128-eaab-4cda-a378-8b122bb214ca",
   "metadata": {},
   "outputs": [],
   "source": [
    "!apt-get install ffmpeg  # For visualization"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "30fc32dc-45f6-4303-ab92-4d0f4cc34c67",
   "metadata": {},
   "source": [
    "### Imports (from Part I)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "34722639-6c01-4c5b-ab65-c8d5cb2adaad",
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import Optional\n",
    "\n",
    "import numpy as np\n",
    "import torch as th\n",
    "import gymnasium as gym\n",
    "from gymnasium import spaces\n",
    "\n",
    "# We implemented those components in part I\n",
    "from dqn_tutorial.dqn import ReplayBuffer, epsilon_greedy_action_selection, collect_one_step, linear_schedule, QNetwork\n",
    "from dqn_tutorial.dqn.evaluation import evaluate_policy\n",
    "from dqn_tutorial.notebook_utils import show_videos"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "24f1c3fc-078e-43b2-984f-a6bde4ce5796",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "\n",
    "import dqn_tutorial\n",
    "\n",
     "video_folder = Path(dqn_tutorial.__file__).parent.parent / \"logs/videos\""
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "82559d16-3fc6-4d47-a373-f94adcab7102",
   "metadata": {},
   "source": [
    "## DQN Update rule (no target network)\n",
    "<div>\n",
    "    <img src=\"https://araffin.github.io/slides/dqn-tutorial/images/dqn/annotated_dqn.png\" width=\"1000\"/>\n",
    "</div>\n"
   ]
  },
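  {
   "cell_type": "markdown",
   "id": "3f6c1a20-91b4-4c5e-8f2a-6d0b7e4a9c11",
   "metadata": {},
   "source": [
    "Without a target network, the online network with weights $\\theta$ is used both to predict the current Q-values and to compute the 1-step TD target:\n",
    "\n",
    "$y = r_t + \\gamma \\cdot \\max_{a \\in A}(\\hat{Q}_{\\pi}(s_{t+1}, a; \\theta))$\n",
    "\n",
    "and the network is updated by minimizing the squared error between $\\hat{Q}_{\\pi}(s_t, a_t; \\theta)$ and $y$."
   ]
  },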
  {
   "cell_type": "markdown",
   "id": "c41c2092-ffa8-4369-a5ac-e9b81fd2c12f",
   "metadata": {},
   "source": [
    "### Exercise (15 minutes): write DQN update\n",
    "\n",
     "**HINT**: The DQN update is heavily inspired by the FQI update; if you get stuck, take a look at what you did in the first notebook on FQI.\n",
    "\n",
    "**HINT**: The data sampled from the replay buffer uses the following structure:\n",
    "\n",
    "```python\n",
    "@dataclass\n",
    "class ReplayBufferSamples:\n",
    "    \"\"\"\n",
    "    A dataclass containing transitions from the replay buffer.\n",
    "    \"\"\"\n",
    "\n",
    "    observations: np.ndarray  # same as states in the theory\n",
    "    next_observations: np.ndarray\n",
    "    actions: np.ndarray\n",
    "    rewards: np.ndarray\n",
    "    terminateds: np.ndarray\n",
    "```\n",
    "\n",
    "**HINT**: You can take a look at the section about Q-Network in the second notebook (DQN part I) to recall how to predict q-values using a q-network."
   ]
  },
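  {
   "cell_type": "markdown",
   "id": "8d2e5b91-47ac-4f30-b6c2-1a9e0d4f7c22",
   "metadata": {},
   "source": [
    "A quick illustration of the two PyTorch operations used below (standard `torch` API, the tensors are made up for the example):\n",
    "\n",
    "```python\n",
    "import torch as th\n",
    "\n",
    "q_values = th.tensor([[1.0, 5.0], [3.0, 2.0]])  # (batch_size=2, n_actions=2)\n",
    "actions = th.tensor([[1], [0]])  # actions taken, shape (batch_size, 1)\n",
    "\n",
    "# th.gather picks q_values[i, actions[i]] for each row of the batch\n",
    "selected = th.gather(q_values, dim=1, index=actions).squeeze(dim=1)\n",
    "print(selected)  # tensor([5., 3.])\n",
    "\n",
    "# tensor.max(dim=...) returns a (values, indices) tuple\n",
    "max_values, argmax_actions = q_values.max(dim=1)\n",
    "print(max_values)  # tensor([5., 3.])\n",
    "```"
   ]
  },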
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0d3fb635-fa44-4a52-88be-1c17d5a639c7",
   "metadata": {},
   "outputs": [],
   "source": [
    "def dqn_update_no_target(\n",
    "    q_net: QNetwork,\n",
    "    optimizer: th.optim.Optimizer,\n",
    "    replay_buffer: ReplayBuffer,\n",
    "    batch_size: int,\n",
    "    gamma: float,\n",
    ") -> None:\n",
    "    \"\"\"\n",
    "    Perform one gradient step on the Q-network\n",
    "    using the data from the replay buffer.\n",
    "    Note: this is the same as dqn_update in dqn.py, but without the target network.\n",
    "\n",
    "    :param q_net: The Q-network to update\n",
    "    :param optimizer: The optimizer to use\n",
    "    :param replay_buffer: The replay buffer containing the transitions\n",
    "    :param batch_size: The minibatch size, how many transitions to sample\n",
    "    :param gamma: The discount factor\n",
    "    \"\"\"\n",
    "    ### YOUR CODE HERE\n",
    "\n",
    "    # Sample the replay buffer and convert them to PyTorch tensors\n",
    "    # using `.to_torch()` method\n",
    "    replay_data = replay_buffer.sample(batch_size).to_torch()\n",
    "\n",
    "    # We should not compute gradient with respect to the target\n",
    "    with th.no_grad():\n",
    "        # Compute the Q-values for the next observations\n",
    "        # (replay_data.next_observations)\n",
    "        # (batch_size, n_actions)\n",
    "\n",
    "        # Follow greedy policy: use the one with the highest value\n",
    "        # shape: (batch_size,)\n",
    "        # Note: tensor.max(dim=..) returns a tuple (max, indices) in PyTorch\n",
    "\n",
    "        # If the episode is terminated, set the target to the reward\n",
    "        # (same as FQI, you can use `th.logical_not` to mask the next q values)\n",
    "\n",
    "        # 1-step TD target (TD(0) same as for FQI)\n",
    "        td_target = ...\n",
    "\n",
    "    # Get current Q-values estimates for the replay_data (batch_size, n_actions)\n",
    "\n",
    "    # Select the Q-values corresponding to the actions that were selected\n",
    "    # during data collection,\n",
    "    # you should use `th.gather()`\n",
    "\n",
    "    # Reshape from (batch_size, 1) to (batch_size,) to avoid broadcast error\n",
    "    # You can use `tensor.squeeze(dim=..)`\n",
    "    current_q_values = ...\n",
    "\n",
    "    # Check for any shape/broadcast error\n",
    "    # Current q-values must have the same shape as the TD target\n",
    "    assert current_q_values.shape == (batch_size,), f\"{current_q_values.shape} != {(batch_size,)}\"\n",
    "    assert current_q_values.shape == td_target.shape, f\"{current_q_values.shape} != {td_target.shape}\"\n",
    "\n",
    "    # Compute the Mean Squared Error (MSE) loss\n",
    "    # Optionally, one can use a Huber loss instead of the MSE loss\n",
    "    loss = ...\n",
    "\n",
    "    ### END OF YOUR CODE\n",
    "\n",
    "    # Reset gradients\n",
    "    optimizer.zero_grad()\n",
    "    # Compute the gradients\n",
    "    loss.backward()\n",
    "    # Update the parameters of the q-network\n",
    "    optimizer.step()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d4c7b067-cd2d-4dbb-a6b5-dd7cfb515137",
   "metadata": {},
   "source": [
    "Let's test the implementation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ab535b42-e643-422a-8087-51311b4cdb54",
   "metadata": {},
   "outputs": [],
   "source": [
    "env = gym.make(\"CartPole-v1\")\n",
    "q_net = QNetwork(env.observation_space, env.action_space)\n",
    "optimizer = th.optim.Adam(q_net.parameters(), lr=0.001)\n",
    "replay_buffer = ReplayBuffer(2000, env.observation_space, env.action_space)\n",
    "\n",
    "obs, _ = env.reset()\n",
    "# Let's collect some data following an epsilon-greedy policy\n",
    "for _ in range(1000):\n",
    "    obs = collect_one_step(env, q_net, replay_buffer, obs, exploration_rate=0.1)\n",
    "\n",
    "# Try to do some gradient steps:\n",
    "for _ in range(10):\n",
    "    dqn_update_no_target(q_net, optimizer, replay_buffer, batch_size=32, gamma=0.99)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "515755c2-f802-4662-8337-918d1505fc9a",
   "metadata": {},
   "source": [
    "### Exercise (10 minutes): write the training loop\n",
    "\n",
    "Let's put everything together and implement the training loop that alternates between data collection and updating the Q-Network.\n",
    "At first we will not use any target network.\n",
    "\n",
    "<div>\n",
    "    <img src=\"https://araffin.github.io/slides/dqn-tutorial/images/dqn/dqn_loop.png\" width=\"600\"/>\n",
    "</div>"
   ]
  },
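  {
   "cell_type": "markdown",
   "id": "6a4f9c03-2d81-4be7-9f05-c3e1b8d2a733",
   "metadata": {},
   "source": [
    "As a reminder, `linear_schedule()` (implemented in part I) anneals the exploration rate. The sketch below is an illustrative re-implementation of that behavior, assuming the call signature used later in this notebook; it is not the exact code from `dqn_tutorial`:\n",
    "\n",
    "```python\n",
    "def linear_schedule_sketch(initial_value: float, final_value: float, current_step: int, end_step: int) -> float:\n",
    "    # Linearly interpolate between initial_value and final_value,\n",
    "    # then stay constant once end_step is reached\n",
    "    progress = min(current_step / end_step, 1.0)\n",
    "    return initial_value + progress * (final_value - initial_value)\n",
    "\n",
    "\n",
    "print(linear_schedule_sketch(1.0, 0.5, 0, 1000))  # 1.0\n",
    "print(linear_schedule_sketch(1.0, 0.5, 500, 1000))  # 0.75\n",
    "print(linear_schedule_sketch(1.0, 0.5, 2000, 1000))  # 0.5\n",
    "```"
   ]
  },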
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "790401fe-f7cc-4338-a6d4-b7996d275dd2",
   "metadata": {},
   "outputs": [],
   "source": [
    "def run_dqn_no_target(\n",
    "    env_id: str = \"CartPole-v1\",\n",
    "    replay_buffer_size: int = 50_000,\n",
    "    # Exploration schedule\n",
    "    # (for the epsilon-greedy data collection)\n",
    "    exploration_initial_eps: float = 1.0,\n",
    "    exploration_final_eps: float = 0.01,\n",
    "    n_timesteps: int = 20_000,\n",
    "    update_interval: int = 2,\n",
    "    learning_rate: float = 3e-4,\n",
    "    batch_size: int = 64,\n",
    "    gamma: float = 0.99,\n",
    "    n_eval_episodes: int = 10,\n",
    "    evaluation_interval: int = 1000,\n",
    "    eval_exploration_rate: float = 0.0,\n",
    "    seed: int = 2023,\n",
    "    # device: Union[th.device, str] = \"cpu\",\n",
    "    eval_render_mode: Optional[str] = None,  # \"human\", \"rgb_array\", None\n",
    ") -> QNetwork:\n",
    "    \"\"\"\n",
    "    Run Deep Q-Learning (DQN) on a given environment.\n",
    "    (without target network)\n",
    "\n",
    "    :param env_id: Name of the environment\n",
    "    :param replay_buffer_size: Max capacity of the replay buffer\n",
    "    :param exploration_initial_eps: The initial exploration rate\n",
    "    :param exploration_final_eps: The final exploration rate\n",
    "    :param n_timesteps: Number of timesteps in total\n",
    "    :param update_interval: How often to update the Q-network\n",
    "        (every update_interval steps)\n",
    "    :param learning_rate: The learning rate to use for the optimizer\n",
    "    :param batch_size: The minibatch size\n",
    "    :param gamma: The discount factor\n",
    "    :param n_eval_episodes: The number of episodes to evaluate the policy on\n",
    "    :param evaluation_interval: How often to evaluate the policy\n",
    "    :param eval_exploration_rate: The exploration rate to use during evaluation\n",
    "    :param seed: Random seed for the pseudo random generator\n",
    "    :param eval_render_mode: The render mode to use for evaluation\n",
    "    \"\"\"\n",
    "    # Set seed for reproducibility\n",
     "    # Seed the NumPy and PyTorch pseudo random generators\n",
    "    np.random.seed(seed)\n",
    "    # seed the RNG for all devices (both CPU and CUDA)\n",
    "    th.manual_seed(seed)\n",
    "\n",
    "    # Create the environment\n",
    "    env = gym.make(env_id)\n",
    "    assert isinstance(env.observation_space, spaces.Box)\n",
    "    assert isinstance(env.action_space, spaces.Discrete)\n",
    "    env.action_space.seed(seed)\n",
    "\n",
    "    # Create the evaluation environment\n",
    "    eval_env = gym.make(env_id, render_mode=eval_render_mode)\n",
    "    eval_env.reset(seed=seed)\n",
    "    eval_env.action_space.seed(seed)\n",
    "\n",
    "    ### YOUR CODE HERE\n",
    "    # TODO:\n",
    "    # 1. Instantiate the Q-Network and the optimizer\n",
    "    # 2. Instantiate the replay buffer\n",
    "    # 3. Compute the current exploration rate (epsilon)\n",
    "    # 4. Collect new transition by stepping in the env following\n",
    "    # an epsilon-greedy strategy\n",
    "    # 5. Update the Q-Network using gradient descent\n",
    "\n",
    "    # Create the q-network\n",
    "\n",
    "    # Create the optimizer (PyTorch `th.optim.Adam` will be helpful here)\n",
    "\n",
    "    # Create the Replay buffer\n",
    "\n",
    "    # Reset the env\n",
    "\n",
    "    for current_step in range(1, n_timesteps + 1):\n",
    "        # Compute the current exploration rate\n",
    "        # according to the exploration schedule (update the value of epsilon)\n",
    "        # you should use `linear_schedule()`\n",
    "\n",
    "        # Do one step in the environment following an epsilon-greedy policy\n",
    "        # and store the transition in the replay buffer\n",
    "        # you can re-use `collect_one_step()`\n",
    "\n",
    "        \n",
    "        # Update the Q-Network every `update_interval` steps\n",
    "        if (current_step % update_interval) == 0:\n",
    "            # Do one gradient step (using `dqn_update_no_target()`)\n",
    "            ...\n",
    "            \n",
    "        ### END OF YOUR CODE\n",
    "\n",
    "        if (current_step % evaluation_interval) == 0:\n",
    "            print()\n",
    "            print(f\"Evaluation at step {current_step}:\")\n",
    "            # Evaluate the current greedy policy (deterministic policy)\n",
    "            evaluate_policy(eval_env, q_net, n_eval_episodes, eval_exploration_rate=eval_exploration_rate)\n",
    "    return q_net"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "79c35cd4-9d23-4083-9384-482fdaf1e9e6",
   "metadata": {},
   "source": [
    "## Train a DQN agent on CartPole environment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d24455f-5cb6-4b2c-8b48-2f395bbce8e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "# create log folder\n",
    "os.makedirs(\"../logs/\", exist_ok=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3967c2a6-469f-4766-9911-fab855f945d8",
   "metadata": {},
   "outputs": [],
   "source": [
    "env_id = \"CartPole-v1\"\n",
    "q_net = run_dqn_no_target(env_id)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5acb6e99-8328-4c6c-be81-32845d4d3dda",
   "metadata": {},
   "source": [
    "### Record and show video of the trained agent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e4b988f9-05b0-46e3-a143-d43162d5cd7a",
   "metadata": {},
   "outputs": [],
   "source": [
    "eval_env = gym.make(env_id, render_mode=\"rgb_array\")\n",
    "n_eval_episodes = 3\n",
    "eval_exploration_rate = 0.0\n",
    "video_name = f\"DQN_no_target_{env_id}\"\n",
    "\n",
    "evaluate_policy(\n",
    "    eval_env,\n",
    "    q_net,\n",
    "    n_eval_episodes,\n",
    "    eval_exploration_rate=eval_exploration_rate,\n",
    "    video_name=video_name,\n",
    ")\n",
    "\n",
    "show_videos(video_folder, prefix=video_name)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "b269a237-a0db-40a4-9621-449538c410b7",
   "metadata": {},
   "source": [
    "## [Bonus] DQN Target Network\n",
    "\n",
    "\n",
    "<div>\n",
    "    <img src=\"https://araffin.github.io/slides/dqn-tutorial/images/dqn/target_q_network.png\" width=\"1000\"/>\n",
    "</div>\n",
    "\n",
     "The only thing that changes is how the next Q-value is predicted.\n",
     "\n",
     "In DQN without a target network, the online network with weights **$\\theta$** is used:\n",
    "\n",
    "$y = r_t + \\gamma \\cdot \\max_{a \\in A}(\\hat{Q}_{\\pi}(s_{t+1}, a; \\theta))$\n",
    "\n",
    "\n",
     "whereas in DQN with a target network, the target Q-network (a delayed copy of the online Q-network) with weights **$\\theta^\\prime$** is used instead:\n",
    "\n",
    "$y = r_t + \\gamma \\cdot \\max_{a \\in A}(\\hat{Q}_{\\pi}(s_{t+1}, a; \\theta^\\prime))$\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "781e7e78-b156-43f0-aa4d-f7c842d3d837",
   "metadata": {},
   "source": [
    "### Exercise (5 minutes): write the DQN update with target network\n",
    "\n",
     "**HINT**: It is exactly the same as `dqn_update_no_target` except for how the next Q-values are computed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "27f17152-59cf-495a-bde7-73b6f219e779",
   "metadata": {},
   "outputs": [],
   "source": [
    "def dqn_update(\n",
    "    q_net: QNetwork,\n",
    "    q_target_net: QNetwork,\n",
    "    optimizer: th.optim.Optimizer,\n",
    "    replay_buffer: ReplayBuffer,\n",
    "    batch_size: int,\n",
    "    gamma: float,\n",
    ") -> None:\n",
    "    \"\"\"\n",
    "    Perform one gradient step on the Q-network\n",
    "    using the data from the replay buffer.\n",
    "\n",
    "    :param q_net: The Q-network to update\n",
    "    :param q_target_net: The target Q-network, to compute the td-target.\n",
    "    :param optimizer: The optimizer to use\n",
    "    :param replay_buffer: The replay buffer containing the transitions\n",
    "    :param batch_size: The minibatch size, how many transitions to sample\n",
    "    :param gamma: The discount factor\n",
    "    \"\"\"\n",
    "\n",
    "    # Sample the replay buffer and convert them to PyTorch tensors\n",
    "    replay_data = replay_buffer.sample(batch_size).to_torch()\n",
    "\n",
    "    with th.no_grad():\n",
    "        ### YOUR CODE HERE\n",
    "        # TODO: use the target q-network instead of the online q-network\n",
    "        # to compute the next values\n",
    "\n",
    "        # Compute the Q-values for the next observations (batch_size, n_actions)\n",
    "        # using the target network\n",
    "\n",
    "        # Follow greedy policy: use the one with the highest value\n",
    "        # (batch_size,)\n",
    "\n",
    "        # If the episode is terminated, set the target to the reward\n",
    "\n",
    "        # 1-step TD target\n",
    "\n",
    "        ### END OF YOUR CODE\n",
    "\n",
    "    # Get current Q-values estimates for the replay_data (batch_size, n_actions)\n",
    "    q_values = q_net(replay_data.observations)\n",
    "    # Select the Q-values corresponding to the actions that were selected\n",
    "    # during data collection\n",
    "    current_q_values = th.gather(q_values, dim=1, index=replay_data.actions)\n",
    "    # Reshape from (batch_size, 1) to (batch_size,) to avoid broadcast error\n",
    "    current_q_values = current_q_values.squeeze(dim=1)\n",
    "\n",
    "    # Check for any shape/broadcast error\n",
    "    # Current q-values must have the same shape as the TD target\n",
    "    assert current_q_values.shape == (batch_size,), f\"{current_q_values.shape} != {(batch_size,)}\"\n",
    "    assert current_q_values.shape == td_target.shape, f\"{current_q_values.shape} != {td_target.shape}\"\n",
    "\n",
    "    # Compute the Mean Squared Error (MSE) loss\n",
    "    # Optionally, one can use a Huber loss instead of the MSE loss\n",
    "    loss = ((current_q_values - td_target) ** 2).mean()\n",
    "    # Huber loss\n",
    "    # loss = th.nn.functional.smooth_l1_loss(current_q_values, td_target)\n",
    "\n",
    "    # Reset gradients\n",
    "    optimizer.zero_grad()\n",
    "    # Compute the gradients\n",
    "    loss.backward()\n",
    "    # Update the parameters of the q-network\n",
    "    optimizer.step()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "85c74b0e-1923-4c6e-9a58-f83a97435f0d",
   "metadata": {},
   "source": [
    "### Updated training loop"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "23e326d9-1fa7-4efd-a841-b1d9e5ef9484",
   "metadata": {},
   "outputs": [],
   "source": [
    "def run_dqn(\n",
    "    env_id: str = \"CartPole-v1\",\n",
    "    replay_buffer_size: int = 50_000,\n",
    "    # How often do we copy the parameters from the Q-network to the target network\n",
    "    target_network_update_interval: int = 1000,\n",
    "    # Warmup phase\n",
    "    learning_starts: int = 100,\n",
    "    # Exploration schedule\n",
    "    # (for the epsilon-greedy data collection)\n",
    "    exploration_initial_eps: float = 1.0,\n",
    "    exploration_final_eps: float = 0.01,\n",
    "    exploration_fraction: float = 0.1,\n",
    "    n_timesteps: int = 20_000,\n",
    "    update_interval: int = 2,\n",
    "    learning_rate: float = 3e-4,\n",
    "    batch_size: int = 64,\n",
    "    gamma: float = 0.99,\n",
    "    n_hidden_units: int = 64,\n",
    "    n_eval_episodes: int = 10,\n",
    "    evaluation_interval: int = 1000,\n",
    "    eval_exploration_rate: float = 0.0,\n",
    "    seed: int = 2023,\n",
    "    # device: Union[th.device, str] = \"cpu\",\n",
    "    eval_render_mode: Optional[str] = None,  # \"human\", \"rgb_array\", None\n",
    ") -> QNetwork:\n",
    "    \"\"\"\n",
    "    Run Deep Q-Learning (DQN) on a given environment.\n",
    "    (with a target network)\n",
    "\n",
    "    :param env_id: Name of the environment\n",
    "    :param replay_buffer_size: Max capacity of the replay buffer\n",
    "    :param target_network_update_interval: How often do we copy the parameters\n",
    "         to the target network\n",
    "    :param learning_starts: Warmup phase to fill the replay buffer\n",
    "        before starting the optimization.\n",
    "    :param exploration_initial_eps: The initial exploration rate\n",
    "    :param exploration_final_eps: The final exploration rate\n",
    "    :param exploration_fraction: The fraction of the number of steps\n",
    "        during which the exploration rate is annealed from\n",
    "        initial_eps to final_eps.\n",
    "        After this many steps, the exploration rate remains constant.\n",
    "    :param n_timesteps: Number of timesteps in total\n",
    "    :param update_interval: How often to update the Q-network\n",
    "        (every update_interval steps)\n",
    "    :param learning_rate: The learning rate to use for the optimizer\n",
    "    :param batch_size: The minibatch size\n",
    "    :param gamma: The discount factor\n",
    "    :param n_hidden_units: Number of units for each hidden layer\n",
    "        of the Q-Network.\n",
    "    :param n_eval_episodes: The number of episodes to evaluate the policy on\n",
    "    :param evaluation_interval: How often to evaluate the policy\n",
    "    :param eval_exploration_rate: The exploration rate to use during evaluation\n",
    "    :param seed: Random seed for the pseudo random generator\n",
    "    :param eval_render_mode: The render mode to use for evaluation\n",
    "    \"\"\"\n",
    "    # Set seed for reproducibility\n",
     "    # Seed the NumPy and PyTorch pseudo random generators\n",
    "    np.random.seed(seed)\n",
    "    # seed the RNG for all devices (both CPU and CUDA)\n",
    "    th.manual_seed(seed)\n",
    "\n",
    "    # Create the environment\n",
    "    env = gym.make(env_id)\n",
    "    # For highway env\n",
    "    env = gym.wrappers.FlattenObservation(env)\n",
    "    env = gym.wrappers.RecordEpisodeStatistics(env)\n",
    "    assert isinstance(env.observation_space, spaces.Box)\n",
    "    assert isinstance(env.action_space, spaces.Discrete)\n",
    "    env.action_space.seed(seed)\n",
    "\n",
    "    # Create the evaluation environment\n",
    "    eval_env = gym.make(env_id, render_mode=eval_render_mode)\n",
    "    eval_env = gym.wrappers.FlattenObservation(eval_env)\n",
    "    eval_env.reset(seed=seed)\n",
    "    eval_env.action_space.seed(seed)\n",
    "\n",
    "    # Create the q-network\n",
    "    q_net = QNetwork(env.observation_space, env.action_space, n_hidden_units=n_hidden_units)\n",
    "    # Create the target network\n",
    "    q_target_net = QNetwork(env.observation_space, env.action_space, n_hidden_units=n_hidden_units)\n",
    "    # Copy the parameters of the q-network to the target network\n",
    "    q_target_net.load_state_dict(q_net.state_dict())\n",
    "\n",
     "    # For environments with float64 observations (e.g. Flappy Bird)\n",
    "    if env.observation_space.dtype == np.float64:\n",
    "        q_net.double()\n",
    "        q_target_net.double()\n",
    "\n",
    "    # Create the optimizer, we only optimize the parameters of the q-network\n",
    "    optimizer = th.optim.Adam(q_net.parameters(), lr=learning_rate)\n",
    "\n",
    "    # Create the Replay buffer\n",
    "    replay_buffer = ReplayBuffer(replay_buffer_size, env.observation_space, env.action_space)\n",
    "    # Reset the env\n",
    "    obs, _ = env.reset(seed=seed)\n",
    "    for current_step in range(1, n_timesteps + 1):\n",
    "        # Update the current exploration schedule (update the value of epsilon)\n",
    "        exploration_rate = linear_schedule(\n",
    "            exploration_initial_eps,\n",
    "            exploration_final_eps,\n",
    "            current_step,\n",
    "            int(exploration_fraction * n_timesteps),\n",
    "        )\n",
    "        # Do one step in the environment following an epsilon-greedy policy\n",
    "        # and store the transition in the replay buffer\n",
    "        obs = collect_one_step(\n",
    "            env,\n",
    "            q_net,\n",
    "            replay_buffer,\n",
    "            obs,\n",
    "            exploration_rate=exploration_rate,\n",
    "            verbose=0,\n",
    "        )\n",
    "\n",
    "        # Update the target network\n",
    "        # by copying the parameters from the Q-network every target_network_update_interval steps\n",
    "        if (current_step % target_network_update_interval) == 0:\n",
    "            q_target_net.load_state_dict(q_net.state_dict())\n",
    "\n",
    "        # Update the Q-network every update_interval steps\n",
    "        # after learning_starts steps have passed (warmup phase)\n",
    "        if (current_step % update_interval) == 0 and current_step > learning_starts:\n",
    "            # Do one gradient step\n",
    "            dqn_update(q_net, q_target_net, optimizer, replay_buffer, batch_size, gamma=gamma)\n",
    "\n",
    "        if (current_step % evaluation_interval) == 0:\n",
    "            print()\n",
    "            print(f\"Evaluation at step {current_step}:\")\n",
    "            print(f\"exploration_rate={exploration_rate:.2f}\")\n",
    "            # Evaluate the current greedy policy (deterministic policy)\n",
    "            evaluate_policy(eval_env, q_net, n_eval_episodes, eval_exploration_rate=eval_exploration_rate)\n",
    "            # Save a checkpoint\n",
    "            th.save(q_net.state_dict(), f\"../logs/q_net_checkpoint_{env_id}_{current_step}.pth\")\n",
    "    return q_net"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "14af9427-8166-40fa-a581-51599d3be306",
   "metadata": {},
   "source": [
    "## Train DQN agent with target network on CartPole env"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "55922877-c7b1-4772-ad1a-e4c1f3ac67d6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Tuned hyperparameters from the RL Zoo3 of the Stable Baselines3 library\n",
    "# https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/hyperparams/dqn.yml\n",
    "\n",
    "env_id = \"CartPole-v1\"\n",
    "\n",
    "q_net = run_dqn(\n",
    "    env_id=env_id,\n",
    "    replay_buffer_size=100_000,\n",
    "    # Note: you can remove the target network\n",
    "    # by setting target_network_update_interval=1\n",
    "    target_network_update_interval=10,\n",
    "    learning_starts=1000,\n",
    "    exploration_initial_eps=1.0,\n",
    "    exploration_final_eps=0.04,\n",
    "    exploration_fraction=0.1,\n",
    "    n_timesteps=80_000,\n",
    "    update_interval=2,\n",
    "    learning_rate=1e-3,\n",
    "    batch_size=64,\n",
    "    gamma=0.99,\n",
    "    n_eval_episodes=10,\n",
    "    evaluation_interval=5000,\n",
    "    # No exploration during evaluation\n",
     "    # (deterministic policy)\n",
    "    eval_exploration_rate=0.0,\n",
    "    seed=2022,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "52450a1f-60b4-4098-bc64-5860476a50cc",
   "metadata": {},
   "source": [
    "### Visualize the trained agent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "56e18bf9-e3e4-495d-be4b-6acf0ae28ee1",
   "metadata": {},
   "outputs": [],
   "source": [
    "eval_env = gym.make(env_id, render_mode=\"rgb_array\")\n",
    "n_eval_episodes = 3\n",
    "eval_exploration_rate = 0.0\n",
    "video_name = f\"DQN_{env_id}\"\n",
    "\n",
    "# Optional: load checkpoint\n",
    "# q_net = QNetwork(eval_env.observation_space, eval_env.action_space, n_hidden_units=64)\n",
    "# q_net.load_state_dict(th.load(\"../logs/q_net_checkpoint_CartPole-v1_75000.pth\"))\n",
    "\n",
    "evaluate_policy(\n",
    "    eval_env,\n",
    "    q_net,\n",
    "    n_eval_episodes,\n",
    "    eval_exploration_rate=eval_exploration_rate,\n",
    "    video_name=video_name,\n",
    ")\n",
    "\n",
    "show_videos(video_folder, prefix=video_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "642e02d8-bd9c-4a07-9a0d-86748dcdd391",
   "metadata": {},
   "source": [
     "## Train a DQN agent on the Flappy Bird environment\n",
    "\n",
    "You can go in the [GitHub repo](https://github.com/araffin/flappy-bird-gymnasium/tree/patch-1) to learn more about this environment.\n",
    "\n",
    "<div>\n",
    "    <img src=\"https://raw.githubusercontent.com/markub3327/flappy-bird-gymnasium/main/imgs/dqn.gif\" width=\"300\"/>\n",
    "</div>\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "75e9703d-7b5b-4188-9737-786476a627ec",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install \"flappy-bird-gymnasium @ git+https://github.com/araffin/flappy-bird-gymnasium@patch-1\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e2a0eef2-d652-4224-86c6-8f669e82ec3e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import flappy_bird_gymnasium  # noqa: F401"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bf79677e-561c-4bb5-95d2-d4a99eb0579c",
   "metadata": {},
   "outputs": [],
   "source": [
    "env_id = \"FlappyBird-v0\"\n",
    "\n",
    "q_net = run_dqn(\n",
    "    env_id=env_id,\n",
    "    replay_buffer_size=100_000,\n",
    "    # Note: you can remove the target network\n",
    "    # by setting target_network_update_interval=1\n",
    "    target_network_update_interval=250,\n",
    "    learning_starts=10_000,\n",
    "    exploration_initial_eps=1.0,\n",
    "    exploration_final_eps=0.03,\n",
    "    exploration_fraction=0.1,\n",
    "    n_timesteps=500_000,\n",
    "    update_interval=4,\n",
    "    learning_rate=1e-3,\n",
    "    batch_size=128,\n",
    "    gamma=0.98,\n",
    "    n_eval_episodes=5,\n",
    "    evaluation_interval=50000,\n",
    "    n_hidden_units=256,\n",
    "    # No exploration during evaluation\n",
     "    # (deterministic policy)\n",
    "    eval_exploration_rate=0.0,\n",
    "    seed=2023,\n",
    "    eval_render_mode=None,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9a08cc96-2601-49dc-921c-44e2a048df81",
   "metadata": {},
   "source": [
    "### Record a video of the trained agent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e9e191f2-e3b3-41d0-bff2-50e1b9c9d19c",
   "metadata": {},
   "outputs": [],
   "source": [
    "eval_env = gym.make(env_id, render_mode=\"rgb_array\")\n",
    "n_eval_episodes = 3\n",
    "eval_exploration_rate = 0.00\n",
    "video_name = f\"DQN_{env_id}\"\n",
    "\n",
    "\n",
    "# Optional: load checkpoint\n",
    "q_net = QNetwork(eval_env.observation_space, eval_env.action_space, n_hidden_units=256)\n",
    "# Convert weights from float32 to float64 to match flappy bird obs\n",
    "q_net.double()\n",
    "q_net.load_state_dict(th.load(\"../logs/q_net_checkpoint_FlappyBird-v0_200000.pth\"))\n",
    "\n",
    "evaluate_policy(\n",
    "    eval_env,\n",
    "    q_net,\n",
    "    n_eval_episodes,\n",
    "    eval_exploration_rate=eval_exploration_rate,\n",
    "    video_name=video_name,\n",
    ")\n",
    "\n",
    "show_videos(video_folder, prefix=video_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "69a60230-6477-4318-8d60-10f6eada6064",
   "metadata": {},
   "source": [
    "### Going further\n",
    "\n",
     "- analyse the learned q-values\n",
     "- explore different values for the target update interval, or use a soft update instead of a hard copy\n",
     "- experiment with the Huber loss (smooth L1 loss) instead of the L2 loss (mean squared error)\n",
     "- play with different environments\n",
     "- implement a CNN to play Flappy Bird/Pong from pixels (you will need to stack frames)\n",
     "- implement DQN extensions (double Q-learning, prioritized experience replay, ...)"
   ]
  },
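  {
   "cell_type": "markdown",
   "id": "b7e3d1f8-5a26-4c94-a0d1-9f8c2e6b4a55",
   "metadata": {},
   "source": [
    "For the soft update mentioned above, a common choice is Polyak averaging: instead of copying the weights every `target_network_update_interval` steps, the target network slowly tracks the online network at every step. A minimal sketch (the `tau` coefficient below is illustrative, and this helper is not part of the tutorial code):\n",
    "\n",
    "```python\n",
    "import torch as th\n",
    "\n",
    "\n",
    "def soft_update(q_net: th.nn.Module, q_target_net: th.nn.Module, tau: float = 0.005) -> None:\n",
    "    # Polyak averaging: target <- (1 - tau) * target + tau * online\n",
    "    with th.no_grad():\n",
    "        for param, target_param in zip(q_net.parameters(), q_target_net.parameters()):\n",
    "            target_param.data.mul_(1.0 - tau)\n",
    "            target_param.data.add_(tau * param.data)\n",
    "```"
   ]
  },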
  {
   "cell_type": "markdown",
   "id": "6be2cc11-79e6-40c6-ab61-f78fec498315",
   "metadata": {},
   "source": [
    "## Conclusion\n",
    "\n",
    "In this notebook, you have seen how to implement the DQN algorithm (update rule and training loop) using all the components from part I (replay buffer, epsilon-greedy exploration strategy, Q-Network, ...)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "89c3f25b-4eed-456a-b4f7-68ce16f56763",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
