{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# IERG 5350 Assignment 3: Value Function Approximation in RL\n",
    "\n",
    "*2021-2022 1st term, IERG 5350: Reinforcement Learning. Department of Information Engineering, The Chinese University of Hong Kong. Course Instructor: Professor ZHOU Bolei. Assignment author: PENG Zhenghao.*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "| Student Name | Student ID |\n",
    "| :----: | :----: |\n",
    "| TYPE_YOUR_NAME_HERE | TYPE_YOUR_STUDENT_ID_HERE |\n",
    "\n",
    "------"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Welecome to the assignment 3 of our RL course. \n",
    "\n",
    "We will cover the following knowledege in this assignment:\n",
    "\n",
    "1. The n-step TD control algorithm\n",
    "2. Value approximation through linear function\n",
    "3. Feature construction\n",
    "4. Neural network based value approximation\n",
    "5. The basic usage of Pytorch\n",
    "\n",
    "The following figure demonstrates the structure of this assignment.\n",
    "\n",
    "![](overview.png)\n",
    "\n",
    "\n",
    "**Before starting, make sure you have installed the following packages:**\n",
    "\n",
    "\n",
    "1. Python 3\n",
    "2. Jupyter Notebook\n",
    "3. Gym, **Please install via `pip install 'gym[all]'` to ensure all funcitonality of gym are properly set up.**\n",
    "5. Numpy\n",
    "6. Pytorch, install via `pip install torch`. Please refer to official website https://pytorch.org for detailed installation guideline.\n",
    "7. Opencv, install via `pip install opencv-python`\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "------\n",
    "\n",
    "## Section 1: Basic Reinforcement Learning Pipeline\n",
    "\n",
    "(5 / 100 points)\n",
    "\n",
    "In this section, we will prepare several functions for evaulation, training RL algorithms. \n",
    "We will also build an `AbstractTrainer` class used as a general framework for different function approximation methods."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import gym\n",
    "import numpy as np\n",
    "import torch\n",
    "from utils import *\n",
    "import torch\n",
    "import torch.nn as nn"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "def evaluate(policy, num_episodes=1, seed=0, env_name='FrozenLake8x8-v1',\n",
    "             render=False, existing_env=None):\n",
    "    \"\"\"This function evaluate the given policy and return the mean episode \n",
    "    reward.\n",
    "    :param policy: a function whose input is the observation\n",
    "    :param num_episodes: number of episodes you wish to run\n",
    "    :param seed: the random seed\n",
    "    :param env_name: the name of the environment\n",
    "    :param render: a boolean flag indicating whether to render policy\n",
    "    :return: the averaged episode reward of the given policy.\n",
    "    \"\"\"\n",
    "    if existing_env is None:\n",
    "        env = gym.make(env_name)\n",
    "        env.seed(seed)\n",
    "    else:\n",
    "        env = existing_env\n",
    "    rewards = []\n",
    "    if render: num_episodes = 1\n",
    "    for i in range(num_episodes):\n",
    "        obs = env.reset()\n",
    "        act = policy(obs)\n",
    "        ep_reward = 0\n",
    "        while True:\n",
    "            obs, reward, done, info = env.step(act)\n",
    "            act = policy(obs)\n",
    "            ep_reward += reward\n",
    "            if render:\n",
    "                env.render()\n",
    "                wait(sleep=0.05)\n",
    "            if done:\n",
    "                break\n",
    "        rewards.append(ep_reward)\n",
    "    if render:\n",
    "        env.close()\n",
    "    return np.mean(rewards)\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "def run(trainer_cls, config=None, reward_threshold=None):\n",
    "    \"\"\"Run the trainer and report progress, agnostic to the class of trainer\n",
    "    :param trainer_cls: A trainer class \n",
    "    :param config: A dict\n",
    "    :param reward_threshold: the reward threshold to break the training\n",
    "    :return: The trained trainer and a dataframe containing learning progress\n",
    "    \"\"\"\n",
    "    if config is None:\n",
    "        config = {}\n",
    "    trainer = trainer_cls(config)\n",
    "    config = trainer.config\n",
    "    start = now = time.time()\n",
    "    stats = []\n",
    "    for i in range(config['max_iteration'] + 1):\n",
    "        stat = trainer.train()\n",
    "        stats.append(stat or {})\n",
    "        if i % config['evaluate_interval'] == 0 or \\\n",
    "                i == config[\"max_iteration\"]:\n",
    "            reward = trainer.evaluate(config.get(\"evaluate_num_episodes\", 50))\n",
    "            print(\"({:.1f}s,+{:.1f}s)\\tIteration {}, current mean episode \"\n",
    "                  \"reward is {}. {}\".format(\n",
    "                time.time() - start, time.time() - now, i, reward,\n",
    "                {k: round(np.mean(v), 4) for k, v in\n",
    "                 stat.items()} if stat else \"\"))\n",
    "            now = time.time()\n",
    "        if reward_threshold is not None and reward > reward_threshold:\n",
    "            print(\"In {} iteration, current mean episode reward {:.3f} is \"\n",
    "                  \"greater than reward threshold {}. Congratulation! Now we \"\n",
    "                  \"exit the training process.\".format(\n",
    "                i, reward, reward_threshold))\n",
    "            break\n",
    "    return trainer, stats\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Solve TODOs and remove \"pass\"\n",
    "\n",
    "default_config = dict(\n",
    "    env_name=\"CartPole-v0\",\n",
    "    max_iteration=1000,\n",
    "    max_episode_length=1000,\n",
    "    evaluate_interval=100,\n",
    "    gamma=0.99,\n",
    "    eps=0.3,\n",
    "    seed=0\n",
    ")\n",
    "\n",
    "\n",
    "class AbstractTrainer:\n",
    "    \"\"\"This is the abstract class for value-based RL trainer. We will inherent\n",
    "    the specify algorithm's trainer from this abstract class, so that we can\n",
    "    reuse the codes.\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, config):\n",
    "        self.config = merge_config(config, default_config)\n",
    "\n",
    "        # Create the environment\n",
    "        self.env_name = self.config['env_name']\n",
    "        self.env = gym.make(self.env_name)\n",
    "        if self.env_name == \"Pong-ram-v0\":\n",
    "            self.env = wrap_deepmind_ram(self.env)\n",
    "\n",
    "        # Apply the random seed\n",
    "        self.seed = self.config[\"seed\"]\n",
    "        np.random.seed(self.seed)\n",
    "        self.env.seed(self.seed)\n",
    "\n",
    "        # We set self.obs_dim to the number of possible observation\n",
    "        # if observation space is discrete, otherwise the number\n",
    "        # of observation's dimensions. The same to self.act_dim.\n",
    "        if isinstance(self.env.observation_space, gym.spaces.box.Box):\n",
    "            assert len(self.env.observation_space.shape) == 1\n",
    "            self.obs_dim = self.env.observation_space.shape[0]\n",
    "            self.discrete_obs = False\n",
    "        elif isinstance(self.env.observation_space,\n",
    "                        gym.spaces.discrete.Discrete):\n",
    "            self.obs_dim = self.env.observation_space.n\n",
    "            self.discrete_obs = True\n",
    "        else:\n",
    "            raise ValueError(\"Wrong observation space!\")\n",
    "\n",
    "        if isinstance(self.env.action_space, gym.spaces.box.Box):\n",
    "            assert len(self.env.action_space.shape) == 1\n",
    "            self.act_dim = self.env.action_space.shape[0]\n",
    "        elif isinstance(self.env.action_space, gym.spaces.discrete.Discrete):\n",
    "            self.act_dim = self.env.action_space.n\n",
    "        else:\n",
    "            raise ValueError(\"Wrong action space!\")\n",
    "\n",
    "        self.eps = self.config['eps']\n",
    "\n",
    "        # You need to setup the parameter for your function approximator.\n",
    "        self.initialize_parameters()\n",
    "\n",
    "    def initialize_parameters(self):\n",
    "        self.parameters = None\n",
    "        raise NotImplementedError(\n",
    "            \"You need to override the \"\n",
    "            \"Trainer._initialize_parameters() function.\")\n",
    "\n",
    "    def process_state(self, state):\n",
    "        \"\"\"Preprocess the state (observation).\n",
    "\n",
    "        If the environment provides discrete observation (state), transform\n",
    "        it to one-hot form. For example, the environment FrozenLake-v0\n",
    "        provides an integer in [0, ..., 15] denotes the 16 possible states.\n",
    "        We transform it to one-hot style:\n",
    "\n",
    "        original state 0 -> one-hot vector [1, 0, 0, 0, 0, 0, 0, 0, ...]\n",
    "        original state 1 -> one-hot vector [0, 1, 0, 0, 0, 0, 0, 0, ...]\n",
    "        original state 15 -> one-hot vector [0, ..., 0, 0, 0, 0, 0, 1]\n",
    "\n",
    "        If the observation space is continuous, then you should do nothing.\n",
    "        \"\"\"\n",
    "        if not self.discrete_obs:\n",
    "            return state\n",
    "        else:\n",
    "            new_state = np.zeros((self.obs_dim,))\n",
    "            new_state[state] = 1\n",
    "        return new_state\n",
    "\n",
    "    def compute_values(self, processed_state):\n",
    "        \"\"\"Approximate the state value of given state.\n",
    "        This is a private function.\n",
    "        Note that you should NOT preprocess the state here.\n",
    "        \"\"\"\n",
    "        raise NotImplementedError(\"You need to override the \"\n",
    "                                  \"Trainer.compute_values() function.\")\n",
    "\n",
    "    def compute_action(self, processed_state, eps=None):\n",
    "        \"\"\"Compute the action given the state. Note that the input\n",
    "        is the processed state.\"\"\"\n",
    "\n",
    "        values = self.compute_values(processed_state)\n",
    "        assert values.ndim == 1, values.shape\n",
    "\n",
    "        if eps is None:\n",
    "            eps = self.eps\n",
    "\n",
    "        # [TODO] Implement the epsilon-greedy policy here. We have `eps`\n",
    "        #  probability to choose a uniformly random action in action_space,\n",
    "        #  otherwise choose action that maximizes the values.\n",
    "        # Hint: Use the function of self.env.action_space to sample random\n",
    "        # action.\n",
    "        pass\n",
    "        action = None\n",
    "        \n",
    "        return action\n",
    "\n",
    "    def evaluate(self, num_episodes=50, *args, **kwargs):\n",
    "        \"\"\"Use the function you write to evaluate current policy.\n",
    "        Return the mean episode reward of 50 episodes.\"\"\"\n",
    "        policy = lambda raw_state: self.compute_action(\n",
    "            self.process_state(raw_state), eps=0.0)\n",
    "        if \"MetaDrive\" in self.env_name:\n",
    "            kwargs[\"existing_env\"] = self.env\n",
    "        result = evaluate(policy, num_episodes, seed=self.seed,\n",
    "                          env_name=self.env_name, *args, **kwargs)\n",
    "        return result\n",
    "\n",
    "    def compute_gradient(self, *args, **kwargs):\n",
    "        \"\"\"Compute the gradient.\"\"\"\n",
    "        raise NotImplementedError(\n",
    "            \"You need to override the Trainer.compute_gradient() function.\")\n",
    "\n",
    "    def apply_gradient(self, *args, **kwargs):\n",
    "        \"\"\"Compute the gradient\"\"\"\n",
    "        raise NotImplementedError(\n",
    "            \"You need to override the Trainer.apply_gradient() function.\")\n",
    "\n",
    "    def train(self):\n",
    "        \"\"\"Conduct one iteration of learning.\"\"\"\n",
    "        raise NotImplementedError(\"You need to override the \"\n",
    "                                  \"Trainer.train() function.\")\n"
   ]
  },
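  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a generic illustration of the epsilon-greedy rule used in `compute_action` (a sketch on a toy value vector, not the trainer implementation; `eps_greedy_sketch` is a hypothetical helper, and in the trainer you should sample random actions via `self.env.action_space.sample()` instead):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def eps_greedy_sketch(values, eps, rng=np.random):\n",
    "    # with probability eps, pick a uniformly random action;\n",
    "    # otherwise pick the action with the largest estimated value\n",
    "    if rng.rand() < eps:\n",
    "        return rng.randint(len(values))\n",
    "    return int(np.argmax(values))\n",
    "\n",
    "assert eps_greedy_sketch(np.array([0.1, 0.9]), eps=0.0) == 1\n",
    "```"
   ]
  },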
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "class TestTrainer(AbstractTrainer):\n",
    "    \"\"\"This class is used for testing. We don't really train anything.\"\"\"\n",
    "    def compute_values(self, state):\n",
    "        return np.random.random_sample(size=self.act_dim)\n",
    "    def initialize_parameters(self):\n",
    "        self.parameters = np.random.random_sample(size=(self.obs_dim, self.act_dim))\n",
    "    \n",
    "t = TestTrainer(dict(env_name=\"CartPole-v0\"))\n",
    "obs = t.env.observation_space.sample()\n",
    "processed = t.process_state(obs)\n",
    "assert processed.shape == (4, )\n",
    "assert np.all(processed == obs)\n",
    "# Test compute_action\n",
    "values = t.compute_values(processed)\n",
    "correct_act = np.argmax(values)\n",
    "assert t.compute_action(processed, eps=0) == correct_act\n",
    "print(\"Average episode reward for a random policy in 500 episodes in CartPole-v0: \",\n",
    "      t.evaluate(num_episodes=500))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 2: Linear function approximation\n",
    "\n",
    "In this section, we implement a simple linear function to approximate the value function, whose input is the state (or the processed state) and output is the state-action value.\n",
    "\n",
    "First, we implement a `LinearTrainer` class which implements (1) linear function approximation and (2) n-step semi-gradient method to update the linear function.\n",
    "\n",
    "Then we further implement a `LinearTrainerWithFeatureConstruction` class which processs the input state and provide polynomial features that increase the effectiveness of the linear function approximation.\n",
    "\n",
    "Please also refer to the Chapter 9.4 (linear method), 9.5 (feature construction), and 10.2 (n-step semi-gradient method) of the RL textbook.\n",
    "\n",
    "---\n",
    "\n",
    "In this section, we leverage the n-step semi-gradient. \n",
    "\n",
    "What is the \"correct value\" of a state-action pair in one-step TD learning? We consider it is $r_t + \\gamma Q(s_{t+1}, a_{t+1})$ and thus lead to the one-step TD error:\n",
    "\n",
    "$TD = r_t + \\gamma Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)$.\n",
    "\n",
    "In n-step case, the target value of Q is:\n",
    "\n",
    "$Q(s_t, a_t) = \\sum_{i=t}^{t+n-1}[\\gamma^{i-t}r_i] + \\gamma^n Q(s_{t+n}, a_{t+n})$\n",
    "\n",
    "We follow the pipeline depicted in Chapter 10.2 of the textbook to implement this logic. Note that notation of the time step is different in this assignment and the textbook. In textbook, the reward $R_{t+1}$ is the reward when apply action $a_{t}$ to the environment at state $s_t$. In the equation above the $r_t$ has exactly the same meaning as the $R_{t+1}$ in the textbook. In the code below, we store the states, actions and rewards to lists during training. **You need to make sure the indices of these lists, namely the `tau` in  `actions[tau]` has the correct meaning.**\n",
    "\n",
    "After computing the target Q value, we need to derive the gradient to update the parameters. Consider a loss function, the Mean Square Error between the target Q value and the estimated Q value: \n",
    "\n",
    "$\\text{loss} = \\cfrac{1}{2}[\\sum_{i=t}^{t+n-1}\\gamma^{i-t}r_i + \\gamma^n Q(s_{t+n}, a_{t+n}) - Q(s_t, a_t)]^2$\n",
    "\n",
    "Compute the gradient of Loss with respect to the Q function:\n",
    "\n",
    "$\\cfrac{d \\text{loss}}{d Q} = -(\\sum_{i=t}^{t+n-1}\\gamma^{i-t}r_i + \\gamma^n Q(s_{t+n}, a_{t+n}) - Q(s_t, a_t))$\n",
    "\n",
    "According to the chain rule, the gradient of the loss w.r.t. the parameter ($W$) is:\n",
    "\n",
    "$\\cfrac{d \\text{loss}}{d W} = -(\\sum_{i=t}^{t+n-1}\\gamma^{i-t}r_i + \\gamma^n Q(s_{t+n}, a_{t+n}) - Q(s_t, a_t))\\cfrac{d Q}{d W}$\n",
    "\n",
    "To minimize the loss, we only need to descent the gradient:\n",
    "\n",
    "$W = W - lr \\cfrac{d \\text{loss}}{d W}$\n",
    "\n",
    "wherein $lr$ is the learning rate. Therefore, the final update rule of parameters is:\n",
    "\n",
    "$W = W + lr (\\sum_{i=t}^{t+n-1}\\gamma^{i-t}r_i + \\gamma^n Q(s_{t+n}, a_{t+n}) - Q(s_t, a_t))\\cfrac{d Q}{d W}$\n",
    "\n",
    "In the following code, we denote $G = \\sum_{i=t}^{t+n-1}\\gamma^{i-t}r_i + \\gamma^n Q(s_{t+n}, a_{t+n})$ and will compute $dQ / dW$ according to the form of the approximator."
   ]
  },
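  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the target above, here is a minimal sketch (a hypothetical helper, not part of the trainer) that computes $G$ from a segment of collected rewards:\n",
    "\n",
    "```python\n",
    "def n_step_return_sketch(rewards, q_bootstrap, gamma):\n",
    "    # G = sum_i gamma^i * r_i over the n collected rewards, plus\n",
    "    # gamma^n * Q(s_{t+n}, a_{t+n}) if the episode has not terminated\n",
    "    n = len(rewards)\n",
    "    G = sum(gamma ** i * r for i, r in enumerate(rewards))\n",
    "    if q_bootstrap is not None:  # None means the episode terminated\n",
    "        G += gamma ** n * q_bootstrap\n",
    "    return G\n",
    "\n",
    "# n = 2 rewards of 1.0, bootstrap value 10.0, gamma = 0.5:\n",
    "# G = 1.0 + 0.5 * 1.0 + 0.5 ** 2 * 10.0 = 4.0\n",
    "assert n_step_return_sketch([1.0, 1.0], 10.0, 0.5) == 4.0\n",
    "```"
   ]
  },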
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Section 2.1: Basics\n",
    "\n",
    "(30 / 100 points)\n",
    "\n",
    "We want to approximate the state-action values, the expected return when applying action $a_t$ in state $s_t$. \n",
    "\n",
    "Linear methods approximate state-action value function by the inner product between a parameter matatrix $W$ and the input state vector $s$:\n",
    "\n",
    "$v(s, W) = W^T\\cdot s$\n",
    "\n",
    "Note that $W\\in \\mathbb R^{(O, A)}$ and $s \\in \\mathbb R^{O}$, wherein $O$ is the observation (state) dimensions, the `self.obs_dim`, and $A$ is the action dimension, the `self.act_dim` in the trainer. \n",
    "The output $v(s, W) \\in \\mathbb R^{A}$. Each entry to the output corresponds to one action.\n",
    "\n",
    "Note that you should finish this section **purely by Numpy without calling any other package**."
   ]
  },
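  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The shapes above can be checked with a toy example (a sketch assuming the CartPole-v0 dimensions $O = 4$ and $A = 2$):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "obs_dim, act_dim = 4, 2  # assumed CartPole-v0 dimensions\n",
    "W = np.random.normal(scale=0.01, size=(obs_dim, act_dim))\n",
    "s = np.ones(obs_dim)  # a toy (processed) state\n",
    "\n",
    "values = W.T @ s  # v(s, W) = W^T s, one entry per action\n",
    "assert values.shape == (act_dim,)\n",
    "```"
   ]
  },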
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Solve the TODOs and remove `pass`\n",
    "\n",
    "# Build the algorithm-specify config.\n",
    "linear_approximator_config = merge_config(dict(\n",
    "    parameter_std=0.01,\n",
    "    learning_rate=0.01,\n",
    "    n=3,\n",
    "), default_config)\n",
    "\n",
    "\n",
    "class LinearTrainer(AbstractTrainer):\n",
    "    def __init__(self, config):\n",
    "        config = merge_config(config, linear_approximator_config)\n",
    "\n",
    "        # Initialize the abstract class.\n",
    "        super().__init__(config)\n",
    "\n",
    "        self.max_episode_length = self.config[\"max_episode_length\"]\n",
    "        self.learning_rate = self.config[\"learning_rate\"]\n",
    "        self.gamma = self.config[\"gamma\"]\n",
    "        self.n = self.config[\"n\"]\n",
    "\n",
    "    def initialize_parameters(self):\n",
    "        # [TODO] Initialize self.parameters, which is two dimensional matrix,\n",
    "        #  and subjects to a normal distribution with scale\n",
    "        #  config[\"parameter_std\"].\n",
    "        std = self.config[\"parameter_std\"]\n",
    "        self.parameters = None\n",
    "        pass\n",
    "        \n",
    "        print(\"Initialize parameters with shape: {}.\".format(\n",
    "            self.parameters.shape))\n",
    "\n",
    "    def compute_values(self, processed_state):\n",
    "        # [TODO] Compute the value for each potential action. Note that you\n",
    "        #  should NOT preprocess the state here.\"\"\"\n",
    "        assert processed_state.ndim == 1, processed_state.shape\n",
    "        pass\n",
    "        ret = None\n",
    "        \n",
    "        return ret\n",
    "\n",
    "    def train(self):\n",
    "        \"\"\"\n",
    "        Please implement the n-step Sarsa algorithm presented in Chapter 10.2\n",
    "        of the textbook. You algorithm should reduce the convention one-step\n",
    "        Sarsa when n = 1. That is:\n",
    "            TD = r_t + gamma * Q(s_t+1, a_t+1) - Q(s_t, a_t)\n",
    "            Q(s_t, a_t) = Q(s_t, a_t) + learning_rate * TD\n",
    "        \"\"\"\n",
    "        s = self.env.reset()\n",
    "        processed_s = self.process_state(s)\n",
    "        processed_states = [processed_s]\n",
    "        rewards = [0.0]\n",
    "        actions = [self.compute_action(processed_s)]\n",
    "        T = float(\"inf\")\n",
    "\n",
    "        for t in range(self.max_episode_length):\n",
    "            if t < T:\n",
    "                # [TODO]  When the termination is not reach, apply action,\n",
    "                #  process state, record state / reward / action to the\n",
    "                #  lists defined above, and deal with termination.\n",
    "                next_state, reward, done = None, None, None\n",
    "                pass\n",
    "                \n",
    "                processed_s = self.process_state(next_state)\n",
    "                processed_states.append(processed_s)\n",
    "                rewards.append(reward)\n",
    "                if done:\n",
    "                    pass\n",
    "                \n",
    "                else:\n",
    "                    next_act = self.compute_action(processed_s)\n",
    "                    actions.append(next_act)\n",
    "\n",
    "            tau = t - self.n + 1\n",
    "            if tau >= 0:\n",
    "                gradient = self.compute_gradient(\n",
    "                    processed_states, actions, rewards, tau, T\n",
    "                )\n",
    "                self.apply_gradient(gradient)\n",
    "            if tau == T - 1:\n",
    "                break\n",
    "\n",
    "    def compute_gradient(self, processed_states, actions, rewards, tau, T):\n",
    "        \"\"\"Compute the gradient\"\"\"\n",
    "        n = self.n\n",
    "\n",
    "        # [TODO] Compute the approximation goal, the truth state action value\n",
    "        #  G. It is a n-step discounted sum of rewards. Refer to Chapter 10.2\n",
    "        #  of the textbook.\n",
    "        # [HINT] G have two parts: the accumuted reward computed from step tau to \n",
    "        #  step tau+n, and the possible state value at time step tau+n, if the episode\n",
    "        #  is not terminated. Remember to apply the discounter factor (\\gamma^n) to\n",
    "        #  the second part of G if applicable.\n",
    "        pass\n",
    "        G = None\n",
    "        \n",
    "        if tau + n < T:\n",
    "            # [TODO] If at time step tau + n the episode is not terminated,\n",
    "            # then we should add the state action value at tau + n\n",
    "            # to the G.\n",
    "            pass\n",
    "        \n",
    "        # Denote the state-action value function Q, then the loss of\n",
    "        # prediction error w.r.t. the weights can be separated into two\n",
    "        # parts (the chain rule):\n",
    "        #     dLoss / dweight = (dLoss / dQ) * (dQ / dweight)\n",
    "        # We call the first one loss_grad, and the latter one\n",
    "        # value_grad. We consider the Mean Square Error between the target\n",
    "        # value (G) and the predicted value (Q(s_t, a_t)) to be the loss.\n",
    "\n",
    "        loss_grad = np.zeros((self.act_dim, 1))\n",
    "        # [TODO] fill the propoer value of loss_grad, denoting the gradient\n",
    "        # of the MSE w.r.t. the output of the linear function.\n",
    "        pass\n",
    "\n",
    "        # [TODO] compute the value of value_grad, denoting the gradient of\n",
    "        # the output of the linear function w.r.t. the parameters.\n",
    "        value_grad = None\n",
    "        pass\n",
    "\n",
    "        assert loss_grad.shape == (self.act_dim, 1)\n",
    "        assert value_grad.shape == (self.obs_dim, 1)\n",
    "\n",
    "        # [TODO] merge two gradients to get the gradient of loss w.r.t. the\n",
    "        # parameters.\n",
    "        gradient = None\n",
    "        pass\n",
    "    \n",
    "        return gradient\n",
    "\n",
    "    def apply_gradient(self, gradient):\n",
    "        \"\"\"Apply the gradient to the parameter.\"\"\"\n",
    "        assert gradient.shape == self.parameters.shape, (\n",
    "            gradient.shape, self.parameters.shape)\n",
    "        # [TODO] apply the gradient to self.parameters\n",
    "        pass\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "# Build the test trainer.\n",
    "test_trainer = LinearTrainer(dict(parameter_std=0.0))\n",
    "\n",
    "# Test self.parameters.\n",
    "assert test_trainer.parameters.std() == 0.0, \\\n",
    "    \"Parameters should subjects to a normal distribution with standard \" \\\n",
    "    \"deviation config['parameter_std'], but you have {}.\" \\\n",
    "    \"\".format(test_trainer.parameters.std())\n",
    "assert test_trainer.parameters.mean() == 0, \\\n",
    "    \"Parameters should subjects to a normal distribution with mean 0. \" \\\n",
    "    \"But you have {}.\".format(test_trainer.parameters.mean())\n",
    "\n",
    "# Test compute_values\n",
    "fake_state = test_trainer.env.observation_space.sample()\n",
    "processed_state = test_trainer.process_state(fake_state)\n",
    "assert processed_state.shape == (test_trainer.obs_dim, ), processed_state.shape\n",
    "values = test_trainer.compute_values(fake_state)\n",
    "assert values.shape == (test_trainer.act_dim, ), values.shape\n",
    "\n",
    "# Test compute_gradient\n",
    "tmp_gradient = test_trainer.compute_gradient(\n",
    "    [processed_state]*10, [test_trainer.env.action_space.sample()]*10, [0.0]*10, 2, 5)\n",
    "assert tmp_gradient.shape == test_trainer.parameters.shape\n",
    "\n",
    "test_trainer.train()\n",
    "print(\"Now your codes should be bug-free.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "linear_trainer, _ = run(LinearTrainer, dict(\n",
    "    max_iteration=10000,\n",
    "    evaluate_interval=1000, \n",
    "    parameter_std=0.01,\n",
    "    learning_rate=0.01,\n",
    "    n=3,\n",
    "    env_name=\"CartPole-v0\"\n",
    "))\n",
    "\n",
    "# It's OK to see bad performance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "# You should see a pop up window which display the movement of the cart and pole.\n",
    "print(\"Average episode reward for your linear agent in CartPole-v0: \",\n",
    "      linear_trainer.evaluate(1, render=True))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You will notice that the linear trainer only has 8 trainable parameters and its performance is quiet bad. In the following section, we will introduce more features as the input to the value approximator so that the system can learn a better value function."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Section 2.2: Linear Model with Feature Construction\n",
    "\n",
    "(15 / 100 points)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Solve the TODOs and remove `pass`\n",
    "\n",
    "linear_fc_config = merge_config(dict(\n",
    "    polynomial_order=1,\n",
    "), linear_approximator_config)\n",
    "\n",
    "\n",
    "def polynomial_feature(sequence, order=1):\n",
    "    \"\"\"\n",
    "    Construct the order-n polynomial-basis feature of the state.\n",
    "    Refer to Chapter 9.5.1 of the textbook. \n",
    "    We expect to get a vector of length `(order+1)^k` as the output,\n",
    "    wherein `k` is the dimensions of the state.\n",
    "\n",
    "    For example:\n",
    "    When the state is [2, 3, 4] (so k=3), \n",
    "    the first order polynomial feature of the state is \n",
    "    [\n",
    "        1,\n",
    "        2,\n",
    "        3,\n",
    "        4,\n",
    "        2 * 3 = 6,\n",
    "        2 * 4 = 8,\n",
    "        3 * 4 = 12,\n",
    "        2 * 3 * 4 = 24\n",
    "    ].\n",
    "    \n",
    "    We have `(1+1)^3=8` output dimensions.\n",
    "\n",
    "    Note: it is not necessary to follow the ascending order.\n",
    "    \"\"\"\n",
    "    # [TODO] finish this function.\n",
    "    \n",
    "    return output\n",
    "\n",
    "assert sorted(polynomial_feature([2, 3, 4])) == [1, 2, 3, 4, 6, 8, 12, 24]\n",
    "assert len(polynomial_feature([2, 3, 4], 2)) == 27\n",
    "assert len(polynomial_feature([2, 3, 4], 3)) == 64\n",
    "\n",
    "class LinearTrainerWithFeatureConstruction(LinearTrainer):\n",
    "    \"\"\"In this class, we will expand the dimension of the state.\n",
    "    This procedure is done at self.process_state function.\n",
    "    The modification of self.obs_dim and the shape of parameters\n",
    "    is also needed.\n",
    "    \"\"\"\n",
    "    def __init__(self, config):\n",
    "        config = merge_config(config, linear_fc_config)\n",
    "        # Initialize the abstract class.\n",
    "        super().__init__(config)\n",
    "\n",
    "        self.polynomial_order = self.config[\"polynomial_order\"]\n",
    "\n",
    "        # Expand the size of observation\n",
    "        self.obs_dim = (self.polynomial_order + 1) ** self.obs_dim\n",
    "\n",
    "        # Since we change self.obs_dim, reset the parameters.\n",
    "        self.initialize_parameters()\n",
    "\n",
    "    def process_state(self, state):\n",
    "        \"\"\"Please finish the polynomial function.\"\"\"\n",
    "        processed = polynomial_feature(state, self.polynomial_order)\n",
    "        processed = np.asarray(processed)\n",
    "        assert len(processed) == self.obs_dim, processed.shape\n",
    "        return processed"
   ]
  },
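  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, one possible way to enumerate a polynomial basis is via `itertools.product` over exponent tuples (a sketch only, not necessarily the implementation you should submit; your term order may differ):\n",
    "\n",
    "```python\n",
    "from itertools import product\n",
    "import numpy as np\n",
    "\n",
    "def poly_feature_sketch(state, order=1):\n",
    "    # one feature per exponent tuple (c_1, ..., c_k), each c_i in {0, ..., order};\n",
    "    # the feature is the product of x_i ** c_i\n",
    "    return [float(np.prod([x ** c for x, c in zip(state, cs)]))\n",
    "            for cs in product(range(order + 1), repeat=len(state))]\n",
    "\n",
    "assert sorted(poly_feature_sketch([2, 3, 4])) == [1, 2, 3, 4, 6, 8, 12, 24]\n",
    "assert len(poly_feature_sketch([2, 3, 4], 2)) == 27\n",
    "```"
   ]
  },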
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "linear_fc_trainer, _ = run(LinearTrainerWithFeatureConstruction, dict(\n",
    "    max_iteration=10000,\n",
    "    evaluate_interval=1000, \n",
    "    parameter_std=0.01,\n",
    "    learning_rate=0.001,\n",
    "    polynomial_order=1,\n",
    "    n=3,\n",
    "    env_name=\"CartPole-v0\"\n",
    "), reward_threshold=195.0)\n",
    "\n",
    "assert linear_fc_trainer.evaluate() > 20.0, \"The best episode reward happening \" \\\n",
    "    \"during training should be greater than the random baseline. That is more than 20+.\"\n",
    "\n",
    "# This cell should be finished within 10 minitines."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "# You should see a pop up window which display the movement of the cart and pole.\n",
    "print(\n",
    "    \"In CartPole-v0, the average episode reward for the value estimator \"\n",
    "    \"with feature construction is: \",\n",
    "      linear_fc_trainer.evaluate(1, render=True))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 3: Multi-layer Perceptron as the approximiator"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this section, you are required to implement a Multi-layer Perceptron (MLP) as the value estimator using only the NumPy package. \n",
    "\n",
    "The differences between an MLP and a linear function are that (1) an MLP has a hidden layer, which increases its representation capacity, and (2) an MLP applies an activation function to the output of each layer, which introduces non-linearity.\n",
    "\n",
    "Consider an MLP with one hidden layer containing 100 neurons and an activation function `f()`. \n",
    "We call the layer that receives the state as input and outputs the activations the **hidden layer**.\n",
    "The next layer, which accepts the activations as input and produces the estimated values, is the **output layer**. \n",
    "\n",
    "The activations of the hidden layer are:\n",
    "\n",
    "$a(s_t) = f( W_h^T s_t)$\n",
    "\n",
    "The activations form a vector of length 100. \n",
    "\n",
    "The output estimated values are:\n",
    "\n",
    "$Q(s_t) = f(W_o ^ T a(s_t))$\n",
    "\n",
    "wherein $W_h, W_o$ are the parameters of the hidden layer and the output layer, respectively. \n",
    "\n",
    "In this section we do not use an activation function, hence $f(x) = x$.\n",
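    "\n",
    "As a minimal NumPy sketch of this forward pass (assuming a CartPole-like setting with `obs_dim=4`, `hidden_dim=100`, `act_dim=2`; the variable names here are illustrative, not the trainer's actual attributes):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "obs_dim, hidden_dim, act_dim = 4, 100, 2\n",
    "W_h = np.random.normal(scale=0.01, size=(obs_dim, hidden_dim))\n",
    "W_o = np.random.normal(scale=0.01, size=(hidden_dim, act_dim))\n",
    "\n",
    "s_t = np.ones(obs_dim)      # a (processed) state\n",
    "a_t = W_h.T @ s_t           # hidden activations, shape (100,)\n",
    "Q_t = W_o.T @ a_t           # action values, shape (2,)\n",
    "```\n",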
    "\n",
    "\n",
    "Moreover, we also introduce the **gradient clipping** mechanism. \n",
    "\n",
    "In on-policy learning, the norm of the gradient is prone to vary drastically, since the output of the Q function is unbounded and can become arbitrarily large. These unbounded values lead to the *exploding gradient* issue. Gradient clipping bounds the norm of the gradient while keeping the direction of the gradient vector unchanged. \n",
    "\n",
    "Concretely, the formulation of gradient clipping is:\n",
    "\n",
    "$g_{clipped} = g_{original} \\cfrac{c}{\\max(c, \\text{norm}(g))}$\n",
    "\n",
    "wherein $c$ is a hyperparameter specified by `config[\"clip_norm\"]` in our implementation. \n",
    "Gradient clipping shrinks the gradient so that its norm equals $c$ whenever the norm of the original gradient is greater than $c$; otherwise the gradient is unchanged. You need to implement this mechanism in the function `apply_gradient` in the following cell."
   ]
  },
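  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small standalone numerical sketch of the clipping formula (for illustration only):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def clip_by_norm(g, c):\n",
    "    # Shrink g so that its norm is at most c; the direction is unchanged.\n",
    "    return g * (c / max(c, np.linalg.norm(g)))\n",
    "\n",
    "g = np.array([3.0, 4.0])                          # norm 5\n",
    "clipped = clip_by_norm(g, 1.0)                    # norm becomes 1\n",
    "small = clip_by_norm(np.array([0.3, 0.4]), 1.0)   # norm 0.5 < 1: unchanged\n",
    "```\n"
   ]
  },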
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Solve the TODOs and remove `pass`\n",
    "\n",
    "# Build the algorithm-specific config.\n",
    "mlp_trainer_config = merge_config(dict(\n",
    "    parameter_std=0.01,\n",
    "    learning_rate=0.01,\n",
    "    hidden_dim=100,\n",
    "    n=3,\n",
    "    clip_norm=1.0,\n",
    "    clip_gradient=True\n",
    "), default_config)\n",
    "\n",
    "\n",
    "class MLPTrainer(LinearTrainer):\n",
    "    def __init__(self, config):\n",
    "        config = merge_config(config, mlp_trainer_config)\n",
    "        self.hidden_dim = config[\"hidden_dim\"]\n",
    "        super().__init__(config)\n",
    "\n",
    "    def initialize_parameters(self):\n",
    "        # [TODO] Initialize self.hidden_parameters and self.output_parameters,\n",
    "        #  which are two dimensional matrices, and subject to normal\n",
    "        #  distributions with scale config[\"parameter_std\"]\n",
    "        std = self.config[\"parameter_std\"]\n",
    "        self.hidden_parameters = None\n",
    "        self.output_parameters = None\n",
    "        pass\n",
    "\n",
    "    def compute_values(self, processed_state):\n",
    "        \"\"\"[TODO] Compute the value for each potential action. Note that you\n",
    "        should NOT preprocess the state here.\"\"\"\n",
    "        assert processed_state.ndim == 1, processed_state.shape\n",
    "        activation = self.compute_activation(processed_state)\n",
    "        values = None\n",
    "        pass\n",
    "        \n",
    "        return values\n",
    "\n",
    "    def compute_activation(self, processed_state):\n",
    "        \"\"\"[TODO] Compute the activation of the hidden layer.\n",
    "        Given a processed state, first we need to compute the activation\n",
    "        (the output of the hidden layer). The values (the output of the\n",
    "        output layer) are then computed in `compute_values`.\n",
    "        \"\"\"\n",
    "        activation = None\n",
    "        pass\n",
    "    \n",
    "        return activation\n",
    "\n",
    "    def compute_gradient(self, processed_states, actions, rewards, tau, T):\n",
    "        n = self.n\n",
    "        \n",
    "        # [TODO] compute the target value.\n",
    "        # Hint: copy your codes in LinearTrainer.\n",
    "        G = None\n",
    "        pass\n",
    "        if tau + n < T:\n",
    "            pass\n",
    "\n",
    "        # Denote the state-action value function Q, then the loss of\n",
    "        # prediction error w.r.t. the output layer weights can be \n",
    "        # separated into two parts (the chain rule):\n",
    "        #     dError / dweight = (dError / dQ) * (dQ / dweight)\n",
    "        # We call the first one loss_grad, and the latter one\n",
    "        # value_grad. We consider the Mean Square Error between the target\n",
    "        # value (G) and the predict value (Q(s_t, a_t)) to be the loss.\n",
    "        cur_state = processed_states[tau]\n",
    "\n",
    "        loss_grad = np.zeros((self.act_dim, 1))  # [act_dim, 1]\n",
    "        # [TODO] compute loss_grad\n",
    "        pass\n",
    "        \n",
    "        # [TODO] compute the gradient of output layer parameters\n",
    "        output_gradient = None\n",
    "        pass\n",
    "\n",
    "        \n",
    "        # [TODO] compute the gradient of hidden layer parameters\n",
    "        # Hint: using chain rule and derive the formulation\n",
    "        hidden_gradient = None\n",
    "        pass\n",
    "    \n",
    "        assert np.all(np.isfinite(output_gradient)), \\\n",
    "            \"Invalid value occurs in output_gradient! {}\".format(\n",
    "                output_gradient)\n",
    "        assert np.all(np.isfinite(hidden_gradient)), \\\n",
    "            \"Invalid value occurs in hidden_gradient! {}\".format(\n",
    "                hidden_gradient)\n",
    "        return [hidden_gradient, output_gradient]\n",
    "\n",
    "    def apply_gradient(self, gradients):\n",
    "        \"\"\"Apply the gradients to the two layers' parameters.\"\"\"\n",
    "        assert len(gradients) == 2\n",
    "        hidden_gradient, output_gradient = gradients\n",
    "\n",
    "        assert output_gradient.shape == (self.hidden_dim, self.act_dim)\n",
    "        assert hidden_gradient.shape == (self.obs_dim, self.hidden_dim)\n",
    "        \n",
    "        # [TODO] Implement the gradient clipping mechanism\n",
    "        # Hint: when the gradient's norm is less than clip_norm,\n",
    "        #  nothing happens. Otherwise shrink the gradient to\n",
    "        #  make its norm equal to clip_norm.\n",
    "        if self.config[\"clip_gradient\"]:\n",
    "            clip_norm = self.config[\"clip_norm\"]\n",
    "            pass\n",
    "\n",
    "        # [TODO] update the parameters\n",
    "        # Hint: Remember to check the sign when applying the gradient\n",
    "        #  into the parameters. Should you add or minus the gradients?\n",
    "        pass\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "print(\"Now let's see what happens if gradient clipping is not enabled!\\n\")\n",
    "try:\n",
    "    failed_mlp_trainer, _ = run(MLPTrainer, dict(\n",
    "        max_iteration=3000,\n",
    "        evaluate_interval=100, \n",
    "        parameter_std=0.01,\n",
    "        learning_rate=0.001,\n",
    "        hidden_dim=100,\n",
    "        clip_gradient=False,  # <<< Gradient clipping is OFF!\n",
    "        env_name=\"CartPole-v0\"\n",
    "    ), reward_threshold=195.0)\n",
    "    print(\"\\nWe expect to see bad performance (<195). \"\n",
    "          \"The performance without gradient clipping: {}.\"\n",
    "          \"\".format(failed_mlp_trainer.evaluate()))\n",
    "except AssertionError as e:\n",
    "    print(traceback.format_exc())\n",
    "    print(\"Infinity occurred during training. This is expected since the gradient is not bounded.\")\n",
    "finally:\n",
    "    print(\"Try next cell to see the impact of gradient clipping.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "print(\"Now let's see what happens if gradient clipping is enabled!\\n\")\n",
    "mlp_trainer, _ = run(MLPTrainer, dict(\n",
    "    max_iteration=3000,\n",
    "    evaluate_interval=100, \n",
    "    parameter_std=0.01,\n",
    "    learning_rate=0.001,\n",
    "    hidden_dim=100,\n",
    "    clip_gradient=True,  # <<< Gradient clipping is ON!\n",
    "    env_name=\"CartPole-v0\"\n",
    "), reward_threshold=195.0)\n",
    "\n",
    "reward = mlp_trainer.evaluate()\n",
    "assert reward > 195.0, \"Check your codes. \" \\\n",
    "    \"Your agent should achieve {} reward in 200 iterations, \" \\\n",
    "    \"but it achieved {} reward in evaluation.\".format(195.0, reward)\n",
    "\n",
    "# In our implementation, the task is solved in 200 iterations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "# You should see a pop-up window that displays the movement of the cart and pole.\n",
    "print(\"Average episode reward for your MLP agent with gradient clipping in CartPole-v0: \",\n",
    "      mlp_trainer.evaluate(1, render=True))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Interesting, right? The gradient clipping technique makes the training converge much faster!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 4: Implement Deep Q Learning in PyTorch\n",
    "\n",
    "(50 / 100 points)\n",
    "\n",
    "In this section, we will get familiar with the basic logic of PyTorch, a powerful deep learning framework, which lays the groundwork for future tasks.\n",
    "\n",
    "We will implement an MLP similar to the one in Section 3 using PyTorch. Before starting, make sure that PyTorch is properly installed.\n",
    "\n",
    "If you are not familiar with PyTorch, we suggest going through the official PyTorch tutorials:\n",
    "\n",
    "1. quickstart: https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html\n",
    "2. tutorial on RL: https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html\n",
    "\n",
    "---\n",
    "\n",
    "Different from the algorithm in Section 3, we will implement the Deep Q Network (DQN) in this section. The main differences are summarized as follows:\n",
    "\n",
    "**1. DQN requires an experience replay memory (buffer) to store the transitions.** \n",
    "\n",
    "A replay memory is implemented in the following `ExperienceReplayMemory` class. It can store a certain amount of transitions: `(s_t, a_t, r_t, s_t+1, done_t)`. When the memory is full, the earliest transition is discarded.\n",
    "\n",
    "The introduction of the replay memory increases sample efficiency, since each transition may be used multiple times. You may nevertheless find it learns slowly in this assignment, since CartPole-v0 is a relatively easy environment.\n",
    "\n",
    "\n",
    "**2. DQN is an off-policy algorithm and computes TD error in a different way compared to Sarsa.** \n",
    "\n",
    "In Sarsa, the TD error is computed as: \n",
    "\n",
    "$TD = r_t + \\gamma \\mathbb E_{a_{t+1} \\sim \\pi}Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)$\n",
    "\n",
    "wherein the next action $a_{t+1}$ is *selected by the current policy*. However, traditional Q-learning assumes the next action is the one that maximizes the state-action values, and uses this assumption to compute the TD error as: \n",
    "\n",
    "$TD = r_t + \\gamma \\max_{a_{t+1}} Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)$.\n",
    "\n",
    "**3. DQN updates the target network with a delay**. This is another difference, even compared to traditional Q-learning. \n",
    "\n",
    "DQN maintains another neural network called the *target network*. The target network has the same structure as the Q network. \n",
    "After a certain number of steps, the target network replaces its parameters with the latest parameters of the Q network.\n",
    "The target network is updated much less frequently than the Q network, since the Q network is updated at each step.\n",
    "\n",
    "The reason to leverage the target network is to stabilize the estimation of TD error. ***In DQN, the TD error is evaluated as:***\n",
    "\n",
    "$TD = r_t + \\gamma \\max_{a_{t+1}} Q^{target}(s_{t+1}, a_{t+1}) - Q(s_t, a_t)$\n",
    "\n",
    "The Q values of the next state are estimated by the target network, not by the Q network that is being updated. \n",
    "This mechanism can reduce the variance of the TD error, because the estimation of the Q values of next states is not influenced by the updates to the Q network.\n",
    "\n",
    "---\n",
    "\n",
    "From an engineering perspective, the differences between `DQNTrainer` and the previous `MLPTrainer` are:\n",
    "\n",
    "1. DQN uses a **PyTorch model** as the approximator. So we need to rewrite the `initialize_parameters` function to build the PyTorch model. The `train` function also changes, since the gradient optimization is conducted by PyTorch. We need to write the PyTorch pipeline in `train`.\n",
    "\n",
    "2. DQN has a **replay memory**. So we need to initialize it, feed data into it, and take transitions out of it.\n",
    "\n",
    "3. Thanks to the replay memory and PyTorch, DQN can be updated with batches of transitions. Therefore you need to carefully compute the Q target via **matrix computation**.\n",
    "\n",
    "4. We use the Adam optimizer to conduct the gradient optimization. You need to get familiar with how to compute the loss and conduct **backward propagation**.\n"
   ]
  },
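  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The batched TD target from the points above can be sketched as follows (a NumPy illustration with made-up numbers; the actual implementation below uses PyTorch tensors, but the arithmetic is the same):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "gamma = 0.99\n",
    "rewards = np.array([1.0, 1.0])\n",
    "dones = np.array([0.0, 1.0])   # the second transition ends the episode\n",
    "# Q^target(s_{t+1}, a) for a batch of 2 states and 2 actions\n",
    "next_q = np.array([[0.5, 2.0],\n",
    "                   [1.0, 3.0]])\n",
    "\n",
    "# Terminal transitions bootstrap nothing: their target is just the reward.\n",
    "q_target = rewards + gamma * (1.0 - dones) * next_q.max(axis=1)\n",
    "# q_target == [1 + 0.99 * 2, 1] == [2.98, 1.0]\n",
    "```\n"
   ]
  },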
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Solve the TODOs and remove `pass`\n",
    "\n",
    "from collections import deque\n",
    "import random\n",
    "\n",
    "class ExperienceReplayMemory:\n",
    "    \"\"\"Store and sample the transitions\"\"\"\n",
    "    def __init__(self, capacity):\n",
    "        # deque is a useful class which acts like a list but only contains\n",
    "        # a finite number of elements. When appending a new element makes the\n",
    "        # deque exceed `maxlen`, the oldest element (the index-0 element) is removed.\n",
    "        \n",
    "        # [TODO] uncomment next line. \n",
    "        # self.memory = deque(maxlen=capacity)\n",
    "        pass\n",
    "\n",
    "    def push(self, transition):\n",
    "        self.memory.append(transition)\n",
    "\n",
    "    def sample(self, batch_size):\n",
    "        return random.sample(self.memory, batch_size)\n",
    "    \n",
    "    def __len__(self):\n",
    "        return len(self.memory)"
   ]
  },
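  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `deque(maxlen=...)` behavior described in the comment above can be checked quickly (standalone example):\n",
    "\n",
    "```python\n",
    "from collections import deque\n",
    "\n",
    "memory = deque(maxlen=3)\n",
    "for i in range(5):\n",
    "    memory.append(i)\n",
    "\n",
    "# The oldest elements (0 and 1) were discarded when the capacity was exceeded.\n",
    "list(memory)  # [2, 3, 4]\n",
    "```\n"
   ]
  },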
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Solve the TODOs and remove `pass`\n",
    "\n",
    "class PytorchModel(nn.Module):\n",
    "    def __init__(self, input_shape, num_actions):\n",
    "        super(PytorchModel, self).__init__()\n",
    "        \n",
    "        # [TODO] Build a sequential model with two layers.\n",
    "        # The first hidden layer has 100 hidden nodes, followed by\n",
    "        # a ReLU activation function.\n",
    "        # The second (output) layer takes the activation vector, which has\n",
    "        # 100 elements, as input and returns the action values.\n",
    "        # So the return value is a vector with num_actions elements.\n",
    "        self.action_value = None\n",
    "        pass\n",
    "\n",
    "    def forward(self, obs):\n",
    "        return self.action_value(obs)\n",
    "    \n",
    "# Test\n",
    "assert isinstance(PytorchModel((3,), 7).action_value, nn.Module)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Solve the TODOs and remove `pass`\n",
    "\n",
    "pytorch_config = merge_config(dict(\n",
    "    memory_size=50000,\n",
    "    learn_start=5000,\n",
    "    batch_size=32,\n",
    "    target_update_freq=500,  # in steps\n",
    "    learn_freq=1,  # in steps\n",
    "    n=1\n",
    "), mlp_trainer_config)\n",
    "\n",
    "\n",
    "def to_tensor(x):\n",
    "    \"\"\"A helper function to transform a numpy array to a Pytorch Tensor\"\"\"\n",
    "    if isinstance(x, np.ndarray):\n",
    "        x = torch.from_numpy(x).type(torch.float32)\n",
    "    assert isinstance(x, torch.Tensor)\n",
    "    if x.dim() == 3 or x.dim() == 1:\n",
    "        x = x.unsqueeze(0)\n",
    "    assert x.dim() == 2 or x.dim() == 4, x.shape\n",
    "    return x\n",
    "\n",
    "\n",
    "class DQNTrainer(MLPTrainer):\n",
    "    def __init__(self, config):\n",
    "        config = merge_config(config, pytorch_config)\n",
    "        self.learning_rate = config[\"learning_rate\"]\n",
    "        super().__init__(config)\n",
    "\n",
    "        self.memory = ExperienceReplayMemory(config[\"memory_size\"])\n",
    "        self.learn_start = config[\"learn_start\"]\n",
    "        self.batch_size = config[\"batch_size\"]\n",
    "        self.target_update_freq = config[\"target_update_freq\"]\n",
    "        self.clip_norm = config[\"clip_norm\"]\n",
    "        self.step_since_update = 0\n",
    "        self.total_step = 0\n",
    "\n",
    "    def initialize_parameters(self):\n",
    "        input_shape = self.env.observation_space.shape\n",
    "\n",
    "        # [TODO] Initialize two network using PytorchModel class\n",
    "        self.network = None\n",
    "        pass\n",
    "\n",
    "        self.network.eval()\n",
    "        self.network.share_memory()\n",
    "\n",
    "        # [TODO] Initialize target network then copy the weight\n",
    "        # of original network to it. So you should\n",
    "        # put the weights of self.network into self.target_network.\n",
    "        self.target_network = None\n",
    "        pass\n",
    "\n",
    "        self.target_network.eval()\n",
    "\n",
    "        # Build Adam optimizer and MSE Loss.\n",
    "        # [TODO] Uncomment next few lines\n",
    "        # self.optimizer = torch.optim.Adam(\n",
    "        #     self.network.parameters(), lr=self.learning_rate\n",
    "        # )\n",
    "        # self.loss = nn.MSELoss()\n",
    "        pass\n",
    "\n",
    "    def compute_values(self, processed_state):\n",
    "        \"\"\"Compute the value for each potential action. Note that you\n",
    "        should NOT preprocess the state here.\"\"\"\n",
    "        # [TODO] Convert the output of neural network to numpy array\n",
    "        values = None\n",
    "        pass\n",
    "    \n",
    "        return values\n",
    "\n",
    "    def train(self):\n",
    "        s = self.env.reset()\n",
    "        processed_s = self.process_state(s)\n",
    "        act = self.compute_action(processed_s)\n",
    "        stat = {\"loss\": []}\n",
    "\n",
    "        for t in range(self.max_episode_length):\n",
    "            next_state, reward, done, _ = self.env.step(act)\n",
    "            next_processed_s = self.process_state(next_state)\n",
    "\n",
    "            # Push the transition into memory.\n",
    "            self.memory.push(\n",
    "                (processed_s, act, reward, next_processed_s, done)\n",
    "            )\n",
    "\n",
    "            processed_s = next_processed_s\n",
    "            act = self.compute_action(next_processed_s)\n",
    "            self.step_since_update += 1\n",
    "            self.total_step += 1\n",
    "\n",
    "            if done:\n",
    "                break\n",
    "                \n",
    "            if t % self.config[\"learn_freq\"] != 0:\n",
    "                # It's not necessary to update in each step.\n",
    "                continue\n",
    "\n",
    "            if len(self.memory) < self.learn_start:\n",
    "                continue\n",
    "            elif len(self.memory) == self.learn_start:\n",
    "                print(\"Current memory contains {} transitions, \"\n",
    "                      \"start learning!\".format(self.learn_start))\n",
    "\n",
    "            batch = self.memory.sample(self.batch_size)\n",
    "\n",
    "            # Transform a batch of state / action / .. into a tensor.\n",
    "            state_batch = to_tensor(\n",
    "                np.stack([transition[0] for transition in batch])\n",
    "            )\n",
    "            action_batch = to_tensor(\n",
    "                np.stack([transition[1] for transition in batch])\n",
    "            )\n",
    "            reward_batch = to_tensor(\n",
    "                np.stack([transition[2] for transition in batch])\n",
    "            )\n",
    "            next_state_batch = torch.stack(\n",
    "                [transition[3] for transition in batch]\n",
    "            )\n",
    "            done_batch = to_tensor(\n",
    "                np.stack([transition[4] for transition in batch])\n",
    "            )\n",
    "\n",
    "            with torch.no_grad():\n",
    "                # [TODO] Compute the values of Q in next state in batch.\n",
    "                Q_t_plus_one = None\n",
    "                pass\n",
    "                \n",
    "                assert isinstance(Q_t_plus_one, torch.Tensor)\n",
    "                assert Q_t_plus_one.dim() == 1\n",
    "                \n",
    "                # [TODO] Compute the target value of Q in batch.\n",
    "                Q_target = None\n",
    "                pass\n",
    "                \n",
    "                assert Q_target.shape == (self.batch_size,)\n",
    "            \n",
    "            # [TODO] Collect the Q values in batch.\n",
    "            # Hint: Remember to call self.network.train()\n",
    "            #  before you get the Q value from self.network(state_batch),\n",
    "            #  otherwise the graident will not be recorded by pytorch.\n",
    "            Q_t = None\n",
    "            pass\n",
    "    \n",
    "            assert Q_t.shape == Q_target.shape\n",
    "\n",
    "            # Update the network\n",
    "            self.optimizer.zero_grad()\n",
    "            loss = self.loss(input=Q_t, target=Q_target)\n",
    "            loss_value = loss.item()\n",
    "            stat['loss'].append(loss_value)\n",
    "            loss.backward()\n",
    "            \n",
    "            # [TODO] Gradient clipping. Uncomment next line\n",
    "            # nn.utils.clip_grad_norm_(self.network.parameters(), self.clip_norm)\n",
    "            pass\n",
    "            \n",
    "            self.optimizer.step()\n",
    "            self.network.eval()\n",
    "\n",
    "        if len(self.memory) >= self.learn_start and \\\n",
    "                self.step_since_update > self.target_update_freq:\n",
    "            print(\"{} steps have passed since the last update. Now update the\"\n",
    "                  \" parameter of the behavior policy. Current step: {}\".format(\n",
    "                self.step_since_update, self.total_step\n",
    "            ))\n",
    "            self.step_since_update = 0\n",
    "            # [TODO] Copy the weights of self.network to self.target_network.\n",
    "            pass\n",
    "            \n",
    "            self.target_network.eval()\n",
    "            \n",
    "        return {\"loss\": np.mean(stat[\"loss\"]), \"episode_len\": t}\n",
    "\n",
    "    def process_state(self, state):\n",
    "        return torch.from_numpy(state).type(torch.float32)\n"
   ]
  },
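  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the target-network TODOs above, the standard PyTorch pattern is to copy weights via `state_dict` / `load_state_dict`. Here is a generic sketch with a made-up two-layer model (not the assignment's solution verbatim):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "net = nn.Sequential(nn.Linear(4, 100), nn.ReLU(), nn.Linear(100, 2))\n",
    "target_net = nn.Sequential(nn.Linear(4, 100), nn.ReLU(), nn.Linear(100, 2))\n",
    "\n",
    "# Hard update: overwrite the target network's parameters.\n",
    "target_net.load_state_dict(net.state_dict())\n",
    "\n",
    "x = torch.ones(1, 4)\n",
    "# After the copy, both networks produce identical outputs.\n",
    "assert torch.allclose(net(x), target_net(x))\n",
    "```\n"
   ]
  },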
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "# Build the test trainer.\n",
    "test_trainer = DQNTrainer({})\n",
    "\n",
    "# Test compute_values\n",
    "fake_state = test_trainer.env.observation_space.sample()\n",
    "processed_state = test_trainer.process_state(fake_state)\n",
    "assert processed_state.shape == (test_trainer.obs_dim, ), processed_state.shape\n",
    "values = test_trainer.compute_values(processed_state)\n",
    "assert values.shape == (test_trainer.act_dim, ), values.shape\n",
    "\n",
    "test_trainer.train()\n",
    "print(\"Now your codes should be bug-free.\")\n",
    "\n",
    "_ = run(DQNTrainer, dict(\n",
    "    max_iteration=20,\n",
    "    evaluate_interval=10, \n",
    "    learn_start=100,\n",
    "    env_name=\"CartPole-v0\",\n",
    "))\n",
    "\n",
    "print(\"Test passed!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "pytorch_trainer, pytorch_stat = run(DQNTrainer, dict(\n",
    "    max_iteration=2000,\n",
    "    evaluate_interval=10, \n",
    "    learning_rate=0.01,\n",
    "    clip_norm=10.0,\n",
    "    memory_size=50000,\n",
    "    learn_start=1000,\n",
    "    eps=0.1,\n",
    "    target_update_freq=1000,\n",
    "    batch_size=32,\n",
    "    env_name=\"CartPole-v0\",\n",
    "), reward_threshold=195.0)\n",
    "\n",
    "reward = pytorch_trainer.evaluate()\n",
    "assert reward > 195.0, \"Check your codes. \" \\\n",
    "    \"Your agent should achieve {} reward within 1000 iterations, \" \\\n",
    "    \"but it achieved {} reward in evaluation.\".format(195.0, reward)\n",
    "\n",
    "# Should solve the task in 10 minutes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this cell without modification\n",
    "\n",
    "# You should see a pop-up window that displays the movement of the cart and pole.\n",
    "print(\"Average episode reward for your Pytorch agent in CartPole-v0: \",\n",
    "      pytorch_trainer.evaluate(1, render=True))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "------\n",
    "\n",
    "## Conclusion and Discussion\n",
    "\n",
    "In this assignment, we learned how to build several value approximation algorithms.\n",
    "We also got familiar with basic gradient descent methods and PyTorch.\n",
    "\n",
    "Follow the submission instructions of the assignment to submit your work to our staff. Thank you!\n",
    "\n",
    "------"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.10"
  },
  "pycharm": {
   "stem_cell": {
    "cell_type": "raw",
    "metadata": {
     "collapsed": false
    },
    "source": []
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
