{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# MC Control\n",
    "\n",
    "We will apply Monte Carlo (MC) Control to a sample 4x4 grid world.\n",
    "\n",
    "![GridWorld](./images/gridworld.png \"Grid World\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Monte Carlo Prediction for Estimation (\"first-visit\")\n",
    "\n",
    "Monte Carlo Prediction estimates state values by sampling trajectories over many episodes and averaging the returns observed from each state. The backup diagram is given below; pseudocode for the algorithm is given in Fig 4-1.\n",
    "\n",
    "![MC backup](./images/mc_backup.png \"MC Backup\")\n",
    "\n",
    "### Monte Carlo $\\epsilon$-greedy Policy Improvement\n",
    "\n",
    "We will use the $\\epsilon$-greedy policy improvement approach, as discussed in the text.\n"
   ]
  },
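  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One common way to write the $\\epsilon$-greedy policy over $|A|$ actions (matching the sampling used in the GLIE code further below) is\n",
    "\n",
    "$$\\pi(a|s) = \\begin{cases} 1 - \\epsilon + \\epsilon/|A| & \\text{if } a = \\arg\\max_{a'} Q(s, a') \\\\\\\\ \\epsilon/|A| & \\text{otherwise,} \\end{cases}$$\n",
    "\n",
    "so every action keeps probability at least $\\epsilon/|A|$, ensuring continued exploration.\n"
   ]
  },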
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Initial imports and environment setup\n",
    "import numpy as np\n",
    "import sys\n",
    "import seaborn as sns\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "sns.set()\n",
    "\n",
    "# Create the grid world environment\n",
    "from gridworld import GridworldEnv\n",
    "env = GridworldEnv()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# MC Prediction\n",
    "\n",
    "def mc_policy_eval(policy, env, discount_factor=1.0, episode_count=100):\n",
    "    \"\"\"\n",
    "    Evaluate a policy given an environment.\n",
    "\n",
    "    Args:\n",
    "        policy: [S, A] shaped matrix representing the policy. Random in our case.\n",
    "        env: OpenAI env. In a model-free setup you have no access to env.P,\n",
    "             the transition dynamics of the environment.\n",
    "             Use step(a) to take an action and receive a tuple\n",
    "             of (s', r, done, info).\n",
    "             env.nS is the number of states in the environment.\n",
    "             env.nA is the number of actions in the environment.\n",
    "        discount_factor: Gamma discount factor.\n",
    "        episode_count: Number of episodes to sample.\n",
    "\n",
    "    Returns:\n",
    "        Vector of length env.nS representing the value function.\n",
    "    \"\"\"\n",
    "    # Start with (all 0) state value array and a visit count of zero\n",
    "    V = np.zeros(env.nS)\n",
    "    N = np.zeros(env.nS)\n",
    "    i = 0\n",
    "\n",
    "    # run multiple episodes\n",
    "    while i < episode_count:\n",
    "\n",
    "        # collect samples for one episode\n",
    "        episode_states = []\n",
    "        episode_returns = []\n",
    "        state = env.reset()\n",
    "        episode_states.append(state)\n",
    "        while True:\n",
    "            action = np.random.choice(env.nA, p=policy[state])\n",
    "            (state, reward, done, _) = env.step(action)\n",
    "            episode_returns.append(reward)\n",
    "            if not done:\n",
    "                episode_states.append(state)\n",
    "            else:\n",
    "                break\n",
    "\n",
    "        # update state values\n",
    "        G = 0\n",
    "        count = len(episode_states)\n",
    "        for t in range(count-1, -1, -1):\n",
    "            s, r = episode_states[t], episode_returns[t]\n",
    "            G = discount_factor * G + r\n",
    "            if s not in episode_states[:t]:\n",
    "                N[s] += 1\n",
    "                V[s] = V[s] + 1/N[s] * (G-V[s])\n",
    "\n",
    "        i = i+1\n",
    "\n",
    "    return np.array(V)"
   ]
  },
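  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (a sketch, relying only on the `env.nS`/`env.nA` attributes described in the docstring above), we can evaluate a uniform random policy with `mc_policy_eval`:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uniform random policy: every action has probability 1/nA in every state\n",
    "random_policy = np.ones((env.nS, env.nA)) / env.nA\n",
    "V_random = mc_policy_eval(random_policy, env,\n",
    "                          discount_factor=1.0, episode_count=1000)\n",
    "print(V_random.reshape(env.shape))"
   ]
  },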
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Greedy in the Limit with Infinite Exploration (GLIE)\n",
    "\n",
    "def GLIE(env, discount_factor=1.0, episode_count=100):\n",
    "    \"\"\"\n",
    "    Find optimal policy given an environment.\n",
    "\n",
    "    Args:\n",
    "        env: OpenAI env. In a model-free setup you have no access to env.P,\n",
    "             the transition dynamics of the environment.\n",
    "             Use step(a) to take an action and receive a tuple\n",
    "             of (s', r, done, info).\n",
    "             env.nS is the number of states in the environment.\n",
    "             env.nA is the number of actions in the environment.\n",
    "        discount_factor: Gamma discount factor.\n",
    "        episode_count: Number of episodes to sample.\n",
    "\n",
    "    Returns:\n",
    "        Vector of length env.nS representing the value function.\n",
    "        Vector of length env.nS giving the greedy action in each state.\n",
    "\n",
    "    \"\"\"\n",
    "    # Start with an all-zero state value array and state-action matrix,\n",
    "    # and a zero state-action visit count.\n",
    "    V = np.zeros(env.nS)\n",
    "    N = np.zeros((env.nS, env.nA))\n",
    "    Q = np.zeros((env.nS, env.nA))\n",
    "    # random policy\n",
    "    policy = [np.random.randint(env.nA) for _ in range(env.nS)]\n",
    "    k = 1\n",
    "    eps = 1\n",
    "\n",
    "    def argmax_a(arr):\n",
    "        \"\"\"\n",
    "        Return idx of max element in an array.\n",
    "        Break ties uniformly.\n",
    "        \"\"\"\n",
    "        max_idx = []\n",
    "        max_val = float('-inf')\n",
    "        for idx, elem in enumerate(arr):\n",
    "            if elem == max_val:\n",
    "                max_idx.append(idx)\n",
    "            elif elem > max_val:\n",
    "                max_idx = [idx]\n",
    "                max_val = elem\n",
    "        return np.random.choice(max_idx)\n",
    "\n",
    "    def get_action(state):\n",
    "        if np.random.random() < eps:\n",
    "            return np.random.choice(env.nA)\n",
    "        else:\n",
    "            return argmax_a(Q[state])\n",
    "\n",
    "    # run multiple episodes\n",
    "    while k <= episode_count:\n",
    "\n",
    "        # collect samples for one episode\n",
    "        episode_states = []\n",
    "        episode_actions = []\n",
    "        episode_returns = []\n",
    "        state = env.reset()\n",
    "        episode_states.append(state)\n",
    "        while True:\n",
    "            action = get_action(state)\n",
    "            episode_actions.append(action)\n",
    "            (state, reward, done, _) = env.step(action)\n",
    "            episode_returns.append(reward)\n",
    "            if not done:\n",
    "                episode_states.append(state)\n",
    "            else:\n",
    "                break\n",
    "\n",
    "        # update state-action values\n",
    "        G = 0\n",
    "        count = len(episode_states)\n",
    "        for t in range(count-1, -1, -1):\n",
    "            s, a, r = episode_states[t], episode_actions[t], episode_returns[t]\n",
    "            G = discount_factor * G + r\n",
    "            N[s, a] += 1\n",
    "            Q[s, a] = Q[s, a] + 1/N[s, a] * (G-Q[s, a])\n",
    "\n",
    "        # Update policy and optimal value\n",
    "        k = k+1\n",
    "        eps = 1/k\n",
    "        # uncomment the \"if\" below to pin epsilon to a fixed value\n",
    "        # for the first 100 episodes before the 1/k decay takes over\n",
    "        # if k <= 100:\n",
    "        #     eps = 0.02\n",
    "\n",
    "        for s in range(env.nS):\n",
    "            best_action = argmax_a(Q[s])\n",
    "            policy[s] = best_action\n",
    "            V[s] = Q[s, best_action]\n",
    "\n",
    "    return np.array(V), np.array(policy)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Custom print to show state values inside the grid\n",
    "def grid_print(V, k=None):\n",
    "    ax = sns.heatmap(V.reshape(env.shape),\n",
    "                     annot=True, square=True,\n",
    "                     cbar=False, cmap='Blues',\n",
    "                     xticklabels=False, yticklabels=False)\n",
    "\n",
    "    if k:\n",
    "        ax.set(title=\"K = {0}\".format(k))\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "policy\n",
      "\n",
      " [['*' 'LEFT' 'LEFT' 'LEFT']\n",
      " ['UP' 'LEFT' 'DOWN' 'UP']\n",
      " ['UP' 'DOWN' 'RIGHT' 'DOWN']\n",
      " ['RIGHT' 'RIGHT' 'RIGHT' '*']]\n"
     ]
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOcAAADnCAYAAADl9EEgAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/d3fzzAAAACXBIWXMAAAsTAAALEwEAmpwYAAAP00lEQVR4nO3dfXTMB77H8ffIZCh5QoyEpIRFXSKcFtVDuUvv0lLr4fRR7bZVWntXH87aq1rXqiqWVldDN6VOb0lViz1nq2RbVKuUa0kVG+JUJB4SEhJC5EFm7h/tze3ZO5zTmpnfNzmf11+d6e80n3PGO7+Z0fP7ufx+vx8RMaeR0wNEJDDFKWKU4hQxSnGKGKU4RYxyX+9f3tTr38O1I6wSBg1zekJIDBuQ4vSEkPlVWlunJ4RM346xAZ/XmVPEKMUpYpTiFDFKcYoYpThFjFKcIkYpThGjFKeIUYpTxCjFKWKU4hQxSnGKGKU4RYxSnCJGKU4RoxSniFGKU8QoxSlilOIUMUpxihilOEWMUpwiRl330pjhMrR/N1767b009rg5ePQUT856j/LLlU7PCqqFD6Vx5HQ5y7Ydc3pKUPRJjmVIp5b4gZpaHx/sL6KgrP69Zp9+9AFbP14HLhfexCQenzKdmLgWAY/1+/289dosktv/jLvHjAv5NsfPnPHNo8iYNY4Hpy4nbdRs8k6eY/aUe52eFTQdvVFkTu7LsB4JTk8JGm+Uh1GpXtJ3FjB36zE2HS5h4u3JTs/60fKO5rBpXSYzXn2buW++T0KbZNatzAh47KmCPOY9P5k9X24N2z7H4xxy+y3sPZTPtwXFALz14XYeGNbb4VXBM75/O9bsOsHG/YVOTwmaqz4/mfsKuVh5FYD8sivENHET4XJ42I+U0qkrf1y+jqbNoqiurqL0XDFR0YEv8Lxlw1oG/mIkfQYMDts+x9/WJiU05+SZsrrHp86WERt9E9HNmjSIt7Yz1x8CYECXeIeXBM/5ihrOV9TUPR6b2ppvCsuprYd3enW73ezduY23F88hMtLD6HETAx43fvJUAA5m7w7bNsfPnC6Xi0D3762t9TmwRn4MT4SLCX2SaBXlIXPfaafn/GS33jGIpe9/yi8ffoIFM6bg89n4s+f4mfNEUSm9U9vXPW7rjeX8hctUVFY7N+oGPDu0M0O6ewHYfPAsi7JyHV4UHMO7tiI1MRqAA4Xl7DheylP9bqaovIrXv8inxlc/TpvrVmaQvfsLANokpzB4+Fi6dOsJwMC7RvBO+jwuX7pIdEyccyO/53icW77KYd5zo+h4cyu+LShmwtgBbNh2wOlZP9mirNwGE+QPbcgpZkPOd98LNHY34oXBHdiVX8bGwyUOL/txxjwyiTGPTALgyMFsls57kZfTVxEdG8fObVkktetgIkwwEGdx6SUm/WEV7y14HI/bzbGTJUyY8a7Ts+Q6BnVoToumkaS1iSGtTUzd84u/zOdyda2Dy36cLt17ce8Dv+aVaU8SERFBXItWPDNjAQDHcv/BisVzeDk907F9Ln+gD3zf0y0A6xfdArB+0i0AReoZxSlilOIUMUpxihilOEWMUpwiRilOEaMUp4hRilPEKMUpYpTiFDFKcYoYpThFjFKcIkYpThGjFKeIUYpTxCjFKWKU4hQxSnGKGKU4RYxSnCJGXfe6tQ31EpLD7+zg9ISQeDg10ekJIZPWLvDlIxsynTlFjFKcIkYpThGjFKeIUYpTxCjFKWKU4hQxSnGKGKU4RYxSnCJGKU4RoxSniFGKU8QoxSlilOIUMUpxihilOEWMUpwiRilOEaMUp4hRilPEKMUpYpTiFDHqutetDbeFD6Vx5HQ5y7Ydc3pK0NyWHMOQTi3x+6G61sfab85woqzS6Vk37NOPPmTrxnW4XC68CW15bMp0YuJaOD0rqLZu2cwL06by1Z5sR36+iTNnR28UmZP7MqxHgtNTgsob5eGX3VuzdMcJ5n+Wx9+O
lDChb5LTs25Y3tEcstZnMmPhcl5ZuprWbZJZtzLD6VlBlZ9/nNcWzMfvd26DiTjH92/Hml0n2Li/0OkpQXXV52f1vkIuVl0FoKC0kpgmbiJcDg+7QSmdujJ/2VqaNouiurqK0nPFRMU0nCuyX7lyhen/MZXf/X6aoztMvK2duf4QAAO6xDu8JLjOV9RwvqKm7vHo1NYcKCyn1sHfxsHidrvZ+9XnrFg8B3ekh9HjJjo9KWhmz/pPxt53P526dHF0h4kzZ0PniXDxWJ+2xEd5WJ3dcN4d3NpvIEtWf8KohyawcMbT+Hw+pyfdsDWrM4mIcDNq9Finpzhz5nx2aGeGdPcCsPngWRZl5ToxIyTu7hpPakI0AAeKyvnqeBmT+iVTVF7FG9vzqfHVz9Pm+pUZZO/eDkCbm1MYfM8YOnfrCcCdd43gnSXzqbhUXi/f3i554098/tlWANyRkVRWVnLf6JHU1NRQVfXdP6f/+S283tZh3eXy+6/9kTfl2Y/DuYUFD/Ygt/BSyL+tDdddxhq7GzHt5yn8d8EFNh0uCfnPC9ddxo4czObNP85g9huriI6N48stG8n6SyYvp2eG7Gf2bB8Xsv/2tZw6dZIxI0ew6++h/ba2yTVOkSY+czZUd3ZoToumkfRIjKZHYnTd82/sKKCiutbBZTemS/dejLj/UeY+/xQRjSKIaxnP0y8ucHpWg2PqzBkuuj9n/ePEmTNcrnXm1BdCIkYpThGjFKeIUYpTxCjFKWKU4hQxSnGKGKU4RYxSnCJGKU4RoxSniFGKU8QoxSlilOIUMUpxihilOEWMUpwiRilOEaMUp4hRilPEKMUpYpTiFDHqupfGfO6vh8O5JWzu69aw7mb2v97df9rpCSGzaXue0xNCJm/RPQGf15lTxCjFKWKU4hQxSnGKGKU4RYxSnCJGKU4RoxSniFGKU8QoxSlilOIUMUpxihilOEWMUpwiRilOEaMUp4hRilPEKMUpYpTiFDFKcYoYpThFjFKcIka5w/0Db02K4V87tsAPVNf6+MuBs5y8UHnN4x/slUjhxSq2fXs+fCN/ok8/+pCtH6/D5XLhTWzLY1OmExPXIuCxfr+fZa+9RFL7jtw9ZlyYl964PsmxDOnUEj9QU+vjg/1FFJRd+3WsjxY+lMaR0+Us23bMkZ8f1jNnq2YeRvyLl4xdJ3j18+Nszj3Ho73bBjzWG+XhqX7J9EiMDufEnyzvaA5Z6zKZ8epyXnlzNa3bJLNuZUbAY08X5DH/+d+w58utYV4ZHN4oD6NSvaTvLGDu1mNsOlzCxNuTnZ4VNB29UWRO7suwHs5e3zisZ86rPj9rvi6kvKoWgBNllUQ3cRPhgtp/urR1/5Tm7C64QOmVq+Gc+JOldOrK/OVrcbvdVFdXUXqumFat2wQ8dvOGtQz8xUhaeuvnxa2v+vxk7ivkYuV3r01+2RVirvE61kfj+7djza4TnC694uiOsMZZeqWG0is1dY9HdvdyqKg84Au6/sAZADq3ahaueTfM7Xazd+fnrFg8B3ekh9HjJgY8bvzkqQAczN4dznlBc76ihvMV//c6jk1tzTeFgV/H+mjm+kMADOgS7+gOR74Q8kS4GH9bG+KbeljzdZETE0Lm1jsGsuT9Txj18AQWzngan8/n9KSQ8US4mNAniVZRHjL3NdxbQTgl5GfOoV3i6ZYQBcChokvsKihjQp8kzlyqZunOAmp89ffX7fqVGWTv3g5Am+QUBg8fQ+duPQG4864RvJM+n4pL5UTFxDq4MjiGd21F6vef/w8UlrPjeClP9buZovIqXv8iv16/js8O7cyQ7l4ANh88y6KsXIcXfSfkcWYdKSHrSAkAjSMa8btB7dlz4gKf5J4L9Y8OudGPTGL0I5MAOHIwm6XzXmR2+iqiY+PYue1vJLXr0CDCBNiQU8yGnGIAGrsb8cLgDuzKL2Pj4RKHl924RVm5ZoL8obB+5uyfEkfzppGkJkbX/RYGeHNnAS2aeri/ZwKv
fn48nJOCpkv3Xox44FHmTnuKiIgI4lrE8/SMBQDk5eawYvEcZqevcnhlcAzq0JwWTSNJaxNDWpuYuucXf5nP5epaB5c1LLoFYAOiWwDWT7oFoEg9ozhFjFKcIkYpThGjFKeIUYpTxCjFKWKU4hQxSnGKGKU4RYxSnCJGKU4RoxSniFGKU8QoxSlilOIUMUpxihilOEWMUpwiRilOEaMUp4hRilPEqOtet/bB1MRw7Qird75umJeQbMiXjyzatsnpCSGkS2OK1CuKU8QoxSlilOIUMUpxihilOEWMUpwiRilOEaMUp4hRilPEKMUpYpTiFDFKcYoYpThFjFKcIkYpThGjFKeIUYpTxCjFKWKU4hQxSnGKGKU4RYy67qUxQ+GTv37Alg3rcLlceBOTePyZ6cTGtQh4rN/vJ+PVWSS3/xn3jB0X5qXB0Ts5hiGdWoIfqmt9fPjNGQrKKp2eFTQLH0rjyOlylm075vSUoBjavxsv/fZeGnvcHDx6iidnvUf5ZWder7CeOfOO5rBxbSYzF73NvIz3SWibzNr/ygh47KmCPOZOm8ye7VvDOTGovFEeRnVvzZIdJ5j7WR5ZR0p4om+S07OCoqM3iszJfRnWI8HpKUET3zyKjFnjeHDqctJGzSbv5DlmT7nXsT1hjTOlU1cWrlhH02ZRVFdXcb6kmKiY2IDHbv5oLYOGjqTPgMHhnBhUV31+MvcVcrHqKgD5pZXENHET4XJ4WBCM79+ONbtOsHF/odNTgmbI7bew91A+3xYUA/DWh9t5YFhvx/aE/W2t2+3m7zu3sfz1OURGehg7fmLA4371m6kAHNi7O5zzgup8RQ3nK2rqHo9Jbc2BwnJq/Q6OCpKZ6w8BMKBLvMNLgicpoTknz5TVPT51tozY6JuIbtbEkbe2jnwhdNsdg/jzB58yetwTzH9hCj6fz4kZYeOJcPF4n7a0ivKQmd1wzjQNjcvlwu///785a2ud+fMZ8jPn2ncz2LfrCwDa3pzCkOFj6dK9JwAD/20EK96Yx+VLF4mOiQv1lLC4p2s8PRKiAfimqJydx8t4sl8yReVV/Gl7PjW++nnafHZoZ4Z09wKw+eBZFmXlOrwo+E4UldI7tX3d47beWM5fuExFZbUje0Ie59jxkxg7fhIAhw9mkz7vRV5Zsoro2Dh2fJZFcrsODSZMgI9zSvg4pwSAxu5GTP95CrsLLrDxcInDy27MoqzcBhnkD235Kod5z42i482t+LagmAljB7Bh2wHH9oT1M+ct3Xsx8oFfM+f3T9IoIoLmLVvxzMwFABzL/QfLX5/DK0szwzkppAZ2aE6LppGkJUaTlhhd9/ziHQVcrq51cJkEUlx6iUl/WMV7Cx7H43Zz7GQJE2a869gelz/Qm+zv7cm7EM4tYdNQbwG48YuG8XeNgTTkWwBeyU4P+Lz+DyERoxSniFGKU8QoxSlilOIUMUpxihilOEWMUpwiRilOEaMUp4hRilPEKMUpYpTiFDFKcYoYpThFjFKcIkYpThGjFKeIUYpTxCjFKWKU4hQxSnGKGKU4RYy67nVrRcQ5OnOKGKU4RYxSnCJGKU4RoxSniFGKU8So/wGvTFkWo/QpYwAAAABJRU5ErkJggg==\n",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Run MC control with GLIE\n",
    "V_pi, policy = GLIE(env, discount_factor=1.0, episode_count=10000)\n",
    "\n",
    "action_labels = {0: \"UP\", 1: \"RIGHT\", 2: \"DOWN\", 3: \"LEFT\"}\n",
    "# print policy\n",
    "optimal_actions = [action_labels[policy[s]] for s in range(env.nS)]\n",
    "optimal_actions[0] = \"*\"\n",
    "optimal_actions[-1] = \"*\"\n",
    "\n",
    "print(\"policy\\n\\n\", np.array(optimal_actions).reshape(env.shape))\n",
    "\n",
    "# print state values\n",
    "grid_print(V_pi)"
   ]
  },
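  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 1/k decay cuts exploration very quickly in early episodes. One alternative schedule (a sketch with illustrative, untuned constants) is to hold epsilon at a fixed value for an initial warm-up period and only then switch to the 1/k decay:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch of an alternative epsilon schedule: fixed epsilon during a\n",
    "# warm-up period, then the usual 1/k decay. The warm-up length (1000)\n",
    "# and fixed value (0.05) are illustrative choices, not tuned values.\n",
    "def epsilon_schedule(k, warmup=1000, warmup_eps=0.05):\n",
    "    if k <= warmup:\n",
    "        return warmup_eps\n",
    "    return 1 / k\n",
    "\n",
    "print(epsilon_schedule(10), epsilon_schedule(2000))"
   ]
  },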
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Conclusion\n",
    "\n",
    "We see that the policy and state values converge fairly well over the 10,000-episode simulation. A few states still have incorrect values, most likely due to the fast decay of epsilon. We need more exploration in the initial episodes, which is controlled by the epsilon value; taking it as 1/k reduces exploration very quickly. It may be better to hold epsilon at a fixed value such as 0.05 for the first 100-1000 episodes and only then let it decay as 1/k."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
