{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "W12_Tutorial2",
      "provenance": [],
      "collapsed_sections": [],
      "toc_visible": true,
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/CIS-522/course-content/blob/w12-t2-prepod/tutorials/W12_DeepRL/student/W12_Tutorial2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-CFhJOG8TYIh"
      },
      "source": [
        "# CIS-522 Week 12 Part 2\n",
        "# Revisiting AlphaZero\n",
        "\n",
        "__Instructor:__ Dinesh Jayaraman\n",
        "\n",
        "__Content creators:__ Chuning Zhu\n",
        "\n",
        "---"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "K4QevOR5mcod"
      },
      "source": [
        "#@markdown What is your Pennkey and pod? (text, not numbers, e.g. bfranklin)\n",
        "my_pennkey = '' #@param {type:\"string\"}\n",
        "my_pod = 'Select' #@param ['Select', 'euclidean-wombat', 'sublime-newt', 'buoyant-unicorn', 'lackadaisical-manatee','indelible-stingray','superfluous-lyrebird','discreet-reindeer','quizzical-goldfish','astute-jellyfish','ubiquitous-cheetah','nonchalant-crocodile','fashionable-lemur','spiffy-eagle','electric-emu','quotidian-lion']\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SoKGVjMvnAJg"
      },
      "source": [
        "---\n",
        "# Setup"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "BqLBTC-ZWpTD"
      },
      "source": [
        "# imports\n",
        "import gym\n",
        "import math\n",
        "import torch\n",
        "import torch.nn.functional as F\n",
        "import torch.optim as optim\n",
        "from torch import nn \n",
        "from numba import cuda, float32, int32\n",
        "import numpy as np\n",
        "import os\n",
        "from matplotlib import pyplot as plt\n",
        "import matplotlib.patches as patches\n",
        "from mpl_toolkits.axes_grid1 import make_axes_locatable\n",
        "import IPython\n",
        "from IPython.display import IFrame\n",
        "from google.colab import output\n",
        "from tqdm.notebook import tqdm\n",
        "from typing import List\n",
        "import time\n",
        "import warnings\n",
        "\n",
        "warnings.filterwarnings(\"ignore\")"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Un34UwDpUkre",
        "cellView": "form"
      },
      "source": [
        "# @title Figure Settings\n",
        "%config InlineBackend.figure_format = 'retina'\n",
        "%matplotlib inline \n",
        "\n",
        "fig_w, fig_h = (8, 6)\n",
        "plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})\n",
        "\n",
        "plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wed8m7iF1TPo"
      },
      "source": [
        "---\n",
        "# Preface\n",
        "\n",
        "Whew, what a long journey! We started off with a glorious introduction to AlphaZero, studied the basic components of deep learning, and ground through convnets, natural language processing, and reinforcement learning. Now, we stand at the end of Week 12, the last week of \"real\" material. In this tutorial, we will go all the way back to Week 1 and revisit AlphaZero! The goal is to show that over the course of CIS 522, and particularly the RL weeks, you have been equipped with enough deep learning knowledge to truly understand how AlphaZero works! Apart from a short behavioral cloning exercise, this will be purely an intellectual journey with no code for you to implement.\n",
        "\n",
        "Before we dive in, let's take a small detour and briefly talk about how we can solve sequential decision making problems without rewards. "
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7qdCpGIQUCSm"
      },
      "source": [
        "---\n",
        "# Section 1: Learning from Demonstrations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "JFQWIaAuXWMY",
        "cellView": "form"
      },
      "source": [
        "#@title Video : Imitation Learning\n",
        "\n",
        "import time\n",
        "try: t0;\n",
        "except NameError: t0=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "\n",
        "video = YouTubeVideo(id=\"hT7jp4JlnSM\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "P8kTznkZcT3W"
      },
      "source": [
        "How can we train an agent to drive a car in the real world? There are so many scenarios that any attempt at designing a reward function would fail miserably. In addition, there is no good mechanism for issuing rewards. What we do have is plenty of human drivers, or experts. If experts can stay in the middle of the road, then we should be able to do the same by imitating what the expert does for any given observation. This intuition leads to a method for solving sequential decision making problems without rewards -- **imitation learning**. In particular, the approach we just described is called **behavioral cloning**. \n",
        "\n",
        "In behavioral cloning, we train a neural network to predict an action given a state. This is exactly a policy network, but rather than maximizing the reward, we simply maximize the log likelihood of the expert's action given an observation. Letting $D$ denote a dataset of expert demonstrations, the objective of behavioral cloning is:\n",
        "$$\\theta^* = \\arg\\max_{\\theta} E_{(s, a^*) \\sim D}\\left[\\log \\pi_{\\theta}(a^*|s)\\right]$$\n",
        "\n",
        "Using plain behavioral cloning can lead to catastrophic failure. This is because the prediction errors of our model compound as we execute more actions, eventually leading to a state outside the dataset's distribution, where the model has no idea what to do. The **DAgger (Dataset Aggregation) algorithm** addresses this distributional shift by adding a data collection and labelling step to behavioral cloning. In this way, it pulls the policy distribution closer to the data distribution, and provably leads to fewer errors. The pseudocode is as follows:\n",
        "\n",
        "```\n",
        "Initialize dataset D with expert demonstrations\n",
        "Repeat\n",
        "    Train policy using observations and actions from D.\n",
        "    Run policy to generate more observations O_new.\n",
        "    Ask expert to label the observations with actions A_new. Get D_new = {O_new, A_new}.\n",
        "    Aggregate D = {D, D_new}.\n",
        "```"
      ]
    },
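    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The loop above can be sketched as a self-contained toy in Python. The \"expert\" and the rollout here are stand-ins invented purely for illustration (a fixed labelling rule and random states); in the real algorithm they would be a trained expert policy and rollouts of the learned policy in the environment.\n",
        "\n",
        "```python\n",
        "import random\n",
        "\n",
        "def expert_label(state):\n",
        "    # Stand-in expert: labels states with a fixed rule\n",
        "    return 1 if state > 0 else 0\n",
        "\n",
        "def dagger(num_iters=3, rollout_steps=10):\n",
        "    # Initialize dataset D with expert demonstrations\n",
        "    dataset = [(s, expert_label(s)) for s in (-1.0, 1.0)]\n",
        "    for _ in range(num_iters):\n",
        "        # 1. Train policy on D (the actual fit is elided in this toy)\n",
        "        # 2. Run the current policy to visit new states\n",
        "        #    (random states stand in for real rollouts)\n",
        "        for _ in range(rollout_steps):\n",
        "            state = random.uniform(-1, 1)\n",
        "            # 3. Ask the expert to relabel the visited state,\n",
        "            # 4. and aggregate (state, expert_action) into D\n",
        "            dataset.append((state, expert_label(state)))\n",
        "    return dataset\n",
        "\n",
        "data = dagger()\n",
        "```\n",
        "\n",
        "The crucial design choice is step 3: the dataset stores the expert's labels at states the policy visits, which is what pulls the training distribution toward the states the learned policy actually encounters."
      ]
    },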
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a0e0aariumC6"
      },
      "source": [
        "## Exercise 1: Behavioral Cloning\n",
        "\n",
        "In this exercise, you will use behavioral cloning to solve the CartPole environment. Run the first cell to train an expert policy using policy gradient, which should take about 3 minutes. Then, run the second cell to collect expert demonstrations. We store the data in a replay buffer of states and actions. Finally, follow the instructions to complete the training and model update functions of behavioral cloning. Since the dynamics of CartPole are very simple, we don't need DAgger to solve it."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "-o32NwFyuqRk"
      },
      "source": [
        "# @markdown ## Train expert using policy gradient\n",
        "\n",
        "class PolicyNetwork(nn.Module):\n",
        "    def __init__(self, num_inputs, num_actions, hidden_size, learning_rate=3e-4):\n",
        "        super(PolicyNetwork, self).__init__()\n",
        "        self.num_actions = num_actions\n",
        "        self.linear1 = nn.Linear(num_inputs, hidden_size)\n",
        "        self.linear2 = nn.Linear(hidden_size, num_actions)\n",
        "        self.optimizer = optim.Adam(self.parameters(), lr=learning_rate)\n",
        "\n",
        "    def forward(self, state):\n",
        "        x = F.relu(self.linear1(state))\n",
        "        x = F.softmax(self.linear2(x), dim=1)\n",
        "        return x \n",
        "    \n",
        "    def get_action(self, state):\n",
        "        state = torch.from_numpy(state).float().unsqueeze(0)\n",
        "        probs = self.forward(state)\n",
        "        action = np.random.choice(self.num_actions, p=np.squeeze(probs.detach().numpy()))\n",
        "        log_prob = torch.log(probs.squeeze(0)[action])\n",
        "        return action, log_prob\n",
        "\n",
        "## Update\n",
        "def update_policy(policy_network, rewards, log_probs, gamma):\n",
        "    discounted_rewards = []\n",
        "    for t in range(len(rewards)):\n",
        "        Gt = 0 \n",
        "        pow = 0\n",
        "        for r in rewards[t:]:\n",
        "            Gt = Gt + gamma**pow * r\n",
        "            pow = pow + 1\n",
        "        discounted_rewards.append(Gt)\n",
        "    discounted_rewards = torch.tensor(discounted_rewards)\n",
        "    # Add whitening\n",
        "    discounted_rewards = (discounted_rewards - discounted_rewards.mean()) / (discounted_rewards.std() + 1e-9) # normalize discounted rewards\n",
        "\n",
        "    policy_gradients = []\n",
        "    for log_prob, Gt in zip(log_probs, discounted_rewards):\n",
        "        policy_gradients.append(-log_prob * Gt)\n",
        "    \n",
        "    policy_network.optimizer.zero_grad()\n",
        "    objective = torch.stack(policy_gradients).sum()\n",
        "    objective.backward()\n",
        "    policy_network.optimizer.step()\n",
        "\n",
        "## Main loop\n",
        "def reinforce_whitening(env, policy):\n",
        "    \n",
        "    max_episode_num = 3000\n",
        "    max_steps = 10000\n",
        "    numsteps = []\n",
        "    avg_numsteps = []\n",
        "    all_rewards = []\n",
        "\n",
        "    for episode in tqdm(range(max_episode_num),position=0, leave=True):\n",
        "        state = env.reset()\n",
        "        log_probs = []\n",
        "        rewards = []\n",
        "\n",
        "        for steps in range(max_steps):\n",
        "            action, log_prob = policy_net.get_action(state)\n",
        "            new_state, reward, done, _ = env.step(action)\n",
        "            log_probs.append(log_prob)\n",
        "            rewards.append(reward)\n",
        "\n",
        "            if done:\n",
        "                update_policy(policy_net, rewards, log_probs, 0.9)\n",
        "                numsteps.append(steps)\n",
        "                avg_numsteps.append(np.mean(numsteps[-10:]))\n",
        "                all_rewards.append(np.sum(rewards))\n",
        "                break\n",
        "            state = new_state\n",
        "        \n",
        "    plt.plot(numsteps)\n",
        "    plt.plot(avg_numsteps)\n",
        "    plt.ylabel(\"Reward\")\n",
        "    plt.xlabel('Episode')\n",
        "    plt.show()\n",
        "\n",
        "device = torch.device(\"cpu\")\n",
        "env = gym.make('CartPole-v0')\n",
        "policy_net = PolicyNetwork(env.observation_space.shape[0], env.action_space.n, 128)\n",
        "policy_net = policy_net.to(device)\n",
        "reinforce_whitening(env, policy_net)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "Iw0dPbASutgB"
      },
      "source": [
        "# @markdown ## Collect expert data\n",
        "\n",
        "class ReplayBuffer:\n",
        "  def __init__(self, state_dim, act_dim, buffer_size):\n",
        "    self.buffer_size = buffer_size\n",
        "    self.ptr = 0\n",
        "    self.n_samples = 0\n",
        "\n",
        "    self.state = torch.zeros(buffer_size, *state_dim, dtype=torch.float32, device=device)\n",
        "    self.action = torch.zeros(buffer_size, act_dim, dtype=torch.int64, device=device)\n",
        "\n",
        "  def add(self, state, action):\n",
        "    self.state[self.ptr] = torch.tensor(state)\n",
        "    self.action[self.ptr] = torch.tensor(action)\n",
        "    if self.n_samples < self.buffer_size:\n",
        "      self.n_samples += 1\n",
        "    self.ptr = (self.ptr + 1) % self.buffer_size\n",
        "\n",
        "  def sample(self, batch_size):      \n",
        "    # Select batch_size sample indices at random from the buffer\n",
        "    idx = np.random.choice(self.n_samples, batch_size)    \n",
        "    # Using the random indices, assign the corresponding state and action\n",
        "    state = self.state[idx]\n",
        "    action = self.action[idx]\n",
        "    return state, action\n",
        "\n",
        "def collect_expert_data(env, expert, num_samples=10000):\n",
        "    print('Collecting expert data...')\n",
        "    replay_buffer = ReplayBuffer(state_dim=(4,), act_dim=1, buffer_size=num_samples)\n",
        "    state = env.reset()\n",
        "    while replay_buffer.n_samples < replay_buffer.buffer_size:\n",
        "        # Sample a random action\n",
        "        action, log_prob = expert.get_action(state)\n",
        "        # Execute action in environment\n",
        "        next_state, reward, done, _ = env.step(action)\n",
        "        # Add to replay buffer\n",
        "        replay_buffer.add(state, action)\n",
        "        # Update state\n",
        "        state = next_state\n",
        "        if done:\n",
        "            state = env.reset()\n",
        "    print('Done!')\n",
        "    return replay_buffer\n",
        "\n",
        "replay_buffer = collect_expert_data(env, policy_net)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "hckaGea0uwWo"
      },
      "source": [
        "class BCAgent(nn.Module):\n",
        "    def __init__(self, num_inputs, num_actions, hidden_size, learning_rate=3e-4):\n",
        "        super(BCAgent, self).__init__()\n",
        "        self.num_actions = num_actions\n",
        "        self.linear1 = nn.Linear(num_inputs, hidden_size)\n",
        "        self.linear2 = nn.Linear(hidden_size, num_actions)\n",
        "        self.optimizer = optim.Adam(self.parameters(), lr=learning_rate)\n",
        "        self.criterion = nn.CrossEntropyLoss()\n",
        "\n",
        "    def forward(self, state):\n",
        "        x = F.relu(self.linear1(state))\n",
        "        # We don't apply softmax here because CrossEntropyLoss applies log-softmax internally.\n",
        "        x = self.linear2(x)\n",
        "        return x \n",
        "    \n",
        "    def get_action(self, state):\n",
        "        state = torch.from_numpy(state).float().unsqueeze(0)\n",
        "        probs = F.softmax(self.forward(state), dim=1)\n",
        "        action = torch.argmax(probs, dim=1)\n",
        "        return action.numpy()[0]\n",
        "    \n",
        "    def update(self, state, action):\n",
        "        ####################################################################\n",
        "        # Fill in missing code below (...),\n",
        "        # then remove or comment the line below to test your function\n",
        "        raise NotImplementedError(\"BC agent\")\n",
        "        ####################################################################\n",
        "\n",
        "        # Get output from model\n",
        "        output = ...\n",
        "\n",
        "        # Compute cross-entropy loss\n",
        "        loss = ...\n",
        "\n",
        "        # Take gradient step\n",
        "        ...\n",
        "\n",
        "        return loss.item()\n",
        "\n",
        "\n",
        "def behavioral_cloning(env, agent, buffer, num_epochs=10, iters_per_epoch=200, batch_size=50):\n",
        "    epoch_losses = []\n",
        "    epoch_rewards = []\n",
        "    for epoch in tqdm(range(num_epochs)):\n",
        "        total_loss = 0\n",
        "        for i in range(iters_per_epoch):\n",
        "            ####################################################################\n",
        "            # Fill in missing code below (...),\n",
        "            # then remove or comment the line below to test your function\n",
        "            raise NotImplementedError(\"BC training loop\")\n",
        "            ####################################################################\n",
        "\n",
        "            # Sample a batch of states and actions from the buffer\n",
        "            ...\n",
        "\n",
        "            # Update agent\n",
        "            loss = ...\n",
        "\n",
        "            total_loss += loss\n",
        "        # Log average loss\n",
        "        epoch_losses.append(total_loss / iters_per_epoch)\n",
        "        # Evaluate in environment\n",
        "        total_reward = 0\n",
        "        done = False\n",
        "        state = env.reset()\n",
        "        while not done:\n",
        "            with torch.no_grad():\n",
        "                action = agent.get_action(state)\n",
        "            next_state, reward, done, _ = env.step(action)\n",
        "            total_reward += reward\n",
        "            state = next_state\n",
        "        epoch_rewards.append(total_reward)\n",
        "        print(f'Epoch [{epoch}/{num_epochs}], loss: {epoch_losses[-1]}, reward: {epoch_rewards[-1]}')\n",
        "    return epoch_losses, epoch_rewards\n",
        "\n",
        "# # Uncomment to test\n",
        "# agent = BCAgent(env.observation_space.shape[0], env.action_space.n, 128)\n",
        "# losses, rewards = behavioral_cloning(env, agent, replay_buffer, batch_size=10)\n",
        "\n",
        "# # Plot learning curves\n",
        "# plt.figure(figsize=(9, 5))\n",
        "# plt.title('Learning curves')\n",
        "# ax = plt.gca()\n",
        "# ax.plot(losses, label=\"Loss\", c='r')\n",
        "# ax.set_xlabel('Epoch')\n",
        "# ax.set_ylabel('Loss', c='r')\n",
        "# plt.legend(loc=\"upper left\")\n",
        "# axtwin = ax.twinx()\n",
        "# axtwin.plot(rewards, label=\"Reward\", c='b')\n",
        "# axtwin.set_ylabel('Reward', c='b')\n",
        "# plt.legend(loc=\"upper right\")\n",
        "# plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TxpekhawvD4n"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W12_DeepRL/solutions/W12_Tutorial2_Solution_Ex01.py)\n",
        "\n",
        "*Example output:*  \n",
        "\n",
        "<img alt='Solution hint 1' align='left' width=630 height=350 src=https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W12_DeepRL/static/W12_Tutorial2_Solution_Ex01.png />"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "3SRvwuNwvG4_"
      },
      "source": [
        "#@markdown If we were to extend the code to DAgger, how could we use the expert to relabel new experiences?\n",
        "dagger_relabel = \"\" #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JOhe_siSvEqm"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W12_DeepRL/solutions/dagger_relabel.md)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tHNZyNdSXIUA"
      },
      "source": [
        "---\n",
        "# Section 2: Back to AlphaZero"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vqQlDCiZLwe7"
      },
      "source": [
        "## Key concepts in RL\n",
        "\n",
        "We are now ready to see our old friend AlphaZero again. Let's summarize a few key concepts in reinforcement learning introduced over the past two weeks, and try to connect them to AlphaZero in the rest of the tutorial. \n",
        "\n",
        "- Recall that a reinforcement learning problem can be formalized as a **Markov Decision Process (MDP)**. An MDP is a tuple $(S, A, P, R, \\gamma)$, where $S$ denotes the set of states, $A$ the set of actions, $P$ the transition function, $R$ the reward function, and $\\gamma$ the discount factor. An MDP satisfies the Markov property: the reward and next state can be determined from the current state and action, without knowing the history. The reward is typically a function of state and action, but we can also have it be a function of state only, or of state, action, and next state. \n",
        "- A **policy** is a mapping from states to distributions over actions. The objective of reinforcement learning is to find a policy that maximizes the expected total reward: $$ \\pi^* = \\arg\\max_{\\pi}E_{\\pi}\\left[\\sum_{t=0}^{\\infty} \\gamma^t R(s_t, a_t)\\right]$$\n",
        "- The **value** of a state $V^{\\pi}(s)$ is defined as the expected total reward obtained by following policy $\\pi$ starting from state $s$. The value of a state-action pair, or the **Q-value**, $Q^{\\pi}(s, a)$, is defined as the expected total reward obtained by taking action $a$ at state $s$ and then following policy $\\pi$. Values and Q-values satisfy the Bellman equations. Using the Bellman equations, we can derive two algorithms for solving MDPs with known transitions: **policy iteration** and **value iteration**.\n",
        "- To solve an MDP without knowledge of its transitions, we can use **temporal difference (TD) learning**. The idea is to update the value estimate of state $s_{t}$ by bootstrapping a value of $s_{t+1}$ from the current value estimates. Two instantiations of TD learning are **Q-learning** and **SARSA**. Later, in actor-critic methods, we also saw $n$-step TD methods, where we bootstrap a value after rolling out $n$ steps.\n",
        "- **Deep Q-learning** extends Q-learning to large state spaces by approximating the Q-values with a deep neural net. We need a few tricks to get it to work well, including a replay buffer, a target network, and double DQN. Deep Deterministic Policy Gradient (DDPG) further extends DQN to continuous action spaces.\n",
        "- The **policy gradient method** (REINFORCE) directly optimizes a neural-network-parametrized policy using gradient ascent. Since the objective involves an expectation, we use a score function estimator and Monte Carlo sampling to estimate the gradient. The vanilla policy gradient suffers from high variance, which we mitigate with reward-to-go and discounting.\n",
        "- To further reduce the variance of policy gradients, we can subtract an unbiased baseline from the cumulative future reward term. A good baseline is the value function. This leads to the **actor-critic algorithm**, in which we jointly train an actor (policy) and a critic (value function). \n",
        "\n",
        "Finally, here are some recurring themes:\n",
        "- **On-policy vs Off-policy**: A method is on-policy if learning needs to be done with examples generated using the current policy, and off-policy otherwise. Q-learning is off-policy, whereas SARSA is on-policy. DQN is off-policy, whereas policy gradient is on-policy. \n",
        "- **Model-based vs Model-free**: A model-based RL algorithm approximates the dynamics of the environment with a model (e.g., a neural network). It then plans with the model or uses the model to provide additional training data for model-free algorithms. \n",
        "- **Exploration vs exploitation**: In any RL algorithm, we need to balance exploration and exploitation. In particular, we need to make sure the algorithm explores enough states to be able to learn the optimal policy. In Q-learning, this is handled by an epsilon-greedy policy, whereas the stochastic policy in policy gradient methods takes care of it automatically.\n",
        "\n",
        "In the remaining sections of the tutorial, there will be no video or code for you to implement. Thus, we encourage you to turn on your cameras and work through the rest of the tutorial with your pod. \n"
      ]
    },
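    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a concrete reminder of the TD update at the heart of Q-learning, here is a minimal tabular sketch on an invented 2-state, 2-action MDP (the transition, reward, and step size are illustrative only):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "n_states, n_actions = 2, 2\n",
        "Q = np.zeros((n_states, n_actions))\n",
        "alpha, gamma = 0.5, 0.9\n",
        "\n",
        "def q_update(s, a, r, s_next):\n",
        "    # The TD target bootstraps from the current Q estimates; the max\n",
        "    # over next actions is what makes Q-learning off-policy\n",
        "    td_target = r + gamma * Q[s_next].max()\n",
        "    Q[s, a] += alpha * (td_target - Q[s, a])\n",
        "\n",
        "# One observed transition: in state 0, action 1 yields reward 1.0,\n",
        "# landing in state 1 (whose value estimate is still 0)\n",
        "q_update(0, 1, 1.0, 1)\n",
        "```\n",
        "\n",
        "After this single update, $Q(0, 1)$ moves halfway ($\\alpha = 0.5$) from $0$ toward the TD target of $1.0$."
      ]
    },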
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GXAgpzNJJQjW"
      },
      "source": [
        "## The Othello Game\n",
        "\n",
        "Recall that the game of Othello involves two players and an 8x8 (or 6x6) board. Initially, four stones are placed at the center of the board. Player 1 (black) and player 2 (white) take turns placing stones on the board. A move is legal if 1) its position is adjacent to one of the opponent's pieces, and 2) placing the stone will cause a \"flip\": all of the opponent's pieces trapped between the new piece and an existing piece are flipped. When neither player has a legal move, the player with more stones on the board wins the game. Run the following code cells and make a few moves to familiarize yourself with the rules."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bxty7aS9N-V0",
        "cellView": "form"
      },
      "source": [
        "#@markdown OthelloGame and InteractiveOthelloGame\n",
        "\n",
        "# Environment config so that numba knows where to find CUDA libraries\n",
        "os.environ['NUMBAPRO_LIBDEVICE'] = \"/usr/local/cuda-10.0/nvvm/libdevice\"\n",
        "os.environ['NUMBAPRO_NVVM'] = \"/usr/local/cuda-10.0/nvvm/lib64/libnvvm.so\"\n",
        "\n",
        "# 1st dimension is row, 2nd is col. Top-left corner of board is (0,0). 8 vector\n",
        "# directions out from each point are:\n",
        "RAYS = np.array([[0, 1],   # east\n",
        "                 [0, -1],  # west\n",
        "                 [1, 0],   # south\n",
        "                 [-1, 0],  # north\n",
        "                 [1, 1],   # southeast\n",
        "                 [1, -1],  # southwest\n",
        "                 [-1, 1],  # northeast\n",
        "                 [-1, -1]],# northwest\n",
        "                dtype=np.int32)\n",
        "\n",
        "# ======================================\n",
        "# === BEGIN CUDA-ACCELERATED HELPERS ===\n",
        "# ======================================\n",
        "\n",
        "def torch2cuda(arr:torch.Tensor) -> cuda.cudadrv.devicearray.DeviceNDArray:\n",
        "    return cuda.as_cuda_array(arr)\n",
        "\n",
        "def cuda2torch(arr:cuda.cudadrv.devicearray.DeviceNDArray) -> torch.Tensor:\n",
        "    return torch.as_tensor(arr, device='cuda')\n",
        "\n",
        "@cuda.jit(device=True)\n",
        "def cuda_cast_rays(board, y0, x0, ray, player):\n",
        "    \"\"\"Helper CUDA kernel that searches out from row, column coordinate (y0,x0)\n",
        "    in the direction specified by \"ray\". Returns True iff there is an unbroken\n",
        "    line of opponent stones along the ray starting at (y0,x0), ending with a\n",
        "    stone belonging to \"player\".\n",
        "    \n",
        "    The device=True decorator means that this may only be called from other CUDA\n",
        "    kernels (see below).\n",
        "    \"\"\"\n",
        "    n = board.shape[1]\n",
        "    opponent = 3 - player\n",
        "    y, x = y0+ray[0], x0+ray[1]\n",
        "\n",
        "    # The ray must begin with an opponent stone 1 space away\n",
        "    if x < 0 or y < 0 or x >= n or y >= n or board[y, x] != opponent:\n",
        "        return False\n",
        "    \n",
        "    # Keep looking until edge of the board or we find an open space or we find\n",
        "    # a player-owned stone\n",
        "    y, x = y+ray[0], x+ray[1]\n",
        "    while x >= 0 and y >= 0 and x < n and y < n:\n",
        "        # Ray terminates on an open space. Not a valid move.\n",
        "        if board[y, x] == 0:\n",
        "            return False\n",
        "        \n",
        "        # Ray terminates on player. This is valid!\n",
        "        if board[y, x] == player:    \n",
        "            return True\n",
        "        \n",
        "        y, x = y+ray[0], x+ray[1]\n",
        "        \n",
        "    # Reached end of the board. Not a valid move.\n",
        "    return False\n",
        "\n",
        "@cuda.jit\n",
        "def cuda_get_valid_actions(boards, player, valid_actions):\n",
        "    \"\"\"Helper CUDA kernel that calls cuda_cast_rays to find all legal moves\n",
        "    in parallel across many boards and positions. Results are written into the\n",
        "    array \"valid_actions\" which must be the same shape as \"boards\".\n",
        "\n",
        "    Decorating with @cuda.jit means we can call it from python by passing cuda\n",
        "    device array objects for \"board\" and \"valid_actions\".\n",
        "\n",
        "    CUDA is structured by 'blocks' and 'threads'. Each block contains multiple\n",
        "    threads. It is the caller's job to say how many of each and their effective\n",
        "    size. For instance, cuda_get_valid_actions[4, (8,8)](...) is the syntax for\n",
        "    \"4 blocks, each of which has 8 'x' and 8 'y' threads\".\n",
        "\n",
        "    We use a separate block per board, and each thread evaluates a single board\n",
        "    coordinate.\n",
        "    \"\"\"\n",
        "    # From numba documentation: const.array_like copies the given array into\n",
        "    # constant GPU memory *at compile time*. This means the copy only happens\n",
        "    # the first time this function is called.\n",
        "    rays = cuda.const.array_like(RAYS)\n",
        "\n",
        "    # We access boards through the 1d block index (blockIdx.x) and positions\n",
        "    # within the board through the thread index x and y.\n",
        "    tx = cuda.threadIdx.x\n",
        "    ty = cuda.threadIdx.y\n",
        "    bx = cuda.blockIdx.x\n",
        "    \n",
        "    # If a space is occupied, it's an illegal move and we're done. Store a zero\n",
        "    # at coordinate (ty, tx) on board bx.\n",
        "    if boards[bx, ty, tx] != 0:\n",
        "        valid_actions[bx, ty, tx] = 0\n",
        "        return\n",
        "\n",
        "    # Try out all 8 ray directions. If any one is a hit, set return value to 1\n",
        "    # and break.\n",
        "    for i in range(8):\n",
        "        hit = cuda_cast_rays(boards[bx], ty, tx, rays[i], player)\n",
        "        if hit:\n",
        "            valid_actions[bx, ty, tx] = 1\n",
        "            return\n",
        "    \n",
        "    # We tried all of the rays and none were legal capturing moves.\n",
        "    valid_actions[bx, ty, tx] = 0\n",
        "\n",
        "@cuda.jit\n",
        "def cuda_step(boards, actions, player):\n",
        "    \"\"\"Helper CUDA kernel for playing actions across multiple boards in parallel.\n",
        "\n",
        "    Boards must be [num_games, n, n], actions must be [num_games, 2] and player\n",
        "    must be 1 or 2. Actions are (row, col) indices of the player's stone.\n",
        "\n",
        "    WARNING: no legality checks are performed here! It is the caller's\n",
        "    responsibility to ensure that all moves are legal.\n",
        "\n",
        "    Boards are indexed by block x and the 8 ray directions by thread x.\n",
        "\n",
        "    If either coordinate of actions[b,:] is -1, the move is treated as a pass.\n",
        "    \"\"\"\n",
        "    # As in cuda_get_valid_actions, this copy happens once at compile time\n",
        "    rays = cuda.const.array_like(RAYS)\n",
        "\n",
        "    # Grab thread and block index from cuda context\n",
        "    tx = cuda.threadIdx.x\n",
        "    bx = cuda.blockIdx.x  \n",
        "\n",
        "    # Unpack the row, col coordinate of the action\n",
        "    act_y, act_x = actions[bx]\n",
        "\n",
        "    # If -1, the player passed and there is nothing to do.\n",
        "    if act_x == -1 or act_y == -1:\n",
        "        return\n",
        "\n",
        "    # Search a different ray direction on each of 8 threads. ASSUMES the move\n",
        "    # is legal.\n",
        "    is_hit = cuda_cast_rays(boards[bx], act_y, act_x, rays[tx], player)\n",
        "    if is_hit:\n",
        "        dy, dx = rays[tx]\n",
        "        opponent = 3 - player\n",
        "        y, x = act_y+dy, act_x+dx\n",
        "        while boards[bx, y, x] == opponent:\n",
        "            # Modify the board, flipping opponent to player\n",
        "            boards[bx, y, x] = player\n",
        "            y, x = y+dy, x+dx\n",
        "    \n",
        "    # Only need to do this once, so just let thread #0 handle it: update\n",
        "    # the board at location of the action\n",
        "    if tx == 0:\n",
        "        boards[bx, act_y, act_x] = player\n",
        "\n",
        "@torch.jit.script\n",
        "def zobrist_hash(boards:torch.Tensor, player:torch.Tensor, table:torch.Tensor, out:torch.Tensor):\n",
        "    n_games, n, _ = boards.size()\n",
        "    flat_boards = boards.reshape(n_games, n*n)\n",
        "    bit_strings = table[torch.arange(n*n), flat_boards.long()]\n",
        "    # Reset output values to 0\n",
        "    out.fill_(0)\n",
        "    # We have to loop manually because bitwise_xor has no dimension\n",
        "    # argument (see https://github.com/pytorch/pytorch/issues/35641)\n",
        "    for i in range(n*n):\n",
        "        torch.bitwise_xor(out, bit_strings[:,i], out=out)\n",
        "    # Fold in the current player by addition - it's ok that this isn't a\n",
        "    # random bitstring, since it's just a single small integer per game.\n",
        "    torch.add(out, player, out=out)\n",
        "    return out\n",
        "\n",
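        "# A quick sanity sketch of the Zobrist idea (illustrative only - plain Python\n",
        "# ints rather than the GPU tensors used by the game engine below): XOR-folding\n",
        "# per-(square, piece) random bitstrings is order-independent, so identical\n",
        "# positions hash identically no matter the order the stones were placed in.\n",
        "import random\n",
        "_demo_table = [[random.getrandbits(63) for _ in range(3)] for _ in range(4)]\n",
        "_h1, _h2 = 0, 0\n",
        "for sq, piece in [(0, 1), (2, 2), (3, 1)]:\n",
        "    _h1 ^= _demo_table[sq][piece]\n",
        "for sq, piece in [(3, 1), (0, 1), (2, 2)]:  # same stones, different order\n",
        "    _h2 ^= _demo_table[sq][piece]\n",
        "assert _h1 == _h2\n",
        "\n",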
        "# ====================================\n",
        "# === BEGIN OTHELLOGAME DEFINITION ===\n",
        "# ====================================\n",
        "\n",
        "class OthelloGame(object):\n",
        "    PLAYER1 = 1\n",
        "    PLAYER2 = 2\n",
        "    ACTIVE = 0\n",
        "    OVER = 1\n",
        "    PASS = (-1, -1)\n",
        "    # Unicode characters for dark and light circles if printing to console.\n",
        "    # By convention, player 1 is dark and 2 is light.\n",
        "    GAME_SYMBOLS = ['_', '\\u25cf', '\\u25cb']\n",
        "\n",
        "    def __init__(self, n_games=1, n=8):\n",
        "        \"\"\"Create a new batch of n_games othello games each with board size [n,n]\n",
        "\n",
        "        The game states are managed on the GPU with the help of CUDA kernels.\n",
        "\n",
        "        We expect all n games to update in lockstep, so that there is only one\n",
        "        'current_player' shared by all of them.\n",
        "        \"\"\"\n",
        "        self.n_games = n_games\n",
        "        self.n = n\n",
        "        self.boards = torch.zeros((n_games, n, n), dtype=torch.float32, device='cuda')\n",
        "        # Initial positions: 4 stones in the center alternating color\n",
        "        self.boards[:, [self.n//2, self.n//2-1], [self.n//2, self.n//2-1]] = OthelloGame.PLAYER2\n",
        "        self.boards[:, [self.n//2, self.n//2-1], [self.n//2-1, self.n//2]] = OthelloGame.PLAYER1\n",
        "        # Always start with player 1 (dark stones by convention)\n",
        "        self.current_player = OthelloGame.PLAYER1\n",
        "        # All games are initially active\n",
        "        self.game_status = OthelloGame.ACTIVE*torch.ones(n_games, dtype=torch.int32, device='cuda')\n",
        "        # If both players pass the game is over. Keep track of whether the last\n",
        "        # player passed.\n",
        "        self.last_player_pass = torch.zeros(n_games, dtype=torch.int32, device='cuda')\n",
        "\n",
        "        # _valid_moves is a container for storing which moves are valid. It is\n",
        "        # zero everywhere except where it is legal to play, which is 1.\n",
        "        self._valid_moves = torch.zeros_like(self.boards)\n",
        "\n",
        "        # Create a CUDA device array copy of each GPU tensor. These share GPU\n",
        "        # memory with torch tensors so any changes made in CUDA kernels are seen\n",
        "        # by torch.\n",
        "        self._cuda_boards = torch2cuda(self.boards)\n",
        "        self._cuda_valid = torch2cuda(self._valid_moves)\n",
        "\n",
        "        # Populate valid moves using CUDA-ized algorithm\n",
        "        self.refresh_legal()\n",
        "\n",
        "        # Initialize 'zobrist hash' table for get_uid() function\n",
        "        self._new_zobrist_table()\n",
        "\n",
        "    def step(self, actions:torch.Tensor):\n",
        "        \"\"\"Place a stone for the current player at location action=(row,col)\n",
        "        separately per board. Expected size of 'actions' is [n_games, 2].\n",
        "\n",
        "        Action coordinates of -1 are treated as passing.\n",
        "\n",
        "        Actions are ignored for all games that are not in the ACTIVE state.\n",
        "\n",
        "        Note: no legality checking here! It is the caller's responsibility to\n",
        "        ensure 'action' is legal, e.g. by checking that action is in\n",
        "        game.get_available_actions().\n",
        "        \"\"\"\n",
        "        # Ensure actions is a torch tensor and on the GPU. Ensure its shape is\n",
        "        # [n_games, 2] even if the input was, say, just a (row,col) tuple.\n",
        "        actions = torch.as_tensor(actions, device='cuda').view(-1, 2)\n",
        "\n",
        "        # Ensure actions are \"pass\" for games that have completed. This tells\n",
        "        # the cuda_step kernel to ignore these games.\n",
        "        actions[self.game_status == OthelloGame.OVER, :] = -1\n",
        "\n",
        "        # Update board state with call to CUDA kernel cuda_step.\n",
        "        cuda_step[self.n_games, RAYS.shape[0]](self._cuda_boards, torch2cuda(actions), self.current_player)\n",
        "\n",
        "        # Advance to the next player (2->1 and 1->2).\n",
        "        self.current_player = 3-self.current_player\n",
        "\n",
        "        # Count passes - any game with 2 passes in a row is flagged as being over.\n",
        "        is_pass = torch.any(actions == -1, dim=1)\n",
        "        self.last_player_pass[~is_pass] = 0\n",
        "        self.last_player_pass[is_pass] += 1\n",
        "\n",
        "        # Flag games that have finished\n",
        "        is_game_over = self.last_player_pass >= 2\n",
        "        self.game_status[is_game_over] = OthelloGame.OVER\n",
        "\n",
        "        # Refresh legal moves for the next turn\n",
        "        self.refresh_legal()\n",
        "    \n",
        "    def refresh_legal(self):\n",
        "        # Look for valid moves in the updated boards for the new player\n",
        "        cuda_get_valid_actions[self.n_games, (self.n, self.n)](self._cuda_boards, self.current_player, self._cuda_valid)\n",
        "\n",
        "    def copy_state(self, idx=None):\n",
        "        # Grab the minimal amount of state info to be able to return the board\n",
        "        # to its current configuration later with a call to paste_state.\n",
        "        # NOTE: this copy is single-use only; once it is pasted, it will be\n",
        "        # modified in-place!\n",
        "        return {\"boards\": self.boards.clone(),\n",
        "                \"game_status\": self.game_status.clone(),\n",
        "                \"last_player_pass\": self.last_player_pass.clone(),\n",
        "                \"current_player\": self.current_player}\n",
        "\n",
        "    def paste_state(self, state):\n",
        "        self.__dict__.update(state)\n",
        "        self._cuda_boards = torch2cuda(self.boards)\n",
        "        self._cuda_valid = torch2cuda(self._valid_moves)\n",
        "        cuda_get_valid_actions[self.n_games, (self.n, self.n)](self._cuda_boards, self.current_player, self._cuda_valid)\n",
        "\n",
        "    def render(self, mode='human'):\n",
        "        boards = self.boards.cpu()\n",
        "        acts = self.get_available_actions()\n",
        "        for g in range(self.n_games):\n",
        "            if mode == 'text':\n",
        "                print(\"=\"*(self.n-1) + (f\"{OthelloGame.GAME_SYMBOLS[self.current_player]}\")*2 + \"=\"*(self.n-1))\n",
        "                for row in boards[g]:\n",
        "                    print(\" \".join(OthelloGame.GAME_SYMBOLS[v] for v in row))\n",
        "                print(\"=\"*(self.n*2))\n",
        "            else:\n",
        "                # Plot with matplotlib patches. Player 1 is dark and 2 is light.\n",
        "                fig, ax = plt.subplots(figsize=(6,6))\n",
        "                ax.set_aspect('equal')\n",
        "                rect = patches.Rectangle((0,0),self.n,self.n,linewidth=1,edgecolor='k',facecolor='g')\n",
        "                ax.add_patch(rect)\n",
        "                ax.set_xlim(0,self.n)\n",
        "                ax.set_ylim(0,self.n)\n",
        "                ax.set_yticklabels([])\n",
        "                ax.set_xticklabels([])\n",
        "                ax.tick_params(length=0)\n",
        "                ax.grid(which='major', zorder=0, c='k')\n",
        "                for i in range(self.n):\n",
        "                    for j in range(self.n):\n",
        "                        if boards[g,i,j]>0:\n",
        "                            # Draw stones\n",
        "                            c = (0,0,0,1) if boards[g,i,j]==OthelloGame.PLAYER1 else (1,1,1,1)\n",
        "                            circ = patches.Circle((j+.525,self.n-1-i+.5), .4, \n",
        "                                                facecolor=c, linewidth=1,edgecolor='k')\n",
        "                            ax.add_patch(circ)\n",
        "                        elif (i,j) in acts[g]:\n",
        "                            # Draw translucent stones on legal move positions\n",
        "                            c = (0,0,0,0.25) if self.current_player==OthelloGame.PLAYER1 else (1,1,1,0.25)\n",
        "                            circ = patches.Circle((j+.525,self.n-1-i+.5), .4, facecolor=c, linewidth=0)\n",
        "                            ax.add_patch(circ)\n",
        "                plt.show()\n",
        "\n",
        "    def get_available_actions(self):\n",
        "        \"\"\"Get a list of all legal actions for the current player (passing not included).\n",
        "        \n",
        "        Note: assumes self._valid_moves is up to date!\n",
        "        \"\"\"\n",
        "        acts = [None]*self.n_games\n",
        "        for g in range(self.n_games):\n",
        "            i, j = torch.where(self._valid_moves[g] == 1)\n",
        "            acts[g] = list(zip(i.cpu().numpy(), j.cpu().numpy()))\n",
        "        return acts\n",
        "\n",
        "    def are_games_over(self):\n",
        "        return torch.all(self.game_status == OthelloGame.OVER).item()\n",
        "\n",
        "    def score_games(self):\n",
        "        \"\"\"Compute an [n_games, 2] tensor containing \"scores\" for player 1 and 2 in\n",
        "        each column. Assumes game is over.\n",
        "\n",
        "        scores[:,0] is +1 if player 1 won, 0 for draw, or -1 for loss\n",
        "        scores[:,1] is the same from player 2's perspective\n",
        "        \"\"\"\n",
        "        flat_boards = self.boards.view(self.n_games, -1)\n",
        "        n_dark = torch.sum(flat_boards == OthelloGame.PLAYER1, dim=1)\n",
        "        n_lite = torch.sum(flat_boards == OthelloGame.PLAYER2, dim=1)\n",
        "        scores = torch.zeros_like(n_dark)\n",
        "        scores[n_dark > n_lite] = +1\n",
        "        scores[n_dark < n_lite] = -1\n",
        "        return torch.stack([scores,-scores], dim=1)\n",
        "    \n",
        "    def get_uid(self, player=None, out=None):\n",
        "        \"\"\"Return an identifier for each game that is unique with high\n",
        "        probability. Output is a tensor of [n_games] int64 values.\n",
        "\n",
        "        Algorithm is Zobrist hashing.\n",
        "        \"\"\"\n",
        "        if player is None:\n",
        "            player = self.current_player * torch.ones(self.n_games, dtype=torch.int64, device='cuda')\n",
        "        elif not isinstance(player, torch.Tensor) or len(player) == 1:\n",
        "            player = player * torch.ones(self.n_games, dtype=torch.int64, device='cuda')\n",
        "        if out is None:\n",
        "            out = torch.zeros(self.n_games, dtype=torch.int64, device='cuda')\n",
        "        return zobrist_hash(self.boards, player, self._hash_board, out)\n",
        "\n",
        "    def _new_zobrist_table(self):\n",
        "        self._hash_board = torch.randint(2**63-1, size=(self.n*self.n, 3), dtype=torch.int64, device='cuda')\n",
        "        # Sanity check that no 2 random strings were identical\n",
        "        assert len(self._hash_board.flatten().unique()) == self.n*self.n*3\n",
        "\n",
        "################################\n",
        "#### INTERACTIVE GAME BOARD ####\n",
        "################################\n",
        "\n",
        "def temporary_info(message, clear=False):\n",
        "    \"\"\"Output overwritable message to the console. If clear=True, overwrite all old messages\n",
        "    \"\"\"\n",
        "    if clear:\n",
        "        output.clear(output_tags=\"temporary-info\")\n",
        "    with output.use_tags(\"temporary-info\"):\n",
        "        print(message)\n",
        "\n",
        "class InteractiveOthelloGame(object):\n",
        "    def __init__(self, game=None, player1=\"human\", player2=\"human\", n=8):\n",
        "        if game is None:\n",
        "            self.game = OthelloGame(n_games=1, n=n)\n",
        "        else:\n",
        "            self.game = game\n",
        "            assert game.n_games == 1, \"Can't handle >1 games interactively!\"\n",
        "        self.players = [player1, player2]\n",
        "\n",
        "    def next_turn(self):\n",
        "        self.available_acts = self.game.get_available_actions()[0]\n",
        "        if self.game.game_status == OthelloGame.ACTIVE:\n",
        "            the_player = self.players[self.game.current_player-1]\n",
        "            temporary_info(f\"Begin Player {self.game.current_player}'s turn\", clear=True)\n",
        "            if the_player == \"human\":\n",
        "                if len(self.available_acts) > 0:\n",
        "                    temporary_info(\"Input move by clicking the board\")\n",
        "                else:\n",
        "                    temporary_info(\"No legal moves! Click anywhere on the board to pass.\")\n",
        "                self.redraw()\n",
        "                self._reregister_click_callback()\n",
        "            else:\n",
        "                err = False\n",
        "                temporary_info(\"Waiting for AI to complete...\")\n",
        "                try:\n",
        "                    ai_action = the_player.select_move(self.game)\n",
        "                    try:\n",
        "                        self.game.step(ai_action)\n",
        "                    except Exception as e:\n",
        "                        err = True\n",
        "                        print(\"Error in game.step() on AI-selected move!\")\n",
        "                        print(e)\n",
        "                except Exception as e:\n",
        "                    err = True\n",
        "                    print(\"Error in AI select_move!\")\n",
        "                    print(e)\n",
        "                if not err:\n",
        "                    self.next_turn()\n",
        "        else:\n",
        "            self.redraw()\n",
        "            temporary_info(\"GAME OVER!\", clear=True)\n",
        "            values = self.game.score_games()\n",
        "            if values[0, 0] == 0:\n",
        "                print(\"Game ended in a draw\")\n",
        "            else:\n",
        "                winner_id = 1 + values[0].argmax().item()\n",
        "                try:\n",
        "                    name = self.players[winner_id-1].name\n",
        "                except AttributeError:\n",
        "                    name = f\"Player {winner_id}\"\n",
        "                print(f\"{name} is the winner!\")\n",
        "\n",
        "    def handle_user_click(self, x_pix, y_pix, plot_width, plot_height):\n",
        "        if len(self.available_acts) == 0:\n",
        "            # If there are no available actions, all you can do is pass\n",
        "            self.game.step(OthelloGame.PASS)\n",
        "            self.next_turn()\n",
        "        else:\n",
        "            cell_width = plot_width / self.game.boards.shape[2]\n",
        "            cell_height = plot_height / self.game.boards.shape[1]\n",
        "            cell_x, cell_y = int(x_pix / cell_width), int(y_pix / cell_height)\n",
        "            action = (cell_y, cell_x)\n",
        "            if action in self.available_acts:\n",
        "                self.game.step(action)\n",
        "                self.next_turn()\n",
        "            else:\n",
        "                temporary_info(f\"Available actions are {self.available_acts} but you clicked {action}\")\n",
        "\n",
        "    def _reregister_click_callback(self):\n",
        "        # Inject javascript which will detect a click and invoke a function called 'pass_to_python_handler'\n",
        "        display(IPython.display.Javascript(\"\"\"\n",
        "        var plot_element = document.querySelector(\".output_image\").firstElementChild;\n",
        "        plot_element.onclick = function(event){\n",
        "            google.colab.kernel.invokeFunction(\"pass_to_python_handler\", [event.offsetX, event.offsetY, plot_element.width, plot_element.height], {});\n",
        "        };\n",
        "        \"\"\"))\n",
        "\n",
        "        # Tell colab that when 'pass_to_python_handler' is called in JS, it should\n",
        "        # call self.handle_user_click in python\n",
        "        output.register_callback(\"pass_to_python_handler\", self.handle_user_click)\n",
        "\n",
        "    def redraw(self):\n",
        "        # Clear previous output\n",
        "        output.clear(output_tags='othello-interactive')\n",
        "        # Draw a fresh plot and store the figsize in pixels\n",
        "        with output.use_tags('othello-interactive'):\n",
        "            self.game.render()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "-HpVUBdX1Fws"
      },
      "source": [
        "interface = InteractiveOthelloGame(n=8)\n",
        "interface.next_turn()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AwmhhY9ANL1W"
      },
      "source": [
        "To solve Othello with reinforcement learning, we can formulate it as a Markov decision process. The set of states consists of all possible board configurations, and the set of actions consists of all positions on the board. The transition function is deterministic: each legal move maps one board state to the next. How did we define the reward function for AlphaZero? Is the reward issued at every time step? Do players receive the same reward?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "0btitOwe7FWI"
      },
      "source": [
        "reward_function = \"\" #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GFsmkXsNtK-R"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W12_DeepRL/solutions/reward_function.md)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "w6ILzmPYJWZY"
      },
      "source": [
        "## Value and Policy"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Tusdxsqy7T8n"
      },
      "source": [
        "In week 1, we learned AlphaZero from the bottom up, building intuitions that eventually led to the full algorithm. This time let's start with the big picture. AlphaZero is a deep reinforcement learning algorithm that uses a deep neural net to provide heuristics for Monte-Carlo Tree Search (MCTS). We will go over MCTS in a bit, but let's first look at the deep learning components of AlphaZero.\n",
        "\n",
        "At the core of AlphaZero is a convolutional neural network $f_{\\theta}$ with two heads: a value head and a policy head. The network takes an $n\\times n$ board state as input and outputs two things: a value of the board state $v_{\\theta}(s) \\in [-1, 1]$ and a policy $\\mathbf{p}_{\\theta}(s)$, which is a probability vector over actions. The value estimates the expected outcome of the board state, and the policy mimics the normalized visit counts of all actions collected by MCTS. Formally, given a state $s$, its expected outcome $z$, and a vector of normalized visit counts $\\mathbf{\\pi}$, the network is trained to minimize a sum of MSE loss and cross-entropy loss: $$\\ell = (z - v_{\\theta}(s))^2 - \\mathbf{\\pi}^\\top \\log \\mathbf{p}_{\\theta}(s) + c\\|\\theta\\|^2$$ where $c$ controls the weight of L2 regularization. Note that unlike in actor-critic algorithms, where the policy is used for inference and the value network guides policy training, here both the policy and the value are auxiliary to MCTS. We will see in the next part how they are used. At inference time, we run MCTS for a fixed number of iterations and normalize the actual visit counts to get the policy."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "gO990dHXeyIs"
      },
      "source": [
        "class PolicyValueNet(nn.Module):\n",
        "    \"\"\"This is a single neural network with two outputs: one \"policy\" output and\n",
        "    one \"value\" output. It has two \"heads\" that share the same \"body\".\n",
        "    \"\"\"\n",
        "    def __init__(self, num_channels=64, n=8, dropout=0.3):\n",
        "        super(PolicyValueNet, self).__init__()\n",
        "        \n",
        "        # game params\n",
        "        self.n = n\n",
        "        self.dropout = dropout\n",
        "        self.action_size = self.n*self.n\n",
        "        self.num_channels = num_channels\n",
        "\n",
        "        # Shared body: two conv layers followed by 2 fully connected layers\n",
        "        self.conv1 = nn.Conv2d(1, num_channels, 3)\n",
        "        self.conv2 = nn.Conv2d(num_channels, num_channels*2, 3)\n",
        "        self.fc1 = nn.Linear(num_channels*2*(self.n-4)*(self.n-4), num_channels*2)\n",
        "        self.fc2 = nn.Linear(num_channels*2, num_channels)\n",
        "\n",
        "        # Value head: one more linear layer after fc2\n",
        "        self.val_fc = nn.Linear(num_channels, 1)\n",
        "\n",
        "        # Policy head: linear from fc2 to action_size\n",
        "        self.pol_fc = nn.Linear(num_channels, self.action_size)\n",
        "\n",
        "    def forward(self, s):\n",
        "        # Body\n",
        "        s = s.view(-1, 1, self.n, self.n)\n",
        "        s = F.dropout2d(F.relu(self.conv1(s)), p=self.dropout, training=self.training)\n",
        "        s = F.dropout2d(F.relu(self.conv2(s)), p=self.dropout, training=self.training)\n",
        "        s = s.view(-1, self.num_channels*2*(self.n-4)*(self.n-4))\n",
        "        s = F.dropout(F.relu(self.fc1(s)), p=self.dropout, training=self.training)\n",
        "        s = F.dropout(F.relu(self.fc2(s)), p=self.dropout, training=self.training)\n",
        "        \n",
        "        # Value head\n",
        "        v = torch.tanh(self.val_fc(s))\n",
        "        \n",
        "        # Policy head\n",
        "        p = F.log_softmax(self.pol_fc(s), dim=1)\n",
        "\n",
        "        return p, v"
      ],
      "execution_count": null,
      "outputs": []
    },
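    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The loss above maps directly onto the network's two outputs. Below is a minimal sketch (illustrative only; `alphazero_loss` is our own name, and the actual training loop appears later). Since `forward` returns log-probabilities, the cross-entropy term is simply $-\\mathbf{\\pi}^\\top \\log \\mathbf{p}_{\\theta}(s)$, and the L2 term $c\\|\\theta\\|^2$ is typically supplied via the optimizer's `weight_decay` rather than computed by hand."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch\n",
        "import torch.nn.functional as F\n",
        "\n",
        "def alphazero_loss(log_p, v, target_pi, target_z):\n",
        "    \"\"\"Sketch of the AlphaZero loss for one batch. log_p, v are the network\n",
        "    outputs; target_pi is [B, n*n] normalized visit counts; target_z is [B]\n",
        "    game outcomes in [-1, 1].\"\"\"\n",
        "    value_loss = F.mse_loss(v.view(-1), target_z)         # (z - v)^2\n",
        "    policy_loss = -(target_pi * log_p).sum(dim=1).mean()  # -pi . log p\n",
        "    return value_loss + policy_loss\n",
        "\n",
        "# Usage sketch: log_p, v = net(boards); loss = alphazero_loss(log_p, v, pi, z)"
      ],
      "execution_count": null,
      "outputs": []
    },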
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1KtyqFyDJT8p"
      },
      "source": [
        "## Monte-Carlo Tree Search (MCTS)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Dof7I32fOXNi"
      },
      "source": [
        "AlphaZero uses Monte-Carlo Tree Search to improve its policy estimates and to generate the final policy. Recall that in an MCTS tree, each node corresponds to a board state, and each edge represents a transition by a valid move. Each edge maintains the expected reward for taking its action, $Q(s, a)$, and the number of times it has been visited, $N(s, a)$; each node additionally tracks its own visit count $N(s)$. By now, you should be much more comfortable with the meaning of Q-value than you were back in week 1.\n",
        "\n",
        "Let's review the tree-growing procedure in more detail. At the beginning of each game, we start with an empty tree. In each turn, we grow the tree by a fixed number of nodes using MCTS. Specifically, each node is grown by starting from the current state $s_t$ and taking actions that maximize the upper confidence bound (UCB) until a new leaf is grown. If this new leaf corresponds to a terminal state, we take its reward and propagate it all the way back to the root. Otherwise, we bootstrap a value estimate of the new leaf state from the neural network and do the same propagation. By propagation, we mean that for each edge $(s, a)$ along the path, we set $Q(s, a) = \\frac{Q(s, a) \\cdot N(s, a) + v}{N(s, a) + 1}$, where $v$ is the reward or the value of the leaf state. Note that this is the only place where the value network $v_{\\theta}$ is used. Finally, having performed a number of searches in a turn, the statistics at $s_t$ provide a good estimate of the optimal policy. We sample the action to take this turn from the normalized visit counts: $\\pi(a) = \\frac{N(s, a)}{N(s)}$."
      ]
    },
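    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The propagation step above is just an incremental (running) mean. A minimal standalone sketch of the edge update (illustrative only; `backprop_edge` is our own name, and `MCTSTree` below implements this with tensors):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "def backprop_edge(Q, N, v):\n",
        "    \"\"\"Fold a new leaf value v into an edge's running-average value Q, which\n",
        "    has been visited N times so far. Returns the updated (Q, N).\"\"\"\n",
        "    return (Q * N + v) / (N + 1), N + 1\n",
        "\n",
        "# Three backups with leaf values 1, 0, 1 average out to Q = 2/3:\n",
        "Q, N = 0.0, 0\n",
        "for v in [1.0, 0.0, 1.0]:\n",
        "    Q, N = backprop_edge(Q, N, v)\n",
        "assert abs(Q - 2/3) < 1e-9 and N == 3"
      ],
      "execution_count": null,
      "outputs": []
    },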
    {
      "cell_type": "code",
      "metadata": {
        "id": "hvML0CaGRWkr",
        "cellView": "form"
      },
      "source": [
        "# @markdown Which concept does using a value estimate for the last state relate to?\n",
        "value_estimate = \" \" # @param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NKnuF3D6uHAX"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W12_DeepRL/solutions/value_estimate_T2.md)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nL5ruetiRxT-"
      },
      "source": [
        "What remains to be specified is the upper confidence bound (UCB). The UCB takes the following form: \n",
        "\n",
        "$$U(s,a)=Q(s,a) + c_{puct} P(s,a)\\frac{\\sqrt{N(s)}}{1+N(s,a)}$$\n",
        "\n",
        "In this expression, $P(s, a)$ is the policy output by the neural network. The first term favors actions with high values. The second term favors actions that are less visited but good according to the policy. $c_{puct}$ balances exploitation and exploration."
      ]
    },
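    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Concretely, selecting a child during search means computing $U(s,a)$ for every legal action and taking the argmax. A minimal sketch (illustrative only; `ucb_select` is our own name, and the tree implementation below does this with tensors):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import math\n",
        "\n",
        "def ucb_select(Q, N_sa, P, N_s, c_puct=1.0):\n",
        "    \"\"\"Return the index of the action maximizing the UCB. Q, N_sa, P are\n",
        "    per-action lists; N_s is the parent node's visit count.\"\"\"\n",
        "    scores = [q + c_puct * p * math.sqrt(N_s) / (1 + n)\n",
        "              for q, n, p in zip(Q, N_sa, P)]\n",
        "    return max(range(len(scores)), key=lambda i: scores[i])\n",
        "\n",
        "# An unvisited action with a strong prior can outrank a well-visited one:\n",
        "assert ucb_select(Q=[0.5, 0.0], N_sa=[10, 0], P=[0.3, 0.7], N_s=10) == 1"
      ],
      "execution_count": null,
      "outputs": []
    },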
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "rBbmiQk8T2fv"
      },
      "source": [
        "#@markdown What happens if we set c_puct too high? What about too low?\n",
        "c_puct = \" \" # @param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "64poDpfeuAOI"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W12_DeepRL/solutions/c_puct.md)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "mWOLwm1PqNfe",
        "cellView": "form"
      },
      "source": [
        "#@markdown MCTSTree class\n",
        "import plotly.graph_objs as go\n",
        "import networkx as nx\n",
        "\n",
        "import warnings\n",
        "warnings.filterwarnings('ignore')\n",
        "\n",
        "class MCTSTree(object):\n",
        "    \"\"\"Monte Carlo Tree Search. An MCTS object is initialized with a policy and\n",
        "    value function.\n",
        "\n",
        "    Search behavior is controlled using num_search (number of searches to run)\n",
        "    and cpuct (explore/exploit hyperparameter)\n",
        "\n",
        "    Based partially on github.com/suragnair/alpha-zero-general/\n",
        "    \"\"\"\n",
        "\n",
        "    def __init__(self, policy_value_fun, n_games=1, n=8, num_search=100, cpuct:float=1.0):\n",
        "        assert cpuct > 0, \"cpuct parameter must be positive\"\n",
        "        assert num_search >= 1, \"Must have >=1 simulation\"\n",
        "        self.pol_val_fun = policy_value_fun\n",
        "        self.num_search = num_search\n",
        "        self.cpuct = cpuct\n",
        "        self.max_depth = 500\n",
        "        self.n_games = n_games\n",
        "        self.n = n\n",
        "        self.init_tree()\n",
        "\n",
        "        if self.n_games > 1 and self.__class__.__name__ == \"MCTSTree\":\n",
        "            raise ValueError(\"The base MCTSTree class can only handle one game at a time!\")\n",
        "    \n",
        "    def init_tree(self):\n",
        "        # s refers to state, a to actions. States are used as dictionary keys\n",
        "        # by calling state_key on single games.\n",
        "        self.Ns = {}  # stores #times state s was visited\n",
        "        self.Ls = {}  # stores list of legal moves at state s\n",
        "        self.Ps = {}  # stores snap policy judgment at s: tensor same size as Ls\n",
        "        self.Vs = {}  # stores snap value judgment at s: scalar tensor\n",
        "        self.Tsa = {} # stores total sub-tree values for s: tensor same size as Ls where index i corresponds to Ls[s][i]\n",
        "        self.Nsa = {} # stores #times edge s,a was visited: same format as Tsa\n",
        "        # For debugging/visualization...\n",
        "        self.Ksa = {} # stores a set of (action, child_idx) tuples from each parent node\n",
        "        self.Bs = {} # stores board state as a tuple of integers for plotting\n",
        "    \n",
        "    def state_key(self, game:OthelloGame, player):\n",
        "        # get_uid returns a [n_games] tensor of int64 hashes. Assume n_games=1\n",
        "        # and grab the int out of the tensor.\n",
        "        return game.get_uid(player).item()\n",
        "\n",
        "    def count_child_visits(self, game):\n",
        "        \"\"\"Return visit counts for the children of the given game.\n",
        "        \"\"\"\n",
        "        s = self.state_key(game, game.current_player)\n",
        "        counts = torch.zeros_like(game.boards)\n",
        "        children = self.Ls[s]\n",
        "        for k, (i,j) in enumerate(children):\n",
        "            counts[0,i,j] = self.Nsa[s][k]\n",
        "        return counts\n",
        "    \n",
        "    def run_searches(self, game):\n",
        "        \"\"\"Run self.num_search searches from the given game position, extending\n",
        "        whatever tree we already have.\n",
        "\n",
        "        Note: nothing is returned, since the effect of this function is to\n",
        "        expand the tree which is stored as instance variables.\n",
        "        \"\"\"\n",
        "        for i in range(self.num_search):\n",
        "            state = game.copy_state()\n",
        "            self.single_search(game)\n",
        "            game.paste_state(state)\n",
        "\n",
        "    def single_search(self, game:OthelloGame):\n",
        "        \"\"\"This function performs one iteration of MCTS. It loops until a leaf \n",
        "        state (unvisited or end of game) is found. The action chosen at each\n",
        "        step is one that has the maximum upper confidence bound (UCB).\n",
        "\n",
        "        Note: nothing is returned, since the effect of this function is to\n",
        "        expand the tree which is stored as instance variables.\n",
        "\n",
        "        After returning, assume 'game' is irreversibly altered. The caller is\n",
        "        responsible for copying the game state as needed. (See @run_searches)\n",
        "        \"\"\"\n",
        "        path = []\n",
        "        for d in range(self.max_depth):\n",
        "            s = self.state_key(game, game.current_player)\n",
        "\n",
        "            # Record board position for plotting\n",
        "            if s not in self.Bs:\n",
        "                self.Bs[s] = tuple(int(v) for v in game.boards[0].flatten().cpu())\n",
        "            \n",
        "            # Count visit to this state\n",
        "            self.Ns[s] = self.Ns.get(s, 0) + 1\n",
        "\n",
        "            is_unvisited = self.Ns[s] == 1\n",
        "            is_leaf = is_unvisited or game.game_status == OthelloGame.OVER\n",
        "            if is_leaf:\n",
        "                # Found a leaf! Get ready to propagate value back through the\n",
        "                # search path.\n",
        "                if is_unvisited:\n",
        "                    if game.game_status == OthelloGame.OVER:\n",
        "                        # No guesswork on the value. Use the actual game outcome.\n",
        "                        result = game.score_games()[0]\n",
        "                        self.Vs[s] = result[0] if game.current_player == OthelloGame.PLAYER1 else result[1]\n",
        "                    else:\n",
        "                        # This is a *new* leaf. Run policy and value on this state.\n",
        "                        pol, self.Vs[s] = self.pol_val_fun(game.boards, game.current_player)\n",
        "                        self.Ls[s] = game.get_available_actions()[0]\n",
        "                        # Store snap-policy judgment just over legal moves\n",
        "                        self.Ps[s] = torch.tensor([pol[0,i,j] for (i,j) in self.Ls[s]])\n",
        "                        # Initialize empty child counts from s\n",
        "                        self.Nsa[s] = torch.zeros(len(self.Ls[s]))\n",
        "                        self.Tsa[s] = torch.zeros(len(self.Ls[s]))\n",
        "                # Note: value Vs[s] is from the perspective of the player whose\n",
        "                # turn it is in state s, *not* from the perspective of whoever\n",
        "                # just played a turn before.\n",
        "                backup_value = self.Vs[s]\n",
        "                break\n",
        "            else:\n",
        "                # We've been here before. Select next move according to UCB.\n",
        "                legal_actions = self.Ls[s]\n",
        "                if len(legal_actions) == 0:\n",
        "                    next_action_idx, next_action = None, OthelloGame.PASS\n",
        "                else:\n",
        "                    # Q-value is average value = total value / number of visits\n",
        "                    Qsa = self.Tsa[s] / self.Nsa[s]\n",
        "                    # Wherever Nsa==0, set Q to 0 (otherwise it is NaN)\n",
        "                    Qsa[self.Nsa[s] == 0] = 0.0\n",
        "                    # UCB is a combination of Q values and snap-judgment policy.\n",
        "                    # Note: for UCB purposes, this visit isn't complete yet, so subtract 1 from Ns\n",
        "                    UCB = Qsa + self.cpuct * self.Ps[s] * math.sqrt(self.Ns[s] - 1) / (1 + self.Nsa[s])\n",
        "                    # Argmax the UCB score for next action\n",
        "                    next_action_idx = torch.argmax(UCB).item()\n",
        "                    next_action = self.Ls[s][next_action_idx]\n",
        "                \n",
        "                # Book-keeping: keep track of all (s,a) pairs along the search path\n",
        "                path.append((s, next_action_idx))\n",
        "\n",
        "                # Advance to the next state\n",
        "                game.step(next_action)\n",
        "                \n",
        "                # Store parent id -> (action, child id) map\n",
        "                if next_action_idx is not None:\n",
        "                    if s not in self.Ksa:\n",
        "                        self.Ksa[s] = set()\n",
        "                    self.Ksa[s].add((next_action, self.state_key(game, game.current_player)))\n",
        "        else:\n",
        "            # For...else syntax should be read as a \"no break\" clause. We land\n",
        "            # here if no leaf was ever encountered.\n",
        "            raise RuntimeError(\"Never hit a leaf! This shouldn't happen!\")\n",
        "        \n",
        "        # print(\"[DEBUG] ENDED SEARCH AT\", s)\n",
        "        # print(\"[DEBUG] PATH WAS\", path)\n",
        "        \n",
        "        # Run back through the path updating states. Recall that values are always\n",
        "        # from the perspective of whoever's turn it is when the board was evaluated.\n",
        "        # Since player 1's gains are necessarily player 2's losses (AKA minimax),\n",
        "        # we have to flip the sign of the value for each step back up the tree.\n",
        "        while len(path) > 0:\n",
        "            backup_value = -backup_value\n",
        "            s, action_idx = path.pop()\n",
        "            # print(\"[DEBUG] BACKUP\", s, action_idx, self.Ls[s][action_idx])\n",
        "            # print(\"[DEBUG] VALUE\", backup_value)\n",
        "            if action_idx is not None:\n",
        "                self.Nsa[s][action_idx] = self.Nsa[s][action_idx] + 1\n",
        "                self.Tsa[s][action_idx] = self.Tsa[s][action_idx] + backup_value\n",
        "    \n",
        "    def to_graph(self, game:OthelloGame, g_idx=0, max_depth:int=100):\n",
        "        \"\"\"Debugging helper. Returns a networkx.DiGraph representation of the tree.\n",
        "        \"\"\"\n",
        "        root = self.state_key(game, game.current_player)\n",
        "        G = nx.DiGraph()\n",
        "        G.add_node(root,\n",
        "                player=game.current_player,\n",
        "                value=self.Vs[root].item(),\n",
        "                # hash=root, # for debugging only\n",
        "                visits=self.Ns[root])\n",
        "        # queue contains tuples of (parent uid, child idx, child uid, depth, player)\n",
        "        queue = [(root, act, ch, 1, 3-game.current_player) for act, ch in self.Ksa[root]]\n",
        "        while len(queue) > 0:\n",
        "            parent, act, child, depth, player = queue.pop()\n",
        "            idx = [i for i in range(len(self.Ls[parent])) if self.Ls[parent][i] == act][0]\n",
        "            G.add_node(child,\n",
        "                    player=player,\n",
        "                    value=self.Vs[child].item(),\n",
        "                    # hash=child, # for debugging only\n",
        "                    visits=self.Ns[child])\n",
        "            G.add_edge(parent, child,\n",
        "                    policy=self.Ps[parent][idx].item(),\n",
        "                    visits=self.Nsa[parent][idx].item(),\n",
        "                    q=self.Tsa[parent][idx].item()/self.Nsa[parent][idx].item(),\n",
        "                    action=act)\n",
        "            if depth < max_depth and child in self.Ksa:\n",
        "                queue.extend([(child, a, ch, depth+1, 3-player) for a, ch in self.Ksa[child]])\n",
        "        return G, root\n"
      ],
      "execution_count": null,
      "outputs": []
    },
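    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "#@markdown (Illustration) UCB selection on toy statistics\n",
        "# A minimal sketch of the selection rule used in MCTSTree.single_search:\n",
        "# UCB = Q + cpuct * P * sqrt(Ns) / (1 + Nsa). All numbers below are made up\n",
        "# for illustration and do not come from a real game.\n",
        "toy_P = torch.tensor([0.5, 0.3, 0.2])    # snap-judgment prior over 3 legal moves\n",
        "toy_Nsa = torch.tensor([4.0, 1.0, 0.0])  # per-edge visit counts\n",
        "toy_Tsa = torch.tensor([1.0, 0.8, 0.0])  # per-edge total backed-up value\n",
        "toy_Ns = toy_Nsa.sum().item()            # parent visit count\n",
        "\n",
        "toy_Q = toy_Tsa / toy_Nsa\n",
        "toy_Q[toy_Nsa == 0] = 0.0                # unvisited edges default to Q=0\n",
        "toy_UCB = toy_Q + 1.0 * toy_P * math.sqrt(toy_Ns) / (1 + toy_Nsa)\n",
        "print(\"UCB scores:\", toy_UCB, \"-> pick action\", torch.argmax(toy_UCB).item())\n"
      ],
      "execution_count": null,
      "outputs": []
    },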
    {
      "cell_type": "code",
      "metadata": {
        "id": "vTNdOKvyyhwH",
        "cellView": "form"
      },
      "source": [
        "#@markdown CUDA helper kernel\n",
        "\n",
        "@cuda.jit\n",
        "def cuda_index_of(values, table_entries, table_lengths, indices, add_new):\n",
        "    \"\"\"Naive linear search to find the index(es) of the given value(s). Runs in\n",
        "    parallel: multiple tables and values can be searched at once. Results are\n",
        "    stored in 'indices'. If add_new=True and an item isn't found, it is appended\n",
        "    to the table and the new index is returned. If add_new=False, then -1 is used\n",
        "    to mean 'not found'.\n",
        "\n",
        "    Uses block indexing, so call with cuda_index_of[num_tables,1](...)\n",
        "    \"\"\"\n",
        "    bx = cuda.blockIdx.x\n",
        "    max_search = table_lengths[bx]\n",
        "    val = values[bx]\n",
        "\n",
        "    for i in range(max_search):\n",
        "        if table_entries[bx, i] == val:\n",
        "            indices[bx] = i\n",
        "            return\n",
        "\n",
        "    # Not found! Either add a new entry or return -1\n",
        "    if add_new:\n",
        "        table_entries[bx, max_search] = val\n",
        "        table_lengths[bx] = table_lengths[bx] + 1\n",
        "        indices[bx] = max_search\n",
        "    else:\n",
        "        indices[bx] = -1\n",
        "\n",
        "def test_cuda_index_of():\n",
        "    table = torch.zeros(2, 10, dtype=torch.int64, device='cuda')\n",
        "    hashes = torch.zeros(2, dtype=torch.int64, device='cuda')\n",
        "    lengths = torch.zeros(2, dtype=torch.int32, device='cuda')\n",
        "    indices = torch.zeros(2, dtype=torch.int32, device='cuda')\n",
        "\n",
        "    hashes[0] = 42\n",
        "    hashes[1] = 101\n",
        "\n",
        "    # Empty table --> results should have index -1 and table and lengths should be unchanged\n",
        "    cuda_index_of[2,1](torch2cuda(hashes), torch2cuda(table), torch2cuda(lengths), torch2cuda(indices), False)\n",
        "    assert torch.all(indices == -1)\n",
        "    assert torch.all(lengths == 0)\n",
        "    assert torch.all(table == 0)\n",
        "\n",
        "    # Same inputs except now add_new=True. Indices should be 0 (1st element) and\n",
        "    # table and lengths should be updated\n",
        "    cuda_index_of[2,1](torch2cuda(hashes), torch2cuda(table), torch2cuda(lengths), torch2cuda(indices), True)\n",
        "    assert torch.all(indices == 0)\n",
        "    assert torch.all(lengths == 1)\n",
        "    assert torch.all(table[:,0] == hashes)\n",
        "    assert torch.all(table[:,1:] == 0)\n",
        "\n",
        "    # add_new a second time with same inputs --> no effect\n",
        "    cuda_index_of[2,1](torch2cuda(hashes), torch2cuda(table), torch2cuda(lengths), torch2cuda(indices), True)\n",
        "    assert torch.all(indices == 0)\n",
        "    assert torch.all(lengths == 1)\n",
        "    assert torch.all(table[:,0] == hashes)\n",
        "    assert torch.all(table[:,1:] == 0)\n",
        "\n",
        "    # Another round of new values, add_new=False\n",
        "    hashes[0] = 11\n",
        "    hashes[1] = 2021\n",
        "    cuda_index_of[2,1](torch2cuda(hashes), torch2cuda(table), torch2cuda(lengths), torch2cuda(indices), False)\n",
        "    assert torch.all(indices == -1)\n",
        "    assert torch.all(lengths == 1)\n",
        "    assert torch.all(table[:,0] == torch.as_tensor([42, 101], device='cuda'))\n",
        "    assert torch.all(table[:,1:] == 0)\n",
        "\n",
        "    # Another round of new values, add_new=True\n",
        "    cuda_index_of[2,1](torch2cuda(hashes), torch2cuda(table), torch2cuda(lengths), torch2cuda(indices), True)\n",
        "    assert torch.all(indices == 1)\n",
        "    assert torch.all(lengths == 2)\n",
        "    assert torch.all(table[:,0] == torch.as_tensor([42, 101], device='cuda'))\n",
        "    assert torch.all(table[:,1] == torch.as_tensor([11, 2021], device='cuda'))\n",
        "    assert torch.all(table[:,2:] == 0)\n",
        "\n",
        "    # add_new again --> no effect\n",
        "    cuda_index_of[2,1](torch2cuda(hashes), torch2cuda(table), torch2cuda(lengths), torch2cuda(indices), True)\n",
        "    assert torch.all(indices == 1)\n",
        "    assert torch.all(lengths == 2)\n",
        "    assert torch.all(table[:,0] == torch.as_tensor([42, 101], device='cuda'))\n",
        "    assert torch.all(table[:,1] == torch.as_tensor([11, 2021], device='cuda'))\n",
        "    assert torch.all(table[:,2:] == 0)\n",
        "\n",
        "    # One new one old, add_new=False\n",
        "    hashes[0] = 12345\n",
        "    hashes[1] = 101\n",
        "    cuda_index_of[2,1](torch2cuda(hashes), torch2cuda(table), torch2cuda(lengths), torch2cuda(indices), False)\n",
        "    assert torch.all(indices == torch.as_tensor([-1, 0], device='cuda'))\n",
        "    assert torch.all(lengths == torch.as_tensor([2, 2], device='cuda'))\n",
        "    assert torch.all(table[:,:2] == torch.as_tensor([[42,11],[101,2021]], device='cuda'))\n",
        "    assert torch.all(table[:,2:] == 0)\n",
        "\n",
        "    # One new one old, add_new=True\n",
        "    cuda_index_of[2,1](torch2cuda(hashes), torch2cuda(table), torch2cuda(lengths), torch2cuda(indices), True)\n",
        "    assert torch.all(indices == torch.as_tensor([2, 0], device='cuda'))\n",
        "    assert torch.all(lengths == torch.as_tensor([3, 2], device='cuda'))\n",
        "    assert torch.all(table[:,:3] == torch.as_tensor([[42,11,12345],[101,2021,0]], device='cuda'))\n",
        "    assert torch.all(table[:,3:] == 0)\n",
        "\n",
        "    # add_new again --> no effect\n",
        "    cuda_index_of[2,1](torch2cuda(hashes), torch2cuda(table), torch2cuda(lengths), torch2cuda(indices), True)\n",
        "    assert torch.all(indices == torch.as_tensor([2, 0], device='cuda'))\n",
        "    assert torch.all(lengths == torch.as_tensor([3, 2], device='cuda'))\n",
        "    assert torch.all(table[:,:3] == torch.as_tensor([[42,11,12345],[101,2021,0]], device='cuda'))\n",
        "    assert torch.all(table[:,3:] == 0)\n",
        "\n",
        "    print(\"TEST PASSED\")\n",
        "\n",
        "test_cuda_index_of()"
      ],
      "execution_count": null,
      "outputs": []
    },
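    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "#@markdown (Illustration) CPU reference for cuda_index_of\n",
        "# A plain-Python sketch of the contract that cuda_index_of implements for a\n",
        "# single table (the kernel runs this logic once per block, i.e. per game).\n",
        "# This reference version is for reading only; the GPU code path does not use it.\n",
        "def index_of_reference(value, table, length, add_new):\n",
        "    \"\"\"Linear search; returns (index, new_length).\"\"\"\n",
        "    for i in range(length):\n",
        "        if table[i] == value:\n",
        "            return i, length\n",
        "    if add_new:\n",
        "        table[length] = value\n",
        "        return length, length + 1\n",
        "    return -1, length\n",
        "\n",
        "ref_table = [0] * 10\n",
        "idx, n = index_of_reference(42, ref_table, 0, True)    # unseen -> appended at 0\n",
        "idx2, n2 = index_of_reference(42, ref_table, n, True)  # seen -> same index\n",
        "print(idx, n, idx2, n2)  # 0 1 0 1\n"
      ],
      "execution_count": null,
      "outputs": []
    },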
    {
      "cell_type": "code",
      "metadata": {
        "id": "VyubxMriyhwH",
        "cellView": "form"
      },
      "source": [
        "#@markdown ParallelMCTSTree class\n",
        "class ParallelMCTSTree(MCTSTree):\n",
        "    \"\"\"Same function as MCTSTree, but parallelized over games, and searches\n",
        "    live entirely on the GPU so there is no GPU<->CPU transfer bottleneck.\n",
        "\n",
        "    policy_value_fun must also be able to accept and return entirely GPU tensors.\n",
        "\n",
        "    Overrides init_tree, count_child_visits, single_search, to_graph, and\n",
        "    state_key, but inherits run_searches and is otherwise functionally identical.\n",
        "    \"\"\"\n",
        "    INITIAL_TREE_SIZE = 256\n",
        "\n",
        "    def __init__(self, *args, **kwargs):\n",
        "        self._debug = kwargs.pop('debug', False)\n",
        "        super(ParallelMCTSTree, self).__init__(*args, **kwargs)\n",
        "        # Helper for slicing into the n_games dimension, since writing\n",
        "        # torch.arange(self.n_games) over and over again is ugly.\n",
        "        self._g_idx = torch.arange(self.n_games)\n",
        "    \n",
        "    def state_key(self, game:OthelloGame, player):\n",
        "        # Whereas MCTSTree returns the .item() of a single game hash, here\n",
        "        # we want all of them. Calling get_uid with out=self.cur_hash deposits\n",
        "        # the result in the memory already owned by self.cur_hash and\n",
        "        # self._cuda_cur_hash. The 'return' is redundant; it returns self.cur_hash\n",
        "        # A call to _set_idx_to_hash(...) is still required to line up cur_idx\n",
        "        # with cur_hash!\n",
        "        return game.get_uid(player, out=self.cur_hash)\n",
        "    \n",
        "    def _set_idx_to_hash(self, add_new):\n",
        "        \"\"\"Set self.cur_idx to the index in the lookup_hash table that matches\n",
        "        self.cur_hash. If add_new is True, creates a new table entry for unseen\n",
        "        hashes. If add_new is False, cur_idx will be -1 for unseen hashes.\n",
        "        \"\"\"\n",
        "        cuda_index_of[self.n_games, 1](self._cuda_cur_hash,\n",
        "                                       self._cuda_lookup_hash,\n",
        "                                       self._cuda_num_nodes,\n",
        "                                       self._cuda_cur_idx,\n",
        "                                       add_new)\n",
        "\n",
        "    def init_tree(self):\n",
        "        \"\"\"Override MCTSTree.init_tree. Rather than python dicts, we do all\n",
        "        tree book-keeping using GPU tensors.\n",
        "\n",
        "        The idea is to pre-allocate space for a small-ish tree. As we search, if\n",
        "        the data structures for the tree fill up, we double its allocated space\n",
        "        and copy data over (see _expand_tree and _expand_tree_if_full)\n",
        "        \"\"\"\n",
        "        # Clear the GPU memory before trying to allocate big tensors. Otherwise,\n",
        "        # GPU memory fills up and we get esoteric error messages about \"CUDA\n",
        "        # device-side assertion failure\"\n",
        "        torch.cuda.empty_cache()\n",
        "        # Allocate a tree with space for INITIAL_TREE_SIZE nodes. During search,\n",
        "        # we will expand the memory as needed.\n",
        "        self._tree_size = ParallelMCTSTree.INITIAL_TREE_SIZE\n",
        "        self.Ns  = torch.zeros(self.n_games, self._tree_size, dtype=torch.float32, device='cuda') # One count per game per node\n",
        "        self.Vs  = torch.zeros(self.n_games, self._tree_size, dtype=torch.float32, device='cuda') # One value per game per node\n",
        "        self.Ps  = torch.zeros(self.n_games, self._tree_size, self.n, self.n, dtype=torch.float32, device='cuda') # Per-game, per-node, snap judgment policy output\n",
        "        self.Tsa = torch.zeros(self.n_games, self._tree_size, self.n, self.n, dtype=torch.float32, device='cuda') # Per-game, per-node, per-position, sum of sub-tree values\n",
        "        self.Nsa = torch.zeros(self.n_games, self._tree_size, self.n, self.n, dtype=torch.int32, device='cuda') # Per-game, per-node, per-position, visit counts\n",
        "        self.lookup_hash = torch.zeros(self.n_games, self._tree_size, dtype=torch.int64, device='cuda') # Lookup table of board hashes\n",
        "        self.cur_hash = torch.zeros(self.n_games, dtype=torch.int64, device='cuda') # Helper during search: current hash\n",
        "        self.cur_idx = torch.zeros(self.n_games, dtype=torch.int64, device='cuda') # Helper during search: current index corresponding to cur_hash\n",
        "        self.num_nodes = torch.zeros(self.n_games, dtype=torch.int32, device='cuda') # How many nodes we have so far to cap the search through lookup_hash\n",
        "        if self._debug:\n",
        "            # Only while in debug mode: keep a dictionary of parent->child UIDs\n",
        "            # just like in the MCTSTree parent class. This requires GPU->CPU\n",
        "            # data transfer so it is disabled by default. But it is necessary\n",
        "            # for to_graph().\n",
        "            self.Ksa = {}\n",
        "            # Also keep a record of uid -> board state for visualization\n",
        "            self.Bs = {}\n",
        "\n",
        "        # Make _cuda_X wrappers around variables involved in hash lookups\n",
        "        to_wrap = ['lookup_hash', 'cur_hash', 'cur_idx', 'num_nodes']\n",
        "        for var in to_wrap:\n",
        "            self.__dict__['_cuda_' + var] = torch2cuda(self.__dict__[var])\n",
        "    \n",
        "    def _expand_tree_if_full(self):\n",
        "        buffer = 2 # Give some wiggle room just in case\n",
        "        if self.num_nodes.max() >= self._tree_size - buffer:\n",
        "            self._expand_tree()\n",
        "    \n",
        "    def _expand_tree(self):\n",
        "        # print(f\"[DEBUG] resizing tree from {self._tree_size} to {self._tree_size*2}\")\n",
        "        old_size = self._tree_size\n",
        "        self._tree_size *= 2\n",
        "\n",
        "        # Make a copy of each of the to_copy variables, expanding them along\n",
        "        # dimension 1 to twice their previous size.\n",
        "        to_copy = ['Ns', 'Vs', 'Ps', 'Tsa', 'Nsa', 'lookup_hash']\n",
        "        for var in to_copy:\n",
        "            old_tensor = self.__dict__[var]\n",
        "            sz = list(old_tensor.size())\n",
        "            assert sz[1] == old_size, f\"Expected {var} to have dimension 1 size {old_size} but was {sz[1]}. What gives?\"\n",
        "            sz[1] = self._tree_size\n",
        "            new_tensor = torch.zeros(tuple(sz), dtype=old_tensor.dtype, device=old_tensor.device)\n",
        "            new_tensor[:, :old_size, ...] = old_tensor\n",
        "            self.__dict__[var] = new_tensor\n",
        "        \n",
        "        # Note we didn't touch cur_hash, cur_idx, or num_nodes so those can stay as they were\n",
        "        to_wrap = ['lookup_hash']\n",
        "        for var in to_wrap:\n",
        "            self.__dict__['_cuda_' + var] = torch2cuda(self.__dict__[var])\n",
        "    \n",
        "    def count_child_visits(self, game:OthelloGame) -> torch.Tensor:\n",
        "        # First, get the hash and index of the given game\n",
        "        self.state_key(game, game.current_player)\n",
        "        self._set_idx_to_hash(add_new=False)\n",
        "        # Sanity check that all states do in fact exist in the table\n",
        "        if torch.any(self.cur_idx == -1):\n",
        "            raise ValueError(\"Failed to find a node! This shouldn't happen!\")\n",
        "        # Once index_of completes, we have the index into the nodes table per\n",
        "        # game stored in self.cur_idx\n",
        "        return self.Nsa[self._g_idx, self.cur_idx, :, :]\n",
        "\n",
        "    def single_search(self, game:OthelloGame):\n",
        "        # Path logs the (state, action) pairs along search paths. State is a\n",
        "        # copy of cur_idx, and action is a [n_games,2] tensor. Both live on GPU.\n",
        "        path = []\n",
        "\n",
        "        # Each search (probably) adds a node. Expand the tree's memory if it's\n",
        "        # close to filling up\n",
        "        self._expand_tree_if_full()\n",
        "\n",
        "        # Keep selecting moves until every game hits a leaf. We flag a game as\n",
        "        # being at a leaf by setting game_status=OVER, even though it might not\n",
        "        # really be over.\n",
        "\n",
        "        # Pre-allocate actions tensor (init to zero so the did_pass test is\n",
        "        # False on the 1st loop iteration)\n",
        "        actions = torch.zeros(self.n_games, 2, dtype=torch.int32, device='cuda')\n",
        "        # Pre-allocate boolean mask indicating which games need \"expansion\" aka\n",
        "        # need the policy_value_function to be called on them. We don't expand\n",
        "        # end-of-game states after their first visit.\n",
        "        needs_expansion = torch.zeros(self.n_games, dtype=torch.bool, device='cuda')\n",
        "        # Keep track of which game states are OVER for real (because leaf states\n",
        "        # are flagged as OVER without really being over)\n",
        "        is_terminal = torch.zeros(self.n_games, dtype=torch.bool, device='cuda')\n",
        "        # Different games may hit leaves at different times, meaning they may\n",
        "        # have different 'current_player' values! leaf_player keeps track of who\n",
        "        # 'current_player' was each time we hit a leaf. Note: if player A plays\n",
        "        # a move that results in a novel state (a leaf), then leaf_player will\n",
        "        # be player B because that's whose turn it is _in the state being\n",
        "        # evaluated_. So leaf_player is set to player B. Similarly, if player A\n",
        "        # plays the final PASS move that ends the game, then player B is again \n",
        "        # the leaf_player.\n",
        "        leaf_player = torch.zeros(self.n_games, dtype=torch.int64, device='cuda')\n",
        "        depth = 0\n",
        "        while depth < self.max_depth and torch.any(leaf_player == 0):\n",
        "            depth += 1\n",
        "\n",
        "            # Compute hashes for each game's current state -> result stored in\n",
        "            # self.cur_hash. \n",
        "            self.state_key(game, game.current_player)\n",
        "            # Lookup node index by hash -> result stored in self.cur_idx. Set\n",
        "            # add_new to True to add this state to the table if it didn't\n",
        "            # already exist\n",
        "            self._set_idx_to_hash(add_new=True)\n",
        "            # print(f\"[DEBUG] Depth {depth} hash {self.cur_hash.cpu().numpy()}\")\n",
        "            # print(f\"[DEBUG] Depth {depth} index {self.cur_idx.cpu().numpy()}\")\n",
        "\n",
        "            # print(\"[DEBUG]\" + str(depth) + \"  \"*depth + f\"TREE SIZE {self.num_nodes.max().item()}/{self._tree_size}; INDEX {self.cur_idx.max().item()}\")\n",
        "            if self.cur_idx.max() >= self._tree_size-1:\n",
        "                raise RuntimeError(\"TREE IS FULL! This should not happen!\")\n",
        "\n",
        "            # Book-keeping for plotting\n",
        "            if self._debug:\n",
        "                for g in range(self.n_games):\n",
        "                    s = self.cur_hash[g].item()\n",
        "                    self.Bs[s] = tuple(game.boards[g].cpu().int().flatten().numpy())\n",
        "\n",
        "            #########################\n",
        "            ### DETECT NEW LEAVES ###\n",
        "            #########################\n",
        "\n",
        "            # Did we hit any new leaves? If so, mark them as leaves that need\n",
        "            # expanding. (This does apply to end-of-game states that we're\n",
        "            # seeing for the first time)\n",
        "            needs_expansion = needs_expansion | (self.Ns[self._g_idx, self.cur_idx] == 0)\n",
        "            # We can tell that this leaf was *just* encountered because it is\n",
        "            # both flagged needs_expansion and still ACTIVE. Record who the\n",
        "            # player is at the time of encountering this leaf.\n",
        "            is_new_leaf = needs_expansion & (game.game_status == OthelloGame.ACTIVE)\n",
        "            # Flag all new leaves as OVER to freeze their board state.\n",
        "            game.game_status[is_new_leaf] = OthelloGame.OVER\n",
        "            # Record leaf_player: whose turn is it *now* (\"player B\" in the\n",
        "            # comments above)\n",
        "            leaf_player[is_new_leaf] = game.current_player\n",
        "\n",
        "            # Detect games that *just* ended and store whose turn it is *now*\n",
        "            # (\"player B\" in above comments)\n",
        "            just_ended = (game.game_status == OthelloGame.OVER) & (leaf_player == 0)\n",
        "            # Mark this game as in a \"terminal\" state\n",
        "            is_terminal = is_terminal | just_ended\n",
        "            # Record whose turn it is\n",
        "            leaf_player[just_ended] = game.current_player\n",
        "\n",
        "            ########################\n",
        "            ### INCREMENT VISITS ###\n",
        "            ########################\n",
        "\n",
        "            # Increment visit count unless the previous move was a PASS, in which\n",
        "            # case we're seeing the same state over and over again but it is not\n",
        "            # a new visit.\n",
        "            did_pass = actions[:,0] == -1\n",
        "            self.Ns[self._g_idx[~did_pass], self.cur_idx[~did_pass]] = \\\n",
        "                self.Ns[self._g_idx[~did_pass], self.cur_idx[~did_pass]] + 1\n",
        "\n",
        "            # We can skip the rest of the loop if everything is done. This is\n",
        "            # important to ensure that path.append() was called exactly as many\n",
        "            # times as game.step().\n",
        "            if torch.all(game.game_status == OthelloGame.OVER):\n",
        "                # print(f\"[DEBUG] BREAKING on {depth}\")\n",
        "                break\n",
        "\n",
        "            ############################\n",
        "            ### UCB ACTION SELECTION ###\n",
        "            ############################\n",
        "            \n",
        "            # For all non-leaf games, select another move using max of UCB.\n",
        "            T = self.Tsa[self._g_idx, self.cur_idx, ...]\n",
        "            N = self.Nsa[self._g_idx, self.cur_idx, ...]\n",
        "            P = self.Ps[self._g_idx, self.cur_idx, ...]\n",
        "            Ntot = self.Ns[self._g_idx, self.cur_idx].view(self.n_games,1,1) - 1 # For UCB purposes, this visit isn't complete yet, so subtract 1\n",
        "            # Q-value is average value = total value / number of visits, or zero\n",
        "            # if the (s,a) pair has not yet been tried. Shape is [n_games, n, n]\n",
        "            Q = T/N\n",
        "            Q[N==0] = 0\n",
        "            # UCB is a combination of Q values and snap-judgment policy. Shape\n",
        "            # is [n_games, n, n]\n",
        "            UCB = Q + self.cpuct * P * torch.sqrt(Ntot) / (1 + N)\n",
        "            UCB.masked_fill_(game._valid_moves == 0, float('-inf'))\n",
        "            # Argmax the UCB score for next action\n",
        "            next_action_flat_idx = torch.argmax(UCB.view(self.n_games, -1), dim=1)\n",
        "            actions[:,0], actions[:,1] = next_action_flat_idx // self.n, next_action_flat_idx % self.n\n",
        "\n",
        "            # Pass if there were no moves available or if the game is OVER. This\n",
        "            # includes games that are flagged as leaves.\n",
        "            do_pass = torch.all(game._valid_moves.view(self.n_games, -1) == 0, dim=1) | (game.game_status == OthelloGame.OVER)\n",
        "            actions[do_pass, :] = -1\n",
        "\n",
        "            ##############################\n",
        "            ### RECORD ACTION AND STEP ###\n",
        "            ##############################\n",
        "            \n",
        "            # Book-keeping: keep track of all (state idx, action) tuples along\n",
        "            # the search path so we can back-up values later.\n",
        "            path.append((self.cur_idx.clone(), actions.clone()))\n",
        "\n",
        "            if self._debug:\n",
        "                # If in debug mode, make a copy of the 'parent' hash value\n",
        "                parent_hash = self.cur_hash.cpu().numpy()\n",
        "\n",
        "            # print(f\"[DEBUG] Depth {depth} actions are {actions.cpu().numpy()}\")\n",
        "\n",
        "            # Advance to the next state.\n",
        "            game.step(actions)\n",
        "\n",
        "            # When in debug mode, keep track of set of parent->child hashes\n",
        "            if self._debug:\n",
        "                child_hash = game.get_uid().cpu().numpy()\n",
        "                actions_copy = actions.cpu().int().numpy()\n",
        "                # Record parent hash -> set of tuples of (action, child hashes)\n",
        "                for g in range(self.n_games):\n",
        "                    s = self.cur_hash[g].item()\n",
        "                    # Get action as a tuple of (row,col) integers\n",
        "                    act = tuple(actions_copy[g,:].flatten())\n",
        "                    if do_pass[g]:\n",
        "                        continue\n",
        "                    elif parent_hash[g] in self.Ksa:\n",
        "                        self.Ksa[s].add((act, child_hash[g]))\n",
        "                    else:\n",
        "                        self.Ksa[s] = {(act, child_hash[g])}\n",
        "\n",
        "        # After loop: ensure hashes and indices are up to date with game states.\n",
        "        # Importantly, state_key depends on who the player is. Make sure all\n",
        "        # hashes are relative to the leaf_player!\n",
        "        self.state_key(game, leaf_player)\n",
        "        self._set_idx_to_hash(add_new=True)\n",
        "        # print(f\"[DEBUG] After loop, hashes are {self.cur_hash}\")\n",
        "        # print(f\"[DEBUG] ...and indices are {self.cur_idx}\")\n",
        "\n",
        "        # Sanity check that there is a leaf_player for all games\n",
        "        # print(f\"[DEBUG] leaf_player = {leaf_player.cpu().numpy()}\")\n",
        "        assert torch.all(leaf_player > 0), \"Missing leaf player! This shouldn't happen!\"\n",
        "\n",
        "        # Prepare back-up values, which will all be from the perspective of\n",
        "        # 'leaf_player'. Begin with the final-outcome value which will be used\n",
        "        # for terminal states.\n",
        "        backup_value = game.score_games()[self._g_idx, leaf_player-1].float()\n",
        "        \n",
        "        # \"Expand\" states that are being visited for the first time. This means\n",
        "        # calling the policy+value function.\n",
        "        needs_expansion = needs_expansion & ~is_terminal # don't bother expanding terminal states\n",
        "        if torch.any(needs_expansion):\n",
        "            pol_expansion, val_expansion = self.pol_val_fun(game.boards[needs_expansion, ...], leaf_player[needs_expansion])\n",
        "\n",
        "            # Store new expansion results\n",
        "            self.Vs[self._g_idx[needs_expansion], self.cur_idx[needs_expansion]] = val_expansion\n",
        "            self.Ps[self._g_idx[needs_expansion], self.cur_idx[needs_expansion], ...] = pol_expansion\n",
        "            \n",
        "            # Use estimated value for expanded states (note that terminal\n",
        "            # states' values are left unchanged)\n",
        "            backup_value[needs_expansion] = val_expansion\n",
        "\n",
        "        # Which player did we end on? Note leaf_player may be different for\n",
        "        # each game, but backup assumes all values are w.r.t. the current player.\n",
        "        # We address this by flipping all value signs wherever leaf_player is\n",
        "        # not equal to the current_player. The backup routine does not\n",
        "        # record values for PASS moves, and all games that hit a leaf early on\n",
        "        # were padded with PASS actions. By aligning all the values to the\n",
        "        # perspective of the \"current_player\", the backup is synchronized to\n",
        "        # the same player perspective across all games. This works becaues all\n",
        "        # games will go through the same number of backup steps; those that hit\n",
        "        # a leaf early will simply back-up through multiple PASSes, which has\n",
        "        # no effect.\n",
        "        backup_value[game.current_player != leaf_player] = -backup_value[game.current_player != leaf_player]\n",
        "\n",
        "        # Run back through the path updating states. Recall that values are always\n",
        "        # from the perspective of whoever's turn it is when the board was evaluated.\n",
        "        # Since player 1's gains are necessarily player 2's losses (AKA minimax),\n",
        "        # we have to flip the sign of the value for each step back up the tree.\n",
        "        while len(path) > 0:\n",
        "            node_idx, action = path.pop()\n",
        "            act_i, act_j = action[:,0].long(), action[:,1].long()\n",
        "            valid = act_i != -1 # Don't back up anything through pass actions\n",
        "            key = (self._g_idx[valid], node_idx[valid], act_i[valid], act_j[valid])\n",
        "            \n",
        "            # Calling pop() moved us one step back in time. Flip perspective.\n",
        "            backup_value = -backup_value\n",
        "\n",
        "            # Increment visit count at (state, action) pair if not pass\n",
        "            self.Nsa[key] = self.Nsa[key] + 1\n",
        "\n",
        "            # Count total value if not pass\n",
        "            self.Tsa[key] = self.Tsa[key] + backup_value[valid]\n",
        "    \n",
        "    def to_graph(self, game:OthelloGame, g_idx=0, max_depth:int=100):\n",
        "        \"\"\"Debugging helper. Retuns a networkx.DiGraph representation of the\n",
        "        tree containing useful info in the node and edge attributes.\n",
        "        \"\"\"\n",
        "        if not self._debug:\n",
        "            raise RuntimeError(\"Cannot vall to_graph if _debug is False!\")\n",
        "        root = self.state_key(game, game.current_player)[g_idx].item()\n",
        "        self._set_idx_to_hash(add_new=False)\n",
        "        G = nx.DiGraph()\n",
        "        G.add_node(root,\n",
        "                player=game.current_player,\n",
        "                value=self.Vs[g_idx, self.cur_idx[g_idx]].item(),\n",
        "                visits=self.Ns[g_idx, self.cur_idx[g_idx]])\n",
        "        # queue contains tuples of (parent uid, child idx, child uid, depth, player)\n",
        "        queue = [(root, act, ch, 1, 3-game.current_player) for act, ch in self.Ksa[root]]\n",
        "        while len(queue) > 0:\n",
        "            parent, act, child, depth, player = queue.pop()\n",
        "            self.cur_hash[:] = child\n",
        "            self._set_idx_to_hash(add_new=False)\n",
        "            if self.cur_idx[g_idx] == -1:\n",
        "                # Child not found (must be terminal)\n",
        "                continue\n",
        "            G.add_node(child,\n",
        "                    player=player,\n",
        "                    value=self.Vs[g_idx, self.cur_idx[g_idx]].item(),\n",
        "                    visits=self.Ns[g_idx, self.cur_idx[g_idx]].item())\n",
        "            child_idx = torch.where(self.lookup_hash[g_idx, ...] == child)[0].cpu().item()\n",
        "            G.add_edge(parent, child,\n",
        "                       action=act,\n",
        "                       policy=self.Ps[g_idx, child_idx, act[0], act[1]].item(),\n",
        "                       visits=self.Ns[g_idx, child_idx].item())\n",
        "            if depth < max_depth and child in self.Ksa:\n",
        "                queue.extend([(child, act, ch, depth+1, 3-player) for act, ch in self.Ksa[child]])\n",
        "        return G, root"
      ],
      "execution_count": null,
      "outputs": []
    },
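    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick numeric sanity check of the UCB rule used inside the search above (toy numbers, not tied to any real game state): each action's score is its average backed-up value Q plus an exploration bonus that grows with the prior policy P and shrinks with the action's visit count N. An unvisited action (N = 0) gets Q = 0 and a pure exploration bonus, so it can still be selected.\n",
        "```python\n",
        "import torch\n",
        "cpuct = 1.0\n",
        "T = torch.tensor([2.0, 0.0])  # total backed-up value per action\n",
        "N = torch.tensor([4.0, 0.0])  # visit counts per action\n",
        "P = torch.tensor([0.3, 0.7])  # prior (snap-judgment) policy\n",
        "Ntot = N.sum()\n",
        "Q = torch.where(N > 0, T / N, torch.zeros_like(T))  # avoid 0/0 for unvisited actions\n",
        "UCB = Q + cpuct * P * torch.sqrt(Ntot) / (1 + N)\n",
        "# here the unvisited action wins purely on its exploration bonus\n",
        "```"
      ]
    },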
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "nBLkFOCtMnsP"
      },
      "source": [
        "#@markdown Base agent\n",
        "class ArtificialPlayer(object):\n",
        "    \"\"\"A base class for agents that play Othello. Sub-classes must provide a\n",
        "    select_move method which takes in an OthelloGame and returns an action\n",
        "    (in other words, returns a tuple (row,col) to place a stone or None to pass)\n",
        "    \"\"\"\n",
        "    def __init__(self, name=\"AI Agent\"):\n",
        "        self.name = name\n",
        "\n",
        "    def select_move(self, game:OthelloGame):\n",
        "        raise NotImplementedError(\"Must subclass with a playing rule\")\n",
        "    \n",
        "    def new_game(self):\n",
        "        \"\"\"Reset agent state. By default, do nothing...\n",
        "        \"\"\"\n",
        "        pass\n",
        "\n",
        "\n",
        "class PolicyAgentBase(ArtificialPlayer):\n",
        "    \"\"\"Base class for game-playing using a policy. Sub-classes need only provide\n",
        "    a get_policy function.\n",
        "    \"\"\"\n",
        "\n",
        "    def __init__(self, name=None, temperature:float=1.0):\n",
        "        self.name = self.__class__.__name__ if name is None else name\n",
        "        self.t = temperature\n",
        "    \n",
        "    def get_policy(self, game:OthelloGame) -> torch.Tensor:\n",
        "        \"\"\"PolicyAgentBase.get_policy return a distribution over the board\n",
        "        of \"good\" moves for the current player. Doesn't need to be normalized\n",
        "        over even all legal (those are handled by PolicyAgentBase.select_move).\n",
        "\n",
        "        Warning: Must not modify game state!\n",
        "        \"\"\"\n",
        "        raise NotImplementedError(\"Must be implemented by sub-class!\")\n",
        "\n",
        "    def get_selected_move_index(self, policy):\n",
        "        \"\"\"Takes a 1d flattened policy for each board and outputs the index of\n",
        "        the action selected, sampling with the appropriate temperature.\n",
        "        \n",
        "        Inputs: policy, a (n_games, n_moves) Tensor\n",
        "        Outputs: choice_idx, a (n_games) Tensor with integers specifying\n",
        "                 the action selected for each game (i.e. the column of policy)\n",
        "        \"\"\"\n",
        "        if self.t < 1e-6:\n",
        "            # Super low temperature is treated as greedy\n",
        "            choice_idx = torch.argmax(policy, dim=1, keepdim=True)\n",
        "        else:\n",
        "            choice_idx = torch.multinomial(policy**(1/self.t), num_samples=1)\n",
        "        return choice_idx    \n",
        "    \n",
        "    def select_move(self, game:OthelloGame):\n",
        "        policy = self.get_policy(game)\n",
        "        policy = policy * game._valid_moves.to(policy.device)\n",
        "        flat_policy = policy.view(game.n_games, -1)\n",
        "        # Place a dummy value into boards that have no legal moves\n",
        "        no_moves = torch.sum(game._valid_moves.view(game.n_games, -1), dim=1) == 0\n",
        "        flat_policy[no_moves, 0] = 1\n",
        "\n",
        "        flat_choice_idx = self.get_selected_move_index(flat_policy)\n",
        "\n",
        "        # Now we need to take this unrolled index and convert it into the x,y \n",
        "        # board position of the selected move\n",
        "        i_max, j_max = flat_choice_idx // game.n, flat_choice_idx % game.n\n",
        "        actions_ij = torch.cat([i_max, j_max], dim=1)\n",
        "        # Pass on all games that had no valid moves\n",
        "        actions_ij[no_moves, :] = -1\n",
        "        return actions_ij\n",
        "    \n",
        "    def plot_policy(self, game, mask_legal=True):\n",
        "        with torch.no_grad():\n",
        "            policy = self.get_policy(game).cpu()\n",
        "            if mask_legal:\n",
        "                policy = policy * game._valid_moves.cpu()\n",
        "            # Normalize it\n",
        "            policy = policy / torch.sum(policy, dim=(1,2), keepdims=True)\n",
        "\n",
        "            for g in range(game.n_games):\n",
        "                plt.figure(figsize=(6,6))\n",
        "                ax = plt.gca()\n",
        "                im = plt.imshow(policy[g,:,:])\n",
        "                plt.xticks([]); plt.yticks([])\n",
        "                suffix = '(full board)' if not mask_legal else '(legal moves only)'\n",
        "                plt.title(f'Policy heatmap {suffix}')\n",
        "                divider = make_axes_locatable(ax)\n",
        "                cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n",
        "                plt.colorbar(im, cax=cax)\n",
        "                plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
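    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A side note on the temperature parameter used in `get_selected_move_index` above: sampling from `policy**(1/t)` sharpens the distribution toward the argmax as `t` falls toward 0 and flattens it as `t` grows. A toy illustration with made-up policy values:\n",
        "```python\n",
        "import torch\n",
        "policy = torch.tensor([0.5, 0.3, 0.2])\n",
        "sharp = policy**(1/0.5)   # t = 0.5 sharpens\n",
        "sharp = sharp / sharp.sum()\n",
        "flat = policy**(1/2.0)    # t = 2.0 flattens\n",
        "flat = flat / flat.sum()\n",
        "# sharp puts more than half its mass on the best move; flat puts less\n",
        "```"
      ]
    },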
    {
      "cell_type": "code",
      "metadata": {
        "id": "sTOZHeJvEARg",
        "cellView": "form"
      },
      "source": [
        "#@markdown MCTS agent\n",
        "\n",
        "class MCTSAgent(PolicyAgentBase):\n",
        "    def __init__(self, tree:MCTSTree, temperature=1.0):\n",
        "        super(MCTSAgent, self).__init__(temperature=temperature)\n",
        "        self.tree = tree # you'll need this!\n",
        "    \n",
        "    def new_game(self):\n",
        "        \"\"\"Reset the tree for new games.\n",
        "        \"\"\"\n",
        "        self.tree.init_tree()\n",
        "\n",
        "    def get_policy(self, game:OthelloGame) -> torch.Tensor:\n",
        "        \"\"\"Input: game, an instance of OthelloGame\n",
        "                  MCTStree, a built tree with the root as the starting board\n",
        "         Returns: policy a (n_games, n, n) Tensor. The distribution over the \n",
        "                  board of \"good\" moves for the current player. \n",
        "        \"\"\"\n",
        "        # Run searches\n",
        "        self.tree.run_searches(game)\n",
        "        # The policy is proportional to visit counts of the children\n",
        "        return self.tree.count_child_visits(game)\n",
        "\n",
        "\n",
        "def mcts_visits_callback(game:OthelloGame, agents:List[MCTSAgent]):\n",
        "    \"\"\"A callback to store visit counts in the MCTS agent's tree.\n",
        "\n",
        "    Assuming both agents[0] and agents[1] are instances of MCTSAgent.\n",
        "    \"\"\"\n",
        "    player_agent = agents[game.current_player-1]\n",
        "    return player_agent.tree.count_child_visits(game).clone()\n",
        "        "
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KmpeY6iIJL6g"
      },
      "source": [
        "## Training AlphaZero\n",
        "Finally, let's put everything together in the training loop. AlphaZeros is trained by self-play, which means we have one agent play both sides of the game. In each game, we build a new MCTS tree from scratch, but the model parameters persist throughout training. An sketch of the training loop is as follows:\n",
        "```\n",
        "Initialize parameters of the policy & value network.\n",
        "For n epochs:\n",
        "    Self-play m games (in parallel) using the current network parameters.\n",
        "    Collect training data: states, outcomes, and visit counts.\n",
        "    Train agent for k steps using MSE and cross-entropy loss.\n",
        "```\n",
        "\n",
        "We see that AlphaZero involves several concepts we are now familiar with: value, Q-value, policy, etc. Discuss with your pod the following questions that are designed to help you better understand AlphaZero. \n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "hsIpUTCoRKMy"
      },
      "source": [
        "#@markdown AlphaZero is in some sense a policy iteration algorithm. How is the policy evaluated, and how is it improved?\n",
        "policy_iteration = \" \" # @param {type:\"string\"}\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eXg9iEVOuM9V"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W12_DeepRL/solutions/policy_iteration.md)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "GcAKaNEpL9_-"
      },
      "source": [
        "#@markdown Is AlphaZero on-policy or off-policy?\n",
        "on_off_policy = \" \" # @param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3TgYnBGSuPz2"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W12_DeepRL/solutions/on_off_policy.md)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "PLGx3iYwMDhZ",
        "cellView": "form"
      },
      "source": [
        "#@markdown Is AlphaZero model-based or model-free?\n",
        "model = \" \" # @param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OSb_lUdHuTQ7"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W12_DeepRL/solutions/model.md)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FFT5oqoxJJ7z",
        "cellView": "form"
      },
      "source": [
        "#@markdown Self-play function\n",
        "\n",
        "def ai_vs_ai(agent1, agent2=None, n_games=1, n=8, randomize_players=False,\n",
        "             print_result=False, progbar=True, callback=None, del_progbar=False):\n",
        "    \"\"\"Pit 2 AI players against each other (or self-play if just 1 given). Return\n",
        "    end-state values and list of board positions.\n",
        "    \"\"\"\n",
        "    agent1.new_game()\n",
        "    if agent2 is None:\n",
        "        agent2 = agent1\n",
        "    else:\n",
        "        agent2.new_game()\n",
        "    game = OthelloGame(n_games=n_games, n=n)\n",
        "    saved_boards = []\n",
        "    callback_data = []\n",
        "    agents = [agent1, agent2]\n",
        "    if randomize_players:\n",
        "        agents = [agent2, agent1]\n",
        "    if progbar:\n",
        "        # Games rarely go beyond n^2-4 moves since passes are rare\n",
        "        progress_bar = tqdm(total=game.n**2-4, desc='# moves per game',\n",
        "                            leave=not del_progbar)\n",
        "    \n",
        "    while torch.any(game.game_status == OthelloGame.ACTIVE):\n",
        "        # Save board state and flag all boards that are done by flooding them with nan\n",
        "        saved_boards.append(game.boards.clone())\n",
        "        saved_boards[-1][game.game_status == OthelloGame.OVER, ...] = float('nan')\n",
        "        \n",
        "        # Get move by current player\n",
        "        act = agents[game.current_player-1].select_move(game)\n",
        "        \n",
        "        # Do callback after AI decision but before updating sate\n",
        "        if callback is not None:\n",
        "            callback_data.append(callback(game, agents))\n",
        "        \n",
        "        # Update state\n",
        "        game.step(act)\n",
        "        \n",
        "        # Update progress bar\n",
        "        if progbar: progress_bar.update(1)\n",
        "    \n",
        "    if progbar:\n",
        "        progress_bar.total = progress_bar.n\n",
        "        progress_bar.close()\n",
        "    values = game.score_games()\n",
        "    if print_result:\n",
        "        for g in n_games:\n",
        "            print(f\"Game {g+1} of {n_games}:\", end=\"\\t\")\n",
        "            if values[0] == 0:\n",
        "                print(\"Draw\")\n",
        "            else:\n",
        "                print(f\"{agents[torch.argmax(values)].name} (Player {torch.argmax(values)+1}) is the winner!\")\n",
        "    # Return end-game value according to player 1 and 2, as well as a list of\n",
        "    # all board states throughout play\n",
        "    return values, saved_boards, callback_data"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "KntZyEXz01MG",
        "cellView": "form"
      },
      "source": [
        "#@markdown Helper functions\n",
        "\n",
        "SYMMETRIES = [lambda x: x,\n",
        "              lambda x: torch.rot90(x, 1, dims=(1, 2)),\n",
        "              lambda x: torch.rot90(x, 2, dims=(1, 2)),\n",
        "              lambda x: torch.rot90(x, 3, dims=(1, 2)),\n",
        "              lambda x: torch.flip(x, dims=(1,)),\n",
        "              lambda x: torch.flip(x, dims=(2,)),\n",
        "              lambda x: torch.transpose(x, 1, 2)]\n",
        "\n",
        "\n",
        "def boards2nn(board_states) -> torch.Tensor:\n",
        "    \"\"\"Canonical preprocessing, taking in a set of boards and outputting NN inputs.\n",
        "    \"\"\"\n",
        "    if isinstance(board_states, list):\n",
        "        board_states = torch.cat(board_states, dim=0)\n",
        "    else:\n",
        "        board_states = board_states.clone() # Copy so we don't affect the original\n",
        "    # Convert from [0,1,2] to [0,1,-1] (player 2 represented with -1)\n",
        "    board_states[torch.where(board_states == 2)] = -1\n",
        "    return board_states\n",
        "\n",
        "\n",
        "def prepare_alpha_zero_examples(raw_states:List[torch.Tensor], \n",
        "                                raw_counts:List[torch.Tensor], \n",
        "                                player1_values:torch.Tensor):\n",
        "    \"\"\"Convert from ai_vs_ai output to training examples.\n",
        "\n",
        "    This function performs canonical preprocessing (e.g. +1 is \"current player\"\n",
        "    and -1 is \"other player\"), concatenates things, and adds in all symmetries.\n",
        "    \"\"\"\n",
        "    n_moves = len(raw_states)\n",
        "    n_games, n, _ = raw_states[0].size()\n",
        "    # Concatenate and preprocess board states. Result is size [n_games*n_moves,n,n]\n",
        "    # ordered like [game0turn0, game1turn0, ... gameNturn0, game0turn1, game1turn1, ... ]\n",
        "    nn_states = boards2nn(raw_states)\n",
        "    # Repeat values for every turn (player1_values begins as size [n_games])\n",
        "    values = player1_values.flatten().float().repeat(n_moves)\n",
        "    # Flip the sign of all boards that were player 2\n",
        "    turn_number = torch.arange(n_moves)\n",
        "    player = (turn_number % 2) + 1\n",
        "    # 'repeat_interleave()' takes inputs [a b c] and repeats them like\n",
        "    # [a a a ... b b b ... c c c ...]. Not to be confused with 'repeat()' which\n",
        "    # outputs [a b c a b c a b c ...].\n",
        "    player = player.repeat_interleave(n_games)\n",
        "    nn_states[player == 2, ...] = -nn_states[player == 2, ...]\n",
        "    values[player == 2, ...] = -values[player == 2, ...]\n",
        "    # Concatenate and normalize all MCTS visit probabilities\n",
        "    counts = torch.cat(raw_counts, dim=0).reshape(n_moves*n_games, n, n)\n",
        "    total_visits = counts.sum(dim=2, keepdim=True).sum(dim=1, keepdim=True)\n",
        "    probs = counts / total_visits\n",
        "    # It may be the case that not all games ran for n_moves. These are indicated\n",
        "    # by nan values in the board state (see ai_vs_ai). Also drop states in which\n",
        "    # there were no legal moves, indicated by probs being nan (from 0/0)\n",
        "    drop_states = torch.isnan(nn_states[:,0,0]) | torch.isnan(probs[:,0,0])\n",
        "    train_states = nn_states[~drop_states, ...]\n",
        "    train_probs  = probs[~drop_states, ...]\n",
        "    train_values = values[~drop_states]\n",
        "    # Add all symmetries (increases effective data 8-fold)\n",
        "    train_states = torch.cat([sym(train_states) for sym in SYMMETRIES], dim=0)\n",
        "    train_probs  = torch.cat([sym(train_probs) for sym in SYMMETRIES], dim=0)\n",
        "    train_values = train_values.repeat(len(SYMMETRIES))\n",
        "    \n",
        "    return train_states, train_probs, train_values\n",
        "\n",
        "\n",
        "def MCTSagent_from_net(policy_value_net: PolicyValueNet, n=8, \n",
        "                       games_per_iter=128, num_mcts_search=50):\n",
        "    \"\"\"A helper function that takes a PolicyValueNet and returns an MCTSAgent\n",
        "    that uses that network.\"\"\"\n",
        "\n",
        "    def pol_val_fn(boards:torch.Tensor, whoami:torch.Tensor):\n",
        "        \"\"\" This helper function creates prepares the boards to give to the network\n",
        "        and puts the output in the correct shapes. For use inside the MCTS agent.\"\"\"\n",
        "        policy_value_net.eval()\n",
        "        # Standard board to nn preprocessing: player 2 is now -1\n",
        "        boards_nn = boards2nn(boards)\n",
        "        # Always evaluate with +1 meaning \"myself\" and -1 meaning \"other player\"\n",
        "        boards_nn[whoami == OthelloGame.PLAYER2, ...] = -boards_nn[whoami == OthelloGame.PLAYER2, ...]\n",
        "        with torch.no_grad():   # saves on overhead\n",
        "            log_policy_output, value = policy_value_net(boards_nn)\n",
        "        policy = torch.exp(log_policy_output).view(-1, n, n)\n",
        "        return policy, value.flatten()\n",
        "\n",
        "    tree = ParallelMCTSTree(pol_val_fn, n_games=games_per_iter, n=n,\n",
        "                            num_search=num_mcts_search)\n",
        "    agent = MCTSAgent(tree, temperature=1.0)\n",
        "    return agent\n",
        "\n",
        "\n",
        "def cross_entropy_loss(target_p, log_q):\n",
        "    \"\"\"Computs the cross entropy. Assumes target_p is a normalized distribution\n",
        "    and log_q is a log normalized distribution (e.g. output of log_softmax)\"\"\"\n",
        "    b = target_p.size()[0]\n",
        "    plogq = target_p.view(b, -1) * log_q.view(b, -1)\n",
        "    cross_entropy = -plogq.sum(dim=1)\n",
        "    return cross_entropy.mean()"
      ],
      "execution_count": null,
      "outputs": []
    },
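    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A side note on the `repeat` vs. `repeat_interleave` distinction used in `prepare_alpha_zero_examples` above (toy tensors):\n",
        "```python\n",
        "import torch\n",
        "x = torch.tensor([1, 2, 3])\n",
        "a = x.repeat(2)             # tensor([1, 2, 3, 1, 2, 3])\n",
        "b = x.repeat_interleave(2)  # tensor([1, 1, 2, 2, 3, 3])\n",
        "```"
      ]
    },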
    {
      "cell_type": "code",
      "metadata": {
        "id": "Gu2WxVQy4XyO"
      },
      "source": [
        "def train_alpha_zero(policy_value_net, n_iters=20, steps_per_iter=50, n=8,\n",
        "                     games_per_iter=128, num_mcts_search=50):\n",
        "    \"\"\"This master funtion creates the MCTSAgent from a policy/value net,\n",
        "    then trains the networks via self-play\"\"\"\n",
        "    \n",
        "    # first create the agent\n",
        "    agent = MCTSagent_from_net(policy_value_net, n, games_per_iter, \n",
        "                               num_mcts_search)\n",
        "    \n",
        "    # create the optimizer and MSE loss function for value part\n",
        "    mse_loss = nn.MSELoss(reduction='mean')\n",
        "    opt = torch.optim.Adam(policy_value_net.parameters(), lr=1e-3)\n",
        "\n",
        "    # Training loop\n",
        "    losses = torch.zeros(n_iters*steps_per_iter).cuda()\n",
        "    pol_losses = torch.zeros(n_iters*steps_per_iter).cuda()\n",
        "    val_losses = torch.zeros(n_iters*steps_per_iter).cuda()\n",
        "    for i in tqdm(range(n_iters), desc='epochs'):\n",
        "        ##### Here we build our training examples #####\n",
        "\n",
        "        # Play AI against itself, running games_per_iter games all in parallel\n",
        "        outcomes, raw_states, raw_mcts_counts = \\\n",
        "            ai_vs_ai(agent, n=n, n_games=games_per_iter,\n",
        "                     callback=mcts_visits_callback, del_progbar=True)\n",
        "\n",
        "        # Get all trainable information from this batch of games\n",
        "        train_states, train_probs, train_values = \\\n",
        "            prepare_alpha_zero_examples(raw_states, raw_mcts_counts, outcomes[:,0])\n",
        "\n",
        "        ##### Here we train the network #####\n",
        "        for j in range(steps_per_iter):\n",
        "            policy_value_net.train() # Ensure we are in 'training' mode rather than 'evaluation' mode\n",
        "            opt.zero_grad() # ready the optimizer\n",
        "            log_policy_output, value_output = policy_value_net(train_states)\n",
        "            pol_loss = cross_entropy_loss(train_probs, log_policy_output) # policy part of loss\n",
        "            val_loss = mse_loss(value_output.flatten(), train_values) # value part of loss\n",
        "            loss = pol_loss + val_loss # total loss\n",
        "            loss.backward() # get gradients of loss w/r/t the network parameters\n",
        "            opt.step() # take a step to lower the loss\n",
        "\n",
        "            losses[i*steps_per_iter + j] = loss.detach()\n",
        "            pol_losses[i*steps_per_iter + j] = pol_loss.detach()\n",
        "            val_losses[i*steps_per_iter + j] = val_loss.detach()\n",
        "\n",
        "    return losses.cpu().numpy(), pol_losses.cpu().numpy(), val_losses.cpu().numpy()\n"
      ],
      "execution_count": null,
      "outputs": []
    },
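    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a sanity check on the training step above: per example, the loop minimizes the standard AlphaZero objective (minus the weight-decay term from the original paper, which this tutorial omits),\n",
        "\n",
        "$$\\ell = (z - v)^2 - \\boldsymbol{\\pi}^\\top \\log \\mathbf{p},$$\n",
        "\n",
        "where $z$ is the game outcome from self-play, $v$ is the value head's prediction, $\\boldsymbol{\\pi}$ is the vector of normalized MCTS visit counts, and $\\log \\mathbf{p}$ is the policy head's log-probability output. In the code, `mse_loss` computes the first term and `cross_entropy_loss` the second."
      ]
    },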
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_GNpiKKLLCaa"
      },
      "source": [
        "Finally, run the following cells to load a pre-trained AlphaZero agent and train it for another 10 epochs. This should take about 15 minutes. While the agent trains, discuss with your pod any remaining questions you have about AlphaZero, and think about other intuitive ways to understand it. Write down your thoughts in the cell below."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Cp5BcH1vSM_5",
        "cellView": "form"
      },
      "source": [
        "remaining_questions = \" \" # @param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "4PvEXBT6RGuO",
        "cellView": "form"
      },
      "source": [
        "# @markdown Run cell to train agent\n",
        "!if [ ! -f  pretrained_policy_value_weights_8x8.pt ]; then wget https://osf.io/8c6dx/download -O pretrained_policy_value_weights_8x8.pt; fi\n",
        "data = torch.load(\"pretrained_policy_value_weights_8x8.pt\")\n",
        "\n",
        "# Load pre-trained weights into the model\n",
        "n = 8\n",
        "pol_val_net = PolicyValueNet(n=n).cuda()\n",
        "pol_val_net.load_state_dict(data[\"state_dict\"])\n",
        "l, p_l, v_l = train_alpha_zero(pol_val_net, n_iters=10, steps_per_iter=50, n=n,\n",
        "                               games_per_iter=128, num_mcts_search=20) # full training used n_iters=100\n",
        "\n",
        "# Save the trained model to your local machine.\n",
        "from google.colab import files\n",
        "data = {\"state_dict\": pol_val_net.state_dict(),\n",
        "        \"losses\":l, \"pol_losses\": p_l, \"val_losses\": v_l}\n",
        "torch.save(data, f\"my_policy_value_weights_{n}x{n}.pt\")\n",
        "files.download(f\"my_policy_value_weights_{n}x{n}.pt\")"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "lSHZjgP6GY_6"
      },
      "source": [
        "# @markdown Plot training loss\n",
        "plt.plot(l)\n",
        "plt.plot(p_l)\n",
        "plt.plot(v_l)\n",
        "plt.legend([\"Total Loss\", \"Policy Cross-Entropy\", \"Value MSE\"])\n",
        "plt.xlabel('Number of gradient steps')\n",
        "plt.xticks(np.arange(0,len(l),len(l)//10))\n",
        "plt.ylabel('Losses')\n",
        "plt.title(f'AlphaZero Training Loss for {n}x{n} Othello')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "JlaGhuDZW9jv"
      },
      "source": [
        "# @markdown Visualize outcome\n",
        "\n",
        "def outcome2color(outcome, alpha=1):\n",
        "    \"\"\"Return an RGBA color for the given outcome (-1, 0, +1).\"\"\"\n",
        "    if outcome == +1:\n",
        "        return (0, 1, 0, alpha) # Win as green\n",
        "    elif outcome == -1:\n",
        "        return (1, 0, 0, alpha) # Loss as red\n",
        "    else:\n",
        "        return (0, 0, 1, alpha) # Draw as blue\n",
        "\n",
        "pol_val_net.eval()\n",
        "alpha_zero_agent = MCTSagent_from_net(pol_val_net, n, 100, 20)\n",
        "alpha_zero_agent.temperature = 0.1\n",
        "outcomes, raw_states, raw_mcts_counts = ai_vs_ai(alpha_zero_agent, n_games=100,\n",
        "                                                 n=n, callback=mcts_visits_callback)\n",
        "plt.figure(figsize=(10,5))\n",
        "ax1, ax2 = plt.subplot(1,2,1), plt.subplot(1,2,2)\n",
        "for t in range(len(raw_states)):\n",
        "    sgn = +1 if t % 2 == 0 else -1\n",
        "    player = 1 if t % 2 == 0 else 2\n",
        "    pol_pred, val_pred = pol_val_net(sgn*boards2nn(raw_states[t]))\n",
        "    counts_t = raw_mcts_counts[t].view(-1, n*n)\n",
        "    targets_t = counts_t / counts_t.sum(dim=1, keepdim=True)\n",
        "    valid = ~torch.isnan(raw_states[t][:,0,0]) # per-game mask of games still in progress at turn t\n",
        "    ax1.scatter(targets_t[valid,...].cpu().T, \n",
        "                pol_pred[valid,...].detach().exp().view(-1, n*n).cpu().T, \n",
        "                marker='.', c = 'k', alpha=.1)\n",
        "    for g in range(len(raw_states[t])): # iterate over games, not turns\n",
        "        if not torch.isnan(raw_states[t][g,0,0]):\n",
        "            ax2.plot(t, val_pred[g].item(), marker='.', \n",
        "                      color=outcome2color(outcomes[g,player-1], 0.25))\n",
        "\n",
        "ax1.set_xlabel('normalized MCTS visit counts')\n",
        "ax1.set_ylabel('policy net output')\n",
        "ax1.set_title('Policy Net Predictions')\n",
        "ax2.set_xlabel('Turn Number')\n",
        "ax2.set_ylabel('Value Net Output')\n",
        "ax2.set_title('Value Net Predictions')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
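    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A note on the `temperature` attribute set above (assuming the tutorial's `MCTSagent_from_net` applies it the way AlphaZero does): moves are sampled in proportion to exponentiated visit counts,\n",
        "\n",
        "$$\\pi(a) \\propto N(a)^{1/\\tau},$$\n",
        "\n",
        "so a low temperature such as $\\tau = 0.1$ makes play nearly greedy with respect to the MCTS search results, while $\\tau = 1$ samples moves in direct proportion to their visit counts."
      ]
    },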
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LIX5lNrgPovC"
      },
      "source": [
        "## Play a game against your trained agent\n",
        "\n",
        "Let's wrap up by playing a game against your own agent. Have fun!"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "plnY8M07PoaU"
      },
      "source": [
        "alpha_zero_agent = MCTSagent_from_net(pol_val_net, n=8, games_per_iter=1,\n",
        "                                      num_mcts_search=50)\n",
        "alpha_zero_agent.temperature = 0.1\n",
        "interface = InteractiveOthelloGame(player1='human',\n",
        "                                   player2=alpha_zero_agent, n=n)\n",
        "interface.next_turn()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qACwQoEFncOY"
      },
      "source": [
        "---\n",
        "# Wrap up"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "YLd2XFOfneF2",
        "cellView": "form"
      },
      "source": [
        "import time\n",
        "import numpy as np\n",
        "import urllib.parse\n",
        "from IPython.display import IFrame\n",
        "\n",
        "#@markdown #Run Cell to Show Airtable Form\n",
        "#@markdown ##**Confirm your answers and then click \"Submit\"**\n",
        "\n",
        "\n",
        "def prefill_form(src, fields: dict):\n",
        "  '''\n",
        "  src: the original src url to embed the form\n",
        "  fields: a dictionary of field:value pairs,\n",
        "  e.g. {\"pennkey\": my_pennkey, \"location\": my_location}\n",
        "  '''\n",
        "  prefill_fields = {}\n",
        "  for key in fields:\n",
        "      new_key = 'prefill_' + key\n",
        "      prefill_fields[new_key] = fields[key]\n",
        "  prefills = urllib.parse.urlencode(prefill_fields)\n",
        "  src = src + prefills\n",
        "  return src\n",
        "\n",
        "\n",
        "#autofill time if it is not present\n",
        "try: t0;\n",
        "except NameError: t0 = time.time()\n",
        "try: t1;\n",
        "except NameError: t1 = time.time()\n",
        "\n",
        "\n",
        "#autofill fields if they are not present\n",
        "#a missing pennkey and pod will result in an Airtable warning\n",
        "#which is easily fixed user-side.\n",
        "try: my_pennkey;\n",
        "except NameError: my_pennkey = \"\"\n",
        "try: my_pod;\n",
        "except NameError: my_pod = \"Select\"\n",
        "try: dagger_relabel;\n",
        "except NameError: dagger_relabel = \"\"\n",
        "try: reward_function;\n",
        "except NameError: reward_function = \"\"\n",
        "try: value_estimate;\n",
        "except NameError: value_estimate = \"\"\n",
        "try: c_puct;\n",
        "except NameError: c_puct = \"\"\n",
        "try: policy_iteration;\n",
        "except NameError: policy_iteration = \"\"\n",
        "try: on_off_policy;\n",
        "except NameError: on_off_policy = \"\"\n",
        "try: model;\n",
        "except NameError: model = \"\"\n",
        "try: remaining_questions;\n",
        "except NameError: remaining_questions = \"\"\n",
        "\n",
        "times = np.array([t1])-t0\n",
        "\n",
        "fields = {\"pennkey\": my_pennkey,\n",
        "          \"pod\": my_pod,\n",
        "          \"dagger_relabel\": dagger_relabel,\n",
        "          \"reward_function\": reward_function,\n",
        "          \"value_estimate\": value_estimate,\n",
        "          \"c_puct\": c_puct,\n",
        "          \"policy_iteration\": policy_iteration,\n",
        "          \"on_off_policy\": on_off_policy,\n",
        "          \"model\": model,\n",
        "          \"remaining_questions\": remaining_questions,\n",
        "          \"cumulative_times\": times}\n",
        "\n",
        "src = \"https://airtable.com/embed/shr0mEmUMsPA1dEzV?\"\n",
        "\n",
        "\n",
        "# now instead of the original source url, we do: src = prefill_form(src, fields)\n",
        "display(IFrame(src = prefill_form(src, fields), width = 800, height = 400));"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yqNvjiiHn5Jt"
      },
      "source": [
        "## Feedback\n",
        "How could this session have been better? How happy are you in your group? How do you feel right now?\n",
        "\n",
        "Feel free to use the embedded form below or use this link:\n",
        "<a target=\"_blank\" rel=\"noopener noreferrer\" href=\"https://airtable.com/shrNSJ5ECXhNhsYss\">https://airtable.com/shrNSJ5ECXhNhsYss</a>"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "5Zi03nD9n5yT"
      },
      "source": [
        "display(IFrame(src=\"https://airtable.com/embed/shrNSJ5ECXhNhsYss?backgroundColor=red\", width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    }
  ]
}
