{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "YCBS_277_A1.ipynb",
      "provenance": [],
      "collapsed_sections": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PGx6PWwhOvf8",
        "colab_type": "text"
      },
      "source": [
        "# Assignment #1"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aHR_1k1zO1O0",
        "colab_type": "text"
      },
      "source": [
        "*** Edit this cell ***\n",
        "\n",
        "Enter your details here:\n",
        "\n",
        "**Name:**\n",
        "\n",
        "**McGill ID:**\n",
        "\n",
        "Also, save your file as '277_A1_[your McGill ID]'.\n",
        "\n",
        "For example, if your McGill ID is '123456', your file should be named:\n",
        "'277_A1_123456.ipynb'"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Eq32iJSJsvP0",
        "colab_type": "text"
      },
      "source": [
        "## Instructions for Assignment 1\n",
        "\n",
        "1. All submissions should be attempted individually.\n",
        "\n",
        "2. Please submit the assignment on myCourses. In case of any issues, you can submit the assignment (Jupyter notebook) to arbaaz.khan@mail.mcgill.ca.\n",
        "\n",
        "3. Please follow the McGill Academic Integrity code throughout the notebook. Read for more info: Academic Integrity.\n",
        "\n",
        "4. A student should be able to replicate the results in front of a TA or the instructor, if asked. If your code depends on hyperparameter values, include appropriate seeding.\n",
        "\n",
        "5. You can add text/code cells as needed.\n",
        "\n",
        "6. Additional marks will be given for clean and well-commented code.\n",
        "\n",
        "7. Please explain any assumptions made during the exercise.\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "njEE0k4HOlrA",
        "colab_type": "text"
      },
      "source": [
        "## 1. Search Algorithm\n",
        "\n",
        "Points: [50 %]"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ZYW1uIteGlG7",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# remove \" > /dev/null 2>&1\" to see what is going on under the hood\n",
        "!pip install gym pyvirtualdisplay > /dev/null 2>&1\n",
        "!apt-get install -y xvfb python-opengl ffmpeg > /dev/null 2>&1"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7JJ5cmVrGp4c",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import gym\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "3cQLQrLIGqV0",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 102
        },
        "outputId": "c5fb7635-51b4-4768-bbd6-9690de466441"
      },
      "source": [
        "env = gym.make('FrozenLake-v0', is_slippery=False)\n",
        "env.reset()\n",
        "env.render()"
      ],
      "execution_count": 22,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "\n",
            "\u001b[41mS\u001b[0mFFF\n",
            "FHFH\n",
            "FFFH\n",
            "HFFG\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "w4F7eMq-1zrH",
        "colab_type": "text"
      },
      "source": [
        "**Description of the game:**\n",
        "\n",
        "Taking a step with step() has the following usage:\n",
        "\n",
        "```\n",
        "next_state, reward, done, info = env.step(action)\n",
        "```\n",
        "The grid is made up of 4 kinds of cells:\n",
        "\n",
        "      S: Start position (0 in this case)\n",
        "      F: Frozen ice (safe to be in this location)\n",
        "      H: Hole (unsafe to be in this location)\n",
        "      G: Goal state (target)\n",
        "\n",
        "\n",
        "Possible actions:\n",
        "\n",
        "      0: Left\n",
        "      1: Down\n",
        "      2: Right\n",
        "      3: Up\n",
        "\n",
        "The aim of the game is to navigate the agent (the red cursor) from 'S' to 'G' without stepping on a hole 'H'.\n",
        "\n",
        "\n"
      ]
    },
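    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a side note, the 'next_state' returned by step() is an integer index into the 4x4 grid, laid out in row-major order (so states run 0-15 from the top-left). The cell below is a minimal sketch of that mapping; the helper names are our own, not part of gym:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Hypothetical helpers: convert between the integer state and (row, col).\n",
        "# Row-major layout: state = row * ncols + col\n",
        "def state_to_pos(state, ncols=4):\n",
        "    return state // ncols, state % ncols\n",
        "\n",
        "def pos_to_state(row, col, ncols=4):\n",
        "    return row * ncols + col\n",
        "\n",
        "print(state_to_pos(0))   # start 'S' at (0, 0)\n",
        "print(state_to_pos(15))  # goal 'G' at (3, 3)"
      ],
      "execution_count": 0,
      "outputs": []
    },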
    {
      "cell_type": "code",
      "metadata": {
        "id": "kvzIMzNrGykk",
        "colab_type": "code",
        "outputId": "00c66e55-c327-4816-e411-ee662a130355",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 136
        }
      },
      "source": [
        "next_state, reward, done, _ = env.step(2) # the cursor will move right\n",
        "env.render()\n",
        "print('reward: ', reward)\n",
        "print('done: ', done)"
      ],
      "execution_count": 23,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "  (Right)\n",
            "S\u001b[41mF\u001b[0mFF\n",
            "FHFH\n",
            "FFFH\n",
            "HFFG\n",
            "reward:  0.0\n",
            "done:  False\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ik2wmsUT3G3d",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 136
        },
        "outputId": "a2ff6f51-2343-47ee-9115-0d562cdfc2c0"
      },
      "source": [
        "next_state, reward, done, _ = env.step(1) # the cursor will move 'down'\n",
        "env.render()\n",
        "print('reward: ', reward)\n",
        "print('done: ', done)"
      ],
      "execution_count": 24,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "  (Down)\n",
            "SFFF\n",
            "F\u001b[41mH\u001b[0mFH\n",
            "FFFH\n",
            "HFFG\n",
            "reward:  0.0\n",
            "done:  True\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e6_29rwO3Yk9",
        "colab_type": "text"
      },
      "source": [
        "Since we moved onto 'H' (a hole), the game ended, thus\n",
        "\n",
        "```\n",
        "done: True\n",
        "```\n",
        "\n",
        "Also, since we failed to reach the goal 'G' at the bottom-right corner, the reward accumulated in the episode is\n",
        "```\n",
        "reward = 0.0\n",
        "```"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Exxnms9z4LQs",
        "colab_type": "text"
      },
      "source": [
        "We encourage you to play the game yourself to gain familiarity."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ZxnkXK8dgexW",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# importing modules\n",
        "\n",
        "import numpy as np\n",
        "from itertools import count\n",
        "from collections import deque"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HUlinODCJ7bY",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# defining the necessary datastructures\n",
        "\n",
        "class Queue:\n",
        "    def __init__(self):\n",
        "        self.queue = deque([])\n",
        "\n",
        "    def isEmpty(self):\n",
        "        return len(self.queue) == 0\n",
        "\n",
        "    def push(self, item):\n",
        "        self.queue.append(item)\n",
        "\n",
        "    def pop(self):\n",
        "        return self.queue.popleft()\n",
        "\n",
        "    def peek(self):\n",
        "        return self.queue[0]\n",
        "\n",
        "    def size(self):\n",
        "        return len(self.queue)\n",
        "\n",
        "\n",
        "class Stack:\n",
        "    def __init__(self):\n",
        "         self.stack = deque([])\n",
        "\n",
        "    def isEmpty(self):\n",
        "        return len(self.stack) == 0\n",
        "\n",
        "    def push(self, item):\n",
        "        self.stack.append(item)\n",
        "\n",
        "    def pop(self):\n",
        "        return self.stack.pop()\n",
        "\n",
        "    def peek(self):\n",
        "        return self.stack[-1]\n",
        "\n",
        "    def size(self):\n",
        "        return len(self.stack)"
      ],
      "execution_count": 0,
      "outputs": []
    },
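    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The Queue and Stack above both wrap a deque; the only difference is the order in which items come back out, which is exactly what distinguishes a breadth-first frontier (FIFO) from a depth-first frontier (LIFO). A self-contained check of that ordering:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "from collections import deque\n",
        "\n",
        "d = deque([1, 2, 3])\n",
        "fifo = [d.popleft() for _ in range(3)]  # Queue order: first in, first out\n",
        "\n",
        "d = deque([1, 2, 3])\n",
        "lifo = [d.pop() for _ in range(3)]      # Stack order: last in, first out\n",
        "\n",
        "print(fifo)  # [1, 2, 3]\n",
        "print(lifo)  # [3, 2, 1]"
      ],
      "execution_count": 0,
      "outputs": []
    },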
    {
      "cell_type": "code",
      "metadata": {
        "id": "oqeOCPV3G3oo",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "class Node:\n",
        "\n",
        "    # data: numpy array containing the sequence of actions\n",
        "    # parent: a reference to the parent node\n",
        "\n",
        "    def __init__(self, data=None, parent=None):\n",
        "        self.data = data\n",
        "        self.parent = parent\n",
        "\n",
        "    # return the depth of the current node\n",
        "    def depth(self):\n",
        "        if self.parent is not None:\n",
        "            return self.parent.depth() + 1\n",
        "        else:\n",
        "            return 0\n",
        "\n",
        "\n",
        "class FrozenLake_mod:\n",
        "  '''\n",
        "  A wrapper class over the gym environment.\n",
        "  It takes in a sequence of actions to be performed in the environment.\n",
        "  The environment accepts 4 inputs (for movement in 4 directions):\n",
        "\n",
        "      0: Left\n",
        "      1: Down\n",
        "      2: Right\n",
        "      3: Up\n",
        "\n",
        "  The agent (positioned at the red cursor) is free to move in a grid\n",
        "  consisting of 4 kinds of cells:\n",
        "\n",
        "      S: Start position (0 in this case)\n",
        "      F: Frozen ice (safe to be in this location)\n",
        "      H: Hole (unsafe to be in this location)\n",
        "      G: Goal state (target)\n",
        "\n",
        "  Your task will be to find a path from S -> G such that your agent does\n",
        "  *not* step on a hole (H).\n",
        "\n",
        "  If your agent steps on a hole, the game ends with a reward of 0.\n",
        "  If your agent manages to reach the goal location 'G', the game ends with\n",
        "  a reward of 1.\n",
        "  '''\n",
        "\n",
        "  def __init__(self, gym_env):\n",
        "    self.gym_env = gym_env\n",
        "\n",
        "  def play_steps(self, actions):\n",
        "    '''\n",
        "    Input arguments:\n",
        "      actions: a Node whose data is a 1d array of actions (0-3) to perform.\n",
        "               For example, if actions.data = [2, 2, 1], the game will take\n",
        "               the following actions: [Right, Right, Down].\n",
        "    Returns:\n",
        "      reward: the score received from the game for the actions performed.\n",
        "\n",
        "    Other variables:\n",
        "      next_state: the state reached when an action is performed; it stores\n",
        "                  the current position of the agent.\n",
        "\n",
        "      done (bool): determines when the game ends.\n",
        "                  True: the game has ended\n",
        "                  False: the game is still active\n",
        "\n",
        "      reward [0/1]: successful completion of the task fetches a reward of 1.\n",
        "    '''\n",
        "    state = self.gym_env.reset()\n",
        "    for act in actions.data:\n",
        "      next_state, reward, done, _ = self.gym_env.step(int(act))\n",
        "      if done:\n",
        "        break\n",
        "    return reward\n",
        "\n",
        "  def feasible_moves(self, board, n=None):\n",
        "    feasible_moves = []\n",
        "    data = board.data.copy()\n",
        "\n",
        "    ### YOUR CODE HERE ###\n",
        "    '''\n",
        "    Write code to find the feasible moves in any state. Return the list of\n",
        "    possible moves.\n",
        "    '''\n",
        "    ### YOUR CODE ENDS HERE ###\n",
        "    return feasible_moves\n",
        "\n",
        "  def extract_solution(self, actions):\n",
        "    '''\n",
        "    Helper function to visualize the performance of your agent in the\n",
        "    environment. Note that 'render()' will continuously print out the\n",
        "    visuals of the game; using it might take more time.\n",
        "    '''\n",
        "    state = self.gym_env.reset()\n",
        "    for act in actions.data:\n",
        "      next_state, reward, done, _ = self.gym_env.step(int(act))\n",
        "      self.gym_env.render()\n",
        "      if done:\n",
        "        break\n",
        "    return (reward, done)\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
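    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The search task above follows the classic pattern of maintaining a frontier of partial action sequences. As a generic illustration only (not the FrozenLake solution), here is breadth-first search over a small hardcoded graph; the graph and names are made up for this example, and each frontier entry is a path so far, analogous to the action sequence stored in Node.data:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "from collections import deque\n",
        "\n",
        "# A toy graph (adjacency lists); 'A' is the start, 'F' the goal.\n",
        "graph = {\n",
        "    'A': ['B', 'C'],\n",
        "    'B': ['D'],\n",
        "    'C': ['E'],\n",
        "    'D': ['F'],\n",
        "    'E': [],\n",
        "    'F': [],\n",
        "}\n",
        "\n",
        "def bfs(start, goal):\n",
        "    # Each frontier entry is the full path taken so far.\n",
        "    frontier = deque([[start]])\n",
        "    visited = {start}\n",
        "    while frontier:\n",
        "        path = frontier.popleft()       # FIFO pop -> breadth-first\n",
        "        node = path[-1]\n",
        "        if node == goal:\n",
        "            return path\n",
        "        for nxt in graph[node]:\n",
        "            if nxt not in visited:      # avoid revisiting states\n",
        "                visited.add(nxt)\n",
        "                frontier.append(path + [nxt])\n",
        "    return None\n",
        "\n",
        "print(bfs('A', 'F'))  # ['A', 'B', 'D', 'F']"
      ],
      "execution_count": 0,
      "outputs": []
    },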
    {
      "cell_type": "code",
      "metadata": {
        "id": "lleWArz3KfWP",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Your agent that can play Frozen Lake\n",
        "\n",
        "def agent(env, agent_actions):\n",
        "    \n",
        "    ### START OF YOUR CODE ###\n",
        "\n",
        "    ### END OF YOUR CODE ###                               \n",
        "    raise Exception(\"Unreachable\")\n",
        "\n",
        "\n",
        "\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "P8C0l6LROPXW",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Do not change this cell\n",
        "gym_env = gym.make('FrozenLake-v0', is_slippery= False)\n",
        "env = FrozenLake_mod(gym_env)\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ILYc1v35OTEC",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "agent_actions = Node(data=np.zeros(10))\n",
        "solution = agent(env, agent_actions)\n",
        "\n",
        "print('Solution:')\n",
        "env.extract_solution(solution)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EPvwwh5QPji2",
        "colab_type": "text"
      },
      "source": [
        "## 2. Neural Networks\n",
        "\n",
        "Points: [50%]\n",
        "\n",
        "This is a supervised learning task: 'x_data.npy' contains a dataset of images (two MNIST digits side by side), and the label set 'y_data.npy' contains, for each image, the bigger of the two digits.\n",
        "\n",
        "Train a neural network to find the bigger of the two digits in each sample.\n",
        "\n",
        "Your performance will be evaluated based on the accuracy of your network on a **hidden** test dataset.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1pDaMWrpQ1Vv",
        "colab_type": "text"
      },
      "source": [
        "### 2.0 Loading the dataset"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0OS7L6vjQeMo",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import numpy as np\n",
        "x_data = np.load('x_data.npy')\n",
        "y_data = np.load('y_data.npy')"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wHJqBQfQ1ikY",
        "colab_type": "text"
      },
      "source": [
        "The dataset comprises 27,000 samples."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GKx2Kn3s1EPL",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        },
        "outputId": "65e54ed1-1e68-4664-a808-951a0e21b80b"
      },
      "source": [
        "x_data.shape"
      ],
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "(27000, 28, 56)"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 12
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_fEZ_AaI1Hk_",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 233
        },
        "outputId": "6810a8a8-e083-4689-b55b-43007d8d7dd0"
      },
      "source": [
        "import matplotlib.pyplot as plt\n",
        "plt.imshow(x_data[0])"
      ],
      "execution_count": 14,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<matplotlib.image.AxesImage at 0x7fc844ce65f8>"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 14
        },
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXAAAADHCAYAAAAAoQhGAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0\ndHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAS0ElEQVR4nO3df5TVdZ3H8dfbYQD5JY7IREiCiD9Q\nc8wJdGVLTV0sNzSJNCu3LCwlwzAzTrtaJ3ftbGpI5i4qiR1/hrJyzC0RS201ZPC3qYA4JjgMEKJo\nCszw3j/ul9OE72Hmzr1zZz53no9zPHPv6/74fr76ndd8/d7P/X7N3QUASM9uXT0AAEDHUOAAkCgK\nHAASRYEDQKIocABIFAUOAInqVciLzWyipFmSKiTd4O5X7Or5va2P91X/QhYJAD3OZr2xwd333jnv\ncIGbWYWkayWdKGm1pKVmttDd/9Taa/qqv8bbJzq6SADokR7w+a9GeSGHUMZJWunuq9x9q6TbJU0q\n4P0AAHkopMCHS3qtxf3VWfZ3zGyqmdWZWd02bSlgcQCAljr9Q0x3n+Pute5eW6k+nb04AOgxCinw\nNZJGtLi/T5YBAEqgkAJfKmmMmY0ys96SzpC0sDjDAgC0pcOzUNy9ycymSfqtctMI57r780UbGQBg\nlwqaB+7u90m6r0hjAQDkgW9iAkCiKHAASBQFDgCJosABIFEUOAAkigIHgERR4ACQKAocABJFgQNA\noihwAEgUBQ4AiaLAASBRFDgAJIoCB4BEUeAAkCgKHAASRYEDQKIocABIFAUOAImiwAEgURQ4ACSq\noKvSm1m9pM2SmiU1uXttMQbVU1mv+D9Hxd5DivL+L100Msyb+20P831HrwvzfudZmK+9qneYP1F7\nR5hvaH4nzMf/akaY7//tP4Y50FMVVOCZ49x9QxHeBwCQBw6hAECiCi1wl3S/mS0zs6nRE8xsqpnV\nmVndNm0pcHEAgB0KPYQywd3XmNlQSYvM7EV3f7jlE9x9jqQ5kjTIqrzA5QEAMgXtgbv7muznOkkL\nJI0rxqAAAG3r8B64mfWXtJu7b85unyTph0UbWTdUcfCYMPc+lWH++scHh/m7R8WzL6r2iPNHDo9n\ncXS2//3rwDD/8c8mhvmSw24N81e2vRvmVzSeGOYffIT/Uetp7MhDwrx5QDyzqTW96+P5FE2vvpb3\nmFJQyCGUakkLzGzH+9zq7r8pyqgAAG3qcIG7+ypJhxdxLACAPDCNEAASRYEDQKIocABIVDG+Sl92\nmo/9SJhfddO1YX5AZX6flHc327w5zP9t9r+Eea934lkiR/9qWpgPXNMU5n02xLNT+tUtCXN0P+9O\nimcObxodV8uxn18a5hcN/e8wH17RL6/xzN60X5jff2r8O928YlVe79/dsAcOAImiwAEgURQ4ACSK\nAgeARFHgAJAoZqEE+rz0epgve29EmB9Q2diZw2nVjIajwnzV2/EVfG4aPT/M39wezyqpvubRjg2s\nnTjjSTremTw+zP2r68P8ycPiba01v/7r0DB/sHlAXu9zfP8Xw/zsB/8U5md+5tww96XPhnmvEfuE\n+erZ8XmDDhwSX9XqzQl/CfN8sQcOAImiwAEgURQ4ACSKAgeARFHgAJAoZqEEmhrWhvnsH382zC+f\nGF9Jp+KZ+BP0p8+bndd4frThw2G+8oT4PBHNmxrC/PNHnxfm9RfEyx2lp9seHMrKuvP+IcwvvODO\nMD9rYDzL4ogr4/PiDPpzfN6dQb9fGebNG/KbrfHTb04O82su/HmYvzw5/h09YP2HwvzwBfVhfvng\n+Bwv06fF/x76iFkoANCjUeAAkCgKHAASRYEDQKIocABIlLnv+owUZjZX0imS1rn7oVlWJekOSSMl\n1Uua4u5vtLWwQVbl4+0TBQ65
+6kYsleYN/9lY5i/cms8q+T5j80N83H//s0wH3pt556rBOWr134j\nw/wzv/5jmB/fL54lctpPLg7zD/xXXZj7tq1tD64A1srVsZbfcGiYv3TC9WH+StN7Yb6+efcwP//q\neLZJ9ezi/I4+4POXuXvtznl79sBvkjRxp+wSSYvdfYykxdl9AEAJtVng7v6wpJ13JSdJmpfdnifp\n1CKPCwDQho5+kafa3Xd8W2StpOrWnmhmUyVNlaS+yu8CpQCA1hX8IabnDqK3eiDd3ee4e62711aq\nT6GLAwBkOlrgjWY2TJKyn/H3aQEAnaajh1AWSjpb0hXZz3uKNqIE5Xu+hm1vxZ+Ut+aQs+Kriay/\nriJ+wfb4fBPADn+e/MEwP2eP+DxANf8RzzZpbZZFV11t6bWL3jdRQ5K04oSftfIKC9NvrDgzzPtM\nfjPMqzd1zYywNvfAzew2SY9JOtDMVpvZOcoV94lmtkLSCdl9AEAJtbkH7u7xnyKp/CZ0A0BC+CYm\nACSKAgeARFHgAJAorsjTBQ7+7vIw//Jh8ccKv9h3cZh//LPnh/nAO+LzWQA79P7HDWG+uuntMK9e\nsrkzh6Pd+sVf8tt4+uFhfvT0+Ao4V+11ZZgv3xbPiznj6ovCfPgvngvz5rfeCvOuwh44ACSKAgeA\nRFHgAJAoChwAEkWBA0CimIXSBZo3xedT+Ms3Dg7zPy98N8wv+dHNYf69KaeFuT+5R5iPuPyxMFcb\nV2tCuj68d0OYH3frd8J8v8db2UZas1t8np53P31kmPe7YE2YP3rgtWG+dEu8bU6668IwHz0jnpn1\nAcXnMEnlbELsgQNAoihwAEgUBQ4AiaLAASBRFDgAJMq8hDMNBlmVjzdOI56vjV85OsxvufQnYT6q\nV9+83v+Qm6eF+Zjr45kKTavq83p/dD97Pzo4zAdVvhfm9Sf3D/PmjZvCvGH6+DB/ckZ8ZZymVuZ9\nHLzo62E+6pdhrF6Ll8UPJO4Bn7/M3d93uSH2wAEgURQ4ACSKAgeARFHgAJAoChwAEtXmuVDMbK6k\nUyStc/dDs+wySV+TtD572kx3v6+zBtnTVc2Nz0Mx7aX4ijyDrlgd5rft99swf/5L8cyAg0Z8NcwP\n/EH8d795xaowR/fz6NKDwvynn4zPr3PFCV8K80FTXwvz60bG29Q/Lz8lzN+5cp8wH3Pv42GOnPbs\ngd8kaWKQX+3uNdk/lDcAlFibBe7uD0vaWIKxAADyUMgx8Glm9oyZzTWzPVt7kplNNbM6M6vbpi0F\nLA4A0FJHC/w6SaMl1UhqkBRfClqSu89x91p3r61Unw4uDgCwsw4VuLs3unuzu2+XdL2kccUdFgCg\nLR26Io+ZDXP3HSfKOE3Sc8UbEtrL/u+pMP/r5KFh/tHPfTPMl3x3Vpi/eNwNYX7WyJPC/M0JYYyE\nfKrf23F+5c/D/JH34gq57EtfCfPd/hBvs331ejtGh521ZxrhbZKOlTTEzFZLulTSsWZWI8kl1Us6\ntxPHCAAItFng7n5mEN/YCWMBAOSBb2ICQKIocABIFAUOAInq0CwUdG/NjevCvPqaOH/v4qYw72e9\nw/z6kfeG+SmnTY/fZ8GSMEfx7NY3vgrTxilHhPnDp/1nK+/UL0xrHv9CmA+fsjIez7Z4tgmKiz1w\nAEgUBQ4AiaLAASBRFDgAJIoCB4BEMQslYdsn1IT5y5+NZyQcWlMf5q3NNmnN7I3xzIZ+99Tl9T4o\nnvqLPxLmz50bXxnnl5v3C/MvDlwb5luf3yPMfdvWdowOnYU9cABIFAUOAImiwAEgURQ4ACSKAgeA\nRDELpRux2kPDfPkFrZyT5Jh5Yf6xvsWZGbDFt4X5HzeOil+wvSHOUTQrrhkf56fHs00OfvjLYb7/\npfGVdzYviGcSDXitHYNDybEHDgCJosABIFEUOAAkigIHgERR4ACQqDZnoZjZCEk3S6qW5JLmuP
ss\nM6uSdIekkZLqJU1x9zc6b6jp6TVq3zB/+csfDPPLPnd7mJ8+YEPRxhSZ2Vgb5g/NOirM95z3WGcO\nB5LeOT2ebXLucQ+G+UEPfSXMD/hOfBWm1nx091VhPn9Nc17vg9Jozx54k6QZ7j5W0lGSzjezsZIu\nkbTY3cdIWpzdBwCUSJsF7u4N7v5EdnuzpBckDZc0SdKOicjzJJ3aWYMEALxfXl/kMbORko6QtERS\ntbvv+ObGWuUOsUSvmSppqiT1beWCqQCA/LX7Q0wzGyDpLknT3f2tlo+5uyt3fPx93H2Ou9e6e22l\n+hQ0WADA37SrwM2sUrnyvsXd787iRjMblj0+TFJ+n5YAAArSnlkoJulGSS+4+1UtHloo6WxJV2Q/\n7+mUEXYjvUZ+KMzfPHJYmH/uh78J868PvjvMi2VGQzx75LGfx7NNqm56PMz33M5sk66y5uR41sdF\nVS+F+e39jwzzpjWvh3nFkL3C/JktI8L87XM3hXnfe8MYJdKeY+DHSPqipGfN7Kksm6lccd9pZudI\nelXSlM4ZIgAg0maBu/sfJFkrD3+iuMMBALQX38QEgERR4ACQKAocABLVo6/I02vYB8J849z+Yf6N\nUQ+F+ZkDG4s2psi0NRPC/InrasJ8yPznwrxqM7NKUjH4yfgqTDo5jvfY/b283t8qK8N8dO94W26+\nf0gr77Q8r+WiuNgDB4BEUeAAkCgKHAASRYEDQKIocABIVFnNQtn6T/G5PrZeuDHMZ+5/X5iftPs7\nRRtTpLH53TD/2MIZYX7Q918M86pN8ayS7R0bFrqRYb9tCPPffzuePXLP2NvC/NRFZ4T5Ofv+PswP\nrHwzzIcu69zfCXQMe+AAkCgKHAASRYEDQKIocABIFAUOAIkqq1ko9afGf4+WH/arorz/tZtGh/ms\nh04Kc2uOT6N+0I9eCfMxjUvCPL42C8pZ88p4G/n+v34tzMdeEJ//5pDBa8P8klZmp4w5P94GTU+H\nOboWe+AAkCgKHAASRYEDQKIocABIFAUOAIkyd9/1E8xGSLpZUrUklzTH3WeZ2WWSviZpffbUme4e\nn1wkM8iqfLxxIXsAyMcDPn+Zu7/vZE/tmUbYJGmGuz9hZgMlLTOzRdljV7v7T4o5UABA+7RZ4O7e\nIKkhu73ZzF6QNLyzBwYA2LW8joGb2UhJR0jaMdt/mpk9Y2ZzzWzPVl4z1czqzKxum7YUNFgAwN+0\nu8DNbICkuyRNd/e3JF0nabSkGuX20K+MXufuc9y91t1rK9WnCEMGAEjtLHAzq1SuvG9x97slyd0b\n3b3Z3bdLul7SuM4bJgBgZ20WuJmZpBslveDuV7XIh7V42mmS4pMxAAA6RXtmoRwj6YuSnjWzp7Js\npqQzzaxGuamF9ZLO7ZQRAgBC7ZmF8gdJ0Wn1djnnGwDQufgmJgAkigIHgERR4ACQKAocABJFgQNA\noihwAEgUBQ4AiaLAASBRFDgAJKrNK/IUdWFm6yW9mt0dImlDyRbe9Vjf8tWT1lVifbvCvu6+985h\nSQv87xZsVhddIqhcsb7lqyetq8T6diccQgGARFHgAJCorizwOV247K7A+pavnrSuEuvbbXTZMXAA\nQGE4hAIAiaLAASBRJS9wM5toZi+Z2Uozu6TUyy8FM5trZuvM7LkWWZWZLTKzFdnPPbtyjMViZiPM\n7Hdm9icze97MvpXl5bq+fc3scTN7OlvfH2T5KDNbkm3Xd5hZ764ea7GYWYWZPWlm92b3y3ld683s\nWTN7yszqsqzbbsslLXAzq5B0raSTJY1V7rqaY0s5hhK5SdLEnbJLJC129zGSFmf3y0GTpBnuPlbS\nUZLOz/6bluv6bpF0vLsfLqlG0kQzO0rSjyVd7e77S3pD0jldOMZi+5akF1rcL+d1laTj3L2mxdzv\nbrstl3oPfJykle6+yt23Srpd0qQSj6HTufvDkjbuFE+SNC
+7PU/SqSUdVCdx9wZ3fyK7vVm5X/Th\nKt/1dXd/O7tbmf3jko6XND/Ly2Z9zWwfSZ+SdEN231Sm67oL3XZbLnWBD5f0Wov7q7OsJ6h294bs\n9lpJ1V05mM5gZiMlHSFpicp4fbNDCk9JWidpkaSXJW1y96bsKeW0Xf9U0sWStmf391L5rquU+2N8\nv5ktM7OpWdZtt+U2r0qP4nN3N7Oymr9pZgMk3SVpuru/ldtRyym39XX3Zkk1ZjZY0gJJB3XxkDqF\nmZ0iaZ27LzOzY7t6PCUywd3XmNlQSYvM7MWWD3a3bbnUe+BrJI1ocX+fLOsJGs1smCRlP9d18XiK\nxswqlSvvW9z97iwu2/Xdwd03SfqdpKMlDTazHTtE5bJdHyPp02ZWr9zhzuMlzVJ5rqskyd3XZD/X\nKffHeZy68bZc6gJfKmlM9il2b0lnSFpY4jF0lYWSzs5uny3pni4cS9Fkx0RvlPSCu1/V4qFyXd+9\nsz1vmdnukk5U7rj/7yRNzp5WFuvr7t9z933cfaRyv6sPuvtZKsN1lSQz629mA3fclnSSpOfUjbfl\nkn8T08w+qdxxtQpJc9398pIOoATM7DZJxyp3GspGSZdK+h9Jd0r6kHKn1J3i7jt/0JkcM5sg6RFJ\nz+pvx0lnKnccvBzX98PKfZBVodwO0J3u/kMz20+5vdQqSU9K+oK7b+m6kRZXdgjlInc/pVzXNVuv\nBdndXpJudffLzWwvddNtma/SA0Ci+CYmACSKAgeARFHgAJAoChwAEkWBA0CiKHAASBQFDgCJ+n9y\nnCvLsVBnXQAAAABJRU5ErkJggg==\n",
            "text/plain": [
              "<Figure size 432x288 with 1 Axes>"
            ]
          },
          "metadata": {
            "tags": []
          }
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vn8Vi4891nIt",
        "colab_type": "text"
      },
      "source": [
        "The label is the bigger of the two digits."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7yy40dLv1Ruz",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        },
        "outputId": "b70efbe4-5f44-4360-c4e3-27c59bf39d98"
      },
      "source": [
        "y_data[0]"
      ],
      "execution_count": 15,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "8"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 15
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Yabaf7I0Q5R6",
        "colab_type": "text"
      },
      "source": [
        "### 2.1 **[TO-DO]** Partition the dataset into train and test sets with a 90:10 split.\n",
        "\n",
        "Your training images should be stored in the variable: x_train\n",
        "\n",
        "Training labels in: y_train\n",
        "\n",
        "Test images in: x_test\n",
        "\n",
        "Test labels in: y_test"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "z8QaeMBFRC3A",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        ""
      ],
      "execution_count": 0,
      "outputs": []
    },
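    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For reference, one possible sketch of a 90:10 split with NumPy: shuffle the indices, then slice. The demo arrays and the seed value below are arbitrary stand-ins so the cell is self-contained; in the assignment you would apply the same indexing to x_data and y_data."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import numpy as np\n",
        "\n",
        "# Toy stand-ins for x_data / y_data (100 tiny 'images' with labels).\n",
        "x_demo = np.arange(100 * 4).reshape(100, 2, 2)\n",
        "y_demo = np.arange(100)\n",
        "\n",
        "rng = np.random.RandomState(0)       # seed for reproducibility\n",
        "idx = rng.permutation(len(x_demo))   # shuffled sample indices\n",
        "split = int(0.9 * len(x_demo))       # 90:10 boundary\n",
        "\n",
        "train_idx, test_idx = idx[:split], idx[split:]\n",
        "x_train, y_train = x_demo[train_idx], y_demo[train_idx]\n",
        "x_test, y_test = x_demo[test_idx], y_demo[test_idx]\n",
        "\n",
        "print(x_train.shape, x_test.shape)  # (90, 2, 2) (10, 2, 2)"
      ],
      "execution_count": 0,
      "outputs": []
    },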
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Iyh3eE4gRhKN",
        "colab_type": "text"
      },
      "source": [
        "### 2.2 **[TO-DO]** Define the architecture of a feed-forward neural network with 1 hidden layer containing 32 neurons"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "n7neDYWgRgkg",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        ""
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZrXP6odQRt-o",
        "colab_type": "text"
      },
      "source": [
        "### 2.3 Setting the compile arguments"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "-dhiUVetSDV9",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        ""
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VWd5wZ-fSEAF",
        "colab_type": "text"
      },
      "source": [
        "### 2.4 **[TO-DO]** Train the neural network for 25 epochs with a batch size of 32 and a validation split of 80:20"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "BLEzAT8ASPbV",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        ""
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zlOlCnP9SUK_",
        "colab_type": "text"
      },
      "source": [
        "### 2.5 **[TO-DO]** Plot the training performance curves: a line plot of training & validation accuracy/loss vs. number of epochs"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "gFQCnh6zSpJ0",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        ""
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "R0l-dZBsTL-f",
        "colab_type": "text"
      },
      "source": [
        "### 2.6 **[TO-DO]** Predict the labels of the 'x_test' array that you have partitioned."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "RecCynrsTWLE",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        ""
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fQrMTkMsSrhH",
        "colab_type": "text"
      },
      "source": [
        "### 2.7 **[TO-DO]** [Open-ended]\n",
        "\n",
        "Make improvements to the neural network as you see fit. Change the architecture, compile parameters, or training strategy."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "YhZT-0LkS9dL",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        ""
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OF-LWzX8S-IH",
        "colab_type": "text"
      },
      "source": [
        "### 2.8 [BONUS] Improve the accuracy of your predictions by using a Convolutional Neural Network (CNN)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "MeZOF-WvTJDi",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        ""
      ],
      "execution_count": 0,
      "outputs": []
    }
  ]
}