{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "10_Convolutional_Neural_Networks",
      "provenance": [],
      "collapsed_sections": [
        "vskwiiI3V3S6"
      ],
      "toc_visible": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eTdCMVl9YAXw",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://practicalai.me\"><img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/rounded_logo.png\" width=\"100\" align=\"left\" hspace=\"20px\" vspace=\"20px\"></a>\n",
        "\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/10_Convolutional_Neural_Networks/convolution.gif\" width=\"200\" align=\"right\">\n",
        "\n",
        "<div align=\"left\">\n",
        "<h1>Convolutional Neural Networks (CNN)</h1>\n",
        "\n",
        "In this lesson we will explore the basics of Convolutional Neural Networks (CNNs) applied to text for natural language processing (NLP) tasks."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xuabAj4PYj57",
        "colab_type": "text"
      },
      "source": [
        "<table align=\"center\">\n",
        "  <td>\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/rounded_logo.png\" width=\"25\"><a target=\"_blank\" href=\"https://practicalai.me\"> View on practicalAI</a>\n",
        "  </td>\n",
        "  <td>\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/colab_logo.png\" width=\"25\"><a target=\"_blank\" href=\"https://colab.research.google.com/github/practicalAI/practicalAI/blob/master/notebooks/10_Convolutional_Neural_Networks.ipynb\"> Run in Google Colab</a>\n",
        "  </td>\n",
        "  <td>\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/github_logo.png\" width=\"22\"><a target=\"_blank\" href=\"https://github.com/practicalAI/practicalAI/blob/master/notebooks/10_Convolutional_Neural_Networks.ipynb\"> View code on GitHub</a>\n",
        "  </td>\n",
        "</table>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JqxyljU18hvt",
        "colab_type": "text"
      },
      "source": [
        "# Overview\n",
        "\n",
        "* **Objective:** Detect spatial substructure in input data.\n",
        "* **Advantages:**\n",
        "  * Small number of (shared) weights\n",
        "  * Parallelizable\n",
        "  * Detects spatial substructures (feature extractors)\n",
        "  * [Interpretability](https://arxiv.org/abs/1312.6034) via filters\n",
        "  * Applicable to images, text, time series, etc.\n",
        "* **Disadvantages:**\n",
        "  * Many hyperparameters (kernel size, strides, etc.)\n",
        "* **Miscellaneous:**\n",
        "  * Lots of deep CNN architectures, constantly updated for SOTA performance\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rmvSOsB-CuJb",
        "colab_type": "text"
      },
      "source": [
        "# Set up"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bpFe3zVoCuTD",
        "colab_type": "code",
        "outputId": "0cfc46a8-6d02-4bf8-8c32-198126244a5b",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Use TensorFlow 2.x\n",
        "%tensorflow_version 2.x"
      ],
      "execution_count": 1,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "TensorFlow 2.x selected.\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "2PXdZbDzCuRE",
        "colab_type": "code",
        "outputId": "e58937dd-9190-4cbb-ee9b-3d008a768d33",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "import os\n",
        "import numpy as np\n",
        "import tensorflow as tf\n",
        "print(\"GPU Available: \", tf.test.is_gpu_available())"
      ],
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "GPU Available:  True\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "f-NnTT8LDjf3",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Arguments\n",
        "SEED = 1234\n",
        "SHUFFLE = True\n",
        "DATA_FILE = 'news.csv'\n",
        "INPUT_FEATURE = 'title'\n",
        "OUTPUT_FEATURE = 'category'\n",
        "FILTERS = \"!\\\"'#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\"\n",
        "LOWER = True\n",
        "CHAR_LEVEL = True\n",
        "TRAIN_SIZE = 0.7\n",
        "VAL_SIZE = 0.15\n",
        "TEST_SIZE = 0.15\n",
        "NUM_EPOCHS = 10\n",
        "BATCH_SIZE = 64\n",
        "NUM_FILTERS = 50\n",
        "FILTER_SIZE = 3 # character tri-grams\n",
        "HIDDEN_DIM = 100\n",
        "DROPOUT_P = 0.1\n",
        "LEARNING_RATE = 1e-3\n",
        "EARLY_STOPPING_CRITERIA = 3"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "nB7mfVd7Dn9e",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Set seed for reproducibility\n",
        "np.random.seed(SEED)\n",
        "tf.random.set_seed(SEED)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c69z9wpJ56nE",
        "colab_type": "text"
      },
      "source": [
        "# Data"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2V_nEp5G58M0",
        "colab_type": "text"
      },
      "source": [
        "We will download the [AG News dataset](http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html), which consists of 120,000 text samples from 4 unique classes ('Business', 'Sci/Tech', 'Sports', 'World')."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "y3qKSoEe57na",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import pandas as pd\n",
        "import re\n",
        "import urllib.request"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "cGQo98566GIV",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Download data from GitHub to the notebook's local drive\n",
        "url = \"https://raw.githubusercontent.com/practicalAI/practicalAI/master/data/news.csv\"\n",
        "response = urllib.request.urlopen(url)\n",
        "data = response.read()\n",
        "with open(DATA_FILE, 'wb') as fp:\n",
        "    fp.write(data)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "dG_Oltib6G-9",
        "colab_type": "code",
        "outputId": "cced9cc0-50ec-4794-8880-3bb0044b9319",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 204
        }
      },
      "source": [
        "# Load data\n",
        "df = pd.read_csv(DATA_FILE, header=0)\n",
        "X = df[INPUT_FEATURE].values\n",
        "y = df[OUTPUT_FEATURE].values\n",
        "df.head(5)"
      ],
      "execution_count": 7,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/html": [
              "<div>\n",
              "<style scoped>\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "</style>\n",
              "<table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr style=\"text-align: right;\">\n",
              "      <th></th>\n",
              "      <th>title</th>\n",
              "      <th>category</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th>0</th>\n",
              "      <td>Wall St. Bears Claw Back Into the Black (Reuters)</td>\n",
              "      <td>Business</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>1</th>\n",
              "      <td>Carlyle Looks Toward Commercial Aerospace (Reu...</td>\n",
              "      <td>Business</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>2</th>\n",
              "      <td>Oil and Economy Cloud Stocks' Outlook (Reuters)</td>\n",
              "      <td>Business</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>3</th>\n",
              "      <td>Iraq Halts Oil Exports from Main Southern Pipe...</td>\n",
              "      <td>Business</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>4</th>\n",
              "      <td>Oil prices soar to all-time record, posing new...</td>\n",
              "      <td>Business</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n",
              "</div>"
            ],
            "text/plain": [
              "                                               title  category\n",
              "0  Wall St. Bears Claw Back Into the Black (Reuters)  Business\n",
              "1  Carlyle Looks Toward Commercial Aerospace (Reu...  Business\n",
              "2    Oil and Economy Cloud Stocks' Outlook (Reuters)  Business\n",
              "3  Iraq Halts Oil Exports from Main Southern Pipe...  Business\n",
              "4  Oil prices soar to all-time record, posing new...  Business"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 7
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hxo6RKCQ71dl",
        "colab_type": "text"
      },
      "source": [
        "# Split data"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "eS6kCcfY6IHE",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import collections\n",
        "from sklearn.model_selection import train_test_split"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hWkiSVb4SA0e",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "51FfQoKbSA67",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def train_val_test_split(X, y, val_size, test_size, shuffle):\n",
        "    \"\"\"Split data into train/val/test sets.\"\"\"\n",
        "    X_train, X_test, y_train, y_test = train_test_split(\n",
        "        X, y, test_size=test_size, stratify=y, shuffle=shuffle)\n",
        "    X_train, X_val, y_train, y_val = train_test_split(\n",
        "        X_train, y_train, test_size=val_size, stratify=y_train, shuffle=shuffle)\n",
        "    return X_train, X_val, X_test, y_train, y_val, y_test"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8XIdYU_n7536",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "kqiQd2j_76gP",
        "colab_type": "code",
        "outputId": "8a4d421e-8795-4dfa-d398-bdf8bc8bf632",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 119
        }
      },
      "source": [
        "# Create data splits\n",
        "X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(\n",
        "    X=X, y=y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)\n",
        "class_counts = dict(collections.Counter(y))\n",
        "print (f\"X_train: {X_train.shape}, y_train: {y_train.shape}\")\n",
        "print (f\"X_val: {X_val.shape}, y_val: {y_val.shape}\")\n",
        "print (f\"X_test: {X_test.shape}, y_test: {y_test.shape}\")\n",
        "print (f\"X_train[0]: {X_train[0]}\")\n",
        "print (f\"y_train[0]: {y_train[0]}\")\n",
        "print (f\"Classes: {class_counts}\")"
      ],
      "execution_count": 10,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "X_train: (86700,), y_train: (86700,)\n",
            "X_val: (15300,), y_val: (15300,)\n",
            "X_test: (18000,), y_test: (18000,)\n",
            "X_train[0]: PGA overhauls system for Ryder Cup points\n",
            "y_train[0]: Sports\n",
            "Classes: {'Business': 30000, 'Sci/Tech': 30000, 'Sports': 30000, 'World': 30000}\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dIfmW7vJ8Jx1",
        "colab_type": "text"
      },
      "source": [
        "# Tokenizer"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nvzE9a-KKJTJ",
        "colab_type": "text"
      },
      "source": [
        "We're going to process our input text at the character level and then one-hot encode each character. Each character has a token id, and the one-hot representation for each character is an array of zeros except for a 1 at the `token_id` index. So in the example below, the letter e has token index 2 and its one-hot encoded form is an array of zeros except for a 1 at index 2 (`[0. 0. 1. 0. ... 0.]`)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lQ6G4t-JKayP",
        "colab_type": "text"
      },
      "source": [
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/10_Convolutional_Neural_Networks/onehot.png\" width=\"900\">"
      ]
    },
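    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the one-hot idea concrete, here is a minimal sketch (using a hypothetical five-token vocabulary, not the lesson's actual tokenizer indices):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Minimal one-hot sketch with a hypothetical mini vocabulary\n",
        "import numpy as np\n",
        "\n",
        "vocab = {'<PAD>': 0, 'a': 1, 'e': 2, 'h': 3, 't': 4}\n",
        "\n",
        "def one_hot(index, size):\n",
        "    \"\"\"Array of zeros with a 1 at `index`.\"\"\"\n",
        "    v = np.zeros(size)\n",
        "    v[index] = 1.\n",
        "    return v\n",
        "\n",
        "print (one_hot(vocab['e'], size=len(vocab))) # [0. 0. 1. 0. 0.]"
      ],
      "execution_count": 0,
      "outputs": []
    },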
    {
      "cell_type": "code",
      "metadata": {
        "id": "DHPAxkKR7736",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from tensorflow.keras.preprocessing.text import Tokenizer\n",
        "from tensorflow.keras.utils import to_categorical"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NI7z9-mPXjLA",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Wp30jhS2XjhU",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def untokenize(indices, tokenizer):\n",
        "    \"\"\"Untokenize a list of indices into a string.\"\"\"\n",
        "    return \" \".join([tokenizer.index_word[index] for index in indices])"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "EAZylERnYHhz",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def untokenize_one_hot(seq, tokenizer):\n",
        "    \"\"\"Untokenize a one-hot encoded matrix.\"\"\"\n",
        "    indices = [np.argmax(one_hot) for one_hot in seq]\n",
        "    return untokenize(indices=indices, tokenizer=tokenizer)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_BD3XPKF8L84",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "WcscM_vL8KvP",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Input vectorizer\n",
        "X_tokenizer = Tokenizer(filters=FILTERS,\n",
        "                        lower=LOWER,\n",
        "                        char_level=CHAR_LEVEL,\n",
        "                        oov_token='<UNK>')"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xV2JgpOA8PwO",
        "colab_type": "code",
        "outputId": "df8ece21-ec3a-41d9-b5fb-de664703d7db",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Fit only on train data\n",
        "X_tokenizer.fit_on_texts(X_train)\n",
        "vocab_size = len(X_tokenizer.word_index) + 1\n",
        "print (f\"# tokens: {vocab_size}\")"
      ],
      "execution_count": 15,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "# tokens: 58\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ybb-YZSz8Qno",
        "colab_type": "code",
        "outputId": "fc677dc5-77ef-4f30-e5e0-7dad895a81bd",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 68
        }
      },
      "source": [
        "# Convert text to sequence of tokens\n",
        "original_text = X_train[0]\n",
        "X_train = np.array(X_tokenizer.texts_to_sequences(X_train))\n",
        "X_val = np.array(X_tokenizer.texts_to_sequences(X_val))\n",
        "X_test = np.array(X_tokenizer.texts_to_sequences(X_test))\n",
        "preprocessed_text = untokenize(X_train[0], X_tokenizer)\n",
        "print (f\"{original_text} \\n\\t→ {preprocessed_text} \\n\\t→ {X_train[0]}\")"
      ],
      "execution_count": 16,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "PGA overhauls system for Ryder Cup points \n",
            "\t→ p g a   o v e r h a u l s   s y s t e m   f o r   r y d e r   c u p   p o i n t s \n",
            "\t→ [15, 18, 5, 2, 7, 24, 3, 9, 16, 5, 14, 11, 4, 2, 4, 22, 4, 6, 3, 17, 2, 19, 7, 9, 2, 9, 22, 13, 3, 9, 2, 12, 14, 15, 2, 15, 7, 8, 10, 6, 4]\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1DMYB0O_I7Uu",
        "colab_type": "code",
        "outputId": "5dc61e11-1e7b-474e-e561-23b93d8442d7",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 170
        }
      },
      "source": [
        "# One hot\n",
        "X_train = np.array([to_categorical(seq, num_classes=vocab_size) for seq in X_train])\n",
        "X_val = np.array([to_categorical(seq, num_classes=vocab_size) for seq in X_val])\n",
        "X_test = np.array([to_categorical(seq, num_classes=vocab_size) for seq in X_test])\n",
        "print (f\"X_train[0]:\\n {X_train[0]}\")\n",
        "print (f\"X_train[0].shape: {X_train[0].shape}\")"
      ],
      "execution_count": 17,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "X_train[0]:\n",
            " [[0. 0. 0. ... 0. 0. 0.]\n",
            " [0. 0. 0. ... 0. 0. 0.]\n",
            " [0. 0. 0. ... 0. 0. 0.]\n",
            " ...\n",
            " [0. 0. 0. ... 0. 0. 0.]\n",
            " [0. 0. 0. ... 0. 0. 0.]\n",
            " [0. 0. 0. ... 0. 0. 0.]]\n",
            "X_train[0].shape: (41, 58)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ORGuhjCf8TKh",
        "colab_type": "text"
      },
      "source": [
        "# LabelEncoder"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7aBBgzkW8Rxv",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from sklearn.preprocessing import LabelEncoder"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "z_jVCsl98U09",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ckM_MnQi8UTH",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Output vectorizer\n",
        "y_tokenizer = LabelEncoder()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0-FkxqCT8WUk",
        "colab_type": "code",
        "outputId": "f2c675f7-54cd-4071-a22f-91d4207eb69e",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Fit on train data\n",
        "y_tokenizer = y_tokenizer.fit(y_train)\n",
        "classes = list(y_tokenizer.classes_)\n",
        "print (f\"classes: {classes}\")"
      ],
      "execution_count": 20,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "classes: ['Business', 'Sci/Tech', 'Sports', 'World']\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "yrLHd1i_8XAJ",
        "colab_type": "code",
        "outputId": "f0621e5d-9a36-44b3-dcca-be2889bc14b1",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Convert labels to tokens\n",
        "y_train = y_tokenizer.transform(y_train)\n",
        "y_val = y_tokenizer.transform(y_val)\n",
        "y_test = y_tokenizer.transform(y_test)\n",
        "print (f\"y_train[0]: {y_train[0]}\")"
      ],
      "execution_count": 21,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "y_train[0]: 2\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GsZB_xtORcKw",
        "colab_type": "code",
        "outputId": "f22dde75-d0f6-44f7-c21e-a1b531cb6bc6",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 51
        }
      },
      "source": [
        "# Class weights\n",
        "counts = np.bincount(y_train)\n",
        "class_weights = {i: 1.0/count for i, count in enumerate(counts)}\n",
        "print (f\"class counts: {counts},\\nclass weights: {class_weights}\")"
      ],
      "execution_count": 22,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "class counts: [21675 21675 21675 21675],\n",
            "class weights: {0: 4.61361014994233e-05, 1: 4.61361014994233e-05, 2: 4.61361014994233e-05, 3: 4.61361014994233e-05}\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eoWQk0hO9bK2",
        "colab_type": "text"
      },
      "source": [
        "# Generators"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iz51F32FNNmz",
        "colab_type": "text"
      },
      "source": [
        "We're going to create a generator that will load our data splits. Each input will be padded up to the largest sequence length in that particular batch."
      ]
    },
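    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sketch of what per-batch padding does (a toy NumPy stand-in for `pad_sequences(padding=\"post\")`, with made-up sequences):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Toy sketch of post-padding a batch to its longest sequence\n",
        "import numpy as np\n",
        "\n",
        "def pad_post(batch):\n",
        "    \"\"\"Zero-pad each sequence at the end to the batch's max length.\"\"\"\n",
        "    max_len = max(len(seq) for seq in batch)\n",
        "    return np.array([seq + [0] * (max_len - len(seq)) for seq in batch])\n",
        "\n",
        "print (pad_post([[3, 1, 4], [1, 5]])) # [[3 1 4]\n",
        "                                      #  [1 5 0]]"
      ],
      "execution_count": 0,
      "outputs": []
    },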
    {
      "cell_type": "code",
      "metadata": {
        "id": "GVxnbzgW8X1V",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import math\n",
        "from tensorflow.keras.preprocessing.sequence import pad_sequences\n",
        "from tensorflow.keras.utils import Sequence"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IMtHyqex9gVI",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1w6wVKJe9fxk",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "class DataGenerator(Sequence):\n",
        "    \"\"\"Custom data loader.\"\"\"\n",
        "    def __init__(self, X, y, batch_size, vocab_size, max_filter_size, shuffle=True):\n",
        "        self.X = X\n",
        "        self.y = y\n",
        "        self.batch_size = batch_size\n",
        "        self.vocab_size = vocab_size\n",
        "        self.max_filter_size = max_filter_size\n",
        "        self.shuffle = shuffle\n",
        "        self.on_epoch_end()\n",
        "\n",
        "    def __len__(self):\n",
        "        \"\"\"# of batches.\"\"\"\n",
        "        return math.ceil(len(self.X) / self.batch_size)\n",
        "\n",
        "    def __str__(self):\n",
        "        return (f\"<DataGenerator(\" \\\n",
        "                f\"batch_size={self.batch_size}, \" \\\n",
        "                f\"batches={len(self)}, \" \\\n",
        "                f\"shuffle={self.shuffle})>\")\n",
        "\n",
        "    def __getitem__(self, index):\n",
        "        \"\"\"Generate a batch.\"\"\"\n",
        "        # Gather indices for this batch\n",
        "        batch_indices = self.epoch_indices[\n",
        "            index * self.batch_size:(index+1)*self.batch_size]\n",
        "\n",
        "        # Generate batch data\n",
        "        X, y = self.create_batch(batch_indices=batch_indices)\n",
        "\n",
        "        return X, y\n",
        "\n",
        "    def on_epoch_end(self):\n",
        "        \"\"\"Create indices after each epoch.\"\"\"\n",
        "        self.epoch_indices = np.arange(len(self.X))\n",
        "        if self.shuffle:\n",
        "            np.random.shuffle(self.epoch_indices)\n",
        "\n",
        "    def create_batch(self, batch_indices):\n",
        "        \"\"\"Generate batch from indices.\"\"\"\n",
        "        # Get batch data\n",
        "        X = self.X[batch_indices]\n",
        "        y = self.y[batch_indices]\n",
        "\n",
        "        # Pad batch\n",
        "        max_seq_len = max(self.max_filter_size, max([len(x) for x in X]))\n",
        "        X = pad_sequences(X, padding=\"post\", maxlen=max_seq_len)\n",
        "\n",
        "        return X, y"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "u37JyFYV9ilS",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "5T8mVj9d9hNI",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Dataset generator\n",
        "training_generator = DataGenerator(X=X_train,\n",
        "                                   y=y_train,\n",
        "                                   batch_size=BATCH_SIZE,\n",
        "                                   vocab_size=vocab_size,\n",
        "                                   max_filter_size=FILTER_SIZE,\n",
        "                                   shuffle=False)\n",
        "validation_generator = DataGenerator(X=X_val,\n",
        "                                     y=y_val,\n",
        "                                     batch_size=BATCH_SIZE,\n",
        "                                     vocab_size=vocab_size,\n",
        "                                     max_filter_size=FILTER_SIZE,\n",
        "                                     shuffle=False)\n",
        "testing_generator = DataGenerator(X=X_test,\n",
        "                                  y=y_test,\n",
        "                                  batch_size=BATCH_SIZE,\n",
        "                                  vocab_size=vocab_size,\n",
        "                                  max_filter_size=FILTER_SIZE,\n",
        "                                  shuffle=False)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "drbY5WDX9kcL",
        "colab_type": "code",
        "outputId": "61bc1b80-0c59-4344-c38b-911d41bf0670",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 68
        }
      },
      "source": [
        "print (f\"training_generator: {training_generator}\")\n",
        "print (f\"validation_generator: {validation_generator}\")\n",
        "print (f\"testing_generator: {testing_generator}\")"
      ],
      "execution_count": 26,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "training_generator: <DataGenerator(batch_size=64, batches=1355, shuffle=False)>\n",
            "validation_generator: <DataGenerator(batch_size=64, batches=240, shuffle=False)>\n",
            "testing_generator: <DataGenerator(batch_size=64, batches=282, shuffle=False)>\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NK4-o0trwA3p",
        "colab_type": "text"
      },
      "source": [
        "Now that we have our preprocessed data all set up, let's explore the components of the CNN before we create our model."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "I6Cs6GYInnD8",
        "colab_type": "text"
      },
      "source": [
        "# Inputs"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sWf6pSvunpoC",
        "colab_type": "text"
      },
      "source": [
        "We're going to learn about CNNs by applying them to 1D text data. In the dummy example below, our inputs are composed of character tokens that are one-hot encoded. We have a batch of N samples, where each sample has 8 characters and each character is represented by an array of 10 values (vocab size = 10). This gives our inputs the shape (N, 8, 10)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "c4AW9_QGpBqS",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from tensorflow.keras.layers import Input"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bCQeJEUVnnrR",
        "colab_type": "code",
        "outputId": "5cc45fd2-95af-4a7c-a726-e411cfb6709c",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Assume all our inputs are padded to the same sequence length\n",
        "sequence_size = 8 # tokens per input\n",
        "vocab_size = 10 # vocab size (one-hot dimension)\n",
        "x = Input(shape=(sequence_size, vocab_size))\n",
        "print (x)"
      ],
      "execution_count": 28,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Tensor(\"input_1:0\", shape=(None, 8, 10), dtype=float32)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UAJMhb7DsFgV",
        "colab_type": "text"
      },
      "source": [
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/10_Convolutional_Neural_Networks/inputs.png\" width=\"700\">"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3uLslqxFl_au",
        "colab_type": "text"
      },
      "source": [
        "# Filters"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JhhTVijAl-Yp",
        "colab_type": "text"
      },
      "source": [
        "At the core of CNNs are filters (also known as weights or kernels) that convolve (slide) across our input to extract relevant features. The filters are initialized randomly but learn to pick up meaningful features from the input that aid in optimizing for the objective. The intuition here is that each filter represents a feature, and we apply the same filter across all inputs to capture that feature wherever it appears -- feature extraction via parameter sharing.\n",
        "\n",
        "We can see convolution in the diagram below, where we simplified the filters and inputs to 2D for ease of visualization. Also note that the values here are 0s and 1s, but in reality they can be any floating-point value."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "du4gRM5htR9W",
        "colab_type": "text"
      },
      "source": [
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/10_Convolutional_Neural_Networks/convolution.gif\" width=\"500\">"
      ]
    },
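    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The sliding dot product in the diagram above can be sketched with plain NumPy (the input and filter values here are made up for illustration):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# A 1D input with 8 values and a single filter of width 3\n",
        "x = np.array([1, 0, 2, 3, 0, 1, 1, 0], dtype=float)\n",
        "w = np.array([1, -1, 2], dtype=float)\n",
        "\n",
        "# Slide the filter across the input with stride 1 and no padding\n",
        "out = np.array([x[i:i+3] @ w for i in range(len(x) - 3 + 1)])\n",
        "print (out.shape) # (6,)\n",
        "```"
      ]
    },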
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "96PfAWzYsOEI",
        "colab_type": "text"
      },
      "source": [
        "Now let's return to our actual input `x`, which has shape (8, 10) [`max_seq_len`, `vocab_size`], and convolve on it using filters. We will use 50 filters of size (1, 3), each with the same depth as the number of channels (`num_channels` = `vocab_size` = `one_hot_size` = 10). This gives our filters a combined shape of (3, 10, 50) [`kernel_size`, `vocab_size`, `num_filters`].\n",
        "\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/10_Convolutional_Neural_Networks/filters_conv1d.png\" width=\"500\">"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XAgzaWVsmmwh",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from tensorflow.keras.layers import Conv1D"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "26nX0cVX-Bwl",
        "colab_type": "text"
      },
      "source": [
        "* **stride**: the amount the filters move from one convolution operation to the next.\n",
        "* **padding**: values (typically zeros) added around the input, usually so the output volume has whole-number dimensions."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "i65abvD4H2dM",
        "colab_type": "text"
      },
      "source": [
        "So far we've used a `stride` of 1 and `VALID` padding (no padding), but let's look at an example with a larger stride and at how the different padding approaches compare.\n",
        "\n",
        "Padding types:\n",
        "* **VALID**: no padding, the filters only use the \"valid\" values in the input. If the filter cannot reach all the input values (filters go left to right), the extra values on the right are dropped.\n",
        "* **SAME**: adds padding evenly to the right (preferred) and left sides of the input so that all values in the input are processed.\n",
        "\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/10_Convolutional_Neural_Networks/padding.png\" width=\"600\">\n",
        "\n",
        "There are many other ways to pad our inputs as well, including [custom](https://www.tensorflow.org/api_docs/python/tf/pad) options where we pad the inputs first and then pass them into the CONV layer. A common one is `FULL` padding, where we add enough padding for every value in the input to convolve with every value in the filter. We'll explore these custom padding options in later lessons."
      ]
    },
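    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a sanity check, the output widths under each padding scheme can be computed directly from the shape formulas (a sketch; width 8, filter size 3, and stride 2 are just example values):\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "def valid_width(w, f, s):\n",
        "    # VALID: no padding; windows that fall off the right edge are dropped\n",
        "    return math.floor((w - f) / s) + 1\n",
        "\n",
        "def same_width(w, s):\n",
        "    # SAME: pad so the output width depends only on the stride\n",
        "    return math.ceil(w / s)\n",
        "\n",
        "print (valid_width(w=8, f=3, s=2)) # 3\n",
        "print (same_width(w=8, s=2)) # 4\n",
        "```"
      ]
    },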
    {
      "cell_type": "code",
      "metadata": {
        "id": "stBW81Uimmz6",
        "colab_type": "code",
        "outputId": "9350861d-04c4-4dc4-c170-249f55ebfebf",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 68
        }
      },
      "source": [
        "# Convolutional weights\n",
        "conv = Conv1D(filters=50, \n",
        "              kernel_size=(3,), \n",
        "              strides=1,\n",
        "              padding='valid', # no padding\n",
        "              activation='relu')\n",
        "z_conv = conv(x)\n",
        "W_conv, b_conv = conv.weights\n",
        "print (f\"x {x.shape}\")\n",
        "print (f\"W_conv {W_conv.shape}\")\n",
        "print (f\"z_conv {z_conv.shape}\")"
      ],
      "execution_count": 30,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "x (None, 8, 10)\n",
            "W_conv (3, 10, 50)\n",
            "z_conv (None, 6, 50)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wFD9qBslxs4A",
        "colab_type": "text"
      },
      "source": [
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/10_Convolutional_Neural_Networks/conv2.png\" width=\"700\">"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tcBbTPW6tZtr",
        "colab_type": "text"
      },
      "source": [
        "When we apply these filters to our inputs, we receive an output of shape (N, 6, 50). We get 50 for the output channel dim because we used 50 filters, and 6 for the conv output width because:\n",
        "\n",
        "$W_2 = \\frac{W_1 - F + 2P}{S} + 1 = \\frac{8 - 3 + 2(0)}{1} + 1 = 6$\n",
        "\n",
        "$H_2 = \\frac{H_1 - F + 2P}{S} + 1 = \\frac{1 - 1 + 2(0)}{1} + 1 = 1$\n",
        "\n",
        "$D_2 = D_1 $\n",
        "\n",
        "where:\n",
        "  * W: width of each input = 8\n",
        "  * H: height of each input = 1\n",
        "  * D: depth (# channels)\n",
        "  * F: filter size = 3 (the filters have height 1 for 1D conv)\n",
        "  * P: padding = 0\n",
        "  * S: stride = 1"
      ]
    },
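    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The width formula above is easy to verify in code (a sketch of the shape arithmetic, not a TensorFlow call):\n",
        "\n",
        "```python\n",
        "def conv_output_width(w, f, p, s):\n",
        "    # W_2 = (W_1 - F + 2P) / S + 1\n",
        "    return (w - f + 2 * p) // s + 1\n",
        "\n",
        "# Our example: width 8, filter size 3, no padding, stride 1\n",
        "print (conv_output_width(w=8, f=3, p=0, s=1)) # 6\n",
        "```"
      ]
    },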
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FRD72rMHvWHN",
        "colab_type": "text"
      },
      "source": [
        "<img height=\"45\" src=\"http://bestanimations.com/HomeOffice/Lights/Bulbs/animated-light-bulb-gif-29.gif\" align=\"left\" vspace=\"5px\" hspace=\"10px\">\n",
        "\n",
        "We will explore higher-dimensional convolution layers in subsequent lessons. For example, [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) is used with 3D inputs (images, char-level text, etc.) and [Conv3D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv3D) is used for 4D inputs (videos, volumetric data, etc.)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HpB8HSJNwLp-",
        "colab_type": "text"
      },
      "source": [
        "# Pooling"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WxRnbPy6wLuc",
        "colab_type": "text"
      },
      "source": [
        "The result of convolving filters on an input is a feature map. Due to the nature of convolution and the overlap between windows, our feature map will contain lots of redundant information. Pooling is a way to summarize a high-dimensional feature map into a lower-dimensional one for simplified downstream computation. The pooling operation can take the max value, the average, etc. within a certain receptive field. Below is an example of pooling where the outputs from a conv layer are 4x4 and we apply max-pool filters of size 2x2.\n",
        "\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/10_Convolutional_Neural_Networks/pooling.png\" width=\"500\">"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ok0EBECKc2QU",
        "colab_type": "text"
      },
      "source": [
        "$W_2 = \\frac{W_1 - F}{S} + 1 = \\frac{4 - 2}{2} + 1 = 2$\n",
        "\n",
        "$H_2 = \\frac{H_1 - F}{S} + 1 = \\frac{4 - 2}{2} + 1 = 2$\n",
        "\n",
        "$ D_2 = D_1 $\n",
        "\n",
        "where:\n",
        "  * W: width of each input = 4\n",
        "  * H: height of each input = 4\n",
        "  * D: depth (# channels)\n",
        "  * F: filter size = 2\n",
        "  * S: stride = 2"
      ]
    },
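    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The 2x2 max-pool arithmetic above can be reproduced with NumPy (the feature map values are made up for illustration):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# A 4x4 feature map (single channel)\n",
        "fm = np.arange(1, 17).reshape(4, 4)\n",
        "\n",
        "# 2x2 max pooling with stride 2: split into 2x2 blocks, take each block max\n",
        "pooled = fm.reshape(2, 2, 2, 2).max(axis=(1, 3))\n",
        "print (pooled.shape) # (2, 2)\n",
        "```"
      ]
    },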
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5ijJtky9QHeX",
        "colab_type": "text"
      },
      "source": [
        "In our use case, we want to take just the single max value across the entire sequence, so we will use the [GlobalMaxPool1D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalMaxPool1D) layer, which is equivalent to using a max-pool filter of size `max_seq_len`.\n"
      ]
    },
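    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Global max pooling just takes the max over the steps dimension; here is a NumPy sketch with made-up conv outputs:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# Hypothetical conv outputs: batch of 2 samples, 6 steps, 3 filters\n",
        "z_conv = np.random.randn(2, 6, 3)\n",
        "\n",
        "# Keep only the strongest response of each filter across all steps\n",
        "z_pool = z_conv.max(axis=1) # (2, 6, 3) -> (2, 3)\n",
        "print (z_pool.shape) # (2, 3)\n",
        "```"
      ]
    },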
    {
      "cell_type": "code",
      "metadata": {
        "id": "niptcsv2wUPA",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from tensorflow.keras.layers import GlobalMaxPool1D"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xYOfH3_vwDTw",
        "colab_type": "code",
        "outputId": "eda83f59-e9e5-44e6-f043-28ba7cc7502b",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Max pooling\n",
        "pool = GlobalMaxPool1D(data_format='channels_last')\n",
        "z_pool = pool(z_conv)\n",
        "print (f\"z_pool {z_pool.shape}\")"
      ],
      "execution_count": 32,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "z_pool (None, 50)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SEFYMYZFeFVk",
        "colab_type": "text"
      },
      "source": [
        "<div align=\"left\">\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/lightbulb.gif\" width=\"45px\" align=\"left\" hspace=\"10px\">\n",
        "</div>\n",
        "\n",
        "data_format=`channels_last`: 3D tensor with shape: (batch_size, steps, features) <br>\n",
        "data_format=`channels_first`: 3D tensor with shape: (batch_size, features, steps)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ccelFfH-s3ZY",
        "colab_type": "text"
      },
      "source": [
        "# Batch Normalization"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9f67F4o1HHQp",
        "colab_type": "text"
      },
      "source": [
        "The last topic we'll cover before constructing our model is [batch normalization](https://arxiv.org/abs/1502.03167). It's an operation that standardizes (mean=0, std=1) the activations from the previous layer. Recall that in previous notebooks we standardized our inputs so our model could optimize quickly with larger learning rates. It's the same concept here, but we continue to maintain standardized values throughout the forward pass to further aid optimization. "
      ]
    },
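    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The standardization at the heart of batch norm can be sketched with NumPy (omitting the learned scale and shift parameters, gamma and beta, that the Keras layer also applies):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# Hypothetical activations: batch of 4 samples, 3 features\n",
        "z = np.random.randn(4, 3) * 10 + 5\n",
        "\n",
        "# Standardize each feature across the batch\n",
        "eps = 1e-5\n",
        "z_norm = (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)\n",
        "print (z_norm.mean(axis=0)) # ~0 for each feature\n",
        "print (z_norm.std(axis=0)) # ~1 for each feature\n",
        "```"
      ]
    },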
    {
      "cell_type": "code",
      "metadata": {
        "id": "zHfhcEDus8yi",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from tensorflow.keras.layers import BatchNormalization"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "owtCbYoZs82g",
        "colab_type": "code",
        "outputId": "eb4fb86e-55a0-459d-8fc5-c9b9bd9a3959",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Batch normalization\n",
        "batch_norm = BatchNormalization()\n",
        "z_batch_norm = batch_norm(z_conv) # applied to activations (after conv layer & before pooling)\n",
        "print (f\"z {z_batch_norm.shape}\")"
      ],
      "execution_count": 34,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "z (None, 6, 50)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pfhjWZRD94hK",
        "colab_type": "text"
      },
      "source": [
        "# Model"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zVmJGm8m-KIz",
        "colab_type": "text"
      },
      "source": [
        "Let's visualize the model's forward pass.\n",
        "\n",
        "1. We'll first tokenize our inputs (`batch_size`, `max_seq_len`).\n",
        "2. Then we'll one-hot encode our tokenized inputs (`batch_size`, `max_seq_len`, `vocab_size`).\n",
        "3. We'll apply convolution via filters (`filter_size`, `vocab_size`, `num_filters`) followed by batch normalization. Our filters act as character level n-gram detectors.\n",
        "4. We'll apply 1D global max pooling which will extract the most relevant information from the feature maps for making the decision.\n",
        "5. We feed the pool outputs to a fully-connected (FC) layer (with dropout).\n",
        "6. We use one more FC layer with softmax to derive class probabilities. "
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3ilKr0yTgl6o",
        "colab_type": "text"
      },
      "source": [
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/10_Convolutional_Neural_Networks/forward_pass.png\" width=\"1000\">"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "T8oCutDJ-d1J",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from tensorflow.keras.layers import Activation\n",
        "from tensorflow.keras.layers import BatchNormalization\n",
        "from tensorflow.keras.layers import Conv1D\n",
        "from tensorflow.keras.layers import Dense\n",
        "from tensorflow.keras.layers import Dropout\n",
        "from tensorflow.keras.layers import GlobalMaxPool1D\n",
        "from tensorflow.keras.losses import SparseCategoricalCrossentropy\n",
        "from tensorflow.keras.models import Model\n",
        "from tensorflow.keras.optimizers import Adam"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6B2MewCdCeKC",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "UPP5ROd69mXC",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "class TextClassificationCNNModel(Model):\n",
        "    def __init__(self, filter_size, num_filters,\n",
        "                 hidden_dim, dropout_p, num_classes):\n",
        "        super(TextClassificationCNNModel, self).__init__()\n",
        "        \n",
        "        # Convolutional filters\n",
        "        self.conv = Conv1D(filters=num_filters, kernel_size=filter_size, padding='same')\n",
        "        self.relu = Activation('relu')\n",
        "        self.batch_norm = BatchNormalization()\n",
        "        self.pool = GlobalMaxPool1D(data_format='channels_last')\n",
        "\n",
        "        # FC layers\n",
        "        self.fc1 = Dense(units=hidden_dim, activation='relu')\n",
        "        self.dropout = Dropout(rate=dropout_p)\n",
        "        self.fc2 = Dense(units=num_classes, activation='softmax')\n",
        "        \n",
        "    def call(self, x_in, training=False):\n",
        "        \"\"\"Forward pass.\"\"\"\n",
        "\n",
        "        # Cast input to float\n",
        "        x_in = tf.cast(x_in, tf.float32)\n",
        "\n",
        "        # Convolutions\n",
        "        z = self.conv(x_in)\n",
        "        z = self.relu(z)\n",
        "        z = self.batch_norm(z)\n",
        "        z = self.pool(z)\n",
        "\n",
        "        # FC\n",
        "        z = self.fc1(z)\n",
        "        if training:\n",
        "            z = self.dropout(z, training=training)\n",
        "        y_pred = self.fc2(z)\n",
        "\n",
        "        return y_pred\n",
        "\n",
        "    def sample(self, input_shape):\n",
        "        x = Input(shape=input_shape)\n",
        "        return Model(inputs=x, outputs=self.call(x)).summary()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Y8JzMrcv_p8a",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "wD4sRUS5_lwq",
        "colab_type": "code",
        "outputId": "ba70bef9-d8ad-4a6d-9299-ec5b336bf6a1",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 391
        }
      },
      "source": [
        "model = TextClassificationCNNModel(filter_size=FILTER_SIZE,\n",
        "                                   num_filters=NUM_FILTERS,\n",
        "                                   hidden_dim=HIDDEN_DIM,\n",
        "                                   dropout_p=DROPOUT_P,\n",
        "                                   num_classes=len(classes))\n",
        "vocab_size = len(X_tokenizer.word_index) + 1\n",
        "model.sample(input_shape=(sequence_size, vocab_size,))"
      ],
      "execution_count": 37,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"model\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_2 (InputLayer)         [(None, 8, 58)]           0         \n",
            "_________________________________________________________________\n",
            "conv1d_1 (Conv1D)            (None, 8, 50)             8750      \n",
            "_________________________________________________________________\n",
            "activation (Activation)      (None, 8, 50)             0         \n",
            "_________________________________________________________________\n",
            "batch_normalization_1 (Batch (None, 8, 50)             200       \n",
            "_________________________________________________________________\n",
            "global_max_pooling1d_1 (Glob (None, 50)                0         \n",
            "_________________________________________________________________\n",
            "dense (Dense)                (None, 100)               5100      \n",
            "_________________________________________________________________\n",
            "dense_1 (Dense)              (None, 4)                 404       \n",
            "=================================================================\n",
            "Total params: 14,454\n",
            "Trainable params: 14,354\n",
            "Non-trainable params: 100\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "s1L7-vxbgCup",
        "colab_type": "text"
      },
      "source": [
        "<div align=\"left\">\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/lightbulb.gif\" width=\"45px\" align=\"left\" hspace=\"10px\">\n",
        "</div>\n",
        "\n",
        "Note that we used `SAME` padding (w/ stride=1), which means the conv outputs will have the same width as our inputs. The amount of padding differs for each batch based on its `max_seq_len`, but you can calculate it by solving for P in the equation below.\n",
        "\n",
        "$ \\frac{W_1 - F + 2P}{S} + 1 = W_2 $\n",
        "$ \\frac{\\text{max_seq_len } - \\text{ filter_size } + 2P}{\\text{stride}} + 1 = W_{conv_{output}} $"
      ]
    },
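    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Solving the equation above for P with stride 1 gives the total padding directly (a sketch; for even filter sizes Keras puts the extra zero on the right):\n",
        "\n",
        "```python\n",
        "def same_padding_total(f):\n",
        "    # With stride 1: (W - F + 2P)/1 + 1 = W  =>  2P = F - 1\n",
        "    # Returns the total number of padded zeros, split across both sides\n",
        "    return f - 1\n",
        "\n",
        "print (same_padding_total(f=3)) # 2 (one zero on each side)\n",
        "```"
      ]
    },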
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YlFBYfpPC5ap",
        "colab_type": "text"
      },
      "source": [
        "# Training"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ktgC5WsCC-bg",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from tensorflow.keras.callbacks import Callback\n",
        "from tensorflow.keras.callbacks import EarlyStopping\n",
        "from tensorflow.keras.callbacks import ReduceLROnPlateau\n",
        "from tensorflow.keras.callbacks import TensorBoard\n",
        "%load_ext tensorboard"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Ucn3tYq1_sE1",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Compile\n",
        "model.compile(optimizer=Adam(lr=LEARNING_RATE),\n",
        "              loss=SparseCategoricalCrossentropy(),\n",
        "              metrics=['accuracy'])"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xwYD62fzpQ-1",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Callbacks\n",
        "callbacks = [EarlyStopping(monitor='val_loss', patience=EARLY_STOPPING_CRITERIA, verbose=1, mode='min'),\n",
        "             ReduceLROnPlateau(patience=1, factor=0.1, verbose=0),\n",
        "             TensorBoard(log_dir='tensorboard', histogram_freq=1, update_freq='epoch')]"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qlqe2TVlCxvj",
        "colab_type": "code",
        "outputId": "a1166af9-8f6b-44e4-a0b8-17ceedf58f68",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 394
        }
      },
      "source": [
        "# Training\n",
        "training_history = model.fit_generator(generator=training_generator,\n",
        "                                       epochs=NUM_EPOCHS,\n",
        "                                       validation_data=validation_generator,\n",
        "                                       callbacks=callbacks,\n",
        "                                       shuffle=False,\n",
        "                                       class_weight=class_weights,\n",
        "                                       verbose=1)"
      ],
      "execution_count": 41,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Epoch 1/10\n",
            "   1/1355 [..............................] - ETA: 2:03:37 - loss: 2.1479e-04 - accuracy: 0.2969WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.185355). Check your callbacks.\n",
            "1355/1355 [==============================] - 55s 40ms/step - loss: 5.2768e-05 - accuracy: 0.4209 - val_loss: 0.9817 - val_accuracy: 0.6050\n",
            "Epoch 2/10\n",
            "1355/1355 [==============================] - 48s 35ms/step - loss: 4.2652e-05 - accuracy: 0.6136 - val_loss: 0.9055 - val_accuracy: 0.6416\n",
            "Epoch 3/10\n",
            "1355/1355 [==============================] - 48s 35ms/step - loss: 3.9452e-05 - accuracy: 0.6551 - val_loss: 0.8625 - val_accuracy: 0.6622\n",
            "Epoch 4/10\n",
            "1355/1355 [==============================] - 48s 35ms/step - loss: 3.7632e-05 - accuracy: 0.6775 - val_loss: 0.8343 - val_accuracy: 0.6717\n",
            "Epoch 5/10\n",
            "1355/1355 [==============================] - 47s 35ms/step - loss: 3.6372e-05 - accuracy: 0.6923 - val_loss: 0.8130 - val_accuracy: 0.6868\n",
            "Epoch 6/10\n",
            "1355/1355 [==============================] - 48s 35ms/step - loss: 3.5395e-05 - accuracy: 0.7017 - val_loss: 0.7945 - val_accuracy: 0.6965\n",
            "Epoch 7/10\n",
            "1355/1355 [==============================] - 48s 35ms/step - loss: 3.4659e-05 - accuracy: 0.7098 - val_loss: 0.7893 - val_accuracy: 0.6982\n",
            "Epoch 8/10\n",
            "1355/1355 [==============================] - 47s 35ms/step - loss: 3.3938e-05 - accuracy: 0.7181 - val_loss: 0.7695 - val_accuracy: 0.7061\n",
            "Epoch 9/10\n",
            "1355/1355 [==============================] - 48s 35ms/step - loss: 3.3331e-05 - accuracy: 0.7228 - val_loss: 0.7643 - val_accuracy: 0.7114\n",
            "Epoch 10/10\n",
            "1355/1355 [==============================] - 48s 35ms/step - loss: 3.2874e-05 - accuracy: 0.7282 - val_loss: 0.7625 - val_accuracy: 0.7107\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "TaBb83O5T0Wj",
        "colab_type": "code",
        "outputId": "b4f87623-2371-4a02-f477-d7a40d8b0102",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 17
        }
      },
      "source": [
        "%tensorboard --logdir tensorboard"
      ],
      "execution_count": 42,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "text/html": [
              "\n",
              "    <div id=\"root\"></div>\n",
              "    <script>\n",
              "      (function() {\n",
              "        window.TENSORBOARD_ENV = window.TENSORBOARD_ENV || {};\n",
              "        window.TENSORBOARD_ENV[\"IN_COLAB\"] = true;\n",
              "        document.querySelector(\"base\").href = \"https://localhost:6006\";\n",
              "        function fixUpTensorboard(root) {\n",
              "          const tftb = root.querySelector(\"tf-tensorboard\");\n",
              "          // Disable the fragment manipulation behavior in Colab. Not\n",
              "          // only is the behavior not useful (as the iframe's location\n",
              "          // is not visible to the user), it causes TensorBoard's usage\n",
              "          // of `window.replace` to navigate away from the page and to\n",
              "          // the `localhost:<port>` URL specified by the base URI, which\n",
              "          // in turn causes the frame to (likely) crash.\n",
              "          tftb.removeAttribute(\"use-hash\");\n",
              "        }\n",
              "        function executeAllScripts(root) {\n",
              "          // When `script` elements are inserted into the DOM by\n",
              "          // assigning to an element's `innerHTML`, the scripts are not\n",
              "          // executed. Thus, we manually re-insert these scripts so that\n",
              "          // TensorBoard can initialize itself.\n",
              "          for (const script of root.querySelectorAll(\"script\")) {\n",
              "            const newScript = document.createElement(\"script\");\n",
              "            newScript.type = script.type;\n",
              "            newScript.textContent = script.textContent;\n",
              "            root.appendChild(newScript);\n",
              "            script.remove();\n",
              "          }\n",
              "        }\n",
              "        function setHeight(root, height) {\n",
              "          // We set the height dynamically after the TensorBoard UI has\n",
              "          // been initialized. This avoids an intermediate state in\n",
              "          // which the container plus the UI become taller than the\n",
              "          // final width and cause the Colab output frame to be\n",
              "          // permanently resized, eventually leading to an empty\n",
              "          // vertical gap below the TensorBoard UI. It's not clear\n",
              "          // exactly what causes this problematic intermediate state,\n",
              "          // but setting the height late seems to fix it.\n",
              "          root.style.height = `${height}px`;\n",
              "        }\n",
              "        const root = document.getElementById(\"root\");\n",
              "        fetch(\".\")\n",
              "          .then((x) => x.text())\n",
              "          .then((html) => void (root.innerHTML = html))\n",
              "          .then(() => fixUpTensorboard(root))\n",
              "          .then(() => executeAllScripts(root))\n",
              "          .then(() => setHeight(root, 800));\n",
              "      })();\n",
              "    </script>\n",
              "  "
            ],
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ]
          },
          "metadata": {
            "tags": []
          }
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vskwiiI3V3S6",
        "colab_type": "text"
      },
      "source": [
        "# Evaluation"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Itq7lT9qV9Y8",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import io\n",
        "import itertools\n",
        "import json\n",
        "import matplotlib.pyplot as plt\n",
        "from sklearn.metrics import classification_report\n",
        "from sklearn.metrics import confusion_matrix\n",
        "from sklearn.metrics import precision_recall_fscore_support"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "g05NJhAQUDwd",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "RHeDoXgwUC_F",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def plot_confusion_matrix(y_true, y_pred, classes, cmap=plt.cm.Blues):\n",
        "    \"\"\"Plot a confusion matrix using ground truth and predictions.\"\"\"\n",
        "    # Confusion matrix\n",
        "    cm = confusion_matrix(y_true, y_pred)\n",
        "    cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n",
        "\n",
        "    #  Figure\n",
        "    fig = plt.figure()\n",
        "    ax = fig.add_subplot(111)\n",
        "    cax = ax.matshow(cm, cmap=cmap)\n",
        "    fig.colorbar(cax)\n",
        "\n",
        "    # Axis\n",
        "    plt.title(\"Confusion matrix\")\n",
        "    plt.ylabel(\"True label\")\n",
        "    plt.xlabel(\"Predicted label\")\n",
        "    ax.set_xticklabels([''] + classes)\n",
        "    ax.set_yticklabels([''] + classes)\n",
        "    ax.xaxis.set_label_position('bottom') \n",
        "    ax.xaxis.tick_bottom()\n",
        "\n",
        "    # Values\n",
        "    thresh = cm.max() / 2.\n",
        "    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n",
        "        plt.text(j, i, f\"{cm[i, j]:d} ({cm_norm[i, j]*100:.1f}%)\",\n",
        "                 horizontalalignment=\"center\",\n",
        "                 color=\"white\" if cm[i, j] > thresh else \"black\")\n",
        "\n",
        "    # Display\n",
        "    plt.show()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "fcQIsZxGUI70",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def get_performance(y_true, y_pred, classes):\n",
        "    \"\"\"Per-class performance metrics. \"\"\"\n",
        "    performance = {'overall': {}, 'class': {}}\n",
        "    y_pred = np.argmax(y_pred, axis=1)\n",
        "    metrics = precision_recall_fscore_support(y_true, y_pred)\n",
        "\n",
        "    # Overall performance (unweighted macro-average across classes)\n",
        "    performance['overall']['precision'] = np.mean(metrics[0])\n",
        "    performance['overall']['recall'] = np.mean(metrics[1])\n",
        "    performance['overall']['f1'] = np.mean(metrics[2])\n",
        "    performance['overall']['num_samples'] = np.float64(np.sum(metrics[3]))\n",
        "\n",
        "    # Per-class performance\n",
        "    for i in range(len(classes)):\n",
        "        performance['class'][classes[i]] = {\n",
        "            \"precision\": metrics[0][i],\n",
        "            \"recall\": metrics[1][i],\n",
        "            \"f1\": metrics[2][i],\n",
        "            \"num_samples\": np.float64(metrics[3][i])\n",
        "        }\n",
        "\n",
        "    return performance"
      ],
      "execution_count": 0,
      "outputs": []
    },
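    {
      "cell_type": "markdown",
      "metadata": {
        "id": "toyPerfDemoMd",
        "colab_type": "text"
      },
      "source": [
        "A quick sanity check of `get_performance` on toy data (an illustrative sketch; the labels, probabilities and class names below are made up, not from the dataset):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "toyPerfDemoCode",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Toy example (illustrative only): 4 samples, 2 classes\n",
        "toy_y_true = np.array([0, 0, 1, 1])\n",
        "toy_y_prob = np.array([[0.9, 0.1],\n",
        "                       [0.4, 0.6],\n",
        "                       [0.2, 0.8],\n",
        "                       [0.3, 0.7]])  # rows are per-class probabilities\n",
        "toy_perf = get_performance(y_true=toy_y_true, y_pred=toy_y_prob,\n",
        "                           classes=['neg', 'pos'])\n",
        "print (json.dumps(toy_perf, indent=4))"
      ],
      "execution_count": 0,
      "outputs": []
    },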
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yCAHPsyWWAar",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qdAj6KyCU88E",
        "colab_type": "code",
        "outputId": "37e46dcb-6678-44c4-fb8b-98c0c93c11b4",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 68
        }
      },
      "source": [
        "# Evaluation\n",
        "test_history = model.evaluate_generator(generator=testing_generator, verbose=1)\n",
        "y_pred = model.predict_generator(generator=testing_generator, verbose=1)\n",
        "print (f\"test history: {test_history}\")"
      ],
      "execution_count": 46,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "282/282 [==============================] - 4s 16ms/step - loss: 0.7581 - accuracy: 0.7145\n",
            "282/282 [==============================] - 2s 8ms/step\n",
            "test history: [0.7580525716568561, 0.7145]\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "uLQNo8NXWBqI",
        "colab_type": "code",
        "outputId": "ad46645b-b5c9-46b4-9200-f79797c6e33c",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 595
        }
      },
      "source": [
        "# Class performance\n",
        "performance = get_performance(y_true=y_test,\n",
        "                              y_pred=y_pred,\n",
        "                              classes=classes)\n",
        "print (json.dumps(performance, indent=4))"
      ],
      "execution_count": 47,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "{\n",
            "    \"overall\": {\n",
            "        \"precision\": 0.7160589897350307,\n",
            "        \"recall\": 0.7145,\n",
            "        \"f1\": 0.7128911295045124,\n",
            "        \"num_samples\": 18000.0\n",
            "    },\n",
            "    \"class\": {\n",
            "        \"Business\": {\n",
            "            \"precision\": 0.7008547008547008,\n",
            "            \"recall\": 0.656,\n",
            "            \"f1\": 0.6776859504132232,\n",
            "            \"num_samples\": 4500.0\n",
            "        },\n",
            "        \"Sci/Tech\": {\n",
            "            \"precision\": 0.7459698387935517,\n",
            "            \"recall\": 0.6375555555555555,\n",
            "            \"f1\": 0.6875149772346034,\n",
            "            \"num_samples\": 4500.0\n",
            "        },\n",
            "        \"Sports\": {\n",
            "            \"precision\": 0.7206100402457106,\n",
            "            \"recall\": 0.756,\n",
            "            \"f1\": 0.7378809239778765,\n",
            "            \"num_samples\": 4500.0\n",
            "        },\n",
            "        \"World\": {\n",
            "            \"precision\": 0.6968013790461597,\n",
            "            \"recall\": 0.8084444444444444,\n",
            "            \"f1\": 0.7484826663923464,\n",
            "            \"num_samples\": 4500.0\n",
            "        }\n",
            "    }\n",
            "}\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "nRbPfqgZWaof",
        "colab_type": "code",
        "outputId": "89ea0faf-8286-4e66-c895-8acfa83f9a23",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 598
        }
      },
      "source": [
        "# Confusion matrix\n",
        "plt.rcParams[\"figure.figsize\"] = (7,7)\n",
        "y_pred = np.argmax(y_pred, axis=1)\n",
        "plot_confusion_matrix(y_test, y_pred, classes=classes)\n",
        "print (classification_report(y_test, y_pred))"
      ],
      "execution_count": 48,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAc8AAAGKCAYAAABq7cr0AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0\ndHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOzdd3wURRvA8d+TECBIJ/TepYcWekso\noShFFFAUe+8CSpMiCCrCK4IgglgQAcGCEgwdpNcAgoUuJYQSCCWVZN4/dnMk5NLgAoQ8Xz/38XZ2\nZnZu2dyzMzu3K8YYlFJKKZV2bre7AUoppVRmo8FTKaWUSicNnkoppVQ6afBUSiml0kmDp1JKKZVO\n2W53A5RSSt093POWNeZqhEvqMhFnAo0x/i6pzMU0eCqllHIZczWCHFUfckldkUFTvFxSUQbQ4KmU\nUsqFBOTuvyJ4939CpZRSysU0eCqVChHxFJFfRSRMRH64iXoeEZGlrmzb7SIiLUTkn9vdDnUHEkDE\nNa87mAZPddcQkYdFZJuIXBaRYBFZIiLNXVB1T6AoUMgY8+CNVmKM+c4Y094F7clQImJEpFJKeYwx\nfxhjqt6qNqlMRtxc87qD3dmtUyqNRORN4H/A+1iBrgzwGdDVBdWXBf41xlx1QV2ZnojoXAmV5Wnw\nVJmeiOQDRgEvGWN+NMZcMcbEGGN+NcYMsPPkEJH/ichJ+/U/Eclhr2stIsdF5C0ROW33Wp+w140E\n3gV62T3ap0RkhIjMTrD9cnZvLZu9/LiIHBKRSyJyWEQeSZC+LkG5piKy1R4O3ioiTROsWy0i74nI\neruepSLidOZhgvYPTND+biLSSUT+FZFQERmcIL+PiGwUkQt23skikt1et9bOtsv+vL0S1P+2iJwC\nZsWn2WUq2tuoZy+XEJEzItL6pv5hVealw7ZKZQpNgJzATynkGQI0BryBOoAPMDTB+mJAPqAk8BQw\nRUQKGGOGY/Vm5xljchtjZqbUEBG5B5gEdDTG5AGaAkFO8hUEFtt5CwETgMUiUihBtoeBJ4AiQHag\nfwqbLoa1D0piBfsvgL5AfaAFMExEytt5Y4E3AC+sfecHvAhgjGlp56ljf955CeoviNULfzbhho0x\nB4G3gdkikguYBXxtjFmdQnvVXUt02FapTKIQcDaVYdVHgFHGmNPGmDPASODRBOtj7PUxxpgA4DJw\no9f04oCaIuJpjAk2xux1kqczsN8Y860x5qox5nvgb+C+BHlmGWP+NcZEAPOxAn9yYoAxxpgYYC5W\nYPzEGHPJ3v4+rJMGjDHbjTGb7O0eAT4HWqXhMw03xkTZ7UnEGPMFcADYDBTHOllR6q6lwVPdDc4B\nXqlciysBHE2wfNROc9RxXfANB3KntyHGmCtAL+B5IFhEFovIvWloT3ybSiZYPpWO9pwzxsTa7+OD\nW0iC9RHx5UWkioj8JiKnROQiVs86tR+jnzHGRKaS5wugJvCpMSYqlbzqbqbDtkplChuBKKBbCnlO\nYg05xitjp92IK0CuBMvFEq40xgQaY9ph9cD+xgoqqbUnvk0nbrBN6TEVq12VjTF5gcFYPzBIiUlp\npYjkxpqwNRMYYQ9Lq6xI0GFbpTIDY0wY1nW+KfZEmVwi4iEiHUXkQzvb98BQESlsT7x5F5idXJ2p\nCAJaikgZe7LSoPgVIlJURLra1z6jsIZ/45zUEQBUsX9ek01EegHVgd9usE3pkQe4CFy2e8UvXLc+\nBKiQzjo/AbYZY57GupY77aZbqVQKRCSniGwRkV0istee3IeIfGVP1AuyX952uojIJBE5ICK74ye4\n2ev6ich++9UvLdvX4KnuCsaYj4E3sSYBnQGOAS8DP9tZRgPbgN3AHmCHnXYj21oGzLPr2k7igOdm\nt+MkEIp1LfH64IQx5hzQBXgL
a9h5INDFGHP2RtqUTv2xJiNdwuoVz7tu/Qjga3s2bqo3KRWRroA/\n1z7nm0C9+FnGKqtx0ZBt6sO2UYCvMaYO1nwAfxFpbK8bYIzxtl/xE/Y6ApXt17NYIzDxk/eGA42w\nJhIOF5ECqX5KY1IcjVFKKaXSzC13cZOj9hMuqSty49jtxpgGqeWzZ3mvwzqBewH4zRiz4Lo8nwOr\n7cl5iHWHrNbxL2PMc87yJUd7nkoppTIlEXEXkSDgNLDMGLPZXjXGHpqdKPbvubEm4x1LUPy4nZZc\neoo0eCqllHIt1w3beol1y8341/W/MY41xngDpQAfEamJNQfhXqAh1m+T386Ij6i32VJKKeVCLn0k\n2dm0DNsaYy6IyCrA3xgz3k6OEpFZXLu5yAmgdIJipey0E1hDtwnTV6e2Te15KqWUynTsmfP57fee\nQDvgbxEpbqcJ1s/X/rSLLAIes2fdNgbCjDHBQCDQXkQK2BOF2ttpKdKep1JKKdeJfyRZxiuONSvc\nHasjON8Y85uIrBSRwnZLgrBuWALWz8M6Yd0JKxzr1pcYY0JF5D1gq51vlDEmNLWNa/BUSinlWrfg\nBgfGmN1AXSfpvsnkN8BLyaz7EvgyPdvXYVullFIqnbTnqZRSyoVcOmHojqXBUymllGu53dk3dXeF\nu//0QCmllHIx7XkqpZRynfinqtzlNHgqpZRyrTv8WZyucPefHiillFIupj1PpZRSLqSzbZVSSqn0\n02FbpZRSSl1Pe55KKaVcS4dtlVJKqXS49izOu5oGT6WUUq6lPU8F4JYzr3HPU/h2NyPTqVYy/+1u\nQqaUBU7aM4wxt7sFmc/xY0cJPXdWj7p00uCZBu55ClOw67jb3YxMZ/GH99/uJmRK2bPd/WftGSX6\natztbkKm09m3qesrzQJngBo8lVJKuVDW+J3n3f8JlVJKKRfTnqdSSinX0mFbpZRSKh2yyFNV7v5P\nqJRSSrmY9jyVUkq5UNaYMKTBUymllGtlgWued//pgVJKKeVi2vNUSinlWjpsq5RSSqWTDtsqpZRS\n6nra81RKKeU6orNtlVJKqfTTYVullFJKXU97nkoppVxKskDPU4OnUkoplxGyRvDUYVullFIqnbTn\nqZRSynXEft3lNHgqpZRyIdFhW6WUUkolpT1PpZRSLpUVep4aPJVSSrlUVgieOmyrlFJKpZP2PJVS\nSrlUVuh5avBUSinlOlnkpyo6bKuUUkqlk/Y8lVJKuYxkkd95avBUSinlUlkheOqwrVJKKZVO2vNU\nSinlUlmh56nBUymllEtlheCpw7Y3qEQBT37s34q1ozqwZmR7nvGr5FhXvVQ+Fg/yZfWI9nz7SjNy\n57TOUUoXysWRz3qw4t12rHi3HR/2rQeAZ3Z3Zr/anHXvWXUNfaBWstvt6F2CN7tUcyzf36CUow1T\nn2nkSD85vadjO9+83CzZ+pyVr1g0N0uHtWXViHY0qFAQAHc34Yc3W+KZ3d1R9vNnG1G+SO707LYk\nmnpXoV3z+vi38qGzb9Mk66dP+R9lCuUk9NxZAC5eDOOJh3vQoWVD/JrWZf53XzutNzIiggfva0ts\nbCwAjz54HzXLF+XxPt0T5Xv1uX609qlF22b16P/Ks8TExCSpa++eXXTr0Aq/pnVp36IBi376IVH5\n9i0a8MF7wxxpk8aPJXDxIsfy8sAAPh47Mh17JW1iY2Np27whfR/qdq09LzxFw1pV8GveAL/mDfhz\ndxAAC+fPoU3TerRuUpcu7Vqyd88up3UaY3igS3suXbwIwOsvPUONiiVp1dg7Ub7zoaE81LUjTepW\n56GuHblw/rzT+ubN+YYmdavTpG515s35BoCoqCj69OhCq8bezPpimiNv/1dfYHfQTsfyzOmfMefb\nr9K/Y1Kgx5tyFQ2eN+hqnGH4/F20fDeQTu+v5Ik2lahSPA8AE/o1YPTC3bQesZSAHSd4qUNVR7
mj\nZy7jN2oZfqOWMXD2Dkf61MB/aD4skLajltGwYiF8axZzut2X/Kvy1eqDAJQvkptXO93LfeNW0mr4\nUobNDXLki4yOdWznscnrndaVXPnHWlVk6Pc7efiTdbxgt/3x1hVZsOkoEdGxjvJfrT7Iy/5Vndad\nHvN+CeT3NVtYvHJDovSTJ46xdtVySpYq7Uj7ZsY0KlepRuDarcxftJT33n2H6OjopHV+9zX+Xbrh\n7m4F++defoOJU79Mkq9bzz6s2rybZeu2ExkZwdxvZyXJ4+mZi4mfzWTFhp18M38RI4cMICzsAn/t\n3UPOnJ4s/WMbu3Zu5+LFMEJOBbNzx1Y6dL7fUd6vfUeWBwYQER5+w/vImS+mfkrlqvcmSX/3vbGs\nWLeNFeu2UbO2FfTKlC3PT4tXsHrjTt4YOJj+r73otM7lS5dQo1Zt8uTNC0Cvhx/j+4W/Jcn36cQP\nadGqDRt37qNFqzZ8OvHDJHnOh4by8bgxBKxYx5KV6/l43BgunD/P6hVL8WnSlFUbdrBg3neAFTBi\nY2Op7V3XUb5P38f58vMp6d8xqdDjLYOJC18pbUYkp4hsEZFdIrJXREba6eVFZLOIHBCReSKS3U7P\nYS8fsNeXS1DXIDv9HxHpkJaPqcHzBp0Oi2TPfxcAuBJ1lf3BFylWwBOAikXzsPFf68x1zb4QOtcv\nlWJdEdGxrP/nDAAxsYY9/12ghF1XQhWK5ib6ahyhl60/3r4tyzNr1UHCwq2z17OXotL1GZIrHxMb\nh2f2bHhmd+dqbBx5PT1oX6c48zceTVR+0/6ztKxWFHe3jBmiGTlkIINHvJ94CEiEK5cvYYzhypXL\n5C9QgGzZkl59+HnBXNp37OJYbt7Kl9y5k/aSfdv5I2JNrfeu15Dgk8eT5KlQqTLlK1ojC8WKl8DL\nqzChZ8+SLZsHkZERxMXFcfVqDO5u7nw8bhRvvj0sUXkRoXGzlixfGnCjuyKJkyeOszxwCY889mSa\n8jds1IT8BQoAUL9BI4JPnnCa78f539Oh032O5SbNWjjKJRQY8CsPPfwoAA89/Ci/J+j5xFu9cimt\n2vhRoGBB8hcoQKs2fqxaEUg2Dw8iwsOJiYnBGAPAB2NG8PbQEYnK58qVi9Jly7Jj+9Y0fcabpceb\n68R/xpt9pSIK8DXG1AG8AX8RaQx8AEw0xlQCzgNP2fmfAs7b6RPtfIhIdaA3UAPwBz4TEXdSkWHB\nU0RiRSTIPivYISJJx0jSVs/zIvKYq9vnSqUL5aJmmQLsOBQKwD8nw+joXQKA+xqUomTBa4GwjNc9\nLH+3LT8NaE2jyl5J6ooPVH/8dTrJOp9KXuz+79rwWMWieahQNDe/vtOGgEG+tKlR1LEuh4cbgUP9\nCBjk62jL9ZIrP2vVAV7rfC+fPunDJwF/8+Z91fgk4G/s7zkHY+DwmcvUKJ0vjXsqKRGhb88udPJt\nwndfz3CkLw34lWLFS1C9Zu1E+R9/+gUO7P+bBjXK075FA0a8/zFubokP4+joaP47epjSZcqluR0x\nMTH8OH8Orfzap5gvaPtWYqKjKVu+ApWr3kvBQoXp1KYxbTt05sjhg5i4OGrVqZukXG3vemzd6HwE\n4EYMe+ctho0ai7gl/RMe9967tGlaj3cH9ScqKukJ1ZxvZ+Hb1vnJ9ZbNG6njXS/V7Z85c5qixYoD\nUKRoMc6cSXq8Bp88SYlS104ci5csSfDJk7Rq05Zj/x2ls19znn7+JQIDfqVWnboUK570OK3jXZ/N\nG9al2p600uPt7mEsl+1FD/tlAF9ggZ3+NRB/XaOrvYy93k+sCN0VmGuMiTLGHAYOAD6pbT8jJwxF\nGGO8Aexu8FigVXorMcZMSz3X7ZMrhzszX2zKsHlBXI68CsDrX21jTB9v3rivOoFBJ4m+GgdASFgk\n9QYu5vyVaGqXzc9XLzWj5buBjnLubsK0ZxsxY8UBjp69km
RbRfPl5FyC3mU2N6FCkTx0/2g1JQp4\n8vPANrQevpSLETHUf3sxpy5EUtbrHhb0b8W+E2EcPZO4zuTKnwiNoMdHawAoV+QeShTIxf7gi0x+\nyofs2dwY9/OfHAqxjtmzFyMplt+T3Ucv3ND+W7h4JcVKlOTsmdM88kBnKlWuSm3v+kye+CGznQwX\nrlm1jOo16zD350COHj7EIw90wqdxM8cwI0DoubPkzZu+gD5kwKv4NGlOoybNk80TciqY1194kglT\nZji+QEe8P96x/omHezD248l8+vE49u3dQ4vWvjz8mHXSW8irMCGngtPVpuQs/X0xXoWLUKduPdb/\nsSbx5xg+miJFixEdHU3/115g8v8+4q23hzrWr1u7mu+/ncUvgaud1n3hfCi58+RJV3tEBEnH/diy\nZcvG1JnfAlYQ6d29M19/v5Dhgwdw/Nh/PNSnr6P361W4CPv3/5Ou9qREj7eM5+KbJHiJyLYEy9ON\nMdMd27J6iNuBSsAU4CBwwRhz1c5yHChpvy8JHAMwxlwVkTCgkJ2+KcE2EpZJ1q0ats2L1X1GRFqL\niOMoFZHJIvK4/X6ciOwTkd0iMt5OGyEi/e33q0XkA3uc+18RaWGnu4vIRyKy1S77nJ1eXETW2j3g\nP0WkhZ33K3t5j4i8caMfKpu78OULTVm46SgBO64Ngx04dYleE/+g/XvL+WnLf46gFX01jvNXrCHX\n3UcvcOTMZSoWvfZF9fFj9Tl8+jLTl+93ur3ImFhyeFwbTTh5PoLAXSe5Gmv472w4h0IuUaGoNVR0\n6kIkAEfPXmHDP2eoVSZ/kvpSKh9vcPdajP3pT572q8x3fxxi1A+76X9fdcf6HB7uRCa4DppexUpY\nx6hX4SJ06Hw/QTu2cfTIIY79dwT/lg1p6l2F4JMn6NSmMadDTvHDnG/w79IVEaFchYqULlOOg9d9\nueb09CQqKjLNbZj44WhCz57l3dFJr9vFu3TxIk/06c6AoSOp17BRkvVL7d5T+JUrHD1yiKlffkfA\nop8c152ioqLImTNnmtuUkq2bNrB0yW80qFWZ55/sy/q1q3jpmX4AFC1WHBEhR44c9H6kHzu3X/ve\n2ffnbt565Xm++n4hBQsWclp3NvdsxMXFpdqGwoWLOL6cQ04F41W4cJI8xUuU4OTxa8OSwSdOULxE\n4t7lVzOm8WCfvmzfupk8efMy/as5TJ38P8f6yKhIPF2030CPt1vFhcO2Z40xDRK8pifcjjEm1u6k\nlcLqLSadBJBBMjJ4etpB629gBvBeSplFpBDQHahhjKkNjE4mazZjjA/wOjDcTnsKCDPGNAQaAs+I\nSHngYSDQ3rl1gCCssfGSxpiaxphaQNIr9lZ7nhWRbSKyLS7iotOGTOzXgP3BF/l8WeJg55Unh10H\nvNG5Gl/bE3wK5c5O/OXBsl73UKFIHo6etXpw73SrQR5PD4YmmPRzvX9PXkw0u3XJzhM0rWp9aRXM\nnZ0KRfNw9MwV8uXyIHs2N0e6T6VC/Hsy6WdIrny8JlW8OHUhgsOnL+OZ3Z04A3HG4Jn92oBFxaJ5\n+OtEWLJtTkn4lStcvnTJ8f6PVSuoWq0G91avyc5/jrEh6F82BP1L8RIlCVi1iSJFi1GiZGnWr10F\nwJnTIRw8sJ8y5conqjd//gLExsYSGZn6F9r3337J2pXLmfzFN0mG4+JFR0fzzGMP0aPXI3S+v0eS\n9TExMcz8/FNeeOUtIiMjHGfdsbGxRMdYJ0uHD+6nSrUaad85KRgyYgw7/zrMtj37mfblbJq1bMOU\nL6zRqPiAZozh98WLuLeadaJz/Nh/PNm3F5Onz6JipSrJ1l2xchWOHj6Uahvad7yP+XOs3uP8Od8m\nuk4ar7Vve1avXM6F8+etiUIrl9Pa99ow5YXz51n2ewAP9elLREQ4bm5uiAiRERGOPIcO7OdeF+03\nPd7uXsaYC8AqoAmQX0
Tiv6RKAfE9mxNAaQB7fT7gXMJ0J2WSlZHBM8IY422MuRfrIuw3knJfPgyI\nBGaKSA8gualiP9r/3w6Us9+3Bx4TkSBgM1ZXvDKwFXhCREYAtYwxl4BDQAUR+VRE/AGnkdEYMz3+\nbMfNM2+S9T6VCvFQ03I0r1bE8ZMQv1rWDNnuPqXZMNqf9e/5ExIWyffrjwDQuEphVo1oz4p32zHj\nhSYMnL2dC1diKF7Akze6VKdKibwsH2bV9UiL8km2uWn/WWqWvtaDXLU3hPOXo1k7qgM/9m/NqB92\nc/5KNJWL52Xp0LasHN6OH/u35tMlf/NvsPWlMbBrDTrUKZ5i+XhvdKnOhN/2AfDt2kOM7u3Nd681\n57Ol1pl34bw5iIyJ5czF9E1UinfmTAgPdPalQ8uG3NeuOb7t/GmdyjWgV/sPYvuWTbRrXp8+3Tsy\naPhoChZKeu24ZZu2bN107ZrPA519eeHJR1i/dhU+NSuyZuUyAAa/9QpnzoTQzb8V/q18+N9HYwDY\ntXM7A197HoDffl7Alo3rWPD9t/i38sG/lU+in3p8PXMaPXv1xTNXLqrVqEVERDjtmtenVp165Mtn\n/XttWLcGv/Ydb2g/pceLT/ejdZO6tG5Sl9BzZ3ljwGAAJnwwhvOh53jnrVfwa96A9q0aOy3ftkNH\nNqxb61h+/sm+dGnXkoP7/6VutfLM+cY613zlzQGsWbWCJnWrs3b1Sl55YyAAQTu28+bLzwFQoGBB\n3hg4GP82TfFv05Q33x5CgYIFHXVP+HAMr/V/Bzc3N1r7tWfzhvW0blKXnr0fceTZumkDLX3bumTf\n6PF2C92a2baFRSS//d4TaAf8hRVEe9rZ+gG/2O8X2cvY61caa8baIqC3WLNxy2PFji2pfkRz/SwQ\nFxGRy8aY3AmWQ4BaQBVgsDGmk50+A1hnjPlKRHIAflgfrJwxxtcOfJeNMeNFZDXQ3xizTUS8gG3G\nmHIishBrLDzQSTtKAJ2Bl4AJxphvRCQ30AF4FAg1xqQ4ZdGjcEVTsOu4m9wjrjG6tzdLd51krZMJ\nRbfac+0qcykihjnrjjhdv+3D+52m3wp7du1kxtRJfDLN6cDCLXXmdAivPNuPuT//nqb88aMGt0PI\nqWBeee5J5v+y5La1Id6eXTv5fMonTJ7+VZrLxM8vuNUy8/HW2bcpu4O2u+wipSu/L0/PfGi7MaaB\ns3UiUhtrApA7VkdwvjFmlIhUAOYCBYGdQF9jTJSI5AS+BeoCoUBvY8whu64hwJPAVeB1Y0yqfwC3\n5A5DInIv1gc8BxwFqtuB0hMrWK6zA1ouY0yAiKzH6iGmVSDwgoisNMbEiEgVrG63F3DcGPOFvb16\nIhIARBtjForIP8Bsl33QW+CTgL+oV75g6hlvgbDwGH647ucrd4paderStEUrYmNjHb+9u11OHj/G\nsPc+uK1tSKuixYrzSL8nuXTxYqJJMbdD6LlzDBwy4ra2Ia30eLv1jDG7sQLh9emHcDJb1hgTCTyY\nTF1jgDHp2X5GBk9PexgVrA54P2NMLHBMROYDfwKHsc4MAPIAv9hnBwK8mY5tzcAawt1hDw2fwZqe\n3BoYICIxwGXgMaxZVLNEJP70ftCNfbzb48zFKAJ33Z5ZdNebaw9H36l6PfL47W4CAHXqOT1xvmN1\n7eH0++WWa+Wi4dpbRY+3a1w42/aOlWHB0xiT7OmXMWYgMNDJKmdnCyMSvG+d4P1Z7Guexpg4YLD9\nSuhrrv2uJ6HUf8imlFLqhmSF4Kl3GFJKKaXSSZ+qopRSymVcfJOEO5YGT6WUUq5198dOHbZVSiml\n0kt7nkoppVxHssaEIQ2eSimlXCorBE8dtlVKKaXSSXueSimlXCor9Dw1eCqllHKtuz92avBUSinl\nWlmh56nXPJVSSql00p6nUkoplxHROwwppZRS6ZYVgqcO2yqllFLppD1PpZRSLpUVep4a
PJVSSrnW\n3R87ddhWKaWUSi/teSqllHIpHbZVSiml0iOLPFVFh22VUkqpdNKep1JKKZcRIAt0PDV4KqWUcqWs\ncYchHbZVSiml0kl7nkoppVwqC3Q8NXgqpZRyLR22VUoppVQS2vNUSinlOqLDtkoppVS6CODmdvdH\nTx22VUoppdJJe55KKaVcSodtlVJKqXTS2bZKKaWUSkJ7nkoppVxHZ9uqeNVK5mfRuPtudzMynSbD\nltzuJmRKm8d0ut1NyLSywCRPl3N1oLNuDH/3/0PosK1SSimVTtrzVEop5UJZ46kqGjyVUkq5VBaI\nnTpsq5RSSqWX9jyVUkq5lA7bKqWUUumRRX6qosO2SimlVDppz1MppZTLZJXfeWrwVEop5VJZIHbq\nsK1SSimVXho8lVJKuZSIuOSVyjZKi8gqEdknIntF5DU7fYSInBCRIPvVKUGZQSJyQET+EZEOCdL9\n7bQDIvJOWj6jDtsqpZRyqVs0bHsVeMsYs0NE8gDbRWSZvW6iMWZ84jZJdaA3UAMoASwXkSr26ilA\nO+A4sFVEFhlj9qW0cQ2eSimlMh1jTDAQbL+/JCJ/ASVTKNIVmGuMiQIOi8gBwMded8AYcwhAROba\neVMMnjpsq5RSynXEpcO2XiKyLcHrWaebFCkH1AU220kvi8huEflSRArYaSWBYwmKHbfTkktPkfY8\nlVJKuYz1UxWXVXfWGNMgxe2J5AYWAq8bYy6KyFTgPcDY//8YeNJlLbJp8FRKKZUpiYgHVuD8zhjz\nI4AxJiTB+i+A3+zFE0DpBMVL2WmkkJ4sHbZVSinlQq4Zsk3DbFsBZgJ/GWMmJEgvniBbd+BP+/0i\noLeI5BCR8kBlYAuwFagsIuVFJDvWpKJFqX1K7XkqpZRyqVs027YZ8CiwR0SC7LTBQB8R8cYatj0C\nPAdgjNkrIvOxJgJdBV4yxsRa7ZWXgUDAHfjSGLM3tY1r8FRKKZXpGGPWYV1ivV5ACmXGAGOcpAek\nVM4ZDZ5KKaVcSu9tq5RSSqWHPpJMKaWUUs5oz1MppZTL6CPJlFJKqRuQFYKnDtsqpZRS6aQ9T6WU\nUi6VBTqeGjyVUkq5lg7bKqWUUioJ7XkqpZRynSzyO08NnkoppVxGSP2m7ncDDZ5KKaVcKgvETr3m\nqZRSSqWXBk8XuRh2gRee6INfkzq0berNjq2bALhwPpS+PTvTxqcmfXt2JuzCeQA+nzyBTq0b0al1\nIzq0qE/Fovdw4XxoknqNMTzc3Z9Lly4CMPDV52hQrQwdWtRPlO/jsSPxb9WQTq0b8eiDXQg5ddJp\nOysWvcex3af79nSkv/784/i3ashHo991pH368TiWBlx7rN2KpQFMGDfqBveQpXh+T+a/1oyVQ31Z\nMdSXp1pXcKyrXiofi/q3JNaUU1YAACAASURBVHBQGxYPbIV32fyOdU0qexE4qA0rhvqy4PXmjvSn\nWldg+RC7rjYVk93uU20q8oDPtefdPtGqAquH+bFiqC9DutUAwLtsfgIHtSFwUBuWDmqDf53iTutq\nVtWLJW+3JnBQG358swXlCt/jqHP5EF++ebExHu7WqXfDigUZ/kBNR9mCubMz+6Um6dllTjWuXQW/\npvVo36Ihndpcq+/8+VD6dO9I8/rV6dO9Ixfs4y0wYBFtm9V35N+ycb3TeiMiInigc1tiY2MBeKRn\nF6qXLUK/Xt0S5Zs1/TOa1atGqQI5CD131mlde/fs4v72LfFt4k3bZvVZ9OMPjnUvP9OPts3qM27U\nMEfaJ+PH8vviXxzLy39fzEfvj0znnklZ4zpV8GtWn/Ytfejk29SR/t67g2jVqDZtmzfgqUcfIizs\nAgDnQ8/x4P3tqVK6EEMGvp5i3c/268PRI4cA+GD0uzSsWZEqpQslyrNpwx/4t25M2cL38NsvPyZb\nV3R0NANff5EWDWvSqlFtFi/6CYAvp3+GX9N6PPpQ
V6KjowHYsmk9IwYPcJQ9d/YMj/S8Lx17xfXc\nRFzyupNp8HSRkYP708q3PSs27iJg9RYqVbkXgKmTxtOsRWtWbfmTZi1aM3XSeACee/lNAlZvJmD1\nZgYMHUWjpi3IX6BgknpXLfudajVqkSdPXgAe6P0oX839JUm+Z19+g9/XbCVg9WZ823Vk0vixTtuZ\nM6enY7szZi8A4K+9e8iR05Pf12xlV9B2Ll4M4/SpYIJ2bKV9p/sdZX3bdWRFYAAR4eE3vJ9i4+IY\n9eOf+I5eyf0fraVfywpULpYHgCHdajAx4G86jF3Fx4v/Zkg3K+jk9fRgTK/aPDFtE36jV/LcjC0A\nVC2ehz7NytHlwzW0f38VbWsWdQSyhNzdhN5NyvDztuMANK3sRfvaxWg/dhV+o1cybfl+AP4+eYlO\nH6ymw9hV9J2ygXF9vHF3S/oHPLaXN698tY0OY1fx89bjvOpfFYDuDUvR7v2VbDsUSqtqRQF4zb8q\nnyz5x1E29HI0IWGRNKiQ9N86vX74dSlL/9hKwKqNjrQpEz+iWUtf1m3fR7OWvkyZ+BEAzVv6smzd\nNpb+sZXxn05nwGvPO61z3uyv6HhfV9zd3QF44ZU3+WTal0nyNWzclLk/L6FU6bLJts/T05P/TZ3J\nyo1BzF7wKyMG9ycs7AL7/txDTk9Plq/fzq6d27gYFkbIqWB2btuCf+eujvJ+HTqx/PfFN3W8OfPD\nokCWrt1CwMoNjrSWrX1ZsX4Hy9dto0LFyky291uOHDkZMHg4w0aNS7HOf/7aR2xcLGXLWSeDbTt0\n5rfl65LkK1mqNBOmfEG3nr1SrG/Sx+PwKlyYP7b+yaqNQTRp1gKAn36Yy7J122jg05g1K5dhjOGT\nj8by2oBBjrKFvApTtGgxtm7akFz1GU7ENa87mQZPF7h4MYwtm9bRq+/jAGTPnp28+axe07Ilv/FA\nr74APNCrL0sDfk1S/tcf53Nfj4ec1v3Lwrm063jtLLJR0+ZOg2x8cAWICA9P1wV7Dw8PoiIjiIuL\n42pMDO5u7kz44D3eGDg0UT4RoXGzFqxYmq7H3iVy+mIUfx4LA+BK1FX2h1yiWP6cgNXLzp3Tw/o8\nObMREhYBQLcGpVgSFMzJ89byucvWGXelYnkIOnKeyJhYYuMMm/afo6OT3mKzKl7sORZGbJwB4NGW\n5ZmydD/RV+MS1RdfD0AOD3eMMU4/g8GQx9Nup+e1doqAh7sbntnduRoXxwM+pVm17zQXwmMSlQ/c\nHUz3hqXSve/SYumSX3mwj3W8PdinL4H2yME9uXM7jomI8CvJHh8//TCXDp2uHW/NW/lyT548SfLV\nrO1N6TLlUmxLhUpVqFCxMgDFipegkFdhzp09g4dHNiIjrOMtJuYq7u7ujB87ircGvZuovIjQpHlL\nlgfe+PGWVq1825EtmzUFpF4DH4JPWidaue65B5/GzciRI0eK5X9a8D0dOnZxLNdv2IiixZIei6XL\nlKN6jVq4uaX81Tvvu695+fWBALi5uVGwkBdg/Y3ExMQQERFOtmweLJw/hzZtO1Dguu+EDp3v56cF\nc1P51Opm3JLgKSJDRGSviOwWkSARaZRMvgYiMinBsoeIHLbLBInIKRE5kWA5ezrbMVpEUh57uQHH\njx6hYCEvBrzyLJ3bNObt118g/MoVAM6eOU0R+4+ocNFinD1zOlHZiPBw1qxcRscu3ZLUC7Bty0Zq\n1qmbpnZ8NGY4TetU4peFc3nj7WFO80RFRXJ/22Z092/pGJKtVOVeChbyootvE/w6dOLo4YOYuDin\n263lXY+tm5wP+aVXqYK5qFkqHzuPWEOLIxbsYWj3GmwZ3Z5hPWoydtE+ACoUyU2+XB788FpzAt5u\n7Rh+/efkRXwqFiL/PR7k9HDHt0ZRShTIlWQ7DSsWYs9/FxzLFYrkplGlQvw6oCULXm9OnTLXhofr\nlivAiqG+LB/i
y6C5uxzBNKEB3wXxzQtN2Dq6Aw/4lGbKUqvn+tWawyzq35KSBXKx9WAoDzUuw9dr\nDiUpv/voBRpVLJQkPT1E4OEenenYujGzv5rhSD97+rTjS7tI0WKcPX3teFvy2y+08qnFY7268fGn\n05PUGR0dzX9HD6caFG/Ezu1biYmJplz5ilSuWo1CXl74t2pEO/9OHDl8kLi4OGo5Od5qe9dn88ak\nPbgbJSI8/EAXOrZpkmi/JTTvu69p07ZDuurdunkjtbzruaKJjiHjj94fiX/rxjz3+MOcOR0CwOPP\nPM/97Vty4vgxGjZqwvw539Dv6aSjCLW967E5maH5jGb1GsUlrztZhs+2FZEmQBegnjEmSkS8AKdB\nzxizDdiWIKk58Jsx5hW7rhHAZWPM+Ixtdfpcjb3K3t1BjBg7gbr1fRg5+C2mThrPW4OGJ8rn7IBY\nEbiY+j5NnPYmAcLOnyd37qRn/s4MGDKSAUNG8tn/PuKbmdOcBtB1O/+hWPGS/HfkMA/38KdqtZqU\nLV+Bd8dc26VPPfIA73/8KZMnfMBfe3fTvLUffR59EoBCXkU4fSo4Te1JSa4c7kx/xocRC/ZwOfIq\nAI+1LM/IhX8SEHSSLvVKMP6RuvT5dAPZ3IXaZfLTa9J6cnq4s6h/S3YcCeVAyGU+W7afOS83Izz6\nKntPhDkNdkXy5mT/qUuOZXc3IX+u7Nz30Vq8y+Zn6lMNaTp8GQA7j5zHb/RKKhXNzf8eq8+qvSFE\n2T3UeM/4VuSxqRvZeeQ8z7etxPAeNRkwJ4iFW46xcMsxAF7vWJUvVx+iTY2i9GxUmpPnIxj1458Y\nA2cvRVE0n+dN7b8fl6yieImSnD1zmj7dO1GpclUa20N78a4/3jp26UrHLl3ZtP4PPnp/BHN//j1R\n/tBzZ8mbL99NtcuZkFPBvPb8E0z8bKajxzVy7MeO9Y/37s64iVOYNH4c+/bupkVrPx7p9xQAXoUL\nE+KC4y3ejwErr+23Hp2pVKUqjZte22+TPh6He7Zs9HiwT7rqPR1yikJ27/BmxV69SvDJE9T3aczw\nMR8yfconvPfuO0yaNouevR6hZ69HAJj44RiefPYlVi0PZMHc7yhRshTvjv4ANzc3vAoXcel+Sy8n\nVzvuOrei51kcOGuMiQIwxpw1xpwUkYYiskFEdonIFhHJIyKtReS3BGX9gSUpVS4i/ezyQSLymYi4\n2emdRWSHXf/SBEVqicgaETkkIi+55AMWL0mxEiWpW98HgI73dWfv7iAAvApfCzanTwVTyKtworK/\n/vwD9/d4MNm63bNlIy4uLtn1znTt2Yvff/vZ6bpixUsCUKZceRo3bcnePUGJ1i9d8iu16tTlypUr\n/HfkEFNmfseSRT85rjtFRUWSw/PmvvizuQnTn/bhp63HWLLr2h94z0ZlCAiyJjr9tuMk3mULABB8\nPoI1f50mIjqW81ei2XzgHNVLWl/yczcepdMHq+k5cR1h4dEcOn05yfYiY2LJ4eHuWD51IYIl9naC\njl4gzlgTeRI6EHKZK1FXqVoib6L0grmzU63ktd7you0nqH/d9cui+XLiXbYAgbuDec6vEi/M3MrF\niBiaV7X+7XN4uBMZE5v+HZdA8RLWv6NX4SL4d+lK0I6t1nKRa1+aIaeCKVS4cJKyjZu14L8jh5NM\n9Mnp6UlUZNRNtet6ly5epF+vbgwcOor6DZMOOAUGLKKWdz3Cr1zm6JFDTJs1h4BFP1473iIjyZnz\n5o63hBLtt873E7T92rn6/DnfsDxwCZM//yrdvZ6cOT2Jiop0SRsLFCyEZ65cdLrPGo3q0rUHf+5K\n/Hd6KvgkQTu24d/5fj6f8j+mfjmbvPnysW7NSsD1+00ldSuC51KgtIj8awe3VvZw6zzgNWNMHaAt\nEOGkbBtgdXIVi0hNoDvQ1BjjjdWT7i0ixYCpQHe7/t4JilUB2gGNgVEi4o4TIv
KsiGwTkW3nzp1J\n8QMWLlqM4iVKcfDAvwBs+GM1lapaE4ba+ndm4bzZACycN5t2Ca6LXLwYxuYN62jnn/zMuAqVKvPf\nkcMpbh/g8MEDjvfLlvxGhUpVkuQJu3CeqCjryzH03Fm2b9lI5arVHOtjYmKY9flknnv5TSIjIhxf\nIHFxscTERNvb2U/Ve6un2p6UjO9blwOnLvPFyoOJ0kPCImlS2Tp7b1bVi8NnrKHvwN3BNKxYCHc3\nIaeHO97lCnDA7kkWsoNeiQKedKxTwjEpKKH9py5RzuvaRKLfdwXTtIq1nfJF7iF7NiH0cjSlC+Vy\nTBAqWdCTikVzc+xc4skqYeEx5PXMRvkiVn0t7y3CgVOJA/aALtUYv/gvAHJ6uGOAuDjwzG4dahWK\n3MM/wRfTudeuCb9yhcuXLjner125nKrVrBnD7fy78MP31vH2w/ezaW9fLz986IDjGu6eXTuJio6m\nQMHEQ8f58xcgNjaWyEjXBIHo6GiefvRBevZ+hC5deyRZHxMTw4ypk3nx1beIjIh0zBCJjY0l2j7e\nDh3c7/hsNyvJflu1wlH3quVLmTppArPmLMAzV9Kh/9RUrnIvRw4dTD1jGogI7Tp0ZuO6NQCsW7sq\n0d8pWEO6/e1rxJGRkYgIbm5uRERYX6PWfru5v9ObocO2LmCMuSwi9YEWWMFwHjAGCDbGbLXzXITE\nNxMWkZJAqDEmpal2bYGGwDa7rCdwDCsQrzLGHLXrT/gbkN+MMdHAaREJBQoDp5y0ezowHaC2d33n\nM0cSGDl2Am88/wTRMdGUKVuOjyZZ15ReeLU/Lz/dl/nffU3J0mWYPGO2o8zSxYto0dqPXPcknSEa\nr027jmxav5ZyFayfYbz67GNsWv8H50PP0qR2RV4fOIxefR/nw/eGcujgfsTNjZKlyjBmvHXpeHfQ\ndr77agYf/G8qB/79myH9X0Hc3DBxcTz/av9Ef5TfzpzGA7364pkrF9Vq1CIiIhz/lg1o3baDYwLU\npnVrGTD0xn+u0rBiQXo2KsNfJ8IIHNQGgA8W7WPl3hAGztnJyJ61yeYmRF2N5e05OwGrF7h6XwjL\nBrchzsD3G47yT7D1JTj9GR8K3JOdq7GGIfN3cTEiJsk2V+0L4ZN+137aM2/jUT7uW4/lQ3yJuRrH\n69/sAMCnYiFebF+Zq7GGuDjDkHm7OX/F+hL/5sXGDPguiJCwSAbOCeKLp32IM1YwfWv2DkfdNUpZ\nPeL4SVE/bTvO8iG+BJ+PYKo9q7dplcKs+DPkhvfhmTMhPN3XmmAWG3uVbg/0dlyje/mNATz/xMPM\nnT2LUqXLMHXWHAACFv3MwnmzyZbNg5yenkydOdvpl1NL37Zs3bSeFq39AOjR0ZcD+//hypXLNKhR\ngfGTptHarz0zP5/M1EkTOBNyinbNG9CmnT/jJ01j187tfDvrC8ZPmsavPy1g84Z1nA8NZf6cbwGY\n+NkMatSqA8DXM6byYB/7eKtZi8jwcPya1sO3nT/57ONtwx9reOfd0Te8r5Lst0etWa6xV6/SrWcv\n2rRtD8DQt18nOiqKPj06A9akoXETJgPWz1suXbpETEw0gYt/Zc7C36hyb+Jg5tven43r1zr22+jh\ng/l5wTwiwsNpUKMifR59nLfeGUbQjm08/WgvwsLOs+z3ACaMe4+VG63jvH1LH5autWaSDx4xmtee\nf5LhgwdQyMuLCZOvXaP+0x7Vir9G3P2BXrRtVp/iJUvxwqtvOfabX/uOLtlvN+IOj3suIcnNKMyw\nDYr0BF4Cshtjml23rjXQ3xjTRUSeAvIaYyYmWD+CBNc8ReQNoKAxZth19XQHuhlj+l2XPhprCPl/\n9vLfQFtjTNLuSgK1veubRctvz8X306eCefPlp5m9YPFt2X5CZ06H8Przj/PdjymOpDs0H/576plu\nkRnP+DDm572O3uzttOCN5jw1bTNhTgI9wO
YxnW5xi67Zs2snX3w2iUmfz7ptbYh35nQILz/zGPN+\nCUx7oVv8fRYvIiKCh+7vwM+/r3L8zOd2eqCzHzO/W0D+/AVSzdvJtym7dm53WbjLV7aaaT74a5fU\nFfB8o+3GmAYuqczFMnzYVkSqikjlBEnewF9AcRFpaOfJIyLX94JTvd4JLAcesichISKFRKQMsAFo\nIyJl7fSb/1HdbVKkWHF6933CcZOE2+nkiWMMGZny793uVO//so8i+XLe7mZQMHd2vlhxMNnAebvV\nqlOXpi1aOW6ScDudOH6Md0d/eLubkSaenp689c4wTgWfuN1N4dzZMzzz4mtpCpwZQbDvb+uC/+5k\nt+LetrmBT0UkP3AVOAA8C8yy0z2xhlnbxhewr0NWMsb8nVLFxpg9IjISWG5PFIoBnjfGbBWRF4Bf\nxBqbOgncvjGMm9SlW8/UM90CderekSeAaXLo9GWnk4lutdDL0QTuvn2zINOit/175dvNu17mOt5a\n+7W73U0ArJsk+He+P/WMGSgrzLa9Fdc8twNNnaw6izVpJ6HVwGoRaQ5sdlLXCCdpc4A5TtIXA4uv\nSxt63fK9KbdeKaWUSuqOfKqKMWYd4LpfRiullLo1MsFMWVe4I4OnUkqpzCsLxE69t61SSimVXtrz\nVEop5TICd/zjxFxBg6dSSimXygKxU4dtlVJKqfTSnqdSSimX0tm2SimlVDpYz/O83a3IeDpsq5RS\nSqWT9jyVUkq5VJaebSsieZNbB9ceI6aUUkoldPeHzpR7nnsBQ+L9EL9sgDIZ2C6llFLqjpVs8DTG\nlL6VDVFKKXV3yAqzbdM0YUhEeovIYPt9KRGpn7HNUkoplRlZdxhyzetOlmrwFJHJQBvgUTspHJiW\nkY1SSiml7mRpmW3b1BhTT0R2AhhjQkUkewa3SymlVGakjyRziBERN6xJQohIISAuQ1ullFIq08oC\nsTNN1zynAAuBwiIyEush1R9kaKuUUkqpO1iqPU9jzDcish1oayc9aIz5M2ObpZRSKrPSYdtr3IEY\nrKFbvaWfUkopp+Jn297t0jLbdgjwPVACKAXMEZFBGd0wpZRS6k6Vll7kY0BDY8xQY8wQwAd4PENb\npZRSKtMSe8btzb5S2UZpEVklIvtEZK+IvGanFxSRZSKy3/5/ATtdRGSSiBwQkd0iUi9BXf3s/PtF\npF9aPmNagmcwiYd3s9lpSimlVBLiolcqrgJvGWOqA42Bl0SkOvAOsMIYUxlYYS8DdAQq269ngalg\nBVtgONAIq3M4PD7gpiSlG8NPxLrGGQrsFZFAe7k9sDX1z6WUUkplDGNMMHZHzhhzSUT+AkoCXYHW\ndravgdXA23b6N8YYA2wSkfwiUtzOu8wYEwogIssAf6zLlclKacJQ/IzavcDiBOmb0vjZlFJKZTEi\nLn0kmZeIbEuwPN0YMz3pNqUcUBfYDBS1AyvAKaCo/b4kcCxBseN2WnLpKUrpxvAzUyuslFJKXc+F\nv1Q5a4xpkPK2JDfWvQheN8ZcTHit1BhjRMS4rDUJpGW2bUURmWtfYP03/pURjVFKKaXSSkQ8sALn\nd8aYH+3kEHs4Fvv/p+30E0DCp4WVstOSS09RWiYMfQXMwrp+2xGYD8xLQzmllFJZ0C2abSvATOAv\nY8yEBKsWAfEzZvsBvyRIf8yeddsYCLOHdwOB9iJSwJ4o1N5OS1FagmcuY0wggDHmoDFmKFYQVUop\npZIQcc0rFc2wnvblKyJB9qsTMA5oJyL7se6MN87OHwAcAg4AXwAvgvWwE+A9rImwW4FR8ZOHUpKW\nOwxF2TeGPygiz2N1Z/OkoZxSSimVIYwx60j+Fy1+TvIb4KVk6voS+DI9209L8HwDuAd4FRgD5AOe\nTM9GlFJKZQ2CuHK27R0rLTeG32y/vcS1B2IrpZRSSaVtyDXTS+kmCT9hP8PTGWNMjwxpkVJKKXWH\nS6nnOf
mWteIO5+4m5Mvlcbubken8M7Hr7W5CpuTlP/Z2NyHTOvBj/9vdBEUWfySZMWbFrWyIUkqp\nu0NWeG5lVviMSimllEul9WHYSimlVKqELD5sez0RyWGMicrIxiillMr83O7+2Jmme9v6iMgeYL+9\nXEdEPs3wlimllMqU3MQ1rztZWq55TgK6AOcAjDG7gDYZ2SillFLqTpaWYVs3Y8zR68awYzOoPUop\npTIx6760d3i30QXSEjyPiYgPYETEHXgF0EeSKaWUcupOH3J1hbQM274AvAmUAUKAxnaaUkoplSWl\n5d62p4Het6AtSiml7gJZYNQ29eApIl/g5B63xphnM6RFSimlMi0BfaqKbXmC9zmB7sCxjGmOUkop\ndedLy7DtvITLIvItsC7DWqSUUipTywr3fb2R2/OVB4q6uiFKKaXuDllg1DZN1zzPc+2apxsQCryT\nkY1SSiml7mQpBk+xfulaBzhhJ8UZY5J9QLZSSqmsTUSyxIShFIem7UAZYIyJtV8aOJVSSqXIusvQ\nzb/uZGm5rhskInUzvCVKKaVUJpHssK2IZDPGXAXqAltF5CBwBetnPMYYU+8WtVEppVQmkhVuz5fS\nNc8tQD3g/lvUFqWUUpmc3iTB2gcYYw7eorYopZRSmUJKwbOwiLyZ3EpjzIQMaI9SSqlMLgt0PFMM\nnu5AbuweqFJKKZUq0WuewcaYUbesJUoppVQmkeo1T6WUUio9JAuEj5SCp98ta4VSSqm7gjXb9na3\nIuMle5MEY0zorWyIUkoplVncyFNVlFJKqWRlhZ6nBk+llFIuJVngtypZ4ZmlSimllEtpz1MppZTL\nZJUJQxo8lVJKuU4meJyYK+iwrYtFRkbStmVjWjSqR5MGtRk7ekSSPO/0f53SRfI5lo8f+4/7O/rR\nqkkDmvvUZdnvAU7rPhUcTO8HrPv0/zB3Di0b13e8CuX2YM+uoCRlnnysjyNPnWoVadm4PgCbNq6n\nuU9dfJs34uCB/QCEXbhAj/v8iYuLc5Tv3rk9F86fv9HdkWbHjx2jY3tf6tepQQPvmkz59BPHulEj\nhtGofh2aNKzL/Z06EHzypGPd2jWradKwLg28a9KhbWundRtj6NTBj4sXLwJQvUp5fOrVpknDurRo\n0tBpmXnff0ej+nXwqVcbv1bN2LN7FwBnzpyhXZsWNKxbi19/+dmRv9cD3RK1a/Db/Vm9auUN74/r\n5fBw54/PHmfzF0+x/ctnGNqvRZI8H7/cjjOL+zuWs3u48+2wbvz57fOsndKPMkWtY863fjnWT3uC\nrTOeZv20J2hVt2yy250zvAfliucnt2d2Nk1/yvE69tPrfPRSWwD6dqjFfz++7lj3eKc6TuvyyObG\n5Dc7svvr5wj66jm6tagKwAvdG7Bt5jP8NPYhPLJZX0lNa5biwxfbOsp65cvFL+N6pXOvJdW4dhX8\nmtajfYuGdGrTxJH+288L8W3iTemCOdm1c3uScieO/UeVUgWZ9qnzu5IaY3jo/g5cso+xt15+ljqV\nS+HXJPHTHFPbTkKxsbF0aOlDv17dHGkvP9OPts3qM27UMEfaJ+PH8vviXxzLy39fzEfvj0yxbnXz\nNHi6WI4cOfg5YDl/bN7B2o3bWbEskK1bNjnW79yxLUkwGv/B+3Tr8SBrNm5jxtff0f+NV5zW/dmn\nE3nsiacBeLD3w6zdtJ21m7YzbcZXlC1Xnlp1vJOU+fKb7x357uvanS5drT/EzyZNZN5Pv/L+hx8z\na8bndjvG8OaAd3Bzu3ZYPNSnLzO/mHpzOyUNsmXLxtgPxrN9115W/bGRL6Z9xl9/7QPg9TcHsHn7\nLjZu3Yl/p86MHWPd+OrChQu88epLzF/4C9uC/uTbOfOd1h24JIBatWqTN29eR1rA0pVs3LqTPzZu\ndVqmbLny/L58NVt27ObtQUN55cXnAPhh3vc89cxzrFm/mSmTrQAf8Nuv
1Pb2pniJEo7yz7/4ChM+\n+uDmd4wtKiYW/ze/o9EzM2n0zEza+1TAp9q17dWrUoz8eXImKvN4xzqcvxRJzUen8emCrYx5tg0A\n58Ii6DnkBxo+PYNnxv3Gl4OcPzipWjkv3N2FI8EXuBwRTeNnZzpe/4WE8fMf/zjyLly9z7Huq4Bd\nTut7+5FmnLkQTu1+n1P3ic/5Y9d/APT2q0HDp79g094TtGtYAYB3Hm3O2G/XOcqeDQvnVOhlmtQo\ndQN7L7Effl3K0j+2ErBqoyOtarXqfPHNPBo1TXpSAjBy6EDatO2QbJ0rly6hes1a5LGPsQf7PMrs\nBb8myZfadhKaOe1TKlW517G878895PT0ZPn67ezauY2LYWGEnApm57Yt+Hfu6sjn16ETy39fTER4\neKrbyChuIi553ck0eLqYiJA7d24AYmJiuBpz1THzLDY2luFD3mbE6HFJysSfsV68GEax4sWd1v3r\nLz/h1y7pH/DCH+bSo+dDKbbLGMPPPy7ggQd7A5DNw4OI8HDCw8Px8PDg8KGDnDhxnOYtWycq17Hz\nfSycPy/1D36TihUvjndd6xGxefLkoeq91Qg+cQIgUdALD7/i2J/z587h/m7dKV2mDABFihRxWve8\nuXPofF9Xp+uS07hJfsr8XAAAIABJREFUUwoUKABAw0aNOXHiOAAeHh6Eh4cTFRWFu5s7V69eZcqn\nn/DGWwMTlS9TtiyhoecIOXUqXdv9f3v3HR9F0QZw/PekEKoECAgCivROCIEA0pHeBAUEEekClhc7\nKlIEKaKvjZeuUhSkCNJVegm9N6UGBKSFJpACSeb9YzfHhUvngCDPl08+uZudmd0bNjc7ZWcTcz3i\npnUMXh54eXlijBXu4SEMfbkuH46L29Jt+lRRfvx9DwBzVv9BrYACAOw6fJbTF64BsP/YedKn8yKd\nt6fL/p6vW4oFwQddwgvny04u30wE7z6RouN/qVE5Rk5bD4AxcOGfcMDq4vP28iSjjxc3o2JoV680\nv28+wqWrEXHSL1h3kLZPl0rRPpOrSLESFCpSLN5tvy6aR/7HC1C0eMkE08+Z9RP1GzdzvK/8VHV8\n7fMnuftx9vepkyz/fQntO3Z2hHl7exERHk5MTAw3b0bh6enJZ8M+5q33+8dJKyJUqVaDZb/F34N1\nt8WOebrjJy3TyvMuiI6OpkblChQrkIdadeoSWDEIgAlj/0fDxs1cKsf3PujPzJ+mUarIE7Rt1YwR\nn3/lkufxYyH4+mbDx8fHZdvcn2fRyq4UE7IheC25cj1KocJFAHjjrffo1b0TX34+gm49X2HIoI/4\nsL/rUsa+2bJx40YkFy9cSPbnv1PHjx1j164dBFYKcoQN7P8hxQo9zozp0+g3wDrOw4cOcvnSJRrW\nq021yoFM+2FKvPlt3BBM+YAKjveC0KJJA6pVDuS7ieOTPJ4p339L/QYNAWjzfHsWLZhP88b1efu9\n9xk/djTtXuhAxowZXdL5ly/Phg3BKfrsifHwEDaO78pfc/qwYmsIW/60uol7PRPIog0HOXPxepz4\nj/ll4eQ566IsOsbwz/VIcjySIU6cljWKs/PQGW7cjHbZX5XS+dlx0LXyb127JLNX7Y8T1qJ6cTZP\n6Ma0Aa3IlzOLS5qsmazzdkDnGqwf14UfB7QkV7ZMAIz5ZRurR71E/lxZ2bD3JB0blmXsL65dmtsP\nnuapMvkTLJ/kEIH2rZrQqFZlfpg0Mcn4169dY/RXn/Pme/0Sjbd10wbKlgu4o2NzNvCDt/lw0DDE\nqReoSLES5PDzo2HNIOo1bMyxkCPExMRQplx5l/Rl/SuwacM6l3DlPmlqwpCIfAi0B6KBGOBlY8ym\nO8yzFnDDGLP+zo8weTw9PVmzcRtXLl/mxXbPsn/fXrJlz868ubNZ8KvrONjPs36iXYeOvPqfN9m8\naQM9u3Vi/ZZdcbpPz5w5TQ4/P5e0
W7dsIkOGjJQsVTrRY/p51gxatb41ZlSmnD9LV1lFsn7dGh59\nNDfGGLp0bIe3lzeDh40k16OPAuCXMydnTv9N9hw5UlUeKXHt2jVeeP45Rnz2RZwW58CPP2Hgx5/w\n2afDGDdmFP36DyIqKoqdO7az8NdlhIeHU7dGVSpWqkyRokXj5Hnp4kWyZLn1hb505Voey5uXc+fO\n0bxxfYoWK0616jXiPZ7Vq1YyedJ3LF25FoCsWbPy87yFVr6XLvHfkSOYPmsOr/bqzqVLl3m9z5sE\nVbbG0vxy5oozDnqnYmIMlXt8S9ZMPsz4+DlKFsjJpavhtKpZnPpv/JDi/EoU8GNIj9o0fXd6vNtz\nZ89E6GXXrr/WtUvSddh8x/vFGw4zc8V+btyMpmvT8kzo24xGb02Lk8bL04N8uR5h475TvDdmOa8/\nV4lhPevQddgCpi/dy/SlewF4/8VqjJ6zlQZBhXihXhlOnv+H98Yswxg4dymMPDkyp/hzOpuzZCV5\nHstL6PlztGvZmMJFilH5qYS7UP87YjDde71OpsyJ7/fy5YtkzuJ60ZAay35dhJ9fTsr6B7B+3eo4\n2wYN+9zxutPzLRn+xf/4+rPh7N+3m+q16vLCS10B62/27JnTbjme1EjjPa5ukWZaniJSBWgKBBhj\nygJPAynrF3LN0wuoBVS94wNMhay+vlSrUYvlS39jz64dhBw5QoUyxShXohBhYWFUKGN13/ww5Xue\nebY1AJWCqhAZEcGF0NA4eWVIn4HIyAiXfcyZNYNn2yQ+kSIqKoqF8+bSMp6uXWMMn40Yyjt9+/Hp\n0MEMGjKcjp27Mn7MN444kRGRpM+QwSWtu928eZMX2j5H2+fb0+KZVvHGafv8C8ybOweAvPnyUbde\nfTJlyoSfnx9PVa/Onj2u421eXl5xJkE9ljcvYHXzNmvxDNu2bI53X3v37ObVnt2ZMfsXcsRz4TBi\n6GDe6fsBs2ZMp0rVaoz/dhJDB9+aqBEZEUGGu1BuV65HsnrncepXKki5wo9SMG829v3Qiz+n9Saj\njzd7p/YE4O/Qq+TLZV2AeHoIj2TycXSV5vXLwoxBz9Jt2AJC/r4c737Cb0Thky7u9XWZgrnw8hR2\nHLrVIr34T7ij5fr94p2UL5LbJa8L/4RzPfwGv6z9E7C6kf1vi5cnR2YCi+dhQfBB/tM6iA6D53L5\nWgS17e7m9Om8iLgRldLiiruPx6z/e7+cuWjYtAU7t8c/5h1rx9YtfDLgAyqXLcq3Y77hm/9+yvfj\nR7vE8/KMe47diS2bNvD7r4uoXLYor3R9keC1q3itR6c4cX5bPJ8y/gGEXb/G8WNHGfv9NBbPn+MY\n54yMiCB9+rv/Nxs/wcNNP2lZmqk8gTxAqDEmEsAYE2qM+VtEjonIpyKyR0Q2i0hhABEpICIrRGS3\niCwXkcft8EkiMlZENgEzgZ7AGyKyU0Sqi0hrEdkrIrtEZI27P0To+fNcuWx9GYWHh7NqxTKKFitG\n/YZN+DPkFLv+OMKuP46QMWNGtu2xJlzky5efNfbMzAN//kFkRAR+OXPGybdQkaL8dfx4nLCYmBjm\nzZlNq+cSrzxXrVhGkWLFyJvXdbLFTz9OpV6DRmTLnp3w8DDEwwMPDw/C7D9CYwznzp7h8ScKpKo8\nkssYQ++Xu1GseHFe6xP3GeyHDx1yvF64YB5Fi1mTKJo0bcGG4GCioqIICwtjy+bNFCtewiXvIkWL\nEXL0KADXr1/n6tWrjtcrli2Nt9V+4q+/aN/mWSZ8P8WlJRt7TKdOnaJGzVqEhYXh4eGBiBAeER4n\nTlI9AsnllzWjo+szfTov6lZ4kgN/XeDXTUd48rmvKd5+NMXbjyYs8ialXxwLwKL1h3ihfhkAWtUs\nweod1vmTNZMPc4a14aOJq9iw72SC+zxw/AKF8sYdt2tTtyQzV8Ttss2dPZPjddOqRTjwV/xd/Is3\n
HKaGvzWzt1ZAAf48HvcCsX/nGgyeZP1JZvDxwhhDTIwho483AEXyZWdfyPlESilxYdevc83+vw+7\nfp01K5ZRrETiY6hzlqxg4+6DbNx9kK69XuO1N9+lc4/eLvEKFinK8WNHU31szt4fMISt+46ycfdB\n/vftVJ6qXotvxk9ybL958yYTx4yi9+tvEREe4WjmRUdHc+PmDQCOHjmU5GdTdyYtVZ6/A/lF5KCI\njBaRmk7brhhjygCjgC/tsG+AyXYr9Ufga6f4+YCqxphWwFjgC2OMvzFmLdAfaGCMKQfEP80QEJEe\nIrJVRLaGhib/D/bsmdM0b/Q01SqVp271ytSq8zQNGjVNNM3gYSOZMmki1YMC6N6pA6PGfeuyvFWm\nTJl48smCHD1y2BG2ft0aHsuXjwJPFowT9/XePdixfavj/dzZMx0ThZyFhYUx/cfJdHvZ+jLo/Vof\n2rZsxgfvvkXnbtbs0p07tlGhUhBeXne3h3/D+mCm/ziV1atWUqVieapULM9vS6wJD/37vU/F8mUI\nqlCOFcuWMvJz6xQoXqIE9eo3IKhCOWo+FUSnzl0pFU9l1aBRY9auWQXAubNnqVe7OpUD/an5VBAN\nGjWmnj2eOXH8WCaOtyqe4UM/5uLFC7zx+ivx3tIyaEA/BgwaAkDrtu2YOH4sNapWoverrwPWF9yR\nI4cJqBDolvLJnSMTv/73BTZP6Ma6MZ1Zvi2EJRsPJ5pm0uKd5Miagb1Te/J660r0m7ASgJ4tAyn0\nWDbef7Ga4/aSnL6uY7ZLNt2q7GI9W7OES+XZu1VFtn3XnU0TutK7ZUW6j1jo2LZxfFfH634TVtDv\npepsntCN9vXK0HfMcse2coWtIYKdh84CMGP5PrZ+250qpfPx+xarUqpZ/gl+3XQkybJKyPnzZ2nZ\nqDb1qgXS9OmnqFu/kWMG7ZKF8wgsVZDtWzbyUttneOHZJinKu279RmxYd+ta/JWuL9Kifk2OHD5I\nYKmCTJ/6faL7OXP6b15sneDXURyTJ46hdbsOZMiYkRKlyxARFkbdqgGUKRdA1qy+AKxfu5q69Rul\n6DO4i2DV5+74ScvExE7ZSwNExBOoDtQGXgb6AgOBOsaYoyLiDZwxxuQQkVAgjzHmph1+2hjjJyKT\ngJXGmMl2ngOBa8aYz+z3Y4FCWK3SOcaYJGfClA8INCvW3dHQq1ssnP8Lu3Zs48MBg+/ZPvu+/QaN\nmjSlZu2UP6EunVfauDY7c/o03bu8xIIlv9+zfc6fN5edO7bTf2DK/6/8Gg67C0eUcunTefHbf1+g\n9utTiIm5/98TS7/sQOt+s7l8zXX4ItbhOW8nuO1uOnvmNH16dWH63CX3Zf/Ozp87y6vdOzJj3m/J\nit+4dhV27djmtqrqiRJlzfvfzU86YjL0qvrkNmOMe65A3SxtfLvZjDHRxphVxpgBwKvAs7GbnKMl\nI6vrCW0wxvQE+gH5gW0icvdnwbhJ0+bPkP8ud5/erkSpUqmqONOS3Hny0KlrN8ciCfdCVFQUr/d5\n657t726IuBHF4ElryOvnnokwd8Iva0a+nrU50Yrzfno0dx7ad+zquOXsfjp18gT9h3x6vw/jXy/N\nVJ4iUkxEijgF+QOxg3xtnX7H3tm8Hojti3wBWJtA1lcBx1+/iBQyxmwyxvQHzmNVog+Mjp26Jh3J\njV6yF2V40D37XJs4s3fvtlbPtsbX1/ee7e9uWbY1hBPn7n+FEHolLN57TtOSZi2fcyyScD/5BwRS\nqkz8qzzdK/dqkQQR+U5EzonIXqewgSJyyp7nslNEGjtte19EDovIARFp4BTe0A47LCJ9k/MZ09Kt\nKpmBb0TEF4gCDgM9sGbgZhOR3UAk0M6O/xrwvYi8g1UJdnbNEoAFwGwRaWGnecOupAVYDsS/HIpS\nSqkUix3zvEcmYc2Fuf0m7y9ih+piiUhJrAZXKeAxYJmIxM4G/B
9QDzgJbBGR+caYuIP7t0kzlacx\nZhvx3FJiT5wZaYx577b4x4E68eTT6bb3B4GyTkEJtVCVUko9QIwxa0SkQDKjtwB+su/oCBGRw0Al\ne9thY8xRABH5yY6baOWZZrptlVJK/Tu4sdvWL/auB/unRzIP4VX7NsbvRCT2fqu8xF074KQdllB4\notJMyzMhxpgC9/sYlFJKJZ8bu21DUzHbdgwwGGty6WDgc6CL247IluYrT6WUUiq5jDFnY1+LyAQg\n9sbjU8SdIJrPDiOR8ARpt61SSim3EayKxR0/qdq/iPOTN1oCsTNx5wPPi4iPiDwJFAE2A1uAIiLy\npIikw5pUlOSNqtryVEop5T7imOh593clMh1r/XI/ETkJDABqiYg/VrftMawFdzDG7BORmVgTgaKA\nV4wx0XY+rwK/AZ7Ad8aYfUntWytPpZRSDyRjTLt4gr9NJP4nwCfxhC8GUvQAVK08lVJKuVUaX5bW\nLbTyVEop5TYCyVod6EGnE4aUUkqpFNKWp1JKKbf697c7tfJUSinlZg9Br6122yqllFIppS1PpZRS\nbiT37D7P+0krT6WUUm4Tu8LQv51WnkoppdzqYWh5PgwXCEoppZRbactTKaWUW/37251aeSqllHKn\ne7gw/P2k3bZKKaVUCmnLUymllNvobFullFIqFbTbVimllFIutOWplFLKrf797U6tPJVSSrnZQ9Br\nq922SimlVEppy1MppZTbWLNt//1NT608lVJKuZV22yqllFLKhbY8lVJKuZEg2m2rlFJKpYx22yql\nlFLKhbY8lVJKuY3OtlVKKaVSSh6OblutPJPJ4yE4GdzNUwstVU4tePd+H8IDK2+1Pvf7EB44kQdO\n3O9DeCBp5amUUsqttOWplFJKpdDDcKuKzrZVSimlUkhbnkoppdxGeDjmiGjlqZRSyq2021YppZRS\nLrTlqZRSyq10tq1SSimVQtptq5RSSikX2vJUSinlNjrbVimllEqxh+N5ntptq5RSSqWQtjyVUkq5\njz5VRSmllEq5h6Du1G5bpZRSKqW05amUUsptrNm2//62p1aeSiml3OrfX3Vqt61SSimVYtryVEop\n5V4PQdNTW55KKaXcStz0L8n9iHwnIudEZK9TWHYRWSoih+zf2exwEZGvReSwiOwWkQCnNC/Z8Q+J\nyEvJ+YxaeSqllHpQTQIa3hbWF1hujCkCLLffAzQCitg/PYAxYFW2wAAgCKgEDIitcBOjladSSim3\nEnHPT1KMMWuAi7cFtwAm268nA884hU8xlo2Ar4jkARoAS40xF40xl4CluFbILnTMUymllFu5ccjT\nT0S2Or0fb4wZn0SaR40xp+3XZ4BH7dd5gRNO8U7aYQmFJ0orT6WUUmlVqDEmMLWJjTFGRIw7DyiW\ndtsqpZRyL3HTT+qctbtjsX+fs8NPAfmd4uWzwxIKT5RWnkoppdzGqvfuzWzbBMwHYmfMvgTMcwrv\naM+6rQxcsbt3fwPqi0g2e6JQfTssUdptq5RS6oEkItOBWlhjoyexZs0OB2aKSFfgONDGjr4YaAwc\nBsKAzgDGmIsiMhjYYsf72Bhz+yQkF1p5KqWUcp97+EgyY0y7BDbVjSeuAV5JIJ/vgO9Ssm+tPJVS\nSrnVQ7DAkI55KqWUUimlLU+llFLu9RA0PbXl6WYRERHUqV6Zp4ICqFyhLEMHD3RsO3YshLo1qlC+\ndDE6v9iOGzduAPDj1MkUejw31YIqUC2oAlO+/zbevMPDw2lcvzbR0dEAnDjxFy2bNaRS+dIEBZTh\n+PFjLmlOnPiLpg3rUr1yIFUrlef3XxcDsHFDMFUrlafWU0EcOXwIgMuXL9OyWUNiYmIc6Vs0qc/l\nS5fcUTRJerlbFx5/LBcV/EvHCd+9axc1q1Uh0L8Mzz7TjH/++QeALZs3E1TBn6AK/lQKKMe8X+bG\nm68xhob16jjSAURHR1M5sD
ytWjSNN82EcWMJ9C9DUAV/6tSsxh/79wOwPjiYiuXL8lRQIIcP3Sq3\npo3qxym3xg2e5tI9KLeIiAierlmFGpUDqBpYjuFDBjm2GWMYMvAjKvmXpHJAGcaN/iZO2u3btpAr\na3rmz/053rzDw8Np1qCO43wb2K8vVQPLUTmgDH3f7oM1hBTXvDmzqRpYDr8s6dix/da97Zs2BFM9\nqDx1qt86365cvsyzzRvFKbeWTRu49XzzSefF2qlvs2lGX7bN/pB+PRvH2T7wlWbs/qU/O37uR+92\nNQFoWqsMm2e8z8af+rLux3ep6l/QEf+T/7Rg2+wP2fFzPz5/97kE9zttZFcK5M0BQJuGFdgy8wM2\nz3ifeaN6k8M3EwDZHsnIwjGvsmdefxaOeRXfLBnizSu+fabz9mLeqN5snfUBPVpXd8Qd1a8d/sXz\nOd73bFuDji0qp6TI3MBdc23Tdg2slaeb+fj4MH/JMoI3bWftxm0sX/obWzZvBGBgv/fp/Vofduw9\ngK9vNqZOujU+3erZNqzbtI11m7bRsXPXePP+YfL3NGvREk9PTwB6duvE633eYvOOvSxfs4GcOXO5\npPls+FBatmrN2o1b+W7yj7zV5zUARn31BbPmLGDYyM/5buI4K+6IT3jznb54eNw6Ldq268DE8WPc\nUzhJePGlTsxb+KtLeK+XuzFk6HC27txD8xYt+eLzkQCUKl2a4E1b2bRtJ/MW/cprvV8mKirKJf2v\nSxZTpmw5HnnkEUfYqK+/oliJEgkeS9t27dm6cw+btu3kzbff5b133gTgqy8/Z+6CxXz6+ZdMGD8W\ngOFDh/Bu3w/ilFv7F15k/NjRqSuIFPDx8eGXRUtZs3E7qzdsZfmyW+fbtB8mc+rUCTZu38vG7Xto\n9VxbR7ro6GgGffQBtevWSzDvH6d8T9Pmz+Dp6cnmjevZtHE9azdtJ3jLTnZs30rw2jUuaYqXLMXk\naTOp+lT1OOH/+/pLfpqzgKEjPmfSt9YCMZ9/OpQ33o57vrVp9wLfThh7R2XiLPJGFA17fE1Q2+EE\nPT+M+lVLUqlMAQBebF6ZfLl9KddyMOWfHcKsX7cBsHLTASq1HUbl54fTc+APjO7fHoDK5Z6kin9B\nKrYZSoXWn1Ch1BNUr1DEZZ8lCubG08ODY6cu4Onpwch3nqNhj6+o1HYYew+domdbq5J+u3M9Vm0+\nQJkWH7Nq8wHe7lzfJa+E9lmvagnW7zxCxTbDaN+0EgBliubF01PY+edJR/rJ8zbQ6/mabitPdYtW\nnm4mImTOnBmAmzdvcvNmFIJgjGHN6pW0aPksAO06vMiihfMSy8rFrBnTaNy0OQB//rGf6Kgox5df\n5syZyZgxY7zHc/Wq1eL6558r5MmTBwBvb2/Cw8MIDwvD29ubkKNHOHXyJNVr1IqTvnGTZvw8a0aK\njjO1qlWvQfbs2V3CDx86SLXqNQCo83Q9frFbShkzZsTLyxp5iIyIQBKY4vfT9B9p1ryF4/3Jkyf5\ndckiOnfpluCxOFe0169fd+Tt7e1NeFgY4eFWuR09coSTJ09Qo2atOOmbNGvOzBnTk/Gp78zt51vU\nzZuOY/1+4jje6dvPUTnlzHXr4mrC2FE0a9ESv5w5E8x79szpNLLPNxEhMiKCGzduEBkZyc2bN+Pk\nF6tY8RIUKVrMJfxWuYXj5XS+VasR94u9UeNmzHHz+XY93Orh8fbyxMvL09Fi7tG6GkPHL3G8P3/p\nWpz4AJky+BDbwDYGfNJ5k87bC590Xnh5eXLu4j/c7vnGFVmwajdwa43WTBnSAZAlcwZOn78CQNNa\nZflhwSYAfliwiWa1y7rkldA+b0ZFkzF9Ory9PB3ts/69m/Lx6EVx0odH3OSvvy8SWOqJFJfbnbhX\na9veTzrmeRdER0dTs2olQo4eptvLvQisFMSF0FCyZvV1fNk/ljcfp//+25Fm/i9zCA5eS+HC
RRj6\n6efky5c/Tp43btzgWEgITzxRAIDDhw6RNasvHZ5/juPHjlGrTh0GDh7maJXG6vthf1o1b8T4Mf/j\neth15i207v194+336NmtE+kzZGDcxMl89MG79Bvwsctn8c2WjcjISC5euED2HDncWUzJVqJkKRbM\nn0fzFs8wZ/YsTp64tQzl5k2b6NmjC38dP863k6Y6ytfZhvXBjBo9zvH+nbf68MmwT7l27Wqi+x07\n+n98/dV/uXHjBr/+vsJK++77dO3ckQwZMvDtpKm8/97bDBw0xCVtNrvcLly4QI67XG7R0dHUqVaJ\nkKNH6NKjF4EVgwA4FnKUuT/PYtGCX/Dzy8mwkV9QqHAR/v77FIvmz2PekmW81iv+C4gbN25wPCSE\nx+3zrWJQFarVqEXJwvkxxtCtR2+KFU+45X67Pm+/S+8encmQPgOjJ05iwAfv8UH/QS7xfLNl48YN\n955vHh7C+mnvUSh/TsbNWMOWvccBeDJfTp6rX4HmdcoReukqb306myN/nQegee2yfPxac3Jmz0Kr\n162W8KbdIazZeoiQpZ8gCGNnrOFAyFmX/VXxL8hMuxUbFRXDf4bOYMvMD7gefoMjJ87TZ5h1cZAr\nRxbOhFqV75nQf8iVI4tLXgnt8/Bf52nfpBKrp7zFl5OX06RmGXb+ccJRMTvbtv8vngooxNZ9x91Q\nmkm7s8WBHhxpsuUpIl+ISB+n97+JyESn95+LyJspyO9aAuGTRCThgYtU8vT0ZN2mbew7dJxtW7ew\nf9/eROM3atyU3X8eYf3mHdSu8zS9und2iXMhNJSsvr6O99HRUWxYv44hwz5l5bqNHAsJ4cepk13S\nzZ71E+06dGT/4ePMmruAl7t1IiYmhrLl/Fm2ej0Lf13OsWNHeTR3bowxdH6xHT26dOTc2VtfCjlz\n5uT06b9d8r5Xxk34jvFjR1O1UgWuXbtKunTpHNsqBQWxfdc+1m3YwsgRw4iIiHBJf+niRbJksb6Y\nFi9aSK6cuQioUCHJ/fbs/Qr7DxxhyNARDB9qVZDl/P1ZE7yR35at5FjIUXLnzoMxhg7t29K5YwfO\nxim3XHEukO4WT09PVm/Yxp4Dx9ixdQt/2OfbjchI0qdPz4q1m3ixU1de79UdgA/ffYv+g4fG6S69\n3YULoTyS9db5dvTIYQ4e+JM9B46x9+Bx1q5ZyYbgdck+xjJl/fl9ZTDzlizjeEiI43zr2rE9L3eN\ne7755czJGTeebzExhsrPD6dwg34Eln6CkoWs3hefdF5E3rhJtRc+5fs56xk34AVHmvkrd+Pfaght\n3hxP/95NACiY349iTz5K4Qb9KNTgQ2pVKspT5Qu57C+3X1ZCL1kXZl5eHnR/rjqV242gYP0P2Xvw\nFO90ce2eBYhnCDnBfUZHx9Dpg0lUaTeCn5dt59X2tfhq6nJGvNWKaSO70qRmGUce5y9eJU/OrKku\nv1S5v8vz3RNpsvIEgoGqACLiAfgBpZy2VwXWJ5WJiNzXlrWvry/Va9Ri+dLfyJ4jB1euXHaMyf19\n6iR5HnsMgOw5cuDj4wNAx85d2bVju0teGTJkiFMxPJY3L6XLlqPAkwXx8vKiSbMW7N65wyXdD5O/\np+WzrQGoFFSFiIgILoSGOrYbY/hsxFDe7duPEUMHM+iT4XTs3DXO5JKIiEgyZIh/MsO9UKx4cRYu\n+Z31m7fRpm07nizo+oVVvEQJMmfOzL69rhcqXl5ejkkpG9YHs3DhfIoVLkDHF55n1coVdO7YIdH9\nt2n7PAvm/xInzBjD8KFDeP/Dj/hk8CA+GfYpXbp1Z/Sorx1xIiMi7mm5ZfX1pVqNWixf9jsAeR7L\nR9Pm1tOYmjbel5WPAAAU8ElEQVR/hn379gCwc8c2unfqgH/Jwiz4ZQ7vvPEaixbEHULIkD4DkZG3\nzrdFC+YRWDGIzJkzkzlzZp6u19AxtpoSxhg+/3Qob7/3
ISOHDWbgkGF07NSN8WNGOeJERkSQ/i6U\n25Vr4azeepD6VUsCcOrsJX5ZvguAeSt2UbqI64M0grcf4cm8fuTwzUSL2uXYvOcY18NvcD38Br8F\n7yOo7JMuacIjb+CTzhuAckWtyTshJ62/udlLt1O5nDUB6dyFq+T2s4YHcvs9wvmLrj0hydnny61r\n8OPCzVQq8yRXrobT4b3v+M+LdRzb0/t4Ex5xM2WFpZKUVivP9UAV+3UpYC9w1V570AcoAewQkZEi\nsldE9ohIWwARqSUia0VkPrDfOVN7TcNRInJARJYBroM2dyj0/HkuX74MWLMVV61YRpGixRARqteo\nxTx7vG76D1Np3MQaTzpz+rQj/eKFCyharLhLvr7ZshETHe2oQAMqVOTKlSuEnre6mdasWhlvN1q+\nfPlZvdLqcjzw5x9ERkTEGeea/uNU6jVoRLbs2QkLC8PDwwMPDw/CwsMA68vu3Nkzju67++HcOWtd\n55iYGIYPHUL3Hj0BOBYS4rgYOX78OAcO/MkTBQq4pC9StBghR48CMPiTYRw5dpIDh48x5cefqFW7\nDt9P+cElTexMWoAlixdRuHDciSE/Tp1Cg4aNyZ49O2HhVrmJhwdhYbfK7czZM/EejzuFnj/PlXjO\nN4DGzZqzbs0qAILXrqGQ/Rl27DvEzv2H2bn/MM2eacXIL76hSbMWcfL1zZaNaKfzLV++/ASvW0NU\nVBQ3b94keN2aeM/TpPw0bSr1GjS0z7dwx/kW7nS+nT171m3nm1+2zGTNbFXE6X28qRtUnAPHrFbu\nglW7qVnRKpPqFYpw+C/rPCuY38+R3r94PnzSeXHh8nVOnLlE9QqF8fT0wMvLg+oBRfgz5IzLPg+E\nnKXQ49bf2N/nr1C8YG78slnj0nUrF+eAnWbR6j10aGZ1sXdoFsRCe5zUWVL79M2SgUY1SvPjws1k\nzOBNjDEYAxl8vB1xijyRi/1HTrvkfTc9DLNt0+SYpzHmbxGJEpHHsVqZG7Cer1YFuALsAZoC/kA5\nrJbpFhGJnf4XAJQ2xoTclnVLoBhQEusZb/tJYEkmEemB9bRx8ud/PNnHfubMaXp170J0TDQmJoZn\nWj1Hw8bW7RCDhgyjS8f2DBnUn7Ll/HmxUxcAxo35hiWLFuLp5UW2bNkYPT7+VaJq163HxvXrqFXn\naTw9PRkydATNm9QHYyhXPoCX7Akwn3w8gPIBgTRu2owhw0fyn1deZvSorxCE0eO/dUwoCQsLY9oP\nk5m7wJrh+srrfWjTshne3umYOGkqADu3byOwUlC8Y4nu1rFDO9auXkVoaCiFCuTjo/6D6NSlKzN/\nms64sf8DoMUzrejYyerWXh+8js9GDsfbyxsPDw+++mY0fn5+Lvk2atyENatXUahw4UT3//HA/gRU\nCKRps+aMGT2KlSuW4e3ljW+2bEz47laXeFhYGFOnTGLhEquF93qfN2nZrDHp0qVj0tRpAGzfto1K\nQZXvermdPXuaV3p0ITo6mpgYwzOtnqNBI6ubsc+b7/Jy146MGfUVmTJn5qv/jUsit7hq132ajRuC\nqVW7Ls1bPsva1SupVqk8IkLdevUd5/V/XulBp649KB8QyML5v9D37T5cCD1Pu2dbULpsOWbPs26P\nCgsL46cfpjB7/hIAer/Wh7atmpEuXTrGfWefbzu2EVjRfedbbr9HmPDxi3h6eODhIfy8dDtL1lq9\nE599t5Tvh77Eay/U4Xp4JL0+tv7vWtb1p33TIG5GRRMReZMX37P+Hucs20HNikXZOvMDDIal6/9g\n8RrXno4la/dSo0IRVm46wOnzVxg6fglLJ/bhZlQ0f52+SI8B1sXaZ98v5YcRXXjpmSr8dfoiHd61\n9hNQ8nG6PVeN3h9PS3KfH/RoxIiJv2GMte3lNjXYOusDJs6+1aVeuVxBhoxd7JbyTK60PtnHHSS+\ne7XSAhH5EVgANAL+
i1V5VsWqPHMAPsAee01CRGQqMAv4BxhgjKntlNc1Y0xmEfkS2O2UZg4wzRgz\nO7FjKR8QaFYFb3L3R0yxnTu2M3rUV4z/1nVs82557+03aNykKTVruywVmSQfb8+kI90Dp0+fplvn\njiz6dek92+dbb/yHps2aU7tOysstLNL1dpv7YdfO7YwZ9RVjJ9678+39d96gYeNm1KxdJ+nI8chb\nrU/Ske6y9D7e/Db+dWp3/i8xMff3+7VcsXy83qEOXT+akmCcyAMziQk757bqrlTZAPPTYtfbmFKj\nbP4s2+7keZ53U1rttoVb455lsLptN2K1PJMz3nn97h7a/eFfPoDqNWo5blq/F0qWLJWqijMtyZMn\nD527do+zSMLdVqpU6VRVnGlJOf97f76VKFkq1RVnWhEReZPBYxeTN5dv0pHvshy+mRk0euE93+9D\nMF8oTbc8/YE5wFFjzNN22DasFmhpoAbwMtYjZrIDW4EgoDjwtjGmqVNesS3PVk5pcmF123Z/UFqe\nD5q00vJ80KSVlueDKC20PB80bm95lgswM9zU8iyTL+22PNPkmKdtD9ZY5rTbwjIbY0JFZC5WS3QX\nYIB3jTFnRCSxWQxzgTpYleZfWGOpSimlVIqk2crTGBMNPHJbWCen1wZ4x/5xjrMKWHVbWGanNK/e\njeNVSillSeszZd0hzVaeSimlHjzCwzHbNi1PGFJKKaXSJG15KqWUcquHoOGpladSSik3ewhqT+22\nVUoppVJIW55KKaXcSmfbKqWUUimks22VUkop5UJbnkoppdzqIWh4auWplFLKzR6C2lO7bZVSSqkU\n0panUkopt7EeJ/bvb3pq5amUUsp9RGfbKqWUUioe2vJUSinlVg9Bw1MrT6WUUm72ENSe2m2rlFJK\npZC2PJVSSrmR6GxbpZRSKqV0tq1SSimlXGjLUymllNsID8V8Ia08lVJKudlDUHtqt61SSimVQtry\nVEop5VY621YppZRKIZ1tq5RSSikX2vJUSinlVg9Bw1MrT6WUUm6kjyRTSimlVHy05amUUsrN/v1N\nT215KqWUchvB6rZ1x0+S+xI5JiJ7RGSniGy1w7KLyFIROWT/zmaHi4h8LSKHRWS3iATcyefUylMp\npdSDrLYxxt8YE2i/7wssN8YUAZbb7wEaAUXsnx7AmDvZqVaeSiml3Erc9JNKLYDJ9uvJwDNO4VOM\nZSPgKyJ5UrsTHfNMhp07toX6ZvQ6fr+PIwF+QOj9PogHkJZb6mi5pU5aLrcn3J2hG2fb+sV2x9rG\nG2PGO703wO8iYoBx9rZHjTGn7e1ngEft13mBE05pT9php0kFrTyTwRiT834fQ0JEZKtTd4VKJi23\n1NFySx0tt1QLTaLcqhljTolILmCpiPzpvNEYY+yK1e208lRKKeVW92ptW2PMKfv3ORGZC1QCzopI\nHmPMabtb9pwd/RSQ3yl5PjssVXTMUymllHvdg0FPEckkIlliXwP1gb3AfOAlO9pLwDz79Xygoz3r\ntjJwxal7N8W05fngG590FBUPLbfU0XJLHS0393sUmCvWAKsXMM0Y86uIbAFmikhX4DjQxo6/GGgM\nHAbCgM53snMx5q50ByullHoIlStfwfy+eqNb8sqdNd22tDpWrC1PpZRSbpPcBQ4edDrmqZRSSqWQ\nVp73iIhE20tI7RKR7SJSNZX59BSRju4+vrRORD4UkX32slo7RSQogXiBIvK103tvEQmx0+wUkTMi\ncsrpfboUHscQEelzp5/nfktueaYwz1qpPa8fFCLyhfP/v4j8JiITnd5/LiJvpiC/awmETxKR5+7s\naO8fcdO/tEy7be+dcGOMP4CINACGATVTmokxZqy7DyytE5EqQFMgwBgTKSJ+QLyVnjFmK+B8U3U1\nYKEx5jU7r4HANWPMZ3f3qNOulJRnCvL0AmoB14D1d3yQaVcw1gSUL0XEA2vxg0ectl
cF3kgqExHx\nMsZE3Z1DTAPSdr3nFtryvD8eAS6B42p9YewGERklIp3s18NFZL/dOvjMDhsoIm/br1eJyAgR2Swi\nB0Wkuh3uKSIjRWSLnfZlOzyPiKyxWxp7RaS6HXeS/X6PiCT5h38f5MG6WToSwBgTaoz5W0Qqish6\nuzW/WUSy3F6eQENgSWKZi8hLdvqdIjLa/lJERJrYvQS7ROR3pyRlRGS1iBwVkVfc/WHvgYTK85iI\nfGqfB5tFpDCAiBQQkRX2ubRcRB63wyeJyFgR2QTMBHoCb9jlWF1EWtvn1S4RWXO/PqybrQeq2K9L\nYd0acVVEsomID1AC2GH//cX+TbUFx9/6WhGZD+x3zlQso0TkgIgsA3Ldu4+kUkNbnvdOBhHZCaTH\n+vKqk1hkEckBtASK26tk+CYQ1csYU0lEGgMDgKeBrlj3MFW0/6CD7S//VsBvxphPRMQTyAj4A3mN\nMaXt/Sa0n/vpd6C/iBwElgEzgA3277bGmC0i8ggQHk/a2sCghDIWkdJY5VzVGBMlIuOB50VkBdbC\n0dWNMcdFJLtTsqJAXcAX+ENExhpjou/8Y94zLuVpjFltb7tijCkj1tDAl1gt1G+AycaYySLSBfia\nW+uF5sMqu+jbW/UisgdoYK8AkxbPqxSzLzKi7AuIqljnYV6sCvUKsAerzPyBclgt0y1OFw8BQGlj\nTMhtWbcEigElsW7B2A98d5c/zl3zEDQ8teV5D4XbK/8Xx2oNTRFJdE7aFSAC+FZEWmHdlxSfOfbv\nbUAB+3V9rJuBdwKbgBxYTxLYAnS2v+TKGGOuAkeBgiLyjYg0BP5J7Qe8W4wx14AKWE9COI9Vab4M\nnDbGbLHj/HN7N5iI5AUuGmMSKjuwLjYqAlvt8qoJFML6MlxpjDlu53/RKc1CY8wNY8w54CKQZpdv\njE985Rnb2wFMd/od28KqAkyzX0/F6gqPNSuRC4dgYJKIdAc83XP0acJ6rIoztvLc4PQ+GKt8phtj\noo0xZ4HVWOcYwOZ4Kk6AGk5p/gZW3OXPcFfdq0eS3U/a8rwPjDEb7HGmnEAUcS9i0ttxokSkElYL\n5zngVeJvrUbav6O59f8pwGvGmN9ujywiNYAmWF9q/zXGTBGRckADrG63NkCXO/yIbmd/Qa8CVtkt\nmuR0lzYEXMrgNgJ8Z4z5KE6gSMtE0kQ6vXYu9wdGPOUZuyKL843fybkJ/Hoi++gp1kSkJsA2Ealg\njLmQykNOS4KxKsoyWN22J4C3sC48v8fq7UhIguX175H2J/u4g7Y87wMRKY51JX4BawWMkiLiY3dt\n1bXjZAayGmMWY01AKJeCXfwG9BIRbzuvomItZfUEcNYYMwGYCATYlbiHMeZnoB9Wt1KaIiLFRKSI\nU5A/8AeQR0Qq2nGyiDVpxVmS451Y3ZZt7HJARHLYXXLrgdp2mXFbt+0DLYHyjH1qUFun3xvs1+uB\n5+3XLwBrE8j6KpDFaT+FjDGbjDH9sVq4+RNI96BZj9U1e9FuKV7E6sKvYm9bC7QVaz5BTqxW5eYk\n8lzjlCYPiVfAKg144K6YH2CxY55gtXZesq/+T4jITKwr2BBghx0nCzBPRNLb8ZM9/R2rYiwAbLe7\nhs9jjVHVAt4RkZtYsyI7Yo3XfB87SQZ4P3Uf767KDHxjX1xEYS2v1QPrKv8bEcmANd75dGwCe0y3\nsDHmz3jyczDG7BGRQcAyuwxuAj3tcdReWP8HAvyN9TDdf4OEyrMpkE1EdmO1rtvZ8V/DOkfewTqX\nElrWbAEwW0Ra2GnesCtpwXoo8a679HnutT1YY5nTbgvLbIwJFWuB8ipYn9cA7xpjztgXzQmZi9Wz\ntB/4i1sXLg8cIe13ubqDLs+n/pVEpBrQwRjT834fy4NCRI4BgcaYtPrcSfUAKB8QaFas2+SWvLJn\n8tLl+ZS6l4wx64B19/s4lFL/Tlp5KqUAMMYUuN
/HoP4dHoZuW608lVJKuZXOtlVKKaWUC215KqWU\ncp8HYIEDd9CWp3qoya2n3ewVkVkikvEO8nKsqysizUWkbyJxfUWkdyr24VjbODnht8VJ0ZM6xFrT\ndm9Kj1E93MSNP2mZVp7qYRe7bGJp4AbWKksO9oLdKf47McbMN8YMTySKL5DiylMplTZo5anULWuB\nwnaL64CITMFavCK/iNQXkQ1iPWVllr0CFCLSUET+FJHtWAvvY4d3EpFR9utHRWSuWE8X2SXWMy+H\nA4XsVu9IO947cutJOIOc8vpQrKfmrMNaPDxRItLdzmeXiPx8W2v6aRHZaufX1I4f71N4lEq1h6Dp\nqZWnUjieR9kIa6UYsBbSH22MKYW1Hmk/4GljTADW80LftFd/mgA0w1poPXcC2X8NrDbGlMNa/nAf\n0Bc4Yrd63xGR+vY+K2Etl1dBRGqISAWspfH8gcbcWmA8MXOMMRXt/f2B9ZSdWAXsfTQBxtqfwfEU\nHjv/7iLyZDL2o1S89GHYSv37OS+buBb4FngMOG6M2WiHV8Z6VFSwtVIf6bCWTysOhBhjDgGIyA9Y\ny9zdrg7WUoixC7JfEZFst8Wpb//ELs+YGasyzQLMjX0yjFjPgkxKaREZgtU1nJm4i+PPNMbEAIdE\n5Kj9GeoDZZ3GQ7Pa+z6YjH0p9VDSylM97MKNMf7OAXYF6fz0CwGWGmPa3RYvTro7JMAwY8y42/bR\nJxV5TQKeMcbsEutRY7Wctt2+HqchgafwiEiBVOxbKZ1tq5QCYCPwlIgUBhDrCTVFgT+BAiJSyI7X\nLoH0y4FedlpPEcnKbU8gwWoddnEaS80rIrmwnrbxjIhkEJEsWF3ESckCnBbrqTov3LattYh42Mdc\nEDhAAk/hScZ+lIrXQzDkqS1PpZJijDlvt+Cmi4iPHdzPGHNQRHoAi0QkDKvbN0s8WfwHGC8iXbGe\n/9nLfqZrsH0ryBJ73LMEsMFu+V7DWth+u4jMwHpCxzmsB5on5SOsh6Cft387H9NfWI/HegTr6TER\nIpLQU3iUUgnQp6oopZRym4AKgWbdxuRc4yUtUzoPfaqKUkqph0NanynrDjrmqZRSSqWQtjyVUkq5\njfBwzLbVMU+llFJuIyK/An5uyi7UGNPQTXm5lVaeSimlVArpmKdSSimVQlp5KqWUUimkladSSimV\nQlp5KqWUUimkladSSimVQv8HGXZ/3m3eWggAAAAASUVORK5CYII=\n",
            "text/plain": [
              "<Figure size 504x504 with 2 Axes>"
            ]
          },
          "metadata": {
            "tags": []
          }
        },
        {
          "output_type": "stream",
          "text": [
            "              precision    recall  f1-score   support\n",
            "\n",
            "           0       0.70      0.66      0.68      4500\n",
            "           1       0.75      0.64      0.69      4500\n",
            "           2       0.72      0.76      0.74      4500\n",
            "           3       0.70      0.81      0.75      4500\n",
            "\n",
            "    accuracy                           0.71     18000\n",
            "   macro avg       0.72      0.71      0.71     18000\n",
            "weighted avg       0.72      0.71      0.71     18000\n",
            "\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yeiD1T_QZpdk",
        "colab_type": "text"
      },
      "source": [
        "# Inference"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "z7G7vuSTZHkQ",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import collections"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cuDfZqrQUg-M",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "yMSgbd1oUgPJ",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def get_probability_distributions(probabilities, classes):\n",
        "    \"\"\"Produce probability distributions with labels.\"\"\"\n",
        "    probability_distributions = []\n",
        "    for i, y_prob in enumerate(probabilities):\n",
        "        probability_distribution = {}\n",
        "        for j, prob in enumerate(y_prob):\n",
        "            probability_distribution[classes[j]] = np.float64(prob)\n",
        "        probability_distribution = collections.OrderedDict(\n",
        "            sorted(probability_distribution.items(), key=lambda kv: kv[1], reverse=True))\n",
        "        probability_distributions.append(probability_distribution)\n",
        "    return probability_distributions"
      ],
      "execution_count": 0,
      "outputs": []
    },
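    {
      "cell_type": "markdown",
      "metadata": {
        "id": "toy_probs_md",
        "colab_type": "text"
      },
      "source": [
        "A quick sanity check of `get_probability_distributions` on a toy array (a sketch; the class names and values below are placeholders): each row of probabilities becomes a dictionary sorted by descending probability."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "toy_probs_code",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Sanity check on a toy 2x3 probability array (placeholder class names)\n",
        "toy_probs = np.array([[0.2, 0.5, 0.3], [0.7, 0.1, 0.2]])\n",
        "toy_classes = [\"World\", \"Sports\", \"Business\"]\n",
        "get_probability_distributions(probabilities=toy_probs, classes=toy_classes)"
      ],
      "execution_count": 0,
      "outputs": []
    },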
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6EZdo-fKZwo6",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "CLP2Vzp3Zwth",
        "colab_type": "code",
        "outputId": "0b26ce3a-cd06-4128-817c-9874e0f26e85",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 105
        }
      },
      "source": [
        "# Inputs\n",
        "texts = [\"Roger Federer wins the Wimbledon tennis tournament once again.\",\n",
        "         \"Scientists warn global warming is a serious scientific phenomenon.\"]\n",
        "num_samples = len(texts)\n",
        "X_infer = np.array(X_tokenizer.texts_to_sequences(texts))\n",
        "print (f\"X_infer[0] seq:\\n{X_infer[0]}\")\n",
        "print (f\"len(X_infer[0]): {len(X_infer[0])} characters\")\n",
        "X_infer = np.array([to_categorical(seq, num_classes=vocab_size) for seq in X_infer])\n",
        "print (f\"X_infer[0] one hot shape: {X_infer[0].shape}\")\n",
        "y_filler = np.array([0]*num_samples)"
      ],
      "execution_count": 51,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "X_infer[0] seq:\n",
            "[9, 7, 18, 3, 9, 2, 19, 3, 13, 3, 9, 3, 9, 2, 21, 8, 10, 4, 2, 6, 16, 3, 2, 21, 8, 17, 20, 11, 3, 13, 7, 10, 2, 6, 3, 10, 10, 8, 4, 2, 6, 7, 14, 9, 10, 5, 17, 3, 10, 6, 2, 7, 10, 12, 3, 2, 5, 18, 5, 8, 10, 26]\n",
            "len(X_infer[0]): 62 characters\n",
            "X_infer[0] one hot shape: (62, 58)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "q1gFlI5MZ143",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Inference data generator\n",
        "inference_generator = DataGenerator(X=X_infer,\n",
        "                                    y=y_filler,\n",
        "                                    batch_size=len(y_filler),\n",
        "                                    vocab_size=vocab_size,\n",
        "                                    max_filter_size=FILTER_SIZE,\n",
        "                                    shuffle=False)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "UFE4sp_7aHTq",
        "colab_type": "code",
        "outputId": "34e8df56-9e74-4199-bd69-1f017afc135e",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Predict\n",
        "probabilities = model.predict_generator(generator=inference_generator,\n",
        "                                        verbose=1)"
      ],
      "execution_count": 53,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "\r1/1 [==============================] - 0s 22ms/step\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bGi_NvbBaMap",
        "colab_type": "code",
        "outputId": "3b5260f6-fe0d-4623-ff78-4efb34713076",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 391
        }
      },
      "source": [
        "# Results\n",
        "probability_distributions = get_probability_distributions(probabilities=probabilities,\n",
        "                                                          classes=y_tokenizer.classes_)\n",
        "results = []\n",
        "for index in range(num_samples):\n",
        "    results.append({\n",
        "        'raw_input': texts[index],\n",
        "        'preprocessed_input': untokenize_one_hot(seq=X_infer[index], tokenizer=X_tokenizer),\n",
        "        'probabilities': probability_distributions[index]\n",
        "                   })\n",
        "print (json.dumps(results, indent=4))"
      ],
      "execution_count": 54,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "[\n",
            "    {\n",
            "        \"raw_input\": \"Roger Federer wins the Wimbledon tennis tournament once again.\",\n",
            "        \"preprocessed_input\": \"r o g e r   f e d e r e r   w i n s   t h e   w i m b l e d o n   t e n n i s   t o u r n a m e n t   o n c e   a g a i n .\",\n",
            "        \"probabilities\": {\n",
            "            \"Sports\": 0.37462228536605835,\n",
            "            \"Sci/Tech\": 0.26097172498703003,\n",
            "            \"Business\": 0.25116461515426636,\n",
            "            \"World\": 0.11324141919612885\n",
            "        }\n",
            "    },\n",
            "    {\n",
            "        \"raw_input\": \"Scientists warn global warming is a serious scientific phenomenon.\",\n",
            "        \"preprocessed_input\": \"s c i e n t i s t s   w a r n   g l o b a l   w a r m i n g   i s   a   s e r i o u s   s c i e n t i f i c   p h e n o m e n o n .\",\n",
            "        \"probabilities\": {\n",
            "            \"Sci/Tech\": 0.7040546536445618,\n",
            "            \"World\": 0.22490553557872772,\n",
            "            \"Business\": 0.05717660114169121,\n",
            "            \"Sports\": 0.01386315654963255\n",
            "        }\n",
            "    }\n",
            "]\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_tdQ9G48Hszk",
        "colab_type": "text"
      },
      "source": [
        "---\n",
        "<div align=\"center\">\n",
        "\n",
        "Subscribe to our <a href=\"https://practicalai.me/#newsletter\">newsletter</a> and follow us on social media to get the latest updates!\n",
        "\n",
        "<a class=\"ai-header-badge\" target=\"_blank\" href=\"https://github.com/practicalAI/practicalAI\">\n",
        "              <img src=\"https://img.shields.io/github/stars/practicalAI/practicalAI.svg?style=social&label=Star\"></a>&nbsp;\n",
        "            <a class=\"ai-header-badge\" target=\"_blank\" href=\"https://www.linkedin.com/company/practicalai-me\">\n",
        "              <img src=\"https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social\"></a>&nbsp;\n",
        "            <a class=\"ai-header-badge\" target=\"_blank\" href=\"https://twitter.com/practicalAIme\">\n",
        "              <img src=\"https://img.shields.io/twitter/follow/practicalAIme.svg?label=Follow&style=social\">\n",
        "            </a>\n",
        "</div>"
      ]
    }
  ]
}
