{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "11_Embeddings",
      "provenance": [],
      "collapsed_sections": [],
      "toc_visible": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eTdCMVl9YAXw",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://practicalai.me\"><img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/rounded_logo.png\" width=\"100\" align=\"left\" hspace=\"20px\" vspace=\"20px\"></a>\n",
        "\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/basic_ml/11_Embeddings/skipgram.png\" width=\"250\" align=\"right\">\n",
        "\n",
        "<div align=\"left\">\n",
        "<h1>Embeddings</h1>\n",
        "\n",
        "In this lesson we will learn how to map tokens to vectors (embeddings) that capture the contextual, semantic and syntactic value of a token in text.",
        "</div>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xuabAj4PYj57",
        "colab_type": "text"
      },
      "source": [
        "<table align=\"center\">\n",
        "  <td>\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/rounded_logo.png\" width=\"25\"><a target=\"_blank\" href=\"https://practicalai.me\"> View on practicalAI</a>\n",
        "  </td>\n",
        "  <td>\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/colab_logo.png\" width=\"25\"><a target=\"_blank\" href=\"https://colab.research.google.com/github/practicalAI/practicalAI/blob/master/notebooks/11_Embeddings.ipynb\"> Run in Google Colab</a>\n",
        "  </td>\n",
        "  <td>\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/images/github_logo.png\" width=\"22\"><a target=\"_blank\" href=\"https://github.com/practicalAI/practicalAI/blob/master/notebooks/basic_ml/11_Embeddings.ipynb\"> View code on GitHub</a>\n",
        "  </td>\n",
        "</table>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JqxyljU18hvt",
        "colab_type": "text"
      },
      "source": [
        "So far, we've represented our text data in a one-hot encoded form where each token is represented by an n-dimensional array.\n",
        " \n",
        " ```python\n",
        "[[0. 0. 0. ... 0. 0. 0.]\n",
        " [0. 0. 1. ... 0. 0. 0.]\n",
        " [0. 0. 0. ... 0. 0. 0.]\n",
        " ...\n",
        " [0. 0. 0. ... 0. 0. 0.]\n",
        " [0. 0. 0. ... 0. 0. 0.]\n",
        " [0. 0. 0. ... 0. 0. 0.]]\n",
        "```\n",
        "\n",
        "This allows us to preserve the structural information, but there are two major disadvantages here. We used character-level representations in the CNN lessons because the number of characters is small. Suppose we wanted to one-hot encode each word instead. The vocabulary size quickly grows, leading to expensive computation. And though we preserve the structure within the text, the actual representation for each token does not preserve any relationship with respect to other tokens.\n",
        "\n",
        "In this notebook, we're going to learn about embeddings and how they address all the shortcomings of the representation methods we've seen so far.\n",
        "\n",
        "\n",
        "\n"
      ]
    },
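    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a toy illustration (not part of this lesson's pipeline), each one-hot vector has one entry per vocabulary word, so the representation grows linearly with the vocabulary size:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Toy sketch: one-hot encoding a tiny vocabulary\n",
        "import numpy as np\n",
        "\n",
        "toy_vocab = [\"i\", \"love\", \"nlp\"]\n",
        "token_to_index = {token: i for i, token in enumerate(toy_vocab)}\n",
        "\n",
        "def one_hot(token):\n",
        "    vector = np.zeros(len(toy_vocab)) # one entry per vocabulary word\n",
        "    vector[token_to_index[token]] = 1.\n",
        "    return vector\n",
        "\n",
        "print (one_hot(\"love\")) # [0. 1. 0.]"
      ],
      "execution_count": 0,
      "outputs": []
    },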
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iOP-CR39jen0",
        "colab_type": "text"
      },
      "source": [
        "# Overview"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yN73ZCCnjezh",
        "colab_type": "text"
      },
      "source": [
        "* **Objective:**  Represent tokens in text with embeddings that capture the intrinsic semantic relationships.\n",
        "* **Advantages:** \n",
        "    * Low-dimensionality while capturing relationships.\n",
        "    * Interpretable token representations.\n",
        "* **Disadvantages:** None\n",
        "* **Miscellaneous:** There are lots of pretrained embeddings to choose from but you can also train your own from scratch."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oSOqIKzckhfc",
        "colab_type": "text"
      },
      "source": [
        "# Set up"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "vSPqM7rrkhE8",
        "colab_type": "code",
        "outputId": "6effb6f9-15e9-439d-be34-807372e04904",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Use TensorFlow 2.x\n",
        "%tensorflow_version 2.x"
      ],
      "execution_count": 1,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "TensorFlow 2.x selected.\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "wKLO2_LM3RH_",
        "colab_type": "code",
        "outputId": "c351bc10-0c82-4eb5-fb9e-6c870c591b94",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "import os\n",
        "import numpy as np\n",
        "import tensorflow as tf\n",
        "print(\"GPU Available: \", tf.test.is_gpu_available())"
      ],
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "GPU Available:  True\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0Zwz1m7mknyd",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Arguments\n",
        "SEED = 1234\n",
        "SHUFFLE = True\n",
        "FILTERS = \"!\\\"'#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\"\n",
        "LOWER = True\n",
        "CHAR_LEVEL = False"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HgEiBe6DkpXV",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Set seed for reproducibility\n",
        "np.random.seed(SEED)\n",
        "tf.random.set_seed(SEED)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yjIUhBwxkBHc",
        "colab_type": "text"
      },
      "source": [
        "# Learning embeddings"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rozFTf06ji1b",
        "colab_type": "text"
      },
      "source": [
        "The main idea of embeddings is to have fixed-length representations for the tokens in a text regardless of the number of tokens in the vocabulary. So instead of each token representation having the shape [1 X V] where V is the vocab size, each token now has the shape [1 X D] where D is the embedding size (usually 50, 100, 200, or 300). The numbers in the representation will no longer be 0s and 1s but rather floats that represent that token in a D-dimensional latent space. If the embeddings really did capture the relationship between tokens, then we should be able to inspect this latent space and confirm known relationships (we'll do this soon).\n",
        "\n",
        "But how do we learn the embeddings in the first place? The intuition behind embeddings is that the definition of a token doesn't depend on the token itself but on its context. There are several different ways of doing this:\n",
        "\n",
        "1. Given the words in the context, predict the target word (CBOW - continuous bag of words).\n",
        "2. Given the target word, predict the context word (skip-gram).\n",
        "3. Given a sequence of words, predict the next word (LM - language modeling).\n",
        "\n",
        "All of these approaches involve creating data to train our model on. Every word in a sentence becomes the target word, and the context words are determined by a window. In the image below (skip-gram), the window size is 2 (2 words to the left and right of the target word). We repeat this for every sentence in our corpus, and this results in our training data for the unsupervised task. This is an unsupervised learning technique since we don't have official labels for contexts. The idea is that similar target words will appear with similar contexts, and we can learn this relationship by repeatedly training our model with (context, target) pairs.\n",
        "\n",
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/basic_ml/11_Embeddings/skipgram.png\" width=\"600\">\n",
        "\n",
        "We can learn embeddings using any of these approaches above and some work better than others. You can inspect the learned embeddings but the best way to choose an approach is to empirically validate the performance on a supervised task."
      ]
    },
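    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The (context, target) pair generation described above can be sketched in a few lines of plain Python (a simplified illustration, not gensim's internals):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Toy sketch: skip-gram (target, context) pairs with a window of 2\n",
        "def skipgram_pairs(tokens, window=2):\n",
        "    pairs = []\n",
        "    for i, target in enumerate(tokens):\n",
        "        # context = up to `window` words on each side of the target\n",
        "        for j in range(max(0, i-window), min(len(tokens), i+window+1)):\n",
        "            if j != i:\n",
        "                pairs.append((target, tokens[j]))\n",
        "    return pairs\n",
        "\n",
        "print (skipgram_pairs([\"the\", \"boy\", \"wore\", \"a\", \"hat\"])[:4])"
      ],
      "execution_count": 0,
      "outputs": []
    },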
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BZhlrMM1uaJ6",
        "colab_type": "text"
      },
      "source": [
        "# Word2Vec"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "F47IiPgUupAk",
        "colab_type": "text"
      },
      "source": [
        "We can learn embeddings by creating our models in TensorFlow but instead, we're going to use a library that specializes in embeddings and topic modeling called [Gensim](https://radimrehurek.com/gensim/)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LVvJNkSKlWzk",
        "colab_type": "code",
        "outputId": "d4da70fb-5fa6-497e-b454-a768d1886714",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 51
        }
      },
      "source": [
        "import gensim\n",
        "from gensim.models import KeyedVectors\n",
        "from gensim.models import FastText\n",
        "from gensim.test.utils import get_tmpfile\n",
        "import nltk; nltk.download('punkt')\n",
        "from tensorflow.keras.preprocessing.text import text_to_word_sequence\n",
        "import urllib\n",
        "import warnings; warnings.filterwarnings('ignore')"
      ],
      "execution_count": 5,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "[nltk_data] Downloading package punkt to /root/nltk_data...\n",
            "[nltk_data]   Unzipping tokenizers/punkt.zip.\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "kYKmeuv6kJAD",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Arguments\n",
        "DATA_FILE = 'harrypotter.txt'\n",
        "EMBEDDING_DIM = 100\n",
        "WINDOW = 5\n",
        "MIN_COUNT = 3 # Ignores all words with total frequency lower than this\n",
        "SKIP_GRAM = 1 # 0 = CBOW\n",
        "NEGATIVE_SAMPLING = 20"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LegtLIr-lxxZ",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Download data from GitHub to the notebook's local drive\n",
        "url = \"https://raw.githubusercontent.com/practicalAI/practicalAI/master/data/harrypotter.txt\"\n",
        "response = urllib.request.urlopen(url)\n",
        "text = response.read()\n",
        "with open(DATA_FILE, 'wb') as fp:\n",
        "    fp.write(text)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "vF5D_nNjlx2d",
        "colab_type": "code",
        "outputId": "afb82844-2276-4542-d5f7-61dc5e83aaac",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 51
        }
      },
      "source": [
        "# Split text into sentences\n",
        "tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')\n",
        "with open(DATA_FILE, encoding='cp1252') as fp:\n",
        "    book = fp.read()\n",
        "sentences = tokenizer.tokenize(book)\n",
        "print (len(sentences))\n",
        "print (sentences[11])"
      ],
      "execution_count": 8,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "15640\n",
            "Snape nodded, but did not elaborate.\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NsZz5jfMlx0d",
        "colab_type": "code",
        "outputId": "9c65165f-06e1-4e7e-e223-b21b6823b401",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Preprocess sentences\n",
        "sentences = [text_to_word_sequence(\n",
        "    text=sentence,\n",
        "    filters=FILTERS,\n",
        "    lower=LOWER,\n",
        "    split=' ') for sentence in sentences]\n",
        "print (sentences[11])"
      ],
      "execution_count": 9,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "['snape', 'nodded', 'but', 'did', 'not', 'elaborate']\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VeszvcMOji4u",
        "colab_type": "text"
      },
      "source": [
        "When we have large vocabularies to learn embeddings for, things can get complex very quickly. Recall that backpropagation with softmax updates both the correct and incorrect class weights. This becomes a massive computation for every backward pass we do, so a workaround is to use [negative sampling](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/), which only updates the correct class and a few arbitrary incorrect classes (NEGATIVE_SAMPLING=20). We're able to do this because of the large amount of training data where we'll see the same word as the target class multiple times.\n",
        "\n"
      ]
    },
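    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A simplified sketch of the sampling idea (gensim actually draws negatives from a unigram distribution raised to the 3/4 power, which is omitted here):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Toy sketch: draw k arbitrary incorrect (negative) words for a target\n",
        "import random\n",
        "random.seed(1234)\n",
        "\n",
        "toy_vocab = [\"harry\", \"wand\", \"spell\", \"broom\", \"castle\", \"owl\"]\n",
        "\n",
        "def negative_samples(target, k=3):\n",
        "    candidates = [w for w in toy_vocab if w != target]\n",
        "    return random.sample(candidates, k)\n",
        "\n",
        "print (negative_samples(\"wand\", k=3))"
      ],
      "execution_count": 0,
      "outputs": []
    },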
    {
      "cell_type": "code",
      "metadata": {
        "id": "Ha3I2oSsmhJa",
        "colab_type": "code",
        "outputId": "c7d41469-788d-4f76-dbec-2d7994634d29",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Super fast because of optimized C code under the hood\n",
        "w2v = gensim.models.Word2Vec(sentences=sentences, size=EMBEDDING_DIM, \n",
        "                             window=WINDOW, min_count=MIN_COUNT, \n",
        "                             sg=SKIP_GRAM, negative=NEGATIVE_SAMPLING)\n",
        "print (w2v)"
      ],
      "execution_count": 10,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Word2Vec(vocab=4963, size=100, alpha=0.025)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Cl6oJv8jmhHE",
        "colab_type": "code",
        "outputId": "95b772d1-e79d-46ae-a09a-c959679c0948",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 374
        }
      },
      "source": [
        "# Vector for each word\n",
        "w2v.wv.get_vector(\"potter\")"
      ],
      "execution_count": 11,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "array([ 0.15084217,  0.13705796,  0.23965293, -0.02775109, -0.12870483,\n",
              "        0.10343612,  0.11936715,  0.06283281,  0.32566255,  0.2959624 ,\n",
              "        0.4694637 ,  0.04122435,  0.09656719, -0.05427365, -0.10788797,\n",
              "       -0.02175392,  0.57813907,  0.04841102,  0.39841792, -0.01750858,\n",
              "        0.0592449 , -0.10498977, -0.29645777, -0.20408414, -0.063689  ,\n",
              "       -0.10009151, -0.35133523, -0.0872144 , -0.24457319,  0.29759392,\n",
              "       -0.4093383 , -0.37275356,  0.0440071 ,  0.03008361, -0.24179529,\n",
              "       -0.08881909, -0.13796206, -0.40826833, -0.01125353, -0.3181275 ,\n",
              "       -0.04164799,  0.05872981, -0.03018922,  0.0534426 ,  0.26192155,\n",
              "       -0.30446118, -0.30542514,  0.26205966,  0.3725973 ,  0.24522388,\n",
              "       -0.13399486,  0.0712282 , -0.5862857 ,  0.09795435, -0.47784102,\n",
              "       -0.03435729,  0.39924994,  0.20377865,  0.22092214, -0.19310077,\n",
              "        0.10236361,  0.01111186,  0.10014018,  0.05529214,  0.19012617,\n",
              "        0.00109139,  0.14768417,  0.16705877, -0.0500677 , -0.13725002,\n",
              "       -0.01225467, -0.47667435,  0.0644568 , -0.13771257,  0.10909976,\n",
              "       -0.04351163,  0.26104054,  0.42997754,  0.23504022, -0.3976249 ,\n",
              "       -0.15433961,  0.25122407,  0.0413951 ,  0.22834083,  0.15248089,\n",
              "        0.34259176,  0.3136082 , -0.17885484,  0.53498435,  0.13894436,\n",
              "       -0.2155322 , -0.18140729, -0.16036892, -0.01318709, -0.27774736,\n",
              "        0.5554768 ,  0.28562534, -0.11389014, -0.19887821,  0.1654813 ],\n",
              "      dtype=float32)"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 11
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DyuLX9DTnLvM",
        "colab_type": "code",
        "outputId": "acd7a885-cc2f-4611-d49a-c9ba533a57f5",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 102
        }
      },
      "source": [
        "# Get nearest neighbors (excluding itself)\n",
        "w2v.wv.most_similar(positive=\"scar\", topn=5)"
      ],
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[('pain', 0.9374247789382935),\n",
              " ('forehead', 0.9309570789337158),\n",
              " ('mouth', 0.9206055402755737),\n",
              " ('shaking', 0.9201192259788513),\n",
              " ('burning', 0.9184731841087341)]"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 12
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "YT7B0KRVTFew",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Saving and loading\n",
        "w2v.wv.save_word2vec_format('model.bin', binary=True)\n",
        "w2v = KeyedVectors.load_word2vec_format('model.bin', binary=True)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JZXVP5vfuiD5",
        "colab_type": "text"
      },
      "source": [
        "# FastText"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uvuoeWYMuqsa",
        "colab_type": "text"
      },
      "source": [
        "What happens when a word doesn't exist in our vocabulary? We could assign an UNK token, which is used for all OOV (out of vocabulary) words, or we could use [FastText](https://radimrehurek.com/gensim/models/fasttext.html), which uses character-level n-grams to embed a word. This helps embed rare words, misspelled words, and also words that don't exist in our corpus but are similar to words in our corpus."
      ]
    },
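    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A sketch of the character n-grams FastText composes a word from (the real model pads words with boundary symbols and uses n-gram sizes from 3 to 6 by default):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Toy sketch: character trigrams with word boundary symbols\n",
        "def char_ngrams(word, n=3):\n",
        "    padded = \"<\" + word + \">\"\n",
        "    return [padded[i:i+n] for i in range(len(padded)-n+1)]\n",
        "\n",
        "print (char_ngrams(\"scar\"))     # ['<sc', 'sca', 'car', 'ar>']\n",
        "print (char_ngrams(\"scarring\")) # shares n-grams with \"scar\""
      ],
      "execution_count": 0,
      "outputs": []
    },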
    {
      "cell_type": "code",
      "metadata": {
        "id": "eTNW4Mfgrpo0",
        "colab_type": "code",
        "outputId": "622c9d6e-7d0d-4fe2-cb79-9d11dfe8138a",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Super fast because of optimized C code under the hood\n",
        "ft = gensim.models.FastText(sentences=sentences, size=EMBEDDING_DIM, \n",
        "                            window=WINDOW, min_count=MIN_COUNT, \n",
        "                            sg=SKIP_GRAM, negative=NEGATIVE_SAMPLING)\n",
        "print (ft)"
      ],
      "execution_count": 14,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "FastText(vocab=4963, size=100, alpha=0.025)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LbA4vU5uxiw3",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# This word doesn't exist so the word2vec model will error out\n",
        "# w2v.wv.most_similar(positive=\"scarring\", topn=5)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "eRG30aE4sMjt",
        "colab_type": "code",
        "outputId": "d3416d80-97ce-4c4d-96a2-223948ba57ad",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 102
        }
      },
      "source": [
        "# FastText will use n-grams to embed an OOV word\n",
        "ft.wv.most_similar(positive=\"scarring\", topn=5)"
      ],
      "execution_count": 16,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[('sparkling', 0.9798979759216309),\n",
              " ('trembling', 0.9788705110549927),\n",
              " ('spiraling', 0.9761584997177124),\n",
              " ('lightning', 0.9757795929908752),\n",
              " ('fluttering', 0.9737103581428528)]"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 16
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7SE5fPMUnLyP",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Saving and loading\n",
        "ft.wv.save('model.bin')\n",
        "ft = KeyedVectors.load('model.bin')"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "67UmjtK0pF9X",
        "colab_type": "text"
      },
      "source": [
        "# Pretrained embeddings"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Xm1GPn4spF6x",
        "colab_type": "text"
      },
      "source": [
        "We can learn embeddings from scratch using one of the approaches above, but we can also leverage pretrained embeddings that have been trained on millions of documents. Popular ones include Word2Vec (skip-gram) and GloVe (global word-word co-occurrence). We can validate that these embeddings captured meaningful semantic relationships by confirming known relationships in the embedding space."
      ]
    },
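    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The classic check is vector arithmetic: (king - man) + woman should land near queen. A toy sketch with made-up 2-dimensional vectors (real embeddings are higher-dimensional, but the arithmetic is identical):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Toy sketch: analogy arithmetic with cosine similarity\n",
        "import numpy as np\n",
        "\n",
        "def cosine(a, b):\n",
        "    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))\n",
        "\n",
        "# made-up vectors: dim 0 ~ \"royalty\", dim 1 ~ \"maleness\"\n",
        "king, man = np.array([0.9, 0.8]), np.array([0.1, 0.8])\n",
        "queen, woman = np.array([0.9, 0.1]), np.array([0.1, 0.1])\n",
        "\n",
        "print (cosine(king - man + woman, queen)) # ~1.0"
      ],
      "execution_count": 0,
      "outputs": []
    },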
    {
      "cell_type": "code",
      "metadata": {
        "id": "Hh42Mb4lLbuB",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from gensim.scripts.glove2word2vec import glove2word2vec\n",
        "from io import BytesIO\n",
        "import matplotlib.pyplot as plt\n",
        "from sklearn.decomposition import PCA\n",
        "from urllib.request import urlopen\n",
        "from zipfile import ZipFile"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CNxNo8eTMIdi",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ANfQHxGrMKTe",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def plot_embeddings(words, embeddings, pca_results):\n",
        "    for word in words:\n",
        "        index = embeddings.index2word.index(word)\n",
        "        plt.scatter(pca_results[index, 0], pca_results[index, 1])\n",
        "        plt.annotate(word, xy=(pca_results[index, 0], pca_results[index, 1]))\n",
        "    plt.show()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PmYFnmaVMKYe",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "m9gxHJA9M8hK",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Arguments\n",
        "EMBEDDING_DIM = 100"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ZW9Qtkz3LfdY",
        "colab_type": "code",
        "outputId": "110290ee-0370-4982-c7a3-90c2b5f000cc",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 85
        }
      },
      "source": [
        "# Download and unzip the GloVe embeddings (may take ~3-5 minutes)\n",
        "resp = urlopen('http://nlp.stanford.edu/data/glove.6B.zip')\n",
        "zipfile = ZipFile(BytesIO(resp.read()))\n",
        "zipfile.namelist()"
      ],
      "execution_count": 21,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "['glove.6B.50d.txt',\n",
              " 'glove.6B.100d.txt',\n",
              " 'glove.6B.200d.txt',\n",
              " 'glove.6B.300d.txt']"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 21
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bWnVBrOaLjIC",
        "colab_type": "code",
        "outputId": "b984f539-1558-4967-cd3e-0391f908c566",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Write embeddings to file\n",
        "embeddings_file = 'glove.6B.{0}d.txt'.format(EMBEDDING_DIM)\n",
        "zipfile.extract(embeddings_file)"
      ],
      "execution_count": 22,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'/content/glove.6B.100d.txt'"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 22
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qFLyIqIxrUIs",
        "colab_type": "code",
        "outputId": "88dd85bd-29cf-4fea-dfa9-d527e4fe5c04",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 323
        }
      },
      "source": [
        "# Preview of the GloVe embeddings file\n",
        "with open(embeddings_file, 'r') as fp:\n",
        "    line = next(fp)\n",
        "    values = line.split()\n",
        "    word = values[0]\n",
        "    embedding = np.asarray(values[1:], dtype='float32')\n",
        "    print (f\"word: {word}\")\n",
        "    print (f\"embedding:\\n{embedding}\")\n",
        "    print (f\"embedding dim: {len(embedding)}\")"
      ],
      "execution_count": 23,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "word: the\n",
            "embedding:\n",
            "[-0.038194 -0.24487   0.72812  -0.39961   0.083172  0.043953 -0.39141\n",
            "  0.3344   -0.57545   0.087459  0.28787  -0.06731   0.30906  -0.26384\n",
            " -0.13231  -0.20757   0.33395  -0.33848  -0.31743  -0.48336   0.1464\n",
            " -0.37304   0.34577   0.052041  0.44946  -0.46971   0.02628  -0.54155\n",
            " -0.15518  -0.14107  -0.039722  0.28277   0.14393   0.23464  -0.31021\n",
            "  0.086173  0.20397   0.52624   0.17164  -0.082378 -0.71787  -0.41531\n",
            "  0.20335  -0.12763   0.41367   0.55187   0.57908  -0.33477  -0.36559\n",
            " -0.54857  -0.062892  0.26584   0.30205   0.99775  -0.80481  -3.0243\n",
            "  0.01254  -0.36942   2.2167    0.72201  -0.24978   0.92136   0.034514\n",
            "  0.46745   1.1079   -0.19358  -0.074575  0.23353  -0.052062 -0.22044\n",
            "  0.057162 -0.15806  -0.30798  -0.41625   0.37972   0.15006  -0.53212\n",
            " -0.2055   -1.2526    0.071624  0.70565   0.49744  -0.42063   0.26148\n",
            " -1.538    -0.30223  -0.073438 -0.28312   0.37104  -0.25217   0.016215\n",
            " -0.017099 -0.38984   0.87424  -0.72569  -0.51058  -0.52028  -0.1459\n",
            "  0.8278    0.27062 ]\n",
            "embedding dim: 100\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "9eD5doqFLjFY",
        "colab_type": "code",
        "outputId": "95389ce4-c69d-4411-f322-7ba961c3ed42",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Save GloVe embeddings to local directory in word2vec format\n",
        "word2vec_output_file = '{0}.word2vec'.format(embeddings_file)\n",
        "glove2word2vec(embeddings_file, word2vec_output_file)"
      ],
      "execution_count": 24,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "(400000, 100)"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 24
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "To4sx_1iMCX0",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Load embeddings (may take a minute)\n",
        "glove = KeyedVectors.load_word2vec_format(word2vec_output_file, binary=False)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "UEhBhvgHMEH9",
        "colab_type": "code",
        "outputId": "255ce1e5-9a10-45dc-9b39-df4556751047",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 102
        }
      },
      "source": [
        "# (king - man) + woman = ?\n",
        "glove.most_similar(positive=['woman', 'king'], negative=['man'], topn=5)"
      ],
      "execution_count": 26,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[('queen', 0.7698541283607483),\n",
              " ('monarch', 0.6843380928039551),\n",
              " ('throne', 0.6755735874176025),\n",
              " ('daughter', 0.6594556570053101),\n",
              " ('princess', 0.6520534753799438)]"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 26
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xR94AICkMEFV",
        "colab_type": "code",
        "outputId": "83ad7e1b-f250-40bb-f6a8-4be6537adbe8",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 102
        }
      },
      "source": [
        "# Get nearest neighbors (excluding itself)\n",
        "glove.wv.most_similar(positive=\"goku\", topn=5)"
      ],
      "execution_count": 27,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[('gohan', 0.7246542572975159),\n",
              " ('bulma', 0.6497020125389099),\n",
              " ('raistlin', 0.6443604230880737),\n",
              " ('skaar', 0.6316742897033691),\n",
              " ('guybrush', 0.6231324672698975)]"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 27
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "gseqjBmzMECq",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Reduce dimensionality for plotting\n",
        "X = glove[glove.vocab]\n",
        "pca = PCA(n_components=2)\n",
        "pca_results = pca.fit_transform(X)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LFQWGyncMHgK",
        "colab_type": "code",
        "outputId": "d0133aeb-6855-4fcd-a048-e7b7b12a1bbf",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 265
        }
      },
      "source": [
        "# Visualize\n",
        "plot_embeddings(words=[\"king\", \"queen\", \"man\", \"woman\"], \n",
        "                embeddings=glove, \n",
        "                pca_results=pca_results)"
      ],
      "execution_count": 29,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXoAAAD4CAYAAADiry33AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0\ndHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAVwElEQVR4nO3df5CV1Z3n8feXBrtRHNiVxiGgQSvg\nD2yRpnHHILFFV1AUZiqJI8XMxknUVOJEYyIx1hplTaUqE6z1RyqjwdGgpkQdMRSgjmhEJRoHGkFG\nUYTFnhU0glnSEYRI49k/uu1AC/Sv2325x/erqqvuc57zPOd76lIfj8/z3HsjpYQkKV+9il2AJKl7\nGfSSlDmDXpIyZ9BLUuYMeknKXO9iDTxw4MA0bNiwYg0vSSVpxYoV76WUKjtyTNGCftiwYdTV1RVr\neEkqSRHxnx09xks3kpQ5g16SMmfQS1LmDHpJypxBX+Lq6+s56aST9mqrq6vjiiuuKFJFkg42RXvq\nRt2npqaGmpqaYpch6SDhij4jGzZsYPTo0cyaNYvzzz8fgJkzZ/LVr36V2tpajj32WG677baW/j/8\n4Q857rjjOP3005k2bRo33XRTsUqX1I1c0Wdi7dq1XHTRRcyZM4etW7fy7LPPtux7/fXXWbJkCe+/\n/z7HHXcc3/jGN1i1ahXz5s3j5ZdfZteuXVRXVzNmzJgizkBSdzHoS9D8lZuY9cRa3v7DDv5ramDj\nO+8ydepUHnnkEU488USeeeaZvfpPnjyZ8vJyysvLGTRoEO+++y7PP/88U6dOpaKigoqKCi644ILi\nTEZSt/PSTYmZv3IT1z7yH2z6ww4S8O4fd/IB5VT8lyP5zW9+s89jysvLW16XlZXR2NjYQ9VKOhgY\n9CVm1hNr2bFr996NvcqoOPd73Hvvvdx///3tOs+4ceNYuHAhO3fuZNu2bSxatKgbqpV0MDDoS8zb\nf9ixz/Z3P4BFixZx880388c//rHN84wdO5YpU6Zw8sknc+6551JVVUX//v0LXa6kg0C09ZuxEXE3\ncD6wOaV00gH6jQV+C1yUUnq4rYFramqSX2rWceN+/DSb9hH2Qwb05fnvT+jQubZt20a/fv344IMP\n+MIXvsDs2bOprq4uVKmSukFErEgpdej56fas6OcAk9oYuAz4J2BxRwZXx82YeBx9+5Tt1da3Txkz\nJh7X4XNddtllnHLKKVRXV/PFL37RkJcy1eZTNyml5yJiWBvdvgXMA8YWoCYdwF+PHgLQ8tTNZwb0\nZcbE41raO6K91/MllbYuP14ZEUOAvwHOxKDvEX89ekingl3Sp1MhbsbeAlyTUvqorY4RcVlE1EVE\n3ZYtWwowtCSpLYX4wFQN8EBEAAwEzouIxpTS/NYdU0qzgdnQdDO2AGNLktrQ5aBPKR3z8euImAMs\n2lfIS5KKo82gj4i5QC0wMCI2AjcAfQBSSnd0a3WSpC5rz1M309p7spTSxV2qRpJUcH4yVpIyZ9BL\nUuYMeknKnEEvSZkz6CUpcwa9JGXOoJekzBn0kpQ5g16SMmfQS1LmDHpJypxBL0mZM+glKXMGvSRl\nzqCXpMwZ9JKUOYNekjJn0EtS5gx6ScqcQS9JmTPoJSlzBr0kZc6gl6TMGfSSlDmDXpIyZ9BLUuYM\neknKnEEvSZkz6CUpcwa9JGXOoJekzBn0kpQ5g16SMmfQS1LmDHpJylybQR8Rd0fE5oh4ZT/7p0fE\n6oj4j4h4ISJGFb5MSVJntWdFPweYdID9bwJnpJSqgB8CswtQlySpQHq31SGl9FxEDDvA/hf22HwR\nGNr1siRJhVLoa/RfAx7f386IuCwi6iKibsuWLQUeWpK0LwUL+og4k6agv2Z/fVJKs1NKNSmlmsrK\nykINLUk6gDYv3bRHRJwM/Atw
bkrp94U4pySpMLq8oo+Io4FHgL9PKb3R9ZIkSYXU5oo+IuYCtcDA\niNgI3AD0AUgp3QFcDxwB/HNEADSmlGq6q2BJUse056mbaW3svwS4pGAVSZIKyk/GSlLmDHpJypxB\nL0mZM+glKXMGvSRlzqCXpMwZ9JKUOYNekjJn0EtS5gx6ScqcQS9JmTPoJSlzBr0kZc6gl6TMGfSS\nlDmDXpIyZ9BLUuYMeknKnEEvST2svr6e448/nosvvpgRI0Ywffp0nnrqKcaNG8fw4cNZtmwZy5Yt\n47TTTmP06NF8/vOfZ+3atR8ffkREPBIR/xYR6yLiJ22NFyml7p3RftTU1KS6urqijC1JxVRfX8/n\nPvc5Vq5cyciRIxk7diyjRo3irrvuYsGCBfziF7/g3nvv5dBDD6V379489dRT3H777cybN4+IqAcS\nMBr4E7AWOD2l9Nb+xmvzx8ElSV336IZHufWlW/nd9t/Rf3t/Bg0dRFVVFQAjR47krLPOIiKoqqqi\nvr6ehoYGvvKVr7Bu3Toigl27du15ul+nlBoAImIN8Flgv0HvpRtJ6maPbniUmS/M5J3t75BIbP5g\nM1sbt/LohkcB6NWrF+Xl5S2vGxsb+cEPfsCZZ57JK6+8wsKFC9m5c+eep/zTHq9308ai3aCXpG52\n60u3snP3XkFNInHrS7fu95iGhgaGDBkCwJw5c7o0vkEvSd3sd9t/16F2gO9973tce+21jB49msbG\nxi6N781YSepm5zx8Du9sf+cT7YMPG8ziLy3u0LkiYkVKqaYjx7iil6RudmX1lVSUVezVVlFWwZXV\nV/bI+D51I0ndbPKxkwFanrr5y8P+kiurr2xp724GvST1gMnHTu6xYG/NSzeSlDmDXpIyZ9BLUuYM\neknKnEEvSZkz6CUpc20GfUTcHRGbI+KV/eyPiLgtItZHxOqIqC58mZKkzmrPin4OMOkA+88Fhjf/\nXQbc3vWyJEmF0mbQp5SeA/7fAbpMBe5NTV4EBkTE4EIVKEnqmkJcox/C3l94v7G57RMi4rKIqIuI\nui1bthRgaElSW3r0ZmxKaXZKqSalVFNZWdmTQ0vSp1Yhgn4TcNQe20Ob2yRJB4FCBP0C4H80P33z\nV0BDSumTX7wsSSqKNr+9MiLmArXAwIjYCNwA9AFIKd0BPAacB6wHPgD+obuKlSR1XJtBn1Ka1sb+\nBFxesIokSQXlJ2MlKXMGvSRlzqCXpMwZ9JKUOYNekjJn0EtS5gx6ScqcQS9JmTPoJSlzBr0kZc6g\nl6TMGfSSlDmDXpIyZ9BLUuYMeknKnEEvSZkz6CUpcwa9JGXOoJekzBn0kpQ5g16SMmfQS1LmDHpJ\nypxBL0mZM+glKXMGvSRlzqCXpMwZ9JKUOYNekjJn0EtS5gx6ScqcQS9JmTPoJSlzBr0kZa5dQR8R\nkyJibUSsj4jv72P/0RGxJCJWRsTqiDiv8KVKkjqjzaCPiDLgZ8C5wInAtIg4sVW364CHUkqjgYuA\nfy50oZKkzmnPiv5UYH1KaUNK6UPgAWBqqz4J+Ivm1/2BtwtXoiSpK3q3o88Q4K09tjcC/61Vn5nA\n4oj4FnAYcHZBqpMkdVmhbsZOA+aklIYC5wH3RcQnzh0Rl0VEXUTUbdmypUBDS5IOpD1Bvwk4ao/t\noc1te/oa8BBASum3QAUwsPWJUkqzU0o1KaWaysrKzlUsSeqQ9gT9cmB4RBwTEYfQdLN1Qas+/xc4\nCyAiTqAp6F2yS9JBoM2gTyk1Av8IPAG8RtPTNa9GxI0RMaW523eBSyPiZWAucHFKKXVX0ZKk9mvP\nzVhSSo8Bj7Vqu36P12uAcYUtTZJUCH4yVpIyZ9BLUuYMeknKnEEvSZkz6CUpcwa9JGXOoJekzBn0\nkpQ5g16SMmfQS1LmDHpJypxBL0mZM+glKXMGvSRlzqCXpMwZ9FIPmjVrFrfddhsAV111FRMmTA
Dg\n6aefZvr06cydO5eqqipOOukkrrnmmpbj+vXrx4wZMxg5ciRnn302y5Yto7a2lmOPPZYFC5p+8K2+\nvp7x48dTXV1NdXU1L7zwAgDPPPMMtbW1fOlLX+L4449n+vTp+LtAny4GvdSDxo8fz9KlSwGoq6tj\n27Zt7Nq1i6VLlzJixAiuueYann76aVatWsXy5cuZP38+ANu3b2fChAm8+uqrHH744Vx33XU8+eST\n/OpXv+L665t+A2jQoEE8+eSTvPTSSzz44INcccUVLeOuXLmSW265hTVr1rBhwwaef/75np+8isag\nl3pAw8KFrJtwFode/A+8uGgRbz34IOXl5Zx22mnU1dWxdOlSBgwYQG1tLZWVlfTu3Zvp06fz3HPP\nAXDIIYcwadIkAKqqqjjjjDPo06cPVVVV1NfXA7Br1y4uvfRSqqqq+PKXv8yaNWtaxj/11FMZOnQo\nvXr14pRTTmk5Rp8OBr3UzRoWLuSdH1xP49tv0wcY0qsXd3znu1QPHMj48eNZsmQJ69evZ9iwYfs9\nR58+fYgIAHr16kV5eXnL68bGRgBuvvlmjjzySF5++WXq6ur48MMPW47/uD9AWVlZyzH6dDDopW62\n+eZbSDt3tmyP6duXuze/y4mvr2X8+PHccccdjB49mlNPPZVnn32W9957j927dzN37lzOOOOMdo/T\n0NDA4MGD6dWrF/fddx+7d+/ujumoBBn0UjdrfOedvbbH9D2U9xobqdqxgyOPPJKKigrGjx/P4MGD\n+fGPf8yZZ57JqFGjGDNmDFOnTm33ON/85je55557GDVqFK+//jqHHXZYoaeiEhXFuvteU1OT6urq\nijK21JPWTTiLxrff/kR77898huFP/7oIFamURcSKlFJNR45xRS91s0FXfZuoqNirLSoqGHTVt4tU\nkT5tehe7ACl3/S+4AGi6Vt/4zjv0HjyYQVd9u6Vd6m4GvdQD+l9wgcGuovHSjSRlzqCXpMwZ9JKU\nOYNekjJn0EtS5gx6ScqcQS9JmTPoJSlzBr0kZc6gl6TMtSvoI2JSRKyNiPUR8f399LkwItZExKsR\ncX9hy5QkdVab33UTEWXAz4D/DmwElkfEgpTSmj36DAeuBcallLZGxKDuKliS1DHtWdGfCqxPKW1I\nKX0IPAC0/jWES4GfpZS2AqSUNhe2TElSZ7Un6IcAb+2xvbG5bU8jgBER8XxEvBgRk/Z1ooi4LCLq\nIqJuy5YtnatYktQhhboZ2xsYDtQC04A7I2JA604ppdkppZqUUk1lZWWBhpYkHUh7gn4TcNQe20Ob\n2/a0EViQUtqVUnoTeIOm4JckFVl7gn45MDwijomIQ4CLgAWt+synaTVPRAyk6VLOhgLWKUnqpDaD\nPqXUCPwj8ATwGvBQSunViLgxIqY0d3sC+H1ErAGWADNSSr/vrqIlSe0XKaWiDFxTU5Pq6uqKMrYk\nlaqIWJFSqunIMX4yVpIyZ9BLUuYMeknKnEEvSZkz6CUpcwa9JGWupIP+Rz/6ESNGjOD0009n2rRp\n3HTTTdTW1vLxY5vvvfcew4YNA2D37t3MmDGDsWPHcvLJJ/Pzn/+85TyzZs1qab/hhhsAqK+v54QT\nTuDSSy9l5MiRnHPOOezYsaPH5yhJXVWyQb9ixQoeeOABVq1axWOPPcby5csP2P+uu+6if//+LF++\nnOXLl3PnnXfy5ptvsnjxYtatW8eyZctYtWoVK1as4LnnngNg3bp1XH755bz66qsMGDCAefPm9cTU\nJKmg2vw++oPK6ofg1zdCw0aWrurL33x+HIceeigAU6ZMOeChixcvZvXq1Tz88MMANDQ0sG7dOhYv\nXszixYsZPXo0ANu2bWPdunUcffTRHHPMMZxyyikAjBkzhvr6+u6bmyR1k9IJ+tUPwcIrYFfz5ZOd\nW+GNf2tqP/nClm69e/fmo48+auqyc2dLe0qJn/70p0ycOH
Gv0z7xxBNce+21fP3rX9+rvb6+nvLy\n8pbtsrIyL91IKkmlc+nm1zf+OeSBL3y2N/PX7GDH4zN5//33WbhwIQDDhg1jxYoVAC2rd4CJEydy\n++23s2vXLgDeeOMNtm/fzsSJE7n77rvZtm0bAJs2bWLzZn83RVI+SmdF37Bxr83qwWX87cg+jPrJ\nWgYtPJexY8cCcPXVV3PhhRcye/ZsJk+e3NL/kksuob6+nurqalJKVFZWMn/+fM455xxee+01Tjvt\nNAD69evHL3/5S8rKynpubpLUjUrnS81uPgka3vpke/+j4KpXmDlzJv369ePqq68uXJGSdJDJ+0vN\nzroe+vTdu61P36Z2SdJ+lc6lm49vuDY/dUP/oU0h39w+c+bM4tUmSQex0gl6aAr1PZ6wkSS1rXQu\n3UiSOsWgl6TMGfSSlDmDXpIyZ9BLUuaK9oGpiNgC/Gc3nX4g8F43nbvYcp2b8yo9uc7tYJ/XZ1NK\nlR05oGhB350ioq6jnxwrFbnOzXmVnlznluO8vHQjSZkz6CUpc7kG/exiF9CNcp2b8yo9uc4tu3ll\neY1ekvRnua7oJUnNDHpJylzJBn1EVETEsoh4OSJejYj/tZ9+F0bEmuY+9/d0nZ3RnrlFxNERsSQi\nVkbE6og4rxi1dkZElDXXvWgf+8oj4sGIWB8R/x4Rw3q+ws5pY17faf53uDoifh0Rny1GjZ1xoHnt\n0eeLEZEioqQeS2xrbqWYH/tSWl9TvLc/ARNSStsiog/wm4h4PKX04scdImI4cC0wLqW0NSIGFavY\nDmpzbsB1wEMppdsj4kTgMWBYEWrtjCuB14C/2Me+rwFbU0qfi4iLgH8C/rYni+uCA81rJVCTUvog\nIr4B/IQ85kVEHN7c5997sqgC2e/cSjg/PqFkV/SpybbmzT7Nf63vLF8K/CyltLX5mJL41e92zi3x\n53+c/YG3e6i8LomIocBk4F/202UqcE/z64eBsyIieqK2rmhrXimlJSmlD5o3XwSG9lRtXdGO9wvg\nhzT9B3lnjxRVIO2YW0nmx76UbNBDy/92rQI2A0+mlFqvKEYAIyLi+Yh4MSIm9XyVndOOuc0E/i4i\nNtK0mv9WD5fYWbcA3wM+2s/+IcBbACmlRqABOKJnSuuStua1p68Bj3dvOQVzwHlFRDVwVErp0R6t\nqjDaes9KNj9aK+mgTyntTimdQtPq6NSIOKlVl97AcKAWmAbcGREDerbKzmnH3KYBc1JKQ4HzgPsi\n4qB+PyPifGBzSmlFsWsppI7MKyL+DqgBZnV7YV3U1rya/739b+C7PVpYAbTzPSvZ/GjtoA6G9kop\n/QFYArT+L+5GYEFKaVdK6U3gDZreuJJxgLl9DXiouc9vgQqavozpYDYOmBIR9cADwISI+GWrPpuA\nowAiojdNl6V+35NFdkJ75kVEnA38T2BKSulPPVtip7Q1r8OBk4Bnmvv8FbCgRG7Ituc9K/n8aJFS\nKsk/oBIY0Py6L7AUOL9Vn0nAPc2vB9J0SeCIYtdeoLk9Dlzc/PoEmq7RR7Fr78Aca4FF+2i/HLij\n+fVFNN1wLnq9BZjXaOD/AMOLXWMh59WqzzM03XAuer0Fes9KMj/29VfKK/rBwJKIWA0sp+k69qKI\nuDEipjT3eQL4fUSsoWlVPCOldLCvDqF9c/sucGlEvAzMpSn0S/Jjzq3mdRdwRESsB74DfL94lXVN\nq3nNAvoB/xoRqyJiQRFL65JW88pKJvnxCX4FgiRlrpRX9JKkdjDoJSlzBr0kZc6gl6TMGfSSlDmD\nXpIyZ9BLUub+PznaPqUD7BNaAAAAAElFTkSuQmCC\n",
            "text/plain": [
              "<Figure size 432x288 with 1 Axes>"
            ]
          },
          "metadata": {
            "tags": []
          }
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "MzrZ2_RBMHdn",
        "colab_type": "code",
        "outputId": "6b9abd12-a444-411f-e80d-27a598a58ce5",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 102
        }
      },
      "source": [
        "# Bias in embeddings\n",
        "glove.most_similar(positive=['woman', 'doctor'], negative=['man'], topn=5)"
      ],
      "execution_count": 30,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[('nurse', 0.7735227346420288),\n",
              " ('physician', 0.7189429998397827),\n",
              " ('doctors', 0.6824328303337097),\n",
              " ('patient', 0.6750682592391968),\n",
              " ('dentist', 0.6726033687591553)]"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 30
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Ci4qkGddpF3t",
        "colab_type": "text"
      },
      "source": [
        "# Using Embeddings"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QdaK-gQ2z2Wi",
        "colab_type": "text"
      },
      "source": [
        "There are several ways to use embeddings:\n",
        "\n",
        "1. Use your own embeddings, trained on an unsupervised dataset.\n",
        "2. Use pretrained embeddings (GloVe, word2vec, etc.).\n",
        "3. Use randomly initialized embeddings that are learned during training.\n",
        "\n",
        "We will explore these options by revisiting our AGNews classification task."
      ]
    },
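    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The three options differ only in how the `Embedding` layer's weight matrix is initialized and whether it remains trainable. A minimal sketch (assuming a Keras `Embedding` layer and hypothetical sizes):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "from tensorflow.keras.layers import Embedding\n",
        "\n",
        "# Hypothetical sizes for illustration\n",
        "vocab_size = 10000\n",
        "embedding_dim = 100\n",
        "\n",
        "# Option 3: randomly initialized embeddings, learned during training\n",
        "random_emb = Embedding(input_dim=vocab_size, output_dim=embedding_dim)\n",
        "\n",
        "# Options 1 & 2: seed the layer with a precomputed matrix (own or pretrained);\n",
        "# `embedding_matrix` would be a (vocab_size, embedding_dim) array of vectors.\n",
        "# Set trainable=False to keep the pretrained vectors frozen.\n",
        "# pretrained_emb = Embedding(input_dim=vocab_size, output_dim=embedding_dim,\n",
        "#                            weights=[embedding_matrix], trainable=False)"
      ],
      "execution_count": 0,
      "outputs": []
    },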
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rmvSOsB-CuJb",
        "colab_type": "text"
      },
      "source": [
        "# Set up"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "f-NnTT8LDjf3",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Arguments\n",
        "SEED = 1234\n",
        "SHUFFLE = True\n",
        "DATA_FILE = 'news.csv'\n",
        "INPUT_FEATURE = 'title'\n",
        "OUTPUT_FEATURE = 'category'\n",
        "FILTERS = \"!\\\"'#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\"\n",
        "LOWER = True\n",
        "CHAR_LEVEL = False\n",
        "TRAIN_SIZE = 0.7\n",
        "VAL_SIZE = 0.15\n",
        "TEST_SIZE = 0.15\n",
        "NUM_EPOCHS = 10\n",
        "BATCH_SIZE = 64\n",
        "EMBEDDING_DIM = 100\n",
        "NUM_FILTERS = 50\n",
        "FILTER_SIZES = [2, 3, 4]\n",
        "HIDDEN_DIM = 100\n",
        "DROPOUT_P = 0.1\n",
        "LEARNING_RATE = 1e-3\n",
        "EARLY_STOPPING_CRITERIA = 3"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c69z9wpJ56nE",
        "colab_type": "text"
      },
      "source": [
        "# Data"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2V_nEp5G58M0",
        "colab_type": "text"
      },
      "source": [
        "We will download the [AG News dataset](http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html), which consists of 120,000 text samples from 4 unique classes ('Business', 'Sci/Tech', 'Sports', 'World')."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "y3qKSoEe57na",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import pandas as pd\n",
        "import re\n",
        "import urllib"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "cGQo98566GIV",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Download data from GitHub to the notebook's local drive\n",
        "url = \"https://raw.githubusercontent.com/practicalAI/practicalAI/master/data/news.csv\"\n",
        "response = urllib.request.urlopen(url)\n",
        "data = response.read()\n",
        "with open(DATA_FILE, 'wb') as fp:\n",
        "    fp.write(data)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "dG_Oltib6G-9",
        "colab_type": "code",
        "outputId": "8dd2b59f-0681-4b35-f97a-213f70d202fa",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 204
        }
      },
      "source": [
        "# Load data\n",
        "df = pd.read_csv(DATA_FILE, header=0)\n",
        "X = df[INPUT_FEATURE].values\n",
        "y = df[OUTPUT_FEATURE].values\n",
        "df.head(5)"
      ],
      "execution_count": 34,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/html": [
              "<div>\n",
              "<style scoped>\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "</style>\n",
              "<table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr style=\"text-align: right;\">\n",
              "      <th></th>\n",
              "      <th>title</th>\n",
              "      <th>category</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th>0</th>\n",
              "      <td>Wall St. Bears Claw Back Into the Black (Reuters)</td>\n",
              "      <td>Business</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>1</th>\n",
              "      <td>Carlyle Looks Toward Commercial Aerospace (Reu...</td>\n",
              "      <td>Business</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>2</th>\n",
              "      <td>Oil and Economy Cloud Stocks' Outlook (Reuters)</td>\n",
              "      <td>Business</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>3</th>\n",
              "      <td>Iraq Halts Oil Exports from Main Southern Pipe...</td>\n",
              "      <td>Business</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>4</th>\n",
              "      <td>Oil prices soar to all-time record, posing new...</td>\n",
              "      <td>Business</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n",
              "</div>"
            ],
            "text/plain": [
              "                                               title  category\n",
              "0  Wall St. Bears Claw Back Into the Black (Reuters)  Business\n",
              "1  Carlyle Looks Toward Commercial Aerospace (Reu...  Business\n",
              "2    Oil and Economy Cloud Stocks' Outlook (Reuters)  Business\n",
              "3  Iraq Halts Oil Exports from Main Southern Pipe...  Business\n",
              "4  Oil prices soar to all-time record, posing new...  Business"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 34
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hxo6RKCQ71dl",
        "colab_type": "text"
      },
      "source": [
        "# Split data"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "eS6kCcfY6IHE",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import collections\n",
        "from sklearn.model_selection import train_test_split"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kfr_NTy8WYt3",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "-ZFVitqVWY4J",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def train_val_test_split(X, y, val_size, test_size, shuffle):\n",
        "    \"\"\"Split data into train/val/test datasets.\n",
        "    \"\"\"\n",
        "    X_train, X_test, y_train, y_test = train_test_split(\n",
        "        X, y, test_size=test_size, stratify=y, shuffle=shuffle)\n",
        "    X_train, X_val, y_train, y_val = train_test_split(\n",
        "        X_train, y_train, test_size=val_size, stratify=y_train, shuffle=shuffle)\n",
        "    return X_train, X_val, X_test, y_train, y_val, y_test"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8XIdYU_n7536",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "kqiQd2j_76gP",
        "colab_type": "code",
        "outputId": "a56ce24b-c42d-477e-aca3-acef34b388bd",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 119
        }
      },
      "source": [
        "# Create data splits\n",
        "X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(\n",
        "    X=X, y=y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)\n",
        "class_counts = dict(collections.Counter(y))\n",
        "print (f\"X_train: {X_train.shape}, y_train: {y_train.shape}\")\n",
        "print (f\"X_val: {X_val.shape}, y_val: {y_val.shape}\")\n",
        "print (f\"X_test: {X_test.shape}, y_test: {y_test.shape}\")\n",
        "print (f\"X_train[0]: {X_train[0]}\")\n",
        "print (f\"y_train[0]: {y_train[0]}\")\n",
        "print (f\"Classes: {class_counts}\")"
      ],
      "execution_count": 37,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "X_train: (86700,), y_train: (86700,)\n",
            "X_val: (15300,), y_val: (15300,)\n",
            "X_test: (18000,), y_test: (18000,)\n",
            "X_train[0]: Last call for Jack Daniel #39;s?\n",
            "y_train[0]: Business\n",
            "Classes: {'Business': 30000, 'Sci/Tech': 30000, 'Sports': 30000, 'World': 30000}\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dIfmW7vJ8Jx1",
        "colab_type": "text"
      },
      "source": [
        "# Tokenizer"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JP4VCO0LAJUt",
        "colab_type": "text"
      },
      "source": [
        "Unlike the previous notebook, we will be processing our text at the word level (as opposed to the character level)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DHPAxkKR7736",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from tensorflow.keras.preprocessing.text import Tokenizer\n",
        "from tensorflow.keras.utils import to_categorical"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dAdGiy7cbUo4",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "dMg5QhVybVfL",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def untokenize(indices, tokenizer):\n",
        "    \"\"\"Untokenize a list of indices into a string.\"\"\"\n",
        "    return \" \".join([tokenizer.index_word[index] for index in indices])"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_BD3XPKF8L84",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "WcscM_vL8KvP",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Input vectorizer\n",
        "X_tokenizer = Tokenizer(filters=FILTERS,\n",
        "                        lower=LOWER,\n",
        "                        char_level=CHAR_LEVEL,\n",
        "                        oov_token='<UNK>')"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xV2JgpOA8PwO",
        "colab_type": "code",
        "outputId": "836c885c-f167-4742-8232-e22c0b6c9d26",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Fit only on train data\n",
        "X_tokenizer.fit_on_texts(X_train)\n",
        "vocab_size = len(X_tokenizer.word_index) + 1\n",
        "print (f\"# tokens: {vocab_size}\")"
      ],
      "execution_count": 41,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "# tokens: 29917\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ybb-YZSz8Qno",
        "colab_type": "code",
        "outputId": "0d2c8be5-1e43-44dc-f59a-bd8c63088eae",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 68
        }
      },
      "source": [
        "# Convert text to sequence of tokens\n",
        "original_text = X_train[0]\n",
        "X_train = np.array(X_tokenizer.texts_to_sequences(X_train))\n",
        "X_val = np.array(X_tokenizer.texts_to_sequences(X_val))\n",
        "X_test = np.array(X_tokenizer.texts_to_sequences(X_test))\n",
        "preprocessed_text = untokenize(X_train[0], X_tokenizer)\n",
        "print (f\"{original_text} \\n\\t→ {preprocessed_text} \\n\\t→ {X_train[0]}\")"
      ],
      "execution_count": 42,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Last call for Jack Daniel #39;s? \n",
            "\t→ last call for jack daniel 39 s \n",
            "\t→ [316, 314, 5, 6877, 10686, 4, 6]\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ORGuhjCf8TKh",
        "colab_type": "text"
      },
      "source": [
        "# LabelEncoder"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7aBBgzkW8Rxv",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from sklearn.preprocessing import LabelEncoder"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "z_jVCsl98U09",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ckM_MnQi8UTH",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Output vectorizer\n",
        "y_tokenizer = LabelEncoder()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0-FkxqCT8WUk",
        "colab_type": "code",
        "outputId": "f0929fb1-3617-44e1-f7fe-8ff4447c4ab6",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Fit on train data\n",
        "y_tokenizer = y_tokenizer.fit(y_train)\n",
        "classes = list(y_tokenizer.classes_)\n",
        "print (f\"classes: {classes}\")"
      ],
      "execution_count": 45,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "classes: ['Business', 'Sci/Tech', 'Sports', 'World']\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "yrLHd1i_8XAJ",
        "colab_type": "code",
        "outputId": "b7879bdf-dc44-452c-fded-6ed80c323954",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Convert labels to tokens\n",
        "y_train = y_tokenizer.transform(y_train)\n",
        "y_val = y_tokenizer.transform(y_val)\n",
        "y_test = y_tokenizer.transform(y_test)\n",
        "print (f\"y_train[0]: {y_train[0]}\")"
      ],
      "execution_count": 46,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "y_train[0]: 0\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DY91F44BR15z",
        "colab_type": "code",
        "outputId": "366c7c03-802f-4d46-a0b6-e70a97008ecf",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 51
        }
      },
      "source": [
        "# Class weights\n",
        "counts = np.bincount(y_train)\n",
        "class_weights = {i: 1.0/count for i, count in enumerate(counts)}\n",
        "print (f\"class counts: {counts},\\nclass weights: {class_weights}\")"
      ],
      "execution_count": 47,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "class counts: [21675 21675 21675 21675],\n",
            "class weights: {0: 4.61361014994233e-05, 1: 4.61361014994233e-05, 2: 4.61361014994233e-05, 3: 4.61361014994233e-05}\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eoWQk0hO9bK2",
        "colab_type": "text"
      },
      "source": [
        "# Generators"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GVxnbzgW8X1V",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import math\n",
        "from tensorflow.keras.preprocessing.sequence import pad_sequences\n",
        "from tensorflow.keras.utils import Sequence"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IMtHyqex9gVI",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1w6wVKJe9fxk",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "class DataGenerator(Sequence):\n",
        "    \"\"\"Custom data loader.\"\"\"\n",
        "    def __init__(self, X, y, batch_size, max_filter_size, shuffle=True):\n",
        "        self.X = X\n",
        "        self.y = y\n",
        "        self.batch_size = batch_size\n",
        "        self.max_filter_size = max_filter_size\n",
        "        self.shuffle = shuffle\n",
        "        self.on_epoch_end()\n",
        "\n",
        "    def __len__(self):\n",
        "        \"\"\"# of batches.\"\"\"\n",
        "        return math.ceil(len(self.X) / self.batch_size)\n",
        "\n",
        "    def __str__(self):\n",
        "        return (f\"<DataGenerator(\" \\\n",
        "                f\"batch_size={self.batch_size}, \" \\\n",
        "                f\"batches={len(self)}, \" \\\n",
        "                f\"shuffle={self.shuffle})>\")\n",
        "\n",
        "    def __getitem__(self, index):\n",
        "        \"\"\"Generate a batch.\"\"\"\n",
        "        # Gather indices for this batch\n",
        "        batch_indices = self.epoch_indices[\n",
        "            index * self.batch_size:(index+1)*self.batch_size]\n",
        "\n",
        "        # Generate batch data\n",
        "        X, y = self.create_batch(batch_indices=batch_indices)\n",
        "\n",
        "        return X, y\n",
        "\n",
        "    def on_epoch_end(self):\n",
        "        \"\"\"Create indices after each epoch.\"\"\"\n",
        "        self.epoch_indices = np.arange(len(self.X))\n",
        "        if self.shuffle:\n",
        "            np.random.shuffle(self.epoch_indices)\n",
        "\n",
        "    def create_batch(self, batch_indices):\n",
        "        \"\"\"Generate batch from indices.\"\"\"\n",
        "        # Get batch data\n",
        "        X = self.X[batch_indices]\n",
        "        y = self.y[batch_indices]\n",
        "\n",
        "        # Pad batch\n",
        "        max_seq_len = max(self.max_filter_size, max([len(x) for x in X]))\n",
        "        X = pad_sequences(X, padding=\"post\", maxlen=max_seq_len)\n",
        "\n",
        "        return X, y"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "u37JyFYV9ilS",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "5T8mVj9d9hNI",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Dataset generator\n",
        "training_generator = DataGenerator(X=X_train,\n",
        "                                   y=y_train,\n",
        "                                   batch_size=BATCH_SIZE,\n",
        "                                   max_filter_size=max(FILTER_SIZES),\n",
        "                                   shuffle=SHUFFLE)\n",
        "validation_generator = DataGenerator(X=X_val,\n",
        "                                     y=y_val,\n",
        "                                     batch_size=BATCH_SIZE,\n",
        "                                     max_filter_size=max(FILTER_SIZES),\n",
        "                                     shuffle=False)\n",
        "testing_generator = DataGenerator(X=X_test,\n",
        "                                  y=y_test,\n",
        "                                  batch_size=BATCH_SIZE,\n",
        "                                  max_filter_size=max(FILTER_SIZES),\n",
        "                                  shuffle=False)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "drbY5WDX9kcL",
        "colab_type": "code",
        "outputId": "443af08f-fc32-442c-fcfe-1ae0c63f0fae",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 68
        }
      },
      "source": [
        "print (f\"training_generator: {training_generator}\")\n",
        "print (f\"validation_generator: {validation_generator}\")\n",
        "print (f\"testing_generator: {testing_generator}\")"
      ],
      "execution_count": 51,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "training_generator: <DataGenerator(batch_size=64, batches=1355, shuffle=True)>\n",
            "validation_generator: <DataGenerator(batch_size=64, batches=240, shuffle=False)>\n",
            "testing_generator: <DataGenerator(batch_size=64, batches=282, shuffle=False)>\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pfhjWZRD94hK",
        "colab_type": "text"
      },
      "source": [
        "# Model"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eI5xEaMN-vT9",
        "colab_type": "text"
      },
      "source": [
        "Let's visualize the model's forward pass.\n",
        "\n",
        "1. We'll first tokenize our inputs (`batch_size`, `max_seq_len`).\n",
        "2. Then we'll embed our tokenized inputs (`batch_size`, `max_seq_len`, `embedding_dim`).\n",
        "3. We'll apply convolution via filters (`filter_size`, `vocab_size`, `num_filters`) followed by batch normalization. Our filters act as character level n-gram detecors. We have three different filter sizes (2, 3 and 4) and they will act as bi-gram, tri-gram and 4-gram feature extractors, respectivelyy. \n",
        "4. We'll apply 1D global max pooling which will extract the most relevant information from the feature maps for making the decision.\n",
        "5. We feed the pool outputs to a fully-connected (FC) layer (with dropout).\n",
        "6. We use one more FC layer with softmax to derive class probabilities. "
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zVmJGm8m-KIz",
        "colab_type": "text"
      },
      "source": [
        "<img src=\"https://raw.githubusercontent.com/practicalAI/images/master/basic_ml/11_Embeddings/forward_pass.png\" width=\"1000\">"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JrVDcLC9kNMq",
        "colab_type": "text"
      },
      "source": [
        "The `FILTER_SIZES` are [2, 3, 4] which effectively act as bi-gram, tri-gram and 4th-gram feature extractors when applied to our text."
      ]
    },
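    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "The shape transitions in steps 2-4 of the forward pass can be sketched with NumPy. This is an illustration only, using small hypothetical sizes and a single 'same'-padded filter size rather than our real hyperparameters:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import numpy as np\n",
        "\n",
        "# Hypothetical small sizes for illustration only\n",
        "batch_size, max_seq_len, embedding_dim = 2, 10, 8\n",
        "filter_size, num_filters = 3, 4\n",
        "\n",
        "# (2) embedded inputs: (batch_size, max_seq_len, embedding_dim)\n",
        "x_emb = np.random.randn(batch_size, max_seq_len, embedding_dim)\n",
        "\n",
        "# (3) one 'same'-padded conv with ReLU; weights are (filter_size, embedding_dim, num_filters)\n",
        "W = np.random.randn(filter_size, embedding_dim, num_filters)\n",
        "pad = filter_size // 2\n",
        "x_pad = np.pad(x_emb, ((0, 0), (pad, pad), (0, 0)))\n",
        "z = np.zeros((batch_size, max_seq_len, num_filters))\n",
        "for t in range(max_seq_len):\n",
        "    window = x_pad[:, t:t + filter_size, :]  # (batch_size, filter_size, embedding_dim)\n",
        "    z[:, t, :] = np.maximum(0, np.einsum('bfe,fen->bn', window, W))\n",
        "\n",
        "# (4) global max pool over the time axis: (batch_size, num_filters)\n",
        "pooled = z.max(axis=1)\n",
        "print(z.shape, pooled.shape)  # (2, 10, 4) (2, 4)"
      ],
      "execution_count": 0,
      "outputs": []
    },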
    {
      "cell_type": "code",
      "metadata": {
        "id": "T8oCutDJ-d1J",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from tensorflow.keras.layers import Concatenate\n",
        "from tensorflow.keras.layers import Conv1D\n",
        "from tensorflow.keras.layers import Dense\n",
        "from tensorflow.keras.layers import Dropout\n",
        "from tensorflow.keras.layers import Embedding\n",
        "from tensorflow.keras.layers import GlobalMaxPool1D\n",
        "from tensorflow.keras.layers import Input\n",
        "from tensorflow.keras.losses import SparseCategoricalCrossentropy\n",
        "from tensorflow.keras.models import Model\n",
        "from tensorflow.keras.optimizers import Adam"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6B2MewCdCeKC",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "UPP5ROd69mXC",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "class TextClassificationCNNModel(Model):\n",
        "    def __init__(self, vocab_size, embedding_dim, filter_sizes, num_filters,\n",
        "                 hidden_dim, dropout_p, num_classes, freeze_embeddings=False):\n",
        "        super(TextClassificationCNNModel, self).__init__()\n",
        "\n",
        "        # Embeddings\n",
        "        self.embedding = Embedding(input_dim=vocab_size,\n",
        "                                   output_dim=embedding_dim,\n",
        "                                   trainable=not freeze_embeddings)\n",
        "        # Convolutional filters\n",
        "        self.convs = []\n",
        "        self.pools = []\n",
        "        for filter_size in filter_sizes:\n",
        "            conv = Conv1D(filters=num_filters, kernel_size=filter_size, \n",
        "                          padding='same', activation='relu')\n",
        "            pool = GlobalMaxPool1D(data_format='channels_last')\n",
        "            self.convs.append(conv)\n",
        "            self.pools.append(pool)\n",
        "\n",
        "        # Concatenation\n",
        "        self.concat = Concatenate(axis=1)\n",
        "\n",
        "        # FC layers\n",
        "        self.fc1 = Dense(units=hidden_dim, activation='relu')\n",
        "        self.dropout = Dropout(rate=dropout_p)\n",
        "        self.fc2 = Dense(units=num_classes, activation='softmax')\n",
        "\n",
        "    def call(self, x_in, training=False):\n",
        "        \"\"\"Forward pass.\"\"\"\n",
        "\n",
        "        # Embed\n",
        "        x_emb = self.embedding(x_in)\n",
        "\n",
        "        # Convolutions\n",
        "        convs = []\n",
        "        for i in range(len(self.convs)):\n",
        "            z = self.convs[i](x_emb)\n",
        "            z = self.pools[i](z)\n",
        "            convs.append(z)\n",
        "\n",
        "        # Concatenate\n",
        "        z_cat = self.concat(convs)\n",
        "\n",
        "        # FC\n",
        "        z = self.fc1(z_cat)\n",
        "        if training:\n",
        "            z = self.dropout(z, training=training)\n",
        "        y_pred = self.fc2(z)\n",
        "\n",
        "        return y_pred\n",
        "\n",
        "    def sample(self, input_shape):\n",
        "        x = Input(shape=input_shape)\n",
        "        return Model(inputs=x, outputs=self.call(x)).summary()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QBmYu6wjkgf0",
        "colab_type": "text"
      },
      "source": [
        "# GloVe embeddings"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3gRcURJfWluw",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "x9uev5AGsuqq",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def load_glove_embeddings(embeddings_file):\n",
        "    \"\"\"Load embeddings from a file.\"\"\"\n",
        "    embeddings = {}\n",
        "    with open(embeddings_file, \"r\") as fp:\n",
        "        for index, line in enumerate(fp):\n",
        "            values = line.split()\n",
        "            word = values[0]\n",
        "            embedding = np.asarray(values[1:], dtype='float32')\n",
        "            embeddings[word] = embedding\n",
        "    return embeddings"
      ],
      "execution_count": 0,
      "outputs": []
    },
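    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "Each line of a GloVe file is a token followed by its vector components (e.g. `the 0.418 0.24968 ...`). We can sanity check the parsing logic on a tiny hand-made file with made-up numbers (the function is restated so the snippet runs on its own):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import tempfile\n",
        "\n",
        "import numpy as np\n",
        "\n",
        "def load_glove_embeddings(embeddings_file):\n",
        "    \"\"\"Load {word: vector} pairs from a GloVe-format text file.\"\"\"\n",
        "    embeddings = {}\n",
        "    with open(embeddings_file, \"r\", encoding=\"utf-8\") as fp:\n",
        "        for line in fp:\n",
        "            values = line.split()\n",
        "            embeddings[values[0]] = np.asarray(values[1:], dtype=\"float32\")\n",
        "    return embeddings\n",
        "\n",
        "# Two toy 3-dimensional vectors in GloVe's \"word v1 v2 ...\" format\n",
        "with tempfile.NamedTemporaryFile(\"w\", suffix=\".txt\", delete=False) as fp:\n",
        "    fp.write(\"nasa 0.1 0.2 0.3\\nlaunch 0.4 0.5 0.6\\n\")\n",
        "\n",
        "toy_embeddings = load_glove_embeddings(embeddings_file=fp.name)\n",
        "print(len(toy_embeddings), toy_embeddings[\"nasa\"].shape)  # 2 (3,)"
      ],
      "execution_count": 0,
      "outputs": []
    },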
    {
      "cell_type": "code",
      "metadata": {
        "id": "tQHD-ThwWnjD",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def make_embeddings_matrix(embeddings, word_index, embedding_dim):\n",
        "    \"\"\"Create embeddings matrix to use in Embedding layer.\"\"\"\n",
        "    embedding_matrix = np.zeros((len(word_index) + 1, embedding_dim))\n",
        "    for word, i in word_index.items():\n",
        "        embedding_vector = embeddings.get(word)\n",
        "        if embedding_vector is not None:\n",
        "            embedding_matrix[i] = embedding_vector\n",
        "    return embedding_matrix"
      ],
      "execution_count": 0,
      "outputs": []
    },
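    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "A quick check with toy inputs (restating the function so the snippet runs on its own): each known word's vector lands at its `word_index` row, out-of-vocabulary words remain zero rows, and row 0 stays reserved for padding:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import numpy as np\n",
        "\n",
        "def make_embeddings_matrix(embeddings, word_index, embedding_dim):\n",
        "    \"\"\"Create embeddings matrix to use in Embedding layer.\"\"\"\n",
        "    embedding_matrix = np.zeros((len(word_index) + 1, embedding_dim))\n",
        "    for word, i in word_index.items():\n",
        "        embedding_vector = embeddings.get(word)\n",
        "        if embedding_vector is not None:\n",
        "            embedding_matrix[i] = embedding_vector\n",
        "    return embedding_matrix\n",
        "\n",
        "# Toy 3-dimensional vectors; 'ufo' is out-of-vocabulary\n",
        "toy_embeddings = {\"nasa\": np.array([0.1, 0.2, 0.3]),\n",
        "                  \"launch\": np.array([0.4, 0.5, 0.6])}\n",
        "toy_word_index = {\"nasa\": 1, \"launch\": 2, \"ufo\": 3}\n",
        "\n",
        "toy_matrix = make_embeddings_matrix(toy_embeddings, toy_word_index, embedding_dim=3)\n",
        "print(toy_matrix.shape)  # (4, 3): len(word_index) + 1 rows"
      ],
      "execution_count": 0,
      "outputs": []
    },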
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WVMJQyYsLlXu",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "9WxP2GR3LmrO",
        "colab_type": "code",
        "outputId": "aefa7cea-1edb-4ce6-dff2-cc48305cf01e",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Create embeddings\n",
        "embeddings_file = f'glove.6B.{EMBEDDING_DIM}d.txt'\n",
        "glove_embeddings = load_glove_embeddings(embeddings_file=embeddings_file)\n",
        "embedding_matrix = make_embeddings_matrix(embeddings=glove_embeddings, \n",
        "                                          word_index=X_tokenizer.word_index, \n",
        "                                          embedding_dim=EMBEDDING_DIM)\n",
        "print (f\"<Embeddings(words={embedding_matrix.shape[0]}, dim={embedding_matrix.shape[1]})>\")"
      ],
      "execution_count": 56,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "<Embeddings(words=29917, dim=100)>\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "C26maF-9Goit",
        "colab_type": "text"
      },
      "source": [
        "# Experiments"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eTWQcUJ_GrIx",
        "colab_type": "text"
      },
      "source": [
        "Once you have chosen your embeddings, you can choose to freeze them or continue to train them using the supervised data (this could lead to overfitting). In this example, we will do three experiments: \n",
        "* frozen GloVe embeddings\n",
        "* fine-tuned (unfrozen) GloVe embeddings\n",
        "* randomly initialized embeddings"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "geKOPVzVK6S9",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from tensorflow.keras.callbacks import Callback\n",
        "from tensorflow.keras.callbacks import EarlyStopping\n",
        "from tensorflow.keras.callbacks import ReduceLROnPlateau\n",
        "from tensorflow.keras.callbacks import TensorBoard\n",
        "%load_ext tensorboard"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "To_CB7ibLesP",
        "colab_type": "text"
      },
      "source": [
        "## GloVe embeddings (frozen)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "oT9w__AMkqfG",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Arguments\n",
        "FREEZE_EMBEDDINGS = True"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "yg13AyoUkqcJ",
        "colab_type": "code",
        "outputId": "f3ebadd7-3f46-45c6-de66-d67111924f7e",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 561
        }
      },
      "source": [
        "# Initialize model\n",
        "glove_frozen_model = TextClassificationCNNModel(vocab_size=vocab_size,\n",
        "                                                embedding_dim=EMBEDDING_DIM,\n",
        "                                                filter_sizes=FILTER_SIZES,\n",
        "                                                num_filters=NUM_FILTERS,\n",
        "                                                hidden_dim=HIDDEN_DIM,\n",
        "                                                dropout_p=DROPOUT_P,\n",
        "                                                num_classes=len(classes),\n",
        "                                                freeze_embeddings=FREEZE_EMBEDDINGS)\n",
        "glove_frozen_model.sample(input_shape=(10,))"
      ],
      "execution_count": 59,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"model\"\n",
            "__________________________________________________________________________________________________\n",
            "Layer (type)                    Output Shape         Param #     Connected to                     \n",
            "==================================================================================================\n",
            "input_1 (InputLayer)            [(None, 10)]         0                                            \n",
            "__________________________________________________________________________________________________\n",
            "embedding (Embedding)           (None, 10, 100)      2991700     input_1[0][0]                    \n",
            "__________________________________________________________________________________________________\n",
            "conv1d (Conv1D)                 (None, 10, 50)       10050       embedding[0][0]                  \n",
            "__________________________________________________________________________________________________\n",
            "conv1d_1 (Conv1D)               (None, 10, 50)       15050       embedding[0][0]                  \n",
            "__________________________________________________________________________________________________\n",
            "conv1d_2 (Conv1D)               (None, 10, 50)       20050       embedding[0][0]                  \n",
            "__________________________________________________________________________________________________\n",
            "global_max_pooling1d (GlobalMax (None, 50)           0           conv1d[0][0]                     \n",
            "__________________________________________________________________________________________________\n",
            "global_max_pooling1d_1 (GlobalM (None, 50)           0           conv1d_1[0][0]                   \n",
            "__________________________________________________________________________________________________\n",
            "global_max_pooling1d_2 (GlobalM (None, 50)           0           conv1d_2[0][0]                   \n",
            "__________________________________________________________________________________________________\n",
            "concatenate (Concatenate)       (None, 150)          0           global_max_pooling1d[0][0]       \n",
            "                                                                 global_max_pooling1d_1[0][0]     \n",
            "                                                                 global_max_pooling1d_2[0][0]     \n",
            "__________________________________________________________________________________________________\n",
            "dense (Dense)                   (None, 100)          15100       concatenate[0][0]                \n",
            "__________________________________________________________________________________________________\n",
            "dense_1 (Dense)                 (None, 4)            404         dense[0][0]                      \n",
            "==================================================================================================\n",
            "Total params: 3,052,354\n",
            "Trainable params: 60,654\n",
            "Non-trainable params: 2,991,700\n",
            "__________________________________________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
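    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "The parameter counts in the summary above follow directly from the architecture. As a quick arithmetic check (the vocabulary size of 29,917 comes from the embedding matrix; the other sizes mirror the hyperparameters reflected in the summary):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "n_words, emb_dim = 29917, 100\n",
        "f_sizes, n_filters = [2, 3, 4], 50\n",
        "h_dim, n_classes = 100, 4\n",
        "\n",
        "embedding_params = n_words * emb_dim  # 2,991,700 (non-trainable when frozen)\n",
        "conv_params = sum(fs * emb_dim * n_filters + n_filters  # weights + biases\n",
        "                  for fs in f_sizes)  # 10,050 + 15,050 + 20,050\n",
        "fc1_params = len(f_sizes) * n_filters * h_dim + h_dim  # 15,100\n",
        "fc2_params = h_dim * n_classes + n_classes  # 404\n",
        "\n",
        "total_params = embedding_params + conv_params + fc1_params + fc2_params\n",
        "print(total_params)  # 3,052,354\n",
        "print(total_params - embedding_params)  # 60,654 trainable when embeddings are frozen"
      ],
      "execution_count": 0,
      "outputs": []
    },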
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "id": "Q5sGbMQ8wOLz",
        "colab": {}
      },
      "source": [
        "# Set embeddings\n",
        "glove_frozen_model.layers[0].set_weights([embedding_matrix])"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "OTJDJMjRkt7M",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Compile\n",
        "glove_frozen_model.compile(optimizer=Adam(lr=LEARNING_RATE),\n",
        "                           loss=SparseCategoricalCrossentropy(),\n",
        "                           metrics=['accuracy'])"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "n-OQ-PRfJFdR",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Callbacks\n",
        "callbacks = [EarlyStopping(monitor='val_loss', patience=EARLY_STOPPING_CRITERIA, verbose=1, mode='min'),\n",
        "             ReduceLROnPlateau(patience=1, factor=0.1, verbose=0),\n",
        "             TensorBoard(log_dir='tensorboard/glove_frozen', histogram_freq=1, update_freq='epoch')]"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "gi9DSAYGkuBW",
        "colab_type": "code",
        "outputId": "ea113110-1dc3-4f97-bfcb-3202f78284d6",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 272
        }
      },
      "source": [
        "# Training\n",
        "training_history = glove_frozen_model.fit_generator(generator=training_generator,\n",
        "                                                    epochs=NUM_EPOCHS,\n",
        "                                                    validation_data=validation_generator,\n",
        "                                                    callbacks=callbacks,\n",
        "                                                    shuffle=False,\n",
        "                                                    class_weight=class_weights,\n",
        "                                                    verbose=1)"
      ],
      "execution_count": 63,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Epoch 1/10\n",
            "1355/1355 [==============================] - 61s 45ms/step - loss: 2.3203e-05 - accuracy: 0.7550 - val_loss: 0.4013 - val_accuracy: 0.8539\n",
            "Epoch 2/10\n",
            "1355/1355 [==============================] - 56s 42ms/step - loss: 1.7316e-05 - accuracy: 0.8671 - val_loss: 0.3742 - val_accuracy: 0.8662\n",
            "Epoch 3/10\n",
            "1355/1355 [==============================] - 56s 41ms/step - loss: 1.5339e-05 - accuracy: 0.8831 - val_loss: 0.3771 - val_accuracy: 0.8659\n",
            "Epoch 4/10\n",
            "1355/1355 [==============================] - 56s 42ms/step - loss: 1.2683e-05 - accuracy: 0.9015 - val_loss: 0.3650 - val_accuracy: 0.8717\n",
            "Epoch 5/10\n",
            "1355/1355 [==============================] - 56s 41ms/step - loss: 1.2124e-05 - accuracy: 0.9071 - val_loss: 0.3682 - val_accuracy: 0.8721\n",
            "Epoch 6/10\n",
            "1355/1355 [==============================] - 57s 42ms/step - loss: 1.1633e-05 - accuracy: 0.9109 - val_loss: 0.3678 - val_accuracy: 0.8720\n",
            "Epoch 7/10\n",
            "1355/1355 [==============================] - 58s 43ms/step - loss: 1.1598e-05 - accuracy: 0.9110 - val_loss: 0.3678 - val_accuracy: 0.8716\n",
            "Epoch 00007: early stopping\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Ese-u_8hshfK",
        "colab_type": "code",
        "outputId": "4342201a-9f40-4ef3-d39e-7bb29186586c",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Evaluation\n",
        "testing_history = glove_frozen_model.evaluate_generator(generator=testing_generator,\n",
        "                                                        verbose=1)"
      ],
      "execution_count": 64,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "282/282 [==============================] - 6s 22ms/step - loss: 0.3690 - accuracy: 0.8684\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dUVkeDbNqO7V",
        "colab_type": "text"
      },
      "source": [
        "## Fine-tuned GloVe embeddings (unfrozen)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "eubLrHydkt_J",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Arguments\n",
        "FREEZE_EMBEDDINGS = False"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "IGeZwoy9qUpa",
        "colab_type": "code",
        "outputId": "9c6d7f13-4545-4c74-e7b1-d7cfec2be725",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 561
        }
      },
      "source": [
        "# Initialize model\n",
        "glove_finetuned_model = TextClassificationCNNModel(vocab_size=vocab_size,\n",
        "                                                   embedding_dim=EMBEDDING_DIM,\n",
        "                                                   filter_sizes=FILTER_SIZES,\n",
        "                                                   num_filters=NUM_FILTERS,\n",
        "                                                   hidden_dim=HIDDEN_DIM,\n",
        "                                                   dropout_p=DROPOUT_P,\n",
        "                                                   num_classes=len(classes),\n",
        "                                                   freeze_embeddings=FREEZE_EMBEDDINGS)\n",
        "glove_finetuned_model.sample(input_shape=(10,))"
      ],
      "execution_count": 66,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"model_1\"\n",
            "__________________________________________________________________________________________________\n",
            "Layer (type)                    Output Shape         Param #     Connected to                     \n",
            "==================================================================================================\n",
            "input_2 (InputLayer)            [(None, 10)]         0                                            \n",
            "__________________________________________________________________________________________________\n",
            "embedding_1 (Embedding)         (None, 10, 100)      2991700     input_2[0][0]                    \n",
            "__________________________________________________________________________________________________\n",
            "conv1d_3 (Conv1D)               (None, 10, 50)       10050       embedding_1[0][0]                \n",
            "__________________________________________________________________________________________________\n",
            "conv1d_4 (Conv1D)               (None, 10, 50)       15050       embedding_1[0][0]                \n",
            "__________________________________________________________________________________________________\n",
            "conv1d_5 (Conv1D)               (None, 10, 50)       20050       embedding_1[0][0]                \n",
            "__________________________________________________________________________________________________\n",
            "global_max_pooling1d_3 (GlobalM (None, 50)           0           conv1d_3[0][0]                   \n",
            "__________________________________________________________________________________________________\n",
            "global_max_pooling1d_4 (GlobalM (None, 50)           0           conv1d_4[0][0]                   \n",
            "__________________________________________________________________________________________________\n",
            "global_max_pooling1d_5 (GlobalM (None, 50)           0           conv1d_5[0][0]                   \n",
            "__________________________________________________________________________________________________\n",
            "concatenate_1 (Concatenate)     (None, 150)          0           global_max_pooling1d_3[0][0]     \n",
            "                                                                 global_max_pooling1d_4[0][0]     \n",
            "                                                                 global_max_pooling1d_5[0][0]     \n",
            "__________________________________________________________________________________________________\n",
            "dense_2 (Dense)                 (None, 100)          15100       concatenate_1[0][0]              \n",
            "__________________________________________________________________________________________________\n",
            "dense_3 (Dense)                 (None, 4)            404         dense_2[0][0]                    \n",
            "==================================================================================================\n",
            "Total params: 3,052,354\n",
            "Trainable params: 3,052,354\n",
            "Non-trainable params: 0\n",
            "__________________________________________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "oUaEr92PqUml",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Set embeddings\n",
        "glove_finetuned_model.layers[0].set_weights([embedding_matrix])"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NpyhLUK2qUjb",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Compile\n",
        "glove_finetuned_model.compile(optimizer=Adam(lr=LEARNING_RATE),\n",
        "                              loss=SparseCategoricalCrossentropy(),\n",
        "                              metrics=['accuracy'])"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HmiZwAdvI66W",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Callbacks\n",
        "callbacks = [EarlyStopping(monitor='val_loss', patience=EARLY_STOPPING_CRITERIA, verbose=1, mode='min'),\n",
        "             ReduceLROnPlateau(patience=1, factor=0.1, verbose=0),\n",
        "             TensorBoard(log_dir='tensorboard/glove_finetuned', histogram_freq=1, update_freq='epoch')]"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "D9cAs9veqYSl",
        "colab_type": "code",
        "outputId": "67d6e87a-bd92-4b7f-c2f5-520fb505b02e",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 238
        }
      },
      "source": [
        "# Training\n",
        "training_history = glove_finetuned_model.fit_generator(generator=training_generator,\n",
        "                                                       epochs=NUM_EPOCHS,\n",
        "                                                       validation_data=validation_generator,\n",
        "                                                       callbacks=callbacks,\n",
        "                                                       shuffle=False,\n",
        "                                                       class_weight=class_weights,\n",
        "                                                       verbose=1)"
      ],
      "execution_count": 70,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Epoch 1/10\n",
            "1355/1355 [==============================] - 129s 95ms/step - loss: 2.2635e-05 - accuracy: 0.7600 - val_loss: 0.3951 - val_accuracy: 0.8571\n",
            "Epoch 2/10\n",
            "1355/1355 [==============================] - 128s 94ms/step - loss: 1.6803e-05 - accuracy: 0.8707 - val_loss: 0.3703 - val_accuracy: 0.8671\n",
            "Epoch 3/10\n",
            "1355/1355 [==============================] - 128s 95ms/step - loss: 1.4548e-05 - accuracy: 0.8899 - val_loss: 0.3637 - val_accuracy: 0.8717\n",
            "Epoch 4/10\n",
            "1355/1355 [==============================] - 138s 102ms/step - loss: 1.2642e-05 - accuracy: 0.9054 - val_loss: 0.3704 - val_accuracy: 0.8716\n",
            "Epoch 5/10\n",
            "1355/1355 [==============================] - 137s 101ms/step - loss: 9.7322e-06 - accuracy: 0.9264 - val_loss: 0.3730 - val_accuracy: 0.8737\n",
            "Epoch 6/10\n",
            "1355/1355 [==============================] - 139s 103ms/step - loss: 9.0700e-06 - accuracy: 0.9330 - val_loss: 0.3746 - val_accuracy: 0.8739\n",
            "Epoch 00006: early stopping\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "17a1r7gbsehX",
        "colab_type": "code",
        "outputId": "f61f23ad-b182-49ef-8e78-40e1f183f90c",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Evaluation\n",
        "testing_history = glove_finetuned_model.evaluate_generator(generator=testing_generator,\n",
        "                                                           verbose=1)"
      ],
      "execution_count": 71,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "282/282 [==============================] - 6s 21ms/step - loss: 0.3710 - accuracy: 0.8728\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Y8JzMrcv_p8a",
        "colab_type": "text"
      },
      "source": [
        "## Randomly initialized embeddings"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "TnLSYV0WKo8x",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Arguments\n",
        "FREEZE_EMBEDDINGS = False"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "wD4sRUS5_lwq",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "random_initialized_model = TextClassificationCNNModel(vocab_size=vocab_size,\n",
        "                                                      embedding_dim=EMBEDDING_DIM,\n",
        "                                                      filter_sizes=FILTER_SIZES,\n",
        "                                                      num_filters=NUM_FILTERS,\n",
        "                                                      hidden_dim=HIDDEN_DIM,\n",
        "                                                      dropout_p=DROPOUT_P,\n",
        "                                                      num_classes=len(classes),\n",
        "                                                      freeze_embeddings=FREEZE_EMBEDDINGS)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Ucn3tYq1_sE1",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Compile\n",
        "random_initialized_model.compile(optimizer=Adam(lr=LEARNING_RATE),\n",
        "                                 loss=SparseCategoricalCrossentropy(),\n",
        "                                 metrics=['accuracy'])"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "F7bTmNdCJA0g",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Callbacks\n",
        "callbacks = [EarlyStopping(monitor='val_loss', patience=EARLY_STOPPING_CRITERIA, verbose=1, mode='min'),\n",
        "             ReduceLROnPlateau(patience=1, factor=0.1, verbose=0),\n",
        "             TensorBoard(log_dir='tensorboard/randomly_initialized', histogram_freq=1, update_freq='epoch')]"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qlqe2TVlCxvj",
        "colab_type": "code",
        "outputId": "8f213d09-2e1f-4308-e76c-fef21c04a8de",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 238
        }
      },
      "source": [
        "# Training\n",
        "training_history = random_initialized_model.fit_generator(generator=training_generator,\n",
        "                                                          epochs=NUM_EPOCHS,\n",
        "                                                          validation_data=validation_generator,\n",
        "                                                          callbacks=callbacks,\n",
        "                                                          shuffle=False,\n",
        "                                                          class_weight=class_weights,\n",
        "                                                          verbose=1)"
      ],
      "execution_count": 76,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Epoch 1/10\n",
            "1355/1355 [==============================] - 140s 103ms/step - loss: 4.2080e-05 - accuracy: 0.4307 - val_loss: 0.4631 - val_accuracy: 0.8380\n",
            "Epoch 2/10\n",
            "1355/1355 [==============================] - 140s 104ms/step - loss: 1.6937e-05 - accuracy: 0.8697 - val_loss: 0.3931 - val_accuracy: 0.8629\n",
            "Epoch 3/10\n",
            "1355/1355 [==============================] - 145s 107ms/step - loss: 1.1248e-05 - accuracy: 0.9216 - val_loss: 0.3895 - val_accuracy: 0.8671\n",
            "Epoch 4/10\n",
            "1355/1355 [==============================] - 125s 92ms/step - loss: 7.6980e-06 - accuracy: 0.9482 - val_loss: 0.4191 - val_accuracy: 0.8652\n",
            "Epoch 5/10\n",
            "1355/1355 [==============================] - 124s 92ms/step - loss: 4.1468e-06 - accuracy: 0.9729 - val_loss: 0.4374 - val_accuracy: 0.8699\n",
            "Epoch 6/10\n",
            "1355/1355 [==============================] - 124s 92ms/step - loss: 3.3484e-06 - accuracy: 0.9795 - val_loss: 0.4394 - val_accuracy: 0.8701\n",
            "Epoch 00006: early stopping\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "fylHduxTK-0N",
        "colab_type": "code",
        "outputId": "2149dbbb-1c6f-4936-d083-f5ec297d7ee6",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Evaluation\n",
        "testing_history = random_initialized_model.evaluate_generator(generator=testing_generator,\n",
        "                                                              verbose=1)"
      ],
      "execution_count": 77,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "282/282 [==============================] - 5s 18ms/step - loss: 0.4531 - accuracy: 0.8613\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "id": "12BIGxY6k17Q",
        "outputId": "dfda0143-f1e8-43b5-bdd3-ac65ddc0d80e",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 17
        }
      },
      "source": [
        "%tensorboard --logdir tensorboard"
      ],
      "execution_count": 78,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "text/html": [
              "\n",
              "    <div id=\"root\"></div>\n",
              "    <script>\n",
              "      (function() {\n",
              "        window.TENSORBOARD_ENV = window.TENSORBOARD_ENV || {};\n",
              "        window.TENSORBOARD_ENV[\"IN_COLAB\"] = true;\n",
              "        document.querySelector(\"base\").href = \"https://localhost:6006\";\n",
              "        function fixUpTensorboard(root) {\n",
              "          const tftb = root.querySelector(\"tf-tensorboard\");\n",
              "          // Disable the fragment manipulation behavior in Colab. Not\n",
              "          // only is the behavior not useful (as the iframe's location\n",
              "          // is not visible to the user), it causes TensorBoard's usage\n",
              "          // of `window.replace` to navigate away from the page and to\n",
              "          // the `localhost:<port>` URL specified by the base URI, which\n",
              "          // in turn causes the frame to (likely) crash.\n",
              "          tftb.removeAttribute(\"use-hash\");\n",
              "        }\n",
              "        function executeAllScripts(root) {\n",
              "          // When `script` elements are inserted into the DOM by\n",
              "          // assigning to an element's `innerHTML`, the scripts are not\n",
              "          // executed. Thus, we manually re-insert these scripts so that\n",
              "          // TensorBoard can initialize itself.\n",
              "          for (const script of root.querySelectorAll(\"script\")) {\n",
              "            const newScript = document.createElement(\"script\");\n",
              "            newScript.type = script.type;\n",
              "            newScript.textContent = script.textContent;\n",
              "            root.appendChild(newScript);\n",
              "            script.remove();\n",
              "          }\n",
              "        }\n",
              "        function setHeight(root, height) {\n",
              "          // We set the height dynamically after the TensorBoard UI has\n",
              "          // been initialized. This avoids an intermediate state in\n",
              "          // which the container plus the UI become taller than the\n",
              "          // final width and cause the Colab output frame to be\n",
              "          // permanently resized, eventually leading to an empty\n",
              "          // vertical gap below the TensorBoard UI. It's not clear\n",
              "          // exactly what causes this problematic intermediate state,\n",
              "          // but setting the height late seems to fix it.\n",
              "          root.style.height = `${height}px`;\n",
              "        }\n",
              "        const root = document.getElementById(\"root\");\n",
              "        fetch(\".\")\n",
              "          .then((x) => x.text())\n",
              "          .then((html) => void (root.innerHTML = html))\n",
              "          .then(() => fixUpTensorboard(root))\n",
              "          .then(() => executeAllScripts(root))\n",
              "          .then(() => setHeight(root, 800));\n",
              "      })();\n",
              "    </script>\n",
              "  "
            ],
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ]
          },
          "metadata": {
            "tags": []
          }
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vskwiiI3V3S6",
        "colab_type": "text"
      },
      "source": [
        "# Complete evaluation"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6tO2hX8OLQ5s",
        "colab_type": "text"
      },
      "source": [
        "Fine-tuned GloVe embeddings gave the best test performance, so let's do a proper evaluation and run inference with that strategy."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "avmwpr5syKHY",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "model = glove_finetuned_model"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Itq7lT9qV9Y8",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import io\n",
        "import itertools\n",
        "import json\n",
        "import matplotlib.pyplot as plt\n",
        "from sklearn.metrics import classification_report\n",
        "from sklearn.metrics import confusion_matrix\n",
        "from sklearn.metrics import precision_recall_fscore_support"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WBoa4CtjW3OW",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NNeyYs3tW3VN",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def plot_confusion_matrix(y_true, y_pred, classes, cmap=plt.cm.Blues):\n",
        "    \"\"\"Plot a confusion matrix using ground truth and predictions.\"\"\"\n",
        "    # Confusion matrix\n",
        "    cm = confusion_matrix(y_true, y_pred)\n",
        "    cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n",
        "\n",
        "    # Figure\n",
        "    fig = plt.figure()\n",
        "    ax = fig.add_subplot(111)\n",
        "    cax = ax.matshow(cm, cmap=cmap)\n",
        "    fig.colorbar(cax)\n",
        "\n",
        "    # Axis\n",
        "    plt.title(\"Confusion matrix\")\n",
        "    plt.ylabel(\"True label\")\n",
        "    plt.xlabel(\"Predicted label\")\n",
        "    ax.set_xticklabels([''] + classes)\n",
        "    ax.set_yticklabels([''] + classes)\n",
        "    ax.xaxis.set_label_position('bottom')\n",
        "    ax.xaxis.tick_bottom()\n",
        "\n",
        "    # Values\n",
        "    thresh = cm.max() / 2.\n",
        "    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n",
        "        plt.text(j, i, f\"{cm[i, j]:d} ({cm_norm[i, j]*100:.1f}%)\",\n",
        "                 horizontalalignment=\"center\",\n",
        "                 color=\"white\" if cm[i, j] > thresh else \"black\")\n",
        "\n",
        "    # Display\n",
        "    plt.show()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "smP8T1bEW3fH",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def get_performance(y_true, y_pred, classes):\n",
        "    \"\"\"Compute overall (macro-averaged) and per-class performance metrics.\"\"\"\n",
        "    performance = {'overall': {}, 'class': {}}\n",
        "    y_pred = np.argmax(y_pred, axis=1)\n",
        "    metrics = precision_recall_fscore_support(y_true, y_pred)\n",
        "\n",
        "    # Overall performance\n",
        "    performance['overall']['precision'] = np.mean(metrics[0])\n",
        "    performance['overall']['recall'] = np.mean(metrics[1])\n",
        "    performance['overall']['f1'] = np.mean(metrics[2])\n",
        "    performance['overall']['num_samples'] = np.float64(np.sum(metrics[3]))\n",
        "\n",
        "    # Per-class performance\n",
        "    for i in range(len(classes)):\n",
        "        performance['class'][classes[i]] = {\n",
        "            \"precision\": metrics[0][i],\n",
        "            \"recall\": metrics[1][i],\n",
        "            \"f1\": metrics[2][i],\n",
        "            \"num_samples\": np.float64(metrics[3][i])\n",
        "        }\n",
        "\n",
        "    return performance"
      ],
      "execution_count": 0,
      "outputs": []
    },
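    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The overall numbers computed above are macro averages (an unweighted mean over the per-class metrics), which is what `precision_recall_fscore_support` returns with `average='macro'`. Since our test set is balanced, macro and weighted averages coincide. A quick sanity check on toy arrays (illustrative, not our test set):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Sanity check: np.mean over per-class metrics == macro averaging\n",
        "toy_y_true = np.array([0, 1, 1, 0])\n",
        "toy_y_pred = np.array([0, 1, 0, 0])\n",
        "per_class = precision_recall_fscore_support(toy_y_true, toy_y_pred)\n",
        "macro = precision_recall_fscore_support(toy_y_true, toy_y_pred, average='macro')\n",
        "print (np.mean(per_class[0]), macro[0]) # identical precision"
      ],
      "execution_count": 0,
      "outputs": []
    },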
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yCAHPsyWWAar",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qdAj6KyCU88E",
        "colab_type": "code",
        "outputId": "619af0fb-b2b7-440a-c1f4-396e30d139d3",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 68
        }
      },
      "source": [
        "# Evaluation\n",
        "test_history = model.evaluate_generator(generator=testing_generator, verbose=1)\n",
        "y_pred = model.predict_generator(generator=testing_generator, verbose=1)\n",
        "print (f\"test history: {test_history}\")"
      ],
      "execution_count": 83,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "282/282 [==============================] - 5s 19ms/step - loss: 0.3710 - accuracy: 0.8728\n",
            "282/282 [==============================] - 3s 9ms/step\n",
            "test history: [0.37095893919467926, 0.8728333]\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "uLQNo8NXWBqI",
        "colab_type": "code",
        "outputId": "9566ad57-305d-4a2d-da76-a107d8117bda",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 595
        }
      },
      "source": [
        "# Class performance\n",
        "performance = get_performance(y_true=y_test,\n",
        "                              y_pred=y_pred,\n",
        "                              classes=classes)\n",
        "print (json.dumps(performance, indent=4))"
      ],
      "execution_count": 84,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "{\n",
            "    \"overall\": {\n",
            "        \"precision\": 0.8728250133452566,\n",
            "        \"recall\": 0.8728333333333333,\n",
            "        \"f1\": 0.8726781141026867,\n",
            "        \"num_samples\": 18000.0\n",
            "    },\n",
            "    \"class\": {\n",
            "        \"Business\": {\n",
            "            \"precision\": 0.8363068688670829,\n",
            "            \"recall\": 0.8333333333333334,\n",
            "            \"f1\": 0.8348174532502226,\n",
            "            \"num_samples\": 4500.0\n",
            "        },\n",
            "        \"Sci/Tech\": {\n",
            "            \"precision\": 0.8455718736092568,\n",
            "            \"recall\": 0.8444444444444444,\n",
            "            \"f1\": 0.8450077829664221,\n",
            "            \"num_samples\": 4500.0\n",
            "        },\n",
            "        \"Sports\": {\n",
            "            \"precision\": 0.9007486631016043,\n",
            "            \"recall\": 0.9357777777777778,\n",
            "            \"f1\": 0.9179291553133515,\n",
            "            \"num_samples\": 4500.0\n",
            "        },\n",
            "        \"World\": {\n",
            "            \"precision\": 0.9086726478030825,\n",
            "            \"recall\": 0.8777777777777778,\n",
            "            \"f1\": 0.8929580648807506,\n",
            "            \"num_samples\": 4500.0\n",
            "        }\n",
            "    }\n",
            "}\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "nRbPfqgZWaof",
        "colab_type": "code",
        "outputId": "a187a32e-22d6-44d0-e126-59e3514b0dd8",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 598
        }
      },
      "source": [
        "# Confusion matrix\n",
        "plt.rcParams[\"figure.figsize\"] = (7,7)\n",
        "y_pred = np.argmax(y_pred, axis=1)\n",
        "plot_confusion_matrix(y_test, y_pred, classes=classes)\n",
        "print (classification_report(y_test, y_pred))"
      ],
      "execution_count": 85,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAc8AAAGKCAYAAABq7cr0AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0\ndHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOzdd3wURRvA8d9zSWjSixA6SFFaEkjo\nHaRJ9QWkSBNFFFQUUJpSBBU7oIIIAqLSRDoIgoJ0QhVR6SihhyokBBLm/WM3R8KlwiUk5Pn6uY+3\nszOzs5uQ56bsnhhjUEoppVTCOe53A5RSSqnURoOnUkoplUgaPJVSSqlE0uCplFJKJZIGT6WUUiqR\nPO93A5RSSj04PLIWMSY81C11mdBzK40xTdxSmZtp8FRKKeU2JjyU9KXbu6Wu67s/z+2WipKABk+l\nlFJuJCAP/ozgg3+GSimllJtp8FQqHiKSUUSWiMhlEZl3D/V0FpFV7mzb/SIitURk//1uh0qBBBBx\nzysF0+CpHhgi0klEtovIVRE5JSIrRKSmG6puC+QFchlj2t1tJcaY74wxjdzQniQlIkZESsSVxxiz\n3hhTOrnapFIZcbjnlYKl7NYplUAi8hrwKfAOVqArDHwBtHJD9UWAA8aYcDfUleqJiK6VUGmeBk+V\n6olINmAU0McY86Mx5pox5qYxZokxZqCdJ72IfCoiJ+3XpyKS3t5XV0SCRKS/iJy1e6097H0jgbeA\np+webU8RGSEi30Y5flG7t+Zpb3cXkSMi8p+IHBWRzlHSN0QpV11EAu3h4EARqR5l31oReVtENtr1\nrBKRGFceRmn/61Ha31pEmonIARG5ICJDouSvLCKbReSSnfczEUln7/vNzrbHPt+notT/hoicBqZF\nptllHrGPUdHezi8i50Sk7j39YFXqpcO2SqUK1YAMwII48gwFqgK+gA9QGRgWZX8+IBtQAOgJfC4i\nOYwxw7F6s3OMMZmNMVPjaoiIPASMB5oaY7IA1YHdMeTLCSyz8+YCPgaWiUiuKNk6AT2Ah4F0wIA4\nDp0P6xoUwAr2XwFPA5WAWsCbIlLMzhsBvArkxrp2DYAXAYwxte08Pvb5zolSf06sXnivqAc2xhwG\n3gC+FZFMwDRghjFmbRztVQ8s0WFbpVKJXEBwPMOqnYFRxpizxphzwEigS5T9N+39N40xy4GrwN3O\n6d0CyolIRmPMKWPMvhjyPAEcNMbMNMaEG2NmAX8DLaLkmWaMOWCMCQXmYgX+2NwExhhjbgKzsQLj\nOGPMf/bx/8T60IAxZocxZot93GPAl0CdBJzTcGNMmN2eaIwxXwGHgK2AN9aHFaUeWBo81YPgPJA7\nnrm4/MA/Ubb/sdOcddwRfEOAzIltiDHmGvAU0Bs4JSLLROTRBLQnsk0FomyfTkR7zhtjIuz3kcHt\nTJT9oZHlRaSUiCwVkdMicgWrZx3fzejnjDHX48nzFVAOmGCMCYsnr3qQ6bCtUqnCZiAMaB1HnpNY\nQ46RCttpd+MakCnKdr6oO40xK40xj2P1wP7GCirxtSeyTSfusk2JMRGrXSWNMVmBIVg3GMTFxLVT\nRDJjLdiaCoywh6VVWiTosK1SqYEx5jLWPN/n9kKZTCLiJSJNReR9O9ssYJiI5LEX3rwFfBtbnfHY\nDdQWkcL2YqXBkTtEJK+ItLLnPsOwhn9vxVDHcqCUfXuNp4g8BZQBlt5lmxIjC3AFuGr3il+4Y/8Z\noHgi6xwHbDfGPIs1lzvpnlupVAqmwVM9EIwxHwGvYS0COgccB/oCC+0so4HtwO/AXmCnnXY3x/oZ\nmGPXtYPoAc9ht+MkcAFrLvHO4IQx5jzQHOiPNez8OtDcGBN8N21KpAFYi5H+w+oVz7lj/whghr0a\nN96HlIpIK6AJt8/zNaBi5Cpj
lda4acg2hQ/bijFxjsYopZRSCebI7G3SV+jhlrqub353hzHG3y2V\nuZn2PJVSSqlE0ieFKKWUcq8UPuTqDtrzVEop5UbJ+5AEEfEQkV0istTeLiYiW0XkkIjMifL0rPT2\n9iF7f9EodQy20/eLSOOEHFeDp1JKqdTsFeCvKNtjgU+MMSWAi1hPDMP+/0U7/RM7HyJSBugAlMVa\n+PaFiHjEd1ANnkoppdwnGb+STEQKYj2ta4q9LUB94Ac7ywxu3//dyt7G3t/Azt8KmG0/Peso1pOy\nKsd3bJ3zVEop5V7ue8BBbhHZHmV7sjFmcpTtT7Fu88pib+cCLkV5WlgQt5/aVQDrFjaMMeEictnO\nXwDYEqXOqGVipcFTKaVUShUc260qItIcOGuM2XE/vsFHg6dSSik3kuR6tF4NoKWINMP6RqGsWE+6\nyi4innbvsyC3H3l5AigEBNnPwc6G9YCSyPRIUcvESuc8lVJKuZdD3POKgzFmsDGmoDGmKNaCn1+M\nMZ2BX4G2drZuwCL7/WJ7G3v/L8Z6StBioIO9GrcYUBLYFt8pas9TKaXUg+QNYLaIjAZ2YX1ZAfb/\nZ4rIIaxHZ3YAMMbsE5G5WF/bFw70ifINRbHSx/MppZRyG0fWAia9/4tuqev6r8NS7OP5tOeplFLK\nvfQJQ0oppZS6k/Y8lVJKuVGyrba9rzR4KqWUci8dtlVKKaXUnbTnqZRSyr102FYppZRKhAQ+1D21\n0+CplFLKvbTnqQAkXWYjmXLd72akOuWL5b7fTUiVPOJ5LJlS7vTvP8cIDg7WX7pE0uCZAJIpF+lr\nDbrfzUh1ls/sGX8m5SJbRv1nqZJP7erxfnVl4umwrVJKKZUYaeM+zwf/DJVSSik3056nUkop99Jh\nW6WUUioRBB22VUoppZQr7XkqpZRyo7SxYEiDp1JKKfdKA3OeD/7HA6WUUsrNtOeplFLKvXTYViml\nlEokHbZVSiml1J2056mUUsp9RFfbKqWUUomnw7ZKKaWUupP2PJVSSrmVpIGepwZPpZRSbiOkjeCp\nw7ZKKaVUImnPUymllPuI/XrAafBUSinlRqLDtkoppZRypT1PpZRSbpUWep4aPJVSSrlVWgieOmyr\nlFJKJZL2PJVSSrlVWuh5avBUSinlPmnkVhUdtlVKKaUSSXueSiml3Eb0Pk+llFIq8UTELa94jpFB\nRLaJyB4R2SciI+306SJyVER22y9fO11EZLyIHBKR30WkYpS6uonIQfvVLSHnqD1PpZRSqVEYUN8Y\nc1VEvIANIrLC3jfQGPPDHfmbAiXtVxVgIlBFRHICwwF/wAA7RGSxMeZiXAfXnqdSSim3So6ep7Fc\ntTe97JeJo0gr4Bu73BYgu4h4A42Bn40xF+yA+TPQJL5z1OCplFLKrdwYPHOLyPYor153HMdDRHYD\nZ7EC4FZ71xh7aPYTEUlvpxUAjkcpHmSnxZYeJw2edym9lwfrP3qSrePbsePzpxjWKcC5b/V7rdky\nrh1bxrXjyPSuzB1qfYipVS4/p2c/49w3uEMlZ5nHKxZiz8SO/PFlJwa09Yv1uB88W4MaZb0BqFuh\nAJs+bcuWce1YM7Y1xb2zAvBskzIETmjvTH+0UI5EtX9a/wZsG9+ekV2qONPeaF+RFlWLOrebBhTh\nzc4BuENERASNa1em21OtnWmvvvgs1XxK0ahWAI1qBbBv7x4ADh34m5aNalM8bxYmTfg41jqNMbRv\n2Zj/rlwBoH/fXviULEiDatGv7cWLF+jYpik1K5WhY5umXLoU80jNvFkzqVmpDDUrlWHerJkAhIWF\n0bltcxpU82PGlEnOvK/3e4G9e3Y5t6dN/oLZ305P3EVJhBd69aRYoXxUrlghxv3jP/2YLBk8CA4O\nBmDpkkVU9feleuWK1K5emU0bN8RYLjQ0lCYN6xEREcG///xDzar+VK9ckQC/8kz9alKMZYYOfp
2K\nFcpQ1d+Xju2f5NKlSwBs3rSRqv6+1K5emUOHDgJw6dIlWj3RmFu3bjnLt2jaiIsX4xwtc5vYrts7\nb4+kVPFCVK9ckeqVK7Lyp+XOfX/s/Z36dWoQ4FeeKpV8uH79eox1P92xHUePHAGgTYumVAvwI8Cv\nPK/0fYGIiAiX/OvXraXAwzmcx3xvzNsAnDt3jsfr1aZyxQosWbzQmf+ptq05dfKkc3vIoIGs+/WX\nu78YKVewMcY/ymty1J3GmAhjjC9QEKgsIuWAwcCjQACQE3gjKRqmwfMuhd2MoMnQxVR5eR5VXp5H\no4qFqFw6LwANBy2k6ivzqPrKPLbuP8PCTUec5Tb+ecq5793ZOwBwOIRPe9ei1Yil+PWZTbvaJWIM\neDmzpKdy6bxs3HcKgPEv1qbHh6up+so85qw7yKCnrGA8Z91BAl6aS9VX5vHx/N2M7Vk9we0vVzQn\noTciqPzyXCqVzEPWTOnIlyMTAaXzsmTLMWf5FYH/0CygCBnT3/u0+dRJEyhR6lGX9KGj3mPV+kBW\nrQ+kbHkfALLnyMmo9z7m+b6vxlnnL6tWUKZcebJktT5QtOvYhW9/WOKS7/NPPqBG7fps2PEnNWrX\n5/NPPnDJc/HiBT4ZO5olqzewdM1GPhk7mkuXLrJuzSoqV63Bzxt3MH/u9wD8ufd3IiJuUd7ndpDu\n8HR3pk3+IuEXJJE6d+nGgsXLY9wXdPw4v6xeRaFChZ1pdes1YHPgLjZt28kXX06h7wu9Yiw7c8Y0\nWrZug4eHB/m8vVmzbiObtu3k1/Wb+fiD96P98Y5Uv35Dtu38nS3bd1OiZCk++uA9ACaM+5j5C5fy\n3gcfM/WrLwF4/70xDHh9MA7H7T9DHTp1ZsqXE+/6WiRGXNetz0v92LRtJ5u27aRxk2YAhIeH82yP\nroyb8AWBu/ayfNUveHl5uZT96899REREUKx4cQBmfDeHzYG72Lbzd4KDz7Fg/rwYj1mtRk3nMQcN\nfROAH+bOpudzvVi7YQtfTBgPwPJlS/Dx8cM7f35n2d4v9OXjD8fe/cVwJ3HjK4GMMZeAX4EmxphT\n9tBsGDANqGxnOwEUilKsoJ0WW3qcNHjeg2vXwwHw8nTg6enAmOjD7VkyelGnQgGWbDkaZz0BJR/m\n8KnLHDvzHzfDbzHvt0M0r1LUJV/r6sVZtfNf57YxkDVTOsD6/6nzIQD8F3rTmeehDJ6xTgLE1P6b\n4bfImM4DESs94tYt3uwcwOjvA13Kr//jJM0CisR5bvE5eSKINatW0KlrjwTlz53nYXwr+uMZwx+t\nqH6cN5tGzVo4t6vWqEX2HK4fSFatWEK7jk8D0K7j06xcvtglz7o1P1OrbgNy5MhJ9uw5qFW3AWtX\nr8LTy4vQkBBu3rzp/Nl/8M4IBg4ZHq18xkyZKFi4CLt2uF5Dd6hZqzY5cuSMcd+g11/j7XfGRps/\nypw5s3P72rVrsc4tzZn9PU80bwlAunTpSJ/eGv0KCwuL1luMqsHjjfD0tD5QBVSuwsmgIAC8vLwI\nCQ0hNCQELy8vjhw+zImg49SqUzda+WbNWzJv7uwEnvm9ieu6xWTN6lWUK1ee8hWsD3K5cuXCw8PD\nJd+cWbevG0BW+wNceHg4N27cSNRtHF5enoSEhBAWFoaHhwfh4eF8MWE8/foPjJavcJEiXLhwgTOn\nTye47qSUTKtt84hIdvt9RuBx4G97HhOxKmgN/GEXWQx0tVfdVgUuG2NOASuBRiKSQ0RyAI3stDgl\nWfAUkQh7mfAeEdkpIq7dn4TV01tEurq7fe7gcAhbxrXj35nd+WVXEIEHzkbb36JqMdbuCYoWzKqU\nzsfW8e1YOOIJHits/THPn+shgoKvOfOcOH+NArkecjletce82XXonHP7xQlrWTD8CQ5N60KneqX4\n8Iedzn3PNyvLvsmdGNO9Gv2/jHlYLqb27w+6RPDl62z+tB
3Lt/3DI97ZcIiw+3CwS/mdB89Ro4x3\ngq5VbEYMGcDQke8iDtdfxfdHv0XDGpUYMWQAYWFhiap3+9bNVPCpGG++4LNnyZvPOoeH8+Yj+OxZ\nlzynT50gf8HbH0y9CxTk9KkT1K7XkOP//kPLx2vxTK8+rFq+hHI+fuTzzu9Sh49vJbZtjvnnkFSW\nLllE/vwFnH/so1q8aAEVK5ShXZsWfPHlFJf9N27c4NjRIxQpWtSZFnT8OFX9fXmsRBFeHfB6tJ5P\nTGbOmMbjja0pi/4DB9GrZ3c++mAsz/fuw6gRw3hzxNsuZXLkyMGNsDDOnz+fuJN1s8kTP6eqvy8v\n9OrpHEY+dPAgIkLr5k2oWdWfTz5yHaUA2LJ5E34VK0VLa928CcUL5SNL5iy0frJtjOW2bd1CtQA/\nnmzZjL/+3AdAu6c6sWzpYlo90ZgBrw/iqy8n0qFTZzJlyuRS3sfXjy2bN97Laac23sCvIvI7EIg1\n57kU+E5E9gJ7gdzAaDv/cuAIcAj4CngRwBhzAXjbriMQGGWnxSkpe56hxhhfY4wP1hj0u3dTiTFm\nkjHmG/c2zT1u3TJUfWUeJXp8g3+phylTOPqn2PZ1SjL3t0PO7d2Hz1G650yqvDyPiUv2OudCEypf\nzkwEX7k9x/JSqwq0GbmMEj1mMnP1fsY+W8O578vl+yjb63uGzdjiHM5NaPsHTtlI1VfmMW7hHt56\nujKjvtvG6+0r8u0bj9Oj0WPO8mcvh+IdQ5BPqNU/LSN37jxU8HUNcoPeept12/ay7JdNXLp4kS/G\nfZioui9dukDmLFkSVSYhn3aj8vT05PMp37Dyt200b/0/pkz6jOf79GPk0IH06taBVctvDxPnypOH\nM6dPJao99yIkJISP3n+PoW+NjHF/y1Zt2Pn7n3w/90dGjxzusv98cDDZsmWPllawUCG2bN/Nnn0H\n+P7bbzh75kysx//gvXfw9PTkqY6dAajg48uvv21i+ao1HD16hHz5vDHG0O3pDjzbvUu0unLneZjT\np1yHhJPLs7168/tfB9m0bSf58nkz5I0BgNVz3LxpI1Omf8uqX35jyeKFrP1ljUv506dPkTt3nmhp\nC5f+xMFjJwi7ERbj3KSPX0X+PHCUzYG7eP7FvnRs9yQA2bJlY/7Cpfy2aRs+fhVZsWwJrZ9sS98X\nevF0x3Zs3bLZWUeePA9z6lTy/Y7FJvIhCcmw2vZ3Y4yfMaaCMaacMWaUnV7fGFPeTns6ckWuPZTb\nxxjziL1/e5S6vjbGlLBf0xJynsk1bJsVuAggInVFZGnkDhH5TES62+/fE5E/7VVSH9ppI0RkgP1+\nrYiMFevG2AMiUstO9xCRD0Qk0C77vJ3uLSK/2T3gP0Sklp13ur29V0TinjxLgMvXbrBu7wkaVbrd\nO8mVNQP+JR9mReA/zrT/Qm86h0pX7vgXLw8HubJm4OT5axTMfTsIFcj1ECfO3+6JRgoNCye9lzVM\nlDtrBsoXy+Xs7f6w4RBVH83rUmbubwejLfRJaPsBmlcpyq5D53gogxfF82Xl6bE/06ZGcec8ZwYv\nD0LDwuOsOy6BWzez6qdlVK1Qij49u7Bx/Vpe6tUdgLz5vBER0qdPT/vOXdmdyCFPTw/PWIcWo8r9\n8MPOoHbm9Cly5cnjkiefdwFOBt1ejHfqRBD5vKMvxpsxdRJtO3Rm5/atZMmajYlff8eXn49z7g8L\nu06GDBkTdQ734uiRwxw7dpTqAX6ULVWcEyeCqFXV32VYr2at2hw7esS5mChShowZCYtlMYx3/vw8\nVqYsmzauj3H/t99MZ8WKZUyd/q3LH0BjDB+8N4bXBw/jvdGjeHvMWLo/8ywTP5/gzBMWdp0MGZPv\nWt3p4bx58fDwwOFw0P2ZZ9mx3frdK1CgANVr1iJ37txkypSJxo2bsnv3LpfyGTNmjHEhUYYMGXii\neUuWLXWdGsiaNSuZM2
cGoHGTZty8edPlZzL23dEMfGMI8+bMolr1Gnw5ZTrvjr794eh62HUyZMhw\nT+fuLskRPO+3pAyeGe2g9TcwBatbHCsRyQW0AcoaYypwu6t9J09jTGWgH9aNrQA9scavA7BWWD0n\nIsWATsBKezWWD7Ab8AUK2J9KymNNKMfUnl720ujt5sZVl/25s2Yg20PWfGOGdB408C3E/qBLzv1t\nqhdnReA/hN28vbIub/bbfxD8Sz6MwyGcv3Kd7QfPUiJ/dorkzYKXp4N2tUuwbNsxl2PuD7rII/mz\nAXDxahhZH0pHCXu7vm9B5/Ef8c7mLNPUvwiHTl5OdPs9PRz0bVmBj3/cTcZ0nkRO53o4HKTztH5t\nShbIzp//xju6EavBw0ezfd8Rtvx+gM+nzqRGrbpMmDwdwBnQjDGsXLaY0o+VTVTdxUuW4p9jR+LN\n93iT5syb9S0A82Z9S6OmLVzy1GnwOL/9uppLly5y6dJFfvt1NXUaPO7cf+nSRdasXE7bDk8TGhqC\nw+FARLh+PdSZ58ihg4k+h3tRtlx5jh4/zb4DR9h34AgFChRk/Zbt5M2Xj8OHDznnaHfv2knYjTBy\n5coVrXyOHDmIiIhwBoETQUGEhlrnc/HiRTZv2kjJUqVdjvvzqp/49OMPmfPDwhiHFr//9hsaNWlG\nzpw5CYm8Vg4HoaHWfL0xhjNnTlOkSFF3Xo5EOR2l97Zk8ULKlLV+bg0eb8yf+/4gJCSE8PBwNqz/\njUcfe8ylfOlHH+XIYWvE6erVq876wsPDWfnTckqVdl0cd+b0aefPZHvgNm7duhXtZ3Lo0EFOngii\nVp26hIaGOn/HQkNvB+lDBw9Qpmw5N1wBlRBJ+YShUDtoISLVgG/sZcSxuQxcB6baPdOlseT70f7/\nDqCo/b4RUEFEIicTsmE9RSIQ+Fqsp08sNMbsFpEjQHERmQAsA1bFdBB7SfRkAEf2Ii5rbvLlzMRX\n/erj4XDgcAjzNxyK1stsV7sEH/4Q/VNpmxqP8FyzsoRH3OJ6WARd3/8ZgIhbhlcnrWfJyOZ4OIQZ\nq//mr39dl+v/FPgvPZuUYfqqv4i4ZegzYR2zBjfmljFcuhrG8+N+BeCF5uWo51uQm+G3uHQ1jOc+\ntYaJvHNm4ouX6tJm5PJ429/7iXJ8+8t+QsPC2XvsPJnSexI4oT0rt//L5Ws3AKhdIT9vzdjq0k53\neKlXd84HnwNjKFPeh/c+/gyAs2dO06x+da7+dwWHOJgy6TN+3bzbuao2UoNGTdm84TeKFS8BQJ+e\nXdi88TcunA/Gv2xx+g96k45detD31YH07tGJ2d9Oo2ChwkycZq2a3bNrBzOnfcWH4yeRI0dOXhk4\nhCfqW9P2/V4fGm2hyafvj+Gl/oNwOBzUqd+IGVMm0bBGRZ7u8Zwzz/atm+k/6M0kuVY9unRi/fp1\nnA8OpvQjhRkybDjdevSMNf+iBT8y67uZeHl5kSFjRqbPnBXjp/z6DR9n88YN1GvQkP1//8WQQQMR\nEYwxvNzvNcqWKw9An97P0fO556lYyZ8B/V4mLCyMVk80BqxFQ+M+s1bPhoSE8N3Mb1i07CcA+r78\nKv9r3Zx06dIxdYb1AWbXzh0EVK7iXHSUlGK7bm8OeYPff9+DiFC4SBHGf2bdlpMjRw76vtyPOjWq\nICI0atKUJk2fcKm3cZNmrP9tHfUaNCTk2jWeatvauciqdp269HzueQDn7T49n+vNwgXzmTJ5Ep6e\nnmTImJFpM7+P9jMZNXwYb420+hPt2negQ/sn+fjD9xn21ggAbt68yZHDh6lYyT8pL1nCpexOo1vI\nnStE3VaxyFVjTOYo22eA8kApYIgxppmdPgXYYIyZLtbNrA2AtkBRY0x9ERkBXDXGfCgia4EBxpjt\nIpIb2G6MKSoi84HJxhiXFVIikh94AugDfGyM+UZEMmM9VaILcMEY80xc5+LIXsSkrzXo
Hq+Ie6wZ\n25onRy13BrD75eHsGZk+oCHNhrne/hHp0MzY/4AntTOnT9HvhWeYtWBF/JmT2B+/72by5+MY/2WC\nplLIljFlPDVz966dfD7+U76alnxLDl7v349mT7Sgbv0GyXZMdwsNDaVZ4was/nV9jKtxk8LiRQvY\ns2sXb44YleiytatXZueO7W4Ld155HjE5W73nlrrOTm2/wxiTQj4RRJcsc54i8ijgAZwH/gHKiEh6\ne5lxAztPZiCbMWY58CrWMGtCrQResHuYiEgpEXlIRIoAZ4wxX2ENHVe0g67DGDMfGAbEvyQzBRk0\ndROF8mSOP2MSK5QnM4OmbrrfzYhV3nzedOra0/mQhPvpwvlgBg51XZST0vn6VaRWnbox3tSfVB4r\nUzZVB06w5jyHvjmckyfivVXQbSLCw3mp32vJdjyVtMO2GcV6bBJYnfhuxpgI4LiIzMW69+YoEDm2\nmQVYJCIZ7PyJ+U2YgjWEu1OssY5zWPf31AUGishN4CrQFeuxS9NEJPKDw+C7O737487bYe6XHQfP\nxZ/pPmvRJuZbApJb7XoN73cT7lrX7nEOyrhdj57PxZ8pFWj4eONkPV6b/7VL1uPFJ6Uv9nGHJAue\nxphYxyuMMa8Dr8ewq3IMeUdEeV83yvtg7DlPY8wtYIj9imqG/bpTquptKqVUapIWgqc+YUgppZRK\npJSxMkEppdQDIfIhCQ86DZ5KKaXc68GPnTpsq5RSSiWW9jyVUkq5j6SNBUMaPJVSSrlVWgieOmyr\nlFJKJZL2PJVSSrlVWuh5avBUSinlXg9+7NTgqZRSyr3SQs9T5zyVUkqpRNKep1JKKbcR0ScMKaWU\nUomWFoKnDtsqpZRSiaQ9T6WUUm6VFnqeGjyVUkq514MfO3XYVimllEos7XkqpZRyKx22VUoppRIj\njXyrig7bKqWUUomkPU+llFJuI0Aa6Hhq8FRKKeVOaeMJQzpsq5RSSiWS9jyVUkq5VRroeGrwVEop\n5V46bKuUUkopF9rzVEop5T6iw7ZKKaVUogjgcDz40VOHbZVSSqlE0p6nUkopt0oLw7ba81RKKeVW\nIuKWVzzHyCAi20Rkj4jsE5GRdnoxEdkqIodEZI6IpLPT09vbh+z9RaPUNdhO3y8ijRNyjho8lVJK\npUZhQH1jjA/gCzQRkarAWOATY0wJ4CLQ087fE7hop39i50NEygAdgLJAE+ALEfGI7+AaPJVSSrmP\nvdrWHa+4GMtVe9PLfhmgPvCDnT4DaG2/b2VvY+9vIFb3thUw2xgTZow5ChwCKsd3mjrnmQDli+Vm\n5bc948+ooinWcdL9bkKqdG5B3/vdhFQrLdycn9JZD4Z3288ht4hsj7I92Rgz2Xksq4e4AygBfA4c\nBi4ZY8LtLEFAAft9AeA4gB/36gMAACAASURBVDEmXEQuA7ns9C1RjhG1TKw0eCqllEqpgo0x/rHt\nNMZEAL4ikh1YADyaXA3T4KmUUsqNkv9bVYwxl0TkV6AakF1EPO3eZ0HghJ3tBFAICBIRTyAbcD5K\neqSoZWKlc55KKaXcKjnmPEUkj93jREQyAo8DfwG/Am3tbN2ARfb7xfY29v5fjDHGTu9gr8YtBpQE\ntsV3jtrzVEoplRp5AzPseU8HMNcYs1RE/gRmi8hoYBcw1c4/FZgpIoeAC1grbDHG7BORucCfQDjQ\nxx4OjpMGT6WUUm6VHMO2xpjfAb8Y0o8Qw2pZY8x1oF0sdY0BxiTm+Bo8lVJKuU8aeTC8znkqpZRS\niaQ9T6WUUm7j5vs8UywNnkoppdwqDcROHbZVSimlEkt7nkoppdxKh22VUkqpREoDsVOHbZVSSqnE\n0p6nUkop9xEdtlVKKaUSxbpV5X63IunpsK1SSimVSNrzVEop5UbJ/5Vk94MGT6WUUm6VBmKnDtsq\npZRSiaU9T6WUUm6lw7ZKKaVUYuhXkimllFIqJtrz
VEop5Tb6lWRKKaXUXUgLwVOHbZVSSqlE0p6n\nUkopt0oDHU8NnkoppdxLh22VUkop5UJ7nkoppdwnjdznqcFTKaWU24g+GF4ppZRKvDQQO3XOUyml\nlEosDZ5JJCIigsdrVabLU62daV9P/oJqfo/hnT09588HO9M3rV9HqcJ5aFgzgIY1A/h47JgY6zTG\n0LZFY/67cgWAryZOoG41P+pU9WXyF+NjLDN/7izqV69EveoVadGoDvv2/g5AcPA5WjapR91qfqxY\nusiZv3vH/3H61Enn9shhb7Bh3a93fyHukN7Lg/WftGfrZx3ZMbEzwzpXce6r61OQTeM7sGVCR9Z8\n0Jbi3tkASOfpwcxBTfhjSld++6Q9hR/O4iwzoL0/f0zpyp7JXWhYsXCsx13xbhuyZEwHwEutfdkx\nsTPbv+jMjNcbk97LI1rej56vzbn5veM8j0J5MnNufm/6PekHQO6sGVnzQVu2f9GZFtWKO/PNfbM5\n3jkfcm6/27MmdXwKxneZEuWFXj0pVigflStWiHH/+E8/JksGD4KDrd+5y5cv0+7JllQL8CPArzwz\nZ0yLsVxoaChNGtYjIiKCf//5h5pV/aleuSIBfuWZ+tWkGMu8PeItqvr7Ur1yRVo90ZhTJ63fpUUL\n5hPgV55G9etw/vx5AI4cPky3pzs4y964cYPGDeoSHh5+19ciMV7o9QxFC+YlwK98tPRRI96kSiUf\nqgX40bLZ7XP49KMPqBbg57xuWTN6cuHCBZd6jTE0a9yAK1eucP36derUqEJVf1/8fcsxetTwGNsS\nFhZG184dqPBYSerWrMo/x44BsHnTRqpU8qFWtQAOHTwIwKVLl2jZrDG3bt1ylm/e5HEuXrzojsvi\nFg4Rt7xSMg2eSeSriRMoWfrRaGkBVaozd+EKChYq4pK/SrUarN4QyOoNgbz2xtAY61yzagVlypUn\nS9as/P3nPr775muWr9nImg3bWb1yOUePHHIpU7hIUX5cvppfN+2k38DBDOz3IgALf5hD1x7PsXzN\nRr6a+BkAq1YspVwFX/J553eWf6bXi0z49IO7vg53CrsZQZPBC6jSdxZV+s6ikX8RKpfOB8D4vvXo\n8cFKqr40izlr9zOoQwAA3RuX4eLVMMo9+w0TFuxizDM1AHi0UE7a1S5Jxd7f0fLNRYzrUw+Hw/Uf\nXJOAouw9Esx/oTfIn+shXmzpQ41XZuP/4nd4eDhoV6eUM2/Fkg+TPUuGeM9j7HO1WbX9H+d2+7ql\n+Gr5Xmq9Ooe+rXwBaFa5GHuOnOPUhWvOfBOX7GFAO/+7uHKx69ylGwsWL49xX9Dx4/yyehWFCt3+\nYDF50hc8+lgZNgfuYvmqXxg6aCA3btxwKTtzxjRatm6Dh4cH+by9WbNuI5u27eTX9Zv5+IP3nUEl\nqldeG8CW7bvZtG0nTZo157133gZg0hefs27jVp559jnmzZkFwNsj3uTNEW87y6ZLl4669eozf96c\ne7oeCdW5S3cWLlnhkt7vtYFs3bGHzYG7aNLsCd4dM8pK7z+QzYG72By4i5Fvv0PN2nXImTOnS/mV\nK5ZTvnwFsmbNSvr06Vm2cg1btu9mc+AuVq9aybatW1zKzJg2lezZs/P7Xwfp83I/3hw6CLA++Py4\naBljP/zE+YHl/XdHM+CNwTgct/98d+z8NF99+YVbros7iLjnlZJp8EwCJ08EsWbVCjp16REtvbyP\nL4WKFL3ren+cN5smzVoAcPDA31SsVJlMmTLh6elJ1Rq1Wb5koUuZgCrVyJ49BwCVAqpw6uQJALy8\nvAgNDeHGjTA8PByEh4fz1cQJvPhK/2jlCxUuwsULFzh75vRdt/tO167ftNrg6cDTw4HBAGAMZM1k\n9Q6zPpTeGXSaVy3Od6v/sq7BhkPU9SlkpVcrzrzfDnIjPIJ/zlzh8MlLBJTK63K8DvVKs2TLEee2\np4eDjOk88XAI
GdN7cuq8dRyHQ3jnmZoMnbohzva3qFacY6cv8+e/t3sdN8NvkSm9J+m9PIi4ZfBw\nCH1b+/LxDzuilf337H/kzJKBvDkyJfyCxaNmrdrkyOH6Rxxg0Ouv8fY7Y6Mt4BARrv73H8YYrl29\nSo4cOfH0dF3+MGf29zzRvCVgBbb06dMDVi8paq8nqqxZszrfX7t2zXlch8NBWFgYISEheHl5snHD\nevLmy0eJEiWjlW/eshVzZ3+fiLO/e7Fdt6jnEBJyLcbFL/PmzqZd+w4u6WBftxatAOtaZ86cGYCb\nN29y8+bNGOtbtmQxnbt0A6DNk21Z++sajDF4eXkREhJCaEgIXl5eHDl8mKCgIGrXqRutfLPmLZk3\nZ3bCTly5RbIETxEZKiL7ROR3EdktIlViyecvIuOjbHuJyFG7zG4ROS0iJ6Jsp0tkO0aLSL97PZ/4\nvDV4AMNGvRvtk2F8dmzbSoMa/nRq24L9f/0ZY55tWzZTwbciAKUfK8PWzRu4cOE8ISEh/PLzT5wM\nCorzGLNmTqN+w8YAtGnbgZXLl/BU62a83P8Npk+ZRNunOpMpk+sf9fI+fmzbsjnB5xIfh0PYMqEj\n/37/LL/s+pfA/WcAeHHcGhaMbMmhb56hU/1H+XCuFXjy58pM0LmrAETcMlwJuUGurBkokOshgs79\n56z3RPBV8ufK7HK8amW82XXoLAAnz1/j0x93cmBGD45+9yxXroWxZte/ALzQogLLth7h9MWQWNv+\nUAYv+retxJjvt0VLn7N2P82rFmfpmNa8PyeQ55tX4Ptf/iY0zHUIcvfhs1Qr452YS3ZXli5ZRP78\nBShfwSda+vMv9GH/339TslhBqvr7MPajT1x+V2/cuMGxo0coUrSoMy3o+HGq+vvyWIkivDrgdbzz\n5ycmI98axqOPFGHu7O8Z+tZIAPoPfIOWzRqxYvlS2rbvyPvvjub1wcNcypYpW44dO7bf45nfuxFv\nDaX0I4WZM+t7hg0fFW1fSEgIq1f9RKs2/4ux7JbNG/GrWMm5HRERQbUAP4oVzEv9Bg0JqOz65+/k\nyRMULGh9KPT09CRb1mycP3+eAa8PolfPbnz4wXs8/0JfRg4fxltReuuRcuTIwY2wMOeQ+P1k9RrF\nLa+ULMmDp4hUA5oDFY0xFYCGwPGY8hpjthtjXo6SVBNYaozxNcb4ApOATyK3jTGuY0332c8/LSN3\nnjz42EEuIcr7+BG49yBrNm6nZ68X6dG5bYz5Ll26QOYs1nxfqdKP0eeVAXRo8wSd/teCsuUr4PDw\niLEcwMbf1vL9zOkMHWnNp2bNlo1v5y5i5drNlPfx4+efltG81ZP0f/kFnu3age3bbg8t5c6ThzOn\nXYfo7tatW4aqL82iRNev8S+VjzJFrE//L7X2pc3wxZTo+jUzf/6Tsb1queV4OTJn4Gqo1dvNnjk9\nzasW57EeMyj+9FQeyuBFh3ql8c75EE/WLMkXi/fEWdewzlWYsHC3s/cc6UrIDZ4csYSar8xh9+Fz\nNKtSjAUbDvH5y/X5fkgzqjyaz5n33KXQaPOgSSEkJISP3n/PGbyiWvPzSir4+HDwaBAbt+1kQL+X\nuWLPo0c6HxxMtmzZo6UVLFSILdt3s2ffAb7/9hvOnjkT47GHjxrN34f/oX2HTkye+DkA9Rs+zvrN\ngcz7cTHLliyiUZNmHDp4gKc7tqPvC70ICbE+sHh4eJAuXTr++++/GOtOLiNGjWH/4X95qmMnvrSn\nNSItX7aEqtVqxDhkC3DxwgWyZLk9L+/h4cHmwF3sP3Kc7dsD2bfvjwS3o4KPL7+u38yKVb9w9OgR\n8uXLhzGGrp070LN7F85E+RnkefhhTp1y37/Te+EQ97xSsuToeXoDwcaYMABjTLAx5qSIBIjIJhHZ\nIyLbRCSLiNQVkaVRyjYBXCclohCRbnb53SLyhYg47PQnRGSnXf+qKEXKi8g6ET
kiIn3cfbLbtm5m\n1YplBJQvRe+eXdjw21r69OoeZ5ksWbPykD2006BRU27eDI+2oCiSp4dntOGyTl17sGrdFhauWEO2\n7Dl45I4hsEh//rGX/i/3Zvr3P5AzZy6X/Z+8/w6v9B/EgvlzqFK1OuMnTuXD925/ug27fp0MGTMm\n5PQT5fK1G6z7PYhGlYqQO2tGyhfP4+yF/vDbAao+ZvXOTp6/SsE81vXxcAhZM6Xj/JXrnDh/jYJ5\nbv+RKpA7MyfPX3U5TnjELef8SX3fQhw7fYXgK6GER9xi4cbDVH3MG59H8lDcOxv7pnbj72ndyZTe\niz+mdHWpK6B0XsY8U4O/p3WnbytfBj4VQO/m0RfqDO5YmbGzA2lfpxSb9p3k2Y9WMTTKwqgM6TwI\nvRFxbxcvHkePHObYsaNUD/CjbKninDgRRK2q/pw5fZqZ30ynRas2iAiPPFKCIkWLcWD/39HKZ8iY\nkbDr12Os2zt/fh4rU5ZNG9fH2YanOnRi0cIfo6WFhITw3cxv6NX7Rd55ewRfTplOteo1mDPrO2ee\nsLAwMmSIf945OTzVoTOLFkQ/hx/mzqHdUzEP2YLVc4xpWDt79uzUrlOX1St/ctmXP38BgoKsPkV4\neDiXr1wmV67b/1aNMbz/7hjeGPIm744Zxeh3xtL9mWeZ+PnthYLXr18nYwb3/ztVMUuO4LkKKCQi\nB+zgVscebp0DvGKM8cHqjYbGULYesDa2ikWkHNAGqG73TD2BDiKSD5gItLHrj/qbXgp4HKgKjBKR\nGLtrItJLRLaLyPaYAllshg4fzc4/jxC49wCTps6kZu26fD55epxlzp45jTHWvN+uHYHcMrdiDHKP\nlCzFP8duz90Fn7OGIoOO/8vyJQtp09b1H3TQ8X/p2aU9E76cxiMlSrnsP3L4IKdOnqB6rTqEhoQg\nDgeIcD309h/OI4cO8uhjZRN0/vHJnTUj2R6yRtszpPOggV8h9gdd5OLV62TNlI4SBazeTn2/wuw/\nbs0pLtt6lM4NHwPgyZolWPe7NTy9bMsR2tUuSTpPD4rkzUqJ/NkJPODaGzp44hLF8lkrd4+f+4/K\nj+YjY3prjq+ebyH2H7/AT4HHKPb0VB7tMZ1He0wnJOwm5Z79xqWuhq/Pd+b5bNFuPpgTyKSlvzv3\nP5I/GwVyZ2b93hNkyuDJLQMGnMcDKFEgB38eS9rhtbLlynP0+Gn2HTjCvgNHKFCgIOu3bCdvvnwU\nKlSYdb/+AsDZM2c4eHA/RYsVj1Y+R44cREREcN0OoCeCgggNtf6JXrx4kc2bNlKyVGmX4x46dND5\nftnSxZQqHT3PuI8/pHefvvac+3VEBIfD4az7/Pnz5MqVGy8vL/ddjESKXNUK1tB3qSgL/y5fvszG\n9eucc5oxKVmqNEePWP9Oz507x6VLlwBr9fIva1ZHqy9Ss+Yt+G7mDAAW/PgDderWjzZs+f2339C4\nSVNy5sxJSEgIDofDum52j90Yw5kzp6MNs99PaWHYNskfkmCMuSoilYBaWMFwDjAGOGWMCbTzXIHo\nDxMWkQLABWNM7BNQVtANALbbZTNiDQmHAr8aY/6x64+6nnypPdx7VkQuAHkAl9UwxpjJwGQAH79K\nJvFn7mrKpM/4YvzHnD1zmgY1/GnweBM+mjCJpYt+ZMbXk/H08CRDxoxMmjozxl+cBo2asmnDbxQr\nXgKAnl07cPHCebw8vXj3w3Fky24FnhlfTwag2zO9+OT9d7h44QKD+1uj4R6enqxce3v+8r23hzPo\nTWtor03bp+jRuR2fffoBAwdbS+pv3rzJ0aOH8fG7PYdzL/LlzMRX/Rvh4bCWos9ff5AV244B0Gf8\nGmYNbcatW4ZLV8N4/tPVAExfuY+vBzTijyldufjfdbqMtT65//XvBeavP8iuL58mPOIW/Sau5dYt\n1x/VisCj1K5QkCOnLhO4/wwLNhxi8/gOhE
cY9hw5x9QV++Js8xNVilGx5MO8/e3WeM9vZLfqDJ+x\nCYC5aw8w983mDGhXibe/tYbBPT0cPOKdjR0HYx7yvBs9unRi/fp1nA8OpvQjhRkybDjdevSMNf8b\ng4fR+7keVKnkgzGGUaPfJXfu3C756jd8nM0bN1CvQUP2//0XQwYNREQwxvByv9coW866xaNP7+fo\n+dzzVKzkz/Bhgzl44AAOh4NChQszbsJEZ32nTp5k+/ZtDB72FgC9X+xDnRpVyJYtO7PmWb279et+\npXHTZm67NnHp3qUT639by/ngYEoVL8TQN0fQrUdP3ho2mIMH9uNwOChcuAjjPrt9DksWLaB+w0Y8\n9FDsw+6NmzZj/W9reaRECc6cPkWvnt2JiIjg1q1bPNm2HU2faA7A2yPfomJFf55o0ZJuPXrybI+u\nVHisJDly5mT6zFnO+kJCQvh25gwWL1sJwEuvvMqTrZ4gXbp0fD3D6rHv2rmDgMpVY1z4dT+k8Ljn\nFhLZ40m2A4q0BfoA6YwxNe7YVxcYYIxpLiI9gazGmE+i7B8BXDXGfGhvvwrkNMa8eUc9bYDWxphu\nd6SPxhpC/tTe/htoaIyJc6WNj18lEzXg3C9nTp/i5d7PMGdhnCPZbrV8ySL27tnFG8NGJLpssY4x\n3wuY3PLlyMSUAY1oPtR1NXJya1mtOL4lHmbUTNfbFSKdW9A3GVsUu927dvL5+E/5apprDzypdHrq\nf4wc/S4lS7qOkiRESuitnD51iuee6caSFaviz+wmA197hWbNW1KvfoNEl61VLYCdO7a77cJlK/KY\nqTlkhlvqWt67yg5jjHvv7XKT5FgwVFpEok7G+QJ/Ad4iEmDnySIid35kine+E1gNtBeR3HY9uUSk\nMLAJqCciRez0mGf2U5m8+bzp3K2n8yEJySEiIpzefZN8gXKSOn0xhGk/7XM+JOF+8vRwMO7Hnfe7\nGQni61eRWnXqEhGRtPOzkW7cuEHzFq3uOnCmFPm8vene81mXRVhJqUzZcncVOJOCYD/f1g3/pWTJ\nMeeZGZghIn+KyO9AGeAt4ClggojsAX4GnCsE7HnIEsaYv2OqMJIxZi8wElht170KyGuMOQO8ACyy\n6/8ujmpSlZZt2pIlyn1oSa1F6/85h4NTs/nrD/Jf6P1fnP3jhkNcvnb/25FQXbs/g0ccq7jdKV26\ndHR62nWRVmr0v7bto90vmtR69Hwu2Y6VEMmx2lZEConIr3Zs2Scir9jpI+64pbFZlDKDReSQiOwX\nkcZR0pvYaYdEZFBCzjE55jx3ANVj2BWMtWgnqrXAWhGpCbhMMBljRsSQ9j3gcle1MWYZsOyOtGF3\nbLvO3CullEoNwoH+xpidIpIF2CEiP9v7Pomc3oskImWwFo+WBfJjdboihzk+x1pIGgQEishiY0zM\nN9zbUsbs8h2MMRuAuB/zopRSKuVJppWyxphTwCn7/X8i8hdQII4irYDZ9m2TR0XkEFDZ3nfIGHME\nQERm23njDJ76eD6llFJu5cZn2+aOvGXQfvWK+XhSFPDj9ohlX7GeaPe1iOSw0woQ/QE9QXZabOlx\n0uCplFIqpQo2xvhHeU2+M4OIZAbmA/3s2x4nAo9gLU49BXyUFA1LkcO2SimlUieBZPs6MRHxwgqc\n3xljfgSwF4xG7v8KiHxq3QmgUJTiBe004kiPlfY8lVJKuVVyfCWZWBOrU4G/jDEfR0mP+q0LbYDI\nhwkvxnoCXXoRKQaUBLYBgUBJESlmP/2ug503TtrzVEoplRrVALoAe0Vkt502BOgoIr5YT8Y8BjwP\nYIzZJyJzsRYChQN9jDERACLSF1gJeABfG2PifuwYGjyVUkq5WTKttt0AMT5JIeZvhrfKjMF6POyd\n6cvjKhcTDZ5KKaXcJiFDrg8CnfNUSimlEkl7nkoppdwquVbb3k+xBk8RifPBjJFfI6aUUkpF9eCH\nzrh7nv
uwVitFvQ6R2wYonITtUkoppVKsWIOnMaZQbPuUUkqp2KSE71VNaglaMCQiHURkiP2+oIhU\nStpmKaWUSo2sJwwl/VeS3W/xBk8R+Qyoh3UzKkAIMCkpG6WUUkqlZAlZbVvdGFNRRHYBGGMu2I8w\nUkoppaJLpq8ku98SEjxviogDa5EQIpILuJWkrVJKKZVqpYHYmaA5z8+xnlqfR0RGYn1J9dgkbZVS\nSimVgsXb8zTGfCMiO4CGdlI7Y8wfcZVRSimVdumw7W0ewE2soVt9pJ9SSqkYRa62fdAlZLXtUGAW\nkB/rS0K/F5HBSd0wpZRSKqVKSM+zK+BnjAkBEJExwC7g3aRsmFJKqdRJh20tp+7I52mnKaWUUi4e\n/NAZ94PhP8Ga47wA7BORlfZ2IyAweZqnlFJKpTxx9TwjV9TuA5ZFSd+SdM1RSimVmomk8a8kM8ZM\nTc6GKKWUejCkgdgZ/5yniDwCjAHKABki040xpZKwXUoppVSKlZB7NqcD07DmgJsCc4E5SdgmpZRS\nqZjYz7e911dKlpDgmckYsxLAGHPYGDMMK4gqpZRSLkTc80rJEnKrSpj9YPjDItIbOAFkSdpmKaWU\nUilXQoLnq8BDwMtYc5/ZgGeSslFKKaVSJ0HS9mrbSMaYrfbb/7j9hdhKKaWUq1Qw5OoOcT0kYQH2\nd3jGxBjzZJK0SCmllErh4up5fpZsrUjhPBxCtkxe97sZqc7FxS/f7yakSjkC+t7vJqRaF7ZNuN9N\nUKTxZ9saY9YkZ0OUUko9GNLC91amhXNUSiml3CqhX4atlFJKxUtI48O2dxKR9MaYsKRsjFJKqdTP\n8eDHzviHbUWksojsBQ7a2z4iorPySimlYuQQ97xSsoTMeY4HmgPnAYwxe4B6SdkopZRSKiVLyLCt\nwxjzzx1j2BFJ1B6llFKpmPVc2hTebXSDhATP4yJSGTAi4gG8BBxI2mYppZRKrVL6kKs7JGTY9gXg\nNaAwcAaoaqcppZRSaVJCnm17FuiQDG1RSin1AEgDo7bxB08R+YoYnnFrjOmVJC1SSimVagkky7eq\niEgh4BsgL1aMmmyMGSciOYE5QFHgGNDeGHNRrInYcUAzIATobozZadfVDRhmVz3aGDMjvuMnZNh2\nNbDGfm0EHgb0fk+llFL3UzjQ3xhTBms6sY+IlAEGAWuMMSWx4tYgO39ToKT96gVMBLCD7XCgClAZ\nGC4iOeI7eEKGbedE3RaRmcCGBJ2aUkqpNCc5nvtqjDkFnLLf/ycifwEFgFZAXTvbDGAt8Iad/o0x\nxgBbRCS7iHjbeX82xlwAEJGfgSbArLiOfzeP5yuG1U1WSimlXLhx1Da3iGyPsj3ZGDPZ9XhSFPAD\ntgJ57cAKcJrb8aoAcDxKsSA7Lbb0OCVkzvMit+c8HcAFbneDlVJKqaQSbIzxjyuDiGQG5gP9jDFX\not5jaowxIhLr91LfiziDpz3B6gOcsJNu2V1epZRSyoWIJMuCIftYXliB8ztjzI928hkR8TbGnLKH\nZc/a6SeAQlGKF7TTTnB7mDcyfW18x45zaNoOlMuNMRH2SwOnUkqpOFlPGbr3V9zHEAGmAn8ZYz6O\nsmsx0M1+3w1YFCW9q1iqApft4d2VQCMRyWEvFGpkp8UpIXOeu0XEzxizKwF5lVJKqeRQA+gC7BWR\n3XbaEOA9YK6I9AT+Adrb+5Zj3aZyCOtWlR4AxpgLIvI2EGjnGxW5eCgusQZPEfE0xoRjTcIGishh\n4BrWbTzGGFMxUaeplFIqTUiOx/MZYzZgxaOYNIghvwH6xFLX18DXiTl+XD3PbUBFoGViKlRKKZV2\nJddDEu63uIKnABhjDidTW5RSSqlUIa7gmUdEXott5x0TtEoppRSgz7b1ADIT+5iyUkopFZ2kja8k\niyt4njLGjEq2liillFKpRLxznkoppVRiSBoIH3EFT5elvkoppVRcrNW2
97sVSS/WJwwl5CZRpZRS\nKi26m29VUUoppWKVFnqeGjyVUkq5laSBe1WS4ztLlVJKqQeK9jyVUkq5TVpZMKTBUymllPsk4OvE\nHgQ6bJsEnn/uGYoUyIu/b/lo6RcuXKB500aUL1OK5k0bcfHiRQA++egDqvj7UcXfD3/f8mTO4MmF\nC66LnY0xNG3UgCtXrhB0/DhNHq9PxQplqeRTjs8njIuzTdu3B5IloxcL5v8AwIH9+6lexZ/KFX3Y\numUzAOHh4TzR5HFCQkKc5bp27sihgwfv6Xok1oH9+6lSydf5ejhnViaM+xSAPbt3U7tGVapU8qVG\nFX8Ct22LsY7du3bR+7meAOz/+2/q1KxGtofS88nHH8Z63AZ1azmPWaxwftr9rzUAC36cT0WfsjSo\nW4vz588DcOTwYZ7u9JSz7I0bN2hYrzbh4eFuuQaxcTiEzbPeYP643s60aWO6sWfBm2yfN4RJwzvj\n6Wn9sy5VNC9rZ/Tn0tZP6Ncl+p1nk4Z35p8177J93pA4j9e3U106Na8MQPlSBVg7oz+Bc4fww6fP\nk+WhDAD4ly3CltmD2DJ7EFvnDKJlvQqx1jeiTwt+X/gWu+YP48WOdQD4f3v3HR9F0QZw/PekAKF3\nCAgoHemhdwREepPeVRJU1AAAIABJREFUO6LYRcEXBQUboiIoICACKtIUCEgR6Z3QEUQ6KIQSkJqQ\nkGTeP3ZzXHIJJOEIwTxfPvfJ3dzO7O5we8/O7NxOq/rl2Dn/f/z+7StkzZQOgCcey873H/dy5PP2\n8mTlt6/g6em+r6yEHqdRYh5LMYWEhNCwfl0iIiLYu2cPdWtVp0LZUlT2K8v8uXNizTNl8iQqlS9D\nlYrlqV+3Fn8ePAjAls2bqOxXlhpVKzmOwytXrtC8yTNERkY68jdt9LTLdqoHS4PnA9Cte08WLlnm\nkv7Z6I+p+1Q99h88TN2n6vHZ6I8BePX1wWzbsZttO3bz3qgPqVW7DlmzZnXJv3zZUkqXKUPGjBnx\n9PLio9Fj2LXvAGs3buGbiRMcB1xMERERvPP2EOo/3dCR9u3Ubxjz+Vh+8f+VsZ9/BsCUbybSsXMX\n0qZN61iu34Dn+Pyz0fdVHwlVtFgxtu3cw7ade9i8fSdp06alRavWAPxv6Jv8753hbNu5h3dGvM//\nhr4ZaxmjP/mQ5we9BECWrFn57ItxvPLaG3dd76q1GxzrrVK1Gq1atQFg4tfj2bglgL79BjDnp1kA\njBg+jBHvjXLkTZUqFU/Vq8+8OL4c3WVQ56f468T5aGmzlwVQtvVIKrb7EJ803vRqXR2Af6/e5PVP\n5jF25mqXcr5fvJWWL3x913V5enrQvWU15izbAcDEdzszbNwiKrX/EP81e3m1hxWQDxw7S40uo6na\n8WNavjCB8cM6xRrkurWoymO5M1O29UjKPzuKect3AjCwYx1qdh3N1J830aFxRQBGvNCMEROWOPLe\nDo9gzba/aNfQfTMhJvQ4hdiPpZhmTJ9Gy1at8fT0JG3atEydNoOde/9g4ZJlDH7jVa5cueKSp0PH\nzgTs3se2Hbt59fXBvPXm6wB8+cXn/OL/K59+9gVTp0wC4JOPRjH4raF4eNyp406duzJ50oRE14W7\neYi45ZGcafB8AGrWqk3WLK7Bb8lif7p0syY479KtB4v9F7ksM2/ObNp16BhruXN+mkWz5i0B8PX1\npXx564skQ4YMFCtegrNnz8Sab+LX42nZug05c+R0pHl7eRMcHExIcDDe3t5cuXKFpb8uoUvX7tHy\n1qhZizWrVz3wFlVc1qxexRMFC1GgQAHAGsV37do1AK5evYpvnjwuea5fv84f+/dRpmxZAHLmzEnF\nSpXw9vaO1zqvXbvGujWrad7Sanl6eHgQGhpKsF1XGzduIFeu3BQuUiRavuYtWjHnpx8Tva/3kjdn\nZhrVLMl3CzZHS1+x8c5J044/TpE3
ZxYALv57g50HT3M7PMKlrE27jnH5arBLurO6lYqy59DfRERY\nLZzC+XOycedRAFZvPUSr+uUACLl127FM6lTeWNMmuurfriYfTl7meP/ivzcAiIyMJLW3F2nTpOJ2\neAQ1yhfifNA1jp2+GC3/4rX76NCk0l23OSESc5zGdizF5HycFila1PE5yZMnDzlz5CTo4kWXPBkz\nZnQ8D7550zFa1dvbm5DgYOuz5+XN8WPH+Ofvf6hdp260/E2bt2DenNnx3PMHK+qapzseyZle80xC\nFy6cx9fXF4DcuXNz4UL0FkRwcDArf1vO51+OjzX/li2bGD9hkkv6qZMn2bt3N5UqV3F578yZM/gv\nWsjylat5bkcfR/qAgS/Qt3cPQkNDGf/1JD7+cKTL2SxYgaNQocLs27cXP78KCd7n+zVvzmzad+jk\neP3pZ2Np3vQZhr71BpGRkaxZv9klz66dO3iyZKlEr3PxooXUrVff8YU2+K2hNH2mAb558jBtxg90\n6diOmT+6flGVLFWKnTsCXNLd5dPBz/K/LxeSPm2aWN/38vKgU9PKDP409u7EhKpWriC7//zb8frP\n44E0r1uGxWv30eZpPx7LlcXxXqVSBZg0oiv5fbPSZ9gMRzB19sRjOWjbsAIt6pUl6N/rvD56PsdO\nX+TTaSv5ddKLBF68Su9hM/hxdB+6D/nOJf+Bo2epUDK/W/btbuI6TuM6lpyFhYVx4sRxCjz+uMt7\nAQHbCQsLo2ChQrHmnTTxa8Z/+QVhYWEsW7EKgDfeHELf3j3w8fFh6nczefutwQx/b6RL3ixZshAa\nFsqlS5fIli1bYnZbJVCyanmKyP9E5ICI7BORPSLiGg0SXmZdEanuju1zJxFx+S3U0iWLqVqtRqxd\ntgD/Xr5MhgwZoqXduHGDTh3aMnrMF9HOXqO8+fqrjPrwY5egmC9/flb8voa1GzaTNm1azvxzhmLF\nS9CnZ3e6de7IkcOHHcvmyJGTwLNnE7uriRYWFsavS/xp07adI23yNxMZPeYLjp74m9FjvmBgf9cv\nscDAQHJkz5Ho9c6d81O0gF2/wdNs3r6TnxcuZon/Ip5p1IQjhw/TqUNbnh/Qz3GN2NPTE+9Uqbh+\n/Xqi1x2XxrVKceHy9WjBLKYvh3Zg066jbNrtnil4c2fPRJDdOgQYMOJH+revxaYf3yR92tSE3b7T\nog344xQV2n5Aza6jGdy7IalTuZ6Xp07lRWjYbWp2Gc13v2zmm+FdAFi97RA1uoym7Svf0KxuGVZs\nPECRAjmZ9Wkfvn6nEz5prB6DyEjD7dsRpE+b2i37Fx/Ox2lcx5KzoKAgMmfK7JIeGBhI357d+Wbq\ntDjzPzfwBQ4cOsqoDz7mk48+AKBsuXKs27iF5StXc/LEcXL75sYYQ7fOHendoxvnz985AX9Yx2ls\nRNzzSM6STctTRKoBzQA/Y0yoiGQHUt1nmV5AXeAG4NpESWI5c+YiMDAQX19f6ws+RtfPvLlzaB9H\nly2Al5cXkZGRjoPv9u3bdO7Qlo6dOtOqdZtY8+zatYPuXa1AcCkoiBXLl+Lp5UULu0sSYMS7wxj+\n3kgmfDWOnr37UKDA4wx/5398N/MHAG7duoWPj8997XtirFi+jHLl/ciVK5cj7cfvZ/DZF9bgqGfb\ntuP5AX1d8vn4+HDr1q1ErTMoKIgdAduZM3+By3vBwcF8P3M6i5euoE3LZsye9wsLfp7P7Fk/0rtv\nPwDCQkNJkyb2luH9qFauIM3qlKZRzZKkTuVNxnRpmDaqO72HzQTg7f6NyZElPR1GTXXbOm+FhkUL\ngodPnqf589Z10sL5c9K4VkmXPH+dOM+N4FBKFs7DroOno7135vy/LFy1F4BFq/fyzYiu0d73SeNN\nt+ZVaP7C1/zy5UA6vj6F1k+Xp2PjSo6u6lTeXtwKu+22fYxNXMdpfI4lHx8fboVG/+xdu3aNNi2b\n
MeL9UVSuUvWe62/XoSMvv/h8tDRjDJ989AEzfviJ1195iQ8++oRTp04y4atxvDfSCrQP6zh1JXik\ngBvDJ6eWpy8QZIwJBTDGBBljzorISREZLSL7RWS7iBQGEJHHRWS13UpdJSL57fTpIjJJRLYBc4Hn\ngFftlmwtEWknIn+IyF4RWZ+UO9i0eXN+/H4GYAWBZs1bON67evUqGzeso1mLlnHmL1K0GCeOHwes\ng2lg/74UK16cl16Jc85y/jx8nENHTnDoyAlat2nL2HFfRzvYN6xfh28eXwoXKUJISDAeHh54eHhE\nG3F79Mjh++oGTayYLUAA3zx52LB+HQBr16ymcOEiLvmKFy/BsWNHE7XOBT/Pp3GTZrEGwC8++5Tn\nB71kXYcKCUFEotXVpUuXyJY9e7yvrSbEu+P9KdzoHYo3HU73Id+xNuCwI3D2bF2Np6uXoPvQ6XFe\nb0yMQyfOUyjfnRZ8jizpAas1NqTfM0yZvxGAAnmyOQYI5ffNQrEncnPq7CWX8hav3UedStb/V60K\nRTh6+kK091/t3oAJP60jPDwSnzTeGAyRkZGkTWOdQ2fNlI5LV24QHu7aJexOcR2n9zqWwOo+jYiI\ncJy8hYWF0bFdG7p07UbrZ9vGuU7nEe3Llv5KoRif6x+/n8kzjRqTNWtWgoPvHKchIdZnzxjD+fPn\nYu0uVg9Gsml5Ar8B74rIYeB3YI4xZp393lVjTGkR6Q6MxWqhjgdmGGNmiEhvYBwQ9Ul+DKhujIkQ\nkRHADWPMGAAR2Q88Y4w5IyKu/Ss2EekP9AerizMhenTtzPr1a7kUFEThJ/Ix7N0R9OzVh9cHD6Fb\n5w7MmD6N/PkL8P2sOyMz/RctoH6DhqRLly7Ochs1bsL69WspVLgwWzZvYtaP31OqVGmqVCwPwHsj\nP6BR4yZMmWxdF+3X/7k4y4I7Z7NR1+969+lPrx5diQgP58uvrJF758+fJ42PD7lz505QHdyvmzdv\nsvr3lXw14Zto6V9PnMLg114mPDyc1GnS8NXEyS55ixUvzrWrV7l+/ToZMmTg3Llz1KhakevXruHh\n4cFX48aye99BMmbMSKvmTZjwzVTy2AOP5s2dzRtvDnEp8+zZs+wI2M7/3hkOwMAXXqRmtUpkypSZ\nuT8vBGDd2jU0atzU3VVxT+Pf7sjpwMusnWGN0Fy0eg8fTV5OrmwZ2PTjm2RIl4ZIYxjUpS7ln/2A\n6zdvMeOjntSqUITsmdNzdPlIRk5ayoyFW6KV+9umA3w7qofjdftGFRnQobZjHTMXbQWgevmCvNGr\nIbfDI4iMNLz84RwuXbkJwILxA3n+/VkEXrzKmGkr+e7DHrzYpR43Q0IZ+P4sR9m+OTJRsVQBPpxs\njX6d+NM6Nv7wJlevB9P+tSkA1KlUhOUbD7it3hJznMZH/QZPs3nTRurVb8DP8+ayccN6Ll26xPcz\nrYA8eep3lC1XjvdHvItfhYo0a96CSRO/Ys2qVXh5e5MlSxamfDvdUV5wcDA/fD+DxUtXAPDSK6/S\nukVTvFOlYvpMa4Darl07qVy5Kl5eD/8rXUj+Xa7uIO48U71fIuIJ1AKeAgYAQ4ARQD1jzHER8QbO\nGWOyiUgQ4GuMuW2nBxpjsovIdGCNMWaGXeYIogfPSUAhrFbpL8YY11PkGPwqVDSbtj64gSDxFRgY\nSL/ePViy7LckW+f4L78gQ8aM9OwV+wCJu3mY97ccN/YLMmTIQK8+rt26D0qHdm0Y9cHHFCla9L7K\nyVJpkJu26P7N+awfb3+50GXk68Mwe0xfho3zd2mxOru8PfbBdklp9+5dfPXlWL6dPjPJ1vnGay/T\ntFkLnqqX8Jkka1StxK6dO9x2sBYoUcYMnebvlrIGVn9ipzGmolsKc7Pk1G2LMSbCGLPWGDMcGAQ8\nG/WW82LxKOrmXdbxHDAMyAfsFJFHZmiar68vvXr3dfxUIylkyp
yZrt163HvBZKb/cwNJnTrpBpaE\nhYXRokWr+w6cyc2wcYvInd11IFpS8/byxH/tvrsGzuSifHk/ate1bpKQVJ4sWSpRgVMlXrIJniJS\nTEScO/rLAafs5x2c/kb1LW0GokbXdAE2xFH0dcAxRFVEChljthlj3gUuYgXRR8az7drHOqr2Qene\no1ey6ApKqDRp0tC5a7ckW1+qVKno0q37vRd8xBw5dYFNu9wzevd+3A6PYNaS2O8mlRz16NkbT0/P\nJFtf7z79kmxd8ZESbpKQnL4V0wPj7euQ4cBRrGuOzYAsIrIPCAWiRpC8CHwnIoOxgmAv1yIBWAzM\nF5GWdp5X7SAtwCpg7wPaH6WUSnFSyjXPZBM8jTE7AZffY9rXzT41xrwVY/lTQL1YyukZ4/VhwPlm\nm3G1UJVSSql4STbBUyml1H9Dcu9ydYdkHzyNMY8/7G1QSikVfykgdiafAUNKKaXUoyLZtzyVUko9\nOoSU0SrT4KmUUsp95OHeICWppIQTBKWUUsqttOWplFLKrf777U4NnkoppdxISBk/VdFuW6WUUiqB\ntOWplFLKrf777U4NnkoppdwsBfTaaretUkqpR5OITBORCyLyh1PaCBE5IyJ77EcTp/eGishREflL\nRJ5xSm9kpx0VkSHxWbe2PJVSSrmRJOXvPKcDXwExZx7/whgzJtpWiTyJNY1lSSAP8LuIRE3A+zXw\nNPAPECAi/saYg3dbsQZPpZRSbpOUdxgyxqwXkcfjuXhLYLYxJhQ4ISJHgcr2e0eNMccBRGS2vexd\ng6d22yqllHIrEXHLA8guIjucHv3juQmDRGSf3a2bxU7LC/zttMw/dlpc6XelwVMppVRyFWSMqej0\nmByPPBOBQkA5IBD47EFsmHbbKqWUcquHOdjWGHPesR0iU4Al9sszQD6nRR+z07hLepy05amUUsp9\nxK3dtglfvYiv08vWQNRIXH+go4ikFpEngCLAdiAAKCIiT4hIKqxBRf73Wo+2PJVSSj2SROQnoC7W\ntdF/gOFAXREpBxjgJDAAwBhzQETmYg0ECgdeMMZE2OUMAlYAnsA0Y8yBe61bg6dSSim3SeLRtp1i\nSf72Lst/AHwQS/pSYGlC1q3BUymllFvpfJ5KKaWUcqEtT6WUUm713293avBUSinlZimg11a7bZVS\nSqmE0panUkopt7FG2/73m54aPJVSSrmVdtsqpZRSyoW2PJVSSrmRINptq5RSSiWMdtsqpZRSyoW2\nPJVSSrmNjrZVSimlEkpSRretBs94MAbCwiMf9mY8cjw9UsAR9ABc3DruYW/CIytrQ5cJM9Q9hB4O\nfNib8EjS4KmUUsqttOWplFJKJVBK+KmKjrZVSimlEkhbnkoppdxGgJQw3EGDp1JKKbfSblullFJK\nudCWp1JKKbfS0bZKKaVUAmm3rVJKKaVcaMtTKaWU2+hoW6WUUirBUsZ8ntptq5RSSiWQtjyVUkq5\nj86qopRSSiVcCoid2m2rlFJKJZS2PJVSSrmNNdr2v9/21OCplFLKrf77oVO7bZVSSqkE05anUkop\n90oBTU8NnkoppdxKb5KglFJKKRfa8lRKKeVWKWCwrQZPpZRS7pUCYqd22yqllHo0icg0EbkgIn84\npWUVkZUicsT+m8VOFxEZJyJHRWSfiPg55elhL39ERHrEZ90aPJVSSrmXuOlxb9OBRjHShgCrjDFF\ngFX2a4DGQBH70R+YCFawBYYDVYDKwPCogHs3GjyVUkq5jRX33PPvXowx64HLMZJbAjPs5zOAVk7p\nM41lK5BZRHyBZ4CVxpjLxph/gZW4BmQXes1TKaVUcpVdRHY4vZ5sjJl8jzy5jDGB9vNzQC77eV7g\nb6fl/rHT4kq/Kw2eSiml3Me9U5IFGWMqJjazMcaIiHHb1jjRblullFJulXSXPGN13u6Oxf57wU4/\nA+RzWu4xOy2u9LvS4KmUUu
q/xB+IGjHbA1jklN7dHnVbFbhqd++uABqKSBZ7oFBDO+2utNtWKaWU\neyXRDz1F5CegLta10X+wRs1+DMwVkT7AKaC9vfhSoAlwFAgGegEYYy6LyEggwF7ufWNMzEFILrTl\n6Wb//PM3zRrVp4pfaapWKMPEr8c53lv4y3yqVihDlnTe7N555xr4mlUrqVO9MtUrlaNO9cqsW7s6\nzvK7d27PyRPHo6V1bNuKahXLxrr8uC/GULNKBWpWqUC1imXJmj4V/16+TNDFizSqX5tqFcuyxH+R\nY/lO7VoTePas4/WwoYPvuj3uNLB/H57Il5vKfmVc3ps04Sv8yjxJpfKlGfb2WwCcOnmSHJnTUb2y\nH9Ur+/HyoIFxlt21UztOHLfqrXXzxlSrVJ5K5Uvz8qCBREREuCw/9vMxjnIr+5UhU1pvLl++zMWL\nF3n6qdpU9ivDYv+FjuU7tG0Vrd7eHjKYdWsebr316NrRsQ8lixakemXrZ207ArY70qtVKo//ogWx\nlmuMoekzDbh27Rq3bt2ibs2qjnr74P0Rseb5+/RpmjSsT40qFahasRwrli8FYMvmTVStWI7a1Stz\n9OgRAK5cuULLps8QGRnpyN+8cUP+/fff+60Sh9TenmyY0IttU/ux87sBDOtZ2/FenfKPs/mbPuyY\n1p8pQ1rg6WF949cqW4Bzi99g65S+bJ3Sl6HdaznyPF2pIHtnDOSPH57njU7V41zvpy88TY0y+QGo\n62etZ+uUvqwa14OCeaxfQYx+/mnHOvbNHEjg4jdiLat9vZIEfNuf7VP7seiTTmTL6APAqP712D61\nH1OHtnAs27FBKQY9W9nxuuQTOZj8VvOEVtt9ctdY23iNtu1kjPE1xngbYx4zxnxrjLlkjKlvjCli\njGkQFQjtUbYvGGMKGWNKG2N2OJUzzRhT2H58F5+91ODpZl6eXoz66FO27drPyrWbmPrNRA79eRCA\nEk+W5Puf5lG9Zq1oebJmy87s+QvZHLCHiVOm8VyfnrGW/efBA0RERPD4EwUdaf4LF5A+ffo4t+el\nV99g47adbNy2k3ffG0WNWrXJkjUr8+fNplffAaxav4WJX38JwLJfF1OmbDl88+Rx5O8/cBBjPxud\nyNpImC7derDAf6lL+vq1a/h1sT9bAnYTsHs/L7/yuuO9JwoWYvP2XWzevosvv5oYa7lR9fZEQave\nZvw4hy0Bu9m+ax9BQRdZ8PM8lzyvvPaGo9wRIz+gZq06ZM2alflzZ9OnX3/WbtzKhPHWidHSXxdT\ntmz5aPX23MBBfD7mk/uqj/iKq95m/DDbsQ8tWrehRcvWADxZshTrN29n8/ZdLPBfysuDBhIeHu6S\nf8XypZQuU4aMGTOSOnVqliz/nS0Bu9m8fRe/r1zB9m1bXfKM/vgD2rRty6ZtO5n+/Sxee2kQAOO/\n/JyfFy7h408/59sp3ziWfePNoXh43Pka6ti5C1O/if3/MTFCb0fQ6LUfqNJ3ClX6TqFh5UJULpEX\nEZg6pAXdRy6gYu/JnD5/la6N7pyAbtr/N1X7TaVqv6l8NHMDAB4ewtiXG9NyyE+U7zmJdvVLUrxA\ndpd1Zs3oQ+Un87Jp32kAxr3SmF4fLKRqv6nMWfUHQ7rVBODNCSsd65i4IIBFGw65lOXpIXw6qCGN\nXv2eyn2n8Mfx8zzXuhIZ06WmXJHcVO47hbDbEZR8IgdpUnnRvXFZJi28c2J+4MRF8ubISL6cGd1W\np8qiwdPNcvv6Uq68dYafIUMGihYrTuBZ69pzseIlKFK0mEuesuXufPGWeLIkIbdCCA0NdVlu3uxZ\nNGl25yzzxo0bTBj/BW+89Xa8tu3neXNo264jAN5e3oQEBxMWGoqnpyfh4eFM/HocL782OFqe/PkL\ncPnSZc6fOxevddyPmrVqkyVLVpf0qVMm8dobb5I6dWoAcuTMmaBy5/w0i6ZO9ZYxo/VFEh4e
TlhY\nGHKPoYHz58ymbfsOAHh7exEcHEyoU71NGD+OV16PUW8FCnD58sOttyjGGBbMn0fbDtb/fdq0afHy\nsq7Y3Lp1K879n+tUbyLiOEm7ffs2t2/fjjWfiHDt2nUArl69Sm77c+3t7U1wSDAhwcF4e3tz/Ngx\nzvzzN7Xq1I2Wv0mzFsybOzsBe39vN2/dtrbBywMvTw8MhmwZ0xJ2O4Kj/1i9c6t3HKdVreJ3LadS\n8TwcO3uZk4FXuB0eybzVB2hWo6jLcq1qF+e37Xd6h4yBjOmsz27GdKkJvHTDJU/7eiWZu+qAS7qI\nIALpfFIBkCFtagIvXScy0uDt5QlA2jTe3A6P5JUOVZn4SwDhEZHRyli65TDt6pW86765m4h7HsmZ\nBs8H6NSpk+zfu4cKlarEO4//wl8oW668I1A427p1syMwA3zw/ru88NJr+KRNe89yg4OD+X3lClq0\nagNA2w6dWLrEn1bNGvH64CFMnTyRDp26kjaWssqWK8/WrZvjvQ/udvTIETZv2shTtarRqMFT7NwR\n4Hjv1MkT1KhSgUYNnmLTxg2x5t+6ZTPl/SpES2vVrBEF8+UmQ/oMtGrTNs51R9Vby9bPAtCuQ2d+\nXeJPy6bP8MabQ5jyzUQ6du4Sd71t2ZSYXXarTRs3kDNXLgoXLuJIC9i+jUrlS1O1YlnGjp/gCKbO\ntm7ZTDmneouIiKB6ZT8K5svNU/UbUKmy6+f67WHDmfPTjxQrlJ+2rZox5nOrV+P1wUPo36cnn336\nCQOee4H3RwzjnREjXfJnyZKFsNBQLl265IY9t3h4CFun9OX0gtdYvfMEAX+eJehqMF6eHvgV9QWg\ndZ0SPObUOqvyZF62Te3Hwo87UuJxq3WZJ3sG/rlwzbHMmYvXyZs9g8v6qpXKx+7DgY7Xz49ZwoKP\nOnJ07kt0fro0Y2ZF/0zkz5WJAr6ZWbv7pEtZ4RGRvPzFMgK+7c/x+S9TokAOpi/dw42QMFZsO8rW\nKX05d+kG126GUqlEXhZvOuxSxq6/AqleOp9L+oPirpG2yTx2Js/gKSJfiMgrTq9XiMhUp9efichr\nCSjP9VTPSp8uInF/c96HGzdu0L1Tez4c/bmjpXMvfx48wPBhQxk7PvZuq/PnzpE9ew4A9u3dw4nj\nx2neslWsy8a0fOkSqlStTpasVgslU6ZMzF2wmLWbtlG2nB/Lly6hZetneen5AXTv3J7t27Y48ubI\nkYNzgWfjKvqBCw8P599/L7N6/WZGffQJPbp0xBhDbl9fDh45yaZtO/lo9Bj69OjKtWvXXPKfOxfo\nqLcoC5cs58jJM4SGhd712uSyXxdTpVp1sjrV288Ll7B+83bKlvdj2a+LadWmLYMG9qdrp3Zs2+pc\nbzkJDAyMq+gkM3/ubNq27xgtrVLlKgTs3s/aTdv4/NNPuHXrlku+f/+9TIYMd4KDp6cnm7fv4tCx\n0+wMCODggT9c8sybO5su3Xrw17HTzF+4hH69exAZGUmZsuVYs34zS39bxYkTx8md2xdjDD26dqRv\nz25cOH/eUUb2HDnd+nmLjDRU7TeVwu2+pGLxPDz5uPVZ6D7yF0a/8DQbJvTienAYEfa11z1HAinW\ncTxV+k5h4oIA5o5sf7fiXeTOmp6gK8GO1y+2rULrobMp3H4c3y/fyyfPPx1t+XZPPcnCdYeIjHT9\nOaKXpwf9Wlagav+pFGz7JX8cP8/gzjUA+Hz2Fqr2m8qQib/zbu86jPxuHT2blOOH4W14q2tNRxkX\n/r2JbyxB/oFKAdEzWQZPYBNQHUBEPIDsgHO/Q3Xgnk0hEXkoo4lv375N987taNexEy1atY5XnjP/\n/EPXjm2ZNPXXULggAAAVnElEQVQ7nihYKNZl0vj4cCvU+pIL2LaVPbt2Urp4IRrXr8PRI4dp+ky9\nOMv/ed4cly/QKKM/HsXrbw7l57mzqVq9BhOnfMfHH7zv
eP9W6C180vjEaz8ehLx589KiZWtEhIqV\nKuPh4UFQUBCpU6cmW7ZsAJT3q8ATBQtx9IjrmbePj0+swSFNmjQ0bdaCX5f4x7nu+fPm0C6Oevvk\no1EMfutt5s35iWrVa/DN1Ol8NOo9x/u3Qm+RJk2ahO6uW4WHh+O/aAHPto09ABQvXoJ06dLHGgi9\nvLyiDeaJkjlzZmrXqcvK31xH88+cPo02z7YDoErVaoTeusWloCDH+8YYPv34A94cOoyPR73PyA8+\noWfvvkz8erxjmdDQW6Txcf/n7erNUNbtOUXDytbxte3gGRq8PJNaz3/Hxn2nHV2414PDHF29K7Yd\nw9vLg2wZfTgbdD1a6zRvjgycCbrusp6QsNukTmV99WTPlJbShXIR8Kd1MjB/zUGqlnws2vJt65Vk\n7mrXLluAsoWtm+OcOGsNopq/9k+X/GUL50JEOPz3JdrULUHX936hYJ4sFMprDUxKk8qLW6Gu17TV\n/UmuwXMzUM1+XhL4A7hu/w4nNVAC2C0in4rIHyKyX0Q6AIhIXRHZICL+wEHnQu3f93wlIn+JyO9A\nwi6exYMxhkED+1G0WAkGvfRqvPJcuXKF9s+2YPj7H1K1Wo04lytWrDgnjh0FoE//5zh0/G/2HzrG\nslXrKFykKL+uiL0FdfXqVTZtXB/temmUY0ePcPbMGWrVrktwSDAeHh6ICCEhIY5ljh45QomSSXvN\nxFmzFi1Zv24tAEeOHCYsLIzs2bNz8eJFx0jZE8ePc+zYkWiDqaIUK16c43a93bhxg3N2azA8PJwV\ny5dStFjs17quXr3Kpg3radq8pct7R48e4eyZf6hVpy4hISFO9XYnSB89cpgnS5a6r32/X2tW/07R\nosXJ+9idL9yTJ044BgidPnWKw4cPkb/A4y55Cxctxgl7ZPfFixe5cuUKACEhIaxe9TtFi7lev8+X\nLx9r16wC4NChP7kVeovsOe60+mf9MJOGjZqQNWvWO583Dw9CQqyWmjGG8+fPUSCW7UmM7JnSksm+\n3pgmlRf1KzzBX6etYJ4js9XVnsrbk9c7VWOK/y4AcmVJ58hfsXgePES4dC2EHYfOUjhvVgrkzoy3\nlwft6pXk182uJ2t/nQpyBK5/r4eQMX1qCj9m9VzUq1jQsX6AovmykSVDGrYe+CfW7T8bdJ3iBbKT\nPZO1rc7bH+Xd3nV5f9pavD098LQHX0UaQ9o03gAUyZeNAycukJSSarTtw5Qsf+dpjDkrIuEikh+r\nlbkF616D1YCrwH6gGVAOKIvVMg0QkfV2EX5AKWPMiRhFtwaKAU9i3e/wIDAttm0Qkf5Yd94nX778\n8d72rVs2MWfWDzxZqjQ1q1jXi959byQNGzVh8aKFvPX6ywQFXaT9sy0oXaYsv/gvY8qkrzlx7Cij\nPxrF6I9GAbBg8TKXgTENGzVh4/p11K3X4K7bMM0ezdi73wAAlvgvpF79p0mXLp3LsiNHvOO49tS2\nXUe6dGjD2M9GM/SdEYDVij5x/Bjl/RJ9h6x469WtMxs2rONSUBDFCuXn7WHD6dGrD9169Ob5/n2o\n7FeGVKlS8c3U7xARNm9cz6j3R+Dt7Y2Hhwdjx09wdK86e6ZREzasX8dT9RsQfPMmHdq2IjQ0lMjI\nSGrXqUsfu56+nTIJgD79ngNg8aIF1GsQe729P3wY775n/V+1a9+Rju3b8PmY0Qx7dwRg1dvxY8fw\nq/Dw6g1g/tw5tOvQIdryWzZv5PMxox319vmXX5E9u+uo0UaNmrBh3VoKFSrM+XOBDOjbi4iICCIj\nI2nzbDsaN2kGwKj3hlO+QgWaNmvBh5+MYdDAAXw9/ktEhEmTpzkGFgUHB/Pj9zNZ9OtyAAa99CrP\ntmpGqlSp+HbGDwDs3rWTSpWrxHoNNjFyZ0vv+BmKh4fw89o/WbbVOpF6tUM1GlcrgocIU/x3ss6+\n5ti6Tgn6taxAeEQk
t0Jv032k9VOeiEjDq+OWs3h0Jzw9PJixbA9/ngxyWefyrUfp09yP6Uv3EBFp\neGHMr/z0XlsijeHK9VsMGL3YsWy7eiWZF0urc+uUvlTtN5XASzf4cMYGVn7ZndvhEZw+f5X+n9zJ\n37xGUXb9FegYhLTv6DkCvu3PH8cvsP+YFTDrlCvAcnufk0pyH+zjDmLMA7nt330TkR+BxVjTyHyO\nFTyrYwXPbEBqYL8xZpq9/PfAPOAaMNwY85RTWTeMMelFZCywzynPL8AsY8z8u21Leb+KZu2mbe7e\nxQQLCQmheaP6rFi9AU9PzyRZ5+JFC9m7ZxfDhr9/74VjiPrd3MMWEhJCk2fq8/uapKs3/0UL2Lt7\nN++MSHi9JRfnAgPp36cH/kt/S7J1vvn6KzRp2py69eonKn+ORh+6eYsSZ9W4HrQZOpurN11HzSel\nVN6erBzbjXovziAilmuqAKE7JhJ5/YzbDtaSZfzM7KXr771gPJTJl2Hn/dzb9kFKrt22cOe6Z2ms\nbtutWC3P+FzvvPlgN+3h8PHxYeiw4Zw9e8/bLrpNREQ4g16O99isZMnHx4f/vTOcs2eSsN7Cw3nx\nlUe73nL7+tKzd99YB2E9KCWeLJnowJmcDJm4kny5Mj3szSBfzowMm7wmzsD5oKSA8ULJuuVZDvgF\nOG6MaWCn7cRqgZYCagMDsG63lBXYgTWZaXHgDWNMM6eyolqebZzy5MTqtu33qLQ8HzXJpeWpUo7k\n0vJ8lLi95VnWz8xxU8uz9GPJt+WZLK952vZjXcucFSMtvTEmSEQWYLVE9wIGeNMYc05E7vZL5wVA\nPaygeRrrWqpSSimVIMk2eBpjIoCMMdJ6Oj03wGD74bzMWmBtjLT0TnkGPYjtVUopZUnuI2XdIdkG\nT6WUUo8eIWWMtk3OA4aUUkqpZElbnkoppdwqBTQ8NXgqpZRysxQQPbXbVimllEogbXkqpZRyKx1t\nq5RSSiWQjrZVSimllAtteSqllHKrFNDw1OCplFLKzVJA9NRuW6WUUiqBtOWplFLKbazpxP77TU8N\nnkoppdxHdLStUkoppWKhLU+llFJulQIanho8lVJKuVkKiJ7abauUUkolkLY8lVJKuZHoaFullFIq\noXS0rVJKKaVcaMtTKaWU2wgpYryQBk+llFJulgKip3bbKqWUUgmkLU+llFJupaNtlVJKqQTS0bZK\nKaVUMiUiJ0Vkv4jsEZEddlpWEVkpIkfsv1nsdBGRcSJyVET2iYjf/axbg6dSSim3Ejc94ukpY0w5\nY0xF+/UQYJUxpgiwyn4N0BgoYj/6AxMTu3+gwVMppZQ72VOSueORSC2BGfbzGUArp/SZxrIVyCwi\nvoldiQZPpZRSyVV2Ednh9Ogf430D/CYiO53ey2WMCbSfnwNy2c/zAn875f3HTksUHTCklFLKzdw2\nYijIqTs2NjWNMWdEJCewUkQOOb9pjDEiYty1Mc40eCqllHIbIelG2xpjzth/L4jIAqAycF5EfI0x\ngXa37AV78TNAPqfsj9lpiaLdtkoppR45IpJORDJEPQcaAn8A/kAPe7EewCL7uT/Q3R51WxW46tS9\nm2Da8lRKKeVWSdTwzAUsEKuZ6wXMMsYsF5EAYK6I9AFOAe3t5ZcCTYCjQDDQ635WrsEzHvbs3hmU\nOa3XqYe9HXHIDgQ97I14BGm9JY7WW+Ik53or4O4Ck6Lb1hhzHCgbS/oloH4s6QZ4wV3r1+AZD8aY\nHA97G+IiIjvucUFdxULrLXG03hJH6+2/R4OnUkopt9J72yqllFIJ9d+PnTra9j9g8sPegEeU1lvi\naL0ljtbbf4y2PB9xxhg9KBNB6y1xtN4SJ6XVWwpoeGrwVEop5T73eV/aR4Z22yqllFIJpMEziYhI\nhD3n3F4R2SUi1RNZznMi0t3d25fcicj/ROSAPQ/fHhGpEsdyFUVknNNrbxE5YefZIy
LnROSM0+tU\nCdyOUSLyyv3uz8MW3/pMYJl1E/u5flSIyBfO//8iskJEpjq9/kxEXktAeTfiSJ8uIm3vb2sfHnHT\nv+RMu22TTogxphyAiDwDfATUSWghxphJ7t6w5E5EqgHNAD9jTKiIZAdiDXrGmB3ADqekmsASY8yL\ndlkjgBvGmDEPdquTr4TUZwLK9ALqAjeAzfe9kcnXJqw71owVEQ+smx9kdHq/OvDqvQoRES9jTPiD\n2cRkIHnHPbfQlufDkRH4Fxxn60ui3hCRr0Skp/38YxE5aLcOxthpI0TkDfv5WhH5RES2i8hhEall\np3uKyKciEmDnHWCn+4rIerul8YeI1LKXnW6/3i8i9zzwHwJfrNkVQgGMMUHGmLMiUklENtut+e0i\nkiFmfQKNgGV3K1xEetj594jIBPtLERFpavcS7BWR35yylBaRdSJyXETcdseSJBRXfZ4UkdH252C7\niBQGEJHHRWS1/VlaJSL57fTpIjJJRLYBc4HngFfteqwlIu3sz9VeEVn/sHbWzTYD1eznJbHupXpd\nRLKISGqgBLDbPv6ijqkO4DjWN4iIP3DQuVCxfCUif4nI70DOpNsllRja8kw6PiKyB0iD9eVV724L\ni0g2oDVQ3J5WJ3Mci3oZYyqLSBNgONAA6IN10+NK9gG9yf7ybwOsMMZ8ICKeQFqgHJDXGFPKXm9c\n63mYfgPeFZHDwO/AHGCL/beDMSZARDICIbHkfQp4L66CRaQUVj1XN8aEi8hkoKOIrMaaab6WMeaU\niGR1ylYU6/ZfmYE/RWSSMSbi/nczybjUpzFmnf3eVWNMabEuDYzFaqGOB2YYY2aISG9gHHcmGH4M\nq+4iYrbqRWQ/8Iw9ZVRy/FwlmH2SEW6fQFTH+hzmxQqoV4H9WHVWDuvWcdmBAKeTBz+glDHmRIyi\nWwPFgCex7tl6EJj2gHfngUkBDU9teSahEGNMOWNMcazW0EyRu45JuwrcAr4VkTZYNzKOzS/2353A\n4/bzhlizB+wBtgHZgCJAANDL/pIrbYy5DhwHCorIeBFpBFxL7A4+KMaYG0AFoD9wEStoDgACjTEB\n9jLXYnaDiUhe4LIxJq66A+tkoxKww66vOkAhrC/DNcaYU3b5l53yLDHGhBljLgCXgWR7+8bYxFaf\nUb0dwE9Of6NaWNWAWfbz77G6wqPMu8uJwyZguoj0Azzds/XJwmaswBkVPLc4vd6EVT8/GWMijDHn\ngXVYnzGA7bEEToDaTnnOAqsf8D48UFEjbu/3kZxpy/MhMMZssa8z5QDCiX4Sk8ZeJlxEKmO1cNoC\ng4i9tRpq/43gzv+nAC8aY1bEXFhEagNNsb7UPjfGzBSRssAzWN1u7YHe97mLbmd/Qa8F1totmvh0\nlzYCXOogBgGmGWPeiZYo0voueUKdnjvX+yMjlvqMmsLJeeLg+EwifPMu63hOrIFITYGdIlLBvmn3\no24TVqAsjdVt+zfwOtaJ53dYvR1xibO+/juS/2Afd9CW50MgIsWxzsQvYU2Z86SIpLa7turby6QH\nMhljlmINQHCZPeAuVgADRcTbLquoWHPfFQDOG2OmAFMBPzuIexhjfgaGYXUrJSsiUkxEijgllQP+\nBHxFpJK9TAaxBq04u+f1Tqxuy/Z2PSAi2ewuuc3AU3adEaPb9pEWR31GzRrUwenvFvv5ZqCj/bwL\nsCGOoq8DGZzWU8gYs80Y8y5WCzdfHPkeNZuxumYv2y3Fy1hd+NXs9zYAHcQaT5ADq1W5/R5lrnfK\n48vdA7BKBh65M+ZHWNQ1T7BaOz3ss/+/RWQu1hnsCWC3vUwGYJGIpLGXj/fwd6zA+Diwy+4avoh1\njaouMFhEbmONiuyOdb3mu6hBMsDQxO3eA5UeGG+fXIRjzcfXH+ssf7yI+GBd72wQlcG+plvYGHPo\nbgUbY/aLyHvA73Yd3Aaes6+jDsT6PxDgLND4Ae
zbwxBXfTYDsojIPqzWdSd7+RexPiODsT5Lcc2D\nuBiYLyIt7Tyv2kFagFXA3ge0P0ltP9a1zFkx0tIbY4JEZAFWIN2L1Xp/0xhzzj5pjssCrJ6lg8Bp\n7py4PHKE5N/l6g5iTXGm1H+LiNQEuhpjnnvY2/KoEJGTQEVjTHKdd1I9Asr7VTSrN25zS1lZ03nt\nTK5TuWnLU/0nGWM2Ahsf9nYopf6bNHgqpQAwxjz+sLdB/TekhG5bDZ5KKaXcSkfbKqWUUsqFtjyV\nUkq5zyNwgwN30JanStHkzmw3f4jIPBFJex9lOe6rKyItRGTIXZbNLCLPJ2Idjnsbxyc9xjIJmqlD\nrHva/pHQbVQpm7jxkZxp8FQpXdRtE0sBYVh3WXKwb9id4OPEGONvjPn4LotkBhIcPJVSyYMGT6Xu\n2AAUtltcf4nITKybV+QTkYYiskWsWVbm2XeAQkQaicghEdmFdeN97PSeIvKV/TyXiCwQa3aRvWLN\nefkxUMhu9X5qLzdY7syE855TWf8Ta9acjVg3D78rEelnl7NXRH6O0ZpuICI77PKa2cvHOguPUomW\nApqeGjyVwjEfZWOsO8WAdSP9CcaYklj3Ix0GNDDG+GHNF/qaffenKUBzrBut546j+HHAOmNMWazb\nHx4AhgDH7FbvYBFpaK+zMtbt8iqISG0RqYB1a7xyQBPu3GD8bn4xxlSy1/cn1iw7UR6319EUmGTv\ng2MWHrv8fiLyRDzWo1SsdDJspf77nG+buAH4FsgDnDLGbLXTq2JNFbXJulMfqbBun1YcOGGMOQIg\nIj9g3eYupnpYt0KMuiH7VRHJEmOZhvYj6vaM6bGCaQZgQdTMMGLNBXkvpURkFFbXcHqi3xx/rjEm\nEjgiIsftfWgIlHG6HprJXvfheKxLqRRJg6dK6UKMMeWcE+wA6Tz7hQArjTGdYiwXLd99EuAjY8w3\nMdbxSiLKmg60MsbsFWuqsbpO78W8H6chjll4ROTxRKxbKR1tq5QCYCtQQ0QKA4g1Q01R4BDwuIgU\nspfrFEf+VcBAO6+niGQixgwkWK3D3k7XUvOKSE6s2TZaiYiPiGTA6iK+lwxAoFiz6nSJ8V47EfGw\nt7kg8BdxzMITj/UoFasUcMlTW55K3Ysx5qLdgvtJRFLbycOMMYdFpD/wq4gEY3X7ZoiliJeBySLS\nB2v+z4H2nK6b7J+CLLOve5YAttgt3xtYN7bfJSJzsGbouIA1ofm9vIM1CfpF+6/zNp3Gmh4rI9bs\nMbdEJK5ZeJRScdBZVZRSSrmNX4WKZuPW+Jzj3Vu6VB46q4pSSqmUIbmPlHUHveaplFJKJZC2PJVS\nSrmNkDJG2+o1T6WUUm4jIsuB7G4qLsgY08hNZbmVBk+llFIqgfSap1JKKZVAGjyVUkqpBNLgqZRS\nSiWQBk+llFIqgTR4KqWUUgn0fxPhfJ9EBjEhAAAAAElFTkSuQmCC\n",
            "text/plain": [
              "<Figure size 504x504 with 2 Axes>"
            ]
          },
          "metadata": {
            "tags": []
          }
        },
        {
          "output_type": "stream",
          "text": [
            "              precision    recall  f1-score   support\n",
            "\n",
            "           0       0.84      0.83      0.83      4500\n",
            "           1       0.85      0.84      0.85      4500\n",
            "           2       0.90      0.94      0.92      4500\n",
            "           3       0.91      0.88      0.89      4500\n",
            "\n",
            "    accuracy                           0.87     18000\n",
            "   macro avg       0.87      0.87      0.87     18000\n",
            "weighted avg       0.87      0.87      0.87     18000\n",
            "\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yeiD1T_QZpdk",
        "colab_type": "text"
      },
      "source": [
        "# Inference"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "z7G7vuSTZHkQ",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import collections"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Xeu952p4Zweb",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "362Bl2chXDOA",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def get_probability_distributions(probabilities, classes):\n",
        "    \"\"\"Produce probability distributions with labels.\"\"\"\n",
        "    probability_distributions = []\n",
        "    for i, y_prob in enumerate(probabilities):\n",
        "        probability_distribution = {}\n",
        "        for j, prob in enumerate(y_prob):\n",
        "            probability_distribution[classes[j]] = np.float64(prob)\n",
        "        probability_distribution = collections.OrderedDict(\n",
        "            sorted(probability_distribution.items(), key=lambda kv: kv[1], reverse=True))\n",
        "        probability_distributions.append(probability_distribution)\n",
        "    return probability_distributions"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6EZdo-fKZwo6",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "CLP2Vzp3Zwth",
        "colab_type": "code",
        "outputId": "925e4006-b7dd-48fa-fb4e-797ef2742627",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 85
        }
      },
      "source": [
        "# Inputs\n",
        "texts = [\"This weekend the greatest tennis players will fight for the championship.\"]\n",
        "num_samples = len(texts)\n",
        "X_infer = np.array(X_tokenizer.texts_to_sequences(texts))\n",
        "print (f\"{texts[0]} \\n\\t→ {untokenize(X_infer[0], X_tokenizer)} \\n\\t→ {X_infer[0]}\")\n",
        "print (f\"len(X_infer[0]): {len(X_infer[0])} words\")\n",
        "y_filler = np.array([0]*num_samples)"
      ],
      "execution_count": 88,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "This weekend the greatest tennis players will fight for the championship. \n",
            "\t→ this weekend the greatest tennis players will fight for the championship \n",
            "\t→ [ 272 2283   10 6450  878  370   60  238    5   10 1465]\n",
            "len(X_infer[0]): 11 words\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "q1gFlI5MZ143",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Inference data generator\n",
        "inference_generator = DataGenerator(X=X_infer,\n",
        "                                    y=y_filler,\n",
        "                                    batch_size=BATCH_SIZE,\n",
        "                                    max_filter_size=max(FILTER_SIZES),\n",
        "                                    shuffle=False)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "UFE4sp_7aHTq",
        "colab_type": "code",
        "outputId": "2b6f5610-f8bc-499d-e57c-f1c54e02e84c",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "# Predict\n",
        "probabilities = model.predict_generator(generator=inference_generator,\n",
        "                                        verbose=1)"
      ],
      "execution_count": 90,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "\r1/1 [==============================] - 0s 56ms/step\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bGi_NvbBaMap",
        "colab_type": "code",
        "outputId": "ebd752e8-97eb-44a6-dcf4-aada39df94b4",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 238
        }
      },
      "source": [
        "# Results\n",
        "probability_distributions = get_probability_distributions(probabilities=probabilities,\n",
        "                                                          classes=y_tokenizer.classes_)\n",
        "results = []\n",
        "for index in range(num_samples):\n",
        "    results.append({\n",
        "        'raw_input': texts[index],\n",
        "        'preprocessed_input': untokenize(indices=X_infer[index], tokenizer=X_tokenizer),\n",
        "        'tokenized_input': str(X_infer[index]),\n",
        "        'probabilities': probability_distributions[index]\n",
        "    })\n",
        "print (json.dumps(results, indent=4))"
      ],
      "execution_count": 91,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "[\n",
            "    {\n",
            "        \"raw_input\": \"This weekend the greatest tennis players will fight for the championship.\",\n",
            "        \"preprocessed_input\": \"this weekend the greatest tennis players will fight for the championship\",\n",
            "        \"tokenized_input\": \"[ 272 2283   10 6450  878  370   60  238    5   10 1465]\",\n",
            "        \"probabilities\": {\n",
            "            \"Sports\": 0.7571110129356384,\n",
            "            \"World\": 0.2408323436975479,\n",
            "            \"Sci/Tech\": 0.0012546397047117352,\n",
            "            \"Business\": 0.0008020797977223992\n",
            "        }\n",
            "    }\n",
            "]\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Y4-WkjN595lO",
        "colab_type": "text"
      },
      "source": [
        "# Interpretability"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Uo0FqqEY98El",
        "colab_type": "text"
      },
      "source": [
        "Recall that each our unique filter sizes (2, 3 and 4) act as n-gram feature detectors. When these filters convolve on our embedded input (`N`, `max_seq_len`, `embedding_dim`), they produce feature maps which are shape ((`N`, `max_seq_len`, `num_filters`) for each filter size. Since we used `SAME` padding with stride=1, our feature maps have the same length as our inputs ('max_seq_len') which you can think of as what the filters extracted from each n-gram window. When we apply 1d global max-pooling we're effectively extracting the most relevant information from the feature maps. We can inspect the trained model at the pooling step to determine which n-grams were most relevant towards the prediction."
      ]
    },
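    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "As a quick sanity check (a toy sketch with made-up values, not taken from our trained model), here's what 1D global max-pooling does to a single feature map of shape (`max_seq_len`, `num_filters`): it keeps the single largest activation per filter."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Toy feature map: 3 timesteps (max_seq_len), 2 filters\n",
        "import numpy as np\n",
        "feature_map = np.array([[0.1, 0.9],\n",
        "                        [0.5, 0.2],\n",
        "                        [0.8, 0.4]])\n",
        "print (feature_map.max(axis=0)) # max per filter -> [0.8 0.9]"
      ],
      "execution_count": 0,
      "outputs": []
    },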
    {
      "cell_type": "code",
      "metadata": {
        "id": "Zv2uqi6mOe9Z",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import seaborn as sns\n",
        "from statistics import mode"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "J6FZJVbUBz36",
        "colab_type": "text"
      },
      "source": [
        "### Components"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "M-aGz2BgCCKq",
        "colab_type": "text"
      },
      "source": [
        "We're going to copy the same model structure as before but now we'll stop just after convolution since those are the outputs we care about."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_nzdZ2_tBsfc",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "class ConvOutputsModels(Model):\n",
        "    def __init__(self, vocab_size, embedding_dim, filter_sizes, num_filters):\n",
        "        super(ConvOutputsModels, self).__init__()\n",
        "\n",
        "        # Embeddings\n",
        "        self.embedding = Embedding(input_dim=vocab_size,\n",
        "                                   output_dim=embedding_dim)\n",
        "        # Convolutional filters\n",
        "        self.convs = []\n",
        "        for filter_size in filter_sizes:\n",
        "            conv = Conv1D(filters=num_filters, kernel_size=filter_size, \n",
        "                          padding='same', activation='relu')\n",
        "            self.convs.append(conv)\n",
        "            \n",
        "    def call(self, x_in, training=False):\n",
        "        \"\"\"Forward pass.\"\"\"\n",
        "\n",
        "        # Embed\n",
        "        x_emb = self.embedding(x_in)\n",
        "\n",
        "        # Convolutions\n",
        "        convs = []\n",
        "        for i in range(len(self.convs)):\n",
        "            z = self.convs[i](x_emb)\n",
        "            convs.append(z)\n",
        "\n",
        "        return convs\n",
        "\n",
        "    def sample(self, input_shape):\n",
        "        x = Input(shape=input_shape)\n",
        "        return Model(inputs=x, outputs=self.call(x)).summary()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kCam06jHB2X-",
        "colab_type": "text"
      },
      "source": [
        "### Operations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XfWHwZ7DB2gf",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 323
        },
        "outputId": "c1d619dd-2839-4852-c8af-f60212be828d"
      },
      "source": [
        "# Initialize model\n",
        "conv_layer_outputs_model = ConvOutputsModels(vocab_size=vocab_size, \n",
        "                                             embedding_dim=EMBEDDING_DIM, \n",
        "                                             filter_sizes=FILTER_SIZES, \n",
        "                                             num_filters=NUM_FILTERS)\n",
        "conv_layer_outputs_model.sample(input_shape=(10,))"
      ],
      "execution_count": 134,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"model_6\"\n",
            "__________________________________________________________________________________________________\n",
            "Layer (type)                    Output Shape         Param #     Connected to                     \n",
            "==================================================================================================\n",
            "input_8 (InputLayer)            [(None, 10)]         0                                            \n",
            "__________________________________________________________________________________________________\n",
            "embedding_8 (Embedding)         (None, 10, 100)      2991700     input_8[0][0]                    \n",
            "__________________________________________________________________________________________________\n",
            "conv1d_24 (Conv1D)              (None, 10, 50)       10050       embedding_8[0][0]                \n",
            "__________________________________________________________________________________________________\n",
            "conv1d_25 (Conv1D)              (None, 10, 50)       15050       embedding_8[0][0]                \n",
            "__________________________________________________________________________________________________\n",
            "conv1d_26 (Conv1D)              (None, 10, 50)       20050       embedding_8[0][0]                \n",
            "==================================================================================================\n",
            "Total params: 3,036,850\n",
            "Trainable params: 3,036,850\n",
            "Non-trainable params: 0\n",
            "__________________________________________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4fL_exZ2CMP0",
        "colab_type": "text"
      },
      "source": [
        "Since we already trained our model, we'll transfer those weights to our new model."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Q24ZsZofCkNV",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 85
        },
        "outputId": "5d8de9fd-9727-4130-c668-1e776fa77395"
      },
      "source": [
        "# Model's layers\n",
        "conv_layer_outputs_model.layers"
      ],
      "execution_count": 135,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[<tensorflow.python.keras.layers.embeddings.Embedding at 0x7feb61c418d0>,\n",
              " <tensorflow.python.keras.layers.convolutional.Conv1D at 0x7feb61c41cc0>,\n",
              " <tensorflow.python.keras.layers.convolutional.Conv1D at 0x7feb61ade128>,\n",
              " <tensorflow.python.keras.layers.convolutional.Conv1D at 0x7feb61ade710>]"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 135
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "vmlbHbj7CKmT",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Set embeddings weights\n",
        "conv_layer_outputs_model.layers[0].set_weights(model.layers[0].get_weights())\n",
        "\n",
        "# Set conv weights\n",
        "conv_layer_start_num = 1\n",
        "for layer_num in range(conv_layer_start_num, conv_layer_start_num + len(FILTER_SIZES)):\n",
        "    conv_layer_outputs_model.layers[layer_num].set_weights(model.layers[layer_num].get_weights())"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "pZQY75xXC4rZ",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 68
        },
        "outputId": "bbe15345-6bab-4c00-87fa-8747b34d915b"
      },
      "source": [
        "# Forward pass\n",
        "conv_outputs = conv_layer_outputs_model.predict_generator(generator=inference_generator,\n",
        "                                                          verbose=1)\n",
        "print (len(conv_outputs)) # each filter_size has feature maps\n",
        "print (conv_outputs[0].shape)"
      ],
      "execution_count": 141,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "\r1/1 [==============================] - 0s 10ms/step\n",
            "3\n",
            "(1, 11, 50)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "sTAS1b7vMn_u",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        },
        "outputId": "b27e5c8c-5940-4633-b3dd-a274b1de309d"
      },
      "source": [
        "conv_outputs[0].shape"
      ],
      "execution_count": 138,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "(1, 11, 50)"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 138
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "RyC7FJndIFaE",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 509
        },
        "outputId": "73ea5e4e-6647-44a5-abe9-669772bd25d3"
      },
      "source": [
        "# Visualize bi-gram filters\n",
        "tokens = untokenize(X_infer[0], X_tokenizer).split()\n",
        "sns.heatmap(conv_outputs[0][0].T, xticklabels=tokens)"
      ],
      "execution_count": 145,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<matplotlib.axes._subplots.AxesSubplot at 0x7feb5cfda128>"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 145
        },
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAZsAAAHbCAYAAAAdwDeUAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0\ndHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO3deZwlZXn28d81w6JsA4qILCIgSFSM\nyAioUUAR0RjwTVxwA0l0lATRLG6vRhSNAX0R0cRlooCgomIU0QCGyKYiyKDIJiDiwgwKsq9xHPp6\n/6hqONNMd81SVaem6vryOZ8+p86pvp9ups9z6lnuW7aJiIho0qxxNyAiIvovnU1ERDQunU1ERDQu\nnU1ERDQunU1ERDQunU1ERDQunU1ERDRujaoXSNoB2A/YvDy0CDjV9s+abFhERPTHjFc2kt4BfBkQ\n8KPyJuAkSe9svnkREdEEScdKuknS5dM8/2pJl0q6TNL5kv505LlflccvkbRgueLNlEFA0jXAk2z/\nccrxtYArbG83zXnzgHkAmj1n51mz1l2etkRErHaWLF6kur7XH2++rraULmtuvM2M7ZL0HOBu4ATb\nT17G888Efmb7NkkvBN5ne9fyuV8Bc23fvLztqZqzmQA2W8bxx5TPLZPt+bbn2p6bjiYiontsnwfc\nOsPz59u+rXx4AbDFqsSrmrN5K/BdST8Hri+PPRZ4PHDIqgSOiIgpJu6v7VuNjjCV5tuev5Lf7m+A\n00ceG/hvSQY+szzfd8bOxvYZkrYHdmHpBQIX2a7vtxIREbUqO4CV7VweIGlPis7mz0YO/5ntRZI2\nAc6UdFV5pTStytVoticoLqEiIqJJnnZ2YiwkPQX4LPBC27dMHre9qPx6k6RvUFyQzNjZZJ9NRERX\nTEzUd1tFkh4LfB14re1rRo6vK2n9yfvA3sAyV7SNqryyiYiI/pF0ErAHsLGkhcBhwJoAtj8NvBd4\nJPBJSQBLbM8FHg18ozy2BvAl22dUxmu6eNoaa22e6mwR0Vt1Ln1efMMVtb1frrXZk2prVx1yZRMR\n0RU1DH91VeZsIiKicbmyiYjoio6tRqtTOpuIiK6ocVNn1/Sus3nvY/ZoPebhvz2n1Xh3f/eIVuPx\nsHXajQf4usqVlLVa/7WrvPet817+mF1ajffV3/6o1XgAp2/0Z9UvirHoXWcTEbHa6vEwWuUCAUk7\nSHqepPWmHN+nuWZFRAxQhzZ11q2qns2hwDeBNwOXS9pv5OkPNdmwiIjoj6phtDcAO9u+W9LjgK9J\nepztYyiKqC3TlHo2pMxAREQ193gYraqzmWX7bgDbv5K0B0WHsxUzdDaj2UaTQSAiYjl1cPirLlVz\nNjdKeurkg7LjeTGwMbBjkw2LiIj+qLqyOQBYMnrA9hLgAEmfaaxVERFD1ONhtCTijIhYBXUm4vzD\nVefW9n659g67dyoRZ3KjRURE47KpMyKiK3o8jJbOJiKiKwa8Gi0iImKV5comIqIrMowWERGNyzBa\nRETEymv8yuaO9+/VdIilPPWj7dZBAXjkmuu3Gm/BzT9vNV7EyvjUJnu2HvPgm85uPWad7BRPi4iI\npvV4zmaFh9EkndBEQyIior9mvLKRdOrUQ8CekjYEsL1vUw2LiBicHi8QqBpG2wK4EvgsYIrOZi5w\n1Ewnjdaz+fhLduOvd9l+1VsaEdF3Ax5GmwtcDLwbuMP2OcB9ts+1fe50J9meb3uu7bnpaCIiYsYr\nGxdl446WdHL59caqcyIiYiVNDHw1mu2FwMsk/TlwZ7NNiogYqB4Po6WeTVR6z2P2aD3mv912Uavx\nbv/fe1qNF81YZ821W4955z3X1VY35n9/dHJt75cP2+VlnapnkyGxiIiuGPBqtIiIaEuPh9GSGy0i\nIhqXK5uIiK7IMFpERDSux51N
htEiIqJxubKJiOiIlBiITrnuKTu0Gu///v6+VuMBbLDWuq3Gyz6b\nfth6/U3H3YRV0+NhtHQ2ERFdMdSlz5J2lbRBef/hkt4v6VuSjpQ0p50mRkTE6q5qgcCxwL3l/WOA\nOcCR5bHjGmxXRMTwTEzUd+uYqmG0WbaXlPfn2n5aef/7ki6Z7qTRejaaPYdZs9odf4+IWC0NdRgN\nuFzSQeX9n0qaCyBpe+CP0500Ws8mHU1ERFRd2bweOEbSe4CbgR9Kuh64vnwuIiLq0sHhr7pUFU+7\nA3hduUhg6/L1C23f2EbjYtm2ufSqVuPdetCTW40H8Ijjbmo9Zqz+7l/dh6FW9/bPYHmLp90J/LTh\ntkRERE9ln01ERFcMdRgtIiJa1OPOJok4IyKicbmyiYjoiqEvEIiIiBZkGC0iImLl5comKj3iuMvH\n3YReWn+th7ca767F7ZeKaNtVt10/7iasmgyjRURE4zKMFhERsfJmvLKRtBawP3CD7f+R9CrgmcDP\ngPm2p03GGRERK2jAw2jHla9ZR9KBwHrA14HnAbsABzbbvIiIAenxMFpVZ7Oj7adIWgNYBGxm+35J\nX2CGXGmpZxMREaMqi6eVQ2nrAutQVOq8FVgbWHO6k2zPB+YDrLHW5q6nqRERPTfgK5vPAVcBs4F3\nAydLug7YDfhyw22LiBgW9/ezeVU9m6MlfaW8f4OkE4C9gP+w/aM2GhjRV0PY9xIxqXKfje0bRu7f\nDnyt0RZFRAzVgIfRIiKiLT3ubLKpMyIiGpcrm4iIrhjwps6IiGhLhtEiIiJWXq5sIiK6osf7bHJl\nExHRFRMT9d0qSDpW0k2SllmwSoWPS7pW0qWSnjby3IGSfl7elitHZjqbiIhhOh7YZ4bnXwhsV97m\nAZ8CkPQI4DBgV4qEzIdJ2qgqWDqbiIiuaPHKxvZ5FLkup7MfcIILFwAbSnoM8ALgTNu32r4NOJOZ\nOy0gczYREd1R49Ln0ez7pfllkuTltTkwWmd7YXlsuuMzSmcTEdFDo9n3u2DGYTRJcyQdIekqSbdK\nukXSz8pjG85w3jxJCyQtmJi4p/5WR0T0kCdc260Gi4AtRx5vUR6b7viMquZsvgrcBuxh+xG2Hwns\nWR776nQn2Z5ve67tuSmcFhGxnFqcs1kOpwIHlKvSdgPusP1b4DvA3pI2KhcG7F0em1HVMNrjbB85\nesD274AjJf31yrU/IiLGTdJJwB7AxpIWUqwwWxPA9qeB04AXAdcC9wIHlc/dKukDwEXltzrc9kwL\nDYDqzubXkt4OfN72jWUDHw28jqUniCIiYlW1mBvN9isrnjfwd9M8dyxw7IrEqxpGewXwSODccs7m\nVuAc4BHAy1YkUEREVJhwfbeOqarUeRvwjvK2FEkHAcc11K6IiOiRVdnU+f7aWhEREV1bIFCrGa9s\nJF063VPAo+tvTkTEgHWwk6hL1QKBR1OkJrhtynEB5zfSooiI6J2qzubbwHq2L5n6hKRzGmlRRMRQ\n9bjEQNUCgb+Z4blX1d+ciIgBG/AwWgTP3uSJrcf83k1Xth4zIpqTziYiois6uD+mLulsIiK6osUM\nAm1L8bSIiGhcrmwiIrqix8NoVfVsNpD0r5JOlPSqKc99cobzUs8mImIFeWKitlvXVA2jHUexgfM/\ngf0l/aektcvndpvupNSziYiIUVXDaNva/qvy/imS3g2cJWnfhtsVETE8PR5Gq+ps1pY0yy6WSNj+\nF0mLgPOA9ZYnwCxpFZu4YiZ6vAN30pqz251qe/Yam7QaD+B7ZJ/N6u6Zj9qh9Zjn//6q1mPWasCr\n0b4FPHf0gO3jgX8EFjfUpoiIYRpwPZu3T3P8DEkfaqZJERHRN6lnExHRFaln89CnSD2biIh6dXD4\nqy6pZxMREY1LPZuIiK7o8Wq01LOJiOiKAQ+jrbIh7Htp2x/vX9JqvA/dcE6r8aIfVvs9L1GrJO
KM\niOiILuY0q0s6m4iIrujxMFrq2URERONW+MpG0ia2b6p4zTxgHoBmzyGZnyMilkOPr2yqNnU+Yuoh\n4EeSdgJk+9ZlnWd7PjAfYI21Nu/vby8iok5DXfoM3Az8esqxzYEfAwa2aaJRERHRL1WdzduA5wNv\ns30ZgKRf2t668ZZFRAzNUIfRbB8l6SvA0ZKuBw6juKKJEdvMeUyr8fZed9tW4336hu+3Gi/64emP\n2r71mBf9/prWY9bJPe5sKlej2V5o+2XAOcCZwDpNNyoiIvpluZc+2z4V2BPYC0DSQU01KiJikHpc\nPG2F9tnYvs/25eXD1LOJiKhT6tk89ClSzyYiIpZT6tlERHRFB4e/6pJ6NhERXTHUzib1bCIiog7J\n+lyD6+74bavxPt1yvIiVsbrveRkH97j+VzqbiIiu6PEwWkoMRERE43JlExHRFT2+slmZejaPtH1L\nxWtSzyYiYgUNNjeapCMkbVzenyvpOuBCSb+WtPt059meb3uu7bnpaCIiomrO5s9t31ze/wjwCtuP\npyg7cFSjLYuIGJoe50arGkZbQ9IatpcAD7d9EYDtaySt3XzzIiIGpHspzWpT1dl8EjhN0hHAGZKO\nAb4OPBd4SFaBaMfRj96z1Xh/f+PZrcaLiP6pyiDwCUmXAQcD25ev3w44BfhA882LiBiOPi8QqFyN\nZvscisJpSynr2RxXf5MiIgaqx53NqmzqTD2biIhYLqlnExHRFQNeIJB6NhERLRnynE3q2URExCpL\nPZuIiK4Y8DBadNBitRvvt7s/vt2AwGPOvbb1mBHj1udhtJQYiIiIxuXKJiKiKzKMFhERTXOPO5tG\nhtEkzZO0QNKCiYl7mggRERGrkap6NnMlnS3pC5K2lHSmpDskXSRpp+nOSz2biIiVMFHjrWOWJ+vz\nYcCGFJs4/9728yU9r3zuGQ23LyJiMIY8jLam7dNtnwTY9tco7nwXeFjjrYuIiF6ourL5X0l7A3MA\nS3qJ7VPKktD3N9+8Ffe8Rz+l9ZjfvXG6FHLNePE6t7Qab90X/0mr8QDuePYWrcab88FzWo0HsObs\ndtfnzFK7G7Rev8lurcYD2O7+1XzNU4+vbKr+z7wJ+DDFr+AFwMGSjgcWAW9otmkREcMy2GE02z+1\n/QLbL7R9le232N7Q9pOAJ7TUxoiIWM2lnk1EREd4or5b16SeTURER7TdSUjaBzgGmA181vYRU54/\nGtizfLgOsIntDcvn7gcuK5/7je19Z4qVejYREQMkaTbw78DzgYXARZJOtX3l5Gts//3I698MjO6v\nvM/2U5c3XurZRER0hVtdMbgLcK3t6wAkfRnYD7hymte/kmLf5UqR3WxK6zXW2ry/ObMjYvCWLF5U\nWw/xu+fsUdv75WO+d+4bgXkjh+bbnj/5QNJLgX1sv758/FpgV9uHTP1ekrYCLgC2sH1/eWwJcAmw\nBDjC9ikztWc1X5QeEdEfnqjvyqbsWOZXvnD57A98bbKjKW1le5GkbYCzJF1m+xfTfYPUs4mIGKZF\nwJYjj7cojy3L/sBJowdsLyq/Xgecw9LzOQ+RziYioiNaXvp8EbCdpK0lrUXRoZw69UWSdgA2An44\ncmwjSWuX9zcGnsX0cz1A9dLnOcC7gJcAmwAGbgK+STFGd/ty/UgREVHJLS4QsL1E0iHAdyiWPh9r\n+wpJhwMLbE92PPsDX/bSE/x/AnxG0gTFRcsRo6vYlmXGBQKSvgOcBXze9u/KY5sCBwLPs733NOfN\no5yY0uw5O6fMQET0VZ0LBBY947m1LRDY/IdntZsMr0JVZ3O17WWmpZnpuVFZjRYRfVZnZ7Nw1/o6\nmy0u7FZnU7Ua7deS3k5xZXMjgKRHA68Drm+4bRERg1LnarSuqepsXgG8Ezi37GQM3EgxifTyhtsW\n0zhos2e2Gu+4G5IsIiJWzYydje3bJB0HnAlcYPvuyefKnD
pnNNy+iIjBaHiP/VjNuPRZ0qEUK88O\nAS6XtN/I0x9qsmEREUPjCdV265qqYbQ3ADvbvlvS44CvSXqc7WMoknFGRERUqupsZk0Ondn+laQ9\nKDqcrUhnExFRqy5ekdSlKoPAjZIeSCFddjwvBjYGdmyyYRERQ2PXd+uaqs7mAOB3owdsL7F9APCc\nxloVERG9UrUabeEMz/2g/uZERAxXn4fReldi4FHrzGk95u/vvaPVeNn3EivjGY/aodV4P/z9Va3G\n64M2c6O1LVmfIyKicb27somIWF0tZ2mA1VI6m4iIjpgY6jCapA0k/aukEyW9aspzn2y2aRER0RdV\nczbHUWze/E9gf0n/OVmdDdhtupMkzZO0QNKCiYl7ampqRES/2art1jVVw2jb2v6r8v4pkt4NnCVp\n35lOsj0fmA+pZxMRsbyGvPR5bUmz7GLayva/SFoEnAes13jrIiKiF6o6m28BzwX+Z/KA7eMl/Q74\nRJMNW1lt73mJWF0MYd/Lro+qLB7caV1MM1OXGedsbL8dWCjpeZLWGzl+BnBo042LiBiSPpcYqFqN\n9maKejZv5qH1bP6lyYZFRER/VA2jzSP1bCIiWtHnfTapZxMR0RFdXLJcl9SziYiIxlVd2RwALBk9\nYHsJcICkzzTWqoiIAerzarTUs4mI6Ighz9lERKw2fnzrL8bdhJhGOpuIiI7o8wKBdDYRER3R5zmb\nFa7UKWmTJhoSERH9NeOVjaRHTD0E/EjSToBs39pYyyIiBmbICwRuBn495djmwI8BA9ss6yRJ8yiy\nD6DZc5g1a91VbGZERP/1ec6mahjtbcDVwL62t7a9NbCwvL/MjgaKeja259qem44mIiKq9tkcJekr\nwNGSrgcOo7iiiYiImg15GG1yY+fLyuqcZwLrNN6qmNEFmzy91Xi73XRRq/GiHz74mD1bj/me357d\nesw69fmTfOVqNEk7SHoecBawJ7BXeXyfhtsWERE9UVXP5lBG6tkAe9u+vHz6Qw23LSJiUCas2m5d\nUzWM9gZSzyYiohV9Xo2WejYREdG41LOJiOiIiRpvXZN6NhERHeEeDxilnk1ERDQuWZ9XQ9n3EquD\n1X3PyzhM9HijTTqbiIiOmOjxMNoKlxiIiIhYUSt8ZSPpkbZvaaIxERFD1ucFAlUZBI6QtHF5f66k\n64ALJf1a0u6ttDAiYiD6vPS5ahjtz23fXN7/CPAK248Hng8cNd1JkuZJWiBpwcTEPTU1NSIiVldV\nw2hrSFqj3FvzcNsXAdi+RtLa051kez4wH2CNtTbv8fqKiIj69HkYraqz+SRwmqQjgDMkHQN8HXgu\ncEnTjYuIGJIuDn/VpWpT5yckXQYcDGxfvn474BTgg803L5blmY/aodV4px2wYavxADY86oLWY7bt\nI5u2W+/lbb/LvpeuG2xnU/odxZDYhZNJOeGBejZnNNWwiIjojxWqZyNpv5GnU88mIqJGRrXduib1\nbCIiOmKix++qqWcTERGNSz2biIiOmEC13bqmqrM5gGKBwANsL7F9APCcxloVETFArvHWNalnExER\njUuJgdXQ+b+/qtV4G06bmKg5z2h5L9EPW/6dQva9xEMNfZ9NRES0YELdm2upS+rZRERE43JlExHR\nEV2c2K9LVQaBuZLOlvQFSVtKOlPSHZIukrTTDOelxEBExAoacj2bTwIfBv4LOB/4jO05wDvL55bJ\n9nzbc23PnTVr3doaGxERq6eqzmZN26fbPgmw7a9R3Pku8LDGWxcRMSATqu/WNVVzNv8raW9gDmBJ\nL7F9SlkS+v7mmxcRMRxd3Plfl6rO5k0Uw2gTwAuAgyUdDyyiSNIZY3D8xu3WQXndze3vB2l738sT\nNtqi1XgAV9827Z7piFaUpWKOAWYDn7V9xJTnXwd8hOI9H+DfbH+2fO5A4D3l8Q/a/vxMsaoyCPxU\n0luBzYCFtt8CvGWkkR
ERUZM2V6NJmg38O/B8YCFwkaRTbV855aVfsX3IlHMfARwGzKVo9sXlubdN\nF2956tl8g9SziYhoXMtzNrsA19q+zvZi4MvAfhXnTHoBcKbtW8sO5kxgxguQqgUCbwDm2n4JsAfw\nz5LeUj7X38HFiIj+2xy4fuTxwvLYVH8l6VJJX5O05Qqe+4CqzmapejYUHc4LJX2UdDYREbWqc5/N\n6H7H8jZvJZr0LeBxtp9CcfUy47zMTFLPJiKiI+osMTC637G8zZ8SbhGw5cjjLXhwIUDRHvsW238o\nH34W2Hl5z50q9WwiIobpImA7SVtLWgvYHzh19AWSHjPycF/gZ+X97wB7S9pI0kbA3uWxaaWeTURE\nR7S5GdP2EkmHUHQSs4FjbV8h6XBgge1TgUMl7QssAW4FXleee6ukD1B0WACH2751pniNJ+J87qPb\nHW077/dTV+01Ty2nBb9qrS5mPlq9DWHPy+6bPKnVeL9fcler8QCuvPU3rcesU9t/2bZPA06bcuy9\nI/ffBbxrmnOPBY5d3lgpMRAREY1LiYGIiI7o85hFOpuIiI5wjzeUVGUQWEPSGyWdUW7quVTS6ZLe\nJGnNGc57YH33wruvn+5lERExEFVXNicCtwPvo9ghCsV66gOBLwCvWNZJ5Xru+QB7b7lPn4vPRUTU\nZsjDaDvb3n7KsYXABZKuaahNERGD1OfOpmo12q2SXibpgddJmiXpFcC02T0jIiJGVV3Z7A8cCfy7\npNvLYxsCZ5fPVTrrxstWvnWxTGf8IfNgseLOvemKVuPNnpWdFSuqz3MOVRkEflUm3TwK+AWwA/AM\n4Erbv2yhfRERg9HFcs51mbGzkXQY8MLydWdS1D84B3inpJ1s/0vjLYyIiNVe1TDaS4GnAmtTJOTc\nwvadkv4fcCGQziYioiZ9XiBQ1dkssX0/cK+kX9i+E8D2fZL6/HuJiGhdn99Uq2bwFktap7w/WccA\nSXPo9+8lIiJqVHVl85zJwjm2RzuXNSk2dkZERE2GvBrtD9Mcvxm4uZEWRUQM1GBXo0U3XXLLda3G\n++ZG7Rdl3e+281qP2ba3btbu7/VjN7T7O71/ov2R9rv+J2uWuiqdTURER/R5IjydTURER/R5zib5\nJCIionFV9Wxml/VsPiDpWVOee88M5z1Qz2Zi4p662hoR0WsTuLZb11Rd2XwG2B24Bfh4mSdt0l9O\nd5Lt+bbn2p47a9a6NTQzIqL/Jmq8dU1VZ7OL7VfZ/hiwK7CepK9LWhvo8SK9iIioU1Vns9bkHdtL\nbM8DfgqcBazXZMMiIobGNd66Rvb0zZL0BeALts+Ycvz1wKdsr1kVYI21Nu/iz12reZs9q/pFNZp/\nww9ajRcR01uyeFFtozzv2+rVtb1fvu/XX+zU6NOMVza2X0NRrfPpAJKeKOkfgBuWp6OJiIiAFahn\nI+lMinmbs0k9m4iI2g05XU3q2UREtKSLS5brUrVAYInt+23fCyxVz4Zurq6LiIgOqrqyWSxpnbKz\nST2biIgG9fe6JvVsIiI6o8+f4FPPpgZZihwRdRjynE1ERMQqS4mBiIiO6O91TTqbiIjO6POczQoP\no0m6pomGREREf1VlELiLB6/sJve2rjN53PYG05w3D5gHoNlzSJmBiIhqQ14gcBxwCrCd7fVtrw/8\npry/zI4GUs8mImJl9Dnrc1UizkOBY4CTJB0qaRbd/DkiIqLDKudsbF8M7FU+PBd4WKMtiogYqD5X\n6qxcjSZpF4r5mY9L+gmwp6QX2T6t+eZFRAyHezxwtKIlBnYBziElBiIiYgWkxEBEREd0cfirLlWd\nzRLb9wP3SlqqxICkPv9eIiJaN+Slz4slrVPeT4mBiIhYKSkxEBHREf29rkmJgYiIzhjyMFpERMQq\nS9bniIiO6PNEeDqbiIiO6POmzgyjRURE42bsbCQdImnj8v7jJZ0n6XZJF0rasZ0mRkQM
Q59zo1Vd\n2RxcrjyDIvvz0bY3BN4BfHq6kyTNk7RA0oKJiXtqampERL+5xv+6pqqzGZ3T2cT2NwBsnwOsP91J\nqWcTERGjqjqbr0k6XtI2wDckvVXSVpIOAn7TQvsiIgajz8NoVZs63y3pdcBJwLYUCTnnUVTvfHXj\nrYtOOGrTPVuP+eG7ftJqvBvvub3VeNGM+2743ribsEom3L3hr7osz9LnK4FDbF8k6UnAPsDPbN/R\nbNMiIqIvUs8mIqIj+ntdk3o2ERGdMeTcaEts32/7XmCpejZ0cw4qIiI6qOrKZrGkdcrOJvVsIiIa\n1MX9MXVJPZuIiI7o8yf41LOJiIjGJetzVPrH35097iZELJeHb/bs1mMuWbyotu/V5wUC6WwiIjqi\nz3M2KTEQERGNy5VNRERH9HmBQFU9m20kHSvpg5LWk/Qfki6XdLKkx7XTxIiIYbBd261rqobRjgcu\nAu4GLgCuokhfcwZw7HQnpZ5NRET3SdpH0tWSrpX0zmU8/w+SrpR0qaTvStpq5Ln7JV1S3k6tjDVT\nDyjpJ7Z3Ku//xvZjl/XcTNZYa/PudbERETVZsniR6vpe+z32xbW9X37zN9+esV2SZgPXAM8HFlJc\nWLzS9pUjr9kTuND2vZIOBvaw/Yryubttr7e87am6spmQtL2kXYB1JM0tgzwemL28QSIiolrL9Wx2\nAa61fZ3txcCXgf1GX2D77DKDDBSjW1us7M9WtUDg7cC3KNr+EuBdkp4CzAHesLJBIyKiWZLmUdQf\nmzTf9vyRx5sD1488XgjsOsO3/Bvg9JHHD5O0AFgCHGH7lJnaU5VB4LuSDgAmyno2t1HM2Vxp+7SZ\nzo2IiBVT5z6bsmOZX/nC5SDpNcBcYPeRw1vZXlRWcj5L0mW2fzHd90g9m4iIjmg5g8AiYMuRx1uU\nx5YiaS/g3cDuoynMbC8qv14n6RxgJ2DlOhtSzyYioq8uAraTtDVFJ7M/8KrRF0jaCfgMsI/tm0aO\nbwTca/sPkjYGngV8eKZgVZ3NEtv3A/dKWqqejaQ+7z+KiGhdm/tjbC+RdAjwHYoFX8favkLS4cAC\n26cCHwHWA06WBPAb2/sCfwJ8puwHZlHM2Vy5zECl1LOJiOiItt9Uy7n306Yce+/I/b2mOe98YMcV\niZV6NhER0bjUs4mI6Ig+Z31OIs6IiI7ocz2blBiIiIjG5comIqIjupituS7pbCIiOqLPw2hVGQTW\noMiH83+AzcrDi4BvAp+z/cdmmxcREX1QdWVzInA78D6KJG1QpDQ4EPgC8IplnTSaAE6z5zBr1rp1\ntDUioteGvBptZ9vbTzm2ELhA0jXTnTSaAC71bCIils9Ej+dsqlaj3SrpZZIeeJ2kWZJeAdzWbNMi\nIobFNd66purKZn/gSOCTZb13/ogAABdPSURBVHkBUdSyObt8LiKiM47feM9xNyGmUZVB4FeU8zKS\nHlkePsb2axpuV0TE4Ax5Ndqpyzj83MnjZfbPiIiowWA7G4qVZ1cCn6UYBhTwdOCohtsVERE9UrVA\nYC5wMUWVtjtsnwPcZ/tc2+c23biIiCGxXduta6rmbCaAoyWdXH69seqciIhYOUMeRgPA9kLgZZL+\nHLiz2SZFRETfrNBViu3/Av6robZERAzakDMIRHDbX69Q9ddarDH3ia3GW/9vv9JqvGjGi/e8YdxN\nWCVdnGupS+rZRERE43JlExHREYNfIBAREc0b7DCapNmS3ijpA5KeNeW598xw3jxJCyQtmJi4p662\nRkTEaqpqzuYzwO7ALcDHJX105Lm/nO4k2/Ntz7U9N7VsIiKWzwSu7dY1VZ3NLrZfZftjwK7AepK+\nLmltitQ1ERFRE9f4X9dUdTZrTd6xvcT2POCnwFnAek02LCIi+qNqgcACSfvYPmPygO33S1oEfKrZ\npsV0tp6zaavx1jzwwFbjAfzxiye2HjNWf2u/dO9x
N2GV9LlSZ1VutIfUrZF0gu0DKDJBR0RETbo4\n/FWXFa1nI2BPSRtC6tlERMTyqRpG2xK4gqXr2cwl9WwiImrX52G0qgUCO5N6NhERrejzarTUs4mI\niMalnk1EREf0eRgt9WwiIjqii8NfdVHTid/WWGvz/v72IlbBszdpt2bP9266stV4Q7Fk8aLasqls\n96ida3u//PnvL+5UlpfMv0REdESG0SIionF9HkZLpc6IiGjcCl/ZSLrG9vYVr5kHzAPQ7DmkzEBE\nRLVit0k/VaWruQseuK6bnGxaZ/K47Q2WdZ7t+cB8yAKBiIjl1cU6NHWpGkY7DjgF2M72+rbXB35T\n3l9mRxMRETFVVQaBQyXtDJwk6RTg36DHXW9ExBg1vRVlnCrnbGxfLGkv4BDgXOBhjbdqNfOEjbZo\nNd7Vty1sNV40I/teYqohD6MBRY402x8HXg6s3WyTIiKib1a0ng3A2pPHU88mIqI+Qx5G2wK4kqXr\n2Tyd1LOJiKhdnzMIVA2jzSX1bCIiYhWlnk1EREf0OV1N6tlERHTEkOdslpJ6NhERsTIyJFaD7HuJ\neKi3bvac1mN+7IbzWo9Zpz7vs0lnExHREX0eRkuJgYiIaNyMnY2kQyRtXN5/vKTzJN0u6UJJO7bT\nxIiIYZiwa7t1TdWVzcG2by7vHwMcbXtD4B3Ap6c7SdI8SQskLZiYuKempkZE9Jvt2m5dU9XZjM7p\nbGL7GwDl5s71pzvJ9nzbc23PTeG0iIio6my+Jul4SdsA35D0VklbSToI+E0L7YuIGIwJXNuta6oy\nCLy77FhOAralyPg8j6Kg2qubb15ExHB0cfirLlrRH07SibZfu7yvT1noiOizJYsXqa7vtcG629T2\nfnnnPdfV1q46rEyJgeemxEBERP26uIqsLikxEBHREX1OxJkSAxER0biUGIiI6IghD6MBKTEQEdGG\nPq9GS4mBiIiO6POcTeNDYv/Ycprxo1bzFOMRfbH1nE1bj/nLO37XesxYPpl/iYjoiD4Po6XEQERE\nR7SdiFPSPpKulnStpHcu4/m1JX2lfP5CSY8bee5d5fGrJb2gKlY6m4iIAZI0G/h34IXAE4FXSnri\nlJf9DXCb7ccDRwNHluc+EdgfeBKwD/DJ8vtNq6qezTaSjpX0QUnrSfoPSZdLOnm0h4uIiFXnGm/L\nYRfgWtvX2V4MfBnYb8pr9gM+X97/GvA8SSqPf9n2H2z/Eri2/H7TqpqzOZ4iCecc4ALgOOBwYG/g\nWOC5yzpJ0jyKhJ0Ab7Q9vyLOMr/Hypx35IqeUEPM1SXeOGL2Pd44YuZnXP3jTafOPGtT3ocB5k/5\nGTcHrh95vBDYdcq3eeA1tpdIugN4ZHn8ginnbj5Te6qG0da3/SnbRwAb2D7K9vW2PwdsNN1Jo/Vs\nVuF/4Lzql9Su7Zj5GVf/eOOImZ9x9Y/XuCnvw6vyXlyLqs5mQtL2kp4OrCNpLhQlooEZx+ciIqLT\nFgFbjjzeojy2zNdIWoNilOuW5Tx3KVWdzduBbwEnAC8B3iXp58D5wHsrzo2IiO66CNhO0taS1qKY\n8J+a6f9U4MDy/kuBs1wsdTsV2L9crbY1sB3wo5mCVeVG+y7whJFD35f0bWDfMm9ak8Zxydd2zPyM\nq3+8ccTMz7j6xxu7cg7mEOA7FCNVx9q+QtLhwALbpwKfA06UdC1wK0WHRPm6r1JUBVgC/J3t+2eK\nN2PxtOnq2QBnlQFTzyYiIipVrUbbEriC1LOJiIhVUHVlMwt4C/Ai4G22L5F0ne1t2mpgRESs/mbs\nbB54kbQFxe7RGynmax7bdMMiIqI/litdje2Ftl8GnA58odkmRaw8SbMkbTDudqzOJM2W9MVxtyP6\nZYVyo9n+L9v/t6nGSHqLpA1U+JykH0vau4E4j5jpVne8kbiPLn+u08vHT5T0N03FK2M8JKnCso7V\nGG/dcviVco/W
vpLWbCpeGedL5b+bdYHLgSslva2BOJdJunQZt8skXVp3vCmxv7s8x+pQriraqlwO\n24px/G2UcTYt/43+haT2ayIMyHINo7VF0k9t/2mZQfSNwD8DJ9p+Ws1xfsmDCx4eC9xW3t8Q+I3t\nreuMNxL3dIqUP+8uf841gJ/Y3rGJeGXMH0/9/Um61PZTGop3MfBsigwTP6BYy7/Y9qubiFfGvMT2\nUyW9Gnga8E7g4rp/RklbzfS87V/XGa+M+TBgHeBsYA+Kf6cAGwBn2N6h7phl3BOAP6HYT3HP5HHb\nH20o3jj+Nl5PsV/wLIrf6+7A4baPbSrmkHWtns3kH9KLKDqZK8qkb7Wa7Ewk/QfwDdunlY9fSLF5\ntSkb2/6qpHeV7Vgiaca16StL0sHA3wLbTPnUvT5FJ9AU2b63/FT6SdsflnRJg/EA1iyvnl4C/Jvt\nP0qq/VNUE53Jcngj8FZgM+BiHvwbuRP4twbj/qK8zaL4N9O01v42RrwN2Mn2LQCSHkmxYT2dTQO6\n1tlcLOm/ga0pshWsDzS5eXQ322+YfGD7dEkfbjDePeU/aANI2g24o6FYX6KYY/tXik/6k+6yfWtD\nMQEk6RnAqynSk0PzqY0+DfwK+ClwXnkFcmfdQSTdxbIT6gqw7drnimwfAxwj6c22P1H3958h7vsB\nJK1XPr674ZBt/m1MugW4a+TxXeWxaEDXhtFmAU8FrrN9e/mPb3PbjYyHS/oO8D0eXPTwauA5tisL\nAa1kvKcBnwCeTDG38CjgpU39fGXMbYGFtv8gaQ/gKcAJtm9vKN7uwD8CP7B9pKRtgLfaPrSheLMo\nfodfHTkmYLbtJU3EHBdJzwQex8iHRNsnNBTrycCJwOQc5s3AAbavaCjeOP42TgB2BL5J0cntB1xa\n3hobMhyqTnQ2knawfVX5D+4hbP+4obiPAA4DnlMeOg94f5Of/Mux6CdQfBq+2vYfm4pVxrsEmEvx\nJnUaxR/Wk2y/qMm4bZK0wPbcFuJsYPvO6RaRNPzv5kRgW+ASYHJ4yQ124udTzJ+cXT7eA/iQ7Wc2\nEa+M0fbfxmEzPT95dRf16EpnM9/2PElnL+Np215m3ZzVUZufTst4P7b9NElvB+6z/QlJP7G9U81x\nPmb7rZK+xTKGmppMbSTpCIpP3l9h6cnsWt/8JX3b9ounLDAZCdfcZmdJPwOe6Jb+YCcX61Qdqzlm\nq38b0a5OzNnYnld+3bPNuJK2B/6Jh/4Db6Rzm+7TKUVW7ab8UdIrgQOAvyiPNbEU+cTy6/9r4HtX\neUX59e9Gjhmo9c3f9ovLuz8AzgW+Z/uqOmPM4HJgU+C3LcW7TtI/8+D/19cA1zUVrM2/jXF+MBqy\nTlzZjGp5XPqnFJPLF/PgP3BsX9xQvFY/nZYxnwi8Cfih7ZNUpAN/ue3G9tr0naQ9KZZ3P5viDfLH\nFB3PMQ3EmnxDXJ9iPvNHwB8mn6/7jVHSibZfK+kfKP4O/6x8anKI+bY6443Ebe1vQ9LOti8u5xcf\nwva5TbdhiDrV2YxhXPpi2zs38b2niXcycKjttj6dTsZ9OPBY21e3EOtZwPuArSg+MEyu1GpyiGkd\n4B8ofsZ5krYDnmD72w3GnE2RlHZPis78vib2vEz3hjip7jdGSVcCe1GsZNyT8v/fSLxG5qXG9bcR\n7elaZ9P2uPT7gJuAb7D0p8W6x/pb/XQ6JfZfUAxtrWV7a0lPpdi41khMSVcBf89DrxYbW1Iq6Stl\nvANsP7nsfM63/dSG4n0XWBf4IcVqxu/bvqmJWG2TdChwMMUQ5GjlxUY+NIz5b6P1D0ZD1ok5mxFt\nj0tPVqAbTW1S+1g/xZu9gCNZetPo5LEmvQ/YBTgHwEXm7ib/mO6wfXqD339ZtrX9inJuinJTae2b\ngUdcCuxMsUz3DuB2ST+0fV9TAafZ43MHsAD4R9u1zKfY/jjwcUmfsn1wHd+zwj
j/Nj7HMj4YRTM6\n0dlM+XRzpaRWPt24obQ0y4hzLoCkNacOe5RDXE36o+07prz3NrlR9mxJHwG+ztL/DxtZvl5aXP4e\nJzcEbjsau262/76Msz7wOoo0K5sCazcVE/gYsJBis64oKiZOzhcdS5HKpjYtdTTj/tsYxwejwepE\nZ8OYPt20NdY/xtQxAFdIehUwu/z5DqVIydGUXcuvo/teTFHhtSnvA84AtlSRrfhZFJ1AI1SU0n02\nxdXNryje7L/XVLzSvlOWHc9XkRPuHZIaS47btHH8bYzs5xvHB6PB6tqcTdtJI1sZ65c0hyIxZdup\nYyY71HcDk9mzvwN8wHZjn/zHocw2sRvFB5QLbN/cYKx/ouhcLm4rS4GkH1LUlPpaeeilwD/Y3q3s\ndBqZn2raOP42ptnPN6lX+/q6pBOdzeinG4rkf5PWp0h78pqG4i6wPXd0k2PTG9faJulltk+uOlZj\nvLWBv+Khy9cPbyJeGfM/Kcbfz7Dd5BDh2JTzbMcAz6C4UryAYr5hEbCz7e+PsXkRlbrS2Yzlk3+Z\nkuN5FB3a08qx/pNs79JUzLZNc7X4kGM1xjuDYuJ66mq0o5qIV8bcCziI4srmZOC4NpZ5x+pN0lso\n5tvuAv6DsjyF7f8ea8N6qhOdzbhIej7wHuCJwH9TjvXbPmec7aqDinIJLwJeTpHGZdIGFMvLG+lQ\nJV1u+8lNfO/liD0HeCXFsOH1FG8gX3DDObaaJOntLso0fIJl73ZvZA/aEGjp+llvongvqL1+VhS6\nskBgLGyfKenHPDjW/5Ymx/pbdgPFsth9Ka4yJt1FMfzSlPMl7Wj7sgZjPEQ5Z/Ma4LXAT4AvUux+\nP5CaV2q17B3AhymGlxvZvT9go/WzTnBD9bOiMPQrm8Ntv3fk8SyKTzaNVZVsW7mktLVP9uUO9McD\nv6RY4TO5Ua6RRR5lzG9QZAs+ETh+dBe6WsoI3ZQpO/r3YOnkn41mmu47SccBm1PUz/pTirpL57SZ\nVWRIht7ZHAdcY/tfy4ntr1KUon3feFtWn3K5879SDBU+bPJ4U7ukNU3pZDdY5VLSni5T4feNpDfz\n4OKZxnf0D4larp81dEPvbEQx3HIZRR6o020fPd5W1UvS9ylq9hxNkfX5IGDW6BVdAzH/DNjO9nGS\nHgWsZ/uXTcUrYz6Zh3aovUlP3+KO/kGRtDkPpqsBwPZ542tRfw2ys9HSRdrWBD5DsYHsc9CvTV2T\nyUYlXWZ7x9FjDcU7jGJD5xNsby9pM+Bk289qIt5IzD0oOpvTgBdS5Ct7aVMxY/Un6UiK8hRXsnTi\n35QYaMBQFwhMXYZ7G8Ub1VE0v9u9bX8ohwt+Xu58XwSs12C8/wPsRJFGBds3lGldmvRSijH3n9g+\nSNKjebDUd8R0XkLxoahXG5y7apCdjVsu0jZmbwHWoUhT8wGK4cIDZzxj1Sy2bUmTecrWbTDWpPts\nT0haImkDikzeW7YQN1Zv11GMbKSzacEgO5tJ5SfgDwGb2X6hikJjz7D9uTE3rTa2LwKQNGH7oBZC\nflXSZ4ANJb0B+GuK/S5NWiBpwzLOxcDdFOn/I2ZyL3BJWTJiNDda9i41YJBzNpMknU6xg/jd5eau\nNSiGYnYcc9NqI+kZFHNR69l+rKQ/Bd5o+28binck8D8UudhEkYttL9vvaCLeMuI/DtggK4qiiqRl\nXuHb/nzbbRmCoXc2F9l++pTcaKttUsNlkXQhxZzGqSM/Y2O7/NtMpjplocdD9GmhRzRD0lrA9uXD\nq1fnbBNdN+hhNOCecm395PzCbhR5vXrF9vVTNkbXXihqTGUURhd6jH5qmixl3KeFHlEzSXsAn6co\nEyGKEhUHZulzM4be2fwDcCqwraQfAI+iuArok+slPROwpDUpFgz8rIE4X6LY5d5aMtXJhR4qimz9\nLUV6GlOk//9UEzGjV44C9p5M2ippe+Akij
pFUbNBD6MBlPM0T6D4ZNO7y2hJG1Okpt+L4mf8b4oc\ncLeMtWE1kvRV4E6KDboArwLm2H75+FoVXbes4d0m62cN3aCvbPRgpc6tbL9B0naSaq/UOS6SZgOv\n7VOut2k82fYTRx6fXeYUi5jJAkmf5cE9Wa+mSF4bDZg17gaM2XHAYoqCVFBsePzg+JpTL9v3U3zK\n77sfl/NtAEjalbxpRLWDKbIHHFreriyPRQMGPYw2kEqdR1NsXPsKcM/k8T6t1JL0M4qh0N+Uhx4L\nXA0soeGM0xGxfAY9jAYsLieXJ1ejbUv/dhNPLuN+f/m1jyu19hl3A2L1Iemrtl8u6TKWXZAuH04a\nMPTO5jDgDIolj1+krNQ51hbV79sUf1CTa58N3CnpqbYvGV+z6tNk+YLopbeUX1881lYMzNCH0b4A\nXArcR5En6cIeVeoEQNKXKLIwn0rR4byY4md+HEU25g+Pr3UR4yVpU2AXig9hF9n+3Zib1FtD72z2\nBJ5d3ralKCd8nu1jxtqwGkk6D3iR7bvLx+sB/0Ux9HTxlFVcEYMh6fXAe4GzKD6I7Q4cbvvYsTas\npwbd2cADy4OfTpEN+U0UGYR3GG+r6iPpKmDHyf1DZUXSn9reYXRhRMTQSLoaeObknrMym8j5tp8w\n3pb106DnbMpsr+tSZAj+HvB02zeNt1W1+yJwoaRvlo//AvhSmfo/e1FiyG4B7hp5fFd5LBow6Cub\nclnwzhQr0H4AnAf80PZ9Y21YzSTNpVj8APAD29mDEoMn6QRgR+CbFHM2+1HMZ14KYPuj42td/wy6\ns5lUVpJ8HfBPwKa21x5viyKiaWU58WnZfv9Mz8eKGXRnU5ZJfjbF1c2vKIbSvmf7rHG2KyKibwY9\nZwM8DPgoxaqsJeNuTES0pxxefjewFSPvhdnU2YxBX9lExHCVq9HeBlwGTEwezybhZgz9yiYihuv3\ntk8ddyOGIlc2ETFIkp4HvBL4LiM5EW1/fWyN6rFc2UTEUB0E7ECRFX1yGM1AOpsG5MomIgZJ0tXJ\nFtCeoRdPi4jhOl9ScgO2JFc2ETFIZdG9bYFfUszZiBTba0w6m4gYJElbLet4lj43IwsEImKQJjsV\nSZtQbPCOBmXOJiIGSdK+kn5OMYx2LkXKqtPH2qgeS2cTEUP1AWA34BrbWwPPAy4Yb5P6K51NRAzV\nH8vCabMkzbJ9NkUJ9WhA5mwiYqhuL8uknwd8UdJNwD1jblNvZTVaRAxSWa32fymWPL8amAN8cbJM\ndNQrnU1ERDQuczYRMUiS/lLSzyXdIelOSXdJunPc7eqrXNlExCBJuhb4C9s/G3dbhiBXNhExVDem\no2lPrmwiYlAk/WV5d3dgU+AUUs+mcelsImJQJB1X3jXFSrRRtv3XLTdpELLPJiIGxfZBAJI+D7zF\n9u3l442Ao8bZtj7LnE1EDNVTJjsaANu3ATuNsT29ls4mIoZqVnk1A4CkR5DRnsbkFxsRQ3UU8ENJ\nJ5ePXwb8yxjb02tZIBARg1WWhX5u+fAs21eOsz19ls4mIiIalzmbiIhoXDqbiIhoXDqbiIhoXDqb\niIho3P8HxYgV47tgTEwAAAAASUVORK5CYII=\n",
            "text/plain": [
              "<Figure size 504x504 with 2 Axes>"
            ]
          },
          "metadata": {
            "tags": []
          }
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cTHPBDzWPI8s",
        "colab_type": "text"
      },
      "source": [
        "1D global max-pooling would extract the highest value from each of our num_filters for each filter size. We could also follow this same approach to figure out which n-gram is most relevant but notice in the heatmap above that many filters don't have much variance. To mitigate this, this [paper](https://www.aclweb.org/anthology/W18-5408/) uses threshold values to determine which filters to use for interpretability. \n",
        "\n",
        "To keep things simple and since the feature map values are fairly normalized, we'll just take the sum of values for each token index and use the index that has the max value as th emost influential index."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "P72CZhU0CtGa",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 136
        },
        "outputId": "ed6f836c-323b-493e-af75-42296983b884"
      },
      "source": [
        "sample_index = 0\n",
        "print (f\"Preprocessed text:\\n{untokenize(indices=X_infer[sample_index], tokenizer=X_tokenizer)}\")\n",
        "print (\"\\nMost important n-grams:\")\n",
        "# Process conv outputs for each unique filter size\n",
        "for i, filter_size in enumerate(FILTER_SIZES):\n",
        "    \n",
        "    # Identify most important n-gram\n",
        "    filter_sums = np.sum(conv_outputs[i][sample_index], axis=1)\n",
        "    \n",
        "    # Get corresponding text\n",
        "    start = np.argmax(filter_sums)\n",
        "    gram = \" \".join([X_tokenizer.index_word[index] for index in X_infer[sample_index][start:start+filter_size]])\n",
        "    print (f\"[{filter_size}-gram]: {gram}\")"
      ],
      "execution_count": 154,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Preprocessed text:\n",
            "this weekend the greatest tennis players will fight for the championship\n",
            "\n",
            "Most important n-grams:\n",
            "[2-gram]: tennis players\n",
            "[3-gram]: tennis players will\n",
            "[4-gram]: championship\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sR3EVo4YTGEg",
        "colab_type": "text"
      },
      "source": [
        "Notice that the 4-gram is just the word \"championship\". This is because the three <PAD> tokens that complete the 4-gram window follow it, and we aren't showing those."
      ]
    },
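    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If we wanted to hide the padding explicitly, one option is to filter out the <PAD> index before joining the n-gram's tokens. A minimal sketch (assuming the Keras convention that index 0 is reserved for padding, and using a toy index_word mapping for illustration):\n",
        "\n",
        "```python\n",
        "def gram_text(indices, index_word, pad_index=0):\n",
        "    # Skip padding indices so padded windows render as real tokens only\n",
        "    return ' '.join(index_word[i] for i in indices if i != pad_index)\n",
        "\n",
        "# Toy example: a 4-gram window that is mostly padding\n",
        "index_word = {15: 'championship'}\n",
        "print(gram_text([15, 0, 0, 0], index_word))  # championship\n",
        "```"
      ]
    },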
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kbZPYQ2TH1Jt",
        "colab_type": "text"
      },
      "source": [
        "---\n",
        "<div align=\"center\">\n",
        "\n",
        "Subscribe to our <a href=\"https://practicalai.me/#newsletter\">newsletter</a> and follow us on social media to get the latest updates!\n",
        "\n",
        "<a class=\"ai-header-badge\" target=\"_blank\" href=\"https://github.com/practicalAI/practicalAI\">\n",
        "              <img src=\"https://img.shields.io/github/stars/practicalAI/practicalAI.svg?style=social&label=Star\"></a>&nbsp;\n",
        "            <a class=\"ai-header-badge\" target=\"_blank\" href=\"https://www.linkedin.com/company/madewithml\">\n",
        "              <img src=\"https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social\"></a>&nbsp;\n",
        "            <a class=\"ai-header-badge\" target=\"_blank\" href=\"https://twitter.com/madewithml\">\n",
        "              <img src=\"https://img.shields.io/twitter/follow/madewithml.svg?label=Follow&style=social\">\n",
        "            </a>\n",
        "\n",
        "</div>"
      ]
    }
  ]
}
