{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "14_Advanced_RNNs",
      "version": "0.3.2",
      "provenance": [],
      "collapsed_sections": [],
      "toc_visible": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "metadata": {
        "id": "bOChJSNXtC9g",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "# Advanced RNNs"
      ]
    },
    {
      "metadata": {
        "id": "OLIxEDq6VhvZ",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "<img src=\"https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/logo.png\" width=150>\n",
        "\n",
        "In this notebook we're going to cover some advanced topics related to RNNs.\n",
        "\n",
        "1. Conditioned hidden state\n",
        "2. Char-level embeddings\n",
        "3. Encoder and decoder\n",
        "4. Attentional mechanisms\n",
        "5. Implementation\n",
        "\n",
        "\n"
      ]
    },
    {
      "metadata": {
        "id": "41r7MWJnY0m8",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "# Set up"
      ]
    },
    {
      "metadata": {
        "id": "EJDhjHCHY0_a",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        },
        "outputId": "beb1c764-e47f-41a6-f8cf-e4150ee3befd"
      },
      "cell_type": "code",
      "source": [
        "# Load PyTorch library\n",
        "!pip3 install torch"
      ],
      "execution_count": 1,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Requirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (1.0.0)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "p0FbOd6IZmzX",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import os\n",
        "from argparse import Namespace\n",
        "import collections\n",
        "import copy\n",
        "import json\n",
        "import matplotlib.pyplot as plt\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "import re\n",
        "import torch"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "bOsqAo4XZpXQ",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "# Set Numpy and PyTorch seeds\n",
        "def set_seeds(seed, cuda):\n",
        "    np.random.seed(seed)\n",
        "    torch.manual_seed(seed)\n",
        "    if cuda:\n",
        "        torch.cuda.manual_seed_all(seed)\n",
        "        \n",
        "# Creating directories\n",
        "def create_dirs(dirpath):\n",
        "    if not os.path.exists(dirpath):\n",
        "        os.makedirs(dirpath)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "QHfvEzQ9ZweF",
        "colab_type": "code",
        "outputId": "a69944ff-021d-4d04-e920-cfc49112a34c",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "cell_type": "code",
      "source": [
        "# Arguments\n",
        "args = Namespace(\n",
        "    seed=1234,\n",
        "    cuda=True,\n",
        "    batch_size=4,\n",
        "    condition_vocab_size=3, # vocabulary for condition possibilities\n",
        "    embedding_dim=100,\n",
        "    rnn_hidden_dim=100,\n",
        "    hidden_dim=100,\n",
        "    num_layers=1,\n",
        "    bidirectional=False,\n",
        ")\n",
        "\n",
        "# Set seeds\n",
        "set_seeds(seed=args.seed, cuda=args.cuda)\n",
        "\n",
        "# Check CUDA\n",
        "if not torch.cuda.is_available():\n",
        "    args.cuda = False\n",
        "args.device = torch.device(\"cuda\" if args.cuda else \"cpu\")\n",
        "print(\"Using CUDA: {}\".format(args.cuda))"
      ],
      "execution_count": 5,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Using CUDA: True\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "VoMq0eFRvugb",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "# Conditioned RNNs"
      ]
    },
    {
      "metadata": {
        "id": "ZUsj7HjBp69f",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "Conditioning an RNN is to add extra information that will be helpful towards a prediction. We can encode (embed it) this information and feed it along with the sequential input into our model. For example, suppose in our document classificaiton example in the previous notebook, we knew the publisher of each news article (NYTimes, ESPN, etc.). We could have encoded that information to help with the prediction. There are several different ways of creating a conditioned RNN.\n",
        "\n",
        "**Note**: If the conditioning information is novel for each input in the sequence, just concatenate it along with each time step's input."
      ]
    },
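    {
      "metadata": {
        "id": "cond_input_concat_md",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "A minimal sketch of that note (the sizes and condition values below are illustrative assumptions): embed the per-sample condition and concatenate it with the input at every time step."
      ]
    },
    {
      "metadata": {
        "id": "cond_input_concat_code",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "# Sketch: concatenate a conditioning embedding with every time step's input\n",
        "# (hypothetical sizes; condition values are made up for illustration)\n",
        "x = torch.randn(4, 8, 100) # input (N, seq_len, embedding_dim)\n",
        "condition = torch.LongTensor([0, 2, 1, 2]) # one condition per sample\n",
        "condition_embeddings = nn.Embedding(num_embeddings=3, embedding_dim=100)\n",
        "condition_t = condition_embeddings(condition) # (N, embedding_dim)\n",
        "condition_t = condition_t.unsqueeze(1).repeat(1, x.size(1), 1) # (N, seq_len, embedding_dim)\n",
        "x_conditioned = torch.cat([x, condition_t], dim=2) # (N, seq_len, 2*embedding_dim)\n",
        "print (x_conditioned.size())"
      ],
      "execution_count": 0,
      "outputs": []
    },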
    {
      "metadata": {
        "id": "Kc8H9JySmtLa",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "1. Make the initial hidden state the encoded information instead of using the initial zerod hidden state. Make sure that the size of the encoded information is the same as the hidden state for the RNN.\n"
      ]
    },
    {
      "metadata": {
        "id": "pKlb9SjfpbED",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "<img src=\"https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/conditioned_rnn1.png\" width=400>"
      ]
    },
    {
      "metadata": {
        "id": "jbrlQHx2x8Aa",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import torch.nn as nn\n",
        "import torch.nn.functional as F"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "cFoiV-fqmvRo",
        "colab_type": "code",
        "outputId": "9843f756-8d71-4686-b479-b521df9b6f3c",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "cell_type": "code",
      "source": [
        "# Condition\n",
        "condition = torch.LongTensor([0, 2, 1, 2]) # batch size of 4 with a vocab size of 3\n",
        "condition_embeddings = nn.Embedding(\n",
        "    embedding_dim=args.embedding_dim, # should be same as RNN hidden dim\n",
        "    num_embeddings=args.condition_vocab_size) # of unique conditions\n",
        "\n",
        "# Initialize hidden state\n",
        "num_directions = 1\n",
        "if args.bidirectional:\n",
        "    num_directions = 2\n",
        "    \n",
        "# If using multiple layers and directions, the hidden state needs to match that size\n",
        "hidden_t = condition_embeddings(condition).unsqueeze(0).repeat(\n",
        "    args.num_layers * num_directions, 1, 1).to(args.device) # initial state to RNN\n",
        "print (hidden_t.size())\n",
        "\n",
        "# Feed into RNN\n",
        "# y_out, _ = self.rnn(x_embedded, hidden_t)\n"
      ],
      "execution_count": 7,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "torch.Size([1, 4, 100])\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "REgyaMDgmtHw",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "2. Concatenate the encoded information with the hidden state at each time step. Do not replace the hidden state because the RNN needs that to learn. "
      ]
    },
    {
      "metadata": {
        "id": "yUIg5o-dpiZF",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "<img src=\"https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/conditioned_rnn2.png\" width=400>"
      ]
    },
    {
      "metadata": {
        "id": "eQ-h28o-pi4X",
        "colab_type": "code",
        "outputId": "4143190d-c452-48cc-cc96-1a2f0f7fc5ee",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "cell_type": "code",
      "source": [
        "# Initialize hidden state\n",
        "hidden_t = torch.zeros((args.num_layers * num_directions, args.batch_size, args.rnn_hidden_dim))\n",
        "print (hidden_t.size())"
      ],
      "execution_count": 8,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "torch.Size([1, 4, 100])\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "2Z6hYSIdqBQ4",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "def concat_condition(condition_embeddings, condition, hidden_t, num_layers, num_directions):\n",
        "    condition_t = condition_embeddings(condition).unsqueeze(0).repeat(\n",
        "        num_layers * num_directions, 1, 1)\n",
        "    hidden_t = torch.cat([hidden_t, condition_t], 2)\n",
        "    return hidden_t"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "Tjyzq_s5pixL",
        "colab_type": "code",
        "outputId": "f4f62742-044e-46ef-cc46-fc21a3c52c78",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "cell_type": "code",
      "source": [
        "# Loop through the inputs time steps\n",
        "hiddens = []\n",
        "seq_size = 1\n",
        "for t in range(seq_size):\n",
        "    hidden_t = concat_condition(condition_embeddings, condition, hidden_t, \n",
        "                                args.num_layers, num_directions).to(args.device)\n",
        "    print (hidden_t.size())\n",
        "    \n",
        "    # Feed into RNN\n",
        "    # hidden_t = rnn_cell(x_in[t], hidden_t)\n",
        "    ..."
      ],
      "execution_count": 10,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "torch.Size([1, 4, 200])\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "A-0_81jMXg_J",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "# Char-level embeddings"
      ]
    },
    {
      "metadata": {
        "id": "w0yUKKpq3pu_",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "Our conv operations will have inputs that are words in a sentence represented at the character level|  $\\in \\mathbb{R}^{NXSXWXE}$  and outputs are embeddings for each word (based on convlutions applied at the character level.) \n",
        "\n",
        "**Word embeddings**: capture the temporal correlations among\n",
        "adjacent tokens so that similar words have similar representations. Ex. \"New Jersey\" is close to \"NJ\" is close to \"Garden State\", etc.\n",
        "\n",
        "**Char embeddings**: create representations that map words at a character level. Ex. \"toy\" and \"toys\" will be close to each other."
      ]
    },
    {
      "metadata": {
        "id": "-SZgVuwebm_4",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "<img src=\"https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/char_embeddings.png\" width=450>"
      ]
    },
    {
      "metadata": {
        "id": "QOdIvz0G3O8C",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "# Arguments\n",
        "args = Namespace(\n",
        "    seed=1234,\n",
        "    cuda=False,\n",
        "    shuffle=True,\n",
        "    batch_size=64,\n",
        "    vocab_size=20, # vocabulary\n",
        "    seq_size=10, # max length of each sentence\n",
        "    word_size=15, # max length of each word\n",
        "    embedding_dim=100,\n",
        "    num_filters=100, # filters per size\n",
        ")"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "raztXIeYXYJT",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class Model(nn.Module):\n",
        "    def __init__(self, embedding_dim, num_embeddings, num_input_channels, \n",
        "                 num_output_channels, padding_idx):\n",
        "        super(Model, self).__init__()\n",
        "        \n",
        "        # Char-level embedding\n",
        "        self.embeddings = nn.Embedding(embedding_dim=embedding_dim,\n",
        "                                       num_embeddings=num_embeddings,\n",
        "                                       padding_idx=padding_idx)\n",
        "        \n",
        "        # Conv weights\n",
        "        self.conv = nn.ModuleList([nn.Conv1d(num_input_channels, num_output_channels, \n",
        "                                             kernel_size=f) for f in [2,3,4]])\n",
        "\n",
        "    def forward(self, x, channel_first=False, apply_softmax=False):\n",
        "        \n",
        "        # x: (N, seq_len, word_len)\n",
        "        input_shape = x.size()\n",
        "        batch_size, seq_len, word_len = input_shape\n",
        "        x = x.view(-1, word_len) # (N*seq_len, word_len)\n",
        "        \n",
        "        # Embedding\n",
        "        x = self.embeddings(x) # (N*seq_len, word_len, embedding_dim)\n",
        "        \n",
        "        # Rearrange input so num_input_channels is in dim 1 (N, embedding_dim, word_len)\n",
        "        if not channel_first:\n",
        "            x = x.transpose(1, 2)\n",
        "        \n",
        "        # Convolution\n",
        "        z = [F.relu(conv(x)) for conv in self.conv]\n",
        "        \n",
        "        # Pooling\n",
        "        z = [F.max_pool1d(zz, zz.size(2)).squeeze(2) for zz in z] \n",
        "        z = [zz.view(batch_size, seq_len, -1) for zz in z] # (N, seq_len, embedding_dim)\n",
        "        \n",
        "        # Concat to get char-level embeddings\n",
        "        z = torch.cat(z, 2) # join conv outputs\n",
        "        \n",
        "        return z"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "MzHVs8Xe0Zph",
        "colab_type": "code",
        "outputId": "ff91c1ac-5bc4-446c-9047-8b4b58570e13",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "cell_type": "code",
      "source": [
        "# Input\n",
        "input_size = (args.batch_size, args.seq_size, args.word_size)\n",
        "x_in = torch.randint(low=0, high=args.vocab_size, size=input_size).long()\n",
        "print (x_in.size())"
      ],
      "execution_count": 13,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "torch.Size([64, 10, 15])\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "0B_Xscby2PMQ",
        "colab_type": "code",
        "outputId": "05b0c3ac-429f-47aa-9526-718e55dfc897",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 153
        }
      },
      "cell_type": "code",
      "source": [
        "# Initial char-level embedding model\n",
        "model = Model(embedding_dim=args.embedding_dim, \n",
        "              num_embeddings=args.vocab_size, \n",
        "              num_input_channels=args.embedding_dim, \n",
        "              num_output_channels=args.num_filters,\n",
        "              padding_idx=0)\n",
        "print (model.named_modules)"
      ],
      "execution_count": 14,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "<bound method Module.named_modules of Model(\n",
            "  (embeddings): Embedding(20, 100, padding_idx=0)\n",
            "  (conv): ModuleList(\n",
            "    (0): Conv1d(100, 100, kernel_size=(2,), stride=(1,))\n",
            "    (1): Conv1d(100, 100, kernel_size=(3,), stride=(1,))\n",
            "    (2): Conv1d(100, 100, kernel_size=(4,), stride=(1,))\n",
            "  )\n",
            ")>\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "8DIgeEZFXYR2",
        "colab_type": "code",
        "outputId": "ffdbfabf-5f60-4045-be84-23dfb65fd424",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "cell_type": "code",
      "source": [
        "# Forward pass to get char-level embeddings\n",
        "z = model(x_in)\n",
        "print (z.size())"
      ],
      "execution_count": 15,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "torch.Size([64, 10, 300])\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "nzTscaE10HFA",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "There are several different ways you can use these char-level embeddings:\n",
        "\n",
        "1. Concat char-level embeddings with word-level embeddings, since we have an embedding for each word (at a char-level) and then feed it into an RNN. \n",
        "2. You can feed the char-level embeddings into an RNN to processes them."
      ]
    },
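    {
      "metadata": {
        "id": "char_word_concat_md",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "A minimal sketch of option 1 (the vocabulary and embedding sizes below are illustrative assumptions): concatenate each token's char-level embedding with its word-level embedding before feeding an RNN."
      ]
    },
    {
      "metadata": {
        "id": "char_word_concat_code",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "# Sketch of option 1 (hypothetical sizes): concatenate char-level and\n",
        "# word-level embeddings for each token\n",
        "batch_size, seq_len = 64, 10\n",
        "word_embeddings = nn.Embedding(num_embeddings=1000, embedding_dim=100)\n",
        "x_words = torch.randint(low=0, high=1000, size=(batch_size, seq_len)).long()\n",
        "z_word = word_embeddings(x_words) # (N, seq_len, 100)\n",
        "z_char = torch.randn(batch_size, seq_len, 300) # stand-in for the conv model's output above\n",
        "z = torch.cat([z_word, z_char], dim=2) # (N, seq_len, 400)\n",
        "print (z.size())"
      ],
      "execution_count": 0,
      "outputs": []
    },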
    {
      "metadata": {
        "id": "nyCQ13_ckV_c",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "# Encoder and decoder"
      ]
    },
    {
      "metadata": {
        "id": "_sixbu74kbJk",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "So far we've used RNNs to `encode` a sequential input and generate hidden states. We use these hidden states to `decode` the predictions. So far, the encoder was an RNN and the decoder was just a few fully connected layers followed by a softmax layer (for classification). But the encoder and decoder can assume other architectures as well. For example, the decoder could be an RNN that processes the hidden state outputs from the encoder RNN. "
      ]
    },
    {
      "metadata": {
        "id": "kfK1mAp1dlpT",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "# Arguments\n",
        "args = Namespace(\n",
        "    batch_size=64,\n",
        "    embedding_dim=100,\n",
        "    rnn_hidden_dim=100,\n",
        "    hidden_dim=100,\n",
        "    num_layers=1,\n",
        "    bidirectional=False,\n",
        "    dropout=0.1,\n",
        ")"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "p_OJFyY97bF_",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class Encoder(nn.Module):\n",
        "    def __init__(self, embedding_dim, num_embeddings, rnn_hidden_dim, \n",
        "                 num_layers, bidirectional, padding_idx=0):\n",
        "        super(Encoder, self).__init__()\n",
        "        \n",
        "        # Embeddings\n",
        "        self.word_embeddings = nn.Embedding(embedding_dim=embedding_dim,\n",
        "                                            num_embeddings=num_embeddings,\n",
        "                                            padding_idx=padding_idx)\n",
        "        \n",
        "        # GRU weights\n",
        "        self.gru = nn.GRU(input_size=embedding_dim, hidden_size=rnn_hidden_dim, \n",
        "                          num_layers=num_layers, batch_first=True, \n",
        "                          bidirectional=bidirectional)\n",
        "\n",
        "    def forward(self, x_in, x_lengths):\n",
        "        \n",
        "        # Word level embeddings\n",
        "        z_word = self.word_embeddings(x_in)\n",
        "   \n",
        "        # Feed into RNN\n",
        "        out, h_n = self.gru(z)\n",
        "        \n",
        "        # Gather the last relevant hidden state\n",
        "        out = gather_last_relevant_hidden(out, x_lengths)\n",
        "        \n",
        "        return out"
      ],
      "execution_count": 0,
      "outputs": []
    },
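    {
      "metadata": {
        "id": "gather_last_hidden_md",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "The `Encoder` above relies on `gather_last_relevant_hidden`, which was defined in the previous notebook; here is a re-implementation sketch that picks out each sequence's hidden state at its last non-padded time step."
      ]
    },
    {
      "metadata": {
        "id": "gather_last_hidden_code",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import torch\n",
        "\n",
        "# Gather the hidden state at each sequence's last relevant (non-padded) time step\n",
        "def gather_last_relevant_hidden(hiddens, x_lengths):\n",
        "    x_lengths = x_lengths.long().detach().cpu().numpy() - 1\n",
        "    out = []\n",
        "    for batch_index, column_index in enumerate(x_lengths):\n",
        "        out.append(hiddens[batch_index, column_index])\n",
        "    return torch.stack(out)"
      ],
      "execution_count": 0,
      "outputs": []
    },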
    {
      "metadata": {
        "id": "HRXtaGPlpyH7",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class Decoder(nn.Module):\n",
        "    def __init__(self, rnn_hidden_dim, hidden_dim, output_dim, dropout_p):\n",
        "        super(Decoder, self).__init__()\n",
        "        \n",
        "        # FC weights\n",
        "        self.dropout = nn.Dropout(dropout_p)\n",
        "        self.fc1 = nn.Linear(rnn_hidden_dim, hidden_dim)\n",
        "        self.fc2 = nn.Linear(hidden_dim, output_dim)\n",
        "\n",
        "    def forward(self, encoder_output, apply_softmax=False):\n",
        "        \n",
        "        # FC layers\n",
        "        z = self.dropout(encoder_output)\n",
        "        z = self.fc1(z)\n",
        "        z = self.dropout(z)\n",
        "        y_pred = self.fc2(z)\n",
        "\n",
        "        if apply_softmax:\n",
        "            y_pred = F.softmax(y_pred, dim=1)\n",
        "        return y_pred"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "SnKyCPVj-OVi",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class Model(nn.Module):\n",
        "    def __init__(self, embedding_dim, num_embeddings, rnn_hidden_dim, \n",
        "                 hidden_dim, num_layers, bidirectional, output_dim, dropout_p, \n",
        "                 padding_idx=0):\n",
        "        super(Model, self).__init__()\n",
        "        self.encoder = Encoder(embedding_dim, num_embeddings, rnn_hidden_dim, \n",
        "                               num_layers, bidirectional, padding_idx=0)\n",
        "        self.decoder = Decoder(rnn_hidden_dim, hidden_dim, output_dim, dropout_p)\n",
        "        \n",
        "    def forward(self, x_in, x_lengths, apply_softmax=False):\n",
        "        encoder_outputs = self.encoder(x_in, x_lengths)\n",
        "        y_pred = self.decoder(encoder_outputs, apply_softmax)\n",
        "        return y_pred"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "hfeoErsc-Tum",
        "colab_type": "code",
        "outputId": "8faa37ab-4c38-4ace-bb96-e5dc7e1483bf",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 204
        }
      },
      "cell_type": "code",
      "source": [
        "model = Model(embedding_dim=args.embedding_dim, num_embeddings=1000, \n",
        "              rnn_hidden_dim=args.rnn_hidden_dim, hidden_dim=args.hidden_dim, \n",
        "              num_layers=args.num_layers, bidirectional=args.bidirectional, \n",
        "              output_dim=4, dropout_p=args.dropout, padding_idx=0)\n",
        "print (model.named_parameters)"
      ],
      "execution_count": 20,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "<bound method Module.named_parameters of Model(\n",
            "  (encoder): Encoder(\n",
            "    (word_embeddings): Embedding(1000, 100, padding_idx=0)\n",
            "    (gru): GRU(100, 100, batch_first=True)\n",
            "  )\n",
            "  (decoder): Decoder(\n",
            "    (dropout): Dropout(p=0.1)\n",
            "    (fc1): Linear(in_features=100, out_features=100, bias=True)\n",
            "    (fc2): Linear(in_features=100, out_features=4, bias=True)\n",
            "  )\n",
            ")>\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "LAsOI6jEmTd0",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "# Attentional mechanisms"
      ]
    },
    {
      "metadata": {
        "id": "vJN5ft5Sc_kb",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "When processing an input sequence with an RNN, recall that at each time step we process the input and the hidden state at that time step. For many use cases, it's advantageous to have access to the inputs at all time steps and pay selective attention to the them at each time step. For example, in machine translation, it's advantageous to have access to all the words when translating to another language because translations aren't necessarily word for word. "
      ]
    },
    {
      "metadata": {
        "id": "jb6A6WfbXje6",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "<img src=\"https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/attention1.jpg\" width=650>"
      ]
    },
    {
      "metadata": {
        "id": "mNkayU0rf-ua",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "Attention can sound a bit confusing so let's see what happens at each time step. At time step j, the model has processed inputs $x_0, x_1, x_2, ..., x_j$ and has generted hidden states $h_0, h_1, h_2, ..., h_j$. The idea is to use all the processed hidden states to make the prediction and not just the most recent one. There are several approaches to how we can do this.\n",
        "\n",
        "With **soft attention**, we learn a vector of floating points (probabilities) to multiply with the hidden states to create the context vector.\n",
        "\n",
        "Ex. [0.1, 0.3, 0.1, 0.4, 0.1]\n",
        "\n",
        "With **hard attention**, we can learn a binary vector to multiply with the hidden states to create the context vector. \n",
        "\n",
        "Ex. [0, 0, 0, 1, 0]"
      ]
    },
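    {
      "metadata": {
        "id": "soft_attention_sketch_md",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "A minimal soft attention sketch (the scoring function and sizes below are illustrative assumptions): score each hidden state, softmax the scores into probabilities, and take the probability-weighted sum of the hidden states as the context vector."
      ]
    },
    {
      "metadata": {
        "id": "soft_attention_sketch_code",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "\n",
        "# Soft attention sketch (hypothetical sizes)\n",
        "batch_size, seq_len, rnn_hidden_dim = 4, 5, 100\n",
        "hiddens = torch.randn(batch_size, seq_len, rnn_hidden_dim) # h_0, ..., h_j\n",
        "\n",
        "attn = nn.Linear(rnn_hidden_dim, 1) # simple learned scoring function\n",
        "scores = attn(hiddens).squeeze(2) # (N, seq_len)\n",
        "probs = F.softmax(scores, dim=1) # attention weights, each row sums to 1\n",
        "context = (probs.unsqueeze(2) * hiddens).sum(dim=1) # (N, rnn_hidden_dim)\n",
        "print (context.size())"
      ],
      "execution_count": 0,
      "outputs": []
    },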
    {
      "metadata": {
        "id": "gYSIAVQqu3Ab",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "We're going to focus on soft attention because it's more widley used and we can visualize how much of each hidden state helps with the prediction, which is great for interpretability. "
      ]
    },
    {
      "metadata": {
        "id": "9Ch21nZNvDHO",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "<img src=\"https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/attention2.jpg\" width=650>"
      ]
    },
    {
      "metadata": {
        "id": "o_jPXuT8xlqd",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "We're going to implement attention in the document classification task below."
      ]
    },
    {
      "metadata": {
        "colab_type": "text",
        "id": "0iNnQzdxnGvn"
      },
      "cell_type": "markdown",
      "source": [
        "# Document classification with RNNs"
      ]
    },
    {
      "metadata": {
        "colab_type": "text",
        "id": "n38ZJoVZnGaE"
      },
      "cell_type": "markdown",
      "source": [
        "We're going to implement the same document classification task as in the previous notebook but we're going to use an attentional interface for interpretability.\n",
        "\n",
        "**Why not machine translation?** Normally, machine translation is the go-to example for demonstrating attention but it's not really practical. How many situations can you think of that require a seq to generate another sequence? Instead we're going to apply attention with our document classification example to see which input tokens are more influential towards predicting the genre."
      ]
    },
    {
      "metadata": {
        "colab_type": "text",
        "id": "Fu7HgEqbnGFY"
      },
      "cell_type": "markdown",
      "source": [
        "## Set up"
      ]
    },
    {
      "metadata": {
        "colab_type": "code",
        "id": "elL6BxtCmNGf",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "from argparse import Namespace\n",
        "import collections\n",
        "import copy\n",
        "import json\n",
        "import matplotlib.pyplot as plt\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "import re\n",
        "import torch"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "DCf2fLmPbKKI",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "def set_seeds(seed, cuda):\n",
        "    np.random.seed(seed)\n",
        "    torch.manual_seed(seed)\n",
        "    if cuda:\n",
        "        torch.cuda.manual_seed_all(seed)\n",
        "        \n",
        "# Creating directories\n",
        "def create_dirs(dirpath):\n",
        "    if not os.path.exists(dirpath):\n",
        "        os.makedirs(dirpath)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "colab_type": "code",
        "outputId": "291c03d4-6143-4395-b5c9-ab386b061737",
        "id": "TTwkuoZdmMlF",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "cell_type": "code",
      "source": [
        "args = Namespace(\n",
        "    seed=1234,\n",
        "    cuda=True,\n",
        "    shuffle=True,\n",
        "    data_file=\"news.csv\",\n",
        "    split_data_file=\"split_news.csv\",\n",
        "    vectorizer_file=\"vectorizer.json\",\n",
        "    model_state_file=\"model.pth\",\n",
        "    save_dir=\"news\",\n",
        "    train_size=0.7,\n",
        "    val_size=0.15,\n",
        "    test_size=0.15,\n",
        "    pretrained_embeddings=None,\n",
        "    cutoff=25,\n",
        "    num_epochs=5,\n",
        "    early_stopping_criteria=5,\n",
        "    learning_rate=1e-3,\n",
        "    batch_size=128,\n",
        "    embedding_dim=100,\n",
        "    kernels=[3,5],\n",
        "    num_filters=100,\n",
        "    rnn_hidden_dim=128,\n",
        "    hidden_dim=200,\n",
        "    num_layers=1,\n",
        "    bidirectional=False,\n",
        "    dropout_p=0.25,\n",
        ")\n",
        "\n",
        "# Set seeds\n",
        "set_seeds(seed=args.seed, cuda=args.cuda)\n",
        "\n",
        "# Create save dir\n",
        "create_dirs(args.save_dir)\n",
        "\n",
        "# Expand filepaths\n",
        "args.vectorizer_file = os.path.join(args.save_dir, args.vectorizer_file)\n",
        "args.model_state_file = os.path.join(args.save_dir, args.model_state_file)\n",
        "\n",
        "# Check CUDA\n",
        "if not torch.cuda.is_available():\n",
        "    args.cuda = False\n",
        "args.device = torch.device(\"cuda\" if args.cuda else \"cpu\")\n",
        "print(\"Using CUDA: {}\".format(args.cuda))"
      ],
      "execution_count": 28,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Using CUDA: True\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "colab_type": "text",
        "id": "xfiWhgX5mMQ5"
      },
      "cell_type": "markdown",
      "source": [
        "## Data"
      ]
    },
    {
      "metadata": {
        "colab_type": "code",
        "id": "baAsxXNFmMCF",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import urllib.request"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "colab_type": "code",
        "id": "3tJi_HyOmLw-",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "url = \"https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/data/news.csv\"\n",
        "response = urllib.request.urlopen(url)\n",
        "data = response.read()\n",
        "with open(args.data_file, 'wb') as fp:\n",
        "    fp.write(data)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "colab_type": "code",
        "outputId": "a51463a7-f37e-41e7-aca4-74038c7c6e8e",
        "id": "wrI_df4bmLjB",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 204
        }
      },
      "cell_type": "code",
      "source": [
        "df = pd.read_csv(args.data_file, header=0)\n",
        "df.head()"
      ],
      "execution_count": 31,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/html": [
              "<div>\n",
              "<style scoped>\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "</style>\n",
              "<table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr style=\"text-align: right;\">\n",
              "      <th></th>\n",
              "      <th>category</th>\n",
              "      <th>title</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th>0</th>\n",
              "      <td>Business</td>\n",
              "      <td>Wall St. Bears Claw Back Into the Black (Reuters)</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>1</th>\n",
              "      <td>Business</td>\n",
              "      <td>Carlyle Looks Toward Commercial Aerospace (Reu...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>2</th>\n",
              "      <td>Business</td>\n",
              "      <td>Oil and Economy Cloud Stocks' Outlook (Reuters)</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>3</th>\n",
              "      <td>Business</td>\n",
              "      <td>Iraq Halts Oil Exports from Main Southern Pipe...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>4</th>\n",
              "      <td>Business</td>\n",
              "      <td>Oil prices soar to all-time record, posing new...</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n",
              "</div>"
            ],
            "text/plain": [
              "   category                                              title\n",
              "0  Business  Wall St. Bears Claw Back Into the Black (Reuters)\n",
              "1  Business  Carlyle Looks Toward Commercial Aerospace (Reu...\n",
              "2  Business    Oil and Economy Cloud Stocks' Outlook (Reuters)\n",
              "3  Business  Iraq Halts Oil Exports from Main Southern Pipe...\n",
              "4  Business  Oil prices soar to all-time record, posing new..."
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 31
        }
      ]
    },
    {
      "metadata": {
        "colab_type": "code",
        "outputId": "36145f0d-7316-4341-f270-1d8c8037c661",
        "id": "TreK7nqEmLTN",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 85
        }
      },
      "cell_type": "code",
      "source": [
        "by_category = collections.defaultdict(list)\n",
        "for _, row in df.iterrows():\n",
        "    by_category[row.category].append(row.to_dict())\n",
        "for category in by_category:\n",
        "    print (\"{0}: {1}\".format(category, len(by_category[category])))"
      ],
      "execution_count": 32,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Business: 30000\n",
            "Sci/Tech: 30000\n",
            "Sports: 30000\n",
            "World: 30000\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "colab_type": "code",
        "id": "35nb3LxLmLCA",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "final_list = []\n",
        "for _, item_list in sorted(by_category.items()):\n",
        "    if args.shuffle:\n",
        "        np.random.shuffle(item_list)\n",
        "    n = len(item_list)\n",
        "    n_train = int(args.train_size*n)\n",
        "    n_val = int(args.val_size*n)\n",
        "    n_test = int(args.test_size*n)\n",
        "\n",
        "    # Give each data point a split attribute\n",
        "    for item in item_list[:n_train]:\n",
        "        item['split'] = 'train'\n",
        "    for item in item_list[n_train:n_train+n_val]:\n",
        "        item['split'] = 'val'\n",
        "    for item in item_list[n_train+n_val:]:\n",
        "        item['split'] = 'test'  \n",
        "\n",
        "    # Add to final list\n",
        "    final_list.extend(item_list)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "colab_type": "code",
        "outputId": "3b188412-5c0a-4e71-ef50-20c4ba18082b",
        "id": "Y48IvuSfmK07",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 85
        }
      },
      "cell_type": "code",
      "source": [
        "split_df = pd.DataFrame(final_list)\n",
        "split_df[\"split\"].value_counts()"
      ],
      "execution_count": 34,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "train    84000\n",
              "val      18000\n",
              "test     18000\n",
              "Name: split, dtype: int64"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 34
        }
      ]
    },
    {
      "metadata": {
        "colab_type": "code",
        "id": "RWuNBxAXmKk2",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "def preprocess_text(text):\n",
        "    text = text.lower()\n",
        "    text = re.sub(r\"([.,!?])\", r\" \\1 \", text)\n",
        "    text = re.sub(r\"[^a-zA-Z.,!?]+\", r\" \", text)\n",
        "    text = text.strip()\n",
        "    return text\n",
        "    \n",
        "split_df.title = split_df.title.apply(preprocess_text)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "colab_type": "code",
        "outputId": "7bb68022-5848-44ac-f90c-7cdf6a7eb988",
        "id": "fG9n77eLmKWB",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 204
        }
      },
      "cell_type": "code",
      "source": [
        "split_df.to_csv(args.split_data_file, index=False)\n",
        "split_df.head()"
      ],
      "execution_count": 36,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/html": [
              "<div>\n",
              "<style scoped>\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "</style>\n",
              "<table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr style=\"text-align: right;\">\n",
              "      <th></th>\n",
              "      <th>category</th>\n",
              "      <th>split</th>\n",
              "      <th>title</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th>0</th>\n",
              "      <td>Business</td>\n",
              "      <td>train</td>\n",
              "      <td>general electric posts higher rd quarter profit</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>1</th>\n",
              "      <td>Business</td>\n",
              "      <td>train</td>\n",
              "      <td>lilly to eliminate up to us jobs</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>2</th>\n",
              "      <td>Business</td>\n",
              "      <td>train</td>\n",
              "      <td>s amp p lowers america west outlook to negative</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>3</th>\n",
              "      <td>Business</td>\n",
              "      <td>train</td>\n",
              "      <td>does rand walk the talk on labor policy ?</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>4</th>\n",
              "      <td>Business</td>\n",
              "      <td>train</td>\n",
              "      <td>housekeeper advocates for changes</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n",
              "</div>"
            ],
            "text/plain": [
              "   category  split                                            title\n",
              "0  Business  train  general electric posts higher rd quarter profit\n",
              "1  Business  train                 lilly to eliminate up to us jobs\n",
              "2  Business  train  s amp p lowers america west outlook to negative\n",
              "3  Business  train        does rand walk the talk on labor policy ?\n",
              "4  Business  train                housekeeper advocates for changes"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 36
        }
      ]
    },
    {
      "metadata": {
        "colab_type": "text",
        "id": "m-a0OpqhmKJc"
      },
      "cell_type": "markdown",
      "source": [
        "## Vocabulary"
      ]
    },
    {
      "metadata": {
        "colab_type": "code",
        "id": "RUMQ_MwumJ8F",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class Vocabulary(object):\n",
        "    def __init__(self, token_to_idx=None):\n",
        "\n",
        "        # Token to index\n",
        "        if token_to_idx is None:\n",
        "            token_to_idx = {}\n",
        "        self.token_to_idx = token_to_idx\n",
        "\n",
        "        # Index to token\n",
        "        self.idx_to_token = {idx: token \\\n",
        "                             for token, idx in self.token_to_idx.items()}\n",
        "\n",
        "    def to_serializable(self):\n",
        "        return {'token_to_idx': self.token_to_idx}\n",
        "\n",
        "    @classmethod\n",
        "    def from_serializable(cls, contents):\n",
        "        return cls(**contents)\n",
        "\n",
        "    def add_token(self, token):\n",
        "        if token in self.token_to_idx:\n",
        "            index = self.token_to_idx[token]\n",
        "        else:\n",
        "            index = len(self.token_to_idx)\n",
        "            self.token_to_idx[token] = index\n",
        "            self.idx_to_token[index] = token\n",
        "        return index\n",
        "\n",
        "    def add_tokens(self, tokens):\n",
        "        return [self.add_token(token) for token in tokens]\n",
        "\n",
        "    def lookup_token(self, token):\n",
        "        return self.token_to_idx[token]\n",
        "\n",
        "    def lookup_index(self, index):\n",
        "        if index not in self.idx_to_token:\n",
        "            raise KeyError(\"the index (%d) is not in the Vocabulary\" % index)\n",
        "        return self.idx_to_token[index]\n",
        "\n",
        "    def __str__(self):\n",
        "        return \"<Vocabulary(size=%d)>\" % len(self)\n",
        "\n",
        "    def __len__(self):\n",
        "        return len(self.token_to_idx)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "1LtYf3lpExBb",
        "colab_type": "code",
        "outputId": "0870e7a9-d843-4549-97ae-d8cf5c3e7e3e",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 85
        }
      },
      "cell_type": "code",
      "source": [
        "# Vocabulary instance\n",
        "category_vocab = Vocabulary()\n",
        "for index, row in df.iterrows():\n",
        "    category_vocab.add_token(row.category)\n",
        "print (category_vocab) # __str__\n",
        "print (len(category_vocab)) # __len__\n",
        "index = category_vocab.lookup_token(\"Business\")\n",
        "print (index)\n",
        "print (category_vocab.lookup_index(index))"
      ],
      "execution_count": 38,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "<Vocabulary(size=4)>\n",
            "4\n",
            "0\n",
            "Business\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "Z0zkF6CsE_yH",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "## Sequence vocabulary"
      ]
    },
    {
      "metadata": {
        "id": "QtntaISyE_1c",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "Next, we're going to create the vocabulary for the article's title, which is a sequence of words. We'll extend the Vocabulary class into a SequenceVocabulary that adds special tokens such as <UNK>, <MASK>, <BEGIN> and <END>."
      ]
    },
    {
      "metadata": {
        "id": "ovI8QRefEw_p",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "from collections import Counter\n",
        "import string"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "4W3ZouuTEw1_",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class SequenceVocabulary(Vocabulary):\n",
        "    def __init__(self, token_to_idx=None, unk_token=\"<UNK>\",\n",
        "                 mask_token=\"<MASK>\", begin_seq_token=\"<BEGIN>\",\n",
        "                 end_seq_token=\"<END>\"):\n",
        "\n",
        "        super(SequenceVocabulary, self).__init__(token_to_idx)\n",
        "\n",
        "        self.mask_token = mask_token\n",
        "        self.unk_token = unk_token\n",
        "        self.begin_seq_token = begin_seq_token\n",
        "        self.end_seq_token = end_seq_token\n",
        "\n",
        "        self.mask_index = self.add_token(self.mask_token)\n",
        "        self.unk_index = self.add_token(self.unk_token)\n",
        "        self.begin_seq_index = self.add_token(self.begin_seq_token)\n",
        "        self.end_seq_index = self.add_token(self.end_seq_token)\n",
        "        \n",
        "        # Index to token\n",
        "        self.idx_to_token = {idx: token \\\n",
        "                             for token, idx in self.token_to_idx.items()}\n",
        "\n",
        "    def to_serializable(self):\n",
        "        contents = super(SequenceVocabulary, self).to_serializable()\n",
        "        contents.update({'unk_token': self.unk_token,\n",
        "                         'mask_token': self.mask_token,\n",
        "                         'begin_seq_token': self.begin_seq_token,\n",
        "                         'end_seq_token': self.end_seq_token})\n",
        "        return contents\n",
        "\n",
        "    def lookup_token(self, token):\n",
        "        return self.token_to_idx.get(token, self.unk_index)\n",
        "    \n",
        "    def lookup_index(self, index):\n",
        "        if index not in self.idx_to_token:\n",
        "            raise KeyError(\"the index (%d) is not in the SequenceVocabulary\" % index)\n",
        "        return self.idx_to_token[index]\n",
        "    \n",
        "    def __str__(self):\n",
        "        return \"<SequenceVocabulary(size=%d)>\" % len(self.token_to_idx)\n",
        "\n",
        "    def __len__(self):\n",
        "        return len(self.token_to_idx)\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "g5UHjpi3El37",
        "colab_type": "code",
        "outputId": "75875a36-e34f-4e25-aa96-656bdfe4f210",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 85
        }
      },
      "cell_type": "code",
      "source": [
        "# Get word counts\n",
        "word_counts = Counter()\n",
        "for title in split_df.title:\n",
        "    for token in title.split(\" \"):\n",
        "        if token not in string.punctuation:\n",
        "            word_counts[token] += 1\n",
        "\n",
        "# Create SequenceVocabulary instance\n",
        "title_word_vocab = SequenceVocabulary()\n",
        "for word, word_count in word_counts.items():\n",
        "    if word_count >= args.cutoff:\n",
        "        title_word_vocab.add_token(word)\n",
        "print (title_word_vocab) # __str__\n",
        "print (len(title_word_vocab)) # __len__\n",
        "index = title_word_vocab.lookup_token(\"general\")\n",
        "print (index)\n",
        "print (title_word_vocab.lookup_index(index))"
      ],
      "execution_count": 41,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "<SequenceVocabulary(size=4400)>\n",
            "4400\n",
            "4\n",
            "general\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "1_wja0EfQNpA",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "We're also going to create an instance of SequenceVocabulary that processes the input on a character level."
      ]
    },
    {
      "metadata": {
        "id": "5SpfS0BXP9pz",
        "colab_type": "code",
        "outputId": "383414b5-1274-499a-cd2f-d83cfc17bec6",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 85
        }
      },
      "cell_type": "code",
      "source": [
        "# Create SequenceVocabulary instance\n",
        "title_char_vocab = SequenceVocabulary()\n",
        "for title in split_df.title:\n",
        "    for token in title:\n",
        "        title_char_vocab.add_token(token)\n",
        "print (title_char_vocab) # __str__\n",
        "print (len(title_char_vocab)) # __len__\n",
        "index = title_char_vocab.lookup_token(\"g\")\n",
        "print (index)\n",
        "print (title_char_vocab.lookup_index(index))"
      ],
      "execution_count": 42,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "<SequenceVocabulary(size=35)>\n",
            "35\n",
            "4\n",
            "g\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "4Dag6H0SFHAG",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "## Vectorizer"
      ]
    },
    {
      "metadata": {
        "id": "VQIfxcUuKwzz",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "Something new that we introduce in this Vectorizer is computing the length of each input sequence. We'll use these lengths later on to extract the last relevant hidden state for each input sequence."
      ]
    },
    {
      "metadata": {
        "id": "tsNtEnhBEl6s",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class NewsVectorizer(object):\n",
        "    def __init__(self, title_word_vocab, title_char_vocab, category_vocab):\n",
        "        self.title_word_vocab = title_word_vocab\n",
        "        self.title_char_vocab = title_char_vocab\n",
        "        self.category_vocab = category_vocab\n",
        "\n",
        "    def vectorize(self, title):\n",
        "       \n",
        "        # Word-level vectorization\n",
        "        word_indices = [self.title_word_vocab.lookup_token(token) for token in title.split(\" \")]\n",
        "        word_indices = [self.title_word_vocab.begin_seq_index] + word_indices + \\\n",
        "            [self.title_word_vocab.end_seq_index]\n",
        "        title_length = len(word_indices)\n",
        "        word_vector = np.array(word_indices, dtype=np.int64)\n",
        "        \n",
        "        # Char-level vectorization\n",
        "        word_length = max([len(word) for word in title.split(\" \")])\n",
        "        char_vector = np.zeros((len(word_vector), word_length), dtype=np.int64)\n",
        "        char_vector[0, :] = self.title_char_vocab.mask_index # <BEGIN>\n",
        "        char_vector[-1, :] = self.title_char_vocab.mask_index # <END>\n",
        "        for i, word in enumerate(title.split(\" \")):\n",
        "            char_vector[i+1, :len(word)] = [self.title_char_vocab.lookup_token(char) \\\n",
        "                                            for char in word] # i+1 b/c of <BEGIN> token\n",
        "                \n",
        "        return word_vector, char_vector, len(word_indices)\n",
        "    \n",
        "    def unvectorize_word_vector(self, word_vector):\n",
        "        tokens = [self.title_word_vocab.lookup_index(index) for index in word_vector]\n",
        "        title = \" \".join(token for token in tokens)\n",
        "        return title\n",
        "    \n",
        "    def unvectorize_char_vector(self, char_vector):\n",
        "        title = \"\"\n",
        "        for word_vector in char_vector:\n",
        "            for index in word_vector:\n",
        "                if index == self.title_char_vocab.mask_index:\n",
        "                    break\n",
        "                title += self.title_char_vocab.lookup_index(index)\n",
        "            title += \" \"\n",
        "        return title\n",
        "    \n",
        "    @classmethod\n",
        "    def from_dataframe(cls, df, cutoff):\n",
        "        \n",
        "        # Create class vocab\n",
        "        category_vocab = Vocabulary()        \n",
        "        for category in sorted(set(df.category)):\n",
        "            category_vocab.add_token(category)\n",
        "\n",
        "        # Get word counts\n",
        "        word_counts = Counter()\n",
        "        for title in df.title:\n",
        "            for token in title.split(\" \"):\n",
        "                word_counts[token] += 1\n",
        "        \n",
        "        # Create title vocab (word level)\n",
        "        title_word_vocab = SequenceVocabulary()\n",
        "        for word, word_count in word_counts.items():\n",
        "            if word_count >= cutoff:\n",
        "                title_word_vocab.add_token(word)\n",
        "                \n",
        "        # Create title vocab (char level)\n",
        "        title_char_vocab = SequenceVocabulary()\n",
        "        for title in df.title:\n",
        "            for token in title:\n",
        "                title_char_vocab.add_token(token)\n",
        "        \n",
        "        return cls(title_word_vocab, title_char_vocab, category_vocab)\n",
        "\n",
        "    @classmethod\n",
        "    def from_serializable(cls, contents):\n",
        "        title_word_vocab = SequenceVocabulary.from_serializable(contents['title_word_vocab'])\n",
        "        title_char_vocab = SequenceVocabulary.from_serializable(contents['title_char_vocab'])\n",
        "        category_vocab = Vocabulary.from_serializable(contents['category_vocab'])\n",
        "        return cls(title_word_vocab=title_word_vocab, \n",
        "                   title_char_vocab=title_char_vocab, \n",
        "                   category_vocab=category_vocab)\n",
        "    \n",
        "    def to_serializable(self):\n",
        "        return {'title_word_vocab': self.title_word_vocab.to_serializable(),\n",
        "                'title_char_vocab': self.title_char_vocab.to_serializable(),\n",
        "                'category_vocab': self.category_vocab.to_serializable()}"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "JtRRXU53El9Y",
        "colab_type": "code",
        "outputId": "659ad7a1-38a4-46ca-98b8-a72ba0c9fff0",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 340
        }
      },
      "cell_type": "code",
      "source": [
        "# Vectorizer instance\n",
        "vectorizer = NewsVectorizer.from_dataframe(split_df, cutoff=args.cutoff)\n",
        "print (vectorizer.title_word_vocab)\n",
        "print (vectorizer.title_char_vocab)\n",
        "print (vectorizer.category_vocab)\n",
        "word_vector, char_vector, title_length = vectorizer.vectorize(preprocess_text(\n",
        "    \"Roger Federer wins the Wimbledon tennis tournament.\"))\n",
        "print (\"word_vector:\", np.shape(word_vector))\n",
        "print (\"char_vector:\", np.shape(char_vector))\n",
        "print (\"title_length:\", title_length)\n",
        "print (word_vector)\n",
        "print (char_vector)\n",
        "print (vectorizer.unvectorize_word_vector(word_vector))\n",
        "print (vectorizer.unvectorize_char_vector(char_vector))"
      ],
      "execution_count": 81,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "<SequenceVocabulary(size=4404)>\n",
            "<SequenceVocabulary(size=35)>\n",
            "<Vocabulary(size=4)>\n",
            "word_vector: (10,)\n",
            "char_vector: (10, 10)\n",
            "title_length: 10\n",
            "[   2    1 4151 1231   25    1 2392 4076   38    3]\n",
            "[[ 0  0  0  0  0  0  0  0  0  0]\n",
            " [ 7 15  4  5  7  0  0  0  0  0]\n",
            " [21  5 18  5  7  5  7  0  0  0]\n",
            " [26 13  6 16  0  0  0  0  0  0]\n",
            " [12 17  5  0  0  0  0  0  0  0]\n",
            " [26 13 23 25  9  5 18 15  6  0]\n",
            " [12  5  6  6 13 16  0  0  0  0]\n",
            " [12 15 20  7  6  8 23  5  6 12]\n",
            " [30  0  0  0  0  0  0  0  0  0]\n",
            " [ 0  0  0  0  0  0  0  0  0  0]]\n",
            "<BEGIN> <UNK> federer wins the <UNK> tennis tournament . <END>\n",
            " roger federer wins the wimbledon tennis tournament .  \n"
          ],
          "name": "stdout"
        }
      ]
    },
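    {
      "metadata": {
        "id": "lastHiddenSketch",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "# Illustrative sketch (not part of the pipeline yet): the title_length\n",
        "# returned by the vectorizer lets us pull out the last relevant hidden\n",
        "# state from padded RNN outputs later on. Names here are our own.\n",
        "def gather_last_relevant_hidden(hiddens, title_lengths):\n",
        "    # hiddens: (batch_size, seq_len, rnn_hidden_dim)\n",
        "    # title_lengths: (batch_size,) true lengths incl. <BEGIN>/<END>\n",
        "    indices = title_lengths.long() - 1 # last non-padded position\n",
        "    out = []\n",
        "    for batch_index, column_index in enumerate(indices):\n",
        "        out.append(hiddens[batch_index, column_index])\n",
        "    return torch.stack(out)"
      ],
      "execution_count": 0,
      "outputs": []
    },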
    {
      "metadata": {
        "id": "uk_QvpVfFM0S",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "## Dataset"
      ]
    },
    {
      "metadata": {
        "id": "oU7oDdelFMR9",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "from torch.utils.data import Dataset, DataLoader"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "pB7FHmiSFMXA",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class NewsDataset(Dataset):\n",
        "    def __init__(self, df, vectorizer):\n",
        "        self.df = df\n",
        "        self.vectorizer = vectorizer\n",
        "\n",
        "        # Data splits\n",
        "        self.train_df = self.df[self.df.split=='train']\n",
        "        self.train_size = len(self.train_df)\n",
        "        self.val_df = self.df[self.df.split=='val']\n",
        "        self.val_size = len(self.val_df)\n",
        "        self.test_df = self.df[self.df.split=='test']\n",
        "        self.test_size = len(self.test_df)\n",
        "        self.lookup_dict = {'train': (self.train_df, self.train_size), \n",
        "                            'val': (self.val_df, self.val_size),\n",
        "                            'test': (self.test_df, self.test_size)}\n",
        "        self.set_split('train')\n",
        "\n",
        "        # Class weights (for imbalances)\n",
        "        class_counts = df.category.value_counts().to_dict()\n",
        "        def sort_key(item):\n",
        "            return self.vectorizer.category_vocab.lookup_token(item[0])\n",
        "        sorted_counts = sorted(class_counts.items(), key=sort_key)\n",
        "        frequencies = [count for _, count in sorted_counts]\n",
        "        self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32)\n",
        "\n",
        "    @classmethod\n",
        "    def load_dataset_and_make_vectorizer(cls, split_data_file, cutoff):\n",
        "        df = pd.read_csv(split_data_file, header=0)\n",
        "        train_df = df[df.split=='train']\n",
        "        return cls(df, NewsVectorizer.from_dataframe(train_df, cutoff))\n",
        "\n",
        "    @classmethod\n",
        "    def load_dataset_and_load_vectorizer(cls, split_data_file, vectorizer_filepath):\n",
        "        df = pd.read_csv(split_data_file, header=0)\n",
        "        vectorizer = cls.load_vectorizer_only(vectorizer_filepath)\n",
        "        return cls(df, vectorizer)\n",
        "\n",
        "    @staticmethod\n",
        "    def load_vectorizer_only(vectorizer_filepath):\n",
        "        with open(vectorizer_filepath) as fp:\n",
        "            return NewsVectorizer.from_serializable(json.load(fp))\n",
        "\n",
        "    def save_vectorizer(self, vectorizer_filepath):\n",
        "        with open(vectorizer_filepath, \"w\") as fp:\n",
        "            json.dump(self.vectorizer.to_serializable(), fp)\n",
        "\n",
        "    def set_split(self, split=\"train\"):\n",
        "        self.target_split = split\n",
        "        self.target_df, self.target_size = self.lookup_dict[split]\n",
        "\n",
        "    def __str__(self):\n",
        "        return \"<Dataset(split={0}, size={1})>\".format(\n",
        "            self.target_split, self.target_size)\n",
        "\n",
        "    def __len__(self):\n",
        "        return self.target_size\n",
        "\n",
        "    def __getitem__(self, index):\n",
        "        row = self.target_df.iloc[index]\n",
        "        title_word_vector, title_char_vector, title_length = \\\n",
        "            self.vectorizer.vectorize(row.title)\n",
        "        category_index = self.vectorizer.category_vocab.lookup_token(row.category)\n",
        "        return {'title_word_vector': title_word_vector, \n",
        "                'title_char_vector': title_char_vector, \n",
        "                'title_length': title_length, \n",
        "                'category': category_index}\n",
        "\n",
        "    def get_num_batches(self, batch_size):\n",
        "        return len(self) // batch_size\n",
        "\n",
        "    def generate_batches(self, batch_size, collate_fn, shuffle=True, \n",
        "                         drop_last=False, device=\"cpu\"):\n",
        "        dataloader = DataLoader(dataset=self, batch_size=batch_size,\n",
        "                                collate_fn=collate_fn, shuffle=shuffle, \n",
        "                                drop_last=drop_last)\n",
        "        for data_dict in dataloader:\n",
        "            out_data_dict = {}\n",
        "            for name, tensor in data_dict.items():\n",
        "                out_data_dict[name] = tensor.to(device)\n",
        "            yield out_data_dict"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "_Dpb6ZHJFMeb",
        "colab_type": "code",
        "outputId": "f87f31eb-c1d1-4269-ea4d-4f93826bd0df",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 272
        }
      },
      "cell_type": "code",
      "source": [
        "# Dataset instance\n",
        "dataset = NewsDataset.load_dataset_and_make_vectorizer(args.split_data_file,\n",
        "                                                       args.cutoff)\n",
        "print (dataset) # __str__\n",
        "input_ = dataset[10] # __getitem__\n",
        "print (input_['title_word_vector'])\n",
        "print (input_['title_char_vector'])\n",
        "print (input_['title_length'])\n",
        "print (input_['category'])\n",
        "print (dataset.vectorizer.unvectorize_word_vector(input_['title_word_vector']))\n",
        "print (dataset.vectorizer.unvectorize_char_vector(input_['title_char_vector']))\n",
        "print (dataset.class_weights)"
      ],
      "execution_count": 90,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "<Dataset(split=train, size=84000)\n",
            "[ 2 51  1 52 53 26 54  3]\n",
            "[[ 0  0  0  0  0  0  0  0  0  0]\n",
            " [18  5  9 12  8  0  0  0  0  0]\n",
            " [18 15 18  4  5 16  0  0  0  0]\n",
            " [25  8  6 27  7 20 14 12 11 22]\n",
            " [26 13 12 17  0  0  0  0  0  0]\n",
            " [ 9  8 25 15  7  0  0  0  0  0]\n",
            " [18  5  8  9  0  0  0  0  0  0]\n",
            " [ 0  0  0  0  0  0  0  0  0  0]]\n",
            "8\n",
            "0\n",
            "<BEGIN> delta <UNK> bankruptcy with labor deal <END>\n",
            " delta dodges bankruptcy with labor deal  \n",
            "tensor([3.3333e-05, 3.3333e-05, 3.3333e-05, 3.3333e-05])\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "_IUIqtbvFUAG",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "## Model"
      ]
    },
    {
      "metadata": {
        "id": "xJV5WlDiFVVz",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "embed → encode → attend → predict"
      ]
    },
    {
      "metadata": {
        "id": "rZCzdZZ9FMhm",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import torch.nn as nn\n",
        "import torch.nn.functional as F"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "c9wipRZt7feC",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class NewsEncoder(nn.Module):\n",
        "    def __init__(self, embedding_dim, num_word_embeddings, num_char_embeddings,\n",
        "                 kernels, num_input_channels, num_output_channels, \n",
        "                 rnn_hidden_dim, num_layers, bidirectional, \n",
        "                 word_padding_idx=0, char_padding_idx=0):\n",
        "        super(NewsEncoder, self).__init__()\n",
        "        \n",
        "        self.num_layers = num_layers\n",
        "        self.bidirectional = bidirectional\n",
        "        \n",
        "        # Embeddings\n",
        "        self.word_embeddings = nn.Embedding(embedding_dim=embedding_dim,\n",
        "                                            num_embeddings=num_word_embeddings,\n",
        "                                            padding_idx=word_padding_idx)\n",
        "        self.char_embeddings = nn.Embedding(embedding_dim=embedding_dim,\n",
        "                                            num_embeddings=num_char_embeddings,\n",
        "                                            padding_idx=char_padding_idx)\n",
        "        \n",
        "        # Conv weights\n",
        "        self.conv = nn.ModuleList([nn.Conv1d(num_input_channels, \n",
        "                                             num_output_channels, \n",
        "                                             kernel_size=f) for f in kernels])\n",
        "        \n",
        "        \n",
        "        # GRU weights\n",
        "        self.gru = nn.GRU(input_size=embedding_dim*(len(kernels)+1), \n",
        "                          hidden_size=rnn_hidden_dim, num_layers=num_layers, \n",
        "                          batch_first=True, bidirectional=bidirectional)\n",
        "        \n",
        "    def initialize_hidden_state(self, batch_size, rnn_hidden_dim, device):\n",
        "        \"\"\"Modify this to condition the RNN.\"\"\"\n",
        "        num_directions = 1\n",
        "        if self.bidirectional:\n",
        "            num_directions = 2\n",
        "        hidden_t = torch.zeros(self.num_layers * num_directions, \n",
        "                               batch_size, rnn_hidden_dim).to(device)\n",
        "        return hidden_t\n",
        "        \n",
        "    def get_char_level_embeddings(self, x):\n",
        "        # x: (N, seq_len, word_len)\n",
        "        input_shape = x.size()\n",
        "        batch_size, seq_len, word_len = input_shape\n",
        "        x = x.view(-1, word_len) # (N*seq_len, word_len)\n",
        "        \n",
        "        # Embedding\n",
        "        x = self.char_embeddings(x) # (N*seq_len, word_len, embedding_dim)\n",
        "        \n",
        "        # Rearrange input so num_input_channels is in dim 1 (N, embedding_dim, word_len)\n",
        "        x = x.transpose(1, 2)\n",
        "        \n",
        "        # Convolution\n",
        "        z = [F.relu(conv(x)) for conv in self.conv]\n",
        "        \n",
        "        # Pooling\n",
        "        z = [F.max_pool1d(zz, zz.size(2)).squeeze(2) for zz in z] \n",
        "        z = [zz.view(batch_size, seq_len, -1) for zz in z] # (N, seq_len, embedding_dim)\n",
        "        \n",
        "        # Concat to get char-level embeddings\n",
        "        z = torch.cat(z, 2) # join conv outputs\n",
        "        \n",
        "        return z\n",
        "        \n",
        "    def forward(self, x_word, x_char, x_lengths, device):\n",
        "        \"\"\"\n",
        "        x_word: word level representation (N, seq_size)\n",
        "        x_char: char level representation (N, seq_size, word_len)\n",
        "        \"\"\"\n",
        "        \n",
        "        # Word level embeddings\n",
        "        z_word = self.word_embeddings(x_word)\n",
        "        \n",
        "        # Char level embeddings\n",
        "        z_char = self.get_char_level_embeddings(x=x_char)\n",
        "        \n",
        "        # Concatenate\n",
        "        z = torch.cat([z_word, z_char], 2)\n",
        "        \n",
        "        # Feed into RNN\n",
        "        initial_h = self.initialize_hidden_state(\n",
        "            batch_size=z.size(0), rnn_hidden_dim=self.gru.hidden_size,\n",
        "            device=device)\n",
        "        out, h_n = self.gru(z, initial_h)\n",
        "        \n",
        "        return out"
      ],
      "execution_count": 0,
      "outputs": []
    },
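    {
      "metadata": {
        "id": "charCNNShapesMd",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "A quick shape walk-through of `get_char_level_embeddings` (the sizes below are toy values for illustration, not the notebook's `args`): each word's characters are embedded, convolved and max-pooled into a single char-level word vector."
      ]
    },
    {
      "metadata": {
        "id": "charCNNShapesCode",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "# Toy shape walk-through of the char-level CNN (illustrative sizes only)\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "\n",
        "N, seq_len, word_len, e_dim = 2, 4, 10, 16\n",
        "x = torch.randint(0, 30, (N, seq_len, word_len)) # char indices\n",
        "char_emb = nn.Embedding(num_embeddings=30, embedding_dim=e_dim, padding_idx=0)\n",
        "conv = nn.Conv1d(e_dim, e_dim, kernel_size=3)\n",
        "\n",
        "z = char_emb(x.view(-1, word_len))        # (N*seq_len, word_len, e_dim)\n",
        "z = z.transpose(1, 2)                     # (N*seq_len, e_dim, word_len)\n",
        "z = F.relu(conv(z))                       # (N*seq_len, e_dim, word_len-2)\n",
        "z = F.max_pool1d(z, z.size(2)).squeeze(2) # (N*seq_len, e_dim)\n",
        "z = z.view(N, seq_len, -1)                # (N, seq_len, e_dim)\n",
        "print (z.shape) # torch.Size([2, 4, 16])"
      ],
      "execution_count": 0,
      "outputs": []
    },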
    {
      "metadata": {
        "id": "zeEcdA287gz4",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class NewsDecoder(nn.Module):\n",
        "    def __init__(self, rnn_hidden_dim, hidden_dim, output_dim, dropout_p):\n",
        "        super(NewsDecoder, self).__init__()\n",
        "        \n",
        "        # Attention FC layer\n",
        "        self.fc_attn = nn.Linear(rnn_hidden_dim, rnn_hidden_dim)\n",
        "        self.v = nn.Parameter(torch.rand(rnn_hidden_dim))\n",
        "        \n",
        "        # FC weights\n",
        "        self.dropout = nn.Dropout(dropout_p)\n",
        "        self.fc1 = nn.Linear(rnn_hidden_dim, hidden_dim)\n",
        "        self.fc2 = nn.Linear(hidden_dim, output_dim)\n",
        "\n",
        "    def forward(self, encoder_outputs, apply_softmax=False):\n",
        "        \n",
        "        # Attention\n",
        "        z = torch.tanh(self.fc_attn(encoder_outputs))\n",
        "        z = z.transpose(2,1) # [B*H*T]\n",
        "        v = self.v.repeat(encoder_outputs.size(0),1).unsqueeze(1) #[B*1*H]\n",
        "        z = torch.bmm(v,z).squeeze(1) # [B*T]\n",
        "        attn_scores = F.softmax(z, dim=1)\n",
        "        context = torch.bmm(encoder_outputs.transpose(1, 2), \n",
        "                            attn_scores.unsqueeze(dim=2)).squeeze(dim=2) # [B*H]\n",
        "        \n",
        "        # FC layers\n",
        "        z = self.dropout(context)\n",
        "        z = self.fc1(z)\n",
        "        z = self.dropout(z)\n",
        "        y_pred = self.fc2(z)\n",
        "\n",
        "        if apply_softmax:\n",
        "            y_pred = F.softmax(y_pred, dim=1)\n",
        "        return attn_scores, y_pred"
      ],
      "execution_count": 0,
      "outputs": []
    },
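    {
      "metadata": {
        "id": "attnToyMd",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "A toy walk-through of the soft attention in `NewsDecoder` (sizes below are illustrative): `v` scores each encoder output, the scores are softmaxed over time, and the weighted encoder outputs are collapsed into one context vector."
      ]
    },
    {
      "metadata": {
        "id": "attnToyCode",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "# Toy soft attention (illustrative sizes only)\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "\n",
        "batch_size, seq_len, rnn_hidden_dim = 2, 5, 8\n",
        "encoder_outputs = torch.randn(batch_size, seq_len, rnn_hidden_dim)\n",
        "fc_attn = nn.Linear(rnn_hidden_dim, rnn_hidden_dim)\n",
        "v = nn.Parameter(torch.rand(rnn_hidden_dim))\n",
        "\n",
        "z = torch.tanh(fc_attn(encoder_outputs))   # [B*T*H]\n",
        "z = z.transpose(2, 1)                      # [B*H*T]\n",
        "v_b = v.repeat(batch_size, 1).unsqueeze(1) # [B*1*H]\n",
        "attn_scores = F.softmax(torch.bmm(v_b, z).squeeze(1), dim=1) # [B*T]\n",
        "context = torch.bmm(encoder_outputs.transpose(1, 2), \n",
        "                    attn_scores.unsqueeze(2)).squeeze(2)     # [B*H]\n",
        "print (attn_scores.sum(dim=1)) # each row sums to ~1\n",
        "print (context.shape) # torch.Size([2, 8])"
      ],
      "execution_count": 0,
      "outputs": []
    },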
    {
      "metadata": {
        "id": "yVDftS-G7gwy",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class NewsModel(nn.Module):\n",
        "    def __init__(self, embedding_dim, num_word_embeddings, num_char_embeddings,\n",
        "                 kernels, num_input_channels, num_output_channels, \n",
        "                 rnn_hidden_dim, hidden_dim, output_dim, num_layers, \n",
        "                 bidirectional, dropout_p, word_padding_idx, char_padding_idx):\n",
        "        super(NewsModel, self).__init__()\n",
        "        self.encoder = NewsEncoder(embedding_dim, num_word_embeddings,\n",
        "                                   num_char_embeddings, kernels, \n",
        "                                   num_input_channels, num_output_channels, \n",
        "                                   rnn_hidden_dim, num_layers, bidirectional, \n",
        "                                   word_padding_idx, char_padding_idx)\n",
        "        self.decoder = NewsDecoder(rnn_hidden_dim, hidden_dim, output_dim, \n",
        "                                   dropout_p)\n",
        "        \n",
        "    def forward(self, x_word, x_char, x_lengths, device, apply_softmax=False):\n",
        "        encoder_outputs = self.encoder(x_word, x_char, x_lengths, device)\n",
        "        attn_scores, y_pred = self.decoder(encoder_outputs, apply_softmax)\n",
        "        return attn_scores, y_pred"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "jHPYCPd7Fl3M",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "## Training"
      ]
    },
    {
      "metadata": {
        "id": "D3seBMA7FlcC",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import torch.optim as optim"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "HnRKWLekFlnM",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class Trainer(object):\n",
        "    def __init__(self, dataset, model, model_state_file, save_dir, device, \n",
        "                 shuffle, num_epochs, batch_size, learning_rate, \n",
        "                 early_stopping_criteria):\n",
        "        self.dataset = dataset\n",
        "        self.class_weights = dataset.class_weights.to(device)\n",
        "        self.device = device\n",
        "        self.model = model.to(device)\n",
        "        self.save_dir = save_dir\n",
        "        self.shuffle = shuffle\n",
        "        self.num_epochs = num_epochs\n",
        "        self.batch_size = batch_size\n",
        "        self.loss_func = nn.CrossEntropyLoss(self.class_weights)\n",
        "        self.optimizer = optim.Adam(self.model.parameters(), lr=learning_rate)\n",
        "        self.scheduler = optim.lr_scheduler.ReduceLROnPlateau(\n",
        "            optimizer=self.optimizer, mode='min', factor=0.5, patience=1)\n",
        "        self.train_state = {\n",
        "            'stop_early': False, \n",
        "            'early_stopping_step': 0,\n",
        "            'early_stopping_best_val': 1e8,\n",
        "            'early_stopping_criteria': early_stopping_criteria,\n",
        "            'learning_rate': learning_rate,\n",
        "            'epoch_index': 0,\n",
        "            'train_loss': [],\n",
        "            'train_acc': [],\n",
        "            'val_loss': [],\n",
        "            'val_acc': [],\n",
        "            'test_loss': -1,\n",
        "            'test_acc': -1,\n",
        "            'model_filename': model_state_file}\n",
        "    \n",
        "    def update_train_state(self):\n",
        "\n",
        "        # Verbose\n",
        "        print (\"[EPOCH]: {0:02d} | [LR]: {1} | [TRAIN LOSS]: {2:.2f} | [TRAIN ACC]: {3:.1f}% | [VAL LOSS]: {4:.2f} | [VAL ACC]: {5:.1f}%\".format(\n",
        "          self.train_state['epoch_index'], self.train_state['learning_rate'], \n",
        "            self.train_state['train_loss'][-1], self.train_state['train_acc'][-1], \n",
        "            self.train_state['val_loss'][-1], self.train_state['val_acc'][-1]))\n",
        "\n",
        "        # Save one model at least\n",
        "        if self.train_state['epoch_index'] == 0:\n",
        "            torch.save(self.model.state_dict(), self.train_state['model_filename'])\n",
        "            self.train_state['stop_early'] = False\n",
        "\n",
        "        # Save model if performance improved\n",
        "        elif self.train_state['epoch_index'] >= 1:\n",
        "            loss_tm1, loss_t = self.train_state['val_loss'][-2:]\n",
        "\n",
        "            # If loss worsened\n",
        "            if loss_t >= self.train_state['early_stopping_best_val']:\n",
        "                # Update step\n",
        "                self.train_state['early_stopping_step'] += 1\n",
        "\n",
        "            # Loss decreased\n",
        "            else:\n",
        "                # Save the best model\n",
        "                if loss_t < self.train_state['early_stopping_best_val']:\n",
        "                    self.train_state['early_stopping_best_val'] = loss_t\n",
        "                    torch.save(self.model.state_dict(), self.train_state['model_filename'])\n",
        "\n",
        "                # Reset early stopping step\n",
        "                self.train_state['early_stopping_step'] = 0\n",
        "\n",
        "            # Stop early ?\n",
        "            self.train_state['stop_early'] = self.train_state['early_stopping_step'] \\\n",
        "              >= self.train_state['early_stopping_criteria']\n",
        "        return self.train_state\n",
        "  \n",
        "    def compute_accuracy(self, y_pred, y_target):\n",
        "        _, y_pred_indices = y_pred.max(dim=1)\n",
        "        n_correct = torch.eq(y_pred_indices, y_target).sum().item()\n",
        "        return n_correct / len(y_pred_indices) * 100\n",
        "    \n",
        "    def pad_word_seq(self, seq, length):\n",
        "        vector = np.zeros(length, dtype=np.int64)\n",
        "        vector[:len(seq)] = seq\n",
        "        vector[len(seq):] = self.dataset.vectorizer.title_word_vocab.mask_index\n",
        "        return vector\n",
        "    \n",
        "    def pad_char_seq(self, seq, seq_length, word_length):\n",
        "        vector = np.zeros((seq_length, word_length), dtype=np.int64)\n",
        "        vector.fill(self.dataset.vectorizer.title_char_vocab.mask_index)\n",
        "        for i in range(len(seq)):\n",
        "            char_padding = np.zeros(word_length-len(seq[i]), dtype=np.int64)\n",
        "            vector[i] = np.concatenate((seq[i], char_padding), axis=None)\n",
        "        return vector\n",
        "        \n",
        "    def collate_fn(self, batch):\n",
        "        \n",
        "        # Make a deep copy\n",
        "        batch_copy = copy.deepcopy(batch)\n",
        "        processed_batch = {\"title_word_vector\": [], \"title_char_vector\": [], \n",
        "                           \"title_length\": [], \"category\": []}\n",
        "             \n",
        "        # Max lengths\n",
        "        get_seq_length = lambda sample: len(sample[\"title_word_vector\"])\n",
        "        get_word_length = lambda sample: len(sample[\"title_char_vector\"][0])\n",
        "        max_seq_length = max(map(get_seq_length, batch))\n",
        "        max_word_length = max(map(get_word_length, batch))\n",
        "\n",
        "\n",
        "        # Pad\n",
        "        for i, sample in enumerate(batch_copy):\n",
        "            padded_word_seq = self.pad_word_seq(\n",
        "                sample[\"title_word_vector\"], max_seq_length)\n",
        "            padded_char_seq = self.pad_char_seq(\n",
        "                sample[\"title_char_vector\"], max_seq_length, max_word_length)\n",
        "            processed_batch[\"title_word_vector\"].append(padded_word_seq)\n",
        "            processed_batch[\"title_char_vector\"].append(padded_char_seq)\n",
        "            processed_batch[\"title_length\"].append(sample[\"title_length\"])\n",
        "            processed_batch[\"category\"].append(sample[\"category\"])\n",
        "            \n",
        "        # Convert to appropriate tensor types\n",
        "        processed_batch[\"title_word_vector\"] = torch.LongTensor(\n",
        "            processed_batch[\"title_word_vector\"])\n",
        "        processed_batch[\"title_char_vector\"] = torch.LongTensor(\n",
        "            processed_batch[\"title_char_vector\"])\n",
        "        processed_batch[\"title_length\"] = torch.LongTensor(\n",
        "            processed_batch[\"title_length\"])\n",
        "        processed_batch[\"category\"] = torch.LongTensor(\n",
        "            processed_batch[\"category\"])\n",
        "        \n",
        "        return processed_batch  \n",
        "  \n",
        "    def run_train_loop(self):\n",
        "        for epoch_index in range(self.num_epochs):\n",
        "            self.train_state['epoch_index'] = epoch_index\n",
        "      \n",
        "            # Iterate over train dataset\n",
        "\n",
        "            # initialize batch generator, set loss and acc to 0, set train mode on\n",
        "            self.dataset.set_split('train')\n",
        "            batch_generator = self.dataset.generate_batches(\n",
        "                batch_size=self.batch_size, collate_fn=self.collate_fn, \n",
        "                shuffle=self.shuffle, device=self.device)\n",
        "            running_loss = 0.0\n",
        "            running_acc = 0.0\n",
        "            self.model.train()\n",
        "\n",
        "            for batch_index, batch_dict in enumerate(batch_generator):\n",
        "                # zero the gradients\n",
        "                self.optimizer.zero_grad()\n",
        "                \n",
        "                # compute the output\n",
        "                _, y_pred = self.model(x_word=batch_dict['title_word_vector'],\n",
        "                                       x_char=batch_dict['title_char_vector'],\n",
        "                                       x_lengths=batch_dict['title_length'],\n",
        "                                       device=self.device)\n",
        "                \n",
        "                # compute the loss\n",
        "                loss = self.loss_func(y_pred, batch_dict['category'])\n",
        "                loss_t = loss.item()\n",
        "                running_loss += (loss_t - running_loss) / (batch_index + 1)\n",
        "\n",
        "                # compute gradients using loss\n",
        "                loss.backward()\n",
        "\n",
        "                # use optimizer to take a gradient step\n",
        "                self.optimizer.step()\n",
        "                \n",
        "                # compute the accuracy\n",
        "                acc_t = self.compute_accuracy(y_pred, batch_dict['category'])\n",
        "                running_acc += (acc_t - running_acc) / (batch_index + 1)\n",
        "\n",
        "            self.train_state['train_loss'].append(running_loss)\n",
        "            self.train_state['train_acc'].append(running_acc)\n",
        "\n",
        "            # Iterate over val dataset\n",
        "\n",
        "            # initialize batch generator, set loss and acc to 0, set eval mode on\n",
        "            self.dataset.set_split('val')\n",
        "            batch_generator = self.dataset.generate_batches(\n",
        "                batch_size=self.batch_size, collate_fn=self.collate_fn, \n",
        "                shuffle=self.shuffle, device=self.device)\n",
        "            running_loss = 0.\n",
        "            running_acc = 0.\n",
        "            self.model.eval()\n",
        "\n",
        "            for batch_index, batch_dict in enumerate(batch_generator):\n",
        "\n",
        "                # compute the output\n",
        "                _, y_pred = self.model(x_word=batch_dict['title_word_vector'],\n",
        "                                       x_char=batch_dict['title_char_vector'],\n",
        "                                       x_lengths=batch_dict['title_length'],\n",
        "                                       device=self.device)\n",
        "\n",
        "                # compute the loss\n",
        "                loss = self.loss_func(y_pred, batch_dict['category'])\n",
        "                loss_t = loss.to(\"cpu\").item()\n",
        "                running_loss += (loss_t - running_loss) / (batch_index + 1)\n",
        "\n",
        "                # compute the accuracy\n",
        "                acc_t = self.compute_accuracy(y_pred, batch_dict['category'])\n",
        "                running_acc += (acc_t - running_acc) / (batch_index + 1)\n",
        "\n",
        "            self.train_state['val_loss'].append(running_loss)\n",
        "            self.train_state['val_acc'].append(running_acc)\n",
        "\n",
        "            self.train_state = self.update_train_state()\n",
        "            self.scheduler.step(self.train_state['val_loss'][-1])\n",
        "            if self.train_state['stop_early']:\n",
        "                break\n",
        "          \n",
        "    def run_test_loop(self):\n",
        "        # initialize batch generator, set loss and acc to 0, set eval mode on\n",
        "        self.dataset.set_split('test')\n",
        "        batch_generator = self.dataset.generate_batches(\n",
        "            batch_size=self.batch_size, collate_fn=self.collate_fn, \n",
        "            shuffle=self.shuffle, device=self.device)\n",
        "        running_loss = 0.0\n",
        "        running_acc = 0.0\n",
        "        self.model.eval()\n",
        "\n",
        "        for batch_index, batch_dict in enumerate(batch_generator):\n",
        "            # compute the output\n",
        "            _, y_pred = self.model(x_word=batch_dict['title_word_vector'],\n",
        "                                   x_char=batch_dict['title_char_vector'],\n",
        "                                   x_lengths=batch_dict['title_length'],\n",
        "                                   device=self.device)\n",
        "\n",
        "            # compute the loss\n",
        "            loss = self.loss_func(y_pred, batch_dict['category'])\n",
        "            loss_t = loss.item()\n",
        "            running_loss += (loss_t - running_loss) / (batch_index + 1)\n",
        "\n",
        "            # compute the accuracy\n",
        "            acc_t = self.compute_accuracy(y_pred, batch_dict['category'])\n",
        "            running_acc += (acc_t - running_acc) / (batch_index + 1)\n",
        "\n",
        "        self.train_state['test_loss'] = running_loss\n",
        "        self.train_state['test_acc'] = running_acc\n",
        "    \n",
        "    def plot_performance(self):\n",
        "        # Figure size\n",
        "        plt.figure(figsize=(15,5))\n",
        "\n",
        "        # Plot Loss\n",
        "        plt.subplot(1, 2, 1)\n",
        "        plt.title(\"Loss\")\n",
        "        plt.plot(self.train_state[\"train_loss\"], label=\"train\")\n",
        "        plt.plot(self.train_state[\"val_loss\"], label=\"val\")\n",
        "        plt.legend(loc='upper right')\n",
        "\n",
        "        # Plot Accuracy\n",
        "        plt.subplot(1, 2, 2)\n",
        "        plt.title(\"Accuracy\")\n",
        "        plt.plot(self.train_state[\"train_acc\"], label=\"train\")\n",
        "        plt.plot(self.train_state[\"val_acc\"], label=\"val\")\n",
        "        plt.legend(loc='lower right')\n",
        "\n",
        "        # Save figure\n",
        "        plt.savefig(os.path.join(self.save_dir, \"performance.png\"))\n",
        "\n",
        "        # Show plots\n",
        "        plt.show()\n",
        "    \n",
        "    def save_train_state(self):\n",
        "        with open(os.path.join(self.save_dir, \"train_state.json\"), \"w\") as fp:\n",
        "            json.dump(self.train_state, fp)"
      ],
      "execution_count": 0,
      "outputs": []
    },
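    {
      "metadata": {
        "id": "padToyMd",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "A small standalone illustration of the per-batch padding that `collate_fn` performs (toy sequences; a mask index of 0 is assumed here):"
      ]
    },
    {
      "metadata": {
        "id": "padToyCode",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "# Toy batch padding (mask index assumed to be 0)\n",
        "import numpy as np\n",
        "\n",
        "seqs = [[2, 51, 3], [2, 7, 9, 4, 3]]\n",
        "mask_index = 0\n",
        "max_seq_length = max(len(seq) for seq in seqs)\n",
        "padded = np.full((len(seqs), max_seq_length), mask_index, dtype=np.int64)\n",
        "for i, seq in enumerate(seqs):\n",
        "    padded[i, :len(seq)] = seq\n",
        "print (padded.shape) # (2, 5)"
      ],
      "execution_count": 0,
      "outputs": []
    },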
    {
      "metadata": {
        "id": "ICkiOaGtFlk-",
        "colab_type": "code",
        "outputId": "18174034-ce3e-444a-a968-aba51eb03b3e",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 306
        }
      },
      "cell_type": "code",
      "source": [
        "# Initialization\n",
        "dataset = NewsDataset.load_dataset_and_make_vectorizer(args.split_data_file,\n",
        "                                                       args.cutoff)\n",
        "dataset.save_vectorizer(args.vectorizer_file)\n",
        "vectorizer = dataset.vectorizer\n",
        "model = NewsModel(embedding_dim=args.embedding_dim, \n",
        "                  num_word_embeddings=len(vectorizer.title_word_vocab), \n",
        "                  num_char_embeddings=len(vectorizer.title_char_vocab),\n",
        "                  kernels=args.kernels,\n",
        "                  num_input_channels=args.embedding_dim,\n",
        "                  num_output_channels=args.num_filters,\n",
        "                  rnn_hidden_dim=args.rnn_hidden_dim,\n",
        "                  hidden_dim=args.hidden_dim,\n",
        "                  output_dim=len(vectorizer.category_vocab),\n",
        "                  num_layers=args.num_layers,\n",
        "                  bidirectional=args.bidirectional,\n",
        "                  dropout_p=args.dropout_p, \n",
        "                  word_padding_idx=vectorizer.title_word_vocab.mask_index,\n",
        "                  char_padding_idx=vectorizer.title_char_vocab.mask_index)\n",
        "print (model.named_modules)"
      ],
      "execution_count": 149,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "<bound method Module.named_modules of NewsModel(\n",
            "  (encoder): NewsEncoder(\n",
            "    (word_embeddings): Embedding(3406, 100, padding_idx=0)\n",
            "    (char_embeddings): Embedding(35, 100, padding_idx=0)\n",
            "    (conv): ModuleList(\n",
            "      (0): Conv1d(100, 100, kernel_size=(3,), stride=(1,))\n",
            "      (1): Conv1d(100, 100, kernel_size=(5,), stride=(1,))\n",
            "    )\n",
            "    (gru): GRU(300, 128, batch_first=True)\n",
            "  )\n",
            "  (decoder): NewsDecoder(\n",
            "    (fc_attn): Linear(in_features=128, out_features=128, bias=True)\n",
            "    (dropout): Dropout(p=0.25)\n",
            "    (fc1): Linear(in_features=128, out_features=200, bias=True)\n",
            "    (fc2): Linear(in_features=200, out_features=4, bias=True)\n",
            "  )\n",
            ")>\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "tuaRZ4DiFlh1",
        "colab_type": "code",
        "outputId": "6496aa05-de58-4913-a56a-9885bd60d9ad",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 102
        }
      },
      "cell_type": "code",
      "source": [
        "# Train\n",
        "trainer = Trainer(dataset=dataset, model=model, \n",
        "                  model_state_file=args.model_state_file, \n",
        "                  save_dir=args.save_dir, device=args.device,\n",
        "                  shuffle=args.shuffle, num_epochs=args.num_epochs, \n",
        "                  batch_size=args.batch_size, learning_rate=args.learning_rate, \n",
        "                  early_stopping_criteria=args.early_stopping_criteria)\n",
        "trainer.run_train_loop()"
      ],
      "execution_count": 150,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "[EPOCH]: 00 | [LR]: 0.001 | [TRAIN LOSS]: 0.78 | [TRAIN ACC]: 68.6% | [VAL LOSS]: 0.58 | [VAL ACC]: 78.5%\n",
            "[EPOCH]: 01 | [LR]: 0.001 | [TRAIN LOSS]: 0.50 | [TRAIN ACC]: 82.0% | [VAL LOSS]: 0.48 | [VAL ACC]: 83.2%\n",
            "[EPOCH]: 02 | [LR]: 0.001 | [TRAIN LOSS]: 0.43 | [TRAIN ACC]: 84.6% | [VAL LOSS]: 0.47 | [VAL ACC]: 83.1%\n",
            "[EPOCH]: 03 | [LR]: 0.001 | [TRAIN LOSS]: 0.39 | [TRAIN ACC]: 86.2% | [VAL LOSS]: 0.46 | [VAL ACC]: 83.7%\n",
            "[EPOCH]: 04 | [LR]: 0.001 | [TRAIN LOSS]: 0.35 | [TRAIN ACC]: 87.4% | [VAL LOSS]: 0.44 | [VAL ACC]: 84.2%\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "mzRJIz88Flfe",
        "colab_type": "code",
        "outputId": "dece6240-57ab-4abc-f9cc-ecd11dabcdc6",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 335
        }
      },
      "cell_type": "code",
      "source": [
        "# Plot performance\n",
        "trainer.plot_performance()"
      ],
      "execution_count": 151,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAA2gAAAE+CAYAAAD4XjP+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAIABJREFUeJzs3Xl8VPWh/vHPzGSfyb6RhARCgIQd\nZF8DKLK6o0ILotjbq7W3t73WH170Sut1673iFW3r0npr9VqlYgARRYVKEET2TSCAYckG2ciekMxk\n5vdHQiCCLJLJySTP+/XylZmTc848CZjwzPec79fkcrlciIiIiIiIiOHMRgcQERERERGRBipoIiIi\nIiIibYQKmoiIiIiISBuhgiYiIiIiItJGqKCJiIiIiIi0ESpoIiIiIiIibYQKmsgPlJyczKlTp4yO\nISIi0ipmzZrFzTffbHQMkXZPBU1ERERELunw4cMEBgYSGxvLrl27jI4j0q6poIm0sNraWp544gkm\nT57M1KlTee6556ivrwfg//7v/5g6dSpTpkxh5syZHDly5JLbRURE2oLly5czZcoUZsyYwYoVK5q2\nr1ixgsmTJzN58mQeeeQR6urqvnf7li1bmDRpUtOx5z9/+eWXefzxx5k5cyZvvvkmTqeT3/72t0ye\nPJmJEyfyyCOPYLfbATh9+jQPPPAA119/PTfddBMbN25k/fr1zJgxo1nm22+/nbVr17r7WyPS4ryM\nDiDS3vz1r3/l1KlTrF69GofDwZw5c/joo4+4/vrrWbJkCV988QU2m41PPvmE9evXExMTc9HtPXr0\nMPpLERERob6+ns8//5yHHnoIi8XC4sWLqauro6CggN/97nesWLGCqKgo/uVf/oW33nqLKVOmXHR7\nv379Lvk66enprFy5krCwMD799FO2b9/ORx99hNPp5LbbbuPjjz/mlltuYfHixSQlJfHqq69y4MAB\n7rvvPr788ksKCwvJyMggJSWFvLw8srKyGDduXCt9l0RajgqaSAtbv3498+fPx8vLCy8vL2666SY2\nbdrEtGnTMJlMLFu2jBkzZjB16lQA7Hb7RbeLiIi0BRs3bqRfv37YbDYAhg0bxhdffEFpaSmDBg0i\nOjoagMWLF2OxWPjggw8uun3Hjh2XfJ0BAwYQFhYGwOTJk5kwYQLe3t4A9OvXj+zsbKChyP3pT38C\noHfv3qxbtw4fHx8mT57M6tWrSUlJYe3atVx//fX4+Pi0/DdExM10iaNICzt9+jTBwcFNz4ODgyku\nLsbb25s333yTnTt3MnnyZH70ox9x6NCh790uIiLSFqSlpbF+/XqGDBnCkCFD+Oyzz1i+fDklJSUE\nBQU17efr64uXl9f3br+c8393nj59mgULFjB58mSmTJnCunXrcLlcAJSWlhIYGNi079niOH36dFav\nXg3A2rVrmTZt2rV94SIGUUETaWERERGUlpY2PS8tLSUiIgJoeKfvpZdeYvPmzYwZM4ZFixZdcruI\niIiRysrK2Lp1K1u2bGH79u1s376dbdu2sW/fPsxmMyUlJU37VlZWUlRURGho6EW3WyyWpnuyAcrL\ny7/3df/nf/4HLy8vVq1axZo1a0hNTW36XEhISLPz5+TkYLfbGTp0KA6Hgy+++IIjR44watSolvo2\niLQqFTSRFjZ+/HiWLVtGfX091dXVrFy5ktTUVA4dOsQvfvEL6urq8PHxoW/fvphMpu/dLiIiYrTV\nq1czYsSIZpcKenl5MWbMGOrq6ti5cyc5OTm4XC4WLVrEsmXLSE1Nvej2yMhICgsLKS4upr6+nlWr\nVn3v6xYXF9OzZ098fHzIyMhg165dVFdXAzBx4kSWL18OwLfffsvtt99OfX09ZrOZadOm8Z//+Z9M\nnDix6fJIEU+je9BErsHcuXOx
WCxNz5966inmzp1LdnY206dPx2QyMWXKlKb7yjp37syMGTPw9vbG\narXyxBNP0LNnz4tuFxERMdqKFSuYN2/eBdsnTZrEH//4R5588knmzZuHxWKhX79+3Hffffj6+n7v\n9jvuuINbb72V2NhYbrnlFg4ePHjR150/fz4LFiwgLS2NIUOGsGDBAh577DH69+/PI488woIFC5g4\ncSJWq5Xnn38ePz8/oOEyx7/85S+6vFE8msl19oJeEREREREPVlRUxG233cb69eubvYEq4kl0iaOI\niIiItAsvvfQSs2fPVjkTj3ZFBe2ZZ57h7rvvZtasWezdu7fZ59555x3uvvtuZs+ezdNPP+2WkCIi\nIiIi36eoqIjrr7+eoqIi5s+fb3QckWty2XvQtm7dyokTJ1i6dCmZmZksXLiQpUuXAg2z8rzxxht8\n9tlneHl5MX/+fHbv3s3AgQPdHlxEREREBBpmUF63bp3RMURaxGVH0DZv3swNN9wAQFJSEmVlZVRW\nVgLg7e2Nt7c31dXVOBwOampqmq1hISIiIiIiIlfusgXt7HoWZ4WFhVFYWAg0LDz40EMPccMNNzBh\nwgQGDBhAYmKi+9KKiIiIiIi0Y1c9zf75kz5WVlby2muvsWbNGmw2G/PmzSMjI4OUlJTvPd7hqMfL\nSzduioiIfFdhYcU1nyM0NICSkuoWSON+npQVPCuvsrqHsrqPJ+VtiayRkYHf+7nLFrSoqCiKioqa\nnhcUFBAZGQlAZmYm8fHxhIWFATBkyBC++eabSxa0lvjGR0YGtsgvsdbiSXmV1T08KSt4Vl5ldY+W\nynqpX0DiHp70JqgnZQXPyqus7qGs7uNJed2d9bKXOI4ePZpPP/0UgP379xMVFYXNZgMgLi6OzMxM\nzpw5A8A333xD165d3ZdWRERERESkHbvsCNp1111Hnz59mDVrFiaTiUWLFpGWlkZgYCCTJk3i/vvv\n55577sFisTBo0CCGDBnSGrlFRERERETanSu6B+3Xv/51s+fnX8I4a9YsZs2a1bKpREREREREOqAr\nWqhaRERERERE3E8FTUREREREpI1QQRMREREREWkjrnodNBERkY6mqqqKBQsWUFZWht1u56GHHuL1\n119v+nxBQQG33XYbDzzwQNO2l19+mVWrVhEdHQ3AzTffzJ133tnq2UVExLOooImIdCDr169j/Pjr\nL7vf008/zYwZdxAbG9cKqdq+5cuXk5iYyMMPP0x+fj7z5s1jzZo1TZ//yU9+wi233HLBcffccw9z\n5sxpzagiIuLhdImjiEgHcfJkHmvXfnpF+z722GMqZ+cJDQ2ltLQUgPLyckJDQ5s+99VXX9G1a1di\nYmKMiiciIu2Ix42g1dbV8+GXmQzqFoafj8fFFxExzAsv/I6DB/czduxQbrxxKidP5vHii3/k2Wef\npLCwgJqaGubP/ymjR49l7ty5/Pzn/8YXX6yjqqqSrKwT5Obm8ItfPMzIkaON/lJa3fTp00lLS2PS\npEmUl5fz2muvNX3urbfeYuHChRc9bs2aNaxbtw4fHx8ef/xx4uPjWyuyiIi0oLKqOrILKsgpqGJ4\n/1hC/d3XQzyu4RzKLuFPK77hhsGd+dGknkbHERHxGLNnzyUt7e8kJiaRlXWcP/7xz5SUnGbYsBFM\nnTqD3Nwc/uM/HmX06LHNjisoyOf551/i66+/YuXKDzpkQVu5ciWxsbG88cYbZGRksHDhQtLS0sjP\nz6e6upqEhIQLjklNTWXEiBEMHTqU1atX89RTTzUrdhcTGhqAl5flmvNGRgZe8zlaiydlBc/Kq6zu\noazu0xby1jtd5BVWciyvjGN55RzNK+NYbhklFbVN++QUV/Hv84a5LYPHFbTeXcOIDgvgi125TBoa\nT2SIv9GRRESu2t//8S3bMgpa9JxDU6K4a2L3K9q3V68+AAQGBnHw4H4+/DANk8lMeXnZBfv27z
8Q\ngKioKCorK1susAfZuXMnY8aMASAlJYWCggLq6+tJT09nxIgRFz2mf//+TY8nTpzI888/f9nXKSmp\nvuaskZGBFBZWXPN5WoMnZQXPyqus7qGs7mNE3jN1DnIKq8jOryCroJKs/EpyCyupczib7Rce5MvA\n7hEkRNuIj7IxbkjCNWe9VBn1uILmZTEzZ0oKi/+2kxVfHuWfbupjdCQREY/j7e0NwOefr6G8vJw/\n/OHPlJeX85OfzL1gX4vl3IiOy+VqtYxtSZcuXdizZw+TJ08mNzcXq9WKxWJh3759TJgw4aLHPPXU\nU0yZMoUhQ4awdetWevTo0cqpRUQEGn53lVbWkZVfQXZBJVkFlWTnV1BQUsP5v9UsZhNxEVbio2zE\nRweSEGWjc5QNm793s/MF+HlTVXHGbXk9rqABjBvUmb+vPczX+/OZMrwL8VE2oyOJiFyVuyZ2v+LR\nrpZiNpupr69vtq20tJSYmFjMZjPp6f/Abre3aiZPcffdd7Nw4ULmzJmDw+HgN7/5DQCFhYWEh4c3\n7VdYWMjLL7/Mk08+yZ133smiRYvw8vLCZDLx1FNPGZReRKTjcNQ7OXW6muz8SrIKGgtZfiWVNc1/\nv1n9vEhOCCEhOrChkEXZiI2w4mUxfg5FjyxoZrOJmeOT+J+/7+GD9Ex+eecAoyOJiLR5XbokcuhQ\nBjExsYSEhAAwfvxEHn303zhw4BumT7+ZqKgo/vKXPxmctO2xWq0sWbLkgu2vvvpqs+eRkZE8+eST\nACQnJ/Pee++1Sj4RkY6o+oyD7ILzR8UqyS2qxFHf/GqPqBB/kuNDiI+2kRAVSEK0jdBAX0wmk0HJ\nL80jCxpA38QwUhJC2JtZzKGsEpITQi9/kIhIBxYaGkpa2upm22JiYvnrX8+ViBtvnAqcuxegW7dz\no3zdunXn979/HRERkdbkcrkoLj/TOCpW2TgqVkFRWfPLDL0sZjpH2hrvFTs3Mubv61mVx7PSnsdk\nMnHH+CSefmsH76/P5LG5g9tsCxYRERERkcuzO5zkFVU1jopVkJ3fUMiqax3N9gsM8KZP19Cme8Xi\no2x0Cg/AYjb+EsVr5bEFDSApNpjBPSPZcbiQnYeLGJwcaXQkERERERG5ApU19qYZFAvLazl8ooST\nxVXUO89domgCosMC6NstrHFErOESxWCrT7sdnPHoggZwe2o3dh4pJG1DJgN7hLeL1iwiIiIi0l44\nXS4KS2vOXaLYWMrOX1sMwMfbTNdOgc1GxTpH2vD1ufb1IT2Jxxe0mHArY/vHsmFPHpv2nWLcgFij\nI4mIiIiIdEh19npyi6qaT2lfUEltXfNZhENsPvRPCm+6T2xgr054OZ2Yze1zVOxqeHxBA7hlTCKb\n959i5cZjjOgdjY93x2rZIiIiIiKtrayqrmEWxcaRsaz8Ck6drub8JTPNJhMx4QFNMyieLWRBVp9m\n54qMtHnUwtru1C4KWmigLzcM6cwnX2exbmcOU4d3MTqSiIiIiEi74HS6yC+pJqtxwo6zk3eUVdU1\n28/Px0KPuOCGGRSjG2ZTjIuw4u2lwZOr0S4KGsC0EV1I35XHx5tPMG5ALFY/78sfJCIiF5g58yY+\n/nj15XcUEZF250ydg5zCqqb7xLLyK8ktrKTO4Wy2X3iQLwO7RzROaW8jPjqQiGA/zO104o7W1G4K\nmtXPm+kju/D++kw+/voEd47vfvmDREREREQ6IJfLRWllXfN7xfIrKCip4fxlni1mE7ER1oZJOxon\n7+gcZcPmr8EQd2k3BQ3g+sGdWbsjh7Xbc7hhcDyhgb5GRxIRaTPmz/8xzzyzmE6dOnHq1En+/d8f\nJjIyipqaGs6cOcOvfvUIvXv3NTqmiIi0MEe9k1OnqxvvFatoulSxssbebD+rnxfJCSEkRJ+7Vyw2\nwoqXRbOkt6Z2VdB8vC3cMiaRNz/JYOXGY9w7NcXoSCIibc
a4cRPYtGkDd9xxF19+mc64cRNISurB\nuHHj2bFjG++881eefvq/jY4pIiLXwOl0kV1QydcZhRw4WkR2fiW5RZU46l3N9osM8SM5PqTZ5B1h\nQb7tdm0xT9KuChrA6H6d+HRrFhv3nmTysHhiwq1GRxIRuUDatx+xq2Bfi55zUFQ/bu8+43s/P27c\nBH7/+xe544672LgxnZ///Fe8997bvPvu29jtdvz8/Fo0j4iIuJ/T6SKroIKME6UcyirhcE4ZNbWO\nps97Wcx0jmwYDTt/ZMzft93VgHaj3f3JWMxmbh+XxB+W7yNtw1Eeuq2f0ZFERNqEbt2SKC4uJD//\nFBUVFXz55XoiIqL4j//4TzIyDvD7379odEQREbmMeqeTrPxKDmWVkpFVwpGcUmpqz60xFhXiz5Dk\nSIb06URYgDedwgOwmHWJoidpdwUN4LqeESTFBrHjUCGZeWUkxQYbHUlEpJnbu8+45GiXu4wcOYbX\nX/8jY8emUlpaQlJSDwDS07/A4XBc5mgREWlt9U4nJ05VciirhEPZpRzOLuXMeYs+R4f6MzQlhOSE\nUJLjQwgLargaIjIyUOuKeah2WdBMJhMzxyfxu7/t4oP1mTwye5CupxURAVJTJ/DAA/N58813OXOm\nhqeeWsQXX6zljjvuYu3az1i9+kOjI4qIdGiOeicnTlVwKPvsCFkZtecXsrAAhieEkBzfUMo0KV77\n0y4LGkByQij9uoWz72gx3xw7Tb9u4UZHEhExXK9efUhP39L0/J13ljU9HjMmFYDp02/GarVSXa13\nXkVE3M1R7+T4qYqGEbKs0oZCZj9XyGLCA5rKWHJCCCE2FbL2rt0WNIA7UrvxzdFilq3PpE9imBbO\nExERERFDOeqdHDtZzqGshkk9juSWUWc/twh0THgAKY1lLDk+hGAVsg6nXRe0hOhARvSJZvP+fLYe\nyGdEn05GRxIRERGRDsTuOFvIGu4h+zanjDrHuUIWF2GlZ0IIKQmh9IwPIdjqY2BaaQvadUEDuHVs\nN7YeLGD5l0cZkhKlhfZERERExG3sDidH88o4lF3KoaxSMnO/U8giraTEN4yQ9YwPIUiFTL6j3Re0\nyBB/JgyKY+2OHNJ353H94M5GRxIRERGRdsLuqOdoXjkZjZcsZuaVYz+vkHWOtJKcEEpKYyELDFAh\nk0tr9wUNYMaorny57yQfbjrGqL6dtDCfiIiIiPwgdkc9mbnlZDRO6pGZV46j/lwhi4+yNU3q0TM+\nWIVMrlqHaCpBVh+mDEtg5cZjfLYtm1vGJBodSUREREQ8QJ29nszchksWM09WcOjEaRz1LgBMNBay\nxhGyHvEh2Py9jQ0sHq9DFDSAG4fG84+dOazZmsWEQXG63ldERERELlDbWMjOXrJ47GT5uUJmgoSo\nwIYZFhsvWbT6qZBJy+owBc3f14ubRyfyzueH+eir4/xoUk+jI4mIiIiIwWrr6vk2t4xD2SVkZJVy\nLK+ceud5hSw6kJSEhksWRw7sTE3lGYMTS3vXYQoaQOrAWD7blsUXu3KZNDSeyBB/oyOJiIgHqKqq\nYsGCBZSVlWG323nooYd4/fXXqa6uJiAgAIAFCxbQt2/fpmPsdjuPPvooeXl5WCwWnn32WeLj4436\nEkSk0Zk6R0MhyyolI6uE4ycrmhWyrp0CSW6cZbFH5xAC/M79c9nm762CJm7XoQqal8XMbWO78fqq\nA6z48ij/dFMfoyOJiIgHWL58OYmJiTz88MPk5+czb948IiMjefbZZ+nZ8+JXZHz00UcEBQWxePFi\nNm7cyOLFi3nxxRdbObmI1NSeK2SHsko4fupcITObTHTpdHaErKGQaTI5MVqH+xs4rHc0a7Zk8fX+\nfCYPSyAhOtDoSCIi0saFhoZy6NAhAMrLywkNDb3sMZs3b+bWW28FYNSoUSxcuNCtGUWkQU2tgyM5\nZU0LQx8/WYHTda6QJc
YENi0M3T0uWIVM2pwO9zfSbDJxx/gk/ufve/gg/Si/umuA0ZFERKSNmz59\nOmlpaUyaNIny8nJee+01Fi9ezEsvvURJSQlJSUksXLgQPz+/pmOKiooICwsDwGw2YzKZqKurw8fn\n+yepCg0NwMvLcs15IyM9581HT8oKnpW3o2StqrFz4Fgx32QWsy+ziMzcMpyNI2QWs4meCSH06x5B\n324R9EoMu+ZC1lG+r0bwpLzuzNrhChpA38QwUhJC2He0mENZJSQnXP6dUBER6bhWrlxJbGwsb7zx\nBhkZGSxcuJAHH3yQ5ORkEhISWLRoEe+88w7333//957D1fgO/qWUlFRfc9bIyEAKCyuu+TytwZOy\ngmflbc9Zq8/YOXx2hCyrlBP5FZz938tiNtEtJojk80bIfH3OvelRWV5DZStmNZInZQXPytsSWS9V\n8DpkQTM1jqI9/dYO3l+fyWNzB2MymYyOJSIibdTOnTsZM2YMACkpKRQUFDBx4kQsloZ/+E2cOJGP\nP/642TFRUVEUFhaSkpKC3W7H5XJdcvRMRC6u+oydw9llTQtDZxU0L2Td44JJTmiY1KN7bPNCJuKJ\nOmRBA0iKDWZwciQ7DhWy83ARg5MjjY4kIiJtVJcuXdizZw+TJ08mNzeXgIAA7r//fl566SWCgoLY\nsmULPXr0aHbM6NGjWbNmDWPHjuWLL75g+PDhBqUX8SyVNXaOZJc2rEOWXUJ2fiVnx5+9LCZ6NBay\nlIQQusUF4+utQibtS4ctaAC3j+vGrsNFpG3IZGCPcCxms9GRRESkDbr77rtZuHAhc+bMweFw8Nvf\n/paSkhLuvfde/P39iY6O5l/+5V8AePDBB3nllVeYNm0aX331FbNnz8bHx4fnnnvO4K9CpG0qr6pj\n5+HCphGynILzC5mZnvEhjQtDh5IUG4SPCpm0cx26oMWEWxnTP4YNe/LYtO8U4wbEGh1JRETaIKvV\nypIlSy7YPm3atAu2vfLKKwBNa5+JyIXO1DnYerCADXvyOJpX3rTdy2JuKmMpCSF0iw3CuwUmzhHx\nJB26oAHcMiaRzftPsXLjMUb0jta7MiIiIiJucvxUOem78/j6QD61dfWYTNAvKYKkmECSVchEABU0\nQgN9uWFIZz75Oot1O3KYOqKL0ZFERERE2o2aWgdfH8hnw+48TuQ3zHwXFuTLlGEJjO0fQ3JSpMfM\n3ifSGjp8QQOYNqIL6bvyWL35BOMGxmL18zY6koiIiIjHcrlcHD1ZzobdeWw5mE+d3YnZZGJQjwhS\nB8bSNzEcs1kzaItcjAoaYPXzZvqoLrz/RSYff32CO8d3NzqSiIiIiMepPmNn8/580nfnkVPYsOJY\nRLAfYwfEMqZfDKGBvgYnFLk2TpfT7a9xRQXtmWeeYc+ePZhMJhYuXEj//v0ByM/P59e//nXTftnZ\n2Tz88MPcdNNN7knrRtdf15m123NYuz2HGwbH6weIiIiIyBVwuVx8m1vGht15bMsooM7hxGI2MTg5\nktSBsfTuGoZZ681KG+Vyuahx1FBhr6KirpLKukoq7JUNjxu3VdRVUmGvorKukip7NTd2H8fNCdPd\nlumyBW3r1q2cOHGCpUuXkpmZycKFC1m6dCkA0dHRvP322wA4HA7mzp3LxIkT3RbWnXy8LdwyJpE3\nP8lg5cZj3Ds1xehIIiIiIm1WZY2dzd+cIn1PHnlFVQBEhfgzbmAso/vFEGzVwuxijNr6uqZiVWmv\npKKu6nuLV6W9inpX/WXPafUKwOZjIzogiuSIJLfmv2xB27x5MzfccAMASUlJlJWVUVlZic1ma7bf\n8uXLmTx5Mlar1T1JW8Hofp34dGsWX+7NY/KweGLCPfdrEREREWlpLpeLw9mlpO/JY3tGIY76htGy\nYb2iSB0QS3KXUI2WSYtzOB3NR7POL1n2xlGvuqrGMlZJndN+2XP6WXyxeVtJCIzD5mMj
0NtGoI8N\nm4+16XGgjw2btw2bdwAW87nZRSMjA906sc1lC1pRURF9+vRpeh4WFkZhYeEFBe3999/nf//3f1s+\nYSuymM3cPi6JPyzfR1r6UR66vZ/RkUREREQMV1Fdx6Z9p9iwJ49Tp6sBiA4LIHVALKP6dSIoQKNl\ncuWcLidV9urzRrgqcZU4OFlS3DjS1fxywxrHmcue08vsRaC3jU7WKGwXKVs2b2uz0uVjabuTAl71\nJCEul+uCbbt27aJbt24XlLaLCQ0NwKsF1reIjAy85nNczOQIG2t35rDjcCGnq+0kdwlrkfO6K687\nKKt7eFJW8Ky8yuoenpRVRFqe0+Xi0IkS0vfksfNwIY56F14WMyP6RJM6IJae8SGYNFomnL2P68wF\nlxA2u6ywrqrpcZW9GhcXdorzmTBh87ES6htCQmDzghXobWsY9fKxNpUxP4tvu/n7eNmCFhUVRVFR\nUdPzgoICIiMjm+2zfv16Ro4ceUUvWFJSfZURL+TuYcVbR3fldydK+POKfTwye9A1/2G7O29LUlb3\n8KSs4Fl5ldU9WiqrSp6I5ymrqmPTvpNs2JNHQUkNALERVsYNiGVU307Y/NvuyIO0nLqz93HZz15W\neO4SwvMfny1jV3IfV4CXP4GN93F9d4QrLiICV42l6XLDAG9/zCZzK3ylbc9lC9ro0aN5+eWXmTVr\nFvv37ycqKuqCkbJ9+/Yxbdo0t4VsbckJofRPCmdvZjHfHDtNv27hRkcSERERcRuny8WB46fZsDuP\nXUeKqHe68PYyM7pvJ8YNjKV7XHC7GZ3oqM7dx9V8woyL3891Zfdx+Vp8sHnbiA+MI7CxbDUULGvj\nCJet2SWG59/H9V2e9Camu122oF133XX06dOHWbNmYTKZWLRoEWlpaQQGBjJp0iQACgsLCQ9vXyXm\njtQk9mUWs2x9Jn0SNT2siIiItD+llbVs3NswWlZU1nCfT+dIK6kD4xjRJxqrn0bL2qq6ejuV9oZy\nVVlX1fDRXkVVXRUV9irsh2oprixtLGJV1DhqLntOL5OlcabCyIsWrPPv4Qr0seJj0b2H7nBF96Cd\nv9YZQEpK8ynoV61a1XKJ2oj4KBsj+kSzeX8+Ww7kM7JPJ6MjiYiIiFwzp9PFN8dOk747lz3fFuN0\nufDxNjOmfwypA2PpFhOk0bJW5nQ5qXGcobKukkp7dUPxOq90fbeEVdqrqKuvu+x5TZiweVsJ9Q0m\n3hbbeFnh2dJlbfa44T4uP/3ZtwFXPUlIR3Lr2G5sPVjA8g1HGZoShZelY14HKyIiIp7vdPkZNu49\nyZd78ygurwUgIdrWMFrWOxp/X/2zsKXYnY5mZevsqFaVvapxweNzj6vqqqhyVON0OS97Xm+zFzZv\nG9H+Edh8bFi9Awj0tmH1tmLmympyAAAgAElEQVTzsWLzbvzPx0rXTtHUlDs77H1cnkz/J15CZIg/\nEwbFsXZHDut35XLDkHijI4mIiIhcsXqnk32ZDaNle48W43KBr4+F1IGxpA6MpWunIKMjtnlnZyis\ntFdSUlRIdmHB945qnS1eZ+prr+jcAV7+2HysRAaEN663dWHROvvY6m3F1+JzxSNcQX6B1Fboni5P\npIJ2GTNGd2XjvpOs+uo4o/vF6N0lERERafMKTlezYsNRNu47SUlFQ1lIjAkkdWAcw3pF4efTcf89\nc3ayjLPrcJ0/kvV9xetKRrfO3r8V7h/WOKoV0DRhxsVGuKxeAZecNEM6ro77f+cVCgrwYcqwBFZs\nPMZn27K5ZUyi0ZFERERELuCod7Ln22I27Mnjm2MNo2X+vhYmXBdH6oBYEqLb37IXLpeLM/W15xWq\ni9+71VTC7FVXtOgxgL+XPzbvAML9wrD5BGDzthEZHIrF4Y3V29o4U+G50uXbjtbhEmOpoF2BG4fF\n84+dOazZmsWEQXEEWTVjjYiISEdRVHOaDblf4Z1l
5swZO2aTGROmho8mE2ZMmJoeN340mb6zz7nt\nZx+bTObGY79zjqbzm5q/VuM2k8nc7PyllXXsOlzErsPFVNbYwWWiW2Iwg3pEMjApEj8fL0zUU1Zb\n3pSl4TwmTE2Pz2UxNX40Qr2znkp7NVVny5a9uvFerouMbDUWL8cVrL9lMVmweQcQ5hfaVK4uGNX6\nzvOLjW5pKnhpDSpoV8DPx4ubRifyzueHWfXVcX48qafRkURERMTNXC4XW0/t5O+HV1zxPUWG6g5+\njQ/zgLwSWL39h53qbEkzn18iv6csXlgczZcsqOe2N5zbZa6npLqcSnsV1VcwFTyAn8UPm3cAnQPj\nsHk3jG5d7N4tq7eVQB+rZicUj6KCdoVSB8by2bYs1u/KZdLQeKJC/I2OJCIiIm5Sba/m3UNp7CzY\ni6/Fhx+l3MGgLikUn67C5XLiwoWz6aMLl8vZ8LFxe7PPnd337H64cLlcjR+/s6/LhZOGfS+2vbK6\njuP55WQXVlBnrweTi9BAH+IiA4gM8cdkBpfLiY+fFzU1deedw9nsNZu/jus7+zgvyHKx3N/N6HTa\nL/J9ufg5XLiavtdmkxmrdwAhvsHE2WKwNa65dcGEGWfv3fK24m3WP2Gl/dLf7ivkZTFz29huvL7q\nACu+PMpPb+pjdCQRERFxg8Ml3/LXA0sprS2jW3AX5vWeRYR/OJEhgQTYW//yNrvDyc7DhaTvziUj\nqxTwx+oXx8R+MYwbEEtshPWCY9r6pXiu80phdFQwxUVVRkcSaTNU0K7CsN7RrNmSxZb9+UwZltAu\nb7YVERHpqOxOBx8d/ZR1WRswmUzMSLyRG7tMMGymvZPFVaTvzuOrb0413FsGpCSEMG5gLIN7RuLt\n5bkzAJrOu9RR63SJNKeCdhXMJhMzxyfxwt/38EH6UX511wCjI4mIiEgLOFWVz1/2v0tOZR4R/uHc\n23s2icEJrZ6jzl7PjkMNo2WHc8oACAzwZsrwBMYNiKVTWECrZxKR1qWCdpX6JIaRkhDCvqPFHMoq\nITkh1OhIIiIi8gO5XC425G5m+bcfYXc6GBUzlDt63Iyfl2+r5sgtrCR9dx6b95+i6owDgN5dQ0kd\nGMegHhF4WTTKJNJRqKBdJZPJxMzx3Xnqre28vz6Tx+YO1qxAIiLtXFVVFQsWLKCsrAy73c5DDz1E\nZGQkTz75JGazmaCgIBYvXoy//7kJpNLS0liyZAkJCQ2jMKNGjeLBBx806kuQiyivq+D/Dr7P/uIM\nrF4B3Nt7NgOj+rXa69fa69l2sIANe/L4NrdhtCzI6sP0kV0Y2z+GqFCNlol0RCpoP0C32CAGJ0ey\n41AhOw8XMjg5yuhIIiLiRsuXLycxMZGHH36Y/Px85s2bR0REBI8++ij9+/fnd7/7HWlpafz4xz9u\ndty0adNYsGCBQanlUvYVHeD/Dr5Ppb2KlNAezO19FyG+wa3y2ln5FWzYk8fm/fnU1DowAX27hZE6\nII4B3cM1WibSwamg/UC3j+vGrsNFfJB+lIE9IrCY9cNURKS9Cg0N5dChQwCUl5cTGhrKq6++is1m\nAyAsLIzS0lIjI8oVqq2vI+3IKjbmbcHL7MUdPW5ifOfRbp+o4kydg60HC0jfncexk+UAhNh8uGFw\nV8b2jyFCy/eISCMVtB8oJtzKmP4xbNiTx6Z9pxg3INboSCIi4ibTp08nLS2NSZMmUV5ezmuvvdZU\nzqqrq1m5ciVLliy54LitW7dy//3343A4WLBgAb17977k64SGBuDVAjPzRUZ6zizDrZn16OkTvLTt\nL+RV5BMfHMu/jphPQkjcVZ3javN+m1PKp1+fIH1nDjW1DswmGNo7msnDuzCkVzQWN46W6e+Beyir\n+3hSXndmVUG7BreMSeTr/adYufEYI3pH4+PtudPdiojI91u5ciWxsbG88cYbZGRksHDhQtLS0qiu\nrubBBx9k/vz5
JCUlNTtmwIABhIWFMX78eHbt2sWCBQtYtWrVJV+npKT6mrO29fWvztdaWZ0uJ2tP\npLPq2Kc4XU4mxI/hlm5T8bZ7X9XrX2nemloHWw7kk747jxP5DfuHBfly49B4xvaPISzID4DTp923\n9pf+HriHsrqPJ+VtiayXKngqaNcgNNCXG4bE8/HXJ1i3I4epI7oYHUlERNxg586djBkzBoCUlBQK\nCgqoq6vjZz/7GTNmzOD222+/4JikpKSm0jZo0CBOnz5NfX09FovezGtNxTUlvHXwPb4tPUawTyBz\ne99Nr7CeLf46LpeLYycrSN+dy9aDBdTa6zGbTAzqEUHqwFj6JoZjNmtSMRG5PBW0azRtRALpu3NZ\nvfkE4wbGYvXzNjqSiIi0sC5durBnzx4mT55Mbm4uVquVN954g2HDhnHnnXde9Jg//elPxMTEMGPG\nDA4fPkxYWJjKWSvbdmoXSw8vp8ZxhoGRfZmdcgc2b2uLvkb1GTub9+ezYU8e2QWVAEQE+zFtQBfG\n9IshNLB1p+sXEc+ngnaNAvy8mTayC+9/kcnHm09w54TuRkcSEZEWdvfdd7Nw4ULmzJmDw+HgN7/5\nDY888gidO3dm8+bNAAwfPpyf//znPPjgg7zyyivcdNNNPPLII7z33ns4HA6efvppg7+KjqPaXsPS\nw8vZnr8bH4sPP065k5ExQ1psWRyXy0Vmbjnpe3LZdrCAOocTi9nE4ORIUgfG0rtrGGYtwSMiP5AK\nWgu4/rrOrN2ew9odOVw/uHPTteUiItI+WK3WCyYB2bhx40X3feWVVwDo1KkTb7/9ttuzSXNHSo7y\n1wPvUVJbStegBOb1nkVUQESLnLuiuo7Pt2ezYXceuUUN949FhfgzbmAso/vFEGz1aZHXEZGOTQWt\nBfh4W7hlTCJvfpLBh5uOce/UXkZHEhER6VAcTgerj33O5yfWAzC16w1M7Xo9FvO1X1ZafcbO39Ye\nYVtGAfbG0bJhvaJIHRBLcpdQjZaJSItSQWsho/t14tOtWXy59ySThyUQE96y17iLiIjIxeVXFfDm\ngXfJqsgl3C+Me/vMoltw1xY7/4ovj/HVN6eIi7Qyum8Mo/p1IihAo2Ui4h5aXbmFWMxm7khNwuWC\ntPSjRscRERFp91wuF1/mbubZbUvIqshleKfB/PuwX7ZoOSutrCV9Tx4RwX78/pGJTBmeoHImIm6l\nEbQWNKhHBElxQew4XEhmbhlJccFGRxIREWmXKuoqeSfjffYVHSTAy597et/NdVH9W/x11mzJwu5w\nMn1kF7zcuKi0iMhZ+knTgkwmEzNTG9a8WbY+E5fLZXAiERGR9md/cQZPb32BfUUH6RnanYXDfuWW\nclZWVcf6XbmEB/kyul9Mi59fRORiNILWwpITQumfFM7ezGL2HT1N/6RwoyOJiIi0C3X1dlZkriY9\n5yu8TBZu6z6difFjMZvc837zp1uyqHM4mTayq0bPRKTVqKC5wR2pSezLLOaD9Ez6dgszOo6IiIjH\ny67I4839f+NUdQGdrNHc13s2nQNj3fZ65dV1/GNXDqGBvozR6JmItCIVNDeIj7Ixok80m/fns+VA\nPjdHBRkdSURExCM5XU7WZW1g1dFPqXfVk9p5NLcmTcPH4u3W1/10axZ1did3ju+Ct5dGz0Sk9aig\nucltY7uxLaOA5RuOMnVMktFxREREPE7JmVLeOrCUw6WZBPrYmNvrLvqEp7j9dSuq6/jHjlyCbT6M\nG6DRMxFpXSpobhIR4s/4QXGs3Z7Dms3HGZESaXQkERERj7Ejfw/vHkqjxlFDv4je/DhlJoE+tlZ5\n7c+3Z1Nrr+f2cd3w9rr2ha5FRK6GCpobzRjVlY17T7J07SEGJIbi76tvt4iIyKXUOM7w/uGVbDm1\nAx+zN7OTb2d07HBMJlOrvH5ljZ2123MIsvqQOtB997iJiHwfXVTtRkEBPkwZlk
BZZR2fbs0yOo6I\niEiblll6nGe3vsiWUztICOzMo8N+yZi4Ea1WzgDWbs/mTF09U4cn4OOt0TMRaX0a0nGzG4fFs353\nHp9uy2bidZ0JsvoYHUlERKRNqXfW88nxtaw5/g8AJneZyPTESVjMrVuQqs/Y+Xx7DoEB3owfGNeq\nry0icpZG0NzMz8eLuyf1pLaunlVfHTc6joiISJtSUF3E4p1/5JPj6wj1C+GX1z3AzUlTWr2cAazd\nnkNNrYMpwxPw9dHomYgYQwWtFUwe0ZXIED/W78qloLTG6DgiIiKGc7lc/OPoJp7d9iInyrMZGn0d\nC4f9ku4hiYbkqal18Nm2bGz+3kwYpNEzETGOClor8PYyc9u4btQ7Xaz48qjRcURERAxVaa/iT9+8\nzavb/g+Lycx9fX7EvX1m4e/lb1imtTtyqK51MHlYPH4+ugNERIyjn0CtZFivaNZsyeLr/flMGZZA\nQnSg0ZFERERa3cHiw7x9cClldRX0juzB7B4zCfMLNTRTTa2Dz7ZmYfXzYuJ1nQ3NIiKiEbRWYjaZ\nmJnasGD1svRMg9OIiIi0Lnu9nWWHP+T3e/5Mhb2KW5Km8sT4XxpezgC+2JVL1RkHNw6N15I4ImI4\n/RRqRX0Sw0hJCOGbo6fJOFFCShfjfymJiIi4W27lSd7c/y55VaeIDojk3t6zSQjqjNls/PvEZ+oc\nrNmSRYCvF9cPjjc6joiIRtBak8lkYub47gC8vz4Tl8tlcCIRERH3cbqc/CNrA/+17SXyqk4xNm4k\njw79VxKC2s5lhOt35VFZY2fS0HgC/PS+tYgYTz+JWlm32CCGJEey/VAhOw8XMjg5yuhIIiIiLa60\ntoy3D/ydjJIj2LytzOl1J/0iehsdq5laez1rtpzA39fCDUPaTmkUkY5NBc0At6cmsfNwER+kH2Vg\njwgsbeASDxERkZayu2Aff8v4gCpHNX3CU5jT606CfNre5Fjpu3Ipr7Zz06iuWP28jY4jIgKooBmi\nU1gAYwfEkL47j037TjFuQKzRkURERK7ZGUcty458yOaT2/A2e3F3z1sZGzcSk8lkdLQL1Nnr+WRL\nFn4+FiYN1b1nItJ2qKAZ5ObRiWz+5hQrvjzK8N7R+HpbjI4kIiLfo6qqigULFlBWVobdbuehhx4i\nMjKS3/zmNwAkJyfz29/+ttkxdrudRx99lLy8PCwWC88++yzx8e23CBwry+LNA+9SVFNMvC2We/vM\nppM12uhY3yt9Tx5lVXVMH9kFm79Gz0Sk7dC1dQYJDfTlhiHxlFbWsW5HjtFxRETkEpYvX05iYiJv\nv/02S5Ys4emnn+bpp59m4cKFvPfee1RWVpKent7smI8++oigoCDeffddHnjgARYvXmxQeveqd9bz\n8bHPeWHnHymuOc2khPH8esjP23Q5szvq+eTrE/h6W7hRo2ci0saooBlo2ogErH5efLz5BFVn7EbH\nERGR7xEaGkppaSkA5eXlhISEkJubS//+/QGYMGECmzdvbnbM5s2bmTRpEgCjRo1i586drRu6FRTV\nFPPirldZfexzgnwC+cWgn3Jr92l4mdv2BTob9pyktLKOidfFERjgY3QcEZFmVNAMFODnzbSRXaiu\ndfDx5hNGxxERke8xffp08vLymDRpEnPmzOH//b//R1BQUNPnw8PDKSwsbHZMUVERYWFhAJjNZkwm\nE3V1da2a211cLhdfn9zOs1tf5GjZCQZHDeCxYb+iZ2iS0dEuy+5w8vHXJ/DxNjN5WILRcURELtC2\n3+LqAK6/rjNrt+ewdkcO1w/uTFiQn9GRRETkO1auXElsbCxvvPEGGRkZPPTQQwQGnpuV8ErWtbyS\nfUJDA/DyuvZ7kiMj3TdjYmVtFa9v/xtf5+zE38uPnw+/l7Fdhv3giUDcmfViPtl8nJKKWm5NTSKp\na/hVH9/aea+FsrqHsrqPJ+V1Z1YVNIP5eF
u4dUwif/kkgw83HePeqb2MjiQiIt+xc+dOxowZA0BK\nSgq1tbU4HI6mz+fn5xMV1Xxdy6ioKAoLC0lJScFut+NyufDxufTldCUl1decNTIykMLCims+z8Uc\nOv0tbx1cSmltGUnBXZnXexbh/mEUFVX+oPO5M+vFOOqdLP0sA28vM6n9Ol31a7d23muhrO6hrO7j\nSXlbIuulCt4VXeL4zDPPcPfddzNr1iz27t3b7HMnT55k9uzZzJw5kyeeeOKagnZUo/p1IiY8gC/3\nniSvqMroOCIi8h1dunRhz549AOTm5mK1WklKSmL79u0AfPbZZ4wdO7bZMaNHj2bNmjUAfPHFFwwf\nPrx1Q7cgu9NB2pGPeGn365TXVXBTt8n88roHCPcPMzraVfnqm1MUl9cyfmAcwTZfo+OIiFzUZQva\n1q1bOXHiBEuXLm2atep8zz33HPPnz2fZsmVYLBby8vLcFra9spjN3JGahMsFaRuOGh1HRES+4+67\n7yY3N5c5c+bw8MMP85vf/IaFCxfywgsvMGvWLBISEhg1ahQADz74IADTpk3D6XQye/Zs3nnnHR5+\n+GEjv4QfLK/yFP+9/WXWZW8gyj+CXw9+iCldr8ds8qzb2B31Tj766jheFjNThuveMxFpuy57iePm\nzZu54YYbAEhKSqKsrIzKykpsNhtOp5MdO3bwwgsvALBo0SL3pm3HBvWIICkuiJ2HC8nMLSMpLtjo\nSCIi0shqtbJkyZILtv/tb3+7YNsrr7wC0LT2madyuVyk53zFiszV2J0ORscO4/buN+Hn5ZkjT5v3\nn6Ko7AzXD+5MaKBnfg0i0jFc9u2voqIiQkNDm56HhYU1zVR1+vRprFYrzz77LLNnz263a7y0BpPJ\nxMzUhtmvlq3PvKKbyUVERNyhrLaCP+75X94/shIfiw8/7XcPP0qZ6bHlrN7pZPVXJ/CymJiq0TMR\naeOuepKQ84uDy+UiPz+fe+65h7i4OH7605+yfv16xo8f/73He8IMVe5wJXkjIwNZtyuP7QfzySqu\nYUgvYxb59KTvrbK6jyflVVb38KSs0nL2Fu7nnYxlVNqr6BXWk7m97iLYN+jyB7ZhX+/Pp6C0hgmD\n4jRbsoi0eZctaFFRURQVFTU9LygoIDIyEmhYuDM2NpaEhIZ3o0aOHMmRI0cuWdDa+gxV7nA1eW8e\n2YUdB/N5Y+U3xIf7Y/6B0xb/UJ70vVVW9/GkvMrqHi2VVSXPc9TW1/HBkVVsytuCl9mLmT1uJrXz\nKI+71+y7nE4XH311HIvZxLQRXYyOIyJyWZf9qTt69Gg+/fRTAPbv309UVBQ2mw0ALy8v4uPjOX78\neNPnExMT3Ze2A+gcZWNEn07kFFayZX++0XFERKQDOFGezXPbXmRT3hbibDEsGPILJsSP8fhyBrDl\nYD75JTWM6R9DeLBGz0Sk7bvsCNp1111Hnz59mDVrFiaTiUWLFpGWlkZgYCCTJk1i4cKFPProo7hc\nLnr27MnEiRNbI3e7dtvYRLZl5LP8y6MMSYnC28vzf0GKiEjb43Q5+ezEelYf+wyny8nE+LHcnDQV\nb3P7WCb1/NGz6Ro9ExEPcUU/gX/96183e56SktL0uEuXLrz77rstm6qDiwjxZ/ygONZuz2H97lwm\nDYk3OpKIiLQzxTWn+euB98gsO06IbzBze91FSlgPo2O1qO2HCjhZXM3Y/jFEhPgbHUdE5Iq0j7fI\n2qEZo7qyce9JPvrqOGP6xeDvqz8qERFpGVtP7WTpoRWcqT/DoMh+zE65A6t3gNGxWpTT5WLVpuOY\nTSamj+pqdBwRkSuma+faqKAAH6YMT6Ci2s6nW7OMjiMiIu1Atb2Gv+z/G3898B4unMzpdRf3953T\n7soZwM5DheQWVTGybzRRGj0TEQ+iYZk27Mah8fxjZy6fbstmwnWdCbb6GB1JREQ81JGSTP56YCkl\ntaUkBi
Uwr/dsIgPCjY7lFk6Xiw83HcdkghkjuxodR0TkqmgErQ3z8/HiplFdqa2r56NNx42OIyIi\nHsjhdLAy8xOW7HqdsrpypiVO4lfXPdhuyxnArsNF5BRWMqJ3NNFh7W90UETaN42gtXGpA2P5bFtW\nw2Qhw+J1mYaIiFyxU1UFvHngXbIrconwC+PePrNJDG7fsxm6XC5WbTqGiYb7uUVEPI1G0No4L4uZ\n28Z1o97pYsWGo0bHERERD+ByudiQs5nnti0huyKXETFD+Pdhv2z35Qxg97dFZBVUMqx3NDHhVqPj\niIhcNY8raGW15fx5+7ucKM82OkqrGdYrmoRoG18fyCcrv8LoOCIi0oaVnSnn1b1vsvTwcrzNXvyk\n71zm9roLP6/2v0iz6+y9Z2j0TEQ8l8cVtOIzJXyWuYHnd/yBVUc/xeF0GB3J7cwmEzNTkwBYlp5p\ncBoREWmrDpd8y6/XPMU3xQdJDu3OY8P/jUFR/YyO1Wr2HS3mxKkKhqREEReh0TMR8UweV9C6BXfh\nP8b/K8E+Qaw5vo7fbXuJ7Ipco2O5XZ/EMHp1CeWbo6fJOFFidBwREWmDPjn+D6rsNdzRfQY/H/gT\nQnyDjY7UalwuFys3HgfgJo2eiYgH87iCBtAvOoXHhv8bo2OHk1d1iv/a/jKrj37WrkfTTCYTM8c3\njKK9vz4Tl8tlcCIREWlr5vf5ES9Pf5KJCeMwmzzyV/wPtv/YaY6dLGdwciSdo2xGxxER+cE89qe3\nv5cfP0q5g58P+AnBPkF8fHwt/7399+RU5BkdzW0SY4IYkhzJsZPl7DhUaHQcERFpYwJ9bIQHhBod\no9W5XC5WbjoGaPRMRDyfxxa0s3qF9+Sx4b9iVMxQcirz+K/tL/PJsbXUO+uNjuYWt6cmYTaZSNtw\nlHqn0+g4IiIihjtwooTM3HIG9YggITrQ6DgiItfE4wsagL+XPz/udSc/GzAfm7eVj459xn/v+D15\nlaeMjtbiOoUFMHZADKdOV7Nx70mj44iIiBjK5XLx4caG0bObRycanEZE5Nq1i4J2Vp/wFB4f/jAj\nOg0huyKX321bwqfH/9HuRtNuHp2Ij5eZlRuPUWtvX1+biIjI1cjIKuVIThkDksLp0kmjZyLi+dpV\nQQMI8PZnbu+7eKD/vVi9A/jw6BoW7/gjJ6vyjY7WYkIDfZk0NJ7SyjrW7cgxOo6IiIhhVjXee3bz\nGI2eiUj70O4K2ln9Inrz2PCHGRp9HScqsnlu2xI+P7Eep6t93Lc1dXgCVj8vPt58gsoau9FxRERE\nWt2hrBIyskrp1y2cxJggo+OIiLSIdlvQAKzeAdzbZxY/7TcPfy8/VmR+zAs7/kh+VYHR0a5ZgJ83\n00d2pbrWwcdfnzA6joiISKv7cNNxAG4e3dXQHCIiLaldF7SzBkT24fHhDzMkeiDHyrN4dtuLrM1K\n9/jRtOsHxxEa6Mu6HTmcLj9jdBwREZFWcySnlIMnSuiTGEZSXMdZkFtE2j8vowO0Fpu3lfv6/IiB\nkf1471Aay79dzZ7Cb5jb6y6iAiKNjveDeHtZuHVMIn/5JIOVG49x37ReRkcSEWmX3n//fT788MOm\n53v27GHAgAFNzwsKCrjtttt44IEHmra9/PLLrFq1iujoaABuvvlm7rzzztYL3c5p9ExE2qsOU9DO\nGhTVj+4hiSw9vIJdBXt5ZuuL3JI0ldTOozCbPG9AcVS/TqzZmsXGfSeZPCyB2Air0ZFERNqdO++8\ns6lcbd26lU8++YRFixY1ff4nP/kJt9xyywXH3XPPPcyZM6fVcnYUmbll7D92ml5dQunROcToOCIi\nLcrzGkkLCPSx8ZO+c5jf58f4WLxZduRDXtz5GoXVxUZHu2oWs5mZqUm4XJC24ajRcURE2r0//OEP\n/OxnP2t6/tVXX9G1a1diYmIMTNWxrPrqOKDRMxFpnzpkQTtrcPQAHh/+
MAMi+5JZdoxntr7A+pxN\nHndv2sAeEXSPC2bn4UIyc8uMjiMi0m7t3buXmJgYIiPPXRr/1ltvcc8991x0/zVr1nDffffxz//8\nz2RnZ7dWzHbt2Mly9mYWkxwfQnJCqNFxRERaXIe7xPG7gnwC+ae+c9mRv5ulh1fw/uGV7C7Yx5xe\ndxHhH2Z0vCtiMpmYOT6J597ZyfvrM1nwo0GYTCajY4mItDvLli3jtttua3qen59PdXU1CQkJF+yb\nmprKiBEjGDp0KKtXr+app57itddeu+T5Q0MD8PKyXHPOyEjPWbD5arO++uEBAO6Z0duQr7M9f2+N\npKzu4UlZwbPyujNrhy9o0FBwhnQaRI/QJN49lMa+ogM8vfUFbkuazpi44R5xb1rP+BD6J4WzN7OY\nfUeL6Z8UYXQkEZF2Z8uWLTz++ONNz9PT0xkxYsRF9+3fv3/T44kTJ/L8889f9vwlJdXXnDEyMpDC\nwoprPk9ruNqsJ05VsPXAKXp0DqZTkG+rf53t+XtrJGV1D0/KCp6VtyWyXqrgtf3m0YqCfYP4537z\nmNd7FhaThaWHl/P73X+muKbE6GhXZGZqEiZg2fqjOF0uo+OIiLQr+fn5WK1WfHx8mrbt27ePlJSU\ni+7/1FNPsX37dqBhYuTMMioAACAASURBVJEePXq0Ss727MNNxwC4eUyirhQRkXZLBe07TCYTwzpd\nx+PD/42+4SkcKvmWZ7a+wKbcLbjaeOnpHGVjRJ9O5BRWsmV/vtFxRETalcLCQsLCwi7YFh4e3uz5\nE088ATTM/Pj8888zZ84c/vznP/PYY4+1at72Jiu/gl1HikiKC6J3F917JiLtly5x/B4hvsE80P8+\nvj61g2WHP+Rvh/5/e3ceH2V57338M2vWycpkIRtJgATCvi+yKoiKrUux+qBtT+05PS7H02p7tFTr\nOX1Eaw/42FqXVp/29HB83JAqLojVAhUICaAsYV9C9n3f13n+SBgIQsKSycyQ7/v14sUsd+755hLn\nnt/8rvu63+Wrsv0sT/0Wob6eu6TvrXMS2Xm4hL98cZIpqRFYzKrBRUT6w5gxY3jttdd6PPbKK6/0\nuG+32/nlL38JQEpKCm+++eaA5bvanVm5Ud0zEbm66dN7LwwGAzOjp/D49IcZHZbCocqjPJXxHNsL\nd3psN21IiB8LJsZSXtPM5j0F7o4jIiJyxfLL6tl9pIzE6CDGJHrHAl4iIpdLBdpFCPUN4f7x32d5\n6rcAB68ffoeX9v2R6hbPXNJ+6awEfK0mPth2iqaWdnfHERERuSIfbDsFdF33TN0zEbnaqUC7SAaD\ngVlDp/Hz6Q+TGjqCgxVHeCpjNTuKdnlcN83mb2XJ9Hjqm9rYmJnr7jgiIiKXraC8gV2HS0mIsjEu\nObzvHxAR8XIq0C5RmG8oD074AXel3Eano5M1h97mlX3/RU1Lrbuj9bB4ahxBAVY2ZuZR09Dq7jgi\nIiKX5aPtp3Cg7pmIDB4q0C6DwWDgmpgZ/Hzaw4wMHU5WxSGeylhNZvGXHtNN87WauXnWMFraOviw\ne2qIiIiINymqaCDjUAnxEYFMGK7re4rI4KAC7QqE+4XxLxN+wLdH3kJ7Zzt/Pvgmr+7/b2pbPeMi\ne/MmDMUe4svmPQWUVje5O46IiMgl+XB7Dg4H3KyVG0VkEFGBdoWMBiNzY2fx8+kPMyIkib3lB3gq\nYzW7Sva4vZtmNhm5bW4yHZ0O3vv7SbdmERERuRQllY3sOFhMrD2AiSPVPRORwUMFWj8Z4hfOQxP/\niWUjvklrRxt/OvD/eC3rf6hrrXdrrqmjIoiPDGTHwRJyij2jsyciItKXD9NPObtnRnXPRGQQUYHW\nj4wGI/PjZrNi2o9JDh7GnrL9PJWxmvS83W7MZOBb85MBeHfLCbflEBERuVil1U2kZ5UwdEgAk1Ps\n7o4jIjKgVKC5QIT/EH406Z+5fcTN
tHS08H+2v8Yfs16nvrXBLXnShoUxKiGUrOxKDuVUuSWDiIjI\nxfpo+yk6HQ5unjVM3TMRGXRUoLmI0WBkYdwcfjb1R4wMT2J36V6eyljNnrKsAc9iOKuLtnbzCbef\nGyciInIh5dVNbM8qJjrcn6mpEe6OIyIy4FSguVhkQAS/XPgItw6/iaaOZl7d/9/86cD/o75tYLtp\nidFBTEmxk11Uy+4jZQP62iIiIhfr4x05dHQ6WDprGEajumciMvioQBsARqOR6+Ln8bOp/0pCUBy7\nSvawMuM59pUdGNAct81LxmgwsO7vJ+no7BzQ1xYREelLRU0zX+wrIjLUj2mj1D0TkcFJBdoAigqI\n5JFJ9/PN5BtobGvk9/v/zJ8PvkljW+PAvH6YP3PHR1Nc2cjWfUUD8poiIiIX6+OMM90zk1EfUURk\ncNK73wAzGU0sTljAo1P/lXhbDJnFX/JUxnNklR8akNe/eXYiVrOR97Zm09LWMSCvKSIi0pfK2ma+\n2FuIPcSXGWmR7o4jIuI2KtDcZGhgFD+Z/CA3J11PfVsDL+/7E2sOvU1jW5NLXzfU5sOiqXHU1Lfy\n2a48l76WiIjIxdqQkUt7h4OlM9U9E5HBTe+AbmQymlgy7FoenfoQcYFD2VG0i5WZz3Gw4ohLX/eG\n6fEE+Jr5eEcu9U1tLn0tERGRvlTXt7BlTyFDgn2ZOSbK3XFERNxKBZoHiAmM5qdT/oWbEhdR21rH\ni3v/L68fWktTe7NLXs/f18JNM4fR1NLOxztyXPIaIiIiF2vDjlzaOzq5aWYCZpM+mojI4KZ3QQ9h\nMpq4MXER/zblIWICo9lelMnKjOc4XHnMJa937eQYQm0+fL47n8pa1xSCIiIifampb2HzngLCg3yY\nPTba3XFERNxOBZqHibMN5d+m/As3DLuWmtZaXtjzKm8cfpfmfu6mWcwmbpmTSFt7J+9vze7XfYuI\niFysTzJzaWvv5MaZw9Q9ExFBBZpHMhvNLE26np9OfpChAVFsLcxgZeb/4Ujl8X59ndljohk6JICt\n+4soKB/YC2eLiIhU17Ww6asCQm0+XKPumYgIoALNo8UHxfJvUx/i+oSFVDVX89s9f+CtI+/R3N7S\nL/s3Gg3cPjcJhwPWbTnRL/sUERG5WO9tOU5rWyc3zkjAYtZHEhERAPPFbPT000+zd+9eDAYDK1as\nYNy4cc7nFi5cSFRUFCaTCYBVq1YRGanrl/QXi9HMN5KXMN6exn8ffIu/F2znYMVh7h51ByNCk654\n/xNGDGF4TDBfHSvneEENdrutH1KLiIj0rq6xlY+2ZRMSaGXueHXPRERO6/PrqszMTHJycnjrrbdY\nuXIlK1eu/No2r776KmvWrGHNmjUqzlwkISiOx6b+K4vi51PRXMXzX73CO0ffp6Wj9Yr2azAY+Nb8\nZADWbj6Bw+Hoj7giIiK9+nRnHs2tHdwwIwGL2eTuOCIiHqPPDlp6ejrXXXcdAMnJydTU1FBfX09g\nYKDLw0lPFpOFW4bfyHh7GmsOvc3m/G0c6O6mDQ9JvOz9jowLYVxyOPtOVLD7cCkJQ/z7MbWIiPd7\n5513WL9+vfN+VlYWY8aMobGxEX//rvfMRx99lDFjxji3aWtr47HHHqOwsBCTycQzzzxDXFzcgGf3\nRPVNbXy+O58Qmw/zxg91dxwREY/SZ4FWXl5OWlqa835YWBhlZWU9CrQnn3ySgoICJk+ezCOPPILB\nYLjg/kJD/TH3wzdl3jYVrz/z2u1jmDBsJG9mfcBHRz7n+S9f4aaRC7lz7Dewmq2Xtc9/vHUcD63e\nxKr/2cWNsxNZek0SYUG+/ZbZVbzp34E3ZQXvyqusruFNWV1t2bJlLFu2DOiaWbJhwwaOHz/OM888\nw8iRI8/7Mx9++CFBQUGsXr2arVu3snr1ap5//vmBjO2x/trdPVu+JBWrRd0zEZGzXdQ5aGc7dwrc\n
Qw89xJw5cwgODuaBBx5g48aNLFmy5II/X1XVeOkpz2G32ygrq7vi/QwUV+W9IWYxIwNG8j+H3ubD\no5+Tmb+Xe0Z9m6TghEveV4DZwD2LU3hvazbvfH6Mv2w+zoy0KK6fFk/MkIB+z94fvOnfgTdlBe/K\nq6yu0V9Zr8Yi78UXX2TVqlU8/PDDvW6Xnp7OLbfcAsCsWbNYsWLFQMTzeI3NbXy2O48gfwtLZg6j\nrqbJ3ZFERDxKn+egRUREUF5e7rxfWlqK3W533r/lllsIDw/HbDYzd+5cjh496pqkcl7JIcP42bQf\nsSDuGsoaK3hu90v85fhHtHW0XfK+5k+M4Y9PLOY716cQHuTL1n1FPPFaBs+/s5fDOVU6P01EBr19\n+/YRHR3tPA7+9re/Zfny5fziF7+gubnn9SrLy8sJCwsDwGg0YjAYaG29svOGrwZ/3ZVPU0sH10+P\nx9d6yd8Ti4hc9fp8Z5w9ezYvvPACd955JwcOHCAiIsI5vbGuro4f/ehHvPzyy1itVnbu3Mn111/v\n8tDSk9Vk5VsjvsH4IWP4n0Nv81nuFvaXH+I7o+9gWFD8Je3Lx2Ji/sQY5k4Yyp5j5XySmcu+ExXs\nO1FBQpSNJdPimZJqx2TUcsgiMvisXbuWW2+9FYDvfOc7pKSkEB8fz5NPPsnrr7/Ovffee8GfvZgv\nua720wAamtr4bHc+QQFWli1KBTw364V4U15ldQ1ldR1vyuvKrH0WaJMmTSItLY0777wTg8HAk08+\nybp167DZbCxatIi5c+fy7W9/Gx8fH0aPHt3r9EZxrRGhSayY/jDvn9jAlvxtrNr1IosS5nNj4iIs\nxkv7ltJoMDBppJ1JI+0cL6hhY0YuXx4t4/frD7B2sy+Lp8YxZ3y0vv0UkUElIyODxx9/HIBFixY5\nH1+4cCEff/xxj20jIiIoKysjNTWVtrY2HA4HVmvv5wlf7acBfLAtm4amNm6fl0R9bRN+Hpz1fDx5\nbM+lrK6hrK7jTXn7I2tvBd5Ffbr+yU9+0uN+amqq8/Z3v/tdvvvd715mNOlvPiYrd4z8JhPsXd20\nT3M2sb/8IPeMuoOEoMtbPWx4TDDDbxtLSVUjn+7MY9u+It74/Bjvb81mwaQYrp0cS0igTz//JiIi\nnqWkpISAgACsVisOh4N/+Id/4Le//S1BQUFkZGQwYsSIHtvPnj2bTz75hDlz5rBp0yamT5/upuSe\noamlnU935hHga2bhpFh3xxER8Viap3aVGhmazIppDzMnZiZFDSWs2v0iH5zcSHtn+2XvMzLUn3sW\np/Cf98/ilmsSMZkMfJSew7+9vJ0/fnSIgvKGfvwNREQ8S1lZmfOcMoPBwB133MH3vvc9li9fTnFx\nMcuXLwfgvvvuA+DGG2+ks7OTu+66i9dff51HHnnEbdk9wd++zKehuZ3F0+Lx89HsCxGRCzE4Bnjl\nh/5aFcxbWqDg/ryHK4/xP4feoaqlmqEBUXxn9LeJs8Wcd9tLydra1sH2rGI2ZuZSUtW1Cte45HCu\nnxZPanxIr5db6A/uHtdL4U1ZwbvyKqtraBVH97haj5HNre3828vpdHY6+PV9s/D37SrQPDFrb7wp\nr7K6hrK6jjfl9YgpjuLdUsNG8PPpD/OX4x+xrTCDX+96gSUJC7l+2ELMl3hu2tmsZy0osvdYORu0\noIiIiJzHpq8KqG9q45vXJDqLMxEROT+9Sw4SfmZf/lfq7Uywj+H1w2v5+NRn7Os+Ny3WNvSK9m00\nGJg40s7E0wuKZOby5REtKCIiItDS1sEnGbn4+ZhYNEXnnomI9EWtjUFmdHgKj09/mJnRU8mvL+TX\nu15gQ/ZndHR29Mv+h8cE88CtY3n6hzNYMCmGusZW3vj8GD95cTvvbjlBdX1Lv7yOiIh4h81fFVDX\n2MZ1k+Pw97W4O46IiMdTgTYI+Zn9uHvUMu4b9w8EWgL4MPtT/n
P37yisL+6317jQgiI/fal7QZGy\n+n57LRER8UytbR1syMjF12pi0dTLW0lYRGSw0ZyzQWzMkFE8Pv1h1h77gIzi3Ty78zdML56IudOK\nn8kXX7MvvmYffE2++Jm775t8etw2GXu/oKrN38o3rklkyfR4th8oZmNmHlv3F7F1f9GALigiIiID\nb8ueQmobWrlpZgKBfuqeiYhcDBVog5y/xZ/vjP42EyPG8sbhd9mWu+uSft5qtDgLOT+TX1dBZ/bt\nLvB8ehR1gdG+/K/bgskv8mHngUr25+Wz71Qx8fZQbpiWoAVFRESuIm3tHXyckYOPxcRidc9ERC6a\nCjQBYOyQ0YyaNRJTYCeFpRU0tTfT3P2nqaOZ5vaW89xuobm9ieb2Fpram6lqrqbtYq+zFgG+EV03\ny4A/F5hYk2chwMePUP8AAix+ZxV+vue57Uu0IZTm+s7ujp4PPiYfjAYVeCIinuDve4uoqW/lhhnx\n2Pyt7o4jIuI1VKCJk9loxh5gwxB4+QfS9s52mju6C7juQq65o7m74Gs57+265kbK6uqob2uirqWB\n+vYaMHZe1uv7mnzOOxXzvLcv0Omzmqwq9ERErkBbeycf78jBajFy/dR4d8cREfEqKtCkX5mNZgKN\nZgItAZf8s3WNrWz6qoDPd+dT19SCydLOpNRQpo0Jw2Yzdhd7Lc7untHHQUVt7deLwPZm6tsaKGuq\noMNx6atTGjDg4yzkzj4H75zbZt8LP2fyxcdk1bl1IjIobd1fRFVdC9dPiyMoQN0zEZFLoQJNPIbN\n38o3ZieyZNqZBUV27mtg574GxiaFs2R6PGPPWlCkr6u4OxwOZ0evqXsq5tmFXNd0zXNvd3f+uu/X\nttRR0lFGp+PSO3oGDM7OnM03AB+DD4GWAAKsAQSa/bv+tgQQYPHv/rvrtq/JR4WdiHit9o5OPk4/\nhcVsZMn0BHfHERHxOirQxONYLSbmT4hh7vih7D1ezicZuew/WcH+kxUkRNpYMj2eKan2PvdjMBiw\nmCxYTBZs1sDLzuNwOGjrbOtRuDV1d/Oct3sUfz07fc0dLZQ3VtLY1nRRr2cymAi0+DsLth5FnDWA\nALM/gWcVdwGWABV1IuIxtu0voqK2hUVT4ghW90xE5JKpQBOPZTQYmDjCzsQRdk4U1LAxM5fdR8v4\n/foDrN3sy60LhjMxKQw/H9f+MzYYDFhNVqwmK8HYLmsfdruN4pJqGtobaWhrpL61gYa2hq7bbQ3U\nd99uaGugvvvvqpYaChsu7tp0JoOpRzEXYAkg8Nz71p7dOhV1ItLf2js6+Sg9B7PJyA0zdO6ZiMjl\nUIEmXiE5Jpj7bx1LaVUjn+7MY+u+Il57Pwt/HzPzJ8Zw7eRYQm0+7o7ZK5PRRJDVRpDVBhd5il5H\nZweN7U1dRVxrQ1eB13qmoOv6+8zt/ijqAiwBRFaGQUv382d163xNvirqROSC0rOKKa9p5trJsYQE\nevZ7soiIp1KBJl4lItSfuxencMucJDKOlLH+7yf4eEcOGzNzmZkWxfXT4oixX/50Rk9jMpqwWQO7\npmheRlHnLOJaz9+ta2hrpPp8RV3O+fdtNBjPe95c4FnF3emiLsAcQKBVRZ3IYNHR2cmH6acwmwzc\nMF3dMxGRy6UCTbxSoJ+FOxelMCctkvTuBUW27i9i6/6irgVFpsWRmhA6KAuDHkXdRTpd1J2eYmny\n66SwopyG1kbq2xu6/j6rsKtpqaWooeSi9m00GM8q4s7u1gWcp4Onok7EW+04UEJZdTMLJsUQFuTr\n7jgiIl5LBZp4NavFxLwJMczpXlBk4zkLilw/PY6pqRGYjLquWW/OLersdhtlPhdeIRO+XtQ1tJ0z\n9bK1kYb2Bupbu56rbam7rKLu/N26M0Wdwy+S9g4jFpPlisdBRC5PR2cnH24/hclo4Eat3CgickVU\noMlVoceCIoU1bMzoWlDkD+
sP8u7mEyyaGs+ccdEuX1BkMLmyTt3Xz5/rsVBKd3FX21JHcUMpDhx9\n7tvP7Eew1UaQTxBB1kCCrUEE+dgItgYR7GPrPv8vCD+zunMi/S3zUCklVU3MnzCU8GB1z0REroQ+\nrcpVJ3nomQVF/rozny/2FfLm58dYvzXbaxYUuVpdTlHX6eiksa3pPKtddv1pM7RQWltJbWsdNa21\nFDeW9ro/i9FM0DlFW9ftno/ZrAEYDeq8ivSls9PBB9u6u2cz1D0TEblSKtDkqhUR6s/yxSP55pxE\nNn2Zz+e7850LisxIi2TJtPirakGRq5XRYOxaSdJ6/lVSzr1geVtnO3WtddS01FHbWkdta2337TN/\n17bWc6o2r9cLkBsNRmyWgO6OnM3ZnTvTpet+zGrT9EoZ1HYeLqW4spE546IZEuLn7jgiIl5PBZpc\n9QL9LNw8O5El0+PZntW1oMi2/cVs21886BcUuRpZjGbCfEMJ8w3tdbtORycNbY3UtNRS09pdzJ2+\nfdZjJQ2l5NUV9Lovf7PfmeLNajsztfLsYs7HhsOhLwTk6tLpcPDB9lMYDQZumjXM3XFERK4KKtBk\n0LCYzywosu94BZ9k5GhBkUHMaDA6p1vG9rKdw+GguaOlR/HWNZ2y7kxnrrWOupY6ivtYBMVqshBk\n6Vm0BZ2nkAu0aHqleIfdR8ooLG9g9tgoItQ9ExHpFyrQZNAxGgxMGDGECSOGdC0okpnH7iOlWlBE\nzstgMOBn9sXP7EtkQESv27Z1tlN7VtFWe87UyobOBiobajhVm3sR0ysD+zxPLsjHhsWof6fiHp0O\nBx9sy8ZggKUzh7k7jojIVUNHdhnUkocGc/8twWcWFNnftaDI+1uzmT9xKNdNjtOCInLRLEYz4X6h\nhPudf3rl6fPlOh2d1Lc1nHNu3NfPlytqKCG3j+mVAWZ/gs4p5L52npxPEL4mH03jlX711dEy8ssa\nmJkWRWSYv7vjiIhcNVSgiXD+BUU27Mjl08w8ZqRFcv20eGK1oIj0E6PB2F1Q2YChF9yua3pl89cK\nuZrWWmpb6s6cL3cRFw63Gi19LngS7BNEgMVf0yulTw6Hg/XbTnV1z2Zp5UYRkf6kAk3kLGcvKJJ+\noIRPMnK1oIi4Tdf0Sj/8zH5E9TW9sqPNeW5cbS8Ln2TX5PR6XbnTxWN8yFC+k3IXfmZd0wrgnXfe\nYf369c77WVlZvPHGG/zyl7/EaDQSFBTE6tWr8fM7cx7WunXr+M1vfkN8fDwAs2bN4r777hvw7K6w\n53g5eaX1zBgdSXT4+VdYFRGRy6MCTeQ8LGYTc8cP5Zpx0V0LimTmfm1BkSkpEZhN6jSIZ7CYLIT7\nhRHuF9brdp2OTupaG7o7crXdUyvrvnYpgpKGcto72wcovedbtmwZy5YtAyAzM5MNGzbw1FNP8dhj\njzFu3DieffZZ1q1bx/Lly3v83I033sijjz7qjsgu43A4WL/1FAbQyo0iIi6gAk2kF70uKBJ0gkVT\n4pgzfqgWFBGvYTQYu85T87ERZ4u54HbnXl9OznjxxRdZtWoVfn5+BAZ2TX0OCwujurrazckGxr4T\nFeSU1DE1NYKYIeqeiYj0N339L3KRuhYUGcMzP5zJtZNiqWtq482/HecnL23nnc3HqaprcXdEEXGx\nffv2ER0djd1udxZnjY2NvP/++yxZsuRr22dmZnLvvffy3e9+l4MHDw503H7Xde5ZNgA3zx7m3jAi\nIlcpfe0vcokiQvzOLCjyVYEWFBEZRNauXcutt97qvN/Y2Mh9993H97//fZKTk3tsO378eMLCwpg/\nfz5fffUVjz76KB988EGv+w8N9cdsNl1xTrvddsX7OJ/dh0vILqpj1rhoJo6O7pd9uiqrq3hTXmV1\nDWV1HW/K68qsKtBELlOgn4WbZw1jybQ40g+UsDHzzIIiY5LCuGFavBYUEbnKZGRk8PjjjwPQ
3t7O\n/fffz9KlS7ntttu+tm1ycrKzaJs4cSKVlZV0dHRgMl24AKuqarzijK6anupwOFjzUVcXcPHk2H55\nDW+bSutNeZXVNZTVdbwpb39k7a3AU4EmcoV6LChyooJPMnLJOllJ1slK4iMDWTItnhvm6DwNEW9X\nUlJCQEAAVqsVgFdffZVp06Y5Fw8516uvvkp0dDRLly7l6NGjhIWF9VqcebqDp6o4UVjLxBFDiI/0\nnm+5RUS8jQo0kX5iNBiYMHwIE4YP4WRhLRszc9l1pJQ/fHCQdX8/yfjkIYxNDiMlPhQfi/d+SBMZ\nrMrKyggLO7NK5uuvv05sbCzp6ekATJ8+nQcffJD77ruPl19+mZtvvpmf/vSnvPnmm7S3t7Ny5Up3\nRb9iDoeD97vPPfvG7EQ3pxERubqpQBNxgaShQdx3yxhKq5v46848tmcV8/mX+Xz+ZT5mk5GU+BDG\nJoUzNimMqDB/TYMU8QJjxozhtddec97funXrebd7+eWXAYiKimLNmjUDks3VDudUcTy/hgnDh5AQ\npe6ZiIgrqUATcaGIED+WLxrJA3dMZMeefPZnV7D/RCUHsrv+vPk5DAn27S7WwhmVEIqPVd01EfEs\n67edArRyo4jIQFCBJjIALGYjqQmhpCaEsmw+VNW1kNV94esDp6rY9FUBm74qwGwyMCK2u7uWHM7Q\ncHXXRMS9juRWcSSvmrFJ4SRGB7k7jojIVU8FmogbhNp8mDN+KHPGD6Wjs5MTBbXs7y7YDuVUcSin\nirc3HSc8yIcxZ3XXdEFsERlop7tn31D3TERkQOjTnoibmYxGRsaFMDIuhNvnJVNT30JWdmVXdy27\nki17CtmypxCT0cCI2GDndMgYe4C6ayLiUkfzqjmUU0VaYhjJMcHujiMiMiioQBPxMMGBPsweG83s\nsdF0dHaSXVjn7K4dzq3mcG4172w+QajNhzGJYYxNCmf0sDD8ffW/s4j0rw+2nwLgm1q5UURkwOgT\nnYgHMxmNDI8NZnhsMLfOTaK2oZUD3d21rOxKvthXxBf7ijAaDAyPCWJscld3LS4iUN01EbkiJwpq\nOJBdyaiEUIbHqnsmIjJQVKCJeJGgACszx0Qxc0wUnZ0OsotryTrZVbAdy6/haH4N7245SXCAlTFJ\nXd21tMQwAnwt7o4uIl5G556JiLiHCjQRL2U0GkgeGkzy0GC+eU0idY2nu2uVZGVXsG1/Mdv2F2Mw\nQPLQYMYmhTE2OZz4SBtGdddEpBcnC7sWLkqNDyElPtTdcUREBhUVaCJXCZu/lRlpUcxIi6LT4SCn\nuK57Kf9KThTWcLyghr98kU2Qv4W0xHDGJocxJjGcQD9110Skpw+2ZQNws849ExEZcCrQRK5CRoOB\nxOggEqODuHl2IvVNbRw81X3u2slK0g8Uk36gq7uWFB3kXMp/WLS6ayKDXU5xHXtPVDAyNpjU+BB3\nxxERGXRUoIkMAoF+FqaNimTaqEg6HQ7ySurJyq5g/4kKjhfUcqKwlve3ZhPoZ2FMYhizJsQQP8Sf\nIH+ru6OLyABbf7p7dk2iFhsSEXEDFWgig4zRYCAhykZClI2bZg6jsbmNg6eqnEv57zhYwo6DJRiA\nhChb13XXksNJig7CaNSHNZGrWW5JHV8dKyc5JojRCTr3TETEHVSgiQxy/r4WpqRGMCU1AofDQX5Z\nA9kl9ezYX8ix/BpOFdfxwfZTBPiaSeu+7tqYpHCCA9RdE7nafNC9cuM3Z6t7JiLiLhdVoD399NPs\n3bsXg8HAihUrGDdu3Ne2Wb16NXv27GHNmjX9HlJEBobBYCAuIpBJadHMHRtFU0s7B09VdU2HPFlB\n5qFSMg+VApAQJdFItQAAFdVJREFUaXMu5Z8cE4TJaHRzehG5Evml9ew+WkZidBBpiWHujiMiMmj1\nWaBlZmaSk5PDW2+9xYkTJ1ixYgVvvfVWj22OHz/Ozp07
sVi0GpzI1cTPx8zkFDuTU+w4HA4KyxvY\n333dtaN51eSU1PFReg5+PmbShoU6u2uhNh93RxeRS/TB9lNA13XP1D0TEXGfPgu09PR0rrvuOgCS\nk5Opqamhvr6ewMBA5za/+tWv+PGPf8zvfvc71yUVEbcyGAzE2AOJsQeyZHo8za3tHMqpcl4oe9eR\nMnYdKQMg1h7I2OQwxiWFkxwTjNmk7pqIJysob2DX4VISomyMSw53dxwRkUGtzwKtvLyctLQ05/2w\nsDDKysqcBdq6deuYNm0aMTExrkspIh7H12pm4gg7E0d0ddeKKxvZf6KC/dmVHMmtJr+sng07cvG1\nmhg9LKzrQtlJ4YQF+bo7uoic48Ptp3Cg7pmIiCe45EVCHA6H83Z1dTXr1q3jT3/6EyUlJRf186Gh\n/pjNpkt92a+x221XvI+B5E15ldU1vCkrXHreiIggxqVGAdDc0s7+E+V8ebiU3YdL+fJoGV8e7equ\nJUTZmJQayeTUCEYnhmMxX3l3zZvGVlnF0xRVNJB5sIT4iEAmDB/i7jgiIoNenwVaREQE5eXlzvul\npaXY7XYAduzYQWVlJcuXL6e1tZXc3FyefvppVqxYccH9VVU1XnFou91GWVndFe9noHhTXmV1DW/K\nCv2Td5g9gGH2RG6bk0hJZSP7ui+SfTi3ipzNx/nL5uP4WEyMSghlbHI4Y5PCGBLs55asA2UwZlWR\n5/lOd89u1sqNIiIeoc8Cbfbs2bzwwgvceeedHDhwgIiICOf0xiVLlrBkyRIA8vPz+dnPftZrcSYi\ng1NkmD+LwvxZNCWO1rYOjuRVd193rZI9x8vZc7zrS6DocP+u664lhTMyLqRfumsicmEllY3sOFhC\nrD2AiSPVPRMR8QR9FmiTJk0iLS2NO++8E4PBwJNPPsm6deuw2WwsWrRoIDKKyFXEajE5izCA0uom\n9p+oIOtkBYdyq/h0Zx6f7szDajGSGh/qvFB2RMild9dEpHcfbj+FwwHfmJ2IUd0zERGPcFHnoP3k\nJz/pcT81NfVr28TGxuoaaCJyySJC/Lh2cizXTo6lrb2Do3k13d21Cvad6PrDX7u6cGMTwxibHE5K\nXAhWy5WfyyoymJVWNZJ+oISYIQFMSrG7O46IiHS75EVCRERcxWI2kZYYRlpiGHdeO4Ly6ib2Z1eS\ndbKCgzlVfLY7n89252MxG0mJD2FsUjhzJsXhY3Do3BmRS/Rheg6dDgc3zx6m7pmIiAdRgSYiHmtI\niB8LJsawYGIM7R2dHMur7rpQdnbXgiNZJyt547NjBAdYSYkPITUhlFHxoUSE+qlgE+lFWXUT6VnF\nRIf7MyUlwt1xRETkLCrQRMQrmE1GRg0LY9SwMO5gOJW1zew/WUF2cT17jpWReaiUzEOlAITafLoK\ntvhQUuNDsIeoYJMr884777B+/Xrn/aysLN544w3+/d//HYCUlBT+4z/+o8fPtLW18dhjj1FYWIjJ\nZOKZZ54hLi5uIGNf0Mc7cujodLB01jCMRv2/ISICsHnz58yff22f261cuZKlS29n6FDXXAdaBZqI\neKWwIF/mTYjhW3YbpaW1FFc2cjinisO51RzOrWLHgRJ2HCjp3taH1PhQUuJDGBUfyhAtOCKXaNmy\nZSxbtgyAzMxMNmzYwMqVK1mxYgXjxo3jkUceYcuWLcybN8/5Mx9++CFBQUGsXr2arVu3snr1ap5/\n/nl3/QpOFTXNbN1XRGSYP9NHRbo7joiIRygqKuSzzzZeVIH285//3KWXzVGBJiJez2AwEB0eQHR4\nAAsmxeJwOCgsb3AWa0dyq9meVcz2rGIAhgT7OjtsoxJCCQvydfNvIN7kxRdf5JlnnuHuu+9m3Lhx\nACxYsID09PQeBVp6ejq33HILALNmzfKYy9A4u2czE9Q9ExHp9txzz3Lo0AHmzJnK4sU3UFRUyPPP\nv8Qzz/ySsrJSmpqa
+P73/4nZs+dwzz338OCDD7Np0+c0NNSTm5tDQUE+Dz30CDNnzr7iLCrQROSq\nYzAYiLEHEmMP5NrJsXQ6HBSUNXA4t4rDOVUczatm2/5itu3vKtgiQvyc57ClxocSavNx828gnmrf\nvn1ER0djMpkICgpyPh4eHk5ZWVmPbcvLywkLCwPAaDRiMBhobW3FarUOaOazVdY288W+QiJC/JiR\npu6ZiHimt/92nJ2HS/t1n1NTI7hj4fALPn/XXfewbt3bJCYmk5t7ipdeeo2qqkqmTZvBDTcspaAg\nnyeeeIzZs+f0+LnS0hJWrfotO3Zs5/3331WBJiJyMYwGA3ERgcRFBLJoShydDgf5pfXOKZFH8qr5\nYl8RX+wrAiAy1M9ZrKXGhxAcqIJNuqxdu5Zbb731a487HI4+f/ZitgkN9cdsvvJLSNjttvM+vu6L\nbNo7HNx1fQpRkcFX/Dr94UJZPZU35VVW11BW1zmd18/fisnUvx1+P39rr+MREuKPj4+FgAAfpk6d\njN1uIyTElzffPMa//Ms/YjQaaWioc+4jNDSAgAAfZs6cjt1uIyUlkZaWpn4ZcxVoIjLoGA0G4iNt\nxEfaWDwtns5OB7mldRzO6ZoSeTSvmi17CtmypxCA6HB/5zlsqfGhBAW4rwMi7pWRkcHjjz+OwWCg\nurra+XhJSQkRET1XQ4yIiKCsrIzU1FTa2tpwOBx9ds+qqhqvOKPdbjvvuRFVdS18siOHIcG+pMWH\nuPT8iYt1oayeypvyKqtrKKvrnJ335hnx3Dwjvt9fo7fxqK5upKWljYaGFiwWP8rK6tiw4UNKSsr5\nzW9+T21tLT/4wT3OfVRVNfTYtqqqgdbW9ose894KORVoIjLoGY0GhkUFMSwqiCXT4+no7CSnuL5r\nSmRuFcfyatj0VQGbvioAIGZIgLNgS4kPweavgm0wKCkpISAgwFlkJSUlsWvXLqZMmcKnn37KPffc\n02P72bNn88knnzBnzhw2bdrE9OnT3RHbaUNGDu0dnSydNQyzyejWLCIinsZoNNLR0dHjserqaqKj\nh2I0Gtmy5W+0tbUNSBYVaCIi5zAZjSQNDSJpaBA3zkigvaOTU8V1HOk+h+1YQQ0FXzbw+Zf5AMTa\nA0ntPoctJT6EAF+Lm38DcYWysjLnOWUAK1as4Be/+AWdnZ2MHz+eWbNmAXDffffx8ssvc+ONN7J9\n+3buuusurFYrv/rVr9wVnZr6FrbsKSQ8yIdZY6LclkNExFMlJCRy5MhhoqOHEhISAsD8+Qt57LGH\nOXgwi5tu+gYRERH86U+vujyLwXExk+L7UX+0Wr25ZevplNU1vCkreFded2Rt7+gku6jWeQ7b8YIa\n2to7ATAAcRGBznPYRsaF4O9rdlvWy9VfWb3t/Ad3c9Ux8q2/HWNjZh73XJ/CgomuuW7P5fCm/yfA\nu/Iqq2soq+t4U97+yKopjiIi/chsMjIiNoQRsSHcPBva2js5WVjTteBIbhXHC2rJLa3n0515GAwQ\nH2ljVHwo08cNJcJmxc9Hb70ycGobWtn0ZQGhNh+uGRvt7jgiItIHfUoQEblCFrORlPhQUuJDgURa\n2zo4UVjrnBJ5orCWnOI6PsnMxWgwkBBlIzWha8GREbHB+Fr1ViyuszEzl9b2TpbNSMBi1rlnIiKe\nTp8KRET6mdViYlRC10WwmQMtbR0cL6ghr7yRLw+VkF1US3ZRLRt25GIyGhgWbete0j+U4bHB+Fiu\nfJl1EYC6xlb+9mUBIYFW5o5X90xExBuoQBMRcTEfi4m0YWHMn5rAkimxNLe2c7ygxrmsf3ZhHScK\navkoPQeT0UDS0CBS4kMZFR9CckwwVhVscpk+3ZlHS1sHt81LwtIP11cTERHXU4EmIjLAfK1mxiSG\nMyYxHICmlnaO5dd0LeufU8XxghqO5dfw4XYwmwwkDQ0mNT6EUQmhJA0N0gdtuSj1TW
18tjuf4AAr\n88YPdXccERG5SCrQRETczM/HzLjkcMYldxVsjc3tHM2v7j6HrZpjedUczatm/bZTWMxGkocGOVeJ\nTBoapGtayXl9ujOPltYObr0mUV1YEREvogJNRMTD+PuamTB8CBOGDwGgobmNo3nVzimRh3OrOZxb\nDWRjNRsZHhvsPIdtWLRNBZvQ0NzG57vzCPK3MM+DltUXEfF23/rWzXz88UcufQ0VaCIiHi7A18LE\nEXYmjrADXVPXjuSeLtaqOHiq6w90ne82IjaYlO4LZw+LsmEyqmAbbP66M4+mlg6WLhimRWdERLyM\nCjQRES8T6GdhcoqdySldBVttYytHc6s5lFvFkdxqsrIrycquBMDHamJkbIhzWf+ESBtGo8Gd8cXF\nGpra+OuufAL9LB51UWoREU/2/e8v5+mnVxMVFUVxcRE/+9kj2O0RNDU10dzczI9//FNGjx4zIFlU\noImIeLkgfytTUiOYkhoBQE1Da9f5a7nVHM6pYv/JCvafrADAz+d0wdY1JTIuIlAF21Xmw60naWpp\n5/Z5SbrGnoh4pXXHP+Sr0v39us+JEWO5bfjSCz4/d+4Ctm37O7fffgdffLGFuXMXkJw8grlz57N7\n905ef/3PrFz5n/2a6UL0zi0icpUJDrAybVQk00ZFAlBV13KmYMutYu+JCvae6CrYAnzNjIzr6q6l\nxIcQGxGI0aCCzVs1tbTz3pYTBPiaWTgp1t1xRES8xty5C/jd757n9tvvYOvWLTz44I958801vPHG\nGtra2vD19R2wLCrQRESucqE2H2akRTEjLQqAytpmjjinRFbx1bFyvjpWDnRNn0yJC+GaiTGMHRaq\nYs3L/O3LfOqb2rh1bhJ+PjrEi4h3um340l67Xa6QlJRMRUUZJSXF1NXV8cUXmxkyJIInnvjfHD58\nkN/97vkBy6J3bxGRQSYsyJeZY6KYOaarYCuvaepadCSna9GR3UfL2H20jF/9cAYRof5uTiuX4kB2\nJTZ/C9eqeyYicslmzryGP/zhJebMmUd1dRXJySMA2LJlE+3t7QOWQwWaiMggNyTYjyFj/Zg9NhqA\nsuomjFYz4f4WNyeTS/WDpaMJDvHH1Nnp7igiIl5n3rwF/PM/f5//+q83aG5u4qmnnmTTps+4/fY7\n+OyzT/noo/UDkkMFmoiI9GAP8cNut1FWVufuKHKJwoJ8sYcH6L+diMhlGDUqjS1bMpz3X399rfP2\nNdfMA+Cmm75BQEAAjY2ue5/VxXFEREREREQ8hAo0ERERERERD6ECTURERERExEOoQBMREREREfEQ\nKtBEREREREQ8hAo0ERERERERD6ECTURERERExEOoQBMREREREfEQKtBEREREREQ8hAo0ERERERER\nD2FwOBwOd4cQERERERERddBEREREREQ8hgo0ERERERERD6ECTURERERExEOoQBMREREREfEQKtBE\nREREREQ8hAo0ERERERERD2F2d4C+PP300+zduxeDwcCKFSsYN26c87nt27fz3HPPYTKZmDt3Lg88\n8IAbk/aedeHChURFRWEymQBYtWoVkZGR7ooKwNGjR7n//vv53ve+x913393jOU8b296yetrY/vrX\nv2b37t20t7fzwx/+kMWLFzuf87Rx7S2rJ41rU1MTjz32GBUVFbS0tHD//fezYMEC5/OeNK59ZfWk\ncT1bc3MzS5cu5f777+e2225zPu5JYys9edPxEbzrGKnjo+voGNn/dIx0LbccHx0eLCMjw/FP//RP\nDofD4Th+/Ljjjjvu6PH8DTfc4CgsLHR0dHQ47rrrLsexY8fcEdPhcPSddcGCBY76+np3RDuvhoYG\nx9133+14/PHHHWvWrPna8540tn1l9aSxTU9Pd/zgBz9wOBwOR2VlpWPevHk9nvekce0rqyeN60cf\nfeT4wx/+4HA4HI78/HzH4sWLezzvSePaV1ZPGt
ezPffcc47bbrvN8e677/Z43JPGVs7wpuOjw+Fd\nx0gdH11Hx0jX0DHStdxxfPToKY7p6elcd911ACQnJ1NTU0N9fT0AeXl5BAcHEx0djdFoZN68eaSn\np3tkVk9ktVp59dVXiYiI+Npznja2vWX1NFOnTuU3v/kNAEFBQTQ1NdHR0QF43rj2ltXT3Hjjjfzj\nP/4jAEVFRT2+TfO0ce0tq6c6ceIEx48fZ/78+T0e97SxlTO86fgI3nWM1PHRdXSMdA0dI13HXcdH\nj57iWF5eTlpamvN+WFgYZWVlBAYGUlZWRlhYWI/n8vLy3BET6D3raU8++SQFBQVMnjyZRx55BIPB\n4I6oAJjNZszm8//n97Sx7S3raZ4ytiaTCX9/fwDWrl3L3LlznW16TxvX3rKe5injetqdd95JcXEx\nr7zyivMxTxvX086X9TRPG9dnn32WJ554gvfee6/H4546tuJdx0fwrmOkjo+uo2Oka+kY2f/cdXz0\n6ALtXA6Hw90RLtq5WR966CHmzJlDcHAwDzzwABs3bmTJkiVuSnd18cSx/eyzz1i7di1//OMf3Zrj\nYlwoqyeO65tvvsmhQ4f46U9/yvr1691+MOzNhbJ62ri+9957TJgwgbi4OLdlkCvnTcdH0DFyoHjq\nuOoY6Ro6RvYvdx4fPXqKY0REBOXl5c77paWl2O328z5XUlLi1hZ/b1kBbrnlFsLDwzGbzcydO5ej\nR4+6I+ZF8bSx7Yunje0XX3zBK6+8wquvvorNZnM+7onjeqGs4FnjmpWVRVFREQCjRo2io6ODyspK\nwPPGtbes4FnjCrB582Y+//xz7rjjDt555x1eeukltm/fDnje2MoZ3nR8hKvnGOmJY9sbTxxXHSP7\nn46RruHO46NHF2izZ89m48aNABw4cICIiAjndIjY2Fjq6+vJz8+nvb2dTZs2MXv2bI/MWldXx733\n3ktraysAO3fuZMSIEW7L2hdPG9veeNrY1tXV8etf/5rf//73hISE9HjO08a1t6yeNq67du1yfntZ\nXl5OY2MjoaGhgOeNa29ZPW1cAZ5//nneffdd3n77bZYtW8b999/PrFmzAM8bWznDm46PcPUcIz1x\nbC/EE8dVx0jX0DHSNdx5fDQ4PHxexKpVq9i1axcGg4Enn3ySgwcPYrPZWLRoETt37mTVqlUALF68\nmHvvvddjs/75z3/mvffew8fHh9GjR/PEE0+4tfWclZXFs88+S0FBAWazmcjISBYuXEhsbKzHjW1f\nWT1pbN966y1eeOEFEhMTnY9Nnz6dlJQUjxvXvrJ60rg2Nzfz85//nKKiIpqbm3nwwQeprq72yPeC\nvrJ60rie64UXXiAmJgbAI8dWevKm4yN4zzFSx0fX0THSNXSMdL2BPj56fIEmIiIiIiIyWHj0FEcR\nEREREZHBRAWaiIiIiIiIh1CBJiIiIiIi4iFUoImIiIiIiHgIFWgiIiIiIiIeQgWaiIiIiIiIh1CB\nJiIiIiIi4iFUoImIiIiIiHiI/w/8BGbATisZAgAAAABJRU5ErkJggg==\n",
            "text/plain": [
              "<matplotlib.figure.Figure at 0x7f6493730cf8>"
            ]
          },
          "metadata": {
            "tags": []
          }
        }
      ]
    },
    {
      "metadata": {
        "id": "4EmFhiX-FMaV",
        "colab_type": "code",
        "outputId": "29ef6d38-6258-429b-841f-7345b7cd0695",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 51
        }
      },
      "cell_type": "code",
      "source": [
        "# Test performance\n",
        "trainer.run_test_loop()\n",
        "print(\"Test loss: {0:.2f}\".format(trainer.train_state['test_loss']))\n",
        "print(\"Test Accuracy: {0:.1f}%\".format(trainer.train_state['test_acc']))"
      ],
      "execution_count": 152,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Test loss: 0.44\n",
            "Test Accuracy: 84.4%\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "zVU1zakYFMVF",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "# Save all results\n",
        "trainer.save_train_state()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "qLoKfjSpFw7t",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "## Inference"
      ]
    },
    {
      "metadata": {
        "id": "ANrPcS7Hp_CP",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "class Inference(object):\n",
        "    def __init__(self, model, vectorizer):\n",
        "        self.model = model\n",
        "        self.vectorizer = vectorizer\n",
        "  \n",
        "    def predict_category(self, title):\n",
        "        # Vectorize\n",
        "        word_vector, char_vector, title_length = self.vectorizer.vectorize(title)\n",
        "        title_word_vector = torch.tensor(word_vector).unsqueeze(0)\n",
        "        title_char_vector = torch.tensor(char_vector).unsqueeze(0)\n",
        "        title_length = torch.tensor([title_length]).long()        \n",
        "        \n",
        "        # Forward pass\n",
        "        self.model.eval()\n",
        "        attn_scores, y_pred = self.model(x_word=title_word_vector, \n",
        "                                         x_char=title_char_vector,\n",
        "                                         x_lengths=title_length, \n",
        "                                         device=\"cpu\",\n",
        "                                         apply_softmax=True)\n",
        "\n",
        "        # Top category\n",
        "        y_prob, indices = y_pred.max(dim=1)\n",
        "        index = indices.item()\n",
        "\n",
        "        # Predicted category\n",
        "        category = self.vectorizer.category_vocab.lookup_index(index)\n",
        "        probability = y_prob.item()\n",
        "        return {'category': category, 'probability': probability, \n",
        "                'attn_scores': attn_scores}\n",
        "    \n",
        "    def predict_top_k(self, title, k):\n",
        "        # Vectorize\n",
        "        word_vector, char_vector, title_length = self.vectorizer.vectorize(title)\n",
        "        title_word_vector = torch.tensor(word_vector).unsqueeze(0)\n",
        "        title_char_vector = torch.tensor(char_vector).unsqueeze(0)\n",
        "        title_length = torch.tensor([title_length]).long()\n",
        "        \n",
        "        # Forward pass\n",
        "        self.model.eval()\n",
        "        _, y_pred = self.model(x_word=title_word_vector,\n",
        "                               x_char=title_char_vector,\n",
        "                               x_lengths=title_length, \n",
        "                               device=\"cpu\",\n",
        "                               apply_softmax=True)\n",
        "        \n",
        "        # Top k categories\n",
        "        y_prob, indices = torch.topk(y_pred, k=k)\n",
        "        probabilities = y_prob.detach().numpy()[0]\n",
        "        indices = indices.detach().numpy()[0]\n",
        "\n",
        "        # Results\n",
        "        results = []\n",
        "        for probability, index in zip(probabilities, indices):\n",
        "            category = self.vectorizer.category_vocab.lookup_index(index)\n",
        "            results.append({'category': category, 'probability': probability})\n",
        "\n",
        "        return results"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "W6wr68o2p_Eh",
        "colab_type": "code",
        "outputId": "87886e24-350d-433e-981d-b2907b0c95cf",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 306
        }
      },
      "cell_type": "code",
      "source": [
        "# Load the model\n",
        "dataset = NewsDataset.load_dataset_and_load_vectorizer(\n",
        "    args.split_data_file, args.vectorizer_file)\n",
        "vectorizer = dataset.vectorizer\n",
        "model = NewsModel(embedding_dim=args.embedding_dim, \n",
        "                  num_word_embeddings=len(vectorizer.title_word_vocab), \n",
        "                  num_char_embeddings=len(vectorizer.title_char_vocab),\n",
        "                  kernels=args.kernels,\n",
        "                  num_input_channels=args.embedding_dim,\n",
        "                  num_output_channels=args.num_filters,\n",
        "                  rnn_hidden_dim=args.rnn_hidden_dim,\n",
        "                  hidden_dim=args.hidden_dim,\n",
        "                  output_dim=len(vectorizer.category_vocab),\n",
        "                  num_layers=args.num_layers,\n",
        "                  bidirectional=args.bidirectional,\n",
        "                  dropout_p=args.dropout_p, \n",
        "                  word_padding_idx=vectorizer.title_word_vocab.mask_index,\n",
        "                  char_padding_idx=vectorizer.title_char_vocab.mask_index)\n",
        "model.load_state_dict(torch.load(args.model_state_file))\n",
        "model = model.to(\"cpu\")\n",
        "print (model)"
      ],
      "execution_count": 155,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "NewsModel(\n",
            "  (encoder): NewsEncoder(\n",
            "    (word_embeddings): Embedding(3406, 100, padding_idx=0)\n",
            "    (char_embeddings): Embedding(35, 100, padding_idx=0)\n",
            "    (conv): ModuleList(\n",
            "      (0): Conv1d(100, 100, kernel_size=(3,), stride=(1,))\n",
            "      (1): Conv1d(100, 100, kernel_size=(5,), stride=(1,))\n",
            "    )\n",
            "    (gru): GRU(300, 128, batch_first=True)\n",
            "  )\n",
            "  (decoder): NewsDecoder(\n",
            "    (fc_attn): Linear(in_features=128, out_features=128, bias=True)\n",
            "    (dropout): Dropout(p=0.25)\n",
            "    (fc1): Linear(in_features=128, out_features=200, bias=True)\n",
            "    (fc2): Linear(in_features=200, out_features=4, bias=True)\n",
            "  )\n",
            ")\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "JPKgHxsfN954",
        "colab_type": "code",
        "outputId": "0445e3a7-24a9-4c77-829d-a25681768ab1",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 51
        }
      },
      "cell_type": "code",
      "source": [
        "# Inference\n",
        "inference = Inference(model=model, vectorizer=vectorizer)\n",
        "title = input(\"Enter a title to classify: \")\n",
        "prediction = inference.predict_category(preprocess_text(title))\n",
        "print(\"{} → {} (p={:0.2f})\".format(title, prediction['category'], \n",
        "                                   prediction['probability']))"
      ],
      "execution_count": 158,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Enter a title to classify: Sale of Apple's new iphone are skyrocketing.\n",
            "Sale of Apple's new iphone are skyrocketing. → Sci/Tech (p=0.86)\n"
          ],
          "name": "stdout"
        }
      ]
    },
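    {
      "metadata": {
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "Since inference is only a forward pass, we can also wrap it in `torch.no_grad()` so PyTorch skips building the autograd graph and saves memory. A minimal sketch (the example title here is made up):"
      ]
    },
    {
      "metadata": {
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "# Skip autograd bookkeeping during inference to save memory\n",
        "with torch.no_grad():\n",
        "    prediction = inference.predict_category(preprocess_text(\"nasa launches new satellite\"))\n",
        "print (\"{} (p={:0.2f})\".format(prediction['category'], prediction['probability']))"
      ],
      "execution_count": 0,
      "outputs": []
    },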
    {
      "metadata": {
        "id": "JRdz4wzuQR4N",
        "colab_type": "code",
        "outputId": "f2c91b24-a36a-4e35-b06a-f6618497d64f",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 102
        }
      },
      "cell_type": "code",
      "source": [
        "# Top-k inference\n",
        "top_k = inference.predict_top_k(preprocess_text(title), k=len(vectorizer.category_vocab))\n",
        "print (\"{}: \".format(title))\n",
        "for result in top_k:\n",
        "    print (\"{} (p={:0.2f})\".format(result['category'], \n",
        "                                   result['probability']))"
      ],
      "execution_count": 159,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Sale of Apple's new iphone are skyrocketing.: \n",
            "Sci/Tech (p=0.86)\n",
            "Business (p=0.12)\n",
            "World (p=0.01)\n",
            "Sports (p=0.00)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "metadata": {
        "id": "R3jrZ6ZkxN4r",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "# Interpretability"
      ]
    },
    {
      "metadata": {
        "id": "qrAieHoHxOt2",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "We can inspect the attention scores the model computes over the encoder's hidden states to visualize how much each token in the title contributed to the prediction."
      ]
    },
    {
      "metadata": {
        "id": "k6uZY4J8vYgw",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import seaborn as sns\n",
        "import matplotlib.pyplot as plt"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "2PNuY7GLoEi4",
        "colab_type": "code",
        "outputId": "24b2e48f-da5b-4251-c2eb-81e72603a6f4",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 330
        }
      },
      "cell_type": "code",
      "source": [
        "attn_matrix = prediction['attn_scores'].detach().numpy()\n",
        "ax = sns.heatmap(attn_matrix, linewidths=2, square=True)\n",
        "tokens = [\"<BEGIN>\"]+preprocess_text(title).split(\" \")+[\"<END>\"]\n",
        "ax.set_xticklabels(tokens, rotation=45)\n",
        "ax.set_xlabel(\"Token\")\n",
        "ax.set_ylabel(\"Importance\\n\")\n",
        "plt.show()"
      ],
      "execution_count": 0,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAdgAAAE5CAYAAAAzwTG+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAIABJREFUeJzt3XlYVOXbB/DvYdgXlRFQDFcUUhAC\nBUUQNMUwNVLTKBI1l6yszPxJ4oKVaO5mVmqaqbkgieKWS264oJIoICIKEsq+KYuIbPP+4cW8aqWI\ncwYOfD9dc+UMM+fczAxzz/0893mOoFAoFCAiIiKV0qjrAIiIiBoiJlgiIiIRMMESERGJgAmWiIhI\nBEywREREImCCJSIiEoFmXQdARERUU3ZtPWr92JiUkyqM5NmYYImISDIEQajrEGqMQ8REREQiYAVL\nRESSIQjSqQulEykREZGEsIIlIiLJ0IB05mCZYImISDKk1OTEBEtERJKhIaE5WCZYIiKSDClVsNL5\nKkBERCQhTLBEREQi4BAxERFJhsAuYiIiItVjkxMREZEIpNTkxARLRESSoSGhBCudWpuIiEhCmGCJ\niIhEwCFiIiKSDEFCdSETLBERSQabnIiIiEQgpSYnJlgiIpIMKS00IZ3BbCIiIglhgiUiIhIBh4iJ\niEgyxF4qcf78+YiOjoYgCAgICICdnZ3yZ6+++ipatmwJmUwGAFiyZAlatGjxn9tigiUiIskQs4v4\nwoULSElJQXBwMJKSkhAQEIDg4ODH7vPzzz/DwMCgRttjgiUiIskQs4s4IiIC/fv3BwBYWlqioKAA\nxcXFMDQ0rNX2OAdLRESSIbzAf8+Sm5sLY2Nj5XW5XI6cnJzH7hMYGIh33nkHS5YsgUKheOr2mGCJ\niIj+xZMJ9NNPP8WMGTOwefNm3LhxA4cOHXrq45lgiYhIMjQEjVpfnsXMzAy5ubnK69nZ2TA1NVVe\nf/PNN9G8eXNoamrC3d0d169ff3qstf81iYiIGg5XV1dlVRoXFwczMzPl/GtRURHGjRuHsrIyAEBk\nZCQ6der01O2xyYmIiCRDzC5iR0dH2NjYwMfHB4IgIDAwEKGhoTAyMoKnpyfc3d3x9ttvQ0dHB126\ndIGXl9fTY1U8a5aWiIionhhs/26tH7sveqsKI3k2VrBERCQZXIuYiIiokWMFS0REksHzwRIREYlA\nSueD5RAxERGRCFjBEhGRZEipyYkJloiIJEPs09WpknQiJSIikhBWsEREJBnsIiYiIhKBlLqImWCJ\niEgypNTkxDlYIiIiEbCCJSIiyZDSEDErWCIiIhGwgiUiIslgFzEREZEIpDREzARLRESSIaUuYiZY\nIiKSDClVsGxyIiIiEgETLBERkQg4RExERJLBLmIiIiIRSGkOlgmWiIgkg13EREREIpBSBcsmJyIi\nIhEwwRIREYmAQ8RERCQZ7CImIiISgZTmYJlgiYhIMljBEhERiUBKh+mwyYmIiEgErGCJiEgyNKRT\nwLKCJSIiEgMrWCIikgw2OREREYmAh+kQERGJQEoVLOdgiYiIRMAKloiIJENDQsfBMsESEZFkcIiY\niIiokWMFS0REksEuYiIiIhFIKL9yiJiIiEgMrGCJiEgyOERMREQkAimdro4JloiIJIOH6RAREUnQ\n/Pnz8fbbb8PHxwcxMTH/ep+lS5di1KhRz9wWK1giIpIMMedgL1y4gJSUFAQHByMpKQkBAQEIDg5+\n7D6JiYmIjIyElpbWM7fHCpaIiCRDEGp/eZaIiAj0798fAGBpaYmCggIUFxc/dp9vv/0Wn3/+eY1i\nZYIlIiICkJubC2NjY+V1uVyOnJwc5fXQ0FA4OzvjpZdeqtH2mGCJiEgyNASh1pfnpVAolP++e/cu\nQkNDMXbs2Bo/nnOwREQkGWIe
pmNmZobc3Fzl9ezsbJiamgIAzp07h/z8fPj6+qKsrAy3bt3C/Pnz\nERAQ8J/bYwVLRESSIWYF6+rqikOHDgEA4uLiYGZmBkNDQwCAl5cXDhw4gB07dmDVqlWwsbF5anIF\nWMESEREBABwdHWFjYwMfHx8IgoDAwECEhobCyMgInp6ez709QfHoIDMREVE99s3g2bV+7Ox936gw\nkmfjEDEREZEIOERMRESSIaWlEplgiYhIMng2HSIiIhFIKL8ywRIRkXRIqYJlkxMREZEImGCJiIhE\nwCFiIiKSDDGXSlQ1JlgiIpIMHqZDREQkAg3p5FcmWCIikg4pVbBsciIiIhIBEywREZEIOERMRESS\nIaUhYiZYIiKSDDY5ERERiYAVLBERkQgklF/Z5ERERCQGVrBERCQZPJsOERFRI8cKloiIJIOL/RMR\nEYlAQiPETLBERCQdnIMlIiJq5FjBEhGRZHChCSIiIhFIKL9yiJiIiEgMrGCJiEgyOERMREQkAimd\nTYdDxERERCJgBUtERJLBIWIiIiIRSCi/MsESEZF0cCUnIiKiRo4VLBERSYaU5mBZwRIREYmAFSwR\nEUmGhArYmlWwBQUFWLhwIaZNmwYAOHbsGPLz80UNjIiI6EmCINT6om41SrCzZs2Cubk5UlNTAQBl\nZWXw9/cXNTAiIqInCULtL+pWowSbn58PPz8/aGlpAQC8vLxQWloqamBERERP0hCEWl/UHmtN71he\nXq4ssXNzc1FSUiJaUERERFJXoyan9957D2+99RZycnIwadIkxMbGYubMmWLHRkREJFk1SrADBw6E\ng4MDLl26BG1tbXz99dcwMzMTOzYiIqLHNLgu4sTERGzZsgUDBw5Ev379sHz5cly/fl3s2IiIiB7T\n4LqIv/rqK3h4eCivDx8+HN98841oQREREf0bKXUR12iIuLKyEt27d1de7969OxQKhWhBERER/Rux\nK9H58+cjOjoagiAgICAAdnZ2yp/t2LEDv//+OzQ0NPDyyy8jMDDwqfHUKMEaGRlh69at6NGjB6qq\nqnDq1CkYGBi8+G9CRERUT1y4cAEpKSkIDg5GUlISAgICEBwcDAC4f/8+9u/fjy1btkBLSwt+fn64\ndOkSHB0d/3N7NUqwCxYswNKlS7Ft2zYAgIODAxYsWKCCX4eIiKh+iIiIQP/+/QEAlpaWKCgoQHFx\nMQwNDaGnp4eNGzcCeJhsi4uLYWpq+tTt1SjByuVyBAUFvWDo4isrzKuT/Wo3aV6nMdT1/p+Mwa6t\nx1PuKY6YlJPKf+dGnlX7/gHAxKmX8t9lBblq3792U5M63f8/Yqjrv4U6fg1S/zio9v0DgMVAL+W/\nLyz8Ve37d/YfI+r2xRwhzs3NhY2NjfK6XC5HTk4ODA0NlbetXbsWmzZtgp+fH1q3bv3U7dUowe7b\ntw/r1q1DQUHBY3OvJ06ceM7wiYiIak+dKzL9W6/RxIkT4efnhwkTJqBbt27o1q3bfz6+Rgn2+++/\nx7x589CqVavaR0pERPSCxMyvZmZmyM39/5GP7Oxs5TDw3bt3cePGDTg5OUFXVxfu7u6Iiop6aoKt\n0WE6bdu2hZOTE1566aXHLkREROok5nGwrq6uOHToEAAgLi4OZmZmyuHhiooKfPnll7h37x4AIDY2\nFu3bt3/q9mpUwTo4OGDZsmVwdnaGTCZT3u7i4lKThxMREdV7jo6OsLGxgY+PDwRBQGBgIEJDQ2Fk\nZARPT098/PHH8PPzg6amJqytrdGvX7+nbq9GCfbs2YdNI5cuXVLeJggCEywREamV2FOw1ec9r/by\nyy8r/z1s2DAMGzasxtuqUYLdvHnzP26rLqOJiIjon2qUYNPT0/Hbb7/hzp07AB6ecP38+fN47bXX\nRA2OiIjoUXWxpnBt1ajJafr06WjWrBkuX74MW1tb3LlzB4sWLRI7NiIiosdIaS3iGiVYmUyGiR
Mn\nwsTEBL6+vvjpp5+wZcsWsWMjIiJ6TIM7m86DBw+QmZkJQRBw+/ZtaGpqIi0tTezYiIiIJKtGc7Dj\nx49HREQExo0bB29vb8hkMgwePFilgdy7d095gK+pqSn09fVVun0iIpI+CU3B1izBtm/fHpaWlgAe\nnm3g3r17SE5OVkkAsbGxCAoKQmFhIYyNjaFQKJCdnY0WLVpgzpw5sLa2Vsl+iIhI+qTU5PTUBFtY\nWIi7d+8iICAAS5YsUd5eXl4Of39/lRyqM3/+fAQFBSkTeLW4uDh8/fXXnOslIiJJemqCvXTpEjZu\n3Ij4+HiMHj1aebuGhgbc3NxUEoBCofhHcgUAGxsbVFZWqmQfRETUMEiogH16gvXw8ICHhwe2bNkC\nX19fUQKwt7fHpEmT0L9/f8jlcgAPTxl06NAhODs7i7JPIiKSJnWeTedF1WgO9uDBg6Il2BkzZiAy\nMhIRERGIiYkB8PCMBpMnT4aDg4Mo+yQiImmSUH6tWYLt3LkzvvvuOzg4OEBLS0t5u6rWInZycoKT\nk5NKtkVERFQf1CjBxsfHAwD++usv5W1c7J+IiNStwXQRV/u3xf6JiIjUTUL5tWYrOSUlJcHPzw+O\njo7o1q0bxo0bh1u3bokdGxERkWTVqIL95ptv8P7778PZ2RkKhQJnz55FYGAgNmzYIHZ8RERESoKG\ndErYGlWwCoUCffr0gb6+PgwMDODp6cljVImISO0a3Nl0ysvLERcXp7weExPDBEtERPQUNRoi9vf3\nxxdffIG8vDwAD49TXbhwoaiBERERPanBdRHb29vj4MGDKCoqgiAIMDQ0FDsuIiKif5BQfq1Zgk1M\nTMTKlSuRmJgIQRBgbW2NTz75BO3btxc7PiIiIiUpVbA1moP98ssv4e7ujlWrVmHlypXo2bMn/P39\nxY6NiIhIsmpUwerp6eGtt95SXre0tFTJqeqIiIieh4QKWAgKhULxrDv98MMPsLa2hqurK6qqqnDu\n3DnEx8fj448/hkKhgIZGjQphIiKiF3Jq7s+1fmzvuRNUGMmz1aiC/fHHH//1sJxVq1ZBEATlWsVE\nRESiklAJW6ME++gxsERERHVFSk1ONUqwWVlZOHToEIqKivDoiPLkyZNFC4yIiOhJEsqvNUuwEyZM\ngI2NDVq0aCF2PERERP9JSmsR1yjBNmvWDAsWLBA7FiIiogajRgnW09MTe/bsgYODA2QymfL2Vq1a\niRYYERGRlNUowSYkJGDv3r1o1qyZ8jZBEHDixAmx4iIiIvqHBjcHGx0djcjISGhra4sdDxER0X9q\ncF3Etra2ePDgARMsERHVKQnl15ofpvPqq6/C0tLysTnYLVu2iBYYqUd8fDzkcjk7xIlUrKqqiqvc\niaDBVbCTJk0SOw7JUCgUj73AT15X1XbVIT09HVOnTsXy5cvRrFkz6OjoqHX/T/Nvz4eYH1h79uxB\nkyZNYG1tDXNzc1H2URt5eXmQy+X14kOlLt6jdbnf2oqNjYW1tTW0tbUbbJIV63OwoXlqgq2qqgIA\ndO/eXS3B1FePvnkEQUBZWRnKysqgr6+vkj+eR7d/9+5d6OnpqSXZaWpqwtXVFb///jtMTEzqzRep\n6ufj7NmzuHz5Mpo0aYIBAwbAzMxMlP2FhITgwIEDGD16NPT19UXZR23Ex8dj9+7dmDFjRp3FUP1a\nXLx4EadOnYKTkxMsLS3RsmVLUfeblJQEY2NjGBkZQUtLq95/gFfHd+vWLSxcuBBVVVX49ddfG1SS\nffJzsKKiApWVldDR0anXr01demqC7dKly78+cdVPdGNZg7j6Obh+/Tru3buHX3/9FYIgwNvbG337\n9q31dqufx+rt79ixAydPnkTr1q3RunVr+Pr6qiT+J1X/wZuZmcHW1hZBQUGYMmUKSkpK6kWCqU6u\nP//8M95//31s3boVOTk5+Pzzz1W+r4KCAhw4cAD/+9//YG
RkhGPHjiE9PR2Wlpbw8vJS+f5qqqys\nDJ06dUJGRgZWrVpVZ6umVb8Wy5cvh5+fH5YuXYoRI0Zg5MiRj00XqVJISAj27dsHBwcHZGZmYt68\nedDUrNFgW52pfp42bNiAoUOH4o8//sCYMWOwYcMG6OjoNIgkW/05lZKSguLiYmzcuBGamprw9vZG\njx491BiH2nb1wp76il+7dg3x8fH/uFTf3tDl5+fj7t27yMvLw4YNG7Bw4UJERkaif//+aNmy5Quf\ncP7evXvKfx88eBB//vknvv76a9y7dw/JyckvGv5/qv5D3717N8rLy2FhYYFbt27hzz//RF5enmj7\nfZr09HQsWbJEef3SpUuYNGkSqqqqUF5ejjFjxiAlJQWlpaUq3a+hoSG6deuGb7/9FgsWLMDt27fR\nqlUr3Lx5EzU40ZQo9u/fj+nTpyM6OhqLFy9GTk4OTp48WSexKBQKXL16Fd988w2sra1hYGCAIUOG\noKCgQPlzVYqKisK+ffvw448/oqqqCpqamtDU1Kyz16Imqkf6Tpw4AVtbWwwfPhzr1q1Dhw4dMHHi\nRJSVlUFDQ0N5P6nKzMzEunXrsGDBAvz555+wt7eHjo4O2rRpo9Y4qouS2lzUTTZ37ty5at+rBBQV\nFeHXX39FRUUFDA0NoaenhxEjRsDd3R36+vrYvn07Bg4cCCMjo+fetkKhQFZWFkaOHIlXXnkFLVq0\nQFpaGqytrREdHY3k5GTMmzcPsbGxKC0tfez4Y1XZu3cvduzYgddffx0JCQmIjIyEQqGAlpYWmjZt\nCgMDA5Xv82m0tLTw7bffIjk5Ge7u7rhx4wb279+P2NhYBAYGwszMDDt27ECnTp2gp6ensv1qaGig\nc+fO6Nq1K3x8fODm5oasrCwcO3YMnp6edVI5lZWV4cCBA4iJiUF6ejqcnZ2Rk5MDGxsbVFVVif5B\nUT2yUlZWBk1NTdy8eRNz585FbGwsVqxYgaZNm2L+/PmwtbWFoaGhyvablJQEfX19aGlpITo6GgkJ\nCViwYAESEhKQkpJS7xa2qX6e8vPzoa+vj/v37yMjIwNmZmaQy+WwsrJCWFgYjh49ioEDBz73eyk3\nNxelpaUqfb8/r+zsbBQXFwMA7t+/Dw0NDQwdOhSenp4wNjZGWFgYBgwYoNbPi/Rz0YCAWl1a9XpF\nbXECTLD/UP1Ho6OjA4VCgdjYWGhqasLW1hYmJiYAHlabXbt2hZOTU632IQgCDA0NoaGhgeXLl8Pe\n3l6ZYEpKSrBy5UpoaGhg06ZNaNGihUo/WBQKBSorK7Fx40YMHz4cbm5ueO211xAdHY0LFy6goqIC\n2trasLS0VNs3vup9Dhs2DL/88gsSExPh4+ODnTt3wt7eHgMGDMClS5ewbt069OnTB8bGxirdv46O\nDszMzBAbG4vt27cjLCwMs2fPhqmpqUr38ywHDhxATk4OunTpAisrK7z88stIS0tDVFQUfv/9d3Tr\n1k0tDViCIODkyZP4/vvvER8fj379+uH+/ftQKBTw9vZGUlIS9u7di169eqnstTh9+jTCw8PRvn17\nrF27FikpKVi3bh00NDSwdetWlJWVwdbWViX7UoXqId/Tp09j6tSpKCgogK6uLlJSUnDv3j3o6+uj\npKQEOjo6KC0tRU5ODuzt7Z9rH4mJidDS0kKTJk1E+i2eLiIiAkFBQYiMjERoaCjy8vLQpUsXdOzY\nEQqFAjt37kT37t3h4OCg1rgyzsXUuoJt5fJ8r8GLYoJ9QnZ2tvJbeevWraGtrY1z586hqqoKcrkc\nurq62LJlC7p27Yp27do99/arh7oEQYCdnR10dHSwYMECjBw5Eqamprh48SJeeuklnDhxAuHh4Rg6\ndCiaNm2qst9PEARoaGggKysL2dnZsLCwQJMmTdC/f3+kpqbCzc0Njo6OtarMa0OhUEAmkyExMRFl\nZWUYPXo01q1bh7y8PE
ydOhW7du3CmTNnEBYWhi+++AJ2dnaixaKrqwsNDQ2MGDHihYf/n1dVVRVu\n3ryJ8PBwpKam4tq1axAEAb6+vrCzs0NpaSkcHBwgl8tFjyUxMRGrV6+Gl5cXCgoKEBoairfeegs5\nOTlYuXIljhw5gvHjx8PR0VEl+0tKSsIHH3wADw8PvPrqq2jXrh3+/PNPPHjwACdOnMDFixfx3nvv\nqfyLVW2UlJRAS0sLgiDg5s2bOHbsGIYOHYq0tDQoFAq0aNEC6enpiIyMxNq1azF16lRUVVXhwYMH\nz52IWrRoUWfJ9cyZM/j1118xefJkjBs3DpaWlrh37x7++OMPtG3bFqampti+fTt69Oih9pGFjIjo\nWj+WCbaOKBQKpKenY+DAgYiMjMS1a9fQvHlzmJub46WXXsKlS5dQVVUFhUKBpk2b4tVXX63VPqq/\nSf35559IT09Hjx490Lp1a8yePRsffvghWrZsidjYWMTHxyMgIKBWSbwm5HI5jh8/jsrKShgZGeHy\n5csIDw/HJ598otKE/iyCICAiIgKzZs3CmTNncPbsWcydOxfr1q1DUVERZs+eDVdXV7i7u4tewejp\n6aFdu3aiDMk/TWhoKNavXw8LCws0a9YMnTt3RmRkJA4ePIikpCQMGjQIffr0UUtyTUtLw6ZNmyCX\nyzF+/HjY29sjIyMDR48exfTp0/Hmm2/C09MTXbt2Vcn+du3ahZKSEly9ehU3b96Ek5MTunTpAhsb\nGxQUFKCoqAgffvghOnTooJL9vYj79+9j0aJFsLOzQ3l5Od5991289NJLGDNmDDp27IioqChoaGjA\nysoKw4cPh6WlJW7duoWdO3di7Nixann9VOH27dsYN24cRo0ahf79+wN4mOxbtGiB3NxcpKWloU2b\nNigrK4Onp6fa42OClaCSkhI0b94c+vr6KC0tRUZGBgRBwHfffQcDAwNERkYiMzMTMpkM/fv3h0wm\ne+5DB6rvGxISgm3btqFJkyb44YcfMHbsWLRr1w5z5szByJEjMXjwYPTr1085JC2GJk2aoF27doiI\niMCBAwcQHR0Nf39/0Q6F+S9JSUlYvXo15s+fj7Fjx+LIkSOIiYnBwoULsXLlSsTFxWHAgAFqTfrq\ntHfvXuzduxeTJ0/GmjVr0KZNGwwcOBA9e/ZERkYGMjMz4erqKmp3d/X7+P79+zA0NERiYiIyMzOh\npaWFjh07wt7eHteuXcO2bdvwxhtvqOy1CAkJwd69e2Fvb4/bt28jJSUF586dg7OzM6ysrGBjY4Oe\nPXvWi8q1qqoK2trasLOzQ2FhIdLS0uDl5YWff/4Z7du3R+fOndGxY0dEREQgMzMT3bt3h1wux6lT\np/DRRx/B0tKyrn+FGqmoqICxsTEqKipw7do1tGrVSjlVYmhoiJKSEoSGhsLX11d5lIm6D6HKPBcN\nQUCtLuZMsOpV3XA0ePBg9OjRQ/mNTCaT4bXXXsPrr7+Opk2bIi0tDTdu3MCRI0cwcuTI5zr269E3\nYGpqKtasWYMffvgBV65cQXJyMo4cOYIJEyagsrIS69evh7e3t3IYSkzGxsbo1q0bXFxc0K9fP1hY\nWIi6v2rVz4dCocCBAwdw5swZtG7dGtbW1hgwYAC2bt2K9PR0zJs3DyYmJvVq4QdVKisrQ1JSEoYM\nGYKbN28iNTUVX375JRISEtCqVSs4OzujX79+oieY6lGEFStW4Pbt22jbti2qqqqQlZWF+/fvw9LS\nEk5OTnBwcEDz5s1Vss/CwkJs3rwZX375JWJiYpCamgorKyvEx8fj8OHD6Nu3r9qmKZ6lsrISf/31\nl3K4Njk5GbNnz4aXlxf69u2LmTNnokOHDnj55ZdhbW2NTp06oWXLljAyMoKLi4tkKteMjAz4+/vD\nzc0NvXr1QlpaGvbt24d27dopk2zbtm1x8uRJuLm5KY/VV3d3bub52s/BmvdkglWr6oYj
LS0tBAUF\noWfPnnB2dkZ2djbOnDmDNm3awMbGBh4eHhg+fDi8vb2f6xv8o8l1z549kMlk6NixI06fPo2oqCis\nW7cO0dHRWLRoEWxtbfH555+jadOmanvTymQy6Ovrq/X4V0EQEB0djczMTBgZGaFdu3ZISEhAWVkZ\nOnToABMTE8TExKB///4NNrlu374d+/btQ0hICPbs2YOioiJ8//33EAQBS5cuhYWFBVq2bKmWBUfi\n4uLw1Vdf4aOPPkJ6ejqKioqgo6MDDQ0NxMfHQ6FQwNLSUqVD5zo6OmjVqhXOnz+P06dPY+3atcjK\nylJWsu+++26dzT8+qbpnYdKkSdi2bRumT58OCwsLLF68GP369UO/fv3w2WefwcrKCp07d1b7FMOL\nqv6MMjIyQnZ2NrZt2wY3Nzc4OTkhOzsbe/fuVSbZPXv24Ny5cxgyZEidrfyWdT6m1hVsSyZY9bl9\n+zYyMjKgoaGBnj17Qk9PD7NmzUKvXr3Qq1cv5OXl4eLFi9DR0VGuXKOrq1vrYeGwsDDY2NjA2dkZ\nd+7cgb6+PpycnFBUVARHR0e4u7urrYqsC9V/yFeuXEFAQABSU1ORnZ2N8vJytG/fHjt37kRSUhKO\nHDkCb29v0eaf69rx48exa9cuvPHGG8jOzkZ0dDTMzc3h7u6Ow4cPIzIyEoMGDVJbBRcTEwMDAwO8\n9dZbsLOzQ1ZWFlJSUjBo0CDk5eXBzs5OlCqsRYsWKC8vR15eHtzd3ZGWlgZ3d3d88skn9eaQnOr3\nbKtWrXDlyhX8/fff8PLyUlbzy5cvh4eHB7y8vKChoSHJv9/i4mJlsnRwcEB2djY2bNgAd3d3dO/e\nHTk5OTh27BiuX7+OEydOYM6cOXX6xTfzQmztsqsgoGWPZzdJzp8/H6tWrcLOnTthZWX12Drt586d\nw9SpU7Fz505cvHgRr7766lPzQaNNsNWNNdHR0bh16xZcXV1ha2uLJk2aICAgQJlk09PTkZCQADs7\nO8hkslpVlgUFBfj+++8xffp0tGzZEqdPn8a1a9dQVFSEEydO4NixY/j8888b/IL7giDgwoULOHz4\nMCZNmoRRo0ahoqICycnJMDAwUFayHh4eGDRoUF2HK4qEhAT89ttvGDBgALy8vODi4oLY2FhERESg\nqKgIcXFx+N///oe2bduKsv+KigrlQiO5ublQKBQwMDDAypUr0bp1a1haWsLa2hrbt29H165dMXDg\nQFGHOAVBwNGjR3HixAns2rUL48aNE30ZxpqqTq4JCQkoLS1Fr1694OjoiKlTp6Jr165wdXVFkyZN\nsHTpUkyYMAGWlpb1fknHJ127dg2TJk1CUVERkpOT0aVLF7zyyiuQyWRYvXo1+vbti1deeQU3btxA\naGgoFi1ahI4dO9ZpzJnnY2pCCbGkAAAU9klEQVT92Gcl2AsXLuD48ePYuHEjHBwcMHfuXIwYMUL5\n8/fffx9r167FmDFjsGfPHuXn1n+p3+uPieT8+fNYvnw55syZ81jrfHR0NN566y1oa2tj0qRJ+OGH\nHzB06FAUFxe/0Kn6Hl0tyMDAQHmc4927d6GlpQUfH596MxwmhuoPneLiYkRGRmL//v1wdnaGIAhw\ncHBARkYGysvL4e3tDU1NTVy6dAlt27ZV6/Jr6mJiYoK2bdvi6NGjsLS0hJ2dHVauXIkPPvgAOjo6\nWLFihWhLEObn5+Po0aMYPHgwYmJiMH/+fHTs2BG9e/dGQEAAtm7ditLSUnTp0kV5XKfYzM3NMX36\ndFy9ehUffPBBvfqS+egykV5eXjh+/DhWr16NiRMnwt/fH6NGjYJCocD69euVow1SSq5lZWXQ1dVF\n27ZtkZqaikuXLuHatWtIT0/H+PHjIQgC5s6di5kzZ2Ly5Ml4991368V8sqAh3nMcERGh7Jy2tLRE\nQUEBiouLlYduhoaGKv8tl8tx586dp26vUSXY6qXK
wsPD8d577z2WXBcuXIjY2Fh4enpi9OjRKCoq\nwrRp0xAWFvbCq9XIZDKMHj0affv2RYcOHaCnp4fjx4/j8OHDWL58eb06i40YBEFAeHg4Vq9eDVdX\nV2hqamL16tVo164d2rdvDxMTE/z2228YNWoU3NzcoKmpWS8OyxBD8+bNMXbsWOzcuRP79u2DIAjo\n2rUr1qxZg8LCQtGSK/CwWomNjUVRURGuXr2KOXPmQKFQKF+XcePG4bvvvlPGaG1tLVosj6o+BKS+\nycvLw48//ohly5bh8uXLyr/TESNGoGnTpti1axdGjBhRL5LO8zp16hR2796NpUuXws/PD5cvX8Yb\nb7wBc3NzJCUlISYmBjo6Oti3bx/i4uKwb98+Sf6ezys3Nxc2NjbK63K5HDk5OcocUP3/6h6dzz77\n7Knba1QJtnpRgyZNmig/yKqqqnDs2DEUFhZiwoQJ2L59Ozp16gRfX18MHjxYZSeZNzIygo2NDaKi\nonDy5EmcP38eQUFBDT65Ag/n+NasWYN58+bh8OHDaN++Pa5cuYIvvvgCAwYMQHp6OsaNGweZTIbW\nrVvD3Ny83i/u/iKaNWuGoUOHYvfu3QgODoaGhgZsbGxEPxSpV69eAICjR4/i/v37aN++PeRyOT74\n4AOsWbMGpqam2LhxI0pLS9VSvdZn+fn50NPTQ8+ePREeHo5jx45h3rx5KC8vR1hYGLy9veHm5gZ9\nfX3JDQtHRERg/fr1+PjjjwEAzs7OKCkpQUREBBwcHODu7g4PDw8AwOTJk6GrqyvqF7/npc6n+t/W\nwM7Ly8OkSZMQGBj4zA5/aZ/e4Tn89ddf2Lx5MwBAW1sbO3bsAPCwQ7Bjx44ICgqCh4cHTExMUF5e\nDgCiNJm0b98ejo6OWLhwoWSOjXtRenp6GDx4MK5du4YzZ85gzJgxcHNzQ3p6Ov744w+89tpr6NOn\nDyoqKgCgQSfXanK5HN7e3rC2tha9eqseucnOzoaTkxO8vLxgYmKCP/74Q3nbuHHjEBwcjIyMjEbx\npe9J+fn5iI+PR3l5OdLT0zFr1iyUlZUhJycH69evR1BQEMzNzXHhwgWcO3cOFRUVyjWCpZRcw8PD\n8csvv2DKlCmPLfXap08f9OrVC1FRUYiIiFAOfbZv377edfKLudi/mZkZcnNzldezs7MfWzK1uLgY\nEyZMwJQpU+Dm5vbM7TWKBFt9jF/10m5jx45FixYt8NFHHwGAcpL6wIEDSEhIgJWVFQCIcnopY2Nj\neHh4iNbEUh+1a9cOLi4uOHXqFCZMmIDevXujRYsWsLGxga2tLfz9/fH33383isT6qObNm+Pdd98V\nbUGRBw8eAHj4Pj579iwmT56M8ePHo6CgAJ07d0Z2djaOHDmCrKwsuLi4YNWqVTA3N5dUwlCVrVu3\nYvv27UhKSkKrVq1gYmKCZs2aYdasWejUqRNWrVqFFStWYNOmTcqF+6X2POXl5WHu3Lno1q0bXnnl\n/xe9X7t2rXJdYUdHR5w6dQrXr1+vt2cwqu0hOjV5uVxdXXHo0CEADw9fMzMze2yK8Ntvv8Xo0aPh\n7u5eo1gbfBfx2bNnsX79ekyZMuWxN1Xfvn1x5swZbNy4ERkZGbh8+TJ+//13LFy4UO2nX2roZDIZ\nmjVrhri4OJSXlyM/Px8ZGRmYOXMmXn/9dchkMlhaWjbY1ZqeRqxzhBYUFGD37t2ws7NDTEwMtm/f\njtmzZ8PCwgKRkZEwNzeHubk5YmJikJ+fj86dO6v9DEr1QfXwrq2tLa5evYrLly9DX18fkZGRaNq0\nKdq2bYshQ4agrKwMxsbG6Nu3L1xdXes67FqrrKzEjRs3oKuri3bt2mHVqlWIj4/HJ598AplMBhMT\nE+jp6cHKyqpenBv632T/FVvrx5o5PX2JT3NzcyQmJmLlypU4deoUAgMDleuDt2rVCl988QXu3LmD\nXbt2YdeuXSgv
L3/qEq6Cor5+TVGBhIQEfPrpp5gyZQoGDhyovD0kJAR2dnawtrbGzp07lUOTPXv2\nbFSVpbolJSVh+/btiIyMxOTJk5XdeqR6RUVFKCkpQVlZGaZMmYLKykrs3r0bAHDy5Els2bIF06dP\nR35+PuRyeZ0felFXqhNsSUkJNDU1sXTpUuWhOSkpKejVqxdKSkrQvXt3vPPOO5KrWp+0ZcsWAEBk\nZCQqKipgZGSEb775BpqamggNDcWJEyewePHiej1NcGX1tlo/1nbSOyqM5NkadAWbmpqKgoICyOVy\nmJiYwNDQEN9//z0iIyPxzjvvQCaTwdraGl27doWtra3kVmCRGrlcDhcXF7z++uuwsbGRXHOIlOjo\n6EBPTw87d+5EcXEx4uPjkZWVBQ8PD7Rr1w6xsbHIzMyEt7d3o+gO/S/Vy0QGBgYiKysL3t7eSElJ\nQX5+PgYPHoz33nsPCoUC9vb2aj99oaodPXoUe/bswfjx46Gvr4+DBw/Cx8cHVlZWOHDgAMLCwjBt\n2jS1r0f+vHKi4mo9B2vWTb2nPGyQk17JycnQ19dHly5d4Ovri7CwMFRUVCAxMRHFxcX47rvvoKWl\nhbCwMFy5cgUzZsyo8SQ4vRgtLS1l5x2fb3HJZDK8+eab0NbWhq6uLk6ePImsrCyMGTMGCQkJmDRp\nUl2HWOdSU1OxaNEizJ07F9ra2ujYsSMmT56MxYsXIysrC3fu3MHw4cPrOswXUn3u2jt37sDPzw8t\nW7ZUjh6dOnUKZ86cQVZWlnJNZVKdBpdgIyIisGDBAnTv3h1VVVWYM2cO3N3dcfDgQVy5cgXLly+H\nlpYW9u/fj7CwMMycOVO0eTCiuta8eXMMGjQIlZWV0NLSwsGDB1FQUAB/f3/Y2tqioqKi0TWXPar6\nhBf29vbKpp7qJfBOnTrVIL4EamhooKioCAcOHMDs2bORnp6Oe/fuIScnB05OTli/fj2WLVsmmeQq\npZekQQ0Rh4eHY/Pmzfjiiy/Qu3dvxMTEwMXFBa1bt4aVlRVycnIgCALOnDmDw4cPY+bMmY3mUBlq\nvPT09NCmTRtkZmbCwMAAOjo6iI6OxmuvvdbovlxWT0vcvHkTiYmJaNasGYKDg5GTk4Nu3boBAIKD\ng6GlpYUxY8ao7OxBdS0hIQHHjh2DmZkZli1bBi0tLdy/fx8jR45ULjAhFTmX4mrdRmzqyCHiWqlu\nQR82bBi6deuGzMxMHDp0CIIg4Pz58/jhhx/g4+ODNWvW4Pr161i6dKlkvrERvSi5XI7hw4dj0KBB\nMDU1xVdffYWsrKx6uYKSmARBwMmTJ/HTTz/BwMAAVlZWeP3117Fp0yYUFhbCwsICUVFR6N27d12H\nqlIWFhbo3bs3dHV1MWPGDNjZ/f+avKpaTIf+qcF0EZeUlGDz5s1ITk6Go6MjwsPD4eLiAl9fX2za\ntAkbNmzA3r17kZKSAhMTk0b3wUL0qMrKynq1Oo+6FBQUYNasWZg+fTpat26NDRs2oLy8HN27d8eN\nGzdw584d5UL+VD/Frw+u9WM7j3tbhZE8W4OpYPX19WFoaIiuXbti3bp16NGjB3x9fQEAfn5+uHnz\nJu7evfvYOpNEjVVjTK5JSUkwNjZGUVERkpKS0Lp1a/j5+WH27Nl48OABPvnkk7oOkWpAzMX+Va3B\nTMAcPXoUx44dg6enJ6ZMmYJ79+7h6NGjAIBDhw4hNjaWQyFEjdS1a9cwZcoU5Xlvw8PDceHCBchk\nMgwfPhyFhYXKla+ofhNzqURVk3wF+2QLupmZGfr06QMAOHLkCE6dOoXMzEwsXry43h/fRUSql5SU\nhNjYWGhpaaG8vBy9e/dGQUEBVqxYgZ49e+L48eOYOnVqvV5cgaRJ8hXsoy3obdq0QXp6OtLS0pCb\nmwsXFxdERUVh2rRpbGgiaoQKCwvh7++P0tJS9OjRA2vXrkVFRQV8fX3x5Zdfws
LCAl999VWDa2pq\n0IQXuKiZ5CtY4OHCEiUlJco1VwcMGIDS0lL4+Pigb9++Dfpk5kT035o0aYI333wTbdu2hZOTEw4c\nOIDNmzdjxIgRsLOze6yblkjVJF/BAv9sQR87diw+/PBDaGtrM7kSNUJJSUnIy8tDeXk5OnXqhJCQ\nEHTo0AFDhgzBgwcPsG3bNjx48KDenjGG/puU5mAbzGE6RNR4PbqudWJiIubOnaucFpoyZQp+//13\n6Orqws/PD1FRUZDL5crTVJK03Ni8s9aP7TRKvcteNogKlogar/z8fGzcuBEFBQWorKxESEgIgoKC\nMG3aNFhbWyMgIACFhYWIiooCADg6OjK5SpnGC1zqIFQiIslKSkrCrVu3EBwcjKqqKmhpaeHOnTto\n0qQJfH198fHHH8PJyQmxsbH47bff6jpcekFSGiJmgiUiSXNyckL37t1x584dbNiwAXfv3sXNmzeR\nnJwMALC2toaHhwd+/PFHlJSU1HG01Jg0iC5iImq8Lly4gG3btqF3794oKipCTEwMoqOjERUVhaSk\nJDRv3hzLli1DXFwcLl68iIqKCshksgZxphyq35hgiUiyoqOjsXjxYsybNw+mpqa4fPkySkpKUFFR\nga+++gqZmZkoLy+HtrY2jIyMEBAQ0KhPz9cQSOmLEd9pRCRZZWVlcHR0RHx8PMLDwxEREYHy8nJk\nZ2fjxx9/xMSJE5UJ1dPTs46jJZWQTn5lgiUi6erQoQOaNm2KkJAQfPjhhxg0aBAuXbqEtLQ0vPHG\nG6xWGyApLfbP42CJqMGIiIjAmjVr8NFHH8HZ2bmuwyER3AwJq/VjO4zwVmEkz8avd0QkeUVFRdi3\nbx/27duHiRMnMrlSvcAKlogahPLychQVFUEul9d1KCQiVrBERGqmpaXF5NoISKiJmAmWiIikg4fp\nEBERiUFCXcRMsEREJBlSqmC5FjEREZEIWMESqdiiRYsQGxuLBw8e4OrVq3BwcAAADB8+HG+++eY/\n7h8SEoKLFy/i22+/VXeoRNIjnQKWCZZI1aZPnw4ASE1NxbvvvovNmzfXcUREVBeYYInUpLi4GHPm\nzEFWVhYqKiowbNgwvP3224/dJzw8HKtWrcIvv/yC27dvY+HChaisrERFRQUCAwPx8ssv45133oG7\nuzuioqLw999/Y8qUKRg0aFAd/VZE6iWlOVgmWCI12bhxI+RyOZYtW4b79+9j4MCBcHNzU/786tWr\nWLFiBdatWwdDQ0NMmzYNa9asgYWFBa5cuYLZs2cjJCQEAFBaWoqff/4ZERERWLx4MRMsNRpSWouY\nCZZITWJiYuDj4wMA0NPTQ5cuXRAfHw8AyMjIwAcffID169dDLpcjKysLKSkpmDFjhvLxhYWFyn/3\n6NEDANCqVSvcvXtXjb8FUR1jBUtET3pyaOvRVUr//vtvuLu7Y8OGDViwYAG0tbWhq6v7n/O3MplM\n1FiJ6ispDRHzMB0iNbG3t8fp06cBPJyPjY+Ph42NDQDAxcUFX3/9NZKTk7F//34YGxvD1NRUef+k\npCT89NNPdRY7ET0/VrBEauLn54c5c+bA19cXZWVl+Oyzz2Bubq78uUwmw5IlSzBq1CjY2dlh8eLF\nCAoKwk8//YTKysrHhouJGi3pFLA8mw4REUlH6h8Ha/1Yi4FeKozk2VjBEhGRZLCLmIiISAwSanJi\ngiUiIslgFzEREVEjxwqWiIikg3OwREREqschYiIiokaOFSwREUmHdApYJlgiIpIODhETERFJ0Pz5\n8/H222/Dx8cHMTExj/3swYMH8Pf3x7Bhw2q0LSZYIiKSDg2h9pdnuHDhAlJSUhAcHIygoCAEBQU9\n9vNFixahc+fONQ/1uX85IiKiOiIIQq0vzxIREYH+/fsDACwtLVFQUIDi4mLlzz///HPlz2uCCZaI\niKRDEGp/eYbc3FwYGxsrr8vlcuTk5CivGx
oaPleoTLBERET/4kVPNscuYiIikgwxu4jNzMyQm5ur\nvJ6dnQ1TU9Nab48VLBEREQBXV1ccOnQIABAXFwczM7PnHhZ+FE+4TkREkpEdcarWjzVz6f3M+yxZ\nsgR//fUXBEFAYGAgrl69CiMjI3h6euLTTz9FZmYmbty4AVtbW4wcORJDhgz5z20xwRIRkWTknDtd\n68ea9nRTYSTPxjlYIiKSDgmt5MQES0REkiFI6HR1bHIiIiISARMsERGRCDhETERE0sE5WCIiItWT\n0unqmGCJiEg6mGCJiIhUj13EREREjRwTLBERkQg4RExERNLBOVgiIiIRMMESERGpHg/TISIiEgO7\niImIiBo3VrBERCQZgiCdulA6kRIREUkIK1giIpIONjkRERGpHruIiYiIxMAuYiIiosaNFSwREUkG\nh4iJiIjEIKEEyyFiIiIiEbCCJSIi6ZDQQhNMsEREJBkCu4iJiIgaN1awREQkHRJqcmKCJSIiyeBh\nOkRERGKQUJOTdCIlIiKSEFawREQkGewiJiIiauRYwRIRkXSwyYmIiEj12EVMREQkBgl1ETPBEhGR\ndLDJiYiIqHFjgiUiIhIBh4iJiEgy2OREREQkBjY5ERERqR4rWCIiIjFIqIKVTqREREQSwgRLREQk\nAg4RExGRZEjpbDpMsEREJB1sciIiIlI9QUJNTkywREQkHRKqYAWFQqGo6yCIiIgaGunU2kRERBLC\nBEtERCQCJlgiIiIRMMESERGJgAmWiIhIBEywREREIvg/6yUtP+wTpzgAAAAASUVORK5CYII=\n",
            "text/plain": [
              "<matplotlib.figure.Figure at 0x7fd6beb9e8d0>"
            ]
          },
          "metadata": {
            "tags": []
          }
        }
      ]
    },
    {
      "metadata": {
        "id": "1YHneO3SStOp",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "# TODO"
      ]
    },
    {
      "metadata": {
        "id": "gGHaKTe1SuEk",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "- Attention visualizations aren't always clean, and interpreting attention weights is hit or miss\n",
        "- BLEU score and other n-gram overlap metrics for evaluating generated sequences\n",
        "- Perplexity for evaluating language models\n",
        "- Beam search decoding\n",
        "- Hierarchical softmax\n",
        "- Hierarchical attention\n",
        "- Transformer networks\n"
      ]
    }
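    ,
    {
      "metadata": {
        "id": "nGr4mPrec01",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "As a starting point for the BLEU and n-gram overlap TODO items above, here's a minimal sketch of modified (clipped) n-gram precision on toy token lists. The `ngram_precision` helper is illustrative only: real BLEU also combines precisions across several n-gram orders, applies a brevity penalty, and typically uses smoothing (see, e.g., `nltk.translate.bleu_score`)."
      ]
    },
    {
      "metadata": {
        "id": "nGr4mPrec02",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "from collections import Counter\n",
        "\n",
        "def ngram_precision(candidate, reference, n=2):\n",
        "    # Count n-grams on each side\n",
        "    cand = Counter(tuple(candidate[i:i+n]) for i in range(len(candidate)-n+1))\n",
        "    ref = Counter(tuple(reference[i:i+n]) for i in range(len(reference)-n+1))\n",
        "    # Clipped overlap: a candidate n-gram counts at most as often as it appears in the reference\n",
        "    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())\n",
        "    return overlap / max(sum(cand.values()), 1)\n",
        "\n",
        "candidate = ['the', 'cat', 'sat', 'on', 'the', 'mat']\n",
        "reference = ['the', 'cat', 'is', 'on', 'the', 'mat']\n",
        "print(ngram_precision(candidate, reference, n=1)) # unigram precision\n",
        "print(ngram_precision(candidate, reference, n=2)) # bigram precision"
      ],
      "execution_count": 0,
      "outputs": []
    }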
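    ,
    {
      "metadata": {
        "id": "bEamSrch001",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "And a toy sketch of the beam search TODO item: at each decoding step, keep the `beam_width` highest-scoring partial sequences instead of greedily taking the argmax. The `step_fn` below is a hypothetical stand-in that returns log-probabilities over the next token; a real decoder would condition on the RNN hidden state instead."
      ]
    },
    {
      "metadata": {
        "id": "bEamSrch002",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import numpy as np\n",
        "\n",
        "def beam_search(step_fn, vocab_size, beam_width=3, max_len=5):\n",
        "    # Each hypothesis is (token_sequence, cumulative_log_prob)\n",
        "    beams = [([], 0.0)]\n",
        "    for _ in range(max_len):\n",
        "        candidates = []\n",
        "        for seq, score in beams:\n",
        "            log_probs = step_fn(seq)  # log p(next token | seq)\n",
        "            for tok in range(vocab_size):\n",
        "                candidates.append((seq + [tok], score + log_probs[tok]))\n",
        "        # Keep only the top beam_width hypotheses by total log-probability\n",
        "        candidates.sort(key=lambda c: c[1], reverse=True)\n",
        "        beams = candidates[:beam_width]\n",
        "    return beams\n",
        "\n",
        "# Dummy next-token scorer: a fixed distribution over a 4-token vocabulary\n",
        "def step_fn(seq):\n",
        "    return np.log([0.1, 0.2, 0.3, 0.4])\n",
        "\n",
        "for seq, score in beam_search(step_fn, vocab_size=4):\n",
        "    print(seq, round(score, 2))"
      ],
      "execution_count": 0,
      "outputs": []
    }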
  ]
}