{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "🛠 09. Milestone Project 2: SkimLit 📄🔥 Exercise Solutions.ipynb",
      "provenance": [],
      "collapsed_sections": [],
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/ashikshafi08/Learning_Tensorflow/blob/main/Exercise%20Solutions/%20%F0%9F%9B%A0_09_Milestone_Project_2_SkimLit_%F0%9F%93%84%F0%9F%94%A5_Exercise_Solutions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zG61Lb6RbNxR"
      },
      "source": [
        "# 🛠 09. Milestone Project 2: SkimLit 📄🔥 Exercise Solutions.\n",
        "\n",
        "> **Note:** The order of the exercises is mixed.\n",
        "\n",
        "\n",
        "1. Check out the [Keras guide on using pretrained GloVe embeddings](https://keras.io/examples/nlp/pretrained_word_embeddings/). Can you get this working with one of our models?\n",
        "  - Hint: You'll want to incorporate it with a custom token [Embedding](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer.\n",
        "  - It's up to you whether or not you fine-tune the GloVe embeddings or leave them frozen.\n",
        "\n",
        "2. Try replacing the TensorFlow Hub Universal Sentence Encoder pretrained embedding with the [TensorFlow Hub BERT PubMed expert](https://tfhub.dev/google/experts/bert/pubmed/2) pretrained embedding (a language model pretrained on PubMed texts). Does this affect the results?\n",
        "\n",
        "  - Note: Using the BERT PubMed expert pretrained embedding requires an extra preprocessing step for sequences (as detailed in the [TensorFlow Hub guide](https://tfhub.dev/google/experts/bert/pubmed/2)).\n",
        "  - Does the BERT model beat the results reported in the [PubMed 200k RCT paper](https://arxiv.org/pdf/1710.06071.pdf)?\n",
        "\n",
        "3. What happens if you merge our `line_number` and `total_lines` features for each sequence, for example, creating an `X_of_Y` feature instead? Does this affect model performance?\n",
        "  - Another example: `line_number=1` and `total_lines=11` turns into `line_of_X=1_of_11`.\n",
        "\n",
        "4. Train `model_5` on all of the data in the training dataset until it stops improving. Since this might take a while, you might want to use:\n",
        "  - `tf.keras.callbacks.ModelCheckpoint` to save the model's best weights only.\n",
        "  - `tf.keras.callbacks.EarlyStopping` to stop the model from training once the validation loss has stopped improving for ~3 epochs.\n",
        "\n",
        "5. Write a function (or series of functions) to take a sample abstract string, preprocess it (in the same way our model has been trained), make a prediction on each sequence in the abstract and return the abstract in the format:\n",
        "```\n",
        "PREDICTED_LABEL: SEQUENCE\n",
        "PREDICTED_LABEL: SEQUENCE\n",
        "PREDICTED_LABEL: SEQUENCE\n",
        "PREDICTED_LABEL: SEQUENCE\n",
        "```\n",
        "You can find your own unstructured RCT abstract from PubMed, or try this one: [Baclofen promotes alcohol abstinence in alcohol dependent cirrhotic patients with hepatitis C virus (HCV) infection.](https://pubmed.ncbi.nlm.nih.gov/22244707/)"
      ]
    },
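    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "One way to approach exercise 3, sketched on a toy DataFrame before any real data is loaded (the column names match the ones built below; `line_of_X` is just the name suggested in the exercise):\n",
        "\n",
        "```python\n",
        "import pandas as pd\n",
        "\n",
        "# Toy frame standing in for train_df (same column names as the notebook)\n",
        "df = pd.DataFrame({'line_number': [0, 1, 2], 'total_lines': [2, 2, 2]})\n",
        "\n",
        "# Merge the two positional features into a single string feature\n",
        "df['line_of_X'] = df['line_number'].astype(str) + '_of_' + df['total_lines'].astype(str)\n",
        "\n",
        "print(df['line_of_X'].tolist())  # ['0_of_2', '1_of_2', '2_of_2']\n",
        "```"
      ]
    },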
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VfFYi9c4d1Zd"
      },
      "source": [
        "## Downloading and preprocessing the data"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "wKxWATW0h8fg"
      },
      "source": [
        "import tensorflow as tf\n",
        "from tensorflow.keras import layers "
      ],
      "execution_count": 1,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "crsxPlSIiZ25",
        "outputId": "7f6b293c-272a-44ee-c427-6602ba84903b"
      },
      "source": [
        "!git clone https://github.com/Franck-Dernoncourt/pubmed-rct.git\n",
        "!ls pubmed-rct"
      ],
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Cloning into 'pubmed-rct'...\n",
            "remote: Enumerating objects: 33, done.\u001b[K\n",
            "remote: Counting objects: 100% (3/3), done.\u001b[K\n",
            "remote: Compressing objects: 100% (3/3), done.\u001b[K\n",
            "remote: Total 33 (delta 0), reused 0 (delta 0), pack-reused 30\u001b[K\n",
            "Unpacking objects: 100% (33/33), done.\n",
            "PubMed_200k_RCT\n",
            "PubMed_200k_RCT_numbers_replaced_with_at_sign\n",
            "PubMed_20k_RCT\n",
            "PubMed_20k_RCT_numbers_replaced_with_at_sign\n",
            "README.md\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "16hM2pBoic6t"
      },
      "source": [
        "# Start by using the 20k dataset\n",
        "data_dir = \"pubmed-rct/PubMed_20k_RCT_numbers_replaced_with_at_sign/\""
      ],
      "execution_count": 3,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "zCqWBxzGiiq4",
        "outputId": "fe5d7d45-8e28-467c-841a-d2863de839d3"
      },
      "source": [
        "# Check all of the filenames in the target directory\n",
        "import os\n",
        "filenames = [data_dir + filename for filename in os.listdir(data_dir)]\n",
        "filenames"
      ],
      "execution_count": 4,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "['pubmed-rct/PubMed_20k_RCT_numbers_replaced_with_at_sign/train.txt',\n",
              " 'pubmed-rct/PubMed_20k_RCT_numbers_replaced_with_at_sign/dev.txt',\n",
              " 'pubmed-rct/PubMed_20k_RCT_numbers_replaced_with_at_sign/test.txt']"
            ]
          },
          "metadata": {},
          "execution_count": 4
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "TjT1z2LxilII"
      },
      "source": [
        "# Create function to read the lines of a document\n",
        "def get_lines(filename):\n",
        "  with open(filename, \"r\") as f:\n",
        "    return f.readlines()"
      ],
      "execution_count": 5,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qPuqmqipipRL"
      },
      "source": [
        "# Creating a preprocessing function that returns a dictionary\n",
        "def preprocess_text_with_line_numbers(filename):\n",
        "  \"\"\"Returns a list of dictionaries of abstract line data.\n",
        "\n",
        "  Takes in filename, reads its contents and sorts through each line,\n",
        "  extracting things like the target label, the text of the sentence,\n",
        "  how many sentences are in the current abstract and what sentence number\n",
        "  the target line is.\n",
        "\n",
        "  Args:\n",
        "      filename: a string of the target text file to read and extract line data\n",
        "      from.\n",
        "\n",
        "  Returns:\n",
        "      A list of dictionaries, each containing a line from an abstract,\n",
        "      the line's label, the line's position in the abstract and the total number\n",
        "      of lines in the abstract the line is from. For example:\n",
        "\n",
        "      [{\"target\": \"CONCLUSION\",\n",
        "        \"text\": \"The study couldn't have gone better, turns out people are kinder than you think\",\n",
        "        \"line_number\": 8,\n",
        "        \"total_lines\": 8}]\n",
        "  \"\"\"\n",
        "  input_lines = get_lines(filename) # get all lines from filename\n",
        "  abstract_lines = \"\" # create an empty abstract\n",
        "  abstract_samples = [] # create an empty list of abstracts\n",
        "  \n",
        "  # Loop through each line in target file\n",
        "  for line in input_lines:\n",
        "    if line.startswith(\"###\"): # check to see if line is an ID line\n",
        "      abstract_id = line\n",
        "      abstract_lines = \"\" # reset abstract string\n",
        "    elif line.isspace(): # check to see if line is a new line\n",
        "      abstract_line_split = abstract_lines.splitlines() # split abstract into separate lines\n",
        "\n",
        "      # Iterate through each line in abstract and count them at the same time\n",
        "      for abstract_line_number, abstract_line in enumerate(abstract_line_split):\n",
        "        line_data = {} # create empty dict to store data from line\n",
        "        target_text_split = abstract_line.split(\"\\t\") # split target label from text\n",
        "        line_data[\"target\"] = target_text_split[0] # get target label\n",
        "        line_data[\"text\"] = target_text_split[1].lower() # get target text and lower it\n",
        "        line_data[\"line_number\"] = abstract_line_number # what number line does the line appear in the abstract?\n",
        "        line_data[\"total_lines\"] = len(abstract_line_split) - 1 # how many total lines are in the abstract? (start from 0)\n",
        "        abstract_samples.append(line_data) # add line data to abstract samples list\n",
        "    \n",
        "    else: # if the above conditions aren't fulfilled, the line contains a labelled sentence\n",
        "      abstract_lines += line\n",
        "  \n",
        "  return abstract_samples"
      ],
      "execution_count": 6,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ZMc27jMgi29r",
        "outputId": "4c3ce0a6-32df-4197-d993-c0a99df2a4b2"
      },
      "source": [
        "%%time\n",
        "# Get data from file and preprocess it\n",
        "train_samples = preprocess_text_with_line_numbers(data_dir + \"train.txt\")\n",
        "val_samples = preprocess_text_with_line_numbers(data_dir + \"dev.txt\") \n",
        "test_samples = preprocess_text_with_line_numbers(data_dir + \"test.txt\")\n",
        "\n",
        "len(train_samples), len(val_samples), len(test_samples)"
      ],
      "execution_count": 7,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 494 ms, sys: 119 ms, total: 613 ms\n",
            "Wall time: 606 ms\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 483
        },
        "id": "lezgMjlQi9Ab",
        "outputId": "b7aa9a51-f616-4e9d-b275-7cd00de62e49"
      },
      "source": [
        "# Loading our data into a dataframe\n",
        "import pandas as pd\n",
        "train_df = pd.DataFrame(train_samples)\n",
        "val_df = pd.DataFrame(val_samples)\n",
        "test_df = pd.DataFrame(test_samples)\n",
        "train_df.head(14)"
      ],
      "execution_count": 8,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/html": [
              "<div>\n",
              "<style scoped>\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "</style>\n",
              "<table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr style=\"text-align: right;\">\n",
              "      <th></th>\n",
              "      <th>target</th>\n",
              "      <th>text</th>\n",
              "      <th>line_number</th>\n",
              "      <th>total_lines</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th>0</th>\n",
              "      <td>OBJECTIVE</td>\n",
              "      <td>to investigate the efficacy of @ weeks of dail...</td>\n",
              "      <td>0</td>\n",
              "      <td>11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>1</th>\n",
              "      <td>METHODS</td>\n",
              "      <td>a total of @ patients with primary knee oa wer...</td>\n",
              "      <td>1</td>\n",
              "      <td>11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>2</th>\n",
              "      <td>METHODS</td>\n",
              "      <td>outcome measures included pain reduction and i...</td>\n",
              "      <td>2</td>\n",
              "      <td>11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>3</th>\n",
              "      <td>METHODS</td>\n",
              "      <td>pain was assessed using the visual analog pain...</td>\n",
              "      <td>3</td>\n",
              "      <td>11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>4</th>\n",
              "      <td>METHODS</td>\n",
              "      <td>secondary outcome measures included the wester...</td>\n",
              "      <td>4</td>\n",
              "      <td>11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>5</th>\n",
              "      <td>METHODS</td>\n",
              "      <td>serum levels of interleukin @ ( il-@ ) , il-@ ...</td>\n",
              "      <td>5</td>\n",
              "      <td>11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>6</th>\n",
              "      <td>RESULTS</td>\n",
              "      <td>there was a clinically relevant reduction in t...</td>\n",
              "      <td>6</td>\n",
              "      <td>11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>7</th>\n",
              "      <td>RESULTS</td>\n",
              "      <td>the mean difference between treatment arms ( @...</td>\n",
              "      <td>7</td>\n",
              "      <td>11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>8</th>\n",
              "      <td>RESULTS</td>\n",
              "      <td>further , there was a clinically relevant redu...</td>\n",
              "      <td>8</td>\n",
              "      <td>11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>9</th>\n",
              "      <td>RESULTS</td>\n",
              "      <td>these differences remained significant at @ we...</td>\n",
              "      <td>9</td>\n",
              "      <td>11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>10</th>\n",
              "      <td>RESULTS</td>\n",
              "      <td>the outcome measures in rheumatology clinical ...</td>\n",
              "      <td>10</td>\n",
              "      <td>11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>11</th>\n",
              "      <td>CONCLUSIONS</td>\n",
              "      <td>low-dose oral prednisolone had both a short-te...</td>\n",
              "      <td>11</td>\n",
              "      <td>11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>12</th>\n",
              "      <td>BACKGROUND</td>\n",
              "      <td>emotional eating is associated with overeating...</td>\n",
              "      <td>0</td>\n",
              "      <td>10</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>13</th>\n",
              "      <td>BACKGROUND</td>\n",
              "      <td>yet , empirical evidence for individual ( trai...</td>\n",
              "      <td>1</td>\n",
              "      <td>10</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n",
              "</div>"
            ],
            "text/plain": [
              "         target  ... total_lines\n",
              "0     OBJECTIVE  ...          11\n",
              "1       METHODS  ...          11\n",
              "2       METHODS  ...          11\n",
              "3       METHODS  ...          11\n",
              "4       METHODS  ...          11\n",
              "5       METHODS  ...          11\n",
              "6       RESULTS  ...          11\n",
              "7       RESULTS  ...          11\n",
              "8       RESULTS  ...          11\n",
              "9       RESULTS  ...          11\n",
              "10      RESULTS  ...          11\n",
              "11  CONCLUSIONS  ...          11\n",
              "12   BACKGROUND  ...          10\n",
              "13   BACKGROUND  ...          10\n",
              "\n",
              "[14 rows x 4 columns]"
            ]
          },
          "metadata": {},
          "execution_count": 8
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "QId1cyshjEEC",
        "outputId": "776c485f-9b94-447f-a75f-d66cb793c69f"
      },
      "source": [
        "# Convert abstract text lines into lists \n",
        "train_sentences = train_df[\"text\"].tolist()\n",
        "val_sentences = val_df[\"text\"].tolist()\n",
        "test_sentences = test_df[\"text\"].tolist()\n",
        "len(train_sentences), len(val_sentences), len(test_sentences)"
      ],
      "execution_count": 9,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "(180040, 30212, 30135)"
            ]
          },
          "metadata": {},
          "execution_count": 9
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "-mBzQ4K_jHmP",
        "outputId": "2e27e2a0-5321-47ce-c004-c87a3fd99ee1"
      },
      "source": [
        "# One hot encoding the labels \n",
        "from sklearn.preprocessing import OneHotEncoder\n",
        "one_hot_encoder = OneHotEncoder(sparse=False)\n",
        "\n",
        "train_labels_one_hot = one_hot_encoder.fit_transform(train_df[\"target\"].to_numpy().reshape(-1, 1))\n",
        "val_labels_one_hot = one_hot_encoder.transform(val_df[\"target\"].to_numpy().reshape(-1, 1))\n",
        "test_labels_one_hot = one_hot_encoder.transform(test_df[\"target\"].to_numpy().reshape(-1, 1))\n",
        "\n",
        "# Check what training labels look like\n",
        "train_labels_one_hot"
      ],
      "execution_count": 10,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "array([[0., 0., 0., 1., 0.],\n",
              "       [0., 0., 1., 0., 0.],\n",
              "       [0., 0., 1., 0., 0.],\n",
              "       ...,\n",
              "       [0., 0., 0., 0., 1.],\n",
              "       [0., 1., 0., 0., 0.],\n",
              "       [0., 1., 0., 0., 0.]])"
            ]
          },
          "metadata": {},
          "execution_count": 10
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "BmqMBV3ejNjg",
        "outputId": "5a24222b-2bb1-4fa7-cdfc-f9169a089da0"
      },
      "source": [
        "# Extract labels and encode them into integers\n",
        "from sklearn.preprocessing import LabelEncoder \n",
        "\n",
        "label_encoder = LabelEncoder() \n",
        "\n",
        "train_labels_encoded = label_encoder.fit_transform(train_df[\"target\"].to_numpy())\n",
        "val_labels_encoded = label_encoder.transform(val_df[\"target\"].to_numpy())\n",
        "test_labels_encoded = label_encoder.transform(test_df[\"target\"].to_numpy())\n",
        "\n",
        "# Check what training labels look like\n",
        "train_labels_encoded"
      ],
      "execution_count": 11,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "array([3, 2, 2, ..., 4, 1, 1])"
            ]
          },
          "metadata": {},
          "execution_count": 11
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "2HBTkxEKjdlu",
        "outputId": "54551525-58a8-4d3c-c629-9ae066a662b2"
      },
      "source": [
        "# Get class names and number of classes from LabelEncoder instance \n",
        "num_classes = len(label_encoder.classes_)\n",
        "class_names = label_encoder.classes_\n",
        "num_classes , class_names"
      ],
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "(5, array(['BACKGROUND', 'CONCLUSIONS', 'METHODS', 'OBJECTIVE', 'RESULTS'],\n",
              "       dtype=object))"
            ]
          },
          "metadata": {},
          "execution_count": 12
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hQfKDXoXjzqX"
      },
      "source": [
        "\n",
        "### 1. Check out the [Keras guide on using pretrained GloVe embeddings](https://keras.io/examples/nlp/pretrained_word_embeddings/). Can you get this working with one of our models?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "l5P0kD6Cli7c",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "bb7f2335-5920-46cd-9a0c-ffadfc3a7cc6"
      },
      "source": [
        "# Loading the pre-trained embeddings \n",
        "!wget http://nlp.stanford.edu/data/glove.6B.zip\n",
        "!unzip -q glove.6B.zip"
      ],
      "execution_count": 13,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "--2021-09-12 08:59:27--  http://nlp.stanford.edu/data/glove.6B.zip\n",
            "Resolving nlp.stanford.edu (nlp.stanford.edu)... 171.64.67.140\n",
            "Connecting to nlp.stanford.edu (nlp.stanford.edu)|171.64.67.140|:80... connected.\n",
            "HTTP request sent, awaiting response... 302 Found\n",
            "Location: https://nlp.stanford.edu/data/glove.6B.zip [following]\n",
            "--2021-09-12 08:59:27--  https://nlp.stanford.edu/data/glove.6B.zip\n",
            "Connecting to nlp.stanford.edu (nlp.stanford.edu)|171.64.67.140|:443... connected.\n",
            "HTTP request sent, awaiting response... 301 Moved Permanently\n",
            "Location: http://downloads.cs.stanford.edu/nlp/data/glove.6B.zip [following]\n",
            "--2021-09-12 08:59:27--  http://downloads.cs.stanford.edu/nlp/data/glove.6B.zip\n",
            "Resolving downloads.cs.stanford.edu (downloads.cs.stanford.edu)... 171.64.64.22\n",
            "Connecting to downloads.cs.stanford.edu (downloads.cs.stanford.edu)|171.64.64.22|:80... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 862182613 (822M) [application/zip]\n",
            "Saving to: ‘glove.6B.zip’\n",
            "\n",
            "glove.6B.zip        100%[===================>] 822.24M  5.14MB/s    in 2m 40s  \n",
            "\n",
            "2021-09-12 09:02:07 (5.14 MB/s) - ‘glove.6B.zip’ saved [862182613/862182613]\n",
            "\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "b5lsTdxjm3BY",
        "outputId": "19dbdad6-d3cb-458b-85cf-b925b995aa9c"
      },
      "source": [
        "# Getting the path of the glove embedding (using 100D)\n",
        "import numpy as np \n",
        "glove_path = 'glove.6B.100d.txt'\n",
        "\n",
        "embedding_index = {}\n",
        "\n",
        "# Build a dict mapping each word to its GloVe vector (word --> [0.1, -0.2, ...])\n",
        "with open(glove_path) as f:\n",
        "  for line in f:\n",
        "\n",
        "    # Split each line into the word and its vector coefficients\n",
        "    word, coefs = line.split(maxsplit=1)\n",
        "    coefs = np.fromstring(coefs, 'f', sep=' ')\n",
        "    \n",
        "    # Adding the coefs to our embedding dict \n",
        "    embedding_index[word] = coefs\n",
        "\n",
        "print(f'Found {len(embedding_index)} word vectors')"
      ],
      "execution_count": 14,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Found 400000 word vectors\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-BOJqK3Ynv0F"
      },
      "source": [
        "Great, we've loaded the embeddings. The next step is to create a corresponding embedding matrix so we can load `embedding_index` into our Embedding layer.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "F_P-fPKCytqi"
      },
      "source": [
        "# Getting the sentences and characters \n",
        "train_sentences = train_df[\"text\"].tolist()\n",
        "val_sentences = val_df[\"text\"].tolist()\n",
        "\n",
        "# Make function to split sentences into characters\n",
        "def split_chars(text):\n",
        "  return \" \".join(list(text))\n",
        "\n",
        "# Split sequence-level data splits into character-level data splits\n",
        "train_chars = [split_chars(sentence) for sentence in train_sentences]\n",
        "val_chars = [split_chars(sentence) for sentence in val_sentences]\n"
      ],
      "execution_count": 15,
      "outputs": []
    },
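    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick sanity check of what `split_chars` does (redefined here so the snippet stands alone):\n",
        "\n",
        "```python\n",
        "# Split a sentence into space-separated characters\n",
        "def split_chars(text):\n",
        "  return ' '.join(list(text))\n",
        "\n",
        "print(split_chars('skimlit'))  # s k i m l i t\n",
        "```"
      ]
    },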
    {
      "cell_type": "code",
      "metadata": {
        "id": "ZPKO4FwjpJDr"
      },
      "source": [
        "# Creating a text vectorization layer (68k vocab size comes from the paper)\n",
        "from tensorflow.keras.layers import TextVectorization\n",
        "\n",
        "text_vectorizer = TextVectorization(max_tokens=68000,\n",
        "                                    output_sequence_length=56)\n",
        "\n",
        "# Adapt our text vectorizer to the training sentences\n",
        "text_vectorizer.adapt(train_sentences)"
      ],
      "execution_count": 16,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "olNGzXBmxyUt",
        "outputId": "460a168f-2158-4a32-958e-adbaa7780288"
      },
      "source": [
        "# Getting the vocabulary of the vectorizer \n",
        "text_vocab = text_vectorizer.get_vocabulary()\n",
        "len(text_vocab)"
      ],
      "execution_count": 17,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "64841"
            ]
          },
          "metadata": {},
          "execution_count": 17
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "pzxYjS4u5n6q"
      },
      "source": [
        "# Getting the dict mapping word --> index \n",
        "word_index_text = dict(zip(text_vocab , range(len(text_vocab))))"
      ],
      "execution_count": 18,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "MfDOTBaWfLom"
      },
      "source": [
        "# Creating a function that builds an embedding matrix from the GloVe index\n",
        "def get_glove_embedding_matrix(num_tokens, embedding_dim, word_index):\n",
        "\n",
        "  # Count vocab words found in (hits) and absent from (misses) the GloVe index\n",
        "  hits, misses = 0, 0\n",
        "\n",
        "  # Prepare the embedding matrix (out-of-vocabulary rows stay all-zero)\n",
        "  embedding_matrix = np.zeros((num_tokens, embedding_dim))\n",
        "  for word, i in word_index.items():\n",
        "    embedding_vector = embedding_index.get(word)\n",
        "    if embedding_vector is not None:\n",
        "      embedding_matrix[i] = embedding_vector\n",
        "      hits += 1\n",
        "    else:\n",
        "      misses += 1\n",
        "\n",
        "  return embedding_matrix, hits, misses"
      ],
      "execution_count": 19,
      "outputs": []
    },
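    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see what the hits/misses bookkeeping means, here is the same logic run on a made-up 3-dimensional index and a 4-word vocabulary (toy values, purely for illustration): words present in the index get their vector copied into the matrix, everything else keeps an all-zero row.\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# Toy stand-ins for embedding_index and word_index_text\n",
        "toy_index = {'the': np.array([0.1, 0.2, 0.3]), 'study': np.array([0.4, 0.5, 0.6])}\n",
        "toy_word_index = {'the': 0, 'study': 1, 'il-@': 2, 'oa': 3}\n",
        "\n",
        "matrix = np.zeros((len(toy_word_index), 3))\n",
        "hits, misses = 0, 0\n",
        "for word, i in toy_word_index.items():\n",
        "    vector = toy_index.get(word)\n",
        "    if vector is not None:\n",
        "        matrix[i] = vector\n",
        "        hits += 1\n",
        "    else:\n",
        "        misses += 1  # out-of-vocabulary rows stay all-zero\n",
        "\n",
        "print(hits, misses)  # 2 2\n",
        "```"
      ]
    },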
    {
      "cell_type": "code",
      "metadata": {
        "id": "-kMAsx4idtuS",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "4caa5cf3-4455-436f-bd50-3ac79bbd01e0"
      },
      "source": [
        "# Using the above function to get the embedding matrix\n",
        "num_tokens_text = len(text_vocab) + 2  # +2 as in the Keras guide (padding and OOV tokens)\n",
        "embedding_dim = 100\n",
        "\n",
        "sentence_embedding_matrix, hits_, misses_ = get_glove_embedding_matrix(num_tokens_text, embedding_dim, word_index_text)\n",
        "print(f'Hits: {hits_} and Misses: {misses_} for the sentence embedding matrix')"
      ],
      "execution_count": 20,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Hits: 29730 and Misses: 35111 for the sentence embedding matrix\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "eoE_WjTp5UFO"
      },
      "source": [
        "# Loading the embedding matrix into an Embedding layer (sentence-level)\n",
        "from tensorflow.keras.layers import Embedding\n",
        "\n",
        "sen_embedding_layer = Embedding(num_tokens_text,\n",
        "                                embedding_dim,\n",
        "                                embeddings_initializer=tf.keras.initializers.Constant(sentence_embedding_matrix),\n",
        "                                trainable=False)  # keep the GloVe embeddings frozen\n"
      ],
      "execution_count": 21,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4azUa3cWgt8y"
      },
      "source": [
        "Before making the datasets, we need to convert our strings into numerical values using the vectorizer layers we created for both sentences and characters."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "I5eKijpyg_Yn",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "ab5c0120-8096-4c06-9620-305d659f853b"
      },
      "source": [
        "# Creating the datasets for our sentences\n",
        "train_sen_vectors = text_vectorizer(np.array([[sen] for sen in train_sentences])).numpy()\n",
        "val_sen_vectors = text_vectorizer(np.array([[sen] for sen in val_sentences])).numpy()\n",
        "\n",
        "# Training and validation datasets\n",
        "train_ds = tf.data.Dataset.from_tensor_slices((train_sen_vectors, train_labels_encoded))\n",
        "val_ds = tf.data.Dataset.from_tensor_slices((val_sen_vectors, val_labels_encoded))\n",
        "\n",
        "# Batch and prefetch (performance optimization)\n",
        "train_ds = train_ds.batch(32).prefetch(tf.data.AUTOTUNE)\n",
        "val_ds = val_ds.batch(32).prefetch(tf.data.AUTOTUNE)\n",
        "\n",
        "train_ds, val_ds"
      ],
      "execution_count": 22,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "(<PrefetchDataset shapes: ((None, 56), (None,)), types: (tf.int64, tf.int64)>,\n",
              " <PrefetchDataset shapes: ((None, 56), (None,)), types: (tf.int64, tf.int64)>)"
            ]
          },
          "metadata": {},
          "execution_count": 22
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xtbL1JJqho3L"
      },
      "source": [
        "Perfect! Now we'll build a model that uses the GloVe embeddings at its core."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "53yxu1PE2Yk9",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "c4e6e027-f516-4221-f9cb-25a9d6c77ee4"
      },
      "source": [
        "train_sen_vectors[0].shape"
      ],
      "execution_count": 23,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "(56,)"
            ]
          },
          "metadata": {},
          "execution_count": 23
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FsopqN9tkccx",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "fcc89051-11aa-42c3-a68f-59f75e2864ec"
      },
      "source": [
        "# Building a Conv1D model on top of the frozen GloVe embedding layer\n",
        "input = layers.Input(shape = (None,) , dtype = 'int64')\n",
        "glove_emb = sen_embedding_layer(input)\n",
        "x = layers.Conv1D(128 , 5 , activation= 'relu' , padding = 'same')(glove_emb)\n",
        "x = layers.MaxPooling1D(5, padding = 'same')(x)\n",
        "x = layers.Conv1D(128, 5, activation=\"relu\" , padding = 'same')(x)\n",
        "x = layers.MaxPooling1D(5 , padding ='same')(x)\n",
        "x = layers.Conv1D(128, 5, activation=\"relu\" , padding = 'same')(x)\n",
        "x = layers.GlobalMaxPooling1D()(x)\n",
        "x = layers.Dense(128, activation=\"relu\")(x)\n",
        "x = layers.Dropout(0.5)(x)\n",
        "output = layers.Dense(len(class_names) , activation= 'softmax')(x)\n",
        "\n",
        "glove_model = tf.keras.Model(input , output)\n",
        "glove_model.summary()\n"
      ],
      "execution_count": 24,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Model: \"model\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_1 (InputLayer)         [(None, None)]            0         \n",
            "_________________________________________________________________\n",
            "embedding (Embedding)        (None, None, 100)         6484300   \n",
            "_________________________________________________________________\n",
            "conv1d (Conv1D)              (None, None, 128)         64128     \n",
            "_________________________________________________________________\n",
            "max_pooling1d (MaxPooling1D) (None, None, 128)         0         \n",
            "_________________________________________________________________\n",
            "conv1d_1 (Conv1D)            (None, None, 128)         82048     \n",
            "_________________________________________________________________\n",
            "max_pooling1d_1 (MaxPooling1 (None, None, 128)         0         \n",
            "_________________________________________________________________\n",
            "conv1d_2 (Conv1D)            (None, None, 128)         82048     \n",
            "_________________________________________________________________\n",
            "global_max_pooling1d (Global (None, 128)               0         \n",
            "_________________________________________________________________\n",
            "dense (Dense)                (None, 128)               16512     \n",
            "_________________________________________________________________\n",
            "dropout (Dropout)            (None, 128)               0         \n",
            "_________________________________________________________________\n",
            "dense_1 (Dense)              (None, 5)                 645       \n",
            "=================================================================\n",
            "Total params: 6,729,681\n",
            "Trainable params: 245,381\n",
            "Non-trainable params: 6,484,300\n",
            "_________________________________________________________________\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xgqA8gTnTWHM"
      },
      "source": [
        "Now that our strings have been converted to NumPy arrays of integer indices, we can compile and fit the GloVe model."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HqbLlB2CSxDx",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "074f1fc2-6197-4217-cba4-9dc55bab379b"
      },
      "source": [
        "# Compiling and fitting the model\n",
        "glove_model.compile(loss = tf.keras.losses.SparseCategoricalCrossentropy() , \n",
        "                     optimizer = tf.keras.optimizers.Adam(), \n",
        "                     metrics = ['accuracy'])\n",
        "\n",
        "glove_model.fit(train_ds,\n",
        "                 epochs = 3 , \n",
        "                 validation_data = val_ds)"
      ],
      "execution_count": 25,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Epoch 1/3\n",
            "5627/5627 [==============================] - 56s 7ms/step - loss: 0.6512 - accuracy: 0.7581 - val_loss: 0.5622 - val_accuracy: 0.7934\n",
            "Epoch 2/3\n",
            "5627/5627 [==============================] - 40s 7ms/step - loss: 0.5269 - accuracy: 0.8087 - val_loss: 0.5219 - val_accuracy: 0.8084\n",
            "Epoch 3/3\n",
            "5627/5627 [==============================] - 40s 7ms/step - loss: 0.4815 - accuracy: 0.8244 - val_loss: 0.5328 - val_accuracy: 0.8102\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<keras.callbacks.History at 0x7fc6aef1e390>"
            ]
          },
          "metadata": {},
          "execution_count": 25
        }
      ]
    },
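    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make comparisons with later models fairer, we can also run a full evaluation pass on the validation set (an optional step; exact numbers will vary between runs):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Evaluate the GloVe model on the full validation dataset\n",
        "glove_loss, glove_acc = glove_model.evaluate(val_ds)\n",
        "print(f'GloVe model validation accuracy: {glove_acc:.4f}')"
      ],
      "execution_count": null,
      "outputs": []
    },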
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "G0IUpoQ-hWsu"
      },
      "source": [
        "### 2. Try replacing the TensorFlow Hub Universal Sentence Encoder pretrained embedding with the TensorFlow Hub BERT PubMed expert (a language model pretrained on PubMed texts) pretrained embedding. Does this affect results?\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HyMvY8zH8Fhp",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "0d77570e-c71f-4587-9108-290b1e25f119"
      },
      "source": [
        "# Install tensorflow_text (required by the BERT preprocessing model)\n",
        "!pip install tensorflow_text"
      ],
      "execution_count": 26,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Collecting tensorflow_text\n",
            "  Downloading tensorflow_text-2.6.0-cp37-cp37m-manylinux1_x86_64.whl (4.4 MB)\n",
            "\u001b[K     |████████████████████████████████| 4.4 MB 5.2 MB/s \n",
            "\u001b[?25hRequirement already satisfied: tensorflow<2.7,>=2.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow_text) (2.6.0)\n",
            "Requirement already satisfied: tensorflow-hub>=0.8.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow_text) (0.12.0)\n",
            "Requirement already satisfied: gast==0.4.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (0.4.0)\n",
            "Requirement already satisfied: numpy~=1.19.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (1.19.5)\n",
            "Requirement already satisfied: h5py~=3.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (3.1.0)\n",
            "Requirement already satisfied: keras~=2.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (2.6.0)\n",
            "Requirement already satisfied: grpcio<2.0,>=1.37.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (1.39.0)\n",
            "Requirement already satisfied: wheel~=0.35 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (0.37.0)\n",
            "Requirement already satisfied: opt-einsum~=3.3.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (3.3.0)\n",
            "Requirement already satisfied: absl-py~=0.10 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (0.12.0)\n",
            "Requirement already satisfied: wrapt~=1.12.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (1.12.1)\n",
            "Requirement already satisfied: flatbuffers~=1.12.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (1.12)\n",
            "Requirement already satisfied: tensorflow-estimator~=2.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (2.6.0)\n",
            "Requirement already satisfied: astunparse~=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (1.6.3)\n",
            "Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (3.17.3)\n",
            "Requirement already satisfied: clang~=5.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (5.0)\n",
            "Requirement already satisfied: six~=1.15.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (1.15.0)\n",
            "Requirement already satisfied: typing-extensions~=3.7.4 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (3.7.4.3)\n",
            "Requirement already satisfied: keras-preprocessing~=1.1.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (1.1.2)\n",
            "Requirement already satisfied: termcolor~=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (1.1.0)\n",
            "Requirement already satisfied: google-pasta~=0.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (0.2.0)\n",
            "Requirement already satisfied: tensorboard~=2.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7,>=2.6.0->tensorflow_text) (2.6.0)\n",
            "Requirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py~=3.1.0->tensorflow<2.7,>=2.6.0->tensorflow_text) (1.5.2)\n",
            "Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (3.3.4)\n",
            "Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (2.23.0)\n",
            "Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (0.6.1)\n",
            "Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (57.4.0)\n",
            "Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (1.34.0)\n",
            "Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (0.4.5)\n",
            "Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (1.8.0)\n",
            "Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (1.0.1)\n",
            "Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (4.7.2)\n",
            "Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (4.2.2)\n",
            "Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (0.2.8)\n",
            "Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (1.3.0)\n",
            "Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (4.6.4)\n",
            "Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (0.4.8)\n",
            "Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (2.10)\n",
            "Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (3.0.4)\n",
            "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (2021.5.30)\n",
            "Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (1.24.3)\n",
            "Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (3.1.1)\n",
            "Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->markdown>=2.6.8->tensorboard~=2.6->tensorflow<2.7,>=2.6.0->tensorflow_text) (3.5.0)\n",
            "Installing collected packages: tensorflow-text\n",
            "Successfully installed tensorflow-text-2.6.0\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "klo8Oa4rhqbk"
      },
      "source": [
        "# Loading both the preprocessing model and the BERT encoder from TensorFlow Hub\n",
        "import tensorflow_text as text\n",
        "import tensorflow_hub as hub\n",
        "\n",
        "\n",
        "preprocessing_layer = hub.KerasLayer('https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3' ,\n",
        "                                     trainable = False , name = 'pubmed_bert_preprocessor')\n",
        "\n",
        "bert_layer = hub.KerasLayer('https://tfhub.dev/google/experts/bert/pubmed/2' ,\n",
        "                            trainable = False , \n",
        "                            name = 'bert_model_layer')"
      ],
      "execution_count": 27,
      "outputs": []
    },
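    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before wiring these into a model, it can help to peek at what the preprocessing layer produces. As an optional check on a single sentence: the preprocessor returns a dict of `input_word_ids`, `input_mask` and `input_type_ids`, each padded or truncated to a default length of 128 tokens."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Inspect the BERT preprocessing outputs for a sample sentence\n",
        "sample_inputs = preprocessing_layer(tf.constant(['this is a sample abstract sentence .']))\n",
        "for key, value in sample_inputs.items():\n",
        "    print(key, value.shape)  # each tensor should have shape (1, 128)"
      ],
      "execution_count": null,
      "outputs": []
    },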
    {
      "cell_type": "code",
      "metadata": {
        "id": "r5ndj9nB7cGm",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "985336b7-5690-4ffd-db9c-72235ea9ac53"
      },
      "source": [
        "# Creating a model out of it \n",
        "input = layers.Input(shape = [] , dtype = tf.string , name = 'input_sentences')\n",
        "bert_inputs = preprocessing_layer(input)\n",
        "bert_embedding = bert_layer(bert_inputs)\n",
        "print(f'bert embedding shape: {bert_embedding}')\n",
        "x = layers.Dense(128 , activation = 'relu')(bert_embedding['pooled_output'])\n",
        "x = layers.Dropout(0.5)(x)\n",
        "output = layers.Dense(len(class_names) , activation= 'softmax')(x)\n",
        "\n",
        "# Packing into a model\n",
        "pubmed_bert_model = tf.keras.Model(input , output)\n",
        "pubmed_bert_model.summary()"
      ],
      "execution_count": 28,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "bert embedding shape: {'sequence_output': <KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>, 'encoder_outputs': [<KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>, <KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>, <KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>, <KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>, <KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>, <KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>, <KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>, <KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>, <KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>, <KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>, <KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>, <KerasTensor: shape=(None, 128, 768) dtype=float32 (created by layer 'bert_model_layer')>], 'pooled_output': <KerasTensor: shape=(None, 768) dtype=float32 (created by layer 'bert_model_layer')>, 'default': <KerasTensor: shape=(None, 768) dtype=float32 (created by layer 'bert_model_layer')>}\n",
            "Model: \"model_1\"\n",
            "__________________________________________________________________________________________________\n",
            "Layer (type)                    Output Shape         Param #     Connected to                     \n",
            "==================================================================================================\n",
            "input_sentences (InputLayer)    [(None,)]            0                                            \n",
            "__________________________________________________________________________________________________\n",
            "pubmed_bert_preprocessor (Keras {'input_mask': (None 0           input_sentences[0][0]            \n",
            "__________________________________________________________________________________________________\n",
            "bert_model_layer (KerasLayer)   {'sequence_output':  109482241   pubmed_bert_preprocessor[0][0]   \n",
            "                                                                 pubmed_bert_preprocessor[0][1]   \n",
            "                                                                 pubmed_bert_preprocessor[0][2]   \n",
            "__________________________________________________________________________________________________\n",
            "dense_2 (Dense)                 (None, 128)          98432       bert_model_layer[0][13]          \n",
            "__________________________________________________________________________________________________\n",
            "dropout_1 (Dropout)             (None, 128)          0           dense_2[0][0]                    \n",
            "__________________________________________________________________________________________________\n",
            "dense_3 (Dense)                 (None, 5)            645         dropout_1[0][0]                  \n",
            "==================================================================================================\n",
            "Total params: 109,581,318\n",
            "Trainable params: 99,077\n",
            "Non-trainable params: 109,482,241\n",
            "__________________________________________________________________________________________________\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "hzG6QZyYAX5b"
      },
      "source": [
        "# Making datasets for the pubmed model\n",
        "\n",
        "train_sen_ds = tf.data.Dataset.from_tensor_slices((train_sentences, train_labels_encoded))\n",
        "train_sen_ds = train_sen_ds.batch(32).prefetch(tf.data.AUTOTUNE)\n",
        "\n",
        "val_sen_ds = tf.data.Dataset.from_tensor_slices((val_sentences, val_labels_encoded))\n",
        "val_sen_ds = val_sen_ds.batch(32).prefetch(tf.data.AUTOTUNE)"
      ],
      "execution_count": 29,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "nU7dVhYl8ZrR",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "ef9ebca8-caa2-444c-b7f2-ab3f6dabefac"
      },
      "source": [
        "# Compiling the PubMed BERT model and fitting it on ~10% of the batches per epoch (you can fit on the whole dataset)\n",
        "\n",
        "pubmed_bert_model.compile(loss = tf.keras.losses.SparseCategoricalCrossentropy() , \n",
        "                          optimizer = tf.keras.optimizers.Adam(), \n",
        "                          metrics =['accuracy'])\n",
        "\n",
        "pubmed_bert_model.fit(train_sen_ds ,\n",
        "                      steps_per_epoch = int(0.1 * len(train_sen_ds)),\n",
        "                      epochs = 3 , \n",
        "                      validation_data = val_sen_ds , \n",
        "                      validation_steps = int(0.1 * len(val_sen_ds)))"
      ],
      "execution_count": 30,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Epoch 1/3\n",
            "562/562 [==============================] - 422s 734ms/step - loss: 0.6469 - accuracy: 0.7785 - val_loss: 0.4563 - val_accuracy: 0.8371\n",
            "Epoch 2/3\n",
            "562/562 [==============================] - 412s 733ms/step - loss: 0.5221 - accuracy: 0.8184 - val_loss: 0.4473 - val_accuracy: 0.8314\n",
            "Epoch 3/3\n",
            "562/562 [==============================] - 412s 733ms/step - loss: 0.5031 - accuracy: 0.8268 - val_loss: 0.4202 - val_accuracy: 0.8531\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<keras.callbacks.History at 0x7fc6016b8910>"
            ]
          },
          "metadata": {},
          "execution_count": 30
        }
      ]
    },
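    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Because we only trained on roughly 10% of the batches each epoch, a full validation pass gives a fairer comparison with the GloVe model (optional, and slow with BERT):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Evaluate the PubMed BERT model on the full validation dataset\n",
        "bert_loss, bert_acc = pubmed_bert_model.evaluate(val_sen_ds)\n",
        "print(f'PubMed BERT validation accuracy: {bert_acc:.4f}')"
      ],
      "execution_count": null,
      "outputs": []
    },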
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hm5sDOzAL5Ss"
      },
      "source": [
        "### 3. What happens if you were to merge our `line_number` and `total_lines` features for each sequence? For example, creating an `X_of_Y` feature instead? Does this affect model performance?\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Y-dm3oSTVdJc",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 359
        },
        "outputId": "594526f0-f71d-4aff-dc0f-b22fc4d7413c"
      },
      "source": [
        "# Combining the total lines and line number into a new feature! \n",
        "train_df['line_number_total'] = train_df['line_number'].astype(str) + '_of_' + train_df['total_lines'].astype(str)\n",
        "val_df['line_number_total'] = val_df['line_number'].astype(str) + '_of_' + val_df['total_lines'].astype(str)\n",
        "\n",
        "train_df.head(10)"
      ],
      "execution_count": 31,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/html": [
              "<div>\n",
              "<style scoped>\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "</style>\n",
              "<table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr style=\"text-align: right;\">\n",
              "      <th></th>\n",
              "      <th>target</th>\n",
              "      <th>text</th>\n",
              "      <th>line_number</th>\n",
              "      <th>total_lines</th>\n",
              "      <th>line_number_total</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th>0</th>\n",
              "      <td>OBJECTIVE</td>\n",
              "      <td>to investigate the efficacy of @ weeks of dail...</td>\n",
              "      <td>0</td>\n",
              "      <td>11</td>\n",
              "      <td>0_of_11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>1</th>\n",
              "      <td>METHODS</td>\n",
              "      <td>a total of @ patients with primary knee oa wer...</td>\n",
              "      <td>1</td>\n",
              "      <td>11</td>\n",
              "      <td>1_of_11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>2</th>\n",
              "      <td>METHODS</td>\n",
              "      <td>outcome measures included pain reduction and i...</td>\n",
              "      <td>2</td>\n",
              "      <td>11</td>\n",
              "      <td>2_of_11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>3</th>\n",
              "      <td>METHODS</td>\n",
              "      <td>pain was assessed using the visual analog pain...</td>\n",
              "      <td>3</td>\n",
              "      <td>11</td>\n",
              "      <td>3_of_11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>4</th>\n",
              "      <td>METHODS</td>\n",
              "      <td>secondary outcome measures included the wester...</td>\n",
              "      <td>4</td>\n",
              "      <td>11</td>\n",
              "      <td>4_of_11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>5</th>\n",
              "      <td>METHODS</td>\n",
              "      <td>serum levels of interleukin @ ( il-@ ) , il-@ ...</td>\n",
              "      <td>5</td>\n",
              "      <td>11</td>\n",
              "      <td>5_of_11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>6</th>\n",
              "      <td>RESULTS</td>\n",
              "      <td>there was a clinically relevant reduction in t...</td>\n",
              "      <td>6</td>\n",
              "      <td>11</td>\n",
              "      <td>6_of_11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>7</th>\n",
              "      <td>RESULTS</td>\n",
              "      <td>the mean difference between treatment arms ( @...</td>\n",
              "      <td>7</td>\n",
              "      <td>11</td>\n",
              "      <td>7_of_11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>8</th>\n",
              "      <td>RESULTS</td>\n",
              "      <td>further , there was a clinically relevant redu...</td>\n",
              "      <td>8</td>\n",
              "      <td>11</td>\n",
              "      <td>8_of_11</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>9</th>\n",
              "      <td>RESULTS</td>\n",
              "      <td>these differences remained significant at @ we...</td>\n",
              "      <td>9</td>\n",
              "      <td>11</td>\n",
              "      <td>9_of_11</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n",
              "</div>"
            ],
            "text/plain": [
              "      target  ... line_number_total\n",
              "0  OBJECTIVE  ...           0_of_11\n",
              "1    METHODS  ...           1_of_11\n",
              "2    METHODS  ...           2_of_11\n",
              "3    METHODS  ...           3_of_11\n",
              "4    METHODS  ...           4_of_11\n",
              "5    METHODS  ...           5_of_11\n",
              "6    RESULTS  ...           6_of_11\n",
              "7    RESULTS  ...           7_of_11\n",
              "8    RESULTS  ...           8_of_11\n",
              "9    RESULTS  ...           9_of_11\n",
              "\n",
              "[10 rows x 5 columns]"
            ]
          },
          "metadata": {},
          "execution_count": 31
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "sx8vJRgLYGo3",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "705d6ada-1d94-41b1-8fa7-a889a151a3da"
      },
      "source": [
        "# Fit a one-hot encoder on the training dataframe, then transform both train and validation\n",
        "from sklearn.preprocessing import OneHotEncoder\n",
        "\n",
        "# Creating an instance \n",
        "one_hot_encoder = OneHotEncoder()\n",
        "\n",
        "# Fitting on the training dataframe \n",
        "one_hot_encoder.fit(np.expand_dims(train_df['line_number_total'] , axis = 1))\n",
        "\n",
        "# Transforming both train and val df \n",
        "train_line_number_total_encoded = one_hot_encoder.transform(np.expand_dims(train_df['line_number_total'] , axis =1))\n",
        "val_line_number_total_encoded  = one_hot_encoder.transform(np.expand_dims(val_df['line_number_total'] , axis= 1))\n",
        "\n",
        "# Checking the shapes \n",
        "train_line_number_total_encoded.shape , val_line_number_total_encoded.shape"
      ],
      "execution_count": 32,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "((180040, 460), (30212, 460))"
            ]
          },
          "metadata": {},
          "execution_count": 32
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GnvzXNktZ0-q"
      },
      "source": [
        "# Converting the sparse matrices to dense arrays\n",
        "train_line_number_total_encoded = train_line_number_total_encoded.toarray()\n",
        "val_line_number_total_encoded = val_line_number_total_encoded.toarray()\n",
        "\n",
        "# Casting the datatype to int\n",
        "train_line_number_total_encoded = tf.cast(train_line_number_total_encoded, dtype=tf.int32)\n",
        "val_line_number_total_encoded = tf.cast(val_line_number_total_encoded, dtype=tf.int32)"
      ],
      "execution_count": 33,
      "outputs": []
    },
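    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick aside on what `OneHotEncoder` is doing here: each distinct `line_number_total` string (e.g. `\"3_of_11\"`) becomes its own column, with a 1 marking the row's category. A minimal sketch on toy data (hypothetical values, not the real PubMed abstracts). Passing `handle_unknown='ignore'` maps any position string unseen during `fit` to an all-zero row instead of raising an error:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import numpy as np\n",
        "from sklearn.preprocessing import OneHotEncoder\n",
        "\n",
        "# Toy stand-in for train_df['line_number_total'] (hypothetical values)\n",
        "positions = np.expand_dims(np.array(['0_of_2', '1_of_2', '2_of_2']), axis=1)\n",
        "\n",
        "demo_encoder = OneHotEncoder(handle_unknown='ignore')\n",
        "encoded = demo_encoder.fit_transform(positions).toarray()\n",
        "print(encoded)  # one column per sorted category, one 1 per row\n",
        "print(demo_encoder.transform([['9_of_9']]).toarray().sum())  # unseen category -> all-zero row"
      ],
      "execution_count": null,
      "outputs": []
    },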
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "k8fncMWlUkNQ"
      },
      "source": [
        "Now let's build a tribrid model that uses PubMed BERT as the embedding model, together with our new `line_number_total` feature (the combination of `line_number` and `total_lines`).\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "nBbgThiAWxos",
        "outputId": "0ad05110-0ac9-4eee-bc2b-9fd543decfe6"
      },
      "source": [
        "# Building performant tf.data datasets for our tribrid model\n",
        "train_data = tf.data.Dataset.from_tensor_slices((train_sentences,\n",
        "                                                 train_chars,\n",
        "                                                 train_line_number_total_encoded))\n",
        "\n",
        "train_labels = tf.data.Dataset.from_tensor_slices(train_labels_encoded)\n",
        "\n",
        "val_data = tf.data.Dataset.from_tensor_slices((val_sentences,\n",
        "                                               val_chars,\n",
        "                                               val_line_number_total_encoded))\n",
        "\n",
        "val_labels = tf.data.Dataset.from_tensor_slices(val_labels_encoded)\n",
        "\n",
        "# Zipping the data and labels together\n",
        "train_dataset = tf.data.Dataset.zip((train_data, train_labels))\n",
        "val_dataset = tf.data.Dataset.zip((val_data, val_labels))\n",
        "\n",
        "# Batching and prefetching\n",
        "train_dataset = train_dataset.batch(64).prefetch(tf.data.AUTOTUNE)\n",
        "val_dataset = val_dataset.batch(64).prefetch(tf.data.AUTOTUNE)\n",
        "\n",
        "train_dataset, val_dataset"
      ],
      "execution_count": 34,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "(<PrefetchDataset shapes: (((None,), (None,), (None, 460)), (None,)), types: ((tf.string, tf.string, tf.int32), tf.int64)>,\n",
              " <PrefetchDataset shapes: (((None,), (None,), (None, 460)), (None,)), types: ((tf.string, tf.string, tf.int32), tf.int64)>)"
            ]
          },
          "metadata": {},
          "execution_count": 34
        }
      ]
    },
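    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The `tf.data` pattern above (slice the features, zip them with the labels, then batch and prefetch) can be sketched in isolation on toy tensors. This is a hypothetical sketch with made-up values, assuming only that `tensorflow` is available:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import tensorflow as tf\n",
        "\n",
        "# Toy features (two 'inputs' per example) and labels, standing in for\n",
        "# (sentences, chars, line_number_total) and the encoded labels\n",
        "features = tf.data.Dataset.from_tensor_slices((['a', 'b', 'c', 'd'],\n",
        "                                               [[1., 0.], [0., 1.], [1., 0.], [0., 1.]]))\n",
        "labels = tf.data.Dataset.from_tensor_slices([0, 1, 0, 1])\n",
        "\n",
        "# Zip pairs each feature tuple with its label; batch(2) yields 2 batches of 2\n",
        "toy_dataset = tf.data.Dataset.zip((features, labels)).batch(2).prefetch(tf.data.AUTOTUNE)\n",
        "num_batches = sum(1 for _ in toy_dataset)\n",
        "print(num_batches)  # 2"
      ],
      "execution_count": null,
      "outputs": []
    },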
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "jAdFeh1UX7l7",
        "outputId": "c4eade38-b473-4a29-a78a-085887d51023"
      },
      "source": [
        "# Building the tribrid model using the Functional API\n",
        "\n",
        "# Token-level model (PubMed BERT embedding)\n",
        "input_token = layers.Input(shape=[], dtype=tf.string)\n",
        "bert_inputs_token = preprocessing_layer(input_token)\n",
        "bert_embedding_token = bert_layer(bert_inputs_token)\n",
        "output_token = layers.Dense(64, activation='relu')(bert_embedding_token['pooled_output'])\n",
        "token_model = tf.keras.Model(input_token, output_token)\n",
        "\n",
        "# Character-level model (PubMed BERT embedding)\n",
        "input_char = layers.Input(shape=[], dtype=tf.string)\n",
        "bert_inputs_char = preprocessing_layer(input_char)\n",
        "bert_embedding_char = bert_layer(bert_inputs_char)\n",
        "output_char = layers.Dense(64, activation='relu')(bert_embedding_char['pooled_output'])\n",
        "char_model = tf.keras.Model(input_char, output_char)\n",
        "\n",
        "# Line-number-position model (one-hot encoded line_number_total)\n",
        "line_number_total_input = layers.Input(shape=(460,), dtype=tf.int32)\n",
        "dense = layers.Dense(32, activation='relu')(line_number_total_input)\n",
        "total_line_number_model = tf.keras.Model(line_number_total_input, dense)\n",
        "\n",
        "# Concatenating the token and char outputs (hybrid!)\n",
        "combined_embeddings = layers.Concatenate(name='token_char_hybrid_embedding')([token_model.output,\n",
        "                                                                              char_model.output])\n",
        "\n",
        "# Combining line_number_total with our hybrid embedding (time for the tribrid!)\n",
        "z = layers.Concatenate(name='tribid_embeddings')([total_line_number_model.output,\n",
        "                                                  combined_embeddings])\n",
        "\n",
        "# Adding dropout + a dense layer and creating our output layer\n",
        "dropout = layers.Dropout(0.5)(z)\n",
        "x = layers.Dense(128, activation='relu')(dropout)\n",
        "output_layer = layers.Dense(5, activation='softmax')(x)\n",
        "\n",
        "# Packing everything into a model\n",
        "tribid_model = tf.keras.Model(inputs=[token_model.input,\n",
        "                                      char_model.input,\n",
        "                                      total_line_number_model.input],\n",
        "                              outputs=output_layer)\n",
        "\n",
        "tribid_model.summary()"
      ],
      "execution_count": 35,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Model: \"model_5\"\n",
            "__________________________________________________________________________________________________\n",
            "Layer (type)                    Output Shape         Param #     Connected to                     \n",
            "==================================================================================================\n",
            "input_2 (InputLayer)            [(None,)]            0                                            \n",
            "__________________________________________________________________________________________________\n",
            "input_3 (InputLayer)            [(None,)]            0                                            \n",
            "__________________________________________________________________________________________________\n",
            "pubmed_bert_preprocessor (Keras {'input_mask': (None 0           input_2[0][0]                    \n",
            "                                                                 input_3[0][0]                    \n",
            "__________________________________________________________________________________________________\n",
            "bert_model_layer (KerasLayer)   {'sequence_output':  109482241   pubmed_bert_preprocessor[1][0]   \n",
            "                                                                 pubmed_bert_preprocessor[1][1]   \n",
            "                                                                 pubmed_bert_preprocessor[1][2]   \n",
            "                                                                 pubmed_bert_preprocessor[2][0]   \n",
            "                                                                 pubmed_bert_preprocessor[2][1]   \n",
            "                                                                 pubmed_bert_preprocessor[2][2]   \n",
            "__________________________________________________________________________________________________\n",
            "input_4 (InputLayer)            [(None, 460)]        0                                            \n",
            "__________________________________________________________________________________________________\n",
            "dense_4 (Dense)                 (None, 64)           49216       bert_model_layer[1][13]          \n",
            "__________________________________________________________________________________________________\n",
            "dense_5 (Dense)                 (None, 64)           49216       bert_model_layer[2][13]          \n",
            "__________________________________________________________________________________________________\n",
            "dense_6 (Dense)                 (None, 32)           14752       input_4[0][0]                    \n",
            "__________________________________________________________________________________________________\n",
            "token_char_hybrid_embedding (Co (None, 128)          0           dense_4[0][0]                    \n",
            "                                                                 dense_5[0][0]                    \n",
            "__________________________________________________________________________________________________\n",
            "tribid_embeddings (Concatenate) (None, 160)          0           dense_6[0][0]                    \n",
            "                                                                 token_char_hybrid_embedding[0][0]\n",
            "__________________________________________________________________________________________________\n",
            "dropout_2 (Dropout)             (None, 160)          0           tribid_embeddings[0][0]          \n",
            "__________________________________________________________________________________________________\n",
            "dense_7 (Dense)                 (None, 128)          20608       dropout_2[0][0]                  \n",
            "__________________________________________________________________________________________________\n",
            "dense_8 (Dense)                 (None, 5)            645         dense_7[0][0]                    \n",
            "==================================================================================================\n",
            "Total params: 109,616,678\n",
            "Trainable params: 134,437\n",
            "Non-trainable params: 109,482,241\n",
            "__________________________________________________________________________________________________\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 856
        },
        "id": "UPHiC5AQYeq_",
        "outputId": "16fd216d-3aec-46f9-bf87-a53047cef042"
      },
      "source": [
        "# Plotting the model structure \n",
        "from tensorflow.keras.utils import plot_model\n",
        "plot_model(tribid_model)"
      ],
      "execution_count": 36,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAlsAAANHCAYAAAABppglAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nOzdeVwVhd4G8Gc4HDicw6YG4gIqYOF6r6aFpGlqpXn1iqDgclPTcrtp5VbaVW9p5VL65pJvab6lXdlcUpQ0V/SmZC5hKrjmhgooAgIKHH7vH13PFUFAYBiW5/v58Iczc2aemTlzeJwzMygiIiAiIiIiNYRbaZ2AiIiIqDpj2SIiIiJSEcsWERERkYpYtoiIiIhUZP3wgAMHDuCzzz7TIgtRuXrnnXfQoUMHVebdv39/VeZLVJE6dOiAd955R+sYRNVegTNbly9fRkREhBZZiMpNREQELl++rOr8r1y5otr8idR28OBBHDhwQOsYRDVCgTNb94WHh1dkDqJypSiK6st4++23MWDAANWXQ6QGnp0lqji8ZouIiIhIRSxbRERERCpi2SIiIiJSEcsWERERkYpYtoiIiIhUxLJFREREpCKWLSIiIiIVsWwRERERqYhli4iIiEhFLFtEREREKmLZIiIiIlIRyxYRERGRili2iIiIiFTEskVERESkonIpW1u3boWTkxM2b95cHrPTzAcffIDmzZvD0dERtra28Pb2xpQpU3Dnzp3HntfBgwfRrFkzWFlZQVEU1K1bF7Nnz1YhdemtW7cOnp6eUBQFiqLAzc0NQ4YM0TpWtVRdjpG5c+fCx8cHdnZ2MJlM8PHxwT/+8Q+kpaU99rx4jBBRTWFdHjMRkfKYjeZ27dqFv//97wgODoZer0dUVBSGDBmC48ePIyoq6rHm5evri1OnTqFHjx7Ytm0b4uPj4ezsrFLy0gkICEBAQAC8vb2RnJyM69evax2p2qoux8i+ffvw+uuv49VXX4WdnR2ioqIwePBgxMTEYPv27Y81Lx4jRFRTlMuZrV69eiE1NRW9e/cuj9mVSVZWFvz8/Er1Wnt7e4waNQq1a9eGg4MDBgwYAH9/f/zwww+4fPlyOSeteGXZNlQ21eUYsbGxwbhx4+Di4gJ7e3v0798fffv2xY8//ohr166Vc9KKx2OEiNRQLme2KpOVK1ciMTGxVK+NjIwsMOyJJ54AAGRmZpYpV2VQlm1D1UdZ3gfr168vMKxBgwYAUKqv2ysbHiNEpIYyn9nav38/PDw8oCgKlixZAgBYtmwZTCYTjEYjvv/+e/Ts2ROOjo5o2LAh1q5da3nt559/DoPBAFdXV4wePRr16tWDwWCAn58fYmJiLNONHz8eNjY2cHNzswwbN24cTCYTFEVBcnIyAOCtt97CxIkTce7cOSiKAm9v77KuHq5evQo7Ozs0adLEMuyHH36Ao6Mj5syZ89jzq+rbZt++fWjevDmcnJxgMBjQqlUrbNu2DQAwcuRIy7UtXl5eOHr0KABg+PDhMBqNcHJywqZNmwAAZrMZM2bMgIeHB+zs7NC6dWuEhoYCAObNmwej0QgHBwckJiZi4sSJaNCgAeLj40uVWWvV/Rg5c+YMnJ2d0ahRI8swHiM8RojoAfKQ0NBQKWRwkS5fviwAZPHixZZh06dPFwCyc+dOSU1NlcTEROnUqZOYTCbJzs62TDdq1CgxmUxy8uRJuXv3rpw4cULat28vDg4OcunSJct0gwcPlrp16+Zb7vz58wWAJCUlWYYFBASIl5fXY+V/lIyMDHFwcJDx48fnGx4ZGSkODg7ywQcfFDuPl19+WQBISkqKZVhl2zZeXl7i5ORU/AYRkfDwcJk1a5bcunVLbt68Kb6+vlKnTp18y9DpdHL16tV8rxs0aJBs2rTJ8u9JkyaJra2tRERESEpKikybNk2srKzk0KFD+bbRhAkTZPHixdKvXz85depUiTKKiACQ0NDQEk//uB53/tXtGMnOzpYrV67I4sWLxdbWVlavXp1vPI+Ryn+MBAYGSmBgYImnJ6JSC1P90Q9+fn5wdHSEi4sLgoODkZGRgUuXLuWbxt
raGs2aNYOtrS2aN2+OZcuWIT09HatWrVI7XpE++ugj1KtXr8AdUr169UJaWhr+8Y9/lGn+VXHbBAYGYubMmahVqxZq166NPn364ObNm0hKSgIAjBkzBmazOV++tLQ0HDp0CK+88goA4O7du1i2bBn8/f0REBAAZ2dnvP/++9Dr9QXW65NPPsHf//53rFu3Dj4+PhW3ohWoKr4P3N3d0bBhQ8yaNQvz5s1DUFBQvvE8RniMENF/VehztmxsbAAAOTk5RU7Xrl07GI1GxMXFVUSsQq1fvx5hYWHYtm0bHBwcVF9eVdo2D9Lr9QD++MoDALp27Yonn3wSX3/9teUOvJCQEAQHB0On0wEA4uPjkZmZiZYtW1rmY2dnBzc3t0qzXlqpKu+Dy5cvIzExEf/617/wzTffoE2bNqpf61RVts3DeIwQUaV9qKmtra3lf4IVLSQkBJ988gn27NmDxo0ba5KhKFpumy1btqBLly5wcXGBra0tpkyZkm+8oigYPXo0zp8/j507dwIAvv32W4wYMcIyTUZGBgDg/ffft1y/oigKLl68WC1uRKgoWr4P9Ho9XFxc8NJLLyEkJAQnTpzARx99pEmWwvAYIaLKpFKWrZycHNy+fRsNGzas8GUvXrwYa9aswa5du1C/fv0KX35xKnrbREdHY+HChQCAS5cuwd/fH25uboiJiUFqairmzp1b4DXDhg2DwWDAihUrEB8fD0dHx3wXT7u4uAAAFi5cCBHJ93PgwIEKWa+qTstj5GHe3t7Q6XQ4ceKE1lEA8BghosqnUj76Yc+ePRAR+Pr6WoZZW1sX+/VBWYgI3n33XaSkpGDjxo2wtq6Um6bCt83hw4dhMpkAAMePH0dOTg7Gjh0LT09PAH/8L/1htWrVQlBQEEJCQuDg4IDXX38933h3d3cYDAYcO3ZMlcw1gRbHyM2bN/Hmm2/iX//6V77hZ86cgdlshru7u2rLfhw8RoiosqkUZ7by8vKQkpKC3NxcxMbG4q233oKHhweGDRtmmcbb2xu3bt3Cxo0bkZOTg6SkJFy8eLHAvGrXro2EhAT8/vvvSE9PL/EH7MmTJzFv3jx89dVX0Ov1+U7dK4qCBQsWWKaNiooq9W3tj0urbZOTk4MbN25gz549ll8kHh4eAIAdO3bg7t27OHPmTL5b7B80ZswY3Lt3D5GRkQUe5GkwGDB8+HCsXbsWy5YtQ1paGsxmM65cuVItHoyphspwjJhMJmzfvh27du1CWloacnJycPToUQwdOhQmkwnvvPOOZVoeIzxGiOgBD9+f+LiPfli8eLG4ubkJADEajdKnTx9ZunSpGI1GASBNmzaVc+fOyZdffimOjo4CQBo1aiSnT58WkT9u3dbr9dKgQQOxtrYWR0dH6du3r5w7dy7fcm7evCkvvPCCGAwGadKkibz55psyefJkASDe3t6W27yPHDkijRo1Ejs7O+nYsaNcv369ROtx/PhxAfDIn/nz51um3bp1qzg4OMjs2bMfOb+DBw9KixYtxMrKSgCIm5ubzJkzp1Jtmy+++EK8vLyKXG8Asn79esuypk6dKrVr1xZnZ2fp37+/LFmyRACIl5dXvlvtRUTatGkj7733XqHb5969ezJ16lTx8PAQa2trcXFxkYCAADlx4oTMnTtX7OzsBIC4u7sXeKxASaASPfqhuhwjIiJ9+vSRJk2aiL29vdja2oqXl5cEBwfL8ePH803HY6TyHyN89ANRhQkrl+dslcWoUaOkdu3aFba8qqSqb5tXXnlFzp8/r8myK1PZKquq/j5QU1XfNloeIyxbRBVG/edslcT9W6KpoKq0bR78yiU2NhYGgyHfk/ep9KrS+6CiVaVtw2OEqGaqFGVLLXFxcQWuvSrsJzg4WOuo1cLUqVNx5swZnD59GsOHD8eHH36odSQqBo+RisVjhKhm0rRsTZs2DatWrUJqaiqaNGmCiIiIcp2/j49PgdumC/sJCQkp1+WWB7W3jRqMRiN8fHzQvXt3zJ
o1C82bN9c6UpXHY+TReIwQUVWhiPznEcb/ERYWhqCgIDw0mKhKURQFoaGhGDBgQJWcP5Ha+vfvDwAIDw/XOAlRtRderb9GJCIiItIayxYRERGRili2iIiIiFTEskVERESkIpYtIiIiIhWxbBERERGpiGWLiIiISEUsW0REREQqYtkiIiIiUhHLFhEREZGKWLaIiIiIVMSyRURERKQili0iIiIiFVk/asT9vwhPVVNeXh4yMjLg4OCgdZRqa+HChQgPD9c6RoVLT0+HyWSClRX/r1aVHTx4EL6+vlrHIKoRCnxauru7IzAwUIssVI7i4uKwe/dupKSkaB1FE4GBgXB3d1d1/g0bNlRt/pVVSkoKdu/ejbi4OK2jUBn5+vqiQ4cOWscgqhEUERGtQ1D5u3fvHvr374/o6GhERUXxQ5XKbP/+/ejVqxeee+45rFu3DnZ2dlpHIiKqCsL5PUA1ZWtri4iICLzwwgt48cUXsXv3bq0jURW2d+9evPLKK+jcuTPWr1/PokVE9BhYtqoxGxsbhIWFoUePHvjLX/6CHTt2aB2JqqCoqCj07NkTvXr1wrp162AwGLSORERUpbBsVXN6vR6hoaEICAhAnz59sH37dq0jURWyefNm+Pv7IyAgAKtXr4Zer9c6EhFRlcOyVQPodDqsWrUKQUFB6N27N77//nutI1EVEBISgn79+mH48OH45ptvYG39yJuXiYioCCxbNYROp8PXX3+NkSNHYsCAAVi/fr3WkagSW7FiBQYPHoy3334bX3zxBR/zQERUBvwErUEURcGSJUswevRoDBgwAGvWrNE6ElVCX3zxBUaNGoXJkydj3rx5WschIqry+L1ADaMoChYtWgSdTodhw4bBbDZj6NChWseiSmLevHmYOnUqPvzwQ7z//vtaxyEiqhZYtmogRVHw2Wefwd7eHq+99hrMZjNee+01rWORxubOnYv33nsPixYtwoQJE7SOQ0RUbbBs1WAffPABTCYTRo4ciYyMDLz55ptaRyINiAgmT56MRYsWYcWKFSzeRETljGWrhps6dSoURcGECRNgNpvx1ltvaR2JKpCI4O2338aSJUvw9ddf49VXX9U6EhFRtcOyRZgyZQp0Oh3eeecd3Llzh9fq1BBmsxlvvPEG1qxZg7CwMPTr10/rSERE1RLLFgEAJk6cCJPJhHHjxsFsNmPmzJlaRyIV3b9OLzQ0FOHh4ejTp4/WkYiIqi2WLbIYPXo0dDodRo8ejaysLHzyySdaRyIVZGdnY+DAgdi2bRsiIyPRvXt3rSMREVVrLFuUz+uvvw6TyYShQ4fCbDZj/vz5WkeicnTv3j0MGDAAe/fuxfbt2+Hn56d1JCKiao9liwoYNGgQdDodhgwZgoyMDCxduhSKomgdi8ooIyMDffv2xeHDh7Ft2zY8++yzWkciIqoRWLaoUEFBQdDpdBg0aBDMZjP/ZEsVl5qaildeeQVnz57Fnj170Lp1a60jERHVGCxb9EiBgYGws7NDYGAgzGYzvvzySxauKiglJQU9evTA5cuXsWvXLrRo0ULrSERENQrLFhWpV69e2LBhA/r164eMjAysXr0a1tZ821QVN27cwEsvvYTU1FTs27cPXl5eWkciIqpxeJqCitWjRw9ERUUhMjISgwcPRk5OjtaRqASuXbuGbt264e7duyxaREQaYtmiEuncuTO2bt2KqKgo9OvXD/fu3dM6EhXh4sWL6NSpE/Ly8rB79264u7trHYmIqMZi2aIS69SpE6KiohAdHQ1/f3/cvXtX60hUiPj4eHTs2BGOjo6Ijo5G/fr1tY5ERFSjsWzRY3nuueewa9cu/Pzzz/jrX/+KrKwsrSPRA06dOoWuXbvCzc0NO3bswBNPPKF1JCKiGo9lix7b008/jR9//BFHjhxBjx49kJ6ernUkAnDkyBE8//zz8Pb2xq5du1C7dm2tIxEREVi2qJTatGmD6OhonDlzBj179kRaWprWkWq0Q4cO4cUXX0S7du
3www8/wMHBQetIRET0HyxbVGrNmjXDrl27cOHCBXTt2hW3bt3SOlKNFB0djW7dusHPzw8bNmyAnZ2d1pGIiOgBLFtUJj4+Pti9ezdu3LiB7t27Izk5WetINcoPP/yAHj16oGfPnli/fj0MBoPWkYiI6CEsW1RmTz75JPbv34/U1FR0794dSUlJWkeqESIjI+Hv749+/frhu+++g16v1zoSEREVgmWLykWjRo2we/duZGRk4Pnnn0dCQoLWkaq10NBQ9OvXD0OHDsW3337Lp/oTEVViLFtUbjw8PLBv3z5YWVnhhRdewJUrV7SOVC199913GDJkCN544w3+gXAioiqAn9JUrtzc3LBr1y7Y2tqiU6dOuHDhgtaRqpXly5fj1VdfxcSJE7FkyRIoiqJ1JCIiKgbLFpW7unXrYufOnXByckKXLl1w7tw5rSNVCwsWLMDYsWMxc+ZMfPLJJ1rHISKiEmLZIlW4uLhg9+7dcHNzQ6dOnXDy5EmtI1Vpc+fOxZQpU/DZZ59hxowZWschIqLHwLJFqqlVqxa2b9+ORo0aoWvXrvjtt9+0jlQlzZgxA++99x4WL16Mt956S+s4RET0mFi2SFVOTk7YsWMHmjdvjm7duiE2NrbQ6a5evVpjr+/65ZdfCh0uInj77bfx0UcfYdWqVRg3blwFJyMiovLAskWqM5lMiIyMRKtWrdClSxccOnQo3/jr16/j+eefr5Ffjx05cgQdOnRAREREvuF5eXl4/fXXsWzZMoSEhGDo0KEaJSQiorJSRES0DkE1Q2ZmJvz9/fHzzz8jKioKvr6+SEpKQseOHXH27FkAQHx8PLy9vTVOWnF69eqFqKgo6HQ6bNiwAX/5y19gNpsxYsQIhISEICQkBH379tU6JhERlV44z2xRhTEajdi8eTM6d+6MF198EZGRkejevTsuXLiAvLw86HQ6fPjhh1rHrDCHDx9GVFQURARmsxn+/v7YvHkzgoKCEB4ejs2bN7NoERFVAzyzRRUuOzsb/fr1w5EjR5CcnIycnBzLOCsrK5w6dQpPPvmkhgkrxssvv4zdu3db1t/KygrW1tZwc3PDd999h44dO2qckIiIygHPbFHFy8nJQXJycoGiBQA6nQ4fffSRRskqzi+//IIff/wx3/rn5eUhNzcXycnJsLGx0TAdERGVJ57ZogqVmZmJl19+GQcPHkRubm6h01hZWSEuLg5Nmzat4HQVp3v37oiOji5QNoE/CqednR327t2Ltm3bapCOiIjKEc9sUcXJyspCjx49EBMT88iiBfxRNubMmVOBySrWv//9b+zcubPQogUAZrMZd+/eRffu3XHq1KkKTkdEROWNZYsqzIYNG3Do0CEUdzI1JycHa9asqbZ/5mfatGmwtrYuchoRQUpKCt5++23k5eVVUDIiIlIDyxZVmEGDBuHy5cuYPn06nJycoNPpHvmHlK2srDB79uwKTqi+/fv3Izo6+pFn9vR6PQCgXbt22LRpE6KiomBlxcOUiKgq4zVbpIl79+4hNDQUM2fOxMWLF6EoSoEzODqdDqdPn4anp6dGKctfp06dCr1ezdraGmazGT179sSMGTPw7LPPapSQiIjKGa/ZIm3Y2tri1Vdfxblz5/D999+jVatWAJDv67XqdnZr586d2L9/f76iZW1tDYPBgNdeew1xcXHYsmULixYRUTXDM1tUKYgIoqKi8PHHH2P//v2wsbFBdnY2dDodzpw5gyZNmmgdscw6dOiAn3/+GSICRVHg7OyMd955B2PGjEHt2rW1jkdEROoI16xshYWFabFYqgLOnDmDjRs34vDhwxARdOnSBWPGjNE6VpnExsZa7rB0dXVF37598fzzz1uu0SJ60IABA7SOQETlR7uy9agLo4mIajp+4UBUrYQXff+5ykJDQ/k/OCpWQkICLl++XGWvZUpMTMS5c+fQoUMHraNQJRcWFoagoCCtYxBROdO0bBGVRP369VG/fn2tY5Saq6srXF1dtY5BRE
Qa4d2IRERERCpi2SIiIiJSEcsWERERkYpYtoiIiIhUxLJFREREpCKWLSIiIiIVsWwRERERqYhli4iIiEhFLFtEREREKmLZIiIiIlIRyxYRERGRili2iIiIiFTEskVERESkohpZttq3bw+dToc///nPWkfJZ+TIkXBwcICiKDh27NhjvbayrhNVHevWrYOnpycURYGiKHB3d8fKlSst4/fu3YsGDRpAURS4ubnhyy+/rDRZ3dzcMGTIEM3yEBEVRRER0WTBioLQ0FAMGDBAi8Wje/fuSE5OfuxSo7aQkBAMHDgQR48efeziVFnXiaoWb29vJCcn4/bt2/mGiwjeeOMNWFlZYfny5VAURaOE//WorFVVWFgYgoKCoNHHMhGpI7xGntm6rzL8sihvaq5TVlYW/Pz8VJs/VV55eXkYMWIE9Hp9pSlaRERVRY0uW3q9XusIBZT1l5ia67Ry5UokJiaqNn+qnPLy8vDaa6/BaDRi2bJlLFpERI+pSpStzz//HAaDAa6urhg9ejTq1asHg8EAPz8/xMTEWKYbP348bGxs4ObmZhk2btw4mEwmKIqC5OTkfPM9e/YsfHx8YDKZYGdnh06dOmH//v2W8YsWLYLJZIKVlRWefvpp1K1bF3q9HiaTCW3btkWnTp3g7u4Og8EAZ2dnTJkyJd/8zWYzZsyYAQ8PD9jZ2aF169YIDQ21jBcRzJ8/H0899RRsbW3h5OSEyZMnl2lbFbdOxeWaN28ejEYjHBwckJiYiIkTJ6JBgwbo2bMnJk6ciHPnzkFRFHh7e5c4U0n336OWHR8fX2Tm8pi/iOCzzz5Ds2bNYGtri1q1aqFv376Ii4srsD6rV69Gu3btYDAYYDKZ0LhxY3z44YfFblvgj+uennnmGRiNRjg6OqJVq1ZIS0srdlxJ8hW1fj/88AMcHR0xZ86cEu834I+iNWzYMDg5OWHJkiWPnK4076n4+Hjs27cPzZs3h5OTEwwGA1q1aoVt27aVaHs9rqKWNXLkSMv1X15eXjh69CgAYPjw4TAajXBycsKmTZvKtK5EVIOJRgBIaGhoiacfNWqUmEwmOXnypNy9e1dOnDgh7du3FwcHB7l06ZJlusGDB0vdunXzvXb+/PkCQJKSkizDunXrJp6ennLhwgXJycmR3377TZ599lkxGAxy+vRpy3QzZ84UABITEyMZGRmSnJwsPXr0EACyZcsWSUpKkoyMDBk/frwAkGPHjlleO2nSJLG1tZWIiAhJSUmRadOmiZWVlRw6dEhERKZPny6Kosinn34qKSkpkpmZKUuXLhUAcvTo0cfepiVdp5LkAiATJkyQxYsXS79+/eTUqVMSEBAgXl5ej51LpOT771HLLi5zWec/Y8YMsbGxkdWrV8vt27clNjZW2rZtK0888YRcv37d8vqFCxcKAPn444/l5s2bcuvWLfnf//1fGTx4cLHb9s6dO+Lo6Chz586VrKwsuX79uvTr10+SkpKKHCciJc73qPWLjIwUBwcH+eCDD4rdV15eXuLk5CS5ubkyePBg0ev1Eh8fX+RrSvueCg8Pl1mzZsmtW7fk5s2b4uvrK3Xq1BERKXabPJi1JIpalohIQECA6HQ6uXr1ar7XDRo0SDZt2lTmdS2J0NBQ0fBjmYjUEValytbDH6qHDh0SAPLPf/7TMuxxytaf/vSnfNPFxsYKAJk0aZJl2P2ylZ6ebhn2zTffCAA5fvy4ZdjPP/8sACQkJERERLKyssRoNEpwcLBlmszMTLG1tZWxY8dKZmamGI1GefHFF/NlWLt2bZnKVnHrVFwukf/+ssjKyso3r7KWrZLsv8KWXZLMZZl/Zmam2Nvb55u/yH/36f2Ckp2dLc7OzvLCCy/kmy43N1cWLVpUbM7ffvtNAEhkZGSB7VPUuJLme9T6PS4vLy9xcHCQgQMHStu2bQWAtGjRQu7cuVPo9GV5Tz3so48+EgCSmJhY5DZ5MGtJy1ZRyxIR2bFjhwCQ2bNnW6ZJTU2Vpk
2bSm5ubrmva2FYtoiqpbAq8TXio7Rr1w5Go7HQr3pKo1WrVnByckJsbGyR09nY2AAAcnNzLcPuXyuVk5MDAIiPj0dmZiZatmxpmcbOzg5ubm6Ii4vD2bNnkZmZiW7dupVL9kd5eJ2Ky1WRSrr/Spu5pPM/ceIE7ty5g3bt2uUb3r59e9jY2Fi+ioyNjcXt27fx8ssv55tOp9NhwoQJxeb09PSEq6srhgwZglmzZuH333+3TFfUuJLmK0+ZmZno3LkzDh8+DH9/f5w4cQIjR44sdNryfE/dP47MZnOR26Q8PLgsAOjatSuefPJJfP3115a7AUNCQhAcHAydTgegch0/RFR1VOmyBQC2trZISkoqt/np9XpLYSqLjIwMAMD7779vuRZEURRcvHgRmZmZuHLlCgDAxcWlzMsqzoPrVFyuilaS/VeWzCWZ//3HBtjb2xcY5+zsjPT0dACwXCvk7Oxcqpx2dnbYtWsXOnbsiDlz5sDT0xPBwcHIysoqclxJ85Une3t7jBo1CgCwatUqeHp6IiQkBAsXLnzs9S7Kli1b0KVLF7i4uMDW1jbfdY9FbZPSKGpZwB83p4wePRrnz5/Hzp07AQDffvstRowYUS7rSkQ1V5UuWzk5Obh9+zYaNmxYLvPLzc3FrVu34OHhUeZ53S9RCxcuhIjk+zlw4AAMBgMA4N69e2VeVlEeXqficlWkku6/0mYu6fzvl6fCSsuDr69fvz4AFLjR4nFytmjRAps3b0ZCQgKmTp2K0NBQLFiwoMhxJc2nFicnJ4SHh1sKSnR0dL7xpd0/ly5dgr+/P9zc3BATE4PU1FTMnTs33zRFba/iREdHW8phSZYFAMOGDYPBYMCKFSsQHx8PR0dHNGrUqMzrSkQ1W5UuW3v27IGIwNfX1zLM2tq61Gemdu/ejby8PLRt27bM2e7fpfioB4y2bNkSVlZW2Lt3b5mXVZSH16m4XBWpsP1XmNJmLun8W7ZsCXt7e/zyyy/5hsfExCA7OxtPP/00AKBx48aoXbs2trp/AocAACAASURBVG/fXqqcCQkJOHnyJIA/fml//PHHaNu2LU6ePFnkuJLmU1Pbtm2xcOFC5ObmYsCAAUhISLCMK+3+OX78OHJycjB27Fh4enrCYDDke6xEUdukJA4fPgyTyVSiZd1Xq1YtBAUFYePGjViwYAFef/31fOMr0/FDRFVHlSpbeXl5SElJQW5uLmJjY/HWW2/Bw8MDw4YNs0zj7e2NW7duYePGjcjJyUFSUhIuXrxY6Pyys7ORmpqK3NxcHDlyBOPHj0ejRo3yza+0DAYDhg8fjrVr12LZsmVIS0uD2WzGlStXcO3aNbi4uCAgIAARERFYuXIl0tLSEBsbW+Y/gVLcOhWXqyi1a9dGQkICfv/9d6Snpz92qS3J/itMSTOXZf4TJ07E+vXrsWbNGqSlpeH48eMYM2YM6tWrZ/k6zdbWFtOmTUN0dDTGjx+Pq1evIi8vD+np6Th58mSxORMSEjB69GjExcUhOzsbR48excWLF+Hr61vkuJLmK0pUVFSpHv3woDFjxmDgwIG4ceMG+vfvb9n/pX1P3T/bumPHDty9exdnzpzJd/1ZUdukKDk5Obhx4wb27NljKVvFLevh9bx37x4iIyPRu3fvfOPKcvwQUQ1WMRfiF4RS3I2o1+ulQYMGYm1tLY6OjtK3b185d+5cvulu3rwpL7zwghgMBmnSpIm8+eabMnnyZAEg3t7elscArFq1Sl544QVxdXUVa2trqVOnjgwcOFAuXrxomdeiRYvEaDQKAGncuLHs27dPPvnkE3FychIAUrduXfnuu+8kJCRE6tatKwCkVq1asnbtWhERuXfvnkydOlU8PDzE2tpaXFxcJCAgQE6cOCEiIunp6TJy5EipU6eO2NvbS8eOHWXGjBkCQBo2bCi//vrrY23TkqxTcbnmzp0rdnZ2AkDc3d1l9erVltcdOXJEGjVqJHZ2dtKxY8d8jxwoj/1X1LKL25
ZlnX9eXp7Mnz9fmjZtKnq9XmrVqiX+/v6FPvJgyZIl0qpVKzEYDGIwGKRNmzaydOnSYnP+/vvv4ufnJ7Vq1RKdTif169eX6dOnS25ubpHjSpqvqPXbunWrODg45LvT7mHr168XLy8vAWB5D06bNi3fNOnp6fLUU08JAHF1dZWVK1cWu95F5Zo6darUrl1bnJ2dpX///rJkyRIBIF5eXrJv375HbpOHsz7qZ/369SVa1oOPBxERadOmjbz33nuFbqfSrmtJ8G5EomoprMr8bcTRo0cjPDwcN2/eVDkZqUHt/cf3B5WnXr16YcmSJWjSpEmFLpd/G5GoWqpafxvx/i3aVDWpvf/4/qDSevAr8djYWBgMhgovWkRUfVWpslXTxMXF5bu9/FE/wcHBzEZUBlOnTsWZM2dw+vRpDB8+3PLnl4iIykOVKFvTpk3DqlWrkJqaiiZNmiAiIkLrSBXCx8enwO3lhf2EhIRU6mxq77+a+v6g8mM0GuHj44Pu3btj1qxZaN68udaRiKgaqTLXbBERVXe8ZouoWqpa12wRERERVTUsW0REREQqYtkiIiIiUhHLFhEREZGKWLaIiIiIVMSyRURERKQili0iIiIiFbFsEREREamIZYuIiIhIRSxbRERERCpi2SIiIiJSEcsWERERkYpYtoiIiIhUZK3lwg8cOKDl4omIKhV+JhJVT4qIiCYLVhQtFktEVOlp9LFMROoI1+zMFj9MqLIYMGAAACAsLEzjJEREVB3xmi0iIiIiFbFsEREREamIZYuIiIhIRSxbRERERCpi2SIiIiJSEcsWERERkYpYtoiIiIhUxLJFREREpCKWLSIiIiIVsWwRERERqYhli4iIiEhFLFtEREREKmLZIiIiIlIRyxYRERGRili2iIiIiFTEskVERESkIpYtIiIiIhWxbBERERGpiGWLiIiISEUsW0REREQqYtkiIiIiUhHLFhEREZGKWLaIiIiIVMSyRURERKQili0iIiIiFbFsEREREamIZYuIiIhIRSxbRERERCpi2SIiIiJSEcsWERERkYpYtoiIiIhUxLJFREREpCKWLSIiIiIVWWsdgKgi7d27FwcPHsw3LC4uDgAwd+7cfMN9fX3RuXPnCstGRETVkyIionUIoory448/4qWXXoJer4eVVeEndvPy8pCTk4Pt27fjxRdfrOCERERUzYSzbFGNYjabUbduXdy8ebPI6WrVqoXExERYW/PkLxERlUk4r9miGkWn02Hw4MGwsbF55DQ2Njb429/+xqJFRETlgmWLapyBAwciOzv7keOzs7MxcODACkxERETVGb9GpBqpUaNGuHTpUqHjGjZsiEuXLkFRlApORURE1RC/RqSaaciQIdDr9QWG29jYYOjQoSxaRERUbli2qEYaMmQIcnJyCgzPzs5GcHCwBomIiKi6YtmiGqlZs2Zo1qxZgeE+Pj5o2bKlBomIiKi6YtmiGuvVV1/N91WiXq/H0KFDNUxERETVES+Qpxrr0qVLaNy4Me4fAoqi4Pz582jcuLG2wYiIqDrhBfJUc3l4eKBdu3awsrKCoiho3749ixYREZU7li2q0V599VVYWVlBp9Phb3/7m9ZxiIioGuLXiFSjJSUloV69egCAq1evom7duhonIiKiaiacf4/kP8LCwhAUFKR1DNKQm5ub1hFIA6GhoRgwYIDWMYioGmPZekhoaKjWEaiC7d27F4qi4Pnnn9c6ClUw/geLiCoCy9ZD+D/cmqdHjx4AAEdHR42TUEVj2SKiisCyRTUeSxYREamJdyMSERERqYhli4iIiEhFLFtEREREKmLZIiIiIlIRyxYRERGRili2iIiIiFTEskVERESkIpYtIiIiIhWxbBERERGpiGWLiIiISEUsW0REREQqYtkiIiIiUhHLFhEREZGKWLZKqX379tDpdPjzn/+sdZRKY+TIkXBwcICiKDh27FiJXrNgwQK4urpCURQsX778sZZXltdqbd26dfD09ISiKF
AUBe7u7li5cqVl/N69e9GgQQMoigI3Nzd8+eWXlSarm5sbhgwZolkeIqKqxlrrAFXVoUOH0L17dyQnJ2sdpdJYsWIFunfvjoEDB5b4NZMmTULfvn3RtGnTx15eWV6rtYCAAAQEBMDb2xvJycm4fPlyvvHPP/88XnnlFVhZWWH58uVQFEWjpAWzXr9+XbMsRERVEc9slZGavwSzsrLg5+en2vypcsrLy8OIESOg1+s1L1pERFR2LFtlpNfrVZv3ypUrkZiYqNr81cBiUDZ5eXl47bXXYDQasWzZMm5PIqJqgGWrjM6ePQsfHx+YTCbY2dmhU6dO2L9/f75pzGYzZsyYAQ8PD9jZ2aF169YIDQ0FAMybNw9GoxEODg5ITEzExIkT0aBBA/Ts2RMTJ07EuXPnoCgKvL29S5xp0aJFMJlMsLKywtNPP426detCr9fDZDKhbdu26NSpE9zd3WEwGODs7IwpU6bke72I4LPPPkOzZs1ga2uLWrVqoW/fvoiLiysw3fz58/HUU0/B1tYWTk5OmDx5coE8Ra2/Gvbt24fmzZvDyckJBoMBrVq1wrZt2wD8cV3Z/WuPvLy8cPToUQDA8OHDYTQa4eTkhE2bNhWb+1H7LT4+Hj/88AMcHR0xZ86cx8qdl5eHYcOGwcnJCUuWLHnkdKXNVdR2Af64TuyZZ56B0WiEo6MjWrVqhbS0tMdah/u03gdERJWKkIiIhIaGyuNujm7duomnp6dcuHBBcnJy5LfffpNnn31WDAaDnD592jLdpEmTxNbWViIiIiQlJUWmTZsmVlZWcujQIRERmT59ugCQCRMmyOLFi6Vfv35y6tQpCQgIEC8vr1Ktz8yZMwWAxMTESEZGhiQnJ0uPHj0EgGzZskWSkpIkIyNDxo8fLwDk2LFjltfOmDFDbGxsZPXq1XL79m2JjY2Vtm3byhNPPCHXr1+3TDd9+nRRFEU+/fRTSUlJkczMTFm6dKkAkKNHj5Z4/c+cOSMA5Isvvnjs9SzsteHh4TJr1iy5deuW3Lx5U3x9faVOnTqW8QEBAaLT6eTq1av55jVo0CDZtGlTiXM/ar9FRkaKg4ODfPDBB8Xm9/LyEicnJ8nNzZXBgweLXq+X+Pj4Il9T2lxFbZc7d+6Io6OjzJ07V7KysuT69evSr18/SUpKKpC1JLTeByUFQEJDQ0s8PRFRKYSxbP1HacvWn/70p3zDYmNjBYBMmjRJRESysrLEaDRKcHCwZZrMzEyxtbWVsWPHish/f2FkZWXlm1d5lK309HTLsG+++UYAyPHjxy3Dfv75ZwEgISEhlmz29vb58j443f0CkZmZKUajUV588cV8061duzZf2SrJ+pd32XrYRx99JAAkMTFRRER27NghAGT27NmWaVJTU6Vp06aSm5tb4tyP2m+Pw8vLSxwcHGTgwIHStm1bASAtWrSQO3fuFDp9eeZ6cLv89ttvAkAiIyOLzFrSslXUskQqzz5g2SKiChDGrxHLWatWreDk5ITY2FgAQHx8PDIzM9GyZUvLNHZ2dnBzcyvwtZzabGxsAAC5ubmWYfevOcvJyQEAnDhxAnfu3EG7du3yvbZ9+/awsbFBTEwMgD++Ps3MzES3bt2KXGZlWP/762g2mwEAXbt2xZNPPomvv/4aIgIACAkJQXBwMHQ6XYXnzszMROfOnXH48GH4+/vjxIkTGDlyZKHTlmeuB7eLp6cnXF1dMWTIEMyaNQu///57qdenuGUBlW8fEBGpiWVLBXq93lJeMjIyAADvv/++5ToVRVFw8eJFZGZmahmzULdv3wYA2NvbFxjn7OyM9PR0AMCVK1cAAC4uLkXOT4v137JlC7p06QIXFxfY2toWuCZNURSMHj0a58+fx86dOwEA3377LUaMGKFJbnt7e4waNQoAsGrVKnh6eiIkJAQLFy4sMG1ZchW1Xezs7LBr1y507NgRc+bMgaenJ4KDg5GVlVWqdapq+4CISE0sW+UsNzcXt27dgoeHB4D/lp
GFCxdCRPL9HDhwQMuohXJ2dgYAS6l60O3bt9GwYUMAgMFgAADcu3evyPlV9PpfunQJ/v7+cHNzQ0xMDFJTUzF37twC0w0bNgwGgwErVqxAfHw8HB0d0ahRI81y3+fk5ITw8HBLQYmOjs43vrS5SrJdWrRogc2bNyMhIQFTp05FaGgoFixYUKLc0dHRlnJY1fcBEVF5Y9kqZ7t370ZeXh7atm0LAJa7/kr6RHWttWzZEvb29vjll1/yDY+JiUF2djaefvppy3RWVlbYu3dvkfOr6PU/fvw4cnJyMHbsWHh6esJgMBT6+IRatWohKCgIGzduxIIFC/D6669rmvtBbdu2xcKFC5Gbm4sBAwYgISGhzLmK2y4JCQk4efIkgD9Kzscff4y2bdtahhXn8OHDMJlMJVrWfZV5HxARlSeWrTLKzs5GamoqcnNzceTIEYwfPx6NGjXCsGHDAPxxBmj48OFYu3Ytli1bhrS0NJjNZly5cgXXrl0rct61a9dGQkICfv/9d6Snp1u+mlSTwWDAxIkTsX79eqxZswZpaWk4fvw4xowZg3r16lm+7nJxcUFAQAAiIiKwcuVKpKWlITY2tsCflSnL+pfG/TOKO3bswN27d3HmzBnLdWYPGzNmDO7du4fIyEj07t273HJHRUWV6tEPD2cbOHAgbty4gf79+1v2fWlzFbddEhISMHr0aMTFxSE7OxtHjx7FxYsX4evrW2TOnJwc3LhxA3v27LGUrcqwD4iIKpWKvBy/MivN3YirVq2SF154QVxdXcXa2lrq1KkjAwcOlIsXL+ab7t69ezJ16lTx8PAQa2trcXFxkYCAADlx4oTMnTtX7OzsBIC4u7vL6tWrLa87cuSINGrUSOzs7KRjx475HrtQlEWLFonRaBQA0rhxY9m3b5988skn4uTkJACkbt268t1330lISIjUrVtXAEitWrVk7dq1IiKSl5cn8+fPl6ZNm4per5datWqJv79/gUcSpKeny8iRI6VOnTpib28vHTt2lBkzZggAadiwofz666/Frv+nn35qyWAymaRfv34l3v6Peu3UqVOldu3a4uzsLP3795clS5YIAPHy8pJLly7lm0ebNm3kvffeK3T+pd1vW7duFQcHh3x32j1s/fr14uXlJQAs22vatGkFtu9TTz0lAMTV1VVWrlxZplxFbZd9+/aJn5+f1KpVS3Q6ndSvX1+mT58uubm5BbI+6mf9+vUlWlZF7IOSAu9GJCL1hSki/7kVqIYLCwtDUFAQuDlqll69emHJkiVo0qSJ1lFqLC33gaIoCA0NxYABAyp82URUY4Tza0SqUR78KjY2NhYGg4FFq4JxHxBRTcOyVUXExcXlu/39UT/BwcFaRy0Ttddz6tSpOHPmDE6fPo3hw4fjww8/LOc1oOJwHxBRTWOtdQAqGR8fnxrxFafa62k0GuHj44MGDRpg6dKlaN68uWrLosJxHxBRTcNrtv6D12wR1Ty8ZouIKgCv2SIiIiJSE8sWERERkYpYtoiIiIhUxLJFREREpCKWLSIiIiIVsWwRERERqYhli4iIiEhFLFtEREREKmLZIiIiIlIRyxYRERGRili2iIiIiFTEskVERESkIpYtIiIiIhVZax2gslEUResIREREVI0oIiJah6gMrly5gp9++knrGFSE1NRUTJ8+HS4uLpgxY0alLsbbtm3DqlWr8M477+CZZ57ROg4Vwc/PDw0bNtQ6BhFVX+EsW1Ql3L17F127dsX169cRExMDFxcXrSMV680338TKlSuxa9cu+Pr6ah2HiIi0Ec5rtqjSExGMHDkS8fHxiIqKqhJFCwAWLVqEl156Cb1798bZs2e1jkNERBph2aJKb8aMGQgLC0NYWBieeuopreOUmE6nw5o1a+Du7o4+ffogJSVF60hERKQBli2q1EJDQzFnzhwsXrwY3bp10zrOY7O3t8eWLVuQkZEBf39/3Lt3T+tIRERUwVi2qNLav38/hg4diilTpmDUqFFaxy
m1evXqYevWrTh27BhGjx6tdRwiIqpgLFtUKV24cAEBAQHo1asXPvroI63jlFmLFi0QGhqKNWvWYPbs2VrHISKiCsS7EanSSUtLw3PPPQcbGxtER0fDZDJpHancrFixAm+88Qa++eYb/O1vf9M6DhERqS+cDzWlSiUnJwcBAQG4desWYmJiqlXRAmC5q3LkyJFo0KABunbtqnUkIiJSGcsWVSrjx4/HwYMHsW/fvmr7oMl58+bh6tWr6N+/P3766acqdYclERE9Pl6zRZXG/Pnz8eWXX2LNmjX485//rHUc1SiKgpUrV+Kpp55Cz549kZiYqHUkIiJSEcsWVQpbtmzBe++9h08//RR//etftY6jOjs7O2zatAk6nQ5/+ctfkJmZqXUkIiJSCcsWae7o0aMICgrCsGHD8NZbb2kdp8I88cQT2Lx5M86ePYthw4YhLy9P60hERKQCli3SVEJCAvr06YPnnnsOy5cv1zpOhfPx8cHGjRuxadMmvP/++1rHISIiFfACedJMZmYm+vbtCwcHB4SGhsLauma+HZ9//nn83//9HwYNGgR3d3eMGTNG60hERFSOauZvN9JcXl4eBg0ahAsXLuDAgQNwdnbWOpKmgoODERcXhwkTJsDLywsvvfSS1pGIiKicsGyRJiZPnoxt27Zh586d8Pb21jpOpTBz5kxcuHABgYGB2L9/P1q3bq11JCIiKgd8gjxVuJUrV+L111/H6tWrMXjwYK3jVCo5OTno2bMn4uLicPDgwWr7rDEiohoknGWLKtTevXvx0ksvYfr06ZgxY4bWcSql1NRUdOzYETY2Nti7dy/s7e21jkRERKXHskUVJy4uDn5+fnjxxRcREhICRVG0jlRpXbhwAR06dMDTTz9teR4XERFVSeF89ANViJs3b6J3795o3rw5vv32WxatYjRp0gSbN2/Gnj17MG7cOK3jEBFRGbBskeqys7MRGBgIs9mM9evXw9bWVutIVUL79u3xzTff4KuvvsLnn3+udRwiIiolli1SlYhgxIgROHLkCDZt2gRXV1etI1UpgYGB+Pjjj/H2229j48aNWschIqJS4KMfSFX//Oc/ERoaii1btqBly5Zax6mSpkyZgosXL2LQoEHYvXs3nn32Wa0jERHRY+AF8qSa8PBwBAUFYenSpXwqehmZzWb4+/vjl19+wYEDB9CoUSOtIxERUcnwbkRSx6FDh9ClSxeMHTsW8+fP1zpOtZCeno5OnTohJycH//73v2v8U/eJiKoIli0qf7///jt8fX352AIVJCQk4Nlnn4W3tze2bdsGGxsbrSMREVHR+OgHKl9paWno06cP6tevj9DQUBatcla/fn1s2rQJv/zyC7+aJSKqIniBPJUbs9mMQYMGITk5GTExMXzyuUratGmDsLAw9OnTB02bNsW7776rdSQiIioCyxaVm/Hjx2PXrl3Ys2cP3N3dtY5TrfXs2RNLly7F6NGj4e7uzr8xSURUibFsUblYuHAhli9fjnXr1uGZZ57ROk6N8MYbb+DUqVMYOXIkGjdujOeee07rSEREVAheIE9lFhUVhd69e2Pu3LmYOHGi1nFqlLy8PAQGBiI6Oho//fQTnnzySa0jERFRfrwbkcrmxIkT8PPzQ2BgIFauXKl1nBopKysLXbt2RVJSEg4cOAAXFxetIxER0X+xbFHpXbt2Dc8++yw8PT2xfft2PoZAQ0lJSejQoQPq16+PH3/8kX9/koio8uCjH6ho58+fR2F9PCsrC3379oXJZMKGDRtYtDTm4uKCzZs347fffsPQoUML3WdmsxkJCQkapCMiqtlYtqhIw4cPR1BQEO7evWsZlpeXh8GDB+PcuXPYtGkTatWqpWFCuq9Zs2bYsGEDNmzYgJkzZ+Ybl5GRgb/+9a947733NEpHRFRz8W5EeqTz589j3759UBQF58+fx9atW+Hq6op3330XW7ZswbZt29C0aVOtY9IDOnfujOXLl+O1115DgwYNMGrUKFy7dg
09e/ZEbGwsbGxs8D//8z/8Uz9ERBWIZYseadWqVbC2tkZOTg5iY2PRunVrjBkzBgsWLMA333yDLl26aB2RCjF8+HCcPXsWb775JnQ6HWbOnImkpCSICHJzc/Gvf/0LY8eO1TomEVGNwQvkqVB5eXlo2LAhrl27ZhlmbW0NRVHQv39/fPfddxqmo+KICF555RXs2bMHubm5yM3NBQAoigIfHx+cPHlS44RERDUGL5Cnwm3bti1f0QJg+aUdEhKCxYsXa5SMSmLVqlX48ccfkZ2dbSlawB8l7NSpUzh8+LCG6YiIahaWLSrUihUroNfrCwwXEeTl5WHChAl444038v0iJ+2JCGbOnIkRI0bAbDYjLy+vwDR6vR5ffvmlBumIiGomfo1IBSQnJ6NevXolKlK9e/fG+vXrYW3Ny/+0du/ePQwdOhShoaHFTmtnZ4fExET+sXAiIvXxa0QqaM2aNcVOoygKWrdujXfffZdFq5KwsbFBnz594ObmVuw+uXfvXolKGRERlR3LFhXw1VdfwWw2FzpOr9fD0dERCxcuxJEjR+Dn51fB6ehRFEXBoEGDcO7cOcyePRt2dnZFPmz2iy++qMB0REQ1F8sW5XPo0CGcPHmywBPIra2tYWVlhWHDhuHcuXOYMGECdDqdRimpKEajEVOnTsWZM2fQv39/KIpS4ExXXl4eDh8+jF9//VWjlERENQfLFuWzcuXKfBfGW1lZQVEUdOjQAceOHcOXX36JJ554QsOEVFINGjTAmjVrEBMTgzZt2kBRFCiKYhmv1+vx9ddfa5iQiKhm4AXyZJGVlQVXV1fcuXMHwB9ns+rUqYN58+bh1Vdf1TgdlYWIICIiAuPHj0dycrLl5gd7e3vcuHEDRqNR44RERNUWL5Cn/4qIiMCdO3dgZWUFGxsb/OMf/8CFCxdYtKqB+w+jPX36NCZNmgS9Xg+dToc7d+5gw4YNWscjIqrWKvWZrc8++wwHDhzQOkaNsWfPHiQnJ6NBgwb405/+VOXPdrzzzjvo0KGDKvPu37+/KvOtKBkZGfj111+RkJCAJ554gn96qYKFh4drHaHS4ec9VReFHN+V+8zWgQMHcPDgQa1j1Ah37txBTk4OOnfujA4dOlT5ohUREYHLly+rOv8rV66oNn+1mUwm+Pn5oXPnzsjJyUF6errWkWqEK1euICIiQusYlRI/76mqK+r4rvQPSPL19eX/AivAtWvX4OLiUm2emfXgheBqefvttzFgwADVl6M2s9mMxMRE1KtXT+so1V5YWBiCgoK0jlFp8fOeqrKiju/q8ZuVyoy/aGsunU7H/U9EpKJK/TUiERERUVXHskVERESkIpYtIiIiIhWxbBERERGpiGWLiIiISEUsW0REREQqYtkiIiIiUhHLFhEREZGKWLaIiIiIVMSyRURERKQili0iIiIiFbFsEREREamIZYuIiIhIRdW+bI0cORIODg5QFAXHjh3TOk65uHv3Lnx8fPD+++8/9mvXrVsHT09PKIqS78fGxgaurq7o0qUL5s+fj5SUFBWS033V4X05e/bsAu8jRVHQsmXLx54X35dUGlu3boWTkxM2b96sdZRyVZbP+IMHD6JZs2awsrKCoiioW7cuZs+erULK0nv4eHdzc8OQIUO0jqWqal+2VqxYga+++krrGOVq+vTpiI+PL9VrAwICcP78eXh5ecHJyQkigry8PCQmJiIsLAxNmjTB1KlT0aJFC/zyyy/lnJzuq47vy7Lg+5JKQ0S0jqCKsnzG+/r64tSpU3jppZcAAPHx8aUqbWp6+Hi/fv061qxZo3UsVVX7slXd/PTTT/jtt9/KdZ6KosDZ2RldunTBqlWrEBYWhhs3bqBXr15ITU0t12VR9bJ69WqISL6f8np/8n1Jxbn/Xujdu7fWUZCVlQU/P78yz0eNz3itlde2qcpqRNlSFEXrCOUiKysLkydPxqJFi1RdTmBgIIYNAk42VQAAIABJREFUG4bExEQsX75c1WXVZN
XlfVlR+L6kymzlypVITEws0zwq6jO+opXHtqnqql3ZEhHMnz8fTz31FGxtbeHk5ITJkycXmM5sNmPGjBnw8PCAnZ0dWrdujdDQUADAsmXLYDKZYDQa8f3336Nnz55wdHREw4YNsXbt2nzz2bt3L5555hkYjUY4OjqiVatWSEtLK3YZpTF9+nSMGzcOLi4uhY7/4Ycf4OjoiDlz5pR6GfcNGzYMABAVFWUZVhW3WWVRnd+XxeH7ktSwf/9+eHh4QFEULFmyBEDJ9/fnn38Og8EAV1dXjB49GvXq1YPBYICfnx9iYmIs040fPx42NjZwc3OzDBs3bhxMJhMURUFycjIA4K233sLEiRNx7tw5KIoCb2/vUq2Tmp/xVX3b7Nu3D82bN4eTkxMMBgNatWqFbdu2AfjjGtj71395eXnh6NGjAIDhw4fDaDTCyckJmzZtAlD0sTxv3jwYjUY4ODggMTEREydORIMGDUr9lW4+UokFBgZKYGDgY71m+vTpoiiKfPrpp5KSkiKZmZmydOlSASBHjx61TDdp0iSxtbWViIgISUlJkWnTpomVlZUcOnTIMh8AsnPnTklNTZXExETp1KmTmEwmyc7OFhGRO3fuiKOjo8ydO1eysrLk+vXr0q9fP0lKSirRMh7H/v37pU+fPiIikpSUJABk+vTp+aaJjIwUBwcH+eCDD4qdn5eXlzg5OT1yfFpamgAQd3d3y7CqtM0ASGho6GO9Rs35V8f35YcffigNGzYUZ2dn0ev10rhxY/nr/7N353FR1fv/wF/DADMssqikuaBimimYCxoq3FyuEZmmIouZpjfTNJNSC6+akRuafdVy6WaaN5cUMa9raWppakouGOK+ASqRK4KAMMD790c/J5FtRmc4M/B6Ph784ZnPOef1+cw5Z96eOefMK6/Ib7/9VqQdt8u/xcTEiIUfdhXzKMf7y5cvCwCZP3++fpoh77eIyPDhw8XJyUlOnjwp9+7dkxMnTki7du2kWrVqkpKSom83YMAAqVWrVpH1zp49WwDotw8RkeDgYGncuLGx3dYz9TE+MDBQAMjt27f10yxtbMrb3x8UGxsrUVFRcuvWLbl586b4+flJjRo1iqxDrVbL1atXi8z36quvyqZNm/T/NvR4ERERIfPnz5e+ffvKqVOnDMpYxv691qL3emN3vuzsbHF0dJTu3bsXmb569eoiH2o5OTni6Ogo4eHhRebVaDQycuRIEfl7wHNycvRt7n84nj9/XkREEhMTBYBs2bKlWBZD1mFMv3x9feXKlSsiUvqOaAxDNnKVSiVubm4iYn1jZknFVmXdLlNSUuTo0aOSmZkpubm5cuDAAWndurU4ODhIYmKiUcu6r7Jvlyy2SmfqYqus91vkr4Li4W3t0KFDAkA+/vhj/bSKKLbMcYwvq9iylLExpth62IwZMwSAXLt2TUREdu7cKQBk2rRp+jZ37tyRJk2aSH5+vog8+vHCUGUVW5Xqa8Tz588jOzsb3bp1K7PdmTNnkJ2dXeQWdQcHB9SuXRunT58udT57e3sAgE6nAwB4eXnhiSeewGuvvYaoqCgkJSU99jpKMmHCBAwbNgx169Y1ar7HkZWVBRGBi4sLAOsbM0tSWbfL+vXro3Xr1nB2doa9vT38/PywbNky5OTkYOHChUYty1DcLulRPPx+l8bX1xeOjo4V/r4qcYy/z9LHpjR2dnYA/vpaEAC6du2Kpk2b4uuvv9bfpbpmzRqEh4dDrVYDUHZfrlTF1pUrVwCg1O+778vKygIATJo0qcgzfZKTk5GdnW3w+hwcHPDTTz/B398f06dPh5eXF8LDw5GTk2Oydezbtw/Hjx/H0KFDDZ7HFM6ePQsAaNasGQDrGjNLUxm3y9L4+PhArVbrtx9T43ZJ5qbRaHD9+vUKW59Sx/hHUdFj86CtW7eic+fO8PDwgEajwQcffFDkdZVKhbfeegsXL17Erl27AADLly/HG2+8oW+j5L5cqYotrVYLAMjNzS2z3f
0Pvblz5xa7bf3AgQNGrbNFixbYvHkzUlNTERkZiZiYGHz66acmW8fSpUuxa9cu/QPqVCqVftnTp0+HSqUyy3OHtm3bBgAICgoCYF1jZmkq43ZZmsLCQhQWFkKj0Tz2skrC7ZLMSafTIT09HfXq1auwdSp1jDdWRY/NL7/8grlz5wIAUlJS0KdPH9SuXRtxcXG4c+cOZs2aVWyewYMHQ6vVYsmSJThz5gxcXFzQoEED/etK7suVqtjy9vaGjY0N9uzZU2a7+vXrQ6vVPvaTu1NTU3Hy5EkAf72J0dHRaNOmDU6ePGmydSxbtqzYRnH/fxYTJ06EiMDX1/ex1vGwtLQ0zJ07F/Xq1cO//vUvANY1ZpamMm6XABAYGFhs2qFDhyAi6NChw2Mv/2HcLsncdu/eDRGBn5+ffpqtrW25X7E9DiWO8Y+iosfmyJEjcHJyAgAcP34cOp0OI0eOhJeXF7RabYmPznF3d0dYWBg2bNiATz/9FG+++WaR15XclytVseXh4YHg4GCsW7cOS5cuRUZGBhISErB48eIi7bRaLYYMGYLVq1dj0aJFyMjIQEFBAa5cuYI//vjD4PWlpqbirbfewunTp5GXl4f4+HgkJyfDz8/PZOswxg8//GDUbcEigrt376KwsFC/g8fExKBTp05Qq9XYsGGD/tqYyjpmFaGybpdXr17FmjVrkJ6eDp1OhwMHDmDo0KHw9PTEiBEj9O24XZKlKiwsxO3bt5Gfn4+EhAS8++678PT01D9iBACeeuop3Lp1Cxs2bIBOp8P169eRnJxcbFnVq1dHamoqkpKSkJmZaZYixNh96XEoNTY6nQ5//vkndu/erS+2PD09AQA7d+7EvXv3cO7cuSKPoXjQiBEjkJubiy1bthR72K2i+7LRl9tXoEe5OyUzM1OGDh0qNWrUEGdnZ/H395fJkycLAKlXr578/vvvIiKSm5srkZGR4unpKba2tuLh4SHBwcFy4sQJWbhwoTg6OgoAadKkiVy4cEEWL14sLi4uAkAaNGggZ8+elaSkJOnYsaO4u7uLWq2WOnXqyMSJE/V3PpS1jsdR2p0q33//vVSrVq3I3RgP27Rpk7Rs2VIcHR3F3t5ebGxsBID+Dq/27dvLlClT5ObNm8XmtaYxgwXdjShSObfLsWPHSuPGjcXJyUlsbW2lXr168uabb0pqamqRdtwu/8a7EUtn7PF+/vz5Urt2bQEgjo6O0qtXL4Pfb5G/7rizs7OTunXriq2trbi4uEjv3r3lwoULRdZz8+ZN6dKli2i1WmnUqJG888478v777wsAeeqpp/SPQjh69Kg0aNBAHBwcxN/fX9LS0h55LB7nGH/w4EFp0aKFfh+qXbu2TJ8+3aLG5osvvpDGjRsLgDL/1q9fr19XZGSkVK9eXdzc3CQkJEQWLFggAKRx48ZFHkchItK6dWv597//XeL4lLUvz5o1SxwcHPSPl1mxYoXhb5qUfTeiSsRyf1wqJCQEABAbG6twErI2KpUKMTExCA0NtcrlU+W0du1ahIWFVdrf9HscFX28f+uttxAbG4ubN29WyPqsibWPTY8ePbBgwQI0atSoQtdbxv4dW6m+RiQiIjLU/ccGUHHWNDYPfi2ZkJAArVZb4YVWeVhsKeD06dNFbjst7S88PFzpqFSFcLskMg3uSxUrMjIS586dw9mzZzFkyBBMnTpV6UjF2CodoCpq1qwZv0Ygi8PtkqqKCRMmYNmyZcjLy0OjRo0we/Zs9OvXz2TLt+Z9ydxjYw6Ojo5o1qwZ6tati4ULF6J58+ZKRyqGZ7aIiKhKmTFjBnJzcyEiuHTpksUXExXJGsdm2rRpKCgoQEpKSrE7EC0Fiy0iIiIiM2KxRURERGRGLLaIiIiIzIjFFhEREZEZsdgiIiIiMiMWW0RERERmxGKLiIiIyIxYbBERERGZEYstIiIiIjNisUVERERkRiy2iIiIiMyIxRYRERGRGbHYIiIiIjIjW6UDlOfgwYMICQlROg
b9f/n5+cjOzoaLi4vSURQ3d+5cxMbGKh3D7PLy8mBvb690jErhypUrSkewaDzekzUra/+26GKrQ4cOSkegh5w5cwbnzp1D+/btUadOHaXjlKpfv36oX7++WZdfVezZswd169ZF8+bNlY5i9erVq1elth1j8Hhvfps2bYKvr69FH7utWVn7t0pEpILzkBXLz8/He++9h4ULF+KDDz5AdHQ0VCqV0rHITNavX49+/frh2LFjaNmypdJxiOgxqFQqxMTEIDQ0VOkoVU2sRZ/ZIstja2uL+fPnw8fHB6NGjcKlS5ewbNkyODo6Kh2NzCA6Ohp9+/ZloUVE9Bh4gTw9kmHDhmHXrl3YvXs3OnXqhOTkZKUjkYlt2rQJR44cwb///W+loxARWTUWW/TIAgICcODAAeTn58PX1xd79uxROhKZ0LRp0/DKK6+gbdu2SkchIrJqLLbosXh5eeHAgQPw9/dHYGAgvv76a6UjkQls3boVhw4dwoQJE5SOQkRk9Vhs0WNzdnbG+vXrMX78eAwdOhTDhw+HTqdTOhY9hujoaLz88sto166d0lGIiKweL5Ank1CpVIiKikLz5s0xZMgQJCUlYc2aNXB3d1c6Ghlp+/bt2L9/P+Li4pSOQkRUKfDMFplUaGgo9u/fjzNnzqB9+/Y4efKk0pHISNOnT0dQUBDat2+vdBQiokqBxRaZXKtWrXDo0CHUqVMHfn5+2LRpk9KRyEA7d+7E3r17ea0WEZEJsdgis/Dw8MCPP/6Ifv36oXfv3oiKilI6Ehlg6tSp6N69O/z9/ZWOQkRUafCaLTIbjUaDr7/+Gn5+fnj77bdx5swZfP3113BwcFA6GpVg9+7d+OWXX/DLL78oHYWIqFLhmS0yu2HDhmHr1q3Yvn07OnXqhMuXLysdiUowZcoUdO3aFQEBAUpHISKqVFhsUYV44YUX8NtvvyE3Nxd+fn68083C/Prrr/j555/x4YcfKh2FiKjSYbFFFeapp57CwYMH4evri+effx7ffPON0pHo/4uKikKnTp3QuXNnpaMQEVU6LLaoQlWrVg3r16/Hu+++i8GDByMiIgIFBQVKx6rSDh48iB07duDjjz9WOgoRUaXEYosqnFqtxsyZM/Htt9/iq6++wssvv4z09HSlY1VZH3/8MTp06IBu3bopHYWIqFJisUWK6d+/P/bt24cTJ06gffv2OH36tNKRqpwjR45g+/bt+Oijj5SOQkRUabHYIkW1adMGBw8ehLu7Ozp27IgdO3YoHalKiYqKQvv27REYGKh0FCKiSovFFimuTp06+OWXX9CzZ08EBQVh1qxZSkeqEuLj47F161ZMnjxZ6ShERJUaH2pKFkGj0eCbb75BmzZtMGbMGBw/fhxLliyBVqtVOlql9fHHH6N169YICgpSOgoRUaXGM1tkUSIiIrBlyxZs3boV3bp1Q1pamtKRKqXExERs3rwZH330EVQqldJxiIgqNRZbZHGCgoIQFxeHW7duwdfXF4cOHVI6UqUTFRWFli1bomfPnkpHISKq9FhskUVq2rQp9u/fj2eeeQb/+Mc/sHLlSqUjVRonT57E//73P0yePJlntYiIKgCLLbJY1atXx7Zt2xAREYFBgwZh/PjxKCwsVDqW1fv444/xzDPP4JVXXlE6ChFRlcAL5Mmi3X8Aqo+PD4YOHYrjx4/j22+/haurq9LRrNKpU6ewbt06rFmzBjY2/L8WEVFF4NGWrMKAAQOwa9cuHD16FAEBAbh06ZLSkazStGnT0KxZMwQHBysdhYioymCxRVajY8eOOHz4MDQaDdq1a4effvpJ6UhW5fz581i7di0+/PBDntUiIqpAPOKSValbty727t2LoKAgBAYG8gGoRpg6dSoaNWqEkJAQpaMQEVUpvGaLrI5Wq8Xy5cvh7e2NCRMm4OLFi5g/fz7s7e2VjmaxLly4gG+//Rb//e9/oVarlY5DRFSl8MwWWSWVSoXIyEhs2rQJa9asQbdu3XDt2jWlY1
ms6dOno2HDhggLC1M6ChFRlcNii6xajx49sG/fPly9ehW+vr44evSo0pEsTnJyMlatWoVJkybB1pYns4mIKhqLLbJ6Pj4+OHToEJo0aYLnn38e69evVzqSRZk+fTrq1auHAQMGKB2FiKhKYrFFlUKNGjWwfft2/Otf/0K/fv34ANT/LyUlBd988w0mTpzIs1pERAphsUWVhq2tLT777DP85z//wZw5cxAWFoasrCylY1WIwsJCjBw5stjzx6Kjo1G7dm289tprCiUjIiKViIjSIYhMbd++fQgODkbt2rWxceNGNGzYUOlIZpWamoq6detCrVZj4MCBmDRpEjQaDZ566il8/vnnGDZsmNIRiagCDRw4EMeOHSsyLSkpCR4eHnByctJPs7Ozw+bNm1G3bt2KjliVxLLYokrr8uXL6N27N1JSUhAbG4vOnTsXa3P27FksWrQI8+bNq/iAJnTgwAF07NgRwF8Hz4KCAvj6+uLKlSu4dOkSH4tBVMVMmzYNH374YbntmjVrhlOnTlVAoiotll8jUqVVv3597NmzBwEBAQgMDMTSpUuLvJ6eno6goCB8/vnn+OWXXxRKaRpJSUn6p8LrdDoUFhYiPj4ef/zxB/r378+DKVEV079/f6hUqjLb2NnZYfDgwRUTqIpjsUWVmrOzM7777jtMmTIFw4YNw/Dhw6HT6VBQUICwsDBcvnwZKpUKw4YNg06nUzruI0tJSYGdnV2RaTqdDiKCzZs3o0WLFggNDcWJEycUSkhEFalx48Zo3bp1mT/NlZ+fz2fvVRDenkSV3v0HoDZq1AhDhgzBxYsX0axZM+zatQsFBQUA/vrdwHnz5uH9999XOO2jSUpKKvXuy/tFZGxsLLKzs7Fp0yb+NiJRFTBo0CAkJCSUeGxQqVRo3759pb+e1VLwiEtVRmhoKPbt24d79+5hwYIF+kILAAoKCvDhhx8iKSlJuYCP4eLFi2WemVOr1XjhhRewbt06FlpEVURYWFip/wmzsbHBoEGDKjhR1cWjLlUphYWF+O2330p97b333qvgRKZx4cKFUl+ztbVF9+7dsWnTJmi12gpMRURKql27NgICAkr9PdTg4OAKTlR1sdiiKuOPP/7ASy+9VOSM1oN0Oh02bNiArVu3VnCyx3f16tUSp6vVavTs2RObNm2CRqOp4FREpLSBAwcWm2ZjY4MuXbqgVq1aCiSqmlhsUZWQk5ODHj164Pbt26UWW8BfB6GRI0fi3r17FZju8dy4caPEvGq1Gr1798batWuLXTxPRFVDSEhIiZcOlFSEkfmw2KIqYeHChcUe8FeSwsJCpKamIjo6ugJSmUZycnKxaTY2NggODsaaNWv4Mz1EVZiLiwtefPHFIscBtVqNV155RcFUVQ+LLaoSxo0bh6SkJEydOhUNGjQAgFIf9Jmfn4/o6GicOXOmIiM+sqSkpCLP01Gr1QgNDcW3337LQouI8Nprr+nP6Nva2qJXr15wdXVVOFXVwmKLqgxPT09ERkYiKSkJhw8fxvDhw+Hq6gqVSlXiBaQjRoxQIKXxkpOT9V8T3r/DaNWqVaVeFEtEVUuvXr3g4OAA4K87rwcMGKBwoqqHxRZVSW3btsXnn3+OtLQ0rF27FkFBQVCr1bC1tYWNjQ10Oh1+/vlnxMbGKh21XCkpKdDpdLCxscGbb76JpUuX8vEORKSn1WrRt29fAICjoyOCgoIUTlT1WO13DGvXrlU6AlUiAwcORO/evbF//37s3r0bly5dAgC8+eabyM7O1v+v0BLt27cPIoLAwEB06dLFKgpEqtpCQ0PNstwrV67g119/NcuyrV39+vUBAO3atcOmTZsUTmOZ6tevjw4dOphl2Vb7Q9Tl/eYTERFZJnN97Kxdu5Y/P0OPrF+/fub6z2qs1Z7ZAoCYmBiz/Q+JCPjr7sRff/0Vfn5+Fnux+cyZMzF+/HilYxCVq6KKISs9h2B2UVFRmDRpksUey5QUEhJi1uVzxInKYGNjA39/f6
VjlImFFhEZgoWWcngVLRERURXAQks5LLaIiIiIzIjFFhEREZEZsdgiIiIiMiMWW0RERERmxGKLiIiIyIxYbBERERGZEYstIiIiIjNisUVERERkRiy2iIiIiMyIxRYRERGRGbHYIiIiIjIjFltEREREZsRiqxzR0dFwdXWFSqXCsWPHlI5jsCFDhkCr1UKlUuHevXuVJke7du2gVqvRqlWrR17G999/D1dXV2zevLnUNkOHDkW1atUs7n03Rf9LY2ifS2tnyLhWtDNnzuCdd95BixYtUK1aNdja2sLV1RVNmzZFjx49cODAAaUjkoWx1H3fWDqdDjNmzMBTTz0Fe3t7uLm5wdvbG0lJSUYt57vvvoOXlxdUKlWRP3t7ezzxxBPo3LkzZs+ejdu3b5unI5UEi61y/Pvf/8aXX36pdAyjLVu2DOPGjVM6hslzHDp0CF26dHmsZYhIuW2WLFmCr7766rHWYw6m6H9pDO1zae0MGdeKtHTpUvj4+CAhIQFz5szB5cuXkZWVhfj4eEydOhXp6ek4fvy40jHJwljqvm+ssLAwLF++HKtWrUJ2djZOnTqFxo0b4+7du0YtJzg4GBcvXkTjxo3h6uoKEUFhYSGuXbuGtWvXolGjRoiMjESLFi1w+PBhM/XG+tkqHaCi5OTkoFu3bvj111+VjkImoFKpHnneHj164M6dOyZMU/Eep//mYknjevDgQQwfPhzPP/88tm/fDlvbvw91Xl5e8PLygpubG86dO6dgyrIpeczi8dK6rVmzBhs2bMDvv/8OHx8fAMCTTz6JjRs3mmT5KpUKbm5u6Ny5Mzp37owePXogLCwMPXr0wNmzZ+Hq6mqS9VQmVebM1tKlS3Ht2jWlYyjCUj6YTZnDzs7OZMsqjaWMW0nM1X9D+1wRYyMiiI2NxeLFi42ed9q0aSgoKEB0dHSRQutBgYGBGDVq1OPGNBslj1lV+XgJWPa+b4gvvvgCbdq00Rda5tavXz8MHjwY165dw3/+858KWae1qRLF1rvvvouxY8fiwoULUKlUeOqppwD8dTCfM2cOnnnmGWg0Gri7u6N37944ffp0mcv7888/0bBhQ9ja2uLFF1/UTy8oKMDkyZPh6ekJBwcHtGzZEjExMQCARYsWwcnJCY6Ojti4cSOCgoLg4uKCevXqYfXq1Y/ctxUrVsDX1xdarRZOTk5o2LAhpk6dqn/dxsYGW7duRVBQEFxdXfHkk0/i66+/LrKMvXv3onnz5nB1dYVWq4WPjw+2b98OAPjkk0/g6OiIatWq4dq1axg7dizq1q2LM2fOGJWzvBxDhw7VXwvQuHFjxMfHA/jrmi9HR0e4urpi06ZN+vbnz59Hs2bN4OTkBAcHBwQEBGDfvn3610vLvXTpUnh6ekKlUmHBggX69iKC2bNn4+mnn4ZGo4Grqyvef/99o/r4oLK2hXnz5sHJyQk2NjZo27YtatWqBTs7Ozg5OaFNmzYICAhA/fr1odVq4ebmhg8++KDY8svrf3kZjOmzIe327dtXbFyN2eYLCgowY8YMPP3003BwcEDNmjXRqFEjzJgxA6Ghofp227Ztg4uLC6ZPn17q2Ofl5WHXrl2oUaMG2rdvX2q7kvpZ3vHA2P24rP2zrP2utGOWqY4xpl63NTN0PzDV2O/Zswft27eHo6MjXFxc4OPjg4yMjHLXYai8vDwcPHjQoOs6DdmfDDV48GAAwA8//KCfZi1jViHESgGQmJgYg9sHBwdL48aNi0ybPHmy2Nvby4oVKyQ9PV0SEhKkTZs2UrNmTUlLS9O3W716tQCQ+Ph4ERHJy8uT4OBg2bhxY5HljRs3TjQajaxbt05u374tEyZMEBsbGzl06JCIiEycOFEAyK5du+TOnTty7do1CQgIECcnJ8nLyzN6DObOnSsAJDo6Wm7evCm3bt2SL7/8UgYMGFBsfenp6XLr1i156aWXRKPRSFZWln45sbGxEhUVJbdu3ZKbN2+Kn5+f1KhRQ//6/eVERE
TI/PnzpW/fvnLq1CmDcxqaIzg4WNRqtVy9erXI/K+++qps2rRJ/+9u3bqJl5eXXLp0SXQ6nSQmJspzzz0nWq1Wzp49W27uy5cvCwCZP39+kbYqlUr+7//+T27fvi3Z2dmycOHCIu+7McrbFj766CMBIHFxcZKVlSU3btyQF198UQDI1q1b5fr165KVlSWjR48WAHLs2DGj+2/I9mhInw1tV9q4GrLNT58+XdRqtWzcuFGys7PlyJEjUqtWLencuXORcd2yZYtUq1ZNpkyZUurYnz17VgCIn5+fUe+ZoccDQ/tU3v5Z3n5X0jHLVMcYc6zbEDExMWLOj51HWb6h27cpxv7u3bvi4uIis2bNkpycHElLS5O+ffvK9evXDVqHIS5duiQApFWrVtK5c2epXbu2aDQaadasmSxYsEAKCwv1bQ3Zn+5r3LixuLq6lvp6RkaGAJD69etb3ZiJiPTr10/69etn1DxGWFtli63s7GxxdnaW8PDwIu1+++03AVBk43uw2NLpdNK/f3/54YcfisyXk5Mjjo6ORZaXnZ0tGo1GRo4cKSJ/b1g5OTn6Nvd36vPnzxvcF5G/Cj43Nzfp0qVLken5+fkyb968Ute3fPlyASCJiYmlLnvGjBkCQK5du1bqcoxhaI6dO3cKAJk2bZp+2p07d6RJkyaSn5+vn9atWzd59tlni6wjISFBAMi4cePKXK9I8aIgOztbHB0dpXv37kXaPVxkG8qQbeF+sZWZmalv88033wgAOX78uH7a/e1xzZo1RvW/vAyG9tmYsSmr2Cpvm2/Xrp20b9++yDqGDRsmNjY2kpubK8Y4fPiwAJB//vOfBs9jzPHAkD4Zsn8+7OH97uFjljmPMaZYtyEsrdgydPs21dgnJiYKANmyZUuxLKYa4+P/25x3AAAgAElEQVTHjwsA6d69u+zfv19u3rwp6enpMn78eAEgK1euNHhZDyqv2BIRUalU4ubmZnB/LGXMRMxfbFWJrxFLcuLECdy9exe+vr5Fprdr1w729vaIi4srNk9BQQFeffVVPPHEE0W+PgT+usU8Ozsb3t7e+mkODg6oXbt2mV9L2tvbA/jrNl1jJCQkID09HYGBgUWmq9VqRERElDrf/Wt9ylrf/TYFBQVGZTJGSTm6du2Kpk2b4uuvv9bf2bZmzRqEh4dDrVaXuTwfHx+4uroiISHB6Cznz59HdnY2unXrZvS8JXncbSE/P18/zZD3Cyje//IyGNpnU48NUPI2f+/evWJ3MxYUFMDOzq7c9/5hzs7OAIDs7GyD53mU48GDHu7To+yf5e135jzGmGvdls7Q7dtUY+/l5YUnnngCr732GqKiooo8hsFUY6zRaAAALVq0QMeOHVG9enW4urri448/hqur6yNdA2mIrKwsiAhcXFwAWNeYVYQqW2ylp6cD+PvA/CA3NzdkZmYWmz5q1CicO3cO//nPf3Dy5Mkir2VlZQEAJk2aVORZJMnJyUYd9A11//tqNze3x17W1q1b0blzZ3h4eECj0ZR4jVBFUKlUeOutt3Dx4kXs2rULALB8+XK88cYbBs1vZ2dndNEKAFeuXAEAeHh4GD1vSSp6W7jvwf6Xl8HQPpt6bErz0ksv4ciRI9i4cSNycnJw+PBhbNiwAS+//LLRxVbDhg2h1Wpx9uxZg+d5lONBWQzZP43d70y5XSm5bkti6PZtqv47ODjgp59+gr+/P6ZPnw4vLy+Eh4cjJyfHZOt48sknAQA3btwoMt3e3h4NGjTAhQsXDF6WMe7vb82aNQNgXWNWEapssXX/IFjSQTQ9PR316tUrNj00NBQ7duyAm5sbBg0aVOQMxP2dde7cuRCRIn/meHBinTp1ABTfoYyVkpKCPn36oHbt2oiLi8OdO3cwa9YsU0R8JIMHD4ZWq8WSJUtw5swZuLi4oEGDBuXOl5+fj1u3bsHT09PodWq1WgBAbm6u0fOWpKK3BaB4/8vLYGifTT02pYmKikLXrl0xePBguLi4oG
/fvggNDX2k5x1pNBoEBgbixo0b2L9/f6ntbt26haFDhwJ4tONBWcrbPx9lvzPVdqXkui2Nodu3KfvfokULbN68GampqYiMjERMTAw+/fRTk63D2dkZTZo0KXZCAPjrOGGuxzJs27YNABAUFATAusasIlTZYsvb2xvOzs7FHsIWFxeHvLw8tG3bttg8Xbp0Qc2aNbF48WIcOXIE06ZN0792/+6xinricMOGDVG9enX8+OOPj7Wc48ePQ6fTYeTIkfDy8tI/7V0p7u7uCAsLw4YNG/Dpp5/izTffNGi+n3/+GYWFhWjTpo3R6/T29oaNjQ327Nlj9LwlqehtASje//IyGNpnU49NaU6cOIELFy7g+vXr0Ol0SElJwaJFi+Du7v5Iy4uKioJGo8GYMWOQk5NTYpvExET9YyEe5XhQlvL2z0fZ70y1XSm5bktj6PZtqv6npqbqiyAPDw9ER0ejTZs2OHnypEnHOCwsDPHx8bh48aJ+WnZ2NpKTk83yOIi0tDTMnTsX9erVw7/+9S8A1jdm5lZliq3q1asjNTUVSUlJyMzMhFqtxtixY7F+/XqsXLkSGRkZOH78OEaMGIEnn3wSw4cPL3VZvXr1wuDBgzF9+nQcOXIEwF//QxoyZAhWr16NRYsWISMjAwUFBbhy5Qr++OMPk/dHo9FgwoQJ+OWXXzB69GhcvXoVhYWFyMzMLPF/NKW5fyZk586duHfvHs6dO1fu9SnmNmLECOTm5mLLli3o2bNniW3y8vJw584d5Ofn4+jRoxg9ejQaNGigv/3YGB4eHggODsa6deuwdOlSZGRkICEh4ZGvbaiIbaG8/peXwdA+m3psSjNq1Ch4enqW+3TrH374waBb1Vu1aoVVq1YhMTERAQEB+P7773Hnzh3odDpcunQJX331Fd544w39tUparfaRjwclKW//NGS/K+mYZYrtSsl1WxpDt29T7dOpqal46623cPr0aeTl5SE+Ph7Jycnw8/Mz6XFjzJgx+uNBSkoKbt68icjISOTk5GD8+PH6dobuT/eJCO7evYvCwkKICK5fv46YmBh06tQJarUaGzZs0F+zZW1jZnbmuvTe3GDk3YhHjx6VBg0aiIODg/j7+0taWpoUFhbK7NmzpUmTJmJnZyfu7u7Sp08fOXPmjH6+7777Ttzd3QWANGzYUK5duyYZGRlSv359ASDOzs6yfPlyERHJzc2VyMhI8fT0FFtbW/Hw8JDg4GA5ceKELFy4UBwdHQWANGnSRC5cuCCLFy8WFxcXASANGjQoctu+oRYsWCA+Pj6i1WpFq9VK69atZeHChTJr1ixxcHAosr6VK1fq+1KvXj39nYCRkZFSvXp1cXNzk5CQEFmwYIEAkMaNG8uoUaP0y6lfv76sWLHCqHzG5HhQ69at5d///neJy1y2bJl06dJFnnjiCbG1tZUaNWpI//79JTk5ucT1Pph7/vz5Urt2bQEgjo6O0qtXLxERyczMlKFDh0qNGjXE2dlZ/P39ZfLkyfqMv//+u1H9LmtbmDdvnn5baNiwoezdu1dmzpwprq6uAkBq1aolq1atkjVr1kitWrUEgLi7u8vq1asN7n95GYzpsyHtShpXY7b5n376SWrUqCEA9H92dnbyzDPPyHfffafv0/fffy/VqlUrcsdqWVJSUmTcuHHi4+Mjzs7Oolarxc3NTVq3bi1vvPGG7N+/X9/WkOOBsftxafunSNn7XUpKSonHLFMdY0y9bkNZ2t2IIobvB6YY+6SkJOnYsaO4u7uLWq2WOnXqyMSJE/V3W5tijO+7fPmy9O/fX9zd3UWj0Uj79u2L3UVvyP60adMmadmypTg6Ooq9vb3Y2NgIAP2dh+3bt5cpU6bIzZs3i81rTWNm7rsRVSIW9oNmBlKpVIiJiSnywEOqPHr06IEFCxagUaNGSkehCrBo0SKcO3cOc+fO1U/Ly8vD+PHjsWjRIty+fRsODg4KJiRTWLt2LcLCwsz2O5rmXj5VXiEhIQCA2N
hYcyw+tsr8NiJZNp1Op/9KJyEhAVqtloVWFZGWlobRo0cXu+7C3t4enp6e0Ol00Ol0LLaIyGpVmWu2rMHp06eL3L5a2l94eHilyxkZGYlz587h7NmzGDJkSJGfHLIE1vLeWCMHBwfY2dlh6dKl+PPPP6HT6ZCamoolS5Zg8uTJCA8P118HQlQV8fhj/Xhmy4I0a9bMKk5/myOno6MjmjVrhrp162LhwoVo3ry5SZf/uKzlvbFGrq6u+PHHHzFlyhQ0bdoUWVlZcHZ2RosWLTBz5kwMGzZM6YhEiuLxx/qx2CKLMG3atCKP0qCqJSAgADt27FA6BhGRWfBrRCIiIiIzYrFFREREZEYstoiIiIjMiMUWERERkRmx2CIiIiIyIxZbRERERGbEYouIiIjIjFhsEREREZkRiy0iIiIiM2KxRURERGRGLLaIiIiIzIjFFhEREZEZsdgiIiIiMiNbpQM8jgMHDigdgahSy8/Ph62tVR8myIJU1DF77dq1FbIeQ3Afsg5XrlxBvXr1zLZ8lYiI2ZZuRiqVSukIRET0CMz1sbN27VqEhYWZZdlU+fXr1w+xsbHmWHSs1RZbRGR+sbGxGD58OJ588kmsXLkSrVu3VjoSkcXLyMjA+++/j8WLF2PgwIFYtGgRnJ2dlY5FyonlNVtEVKqQkBAcO3YMHh4eeO655xAVFYWCggKlYxFZrF9//RVt2rTBhg0bsHHjRixfvpyFFvECeSIqm6enJ37++WfMnj0b0dHR+Mc//oGLFy8qHYvIouTm5mL8+PEICAjAs88+ixMnTqBXr15KxyILwWKLiMqlUqkQERGBw4cP4+7du2jdujUWL16sdCwii5CYmAg/Pz988cUX+OKLL/Ddd9+hZs2aSsciC8Jii4gM5uPjg7i4OIwYMQIjRoxAaGgobt26pXQsIkWICD777DP4+vrCwcEBR44cwbBhw5SORRaIF8gT0SPZsWMHhgwZAhsbG3zzzTfo0qWL0pGIKkxycjJef/11/Prrr5gwYQI+/PBDqNVqpWORZeIF8kT0aLp3747ExET4+/ujW7duiIiIQG5urtKxiMwuNjYWrVu3xo0bNxAXF4eoqCgWWlQmFltE9Mjc3Nzw7bff4r///S+WLVuGtm3b4tixY0rHIjKL69evo2/fvggLC8PAgQNx5MgRPg6FDMJii4ge26BBg5CQkIDq1aujQ4cOmDVrFgoLC5WORWQy27dvR6tWrXDkyBHs2rULn332GTQajdKxyEqw2CIik2jYsCF+/vlnREVFYfLkyXjhhRdw5coVpWMRPZacnBxEREQgKCgInTp1wrFjx3h9IhmNxRYRmYxarUZkZCT27duHK1euwNvbG6tWrVI6FtEj+e2339CqVSssX74cK1aswNq1a+Hu7q50LLJCLLaIyOTatWuH+Ph4vP766xg4cCBCQ0Nx+/ZtpWMRGSQ/Px+zZs2Cv78/GjZsiMTERAwYMEDpWGTF+OgHIjKr7du3Y8iQIbC3t8c333yD559/XulIRKU6ffo0Bg4ciBMnTiA6OhqjR4+GSqVSOhZZNz76gYjMKzAwEL///jueffZZdO3aFREREcjLy1M6FlERIoLFixfD19cXKpUK8fHxiIiIYKFFJsFii4jMzsPDAxs3bsSyZcvw9ddfw9fXFwkJCUrHIgIApKWloWfPnnj77bcxatQo7N+/H08//bTSsagSYbFFRBXm/iMiXFxc8Nxzz/EREaS4devWwdvbG6dOncLu3bsxc+ZM2NnZKR2LKhkWW0RUoRo1aoTdu3cjKioKH374IV588UWkpqYqHYuqmDt37mD48OEIDQ1FcHAwEhIS0KlTJ6VjUSXFC+SJSDFxcXEYOHAgbt++ja+++gq9e/dWOhJVAbt27cKQIUOQl5eHJUuW4OWXX1Y6ElVuvECeiJTz3HPP4dixY3j11VfRp08fDBo0CJmZmUrHokrq3r17GD9+PF544QW0b98eiYmJLLSoQvDMFhFZhP/973
8YNmwYnJ2dsXz5cgQEBCgdiSqRxMREvPbaa7h06RJmz56NYcOGKR2Jqg6e2SIiy9CnTx8kJibC29sbXbt2xfjx4/mICHpshYWF+Oyzz9C2bVs4OTnh6NGjLLSowvHMFhFZFBHBV199hTFjxqB58+ZYuXIlmjZtqnQsskJJSUl4/fXXERcXh48//hjvv/8+bGx4joEqHM9sEZFlUalUGDZsGA4fPoyCggK0atUKn332Gfj/QjLG8uXL0bJlS9y8eRMHDx5EZGQkCy1SDLc8IrJIzZo1Q1xcHD744AOMHTsWQUFB+OOPP5SORRbu+vXr6NOnDwYPHowhQ4bgyJEjaNWqldKxqIpjsUVEFsvW1hZRUVHYu3cvzp8/j1atWmHTpk1KxyILtW3bNjz77LOIj4/HTz/9hM8++wwajUbpWEQstojI8nXo0AFHjx5F79698corr2DQoEG4e/eu0rHIQmRnZyMiIgIvvfQS/P39ER8fj86dOysdi0iPF8gTkVX57rvvMHz4cLi4uGDFihV86ncVd/DgQQwaNAjp6en48ssv0adPH6UjET2MF8gTkXUJDg5GYmIinnnmGXTu3Bnjx4+HTqdTOhZVsPz8fERFRcHf3x9eXl44duwYCy2yWDyzRURW6f4jIt577z34+PhgxYoVaNKkidKxqAKcOnUKAwcOxMmTJxEdHY3Ro0dDpVIpHYuoNDyzRUTW6f4jIg4dOoS8vDy0bdsWixcvVjoWmZGIYPHixfD19YVarcaxY8cQERHBQossHostIrJqzZs3R1xcHMaMGYMRI0YgODgYN27cUDoWmVhaWhpefvllvP3223jnnXewb98+PuyWrAaLLSKyenZ2doiKisKOHTtw6NAheHt7Y8uWLUrHIhOJjY1FixYtcPHiRRw4cAAzZ86EnZ2d0rGIDMZii4gqja5du+L48eN44YUX0KtXLwwfPhxZWVlKx6JHdOfOHQwaNAhhYWHo168fDh8+DF9fX6VjERmNF8gTUaUUGxuLt956C7Vq1cLKlSvRpk0bpSOREXbu3IkhQ4YgPz8fS5YsQY8ePZSORPSoeIE8EVVOISEhiI+PR61atfDcc88hKioKBQUFSseicty7dw/jx49HYGAg/Pz8kJiYyEKLrB7PbBFRpSYi+PzzzxEZGYk2bdpgxYoVaNy4cbnz8A63inf8+HG89tprSE5OxieffIJhw4YpHYnIFHhmi4gqN5VKhYiICBw+fBjZ2dlo06ZNmY+IWLduHWbMmFGBCSu/wsJChIaG4vLlyyW+XlBQgFmzZsHX1xc1a9bE8ePHWWhRpcJii4iqBG9vbxw8eBAjRozAiBEjEBISgps3bxZpc/nyZQwZMgSTJ0/GwYMHFUpa+Xz66aeIjY3F4MGD8fCXKUlJSejSpQuioqIwZcoU7NixA/Xr11coKZF5sNgioipDq9Vi5syZ2L59Ow4cOABvb2/88MMPAP46+zJgwADk5uZCpVIhNDQUGRkZCie2focOHcKECRMAALt378aCBQv0ry1fvhw+Pj64ffs2Dh48iMjISNjY8GOJKh9es0VEVVJ6ejrefvttrF69Gm+++SYaNGiASZMm6c+82NnZISwsDCtWrFA4qfW6e/cuWrZsicuXLyM/Px/AX+O6c+dOzJkzB5s3b8aoUaMwe/Zs2NvbK5yWyGxiWWwRUZX23//+F++88w5ycnJKvFtx5cqVGDBggALJrN+AAQMQGxtb5IfCbW1todVq4eHhgRUrVqBTp04KJiSqELxAnoiqtv79+6N27dol3n2oUqkwfPhwXLp0SYFk1m3t2rX49ttvixRaAJCfn4+cnByEhISw0KIqg8UWEVVpkZGRSEpK0n/N9SARQV5eHvr161esaKDSXbx4EUOGDCn18RkFBQWYPXs29u7dW8HJiJTBYouIqqydO3fi888/L7HQuk+n0+H333/H9OnTKzCZ9crPz0doaCjy8vKK3Xn4IBsbGwwYMACZmZkVmI5IGSy2iK
hKunXrFl5//XWD2hYUFGDq1KnYt2+fmVNZv0mTJuHYsWNlFrDAX2N6+fJljBs3roKSESmHxRYRVUm2traYM2cOBg0ahJo1awIA7O3tS330gEqlQlhYGNLT0ysyplXZuXMnPvnkk1J/FkmlUkGtVgMA6tatixEjRuCll14q8wwYUWXAuxGJiPDXdUabN2/G//73P+zfvx8FBQWws7NDXl6evo2trS169uyJ9evXK5jUMl2/fh3e3t64efNmkWJLo9EgNzcXGo0GHTt2RGBgIP75z3+ibdu2CqYlqlB89AMR0cMyMjKwa9cubNu2DVu3bsXVq1dhZ2eH/Px8iAiWLVuGwYMHKx3TYogIevTogR9++EF/ZlBE0Lx5c/Ts2RMvvPACOnXqxGdpUVXFYovI0oWEhGDdunVKxyAiK8OPd4sRa6t0AiIqn5+fH9577z2lYxCA3NxcnDx5EpmZmfjHP/6hdBzF3bt3D7t27ULz5s3RsGHDUh/3QBXnwIEDmDdvntIx6AEstoisQL169RAaGqp0DKISDRo0SOkI9BAWW5aFdyMSERERmRGLLSIiIiIzYrFFREREZEYstoiIiIjMiMUWERERkRmx2CIiIiIyIxZbRERERGbEYouIiIjIjFhsEREREZkRiy0iIiIiM2KxRURERGRGLLaIiIiIzIjFFhEREZEZsdgiIiIiMiMWW0RV3Pfffw9XV1ds3rwZANCuXTuo1Wq0atXK6HlLMnToUFSrVg0qlQrHjh0zWe7HZUw/jWVon0trZ8i4WpIzZ87gnXfeQYsWLVCtWjXY2trC1dUVTZs2RY8ePXDgwAGlIxIpisUWURUnIkX+fejQIXTp0uWR5i3JkiVL8NVXXz1SNnMypp/GMrTPpbUzZFwtxdKlS+Hj44OEhATMmTMHly9fRlZWFuLj4zF16lSkp6fj+PHjSsckUpSt0gGIyPRycnLQrVs3/Prrr+W27dGjB+7cuVNsukqleuR5rYkh/axo1jKuBw8exPDhw/H8889j+/btsLX9+yPFy8sLXl5ecHNzw7lz5xRMWTZj9pXKtG6qWCy2iCqhpUuX4tq1a4+1DDs7OxOlscyC5j5T9vNBhva5IsZGRLBu3Trcvn0bw4YNM9lyp02bhoKCAkRHRxcptB4UGBiIwMBAk63T1Eyxr1jjuqli8WtEokrm3XffxdixY3HhwgWoVCo89dRT+OSTT+Do6Ihq1arh2rVrGDt2LOrWrYulS5fC09MTKpUKCxYsKLKc8+fPo1mzZnBycoKDgwMCAgKwb98+/ev79u0rcV4RwezZs/H0009Do9HA1dUV77///iP3p6CgAJMnT4anpyccHBzQsmVLxMTEAADmzZsHJycn2NjYoG3btqhVqxbs7Ozg5OSENm3aICAgAPXr14dWq4Wbmxs++OCDYssvr5/lZTCmz4a0K2lcFy1aBCcnJzg6OmLjxo0ICgqCi4sL6tWrh9WrVxfLOmPGDDz99NNwcHBAzZo10ahRI8yYMQOhoaH6dnv27EH79u3h6OgIFxcX+Pj4ICMjAwCwbds2uLi4YPr06aW+L3l5edi1axdq1KiB9u3bl9qupDGYM2cOnnnmGWg0Gri7u6N37944ffq0vo0x/QWAFStWwNfXF1qtFk5OTmjYsCGmTp0KANi7dy+aN28OV1dXaLVa+Pj4YPv27QBK3lfuj2Fp77cx2Uy9brJiQkQWrV+/ftKvXz+j5gkODpbGjRsXmTZx4kQBIBERETJ//nzp27evnDp1Si5fviwAZP78+fq23bp1Ey8vL7l06ZLodDpJTEyU5557TrRarZw9e1bfrqR5J06cKCqVSv7v//5Pbt++LdnZ2bJw4UIBIPHx8Ub3f9y4caLRaGTdunVy+/ZtmTBhgtjY2MihQ4dEROSjjz4SABIXFydZWVly48YNefHFFwWAbN26Va5fvy5ZWVkyevRoASDHjh0zup/lZTC0z4a2K21cAciuXbvkzp07cu3aNQkICBAnJy
fJy8vTt5s+fbqo1WrZuHGjZGdny5EjR6RWrVrSuXNnfZu7d++Ki4uLzJo1S3JyciQtLU369u0r169fFxGRLVu2SLVq1WTKlCmlvi9nz54VAOLn52fU+zl58mSxt7eXFStWSHp6uiQkJEibNm2kZs2akpaWZnR/586dKwAkOjpabt68Kbdu3ZIvv/xSBgwYICIisbGxEhUVJbdu3ZKbN2+Kn5+f1KhRQz9/SfuKIe+3IdnMsW5DxMTECD/eLcpavhtEFs7UxVZOTk6R6aUVW88++2yRdgkJCQJAxo0bV+q82dnZ4ujoKN27dy8y7+rVqx+p2MrJyRFHR0cJDw/XT8vOzhaNRiMjR44Ukb+LrczMTH2bb775RgDI8ePH9dN+++03ASBr1qwxqp/lZTC0z8aMTVnF1oPv3/1C7fz58/pp7dq1k/bt2xdZx7Bhw8TGxkZyc3NFRCQxMVEAyJYtW+RRHT58WADIP//5T4Pnyc7OFmdn5yJjKfL3e/NgcWdIf/Py8sTNzU26dOlSZHn5+fkyb968EjPMmDFDAMi1a9dEpPi+Ysg2Z+h7YY51G4LFlsVZy68RicggPj4+cHV1RUJCQqltzp8/j+zsbHTr1s0k6zxz5gyys7Ph7e2tn+bg4IDatWsX+drpYfb29gCA/Px8/bT712bpdLoy1/lwP8vLYGifTT02wN/9fLBP9+7dK3Y3Y0FBAezs7KBWqwH8dfH6E088gddeew1RUVFISkoyet3Ozs4AgOzsbIPnOXHiBO7evQtfX98i09u1awd7e3vExcWVOf/D/U1ISEB6enqxa8LUajUiIiJKXMb97aCgoKDE1x93mytr+zLXusnysdgiIoPZ2dmV+WFy5coVAICHh4dJ1peVlQUAmDRpElQqlf4vOTnZqA95Yz3Yz/IyGNpnU49NaV566SUcOXIEGzduRE5ODg4fPowNGzbg5Zdf1hdbDg4O+Omnn+Dv74/p06fDy8sL4eHhyMnJMXg9DRs2hFarxdmzZw2eJz09HcDfhdqD3NzckJmZafCyAOivMXNzcyu1zdatW9G5c2d4eHhAo9GUeN3eg0y5zSm5brIsLLaIyCD5+fm4desWPD09S22j1WoBALm5uSZZ5/3CZO7cuRCRIn/melDmw/0sL4OhfTb12JQmKioKXbt2xeDBg+Hi4oK+ffsiNDS02PO8WrRogc2bNyM1NRWRkZGIiYnBp59+avB6NBoNAgMDcePGDezfv7/Udrdu3cLQoUMB/F0UlVRUpaeno169egavHwDq1KkDALhx40aJr6ekpKBPnz6oXbs24uLicOfOHcyaNavMZZpqm1Ny3WR5WGwRkUF+/vlnFBYWok2bNqW28fb2ho2NDfbs2WOSdd6/k7Ainzz/cD/Ly2Bon009NqU5ceIELly4gOvXr0On0yElJQWLFi2Cu7u7vk1qaipOnjwJ4K8P+OjoaLRp00Y/zVBRUVHQaDQYM2ZMqWfFEhMT9Y+F8Pb2hrOzMw4fPlykTVxcHPLy8tC2bVuj1t+wYUNUr14dP/74Y4mvHz9+HDqdDiNHjoSXlxe0Wm25j9ow1Tan5LrJ8rDYIqqEqlevjtTUVCQlJSEzM7Pc65RKkpeXhzt37iA/Px9Hjx7F6NGj0aBBAwwePLjUeTw8PBAcHIx169Zh6dKlyMjIQEJCAhYvXvxI/dBqtRgyZAhWr16NRYsWISMjAwUFBbhy5Qr++OOPR1rmw8rrZ3kZDO2zqcemNKNGjYKnpyfu3r1bapvU1FS89dZbOH36NPLy8hAfH4/k5GT4+fkBAH744YdyH/0AAK1atcKqVauQmJiIgIAAfP/997hz5w50Oh0uXdhlMpcAACAASURBVLqEr776Cm+88Yb+WiWtVouxY8di/fr1WLlyJTIyMnD8+HGMGDECTz75JIYPH25UXzUaDSZMmIBffvkFo0ePxtWrV1FYWIjMzEycPHlSf3Zy586duHfvHs6dO1fsurCH9xW1Wm2SbU7JdZ
MFqvBr8onIKI9yN+LRo0elQYMG4uDgIP7+/jJmzBhxcHAQAFK/fn1ZsWKFiIjMnz9fateuLQDE0dFRevXqJSIiy5Ytky5dusgTTzwhtra2UqNGDenfv78kJyfr11HavJmZmTJ06FCpUaOGODs7i7+/v0yePFkASL169eT33383qi+5ubkSGRkpnp6eYmtrKx4eHhIcHCwnTpyQefPmiaOjowCQhg0byt69e2XmzJni6uoqAKRWrVqyatUqWbNmjdSqVUsAiLu7u6xevdrgfpaXwZg+G9KupHFduHChvp9NmjSRCxcuyOLFi8XFxUUASIMGDfSPqvjpp5+kRo0aAkD/Z2dnJ88884x89913IiKSlJQkHTt2FHd3d1Gr1VKnTh2ZOHGi5Ofni4jI999/L9WqVZNp06YZ9B6lpKTIuHHjxMfHR5ydnUWtVoubm5u0bt1a3njjDdm/f7++bWFhocyePVuaNGkidnZ24u7uLn369JEzZ87o2xjTXxGRBQsWiI+Pj2i1WtFqtdK6dWtZuHChiIhERkZK9erVxc3NTUJCQmTBggUCQBo3biwpKSnF9pW0tLQy329jspl63Ybi3YgWZ61KxIp+hIuoCgoJCQEAxMbGKpyErMGiRYtw7tw5zJ07Vz8tLy8P48ePx6JFi3D79m04ODgomJDMbe3atQgLC7Oq39is5GL5cz1ERJVEWloaRo8eXeyaH3t7e3h6ekKn00Gn07HYIqpgvGaLiCrU6dOni9zWXtpfeHi40lGtjoODA+zs7LB06VL8+eef0Ol0SE1NxZIlSzB58mSEh4fDxcVF6ZhEVQ7PbBFRhWrWrBm/3jATV1dX/Pjjj5gyZQqaNm2KrKwsODs7o0WLFpg5c6ZJf4SaiAzHYouIqBIJCAjAjh07lI5BRA/g14hEREREZsRii4iIiMiMWGwRERERmRGLLSIiIiIzYrFFREREZEYstoiIiIjMiMUWERERkRmx2CIiIiIyIxZbRERERGbEYouIiIjIjFhsEREREZkRiy0iIiIiM2KxRURERGRGtkoHIKLyrVu3DiqVSukYRET0CFhsEVm4MWPGICQkROkYZIHCwsLw7rvvokOHDkpHIaIyqERElA5BRETGU6lUiImJQWhoqNJRiKh0sbxmi4iIiMiMWGwRERERmRGLLSIiIiIzYrFFREREZEYstoiIiIjMiMUWERERkRmx2CIiIiIyIxZbRERERGbEYouIiIjIjFhsEREREZkRiy0iIiIiM2KxRURERGRGLLaIiIiIzIjFFhEREZEZsdgiIiIiMiMWW0RERERmxGKLiIiIyIxYbBERERGZEYstIiIiIjNisUVERERkRiy2iIiIiMyIxRYRERGRGbHYIiIiIjIjFltEREREZsRii4iIiMiMWGwRERERmRGLLSIiIiIzYrFFREREZEYstoiIiIjMiMUWERERkRmx2CIiIiIyIxZbRERERGZkq3QAIiIqX3JyMgoKCopN//PPP3Hx4sUi05588kk4ODhUVDQiKodKRETpEEREVLagoCBs27at3Ha2trZIS0tDjRo1KiAVERkgll8jEhFZgfDwcKhUqjLb2NjYoHv37iy0iCwMiy0iIivQt29f2NnZldtu4MCBFZCGiIzBYouIyApUq1YNL7/8cpkFl52dHXr27FmBqYjIECy2iIisxIABA5Cfn1/ia7a2tujTpw+cnZ0rOBURlYfFFhGRlejRowecnJxKfK2goAADBgyo4EREZAgWW0REVkKj0aBfv36wt7cv9pqzszNeeOEFBVIRUXlYbBERWZFXX30VeXl5RabZ2dkhPDy8xCKMiJTHYouIyIp069YNNWvWLDJNp9Ph1VdfVSgREZWHxRYRkRWxsbHBq6++WuQsloeHBwICAhRMRURlYbFFRGRl+vfvr/8q0d7eHoMGDYJarVY4FRGVhsUWEZGVee6551C/fn0AQF5eHsLDwxVORERlYbFFRGRlVCoVBg0aBABo0KABfH19FU
5ERGWxVToAEVU+Bw4cwJw5c5SOUallZGQAAJycnBASEqJwmsqtQ4cOGDNmjNIxyIrxzBYRmdzly5ex7v+1d7+xVdb3/8dfV//3tOe0oEXAFtYWBgqy6YRgQYMa44iJU9pCmciKYytzTp2KTYQQR2SOVWTRwFyRcUMjnlN0gETqEol4Y+o0AxFYYcBgdLW0YqWUdlDa9++GP7vvGfQf7cV1Tvt8JOcGV6/rc7173ahPz7l6ddMmr8cY0AKBgNLS0pSZmen1KAPahx9+qA8++MDrMRDleGcLgGsqKiq8HmFAe+edd3TnnXd6PcaAxruG6A+8swUAUYrQAqIDsQUAAOAiYgsAAMBFxBYAAICLiC0AAAAXEVsAAAAuIrYAAABcRGwBAAC4iNgCAABwEbEFAADgImILAADARcQWAACAi4gtAAAAFxFbAAAALiK2AESkhQsXyu/3y3Ec7d692+txPLV8+XJde+21CgQCSkxM1JgxY/Tkk0+qqamp12u98cYbysnJkeM4Ya+EhAQNGzZMM2bMUFlZmRoaGlz4ToDBidgCEJFefvllrVu3zusxIsKOHTv00EMP6ejRo/riiy/061//Wr/73e9UWFjY67Xy8/N15MgR5ebmKi0tTWam9vZ21dXVKRQKKTs7W6WlpZowYYI++eQTF74bYPAhtgDgMmhpaVFeXt4lHZuamqqSkhINHTpUfr9fs2fP1r333qvKykodP368z7M5jqP09HTNmDFDGzZsUCgU0okTJ3TXXXfp1KlTfV7fa3259kB/ILYARCzHcbweod+sX79edXV1l3Tstm3bFBsbG7btyiuvlCQ1Nzf3ebb/VVBQoOLiYtXV1emll17q9/Uvt75ce6A/EFsAIoKZqaysTOPGjVNiYqLS0tK0ePHisH1++9vfyufzye/3q66uTo8//riuvvpqHThwQGam559/Xtdcc40SExM1ZMgQ3XPPPaqqquo4/oUXXlBSUpKGDRumRYsWacSIEUpKSlJeXp4++uijC+bpbr2HH35YCQkJGj58eMe2n//850pJSZHjOPriiy8kSY8++qgef/xxHT58WI7jaMyYMX2+Xv/+97+VnJys7Ozsjm2VlZUKBAJasWJFn9cvLi6WJG3fvl0S1x7oEwOAfhYMBq23P16WLFlijuPYqlWrrKGhwZqbm23NmjUmyXbt2hW2nyR75JFH7MUXX7RZs2bZ3//+d1u2bJklJCTYK6+8Yl999ZXt2bPHbrjhBrvyyiuttra24/iSkhJLSUmx/fv323/+8x/bt2+fTZ482fx+v/3rX//q2K+n691333121VVXhX0vZWVlJsnq6+s7tuXn51tubm6vrklnzpw5Y36/3x5++OGw7du2bTO/32/Lly/vdo3c3FxLS0vr9OuNjY0mybKysjq2DcZrX1BQYAUFBZd0LPD/hYgtAP2ut7HV3NxsPp/P7rjjjrDtGzdu7DS2Wlpawo5PTU21oqKisOP/+te/mqSw+CgpKbkgMj7++GOTZL/61a96vZ4XsbVkyRL79re/bY2NjZe8RnexZWbmOI6lp6eHnXewXXtiC/0gFHeZ30gDgAscOnRIzc3Nuv322y/p+H379qmpqUk33nhj2PbJkycrISHhgo+p/teNN94on8/X8TFVX9dz05tvvqlQKKQ///nP8vv9rp3nzJkzMjMFAoEu9xtM1x64VMQWAM9VV1dLkjIyMi7p+K+++krS17+197/S09N1+vTpbtdITExUfX19v63nhtdff13PP/+83nvvPY0cOdLVcx08eFCSNH78+C73GyzXHugLYguA55KSkiRJZ8+evaTj09PTJemi/yH+6quvlJmZ2eXxra2tYfv1dT03vPjii3rnnXe0Y8eOi4ZIf6usrJQkzZw5s8v9BsO1B/qK30YE4LmJEycqJiZGO3fuvOTjU1NTL3gI50cffaRz587pe9/7XpfHv/feezIzTZ06tdfrxcXFqbW19ZLm7gkzU2lpqT777DNt3r
z5soRWbW2tVq9erczMTD3wwANd7juQrz3QX4gtAJ7LyMhQfn6+Nm3apPXr16uxsVF79uxReXl5j45PSkrS448/rjfffFOvvvqqGhsb9dlnn+lnP/uZRowYoZKSkrD929vb1dDQoPPnz2vPnj169NFHNWrUqI7HHfRmvTFjxujLL7/U5s2b1draqvr6eh07duyCGYcOHaqamhodPXpUp0+f7nEk7N+/X7/97W+1bt06xcfHX/Bndp577rmOfbdv396rRz+YmZqamtTe3i4zU319vYLBoKZNm6bY2Fht3ry523u2BvK1B/qNl7fnAxiYLuXRD6dPn7aFCxfaFVdcYampqTZ9+nRbtmyZSbLMzEz79NNPbeXKlZacnNzxSIJXXnml4/j29nYrKyuzsWPHWnx8vA0ZMsTuvfdeO3DgQNh5SkpKLD4+3q6++mqLi4uzQCBg99xzjx0+fDhsv56ud/LkSbv11lstKSnJsrOz7Re/+IUtXrzYJNmYMWM6Hmnwt7/9zUaPHm3Jyck2ffr0sEcYdOWzzz4zSZ2+ysrKOvZ9++23ze/32zPPPNPpelu3brVJkyaZz+ezhIQEi4mJMUkdv3k4ZcoUW758uZ08eTLsuMF47c34bUT0i5BjZuZF5AEYuEKhkObMmaNI/PGyaNEiVVRU6OTJk16PMuhE47X/5u9PVlRUeDwJolgFHyMCGHTa2tq8HmHQ4tpjMCK2AOAyq6qquuDeq4u9ioqKvB4VQD8gtgAMGk899ZQ2bNigU6dOKTs7W5s2bfJkjvHjx8vMun29/vrrnsznhki59oAXuGcLQL+L5Hu2gN7gni30A+7ZAgAAcBOxBQAA4CJiCwAAwEXEFgAAgIuILQAAABcRWwAAAC4itgAAAFxEbAEAALiI2AIAAHARsQUAAOAiYgsAAMBFxBYAAICLiC0AAAAXxXk9AICBq7Cw0OsRgD758MMPNXXqVK/HQJTjnS0A/S4rK0sFBQVejzHgbd26VTU1NV6PMaBNnTpVN910k9djIMo5ZmZeDwEA6D3HcRQMBjV79myvRwHQuQre2QIAAHARsQUAAOAiYgsAAMBFxBYAAICLiC0AAAAXEVsAAAAuIrYAAABcRGwBAAC4iNgCAABwEbEFAADgImILAADARcQWAACAi4gtAAAAFxFbAAAALiK2AAAAXERsAQAAuIjYAgAAcBGxBQAA4CJiCwAAwEXEFgAAgIuILQAAABcRWwAAAC4itgAAAFxEbAEAALiI2AIAAHARsQUAAOAiYgsAAMBFxBYAAICLiC0AAAAXEVsAAAAuIrYAAABcRGwBAAC4iNgCAABwkWNm5vUQAICu3X///dq9e3fYtqNHjyojI0MpKSkd2+Lj4/XWW2/p6quvvtwjAri4ijivJwAAdG/cuHF69dVXL9je1NQU9u/x48cTWkCE4WNEAIgCc+fOleM4Xe4THx+v4uLiyzMQgB4jtgAgCuTm5ur6669XTEznP7bPnz+vOXPmXMapAPQEsQUAUWL+/PmdxpbjOJoyZYq+9a1vXd6hAHSL2AKAKDFnzhy1t7df9GsxMTGaP3/+ZZ4IQE8QWwAQJYYPH66bb75ZsbGxF/16fn7+ZZ4IQE8QWwAQRe6///4LtsXExOjWW2/VVVdd5cFEALpDbAFAFCksLLzofVsXizAAkYHYAoAoEggE9P3vf19xcf99TGJsbKx+8IMfeDgVgK4QWwAQZebNm6e2tjZJUlxcnO6++26lpaV5PBWAzhBbABBl7r77biUnJ0uS2tradN9993k8EYCuEFsAEGWSkpI0a9YsSZLP59PMmTM9nghAV/jbiAAiVnV1tf7yl794PUZEysrKkiRNnjxZW7du9XiayJSVlaWbbrrJ6zEAOWZmXg8BABcTCoX48zO4ZAUFBaqoqPB6DKCCd7YARDz+n/Dinn76aS1dujTsNxPxtcLCQq9HADpwzxYARClCC4gOxBYARClCC4gOxBYAAICLiC0AAAAXEVsAAA
AuIrYAAABcRGwBAAC4iNgCAABwEbEFAADgImILAADARcQWAACAi4gtAAAAFxFbAAAALiK2AAAAXERsARjQFi5cKL/fL8dxtHv3bq/HuSQzZsyQ4zgXfaWmpvZqrTfeeEM5OTkXrJOQkKBhw4ZpxowZKisrU0NDg0vfDTD4EFsABrSXX35Z69at83oM10yfPr1X++fn5+vIkSPKzc1VWlqazEzt7e2qq6tTKBRSdna2SktLNWHCBH3yyScuTQ0MLsQWAES4pKQkNTY2yszCXiUlJXryySf7vL7jOEpPT9eMGTO0YcMGhUIhnThxQnfddZdOnTrVD98BMLgRWwAGPMdxvB6hTyorK+X3+8O2HT9+XHv37tVtt93W7+crKChQcXGx6urq9NJLL/X7+sBgQ2wBGFDMTGVlZRo3bpwSExOVlpamxYsXX7BfW1ubli1bplGjRik5OVmTJk1SMBiUJK1du1YpKSny+XzasmWLZs6cqUAgoMzMTG3cuDFsnZ07d2rKlCny+XwKBAK67rrr1NjY2O05+uo3v/mNHnnkkbBtlZWVCgQCWrFiRZ/XLy4uliRt3769Y1u0XzPAMwYAESoYDFpvf0wtWbLEHMexVatWWUNDgzU3N9uaNWtMku3atatjvyeeeMISExNt06ZN1tDQYE899ZTFxMTYxx9/3LGOJHv33Xft1KlTVldXZzfffLOlpKTYuXPnzMysqanJAoGArVy50lpaWqy2ttZmzZpl9fX1PTrHpaqurrZrr73W2trawrZv27bN/H6/LV++vNs1cnNzLS0trdOvNzY2miTLysrq2BZN16ygoMAKCgp6dQzgkhCxBSBi9Ta2mpubzefz2R133BG2fePGjWGx1dLSYj6fz4qKisKOTUxMtAcffNDM/hsOLS0tHft8E22HDh0yM7O9e/eaJNu2bdsFs/TkHJfqoYcest///vd9WqO72DIzcxzH0tPTzSz6rhmxhQgS4mNEAAPGoUOH1NzcrNtvv73L/Q4cOKDm5mZNnDixY1tycrKGDx+uqqqqTo9LSEiQJLW2tkqScnJyNGzYMM2bN09PP/20jh492udzdKempkZbt27t+JjPLWfOnJGZKRAISIruawZ4jdgCMGBUV1dLkjIyMrrc78yZM5KkpUuXhj1r6tixY2pubu7x+ZKTk7Vjxw5Nnz5dK1asUE5OjoqKitTS0tJv5/hfK1eu1E9+8hMlJSVd8ho9cfDgQUnS+PHjJUX3NQO8RmwBGDC+CZCzZ892ud83MbZ69eoLHqfwwQcf9OqcEyZM0FtvvaWamhqVlpYqGAzqueee69dzfKO2tlavvfaaHnzwwUs6vjcqKyslSTNnzpQUvdcMiATEFoABY+LEiYqJidHOnTu73C8rK0tJSUl9fqJ8TU2N9u/fL+nrGHn22Wd1ww03aP/+/f12jv9r5cqVmjdvnoYOHdpva15MbW2tVq9erczMTD3wwAOSoveaAZGA2AIwYGRkZCg/P1+bNm3S+vXr1djYqD179qi8vDxsv6SkJC1YsEAbN27U2rVr1djYqLa2NlVXV+vzzz/v8flqamq0aNEiVVVV6dy5c9q1a5eOHTumqVOn9ts5vnHixAn98Y9/1C9/+ctO99m+fXuvHv1gZmpqalJ7e7vMTPX19QoGg5o2bZpiY2O1efPmjnu2ovGaARHjMt+RDwA9dimPfjh9+rQtXLjQrrjiCktNTbXp06fbsmXLTJJlZmbap59+amZmZ8+etdLSUhs1apTFxcVZRkaG5efn2759+2zNmjXm8/lMko0dO9YOHz5s5eXlFggETJKNHj3aDh48aEePHrW8vDwbMmSIxcbG2siRI23JkiV2/vz5bs/RW4899pjNmzevy33efvtt8/v99swzz3S6z9atW23SpEnm8/ksISHBYmJiTFLHbx5OmTLFli9fbidPnrzg2Gi6Zvw2IiJIyDEz87D1AKBToVBIc+bMET+m0FuFhYWSpIqKCo8nAVTBx4gAAA
AuIrYA4DKrqqoKe7RBZ6+ioiKvRwXQD+K8HgAABpvx48fz0SgwiPDOFgAAgIuILQAAABcRWwAAAC4itgAAAFxEbAEAALiI2AIAAHARsQUAAOAiYgsAAMBFxBYAAICLiC0AAAAXEVsAAAAuIrYAAABcRGwBAAC4iNgCAABwUZzXAwBAd0KhkNcjIMpUV1crMzPT6zEAScQWgCgwZ84cr0dAFCooKPB6BECS5JiZeT0EAKD3HMdRMBjU7NmzvR4FQOcquGcLAADARcQWAACAi4gtAAAAFxFbAAAALiK2AAAAXERsAQAAuIjYAgAAcBGxBQAA4CJiCwAAwEXEFgAAgIuILQAAABcRWwAAAC4itgAAAFxEbAEAALiI2AIAAHARsQUAAOAiYgsAAMBFxBYAAICLiC0AAAAXEVsAAAAuIrYAAABcRGwBAAC4iNgCAABwEbEFAADgImILAADARcQWAACAi4gtAAAAFxFbAAAALiK2AAAAXERsAQAAuIjYAgAAcBGxBQAA4CJiCwAAwEVxXg8AAOheeXm5GhoaLti+ZcsW/fOf/wzbVlxcrKuuuupyjQagG46ZmddDAAC6VlJSovLyciUmJnZsMzM5jtPx7/PnzystLU21tbWKj4/3YkwAF6rgY0QAiAJz586VJJ09e7bjde7cubB/x8TEaO7cuYQWEGGILQCIArfccouGDRvW5T6tra0dUQYgchBbABAFYmJiNG/ePCUkJHS6z4gRI5SXl3cZpwLQE8QWAESJuXPn6ty5cxf9Wnx8vObPnx92DxeAyEBsAUCUuPHGG5WdnX3Rr/ERIhC5iC0AiCLz58+/6A3wOTk5+s53vuPBRAC6Q2wBQBSZN2+eWltbw7bFx8drwYIFHk0EoDvEFgBEkTFjxui6664LuzertbVVc+bM8XAqAF0htgAgysyfP1+xsbGSJMdxdP3112vs2LEeTwWgM8QWAESZH/7wh2pra5MkxcbG6kc/+pHHEwHoCrEFAFFm5MiRysvLk+M4am9vV2FhodcjAegCsQUAUej++++XmemWW27RyJEjvR4HQBf4Q9QAIlYoFOLGb1yygoICVVRUeD0GUBHn9QQA0J1gMOj1CBFp1apVKikpUWpqqtejRJzVq1d7PQLQgdgCEPFmz57t9QgRKS8vT5mZmV6PEZF4RwuRhHu2ACBKEVpAdCC2AAAAXERsAQAAuIjYAgAAcBGxBQAA4CJiCwAAwEXEFgAAgIuILQAAABcRWwAAAC4itgAAAFxEbAEAALiI2AIAAHARsQUAAOAiYgsAAMBFxBaAAW3hwoXy+/1yHEe7d+/2epxL9tprr2ny5Mny+/0aPXq0FixYoNra2l6v88YbbygnJ0eO44S9EhISNGzYMM2YMUNlZWVqaGhw4bsABidiC8CA9vLLL2vdunVej9EnwWBQ9913nwoLC1VdXa0tW7bo/fff18yZM3X+/PlerZWfn68jR44oNzdXaWlpMjO1t7errq5OoVBI2dnZKi0t1YQJE/TJJ5+49B0BgwuxBQAR7g9/+INGjhypxYsXKy0tTd/97nf12GOPaffu3froo4/6vL7jOEpPT9eMGTO0YcMGhUIhnThxQnfddZdOnTrVD98BMLgRWwAGPMdxvB6hT44fP64RI0aEfR9ZWVmSpGPHjvX7+QoKClRcXKy6ujq99NJL/b4+MNgQWwAGFDNTWVmZxo0bp8TERKWlpWnx4sUX7NfW1qZly5Zp1KhRSk5O1qRJkxQMBiVJa9euVUpKinw+n7Zs2aKZM2cqEAgoMzNTGzduDFtn586dmjJlinw+nwKBgK677jo1NjZ2e47eyMnJUV1dXdi2b+7XysnJ6dhWWVmpQCCgFStW9Poc/6u4uFiStH379o5t0XTNgIhiABChgsGg9fbH1JIlS8xxHFu1apU1NDRYc3OzrVmzxiTZrl27OvZ74oknLDEx0TZt2mQNDQ321FNPWUxMjH388ccd60iyd999106dOmV1dXV288
03W0pKip07d87MzJqamiwQCNjKlSutpaXFamtrbdasWVZfX9+jc/TUe++9Z/Hx8fbCCy9YY2Oj7d2716655hq78847w/bbtm2b+f1+W758ebdr5ubmWlpaWqdfb2xsNEmWlZUVldesoKDACgoKenUM4JIQsQUgYvU2tpqbm83n89kdd9wRtn3jxo1hsdXS0mI+n8+KiorCjk1MTLQHH3zQzP4bDi0tLR37fBNthw4dMjOzvXv3miTbtm3bBbP05By9sXTpUpPU8crMzLTjx4/3ep1vdBdbZmaO41h6erqZRd81I7YQQUJ8jAhgwDh06JCam5t1++23d7nfgQMH1NzcrIkTJ3ZsS05O1vDhw1VVVdXpcQkJCZKk1tZWSV9/hDds2DDNmzdPTz/9tI4ePdrnc1zMkiVLVF5ernfffVdNTU06cuSI8vLydNNNN+n48eO9Wqunzpw5IzNTIBCQFH3XDIgkxBaAAaO6ulqSlJGR0eV+Z86ckSQtXbo07FlTx44dU3Nzc4/Pl5ycrB07dmj69OlasWKFcnJyVFRUpJaWln47x+eff66VK1fqpz/9qW677TalpKQoOztb69atU01NjcrKynq8Vm8cPHhQkjR+/HhJ0XXNgEhDbAEYMJKSkiRJZ8+e7XK/b2Js9erVMrOw1wcffNCrc06YMEFvvfWWampqVFpaqmAwqOeee67fzvGPf/xDbW1tGjlyZNj2QCCgoUOHat++fb2at6cqKyslSTNnzpQUXdcMiDTEFoABY+LEiYqJidHOnTu7pwohMQAAAs1JREFU3C8rK0tJSUl9fqJ8TU2N9u/fL+nrGHn22Wd1ww03aP/+/f12jszMTElfv8P1f50+fVpffvllxyMg+lNtba1Wr16tzMxMPfDAA5Ki65oBkYbYAjBgZGRkKD8/X5s2bdL69evV2NioPXv2qLy8PGy/pKQkLViwQBs3btTatWvV2NiotrY2VVdXXxA1XampqdGiRYtUVVWlc+fOadeuXTp27JimTp3ab+fIzs7WrbfeqnXr1un9999XS0uLjh8/rpKSEknSj3/84459t2/f3qtHP5iZmpqa1N7eLjNTfX29gsGgpk2bptjYWG3evLnjnq1oumZAxLnMd+QDQI9dyqMfTp8+bQsXLrQrrrjCUlNTbfr06bZs2bKO3+D79NNPzczs7NmzVlpaaqNGjbK4uDjLyMiw/Px827dvn61Zs8Z8Pp9JsrFjx9rhw4etvLzcAoGASbLRo0fbwYMH7ejRo5aXl2dDhgyx2NhYGzlypC1ZssTOnz/f7Tl644svvrBHH33UxowZY4mJiZaammrTpk2zP/3pT2H7vf322+b3++2ZZ57pdK2tW7fapEmTzOfzWUJCgsXExJikjt88nDJlii1fvtxOnjx5wbHRdM34bUREkJBjZuZh6wFAp0KhkObMmSN+TKG3CgsLJUkVFRUeTwKogo8RAQAAXERsAcBlVlVVFfZog85eRUVFXo8KoB/EeT0AAAw248eP56NRYBDhnS0AAAAXEVsAAAAuIrYAAABcRGwBAAC4iNgCAABwEbEFAADgImILAADARcQWAACAi4gtAAAAFxFbAAAALiK2AAAAXERsAQAAuIjYAgAAcBGxBQAA4KI4rwcAgO44juP1CIhCBQUFXo8ASCK2AESwvLw8BYNBr8dAlMrKyvJ6BECS5JiZeT0EAADAAFXBPVsAAAAuIrYAAABcRGwBAAC4KE5ShddDAAAADFAf/j9S6LtI0Y9HXQAAAABJRU5ErkJggg==\n",
            "text/plain": [
              "<IPython.core.display.Image object>"
            ]
          },
          "metadata": {},
          "execution_count": 36
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "IV3YQM-oe_yW",
        "outputId": "f432b9fc-cb3b-433f-881f-9c58ff054add"
      },
      "source": [
        "# Compile and fit the model (the fun part!)\n",
        "tribid_model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n",
        "                     optimizer=tf.keras.optimizers.Adam(),\n",
        "                     metrics=['accuracy'])\n",
        "\n",
        "# Fit the model for a few epochs on only 10% of the batches\n",
        "# to speed up experimentation\n",
        "tribid_model.fit(train_dataset,\n",
        "                 steps_per_epoch=int(0.1 * len(train_dataset)),\n",
        "                 epochs=3,\n",
        "                 validation_steps=int(0.1 * len(val_dataset)),\n",
        "                 validation_data=val_dataset)"
      ],
      "execution_count": 37,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Epoch 1/3\n",
            "281/281 [==============================] - 845s 3s/step - loss: 0.7118 - accuracy: 0.7471 - val_loss: 0.4305 - val_accuracy: 0.8457\n",
            "Epoch 2/3\n",
            "281/281 [==============================] - 846s 3s/step - loss: 0.4681 - accuracy: 0.8367 - val_loss: 0.3547 - val_accuracy: 0.8684\n",
            "Epoch 3/3\n",
            "281/281 [==============================] - 848s 3s/step - loss: 0.4134 - accuracy: 0.8516 - val_loss: 0.3332 - val_accuracy: 0.8826\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<keras.callbacks.History at 0x7fc6c55d0550>"
            ]
          },
          "metadata": {},
          "execution_count": 37
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ELuWgnVrqwJC",
        "outputId": "8300d042-1417-4ff3-ae29-2f60540259dd"
      },
      "source": [
        "# Evaluate the model on the whole validation dataset\n",
        "tribid_model.evaluate(val_dataset)"
      ],
      "execution_count": 38,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "473/473 [==============================] - 1221s 3s/step - loss: 0.3205 - accuracy: 0.8832\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[0.3204561769962311, 0.8832252025604248]"
            ]
          },
          "metadata": {},
          "execution_count": 38
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SQ6mxxeOfXeA"
      },
      "source": [
        "### 4. Train `model_5` on all of the data in the training dataset for as many epochs as it takes until it stops improving. "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ZiIhcDvrgTLX",
        "outputId": "940ac4f4-fa9a-4b5b-bec7-d565b0b2366e"
      },
      "source": [
        "# Use TensorFlow to create one-hot-encoded tensors of our \"line_number\" column \n",
        "train_line_numbers_one_hot = tf.one_hot(train_df[\"line_number\"].to_numpy(), depth=15)\n",
        "val_line_numbers_one_hot = tf.one_hot(val_df[\"line_number\"].to_numpy(), depth=15)\n",
        "test_line_numbers_one_hot = tf.one_hot(test_df[\"line_number\"].to_numpy(), depth=15)\n",
        "\n",
        "# Use TensorFlow to create one-hot-encoded tensors of our \"total_lines\" column \n",
        "train_total_lines_one_hot = tf.one_hot(train_df[\"total_lines\"].to_numpy(), depth=20)\n",
        "val_total_lines_one_hot = tf.one_hot(val_df[\"total_lines\"].to_numpy(), depth=20)\n",
        "test_total_lines_one_hot = tf.one_hot(test_df[\"total_lines\"].to_numpy(), depth=20)\n",
        "\n",
        "# Download pretrained TensorFlow Hub USE\n",
        "import tensorflow_hub as hub\n",
        "tf_hub_embedding_layer = hub.KerasLayer(\"https://tfhub.dev/google/universal-sentence-encoder/4\",\n",
        "                                        trainable=False,\n",
        "                                        name=\"universal_sentence_encoder\")\n",
        "\n",
        "# Check shape and samples of total lines one-hot tensor\n",
        "train_total_lines_one_hot.shape, train_line_numbers_one_hot.shape"
      ],
      "execution_count": 39,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "(TensorShape([180040, 20]), TensorShape([180040, 15]))"
            ]
          },
          "metadata": {},
          "execution_count": 39
        }
      ]
    },
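    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a sanity check on the encoding above: `tf.one_hot` turns each integer into a row of zeros with a single one at that index, and any value at or above `depth` becomes an all-zero row (which is why `depth=15` effectively caps `line_number`). A minimal sketch, with made-up values:\n",
        "\n",
        "```python\n",
        "import tensorflow as tf\n",
        "\n",
        "# Line numbers 0, 2 and 16 with depth=15; 16 is out of range -> all-zero row\n",
        "demo = tf.one_hot([0, 2, 16], depth=15)\n",
        "print(demo.shape)                           # (3, 15)\n",
        "print(tf.reduce_sum(demo, axis=1).numpy())  # [1. 1. 0.]\n",
        "```"
      ]
    },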
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "2n98VUaay67M",
        "outputId": "3d99339a-8c8a-4023-bc3b-c7e2895daf57"
      },
      "source": [
        "# Re-building model_5\n",
        "\n",
        "# 1. Token inputs\n",
        "token_inputs = layers.Input(shape=[], dtype=\"string\", name=\"token_inputs\")\n",
        "token_embeddings = tf_hub_embedding_layer(token_inputs)\n",
        "token_model = tf.keras.Model(inputs=token_inputs,\n",
        "                             outputs=token_embeddings)\n",
        "\n",
        "# 2. Char inputs (note: this solution reuses the USE layer for the\n",
        "# character-level strings too, rather than a separate char embedding)\n",
        "char_inputs = layers.Input(shape=[], dtype=\"string\", name=\"char_inputs\")\n",
        "char_embeddings = tf_hub_embedding_layer(char_inputs)\n",
        "exp_layer = layers.Lambda(lambda x: tf.expand_dims(x, axis=1))(char_embeddings)\n",
        "char_bi_lstm = layers.Bidirectional(layers.LSTM(32))(exp_layer)\n",
        "char_model = tf.keras.Model(inputs=char_inputs,\n",
        "                            outputs=char_bi_lstm)\n",
        "\n",
        "# 3. Line number inputs\n",
        "line_number_inputs = layers.Input(shape=(15,), dtype=tf.int32, name=\"line_number_input\")\n",
        "x = layers.Dense(32, activation=\"relu\")(line_number_inputs)\n",
        "line_number_model = tf.keras.Model(inputs=line_number_inputs,\n",
        "                                   outputs=x)\n",
        "\n",
        "# 4. Total lines inputs\n",
        "total_lines_inputs = layers.Input(shape=(20,), dtype=tf.int32, name=\"total_lines_input\")\n",
        "y = layers.Dense(32, activation=\"relu\")(total_lines_inputs)\n",
        "total_line_model = tf.keras.Model(inputs=total_lines_inputs,\n",
        "                                  outputs=y)\n",
        "\n",
        "# 5. Combine token and char embeddings into a hybrid embedding\n",
        "combined_embeddings = layers.Concatenate(name=\"token_char_hybrid_embedding\")([token_model.output,\n",
        "                                                                              char_model.output])\n",
        "z = layers.Dense(256, activation=\"relu\")(combined_embeddings)\n",
        "z = layers.Dropout(0.5)(z)\n",
        "\n",
        "# 6. Combine positional embeddings with the hybrid embedding into a tribrid embedding\n",
        "z = layers.Concatenate(name=\"token_char_positional_embedding\")([line_number_model.output,\n",
        "                                                                total_line_model.output,\n",
        "                                                                z])\n",
        "\n",
        "# 7. Create output layer\n",
        "output_layer = layers.Dense(5, activation=\"softmax\", name=\"output_layer\")(z)\n",
        "\n",
        "# 8. Put together the model\n",
        "model_5 = tf.keras.Model(inputs=[line_number_model.input,\n",
        "                                 total_line_model.input,\n",
        "                                 token_model.input,\n",
        "                                 char_model.input],\n",
        "                         outputs=output_layer)\n",
        "\n",
        "# Summary of the model\n",
        "model_5.summary()"
      ],
      "execution_count": 46,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Model: \"model_15\"\n",
            "__________________________________________________________________________________________________\n",
            "Layer (type)                    Output Shape         Param #     Connected to                     \n",
            "==================================================================================================\n",
            "token_inputs (InputLayer)       [(None,)]            0                                            \n",
            "__________________________________________________________________________________________________\n",
            "char_inputs (InputLayer)        [(None,)]            0                                            \n",
            "__________________________________________________________________________________________________\n",
            "universal_sentence_encoder (Ker (None, 512)          256797824   token_inputs[0][0]               \n",
            "                                                                 char_inputs[0][0]                \n",
            "__________________________________________________________________________________________________\n",
            "lambda_1 (Lambda)               (None, 1, 512)       0           universal_sentence_encoder[3][0] \n",
            "__________________________________________________________________________________________________\n",
            "bidirectional_1 (Bidirectional) (None, 64)           139520      lambda_1[0][0]                   \n",
            "__________________________________________________________________________________________________\n",
            "token_char_hybrid_embedding (Co (None, 576)          0           universal_sentence_encoder[2][0] \n",
            "                                                                 bidirectional_1[0][0]            \n",
            "__________________________________________________________________________________________________\n",
            "line_number_input (InputLayer)  [(None, 15)]         0                                            \n",
            "__________________________________________________________________________________________________\n",
            "total_lines_input (InputLayer)  [(None, 20)]         0                                            \n",
            "__________________________________________________________________________________________________\n",
            "dense_16 (Dense)                (None, 256)          147712      token_char_hybrid_embedding[0][0]\n",
            "__________________________________________________________________________________________________\n",
            "dense_14 (Dense)                (None, 32)           512         line_number_input[0][0]          \n",
            "__________________________________________________________________________________________________\n",
            "dense_15 (Dense)                (None, 32)           672         total_lines_input[0][0]          \n",
            "__________________________________________________________________________________________________\n",
            "dropout_4 (Dropout)             (None, 256)          0           dense_16[0][0]                   \n",
            "__________________________________________________________________________________________________\n",
            "token_char_positional_embedding (None, 320)          0           dense_14[0][0]                   \n",
            "                                                                 dense_15[0][0]                   \n",
            "                                                                 dropout_4[0][0]                  \n",
            "__________________________________________________________________________________________________\n",
            "output_layer (Dense)            (None, 5)            1605        token_char_positional_embedding[0\n",
            "==================================================================================================\n",
            "Total params: 257,087,845\n",
            "Trainable params: 290,021\n",
            "Non-trainable params: 256,797,824\n",
            "__________________________________________________________________________________________________\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "CGBD_B3v59xS",
        "outputId": "8b53a48f-f1ab-4bd3-a30a-bb6cde4f83f5"
      },
      "source": [
        "# Create training and validation datasets (all four kinds of inputs)\n",
        "train_pos_char_token_data = tf.data.Dataset.from_tensor_slices((train_line_numbers_one_hot, \n",
        "                                                                train_total_lines_one_hot, \n",
        "                                                                train_sentences, \n",
        "                                                                train_chars)) \n",
        "train_pos_char_token_labels = tf.data.Dataset.from_tensor_slices(train_labels_one_hot) \n",
        "train_pos_char_token_dataset = tf.data.Dataset.zip((train_pos_char_token_data, train_pos_char_token_labels)) \n",
        "train_pos_char_token_dataset = train_pos_char_token_dataset.batch(32).prefetch(tf.data.AUTOTUNE) \n",
        "\n",
        "# Validation dataset\n",
        "val_pos_char_token_data = tf.data.Dataset.from_tensor_slices((val_line_numbers_one_hot,\n",
        "                                                              val_total_lines_one_hot,\n",
        "                                                              val_sentences,\n",
        "                                                              val_chars))\n",
        "val_pos_char_token_labels = tf.data.Dataset.from_tensor_slices(val_labels_one_hot)\n",
        "val_pos_char_token_dataset = tf.data.Dataset.zip((val_pos_char_token_data, val_pos_char_token_labels))\n",
        "val_pos_char_token_dataset = val_pos_char_token_dataset.batch(32).prefetch(tf.data.AUTOTUNE) \n",
        "\n",
        "# Check input shapes\n",
        "train_pos_char_token_dataset, val_pos_char_token_dataset"
      ],
      "execution_count": 41,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "(<PrefetchDataset shapes: (((None, 15), (None, 20), (None,), (None,)), (None, 5)), types: ((tf.float32, tf.float32, tf.string, tf.string), tf.float64)>,\n",
              " <PrefetchDataset shapes: (((None, 15), (None, 20), (None,), (None,)), (None, 5)), types: ((tf.float32, tf.float32, tf.string, tf.string), tf.float64)>)"
            ]
          },
          "metadata": {},
          "execution_count": 41
        }
      ]
    },
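    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The `from_tensor_slices` → `zip` → `batch` → `prefetch` pattern above generalises; here is a toy-sized sketch (shapes and values invented for illustration) showing that each batch arrives as `((line_numbers, total_lines, sentences, chars), labels)`:\n",
        "\n",
        "```python\n",
        "import tensorflow as tf\n",
        "\n",
        "# Toy stand-ins for the four inputs and the labels (6 samples)\n",
        "feats = tf.data.Dataset.from_tensor_slices((tf.zeros((6, 15)),        # line numbers\n",
        "                                            tf.zeros((6, 20)),        # total lines\n",
        "                                            tf.constant(['s'] * 6),   # sentences\n",
        "                                            tf.constant(['c'] * 6)))  # chars\n",
        "labels = tf.data.Dataset.from_tensor_slices(tf.zeros((6, 5)))\n",
        "ds = tf.data.Dataset.zip((feats, labels)).batch(4).prefetch(tf.data.AUTOTUNE)\n",
        "\n",
        "(line_nums, totals, sents, chars), y = next(iter(ds))\n",
        "print(line_nums.shape, totals.shape, y.shape)  # (4, 15) (4, 20) (4, 5)\n",
        "```"
      ]
    },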
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QtUn9dSO7KTc"
      },
      "source": [
        "We'll use three callbacks:\n",
        "\n",
        "- `tf.keras.callbacks.ModelCheckpoint` to save only the model's best weights.\n",
        "- `tf.keras.callbacks.EarlyStopping` to stop training once the validation loss has stopped improving for ~3 epochs.\n",
        "- `tf.keras.callbacks.ReduceLROnPlateau` to lower the learning rate when the validation loss plateaus.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "kMhZJNlD7UE2"
      },
      "source": [
        "# Create the callbacks\n",
        "check_filepath = 'best_weights/checkpoint.ckpt'\n",
        "model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(filepath=check_filepath,\n",
        "                                                               save_weights_only=True,\n",
        "                                                               save_best_only=True,\n",
        "                                                               save_freq='epoch',\n",
        "                                                               monitor='val_loss')\n",
        "\n",
        "early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',\n",
        "                                                  patience=3,\n",
        "                                                  min_delta=0.5,  # deliberately large, to cut experiments short\n",
        "                                                  verbose=1)\n",
        "\n",
        "reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss',\n",
        "                                                 factor=0.2,\n",
        "                                                 patience=2,\n",
        "                                                 verbose=1,\n",
        "                                                 min_lr=1e-7)"
      ],
      "execution_count": 47,
      "outputs": []
    },
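    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see why such a large `min_delta` (0.5) stops training quickly: an epoch only counts as an improvement if it beats the best monitored value by *more* than `min_delta`. Here is a rough pure-Python re-implementation of the stopping rule (a sketch of the idea, not Keras's actual code):\n",
        "\n",
        "```python\n",
        "def stops_at(losses, patience=3, min_delta=0.5):\n",
        "    # An epoch improves only if it beats the best loss by more than min_delta;\n",
        "    # after `patience` epochs without improvement, return the stopping epoch index.\n",
        "    best, wait = float('inf'), 0\n",
        "    for epoch, loss in enumerate(losses):\n",
        "        if best - loss > min_delta:\n",
        "            best, wait = loss, 0\n",
        "        else:\n",
        "            wait += 1\n",
        "            if wait >= patience:\n",
        "                return epoch\n",
        "    return None  # never stopped\n",
        "\n",
        "print(stops_at([0.97, 0.93, 0.92, 0.91]))  # 3\n",
        "```\n",
        "\n",
        "With `min_delta=0.5`, none of the small drops after the first epoch counts as an improvement, so training halts after `patience` more epochs."
      ]
    },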
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tqFqTdx088tR"
      },
      "source": [
        "Now let's compile the model and fit it on 100% of the training data.\n",
        "\n",
        "> Note: You can lower `min_delta` (or leave it at its default) in the EarlyStopping callback while training the model. To cut the experiments short, I have used 0.5. "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "AVRXVWux9fMj",
        "outputId": "c6664257-5e17-495f-dd2a-4089280b8173"
      },
      "source": [
        "# Compile the model and fit it on 100% of the data for up to 50 epochs\n",
        "model_5.compile(loss=tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.2),\n",
        "                optimizer=tf.keras.optimizers.Adam(),\n",
        "                metrics=['accuracy'])\n",
        "\n",
        "history = model_5.fit(train_pos_char_token_dataset,\n",
        "                      epochs=50,\n",
        "                      validation_data=val_pos_char_token_dataset,\n",
        "                      callbacks=[early_stopping, model_checkpoint_callback,\n",
        "                                 reduce_lr])"
      ],
      "execution_count": 48,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Epoch 1/50\n",
            "5627/5627 [==============================] - 279s 49ms/step - loss: 0.9694 - accuracy: 0.8150 - val_loss: 0.9241 - val_accuracy: 0.8447\n",
            "Epoch 2/50\n",
            "5627/5627 [==============================] - 269s 48ms/step - loss: 0.9308 - accuracy: 0.8440 - val_loss: 0.9136 - val_accuracy: 0.8508\n",
            "Epoch 3/50\n",
            "5627/5627 [==============================] - 271s 48ms/step - loss: 0.9226 - accuracy: 0.8505 - val_loss: 0.9088 - val_accuracy: 0.8555\n",
            "Epoch 4/50\n",
            "5627/5627 [==============================] - 271s 48ms/step - loss: 0.9174 - accuracy: 0.8552 - val_loss: 0.9086 - val_accuracy: 0.8550\n",
            "Epoch 00004: early stopping\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0p9Sa8q0-OPn"
      },
      "source": [
        "### 5. Write a function (or series of functions) to take a sample abstract string, preprocess it (in the same way our model has been trained), make a prediction on each sequence in the abstract and return the abstract in the format:\n",
        "```\n",
        "PREDICTED_LABEL: SEQUENCE\n",
        "PREDICTED_LABEL: SEQUENCE\n",
        "PREDICTED_LABEL: SEQUENCE\n",
        "PREDICTED_LABEL: SEQUENCE\n",
        "```"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XlGZYP5kqzha"
      },
      "source": [
        "For each raw abstract, we will need to:\n",
        "- Split it into sentences (lines)\n",
        "- Split each line into characters\n",
        "- Find each line's line number\n",
        "- Find the total number of lines\n",
        "\n",
        "We will use spaCy to extract this information from each abstract."
      ]
    },
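    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The bookkeeping for these features is simple; here is a rough sketch using a naive `'. '` split (an assumption for illustration only, since spaCy's sentencizer handles abbreviations far better):\n",
        "\n",
        "```python\n",
        "def preprocess_abstract(abstract):\n",
        "    # Naive '. ' sentence split for illustration; spaCy's sentencizer\n",
        "    # is the more robust choice for real abstracts.\n",
        "    lines = [s.strip() for s in abstract.split('. ') if s.strip()]\n",
        "    return [{'text': line,\n",
        "             'line_number': i,\n",
        "             'total_lines': len(lines) - 1,   # dataset convention: line count - 1\n",
        "             'chars': ' '.join(list(line))}   # space-separated characters\n",
        "            for i, line in enumerate(lines)]\n",
        "\n",
        "for feat in preprocess_abstract('We did X. We found Y. Y matters.'):\n",
        "    print(feat['line_number'], feat['total_lines'], feat['text'])\n",
        "```"
      ]
    },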
    {
      "cell_type": "code",
      "metadata": {
        "id": "kBQAuO6huRqE",
        "outputId": "ce4d4653-37b9-4e9a-9de3-08d218df23fd",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "source": [
        "# Getting the example abstract to test our function\n",
        "!wget https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/extras/skimlit_example_abstracts.json\n"
      ],
      "execution_count": 49,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "--2021-09-12 11:17:42--  https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/extras/skimlit_example_abstracts.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 6737 (6.6K) [text/plain]\n",
            "Saving to: ‘skimlit_example_abstracts.json’\n",
            "\n",
            "skimlit_example_abs 100%[===================>]   6.58K  --.-KB/s    in 0s      \n",
            "\n",
            "2021-09-12 11:17:42 (56.3 MB/s) - ‘skimlit_example_abstracts.json’ saved [6737/6737]\n",
            "\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "REeduwohuimU",
        "outputId": "546fa058-715e-457a-cf69-7ed13f61a671",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "source": [
        "# Using json to load in our abstract sample \n",
        "import json\n",
        "with open('skimlit_example_abstracts.json', 'r') as f:\n",
        "  example_abstracts = json.load(f)\n",
        "\n",
        "example_abstracts"
      ],
      "execution_count": 71,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[{'abstract': 'This RCT examined the efficacy of a manualized social intervention for children with HFASDs. Participants were randomly assigned to treatment or wait-list conditions. Treatment included instruction and therapeutic activities targeting social skills, face-emotion recognition, interest expansion, and interpretation of non-literal language. A response-cost program was applied to reduce problem behaviors and foster skills acquisition. Significant treatment effects were found for five of seven primary outcome measures (parent ratings and direct child measures). Secondary measures based on staff ratings (treatment group only) corroborated gains reported by parents. High levels of parent, child and staff satisfaction were reported, along with high levels of treatment fidelity. Standardized effect size estimates were primarily in the medium and large ranges and favored the treatment group.',\n",
              "  'details': 'RCT of a manualized social treatment for high-functioning autism spectrum disorders',\n",
              "  'source': 'https://pubmed.ncbi.nlm.nih.gov/20232240/'},\n",
              " {'abstract': \"Postpartum depression (PPD) is the most prevalent mood disorder associated with childbirth. No single cause of PPD has been identified, however the increased risk of nutritional deficiencies incurred through the high nutritional requirements of pregnancy may play a role in the pathology of depressive symptoms. Three nutritional interventions have drawn particular interest as possible non-invasive and cost-effective prevention and/or treatment strategies for PPD; omega-3 (n-3) long chain polyunsaturated fatty acids (LCPUFA), vitamin D and overall diet. We searched for meta-analyses of randomised controlled trials (RCT's) of nutritional interventions during the perinatal period with PPD as an outcome, and checked for any trials published subsequently to the meta-analyses. Fish oil: Eleven RCT's of prenatal fish oil supplementation RCT's show null and positive effects on PPD symptoms. Vitamin D: no relevant RCT's were identified, however seven observational studies of maternal vitamin D levels with PPD outcomes showed inconsistent associations. Diet: Two Australian RCT's with dietary advice interventions in pregnancy had a positive and null result on PPD. With the exception of fish oil, few RCT's with nutritional interventions during pregnancy assess PPD. Further research is needed to determine whether nutritional intervention strategies during pregnancy can protect against symptoms of PPD. Given the prevalence of PPD and ease of administering PPD measures, we recommend future prenatal nutritional RCT's include PPD as an outcome.\",\n",
              "  'details': 'Formatting removed (can be used to compare model to actual example)',\n",
              "  'source': 'https://pubmed.ncbi.nlm.nih.gov/28012571/'},\n",
              " {'abstract': 'Mental illness, including depression, anxiety and bipolar disorder, accounts for a significant proportion of global disability and poses a substantial social, economic and heath burden. Treatment is presently dominated by pharmacotherapy, such as antidepressants, and psychotherapy, such as cognitive behavioural therapy; however, such treatments avert less than half of the disease burden, suggesting that additional strategies are needed to prevent and treat mental disorders. There are now consistent mechanistic, observational and interventional data to suggest diet quality may be a modifiable risk factor for mental illness. This review provides an overview of the nutritional psychiatry field. It includes a discussion of the neurobiological mechanisms likely modulated by diet, the use of dietary and nutraceutical interventions in mental disorders, and recommendations for further research. Potential biological pathways related to mental disorders include inflammation, oxidative stress, the gut microbiome, epigenetic modifications and neuroplasticity. Consistent epidemiological evidence, particularly for depression, suggests an association between measures of diet quality and mental health, across multiple populations and age groups; these do not appear to be explained by other demographic, lifestyle factors or reverse causality. Our recently published intervention trial provides preliminary clinical evidence that dietary interventions in clinically diagnosed populations are feasible and can provide significant clinical benefit. Furthermore, nutraceuticals including n-3 fatty acids, folate, S-adenosylmethionine, N-acetyl cysteine and probiotics, among others, are promising avenues for future research. Continued research is now required to investigate the efficacy of intervention studies in large cohorts and within clinically relevant populations, particularly in patients with schizophrenia, bipolar and anxiety disorders.',\n",
              "  'details': 'Effect of nutrition on mental health',\n",
              "  'source': 'https://pubmed.ncbi.nlm.nih.gov/28942748/'},\n",
              " {'abstract': \"Hepatitis C virus (HCV) and alcoholic liver disease (ALD), either alone or in combination, count for more than two thirds of all liver diseases in the Western world. There is no safe level of drinking in HCV-infected patients and the most effective goal for these patients is total abstinence. Baclofen, a GABA(B) receptor agonist, represents a promising pharmacotherapy for alcohol dependence (AD). Previously, we performed a randomized clinical trial (RCT), which demonstrated the safety and efficacy of baclofen in patients affected by AD and cirrhosis. The goal of this post-hoc analysis was to explore baclofen's effect in a subgroup of alcohol-dependent HCV-infected cirrhotic patients. Any patient with HCV infection was selected for this analysis. Among the 84 subjects randomized in the main trial, 24 alcohol-dependent cirrhotic patients had a HCV infection; 12 received baclofen 10mg t.i.d. and 12 received placebo for 12-weeks. With respect to the placebo group (3/12, 25.0%), a significantly higher number of patients who achieved and maintained total alcohol abstinence was found in the baclofen group (10/12, 83.3%; p=0.0123). Furthermore, in the baclofen group, compared to placebo, there was a significantly higher increase in albumin values from baseline (p=0.0132) and a trend toward a significant reduction in INR levels from baseline (p=0.0716). In conclusion, baclofen was safe and significantly more effective than placebo in promoting alcohol abstinence, and improving some Liver Function Tests (LFTs) (i.e. albumin, INR) in alcohol-dependent HCV-infected cirrhotic patients. Baclofen may represent a clinically relevant alcohol pharmacotherapy for these patients.\",\n",
              "  'details': 'Baclofen promotes alcohol abstinence in alcohol dependent cirrhotic patients with hepatitis C virus (HCV) infection',\n",
              "  'source': 'https://pubmed.ncbi.nlm.nih.gov/22244707/'}]"
            ]
          },
          "metadata": {},
          "execution_count": 71
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "h4FPb3ZAuxF_",
        "outputId": "460762e3-276b-4903-a0d0-37c3e4d84c1b",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 173
        }
      },
      "source": [
        "# How do our abstracts look in a DataFrame? \n",
        "pd.DataFrame(example_abstracts)"
      ],
      "execution_count": 72,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/html": [
              "<div>\n",
              "<style scoped>\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "</style>\n",
              "<table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr style=\"text-align: right;\">\n",
              "      <th></th>\n",
              "      <th>abstract</th>\n",
              "      <th>source</th>\n",
              "      <th>details</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th>0</th>\n",
              "      <td>This RCT examined the efficacy of a manualized...</td>\n",
              "      <td>https://pubmed.ncbi.nlm.nih.gov/20232240/</td>\n",
              "      <td>RCT of a manualized social treatment for high-...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>1</th>\n",
              "      <td>Postpartum depression (PPD) is the most preval...</td>\n",
              "      <td>https://pubmed.ncbi.nlm.nih.gov/28012571/</td>\n",
              "      <td>Formatting removed (can be used to compare mod...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>2</th>\n",
              "      <td>Mental illness, including depression, anxiety ...</td>\n",
              "      <td>https://pubmed.ncbi.nlm.nih.gov/28942748/</td>\n",
              "      <td>Effect of nutrition on mental health</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>3</th>\n",
              "      <td>Hepatitis C virus (HCV) and alcoholic liver di...</td>\n",
              "      <td>https://pubmed.ncbi.nlm.nih.gov/22244707/</td>\n",
              "      <td>Baclofen promotes alcohol abstinence in alcoho...</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n",
              "</div>"
            ],
            "text/plain": [
              "                                            abstract  ...                                            details\n",
              "0  This RCT examined the efficacy of a manualized...  ...  RCT of a manualized social treatment for high-...\n",
              "1  Postpartum depression (PPD) is the most preval...  ...  Formatting removed (can be used to compare mod...\n",
              "2  Mental illness, including depression, anxiety ...  ...               Effect of nutrition on mental health\n",
              "3  Hepatitis C virus (HCV) and alcoholic liver di...  ...  Baclofen promotes alcohol abstinence in alcoho...\n",
              "\n",
              "[4 rows x 3 columns]"
            ]
          },
          "metadata": {},
          "execution_count": 72
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oZIx8NKsPj9i"
      },
      "source": [
        "Let's make a function that takes in a list of abstract dictionaries and prints out each line of the abstract alongside its predicted class, in the format \n",
        "\n",
        "```\n",
        "Predicted Class : Sequence \n",
        "Predicted Class : Sequence \n",
        "Predicted Class : Sequence \n",
        "Predicted Class : Sequence \n",
        "\n",
        "```"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_sLSNsOVTXew"
      },
      "source": [
        "def visualize_pred_sequence_labels(abstract_dict , model , label_encoder):\n",
        "\n",
        "  '''\n",
        "  \n",
        "    Takes in a list of dictionaries of abstracts, \n",
        "\n",
        "    [{'abstract': 'This RCT examined .......' , \n",
        "      'details': 'RCT of a manuali......',\n",
        "      'source': 'https://pubmed.ncbi.nlm........./'},..........] \n",
        "\n",
        "    Arguments: \n",
        "    ----------\n",
        "      - abstract_dict : Abstract dictionary of the above format \n",
        "      - model : the trained model on the same data format (line_numbers,  total_lines , sentences , characters)\n",
        "      - label_encoder : the label encoder used to encode the classes \n",
        "\n",
        "    Returns:\n",
        "    --------\n",
        "      Prints out the predicted label and the corresponding sequence/text \n",
        "  '''\n",
        "\n",
        "  # Setup english sentence parser \n",
        "  nlp = English()\n",
        "\n",
        "  # Add sentence splitting component to the pipeline (spaCy v3 API) \n",
        "  nlp.add_pipe('sentencizer')\n",
        "\n",
        "  # Create doc of parsed sequences\n",
        "  doc = nlp(abstract_dict[0]['abstract'])\n",
        "\n",
        "  # Return detected sentences from doc in string type \n",
        "  abstract_lines = [str(sent) for sent in list(doc.sents)]\n",
        "\n",
        "  # Get total number of lines \n",
        "  total_lines_in_sample = len(abstract_lines)\n",
        "\n",
        "  # Loop through each line in the abstract and create a list of dictionaries containing features \n",
        "  sample_lines = []\n",
        "  for i , line in enumerate(abstract_lines):\n",
        "    sample_dict = {}\n",
        "    sample_dict['text'] = str(line)\n",
        "    sample_dict['line_number'] = i \n",
        "    sample_dict['total_lines'] = total_lines_in_sample - 1 \n",
        "    sample_lines.append(sample_dict)\n",
        "\n",
        "  \n",
        "  # Get all line numbers and total lines, then one-hot encode them \n",
        "  abstract_line_numbers = [line['line_number'] for line in sample_lines]\n",
        "  abstract_total_lines = [line['total_lines'] for line in sample_lines]\n",
        "\n",
        "  abstract_line_numbers_one_hot = tf.one_hot(abstract_line_numbers , depth = 15)\n",
        "  abstract_total_lines_one_hot = tf.one_hot(abstract_total_lines , depth = 20)\n",
        "\n",
        "\n",
        "  # Split the lines into characters \n",
        "  abstract_chars = [split_chars(sentence) for sentence in abstract_lines]\n",
        "\n",
        "  # Making prediction on sample features\n",
        "  abstract_pred_probs = model.predict(x = (abstract_line_numbers_one_hot, \n",
        "                                           abstract_total_lines_one_hot , \n",
        "                                           tf.constant(abstract_lines) , \n",
        "                                           tf.constant(abstract_chars)))\n",
        "  \n",
        "  # Turn prediction probs to pred class \n",
        "  abstract_preds = tf.argmax(abstract_pred_probs , axis = 1)\n",
        "  \n",
        "  # Turn predicted class integers into string class names \n",
        "  abstract_pred_classes = [label_encoder.classes_[i] for i in abstract_preds]\n",
        "\n",
        "  # Prints out the abstract lines and the predicted sequence labels \n",
        "  for i , line in enumerate(abstract_lines):\n",
        "    print(f'{abstract_pred_classes[i]}:  {line}\\n')"
      ],
      "execution_count": 75,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bVj0HkFATcPh",
        "outputId": "d20532dd-f47f-4f1b-e84b-a4c8fbdc4260",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "source": [
        "visualize_pred_sequence_labels(example_abstracts , model_5 , label_encoder)"
      ],
      "execution_count": 76,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "OBJECTIVE:  This RCT examined the efficacy of a manualized social intervention for children with HFASDs.\n",
            "\n",
            "METHODS:  Participants were randomly assigned to treatment or wait-list conditions.\n",
            "\n",
            "METHODS:  Treatment included instruction and therapeutic activities targeting social skills, face-emotion recognition, interest expansion, and interpretation of non-literal language.\n",
            "\n",
            "METHODS:  A response-cost program was applied to reduce problem behaviors and foster skills acquisition.\n",
            "\n",
            "METHODS:  Significant treatment effects were found for five of seven primary outcome measures (parent ratings and direct child measures).\n",
            "\n",
            "METHODS:  Secondary measures based on staff ratings (treatment group only) corroborated gains reported by parents.\n",
            "\n",
            "RESULTS:  High levels of parent, child and staff satisfaction were reported, along with high levels of treatment fidelity.\n",
            "\n",
            "RESULTS:  Standardized effect size estimates were primarily in the medium and large ranges and favored the treatment group.\n",
            "\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NBhZ5V9OdgWv"
      },
      "source": [
        "Hope you made it all the way to the end! "
      ]
    }
  ]
}