{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Chapter 16 – Natural Language Processing with RNNs and Attention**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_This notebook contains all the sample code in chapter 16._"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<table align=\"left\">\n",
    "  <td>\n",
    "    <a href=\"https://colab.research.google.com/github/ageron/handson-ml2/blob/master/16_nlp_with_rnns_and_attention.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
    "  </td>\n",
    "  <td>\n",
    "    <a target=\"_blank\" href=\"https://kaggle.com/kernels/welcome?src=https://github.com/ageron/handson-ml2/blob/add-kaggle-badge/16_nlp_with_rnns_and_attention.ipynb\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" /></a>\n",
    "  </td>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Setup"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "First, let's import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated, so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "No GPU was detected. LSTMs and CNNs can be very slow without a GPU.\n"
     ]
    }
   ],
   "source": [
    "# Python ≥3.5 is required\n",
    "import sys\n",
    "assert sys.version_info >= (3, 5)\n",
    "\n",
    "# Is this notebook running on Colab or Kaggle?\n",
    "IS_COLAB = \"google.colab\" in sys.modules\n",
    "IS_KAGGLE = \"kaggle_secrets\" in sys.modules\n",
    "\n",
    "if IS_COLAB:\n",
    "    !pip install -q -U tensorflow-addons\n",
    "    !pip install -q -U transformers\n",
    "\n",
    "# Scikit-Learn ≥0.20 is required\n",
    "import sklearn\n",
    "assert sklearn.__version__ >= \"0.20\"\n",
    "\n",
    "# TensorFlow ≥2.0 is required\n",
    "import tensorflow as tf\n",
    "from tensorflow import keras\n",
    "assert tf.__version__ >= \"2.0\"\n",
    "\n",
    "if not tf.config.list_physical_devices('GPU'):\n",
    "    print(\"No GPU was detected. LSTMs and CNNs can be very slow without a GPU.\")\n",
    "    if IS_COLAB:\n",
     "        print(\"Go to Runtime > Change runtime type and select a GPU hardware accelerator.\")\n",
    "    if IS_KAGGLE:\n",
    "        print(\"Go to Settings > Accelerator and select GPU.\")\n",
    "\n",
    "# Common imports\n",
    "import numpy as np\n",
    "import os\n",
    "\n",
    "# to make this notebook's output stable across runs\n",
    "np.random.seed(42)\n",
    "tf.random.set_seed(42)\n",
    "\n",
    "# To plot pretty figures\n",
    "%matplotlib inline\n",
    "import matplotlib as mpl\n",
    "import matplotlib.pyplot as plt\n",
    "mpl.rc('axes', labelsize=14)\n",
    "mpl.rc('xtick', labelsize=12)\n",
    "mpl.rc('ytick', labelsize=12)\n",
    "\n",
    "# Where to save the figures\n",
    "PROJECT_ROOT_DIR = \".\"\n",
    "CHAPTER_ID = \"nlp\"\n",
    "IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID)\n",
    "os.makedirs(IMAGES_PATH, exist_ok=True)\n",
    "\n",
    "def save_fig(fig_id, tight_layout=True, fig_extension=\"png\", resolution=300):\n",
    "    path = os.path.join(IMAGES_PATH, fig_id + \".\" + fig_extension)\n",
    "    print(\"Saving figure\", fig_id)\n",
    "    if tight_layout:\n",
    "        plt.tight_layout()\n",
    "    plt.savefig(path, format=fig_extension, dpi=resolution)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Char-RNN"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Splitting a sequence into batches of shuffled windows"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "For example, let's split the sequence 0 to 14 into windows of length 5, each shifted by 2 (e.g., `[0, 1, 2, 3, 4]`, `[2, 3, 4, 5, 6]`, etc.), then shuffle them, split each one into inputs (the first 4 steps) and targets (the last 4 steps) (e.g., `[2, 3, 4, 5, 6]` would be split into `[[2, 3, 4, 5], [3, 4, 5, 6]]`), and finally create batches of 3 such input/target pairs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "____________________ Batch 0 \n",
      "X_batch\n",
      "[[6 7 8 9]\n",
      " [2 3 4 5]\n",
      " [4 5 6 7]]\n",
      "===== \n",
      "Y_batch\n",
      "[[ 7  8  9 10]\n",
      " [ 3  4  5  6]\n",
      " [ 5  6  7  8]]\n",
      "____________________ Batch 1 \n",
      "X_batch\n",
      "[[ 0  1  2  3]\n",
      " [ 8  9 10 11]\n",
      " [10 11 12 13]]\n",
      "===== \n",
      "Y_batch\n",
      "[[ 1  2  3  4]\n",
      " [ 9 10 11 12]\n",
      " [11 12 13 14]]\n"
     ]
    }
   ],
   "source": [
    "np.random.seed(42)\n",
    "tf.random.set_seed(42)\n",
    "\n",
    "n_steps = 5\n",
    "dataset = tf.data.Dataset.from_tensor_slices(tf.range(15))\n",
    "dataset = dataset.window(n_steps, shift=2, drop_remainder=True)\n",
    "dataset = dataset.flat_map(lambda window: window.batch(n_steps))\n",
    "dataset = dataset.shuffle(10).map(lambda window: (window[:-1], window[1:]))\n",
    "dataset = dataset.batch(3).prefetch(1)\n",
    "for index, (X_batch, Y_batch) in enumerate(dataset):\n",
    "    print(\"_\" * 20, \"Batch\", index, \"\\nX_batch\")\n",
    "    print(X_batch.numpy())\n",
    "    print(\"=\" * 5, \"\\nY_batch\")\n",
    "    print(Y_batch.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Loading the Data and Preparing the Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading data from https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt\n",
      "1122304/1115394 [==============================] - 0s 0us/step\n"
     ]
    }
   ],
   "source": [
    "shakespeare_url = \"https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt\"\n",
    "filepath = keras.utils.get_file(\"shakespeare.txt\", shakespeare_url)\n",
    "with open(filepath) as f:\n",
    "    shakespeare_text = f.read()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "First Citizen:\n",
      "Before we proceed any further, hear me speak.\n",
      "\n",
      "All:\n",
      "Speak, speak.\n",
      "\n",
      "First Citizen:\n",
      "You are all resolved rather to die than to famish?\n",
      "\n"
     ]
    }
   ],
   "source": [
    "print(shakespeare_text[:148])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"\\n !$&',-.3:;?abcdefghijklmnopqrstuvwxyz\""
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "\"\".join(sorted(set(shakespeare_text.lower())))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "tokenizer = keras.preprocessing.text.Tokenizer(char_level=True)\n",
    "tokenizer.fit_on_texts(shakespeare_text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[[20, 6, 9, 8, 3]]"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tokenizer.texts_to_sequences([\"First\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['f i r s t']"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tokenizer.sequences_to_texts([[20, 6, 9, 8, 3]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "max_id = len(tokenizer.word_index) # number of distinct characters\n",
    "dataset_size = tokenizer.document_count # total number of characters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "[encoded] = np.array(tokenizer.texts_to_sequences([shakespeare_text])) - 1\n",
    "train_size = dataset_size * 90 // 100\n",
    "dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Note**: in previous versions of this code, we used `dataset.repeat()` to make the dataset \"infinite\", and later in the notebook we set the `steps_per_epoch` argument when calling the `model.fit()` method. This was needed to work around some TensorFlow bugs. However, since those bugs have now been fixed, we can simplify the code: no need for `dataset.repeat()` or `steps_per_epoch` anymore."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "n_steps = 100\n",
    "window_length = n_steps + 1 # target = input shifted 1 character ahead\n",
    "dataset = dataset.window(window_length, shift=1, drop_remainder=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset = dataset.flat_map(lambda window: window.batch(window_length))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "np.random.seed(42)\n",
    "tf.random.set_seed(42)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "batch_size = 32\n",
    "dataset = dataset.shuffle(10000).batch(batch_size)\n",
    "dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset = dataset.map(\n",
    "    lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset = dataset.prefetch(1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(32, 100, 39) (32, 100)\n"
     ]
    }
   ],
   "source": [
    "for X_batch, Y_batch in dataset.take(1):\n",
    "    print(X_batch.shape, Y_batch.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Creating and Training the Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Warning**: the following code may take up to 24 hours to run, depending on your hardware. If you use a GPU, it may take just 1 or 2 hours, or less."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Note**: the `GRU` class will only use the fast cuDNN-based GPU kernel (if you have a GPU) when using the default values for the following arguments: `activation`, `recurrent_activation`, `recurrent_dropout`, `unroll`, `use_bias` and `reset_after`. This is why I commented out `recurrent_dropout=0.2` (compared to the book)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/10\n",
      "31368/31368 [==============================] - 7150s 228ms/step - loss: 1.4671\n",
      "Epoch 2/10\n",
      "31368/31368 [==============================] - 7094s 226ms/step - loss: 1.3614\n",
      "Epoch 3/10\n",
      "31368/31368 [==============================] - 7063s 225ms/step - loss: 1.3404\n",
      "Epoch 4/10\n",
      "31368/31368 [==============================] - 7039s 224ms/step - loss: 1.3311\n",
      "Epoch 5/10\n",
      "31368/31368 [==============================] - 7056s 225ms/step - loss: 1.3256\n",
      "Epoch 6/10\n",
      "31368/31368 [==============================] - 7049s 225ms/step - loss: 1.3209\n",
      "Epoch 7/10\n",
      "31368/31368 [==============================] - 7068s 225ms/step - loss: 1.3166\n",
      "Epoch 8/10\n",
      "31368/31368 [==============================] - 7030s 224ms/step - loss: 1.3138\n",
      "Epoch 9/10\n",
      "31368/31368 [==============================] - 7061s 225ms/step - loss: 1.3120\n",
      "Epoch 10/10\n",
      "31368/31368 [==============================] - 7177s 229ms/step - loss: 1.3105\n"
     ]
    }
   ],
   "source": [
    "model = keras.models.Sequential([\n",
    "    keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id],\n",
    "                     #dropout=0.2, recurrent_dropout=0.2),\n",
    "                     dropout=0.2),\n",
    "    keras.layers.GRU(128, return_sequences=True,\n",
    "                     #dropout=0.2, recurrent_dropout=0.2),\n",
    "                     dropout=0.2),\n",
    "    keras.layers.TimeDistributed(keras.layers.Dense(max_id,\n",
    "                                                    activation=\"softmax\"))\n",
    "])\n",
    "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=\"adam\")\n",
    "history = model.fit(dataset, epochs=10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using the Model to Generate Text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "def preprocess(texts):\n",
    "    X = np.array(tokenizer.texts_to_sequences(texts)) - 1\n",
    "    return tf.one_hot(X, max_id)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Warning**: the `predict_classes()` method is deprecated. Instead, we must use `np.argmax(model(X_new), axis=-1)`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'u'"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X_new = preprocess([\"How are yo\"])\n",
    "#Y_pred = model.predict_classes(X_new)\n",
    "Y_pred = np.argmax(model(X_new), axis=-1)\n",
    "tokenizer.sequences_to_texts(Y_pred + 1)[0][-1] # 1st sentence, last char"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[0, 1, 0, 2, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 2, 1, 0, 2, 1,\n",
       "        0, 1, 2, 1, 1, 1, 2, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 2]])"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tf.random.set_seed(42)\n",
    "\n",
    "tf.random.categorical([[np.log(0.5), np.log(0.4), np.log(0.1)]], num_samples=40).numpy()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "def next_char(text, temperature=1):\n",
    "    X_new = preprocess([text])\n",
    "    y_proba = model(X_new)[0, -1:, :]\n",
    "    rescaled_logits = tf.math.log(y_proba) / temperature\n",
    "    char_id = tf.random.categorical(rescaled_logits, num_samples=1) + 1\n",
    "    return tokenizer.sequences_to_texts(char_id.numpy())[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'u'"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tf.random.set_seed(42)\n",
    "\n",
    "next_char(\"How are yo\", temperature=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "def complete_text(text, n_chars=50, temperature=1):\n",
    "    for _ in range(n_chars):\n",
    "        text += next_char(text, temperature)\n",
    "    return text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "the belly the charges of the other words\n",
      "and belly \n"
     ]
    }
   ],
   "source": [
    "tf.random.set_seed(42)\n",
    "\n",
    "print(complete_text(\"t\", temperature=0.2))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "thing! they know't.\n",
      "\n",
      "biondello:\n",
      "for you are the own\n"
     ]
    }
   ],
   "source": [
    "print(complete_text(\"t\", temperature=1))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "th no cyty\n",
      "use ffor was firive this toighingaber; b\n"
     ]
    }
   ],
   "source": [
    "print(complete_text(\"t\", temperature=2))"
   ]
  },
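  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (not from the book), here is how temperature reshapes a toy distribution: since `next_char()` divides the log-probabilities by the temperature before sampling, a low temperature sharpens the distribution (favoring the most likely character), while a high temperature flattens it toward uniform:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "probas = np.array([0.5, 0.4, 0.1])\n",
    "for temperature in (0.2, 1.0, 2.0):\n",
    "    rescaled_logits = np.log(probas) / temperature\n",
    "    rescaled_probas = np.exp(rescaled_logits) / np.exp(rescaled_logits).sum()\n",
    "    print(\"temperature =\", temperature, \"->\", rescaled_probas.round(3))"
   ]
  },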
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Stateful RNN"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.random.set_seed(42)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size])\n",
    "dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True)\n",
    "dataset = dataset.flat_map(lambda window: window.batch(window_length))\n",
    "dataset = dataset.batch(1)\n",
    "dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:]))\n",
    "dataset = dataset.map(\n",
    "    lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))\n",
    "dataset = dataset.prefetch(1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "batch_size = 32\n",
    "encoded_parts = np.array_split(encoded[:train_size], batch_size)\n",
    "datasets = []\n",
    "for encoded_part in encoded_parts:\n",
    "    dataset = tf.data.Dataset.from_tensor_slices(encoded_part)\n",
    "    dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True)\n",
    "    dataset = dataset.flat_map(lambda window: window.batch(window_length))\n",
    "    datasets.append(dataset)\n",
    "dataset = tf.data.Dataset.zip(tuple(datasets)).map(lambda *windows: tf.stack(windows))\n",
    "dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:]))\n",
    "dataset = dataset.map(\n",
    "    lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))\n",
    "dataset = dataset.prefetch(1)"
   ]
  },
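  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A stateful RNN only makes sense if each batch picks up exactly where the previous batch left off. As a quick sanity check (not in the book), we can verify that consecutive batches are contiguous for every one of the 32 window streams: since each window is shifted by `n_steps`, the first input character of one batch must equal the last target character of the previous batch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "batch_iterator = iter(dataset)\n",
    "X1, Y1 = next(batch_iterator)\n",
    "X2, Y2 = next(batch_iterator)\n",
    "# the one-hot inputs of batch 2 start where the targets of batch 1 ended\n",
    "assert np.all(tf.argmax(X2[:, 0], axis=-1).numpy() == Y1[:, -1].numpy())"
   ]
  },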
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Note**: once again, I commented out `recurrent_dropout=0.2` (compared to the book) so the `GRU` layers can use the fast GPU kernel (if you have a GPU)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = keras.models.Sequential([\n",
    "    keras.layers.GRU(128, return_sequences=True, stateful=True,\n",
    "                     #dropout=0.2, recurrent_dropout=0.2,\n",
    "                     dropout=0.2,\n",
    "                     batch_input_shape=[batch_size, None, max_id]),\n",
    "    keras.layers.GRU(128, return_sequences=True, stateful=True,\n",
    "                     #dropout=0.2, recurrent_dropout=0.2),\n",
    "                     dropout=0.2),\n",
    "    keras.layers.TimeDistributed(keras.layers.Dense(max_id,\n",
    "                                                    activation=\"softmax\"))\n",
    "])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ResetStatesCallback(keras.callbacks.Callback):\n",
    "    def on_epoch_begin(self, epoch, logs):\n",
    "        self.model.reset_states()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/50\n",
      "313/313 [==============================] - 62s 198ms/step - loss: 2.6189\n",
      "Epoch 2/50\n",
      "313/313 [==============================] - 58s 187ms/step - loss: 2.2091\n",
      "Epoch 3/50\n",
      "313/313 [==============================] - 56s 178ms/step - loss: 2.0775\n",
      "Epoch 4/50\n",
      "313/313 [==============================] - 56s 179ms/step - loss: 2.4689\n",
      "Epoch 5/50\n",
      "313/313 [==============================] - 56s 179ms/step - loss: 2.3274\n",
      "Epoch 6/50\n",
      "313/313 [==============================] - 57s 183ms/step - loss: 2.1412\n",
      "Epoch 7/50\n",
      "313/313 [==============================] - 57s 183ms/step - loss: 2.0748\n",
      "Epoch 8/50\n",
      "313/313 [==============================] - 56s 179ms/step - loss: 1.9850\n",
      "Epoch 9/50\n",
      "313/313 [==============================] - 56s 179ms/step - loss: 1.9465\n",
      "Epoch 10/50\n",
      "313/313 [==============================] - 56s 179ms/step - loss: 1.8995\n",
      "Epoch 11/50\n",
      "313/313 [==============================] - 57s 182ms/step - loss: 1.8576\n",
      "Epoch 12/50\n",
      "313/313 [==============================] - 56s 179ms/step - loss: 1.8510\n",
      "Epoch 13/50\n",
      "313/313 [==============================] - 57s 184ms/step - loss: 1.8038\n",
      "Epoch 14/50\n",
      "313/313 [==============================] - 56s 178ms/step - loss: 1.7867\n",
      "Epoch 15/50\n",
      "313/313 [==============================] - 56s 180ms/step - loss: 1.7635\n",
      "Epoch 16/50\n",
      "313/313 [==============================] - 56s 179ms/step - loss: 1.7270\n",
      "Epoch 17/50\n",
      "313/313 [==============================] - 58s 184ms/step - loss: 1.7097\n",
      "<<31 more lines>>\n",
      "313/313 [==============================] - 58s 185ms/step - loss: 1.5998\n",
      "Epoch 34/50\n",
      "313/313 [==============================] - 58s 184ms/step - loss: 1.5954\n",
      "Epoch 35/50\n",
      "313/313 [==============================] - 58s 185ms/step - loss: 1.5944\n",
      "Epoch 36/50\n",
      "313/313 [==============================] - 57s 183ms/step - loss: 1.5902\n",
      "Epoch 37/50\n",
      "313/313 [==============================] - 57s 183ms/step - loss: 1.5893\n",
      "Epoch 38/50\n",
      "313/313 [==============================] - 59s 187ms/step - loss: 1.5845\n",
      "Epoch 39/50\n",
      "313/313 [==============================] - 57s 183ms/step - loss: 1.5821\n",
      "Epoch 40/50\n",
      "313/313 [==============================] - 59s 187ms/step - loss: 1.5798\n",
      "Epoch 41/50\n",
      "313/313 [==============================] - 57s 181ms/step - loss: 1.5794\n",
      "Epoch 42/50\n",
      "313/313 [==============================] - 57s 182ms/step - loss: 1.5774\n",
      "Epoch 43/50\n",
      "313/313 [==============================] - 57s 182ms/step - loss: 1.5755\n",
      "Epoch 44/50\n",
      "313/313 [==============================] - 58s 186ms/step - loss: 1.5735\n",
      "Epoch 45/50\n",
      "313/313 [==============================] - 58s 186ms/step - loss: 1.5714\n",
      "Epoch 46/50\n",
      "313/313 [==============================] - 57s 181ms/step - loss: 1.5686\n",
      "Epoch 47/50\n",
      "313/313 [==============================] - 57s 181ms/step - loss: 1.5675\n",
      "Epoch 48/50\n",
      "313/313 [==============================] - 56s 180ms/step - loss: 1.5657\n",
      "Epoch 49/50\n",
      "313/313 [==============================] - 58s 185ms/step - loss: 1.5654\n",
      "Epoch 50/50\n",
      "313/313 [==============================] - 57s 182ms/step - loss: 1.5620\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x7f8d45d95d10>"
      ]
     },
     "execution_count": 33,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=\"adam\")\n",
    "history = model.fit(dataset, epochs=50,\n",
    "                    callbacks=[ResetStatesCallback()])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To use the model with different batch sizes, we need to create a stateless copy. We can get rid of dropout since it is only used during training:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [],
   "source": [
    "stateless_model = keras.models.Sequential([\n",
    "    keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id]),\n",
    "    keras.layers.GRU(128, return_sequences=True),\n",
    "    keras.layers.TimeDistributed(keras.layers.Dense(max_id,\n",
    "                                                    activation=\"softmax\"))\n",
    "])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To set the weights, we first need to build the model (so the weights get created):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [],
   "source": [
    "stateless_model.build(tf.TensorShape([None, None, max_id]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [],
   "source": [
    "stateless_model.set_weights(model.get_weights())\n",
    "model = stateless_model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tor:\n",
      "in the negver up how it thou like him;\n",
      "when it\n"
     ]
    }
   ],
   "source": [
    "tf.random.set_seed(42)\n",
    "\n",
    "print(complete_text(\"t\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Sentiment Analysis"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.random.set_seed(42)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can load the IMDB dataset easily:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz\n",
      "17465344/17464789 [==============================] - 0s 0us/step\n"
     ]
    }
   ],
   "source": [
    "(X_train, y_train), (X_test, y_test) = keras.datasets.imdb.load_data()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65]"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X_train[0][:10]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb_word_index.json\n",
      "1646592/1641221 [==============================] - 0s 0us/step\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'<sos> this film was just brilliant casting location scenery story'"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "word_index = keras.datasets.imdb.get_word_index()\n",
    "id_to_word = {id_ + 3: word for word, id_ in word_index.items()}\n",
    "for id_, token in enumerate((\"<pad>\", \"<sos>\", \"<unk>\")):\n",
    "    id_to_word[id_] = token\n",
    "\" \".join([id_to_word[id_] for id_ in X_train[0][:10]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1mDownloading and preparing dataset imdb_reviews/plain_text/1.0.0 (download: 80.23 MiB, generated: Unknown size, total: 80.23 MiB) to /home/aurelien_geron_kiwisoft_io/tensorflow_datasets/imdb_reviews/plain_text/1.0.0...\u001b[0m\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9d19e9f5f622440b9feb5ed1a98db7b3",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "HBox(children=(FloatProgress(value=1.0, bar_style='info', description='Dl Completed...', max=1.0, style=Progre…"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "001634a67f0e42f69450a95ed84492b2",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "HBox(children=(FloatProgress(value=1.0, bar_style='info', description='Dl Size...', max=1.0, style=ProgressSty…"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\n",
      "\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "HBox(children=(FloatProgress(value=1.0, bar_style='info', max=1.0), HTML(value='')))"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Shuffling and writing examples to /home/aurelien_geron_kiwisoft_io/tensorflow_datasets/imdb_reviews/plain_text/1.0.0.incompleteK5RNB1/imdb_reviews-train.tfrecord\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c661939803ee4442be234cc9e1485599",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "HBox(children=(FloatProgress(value=0.0, max=25000.0), HTML(value='')))"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\r"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "HBox(children=(FloatProgress(value=1.0, bar_style='info', max=1.0), HTML(value='')))"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Shuffling and writing examples to /home/aurelien_geron_kiwisoft_io/tensorflow_datasets/imdb_reviews/plain_text/1.0.0.incompleteK5RNB1/imdb_reviews-test.tfrecord\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "bdd0d660b82e4f93b4824e6e3573f197",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "HBox(children=(FloatProgress(value=0.0, max=25000.0), HTML(value='')))"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Shuffling and writing examples to /home/aurelien_geron_kiwisoft_io/tensorflow_datasets/imdb_reviews/plain_text/1.0.0.incompleteK5RNB1/imdb_reviews-unsupervised.tfrecord\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "91133fb7161d440383d4d50bd06f16ba",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "HBox(children=(FloatProgress(value=0.0, max=50000.0), HTML(value='')))"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1mDataset imdb_reviews downloaded and prepared to /home/aurelien_geron_kiwisoft_io/tensorflow_datasets/imdb_reviews/plain_text/1.0.0. Subsequent calls will reuse this data.\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "import tensorflow_datasets as tfds\n",
    "\n",
    "datasets, info = tfds.load(\"imdb_reviews\", as_supervised=True, with_info=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "dict_keys(['test', 'train', 'unsupervised'])"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "datasets.keys()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_size = info.splits[\"train\"].num_examples\n",
    "test_size = info.splits[\"test\"].num_examples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(25000, 25000)"
      ]
     },
     "execution_count": 45,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_size, test_size"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Review: This was an absolutely terrible movie. Don't be lured in by Christopher Walken or Michael Ironside. Both are great actors, but this must simply be their worst role in history. Even their great acting  ...\n",
      "Label: 0 = Negative\n",
      "\n",
      "Review: I have been known to fall asleep during films, but this is usually due to a combination of things including, really tired, being warm and comfortable on the sette and having just eaten a lot. However  ...\n",
      "Label: 0 = Negative\n",
      "\n"
     ]
    }
   ],
   "source": [
    "for X_batch, y_batch in datasets[\"train\"].batch(2).take(1):\n",
    "    for review, label in zip(X_batch.numpy(), y_batch.numpy()):\n",
    "        print(\"Review:\", review.decode(\"utf-8\")[:200], \"...\")\n",
    "        print(\"Label:\", label, \"= Positive\" if label else \"= Negative\")\n",
    "        print()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [],
   "source": [
    "def preprocess(X_batch, y_batch):\n",
    "    X_batch = tf.strings.substr(X_batch, 0, 300)\n",
    "    X_batch = tf.strings.regex_replace(X_batch, rb\"<br\\s*/?>\", b\" \")\n",
    "    X_batch = tf.strings.regex_replace(X_batch, b\"[^a-zA-Z']\", b\" \")\n",
    "    X_batch = tf.strings.split(X_batch)\n",
    "    return X_batch.to_tensor(default_value=b\"<pad>\"), y_batch"
   ]
  },
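  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same cleanup steps can be sketched in plain Python with the `re` module (a toy example on a single made-up review string, not from the book):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "# Toy illustration of the preprocess() steps on one review string:\n",
    "# crop to 300 characters, replace <br /> tags with spaces, keep only\n",
    "# letters and apostrophes, then split into words.\n",
    "review = \"This was an absolutely terrible movie.<br /><br />Don't watch it!\"\n",
    "text = review[:300]\n",
    "text = re.sub(r\"<br\\s*/?>\", \" \", text)\n",
    "text = re.sub(r\"[^a-zA-Z']\", \" \", text)\n",
    "words = text.split()\n",
    "print(words)"
   ]
  },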
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(<tf.Tensor: shape=(2, 53), dtype=string, numpy=\n",
       " array([[b'This', b'was', b'an', b'absolutely', b'terrible', b'movie',\n",
       "         b\"Don't\", b'be', b'lured', b'in', b'by', b'Christopher',\n",
       "         b'Walken', b'or', b'Michael', b'Ironside', b'Both', b'are',\n",
       "         b'great', b'actors', b'but', b'this', b'must', b'simply', b'be',\n",
       "         b'their', b'worst', b'role', b'in', b'history', b'Even',\n",
       "         b'their', b'great', b'acting', b'could', b'not', b'redeem',\n",
       "         b'this', b\"movie's\", b'ridiculous', b'storyline', b'This',\n",
       "         b'movie', b'is', b'an', b'early', b'nineties', b'US',\n",
       "         b'propaganda', b'pi', b'<pad>', b'<pad>', b'<pad>'],\n",
       "        [b'I', b'have', b'been', b'known', b'to', b'fall', b'asleep',\n",
       "         b'during', b'films', b'but', b'this', b'is', b'usually', b'due',\n",
       "         b'to', b'a', b'combination', b'of', b'things', b'including',\n",
       "         b'really', b'tired', b'being', b'warm', b'and', b'comfortable',\n",
       "         b'on', b'the', b'sette', b'and', b'having', b'just', b'eaten',\n",
       "         b'a', b'lot', b'However', b'on', b'this', b'occasion', b'I',\n",
       "         b'fell', b'asleep', b'because', b'the', b'film', b'was',\n",
       "         b'rubbish', b'The', b'plot', b'development', b'was', b'constant',\n",
       "         b'Cons']], dtype=object)>,\n",
       " <tf.Tensor: shape=(2,), dtype=int64, numpy=array([0, 0])>)"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "preprocess(X_batch, y_batch)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "vocabulary = Counter()\n",
    "for X_batch, y_batch in datasets[\"train\"].batch(32).map(preprocess):\n",
    "    for review in X_batch:\n",
    "        vocabulary.update(list(review.numpy()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[(b'<pad>', 214309), (b'the', 61137), (b'a', 38564)]"
      ]
     },
     "execution_count": 50,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "vocabulary.most_common()[:3]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "53893"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(vocabulary)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [],
   "source": [
    "vocab_size = 10000\n",
    "truncated_vocabulary = [\n",
    "    word for word, count in vocabulary.most_common()[:vocab_size]]"
   ]
  },
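  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`most_common()` returns (word, count) pairs sorted by descending count, so slicing it keeps the `vocab_size` most frequent words. A tiny illustration with a made-up counter (not from the book):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "# Made-up word counts, just to show the truncation logic:\n",
    "toy_vocab = Counter({\"the\": 5, \"movie\": 3, \"was\": 2, \"sette\": 1})\n",
    "top2 = [word for word, count in toy_vocab.most_common()[:2]]\n",
    "print(top2)  # the two most frequent words"
   ]
  },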
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "22\n",
      "12\n",
      "11\n",
      "10000\n"
     ]
    }
   ],
   "source": [
    "word_to_id = {word: index for index, word in enumerate(truncated_vocabulary)}\n",
    "for word in b\"This movie was faaaaaantastic\".split():\n",
    "    print(word_to_id.get(word, vocab_size))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [],
   "source": [
    "words = tf.constant(truncated_vocabulary)\n",
    "word_ids = tf.range(len(truncated_vocabulary), dtype=tf.int64)\n",
    "vocab_init = tf.lookup.KeyValueTensorInitializer(words, word_ids)\n",
    "num_oov_buckets = 1000\n",
    "table = tf.lookup.StaticVocabularyTable(vocab_init, num_oov_buckets)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<tf.Tensor: shape=(1, 4), dtype=int64, numpy=array([[   22,    12,    11, 10053]])>"
      ]
     },
     "execution_count": 55,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "table.lookup(tf.constant([b\"This movie was faaaaaantastic\".split()]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {},
   "outputs": [],
   "source": [
    "def encode_words(X_batch, y_batch):\n",
    "    return table.lookup(X_batch), y_batch\n",
    "\n",
    "train_set = datasets[\"train\"].batch(32).map(preprocess)\n",
    "train_set = train_set.map(encode_words).prefetch(1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor(\n",
      "[[  22   11   28 ...    0    0    0]\n",
      " [   6   21   70 ...    0    0    0]\n",
      " [4099 6881    1 ...    0    0    0]\n",
      " ...\n",
      " [  22   12  118 ...  331 1047    0]\n",
      " [1757 4101  451 ...    0    0    0]\n",
      " [3365 4392    6 ...    0    0    0]], shape=(32, 60), dtype=int64)\n",
      "tf.Tensor([0 0 0 1 1 1 0 0 0 0 0 1 1 0 1 0 1 1 1 0 1 1 1 1 1 0 0 0 1 0 0 0], shape=(32,), dtype=int64)\n"
     ]
    }
   ],
   "source": [
    "for X_batch, y_batch in train_set.take(1):\n",
    "    print(X_batch)\n",
    "    print(y_batch)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/5\n",
      "782/782 [==============================] - 118s 152ms/step - loss: 0.5305 - accuracy: 0.7282\n",
      "Epoch 2/5\n",
      "782/782 [==============================] - 113s 145ms/step - loss: 0.3459 - accuracy: 0.8554\n",
      "Epoch 3/5\n",
      "782/782 [==============================] - 113s 145ms/step - loss: 0.1913 - accuracy: 0.9319\n",
      "Epoch 4/5\n",
      "782/782 [==============================] - 114s 146ms/step - loss: 0.1341 - accuracy: 0.9535\n",
      "Epoch 5/5\n",
      "782/782 [==============================] - 116s 148ms/step - loss: 0.1011 - accuracy: 0.9624\n"
     ]
    }
   ],
   "source": [
    "embed_size = 128\n",
    "model = keras.models.Sequential([\n",
    "    keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size,\n",
    "                           mask_zero=True, # not shown in the book\n",
    "                           input_shape=[None]),\n",
    "    keras.layers.GRU(128, return_sequences=True),\n",
    "    keras.layers.GRU(128),\n",
    "    keras.layers.Dense(1, activation=\"sigmoid\")\n",
    "])\n",
    "model.compile(loss=\"binary_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n",
    "history = model.fit(train_set, epochs=5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Or using manual masking:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/5\n",
      "782/782 [==============================] - 118s 152ms/step - loss: 0.5425 - accuracy: 0.7155\n",
      "Epoch 2/5\n",
      "782/782 [==============================] - 112s 143ms/step - loss: 0.3479 - accuracy: 0.8558\n",
      "Epoch 3/5\n",
      "782/782 [==============================] - 112s 144ms/step - loss: 0.1761 - accuracy: 0.9388\n",
      "Epoch 4/5\n",
      "782/782 [==============================] - 115s 147ms/step - loss: 0.1281 - accuracy: 0.9531\n",
      "Epoch 5/5\n",
      "782/782 [==============================] - 116s 148ms/step - loss: 0.1088 - accuracy: 0.9603\n"
     ]
    }
   ],
   "source": [
    "K = keras.backend\n",
    "embed_size = 128\n",
    "inputs = keras.layers.Input(shape=[None])\n",
    "mask = keras.layers.Lambda(lambda inputs: K.not_equal(inputs, 0))(inputs)\n",
    "z = keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size)(inputs)\n",
    "z = keras.layers.GRU(128, return_sequences=True)(z, mask=mask)\n",
    "z = keras.layers.GRU(128)(z, mask=mask)\n",
    "outputs = keras.layers.Dense(1, activation=\"sigmoid\")(z)\n",
    "model = keras.models.Model(inputs=[inputs], outputs=[outputs])\n",
    "model.compile(loss=\"binary_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n",
    "history = model.fit(train_set, epochs=5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Reusing Pretrained Embeddings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.random.set_seed(42)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {},
   "outputs": [],
   "source": [
    "TFHUB_CACHE_DIR = os.path.join(os.curdir, \"my_tfhub_cache\")\n",
    "os.environ[\"TFHUB_CACHE_DIR\"] = TFHUB_CACHE_DIR"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow_hub as hub\n",
    "\n",
    "model = keras.Sequential([\n",
    "    hub.KerasLayer(\"https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1\",\n",
    "                   dtype=tf.string, input_shape=[], output_shape=[50]),\n",
    "    keras.layers.Dense(128, activation=\"relu\"),\n",
    "    keras.layers.Dense(1, activation=\"sigmoid\")\n",
    "])\n",
    "model.compile(loss=\"binary_crossentropy\", optimizer=\"adam\",\n",
    "              metrics=[\"accuracy\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./my_tfhub_cache/82c4aaf4250ffb09088bd48368ee7fd00e5464fe.descriptor.txt\n",
      "./my_tfhub_cache/82c4aaf4250ffb09088bd48368ee7fd00e5464fe/saved_model.pb\n",
      "./my_tfhub_cache/82c4aaf4250ffb09088bd48368ee7fd00e5464fe/variables/variables.data-00000-of-00001\n",
      "./my_tfhub_cache/82c4aaf4250ffb09088bd48368ee7fd00e5464fe/variables/variables.index\n",
      "./my_tfhub_cache/82c4aaf4250ffb09088bd48368ee7fd00e5464fe/assets/tokens.txt\n"
     ]
    }
   ],
   "source": [
    "for dirpath, dirnames, filenames in os.walk(TFHUB_CACHE_DIR):\n",
    "    for filename in filenames:\n",
    "        print(os.path.join(dirpath, filename))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/5\n",
      "782/782 [==============================] - 128s 164ms/step - loss: 0.5460 - accuracy: 0.7267\n",
      "Epoch 2/5\n",
      "782/782 [==============================] - 128s 164ms/step - loss: 0.5129 - accuracy: 0.7495\n",
      "Epoch 3/5\n",
      "782/782 [==============================] - 129s 165ms/step - loss: 0.5082 - accuracy: 0.7530\n",
      "Epoch 4/5\n",
      "782/782 [==============================] - 128s 164ms/step - loss: 0.5047 - accuracy: 0.7533\n",
      "Epoch 5/5\n",
      "782/782 [==============================] - 128s 164ms/step - loss: 0.5015 - accuracy: 0.7560\n"
     ]
    }
   ],
   "source": [
    "import tensorflow_datasets as tfds\n",
    "\n",
    "datasets, info = tfds.load(\"imdb_reviews\", as_supervised=True, with_info=True)\n",
    "train_size = info.splits[\"train\"].num_examples\n",
    "batch_size = 32\n",
    "train_set = datasets[\"train\"].batch(batch_size).prefetch(1)\n",
    "history = model.fit(train_set, epochs=5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Automatic Translation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.random.set_seed(42)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "metadata": {},
   "outputs": [],
   "source": [
    "vocab_size = 100\n",
    "embed_size = 10"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow_addons as tfa\n",
    "\n",
    "encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
    "decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
    "sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)\n",
    "\n",
    "embeddings = keras.layers.Embedding(vocab_size, embed_size)\n",
    "encoder_embeddings = embeddings(encoder_inputs)\n",
    "decoder_embeddings = embeddings(decoder_inputs)\n",
    "\n",
    "encoder = keras.layers.LSTM(512, return_state=True)\n",
    "encoder_outputs, state_h, state_c = encoder(encoder_embeddings)\n",
    "encoder_state = [state_h, state_c]\n",
    "\n",
    "sampler = tfa.seq2seq.sampler.TrainingSampler()\n",
    "\n",
    "decoder_cell = keras.layers.LSTMCell(512)\n",
    "output_layer = keras.layers.Dense(vocab_size)\n",
    "decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler,\n",
    "                                                 output_layer=output_layer)\n",
    "final_outputs, final_state, final_sequence_lengths = decoder(\n",
    "    decoder_embeddings, initial_state=encoder_state,\n",
    "    sequence_length=sequence_lengths)\n",
    "Y_proba = tf.nn.softmax(final_outputs.rnn_output)\n",
    "\n",
    "model = keras.models.Model(\n",
    "    inputs=[encoder_inputs, decoder_inputs, sequence_lengths],\n",
    "    outputs=[Y_proba])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=\"adam\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/2\n",
      "32/32 [==============================] - 6s 6ms/sample - loss: 4.6053\n",
      "Epoch 2/2\n",
      "32/32 [==============================] - 3s 3ms/sample - loss: 4.6031\n"
     ]
    }
   ],
   "source": [
    "X = np.random.randint(100, size=10*1000).reshape(1000, 10)\n",
    "Y = np.random.randint(100, size=15*1000).reshape(1000, 15)\n",
    "X_decoder = np.c_[np.zeros((1000, 1)), Y[:, :-1]]\n",
    "seq_lengths = np.full([1000], 15)\n",
    "\n",
    "history = model.fit([X, X_decoder, seq_lengths], Y, epochs=2)"
   ]
  },
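  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `X_decoder` construction above implements teacher forcing: the decoder inputs are the targets shifted one step to the right, with a start-of-sequence token (0 here) prepended. In miniature, with a toy target sequence (not from the book):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Toy target sequence, to show the shift performed by np.c_ above:\n",
    "Y_toy = np.array([[3, 7, 2]])\n",
    "X_decoder_toy = np.c_[np.zeros((1, 1), dtype=Y_toy.dtype), Y_toy[:, :-1]]\n",
    "print(X_decoder_toy)  # [[0 3 7]]"
   ]
  },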
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Bidirectional Recurrent Layers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential_5\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "gru_10 (GRU)                 (None, None, 10)          660       \n",
      "_________________________________________________________________\n",
      "bidirectional (Bidirectional (None, None, 20)          1320      \n",
      "=================================================================\n",
      "Total params: 1,980\n",
      "Trainable params: 1,980\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "model = keras.models.Sequential([\n",
    "    keras.layers.GRU(10, return_sequences=True, input_shape=[None, 10]),\n",
    "    keras.layers.Bidirectional(keras.layers.GRU(10, return_sequences=True))\n",
    "])\n",
    "\n",
    "model.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Positional Encoding"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {},
   "outputs": [],
   "source": [
    "class PositionalEncoding(keras.layers.Layer):\n",
    "    def __init__(self, max_steps, max_dims, dtype=tf.float32, **kwargs):\n",
    "        super().__init__(dtype=dtype, **kwargs)\n",
    "        if max_dims % 2 == 1: max_dims += 1 # max_dims must be even\n",
    "        p, i = np.meshgrid(np.arange(max_steps), np.arange(max_dims // 2))\n",
    "        pos_emb = np.empty((1, max_steps, max_dims))\n",
    "        pos_emb[0, :, ::2] = np.sin(p / 10000**(2 * i / max_dims)).T\n",
    "        pos_emb[0, :, 1::2] = np.cos(p / 10000**(2 * i / max_dims)).T\n",
    "        self.positional_embedding = tf.constant(pos_emb.astype(self.dtype))\n",
    "    def call(self, inputs):\n",
    "        shape = tf.shape(inputs)\n",
    "        return inputs + self.positional_embedding[:, :shape[-2], :shape[-1]]"
   ]
  },
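  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of the sinusoidal formula used by this layer, with NumPy only (a toy-sized example, not from the book): pos_emb[p, 2i] = sin(p / 10000**(2i/d)) and pos_emb[p, 2i+1] = cos(p / 10000**(2i/d)), so at position 0 the sine components are 0 and the cosine components are 1."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Recompute the embedding matrix for a tiny case and check position 0:\n",
    "max_steps, max_dims = 4, 6\n",
    "p, i = np.meshgrid(np.arange(max_steps), np.arange(max_dims // 2))\n",
    "pos_emb = np.empty((max_steps, max_dims))\n",
    "pos_emb[:, ::2] = np.sin(p / 10000**(2 * i / max_dims)).T\n",
    "pos_emb[:, 1::2] = np.cos(p / 10000**(2 * i / max_dims)).T\n",
    "print(pos_emb[0])  # [0. 1. 0. 1. 0. 1.]"
   ]
  },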
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {},
   "outputs": [],
   "source": [
    "max_steps = 201\n",
    "max_dims = 512\n",
    "pos_emb = PositionalEncoding(max_steps, max_dims)\n",
    "PE = pos_emb(np.zeros((1, max_steps, max_dims), np.float32))[0].numpy()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAkkAAAFLCAYAAADCoBiiAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAgAElEQVR4nOx9eXhURfb2W91JCCFsYd8F2fd9CULSJOybIiCIDDhuuKHjNo7KbwYVFGfGXXTEEVQQRBEcdkPSCWEVgQCCgMgORglLyEK27vP9cXKTTtLL7e6q7g5fv89zH8jtuufWrVu36tQ5b50jiAhBBBFEEEEEEUQQQZSFwd8VCCKIIIIIIogggghEBJWkIIIIIogggggiCDsIKklBBBFEEEEEEUQQdhBUkoIIIogggggiiCDsIKgkBRFEEEEEEUQQQdhBUEkKIogggggiiCCCsIOgkhREEEEEEUQQQQRhBwGvJAkhHhNC/CiEyBdCLHFR9i9CiHQhRKYQ4lMhRBUfVTOIIIIIIogggrjJEPBKEoCLAF4F8KmzQkKI4QCeBxAH4BYArQDMVV25IIIIIoggggji5kTAK0lE9C0RrQFw2UXRGQD+S0SHiegqgFcAzFRdvyCCCCKIIIII4uZEwCtJbqATgAM2fx8A0EAIUcdP9QkiiCCCCCKIICoxQvxdAYmIBJBp87f2/+qwY4USQjwI4EEAqAP0ugUAQkJgadkSFBHhVUWuXjXgwgUDLBZACD5HBNSoQWja1ILQUK/E68aJEycAAK1bt3ZYxmKxwGg0+qZCAFBYCOPJkxB5efx3SAhQVAQIAWuDBrA2aOCV+Lw8gXPnDMjNFTAYuN2JgNBQoHlzCyIjOVehL577ypUrAICoqCil93EHhvPnYbhc/DmEhgKFhQAAioyE5ZZbAC/axGIBfvvNgMuXDWX6PQA0bGhF/frWkvP+gs/7uw1EVhaMp08DVitgMPDgYLEAYWGwNGsGioz0Sv6VKwZcvMjjjtHIogGgenVC8+YWCOG/Z/c3rAUFCD19GuLGDT6h9X2DAdZ69WBt2NAr+TduCJw/z+OObduHhAAtWpSOO76GP/u7LQy//QbDH3/wHyEh3EBEoBo1YGnaFN5MikVFQHo6jzuGYrOP1QoAezOIqJ7XlSeiSnGAeUlLnPx+AMBkm7/rACAAdVzJ7talC9E33xC1bElUsybRnj3kCaxWopkzeVqOjiY6dozPFxUR/fvfRFWqENWpQ7R/v0fi3UZMTAzFxMQ4LXPp0iXfVIaI6PRpolatiGrUIPr0U6IrV7jRdu0imjCBG+7VVz0W/913RCEhRHXrEn35JYsmItq5k6h1ayIhiN54g8/54rkXL15MixcvVn4fXSgsJJo+nQig3PvuK+2c588TzZtHFBpK1L8/0fXrHonPyOBXazAQzZ5NlJnJ569cIZo8mV+tyUSUnS3peTyET/u7LVau5Dbu3p0oJYWooIAoL487art2RNWqcUf1EA8/XDrupKXxuaIiog8/JAoPJ2rYkGjTpquSHqaS4fRpKrz1VqKqVYkWLya6fJnP795NNHEiN9wLL3gsPimJx50GDXhYs1j42LePX63BUDru+Bp+6+8aCgqI7r2X2/j++4lOnOCBOT2d6B//IIqIIOrWjejaNY/EZ2YSdepEZDQSPf44kfa4164RAfiRJOgegsg/Gq67EEK8CqApEc108PuXAE4R0YvFfw8B8CURuVwidO/endLS0oAzZwCTCbhyBdiyBejd2606vv8+8PjjwPPPA6++WnFRfvQoEB8PhIcDe/cCNWu6Jd5txMbGAgCSk5MdlsnIyEDdunXVVgQAzp4FBg8GMjOBzZuBvn3L/m6xAPfeC3zxBbBgAfDcc26JP3EC6NULaNMG2LQJKP9I2dksftUqvn2PHuqfe8mSJQCAmTNnKr2PS1itwNSpwMqVwKuvIuOhhyo+++rVwKRJ
wMCBwMaNgBvWVIsFGDUKSE4GEhL4NduCCPj0U+CBB4A//xn45BPvH8lT+Ky/22LFCmDaNGDAAGDdOqBWrbK///YbMGgQcPkyN2K3bm6J/+QTbtunngL++U+UrKY1HDoEjB8P5OdbcOiQEQFk2FSPixeBvn1hzc6GYf167t+2IAIeeghYtAj497+5EV3AarUiPT0dV65cQUFBIQoK2CgYFma/fGEhf4KhoRXfzU0P7eFDQuxbqa3WEoueO9ak0NBQ1K4dhccea4i1aw3YtInnVlsIIfYSkXuTuD3I0LRUHmCXYDiA1wB8Ufz/EDvlRgBIB9ARQG0ASQBe13OPbt26laqmZ84QtWjBVqXcXN0a7Q8/8EJxzBheRThCaiprvZMmlVo6VCFgLElWK1FcHFuQ9u51XK6oiGjqVF51JCbqFp+TQ9S1K1Ht2kSnTjkul51N1LEjUf36RIcOZeivv4cIGEvSJ59wm86fT0RO3vmXX7K57YEH3BL/4oss/uOPnZf729+43PLlbomXCp+vrC9c4H4/cCB3VEc4dYqoaVOiRo3cWlXv2kUUFkY0dCh/Po6wdy9RaKiV7rhD/bgTMLBaicaPJwoPpytms+NyRUWlFqW1a12KPXHiBP36669040YeXb9upcxM521vtRJlZbGR1tncoAIWX9/QFgUFbOrJy9NXTud8a7VaKS8vjw4f/pU+/PAEvfee/XKQZEnyuxLksoLAP8BuM9vjHwCaA8gG0Nym7FMAfgdwHcBiAFX03KOMkkREZDZz07z0ktOXpeHqVdarWrQoteQ6w+uvs/j339cl3mM88cQT9MQTTzgt45NJY9kyfuAPPnBdNjeXFdT27Yny83WJv/9+nts3bHBd9qef2Oo+aFC+04FNBjZu3EgbN25UexNX+P131h4HDSoZoZ2+86ee4sbcvVuX+M2b+dXed5/rsgUFRAMGEFWvTvTrr7rES4fPlaSJE9nf9csvrsv+8AO3/VNP6RKdlcV61S23sLvTFebOzSaA6KOPdImv/PjmG+6cb7zh+r3n5fEKqmVLohs3nBbdv38/WSwWys3lub2w0HVVioq4bHa2b5VUvylJFgtrhVlZ+h74xg39jUlc7OpVC+3evd+h+P9vlCRfHBWUJCKie+7hJdrRo/bfgA2ef57Htl27XBYlIu4/w4fzZKFncFMJ5ZPGlStsuunb1/lyyxbr1nHXfP11l0X37+eiTz+tv0qLFvE1n32m/5pKi3vuYRPnkSMlp5y+88xMJrD07u3yfRUVEXXuTNSmjct5pQSnTjHtb/hwfeVlw6dK0nffka0FTxfuv58JLjbvyxFefZXFb9umT/Tvv1+iYcN4kXDhgv4qVUpcucL9uGdPosJCfe89IYEbdN48p8X27dtXovS44Wyg/Hy+RufaTwr8piRpGqTeMd9q1a1UWa2sbF6/zu/CEYJKkmolKT2dR/O4OKcvLT2duWd33+2wiF389BMrVn/9q3vXyYbySWPWLGYuOunMdjF+PDfsmTNOi40ZQ1SrFlvz9MJqJerSpZBatWLrxk2LLVv4E58zp8xpl+986VK+7j//cVrsiy+42IoV7lVrwQK+bscO966TAZ8pSTk5bObp3Nm9TvbHH9yh4+OdjjsZGezFGz9ev+hLly7Rr7+yDjZ7tv7rKiVmzWJeQ/G4o/u9T5jA4865cw6L7Nu3j3JyWAdwRwexndx9ZU3yi5JUWMiNo3flpEFzu7nQIm2LBZUkfypJROwPA4icuEyeeIK/xePHHRZxiGnTeFX322/uX6tP/jSaNm2a0zJKJ40TJ0q3O7mLU6e4cf70J4dFtm8ntxfqGpYuzSSA6TqqsGrVKlq1apW6G7hCdDT7gMsNVi7fudVKNHgwUVSUw91u+fnsmeje3X2eRXY2Ub16zKPxNXymJL33HnfOlBT3r333Xb52zRqHRZ5+mj+tn37SL1Z79vvu4522N6016fx5tp4+/HDJKd3v/dQpdo86WfXu3btPF9XG
HjT9wVfWJL8oSZ5qgjq0SI3fpRmcgkqSv5Wk/Hyixo15VWcHZ8+yR04PH8Mejh9nBcsFbchj+J24/dhjPFh5OhrPns3L3vPnK/xktRLFxrInz5Nt5X/8cYn69GEdQtWA5VfitqZBvvtuhZ90vfOdOx1eT8T0MkAfD8we/vUvvj411bPrPYVPlKTCQiYKRUd7fv2ttzq8/swZVnJmznRPrPbsmjXp8cc9q17A49lnWYO0Ib659d7/+lc289shzlmtRLt37/PYGuRra5LPlSTND+mJBqnjes1lqRlnfaEk/f+2IdE9hIUBs2dzOIADByr8PG8e7yCdM8cz8W3aADNnAh99BJw/711VAw5XrvC+72nTgMaNPZPxxBO8RfSDDyr8lJzMx0svAdWquS9aCODllznqw3//61n1Ahr//CcQFcV77j1B//58vPNOaWS8YhQWct8fNAgYMcIz8Q8/DDRoAPz9755dH9D45hvg9Gm3w1iUICSEx50dO4Affqjw8z//yePOP/7hmfhWrYAZM4CPPwYuXPBMRsDi2jUeUCdP5gf1BI8/ztvV33+/wk87d/KQVKUKPAqMKgRfS1QSx/XmQn4+AOC1f/8bffr0QY0aNVCvXj2MHTsWP/30U5mir732WsUyP//MbV9QUBqJthhELN5o5E/EVwgqSa7w4IM8C7/5ZpnTV68Cn33GSk6LFp6Lf+kl/lj+8x/vqhlw+OgjIDcXePppz2W0agXccQfLyskp89P773MspAce8Fz88OGsB/z731qE1psEx48D330HPPKIZxqkhqeeAn79lWP72OC77zj8zF//6tlEAXAYpr/+FUhKAvbs8byKAQci4I03gHbtgLFjPZdz771AjRrA22+XOZ2dzePOpEnejTsvvsi6b7lhrfLjo4+ArCzPFVQAaNKEG/i//2VZNli4kP/1JmuC0chHfn4FPaByw2Lh8NdhYUhOScEjjzyCHTt2ICkpCSEhIYiPjy/JQgBw/D67ZbKzuWGKisqILyri02Fhno87HkGGOaqyHw7dbRpmz67gNtJoA+7yke1h9GgOj6Jz96Nu+M3dlpfH4WdHjPBeluY2sgkfcOECuymffdZzsdpza+RjN8Iy6Ybf3G0PPsj+mPR0uz/rfueFheyPLNeH4uL4tLchFK5dY46sm2GZvIJyd5tGlpdBdnvqKfaL2ZCIP/6Y3NrRZovyzz5pEmcA8NQzEnC4cYPHnWHDKvzk9nvftauCu/mPP5he8cMP3g/6GvlY9caR8u62S5cuEQB68803qXfv3lSlShVq06YNbd682fubaTva7Lj4srKyyGAw0P/+9z+Hl5eU+e479keW41Hk5FR0UwbdbYGCJ59kLfm99wCwNvvxxxyQu0cP78U/+CAH3S23YPcaAwYMwIABA+QK1YPly4HffweeecZ7WQMGAP36AW+9VeL2+fRT/u+DD3ov/s47OQDyokXeyyqPpk2bomnTpvIFO8OVK2xqmDGD/VneICSEXQ8pKcD+/QDYSJWYyG3vbUqomjXZK7J8OVtIbgq88w63+z33eC/r8cfLuJuJgA8/BLp0AaKjvRd///0c5Pu777yXFRD49lt5406/fmxmfvfdEjPzp5+yF0hGKrSQELaG+Nrltr/4O37//ffx+uuv4+DBg+jatSvuvvtu3NDy2hVj/vz5iIyMdHqkpqZyYc1/6CCseFZWFqxWK2rXru2wbiVloqLYXGSxlIz5VmuJkcr3+R9laFqV/XBpSSLivbYNGxIVFpYYNxYtcn2ZHhQWEjVpQjRypBx57kDJynrwYE5aJIuZ+OWX3OBbtlBREVHz5mzN8Aa2zz17Nq8Q/Z3mSAo+/NClidOtd37tGlFkJMfvId5VFRIib0em7G/JFZRakn7/nRvnuefkybzzTt5lmJdHu3dXMKq6hfLPbrGwRdDBvpTKh2HD+IHsWDI8eu/Ll3ODr1tHFgvv5oyJcWy90Cz3tscHxS8rJyenwm+DBsXQwoWLyWLh
+tm7fkVxfI2zZ89W+E0PyluS3njjDTIajXTUJv7fiRMnCECF57p8+TL98ssvTo9cLVCUxqh24A6ZNGkSde/enYqcmJ/LlLFaywSi0mJNln+1QUtSIOFPfwLS04HERHz8MVC9OjBlihzRISHAffdxzrEzZ+TI9BtOnwa2bgWmT5en8t9+O/Mzli7F5s2cBm7WLDmiAeY1FRQAn38uT6bf8PnnQOfOQPfucuTVrMnmtq+/Rt61PCxezDQxL5Oml2DAAKBjRzWWPJ9j+XJe7v7pT/Jk3n8/Wwc3bsSHHzLFTIaRCuAF/3338b6UU6fkyPQbLlzgB5k+XV6CtDvvZOLjF19g82Zuo4cfliMaKB0efWlNSktLw9ixY9GuXbuSc2EOks5FRUWhdevWTo+qVatyYS3/mh0z21NPPYVt27Zh1apVMDoww1UoIwRbpQoLQVZCYSHPk37JfSdD06rshy5LUl4eUe3alDdpGoWHc6wymThzhned6syEogsTJkygCRMmOC0jfWX9yiu8+jp9Wq7cP/+ZqHp1mjgqhxo08N6XX/65o6PlGr+IiFasWFGyEvQJjh8nLQ2DM7j9zr//ngiglNnfKOFvvfUWV/vAAbly7UGpJalnT6JeveTKLCwkql+fCsZPpPBwooce8lyUvWc/e5Z3y8scd/wCLdeTg4B1Hr/3Rx4hqlqV7h57nerXdx3A0F1kZ+vP3OEJyluSOnbsSHPnzi1zbtWqVRQeHk455XILzps3j6pVq+b02Lp1K5t3HGzbf/LJJ6lhw4b0888/O6yjwzLFQaWKbhQ4NFIFLUmBhCpVgMmTYfhuNYx52V7tqrKH5s15O/Vnn8nbaXX58mVcvnxZjjA9IAK++AKIifFu6409TJ8OZGUhbNP/8Kc/ebe7xB4eeAA4dox3XcvCjRs3Kvj5leKLL3ipNW2aXLlDhgANGyLkq6Vo3hwwmeSKnz6duQaffCJXrk/x00/Avn1yrUgAL5+nTIFhw1qE5WVKF9+sGY87Gs+vUoKILajR0RxXRSamTQNu3ED4ptW46y7upzIRGsrjvS/aPi8vD8eOHYO13ATz9ttvY8qUKYiIiChzftasWUhLS3N69O7du9QUVm5QfuKJJ/Dll18iKSkJ7du3t1snp2U0i1JhIYSQwwXzBEElyR1Mn47Qglw80uBbKYTt8pg6FTh3zm5olMqBH35gZu/06fJlDx6M7NpNMdW6VJqb0xZ33sl68MqV8mX7BFYrK0nx8Z7HpXIEoxF5E+5G79/XY8bYK9KJk3Xq8G75r7+uxKEYPv+cFZqpU+XLnjYNxsJ8PFB7Ffr3ly/+3ns5pIPGwa102LsXOHKENyvIxoAByKrXEpMLl+Guu+SL1/QKX7jcDh06BABYvnw5UlNTcezYMUyfPh0nTpzAa6+9VqG8LndbeHgpm93GF/boo49i8eLFWL58OWrXro309HSkp6cj22aHhssyQoBCQ2GgIoSGWH1P2C5GUElyA5faROMkWuLBiC+UvLBx43ilUmkn6i++AMLDgYkT5cs2GLC+5jSMwCb0aPKHdPHVq/OKetWqSjpRb9vGfDDZpoZibKp7D8JQiD/X+FqJ/IkTmfK3fbsS8WpRVAQsXQqMGgXUqydd/NXWffALWuOh6suUcDJGjeK4VV+rebXq8dlnJZZ+6RACG2rejXhswYCW6SrEIySkNAaQSqSlpaFNmzaYO3cupk6dih49eiArKwt79uxBQ09JhhYLV7ycFWnhwoXIyspCXFwcGjVqVHL861//cqtMkQiFABAmysZM8iWCSpIbWL1GYCnuwa2nE5WEqq1ZkwMcfvNNJZyoi4qAFSuA8eP5QSQjIwOYd3Y6QmCB+GqFdPkAx4+7cAHYtUuJeLVYtoxZvbffrkT8R7u643hoR7RIXapE/pgxrF9Xyok6NZVjeKiwoAJY8x2PO63PmZWMOxERrCh9+20ldLlZrTxgjhnDsTwk4+pVYN7paTDCCsM3alavoaGsZ6hu
+7S0NHTp0gVTpkzB+fPnkZubizVr1qBJkyaeC9UCPpZTkhzxe/5hEyZeT5lCixEWGCAs/gtPHlSS3MDKlcDOFlMhiJQFF5k0SZ7LLS4uDnFxcd4L0oPUVA66osImDWD1auCQtRNutOmqbCYdO5YXpLLEt2zZEi1btpQjzBmsVu6Po0Z5F2HbATIygC2JAqcGTIPYtk1JDp3ISGDkyEpqyVuzhjW8kSOViP/qKyC12TQed776Ssk9NEueTE6eT7BrF1d8wgQl4levBg4VdUBOux68EFEALcWGapdbWloaunbtKk8gFcdG0oI+SQYVB922GkMhLBa/DQxBJUknLl0CzGag9/QOnHJgzRol95HpcpszZw7meJpYzl1oE8WwYUrEr1zJnMzwqXfwSH7pkvR71Kgh15IXExODmJgY7wW5wq5dHETvjjuUiF+9mle5zWYXy//f/5TcZ+JE5sbs3KlEvBoQcd8fNkydgroF6DetNdCtm7JxZ/Ro/ny/+UaJeHVYvZqtGKNHKxH/1VecHSlixmReuSqw5PnC5UZEOHTokFwlyWrlCitKpKYpjYYwHxK37CCoJOnEt99yn5g0CexSMps5maJkVEqXm+KJ4tIlzvE1eTIgbh/PDSM7PHkxJk1iQ8nu3UrEq8GaNTxRjBqlRLymoHa4oz3Qtq0yK+qYMWzJq1QT9b59HLhLkYKqucDuugs87mzfzpqTZFRKSx4RK0lDhihz8ScmctuL28fzybVrpd8HUO9yE0Lg+vXrGDdunDyhmtKiUEkSAjCEGJgUXuQfXlJQSdKJr79mA1KXLmDeR1ERsHGjknvJcrmNHDkSIxW5AMogLY0nCkV8mDVrbBTU7t05XoKiFfXYsWzJk+FyW7p0KZYuVcPhKYE2UZhMSiaKK1d4PTBpEiAMonSBkJkp/V6yLXk+wZo1PICPGaNE/HffAbfeykYkjBundIEwcWIl4+T99BMnYFakoG7cyErLhAkA2rcHWrdWtkDQ9Aw/6QGeoaiowq42WdAUxtDQYk9eSAif8MPAEFSSdCAzk9NX3X578Qvr14/zMymcqI1G770aPovTo3iiWLeOwy517Qp+AePHAwkJQG6u9HvVrMkGsTVrvDd9FxUVoUj1qHfkCHDihLKJYvNmHptKEtqPH89LPEULhIkT2ZL3449KxMvH6tXA4MEcmVkycnLYkjF2bPG407Mn0LSpMnfnmDG8QFi1Sol4+Vi9unQ8UIC1a4FGjbjZS+6TlKTkXprLrbBQ/S43KdAUFtkB64qhDZslRirtPn7QIoNKkg4kJPC7KdEBDAb+YDZsAPLzpd+vVi3gttuA9euli1aDNWu4wgq2P+flMSdjzBgbbuDttwM3bgDffy/9fgDTG06dAo4eVSJeLjRFXaYZ3Qbr1vFr7dOn+ET//nxC0Yp61Cj+vDZsUCJeLn75BTh8WJmCmpTEw0vJuCMEv+fNm7n/S0aNGmyQrDTjzurVnNdGVo4cGxQUcDOPHm1jKBk3jn9QZM0ICWEFqVJYUStoMWrElwSQNBi4/weVpMDEunVA7dooG8ht/HhOXa5oZTF6NHDwILvdAhonT3JFFbnakpPZYFSGlzloEGuSCidqoJJM1KtXc8eUHUASpR7lUaNsBiujkU0bGzbwhCEZderw41SKiXr1av5XkSVj3TqO3zVokM3JceP4g0hMVHLP0aM58vyvvyoRLw+nT7ObX5GCum0bcP16OeN4dDQQFaVUSQIqicutqIgVF0WutqKicpvmfBlQqhwqhZIkhIgSQqwWQuQIIc4IIe52UG6mEMIihMi2OWK9ubfVyhPFiBHllOYhQ5jtqHC3CVAJJmpNUVE4UURElEuFoe1mWbtWyYjSvDlzzwJ+or5wgaMNK2r7nTs5TkwFL+r48TyDpKQoue+oUexuS5cfu08u1q1jjpzsFDzgeWDdOuZolUmFERvLmpOiBYJGYQz4cUeroEILapUqHMC+BCEh/DFoARQlw+BffrJ+aHlU
FLnaHMSnLJ2AfRzMq1IoSQA+AFAAoAGAaQA+FEJ0clB2JxFF2hzJ3tz4xx+BP/6ws8M0PJxHsA0blHwwHToAt9zi3UQ9ZswYjFHEEyrBxo2cxr1VK+miifj54+K4uctg/HiOy6RoG9ro0Rz6yRt+ctu2bdG2bVt5lSqPzZv5X0Xbn9ev53Fp6NByP8THA1WrKuPGaI+zaZMS8XKQmcmhKBTtKExL43AIFT7fKlVYk1m7VolFo3Vr3sAY8ErSpk085sjO1QYed9au5XVwhc26mlKmaKL2Iz9ZP7RnV+xqqyDeVwGlyiHglSQhRDUAdwKYQ0TZRLQNwP8AqAlvWw7r17N2P2KEnR+HD2eW6c8/S7+vEDxZJCZ6Tj945pln8Mwzz8itmC1yc4GtWx00jvc4coSt6nb1vPh4fjGaoiAZo0fzx5qQ4LmM6OhoREdHy6tUeWzaxG62zp2ViF+3jjnJFTbNRUSwRUNR23frxo8V0Ja8xESeLBT1/XXreAywuzl19GiOi5WWpuTeo0fzBsacHCXivUd+PtMcRo5UEsTw+HHeC2F33NHiwCky92jWk4COfF5UVLw3X436oG2aq/Bq/eRyC3glCUBbABYiOm5z7gAAR5akHkKIDCHEcSHEHCGEV+ruunXMDaxTx86Pw4fzv4qWvGPGsB6SnKxEvPdISeEBS9FEoU2SdhfrtWvzLkNFE3X//nyLgJ2oNQ1uxAglE8WpU8xJdmiIHD6cicunTkm/txD8zr//3m/x41xj0yZmOqvIOAsed/r1A+rXt/OjNlEr6vujRvFnbTYrEe89tm1jDU6hggo4MNBWr67UJ+ZHfrI+2CUMyYPVyodDI5Uf2O1q7GVyEQmgvNMjE0B1O2W3AugM4AxYifoKQBGACimOhRAPAngQABo3bowMOwHa0tMF9u2rgxdfzEFGhh1zTkQEarVtC+vatbiuILFo585AREQdfPNNHvr0cX9ZN76Yq/KdE/5Cphf+pGqrVyO8alVc7tBBSYC71atronNngfDwa3bFVx08GBFvvIErx46B7GqxjqHnuWNjq2P9+lD88ccVjxZNy5cvBwBMVZAZPuSHH1Dr2jVcj45GgZttr+fZV64MBxCJ6OgryMioOCAZ+/VDbQDZq1vuFfMAACAASURBVFYhb+ZMt+6vB4MGheGTT2pgw4ZMDBwoT1Pypr+XgAi1169H0eDByFIQLyojQ2DPnij89a+59sedkBDU6tQJtG4dMh94QLdcvc/eoQOPO6tW5aF//8AzJ0V8+y2qhoXhcpcuuscdd9772rU10L69AdWq2R93YDAAViusFosSRcFoFJyOwyrHWmKVqFAIqxWCCGQ0ghQoKoWFAoCA0Uh2n18YjRAAqKgIVNz29uZumagMSlI2gBrlztUAkFW+IBGdtPnzkBDiZQDPwo6SREQfA/gYALp370517cQ50YKrTppUDXXrOogkPXo0sHAh6kZEsBtCMuLiALO5KurWrer2taHFtlt7z2YLV787REoKEBuLuk2bena9E1y/DuzZAzz3nJP63XEHsGAB6uzfD0yZ4vY9XD33hAm8genMmbqlW+DdQPXq1XXdxyPs2gUYDKgxYQKbvNyEqzpt386Uj379ouwXqFMHaNECkdu3I1KBS/eOO4D77we2baspnZfu9fs4fBi4eBHGuXNRRcG7TUjgxfKECS7GnTffRN0qVdi6oRN6n33YMCApqSrq1KmqQg/wDikpwODBqOsmYV7Ps+fncxDfBx5wXP7c2bMAAIPFUo5VLwehoZpHSZTuKvUQVqsVBplusWLTrggJgVDgbrNaWe80GgWEo45nMEBYLBBVqgBQNL7a3k6pdDk4DiBECGHL0OsG4LCOawmAx594QgLHjHSa7mb4cP6ytm719DZOMWwYezROnnRd1qc4dYqd94pM3snJ7JevQBq2Re/evCVXkdtB82p4w0tShk2bSn2CklFYyO3vtO2F4L6fmKjEJ6ZtfVcUCss7aO51zd0u
GQkJ/Fp79nRSaPhwnkkV+cRGjuQg+grolt7h3DlWUhWNOz/8wBzQMrtpy0MIPhSSt4EAdTW7sfX/t99+w4wZM1CvXj2Eh4ejY8eOSCm3I3bhwoVo2bIlwsPD0atXL2zdmmqfj2QLo9GnvKSAV5KIKAfAtwBeFkJUE0IMBDAewBflywohRgohGhT/vz2AOQA82itrtXIQw/h4Fy9s8GDeeqWIl6RtQQ24iVpTTBQNVgkJbJhzyns2Gnkm37xZyQdTvz6TiLdskS7aO2Rk8LZLRW2/ezeQleVCSQJ4os7KUpaRduhQDsH1++9KxHuOTZuATp2AZs2kiybivh8XB+dWhIED+QNRpEVq715ROCbPoY2zitItmc083rvMS62QQMyWlAAkb2u5QnTsart27RoGDhwIIsL69evx888/47333kN9G5LdV199hSeeeAIvvPAC9u/fjwEDonHnnSNx8eJZ58J9HAog4JWkYjwCoCqAPwAsB/AwER0WQjQvjoXUvLhcHICDQogcABvAytV8T2546BAnVnU5UVStyl+UImtGu3aciSDglKRNm4CWLZVswQX4eQcP5h3PTjF8OPDbb/zCFGDoUHY9KciA4jk0f4xCBdVg4C3QTqHN5Ir6vrZACKiJOidH6Y7Oo0d5w6xmxXSIKlXY3KGo7Vu25JxxATnuNGvGxCkFMJs59JVLA63RqJRArClJsnWwjIwMCCHw1ltvoU+fPggPD0fbtm3xvR5l240o22+88QYaNWqEzz//HH379kXLli0RFxeHDjbv7c0338TMmTPxwAMPoEOHDnjzzffQoEEjLFr0oXPhPo66WSmUJCK6QkS3E1E1ImpORF8Wnz9bHAvpbPHfzxBRg+JyrYjo/4jII6OlNjiUCSbmCCNG8Oh2+rQnt3IKIXiiTkpyX3GePHkyJk+eLL1OKCzkmWv4cCXExXPnOOqvSwUVUL7TJz6eA0unprp/badOndCpk6NNmF5g82bmBPXqJV82uO/37q1joqhZk7d+Kmr7Hj3YmxpQE/XWrdwhFLraAJ19f/hw3quuyBcfH89u14Bx+1gsPBAOG6Zk3MnLY6OoU1ebBsUTtSrx+/fvBwC8//77eP3113Hw4EF07doVd999d4U8n/Pnz0dkZGTpUbs2Ihs3RmStWiXnUh0MjGvWrEG/fv1w1113oX79+ujevTvef/99ULHWV1BQgL1792KYzWqgqAgYMmQYdu3a4fwhNFObj5SkykDc9gu2bOHFSpMmOgprI1pSEvDnP0uvS3w8sHgxsG8f3CIQP/LII9LrAoAd99nZOkdy9+HWRNGkCW8DTEgAnn1Wel0GDWJuZkKC+/NiH0/Y3q5AxArqkCFK4pRkZvLrff55nRcMHw7MmcMuQMkESqORH3PLFn7sgCAQJyZyhxg4UIn4hAQO6HjLLToKax1y82bg4Yel1yU+HvjPf3gDhcpwX7qxfz9w7RpbMBVg506ml+pSkmzDYxebu5csWVKhWKdOndCnTx8UFhZi2bJlFX7v3r07unfvjtzcXKxcubLkPBErbX379kavXp2RmZmJ1VoanGLM9GBXaVpaGoxGIzZs2IB27doBABYsWIDWrVvj6NGj6NGjR0nZWbNmlV1kZ2fzR1m1dBNREwcT5MmTJ7Fw4UL85S9/wfPPP4+0tDQ8/vjjAIDHHnsMGRkZsFgsaNCgQcnzWixAw4YNsHWrDn5DSIiSvKn2UCksSb5GXh4vGHXrAB07MoFFEYnSU15Sbm4uclX4iZKSeMaKjZUvGzwpNmzoRoxEk4l9YgpyiUVE8HzoiTWjsLAQhbKX4b/8wv4YRROF2ayDMG8LzSenKJhXfDw/7vHjrsv6BImJrDEo2MlaUMDN6NLVpqFNG3Y9KcofOWQIf+YBw8nT/K4u/cCewWxmvadMrjxn0MJjK+IlGQzyaTdpaWkYO3ZsiYIEAGEOduhFRUWhdevWfLRqxUe7dqXnWrdG1ar2d11brVb07NkTr732Gnr0
6IF7770Xs2fPxgcffFCmnLaDzWrVFkLkeFebLTRTmw/iJQUtSXawYwfvcNA9UWgKg9msZMlrSyB+4QX9140qjsKYLHsCS0ws9YVIhkaYd8uTZzIB773HJpDbbpNep6FDud1//513O+qFtnL0ZMXnENpEoUhJ2rKFUzEMGKDzgj59+ILkZGDiROn10b7BhATm5/kVGRkc5fqVV5SI37XLTQOtENz3N2zgD0eyZTEqij26CQnA//2fVNGeITGRV07ufIRuwGzm560QYd4RQkJYsy0mMzv7zkNDQ53+HhERUeH3/Hw+rFagZs2aUsaRtLQ03HXXXWXO7dmzB+Hh4WUUJ4DdbfPnO6f0bty4EYPsaJWNGjVCx44dy5zr0KED3nnnHQC8bd9oNCK9OEGjpgxmZPxRYl1yCi3qpg+UpKAlyQ62bOH+73KHgy2GDOGEo7/8oqRO8fEBQiDOzWW7tKJJ+uBBnYR5W8TE8AejaEUdUATixETOwHvrrUrEJyRwc+oO/xIayoqpIitqq1ZMIg4Ia4b2jAoVVKNRp7tHg8nEytthPRFR3Ed8PCtvWRWi0vkYeXkcaVtR2+fm8q5Ot9pe235YSXhJeXl5OHbsWIXgkm+//TamTJmCiHLW0VmzZiEtLY2PnTuRtm0b0vbvLz2XlobevXvbvdfAgQNx7NixMueOHz+OFsWxrcLCwtCrVy8kFJvotUwnW7Yk6EvlpPGSgkqSf7BlC4egcSNGW+nXpWiyGDqUFy2KwjHph+bWUjRYaYqILsK8hqgo3pKiqO179mQSs98JxFYrP6PmB5EMza3lVtsD3PePHFG2V3/oUH5sv6dqSEzkQUEF1wz8jL17u2HJAJSPO/Hx3O5+H3d27mTzvqJxZ/t2Jqi7pSQp3quvGUtkiT9UvAN4+fLlSE1NxbFjxzB9+nScOHECr71WId5yqbvt1lvR+pZb2NXWpo0ud9tf/vIX7Nq1C/PmzcOJEyfw9ddf491338Wjjz5aUuapp57CkiVLsGjRJzhy5Gc8//wTuHjxImbNmqXvgbQdhmddhAzwEkElqRwyM4G9ez34Ftu04aycigarQYN40e73PG6JiaXWAwVITma3SuPGbl5oMvFAmpcnvU7a6t7vuawOHACuXFE2UWhx3tyaKGwvUMhLun6dQ0P5FUlJbGZTkP1cs2S4TfNr0YJNbYo658CBHAbO75a8xET+EN0y7+uH2cyv1e1hTdVefZTdxCVDfFpaGtq0aYO5c+di6tSp6NGjB7KysrBnzx40bNjQ8YUaYciN8N99+vTBmjVrsHLlSnTu3BkvvvgiXnnllTKbie666y68/fbbmDfvVQwc2B07d27Dhg0bSqxNLqF9h4onRd1KkhDiASEE2Rx5QoifhBAzVFbQ19i2jfuE24OVxg9ITlbywUREAH37BoiS1L8/81Akw2LhFatHfHCTiR34igIbmkzAmTNKojzohw+Iq7Vru4gwbw89e7KFRdFErc2Lfu37586xK12RgrpjB1syPO77KSlKXA/h4cxTD4hxp08fTiqsAGYzi4+MdPNCxYENZeZzTUtLQ5cuXTBlyhScP38eubm5WLNmjcMdaiVwIz6SLUaPHo0DBw4gLy8Px48fx+zZsyuQsh955BEcO3Yaly7l48cf92Lw4MH6b6Bx8BSvXt2xJHUHkAdgQPFxB4DrAJYIIdxdewYskpN5R6dHyb1NJnY5KIrlHxvLq2m9/ICZM2fKJQ1fvcpmNkWTdFoaWww8migGDeKPRtEHo9XJnclC294rDYmJHJfCbTObPiQncwBPt/m/ISF8oaK2r1+fA1z7daJWTJhPTuaFukcGWpOJv80DB2RXCwD3fc2I6RdoiRwVtX1WFot324IK+IyXJEMHS0tLQ1e3V0DFN9eZisQTaHmC3RavXRRgStIRItpVfGwEcF/xb6PkV80/SE5mBSk83IOLFfMDYmO5Q23frq+8dCUpJYWXNQonCsBDi3rNmrw1RVHbd+zIYYD8
piRpES0VKajnzgG//upFVAeTiQlNFy/KrFYJYmLYyuu3wIaJiUC9em7EpXAPyckeWjIAn4w7RJ4FVJWC1FQe+BT1fU28R0qSYl6SlibOWx2MiHDo0CH3lSQivrm3mXZdiPfYg20wKDfx61KSBNvIugIon/vhevG/7qeoD0BkZnLARo8nipYtmSOgaLAaMMA9XlJGRgYyMjLkVSApif1+/frJk2kDs5n5SI0aeSjAZGJih4ItgAYDT9TuKElS41T9+COnxFA0UXjMR9Lgg4k6J4e/T5+DiF+8yaSEMJ+Tw9ErPB53mjRhTqQiU1vfvrxo9JslLzmZt1vqjkvhHsxmHlc9DpipmJckIxyTEALXr1/HuHHj3LtQ8/Mp4OFJEe8Dl5teS1IbAJEADpY7r63590qrkR/hMR9Jg8ZLMpuV8AOqVXOPlzRx4kRMlBm7JiWFRxLd+8P1o6iIV3Rexac0mdjUoNfU5iZiY91btKxcubJMFF2voL10d3z2boqvXRvo0sVDAd26AbVq3Zy8pFOneOufItKwV3wkDSYTE/oUWDSqVPEzL0kz7zvYSeUtzGYW73F8UMUpShSniXMO7ZkUWZK8Fi8EW3gDQEnSfAZHhBAhQojaQogJAN4CcBScdLbSwys+kgaTiZ33ihKuustLkobLlzmIkaIo217xkTTcdhsPWAHES5KGlBR29UhO/aEhOZl1AI9pB9rOI4W8pI4d/dT22k0V9f3kZO62XmU6MZnYFF6cm0s2/MZLun6dzYeKFNRr17jJPLagAqUzvELytkLxzuEDPpLX4m0DOSuA3qppCV02AigEcAXACgDJAExElAcAQoivhRAef+rFypdZMLYJIVp5KssTeMVH0hBgvCRp0AgJigYrr/hIGiIjmdgRQLwkKdCsY4omaa/5SBpMJk62qihuSWysn3hJKSm8WlWUed4rPpIG7cNRaMnzCy9JM+8rGne2bmXxXnmxFSdc1eIl+TxOmI/4SF6LN5nY0qso0bM7lqTzAPoA6A2gE4CaRHQXEaUDgBCiN4AoIvJ4+iaiq0RkIk4V/CaAuZ7Kchde85E0NGvG0ZADhJckDcnJbO5WFEgvORlo394LPpIGk4m3qigwtXnCS5KCvXuZuKJYQZWiJAFKFwjZ2X7gJWlmtkDkI2lo1Ig/IEVtr/GSNO6az5CSwgOeQj5SeLiX3gOAzT1aPCEFUJgmzjECnY+kQfG4446S9CMR/UhEe4noCBHdKFfmQQBf2p4QQvxdCJEghEgVQhwp/tehv0AI8bIQQssStBbACCFELb0P4w1SU73kI9lCi1uiwD7qLi9JGlJSeKAqzngtE1L4SBpMJm73bdskCKsId3lJUhDofCQNnTsDdercXLyk06fZMqZIQd2+nfu/tL6fmqrE1BYezp+/XxZnffsqSSgMcFeNjpYwrCkOBeAXXlKg85E0tGvHGdH9pSQJIRoAaAjAlbM7DsDucud6g3e+jSGijgB+BytTjtALxSRwIioE76ZTE9q5HKTwkTRo/IC0NAnCKkIvL+nhhx/Gww8/7P0NtRgsiiYKKXwkDdHRvPIMgIm6d+/eDnMbuYWUFPb11a/vvSw78JqPpMFgUMoP8AsvSYof2Ln4kBAvdlbZwmRiU9teNftoYmP5W716VYn4isjK4mdR5Ga+fJmHNa/4SBp8xEvyqctNMR+pqEiS+PIJ5iVDT/U0PpIrJakpWAmyRW8ATxJRZvHfhwA4Y572AmBrTE8vlqscUvhIGgKEl3TXXXdVyPjsEVJTufMpJK4CkuahiAhe8ipq+06d2FiiZ6Lu3LkzOnsbV6eoiK1iiibps2fZlS9logBY0NmzvCNMAXzOS0pJ4RfeqZMS8VL4SBq07/NmiZe0fTsPdIr6vtdhL2zhA16SweBD8jYR30whH0mqeJMJ+O03JQnm9ShJ2s42V0pSLmziJQkhmgKIQlmlpz8AuxmYistbieg3m9PhAMq79aTDYuEdDtJ0AI0foCgrvV5e
0rlz53Du3Dnvb5iSwma2vn29l2UHGh/JWfogt2AyMXElM9N1WTfhDi8pMzMTmd7WYd8+tg4oUlC1iUKa+JuNl+RxGHLXyM72ItKzPWjBLhXzknxmyUtJkWhmqwizmddU0miWinlJMvO4uYQH+drcgabsSaM7KRx3XH75RPQ6EQkiOu+i6EEA7W3+7gMgDEBrABBC3AmgCYCvi//+XAhxh035ElebDToAUBNr3wY5OUIeH0mDQn6AXl7S9OnTMX36dO9vKNXMVhZS+UgaTCb+yBWlLtfLS1q9ejVWr17t3c00LUYhHykqSmIg6Q4dgAYNlE3UWjP4ZKLWXrIiBXXHDol8JA0mE1tgCgokCmX4nJekmdkU5IkEuIvedpvEsG8+SlHiE16Sh/na9EJTkqTpYK1bc1BVfyhJbuAbACNt/u4N4F0AC4UQhwDMADCimGuk/W6reJVxtQkhWgIwwgdKUna2kMdH0uADfoBP4iVp3CpFE8X+/RL5SBo0hU5xvCSf7PSRbmYrC7NZEh9Jg2J+QIMGrIf5ZKLWXrDizPNSDSUmE0ec37NHotBSxMT4iJeUnc0DnKJx548/gMOHJVrxAOW8JMU6WFl4nFBNH6TxkTRo446CBPMyW2AxgGFCCM273gfAeiKKJ6IuRDSOiC4CgBCiHoALRFTyJRPR/xHR323kPQzgjeJwAEqRnS3kG0q0gVXRTOqzeEmK45Qo4cUqDhHsDi/JKyjmI505w9Qh6fOQycQ53BTwA4BSXpLyySIlRdK2P/vQNm5JNZRooQoqOy9JM7MpHnekKkk3Cy9JIwx5YUX64IMP0LVrV9SoUQM1atTAgAEDsH79+jLiX3/9HxBClDkaerMY1BLMHz3quQw7kKYkEVE2gNkAtACQPeGAf0REl4hoqAuR5wF8CgBCiCghxGohRI4Q4owQ4m5HFwkh/iKESBdCZAohPhVCuNzceeOGkPuxAMq34vgsXpKWN0mqma2seCWGEoVLXp/FS9K2/SkmrkpXkhSb2nzGS/IBH0l620dFAV27Kuuc/frxGkS5FTUlhRUOr8KQO4bZDFSvzjmxpULjJSnyifmElySBj9S0aVMsWLAA+/btw48//oghQ4bg9ttvx8GDB0uUPCGAdu3a4bfffis5DnmTqUIRL0nq109EiUR0sPj/dYnI4yD2RPQuEWk97QMABQAaAJgG4EMhRIXtJkKI4QCeB4cjuAWssOkKSKnEqqulLlewsvBZvKSUFB4ZFeRNUsJH0qCFCFYYL+n0acXxkhS7e5KT2SImPbF927bsF1M0k/okXtK5c7ztT5G7R9u4JX1xBnAD7dhRuXlJyclA796Stv1VhNkMDBqkgHKj0+X2xhsV53Kzmc87g7cpSjIyMiCEwFtvvYU+ffogPDwcbdu2xffff19aSAKrevz48Rg5ciRat26Ntm3bYt68eahevTp27txZIt5gAEJCQtCwYcOSo169eh7fEy1bAk2bSh931DgcJUIIUQ3AnQDmEFE2EW0D8D8A9hjJMwD8l4gOE9FVAK8AmOn6HooS2yte8rriJT399NN4+umnPb/B9evMqVI0Se/fz3VXMlFoS15Fo7keY8mAAQMwwJtIwcnJnN29cWPPZbgQL5WPpEEhPwDwES/JBwqqskDSsbHAjRvKeEmxsfztXrumRDyHIVdiZmNcvAgcO6Zo3NGpJPXpA0yeXKoomc38t6uddt7SnvYX5/Z7//338frrr+PgwYPo2rUr7r77bty4UbyRvKgIEALzX38dkZGRTo9UHX5Xi8WCFStWIDs7G9HR0SV8JCGAkydPokmTJmjZsiWmTJmCk96kFhGCv9eUFKnjTsArSQDaArAQ0XGbcwfAqVHKoxPKEr0PAGgghKjj7AYREaRi41bpVhzFvCRHxpKxY8di7Nixnt9g+3aJYcgrQmkgaS3XgKK218NLateuHdq1a+fZDSwWhWa2Uj6SIh2ABV+4oCyfUmwsN48yXlJKClCrFruuFEDp
xq1Bg/hfheOOUl7Szp28K1ghYR5QpCTp5CWZTMDKlawY/d//8b8rV7quk8ZL8rTfp6WlwWg0YsOGDYiLi0Pbtm2xYMECXL58GUePHi3DR5o1axbS0tKcHs6C5R46dAiRkZGoUqUKZs2ahdWrV6Nz5y4ldKd+/fphyZIl2LhxIxYtWoT09HRER0fj8uXLnj0cwH3m99+B48ddl9UJNfv75CISQPlgM5kAqusoq/2/OoAyLS+EeBDF0b9r1LgFy5Ytk1LZ8hjTqBGyv/wSyQqsAfn5RhiNk7Bw4VFcuVIxuvfFixcBAI2d3DsnJwfVHIzU3VesQHujEV+fOQOLgvZZtiwWjRtHIjFxnXTZANAlKgqdt27FN4sWobBcWgNnz60XrVoNwrp1UVi27Du7v2cVm/iqV7fXVZ2j9qlTGJWZie0hITgtue1zcnKwb19nANHIzV2PZcvkmwRqZGVhLIBdr7+OXxUoekZjc2RnD8K8eZvQurX+QVXvex+7bh2ut2yJlBUrvKmmXeTlheCHHyZh7NgjWLZMzebdUc2aIW/5ciS1aFFyTkafB4CCAgNCQydj4cLjuH5dvpW869dfo5PBgK/Pn0eRpL5v++yLFvVDREQzHDmyCkePum9xaN++PQqdhHYxCAGjxYLCggKn+f5uuw148EEDXnnFiBdesOC226y6IsYIYYDFYkBBQZHLdIJEBGFTaN++fRg9ejRatWpV8gza74WFhSgsKEAoEYrA45aesctRW7Rq1Qp79uxBZmYmvv32W8yYMQObN29BmzbdARQhPj6+pGyHDh3Qq1cvtGvXDp9++imefPJJl/cFAKvVWmburn7tGsYB2O3Kb+kOiCigD3DE79xy554GsNZO2QMAJtv8XQcAAajj7B7dunUjZXjoIaIaNYiKipSIHziQqG9f+7/FxMRQTEyM0+svXbrk+Md+/Yiioz2vnBMUFhJVr0708MNKxDOSkogAonXrKvzk9Ll14t13WfypU/Z/X7x4MS1evNgz4W++ycLPnfO0eg5x6dIluvdeojp1iCwW6eIZVitR/fpE06crEZ+ezs2zYIF71+l67xcusPB//cuzyrnApk0sPiFBiXjGY48RRUQQFRSUnJLR5zXExhL17ClNXFncdhtRnz5SRdo+e6tWROPGeS5r3759zgsUFhJlZpZpe3tISiKqW5dozhz+NylJ3/0LClh8YaHrspZyH3jHjh1p7ty5Zc6tWrWKwsPDKScnhyg/n4VbLDRv3jyqVq2a02Pr1q36Kk1EcXFxNGPGnykzk4cHe4iNjaVZs2bpllnhXVitRA0aEN19N4HzzXqtg1QGd9txACFCiDY257oBOGyn7OHi32zL/U5EXtjvvERMDHN7FOZx27uXbyEViuOUaHwkReIZ/fvzzjyFbgdAETcmJQW49VYmIiqAMj6SBkX8AA1KeUnKtv0xlPKRNMTGcrykH+1uMJYiXgkvKTcX+OEHZW0vPQ2PPeggDmkcpJUrgZdfLnW96dmY5Sl5Oy8vD8eOHYO13M67t99+G1OmTEFEREQJHwlCeO1uKw+r1Yq8vHwYjfYNbHl5eTh69CgaNWrk3oPZwnbckYSAV5KIKAfAtwBeFkJUE0IMBDAewBd2in8O4D4hREchRG0ALwFY4rPK2oPirTjK4iUpzpukOG8oo2pVJnBXtnhJWrRwRY1z7pxBTXyk8oiJ4VlJ0RZAjZckPah9cjJQowbQvbvLop6Klx4fqTx8wIckUhDUftcu3pVXGflIGnTwkvbsKctB0jhKerj2WoxHd3lJ2vb65cuXIzU1FceOHcP06dNx4sQJvPbaa2XjIwmBqKgotG7d2ulR1cGu5+effx6pqak4ffo0Dh06hL/97W9ITk7GpEnTSnTIZ555BikpKTh16hR2796NiRMnIicnBzNmzHDvwcpD40NKQsArScV4BJwX7g8AywE8TESHhRDNhRDZQojmAEBEmwC8AcAM4Ezx8XcHMn2Dxo15h5KiwUpL
ei99opaanty+eC2DhVLExPDuQummNoXxkg4e5PhOiiaKHTtCAfhASaqs8ZK0/eEK8lZp8ZGULg4APRPwnAAAIABJREFUzuPWqZPyeEnSxZvN/GHddptkwaXi69RRFh+0FC7yuD33XEVFzWTi83rFWyzuGWnT0tLQpk0bzJ07F1OnTkWPHj2QlZWFPXv2cBBHifna0tPTcc8996Bdu3aIi4vDnj17sG7dRgwdOrLEEnb+/HlMnToV7dq1w4QJE1ClShXs2rULLWx4dB5B8sdVGYjbII63dLud82fBZG3bc28CeNNHVdOHmBjgm2+UZFWOiFBkLDGbJaYnLwstPtI990gXXRGxscCrr7JlbORIl8XdhckEfPstG0tuuUWSUMXL3e3bQ1Umti9Fx45A3brcOWfOlC7e1t0pLYTHhQscKfyhhyQJLAvNQKtcQQV43Pn8cza1hYZKFa0sXpLZzBEea9aULJjnf+lpeBzBNoeI5La3Fe9OYOy0tDR06dIFU6ZMwZQpUyoWkJh1dsmSJRXO5eWxkVCr+woFmyIAlI47GRlSxFUWS1LlRmwsO+8PHlQm3h4v6aWXXsJLL73kvsCsLOYyKJqkfcJH0qCFJvcDL2nw4MEY7El8A7OZEzY2a+ZN1Rxix45Q30wUCvgBtlAS1F6xgqrYQFsWiuO0mUySg9rn5DAfSVHbnzrF3t8hQ5SILwvFedw0PcYdl1taWhq6OgtpofGRFOZrc8RHkgohpMaVCSpJvoCf8rjFx8eX2WapG6mpCsMB+4iPpCEiQmlocltjSXm0atUKrVq1qviDM1gsTPRQ1PZnzgBnzhh90/YAv+TTp/nGCqDlcZPGS0pO5vhI3bq5LOqpeOV8JA2VjZe0fTu/SEV93yd8JA2K87hp4vXqYESEQ4cOOVaSJORrc35/9uYp8GDbx4svShMVVJJ8gaZNeaeSj/O4aTsQ3IbZzAIrOx9JQ0wMW8ays6WLdsZLSk9PR3p6unsC9+8HMjOVjeSKN25VhA8WCNnZbEmVAs0fo5CP5LO2VxyavG9fdrtJE6+Z2RTykbQm8Qlc8JK8haYk6REvhMD169cxbtw4+wUk8pHsQdMVFelgFdGzpzRRQSXJV4iJ4SWXgsSHjnhJTz75pO6gXGVgNvP2+XIBGGVAab42R9BMbTt2KBN/5kzFTVybNm3Cpk2b3BOmLXcVbj+vXdsqP1+bI3TuzElXFU3UUjePKt4f7lM+kgaF+SOl85IU8iA1PlJsrA/cPRpseUkK4G0etzJQrMVodfSZJUkigkqSrxATw857b7IcO4G0eEnXrrE142bgI2mIjuaPX3EeNynJp81moH17wJtYIU6QnAxERxeq5yNpMBjY7VMZeEk3Ex9JQ2wsf3DFObtkw2QCDhwArnicyrwYWVlKzWwnTxpw8aKPXG0aFPOSpOpgFktpzhMF8BkfSQGCSpKv4CdektvQrF2K2I0+5SNpqFaNM4orantnvCS3UFjIZjaFfKRTp1hJ8iliY9lCc+6cEvEmkyRekrY/XJGZzad8JA0+GHek8JK2bVPKg9y2LQyAj5WkAOMlOQRRqRajABofyWeuNskIKkm+QosWvEdc0WDliJfkNszm0uSwCuBzPpKG2FjeOZOTI120LS/JK/rB3r1MXFHMRxo40MdKkg8m6pwcL3lJiveH+5yPpKFhQ6Bdu8DnJWk8yIEDZVSrArZtCy0JWedTGI0Bw0tyCI0CokiL0XTEyuhqA4JKkm8RG8sThQ95SW4jKYn9AVWqyKhWGfiFj6QhJoYrsHOnEvEmk4Tg0j7gI0VFAR06qDH/O0TXrkDt2somam0Tl1fuTm1/uCIFdccOP/CRNMTElO5YlYwqVXi4kKIk9eunhAdJxLHBTCY/uHs82avva/GKtZjKzEcCgkqSbxETA1y+DBy2l3bOe5TnJc2fPx/z58/XLyAjg2M53Ux8JA0DB/JXqpiXZCs+Li4OcXFx+oWYzezqqVdPZtVKoDxfmyMY
DBzBWiEvyevg0jcjH0lDbCxw/TqMP/2kRLzJxMOGx7ykzEyO5aSo7Y8cAS5dMvjW1abBR7wkr8QH+UhOEVSSfAntK5XC8K0IjZe0bRv/HR0djWh3RmVtErsZ4iOVR/XqzEtS1Pb2eEnNmjVDM70BIQsKmFCmmI/kFwUV4BufOAGcP69MvFe8JLO5lAWuAMnJvHHLp3wkDcUfXJj0BI8MjZfksQ6cmsrW9ZshPlJ5+IiX5LF4xXwkq7Vy85GAoJLkW7RoAbRqxS4tBSjPS9qxYwd2uLPt3Wxmc3efPkrq5zc+koYhQ5iXpCBekhA8Wdjyks6dO4dzesnKP/zAGdBvlvhI5aE9l6K+bzJ5kfRe8f5wv/GRNDRuDLRti9DUVCXi+/ThXNIeW/LMZvbbDRggs1plxDdtakHLlkrEu4ZiXpJX4ZgU85Equ6sNCCpJvkdcHI8mClYWERHMt9YGqxdeeAEvvPCCfgFmMwdyCwuTXje/8pE0DBnCFZGeupwRG1uWl5SYmIjExER9F5vNpWk8FEDjI/ksPlJ5dO3Kpja97eEmNF6SRxP1L79A5f7wHTu42/m178fFIWTnTomhyUvhNS9Ji8sWHi6zWgBYB0hO5s0KfnP3KOYleRUKQDEfqbKTtoGgkuR7DBnCPnhFcUs8jpf0++/svL8Z+UgaBg7kEV3RRO1VvCSzmVNhREXJrFIJ/MZH0mAwcN9KTFSyoq5XjxVAjybqm5mPpCEuDgYtN5oCaLyky5fdvPDKFU4Ap6jtDx3iW9x2m493dNoiwHhJjz76KCZMmFB6kUI+kpbppLLykYCgkuR7aIOBwonaamWrjVvQZhfFvACJeQfdR9WqPFMpcvl4HC8pL4/NDYra/vRpP/ORNMTFARcusOVGATReUkGBmxeazRy8s21bFdVCYiJvlVcQSFo/TCaQEMr6vta33OYlbd3KSrOivr9lC/87eLAflaQA4yW9+uqr+Pzzz6XxkbZu3Ypx48ahSZMmEEJgyZIlAEr5SEYjsHDhQrRs2RLh4eHo1asXUu1MUHrK+ANBJcnXaNCAl7yKlKQBAzw0lpjNTG7u1UtJvRIS+LEbNlQiXj+GDAHS0iDcXvK6hhA81icluWks2bULyM9XPlF4kutYKrQApQoXCLm5bhpLiFirVbQ//OpV5kkNHSpdtHuIioKlSxdlba/xkty2ompx2fr1U1KvLVs4gH3jxvLDrrgFjTikIPyLu+Jr166NyMjIUtOTl3yk7OxsdO7cGe+88w6qVq1acl5T2r799is88cQTeOGFF7B//35ER0dj5MiROHv2bEnZr75yXcZfCCpJ/kBcHC958/Oli65alWlF2sSoG2Yzb9NWQOC7cYMtW36fpAFuewCh2hZAyYiPZ2PJ0aNuXJSUVJq+QwESEpi767PEno7QujXQrJmyiVrTc9zq+z//zK5mhRZUqzUw+n7BoEEcJyw3V7rssDAePjxanGlucMnIz2dDVSC0vdxEaxWh1+V2/vx5CCFw7NgxaazqUaNGYf78+Zg4cSIMNm47i4W/x7fffhMzZ87EAw88gA4dOuC9995Do0aN8OGHH5aUffNN12X8haCS5A8MGcIuFkWBDYcOZV/8Sy+9j7ffftv1BRcuAMePK0tFsn07D1h+X00DvOStXl2ZkqQ9Y0ICMGLECIwYMcL1RWYzZ62uWVN6faxWnrji4wOAFyAEK6ma5iAZUVEc5SEhwY2LFPORtmxhN5siQ4lbKBw0iH2Rivr+sGGsc+qO8pCRwQOVorbftYv1wYBQkgwG7v9+Jm+npaUhIiICbdq04cLFfKT58+cjMjLS6eGO+0vz5FmtBdi7dy+GDRtW5vdhw4aV7LwuKHBdxp+oxNELKjE0Bm1SkhKiiDYopKd31jdAKJ4oEhI4NIFf+UgaQkKAwYMRpmiHW8uWwK238jPPnq3Dt5ibC+zeDTz5pJL6pKUxmTYgFFSAlaQlS7hiPXtKFz90KLBgAe+N0KVzms1s3WrV
SnpdAO4HsbHc//2Nwv79uSKJiazRSIbtAuHee3VcoJgHuWULKw+xsUo29ZXBk09yl3YMAVgjWHswEv/tJbp3B7Q1sBA8tLmyJB04cABdu3aFQQguXLyTedasWZg8ebLTa5s0aaK7blpIgqtXM2CxWNCgXNyXBg0aYEuxyTcjw3UZfyJoSfIHatbkJa+iDtCjB+fp/Pzzi/o6WWIiUKsW765SgIQE5kr5lbhqi7g4GE+e5P36CjB0KI//x46dxMmTJ50X3rqVR3BFVjzt9bsT+FspFPOShg7lsV8Xeb6oSKmZ7fRpjp8ZEJYMgCNZ9u+vrO27dGHKpW5L3ubNQI0ayuKyJSQwYV6BgdYzKDblauGYnBlp09LS0L179wqutqioKLRu3drpYcs3cgXNoqV5GUW5ZyeiCuf0lPEHgpYkf2HYMGD+fGZ21q4tVbTBwJPimjVhyM9/FfHORmki4PvveXZREMwiI4O3/7/yinTRnsN2yXvffUrEf/QR8NlnR9G27R9o5cxKsXkzE1cVxUfSCPONGikR7z40clRCAvDss9LFDxjA8cK+/x4YP95F4T17gGvXgOHDpdcDKNVFAkZJAnhgmDuXzYt16kgVLQT3/c2beaJ2uquciAvGxSkxs127xq/3xReli7YLPawGWAjIyeXvXUEsupAQpjVou/rt4cCBA3j66acraDF6Ulht3LgRgwYN0lUXrQ7169eF0WhEenp6md//+OOPEstR3bquy/gTQUuSvzBiRClhRAGqVwcKCupi61YzbrkFWLbMfjnjzz9zID3JE8Ubb7Anw3aiMJv5vN/RqRMsjRoBmzYpET9kCA8Qhw83dl140yb2Q7qxStOLgCLM22LECLagKSAQV6nC7hVd1ozNm3lmV9RACQmsnCrKdOIZhg9nBcUt4pZ+DB0KXLoEHDjgouDRo8C5c8oU1OTkwCHMl0AxL0lTjByJz8nJwa+//sqWpHIJ1WbNmoW0tDSnR+/evXXVwzayQFhYGHr16oWEcv0tISGhJGWWnjJ+BREF7AEgCsBqADkAzgC420nZmQAsALJtjlg99+nWrRv5HIWFRDVrEt13n3TRS5cSVa1KxN2Vj4gIPl8e2X//Oxc4d05qHZKSiOrWJRo1ih8zIYH/TkqSehuPcePuu7lihYVK5N96K1FISBEBVmrRwn7b05kz3Pb//reSOiQksPj168uev3TpkpL76cbmzVyxDRuUiH/rLRZ/+nTF38o8e//+RH37KqmDxcL9ffp0JeI9wqVLl4iKioiioohmzFByjwsXuO0XLHBRUHtJp04pqcejjxJVq0aUn89/y+zz+/bt8/zi3FyizEwiq1VafWyRk0OUlVX6t8ViKfn/jh07yGAwUE5WFtchL0/KPbOysmj//v20f/9+qlq1Kv3973MpNXU/nThxhoiIVqxYQaGhobRo0SI6cuQIzZ49m6pVq0anbT5QPWXswdm7APAjydBDZAhRdQBYDuArAJEAbgOQCaCTg7IzAWzz5D5+UZKIiO68k6hJE+kfTIsWZRUk7WjRomLZ/MGDiTp3lnp/DYmJRAYDUbt2gaUgERFlfvIJN8r27dJlL11KFBqqQ0n9+GP+8fBh6XUgInruOaKQkLKDJlEAKEk3brAWP3u2EvE//cTNumhRxd9Knv3KFe6cc+YoqcPevVyHzz5TIt4jlDz7lClEDRqwJqcAnTsTxce7KDRyJFHbtkrub7UStWpFNHp06bmAUZIKClhBUbQ4y8tj8dqrtVWSPvzwQ2rfvj1rjpmZrDBLgNlsJgAVjhk2ivgHH3xALVq0oLCwMOrZsyelpKRUkKOnTHn8f60kAagGoABAW5tzXwB43UH5yqckLVrEr+Cnn6SKFcK+kiREuYLZ2WQNCyN6+mmp99dw+HDpvRXNRR4j45dflE2SupXUO+8katpU2aqyc2cik6nieb8rSUTKJ8nGjYkmTar4W8mzr1ypTEkmInr5Zf7efv9diXiPUPLsS5bws3sz
2TvBX/5CVKUKG03sQlOSH3tMyf2PHePH++CD0nMBoyRZrayg3LghrT62KCpi8ZoFzWJPEc7JIbp+Xdm4k5VFlJ2tRHQF+EJJCmROUlsAFiI6bnPuAIBOTq7pIYTIEEIcF0LMEUIENjFd88dL5sY0b67zfHIyREEBc0QU4J13+N/Zs4EPP/Qwp5kiUK1avNNHAS/J0aa5MueLinjr2fDhSna9nD0L/PQTMHq0dNFyMGIEx+ZytfvPAwjBzfr99062fm/ezNue+vaVfn8A2LCBN23Vr69EvHdQNO7Yis/Pd7LDcNs2Jswp4iNt2MD/jhqlRLx3UJyiRKM9OQwFQKQ0oZq2u05BTGK/IZAfJRLsXrNFJoDqDspvBdAZzF3qBHbTFQF4zV5hIcSDAB4EgMaNGyMjI0NCld1E1aqo1a4drGvX4vqMGdLEPv98GJ56qjpu3Cj9CKpWJTz/fBYyMkoTW1VbswZVwsNxpX173oYmEdu2heLTT2ugRQsr5sy5CpMpFJMmVccnn2T5N9lkMTIzM1F18GBELFiAK0ePgurWlSa7SZPaOH++4k7BJk0syMi4CgAI2b0btTIzcX3AABQo6HtffRUOIBLR0VeRkVF2xMzMLP9Z+R6Gfv0QBSB71Srk6Qqq4x4GDw7D4sU1sGHDNQwcWDohZWZmAkSovXEjigYNQta1a9LvnZEhsHt3FJ59NhcZGTeky/cUJe89JAS1unQBrV2LzAcekH6fTp2AiIg6+PrrPPTpk1Ph94g1a1A1NBSXO3eWPu4AwJo1NdCunQGRkddKxMvu81YvgqEKoxGioABksXA+PckwGkVxIEfOjWRbV2G1QhCBjEaQgoCuhYUCgIDRSCX3Vw3lc7cMc5QnB4Bk2PFjFh/bAPQAkFvumqcBrNUpfwqAvXrK+s3dRkT01FNEYWHS7ZNLlxLVq5dDgJWMRgfE4TZtKN8lecAz/OMf7M16/vnSc0lJOgidPsKlS5eIdu9mu/yyZVJlL13KHCSnnKQ5c7iBrlyRem8NY8YwL8OeRT0g3G1WK1HLlkTjxikRf/0688Keeabs+UuXLpX6gT/+WMm9v/iCxe/Zo0S8xyjz3v/2NyasXbum5F63307UvLkDj44jP7AEZGU5ee+S4JW7jaiiT0wybGlPFdxt5UlLkqHYk1cBN7W7jYhiiUg4OG4DcBxAiBCijc1l3QAc1nsLyAhrqhrDh3OqAMm+qGnTgI4dR6FNm7dgsdgJbnziBPDLLyhQFMSwQwc2u9q6e0wm4LnnlNzOM/TqxbFiJLsdpk0DPv4YqF07BwChZk3+e9o0m0Lr1rG7T3KMLIA9GYmJ7G4IgFhs9iEEu9wSE5XkMKxenUMBrFtn58f//Y//HTlS+n0BYP16DqqoIKC4PIwYwS6fpCQl4seMYZfvoUPlfjh5kv3AY8YouW9iIrtYA9LVpkFxKADN1WVXfGEhu/ucBrHyDFS89V+RJ89vCFhOEhHlAPgWwMtCiGpCiIEAxoPJ2xUghBgphGhQ/P/2/4+9Lw2XrKrOfk/Vne/tkcZmhpZGpmaW0aGNAwoRbAQRFYVEQaMm8YsxGscYkc+Y6BfjQAQiBBABEWgUiKCiGFSw6QHoBpRGpoamJ7r7zkPV/n7su+quWrXWPvvcW9X3Xvqs5zlPVZ2qOnX23mt417vWOQXgcwCW7qjzHbcsXuxvRU2Ou84yd+7vAHjHXSVL/dQMNagf6bbbfPw/4YSGHL4+Uiz6QHnbbXV3WO95D/D1r/8Q+++/EQsXCoD09NP+DptLltT1N0l++UsPlKZsPxLJqacCvb3Ar37VkMO/9a3+djyPPy7eWLrUA+S99qr7b46M+HanU05pSByqn5x4or/btYoiJy4EUmoOT34u9U6f45Pbb/cA+VWvasjh6yP0HyIjIx5ZNODwattTgxuGqA/qpdSPBExhkDQqHwbQDmAD/O0A/so5txoAkiTZJ0mSniRJqB35DQAeTJKkF8Dt8AArfAvRqSCt
rd6jLF3akH+IbmvbgMMOU5zVLbcARx6J8t571/03y2Xgjjt8sjrlDWbJEmDLlob96eeRRz6DBx4Ann+e7WxwoLjtNn9vygb8LWB95Q1v8H+VcfPNDTk8gUSeICQvvOD/K69Bc3/fff4m+lOayQD8Xa7//M+BH/+4IX5n9939Py/V+J2lS33T0v771/03nfMg6U1vasgNresr5BgbMPd0eP//aYzSkf8VUmdp8OEnTaY0SHLObXHOLXHOdTrn9nHOXcvee9o51+Wce3r09d875+aPfvblzrnPO+cmv0M4RpYsATZs8B62AfLnf+4xQKVHdcMG4N57GxYofv97f9fdKc9kAB7JtbV50NgAOfxw/5fodMUNAP9bBx0EvOIVdf895zwoeMMb/LCmtLS3+/lfujT8h1PjlP3392VfHqhb7rzTT1IDAWqxOIX+UDgkZ5zhDfXeexty+NNOA373O/8TAHwy8utfN2zuH3wQePbZaQBQgTEk0aB/3lVLbiMjnt5sYKmN3cT7JSNTGiTtNHLqqT6za1CgfutbfcLy05+O7vjJT7xWN6jc8+MfeztsUCWvvtLZ6SPazTc3hPree+8XsddeLFBv3erLSw0KFA895P9YtUEtH/WXM87wNNv99zfk8G99q5/u7m7/uvWOO4D99vP/xlpncQ646Sb/LzOzZ9f98PWXt7zFM9kNYvLe+tYxdgeAf1IqAaef3pDf+9GPvN+ZFrrf4JIbYaEKSGpwwxBV8hrwN3yTLjlImgoya5bvar7llroazNVXX42rr74aJ5zgG0l/9KPRN265Bdh3X+CII+r2WyTOAT/8oS/11Pn/MxsnS5b4PqGVK+t62DPOOANvf/sZOP10D1B7euADxchIw0DSD3/oneMZZzTk8PWXU0/1jruBCcLw8Gig7ulB8z33+LlvQKBYvRp47DHgHe+o+6EbIzNm+D83a1CCcNRR/v+Ml1Jn6NKlvg537LF1/y0AuPFGD1CnwH+ixklTk5/3BrCohMFKpdGlzUtt45YcJE0VWbIE+OMffadpnWTvvffG3nvvjWIROPNMz2b0buj1f27ZoEDx0EP+HoHTJlAAvi5QKNQ9UM+aNQuzZs3CO97hG6lvuw3+N+bPB44/vq6/BYwB1MWLp+hNDDWZM8cnCA0K1K96FbDbbsANNwC4804kg4MNBahJArz97Q05fGPkjDOAp56qe4IAjM3FHXcA3ZsG/VWkZGt1ljVrgEceAc46q+6HbpwQ7dLQkpu/ZxJGRsY6uhsgDbxobtLlJTikaSpEQdcxUF9//fW4/vrrAaASqFd+9U5gYKBhpTZiMqZVoNh1V+DVr647SHr44Yfx8MMP4zWv8YH6ph80NlBMOyaDZMkSj6wfeaTuhy4W/XzcfjswfONSlGfN8mvdAJl2TAbg/U6h0LCS2zvf6d3Nsq/+wlOpDSq13XijxwDThkEFqi9Da0CC4PuDHIaHq0ttH/nIR/D2OjroUumlW2oDcpA0dWTPPf1fJFRqYhOXSy65BJdccgkA4DWv8c67fMONPnt/zWvq9jskzvmM/XWvm0ZMBsmSJb7zc+3auh1y2bJlWLZsGYpFn+GWbv+pb45pkCeflgAVGGN2GhSozz4bcAMDcLfc4m950QBvvmaN36YdQKUEoUFzf9JJ3rXhB9f6Rq03vrEhv3PjjZ413GOPhhy+cdLcPNbQU2ehklsFhI3Wwi666CJcddVVdfudu+++B+ecczoWLNgTSZLgyiuvrPnMd77zHSxYsABtbW045phj8Otf/7rq/XvuuQenn3469tzTPsZkSQ6SppK8853AAw94OqDOUiwC7zm9G8c8czOGzzi7IcXjaVlqIznzTP947bXhz41Tzj4beOfw1RiYuWvDLn364Q+nIZMB+Ch64onAddc1JKM+6STgvLk/QUv/dgw2SDmJyZh2ABXwJ/3www1h8goF4NwzenHsszdj8G3v8I3idZbHHvO+Z1qV2kiCd36cuDQ3A80YhkNS
+a05c+agq6urbr+xbVsPDj10Eb7xjW+gvb295v3rr78ef/u3f4tPf/rTWLFiBU466SSccsopeJr9mWVPTw8WLbKPMZmSg6SpJO96l/cqV6v3y5ywXDDvZnSgH/fs+96GHH/aMhmA//ff170OuOqqxvTGHLoVp+HH+Nku5zSEyVi9ehr2ZHB573t9oG5Ab0yhAHx09jV4Drtj8+H1Z1ABD5Je/Wrflzzt5JxzfBZVR3aBywdediu60Iu7d39P+ofHIUS+T0u/Uyj4uR8aatBVbg5NGMFI4kttzz77LJIkwWN1SsTLZeBNbzoVF110Mc466ywUlDaCr3/96zj//PNxwQUX4OCDD8Y3v/lN7L777pUqBwCceuqpuPhi+xiTKVPrbHZ22X13zzJcfXVD6NcDf38NniouwHdWnlT3Y/Or2qZdqY3kfe/zt2duwP2qCjf/CG0YxP995lxs3173w+OGGzyTQYTYtJN3vtODx0YE6i1bcOjTt+MHeBfuuLP+WeqaNdOYyQA89fiWt3i/04CbG+5/3/exrrg3vrmyMSX+a6/1d/ZvwH1xd4w0N/uBNGDuk5ERJACGXDPKZWDlypXo6OjAAQccUPW5iy++GF1dXcFNlsiAsZ5zK+8bGhrCAw88gJNPPrlq/8knn4zf/OY39Rhiw+UleMHeNJf3vhc491x/07XFi+t33OeeQ/KLn+ORV34Gt92eYPPm+l6i/7vfedr77/++fsfc4XLmmcCHP+wDdb3/T+Waa9C/9wH4zTPH4kc/Aur5x/elEnDFFb7dY7fd6nfcHSpz5/qG9muvBb761fqybT/8IQojw/j5budi8KZWfOQj9Ts0AFx+uT/dc86p73F3qJx3nr/88he/qG85eONGJD/9Kf54zN/hzp8VsGkTMG9e/Q5/332eRb300vodc1zysY9NjAUtlXyWk4VFOfJI4N//PfyZ4WG4JEHJFTEyAqxatQqHH354DVvzoQ99CGeffXbwUHvuuWfVa+c8ARa6qm3Tpk39LhQAAAAgAElEQVQolUqYL3oA5s+fj5/97Gfhc58ikjNJU02WLPE3OKxDye3GG2/EjTfe6F9cey1QLmO/z5yLwcH6J+yXXur/gm5aB4qZM31T9XXX1eVPV88++2zveJ5+GvjlL9H2/nNx4IFJ3R36nXcCzzwDXHhhfY+7w+V97/N3g6/c9bROcs01wMEH45j3H4m7727GU0/V79BkS2972zRmUAEPUGfPBv77v+t73BtuAEZGsOc/vAcjI/X3O5df7t3ltPY7gAdI9S63lcsefDU3o1BIMDzsmaQjjzyy5qNz587FwoULg5vsFaJ7MMX8BUwibjfjnKvZN2XFObfTb0cccYSbUnLeec7NnOlcX1/9jnnEEc4dd5xzzrkTTnDuwAOdK5ed27hx44QP/eKLzrW3O/fBD074UDtMzHHfcYdzgHM/+lH9fuwrX/HHXLvWff3r/unKlfU7/BlnOLfrrs4NDsZ9vh5r3hAZHHRu3jzn3vGO+h3ziSf8hH/5y+6pp5wrFMruM5+p3+Gvu84f/qc/rd8xGyWp6/7BD3pD3ratfj96/PHOHXaYc865V73KuQMOcK5Uqs+ht293rrPTub/8y/TP1lPnly9fXrdjVWR42M/70FD9jjkw4I85MuIGB/3TAw44wP3nf/5nzUe//OUvu87OzuB2zz33VH2nt9evQbk8tq+zs9NdccUVldeDg4OuWCy6G264oeq7H/7wh91rX/ta9bTlMUISWgsAy1wd8EHOJE1Fee97ge3bJ3w7gCuvvNJfSvn73wOrVvnjAvjgB31p7J576nCuAL7/fX8PpmnPZABjNasrrpjwoVauXImVDzwAfPe7vqv35S/Heef5C3y++906nCv8P3r8+MfA+edPgz/1TJOWFn/xwtKl/n++6iHf+Y6vB5x7LvbZB3jTm4Zw+eW+TFAPuewyf/P6Bl3ZvmPlvPO8IRP7PFG5/35fD3v/+wEAf/VX/n65v/hF
fQ5//fVAby/wgQ/U53iTKvSnZ/VSzNFamCsWgWIRzc1Ab28v1q5dqzJJH/rQh7y/CmyvfOUrK58vl/0Fec3N4XsSt7S04JhjjsFdd91Vtf+uu+7CSSfVvze2IVIPpDXdtynHJJVKzh10kHNHHVUN0zPK4sWL3eLFi5175zs9MzWaIfb2OjdrlnPvetfEM6xy2bnDD3fumGMmdJgdLsFxf/azziWJc489NqHfuOKKK9zPP/IRTzXceGNl//ve51xXl8/CJioXX+wPn+VUpyyT5Jxzq1b5AV188cSP1d3tFf3ssyu7rr12qwOcE4ntuOTxx/2p/vM/T/xYO0JS171cdu7gg5078sgJ+Z2KvPvd3u+MKvrAgCcK3/72iR/aOU9SHXJI3KlOeSbJuSrmZ8IyNOTctm2uzOjlu+/+jSsUCq6np3fCh+en2t3d7VasWOFWrFjh2tvb3Re/+EW3YsUK99RTTznnnLvuuutcc3Ozu+yyy9yaNWvc3/zN37jOzk735JNPVo6XdgxLdgSTNOkAZSpsUw4kOefcpZf65fnFL8Z9iMWLF7uzjz/euWLRuY9/vOq9v/5r51panHv00U0TOs377vOn+d3vTugwO1yCTnP9eudaWydcP7ziiivc+gMOcG7BgirHd++99ZmzUsm5l7/cude9Ltv3pjRIcs65k092bv585/r7J3acb37TT/Rvf1vZtX79Rrfvvs69/vUTO7Rzzn3qU84VCs4988zEj7UjJGrdL7/cz9mdd07sx5591rmmJuc+9rGq3f/wD94dPfvsxA6/YoU/za9/Pe7z0wIklUoeedSjzaKnx7nt212J1Ta//e1L3CtecZAbGJjYoctln390d/vXd999twNQs5133nnst7/t9t13X9fS0uKOPvpo96tf/arqmDHH0CQHSTszSOrvd+5lL3Pu1FPH9/1rrnHPt7a6sidenfvGN6refvhhv/tTn+qZ0Gm+4x31Y0V2pKQ6zQsucK6tzbkXXhjfD1xzjeudNctP8pw5zl1zTeWtctm3aRx++MT6M264wR/++uuzfW/Kg6S77vIDu+yy8R+jVHJu4ULfgMdk48aNFfZtzZrxH37LFk+S1IsV2RESte4DA87tvrtzb3zjxH7s05/2bOzatVW71671uz//+Ykd/qyznJsxw7nNm+M+Py1AknMeIG3bNjHHMDLijzEwUAWSnPPYqbt7YkThKElV1/ap8UoOknZmkOScc1/8ol+i1auzfe+aa5zr6PDfpa2joypQO+fcaac5N3NmyW3ZMr7To2zus58d3/cnU1Kd5iOP+MF94QvZDx4x///9325CZZ+REV9qOOSQ7Oz8lAdJ5bIv+Rx44PiDxdKlfoKvu65q98aNG92GDR7YsypcZvnc5/zhV60a/zF2tESv+7/8ix/cAw+M74d6e53bZRfnlixR3z7tNOdmz3bj9jsPPpjd70wbkERs0kRYVAJa5XINSJoowOEsUj0qshOVHCTt7CBp40bPZsRcvsFl332rAzRt++5b9bGVK/3uT396fKd3+une2b344vi+P5kS5TRPO807+96MNfyI+SeQc+CB/sKWrHLtteMHWVMeJDk3NsClS7N/t1x27sQTndt775rJpbF//vP+8MuWZT/85s2eRTrzzOzfnUyJXvetWz1Nc8454/shuoTzl79U3161yrNJn/zk+A6flUVybhqBJOe8vxkFOZlFlOwkSJooyJlKLJJzOUjKQZJzvnmoUHAui2EmiR6kk6Tmo0uWDLjOzuxVpfvv94f80peyfW+qSJTT/N//deNikyLn/6ab/O7vfS/b4YeHnXvFK3zJbjxEy7QAScPDzu23n69JZvXIV1/trHIdjX3bNo9/Tz45+6l99rNu2rFIzmVc9098wvudrINcv94jyLe8JRiFzz3X539Ze5MeesjPfdbbOEwrkMTKZZmFANaoY5AgybkxoBN7yxCSqcYiOZeDpBwkOec56Ze9zF/KERsRI5kk55z77W+3
uEKhpr8yKOWyc29+sw8y060XiSTaab773dThHn/wXXeNmv9y2bljj3Vun32y+cMrrvCHu+mm+O9wmRYgyTnnbr7ZD/Rf/iX+O9u2Obfbbn5iFXvhY//a1/zhf/7z+MOvX+9ZjLPOiv/OVJFM675pk9fjY4/NVs/9y790rrk51V6eeMJ/7IIL4g9dLntyd8YMf3pZZFqBJOfGwE6Wuad7LTFnooGkcrnS150J7NC9lrKCq0ZKDpJykOSFMmPlJmCqfOYztQFa6UlyzjuP97/fX4jym9/EHf6//ssf8mtfyzCGKSbRTnP9el9TfN3r4jxKX5+/zlmyScb8U4/yP/5j3On86U/+dI4/fvzZ3LQBSc75vpb29poGYFM+/nE/9/ffr77Nx97f79xee3lGLqaiWir55KCtzbesTTfJvO4/+IFXzv/3/+I+T5e6fuITUR//m7/xZJWxVDVy2WX+8P/6r3Gf5zLtQFKp5FFMT0+coRs0jwaSnBvDU7GtT1TFiz2dHSU5SMpBkpdy2bk/+zMfHZ97LvzZ3l7f7DJvnlvf2upKxGAoAdo57zy2bPFXqe+1l2+DCsmDD/og8YY31Od2HpMlmZzmd7/rTSXmLrCf+pSjRq/uXXbxVxcG5t85597/fhfVfjM05C/WmjkzHjNoMq1A0jPPeOrg5JPTvfN993m0/4EPmB+RY7/tNo+pzj03/fD/9m9+nS65JPbkp5ZkXvdy2V9d29npHLunjSrbt/u7+u+2W/Qduzdt8qax114+FwnJo4/6POMNb5j8EvOKFStM8FFXyULd9PerzUKh86T+7rRqNjFPE73ort5SKpXcihUrzPd3CpAE4KMAlgEYBHBlxOf/D4D1ALYB+B6A1pjfmfIgyTmfunZ0eABkeZTeXn8DmELBuTvuGLuZZEDIeSxb5qtKb36zbQjd3f4el7vtlu7UprpkcpqlknOvfrW/d9Jtt9mfo4bV0SB9xRVXRN1ev7/fuaOP9vc9/OMf7c998pNuXJf8S5lWIMm5sfsdffSjtnKuWuVvtbBgQRDpa2Oni0i/+U37FO6/35eHzjhjamXSWWRc6/7UUx4kHX64c88/r3+mv98nccWicz/5SabDL1/uicLXvtYO1n193j522cW5desynv+o1FPnV69e7Xp6JnbrlCjh6CR0dQeBqb6+GuUMgSQin9KqenTjyKlUZnPOuZ6eHrc6cOX3zgKS3g5gCYBL0kASgDcDeAHAoQDmAPglgK/E/M60AEnOOXf33R4oHXRQLaPU1+fTrCTx5TnnMoEk53w1D/D3fpEgaPVqf1ftJMnWwzFVJbPT3LTJe+rmZt8rI+Xb3/aTd9ZZFYcWC5Kc82W0uXOd239/58RfJLmeHuc+9CF/+AsvzHbamkw7kFQuO/f3f+8n4PzzawPGI4/4vr099/TNLgHRxl4q+V6XpiZfWZKHv+YaT2bttVe2K6qmmox73e+80wOl/fevnd+BAefe9ja/NqN+J6t8//v+62eeWet3Hn3U4zPAuVtuGd/pO1dfnd+yZYt76KGHXE9PT+MZpVJpDMlIFFkujwGk3l4VvaedH1X1tm+v1ftyeYxtMg4/KVIqlVxPT4976KGH3JbAfSTqBZKaGvJfJ3US59xNAJAkySsB7JXy8fMA/JdzbvXod74E4PsAPtXQk9yR8rrXAXfcAZx6KrBokf/H+pNPBn77W/9v288/D1x5JXDuueM6/IUX+r+M+9zngEMOAT7xCWDGDGDdOuDrXwe6uvzfyb3+9XUd1fSQXXYBfv5z4C1vAc46y8/76acDw8P+n9MfeMC/vvZaoCm7We23H3DrrX7pXvta/39Uxxzj/5vq0kv9f1594hPARRfVf2hTXpIE+OpXvTJ+4Qt+rt/2NuDQQ/3/jC1dCsyd69dnwYLMhy8UgKuv9v8k/3/+j1/OD33I/40ZmdarX+3/o3Du3AaMb6rLm94E/Oxn3u8ce6z3O3/2Z/7/IL/3PWDTJuBb3xq333n3
u4FnnvF+5667gE9+Epg9G3j2WeA//gNobwduvx045ZQ6j2ucMmfOHADAU089haGhIUrSGyvDw/4P0woFvwFAqeQ7HgsF/ydq4xTn/OHpUPLwxeK4XFrDJEkStLS0YM8996ysRUOlHkir0RuAi5DOJK0C8E72eh78rc13STv+tGGSSJYt81ddzZjhU6yWFp/Nib8iz8okkTzyiHMnneSq+o5PP336l9i4jDuz3L7dsxr77z82OUcd5dy//3vNJWpZmCSSnh7fe1wojB1+n30m9O80NTLtmCQuV17p/06eJmjePOf+7u88FRchobGXy8798IeekKK5b27291Qaz72spppMeN0fftg30tOd5ItFX3+sE7X86KO+3M/9zutfP/G/MHFumuu8c76k+bGP+f8hoslZtMj/fVWg+zp23N3d/gapbW1jh99zT+d+/ON6DWDHC+rEJCVuR6DgCUqSJBcB2Ms5d37gM2sBfMQ59z+jr5sBDAFY4Jx7Uvn8hQDof+sXAXi4zqc9XWQegE2TfRKTIDvruIF87PnYdz7ZWce+s44bAA50zs2Y6EEmjURLkuSXABYbb9/rnHt1xkP2AJjJXtPzbu3DzrlLAVw6ei7LnHOvzPh7LwnZWce+s44byMeej33nk5117DvruAE/9nocZ9JAknPudXU+5GoARwC4YfT1EQBecM5trvPv5JJLLrnkkksuO4EUJvsEQpIkSVOSJG0AigCKSZK0JUliAburALw/SZJDkiSZA+CzAK7cQaeaSy655JJLLrm8xGRKgyR4oNMPf4XauaPPPwsASZLskyRJT5Ik+wDAaC/SVwHcDeCp0e0Lkb9zaZ3PezrJzjr2nXXcQD72nVXyse98srOOG6jT2KdF43YuueSSSy655JLLjpapziTlkksuueSSSy65TIrkICmXXHLJJZdccslFkRwk5ZJLLrnkkksuuSiSg6Rccskll1xyySUXRXKQlEsuueSSSy655KJIDpJyySWXXHLJJZdcFMlBUi655JJLLrnkkosiOUjKJZdccskll1xyUSQHSbnkkksuueSSSy6K5CApl1xyySWXXHLJRZEcJOWSSy655JJLLrkokoOkXHLJJZdccsklF0VykJRLLrnkkksuueSiSA6Scskll1xyySWXXBSZciApSZKPJkmyLEmSwSRJrmT790uSxCVJ0sO2z7H3kyRJ/iVJks2j21eTJEkmZRC55JJLLrnkksu0l6bJPgFFngNwEYA3A2hX3p/tnBtR9l8IYAmAIwA4AHcBeALAfzboPHPJJZdccskll5ewTDkmyTl3k3PuFgCbM371PABfc84965xbB+BrAM6v9/nlkksuueSSSy47h0w5kBQhTyVJ8mySJFckSTKP7T8UwCr2etXovlxyySWXXHLJJZfMMhXLbZZsAnAsgJUAdgHwbQDfhy/LAUAXgG3s89sAdCVJkjjnnDxYkiQXwpfo0NHRcczChQv5e6knoxwy+J7cx1/T89C+tPe0fc8//zwAYP78+Zm+Z/32RN7LMj7rs9brtP2xYq273M9fW+8lSYKREV8Vbm5uVr+jPWZ9r9Gf1x5j3wvNT9q+mPeA7HYo96c9j9HZmMd67xvPZ0KPafvkc+112v4sEqMnMXYZa3fWY8x72mcbZXtZbDP2OZes+7lk1YcY3ZqI/W3YsAHbt2+fcF/ytAFJzrkeAMtGX76QJMlHATyfJMlM59x2AD0AZrKvzATQowGk0eNdCuBSAFi4cKH70pe+hEKhEL0lSVLzKJ8DqHodMgb5XDlf81E+L5fLAIC/+Iu/gHMOl19+eWW/fCyVSlXfS9tKpVL08/E88uOMjIzAOWceW76XtvEx8jHTcbQ51F7zR9qvydatWwEAs2fPrtpfKBRUHeD7LV2izxSLxRod1HSzWCxW7afXfD9tfJ98X/teaF/oGNr5WDZG72tj5TZl2aAMRDFBJiRpdijtketa6LXUUdLHWN2WG9mOZbsh+007Ftme9Vr+
LrcvzQfJ+eB2Je2Q7+PzL59bYvlbaXt8v6VX1nuWHUo75RuAoO3Ebtb3NFvTzkc7/1C8015bmzbXch2011xCoEnqxQUXXJCqDzEybUCSIjRbNKOr4Zu27x99fcTovlQpFouYPXu2qRwhRQCqDQnI5oA1g9ccLd8vnYt0OOS0hoeHAQBbtmxJBTwxQMd6X4KZkZERE+iEjjMRpyzBXhbgI+derosmIUPnTgkAWlpaUh2rdKIhgBMDYrTHJEnQ1NRU+Q495+9ZwMc6bhowshy2HHPICcfaWZqtpQEczc4se7PAQJqNWTo/nkQibZ9mb9y2LGAk7Yvv08YswU0WO0uzNbIzbb01YEM6Re9zAMLti/ZlBR7SnqRthOwvdl/a7/D3YxKNLEDHmlcJdMYDZOoR0yxb05ICin8TlSkHkpIkaYI/ryKAYpIkbQBGABwDYCuAPwKYA+A/APzSOUcltqsA/F2SJLfDA6iPA/hmzG82NTVh3rx5qmNOc8qWhJgeK3Oy2B7LcaUBnqGhIQDAc889ZzrU2OexDllz0GlOmd6jeZAgJ+R4NUfM518+J5EGz52s3GcBG/lcOqdyuYxCoYDdd9+9xtmGHF8I2FhOOYvTDYGrNHAjn9OcxWaUWe2IC+mFZktSF6SNWQ44BHI0m+KAPw3whGxDAzCh48RuWiDh+9Lsie+jz2YRri/SjqQtydchvdKATVoiwV9nYU45MIn5rmYn2vvWWKxEXJsrOadZwAuJXH+55mkJQyhOpSUSMXEsNtnQwBH/fnd3dybdtWTKgSQAnwXwBfb6XABfBPAYgIsBvAzAdvhL/N/FPvddAC8H8NDo68tH96VKf38/Vq1aFRVkNCOyjAHQyyKh4CGVXL4OoXT5+rjjjoNzDscee2xQqWMAF1fINDAlGSX5nrVpn5WslPVcA2KaYfGsRANhdFw+jzKAyHXgr/l69ff3I0kS9PT0BAMFz341wMH1TT5mAVjEFHHWSANdoc36nPztLOxT1iy43kAsS0ITyypZbEwIPKUlJnKfZl/ydRb7sYCcc67qM9xmpF3R3IVYXHrMCsC47fC11cBEGtBKs6O05MTSc+05tzXNftJsJjbR0UBaiEmSc5dmSyF7SotLWcgB/jwGNFm2dOutt2bWL00SLcve2eSwww5zS5cuTVUQS2TgTEPfPFCHULWlDNxhTQSUSEca+37ouQZaOCjRxh7LGvF92vyTWCwREO9Q0wBKU1NTDRuU1WFqzjL0fgxgiXGsIZBvBRQNpMj5TbMZ6Uhj7SUG2MfYCtdHzYay2kfILizgYjl1HhC0R24baSBEAyBpfl6umWYnmh5I/QFQBS64Xk0EdGSxizRAL5MEK6ng9pFmL3I+JgPAS/BugQvJvITAcgiojzfeSDuUwN2KJ3xfjL089thj6Ovr23katxsp/f39WLNmjYnUY7JbnuFYwYScCOCveLIkDXRpDtHKbC0knobGZRDh2WsMQAtluFYGrYGqNAAJ2MHDCiTaHPPjaUJrF5PFpgUTLbhoWwjgxJTg+HMKNNpxrUf5OzGblq1yoJUleMggQoGKr1vITmhdY8BXDBNk2Y5lQyFAlPZeTO9Q2mYBTMtv8Pnk8ydFA1x8reiqTm4zVqIifaWmQ9rzEICx2CEL/Egdj2E80xKMLDFDG79MQuT8afNuiZZEclvXbEfqRywQS0v6s24ygQiBPfn6mWeeSZ2bGMlBErwT7+rqinL6IUAE6Jm1JjIz4PtCQEhzeppDL5VK+MpXvgLnHD7+8Y8HwYkFZrSs2XpfAqhYaj9kCJrT19inGMZJc/YkWQAQfVbqiGSbCoUCNm3ahCRJsMcee6isUwzAkZm0zIJjMuW035TnlgUEceeeZg8xNhFrD5bzDulRGkCJyaBD+0OfkbYhM3bt/KxxcJvXwM6OtAeZBADhcpYEGiGgHsuWWvbAj2eVvLTftvbFgh/NHrgtNMoerBgRAi+WTYQS2onYQZqdWfbAY0esPfT19Zlz
m0VykATP6uy2225RWW5IrIyWb5z9sBRZa0aTihSjtM8++yycc3j88cdrHHZIscfLCoUUWSo0zZMFdDSmxxLNqXMnrYEcek86Pc6ipDE5oefFYhGPPfYYCoUCFi1apDr7mBJBKHCkOfEQwLEyWannVtZqBVwepEOZaYwD54DHcqBZQQ7tm0ivjnTeoc0C9VLv5TxK/ddEC75NTU1V+7iuS72PZWosJiYW4HC7COkxZ3VimSB+vjGMDZ+DUGKbxd/T+vL1lP5MA7Ix+h8DatLsQsYLzVeHALql7zJG8XHKxxBwt3y71H1tTcg/UfsD7SOdqIfkIAlAX18fVqxYkRq4NGOOCUihcgMQ38sh2adQJlkqlSr3R1q8eLFpbFmyBA1cyX1ZX8eCs/EEJ96ELR+1IETP5TpodLcGNCQI27JlCwqFArZs2WIGgZhsuVgsqtlyoVBAc3Nz6ufG2+eUFqykvmu6LgNUFqZV030LeKUFnLSAYoEo/t7w8PC47cI6pvU6FpSRnpNucz3XAhK3gZDwxMNikiTgkuwRf6QgFlMmjunbi9Fpy2a0fSG950yrllhx1kgmHll0PkbfLX+v6XsIAIV8v7Y/LbGO0fU0v649cpaIQF1M4gHgpXsLgMmQzs5OHH/88RMCL/SoKTLPSNOy5jTl1ZSQFFgq8ubNm+Gcw6233lr1Hnf2/LuUXWQFLbSPj5EeLYUOsUb8uUZNS8AinRQwllnIYM6dtwUKYhyzBCj8PXpOwPuEE05QnXZaMLAy8ixs0XjZ0SzOmtaV9EfqtAVQYhs/Q0Bb0/2RkZGaY2v70vRa2m1It7Xnci5jdJv0OJYBLRaLaG5urtFvDkokm2PpWwhwh55zYMMZozSQoiUL1ibHXo+E09Jvrtca80k+NMQ0ZgETabotfXZoXywYsXQ6pNt8vjSgbSWZXKf5+7Seml7LrampqaKfGqjmn6P3Nm3apOpAVslBEoCBgQH84Q9/MKldbfE0FsHKHopFT/sRFc4lDWgBtVStzB64gvN9XV1dcM7hqKOOUg0nC/qPBWwhA5XGamU/POhaQYje1+aQzyM3Zmmk0nBDrJAsQWjgSz4+88wzKBaLaG9vD2axoWw47TvyuaXDlv7KMcYCqxDAouM65yo6n8YChbLiiTJDUv+4vmZJAkK/r9mjHE+IDZB6y/U7JJzxkbaTJEklm5asR4gVkexJrL7LR8mcakxq6HkIPMlz5c9jEgVLh2l+uFiAgDb+e845FItFU881Hbd0PqRzlu6n2UjWjbNRNBcyFln2bemxTBqkyHXhOq6t48jISGX+S6VSZT3onCcqOUiCn/zW1tYaB5EFGGUJIhIQ0XNa1BAgCoEi6dD3339/OOfQ09MzrkwmNmOJBUny3OUjYJcMJBAKBRBpVCQ8c5HrapUKQoBEZjMSzPT396NYLGKXXXYJMkgTvbxZBpSppL9WJs6dbxpgSQPoafteKvrLQa0F4uXriehvrL6+VPTXYgY18MF1Pg2oZ9Xh2PJtjP5qejyV9Vfzv1KXZeJo6W/oCvIskoMk+L+N2GuvvTJlzllpW6D2BmzjMbAspYfDDz8cpVIJDz74YNVnYowwxuhiadrYrELLIEJMTwjccAPLUnIYT1ktNmBoRi6zZS1AWJmxNkeWaE6OB4G0wJCmt1mDQ6hnJ1TyjQExdJ7yIoI0PeV2rOmqJXwNeMDnTLKms5IxkbprBYRYcBMDUPimMT0aWNHAi8bmaL5U6ii9liwwXwse3EMMjVxn+k4okUzTLe112r195PGt+/yENhojB2RST6U9h/STzzXXS97wTDqgsXRSZzWmMc3HhXyx5ROtxzTdTJIEd911V5T9pkkOkuAbt5ctWxadFcU4kSRJKv/+noXuBbI3q1qBzApcVr+GBbbofWt/qTSWrWs9IqFM3rqBmZbl0HmnOQz+nM+xnHcts9HAl9ZsagUbKyzRK48AACAASURBVMumYxDg4vtlP4e2LwagxTiWGPAVo6dp
SYLUUSsjt5IDCZY0gBWjt6HPW2BNsxELgEkdlUEuK/jSsvC05EACLQ3oaMmApbsxupf2Ge21FRi1gCv1U84BnyPNxjV/YLFGcr2yJAZZEwKrhy7NZ2bpqZO6aoFIrq9ZdFTqa0hXLbZIu2lnGoiP0TH+ur+/P2oMaZKDJGRv3A4FCG54mqJmNTbL4Q8PD5tAhvbddNNNcM7hLW95ixkorCZu51yNUVrZkMx46Dmfm5DEBgSehaRlJpbhhEAJBzLyc/I9K5jw1/fccw8KhQLe+MY3mlmXDAgh0KLpoqaTIcBC60jrznWRZ8hpTl7qktQvDpqlXlqAmgcCi02K1ccQU8QfpR7KOY4B0sVibfN0CJjw5mYOmi2QrOlpSJe5HmqASAakkD6GWExtvjRgkjXJkwDBSvSkD9N8nJbgWe+F/K0FoEP6GMNcWroodTLkGy02RfqjNHBsJXPj1UVNL61zCLFEQPh/7ixdvOGGG8x5zSI5SAIwODiIxx9/PNVxWFmNRSvTMYD4rEY6Dw1wSaBlZRi33nornHM48sgjU4Oc5hBism4r2+aPlrPgwCrGcRBwA8bu6gtkdx6xZQ0ryFhBT25r165FoVDA/PnzTacQ4ySyBK80gC/nl9PoxWJtw2lals1LWqEEIEv2HSphpOlaLJi3WAQ5PzEAnz+SU+e2qoFgrod8v5VhW5l2SI84o8R/J615OrTF6F8MqNf0MY1B58lRknimvlwuo7W11QRclu8M6Y9MFKSuW8fgNqK91vSOdI5ey/HTezEiGTb+3IpXMhmmJuihoaGadQ5dWav5MHotfUyMnvENCIMkLQYDyJmkeou2uCEnQd8JOQV6TgZOz+k9LcvlTjsm+KQFGucchoaGzEzKyq5CoCotaGmliDSABKQ7A5p/mj96XS6Xq54TMOWBSQsuacxTWlaVls13dHSgUChgzpw5mXqV6ql3JDIIkfOmeddAEInGLkm2Jy0TT9O18eidDGQ7Wu80tlMCcMk40Rqnlbuk3qVl9dpjCMBrQCuL3tFcxLBJfL5j9E76E/leKIGrp96VSnqrwHTQOwme+fuNaglIK4VpemfpXj30rrW1NTi/sZKDJACtra3Yb7/9ojIhmWFqFHKa4VtAI82oqcTG94W27u5uOOdw3333mY7DOo9QZqZl31nLGFzxqXlQy0607Hg8wEYz6jTww48TAleWoa9btw4AsGjRomiKWNMzOdfDw8Mms6NlwiEgnRZkpM7xwKGVJEL9Z2kB0Aoo49ExcrA059QfGJMJk06GQK3WzC91Rgs4IeCSFTjHMDpyftL8GG1cxyQrYvm1tCQuC1OdVnYN/a5W9sqiX5qOabYqgYr0YxpTGAKpaT4u9H6opJoGiuU5Wrolxzwe/eJzzddU6hfXOc2nxejc1q1ba/zEeCQHSQB6enpw//33R6HjWCdm0YAW/axl+hrokk7AciClUgm//vWvAQBve9vbzFq8tdHN0rRNC55awCUnJxVabpSJhQIiZ+P4PHLjlZkHdwLSQVk18jSQlWV74oknUCgUsHz58hpQFpNtZdEty2lpbJIEXjEBUOqWBbDSekFCW+gYoYAZ0i8eOEP6xbN60iMrOFhZvCxJxAQzq7dDe532OQuUxeoXT1y4rqUB+zTARaDE0rEQsE8D8jG6Z/mqGHAv93GmaGhoSB1fmn7VS8csQGT5l5Bvo8/EJI5Wn1GIJdKAGB+rnIuYxJHPtebHvve979UcYzySgyQAXV1dOO6444JBh4uV5dMjGZ9WLrOcQJZgkgZiaCsWiyiXy7juuutUh5GWnUkKGQg7AikhB8ADicYcaSAmLXhoGxm9bMZOe605F8sRaCAmSRLMnTsXAHDiiSeOW5c4MKa1q7cuxeqTFnwkozSZusQ37e68WvIzHvAbq0sxyVYIDMcwROPVJe6XpE/QwIkFZi29yaJPEqRwYDwVdImDk6amJrS2tgZ1iesfPba0tEyqLsUkVhPRJQvgDg4ORiVSId0h
/YzRKa5LxORPVHKQBN+4/eSTT1YUTWaEmtOysizOWABhZbMy+JER/bJ4K6O2lO+ggw6qcWj0/dD3uLOiR06POlf7VyvSWUm0T6I5Ljl2mkuNHeLrE5O5h7J3y9FZ2ZgG3ELZ+fz581EoFPD0009XnXtWllHqEVBdTmpqakKpVEJLS0tmRsjaNB3TdE7TFb6WFsvDX8tyiKU/VvBLK4XQcwAVvS2XfR/b0NCQuX5Wpq6B+Zj92rE0NicU4EIZ+ETKaxZ7zXWMHi2dylIOSfus1BEOkGSQ5udMfkgbn9QdjT2U+qQBCrJjje0h4eeRJP7O5yGGRWOJpH9L0xMrXqWBJA4YNd3RRLPNWJ2S+yRb1tTUVBX/tNho+RK+rVixInUcMZKDJNRetk4MDFDdYK2VfGICnXRMhUKh8puFQqFmwZMkqXrkQgwRPxY5fHoeyhKkQ7FAmAyMPNPTsjqu/NJRSaCkCTdgWgMOlGiMzvnb/tM8jIyMVBwJd4byuBIgaZmctVnASgNgFsUswd549MZyMlowkU6ar73FEExk09gADUhZzq8eeqMFLS3YkA7RetBxCARo5QftMZZ5StOZNBbA0pusLIAMVBqIlvrD14zWOFZ/tMxffk8eU/qiNL2RDFIj9EY+T0u+Jrpp/kbTzSys9nj0xgI4XGd4UqQlVwDUZJ7fwkZ7neZrZIyS8apUKqG3tzdVD2IkB0kA2trasHDhwmhFCqFljnRDRh+iHDW6Ue6XzzklSY/33nsvnHM4+uijVaaIOx1S6JDzofe5yGxWOhxJW2vZ9HhKIJKWtuhsK1jVI1DRuC0doVswvPWtb63oiHZ/Ii1IhdjCegGakIMZD6jR9CNNT6g0IQNUmq5IEKMFGLn2snSRRS9kMOLPrWyd6wfXkzQ2RwJOLaGRYFfTFc3XZLl/laUbUi8s/QixgpZYyRKtH59zCyxoaxhaZ+1qriw6oQHxWKBr+RBLTyTYlSAlza9oaxrSH02XtGPG+AypHyHdsNhiTUekHXI2rh6SgyQAvb29uP/++6OcaMhZNjc3o7W1NdUQLCcpEbvM4iwHZgGt1atXV4K0Bra0R63+awVVDXDRc+d8YyNQ3S/Ax02izZFkkuQmnaLMukIMAK0zB1qhfbGgizvoQqGA/v5+JMnYLQpC2R2fC1ka0PSD5jjkDEPBMpTtDw0NVQVV+XkJyLXjy/PQ2EcOtDQdkWI5ylgdsbJ+DVhZax/LIGm6GGIApL5nYRvT/IjM9LlP0QKipiuhzD+WMZJgTOqHFmDpd6QdWHoyHh3RWCLNtid6MYeWCFogTCuvSf2gRzleqR9kX1JfND3RYk4IWGkxQuqIFaNikzUZa6T/sFhoij8TlRwkAejo6MAxxxxT45yAcG1fY5GszC8N3GjMkHxPgpkQsBkZGcGmTZvgnMNNN91UdX48++BGkSVoSeOU2RIPKLyUEeN0ZLOjBDD8UWuM1AKZdEb8OXdCFkMwHn1oamqqvB4aGpp0fbAYgx2hDzIYyYRkOujDRP2DxQzx+5jtjPogS51tbW110wf6jmSapU5Mhn/Qkh2pA9o97jRgSp+LSYYpCZoO+sD9A2+Y1/RBJsDXXHMN6iE5SIK/78y6detMpsii07lhcAMCbMSuZf+h4JmG0q1MbmRkBMuXL4dzDgcccEAws6NHyQpJg4ihSKWTkE5Bq/3L7FrL5mKyf/5ezH2SLIZQ6oAMkjHBEkAlk6FLhWkugOryU6FQqAJVWbI4iyXSPmeVU7SsnmfzsTR5FvYHqL2BK+3XgkehMNZkbWXfWrZu6ZK28ew9y/rHsD3cJ3C7iGEIya4ocJDdSJu1snFtjTV7T1t7OheZtfMtTQ+kz9TKr7ycpr2mRxmkR0ZGgracttbWmmdZf27bNE45HxqrHooV0va0NSIdKRbHeu2amppqQDn5dHkM0gMNwPHz4fOtnbum95Yu8PnS5lOLu3IdLF0J/X5WyUES/IIPDAyoztW5sUbrYrFYKZ3w
eiig/8O1lUVoypgGlJqbm6tATnNzcyVLLBaLFbCkKQv1fJTL5coYyKhkGZBnDPXKHiTtTc951sCdFwc4IyMjFYDDnUJzc3PVvBcKBZRKpYqToPfk8WUmqVHfFIxCgCnkJGku2tvbAQAzZswwHZ8EQzKDGxnxfxMwMjJScbgEqrQ10zJFq5dNA1Ta/2ZxHdAcZmjtaf2zZo407/TXE7xviY7JGakQi8AzzBCwthgE6YxjGQQJiCTQCNm+XBtpfzGMgWScpA5oiZMFoGRADgHk2JKXZBfTkiGaf21dY/RAvtZs3wJRWoDm/keuPdeBWL9vsYga05PGJFrMkfYoY009WSPNXmL8vpUEa+tvsYr5HbfrKG1tbTjwwAPNrFAqPVd+TaHSFF5TXr5xB8e3EKWqGRMF09/+9rdBpbcyARLL0WkBLk3JLbpclky0fZajS7uBnhXkaI2t7F9ba+5EtExdC2zLly+PKo+EPmOtscUc1BPUFgoFtLS01Kw9zW0MuyfXj6+/BK1WE34aI2Rl/tYaW+vNQQ1n30L2HQpsGkANrau2zhYzJNdZ2jIHa3KtQ+vd3NyMlpaWqjXWmLu0ddcY3tBr+Sh1zUpY0thdy4dLBm94eLhyb5+Y5FVbdw5AYmxX69GyWF2N3eFgPOS/aU60uWtqaqpab4ulDa176HUaw6uxfhprJPVV+i6+zh0dHUGbj5UcJME3bv/+979XM4+YTCPUsM1Fyyy5kWpGKQ3RCqQWoKJH/tk0sCWN22IWsjAKaRllTMC1elA0cJUGsqw+FWudCWARo6EZqlzrgw46SAXUMVmkVl4NASvreaj/gK+vBu60tQ6BaW2d6blW9sgKsPhjCGRrwMyyYwmyNOecBrQsgBVabxkgLVAl+4msRwt8aSCOA3rJamnJE/ksa53ptbZxtoiDndgm6DRAHXPzRSsQczYyy1rTo8UQSlBjASzNt1u2KnVBPkobtsAWbdpFNVa5VIJOixm2AJYFsrJsWS6qoi3kp7JIDpLgG7cPP/xwM/u0GCUyDJ4JpNGnmlOLBTlaYLSyUs0pxlLnXEIskqTLubJazXSSIrXATQjwpLEMEthYNDk9pjGFNG8aPS2dHgcnaWtsbRLoyONzhyh1LpYh5FkZoJfBaJ04mNEclwZSKTPNsqba2vKAmoUxSrNZC7wMDg7WNTGZSjarlbiamsbuIB2yWVqPRtgs36TNShYkq81q7Ayt8UvRZrUEZCrZLJ2rZbMcoMXarOaLBwYGqpL3iUgOkuAbtzdu3DiuRaVH+o7FHGgGy7PJGKdsGbL1nZ///OdwzuHEE0+sUiItkwyVZTQhQ6XPA2M35SwWixgeHkZTU1NNs63GHGh9A1mzx3owBGn0PA9S9MjXn89NsVjEz372M5TLZbzqVa9SAZXmTHnAtDJFrjsaCynPNdQ/wsct2QEZrKQz5t+l8+PPKWPVsj6t/yB2DblNxq5naE01NkBzyNx2+HzyuaCxWgyCFrRlWUUCI2mbnOGRQUWWIOQmyxfcd2mP2jpQMOPnRPZurZs8tgWG5Dry8XCfQyL12gJNIXZPm3sJUOn3iXUicKkxryFblI8hUCTXU86DxtiG1ld7rdk4HZ/HhaGhITUxsfyqZo8a4JXPNZDE11krNWo2XC7ntwCoq5TLZfT09KQyEzHZq3RefNG40nHHKx8l4NGylaGhITObbW72Td0E2pqb/U3YhoeHAYzdRbdQKFQcnZaxWlSspuAyq9EyGo1qlRkJz2RKpVLlOZ1Lc/NYwzatC72m3w1lNWmAiY8rloXQaO6RkRG0tbWhVCqhs7OzZv3479ExuRPh7FUaA6EBYK5rMlMl4WuZVjaRNiGZBw5qac3kPNLca9mqxlpopRN61NhCyRRabJIVMLUkxmIcODjkx+B2GWKUQmupAV9aRx5QNZu0SiIaYAmxDGSLfH419mG8DJJkPWITUgsIcT1PW0eNEaTPhtgiuZ6kC/Q9aYv892N8K19LLXEZj2+V
LKC1htIm5fdjGCMtTo6HLUpjisjOeLzk60nrlJbox0oOkuAbtw8++GA1ewHCDdsWMyQNUjM4+ZweiQ4eGhrK5Gilc+3u7oZzDg899FAqOySdapIklUa+GKdq0bR8v/U8zRg1pxoKjtba8azcyupDjjQt+EngumbNGpTLZWzcuDEV3GgZKc+MaCxcpDPlG5VQOMAJ0e+WgxxvMOQJhpVoaBkmXzceRDQwInU+LRiGyinyIgoLyFi6o61ZWkmMP3KWjgc0zspIUGqB07Tnmh5YTB4PymlMD9dRLamgtRoeHq6UQ6wkUfpRK1nQwEoMCyuBlcZQaCKZVgleyPY6Ojqq5tGac63EKdcpzSdK36gxR9qaZUkiOHjR1k4+59URKx5q62XphCQXLDvj67Z161Z1DbNKDpIw1rjN2QypxFYZgOhXi0q0gjQZrsYoWUxSCFxJoDU8PIzNmzfDOf+3JJZz4Yots9aBgYEoZ8+pXq0so5VaQsFYA1MaqAo17lrsA50frZvl7PnaaaWYmEy1qcnfwuCkk04KZqdprIMFsDRgzPVKo6s1pkU6e401spx71h4V6fytAKCVbQhwpdkbPVpBOoZp0G6joG2hBEYGCx5M+DpJ8Kc5fJIQw0BgVAPG3F9Za5DGPFgJktQTyVjxMpXG2mo2FyqPyXVLS3BiH0OBWwNYdF4ExNJ8peUvZalL2h9nbCyALBOeEMCyEhlpb0mSoLm52QTGaSBLMtly7SQLH9piGFhpY4899pi5DlkkB0kA2tvbsWjRIpUitBSBZ0VcAbRFDzESFuDR3pMZr6YwXBGJSVq5cqVpxNJpcYfb2tpqUroWyLEAjgZ2+HPLGWslFo3OBWopeQBVxkoGS5f40vxZZa3xAFP+fO3atSiVSnjyySdrgmWIio8BN1o5jAP6NEBqrROtewigakGS/7bW76CxfZpNaYCGQEuaTYXYWetRAztaZqyVMTVd09aJnsuMX0vIOECx1kquSyiR0AKlBkA1pojGEWIaQsBlYGCgZq1Cfk17JFbdAqL8uZYwpNmUlSyEWFiaZz63ZDMh+wmtkwVmrGTBShSsJIGzMVZywBva09ZHVjv456Qt8dKYxRBltSkJLiXga25urhxvopKDJPj+jxdffNFURk5XAtWOg9OstGDSeWhUpGSKNMW0AoAWIGQG9GfPP4+/HBrC7iMjeOHJJ/HtPffEHXPmRJcCuEICY03ZsheDg43m5mYMDg5G3f8oSynAchQhx07nTI+ac5dOg88HHVeyhRw0trS0mODq6EcfxemPPop5/f3Y/Pjj+MFhh+GXe+xRw2ZIXdHWhjMKPAPn4+e6qmWksmTD56hUGrujNf0+Z8NCVL8WbCWItdZHWyOL/bGYIM3x0nPq76DzIt2mtZPftdaEnxN/lOfPx8WTLF5Ss9ZKrpsEVJLRJh/ES0q0Xhr44eBVWxfOUEmRuqitk0xEJHiyWDy5ljSPxBhq/jRU2oxhc7hIX2cFYo0ll0FaY9Doc0mSVM3N4OBgzbG1NdGYX25D3DdoQFDOiVw3OifL3jQwzOeayovNzc2mnWglMXneXDQ74uulERnaHK5ZsyZKB9IkB0nwjcxbtmypyhC0YBALnDQnYjl2KyOWKN1imrTM+KQ//Qkf/cMf0DaqlLsPDeEzTz0F5xxunz27xpHJTIuLVEIAakbMAY8sw2iZsGSZZJZFBiVLClYQkeujrU2WTFiCWA20aqxSoVDAMY89hnf++tdoHfFXe+3a14cLly3D8BFH4Ofz51cCWygLls4oZm20UgsBI7lGVjmTfk86aH4ci7FIA7WhUksMmyQTDcpQZQLB14Nnp/TZUPZrlcksJkkLxjHshFa6Cq2NBEzjZSe0tdGCswxiFitB68LnK405orXlPi6NQZKMhAS4IeYoxma00jJnYvl8kr/ibLvF9IXWRZbONLClMUZpyR/NB58nLZnT1oYzt1lZWFnR4I8WU8TXhfuaNEZPJtZajNHWfzyS1OtA
9ZIkST4K4HwAhwH4gXPufPbeGwB8G8A+AO4DcL5z7qnR9xIAXwHwgdGP/xeAT7qIAR7d1eV+s2gRwBEsgKovOudfjz465/xzGXzpOTmV0efjfdSOWbXROZHhADhqYAA34l34DC7G09gH++BpfBmfxln4AVa2tdFcVh7NjRzo6PMC21/I8siOVTlekgD8t/zJVD+OPq8SNs60tdDWpOZ5aM6tddDWg63D4f39uNGdEzX/cszR6yHmuWadtPUS60DfketQWRs2/1oHScU+Im0jzUZq5j303LIHwyaq9IYJH1cV0zXO9bDshe+XdiA3sLWg55VzVQAmtw2nPI5rPZS5t3yRtIuq3zLOKyRV68DXQ+ioXIeaeQ2tj/K9mk3+Npv/KP+krQH0K/RqNs3nxMQDxRb4moBeA+G1ED6Y62LNvBhblX7LuQ+sW81xUeuTYuLE8f39eOSRRwIGEydTkUl6DsBFAN4MoJ12JkkyD8BN8CDoxwC+BOB6ACeMfuRCAEsAHAG//ncBeALAf6b9IPXvpBoRxharkCRAoVC3YK4Fbxk0yuVyVHC/ceAMXIjL0IdOAMBT2A8X4jIAwIHFW8ecGdj9J9g5c5HB3AwgmvNXQJMZzLXPhBwZnVMyFuirjIYvhViPKIdlBYwQ+B19fmPf28z5P6T1djOQlP2ChJVVOAjpxKMCRhr4lc+tddB+n6/P6HlWL4YbfwBJCeJZA3qNLfLzK5f9+Wh2YQUQOR/GmownkFuAqgZcsXNAkqAAAMQIanbBxyweo8FVhjkPBnb5m3Quo99LFS2ICr3UAnvN3GYI6DXzr/inytwXCiiwc60RBbzUzMM418UCtNamrQEAfzwYdqGtBXSAE1qPVMAV8kls/oeHh9N1JkKmHJNEkiTJRQD2cqNMUpIkF8IzRyeNvu4EsAnAUc65R5Mk+Q2AK51zl46+/34AFzjnTlB/gEln59HukEN+Paq3SeXRH8d/ZmyaHHtNSmU5d6Koa59rj54mpn3V749911BqUIbgMLd/D6zD3jXj3BPPYHPbc2yciXg+qpAF/rrAXtPzrI/+uTw+P4ex12NzT/NfraKOzf3YmFVnETnvocfqea9+zc+Br8EuA/b8b2l/rmqcY/OMqjmvXoeJzbtcT/l71efTGN3XdHrsUe4bv+5LXZHQYOrpvj73E9F9XV8nrvt8DeT6j61Bta7I8UwN3dd9Tqzua7Y/Ht2v9vfavL90dJ8fe0fpfn//CVi7du1Lkkmy5FAAq+iFc643SZK1o/sfle+PPj/UOtgo6LoQAJqbF1WakqXxOBdaOIx+zrHP++/4hfKPSVKGcwnKZYck8fv9YwGFQhnl8thjtbEUgsF6bKsO1OuwlzrmddgLHYX1NP7KmKoNRwo3PodyGZVzTZJy1TwVCjSmsTGWy2Qg5YphlMuawQCWo6o6G3YusWCJz1P1Wo79XqGA0XXwn6Nz9+tH3/c35ywWZYCoPpd1A/b8z2zeCOmYNcBYu2lZk5wXvj70GTeqw/S6BK7fMXMemncNLKWtQ63Tr6wK+/3C6Dz7NfBkCP+8U85Jl9rxySDBn+trIAOKvRY0F2WmX2Xld7PN+dhrS+81G7DWRNtPc+VtwbkiAD7vtXNee558vrXx1Y5f2r6dPNX65urndB5llEq1866dly1ybPb8a+sQXhv9GOSDquedf6b23GpFAlK/Lzz/E3kcez6m8wBQQu2c156jNhZL563nIb80NNRfc/zxyHQCSV0ANop92wDMYO9vE+91JUmSOCfNGHCecboUABYuXOj++Z/vMZseraveEqEBY0GguvHUatSWl1HybXBwEIODgzWveWMqvebNdCMjI1i+/CYMDe1eM4FNTc9ht93OqWk25ePhV2/IS1/5vPDLXFtaWtDa2lrzXD5aTdpWg6l2BRvgnSMtqXZ1jXYloWzs5XMp51l7tJoW+aXptN6PPfZTjIzsWTP/xeI6zJ17ZnC+ZaM1NYjK+eNz3dzcjLa2tpr9
2pxrVxPy5kjZZK3pN+m4dnWSvCmjbG635prvt5p5eQMvNYRal3zXzn31lYBao67VOB2r2/Rcu62C1HFtvkm3ucgLQHgzrHVxgTaPabrN7cG63F5rlCa7kxK6TFvzKdqFHFK3tbm29JsuHklrVpf+RPpvPt+yAdqab81vy/e1i3D4sWXc0K7ki5lveTWu1G/Lp1g6zu0hdKsJ7QpLK2Zq8235cO0CpoGBgRpfPjg4iKuu2vluAdADYKbYNxNAt/H+TAA9GkCS0t7ejsMOO6zGUXGF5EFYu4Q/ZCxkKAMDAzX7aYG1YKzdH4bOgxsNlyRJMHPmV7B58/+Fcx1sfz/23PM7mDdvXpShcMdEm2Y42pU12tU0IXCpBQLrKjM+P+Rs+NzRHIeCrxZ4Y+6Hw3WDj4UH27a2NixYcBnWrv0UyuW2yhiLxQEcd9wteMUrFlcFAG1uQw5pPM6IA0p564Ph4WH09/dHOf9QIJBXlclNXjlI52A5fT9n+k0uW1tbVeAurzqS8yj3pd0Hil96LwE7SaFQqDp368o8a541p8/nVbsCzNJfeTuDrIFV3uKiubkZ7e3tNVffafOngRV+7yArIcpyFST3xfLKVO4n+Dxbc8bnUyabGvDkt1rgtysI6bCVDHGQTvNK/pZea1cHx+qt5X+tK4B5DAldlS1tmGKXNndcd635l7FUznGa/mpzzIHhwMBAzefHI9MJJK0GcB69SHxP0v6j++n9IwDcP/r6CPZeUHp7e/HAAw+Yl2zyzE/eN2f0XACkM0mag5SBhgd2ep6Wccusb+7c/0V7+5fw3HMfQam0B5qansfcuf+GpqYfY9Omsf8Jo3PPkmHz+eHshZb18eCkXRIr2QzpQFpbW9WAL52klelxIw0FfJ6JmJ7xEAAAIABJREFUWAydPC53GnydBwYGUCxej/nze7Bhw8cq8z9//jfQ2/tTrFqlByP+XHOQWuasBakQmOJOkxx2S0tLlQ6H9DiWpbMYJHlpsRaUZMCXwYhsRBMrm+aXWXOWwWKO0oJR2vzKgNTW1pbKymlBP22utXnnz+XnZeIh7w3FbWl4eLhynsznVvkKoPp2IJItknot54+zpGnBXvpgCarIRqzkIKTPcp45sJUshsaWapfAa8wbT77K5TIGBwfR36+XhDjbpemzdqsCPtecSdPm1WLvpQ7zeSZbsJIwCWJDOq3ptuYHJEiVgEquI//NCH4kSqYcSEqSpAn+vIoAikmStAEYAXAzgH9NkuRMALcB+DyAB51zj45+9SoAf5ckye3whcuPA/hmzG+2tbXhgAMOCN5rRwZnixYMZeEWAAqVGqSTk/QrLy/4/hl/vq2tN2LhwpurspempllB4CPpbM5waM9jg7PFJGkUNzcGuq8N7ZMsksXYxYJKC/TEMEjckWjBYP/9H0Vz818LCvtVNayRLCVItoOXbGTASJtXqwxpBdiQrkqGjj9qQVpj5qSuynnV2Aya046OjhqWQisPWMxnmr5SgOD2YvkA0lfSFRkc09ghy/41dk6CHsl6aj5Am9dQ+YV0kHRNm7u2trbUMkxoXiVLxNk4CyxKJpnmtr+/39RFja2Xcxtikq37lLH4BKD6nwk4aNFYt/b29qBfjS3dEvCRCY/FvskqSKiMxfW1t7c3k1+1SrUaINd8K81r6H5vkpGU86TZ/0v5ZpKfBfAF9vpcAF90zv3TKED6FoBr4O+TdA773HcBvBzAQ6OvLx/dlyrO+f8pkyg65CSB2syKFpZ/RhPeW8OpwrSbm2UNPMQYtbS01CidFXD592X2RY5L9qvwQNzU1ITh4do7/8q+Iv5dSaFr1G4aY0GGKueEzyWBSHotwa42l3LNeAmIBxwtgwZQCfAaI8Ff09xwoMifS/2werQ0FkiWfmR/XIip0DJqOjcAlXGSnklWwrIFzXbSsmUtAw5lxtyeCcwQGyV7gSzdJDuy5lQmS1xv+Txr+7k90yPpGAEXCwRJW7fAJrdtObcy2FrPtZ6e
YtHfCZvsT+vzkcL9p8Y6SIAkAb7GGGjJjQYeST/a29urdFIDQDIh4nZvlSglYNEYRcuG+XPSDyoVafPI55Key7KUpq/aPmvjYIYfmzaKG21tber5kUg/JeMAn1fpE/h+0isel7VERj6vh0wqSEqS5Dh4sHOSc+63AOCc+ycA/6R93jn3MwAHGe85AP8wumWSoaEhPP/88zX/uUPOlwMMK3OXCkiBRNK1kgnhKH1gYKDyyDMiieI56g9lQevWrQMA7LbbbjTfVQZLmRDPLDWkrm2UWdJzi2XitLp0vDxIWYydLKVpbJ3W+2U9l1mlBRpiWCUOmLTy5KZNm9DU1IQFCxbUZOTaPNJr+h2aIwmqtLnUApRGfWugUyv3anpJW5IklZKXZE34MfnapQF6mZmHskepa21tbTX6yOdO60fijIcsM1j2HcrINQbZmsuBgYHK2oyMjNR8ho5FJUqNPRoZGeG+L4qRs3qILH3kZW+L8ZD2bSVH/Fw1nZQsp8XCy7nkZW6NAeXzmNW+JQCSwFxjKLV5tPwob0uQeqnppJacamyxxhDxuCHnkZg5mitu7xZbpLFEWe2bzyVvz9Dsm15LfQxVNrq6umIhQFAmm0n6E4ATMdZHNCnS2dmJI488ssqYgdoyEDk0TQFlWYKcIXeMWvAOGbMsrcmsnKNrMqa2traK8m3atAmFQgEHH3yw6gx5gJGBRnOKPLBIhygRPaCXe2QPCo3bcoRkwFqwlr1EkublWSatp8UCckdIBiv7qNLAotzuvfdeNDU14ZRTTqmZP9mDFSrvSiaov7+/CnhbZTI5XzHlXRmQ6Xf5OfFMTWM+m5ubKyUGms+00lioRMZLjnz+eCAuFouVc9IyaslO9vX1BYGNNpcSlIdKDDJw8Kyfr7NWBqMALEuMsgcwVP6SiQpnLCkIxzDmFrs7PDyMnp4es1RrlbrSSt8aMyT9nsVCSkaxtbUVXV1dNWVEDThrpUOpc6GEWeqeZrsagOnu7lbnULtiKzRvsj9HY9N43LDYW9mz19LSgq6uLlWvQraqtV9wPQ+VCK2ERJs/Podye/HFFzMiAV0mFSQ55zai9rL+HS69vb1Yvny5ika54yfnpSH6tIxdInKLQdLAQEyWSb/b19dXOaehoSEAwOOPPw6gGs3LTToQjfmQ2ZHsp6HvasZBW2trK9rb200wxTNljfFIC2hZezx4qYPKB4ODg6lgVCv5cMfa2tqKJ598sipAaU3tUt8kEJXMGx1PyyhD2bnGZsp51LJOei3LblZGznsPSGRQ4+BQOmXJmlnOWX7GKrtJMFAoFCo6HLJfzoJZ5UgJlKyeF3otWSEtqJHdUrmFs4OyF46zZTSXUof4vHLdlO/LuZNMBp8/Op4s82r9Lxq40hi4tE1jgqz+TPpsX1+fWY7UmA2N4ZA9RnJ+NXuVTJDsc+PzJ3VQ6iG3KwkUrPmU+iqBFFU5tPIl/V5fX5/acqABVKtELjc5V1rSrSXgsvJAemsBLIop3//+91Njf4xMdrltFYDlzrm/mMzzaG1txYIFC1LLQKRg0vjJAYboYQI+FjMiA7kGgtIyAw7mmpqasGHDBhQKBey7775m+UyWKyw2SQZyjVq36GAeuK1gEyqZ0ZxJ8JhWfsySTfGyAQ++GvMhWTg+b/w7v/rVr1AsFnHaaadFzRk3cC0DlcHZAtKcucxass06ZyGQrc1Ze3u7qmdaGYeXHrLomVYK6+vrS7VNPmeS6ZUAaDxzRoFSgheakxBTaTGWkjHipS/ZB5NlzqSOabYZag+QvtAqG6b5M57USSDd1taGjo4Ok9kN2WbanHGGzbJNrQ2Aj5uYtjRdC5UItUSY1s6aM1pzbkf0XLZGhOZLY9s4uOZz1tTUpPadamyuVg7kDNrQkG8eT9M1GTckI/nss8/WBvtxyKSBpCRJWgAcDP9HtJMqBIBko5dGP8vMSWtq0zb6jjw+V6xisYhSqVRROpJCoVBT9uDH0BxxS0tLxdA7Ojqq
DMSi6yV7JgM4KTqfs5gerbR6ubZJFkjSzJT90PwQS9XU1GSWJ7USWygTl2wGBwByv+wnoCy7XC5XHJ7mRKQupWXgEihpTIaWpdPcO+cqpSmaM96XwXUuVJLkepaWcWtZuczgkyRBqVSqrLVGxVu6pQV6rc9MYyRkaVuyY7xJleYqJsOWTAUvr3J9szJv/h0JEmlO6NzpMnIZ0K2yI58v2TcW85wfQwbuJEkqNgKgyka1ueLsGLdJPl6NUZAslwTwvB8qSfwFERRopQ+VLA6fK03XaM44a0avJQvEfRHv16GtWCyio6MDHR0dVWsoS7KyUVmCX613SX5G2rT8HS7kS7q7u2v0Sc6TZBGljWpxUpsTuS5SSLeoWZyPh4+PHl944QXzWFlkMpmkRQCaAayYxHMA4BXimWeeqQAHWTbSsn8SvuDSOcuShZbp862/v7+md0nrvdEyMq3fhkoKf/zjH6syfp6J8SxCbu3t7TX9SjIr0xy97HWgedL6Q6y+EK2fS86R/Axn+XjGz39T6w3RmBHOjnDGTZsrOU/0uZe//OVmnV72NdC6AdU3JLTKjiH2aGBgAH19fTX7aF5C2Rfv4SIHpgEAWZbQbj4q54hKrfSoNQbzueJAQoJx2bvAWV7L9jgjSXPE547mmfZpAJ3buLQ9ru8S6MgeN8niWvpULBZV20vzUVY7gMayyUxe2h+3OZpfzU/F+CguFMg58OF2Ytkd6Q+9Jh3S+iplT5s2T3KuOPChjSciMb58ZGRE9VOcWZPzZPX/WT5KJm+SEdL8kvRRvG0i5KPS+k25TsnSvTVP0peHmDXpozSdkvP0UuhJOgr+fkar0j7YaOns7MTRRx+dWiqihdN6iuTikwLIR6vEJo1FslA8OyDH29nZWeN4uaFwI6HnVkCPAYdAbWMszQ85Ti048XHzedCMRJaGZHlDo+llhsVBDjUdyj6rWKAj+4h4ANeADkm5XMahhx5aAwjpbsCxzlabFz43vA9BOhASPj8SCHZ2dlbGxMuIGhjk+/jn0hxsGrjhwGZwcBA9PT3BEo8MPBqokXojgY0UAvc8ENG4CNhZINAqVcg+KsmicYZA6k4IJPP72MSW9PnnZf+KBvpILCaRs4Stra2YMWNGTQnfKudo7HWotMol1PvJ133r1q1mqUvbJFMt54b7ZD43cn5kzw2NsbOzE7Nnz1bLplqCoTHUkj3jLIq0q1CCRfNCdqbpES998U0ysTwBpfWhuQH0PljZw0nb7Nmza6ocdAGIZlNyXqRdvec974kJ/6ky2SBprXNu+ySeA4Cxxm1SVg0s0KJ2dnZWDNfK0LjBSgeehqR5ANCcPqfHh4aGKucB1Dq0UqlUBaZ4INSyfdons36poFowTJKkAjY4IyKDYYg9Gk/PQwgo0GcpIwPs5nWtSVhz+ppTk/0NNDcDAwMoFouYOXNmxYFSGYLWLUuPg5ZlaY7ecmiyr4H2UbO/BFM8IEo2RPYeycDHQSXXGR5IJFCgteD3stEyVjlHWslRZrOyREmveSMrBwqkQ6R73d3dFf2xyo80Pj5e6dTlcxkELeaa1oZ0TpbzZenMKtNqfW3a+1opks8LrQ3NNwn3QVpjtNa4q22yL80qR8r5Id2bMWOGOUcSnMs2AN6XJRkfrWTL9VImcWSb27dvryql8dKQBIay/G+Vs7nOSZDZ3NxcU7bkPpoLL3tpJX8Z2zS90vbJ+ZVl3nK5jP7+fvT19amJC/cNciyygsFtj15v27at5pjjkckGSZNeagOAlpYW7LPPPlWKJVE6X3AZ7LWsN1RSk1kNMTAao6SxAlrWwpkgchT33HMPCoUCFi9ebJY8JDugBTUZzAiUSYOiYBRik9JKZxotbWW8fE54DV7LUqwykMYoaYwAzXEaS8JLrzfffDNKpRJOOeUUNaOPBc1ayVXOiewrstgjTq9LEMi3jo4Ok3GUZQzusPickNC80NqFytEhu7FKFxIUcj0hPQ3NCS+tcpYshm3Uys+8
tDPROdEYtJCuyD4+Gdy0/jMrcYgpDWpzk5ZY8d5Ji71PK3URC2Kx9aHyjZU0WHrCfS0fT8xcxLDS1pxQLxWfE95KwHWF60l3d3fVHPDSO/+sTCCknvA54XoiwQn3BZJBnDVrVmrJTzJEdFw5JxYTzYEb1/+2tvCNLmNlUkBSkiQFAIcD+Mlk/L4UXm+VRgvo6FrSmFrWr21aGcCicZuaxu7eHapNc3DEnRop2MyZM1X6WwY8XlbjDlw6VTkfmpJyRx2iu8lo+RzS7xIYBFB55I6MbxqdK520RvfLz2gOncY6MDCgNjla8/HCCy+gVCrhD3/4Q836c9AjwaUGmJ1zlUbpYrFYkw1qQFEyQRxM8/mR45alIZm501wMDfmrULi98OZNmVDI5ILGqDGC/D2tuZgaqmkuOFsoy698TrRNy0L5uCXzJUtBdP7aXEhmSmMxtExdJktWJs5BD+k311HOWEh2R26S9bE+ZzVI07gpIPP+P84+hebE2uTnZVO01p7Q0dGBzs7OmuZe8ifkX3jiyedHe18bs9azw31Db28venp6qs5VMk7avGhjpbW2xi2FzovsWsaRLJts/rbiJq05twOeFGzfvh1bt25NnQe5yfFqzd78fHp6etQ5ySqTxSQdAKALU4RJGhwcxNNPP13TY8HBAyFbUhK58LJhlAMBnulom9ZkK4Mnd6hak7aW8fT09KBQKODBBx+synLk1tHRUdWzJPsHeMZDwIEcCmBf6s/HIHuS5Pi1Pi76Dg8qBKJ4MCbRQCSBAlkuo7HTc5oDrXertdXffZhngJJhkw3qIyMjWL16NUqlEhYtWlQFGiUjYG30GQqWBKy0bJj3ksjmYR7429ra1CyYj59vbW1tVQ3WsslTY9ZItN4RCaLT5oJsg0oX0j4k08hZRgJOVq+RVnbW5oA3mfM546yAZF41nbBKzhqLRuOW80G2RIw0fY+XhTiQlGUMDnxkSVD2nXEd4PpBvSKSIeFMgATWJBpbZLFnIb8p+xu1krPWM0OisWch5kyzDz4PUic0Nl5jRPj5SZ2I8RWSKbIY1hCTyNlO3jYgWw24b7R8pRU/JLNKSa/lK2LtQ7MV/rl6yGSBpKNGH1dO0u9XiXbHbY7oyTFJQ+ZKyp06vabnsnQiDZnTm5w14kbMHbJG73Ijpuff+c53UCgUcP7559fQm2mOnZeNyIBlw6ikuvm4tWAvnRlnCqQj01iilpaWqkZsSW/T2KVTk/QuzSFnEmRJhDtzXlfv6+vD1q1bowL88uXLMTw8jDVr1qjgl4MdzXlpLFlLSwvmzJmjlj80gBPqL5NNj1ojqAzs5Li3bdtmlgtDgNfSfwn0gOqrn2Q/xsyZM4NzoJU9tFKhLIllAXm85KPNgVY2pU1rDtYYZY0F402tc+bMUW0hpgTGWTQOZog9BVDDfPE5oF6bDRs2mGVSbfwcFIWAjNbXJHu72tra0NXVFSzpWKVimfhp5T/e6yXHT3PQ19eHLVu21PhEbd05ay77KDlwskrDHNhyEENN4Wlj51UHYk8lQ6bpv9UryS8S2LRpU3DtyXfIPi+roqLFADl2PgdkD6T3y5cvj4UAQUksmm5nkv3339994xvfMNkTDiKA6lqx5jx43VdD/TEAIi0D4BSwVmJqbW3F7373OxQKBZx++umpASQtC+JUq9VQK8ttvE6uBRKLOeM9N7JvgERezcYzY9mgLjNkCjBa71HIgVoZsVZ+HRoawl133YXh4WEcd9xxwZKjDB6y8VHSzVwPaW1kAycPJJpDkWPW+iQkGxAqNXIwLQEF79uTPSKyFK01VPMgQnMvdYDbgsYOyPKrZIv5ulvjl2UVWY7nusrnQfaQaI+SFePBg5ectNJKTGmRMwRSR7RmYDl+yRRKX0DzoF2owedBlluthnGr4Zf7Hs0f8LKz1ggtbUSCb1my10qOsuQk7UHzDbLPio9Te66VXDlbKstuch60hmfOIKY1hss5CM2DBi65XVjtCJpOSABl
zYVWdub+IUkS3HPPPdi6deuE/+V2sv+7bUpIS0sL9thjj6q+HL7wXNnp0Wo4DZVPiFXiAEL2n8gmQh4EqVRCDj+tVDBv3jy0tLTguOOOq4AiAghWGU2yKVwhZR9N1nHT93k2RUwS3/i4ZZNgzLj51traWvkPLA6CpROUwS+tjBoz7vXr12NwcBDr168311uOOwR+s46blwU08KvpOWcPuZ7zCw5iyoR0h2vrwgRLz61xS4Ykbdz0GKvnGtDTGuxjyqM8Oci63lzPJVOmlb7qqee8P4yPgZczZJJnMWYT0fMQQyjHLi9A0cadtt6hBJeveVqCG1P2tNZbJnTaWvO2CKu0lXXcPIGRJcz+/n5s27atauwaO0x6TseTACZWz7WS3qxZs6rGrq03Z4UJFCdJgjPPPLMu+CBnkgAccsgh7vrrr69MrmQMtJqx7KngYCDkQHkGbTkRi2pNq5lrzzXWgDslng1odXKNISKKNY1e56BCZgXcgEg0alWO23KiVjM6BwU0XsAHZS3j0RghyX5IBoiDR9mELLNgyYLwTE+j1OWmXQ0iG66trJfrtGQ+tAyfj10yQ1x/ifnRsjvuJK3sVsvqLcYjlOVbWa1sEOVBQuql1UCuZbKc4dRKBRrTJRujtfHI1xqjxX0UL4+RPks7ttgd/pongnItibnQGBx6lI2+GpujNYTzPhU+X1J3ibnS1pXWQTaDy3Ie388/rzUGc7Eav+U4pe3xsfHP0zEsRoqfh2ze5q0g2vj4+5yBlGOk35FrqI1PY6RofHL8fA21Bm9rDeXY+Fpp+svtWWtuv/XWW7Fp06acSaqHDA4O4k9/+lNVMyLv26HGXVmvlcFF9urwvqS+vj709vZWvZYgSvaryOxLa9KmQMkzDELddBnk3nvvjc7Ozsp+mZGEWAZSaK6kvERAwE/ry+JjliAylGlzIMEdDa+pa6wCHx+NV2tK13py+Hh5gOHZtcUoaOPs7+/Hiy++WDFg2YjPDZ+E92DwDNNa3/b29qpx8v08O5VXMPL+I6AaNHFQrDVLyvWVGTZ9R45Xy6pl86wEgjKr5OOkLa1kLPttkiSpcsIWa0Tj6e3trYydj5f607Ryscym+fryBlleGtZ6DPmaWiwCJQya7ZJYiZ52YYlcY8kEa+OV7CAJL3vJXhqZ4En7lYwRfZYz6nRcPl4OLLg+a2yJNV5tzGnjlRdLWMyYxgxZaxwar9Y7KMvcXJ8lG6j1zXL7pfmS46U55uPV7NdiP7ndyr5RzVdp45UAio+XxtDf349f/OIXEwMGpMc5kwQcfvjh7ic/+UkVKOBBQy6CBAMUIHt7e2uAEKcntWZd2mSA5I2JMlBQYJSPXOlaW1vxsY99DEmS4KqrrqpyoLKcRgbAHYkG9PjYNPBDzoT3VGlAQDZjy3G2tbXVBH9tjNKweCOqDBK8lMKv/qC/ppDBX24aCCAnQnNIvwV4ALB9+3YUi0XsuuuuKtjhjlGOT6OX5ZUi3HnwfhDJ/lnATgM6siRMx5E6C1SDdtlULINgKADytSQny49pXVAgA6AG6ORYtXIBv5CAMwxpOqtdLUljJB2WpSGrNKL1/Fk+yGKvrXIQB4CcRbLWkidgnJ3VQKtcQ56I8ESTg3MuWl9nWomTxmwBGT5GzhJZV31yoGqVujTApvUxcjaFhLM+XGc5SNViiyxvcd8Tk2zxMWoAJlS61i72sEp5MrHkbKy1ltoaynFqLC6XUJLV1taGu+++G5s3b54wk5SDJAALFy503/rWt2oUXzowIN1JW4GIlJ4yU64Q0kmTAyMJZWTSefFgdMUVV6BYLOIf//EfTbTOaX0uWvkpdGWbBEuyh0UGW2nYgF5us0psEmzw5uyYK3iA9MuRZVCSzIrGmvAxrlu3Ds457LrrrjVlVDoXfo5WOVWWErnT4nrKWTENWHA20CqraeVE2WDNy1CylChZQDoXnhlqzdLEAlmN1XwdySZ5MJJZtWyol6U02TCulUqlg5ZlC/pd
rXTKG2TTmqVlCVyupewtId2VJQbZHMwb4LVSqmwwTyuZko3KsqnWKC6bgCmIaeVFrSSlratWcpJrzNdZzgEfZ6ghnOZXlqE4iOTlUjlm7TE0XtniYa2vbIHgz+W4Q83f/LhaSVwri4fGKTf+mdjxyhIx2RvXZaslwFrv//mf/8GWLVvycls9pLm5udLkzMs7QC3bQgvCg6TMxDmzxLNajW3h9XQSOgfek8MzAJmVy42yga6uLhQKBRxyyCE1/SqcgZClQ5nlxLAsMqPT2IdyuWyif6tB0yonETDiV2ZpzAMvJcnsRtLu2kZrKHvKOEggHeEBrampCYODg5VgyRupQ2tHLIQsDZI+yAxO9tjQ+VklBU03OaiVgJ0HJd5vkFZOSNNNrazN2TFaOy0h4cxfPXST1k4yY3RuFisW0k0tA9fKuhy0SsZ6YGAAPT095hhjdJMDHMkuWKXrmLXTGDEKhJKNL5fLVT5PMmHbt2+vYaml3+RXgFIrguY3JXti+c25c+emjk+2IGgMUVrJh8bU3d2dWTfl2mXxm11dXWrJVpaluW5SokxxQSZUvBSnxYVt27aht7e3igyI0c1yuVzRnyy6OXPmzKBurlxZnzsM5SAJY3dn1aj9crlchVpDSmL1HMmGbQ66AFSUnx55pm3Vsq3aLg+sZNT0nAIO75nQxmY1n1s0vmTBaE5JyXk2EqLxuTFYjeea09KaTK36vGSDpCFzY+bZGAESMmb6Dz/tKhUa3xNPPIGmpiYcc8wxVUwXZ4s0Ro8CdqlUquiVRtlzFoSzmZIV0hgSyRaQDXR0dNQ03ErWQ2sm5zorM0mN3SJHKpk8K1uUTI/UOQnoyH4LhUJFryRAl4yHZHn4e1YDtWR3CKByAKRl/jLj5+sidY4zHOQvAFQCydy5c2syftkArmX2sWwVnUd3dze2bdtWxTrJNZDMRagBXI6Hn0tHRwdmzJihNrprz7VxaOwMZ1XpfKinktaB66PGOnGAxoEMF62xm8517ty52HXXXWvWSY4lK9PE2Rd6vW3bNmzevFldHzkmrWldNuSHmKWOjg7MmjWrRtcsu5E9vpLx5vMv/cLgoP9rGt4DKHsB169fb0T8bJKDJAADAwNYu3YtOjo6qmqxHKjQwtJCcnpPNi9TbxJtPT09Va/pMzxoc6PkfUoSXWt9O52dnZWtq6ursp+YjBdffLGmr0VmRrxvx+rBIhAox8PBIQdR0vkThc3HJTMinvVQNsTHJBvQNTZCAl2Z7clMlveTWWPiTIuVqctMdvv27RX2h4Nbvl5avxUvP/FeB41B0hgWjTXSxsQzQ9nLQTrB10uCIskaSX2k/bJ0KJvmZXZOzs7qwbH6AGWJlwCjBFG0VlqfkVa6Jj2kHqOuri6VNZKsiky2JJspkxHJppCvkKVt2dvIQSEPRtqFDrxfStM/+oy8xNrqaSQAoTX7awym7GfUGHbJQPMStcUSyT5GyfJJhpb7QV454D6DX6TCk2PJYMYmx3xcJLIPTDKzWq8iZ4limEurZzHUl9nT01MzTtmzaPW4WayX7N8L9ZzOnDmzinGmNdd622QvFPn43//+93XBBzlIAtDR0YEjjjiiQoPzwMoViAyfA4Xu7m4VCJFScRZDUsQyqM6YMUMFCgR++HNy2J2dnTXBh4JqU5Nf3v3226/KkdG9L6QDozHwR40etsCP7Jvq6uqqKjPxc5fPaRy8YVleoWQ11Q8MDGDr1q0q6NHWhahgraGVOzDeN0TbLrvsUtOAPGPGjKpxkMFv3boVbW1tWLBgQeUKFc15kV5wh7x9+/aaNeJj4vs4iOABlDcgS1DAm47nzp3u4qbrAAAgAElEQVRbdVEAnT93xrQ+MWCbBxgOtjdv3lyVJHD94mORgZODAZ5AWEFzxowZmD9/fuXcOciRQVMCAXLAmi+Qjah9fX148cUXTdBmldk5y8J1jcYj+w3b2/09zyQQleUUWcLkJTASrbzHwdr27dvxwgsv1CR1FqiR
yZ0sqWusOPku0jutNGSBT/Izsp9QgjRuO5s2bVKBWqjpOwTQ6JxkszPpnaZn2sUXNE8E0LTyndbwLPVOsvxaMhcCMFZJsrOzE7vssgv23ntvs1wuEwPZu6s15Mux9PT0oLu7Gxs2bFDXh+uqjKFaCZnGsnnz5rrgg7xxG75x+5JLLjGDMwnvj9AW3crcZebEgzOv25NIsKHVZkPOUmYWnPKUvUhar440Sh6o+XuaQWp9Hry58/+z96YxlmZXueY6MWXGfGIeMzIjI3LOLAM2XHRb6LZsqy0hNY2YbOlauE03SBiE/KMFplvgdvOjm3/8wJIxYAEuMG2wXTY2NkKW70UyGNvgKldlVk6RkTHP84nImL/+UfWsfL8V+0SVcXAbU/FJoczKyjxnf3uvvfZa7/uutTVT0swi6j2iIDulXUmtTaRHUxRbdPbqKJXKUeiWR2FnpWR0viNVGGlDpXVi0JSqFtE1iqJyfbcUzRYh9kgRqOBY4XM93LTqLP6aEmzGtdE1UgoG6jmKp+NPSmwcaY8opNagUJ2ovpvSh/oOsYIwHgBRM3WcUDxFH6aE4XpAmuU7KafE4eXeLdKEkRYpR+fwPXynBj2R/tB3S1GGkTaMVEp8Ig0TxxrFwPH3KSpUxdZqf1EgnKINy1GhqfVK2WC0w9darxS9m6JHo56Ud8Je4jtFWlf9fMoulVqM1LWiyurTGWPcU5Gajz5Pz9cYLKoNpuQTKT+ov/J3Pve5z51IddspkmSvlIQWi0U7c+bMEeH266U19GdjY8ODphT0Gg/fVKbC4ZqiaCL9VE6P9ODBA6uoqLBr167loNYY4KVowlKp5KLR1HuwsZgf3Typ91DkK4UiReFrqqxWDyVdD4X3U+sRETECJT2A1SG81nvE9Sj3HltbW1ZdXW09PT05ygy7ItOK9MvrfY+traddrVPvwXdF/dBx7xGhfKXLIvXHe2ADMftNISyK7GkArogeDjvqwCJ0HwXU5exKaSaSBpw0B45qvvQ9sJfXopojOhmz+Nf7HsVi8cg7RCoiRasotUdgEjWUvEe5n9fzHuwPvvc4GqVYLB7xV6/3PZTKK6cFTUka1OZO6j0aGhqspaWl7D6Pfjf1HgQgJDPqc/GzpVLJFhcXj6CRESVOoSkaOOp7pM6Ptra25HtohenreY9yFB3j511e6z00odb30JYMqffgXVLocE1NjX39618/kfjgFEkysxs3bmSf+tSnclynBkeq3cC41VkSUETnH/UeGAXfoRlTqoLmtQ5h/n5Kj5Nlmb33ve+1LMvswx/+cFkoOjqWSN9ElKicBkedviJDKV79OIRIqQ7VSbEOeghrYKQHmv6owDkKR2MpfkoTELUcKTG5HrpavfTFL37Rsiyzt7/97TnYuRzCdZyAXIWZjF8FlZqhRWF86iclGtcMXHUuKcF4KovTTC6OXcW7PBH6V/Q0rof+PpaSRwSB+TGzJFqgwZgGmFHgrnNebuya4aey6nKC8JSYlc+NAuMoAD8OndL5TglyFZVShCOKcMuJ1xUBSImKy5Wtx3Gnfh8RmpRgPc75a827vpOiMoxZg31+jfYSiwX4NYUIsleYC53zlPA5ZSNq4+VQTfXzr2fc2HlMqiJlmvKRZnZk3IxL92T0LeVsPiJhscozjlt9fjm0POUj/+Ef/sHW1tZOkaSTeLa3t+3hw4eOdqhOoa6uLmeQGFNEYoik4Vf50QBKgxAWnk2lGylyq4rANDY2WkNDQ+7XlK6npqbGnUFXV5cf3BgjBqZIkuqreI84fg1EotDXLB34aSbAeMuNX7MzNqIGHlGDEIWvjFffQedeaTZ1cDx6aKcymagRU62YislramqsqanJKisr7eLFi0c0LjgCAtGoDdN51/JhRfRSwStOSB2VZsYRydPxp7LjlLA1Iqsq0lXEK5U8KKqhTvq4YoVyhQpRo6fZJNl0DADNLBdwxAIFtRWd/+MQVaXJzCwXeCi1rBlv1OYpdZ7SSunByHxp
wKqayWhHUQiuyJ36oJSonTGkSsxTCVxDQ0NOg8OhrxQs+yAKbTXpSaF2Ua8SdUQEVpESP67QQN8jhcynkJVIhcfinRQSzLtEfZfOgdKRfFc58X2cc12HxsZG90PqRxUEiMxC1KYdhziil1Q/qsE5NlQoFDxAej2oULl9oEmRnsUEdalCo83NTXv55ZdPJD44DZLsFeH2rVu3rLKyMpc5b29v+yELVE3goIEQP+qI2DS6gWPmrA4lHrpNTU1HAgmqajCelOHEbPnw8NBu376dc5rr6+v+qx7CiiKxeTR7JhNQKlA3KmPlJ3UAq06KbDRF2bAR42Gl86/BW5xzNpCWS+P0z549a+3t7T4u5repqenIfKtGKh66Zkd1ajiaxcVFm5qasp2dHfvCF76Qm/soik1RmGRxOucEPMViMTfnGjTHgOG1Ak510jiY+fn5XIAZA08N2DSbS0HnHJJKpXZ0dJQNNDXYx7Fy2EXoX+ecA3ZpaelIoBkPW0XtVKulgnCQiRjkd3d3HxlzDJKVLtY512Ak6v/Yf3Nzc68ZIKfoIxIsRXUV0a2vr7fW1tbknMfgkjlP0S0xqNGAbGZmJmc3qcRKfSKPIiSK1sZij66urrI2Xg5Rj8GMHqQaQC4tLdnY2FjOF2pAw7+PgUBqzmMQ0NDQYH19fUcCsqgdxfZUlwi6qahzDL42NjZyYvs45+UCefWJGsTruBsaGqytrc0GBgZyiWwMYKqqXgkloK5JBI8DE7Dx6enp3NjL2QoBMN+nxQ2xSKOxsTGpf/uXPKd0m5ldunQp+73f+73kgULgFKNupax0waMBqINT0ZxmPRpxR7oqlS1E3YgKadW5/dzP/ZyZmX3kIx/JZc0x8y9XlaOHoDplM8sd4ikOuVz5auS/U9m+OrdYJaVOTt8jJSbf2dnJiRGVw9fNVo5mY27jAaI6iqjRUZrwb/7mb+zg4MB+5Ed+JEmxxd9HmkrtRB+FvqMA+Th6TaH9SJlohsxhnoK/OSBfD3Svn6f0BvZZTvSZoh2iCFmFvjwRuk8JcuNYy9FUKsaNCItWckbq5Lhfowg3jp93UHovJYSOFA/JjAaqerCkKCul26JoPzXfSldFmlDpE/1O9jDBeOodVIwfRdFKF6aKDXQfR4G3UkIpwbAmaRoMqh29FoUVRdDqb6LfSe3hOPeRlo3UVYq+So1f6c4UTZ6iyFUWkipciXtAqc5YwUgiqr/qGZWSJ+jY1e/Hc0ulIOWkFdjU1772NVtfXz+l207iqaysdFSHCJtDQ52oZn4xKFJqKmbbWvobjY2NrAEEAVGk1r5TVIbNTsm/ojKgSXHMEUZVgbmOOYocUzQaY1aaQTe7Ztg4UuYqwu46x2Sq0CD8aJmojpnNrLRHzJZA7yIyECseI9wLaqfZEvM8OztrOzs79o1vfCOHymiArQeFCnw1O9WA8zj0SJFGpWp0zKypjjlq61Ioo9qGBhXaziLSS5o1g9Lp2FOZaQqp4wBNZdNx7ymKoSJkDeDMngYLrKsiRmobOseMOaKLGvwwHymEjjGp/TLXmqhokK8+gzGztuWo7DjXmqxEGpUkkMBek0DQlEhfH+fnCBRSPoMf1lzH2NzcfKyf04rjcuhz3H/4uZWVlRzyrLSpUqYpn6Eoi46ZsXZ2dib3H0mgUqVR7hApxjjHoEMpP6djVp+h1FY5P9fY2OhI+kn4uTjuhYWF3P7b2trKBcHRZzDmiMKlzhJaeyjarMkqz+7urr3zne88kfjgFEkys5s3b2af/vSnraqqKpeNxs23tbWVpNv4b4VpNTCKVQgpaPa1qJNUNQiP6nSUG3/++edte3vburq6jvRAej2QrGZnKX3RcRVqwN/w+WR9qSoi5jdVEZVqnUBQEeFjFZErh69jVlQrpf3QDF8PjxgoK38fkSz0Equrq7a/v2/V1dU5BC6VicX2Aepsoz4iVWWmCFYqcywnDldBpyJX6sj00FCEJ6JrrLk6Ww3glbrRTDcl1GTM
sR2A0texwifqmXSvacuC1FymkDVdf91jUbxeboyRKmCcikRFwX0KOVBKAyopUmHMRUSYUmPVxE9RKuYSGzXLawzjfKbGqXOp8xkFxuXslD2WQlV13Xl03aMImjWPv6bGWs4+FfGKiLYW9OjaxwBGxxkR4Lh/UoUhKeo2hfxG1IWARnWAuq/4+zpWtU+ltSLKrolCuR5dSnvqnCoKp8yAnrMpZkD9vorKIyNAgPj5z3/e5ufnT5Gkk3iePHli9+/f9y6faAoaGhrKBk4qztvY2LD19fXkjx72GGipVMppIFQrozw82p6mpib/UcRD9Q91dXW5wGl/f9+uXLni4mCN/Blv/DXqZDY3N3OOHsesUKrqejQjVE0VGZaOVzdQFEDy3VEIz1gjKlNO66BBiApPNUOJGirNrlJansi3a2Cn2VQKqdNgD6dmZjk7AFVUJCYK9VNjVf0Oc6u6I5xnFMWqPRyHHCnVqsFIqrggBvwRNYKSVeqGeeBQVCd/HGIbbTZSZ9isipA1WFYEVOcVm01l2BHJSGXYihQdh9aWS6QISFKie91XOl5FiggAtFoXkWuk3OMY1TbwYehymFvWizXUxKQcghE1OYq4qB2YPe28ji2ojiilkyuVSo6Yg1xQZq4aP0WIInpYDuksFovHJqkanGoCFRFa9V3o5/i7qQBAkU72jDINqYS6sbHROjs7j6D2+ENNUiOavLX19B69qPtcWVk5gr5BJ+ITUoGK2mxK98n9a9qeA1uICKcib6l5VVtgvCfxnCJJZnbr1q3s85//fA7p0EXRxVhbW/MAaG1tzdbW1o44QD0ENTLH2Sr6ogFQc3Oz/z4eLArd8miFFIauAcVLL71kpVLJGhsb/dDGuLQyhCxUHR4OTIOIONYYBKmQj75GcUNGKFx/mEs9qPXg49HAUh0HAaT+MG51zjh0RWJU8K4OWedzbW0tF6xFSkczSGypuvppJ3V1GHE+NQjSCkV1yIpq7e7u5hA3xhnHGC/WLCfIjyJl7E+D3miX6tw0G1dNgR4YSuXxEysoywVmBH/l9lAco1K9UdjLowdxrM5LJRKxwAGbJoFKCZA1idDkgd8rWqz0gVLRMSjXICyOTw+2GJTv7z/tZ6YHWkSz4ziZTxV4k3yYPb0QFdvUJEvXPJXoKPKeoupeK4FMJWSMsba21oOiFK2fSnDYQzEgxzYU2dL9g22Vo4s0yaX6DMqIYEYDcAIQ1V4eN5dx/0Tta6oYIfqjOE6VHej+gVaOfd5UxB9tU5NGfJHS4Fqdi8+LVYjq33XdtdhDRfA/+ZM/aS+++OJ3jSR9TwZJhULhv5jZD5sZljqVZdmVV//f28zsw2Y2YGb/aGb/c5ZlY8d93qVLl7I/+IM/SB5MhUIhp4nQAzQ6U91UbD42Y6ygUS5Ws68U5cbGr6+vz8GwwLcKZWJ4W1tb9mu/9mt2cHBgv/Irv+KbSB09zj6W0prlSzg1k4ml2FpRECkshVz5TIXZVeOlUGtKPB5L3mM1klIWKVg49paKlJBWfSkCo1UxkWqLAmylBw4ODmxsbMyyLLNz584lezKpTqqcyFFpAdXqcDDpIUpgEmmLKGqM9EWE2qMw+ThhdYoOimJMHhVlpnrnxJ8o5FURLDYV6YEoQo7iaR2jCl/5LJ6UgDoKweOPinwVwSjX60dtNyWe5icKplW4HPvN6HhV+B0F7KlxRwpTx6wUcUrkrT9x7CmBbuynFGlCDk+lhdReyvX0UTF33GtxXx0nho40tq5tpAhTPzputQc+kzlR1DBSb/hcflW6UGlsAivGqZRwFDvjt6L/iggsTznaLbZTiBKGWPmma6/rHNGs2H4gShmUIoxnQESx/vZv/9YWFxff0HTbL2dZ9gf6B4VCod3MPm1m/6uZ/ZWZ/ZaZ/b/2SkBV9iHz0zJMs6dVGlo6qsFGRBgUBWFh9VBnEysFpBlcKmuPImJ1PpoZAUtrlskGHRsbO5K1Y3SqoYg9dmImzJhAvDSQIzvR
jcH86YFNxqHzp6iCwtFapq2HuFInEXrWMWomp9QJ42ONdcMqmqBZED8p2iQGlzjnjY0NzxAValbEUDNLgjgteecwUWfC95ebw3KojNqgOmOlRsqhMmqDGlhqQKlIB+OLc6hrrELUaIMqQE0hcPy+ubk5h2aWs8GIZip6sLKy4ns47uO4R74TZFjHqQiHCtTVBplDTWKYs8XFxSOIkR4e6mdigYWiBnF9U6iBIjBqg3GPxH2sY4x+RoPylABd6SIdIy0XmMNyNoifJjAoZ4MLCws5gX9E21J+RpPYlKxA20KoDTLGSBupDSo9z8/i4qKPkeQxFh5wTmGDUVQebZD2CapvjSh1TLKjDeoeZq3L2SBzSLCkwvdyNtjR0ZFLagnAow1qYUG5s2R9fT2H9n03z/cykvRsIkj6BXsFOfqPr/53vZktmtn3Z1l2t9zn3bx5M/vsZz+b42zjoaRQLJTb6upqDlFSoWmsOlB9gW56DTgUPkRfAF+vtBDRtGoK1Hmura3Z5uamPffcc3ZwcGA/8AM/cKT6Sx0BzrQcVBwpFt1ksdpEM6yo21L4XfuQKJJlls+0cKJR6xB/9ADSTItHs5dYVRJbIMT2B2TbWsoe0aCYWdXV1dkLL7xgNTU19va3v/0IXB2pKc1Ooxg86lhitZbu4ZSIUQWWGiymROsRmVBkR1GzlFBdEQClTWLLBc3oNfPUthDHCdSjiFoRM/1RIbhmyaly5ij21bGlWj9ExCGW7EfRvP4ahb4RadDycNWkRJQ0jiuWVUcxvwaKERlVu9JSdj6DPanFEdi+ogtxLfnRUvU4Lp2riHQofRSRWp7UGkZ717YqamOKdrOP4lwdh3bH8nNFt1S7p3sxVnEp9aa+lQQrVsZh23y/Bv2RJeAHlJlH0Xedq6jNUh+r/hVbw/ZThU7lqk8jZa26PEWFUyxLpFePoy7f+c532u3bt9/QSNL/XSgU/h8zu2dm/0eWZf/FzG6Y2Qv8hSzLNguFwsirf142SEK4rRk9ok4CJ4yAzUJETdC0urpqq6urOc2SGmqpVDIz88BJ9QAEJQRMxWLRmpub/UeDlJqaVzo5q3BQKwXUGL/yla/YwcGBveUtb7HV1dVcVoUhl0qlI1m8HhAaKOmYGBfGijE3Nze7o1YnyLxFBE7nS3UKmj1xeGh1DVmbZuvMm+o9tPUAG1sDgCi+jhmJooQpBC4KQnE4DQ0NNj8/b2fPnrVvfetbR5AFRW9At3CqBBnqoBXhUNRDkSN4ftVMZFnmB3sMhGNGp06H+VXhOp+nQnDN2FV7oKib7oPj5k2Ry6iJifodDjotsU6VV8cSZcYVUUsNWnSfaiUqh4jOVRSixhYMOm+KJKiAXtGEeKCkNFqvhfgqopXS6qhAlnlL6TA1S49ygrW1tSP2FltYpDSDiiTEgw57IzFU/6YoG/OGj40aotXVVfc3jE0DYw2GUyiqrmtTU5P19PTkqjfxb3oupIoMos6yVHpa1q8tHgiAVIKBHaX0QuqPe3p6ciJtpY6VcsPfpxgQ/pux6T49ODjwJEcTCA2mFBlXZLe7uzunDcOPKHrP2CIQEdHx2dnZI1raaG+cQeyDubm5f3Fwoc/3KpL0H8zsjpntmtm7zOx3zez7zOx/N7OFLMs+IH/3q2b2+1mW/VH4jF8ws18wM+vt7X3zV7/6VausrMxlEar450BfWVnxgEgXUR0thmVmOW5ZD6VisWgtLS1WLBatWCzmHJoaFgeAirQJJGKApuMqlUr2T//0T3Z4eGj9/f2eRXAQxyAII9dxaSCkDlZL5MnuFPpMjUsDRzIghY+1sgvnoGNiPMyVipw5kF5d19whjrNnTDGg1cMS54Dz4yBh/Ti8cQaMh9+rQLympsa+8IUvWKFQsJ/6qZ+yLMtyULYeNoxJf9XARynH1zNXMdBWKkWrXRRijwE249AxqXiZuQIBiWJ6tas4V4xVAx6CMaiJFJLLPtSxaWCh9Czvpy0smAcdE+NS6g5H
y7/F1sniyd7VqeuYmCtdvyiWx2ajKDVlU+w/1lsLI0B8YrEBSLeOSYs3lGLiUcQj0pu6B9WPKc0OoqPJEUGNJpUp/wkK8lo+VBH41BoSJCqtCRqjPlQDrWhXSq2zhpFKUqRD5yquHzYF5c+7aJsJ1kFpdGUrdK60YlqRSKVYtegm5RfUh5KkqQ81s1yAReDHmKK9q18gMFUfGudKfehxgIBqb5VSjXRlyoeur6/bl7/85RNpJvk9GSTFp1AofMnMvmBmw2ZWnWXZ++T/vWhm/2eWZZ8q9+8vX76cfexjH7P6+voc180BACyofZJAZsptKgxMHwwxVmlE2k11PlErpULHVOWDZnxjY2P25MkTO3v27LF0mwqzNbNSB64CPc1acGSxjF+DzCjESzWfi5qeCFFrRUYUCmqpvgrFyd6Va48ixkhjqYBdBc0Ehq9Ff2jQtrW1ZVVVVdbW1uaHmllewBxFoCpmTFFYShcxX1FQe1wPG9W0lRPTqkg1JaxmnVWEHUutX917ufVMiX61mkWFyfwa6b84xjhWfq/l1LFnjT6sq1JcUUwd/4x3SgnUtWeR7tU4nzpufs+/iUJvrUgrJ5rm96lxxv4/qV5F2L1SWnoI6/gYlwr8o7C/XG8inVc+RynUVB+dSFcSCCu6FilUnRsNshSlUcpS9wbj0jVS6vS4nj66XzWBUC1qqrWD/qrj0/OIR5E1RWL0HFDfq9RzpHVVd6XtBWLhUDlNnQrbWa9yeiYNpkqlUs7H4RfVfynNFvWwelZqZbWifD/90z/9hqfb9MnMrGBmt83sPfzhq5qkoVf/vOyDA4oHR2yAFfuLKAUSS1pxMipE1QxEKbaYZWOobGCM0cw8yFG0K0bSBG4Y48LCQq4igA3LZ585cyYXqGnWSDaiG0QrKnBWbMatra0jWZC2StA2CTigFGIDaqPzo60H2EB6wOIsVXPEOsUMNrZuwDErDUQQFCnHcpmiBmzYlWobYiuJmPlgR4qI4IAitUJ5Pg4D9E+pAhXhAsNz6KkWK5Uhsn56GCgti4NVarqhoSGH+B2XHSpqpLB71P7pHOkBwJxqpZsKqXHsOg5sKO6xs2fP5jJ7HHcqWwVJ1sxeD8hUBh3tOWb0ihapYFWpV6VMIvqxuLh4ZI/RK0jF3JWVlblkQ2lqEGTmijlUnQc+SAMYpQp1vWZnZ3MokQY3+Fa1adYjokOdnZ05raYWX4Aw8J6qg4no//r6uk1NTR2RQahNM+/aQyuFhPb19blPRDuqhQJRmpFCQdfX1215edkeP37sdqa6JhU+s2bYbUSLi8Witbe3uw1xbvCAVmHTx/mgubm5HM27u7ubqyJTpDF1hikr0tvbmxPaq5wAjaNSzjpHuscmJyfd9qN4Xc9T5B6s2erq6ncXVRAffK8hSYVCoWhm/8HM/qu90gLgnWb2UTP7ATNbNrOHZvZz9gqy9CEz+09Zlh1b3Xbr1i0XbrPhlJdX2mhlZcU3HVAjm0HLJ8nclXqIxqSHHMEB8KJqespRf5Fy2NjYyDmj9fV1MzNrbW3NCYy134QalVZosBn08FDBeOwnk6r+SvXBUMGn8sfxR8XYKsYze9rf5rVEgirE1hJis6cXPGrWkmp+luphoxUhseJCofzJyUnb2tqympoap120nxK9ddjssRpJs0vNMFOC/qiP0CAxtlVIZbxmlkMF+I5Y4ptq80AFCnMSha8pIXpEBVT/oEGq6pWiEFftVOkes3zzxHIogAqD1T5YVy0e0CoiRQEiKqHzAUryeuajXKsL1jciI7p/on1oVh2rl6JOBfo+CpIVVdWiBUUfVOOGX1E0RBtxavJJcqRromhzREJYQ5IY+o8p7ZwSHcfxMBbtRxQPbGULVIulqIz2bcte7S+HHzmuLxZoTENDg8+hIsz4S62ijvocgjz0kaBcr56NubWJNK76epUGKJLG+aUi+nIBTExc4lgqKytzmjSl2FTSUSwWfayKPGoRlSbgjIOzmMSFgiW0
S+Pj47azs/PGo9sKhUKHmf21mV01swN7RZD9G1mW/e2r///t9opG6bw97ZP0+LjPHBoayj784Q97Z1Xl2AuFQu4QZNOwMMvLy7aysmLLy8s5hAIHpBQEjlT1Gi0tLbkfzS4VRnz13TyL07FgMDGI+7u/+zvb39+3CxcueJbL52h5NRtZM0rGolArWZ9mlBx2Kc2BbiiQtogAqFBcBbtRh6RBHHoR5oWDSEWnKa0BDkc3EoFC5PTV0aXQLHXAsZSWJnDPPfecbW9v2w//8A/ndD6MS/UFKoAtJwSPeihFRQio9JBWuriccFPnSYMHHJ4ioQRIajOKFvFnBBAplE8dXswgo4A/ttFQ+k4DNz2QlLaOgmAOVaU6NchOlRGrtkjHYpbX92nAn6paBXlQJCS28Yi6ItUW6Z/FsagmTIPJcvOiY1FRLQEIVP5xlb0pHQoBldKooD+x5Fs1RIp6alVqubHERJGxadAb9UMEdGfPnj0iNFb0A0QY+2UvavKsVJJqv7Q4RvVozCsBd1VVVRKJSen29AzA3vBZOpbjdKBacKLJGXtCWwfgd/VHx8JeYi+q/pPgUsfBeaTapYiUw2xEfRdnkJ6PKd0ZD4gr59HP//zP28OHD994QdK/xnPr1q3sr/7qr3KLHgMiAiF+VAjJZuAB1j579mzuUGttbfUfpUcaGxtz6ACQraISanSMgf9WMaZm5HNzc1YoFOyZZ57xcbS0tPj3t7a25ugIHKyWxHKoKUS7srJiS0tLuTHw/8iI6ZM0/48AACAASURBVDYenaVuvjgXwPzQaBzQOCiF0jVIJVBlc5IRqlhWxYNKeegYoBvimmgJrgYbOEcdA+Pa3Ny08fFx29/ft7q6uhwFo1mvOgLGotkVh5mK5ck4VROn38+6cJipU+JzlHZh/XUMjEkzYAIw7FP1B0pFLS8v29LSktsshykUEMG69rpKJQ2siwYZBPcaqHOIMhfMgWaaSv2ABKCBqKyszCEA2GdbW1suYSgWi7n2HCBNKgjW3kv6oz5D14T30GIKPTjVNrFXpXcVASAg5uCMPkOLTjQQJRjRyi+tXNL1wD5TPsPslQRMxe0c1PgM7EPtUwXbiqwe5zM0iVKfge9iTWI/rHI+Q9t+kOgoHfh6fQZIiCYpSvu/ls+AmiQAVtpWKeTUmhznMzSQ+U59BmeaIofM/Wv5DNaERHZvby+Hkql9qs9QWlR9hiJ2zLmuB79vamqy97///fbyyy+fBkkn8Vy5ciX74z/+Y6urq/NoX7luLYdVI49aEhVH80RRXKqUXqFPdXzHVa+osWsZuBr37duvSLHe9KY35SL1FMWl4myFX7WnCYczmYmWV6sei8BKe4ZA5eh8RIpLs1uQIi1p1aqZFKWkSIh2Zo2Zv3bgjb16IoUT+xppPxzNouLP7u6uffvb37aDgwMbGhpyPRjvoshVSvSqf656OWgts7wA/DiRdRR9q4A5CnBVqKzfnRIt61hUWM33RHF1FHozJ/zEMek8aedmfh//TEXOfI6KveN8pX5fbp54tFUDgTwZuQqoVayu1FsUnutaRaF+qsM82hDtOxW7SfN7taMollbxduxTpC0UUj2deFICaXxM7E+kWjTtrabUG1S+onuxJxHjZS5AISKKlurzo6gMcxIbBmtFbKqVhfZvUoSIeVAxdkQSlcbXpJhH+x/F6ldFz/C9UVOFLarmTNH4iJgxV+z/WD2mSKYGcwTZBFNQoXyOtrbhOxUg0GAKv8/4tTGtnnMElYuLi372chazHpw1ZmZ1dXU2MTFhW1tbp8Ltk3xiVY3+xGoVrQpR/hZnotVjCvuq+FbLazVI0lJaHIKZ5bRSkVpjM4PiENiQ2anQVjMARZK09Jnxa18XnGfMlJeXlz14U72LblwtjQdKJfpXAXTMgrRqgk27tbV1BFHT/jdsWkVvtB8Vm1ezQZA2RSt4oBUVrSAL0g2rpcz7+/u2tLRkhULBlpeXHTEhO4/ZoELsqm/hMNAq
FoXW9ftBFVkrFcZjdwTEEcFSmF+LBsyedoRW4bB+vzrPtbU1P+CiFg2NE4mCoiMRWdV2BUobcnBqFgqqmRIva2DBARkpXd6fDJQxaEM/ghql5nTeFxYWcrSuImdmT4XKKTQTxKq3t/dY9E7RkRR6Nz8/fwQNYA7wb6nycEXMent73R+hEWEPaVNRbRWhvmB+fv5IW4bd3d2cP4TOUopJ7bC3t9eRQyhkggn2tgYROgbmYWRkxNbX14/4I+ZAD/GIsBeLRevp6clVdEWfjB9MzcHy8rItLCzYo0ePjvhkPkf7gTGGiE51d3fnqGuSAK0ig8ZSNIhfFxcXbXR09EghETalYvmUP2pubraOjg4PPrWgiSBa6U9FC/lZWFiw0dHRXJCd8snRFyiCPDg4aNeuXSuLIGuRkAZzn/nMZ04mLjhFkl6h2z73uc/lDBCoeG1tzZ3w4uKiG6OiR2R8Co1q5UiEAoFnVahGEIYT1INgdXX1CBSpkKiWv2oTsq9//etWVVVl73nPe3KUlvLL5QIidQCxZw6HpR7EKrLV6is9/GMwyGblM1QjUk4foodPyvkqLK0HQdRWAYmbPa3+UPooNh/kYFJBK4+K9LWS6f79+3b27Fl761vf6u+slTmgY9ooLyJ0ESGEJtIAUIXw2sJBK7hAyAhcFUUhwI8tG1TLoGutyBPvpN+dEpur4F31SdqEjwxa9UhRnGr2FPqP1X4RNcC+Y7kyh2Z8Z+YbG4xif9Y6inSj0F/FsFpwENc6tu6InZK1DxXvoBWgWnWZKtPWtiGxqjEKk0k+UlVEsZJRvzfur1hxqho9peBi+5QYVEG9MedRfxZ7SUU6VnsjaXKnPk21RKo70+SCOVetGUEMQZ1W34Gqsd6vJRNQ7ZAi8Ly7apeUBo60YyrBVVE876toDD8696Cq2K4Gsy0tLdbe3m7t7e3W1tZmbW1tuZ5L7AutvmbOodMWFxdtcXHRFhYWcnol7EMLBPBj+s6dnZ3W3t5uHR0dPgdqd7z7kydP7Ed/9EftpZdeOqXbTuIZGhrKPvKRj+SE21BuWZblqh5Y2MXFRV90jE8FwQQu0Dkasbe2trqRsdAqYMTBaEWMcvsxYCOb4Ltxbhh7e3t7jsuO0CdOBvRAe0RpR+XUBlcIGLqRR4MWnJlm7SmdBxvkOG1YRE4IGFX4rK0NdO61skJLr1UgTxASWwho5p7SlyjcyyFVX19vbW1tyYZ8Eb2DhuBwVnpVWxdoJYdq0SLsniqL1dYOzA3OzSwvgi9Xkq/Bo747iKmW4qv4XZvaaQm1vruiFVFMrWOIvVbQ1EQNnL67isxjmw1Fa+OhGitJVQzLoRobamqgHkX/oHUE7Lw7a6jVXlGQGytZ9d2Z+1TLgZTNEzjzaPFDrGhS/Y7SYByIWhWpQVTc6xpQqYaIw1nbLsS9HhNEfXczy9H3keZRxDr17gTN2sIkotUpLRfrTnCmCWHUoGq7EJIb1epgV9BLJOfqawmmKH6JAQVrHc8YbJB314ayvLeiYpxtjAFfy57nbNRCFw3iNJBqa2vzgFqRQapxt7e3c3amZyvnK++ObzR7WvGpbVD47vb2dvvd3/1dGx0dPQ2STuLRFgDKS2sfkmg0Sq1wqCqSolB+R0eHGysGg6PiUFMUh+/V71RKge/VnixkmByCGKcaq2Y8iiigQ1AKRb93aWkpF4jxvZrR4wRaWlr8fTXbwDmQlSqSEDcImQ4bhO/V7BanpFlOW1tbbq6VulD+X+k7nB/fq45Je7zwvRyqSpV2dHTk5lnFtYreqG2x8fV71bagCrAt5lnpumhbBMNQdjgjtS2+N9rW4uJizqbJprXRqNr067EtDt2dnZ2clk/XWG0Lik5tC2pKbau9vd3HAEXX0NCQtC2+V22Ld07ZFjoxtS0OOrVrpQb1HitNatR/6PdS6AA1XM62UvOsFT6gkooCa4Vpyn9gW9qg9CRsSwO66DuwLZD3crbV3Nzsa6vfrYGdUn/od/he
/T71l2pbIETRtlL+koCagx2BvB7sGtD8S20rBhTRtkBBCYo1YYvf+53YVvxeKC66cGNbsVBEz0S+n+9V21Lxu9oW39ve3u7o1HdqW/r9qo9CZlFdXU1/pdMg6SSeK1euZB//+MdduE1WR1ZFBqPl/jGziLSXNgCLnK+W4yo0HqFpzaJjJsnhTUZhlq/gamhosOXlZaurq7Pv//7vz214RQ/4XrQ/ZPARjlcaBAcFx64tBcpRPlriC92kV1BwmJYTbKpIURs+AguX66WjAkn9XjJgMhMVqvKr9jTS7tva/TiKwFUsu7y8bBUVFdbb2+vzrLSq6kxSYmsVG/OQOWrPGi0tjoJmsuzYuZrPVhExv9c/j52udRxRXK1ji7+PYm90fmaWHEsUVbPuKqjGDlTIrJoL7bWk4vOoM9QijSigVspN14Q50EIArYZiP0C7IebW8nal/rRnUOzfBNWp1Bvvpu0ztHdTbEarGj/d66BWSinjz7D/FOUXWx4o5QfFqkUP2iWaQDxFpyu9GxupqiA7tuWILRaUblNUVIXI6lO1hxi2pUlvrACNFbmg0diGXiGlVJeyD4rKbW9v+xqpXkhRIRAS1e/h81gjbVqrQczi4qJr1gho8HnYper1+K6Ojg7r6Oiwzs7OXLUptqXUmgYy8/PztrCwYPPz8zY3N5dDw7BpDRo5H9vb262rq8s6Ojqsu7vburq6fA6amppylB4+mqR2YWHB5ubmbHZ21ubm5uzTn/60ra2tnQq3T+KJ7fa1WoFoGEenDlZF3ApBqrNWGiIKt7U/CBUnHAbQRvR+gMsH9gZ2hd7is83MD4Xbt29bVVWVve1tb0tmKLwfRqtCx/X1dc/GlpaWcj1jlLPnQCgUCv4uiqhEzpxDUoV3QLma/epmVufFxsJBt7S05L6zvb0911eJTagBCe9CZs07gmqA1mnmqaXAii4oQqg9iyoqKuyzn/2s7e/v25vf/GZHcdRZIvZFfI+j5qH7rwrNNdtlbjXY1iaKBLcqbsZhak8vFbYS8HHwvBZqpNQw9Kgikooo8J4ckuwX1XOlIHvtkYKNaQM+qDelRxYWFnLJjHZ9VkoSCkAzakUS+P/4ANXOcbjre87OznrPtIicqHidA17fc2BgIFf1g18AKeLQVYEwGfX09LQtLy/nGjBqHzL8DQedrufAwIDvUeycyjvVlagN8SsCaW3YqnoW7Cgirb29vXblyhX3gawJ/ielo+H7x8bGcp3XsSOtIsZmIlJy6dIln1u6iGdZ5nPLeypKgx3RHJb9EjWoikiB1OAnLly4kKtcZq60YSQ2q353dnbWHj58mOu3FGUMis4oin7u3Dm7evVqrs+eaj6jdERbAjx69Mju3LljOzs7nlhgQ1p2j/2q2PvKlSt269Ytq66uznVDZ7+oT1CEc3V11WZnZ+355593hJOAUfeL+tzW1larr6+3jo4O6+vr86B8e3vbvva1r51IfHCKJFleuA16gfPD4S4sLLgxrayseHYCH19RUZFryKUCN4wIh6HiPoXnIyerYm2cn6IoOAStjFAH/1u/9VtWWVlpv//7v5/jv9XxqcPl9yqQ1sAPJxT1PfpuKozWrDXVvTxmU9BamsWBxsUGk7w3GY2KROHY+T7VlRDoIf4GSeAQVAQsaliAoclUcVpaeabo3ze+8Q3b2dmxnp6e3GGpCBgCXG0WGTufqzPXUu2I9rFu0KdKA7N+HBApwa2WaWs3cYJZgi5FF1VczVxi39oPKiUw1k7qOpeqAVT9U9RAoclg7XgPbWypc6lZKOiRVshgJ4hstfs036fl7gQzKiBWKkwrFLWpZ7wKJjYWBb1kLtF3YSOxyZ/aShRLq9ZEqUetBlU6iO9TYbYiJwTloDZmT5vcMpeqI1JNjeomNcggKNeq15ReE8Qd3Z4iNZF64s/07jIQE5pbRpoNf81aKvWjFD1+knNBgxsSAJJXzoW6urqc8Lmzs9OpYwI6FZ0TXGjhjqIznEfYrDaJ
ZG0I1Nrb2627u9u6u7tzaBQImJn5Xsd3KQo0OzvrwT9JuiZW+GnWrKury1EgfpTKI3jnXFhfX/d3mp2dtZmZGZuZmXGBtyaQ2AyJY0tLi3V3d1tvb6+/Y09Pj7W2ttp73vMe+/a3v31Kt53EMzQ0lH30ox+11tbWHP2F88CZUV4KnEgUjGGpeBbIWYXSnZ2dDlsSeauzwVBxXpoRRxGbirSVY1fdyF//9V9bdXW1ffCDH3RnwyHJk8rENcrXsu7o3DRTjI4NXQ6BBVCyVjOpPkbnkgo6bYDI5qf/SGw4qNoU5dPJhrVaMVKmCM/R4pg9LZeODQa1VJwASoXPZOBbW1v2uc99zra3t+3GjRu5g0MrmWIHXEVxlKblINZ+L2Z25LBPdd7VAAo6SSkbbYuggaiK6uMBpWiKCj61WkcpIxxjbGKpa6nCVrJIUCoEzRy0+n1adq+tBw4PD3O9e+KcaqDBAaVdsFPl/ipijkFGDGq0gasGGSpa1/5bqZJmbbFAkI9Nm5kH3Nh+bLqIrRJAql4MZC1qXDSwUTrI7CmSi12A0ESRLrZKUAnSqIhC1EupDpDilyzL3N41YOOw1wIU7fGmlWXse3yoauL4fxqYYqONjY3u06CdFMkA/dQGwJp8EshAAynFx3xUVDy9Dw07gebq6uqytrY26+zsdF1nQ0OD+25aIYB+EbDNzc3lzidE/js7O86QgNq2tLT4udTV1eW/Z2+qBEXRNt6NIIpzcXFx0ZMZ/T5skyCqs7PTAynej+ak0MMkKQAHGkQRVC0sLPg+RctYWVlpTU1NNjExYaVS6TRIOonn5s2b2XPPPWdVVVW5iiY2sC4IC6bl9xgCh2lLS4vzqio05QDQ8m/0AARCmiloVYEGKBzebCo4YxUQNzQ02Pve9z6rqKiwZ5991p0+TkGzIH74fzRhxEEhrNRMgVJMAhU0A9rtNpUFsbmgQEqlkq9DVVWVz1GKE+cAoDKmuro6V8oNyscczs/P5zI8EBYCBM3MmT91hor8oe3h0OBAWVpasrm5OVtYWPDvX11dte3tbVtaWjIzs7a2Ng9gVWDOHLa3t+c6B5s9rTrBCWIb/GhPHBWlKhSvZbN8F8EswTmU8P7+vmel9LvR9VJht+rCOHz53OM0DDU1NS58hWZlP2l2DL2LBk6rl0A3dK340fYWOEsNPDg0sA0Vj6f2s1Krul4dHR25ogDdzyCmUQsyNzeXq8hM7WfmS+dQAw+CAK08JYBTu4iUqu5ngjNFUbAPpWrift7d3c3t5/hd5fYzdCV7infr6urKVbeyn7W4Af/HekXEBn0kCJFW1EX/m9rPWrmsSJT6DsTfGuyDJKqOBi0N76jUu+5ngii1DdZOE9K4nzVAjP63vb3d9zPFOJrcq0/EjyB41v1cWVnpiaiiUJrcE+TrfiYgiggbWiR8R2o/gyDyPnqOaVf3cvuZs4WgkD2+vb1tDx48sO3t7dMg6SSeq1evZs8++6zV19fnaIaUAI7faykqaIA2asMBqK6CzJJsRwWNwI6KdIAIoBUie9S7tGJGro0ZP/ShD1lFRYX9zu/8Tk7ECL2nZb4cSmSN0Avah0fLqrWMWjUbKojl4NE+NOV6wZhZjoYiOMOpaw8UFcGqwDc29lQxNgLgg4ODnO4MvZF2LAYZUpE5j/acUaGvintxOnQ8v379erKrtf6q4mbWOIqmVUytYmcVIJuZxT2tYmmzp4JjFT3H3+uPvrs2VS0ntI4ibz6bd1XBe+zqnWremupIrYJmXVczy302NqNd1VVIzXzrWmI/sXBA9wX2SpFGqnu8lltrSxFQMYIBDhB+rwgc64lWTHuwcehr5+LUdSlKXeJf2PtUQaHtwUZTjQYj3UYAYGY+bwQc2ipENVoEAGg98St8fkpzp1Qi66Tvogc/lVb4GYotqqqqfN40WNMEDL+mFceg7DHgJQCgkS76UHyIolBQQBq4gQpVVVX5fGilLRQXPxpEse9BaAgyoJt6e3ut
t7fXgxwNavCPoEHz8/M2PT1t09PTNjU15e+HTSoapFWPfX191tfXZ729vdbT02Pd3d1uK0hJDg4OHKWcm5vz75mYmLDp6Wmbm5vz9VI9G9Qy89bX12f9/f3W39/v6BPBdW1trVe1afI/MTFhH/rQh2x6evo0SDqJ5/Lly9mf/MmfWFNTU666jcM9ZoVaJUAkvre35+WsCCQjYgA0rJsex6zBkUbGCwsLuQ6+ZuYQLQER2S3fg4MplUpWW1tr3d3dZma5nj8xkyHzJCDDoatwmHcgk9H+HyoEx8lrNQebT3VPBDCqz8FZkr0oxM1BoA0o1VlqtsQakS1h58c5y46OjlwvF2wBug5nyXzxXdiClqACGzc3N1tXV5cjOnwPc6edrTn8maNUlqTUIM4cjRHtCDQba29vz9GQrGsKheCHJAAKGe0PAUC0beaNgwanqs1ByfAiUorYGGpVS4aZI0XCSDhwkBUVFW4LKhiPSBj0WKlU8mAKarOhocHnS9dItSnYAj4BuijuIWxRb4pX6l2RIuwC6l1F0xFt03nT0ucUOsphpe+iCDOHM4EoAUzU2YCck0iZmR/O2AKfiy1owIGwv7q6Olexi19QRA/fp41LSYjwCRGxoUiDZEo1l6lKK0XYQKIIyrUTPOiazh0BqVYE6x150RaQKmhJfEQN4x7C3lX7iE/Y3Nx0CpR5U6Sc80GbYJKUR98DAkUgTFJC8Kl0miLYilyriBxEnjWJe6hYLOaSW2xBRfnYN35idXXV6WAoXt1DbW1tfg4p2kpyfXBwYD/xEz9ht2/fPg2STuKhT1JFRYUbiQYrZA5a/UT/iRRFxOJ1dXXlGpipowU5UiPESPQ79IBicykVpRlXfX29Z+yqNYqloByCysWbWQ6K12ZksTIuFUhq5sj3aaNHNhXfoRUoqivAiYO80McpCkI12NIWDIiwceKqsyGg4/9RtVFOD6KVUarL4DtACqKOR8uR6TINtK+BquppgKK3trZypcC6JvEHZES7SStql0INokaI91ABsjZg1C7hIDvaFVzfQa/GAfXEYad0T6AhegUJiJ92hU418OTw4DuiLkepgebm5pxwWy+kReejOjWlqbhOgkA+pfvRbvIgPvod2iNKDwESEvYg2b7S9tpPhnXhPbSUP4qXNXgnkNrf3/fDV4NQpZgJSDjUzJ4Wl0B/6aEW0ZTt7e3kd3R3d+foZU0WqUYiQeDgpJSb94H22t7edgQYXRSIDUgDwYA2LMVmtIoWkbAGHfgTpdbwu6AbPT09Of0OHfW1gla/Y3p62mZmZlwvBKpOYUV1dbX72q6uLuvt7bW+vj5/J+2Ppd3T8bVTU1OO1PBOJDp6RRRoXWdnZw4N6u3ttc7OTg8G8Ys7Oztus6BNfNfU1FSOGgcxRoAP4tTf3+/f1dPT47YNunpwcOD+fHZ21iYnJ21qasomJydtcnLSg/XV1VVPdmtra/384PNBm/r7+62trc3e9a532QsvvHAaJJ3EMzQ0lP3hH/6htbW15a4JoRqFwGJ+ft7hT6Xd2HwKh2tU3d3d7ZkC+hQyEja9OrgovNNy/5iNqF4CrQQO6Ctf+YpVVFTY2972NqfYUhV7OFMEoRq5Rx2NVmNAs8WMJ9WgUdEWKDYORu1Qq+W6HKQ4XaiX7e3tnJheD54o+uaQ1u7Hqr3gICXgIBsFpQJtiZ13tYoMekOvbGhpabHd3V1ramqy69eve8YLv05Ah7PUbt58j16RQVCjWbzSIPxKwMMBZ2ZOHcUAEFGnUhNojZRa1SAzVaJOUI42TKuoWHuQK+27o1fopAIPgh7t0Ku9y3RdEFMTCLKGMfBQLQ7/rUioNu3ToDwWMpRKpZwIXu8jU9RD22Cw77VRngZoIMcaRGnbAj2w9bDje9iTvDN7UosyNBkDwWFOzcyDIxVIk/Bpxo64mUBSNUT4L1AikA7oSgJOfBXBhyKGihCB1IOmLC8v29zcnAcd+DKtWsW3ME8dHR1OCxFEtbW1
5VAo0Dtt5xCDKJDcnZ0dD1ZB89vb2z3g4H06Ojpy7U8IOPGNs7OzHnQwb/iXra2tXDEH693b22v9/f1O4XV2duZaKeBjFcEn6EBovby87L5FK1/5PL6jr6/P/4yEprq62hE/NJlzc3Me1Oi6KOrNHmEN+A5oNAJOgmfOX2xqdnbWxsfHc/M1Nzfnf0/b0HR3d9udO3dsZWXlNEg6iefmzZvZpz/9aauqqvIsXGFNFohsZnV11R09Dripqck3OQsOkoRTIdvXg1c5Z5wXYzB72iASh6JVAcpvkynjfNfW1ux973uf7e3t2bvf/W4vqVQOmAMewwK61IwMZ6ndbjlUVVTJHGnbgu9kjuDNmSMNgk5yjqCBYkaJQ0zNkbY/KDdHBKYc6FtbW/bJT37SNjc37U1vetPrnqPGxsYcEslPnCMzy3X8jXNEdp+ao2KxeCTr/m7maHt7O6dbg4LhO1QUzOdXVFT4oaaHt84RgQ9zBDqhdqRzRCBEpSFzBGLLZ2u1EXSs3v/3WnPEXgA10AQnzpGiH3ptx3FzpG0mCMa1ekkTte9mjkALtNrsuDlSDVa5Oers7PRAXTVEGqSVm6Otra0cCnWSc0QVYqSIdI5IZknavtM52tzcdBRY9VwpO1I0+7g5UslFao5IxuMcacKv/ZH+W82RMjDfzRwhe2COCMRTc0RFLXM0Pj5uW1tbp0HSSTxXr17NPvGJT1hdXZ0LcLXMOFVdwQFq9vR+Nu3xoZwsdFtU6EPrKRqC6DDqaLQUPfYt0bvHlK74wAc+YAcHB/be977XM1MtsdXeKCA68a4tRSYw7tirB1pEL2OFCmH8bGwMXwXZKt5FJ0HGlRLQahds7bbMd/BTTqSL0FdF12Td6FtUCK3fwY+KsPWzzZ42J/3mN79pWZbZ933f9+UE1wSCqUeF0sf9t3a7Tomu41j4fhWw83vGo59ZTmCtdkBFF3oG7IJ10w7iUZxPUKVd0nW9tEM4n6e2oD2M+A4OcaVCY4EBAQ6dr7VwQvtcqQ4QlIUgJLYU0H5afC86DypYtZ+PloKDeu7t7fm49MobRb1AVkFJoNsQS6sGkOADNAp5APOgpeaqx2O+mHttK6FBuKI3oJDYiPpAqCmt4mQOQQDW1tb8M5WWovoQ5JmeQ83NzY52dHd3O2UEwlZfX+/2Bnq+uLhok5OTOUpqbm7O19zMHD0nKIiC4e7u7hw9TJAPpQZaMz4+bjMzM47Ug2qCOhWLRevv77dz5875DxQRTAP+gbWcmZmx8fFxGx8ft4mJCZucnLT5+XkPTKAfm5qaPJg5f/68XbhwwQYGBhzhIhEiSdnc3PSxT0xM2OPHj+3x48c+P8vLyx6gMPft7e12/vx5GxgY8O9g/O3t7Y7I7u3tOTXLZ4+Njdn4+LiNjY25tKRUKjkqB23a19dng4ODNjg4aOfOnfM15szQ6k4QrPHxcXv06JGP//bt27a7u3saJJ3Eg3C7WCzmGl3hZIi4gUNR/9M7CANVJANoNzoHKgygPwjAYhUDvTSoOMEhK0oC4qCcOAa9trZmv/iLv2h7e3v2sz/7s7lNu7S05NQaARjiVTIGoFycMxmJWT47Z16YG8Tsm5ubuR4ZOGU4fRURMi9m5geTIguKLnB4cbBrM03mnblB7N3Q0OAdt4GIgYnVIS8uLuaa+bEhtewWZ8nBBX9PtgbSODs7a1/60pdsbW3N7gdb8AAAIABJREFUWltbbWFhweeFqkFsRmlZDhRoDeaGeaFSRDPBFGdPCXtTU5Pbi6J3eskoNqPl3THbX11d9SCIaymoqkGEjE4DmyHYOTw8dJtRtIjf4yjRmiA61SwZm2FuQGVICLa2to6gmrOzs76XtBiBvYR+UPu10MG3oaHhCM0DBRO1MpoUkLCokBlfoIc4NqO3w0c/g81QsME7q3YlNgnUCjcQFtVXzszMON0Cdci8EFTGzJ65YV4oOCGzx2a0CSBl31CX2koEe+E7VPhPAI6uhwCKecHe
0cHgw6qqqnI0J3OOzeDD0KRR8RdtBhRW/W+hUPC9pPaoSDiJHwG7tpBhzpV+BIEFya+vrz+C2HR3d+cq/LAZ7VzNPtWSe5JskjgttiC47Orq8rnSJpbsJRXWR5shgaewh15gajPQptrbCVG9lu7z2cwLCfze3p7Pi7a5wb+zpnpmI8NYXV21ubk5+83f/E2bmpo6DZJO4kG4bWae6WlpJAa+srLimSWVbMCRGDYCOBAfRV9wSlraiSHqbe5k2Ko3UmoHgTNBC4encreLi4v2mc98xvb29uzKlSteJcABxwGkzlyFoRFCVUeLY9QKNTYa/Ty0okJFh2TY2t1VG7wx/9rygPnQviQcFFQn6S32ZId66KsIG9SDzFG7xfIDxciBRpDFmBGwQncgwsWx1tbW2vz8vNXW1toP/dAPeQBNYAVyoo0S44WZKlTWlgwgIip6j1170b0wZu2ATMM31o95I1jW5qDMuQZs2hhQPxfYHlQGpFJ72KjuTJFQtDogRvG6D+aDii72lTY0jVVCxWLREUYE5wQ8WrnDfGAbIFKMT3sJaaNUkBMgfqqpYp8u5hr9Fnubyj2CV9Usoikj209RGKDDdCImmdKrP3p7e3O92kDqtO2IBiAcWsy16vpYN/xcX1+fJzsqKNfWKQRO+FItZcc26urq3BehU+Eg7+rqco2Kmflnz87O+meCJKj+jc9m/bq7ux21AYHq6Ohw36IXqWqp+sTEhE1NTTniTwPIqqqqnA5pYGDABgYGvGRdEX5dQ4TPICoaIPAwH21tbXb+/Hk7f/68oyl9fX2OjGrDRUWDxsbGbGxszD97fX3dgxqdD5Ag5gU6trGxMaf9RAMEEjQxMeE2SCVioVBwLWl/f39u3AMDA+6r6+vrHRleWlryuR4dHbXHjx/b1NSUn4skKfh+5oNxDwwM2Llz53yfQoHv7OzY7Oys/czP/Iw9ePDgNEg6iWd4eDj72Mc+5lAhk80BxUZns8/Ozjq9pHcVaVBD/whFBVTTg2BTkRiyUxyBijW1/F6dKvw8VVSMa2lpyX7jN37D9vb27B3veIdDs0DE9AFCZBybylFNw5jJqGMZPD+IjHGqOCjtraEZo96srcJipQy062+pVHJKKd4tFgWyWplDxqaCaO1Ey7i1wRni6ygijxcs6mcT3ECZbmxs2P37921/f99bMPDZeo0Mn40QmsooqlcIlFScjqAc2oyDRis+9PMRp0LRaGdgnQ8OXb23jkAyViMS5EDfxDJiDSRV9A7qQjWXNsrjc0FeEYib5ZvxaTCivXhwvmaWSzS0aSIHOmJ9Eg0ds4qOOWRIjlh/qln5XLQU0ClUTtLUDxSNA4DMn/2IyBzansCGA53MWZFLTY7wS/r52jYEilDRLXwUc6/zATI5Pz/vhyRBFHvyyZMnnmQo+gRNBZpLsmhmuQtJCW74/JmZGR8zSC4iXAI+DnT1q5xh2PLc3JzTUhzC8/PzHsxS/VosFp3GUeqLMdPSZG9vz+lFDXCo8NK766Aru7q6nFLjUFeNp+5DfL/SUZw3JA3KJjBODRh0DfEf2Nvk5KTTaFNTUzYzM+MB387OTu4qLf3M8+fPW29vr/tVklsouunpaRsbG/NxYx+I583M/bIGTnw+rRUaGho8KV5cXPR51cAJdJV508R2cHDQP5fAqbGx0X7sx37MvvWtb50GSSfx3LhxI/vUpz5lVVVVOSiTjat31+DUtBMqGQ+ORykHUAgiZ7IrPptgAxTCzFwTpMEWn68XbmqFARuCz+UQKZVKHjBoUEEmqJUe2vWWEnI2KxsAxw7kyiGqdBQODYegonXtBcVm4MAAnt/b2/M7uLQZGz/a7wUId3Nz0w83HCPw88rKirc5ICNuaWnJHRQqOqUsGQQAKoH5nZ2ddcTkyZMnjrAoldDb2+sHPo7GzPxwY1x6AHG4cdhTldfQ0OD0gc4DDpk1Uw2AZtoEQYi4syzzw5zsncy9s7Mz13FZ7xpTpAFUgERCy6UV
Gmd+cWhUdR4eHvpBTPWNUk1aHg9ypgJVPluv3dBGgxxq2iSPgA1dGLePNzc3+xwojQJFZPYU0VFqnOwdWkjpQtUlgo5ocz+QDoI+SqyxBVqOEJyBICLsV1ugkR+fS5+lWEGlVBh3CUI/Rltg/WgDUVdX5/QddDINAvFjUM3o9Orq6ty3EIxoeT5zRY+r9fX1XELKd0A57u/vH6EzFdXSCk+avuLLCXDwC+wJkgEQSfwC80pCCqqML9/c3HRbVX8OYr23t+fFANqiIKJ7mjwTqGvbAGwNf0OQB7pC0Kvrxjzo2UPyGc8e/A1tQUCSdJ8xF4rak+wQlKo/VyE4QbeKzLUCEPtAZ6eIXursAUDQs0fpW+yspaXFfumXfsnu3LlzGiSdxHPt2jUXbusdWCy+VgGQgQO5Kt9LsKQZpdJtUf2P015ZWcmJK9mQeicSmQKHeFVVVa6MON71pJ179RZnFYUi1OQQwqD18kgOAIyT3h4gRppZ8/mgC3qRr9JKUXyLaFqF2GwAaE3tpYPo1sw8a2Iu+Cy9i8nsqfhahcja9Tp2esbJQnMhftaHfxe7VJtZrlu2dqBmPCq25sBMddxWkTU/2tmav8u7MEesi4rW+Ty9t0tF0YjZeTeoqtj9XDuSMzeIcNF6aPd2aFMz889Tmgz6gOIGqllU2Ku9mGi7wUFEFQ8BD4iXomipppNalaZ9pLAX3SuxIawiodorjICfYFXF2XoxLqiWVvTwfQi+t7e3PYnSAF2RMzNzPQ40CrQMwR5JVWVlZQ7VUhRneno6lwQSjKDvUaExWsWWlhYPTNfX152yg+4hOAUB4F40gkUV/VJuDg2dZZmvk1JIIC2gt7u7u26XPT09jiYg+gXNam5u9r1D0jM5OWkjIyP26NEjm5iY8MCBPcf1N93d3TY0NGQXL17MCaFBf5FSLC8v2+joqI2OjtqjR4/s0aNHNjY2lqtkZawgHxcuXLBLly7Z0NCQJym1tbXubwg8xsbG7MGDB/bw4UNHyFZWVnz+mdP+/n4bHh624eFhu3Dhgl24cME1iPX19Z78zs/P26NHj2x0dNRGRkZsZGTEJiYmPIml0hPk6vz587nPxW4RaW9vb3sANjo6ag8fPrSRkRGbnJzkDjXb3d21LMt8rAMDAzY0NGRDQ0M+VhLg2tpaDxpnZmbs8ePHPq8jIyM2MzPj0gwKdVpbW91Gh4aG7GMf+5g9evToNEg6iefSpUvZxz/+cWttbc015SPrgkfW6JuKHTrCtrS0eEajfDoHPaX/0Hex8Ze27GcjaXUIjbjInGtraz0ooG8IGePU1JQtLi7agwcP7MmTJw53s+nZTAqJt7W1+cEN0rG8vOwVG2R1CwsLHohQTUEmzruThejlvVpJoe8/PT3tmwE91pkzZ3ICcjJxoFs6qu7v73vJLJ85OTmZQ9Gwb232iUMmS2xpaXGRORRGfHcEo9rMDuoMBEIzpPr6epuYmDAzs8HBQdva2vLP0WsAtCsvQY22SWCsWkKvlS+sPQceaJSW82rvHu19QpM6aB4CnJhx0ieGYIAml42NjZ5lKjIJqmH2NOCO2SaHMo6erBv7xObJujs6Orwqx+ypdpB3VzROm00SiClKwp4iiKqvr3f6iHYKoETaMG9jY8MDUBDkzs7O3JySxZuZNylVVIs+MqAbBLtVVVWeFPHuaH5AtdDdRdRFEW8QLUXgFNXjB51ZXV1d7u4tbJQ5BdVbW1vzwFmDPN1L6AVJFLTnkDYhZD5IcBDqp3wofoQLnaMmSd+fZE51nQQ5+rl0OEdoTfk9mi/2knafJwFC86WNEglyW1tbPenSbunMpQq49cJZbFTXh32l9zmShGj/JuYUm8CP1NTU5KhV7AkUHbTt8PDQNXoEzjRwpGADW0aakfKhigZpHzP2u747FdCKjvK+eoaojpVkjnNI96f2+QO86OjosOeff96Wl5dPg6STeG7dupV95jOfsSzLPAucnp72kksOXfh9M3OdjQoC
qVJiYQuFQq5DLVE1ToLKEg4yvRdHoWnVONCfSMtOORihlfb3962+vt7u379vVVVV9u53vztX0cBmRg9ExqYHImJCUBqoxebm5hwMrdVGaJc0GIxN5XBKBG04XIXhdcMdHh7maK94NQM0E5et0l9FM38qONBtQRdCUzK2paUlP2ConKNrMJ+piB7VW3yeXuuwvLxszz//vO3u7lpHR4cdHBx4BYxqtAh+tBWCdrPWXizoqKBTQO20+gNUU4XFdK/W3l/aqJTDiuomqF5dX71uBG0Q44tXMRCggJpwNYuuMZQe1Z5kuFopyWfyeVRKUrGnDQg7OjpywnLmUCF7DkGoAkrKI51JlvxahzOHFejL4eGhfx6Uqx56rJf2Y1N6GKfPYa8oCSXp2oCPQAda9MmTJ7lAFOExB32pVPLgob293Q9OkAK9lke7bHMQoz/BflZXV30Ni8Wiv+uFCxdcz8LeAQVdX1/3+QMd4FBGvMz1KqzFhQsX7OLFizYwMODrDTL65MkTp9AeP37siAjBHshoTU1NrjQeVIi14eqZg4MDt7+xsTFHW5jTUqnkiDI+YWBgwC5evGgXL170tUG/WFlZ6cn25OSkI0xoprTBLnu3t7fXhoaGclobKhfPnj3rAcjMzIyjK6oLIpnRvQeyBmJDO4DGxka/aWBxcdHXBD3Q5OSkB8m0cGlpafHP0TJ92AkuU1cUkM8cGxvzBNzMXO+Idou1BlVqa2uzqqoqD5I4Q3VtOF+54qq6utr6+vrs7t27p80kT+oZHh7O/uiP/sg6OztzWRDGjbPRLEjhYxwBmYWWuMPPUy67vLzswQ1BBBklWZU2aovcrV5DAeSslKBqDl566SUzM3vLW97iGVCqZBjqiWAEdEI7825ubjrFodVVetjDsXNYKXVJIMKvZLyUlOvN9fpDB9ba2lqvSlNNj3ZCBg2AgtGKOBwQXYnJzgg4QHRSVVpUb2jWR9bf2trqGQ/UIqjhF7/4Rdva2rKhoSGnatfW1mxnZ8d1MQROID0qQI/NKeHq9Z0ZJ8Enc6VVTjpOoGntcwUipZ8L1UXZuTaJpBKJjLSurs7nUa9K0ICMoEAvZ9au9Eo5kWRAC2LTauc4bqV9tfkg+4a55BoP6BalsBTVgp6uqKjwPaJiZOYSfcrh4WGubULMxHnnQqGQS6wQshLgNTc3O0KoXau1CopqOR5oQO2Jg34RZNjMcqJxBLx0X15YWHD0XBOWixcv2uDgoCPYXV1dTtsS3M3NzTn9oWgJ81JXV+fBMUEEwVNvb6/7RhACxjcyMuJjnJmZyXXoJ9AZHBx06odqOLrZ7+3tOeIA9fPo0SNHhtFjVVZWeqB98eJFGx4etqGhIQ90aPZ4eHjoSP3jx4/t4cOH9vDhQ19rEhczc3RxcHDQLl26ZMPDw05VNjU1eX813nd8fDxHo01MTNjq6qonu7zvuXPn7NKlS3bp0iVPykHAKysrPZmamJiwhw8f2oMHD2x8fNyr/jhfkIX09fX55yF2RidWXV3tZ8D09LSPj8BOK8+KxaIj/tBxfB50HFpf9FuPHj2yhw8f2tjYmCP1+K+Ghga3Qai4gYEBX2OCIG0Tgw2qWB+x/d7e3mmQdBLPjRs3sr/4i79w41AKQ8sdS6WSOwAOnxSSRF8IM8s5UDaB9oQAIdKeLbHKor6+3ps/aldfxqdVYJopPv/881ZTU2Mf+MAHPPOkQSQiTBW38qveX0RvFBAuvesHUXZtbW1ZrQOiQ0ozObQ6Ojq8iRqBm1bTwfHPz8/7ZldRL+vAZ3V2dvq8aRUTeiBFZfg8DvCNjQ0/GLX/DOMjIGhoaPB10P4nOEsa+JGxl0olq6+vtx/8wR90XQe0IaJr7UuiazA5OZkThytKpmXSZOusu4o/9ZoANAysgwq3ceIgKOp8tf0D9qvXJzC/BHkdHR3uJGMAwDrQtZjP4/BXSgfnS0Bx7tw5XweQjv39fVtZWclRD+wv2j0wL1A5eqs4Qb2uQ7yj
CrqNIFULClgD9gRJDGuPaJjmf4xTEVUoW/aDrgM0eXV1de4+SRrnaY8s5lcvvNbPI0Hg79EjKLUOKysruX5VjIfPYx2KxWLuPjT8EmXz2uyQREgRJygbOplT5YS+K94TphV7WhXJemoRhlLI0OfsB/wSySmInVJJ586dywWc0Dh64TnzxmctLS35/GrfOV0HEmr8FwGn7i98JtogKE4CuqgNKxaLuYvS9XzQppOgQcwddqsaNtaBXn6pdaBSkPNGq+7iOiCfgLFYXl52+1CKnKCLCl3VF1FM0dHR4eugyRjzpnIYM7NHjx7ZkydPToOkk3iuXbuW/fmf/7mdPXs2J67msMKZoJvZ39/PHfZseoIa7VhL5k9wQ4O72N8E4aXeooxmQK8bIXjAQJaXl73zMKWRCO6+9KUvWXV1tf3qr/6qN7OsqqrKNQ2j7FlL7YGozcxRI6Bb4H2yNhA1KAc+Q5EnNBdaqg50DDJGt23Kw9H+qBhbRdgqwFZBN5+jImVE3KlO2vwbgiQVhfNuIBCp7tQq/kYkzRi+/vWv2+HhoT3zzDN+KGq3cPoekb0xH4iioQfNnl4yqgJp5hfBO43pQHcIcMnmKioqcndt0QKBX9EwmVkOkUK8DDxPYH9wcJCjBkkwaHGhaxXvTSP4r6urMzNz+8autfEpwmcqprSUnb0CRcteUdFnbBjK/IMCc9CCPjFG1rpUKnnWqokTmhPtdswBAX3Q1dXlc0jlzsrKik1MTHjmy+exJ0EW29ranH6gT05XV5fbOxT03NycC5C12zN2TmDS19dnQ0NDNjw87J8FFba3t2dLS0veDXlkZMTREtYVu1L6BvSFSrjm5uYcfQ8KAc3Ee6KVJEi/fPmyXblyxWkcUPiqqipP5MbGxuzu3bt29+5dDzyXlpa8cALh9uDgoF29etWuXr3quplisej7Z2pqytGHe/fu2b179zx43djYcDRzYGDA0RZ+Ojuf3mHGvDx+/Nju3btnd+/edeRqeXnZaTSCzMHBQbt27ZpdvnzZ35PinoqKCg9qRkZG/D1JOEulkvsZSumHhob8PVX0DYIPKgeydP/+/VxjWIIbUC+Qm6GhId/vFRUVvg9HR0ft/v379vDhQ6f3oIUrKipc/3ThwgW7fPmy21l/f7/T1vv7+x50qcCbYImWOmfPnvVk68KFC04/sl/Rzm5ubjoNPD4+7nRhqVSykZGR047bJ/Ug3C7HnaNNojvw6uqqOwwc7MDAgBsEEXR9fX1Ou4ED0syew5uACw6evhr0jOHvqaiWcVFFs7W15YdbV1eX/fM//7OdOXPGPvjBDzpnTHWcXjXA5/C+BEoHBwe5Jm80B2NDnj171qswVEwHRUA2pEJKskec/sDAQK4rK4cbh4heJaDXN4Dk0XND2xkQXKjQDx4bpGxlZcUKhYLfgUQ2RRapnD2XhK6uruauBWC+COI0Gx0YGLC5uTkrFov2rne9yzo6OtzetDeK0ikEvRz2KmTl0NUeQgSj6N20d4t2goYe1AZy2tEXwb52waWSSGkjggEVg+LI6FlTW1vrzgv0lMCCcnnaXZBoqG2hC6LSLcuyZI8adAh0vK+urnZBNlTWuXPnclWmBCEcRjo2MmMSFq3qgULv6enxgHN3d9fnhn2tNBud8rnQUyuvtPcMVWzr6+s+LsY0OzvrSKJenM3nQIV1dnb6uGhauLCw4GuoV0wgfIfO7+npcd0LKGJzc7MHFCAwqnHCHjY3N3NIGLagZdjYsnYsZ760xJ8HehObOH/+vAettbW17gP00lZ8BeLlvb09RwiZd1Bc9jUdpvFR2BYB6+zsrFc6VlZWOgXU39/v848P0krj2IhyYmLCfZqZua8DJcHn9PX1uY8EIQV1wW+BXKvUg0SBfd3b2+s0MT2eSqVSblz4G5J17RsFGqTCbFpAgPAtLS3lbIvWEnqWwYywhtpZnb8HCpw6y9bX1716UNE9foXqr62t9bMMloBE4Ytf/KKVSqXTIOkk
nlu3bmWf+tSn7ODgwKNcvceGzQeM2tTU5IcWzqqvr88zPByoBkYcXIuLi17izuGszcwo2a2vr3ekB0gRxwIcy9ohtFVIHOif0n40VgpzUlkGdaFaKK2GaG9vz2k6Yn+Q6enpXMNHxh/7YXR2djpigwPW/jDaa4WybvrC8EMVDf8fnY5+jiIaZGmqn0IQipC2uro61xU36rvQEFVUVHjvJ726BQG6mbkuBwGtNiXUaz04vDl8oTAoq6bbOQceDpKu71tbW47EUBGFg+RySBAftFsqEtZb5lOVQARRWp2pglGcNsjP4eGhd5MmiMaG9BoUhP1ra2t+yOnVBFR7qQBfaVQqk6KwmM+CpgRtQ+OlASK2RGCO5kSD/MePHzvCSiFEbW2t73kO3nPnzvn8oXfiEEGoChKtgRMOH/FrX1+fI3Y7Ozu+7xG8EpzPzs465at6EBAieuW0tbV58M7hgdh1ZGTEKyC3tra8aKSvr8+RIfyb9nTCdihHf/DggVNFOzs7jjAjEL548aKXztPqIcsytxkQDkrwJyYmHHmura31+R0eHnb0hT3M1SLLy8v+TvxMTU35fm5sbPQ9f/nyZbt8+bIH0J2dnWZmHojj8+/fv28PHjxwqnVtbc3tV+eI9yOAq6mp8fngc+7fv++2qZ3AscWLFy866oKmrqKiwotf0D+NjIy4SBuUnYSlu7vbLl265HPEvtPmriQXzDmi8ZWVldwVQ0NDQ3bp0iVHurq7u12XSPXj5OSkrz/9zVZWVjxQ10QfZAr5Q11dnfuR2IJhfHzc1tfXc/ouupkPDw/bxYsX/Ryhp9L29rY9fvw4d14/fvzYSqWSPXz40HZ2dk6DpJN4hoeHs49//OPW2dmZ6/uB8wXGw7EoJEsGRUv6np6eHISqNxYT5YJkwK1T8q6lqj09PbnLCPUeOW1HsLi4mGtcpmJTDnHtvwLnj+PmV60iIjihki3elQMNwRUPKjRV2k6vKmFccPI0JSNAobpJK//oMYMAFXiYz+BzCHT05nrtxLy8vOwdcRGU0rEV6id135heyYK4GXoKCpEWENrZWbVaXIcBhw50jY6MwJPLLylXpzUBf8bf43PQpyiKiD3V1tbmAmfmisZ1iG9VHwSSxZxDdXB1BoexBpfaPR6kRxtCbm9ve3DJ3KieiqCQPmJcr6CaoOnpaacPqKJqaWnJoTIUIvBsbGzkhMAkFzhngsuOjg7vewNCQAf7mpoaD1Cnpqb8kIIW4FLOqqoqP8y1MocqsYqKChfY05NmdHTUUaydnR1vbMm8DA8P+0HF2lGxt7KykgsI+CwODLQrfX19TmGBMBBcEDhBgT148MDu3buXu0mAfXH+/Hm7evWqf86FCxdyXb85fEdGRuzll1+2u3fv5kr88T0c4oiZh4eHnXJXBObu3bv28ssv24MHD9wOKisr7cyZM9bS0uL/Vt+NPbK+vu7rxVio/JqdnXU/wiE+ODho169fd1qORp8gE9B7UEwjIyO2srLifoRgZHh42K5fv25XrlzxxJAq0K2tLQ90CL4ePHjg50hNTY3vd6gzxMoDAwPua1dXVz14hyLkLFlaWvJ57unp8bkeHBy0ixcvelBBEMeZplQja0/Af+7cOf/3BIVQcFqxp5QZe3ZnZ8eBABKcwcFB74dEMk7H8dXVVQ8AQfMmJydzF+tyPlIAoA05kWSo/m9sbMz+7M/+zNbW1k6DpJN4rl+/nn3yk5+0qqoqP3impqY8KiUQ0JJcbWMPZKpIEhkFARYZLhE31AwZm4oYVbys9AeBGkJS1QiQHSmk/Jd/+ZdmZvbWt77V9RRE3VqGyXUZmiGDcBEIaGClBw8HK2Npb28/8k4IgdEXsFGZX6BlnV8OLb2ziOtGXu/8FovFI++Uml89TLmjr1Qq5eY3ooYNDQ1O6/FOoH04r+npacuyzEvyy80vQkTNQnHs/9L51fuQUvOrVBmfw51vBwcHOVEvNhPnt7m5+cg7dXZ2+liovIz2S7D5
nc4v9quJBnfO1dfX58bCPig3v1AFzA1jiXQWBQ98DlV2ijhRDbmzs5OcX95JdXxxfkEt6Fj9ncyvVq0dN7/qq5jfiooKD2rZk5RpHze/vBNjof2A2m+5+Y32qy1JmF+9K4ykQOcXhEHtNzW/2Az+YXl52SorK62+vt4ZAShVUE/GQoNMquSYX+5rBDGH7lL71YSEeVH7Za339/e9wpX30c9JzS92x/Uiy8vLuauDoE9Za+3fpIiy2i8aqtraWmtoaMjNL2Opra31ar945UmcX66qea355XNAS9H6peZX15r5raysTNrv0tKSffOb3zwVbp/Ug3D7zJkzHoTAk2pfB7Q1QMGxeSIC4J2dHUd5lEqCx0evBN2iYlFurNdKIDJaqjvIZODdEUDy76F+fv3Xf90ODw/tl3/5lx25UOrw4ODAtQ70v1DdkgrG6b4NRM9t9mSUGDRZunZPPjw8dHoE0Sb9jfSuN8r2Y7doKka0mzZZhtnT7tXaSZt/a2bewZh/x4/+W74PyJg2C0DWfDdZC5AwuiaqRlSM/o//+I92eHhob37zm30OaVPA5zBWnFPsWM4D2kaVCq0EQBHRooFqcTN5Y2Ojo1A0S9Qfypdx1lA+VNJoewNgfijJ2P8qyzKrrq7O3cRFSCcNAAAVJUlEQVQOmslBC5UE3K6l6Fx6CiKqlWj9/f1+RUZVVZWjewhAOXig7GJnY73Ik/XV5qZaJk4wqH1mgPv5HCrFsizLBfzQGTq3UKDDw8N25cqVnKDVzLwbPxk5Itvx8XFHIdFuDQwM2PXr1+3atWs2MDDgFVMI6MfGxhwhALWgVYKZuUbkypUrduPGDbt69apTWDQWXFpastu3b9udO3fs/v37XpLPXm9ra3NE5/r163bjxg3XPLa0tOQCixdffNFeeuklP7yodKupqXHEAzTn+vXrfu3S/v6+28ft27ft29/+tj148MC1gMgVOjs77fr1647kXL582c6dO+d7maTu4cOH9tJLL9lLL73k2rGNjQ2nz4eHh+3GjRt27do1p4goKtjZ2XHk7u7du3bnzh27d++eI5Nog3p6enw+KIXv6+tz3zE7O+trwhrR12hra8vt/eLFi3bt2jW7du2a7yG9KBf7wGZHRkbcFyhNqWgb5wvyhLm5OR8L6M3i4qL7G9pKKGVGI2P23/Lyso2NjbmQm6o6mizX1tbm7lRjPmAYaNisgTUFFuvr6+7TtPkxsgIQrUKh4GetVrhubm7aV7/61VNNUuopFAqtZvaHZvY/mNmimf16lmV/dty/uXTpUvbss89aZ2dn7iB4/KpSHr5U7/qBGjt37pxDgNAjlZWVThc9fvw4B/nOzMyYmXnHZjUimnxxbxZIx/T0tHO3GNHW1laOJwfm12qC97znPZZlmX3oQx/KXXI4Pj7uTvXs2bN+SSccO40n29rafD40A2dTkQnt7+/nLqCkEgEKy8w8QNNmahxQHIyaOcOJIxA/e/as3xM0PT2d01ZQAaWiSBqxIXqmW/HOzo47X9Z3cnLS5wO0Bn4eaJdyW+38ynyAZlEhlmWZ9fT02JMnT6ytrc1+/Md/3CkYmq3hYHkP+nvMzs76IV4sFnPXF0C/oP1QhAUnDpKwu7vrVCTUAhldf3+/V/FtbW25ZoL5mJ6e9uaD2l8L+0As3NTU5BWImplCK29ubnoQh3PTxoBavoyuDPvgMJybm/P5QBCv80H7gzNnzuTQRW00R+CEfVCZpfNBALCxseHzwb6lmKFUKnkgwJ67ePGiz4de1Ml8Yh9jY2O5wgMOPyp3QKyKxaKXSy8tLfm+h8pAfE1X4Wgf2l8LVFwb7xFIghBR3TQ4OJjrR8N8qHYMSoSu5qVSyQsVaIB48eJF13xxK8DOzs6Ra0XGxsb8YD979qzLA6B49C42ErSFhQW3D6jYlZUVnw8V7aOLohKtUCjk5mNkZCRXmAC61NzcnLNRWpRoc2C1j9HR0Zy+C5qyv7/f7YP5oHHq9vZ2bj6Y
E6pfEfvrnkPs39LS4hQVPYIePXrk1CT3pVVVVeUahWIfFFccHh564g0NTKC0sLBg1dXVnjgzl6A5HR0dnlxSeTgxMeH2QfNSGmS2trZ6w010TiBKJM74DlDryclJb1eCAL+3tze35zhvVYqi8/HlL3/ZNjc3T4Ok+BQKhU+YWYWZ/S9m9n1m9gUz+49Zlt0u929u3bqVffKTn7Td3V2HVfl1bGzMEQhoIEppWSwydvRHBDUjIyNeVfLkyRNHfjAYNBU4pf39fdvc3PRDRqvqCEa4KLGvr88Nnx4+2udpcnLSfvu3f9t2dnbs5s2btrKyYmbm1ShaDYGmg+8ASdO+Imtra45w6XUMZOdk7lwVQmWGQqjoQerr653W6O/v9woNGkbu7+/nWu4TXFJxR/UeTdGg0Qim0BEhxoRe5LoKrXyhkg2Bupl5B2jmQO/uQ9dAtZiK3Omyi/N47rnnbG1tzTo6OtyBId4kQNDGa+iZ0OBoDxCycxV/6tUdHPboZLj3Sys+9IJI7jjr6OhwJ4hwlKyVSjCCYoJaWhdE2pkKKdA4hJ6gPaOjo74OOzs7nt1y35RqcMzMs17VK2APoJQEChwEg4ODLuyvrq52JBc0YWRkxJE4+pO1tbU5qkGFXXd3t1e8zc/Pu65E6WqQStUjUUrd2NjoYm7sGAQBCn95edkPop6eHtf+kHG3tLR4yweaDiJ0Hh0ddTS3urra15D3GBoa8qCQ7vdodV5++WWvJFpeXvYiggsXLrguBv9QV1fnCCrfz8/Dhw8dZW1sbPSADWRIA2Gliu7cuWO3b9/26lBN+C5duuTzQMAEhbaxseEl7YiiCT6fPHni7QQuXLjgaBu+rba21gXjIyMjjpJxINMSolgsusAbLc7AwIBrTNHhjI6O2t27dx0NKpVKtr297cGeludrjx+SZ/6tNkEE/Wpvb/fvRpDf0dHhNk1FHyX59+7d8+75ZubIILaggRqVugsLC456asUo76nnC4EaNp1lmWsHaQwJrQ+y1traap2dnY44UinX2trqdzrSWJI9NTY2Zk+ePDGzVyoK8Svs7fPnzzuTA30OVffo0aOc7rBYLNq9e/dsfX39NEjSp1Ao1JvZipndzLLs/qt/9nEzm8qy7APl/t3w8HD27LPPWldXlzdR1ChdrwugJwfQNz/nz5/3CgdKVDmgyEYpBdWyWRwLIjQu2d3Z2cndI8TnaZt9InKibDYSVV/vf//7bW9vz975znf6HTdc4qk3q0P5IaStqqryJmugFTReXFpayvWZ4d9o40BgUigivQ6DOaAfRnV1tQuxyTpaW1sdSt3b23PDJwvWbt1U8CEOp4qCDU12xr/ncEJjlmVZLmhRgTlB1+7urguUoYaoDtvc3LSqqiq/YFU7kD///PNWXV1t73jHO3JUBsEL4mbucKLPDL122traXKwOTbmyspKjcNGnETjpDfF8jl6ITPao/X4InBoaGvwdcEzamR3qB3pJ2zNw/5oGTiAcPT09TqFqdZyiX+vr67a1tZW7/JSAgyC6UCi4wB8HD9KLwBeERauPqBzC3lRszGFFNlpVVeV76tq1a3b9+nW/KLWrq8s2NjYcTXj55Zftzp07nkytrKy4zXBIXrlyxQ8rrr5AzIvY+fbt297bpVQqOWI2ODhot27dslu3bnnVWkNDg+/Je/fu2Ysvvmh37tzxQJjKwJaWFrt165bdvHnTx3D+/Hmn2WdmZrwXz+3bt+327dt+tcPe3p4Lba9evWrPPPOM3bx50/fm7u6uIxcvvPCCU2EEgyAe58+ft2eeecaeeeYZRwBAY3d2dpy6unPnjo+Ffm+NjY0u8r5x44bdvHnTBgcHnf5FzzUyMmIvvPCCvfjii15NSsDU3t7u47927ZrbJEUnS0tL9tJLL9mdO3fcniYnJz1Q6O3t9bW7evWqXbt2zZOIiooKR4Hu379vd+7csZdfftl91ZkzZ9xmbty44XaEz6YoZHJy0t+dz9N2G9iv9jAi
EdKEWqvNKC6hMW5/f7+/B1cNnTlzxqn7x9LlnMCVMwbkVZEkroVBwI0tQHtTvLO/v5+7KxSqWhF1zgPeQxtzMge0QtF7F6neNTNHv2AZKPb4+7//e9vY2DgNkvQpFArfb2Z/n2VZrfzZ/2Zm/ynLsv+x3L+7fv169olPfMKqq6t9o4+Pj3uEio6kurrajVxV9jSL4yBWjQNCYG3eBcRO5k1VHY3mUpAy1QlA9BcuXPDD6MyZM2b2CgKCsY2Njdmf/umf2s7Ojg0PD3vJMHQcGYLeG0ZlEgfI48ePfTPX1NQ42kLFA92eW1tbnU6bmZnJUYMEdmhuyA4VRSMgW1lZyVGciIWBuhHDEpReuHDBNSo7OzsezCptRNaC/ktpAXr7UMm0uLjo1ATZ/tLSkpfst7W1uaPi0GppaXGtAGJpvvvFF190alZLo3E2vb29rmtZWVnxtcPZaB8t0CqohMHBQafd0Dso3ExQvrGxcYQSoWy5tbXVqqurfe1wluPj4y585UqS1tZWt3nNCtFW8e6gThMTE95kjsZwrBsIJkENZfyImEGc6KdE4IfNQhs0Njb6tSULCwsedCFoJaPEyUIDE3TpvXZKORDEUSjQ1tbma4d4FCefZZmjbbr3qOyjwofMfHBw0DUmSiFrg8OVlRW/oFi7mKNzQUdDc9nFxcXcuxPEg85AU/Dd9HuC6gA1py8ZASf6I1238+fPe/VuZWWlo83Y7ejoaO66IRIwbIfqyIaGBkd7Qd5pwsilxyRg6i9JQnZ2drwLuVKSMzMz7q/oywYKwdpVV1e7Ro/1Yv1mZmZcwwfKCX2PhODMmTNWUVHhCLf6DBJDrlBRGQOVz83Nza57UykFCcz+/r7PEfOG/qynp8eDTJWEsA4I/ysrKz0wYf0GBgYc6X7y5MkR2cHs7KzTgSTPShtypROI2sLCQs5nKJIESo2/6enp8R5vULVQuKOjo15hh+63vr4+RwNT8c3aaK8m1pC7ULMss76+PvvGN75hCwsLp0GSPoVC4UfM7C+yLOuWP/t5M/vPWZb99+Hv/oKZ/cKr/3nTzF76bzXOf2NPu72i3XqjPW/U9zY7fffTd3/jPW/Ud3+jvreZ2ZUsyxq/2w+peu2/8j31lMysKfxZk5ltxL+YZdlHzeyjZmaFQuGbWZa95V9/eP/2njfqu79R39vs9N1P3/2N97xR3/2N+t5mr7z7SXxOxUl8yL+h576ZVRUKhUvyZ28ys7Ki7dPn9Dl9Tp/T5/Q5fU6f1PPvKkjKsmzTzD5tZv9XoVCoLxQK/52Z/U9m9vH/f0d2+pw+p8/pc/qcPqfP99rz7ypIevV5n5nVmtm8mX3CzH7xuPL/V5+P/quP6t/u80Z99zfqe5udvvsb9Tl99zfe80Z9b7MTevd/V8Lt0+f0OX1On9Pn9Dl9Tp+Tev49Ikmnz+lz+pw+p8/pc/qcPt/1cxok/X/t3W+oHFcZx/HvL42YF0HT1lJNxIRGbG2llVz8g2IU4h8sLYoplDaY+KaKJVRQRBASg1XEQME/LdUXNU0kSBO11RaLKDWYRqxGpcVIfNHW2jQiNsGYe9PbP+nji3M2TCazuyHdvXP3zO8Dw73MmYXn3OfMznPn7OwxMzMza9DpIknSBZLulTQj6SlJN7Yd0zhIerWku3Ifj0v6i6SP5rYVkkLSdGXb1HbMoyRpj6TZSv/+XmlbI+mgpBOSfiNpeZuxjlItp9OSTkr6bm4rKu+SNkraL+l5SXfX2vrmWMk3JR3J21b11p6ZEP36Lundkn4l6aik/0jaLekNlfYtkl6sjYFLWunEORjQ74Fju/Ccr6v1+0T+W0zl9onOOQy+nuX2kZ7vnS6SgDuAF4CLgXXAnZKuaDeksVgIPA28H3gtsAnYJWlF5ZglEbE4b7fOfYhjt7HSv0sBJL2O9DTkJuACYD9wT4sxjlSlv4tJY/w5YHftsFLyfhj4GvCD6s6zyPGngY+TvirkSuAa4DNzEO8oNfYdOJ/0
4dUVwHLS98Vtqx1zT3WcRMQT4w52hPr1u6ff2C425xGxs3be3ww8Afy5ctgk5xwGXM/Gcb53tkhSWudtLbApIqYj4mHg58An241s9CJiJiK2RMQ/IuLliHgAeBKYaju2ln0COBARuyNiFtgCXCXpsnbDGovrSE987m07kHGIiJ9GxH3AkVrTsBxvAG6LiEMR8QxwG/CpOQp7JPr1PSIezP3+X0ScAG4H3ttKkGMwIOfDFJvzBhuAHVHQE1pDrmcjP987WyQBbwFO9hbCzR4FSryTdBpJF5P6X/1qhKckHZK0LVfjpfmGpGcl7ZP0gbzvClLOgVPfs/U4ZY6Bfm+Wped9WI5Pa6fs94DVnPnFutfm6bgDkj7bRlBj1G9sdyLneZppNbCj1lRUzmvXs5Gf710ukhYDx2r7jgGveK2X+UzSq4CdwPaIOEha1+cdpNvxU6T+72wvwrH4EnAJsIw0/XC/pJV0ZAxIehPp1vT2yu4u5B2G57jefgxYPGmfURlG0pXAZuCLld27gLcCFwE3AZsl3dBCeKM2bGx3IufAemBvRDxZ2VdUzhuuZyM/37tcJJ31Om+lkLSA9O3jLwAbAfJU4/6IeCki/p33f1hS/W8zsSLikYg4HhHPR8R2YB9wNd0ZA+uBh6tvll3IezYsx/X21wDTJU1PSHoz8CDwuYg4Nd0aEX+LiMMRcTIifgd8mzQtO9HOYmwXn/NsPaf/Y1RUzpuuZ4zhfO9ykdSpdd5ypXwX6QO8ayPixT6H9gZLaf9VVQWpfwdIOQdOfU5tJeWNgTPeLBuUmvdhOT6tncLeA/KUy6+BWyNi2PJMvfOiNPWxXXTOAZSW5FoK/HjIoROZ8wHXs5Gf750tkjq4ztudpNus10bEc72dkt4l6VJJCyRdCHwH2BMR9VuWE0nSEkkfkbRI0kJJ60jz9L8E7gXeJmmtpEWk6YjH8m3bIkh6D2macXdtf1F5z7ldBJwHnNfLN8NzvAP4vKRlkpYCXwDubqEL56xf3yUtAx4C7oiI7zW87mOSzs+PRb8TuAX42dxGf+4G9HvY2C4255VDNgA/iYjjtddNdM4rGq9njON8j4jObqRHBO8DZoB/Aje2HdOY+rmc9B/DLOl2Y29bB9xAejJgBvhXHkSvbzvmEfb9IuCPpNut/wV+D3yo0v5B4CDp8fg9wIq2Yx5x/78P/LBhf1F5Jz3FErVty7Ack/6L3goczdtW8nJNk7L16zvwlfx79ZyfrrzuR6Sno6bz3+eWtvsyon4PHNsl5zy3LcrvdWsaXjfROc996Hs9y+0jPd+9dpuZmZlZg85Ot5mZmZkN4iLJzMzMrIGLJDMzM7MGLpLMzMzMGrhIMjMzM2vgIsnMzMysgYskMzMzswYukszMzMwauEgys6JJ+oOkXZK+KulxSbOSHpO0pu3YzGx+8zdum1mx8npWx4GXgUeAb5HWu/o6aU27lRHxbHsRmtl8tnD4IWZmE+ty0lpWvyWt2XcSQNJR0rpO7yMtimlmdgZPt5lZyabyzy/3CqSstyr4hXMcj5lNEBdJZlayVcDhiNhX2780/zw0x/GY2QRxkWRmJVsFPNOw/3rgBLB3bsMxs0nizySZWZEkLQCuAmYkLYyIl/L+pcDNwO0RMdNmjGY2v/npNjMrkqTLgQPA06QPbm8D3ghsBo4AqyNitr0IzWy+83SbmZVqVf55NbAEuB/YCvwCWOMCycyG8XSbmZVqCjgUEX8Frmk7GDObPL6TZGalWgX8qe0gzGxyuUgys+JIEvB2XCSZ2SvgD26bmZmZNfCdJDMzM7MGLpLMzMzMGrhIMjMzM2vgIsnMzMysgYskMzMzswYukszMzMwauEgyMzMza+AiyczMzKzB/wHdRN3AtLxoHwAAAABJRU5ErkJggg==\n",
      "text/plain": [
       "<Figure size 648x360 with 2 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "i1, i2, crop_i = 100, 101, 150\n",
    "p1, p2, p3 = 22, 60, 35\n",
    "fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(9, 5))\n",
    "ax1.plot([p1, p1], [-1, 1], \"k--\", label=\"$p = {}$\".format(p1))\n",
    "ax1.plot([p2, p2], [-1, 1], \"k--\", label=\"$p = {}$\".format(p2), alpha=0.5)\n",
    "ax1.plot(p3, PE[p3, i1], \"bx\", label=\"$p = {}$\".format(p3))\n",
    "ax1.plot(PE[:,i1], \"b-\", label=\"$i = {}$\".format(i1))\n",
    "ax1.plot(PE[:,i2], \"r-\", label=\"$i = {}$\".format(i2))\n",
    "ax1.plot([p1, p2], [PE[p1, i1], PE[p2, i1]], \"bo\")\n",
    "ax1.plot([p1, p2], [PE[p1, i2], PE[p2, i2]], \"ro\")\n",
    "ax1.legend(loc=\"center right\", fontsize=14, framealpha=0.95)\n",
    "ax1.set_ylabel(\"$P_{(p,i)}$\", rotation=0, fontsize=16)\n",
    "ax1.grid(True, alpha=0.3)\n",
    "ax1.hlines(0, 0, max_steps - 1, color=\"k\", linewidth=1, alpha=0.3)\n",
    "ax1.axis([0, max_steps - 1, -1, 1])\n",
    "ax2.imshow(PE.T[:crop_i], cmap=\"gray\", interpolation=\"bilinear\", aspect=\"auto\")\n",
    "ax2.hlines(i1, 0, max_steps - 1, color=\"b\")\n",
    "cheat = 2 # need to raise the red line a bit, or else it hides the blue one\n",
    "ax2.hlines(i2+cheat, 0, max_steps - 1, color=\"r\")\n",
    "ax2.plot([p1, p1], [0, crop_i], \"k--\")\n",
    "ax2.plot([p2, p2], [0, crop_i], \"k--\", alpha=0.5)\n",
    "ax2.plot([p1, p2], [i2+cheat, i2+cheat], \"ro\")\n",
    "ax2.plot([p1, p2], [i1, i1], \"bo\")\n",
    "ax2.axis([0, max_steps - 1, 0, crop_i])\n",
    "ax2.set_xlabel(\"$p$\", fontsize=16)\n",
    "ax2.set_ylabel(\"$i$\", rotation=0, fontsize=16)\n",
    "save_fig(\"positional_embedding_plot\")\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 74,
   "metadata": {},
   "outputs": [],
   "source": [
    "embed_size = 512; max_steps = 500; vocab_size = 10000\n",
    "encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
    "decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
    "embeddings = keras.layers.Embedding(vocab_size, embed_size)\n",
    "encoder_embeddings = embeddings(encoder_inputs)\n",
    "decoder_embeddings = embeddings(decoder_inputs)\n",
    "positional_encoding = PositionalEncoding(max_steps, max_dims=embed_size)\n",
    "encoder_in = positional_encoding(encoder_embeddings)\n",
    "decoder_in = positional_encoding(decoder_embeddings)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here is a (very) simplified Transformer (the actual architecture has skip connections, layer norm, dense nets, and most importantly it uses Multi-Head Attention instead of regular Attention):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "metadata": {},
   "outputs": [],
   "source": [
    "Z = encoder_in\n",
    "for N in range(6):\n",
    "    Z = keras.layers.Attention(use_scale=True)([Z, Z])\n",
    "\n",
    "encoder_outputs = Z\n",
    "Z = decoder_in\n",
    "for N in range(6):\n",
    "    Z = keras.layers.Attention(use_scale=True, causal=True)([Z, Z])\n",
    "    Z = keras.layers.Attention(use_scale=True)([Z, encoder_outputs])\n",
    "\n",
    "outputs = keras.layers.TimeDistributed(\n",
    "    keras.layers.Dense(vocab_size, activation=\"softmax\"))(Z)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here's a basic implementation of the `MultiHeadAttention` layer. One will likely be added to `keras.layers` in the near future. Note that a `Conv1D` layer with `kernel_size=1` (and the default `padding=\"valid\"` and `strides=1`) is equivalent to a `TimeDistributed(Dense(...))` layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "metadata": {},
   "outputs": [],
   "source": [
    "K = keras.backend\n",
    "\n",
    "class MultiHeadAttention(keras.layers.Layer):\n",
    "    def __init__(self, n_heads, causal=False, use_scale=False, **kwargs):\n",
    "        self.n_heads = n_heads\n",
    "        self.causal = causal\n",
    "        self.use_scale = use_scale\n",
    "        super().__init__(**kwargs)\n",
    "    def build(self, batch_input_shape):\n",
    "        self.dims = batch_input_shape[0][-1]\n",
    "        self.q_dims, self.v_dims, self.k_dims = [self.dims // self.n_heads] * 3 # could be hyperparameters instead\n",
    "        self.q_linear = keras.layers.Conv1D(self.n_heads * self.q_dims, kernel_size=1, use_bias=False)\n",
    "        self.v_linear = keras.layers.Conv1D(self.n_heads * self.v_dims, kernel_size=1, use_bias=False)\n",
    "        self.k_linear = keras.layers.Conv1D(self.n_heads * self.k_dims, kernel_size=1, use_bias=False)\n",
    "        self.attention = keras.layers.Attention(causal=self.causal, use_scale=self.use_scale)\n",
    "        self.out_linear = keras.layers.Conv1D(self.dims, kernel_size=1, use_bias=False)\n",
    "        super().build(batch_input_shape)\n",
    "    def _multi_head_linear(self, inputs, linear):\n",
    "        shape = K.concatenate([K.shape(inputs)[:-1], [self.n_heads, -1]])\n",
    "        projected = K.reshape(linear(inputs), shape)\n",
    "        perm = K.permute_dimensions(projected, [0, 2, 1, 3])\n",
    "        return K.reshape(perm, [shape[0] * self.n_heads, shape[1], -1])\n",
    "    def call(self, inputs):\n",
    "        q = inputs[0]\n",
    "        v = inputs[1]\n",
    "        k = inputs[2] if len(inputs) > 2 else v\n",
    "        shape = K.shape(q)\n",
    "        q_proj = self._multi_head_linear(q, self.q_linear)\n",
    "        v_proj = self._multi_head_linear(v, self.v_linear)\n",
    "        k_proj = self._multi_head_linear(k, self.k_linear)\n",
    "        multi_attended = self.attention([q_proj, v_proj, k_proj])\n",
    "        shape_attended = K.shape(multi_attended)\n",
    "        reshaped_attended = K.reshape(multi_attended, [shape[0], self.n_heads, shape_attended[1], shape_attended[2]])\n",
    "        perm = K.permute_dimensions(reshaped_attended, [0, 2, 1, 3])\n",
    "        concat = K.reshape(perm, [shape[0], shape_attended[1], -1])\n",
    "        return self.out_linear(concat)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:Layer multi_head_attention is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.\n",
      "\n",
      "If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.\n",
      "\n",
      "To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "TensorShape([2, 50, 512])"
      ]
     },
     "execution_count": 77,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Q = np.random.rand(2, 50, 512)\n",
    "V = np.random.rand(2, 80, 512)\n",
    "multi_attn = MultiHeadAttention(8)\n",
    "multi_attn([Q, V]).shape"
   ]
  },
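  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of that equivalence claim, here is a small NumPy sketch (not part of the book's code): a kernel-size-1 convolution just applies the same weight matrix independently at every time step, which is exactly what a `TimeDistributed(Dense(...))` layer does:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(42)\n",
    "X_check = rng.normal(size=(2, 10, 16))  # (batch, time, channels)\n",
    "W_check = rng.normal(size=(16, 8))      # shared projection kernel\n",
    "\n",
    "# TimeDistributed(Dense): apply W_check independently at each time step\n",
    "dense_out = X_check @ W_check\n",
    "\n",
    "# Conv1D with kernel_size=1 (no bias): the same matrix product\n",
    "conv_out = np.einsum(\"bti,io->bto\", X_check, W_check)\n",
    "\n",
    "assert np.allclose(dense_out, conv_out)  # identical results\n",
    "```"
   ]
  },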
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Exercise solutions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. to 7."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "See Appendix A."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 8.\n",
    "_Exercise:_ Embedded Reber grammars _were used by Hochreiter and Schmidhuber in [their paper](https://homl.info/93) about LSTMs. They are artificial grammars that produce strings such as \"BPBTSXXVPSEPE.\" Check out Jenny Orr's [nice introduction](https://homl.info/108) to this topic. Choose a particular embedded Reber grammar (such as the one represented on Jenny Orr's page), then train an RNN to identify whether a string respects that grammar or not. You will first need to write a function capable of generating a training batch containing about 50% strings that respect the grammar, and 50% that don't._"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First we need to build a function that generates strings based on a grammar. The grammar will be represented as a list of possible transitions for each state. A transition specifies the string to output (or a grammar to generate it) and the next state."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {},
   "outputs": [],
   "source": [
    "default_reber_grammar = [\n",
    "    [(\"B\", 1)],           # (state 0) =B=>(state 1)\n",
    "    [(\"T\", 2), (\"P\", 3)], # (state 1) =T=>(state 2) or =P=>(state 3)\n",
    "    [(\"S\", 2), (\"X\", 4)], # (state 2) =S=>(state 2) or =X=>(state 4)\n",
    "    [(\"T\", 3), (\"V\", 5)], # and so on...\n",
    "    [(\"X\", 3), (\"S\", 6)],\n",
    "    [(\"P\", 4), (\"V\", 6)],\n",
    "    [(\"E\", None)]]        # (state 6) =E=>(terminal state)\n",
    "\n",
    "embedded_reber_grammar = [\n",
    "    [(\"B\", 1)],\n",
    "    [(\"T\", 2), (\"P\", 3)],\n",
    "    [(default_reber_grammar, 4)],\n",
    "    [(default_reber_grammar, 5)],\n",
    "    [(\"T\", 6)],\n",
    "    [(\"P\", 6)],\n",
    "    [(\"E\", None)]]\n",
    "\n",
    "def generate_string(grammar):\n",
    "    state = 0\n",
    "    output = []\n",
    "    while state is not None:\n",
    "        index = np.random.randint(len(grammar[state]))\n",
    "        production, state = grammar[state][index]\n",
    "        if isinstance(production, list):\n",
    "            production = generate_string(grammar=production)\n",
    "        output.append(production)\n",
    "    return \"\".join(output)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's generate a few strings based on the default Reber grammar:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "BTXXTTVPXTVPXTTVPSE BPVPSE BTXSE BPVVE BPVVE BTSXSE BPTVPXTTTVVE BPVVE BTXSE BTXXVPSE BPTTTTTTTTVVE BTXSE BPVPSE BTXSE BPTVPSE BTXXTVPSE BPVVE BPVVE BPVVE BPTTVVE BPVVE BPVVE BTXXVVE BTXXVVE BTXXVPXVVE "
     ]
    }
   ],
   "source": [
    "np.random.seed(42)\n",
    "\n",
    "for _ in range(25):\n",
    "    print(generate_string(default_reber_grammar), end=\" \")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Looks good. Now let's generate a few strings based on the embedded Reber grammar:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 80,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "BTBPTTTVPXTVPXTTVPSETE BPBPTVPSEPE BPBPVVEPE BPBPVPXVVEPE BPBTXXTTTTVVEPE BPBPVPSEPE BPBTXXVPSEPE BPBTSSSSSSSXSEPE BTBPVVETE BPBTXXVVEPE BPBTXXVPSEPE BTBTXXVVETE BPBPVVEPE BPBPVVEPE BPBTSXSEPE BPBPVVEPE BPBPTVPSEPE BPBTXXVVEPE BTBPTVPXVVETE BTBPVVETE BTBTSSSSSSSXXVVETE BPBTSSSXXTTTTVPSEPE BTBPTTVVETE BPBTXXTVVEPE BTBTXSETE "
     ]
    }
   ],
   "source": [
    "np.random.seed(42)\n",
    "\n",
    "for _ in range(25):\n",
    "    print(generate_string(embedded_reber_grammar), end=\" \")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Okay, now we need a function to generate strings that do not respect the grammar. We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 81,
   "metadata": {},
   "outputs": [],
   "source": [
    "POSSIBLE_CHARS = \"BEPSTVX\"\n",
    "\n",
    "def generate_corrupted_string(grammar, chars=POSSIBLE_CHARS):\n",
    "    good_string = generate_string(grammar)\n",
    "    index = np.random.randint(len(good_string))\n",
    "    good_char = good_string[index]\n",
    "    bad_char = np.random.choice(sorted(set(chars) - set(good_char)))\n",
    "    return good_string[:index] + bad_char + good_string[index + 1:]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's look at a few corrupted strings:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "BTBPTTTPPXTVPXTTVPSETE BPBTXEEPE BPBPTVVVEPE BPBTSSSSXSETE BPTTXSEPE BTBPVPXTTTTTTEVETE BPBTXXSVEPE BSBPTTVPSETE BPBXVVEPE BEBTXSETE BPBPVPSXPE BTBPVVVETE BPBTSXSETE BPBPTTTPTTTTTVPSEPE BTBTXXTTSTVPSETE BBBTXSETE BPBTPXSEPE BPBPVPXTTTTVPXTVPXVPXTTTVVEVE BTBXXXTVPSETE BEBTSSSSSXXVPXTVVETE BTBXTTVVETE BPBTXSTPE BTBTXXTTTVPSBTE BTBTXSETX BTBTSXSSTE "
     ]
    }
   ],
   "source": [
    "np.random.seed(42)\n",
    "\n",
    "for _ in range(25):\n",
    "    print(generate_corrupted_string(embedded_reber_grammar), end=\" \")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We cannot feed strings directly to an RNN, so we need to encode them somehow. One option would be to one-hot encode each character; another is to use embeddings. Let's go with embeddings (though since there are just a handful of characters, one-hot encoding would probably work well too). For embeddings to work, we need to convert each string into a sequence of character IDs. Let's write a function for that, using each character's index in the string of possible characters \"BEPSTVX\":"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 83,
   "metadata": {},
   "outputs": [],
   "source": [
    "def string_to_ids(s, chars=POSSIBLE_CHARS):\n",
    "    return [chars.index(c) for c in s]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[0, 4, 4, 4, 6, 6, 5, 5, 1, 4, 1]"
      ]
     },
     "execution_count": 84,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "string_to_ids(\"BTTTXXVVETE\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can now generate the dataset, with 50% good strings, and 50% bad strings:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 85,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_dataset(size):\n",
    "    good_strings = [string_to_ids(generate_string(embedded_reber_grammar))\n",
    "                    for _ in range(size // 2)]\n",
    "    bad_strings = [string_to_ids(generate_corrupted_string(embedded_reber_grammar))\n",
    "                   for _ in range(size - size // 2)]\n",
    "    all_strings = good_strings + bad_strings\n",
    "    X = tf.ragged.constant(all_strings, ragged_rank=1)\n",
    "    y = np.array([[1.] for _ in range(len(good_strings))] +\n",
    "                 [[0.] for _ in range(len(bad_strings))])\n",
    "    return X, y"
   ]
  },
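  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the strings have different lengths, `generate_dataset()` stores them in a `tf.RaggedTensor`. If ragged tensors were not available, we would have to pad the sequences to a common length ourselves. Here is a minimal NumPy sketch of that manual alternative (the `pad_to_dense()` helper is hypothetical, not used elsewhere in this notebook); note that if 0 is reserved as the padding value, real token IDs must not collide with it (the date exercise below shifts all IDs up by 1 for exactly this reason):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def pad_to_dense(seqs, pad_value=0):\n",
    "    # Pad variable-length ID lists into a dense (n, max_len) int array\n",
    "    max_len = max(len(s) for s in seqs)\n",
    "    out = np.full((len(seqs), max_len), pad_value, dtype=np.int32)\n",
    "    for i, s in enumerate(seqs):\n",
    "        out[i, :len(s)] = s\n",
    "    return out\n",
    "\n",
    "batch = pad_to_dense([[1, 5, 2], [1, 3, 6, 2]])\n",
    "assert batch.shape == (2, 4)\n",
    "assert batch[0, 3] == 0  # the shorter sequence was padded\n",
    "```"
   ]
  },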
  {
   "cell_type": "code",
   "execution_count": 86,
   "metadata": {},
   "outputs": [],
   "source": [
    "np.random.seed(42)\n",
    "\n",
    "X_train, y_train = generate_dataset(10000)\n",
    "X_valid, y_valid = generate_dataset(2000)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's take a look at the first training sequence:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 87,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<tf.Tensor: shape=(22,), dtype=int32, numpy=\n",
       "array([0, 4, 0, 2, 4, 4, 4, 5, 2, 6, 4, 5, 2, 6, 4, 4, 5, 2, 3, 1, 4, 1],\n",
       "      dtype=int32)>"
      ]
     },
     "execution_count": 87,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X_train[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What class does it belong to?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 88,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([1.])"
      ]
     },
     "execution_count": 88,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "y_train[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Perfect! We are now ready to create the RNN that will identify good strings. Let's build a simple sequence binary classifier:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 89,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/20\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
       "/Users/ageron/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/indexed_slices.py:433: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.\n",
       "  \"Converting sparse IndexedSlices to a dense Tensor of unknown shape. \"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "313/313 [==============================] - 5s 42us/sample - loss: 0.6847 - accuracy: 0.5138 - val_loss: 8.1518 - val_accuracy: 0.6115\n",
       "Epoch 2/20\n",
       "313/313 [==============================] - 3s 28us/sample - loss: 0.6524 - accuracy: 0.5571 - val_loss: 7.9259 - val_accuracy: 0.6085\n",
       "Epoch 3/20\n",
       "313/313 [==============================] - 3s 28us/sample - loss: 0.6686 - accuracy: 0.5783 - val_loss: 7.7483 - val_accuracy: 0.6110\n",
       "Epoch 4/20\n",
       "313/313 [==============================] - 3s 28us/sample - loss: 0.6201 - accuracy: 0.5969 - val_loss: 7.5567 - val_accuracy: 0.6110\n",
       "Epoch 5/20\n",
       "313/313 [==============================] - 3s 28us/sample - loss: 0.5705 - accuracy: 0.6428 - val_loss: 6.9117 - val_accuracy: 0.7075\n",
       "Epoch 6/20\n",
       "313/313 [==============================] - 3s 29us/sample - loss: 0.5660 - accuracy: 0.7008 - val_loss: 5.7277 - val_accuracy: 0.7580\n",
       "Epoch 7/20\n",
       "313/313 [==============================] - 3s 28us/sample - loss: 0.3997 - accuracy: 0.8336 - val_loss: 4.3641 - val_accuracy: 0.8550\n",
       "Epoch 8/20\n",
       "313/313 [==============================] - 3s 29us/sample - loss: 0.1771 - accuracy: 0.8958 - val_loss: 1.5009 - val_accuracy: 0.9605\n",
       "Epoch 9/20\n",
       "313/313 [==============================] - 3s 29us/sample - loss: 0.2710 - accuracy: 0.9566 - val_loss: 3.2648 - val_accuracy: 0.9005\n",
       "Epoch 10/20\n",
       "313/313 [==============================] - 3s 29us/sample - loss: 0.2574 - accuracy: 0.9620 - val_loss: 1.0385 - val_accuracy: 0.9790\n",
       "Epoch 11/20\n",
       "313/313 [==============================] - 3s 29us/sample - loss: 0.0356 - accuracy: 0.9845 - val_loss: 0.1081 - val_accuracy: 1.0000\n",
       "Epoch 12/20\n",
       "313/313 [==============================] - 4s 29us/sample - loss: 0.0029 - accuracy: 1.0000 - val_loss: 0.0261 - val_accuracy: 1.0000\n",
       "Epoch 13/20\n",
       "313/313 [==============================] - 3s 29us/sample - loss: 0.0019 - accuracy: 1.0000 - val_loss: 0.0144 - val_accuracy: 1.0000\n",
       "Epoch 14/20\n",
       "313/313 [==============================] - 3s 28us/sample - loss: 8.1710e-04 - accuracy: 1.0000 - val_loss: 0.0101 - val_accuracy: 1.0000\n",
       "Epoch 15/20\n",
       "313/313 [==============================] - 3s 29us/sample - loss: 5.8225e-04 - accuracy: 1.0000 - val_loss: 0.0079 - val_accuracy: 1.0000\n",
       "Epoch 16/20\n",
       "313/313 [==============================] - 3s 29us/sample - loss: 5.8369e-04 - accuracy: 1.0000 - val_loss: 0.0064 - val_accuracy: 1.0000\n",
       "Epoch 17/20\n",
       "313/313 [==============================] - 4s 30us/sample - loss: 3.8744e-04 - accuracy: 1.0000 - val_loss: 0.0054 - val_accuracy: 1.0000\n",
       "Epoch 18/20\n",
       "313/313 [==============================] - 4s 29us/sample - loss: 4.2988e-04 - accuracy: 1.0000 - val_loss: 0.0047 - val_accuracy: 1.0000\n",
       "Epoch 19/20\n",
       "313/313 [==============================] - 4s 29us/sample - loss: 2.7449e-04 - accuracy: 1.0000 - val_loss: 0.0041 - val_accuracy: 1.0000\n",
       "Epoch 20/20\n",
       "313/313 [==============================] - 3s 29us/sample - loss: 2.9469e-04 - accuracy: 1.0000 - val_loss: 0.0037 - val_accuracy: 1.0000\n"
     ]
    }
   ],
   "source": [
    "np.random.seed(42)\n",
    "tf.random.set_seed(42)\n",
    "\n",
    "embedding_size = 5\n",
    "\n",
    "model = keras.models.Sequential([\n",
    "    keras.layers.InputLayer(input_shape=[None], dtype=tf.int32, ragged=True),\n",
    "    keras.layers.Embedding(input_dim=len(POSSIBLE_CHARS), output_dim=embedding_size),\n",
    "    keras.layers.GRU(30),\n",
    "    keras.layers.Dense(1, activation=\"sigmoid\")\n",
    "])\n",
    "optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.95, nesterov=True)\n",
    "model.compile(loss=\"binary_crossentropy\", optimizer=optimizer, metrics=[\"accuracy\"])\n",
    "history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's test our RNN on two tricky strings: the first is bad while the second is good. They differ only in their second-to-last character. If the RNN gets this right, it has managed to notice the pattern that the second letter should always equal the second-to-last letter. That requires a fairly long short-term memory (which is why we used a GRU cell)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 90,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Estimated probability that these are Reber strings:\n",
      "BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE: 0.40%\n",
      "BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE: 99.96%\n"
     ]
    }
   ],
   "source": [
    "test_strings = [\"BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE\",\n",
    "                \"BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE\"]\n",
    "X_test = tf.ragged.constant([string_to_ids(s) for s in test_strings], ragged_rank=1)\n",
    "\n",
    "y_proba = model.predict(X_test)\n",
    "print()\n",
    "print(\"Estimated probability that these are Reber strings:\")\n",
    "for index, string in enumerate(test_strings):\n",
    "    print(\"{}: {:.2f}%\".format(string, 100 * y_proba[index][0]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Ta-da! It worked fine. The RNN found the correct answers with very high confidence. :)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 9.\n",
    "_Exercise: Train an Encoder–Decoder model that can convert a date string from one format to another (e.g., from \"April 22, 2019\" to \"2019-04-22\")._"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's start by creating the dataset. We will use random days between 1000-01-01 and 9999-12-31:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 91,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datetime import date\n",
    "\n",
    "# cannot use strftime()'s %B format since it depends on the locale\n",
    "MONTHS = [\"January\", \"February\", \"March\", \"April\", \"May\", \"June\",\n",
    "          \"July\", \"August\", \"September\", \"October\", \"November\", \"December\"]\n",
    "\n",
    "def random_dates(n_dates):\n",
    "    min_date = date(1000, 1, 1).toordinal()\n",
    "    max_date = date(9999, 12, 31).toordinal()\n",
    "\n",
    "    ordinals = np.random.randint(max_date - min_date, size=n_dates) + min_date\n",
    "    dates = [date.fromordinal(ordinal) for ordinal in ordinals]\n",
    "\n",
    "    x = [MONTHS[dt.month - 1] + \" \" + dt.strftime(\"%d, %Y\") for dt in dates]\n",
    "    y = [dt.isoformat() for dt in dates]\n",
    "    return x, y"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here are a few random dates, displayed in both the input format and the target format:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 92,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Input                    Target                   \n",
      "--------------------------------------------------\n",
      "September 20, 7075       7075-09-20               \n",
      "May 15, 8579             8579-05-15               \n",
      "January 11, 7103         7103-01-11               \n"
     ]
    }
   ],
   "source": [
    "np.random.seed(42)\n",
    "\n",
    "n_dates = 3\n",
    "x_example, y_example = random_dates(n_dates)\n",
    "print(\"{:25s}{:25s}\".format(\"Input\", \"Target\"))\n",
    "print(\"-\" * 50)\n",
    "for idx in range(n_dates):\n",
    "    print(\"{:25s}{:25s}\".format(x_example[idx], y_example[idx]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's get the list of all possible characters in the inputs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 93,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "' ,0123456789ADFJMNOSabceghilmnoprstuvy'"
      ]
     },
     "execution_count": 93,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "INPUT_CHARS = \"\".join(sorted(set(\"\".join(MONTHS) + \"0123456789, \")))\n",
    "INPUT_CHARS"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And here's the list of possible characters in the outputs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 94,
   "metadata": {},
   "outputs": [],
   "source": [
    "OUTPUT_CHARS = \"0123456789-\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's write a function to convert a string to a list of character IDs, as we did in the previous exercise:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 95,
   "metadata": {},
   "outputs": [],
   "source": [
    "def date_str_to_ids(date_str, chars=INPUT_CHARS):\n",
    "    return [chars.index(c) for c in date_str]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 96,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[7, 11, 19, 22, 11, 16, 9, 11, 20, 38, 28, 26, 37, 38, 33, 26, 33, 31]"
      ]
     },
     "execution_count": 96,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "date_str_to_ids(x_example[0], INPUT_CHARS)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 97,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[7, 0, 7, 5, 10, 0, 9, 10, 2, 0]"
      ]
     },
     "execution_count": 97,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "date_str_to_ids(y_example[0], OUTPUT_CHARS)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "metadata": {},
   "outputs": [],
   "source": [
    "def prepare_date_strs(date_strs, chars=INPUT_CHARS):\n",
    "    X_ids = [date_str_to_ids(dt, chars) for dt in date_strs]\n",
    "    X = tf.ragged.constant(X_ids, ragged_rank=1)\n",
    "    return (X + 1).to_tensor() # using 0 as the padding token ID\n",
    "\n",
    "def create_dataset(n_dates):\n",
    "    x, y = random_dates(n_dates)\n",
    "    return prepare_date_strs(x, INPUT_CHARS), prepare_date_strs(y, OUTPUT_CHARS)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 99,
   "metadata": {},
   "outputs": [],
   "source": [
    "np.random.seed(42)\n",
    "\n",
    "X_train, Y_train = create_dataset(10000)\n",
    "X_valid, Y_valid = create_dataset(2000)\n",
    "X_test, Y_test = create_dataset(2000)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 100,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<tf.Tensor: shape=(10,), dtype=int32, numpy=array([ 8,  1,  8,  6, 11,  1, 10, 11,  3,  1], dtype=int32)>"
      ]
     },
     "execution_count": 100,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Y_train[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### First version: a very basic seq2seq model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's first try the simplest possible model: we feed the input sequence to the encoder (an embedding layer followed by a single LSTM layer), which outputs a single vector, then that vector goes through the decoder (a single LSTM layer, followed by a dense output layer), which outputs a sequence of vectors, each representing the estimated probabilities for all possible output characters.\n",
    "\n",
    "Since the decoder expects a sequence as input, we repeat the vector (which is output by the encoder) as many times as the longest possible output sequence."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 101,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/20\n",
      "313/313 [==============================] - 6s 18ms/step - loss: 1.8111 - accuracy: 0.3533 - val_loss: 1.3581 - val_accuracy: 0.4965\n",
      "Epoch 2/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 1.3518 - accuracy: 0.5103 - val_loss: 1.1915 - val_accuracy: 0.5694\n",
      "Epoch 3/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 1.1706 - accuracy: 0.5908 - val_loss: 0.9983 - val_accuracy: 0.6398\n",
      "Epoch 4/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.9158 - accuracy: 0.6686 - val_loss: 0.8012 - val_accuracy: 0.6987\n",
      "Epoch 5/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.7058 - accuracy: 0.7308 - val_loss: 0.6224 - val_accuracy: 0.7599\n",
      "Epoch 6/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.7756 - accuracy: 0.7203 - val_loss: 0.6541 - val_accuracy: 0.7599\n",
      "Epoch 7/20\n",
      "313/313 [==============================] - 5s 16ms/step - loss: 0.5379 - accuracy: 0.8034 - val_loss: 0.4174 - val_accuracy: 0.8440\n",
      "Epoch 8/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.4867 - accuracy: 0.8262 - val_loss: 0.4188 - val_accuracy: 0.8480\n",
      "Epoch 9/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.2979 - accuracy: 0.8951 - val_loss: 0.2549 - val_accuracy: 0.9126\n",
      "Epoch 10/20\n",
      "313/313 [==============================] - 5s 14ms/step - loss: 0.1785 - accuracy: 0.9479 - val_loss: 0.1461 - val_accuracy: 0.9594\n",
      "Epoch 11/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.1830 - accuracy: 0.9557 - val_loss: 0.1644 - val_accuracy: 0.9550\n",
      "Epoch 12/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0775 - accuracy: 0.9857 - val_loss: 0.0595 - val_accuracy: 0.9901\n",
      "Epoch 13/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0400 - accuracy: 0.9953 - val_loss: 0.0342 - val_accuracy: 0.9957\n",
      "Epoch 14/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0248 - accuracy: 0.9979 - val_loss: 0.0231 - val_accuracy: 0.9983\n",
      "Epoch 15/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0161 - accuracy: 0.9991 - val_loss: 0.0149 - val_accuracy: 0.9995\n",
      "Epoch 16/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0108 - accuracy: 0.9997 - val_loss: 0.0106 - val_accuracy: 0.9996\n",
      "Epoch 17/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0074 - accuracy: 0.9999 - val_loss: 0.0077 - val_accuracy: 0.9999\n",
      "Epoch 18/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0053 - accuracy: 1.0000 - val_loss: 0.0054 - val_accuracy: 0.9999\n",
      "Epoch 19/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0039 - accuracy: 1.0000 - val_loss: 0.0041 - val_accuracy: 1.0000\n",
      "Epoch 20/20\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0029 - accuracy: 1.0000 - val_loss: 0.0032 - val_accuracy: 1.0000\n"
     ]
    }
   ],
   "source": [
    "embedding_size = 32\n",
    "max_output_length = Y_train.shape[1]\n",
    "\n",
    "np.random.seed(42)\n",
    "tf.random.set_seed(42)\n",
    "\n",
    "encoder = keras.models.Sequential([\n",
    "    keras.layers.Embedding(input_dim=len(INPUT_CHARS) + 1,\n",
    "                           output_dim=embedding_size,\n",
    "                           input_shape=[None]),\n",
    "    keras.layers.LSTM(128)\n",
    "])\n",
    "\n",
    "decoder = keras.models.Sequential([\n",
    "    keras.layers.LSTM(128, return_sequences=True),\n",
    "    keras.layers.Dense(len(OUTPUT_CHARS) + 1, activation=\"softmax\")\n",
    "])\n",
    "\n",
    "model = keras.models.Sequential([\n",
    "    encoder,\n",
    "    keras.layers.RepeatVector(max_output_length),\n",
    "    decoder\n",
    "])\n",
    "\n",
    "optimizer = keras.optimizers.Nadam()\n",
    "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n",
    "              metrics=[\"accuracy\"])\n",
    "history = model.fit(X_train, Y_train, epochs=20,\n",
    "                    validation_data=(X_valid, Y_valid))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Looks great, we reach 100% validation accuracy! Let's use the model to make some predictions. We will need to be able to convert a sequence of character IDs to a readable string:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 102,
   "metadata": {},
   "outputs": [],
   "source": [
    "def ids_to_date_strs(ids, chars=OUTPUT_CHARS):\n",
    "    return [\"\".join([(\"?\" + chars)[index] for index in sequence])\n",
    "            for sequence in ids]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can use the model to convert some dates:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 103,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_new = prepare_date_strs([\"September 17, 2009\", \"July 14, 1789\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 104,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2009-09-17\n",
      "1789-07-14\n"
     ]
    }
   ],
   "source": [
    "#ids = model.predict_classes(X_new)\n",
    "ids = np.argmax(model.predict(X_new), axis=-1)\n",
    "for date_str in ids_to_date_strs(ids):\n",
    "    print(date_str)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Perfect! :)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "However, since the model was only trained on input strings of length 18 (which is the length of the longest date), it does not perform well if we try to use it to make predictions on shorter sequences:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 105,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_new = prepare_date_strs([\"May 02, 2020\", \"July 14, 1789\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 106,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2020-01-02\n",
      "1789-02-14\n"
     ]
    }
   ],
   "source": [
    "#ids = model.predict_classes(X_new)\n",
    "ids = np.argmax(model.predict(X_new), axis=-1)\n",
    "for date_str in ids_to_date_strs(ids):\n",
    "    print(date_str)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Oops! We need to ensure that we always pass sequences of the same length as during training, using padding if necessary. Let's write a little helper function for that:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 107,
   "metadata": {},
   "outputs": [],
   "source": [
    "max_input_length = X_train.shape[1]\n",
    "\n",
    "def prepare_date_strs_padded(date_strs):\n",
    "    X = prepare_date_strs(date_strs)\n",
    "    if X.shape[1] < max_input_length:\n",
    "        X = tf.pad(X, [[0, 0], [0, max_input_length - X.shape[1]]])\n",
    "    return X\n",
    "\n",
    "def convert_date_strs(date_strs):\n",
    "    X = prepare_date_strs_padded(date_strs)\n",
    "    #ids = model.predict_classes(X)\n",
    "    ids = np.argmax(model.predict(X), axis=-1)\n",
    "    return ids_to_date_strs(ids)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 108,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['2020-05-02', '1789-07-14']"
      ]
     },
     "execution_count": 108,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "convert_date_strs([\"May 02, 2020\", \"July 14, 1789\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Cool! Granted, there are certainly much easier ways to write a date conversion tool (e.g., using regular expressions or even basic string manipulation), but you have to admit that using neural networks is way cooler. ;-)"
   ]
  },
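  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For comparison, a non-neural converter might look like the following minimal sketch, which uses the standard library's `datetime.strptime` and assumes the inputs always use full English month names in the \"Month DD, YYYY\" format generated above (the function name `convert_date_strs_simple` is just for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datetime import datetime\n",
    "\n",
    "def convert_date_strs_simple(date_strs):\n",
    "    # parse \"Month DD, YYYY\" and reformat to ISO \"YYYY-MM-DD\"\n",
    "    return [datetime.strptime(s, \"%B %d, %Y\").date().isoformat()\n",
    "            for s in date_strs]\n",
    "\n",
    "convert_date_strs_simple([\"May 02, 2020\", \"July 14, 1789\"])"
   ]
  },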
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "However, real-life sequence-to-sequence problems will usually be harder, so for the sake of completeness, let's build a more powerful model."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Second version: feeding the shifted targets to the decoder (teacher forcing)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Instead of feeding the decoder a simple repetition of the encoder's output vector, we can feed it the target sequence, shifted by one time step to the right. This way, at each time step the decoder will know what the previous target character was. This should help it tackle more complex sequence-to-sequence problems.\n",
    "\n",
    "Since the first output character of each target sequence has no previous character, we will need a new token to represent the start-of-sequence (sos).\n",
    "\n",
    "During inference, we won't know the target, so what will we feed the decoder? We can just predict one character at a time, starting with an sos token, then feeding the decoder all the characters that were predicted so far (we will look at this in more detail later in this notebook).\n",
    "\n",
    "But if the decoder's LSTM expects to get the previous target as input at each step, how shall we pass it the vector output by the encoder? Well, one option is to ignore the output vector, and instead use the encoder's LSTM state as the initial state of the decoder's LSTM (which requires that the encoder's LSTM have the same number of units as the decoder's LSTM).\n",
    "\n",
    "Now let's create the decoder's inputs (for training, validation and testing). The sos token will be represented using the last possible output character's ID + 1."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 109,
   "metadata": {},
   "outputs": [],
   "source": [
    "sos_id = len(OUTPUT_CHARS) + 1\n",
    "\n",
    "def shifted_output_sequences(Y):\n",
    "    sos_tokens = tf.fill(dims=(len(Y), 1), value=sos_id)\n",
    "    return tf.concat([sos_tokens, Y[:, :-1]], axis=1)\n",
    "\n",
    "X_train_decoder = shifted_output_sequences(Y_train)\n",
    "X_valid_decoder = shifted_output_sequences(Y_valid)\n",
    "X_test_decoder = shifted_output_sequences(Y_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's take a look at the decoder's training inputs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 110,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<tf.Tensor: shape=(10000, 10), dtype=int32, numpy=\n",
       "array([[12,  8,  1, ..., 10, 11,  3],\n",
       "       [12,  9,  6, ...,  6, 11,  2],\n",
       "       [12,  8,  2, ...,  2, 11,  2],\n",
       "       ...,\n",
       "       [12, 10,  8, ...,  2, 11,  4],\n",
       "       [12,  2,  2, ...,  3, 11,  3],\n",
       "       [12,  8,  9, ...,  8, 11,  3]], dtype=int32)>"
      ]
     },
     "execution_count": 110,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X_train_decoder"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's build the model. It's not a simple sequential model anymore, so let's use the functional API:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 111,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/10\n",
      "313/313 [==============================] - 5s 17ms/step - loss: 1.6898 - accuracy: 0.3714 - val_loss: 1.4141 - val_accuracy: 0.4603\n",
      "Epoch 2/10\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 1.2118 - accuracy: 0.5541 - val_loss: 0.9360 - val_accuracy: 0.6653\n",
      "Epoch 3/10\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.6399 - accuracy: 0.7766 - val_loss: 0.4054 - val_accuracy: 0.8631\n",
      "Epoch 4/10\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.2207 - accuracy: 0.9463 - val_loss: 0.1069 - val_accuracy: 0.9869\n",
      "Epoch 5/10\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0805 - accuracy: 0.9910 - val_loss: 0.0445 - val_accuracy: 0.9976\n",
      "Epoch 6/10\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0297 - accuracy: 0.9993 - val_loss: 0.0237 - val_accuracy: 0.9992\n",
      "Epoch 7/10\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0743 - accuracy: 0.9857 - val_loss: 0.0702 - val_accuracy: 0.9889\n",
      "Epoch 8/10\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0187 - accuracy: 0.9995 - val_loss: 0.0112 - val_accuracy: 0.9999\n",
      "Epoch 9/10\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0084 - accuracy: 1.0000 - val_loss: 0.0072 - val_accuracy: 1.0000\n",
      "Epoch 10/10\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0057 - accuracy: 1.0000 - val_loss: 0.0053 - val_accuracy: 1.0000\n"
     ]
    }
   ],
   "source": [
    "encoder_embedding_size = 32\n",
    "decoder_embedding_size = 32\n",
    "lstm_units = 128\n",
    "\n",
    "np.random.seed(42)\n",
    "tf.random.set_seed(42)\n",
    "\n",
    "encoder_input = keras.layers.Input(shape=[None], dtype=tf.int32)\n",
    "encoder_embedding = keras.layers.Embedding(\n",
    "    input_dim=len(INPUT_CHARS) + 1,\n",
    "    output_dim=encoder_embedding_size)(encoder_input)\n",
    "_, encoder_state_h, encoder_state_c = keras.layers.LSTM(\n",
    "    lstm_units, return_state=True)(encoder_embedding)\n",
    "encoder_state = [encoder_state_h, encoder_state_c]\n",
    "\n",
    "decoder_input = keras.layers.Input(shape=[None], dtype=tf.int32)\n",
    "decoder_embedding = keras.layers.Embedding(\n",
    "    input_dim=len(OUTPUT_CHARS) + 2,\n",
    "    output_dim=decoder_embedding_size)(decoder_input)\n",
    "decoder_lstm_output = keras.layers.LSTM(lstm_units, return_sequences=True)(\n",
    "    decoder_embedding, initial_state=encoder_state)\n",
    "decoder_output = keras.layers.Dense(len(OUTPUT_CHARS) + 1,\n",
    "                                    activation=\"softmax\")(decoder_lstm_output)\n",
    "\n",
    "model = keras.models.Model(inputs=[encoder_input, decoder_input],\n",
    "                           outputs=[decoder_output])\n",
    "\n",
    "optimizer = keras.optimizers.Nadam()\n",
    "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n",
    "              metrics=[\"accuracy\"])\n",
    "history = model.fit([X_train, X_train_decoder], Y_train, epochs=10,\n",
    "                    validation_data=([X_valid, X_valid_decoder], Y_valid))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This model also reaches 100% validation accuracy, but it does so even faster."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's once again use the model to make some predictions. This time we need to predict characters one by one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 112,
   "metadata": {},
   "outputs": [],
   "source": [
    "sos_id = len(OUTPUT_CHARS) + 1\n",
    "\n",
    "def predict_date_strs(date_strs):\n",
    "    X = prepare_date_strs_padded(date_strs)\n",
    "    Y_pred = tf.fill(dims=(len(X), 1), value=sos_id)\n",
    "    for index in range(max_output_length):\n",
    "        pad_size = max_output_length - Y_pred.shape[1]\n",
    "        X_decoder = tf.pad(Y_pred, [[0, 0], [0, pad_size]])\n",
    "        Y_probas_next = model.predict([X, X_decoder])[:, index:index+1]\n",
    "        Y_pred_next = tf.argmax(Y_probas_next, axis=-1, output_type=tf.int32)\n",
    "        Y_pred = tf.concat([Y_pred, Y_pred_next], axis=1)\n",
    "    return ids_to_date_strs(Y_pred[:, 1:])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 113,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['1789-07-14', '2020-05-01']"
      ]
     },
     "execution_count": 113,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Works fine! :)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Third version: using TF-Addons's seq2seq implementation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's build exactly the same model, but using TF-Addons's seq2seq API. The implementation below is very similar to the earlier TFA example in this notebook, except it omits the model input that specifies the output sequence length, for simplicity (but you can easily add it back in if you need it for your projects, when the output sequences have very different lengths)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 114,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/15\n",
      "313/313 [==============================] - 5s 17ms/step - loss: 1.6757 - accuracy: 0.3683 - val_loss: 1.4602 - val_accuracy: 0.4214\n",
      "Epoch 2/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 1.3873 - accuracy: 0.4566 - val_loss: 1.2904 - val_accuracy: 0.4957\n",
      "Epoch 3/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 1.0471 - accuracy: 0.6109 - val_loss: 0.7737 - val_accuracy: 0.7276\n",
      "Epoch 4/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.5056 - accuracy: 0.8296 - val_loss: 0.2695 - val_accuracy: 0.9305\n",
      "Epoch 5/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.1677 - accuracy: 0.9657 - val_loss: 0.0870 - val_accuracy: 0.9912\n",
      "Epoch 6/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.1007 - accuracy: 0.9850 - val_loss: 0.0492 - val_accuracy: 0.9975\n",
      "Epoch 7/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0308 - accuracy: 0.9993 - val_loss: 0.0228 - val_accuracy: 0.9996\n",
      "Epoch 8/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0168 - accuracy: 0.9999 - val_loss: 0.0144 - val_accuracy: 0.9999\n",
      "Epoch 9/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0107 - accuracy: 1.0000 - val_loss: 0.0095 - val_accuracy: 0.9999\n",
      "Epoch 10/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0074 - accuracy: 1.0000 - val_loss: 0.0066 - val_accuracy: 0.9999\n",
      "Epoch 11/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0053 - accuracy: 1.0000 - val_loss: 0.0051 - val_accuracy: 0.9999\n",
      "Epoch 12/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0039 - accuracy: 1.0000 - val_loss: 0.0037 - val_accuracy: 1.0000\n",
      "Epoch 13/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0029 - accuracy: 1.0000 - val_loss: 0.0030 - val_accuracy: 1.0000\n",
      "Epoch 14/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 0.0022 - val_accuracy: 1.0000\n",
      "Epoch 15/15\n",
      "313/313 [==============================] - 5s 15ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 0.0018 - val_accuracy: 1.0000\n"
     ]
    }
   ],
   "source": [
    "import tensorflow_addons as tfa\n",
    "\n",
    "np.random.seed(42)\n",
    "tf.random.set_seed(42)\n",
    "\n",
    "encoder_embedding_size = 32\n",
    "decoder_embedding_size = 32\n",
    "units = 128\n",
    "\n",
    "encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
    "decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
    "sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)\n",
    "\n",
    "encoder_embeddings = keras.layers.Embedding(\n",
    "    len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)\n",
    "\n",
    "decoder_embedding_layer = keras.layers.Embedding(\n",
    "    len(OUTPUT_CHARS) + 2, decoder_embedding_size)\n",
    "decoder_embeddings = decoder_embedding_layer(decoder_inputs)\n",
    "\n",
    "encoder = keras.layers.LSTM(units, return_state=True)\n",
    "encoder_outputs, state_h, state_c = encoder(encoder_embeddings)\n",
    "encoder_state = [state_h, state_c]\n",
    "\n",
    "sampler = tfa.seq2seq.sampler.TrainingSampler()\n",
    "\n",
    "decoder_cell = keras.layers.LSTMCell(units)\n",
    "output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)\n",
    "\n",
    "decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell,\n",
    "                                                 sampler,\n",
    "                                                 output_layer=output_layer)\n",
    "final_outputs, final_state, final_sequence_lengths = decoder(\n",
    "    decoder_embeddings,\n",
    "    initial_state=encoder_state)\n",
    "Y_proba = keras.layers.Activation(\"softmax\")(final_outputs.rnn_output)\n",
    "\n",
    "model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs],\n",
    "                           outputs=[Y_proba])\n",
    "optimizer = keras.optimizers.Nadam()\n",
    "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n",
    "              metrics=[\"accuracy\"])\n",
    "history = model.fit([X_train, X_train_decoder], Y_train, epochs=15,\n",
    "                    validation_data=([X_valid, X_valid_decoder], Y_valid))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And once again, 100% validation accuracy! To use the model, we can just reuse the `predict_date_strs()` function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 115,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['1789-07-14', '2020-05-01']"
      ]
     },
     "execution_count": 115,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "However, there's a much more efficient way to perform inference. Until now, during inference, we've run the model once for each new character. Instead, we can create a new decoder, based on the previously trained layers, but using a `GreedyEmbeddingSampler` instead of a `TrainingSampler`.\n",
    "\n",
    "At each time step, the `GreedyEmbeddingSampler` will compute the argmax of the decoder's outputs, and run the resulting token IDs through the decoder's embedding layer. Then it will feed the resulting embeddings to the decoder's LSTM cell at the next time step. This way, we only need to run the decoder once to get the full prediction."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 116,
   "metadata": {},
   "outputs": [],
   "source": [
    "inference_sampler = tfa.seq2seq.sampler.GreedyEmbeddingSampler(\n",
    "    embedding_fn=decoder_embedding_layer)\n",
    "inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder(\n",
    "    decoder_cell, inference_sampler, output_layer=output_layer,\n",
    "    maximum_iterations=max_output_length)\n",
    "batch_size = tf.shape(encoder_inputs)[:1]\n",
    "start_tokens = tf.fill(dims=batch_size, value=sos_id)\n",
    "final_outputs, final_state, final_sequence_lengths = inference_decoder(\n",
    "    start_tokens,\n",
    "    initial_state=encoder_state,\n",
    "    start_tokens=start_tokens,\n",
    "    end_token=0)\n",
    "\n",
    "inference_model = keras.models.Model(inputs=[encoder_inputs],\n",
    "                                     outputs=[final_outputs.sample_id])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A few notes:\n",
    "* The `GreedyEmbeddingSampler` needs the `start_tokens` (a vector containing the start-of-sequence ID for each decoder sequence), and the `end_token` (the decoder will stop decoding a sequence once the model outputs this token).\n",
    "* We must set `maximum_iterations` when creating the `BasicDecoder`, or else it may run into an infinite loop (if the model never outputs the end token for at least one of the sequences). This would force you to restart the Jupyter kernel.\n",
    "* The decoder inputs are not needed anymore, since all the decoder inputs are generated dynamically based on the outputs from the previous time step.\n",
    "* The model's outputs are `final_outputs.sample_id` instead of the softmax of `final_outputs.rnn_output`. This allows us to directly get the argmax of the model's outputs. If you prefer to have access to the logits, you can replace `final_outputs.sample_id` with `final_outputs.rnn_output`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can write a simple function that uses the model to perform the date format conversion:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 117,
   "metadata": {},
   "outputs": [],
   "source": [
    "def fast_predict_date_strs(date_strs):\n",
    "    X = prepare_date_strs_padded(date_strs)\n",
    "    Y_pred = inference_model.predict(X)\n",
    "    return ids_to_date_strs(Y_pred)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 118,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['1789-07-14', '2020-05-01']"
      ]
     },
     "execution_count": 118,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "fast_predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check that it really is faster:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 119,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "199 ms ± 3.94 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
     ]
    }
   ],
   "source": [
    "%timeit predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 120,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "18.3 ms ± 366 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n"
     ]
    }
   ],
   "source": [
    "%timeit fast_predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That's more than a 10x speedup! And the speedup would be even greater if we were handling longer sequences."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Fourth version: using TF-Addons's seq2seq implementation with a scheduled sampler"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Warning**: due to a TF bug, this version only works using TensorFlow 2.2 or above."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When we trained the previous model, at each time step _t_ we gave the model the target token for time step _t_ - 1. However, at inference time, the model did not get the previous target at each time step. Instead, it got the previous prediction. So there is a discrepancy between training and inference, which may lead to disappointing performance. To alleviate this, we can gradually replace the targets with the predictions, during training. For this, we just need to replace the `TrainingSampler` with a `ScheduledEmbeddingTrainingSampler`, and use a Keras callback to gradually increase the `sampling_probability` (i.e., the probability that the decoder will use the prediction from the previous time step rather than the target for the previous time step)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 121,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/20\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/ageron/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/framework/indexed_slices.py:434: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.\n",
      "  \"Converting sparse IndexedSlices to a dense Tensor of unknown shape. \"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "313/313 [==============================] - 6s 19ms/step - loss: 1.6759 - accuracy: 0.3681 - val_loss: 1.4611 - val_accuracy: 0.4198\n",
      "Epoch 2/20\n",
      "313/313 [==============================] - 5s 17ms/step - loss: 1.3872 - accuracy: 0.4583 - val_loss: 1.2827 - val_accuracy: 0.5021\n",
      "Epoch 3/20\n",
      "313/313 [==============================] - 5s 17ms/step - loss: 1.0425 - accuracy: 0.6152 - val_loss: 0.8165 - val_accuracy: 0.7000\n",
      "Epoch 4/20\n",
      "313/313 [==============================] - 5s 17ms/step - loss: 0.6353 - accuracy: 0.7673 - val_loss: 0.4365 - val_accuracy: 0.8464\n",
      "Epoch 5/20\n",
      "313/313 [==============================] - 5s 17ms/step - loss: 0.3764 - accuracy: 0.8765 - val_loss: 0.2795 - val_accuracy: 0.9166\n",
      "Epoch 6/20\n",
      "313/313 [==============================] - 5s 17ms/step - loss: 0.2506 - accuracy: 0.9269 - val_loss: 0.1805 - val_accuracy: 0.9489\n",
      "Epoch 7/20\n",
      "313/313 [==============================] - 5s 17ms/step - loss: 0.1427 - accuracy: 0.9625 - val_loss: 0.1115 - val_accuracy: 0.9718\n",
      "Epoch 8/20\n",
      "313/313 [==============================] - 5s 17ms/step - loss: 0.0853 - accuracy: 0.9804 - val_loss: 0.0785 - val_accuracy: 0.9809\n",
      "Epoch 9/20\n",
      "313/313 [==============================] - 5s 17ms/step - loss: 0.1010 - accuracy: 0.9797 - val_loss: 0.1198 - val_accuracy: 0.9746\n",
      "Epoch 10/20\n",
      "313/313 [==============================] - 5s 17ms/step - loss: 0.0447 - accuracy: 0.9917 - val_loss: 0.0306 - val_accuracy: 0.9949\n",
      "Epoch 11/20\n",
      "313/313 [==============================] - 5s 16ms/step - loss: 0.0241 - accuracy: 0.9961 - val_loss: 0.0205 - val_accuracy: 0.9968\n",
      "Epoch 12/20\n",
      "313/313 [==============================] - 5s 17ms/step - loss: 0.0705 - accuracy: 0.9861 - val_loss: 0.0823 - val_accuracy: 0.9860\n",
      "Epoch 13/20\n",
      "313/313 [==============================] - 5s 16ms/step - loss: 0.0182 - accuracy: 0.9977 - val_loss: 0.0117 - val_accuracy: 0.9980\n",
      "Epoch 14/20\n",
      "313/313 [==============================] - 5s 16ms/step - loss: 0.0088 - accuracy: 0.9990 - val_loss: 0.0085 - val_accuracy: 0.9990\n",
      "Epoch 15/20\n",
      "313/313 [==============================] - 5s 16ms/step - loss: 0.0059 - accuracy: 0.9994 - val_loss: 0.0061 - val_accuracy: 0.9993\n",
      "Epoch 16/20\n",
      "313/313 [==============================] - 5s 16ms/step - loss: 0.0045 - accuracy: 0.9996 - val_loss: 0.0048 - val_accuracy: 0.9996\n",
      "Epoch 17/20\n",
      "313/313 [==============================] - 5s 16ms/step - loss: 0.0038 - accuracy: 0.9997 - val_loss: 0.0039 - val_accuracy: 0.9995\n",
      "Epoch 18/20\n",
      "313/313 [==============================] - 5s 16ms/step - loss: 0.0029 - accuracy: 0.9997 - val_loss: 0.0024 - val_accuracy: 0.9999\n",
      "Epoch 19/20\n",
      "313/313 [==============================] - 5s 16ms/step - loss: 0.0020 - accuracy: 0.9999 - val_loss: 0.0031 - val_accuracy: 0.9992\n",
      "Epoch 20/20\n",
      "313/313 [==============================] - 5s 16ms/step - loss: 0.0018 - accuracy: 0.9999 - val_loss: 0.0022 - val_accuracy: 0.9999\n"
     ]
    }
   ],
   "source": [
    "import tensorflow_addons as tfa\n",
    "\n",
    "np.random.seed(42)\n",
    "tf.random.set_seed(42)\n",
    "\n",
    "n_epochs = 20\n",
    "encoder_embedding_size = 32\n",
    "decoder_embedding_size = 32\n",
    "units = 128\n",
    "\n",
    "encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
    "decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
    "sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)\n",
    "\n",
    "encoder_embeddings = keras.layers.Embedding(\n",
    "    len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)\n",
    "\n",
    "decoder_embedding_layer = keras.layers.Embedding(\n",
    "    len(OUTPUT_CHARS) + 2, decoder_embedding_size)\n",
    "decoder_embeddings = decoder_embedding_layer(decoder_inputs)\n",
    "\n",
    "encoder = keras.layers.LSTM(units, return_state=True)\n",
    "encoder_outputs, state_h, state_c = encoder(encoder_embeddings)\n",
    "encoder_state = [state_h, state_c]\n",
    "\n",
    "sampler = tfa.seq2seq.sampler.ScheduledEmbeddingTrainingSampler(\n",
    "    sampling_probability=0.,\n",
    "    embedding_fn=decoder_embedding_layer)\n",
    "# we must set the sampling_probability after creating the sampler\n",
    "# (see https://github.com/tensorflow/addons/pull/1714)\n",
    "sampler.sampling_probability = tf.Variable(0.)\n",
    "\n",
    "decoder_cell = keras.layers.LSTMCell(units)\n",
    "output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)\n",
    "\n",
    "decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell,\n",
    "                                                 sampler,\n",
    "                                                 output_layer=output_layer)\n",
    "final_outputs, final_state, final_sequence_lengths = decoder(\n",
    "    decoder_embeddings,\n",
    "    initial_state=encoder_state)\n",
    "Y_proba = keras.layers.Activation(\"softmax\")(final_outputs.rnn_output)\n",
    "\n",
    "model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs],\n",
    "                           outputs=[Y_proba])\n",
    "optimizer = keras.optimizers.Nadam()\n",
    "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n",
    "              metrics=[\"accuracy\"])\n",
    "\n",
    "def update_sampling_probability(epoch, logs):\n",
    "    proba = min(1.0, epoch / (n_epochs - 10))\n",
    "    sampler.sampling_probability.assign(proba)\n",
    "\n",
    "sampling_probability_cb = keras.callbacks.LambdaCallback(\n",
    "    on_epoch_begin=update_sampling_probability)\n",
    "history = model.fit([X_train, X_train_decoder], Y_train, epochs=n_epochs,\n",
    "                    validation_data=([X_valid, X_valid_decoder], Y_valid),\n",
    "                    callbacks=[sampling_probability_cb])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Not quite 100% validation accuracy, but close enough!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For inference, we could do the exact same thing as earlier, using a `GreedyEmbeddingSampler`. However, just for the sake of completeness, let's use a `SampleEmbeddingSampler` instead. It's almost the same, except that instead of taking the argmax of the model's output to pick the token ID, it treats the outputs as logits and uses them to sample a token ID randomly. This can be useful when you want to generate text.\n",
    "The `softmax_temperature` argument serves the same purpose as when we generated Shakespeare-like text (the higher its value, the more random the generated text will be)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 122,
   "metadata": {},
   "outputs": [],
   "source": [
    "softmax_temperature = tf.Variable(1.)\n",
    "\n",
    "inference_sampler = tfa.seq2seq.sampler.SampleEmbeddingSampler(\n",
    "    embedding_fn=decoder_embedding_layer,\n",
    "    softmax_temperature=softmax_temperature)\n",
    "inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder(\n",
    "    decoder_cell, inference_sampler, output_layer=output_layer,\n",
    "    maximum_iterations=max_output_length)\n",
    "batch_size = tf.shape(encoder_inputs)[:1]\n",
    "start_tokens = tf.fill(dims=batch_size, value=sos_id)\n",
    "# the first (positional) argument below is the sampler's `embedding` input;\n",
    "# it is ignored here because we passed embedding_fn when creating the sampler\n",
    "final_outputs, final_state, final_sequence_lengths = inference_decoder(\n",
    "    start_tokens,\n",
    "    initial_state=encoder_state,\n",
    "    start_tokens=start_tokens,\n",
    "    end_token=0)\n",
    "\n",
    "inference_model = keras.models.Model(inputs=[encoder_inputs],\n",
    "                                     outputs=[final_outputs.sample_id])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 123,
   "metadata": {},
   "outputs": [],
   "source": [
    "def creative_predict_date_strs(date_strs, temperature=1.0):\n",
    "    softmax_temperature.assign(temperature)\n",
    "    X = prepare_date_strs_padded(date_strs)\n",
    "    Y_pred = inference_model.predict(X)\n",
    "    return ids_to_date_strs(Y_pred)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 124,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['1789-07-14', '2020-05-01']"
      ]
     },
     "execution_count": 124,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tf.random.set_seed(42)\n",
    "\n",
    "creative_predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Dates look good at room temperature. Now let's heat things up a bit:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 125,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['2289607-12', '9272-03-01']"
      ]
     },
     "execution_count": 125,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tf.random.set_seed(42)\n",
    "\n",
    "creative_predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"],\n",
    "                           temperature=5.)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Oops, the dates are overcooked now. Let's call them \"creative\" dates."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Fifth version: using TFA seq2seq, the Keras subclassing API and attention mechanisms"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The sequences in this problem are pretty short, but if we wanted to tackle longer sequences, we would probably have to use attention mechanisms. While it's possible to code our own implementation, it's simpler and more efficient to use TF-Addons's implementation instead. Let's do that now, this time using Keras' subclassing API.\n",
    "\n",
    "**Warning**: due to a TensorFlow bug (see [this issue](https://github.com/tensorflow/addons/issues/1153) for details), the `get_initial_state()` method fails in eager mode, so for now we have to use the subclassing API, as Keras automatically calls `tf.function()` on the `call()` method (so it runs in graph mode)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this implementation, we've reverted to using the `TrainingSampler` for simplicity (but you can easily tweak it to use a `ScheduledEmbeddingTrainingSampler` instead). We also use a `GreedyEmbeddingSampler` during inference, so this class is pretty easy to use:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 126,
   "metadata": {},
   "outputs": [],
   "source": [
    "class DateTranslation(keras.models.Model):\n",
    "    def __init__(self, units=128, encoder_embedding_size=32,\n",
    "                 decoder_embedding_size=32, **kwargs):\n",
    "        super().__init__(**kwargs)\n",
    "        self.encoder_embedding = keras.layers.Embedding(\n",
    "            input_dim=len(INPUT_CHARS) + 1,\n",
    "            output_dim=encoder_embedding_size)\n",
    "        self.encoder = keras.layers.LSTM(units,\n",
    "                                         return_sequences=True,\n",
    "                                         return_state=True)\n",
    "        self.decoder_embedding = keras.layers.Embedding(\n",
    "            input_dim=len(OUTPUT_CHARS) + 2,\n",
    "            output_dim=decoder_embedding_size)\n",
    "        self.attention = tfa.seq2seq.LuongAttention(units)\n",
    "        decoder_inner_cell = keras.layers.LSTMCell(units)\n",
    "        self.decoder_cell = tfa.seq2seq.AttentionWrapper(\n",
    "            cell=decoder_inner_cell,\n",
    "            attention_mechanism=self.attention)\n",
    "        output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)\n",
    "        self.decoder = tfa.seq2seq.BasicDecoder(\n",
    "            cell=self.decoder_cell,\n",
    "            sampler=tfa.seq2seq.sampler.TrainingSampler(),\n",
    "            output_layer=output_layer)\n",
    "        self.inference_decoder = tfa.seq2seq.BasicDecoder(\n",
    "            cell=self.decoder_cell,\n",
    "            sampler=tfa.seq2seq.sampler.GreedyEmbeddingSampler(\n",
    "                embedding_fn=self.decoder_embedding),\n",
    "            output_layer=output_layer,\n",
    "            maximum_iterations=max_output_length)\n",
    "\n",
    "    def call(self, inputs, training=None):\n",
    "        encoder_input, decoder_input = inputs\n",
    "        encoder_embeddings = self.encoder_embedding(encoder_input)\n",
    "        encoder_outputs, encoder_state_h, encoder_state_c = self.encoder(\n",
    "            encoder_embeddings,\n",
    "            training=training)\n",
    "        encoder_state = [encoder_state_h, encoder_state_c]\n",
    "\n",
    "        self.attention(encoder_outputs,\n",
    "                       setup_memory=True)\n",
    "        \n",
    "        decoder_embeddings = self.decoder_embedding(decoder_input)\n",
    "\n",
    "        decoder_initial_state = self.decoder_cell.get_initial_state(\n",
    "            decoder_embeddings)\n",
    "        decoder_initial_state = decoder_initial_state.clone(\n",
    "            cell_state=encoder_state)\n",
    "        \n",
    "        if training:\n",
    "            decoder_outputs, _, _ = self.decoder(\n",
    "                decoder_embeddings,\n",
    "                initial_state=decoder_initial_state,\n",
    "                training=training)\n",
    "        else:\n",
    "            start_tokens = tf.zeros_like(encoder_input[:, 0]) + sos_id\n",
    "            decoder_outputs, _, _ = self.inference_decoder(\n",
    "                decoder_embeddings,\n",
    "                initial_state=decoder_initial_state,\n",
    "                start_tokens=start_tokens,\n",
    "                end_token=0)\n",
    "\n",
    "        return tf.nn.softmax(decoder_outputs.rnn_output)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 127,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/25\n",
      "313/313 [==============================] - 7s 21ms/step - loss: 2.1549 - accuracy: 0.2295 - val_loss: 2.1450 - val_accuracy: 0.2239\n",
      "Epoch 2/25\n",
      "313/313 [==============================] - 6s 19ms/step - loss: 1.8147 - accuracy: 0.3492 - val_loss: 1.4931 - val_accuracy: 0.4476\n",
      "Epoch 3/25\n",
      "313/313 [==============================] - 6s 18ms/step - loss: 1.3585 - accuracy: 0.4909 - val_loss: 1.3168 - val_accuracy: 0.5100\n",
      "Epoch 4/25\n",
      "313/313 [==============================] - 6s 18ms/step - loss: 1.2787 - accuracy: 0.5293 - val_loss: 1.1767 - val_accuracy: 0.5624\n",
      "Epoch 5/25\n",
      "313/313 [==============================] - 6s 18ms/step - loss: 1.1236 - accuracy: 0.5776 - val_loss: 1.0769 - val_accuracy: 0.5907\n",
      "Epoch 6/25\n",
      "313/313 [==============================] - 6s 18ms/step - loss: 1.0369 - accuracy: 0.6073 - val_loss: 1.0159 - val_accuracy: 0.6199\n",
      "Epoch 7/25\n",
      "313/313 [==============================] - 6s 18ms/step - loss: 0.9752 - accuracy: 0.6295 - val_loss: 0.9723 - val_accuracy: 0.6346\n",
      "Epoch 8/25\n",
      "313/313 [==============================] - 6s 18ms/step - loss: 0.9794 - accuracy: 0.6315 - val_loss: 0.9444 - val_accuracy: 0.6371\n",
      "Epoch 9/25\n",
      "313/313 [==============================] - 6s 18ms/step - loss: 0.9338 - accuracy: 0.6415 - val_loss: 0.9296 - val_accuracy: 0.6381\n",
      "Epoch 10/25\n",
      "313/313 [==============================] - 6s 19ms/step - loss: 0.9439 - accuracy: 0.6418 - val_loss: 0.9028 - val_accuracy: 0.6574\n",
      "Epoch 11/25\n",
      "313/313 [==============================] - 6s 19ms/step - loss: 0.8807 - accuracy: 0.6637 - val_loss: 0.9835 - val_accuracy: 0.6369\n",
      "Epoch 12/25\n",
      "313/313 [==============================] - 6s 19ms/step - loss: 0.7307 - accuracy: 0.6953 - val_loss: 0.8942 - val_accuracy: 0.6873\n",
      "Epoch 13/25\n",
      "313/313 [==============================] - 6s 19ms/step - loss: 0.5833 - accuracy: 0.7327 - val_loss: 0.6944 - val_accuracy: 0.7391\n",
      "Epoch 14/25\n",
      "313/313 [==============================] - 6s 19ms/step - loss: 0.4664 - accuracy: 0.7940 - val_loss: 0.6228 - val_accuracy: 0.7885\n",
      "Epoch 15/25\n",
      "313/313 [==============================] - 6s 19ms/step - loss: 0.3205 - accuracy: 0.8740 - val_loss: 0.4825 - val_accuracy: 0.8780\n",
      "Epoch 16/25\n",
      "313/313 [==============================] - 6s 19ms/step - loss: 0.2329 - accuracy: 0.9216 - val_loss: 0.3851 - val_accuracy: 0.9118\n",
      "Epoch 17/25\n",
      "313/313 [==============================] - 7s 21ms/step - loss: 0.2480 - accuracy: 0.9372 - val_loss: 0.2785 - val_accuracy: 0.9111\n",
      "Epoch 18/25\n",
      "313/313 [==============================] - 7s 22ms/step - loss: 0.1182 - accuracy: 0.9801 - val_loss: 0.1372 - val_accuracy: 0.9786\n",
      "Epoch 19/25\n",
      "313/313 [==============================] - 7s 22ms/step - loss: 0.0643 - accuracy: 0.9937 - val_loss: 0.0681 - val_accuracy: 0.9909\n",
      "Epoch 20/25\n",
      "313/313 [==============================] - 6s 18ms/step - loss: 0.0446 - accuracy: 0.9952 - val_loss: 0.0487 - val_accuracy: 0.9934\n",
      "Epoch 21/25\n",
      "313/313 [==============================] - 6s 18ms/step - loss: 0.0247 - accuracy: 0.9987 - val_loss: 0.0228 - val_accuracy: 0.9987\n",
      "Epoch 22/25\n",
      "313/313 [==============================] - 6s 18ms/step - loss: 0.0456 - accuracy: 0.9918 - val_loss: 0.0207 - val_accuracy: 0.9985\n",
      "Epoch 23/25\n",
      "313/313 [==============================] - 6s 18ms/step - loss: 0.0131 - accuracy: 0.9997 - val_loss: 0.0127 - val_accuracy: 0.9993\n",
      "Epoch 24/25\n",
      "313/313 [==============================] - 6s 19ms/step - loss: 0.0360 - accuracy: 0.9933 - val_loss: 0.0146 - val_accuracy: 0.9990\n",
      "Epoch 25/25\n",
      "313/313 [==============================] - 6s 19ms/step - loss: 0.0092 - accuracy: 0.9998 - val_loss: 0.0089 - val_accuracy: 0.9992\n"
     ]
    }
   ],
   "source": [
    "np.random.seed(42)\n",
    "tf.random.set_seed(42)\n",
    "\n",
    "model = DateTranslation()\n",
    "optimizer = keras.optimizers.Nadam()\n",
    "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n",
    "              metrics=[\"accuracy\"])\n",
    "history = model.fit([X_train, X_train_decoder], Y_train, epochs=25,\n",
    "                    validation_data=([X_valid, X_valid_decoder], Y_valid))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Not quite 100% validation accuracy, but close. It took a bit longer to converge this time, but there were also more parameters and more computations per iteration. And we did not use a scheduled sampler.\n",
    "\n",
    "To use the model, we can write yet another little function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 128,
   "metadata": {},
   "outputs": [],
   "source": [
    "def fast_predict_date_strs_v2(date_strs):\n",
    "    X = prepare_date_strs_padded(date_strs)\n",
    "    X_decoder = tf.zeros(shape=(len(X), max_output_length), dtype=tf.int32)\n",
    "    Y_probas = model.predict([X, X_decoder])\n",
    "    Y_pred = tf.argmax(Y_probas, axis=-1)\n",
    "    return ids_to_date_strs(Y_pred)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 129,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['1789-07-14', '2020-05-01']"
      ]
     },
     "execution_count": 129,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "fast_predict_date_strs_v2([\"July 14, 1789\", \"May 01, 2020\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are still a few interesting features from TF-Addons that you may want to look at:\n",
    "* Using a `BeamSearchDecoder` rather than a `BasicDecoder` for inference. Instead of outputting the character with the highest probability at each step, this decoder keeps track of several candidate sequences and retains only the most likely ones (see chapter 16 in the book for more details).\n",
    "* Setting masks or specifying `sequence_length` if the input or target sequences may have very different lengths.\n",
    "* Using a `ScheduledOutputTrainingSampler`, which gives you more flexibility than the `ScheduledEmbeddingTrainingSampler` to decide how to feed the output at time _t_ to the cell at time _t_+1. By default it feeds the outputs directly to the cell, without computing the argmax ID and passing it through an embedding layer. Alternatively, you can specify a `next_inputs_fn` function that will be used to convert the cell outputs to inputs at the next step."
   ]
  },
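  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the beam search idea more concrete, here is a minimal, self-contained sketch of the algorithm over a toy next-token distribution (the `next_token_log_probas` callback is made up for illustration; a real `BeamSearchDecoder` works on the decoder cell's logits):\n",
    "\n",
    "```python\n",
    "def beam_search(start_token, next_token_log_probas, n_steps, beam_width):\n",
    "    # each beam is a (sequence, cumulative log-probability) pair\n",
    "    beams = [([start_token], 0.0)]\n",
    "    for _ in range(n_steps):\n",
    "        candidates = []\n",
    "        for seq, score in beams:\n",
    "            for token, log_proba in next_token_log_probas(seq).items():\n",
    "                candidates.append((seq + [token], score + log_proba))\n",
    "        # keep only the `beam_width` sequences with the highest\n",
    "        # total log-probability\n",
    "        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]\n",
    "    return beams\n",
    "```\n",
    "\n",
    "Unlike greedy decoding, a token with a lower probability can survive if it leads to a more likely continuation a few steps later."
   ]
  },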
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 10.\n",
    "_Exercise: Go through TensorFlow's [Neural Machine Translation with Attention tutorial](https://homl.info/nmttuto)._"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Simply open the Colab and follow its instructions. Alternatively, if you want a simpler example of using TF-Addons's seq2seq implementation for Neural Machine Translation (NMT), look at the solution to the previous question: its last model shows how to build an NMT model with attention mechanisms using TF-Addons."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 11.\n",
    "_Exercise: Use one of the recent language models (e.g., GPT) to generate more convincing Shakespearean text._"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The simplest way to use recent language models is to use the excellent [transformers library](https://huggingface.co/transformers/), open sourced by Hugging Face. It provides many modern neural net architectures (including BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet and more) for Natural Language Processing (NLP), including many pretrained models. It relies on either TensorFlow or PyTorch. Best of all: it's amazingly simple to use."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, let's load a pretrained model. In this example, we will use OpenAI's GPT model, with an additional Language Model on top (just a linear layer with weights tied to the input embeddings). Let's import it and load the pretrained weights (this will download about 445MB of data to `~/.cache/torch/transformers`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 130,
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import TFOpenAIGPTLMHeadModel\n",
    "\n",
    "model = TFOpenAIGPTLMHeadModel.from_pretrained(\"openai-gpt\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we will need a specialized tokenizer for this model. This one will try to use the [spaCy](https://spacy.io/) and [ftfy](https://pypi.org/project/ftfy/) libraries if they are installed, or else it will fall back to BERT's `BasicTokenizer` followed by Byte-Pair Encoding (which should be fine for most use cases)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 131,
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import OpenAIGPTTokenizer\n",
    "\n",
    "tokenizer = OpenAIGPTTokenizer.from_pretrained(\"openai-gpt\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's use the tokenizer to tokenize and encode the prompt text:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 132,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<tf.Tensor: shape=(1, 10), dtype=int32, numpy=\n",
       "array([[  616,  5751,  6404,   498,  9606,   240,   616, 26271,  7428,\n",
       "        16187]], dtype=int32)>"
      ]
     },
     "execution_count": 132,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt_text = \"This royal throne of kings, this sceptred isle\"\n",
    "encoded_prompt = tokenizer.encode(prompt_text,\n",
    "                                  add_special_tokens=False,\n",
    "                                  return_tensors=\"tf\")\n",
    "encoded_prompt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Easy! Next, let's use the model to generate text after the prompt. We will generate 5 different sentences, each starting with the prompt text, followed by 40 additional tokens. For an explanation of what all the hyperparameters do, make sure to check out this great [blog post](https://huggingface.co/blog/how-to-generate) by Patrick von Platen (from Hugging Face). You can play around with the hyperparameters to try to obtain better results."
   ]
  },
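  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To give you a rough idea of what the `temperature` and `top_p` arguments do, here is a minimal NumPy sketch of one sampling step: temperature scaling followed by nucleus (top-p) filtering. This is a simplification of what `generate()` does internally, for illustration only:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def sample_next_token(logits, temperature=1.0, top_p=0.9, rng=None):\n",
    "    rng = rng or np.random.default_rng()\n",
    "    # temperature scaling: a higher temperature flattens the distribution\n",
    "    logits = np.asarray(logits, dtype=np.float64) / temperature\n",
    "    probas = np.exp(logits - logits.max())\n",
    "    probas /= probas.sum()\n",
    "    # nucleus filtering: keep the smallest set of top tokens whose\n",
    "    # cumulative probability reaches top_p, then renormalize and sample\n",
    "    order = np.argsort(probas)[::-1]\n",
    "    cumulative = np.cumsum(probas[order])\n",
    "    keep = order[:np.searchsorted(cumulative, top_p) + 1]\n",
    "    return rng.choice(keep, p=probas[keep] / probas[keep].sum())\n",
    "```\n",
    "\n",
    "With `top_k`, you would instead keep the _k_ most likely tokens before renormalizing and sampling."
   ]
  },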
  {
   "cell_type": "code",
   "execution_count": 133,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<tf.Tensor: shape=(5, 50), dtype=int32, numpy=\n",
       "array([[  616,  5751,  6404,   498,  9606,   240,   616, 26271,  7428,\n",
       "        16187,   239,   784,   645,  1184,   558,  1886,   688,  6437,\n",
       "          240,   784,   645,   507,   641,  5486,   240,   600,   636,\n",
       "          868,   604,   694,  2816,   485,  1894,   822,   481,  1491,\n",
       "          600,   880,  6061,   239,   256, 40477,   256,   600,   635,\n",
       "          538,   604,  1816,   525,   239],\n",
       "       [  616,  5751,  6404,   498,  9606,   240,   616, 26271,  7428,\n",
       "        16187,   488,  1288,   989,   640, 16605,   239,   256, 40477,\n",
       "          674,   481, 12744,  3912,   488,  3912,  5936,  2441,   811,\n",
       "          488,  1040,   485,   754,  3952,   239, 40477,   481,  1375,\n",
       "         1981,   833,  1210,   481, 17384,   488,   481,  3089,   488,\n",
       "          481,  4815,   509,   498,  1424],\n",
       "       [  616,  5751,  6404,   498,  9606,   240,   616, 26271,  7428,\n",
       "        16187,   980,   987,  1074, 13138,   240,   531,   501,   517,\n",
       "          836,   525, 12659,   485,  2642,   512,   239,   500,   616,\n",
       "         7339,   704,   989,  1259, 38752,   481,  9606,   498,   481,\n",
       "         6903,   239,   500,   616,  7339,   704,  3064,   994,   580,\n",
       "         3953,   617,   616,  4741,   488],\n",
       "       [  616,  5751,  6404,   498,  9606,   240,   616, 26271,  7428,\n",
       "        16187, 10595,   485,   510,   239,   244, 40477,   244,   481,\n",
       "         1424,  6404,   498,  1922,    23, 37492,   257,   244, 40477,\n",
       "          244,  3491,   240,   244,   603,   481,   618,   556,   246,\n",
       "         3386,   498,   524,   756,   239,   244,   616,  1276,   509,\n",
       "         1098, 10945,   498,   246,  6785],\n",
       "       [  616,  5751,  6404,   498,  9606,   240,   616, 26271,  7428,\n",
       "        16187,   544,  2203,   239,   616,   544,   246,  6460,   260,\n",
       "          850,   629,  4844,  3064,  3766,   240,   246,  1082,   806,\n",
       "         9606,   640, 32581,   240,   595,  7914,  1243,   488, 18535,\n",
       "          239,   249,   587,   538,   788,   775,  2319,   498,  1013,\n",
       "          525,   544,   595,   754,  1074]], dtype=int32)>"
      ]
     },
     "execution_count": 133,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "num_sequences = 5\n",
    "length = 40\n",
    "\n",
    "generated_sequences = model.generate(\n",
    "    input_ids=encoded_prompt,\n",
    "    do_sample=True,\n",
    "    max_length=length + len(encoded_prompt[0]),\n",
    "    temperature=1.0,\n",
    "    top_k=0,\n",
    "    top_p=0.9,\n",
    "    repetition_penalty=1.0,\n",
    "    num_return_sequences=num_sequences,\n",
    ")\n",
    "\n",
    "generated_sequences"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's decode the generated sequences and print them:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 134,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "this royal throne of kings, this sceptred isle. even if someone had given them permission, even if it were required, they would never have been allowed to live through the hell they've survived.'\n",
      "'they couldn't have known that.\n",
      "--------------------------------------------------------------------------------\n",
      "this royal throne of kings, this sceptred isle and these people are royalty.'\n",
      " then the mute prince and prince edward broke off and went to their rooms. \n",
      " the talk passed again between the princes and the guards and the princess was of great\n",
      "--------------------------------------------------------------------------------\n",
      "this royal throne of kings, this sceptred isle has its own highness, an alatte that waits to save you. in this kingdom your people must emulate the kings of the realm. in this kingdom your kin should be saved from this pit and\n",
      "--------------------------------------------------------------------------------\n",
      "this royal throne of kings, this sceptred isle belongs to me. \" \n",
      " \" the great throne of penvynne? \" \n",
      " \" indeed, \" said the king with a nod of his head. \" this world was once composed of a magical\n",
      "--------------------------------------------------------------------------------\n",
      "this royal throne of kings, this sceptred isle is empty. this is a modern - day fedaykin court, a place where kings are governed, not emperors and judges. i don't see any sign of life that is not their own\n",
      "--------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "for sequence in generated_sequences:\n",
    "    text = tokenizer.decode(sequence, clean_up_tokenization_spaces=True)\n",
    "    print(text)\n",
    "    print(\"-\" * 80)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can try more recent (and larger) models, such as GPT-2, CTRL, Transformer-XL or XLNet, which are all available as pretrained models in the transformers library, including variants with Language Models on top. The preprocessing steps vary slightly between models, so make sure to check out this [generation example](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py) from the transformers documentation (this example uses PyTorch, but it will work with only minor tweaks, such as adding `TF` at the beginning of the model class name, removing the `.to()` method calls, and using `return_tensors=\"tf\"` instead of \"pt\")."
   ]
  },
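  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, switching to GPT-2 might look like the following (a sketch, not run here; it assumes the transformers library is installed and will download the \"gpt2\" weights, roughly 500MB, on first use):\n",
    "\n",
    "```python\n",
    "from transformers import GPT2Tokenizer, TFGPT2LMHeadModel\n",
    "\n",
    "gpt2_tokenizer = GPT2Tokenizer.from_pretrained('gpt2')\n",
    "gpt2_model = TFGPT2LMHeadModel.from_pretrained('gpt2')\n",
    "\n",
    "encoded_prompt = gpt2_tokenizer.encode(\n",
    "    'This royal throne of kings, this sceptred isle',\n",
    "    return_tensors='tf')\n",
    "generated_sequences = gpt2_model.generate(\n",
    "    input_ids=encoded_prompt,\n",
    "    do_sample=True,\n",
    "    max_length=50,\n",
    "    top_p=0.9,\n",
    "    num_return_sequences=3)\n",
    "for sequence in generated_sequences:\n",
    "    print(gpt2_tokenizer.decode(sequence))\n",
    "    print('-' * 80)\n",
    "```"
   ]
  },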
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Hope you enjoyed this chapter! :)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.10"
  },
  "nav_menu": {},
  "toc": {
   "navigate_menu": true,
   "number_sections": true,
   "sideBar": true,
   "threshold": 6,
   "toc_cell": false,
   "toc_section_display": "block",
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
