{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# [COM6513] Assignment 2: Topic Classification with a Feedforward Network\n",
    "\n",
    "\n",
    "### Instructor: Nikos Aletras\n",
    "\n",
    "\n",
    "The goal of this assignment is to develop a Feedforward neural network for topic classification. \n",
    "\n",
    "\n",
    "\n",
    "For that purpose, you will implement:\n",
    "\n",
    "- Text processing methods for transforming raw text data into input vectors for your network  (**1 mark**)\n",
    "\n",
    "\n",
    "- A Feedforward network consisting of:\n",
    "    - **One-hot** input layer mapping words into an **Embedding weight matrix** (**1 mark**)\n",
    "    - **One hidden layer** computing the mean embedding vector of all words in input followed by a **ReLU activation function** (**1 mark**)\n",
    "    - **Output layer** with a **softmax** activation. (**1 mark**)\n",
    "\n",
    "\n",
    "- The Stochastic Gradient Descent (SGD) algorithm with **back-propagation** to learn the weights of your Neural network. Your algorithm should:\n",
    "    - Use (and minimise) the **Categorical Cross-entropy loss** function (**1 mark**)\n",
    "    - Perform a **Forward pass** to compute intermediate outputs (**3 marks**)\n",
    "    - Perform a **Backward pass** to compute gradients and update all sets of weights (**6 marks**)\n",
    "    - Implement and use **Dropout** after each hidden layer for regularisation (**2 marks**)\n",
    "\n",
    "\n",
    "\n",
    "- Discuss how you chose your hyperparameters. You can tune the learning rate (hint: choose small values), the embedding size {e.g. 50, 300, 500} and the dropout rate {e.g. 0.2, 0.5}. Please use tables or graphs to show training and validation performance for each hyperparameter combination (**2 marks**). \n",
    "\n",
    "\n",
    "\n",
    "- After training a model, plot the learning process (i.e. training and validation loss in each epoch) using a line plot and report accuracy. Does your model overfit, underfit or is it about right? (**1 mark**).\n",
    "\n",
    "\n",
    "\n",
    "- Re-train your network by using pre-trained embeddings ([GloVe](https://nlp.stanford.edu/projects/glove/)) trained on large corpora. Instead of randomly initialising the embedding weights matrix, you should initialise it with the pre-trained weights. During training, you should not update them (i.e. weight freezing) and backprop should stop before computing gradients for updating embedding weights. Report results by performing hyperparameter tuning and plotting the learning process. Do you get better performance? (**3 marks**).\n",
    "\n",
    "\n",
    "\n",
    "- Extend your Feedforward network by adding more hidden layers (e.g. one or two more). How does this affect performance? Note: you need to repeat hyperparameter tuning, but the number of combinations grows exponentially, so you should choose a subset of all possible combinations (**4 marks**)\n",
    "\n",
    "\n",
    "- Provide well documented and commented code describing all of your choices. In general, you are free to make decisions about text processing (e.g. punctuation, numbers, vocabulary size) and hyperparameter values. We expect to see justifications and discussion for all of your choices (**2 marks**). \n",
    "\n",
    "\n",
    "\n",
    "- Provide efficient solutions by using Numpy arrays when possible. Executing the whole notebook with your code should not take more than 10 minutes on any standard computer (e.g. Intel Core i5 CPU, 8 or 16GB RAM) excluding hyperparameter tuning runs and loading the pretrained vectors. You can find tips in Lab 1 (**2 marks**). \n",
    "\n",
    "\n",
    "\n",
    "### Data \n",
    "\n",
    "The data you will use for the task is a subset of the [AG News Corpus](http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html) and you can find it in the `./data_topic` folder in CSV format:\n",
    "\n",
    "- `data_topic/train.csv`: contains 2,400 news articles, 800 for each class to be used for training.\n",
    "- `data_topic/dev.csv`: contains 150 news articles, 50 for each class to be used for hyperparameter selection and monitoring the training process.\n",
    "- `data_topic/test.csv`: contains 900 news articles, 300 for each class to be used for testing.\n",
    "\n",
    "### Pre-trained Embeddings\n",
    "\n",
    "You can download pre-trained GloVe embeddings trained on Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download) from [here](http://nlp.stanford.edu/data/glove.840B.300d.zip). There is no need to unzip it: the file is large, so you can read the vectors directly from the archive (e.g. with the `zipfile` module imported below).\n",
    "\n",
    "### Save Memory\n",
    "\n",
    "To save RAM, when you finish each experiment you can delete the weights of your network using `del W` followed by Python's garbage collector `gc.collect()`\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Submission Instructions\n",
    "\n",
    "You should submit a Jupyter Notebook file (assignment2.ipynb) and an exported PDF version (you can do it from Jupyter: `File->Download as->PDF via Latex`).\n",
    "\n",
    "\n",
    "You are advised to follow the code structure given in this notebook by completing all given functions. You can also write any auxiliary/helper functions (and arguments for the functions) that you might need, but note that you can provide a full solution without any such functions. Similarly, you can use only the packages imported below, but you are free to use any functionality from the [Python Standard Library](https://docs.python.org/3/library/index.html), NumPy, SciPy (excluding built-in softmax functions) and Pandas. You are **not allowed to use any third-party library** such as Scikit-learn (apart from the metric functions already provided), NLTK, Spacy, Keras, Pytorch etc. You should mention if you've used Windows to write and test your code, because we mostly use Unix-based machines for marking (e.g. Ubuntu, MacOS). \n",
    "\n",
    "There is no single correct answer on what your accuracy should be, but correct implementations usually achieve F1-scores around 80\\% or higher. The quality of the analysis of the results is as important as the accuracy itself. \n",
    "\n",
    "This assignment will be marked out of 30. It is worth 30\\% of your final grade in the module.\n",
    "\n",
    "The deadline for this assignment is **23:59 on Mon, 9 May 2022** and it needs to be submitted via Blackboard. Standard departmental penalties for lateness will be applied. We use a range of strategies to **detect [unfair means](https://www.sheffield.ac.uk/ssid/unfair-means/index)**, including Turnitin which helps detect plagiarism. Use of unfair means would result in getting a failing grade.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T15:00:18.625532Z",
     "start_time": "2020-04-02T15:00:17.377733Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "from collections import Counter\n",
    "import re\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score\n",
    "import random\n",
    "from time import localtime, strftime\n",
    "from scipy.stats import spearmanr,pearsonr\n",
    "import zipfile\n",
    "import gc\n",
    "\n",
    "# fixing random seed for reproducibility\n",
    "random.seed(123)\n",
    "np.random.seed(123)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Transform Raw texts into training and development data\n",
    "\n",
    "First, you need to load the training, development and test sets from their corresponding CSV files (tip: you can use Pandas dataframes)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:26:39.748484Z",
     "start_time": "2020-04-02T14:26:39.727404Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# read the train data\n",
    "train_data_path = \"./data_topic/train.csv\"\n",
    "train_data=pd.read_csv(train_data_path,header = None,names=['label', 'text'])\n",
    "train_df = pd.DataFrame(train_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# read the test data\n",
    "test_data_path = \"./data_topic/test.csv\"\n",
    "test_data=pd.read_csv(test_data_path,header = None,names=['label', 'text'])\n",
    "test_df = pd.DataFrame(test_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# read the dev data\n",
    "dev_data_path = \"./data_topic/dev.csv\"\n",
    "dev_data=pd.read_csv(dev_data_path,header = None,names=['label', 'text'])\n",
    "dev_df = pd.DataFrame(dev_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:26:39.753874Z",
     "start_time": "2020-04-02T14:26:39.749647Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# combine the train, test and dev data into a single dataframe\n",
    "# (DataFrame.append is deprecated in recent pandas, so use pd.concat)\n",
    "all_df = pd.concat([train_df, test_df, dev_df], ignore_index=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Create input representations\n",
    "\n",
    "\n",
    "To train your Feedforward network, you first need to obtain input representations given a vocabulary. One-hot encoding requires large memory capacity. Therefore, we will instead represent documents as lists of vocabulary indices (each word corresponds to a vocabulary index). \n",
    "\n",
    "\n",
    "## Text Pre-Processing Pipeline\n",
    "\n",
    "To obtain a vocabulary of words, you should: \n",
    "- tokenise all texts into a list of unigrams (tip: you can re-use the functions from Assignment 1) \n",
    "- remove stop words (using the one provided or one of your preference) \n",
    "- remove unigrams appearing in less than K documents\n",
    "- use the remaining to create a vocabulary of the top-N most frequent unigrams in the entire corpus.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:26:40.851926Z",
     "start_time": "2020-04-02T14:26:40.847500Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "stop_words = ['a','in','on','at','and','or', \n",
    "              'to', 'the', 'of', 'an', 'by', \n",
    "              'as', 'is', 'was', 'were', 'been', 'be', \n",
    "              'are','for', 'this', 'that', 'these', 'those', 'you', 'i', 'if',\n",
    "             'it', 'he', 'she', 'we', 'they', 'will', 'have', 'has',\n",
    "              'do', 'did', 'can', 'could', 'who', 'which', 'what',\n",
    "              'but', 'not', 'there', 'no', 'does', 'not', 'so', 've', 'their',\n",
    "             'his', 'her', 'they', 'them', 'from', 'with', 'its']\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Unigram extraction from a document\n",
    "\n",
    "You first need to implement the `extract_ngrams` function. It takes as input:\n",
    "- `x_raw`: a string corresponding to the raw text of a document\n",
    "- `ngram_range`: a tuple of two integers denoting the type of ngrams you want to extract, e.g. (1,2) denotes extracting unigrams and bigrams.\n",
    "- `token_pattern`: a string to be used within a regular expression to extract all tokens. Note that data is already tokenised so you could opt for a simple white space tokenisation.\n",
    "- `stop_words`: a list of stop words\n",
    "- `vocab`: a given vocabulary. It should be used to extract specific features.\n",
    "\n",
    "and returns:\n",
    "\n",
    "- a list of all extracted features.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-05-11T08:23:17.181553Z",
     "start_time": "2020-05-11T08:23:17.178314Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def extract_ngrams(x_raw, ngram_range=(1,3), token_pattern=r'\\b[A-Za-z][A-Za-z]+\\b', stop_words=[], vocab=set()):\n",
    "   \n",
    "    tokenRE = re.compile(token_pattern)\n",
    "   \n",
    "    # first extract all unigrams by tokenising\n",
    "    x_uni = [w for w in tokenRE.findall(str(x_raw).lower(),) if w not in stop_words]\n",
    "   \n",
    "    # this is to store the ngrams to be returned\n",
    "    x = []\n",
    "   \n",
    "    if ngram_range[0]==1:\n",
    "        x = x_uni\n",
    "   \n",
    "    # generate n-grams from the available unigrams x_uni\n",
    "    ngrams = []\n",
    "    for n in range(ngram_range[0], ngram_range[1]+1):\n",
    "       \n",
    "        # ignore unigrams\n",
    "        if n==1: continue\n",
    "       \n",
    "        # pass a list of lists as an argument for zip\n",
    "        arg_list = [x_uni]+[x_uni[i:] for i in range(1, n)]\n",
    "\n",
    "        # extract tuples of n-grams using zip\n",
    "        # for bigram this should look: list(zip(x_uni, x_uni[1:]))\n",
    "        # align each item x[i] in x_uni with the next one x[i+1].\n",
    "        # Note that x_uni and x_uni[1:] have different lengths\n",
    "        # but zip ignores redundant elements at the end of the second list\n",
    "        # Alternatively, this could be done with for loops\n",
    "        x_ngram = list(zip(*arg_list))\n",
    "        ngrams.append(x_ngram)\n",
    "   \n",
    "       \n",
    "    for n in ngrams:\n",
    "        for t in n:\n",
    "            x.append(t)\n",
    "       \n",
    "    if len(vocab)>0:\n",
    "        x = [w for w in x if w in vocab]\n",
    "       \n",
    "    return x\n"
   ]
  },
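  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (the sentence below is made up for illustration), `extract_ngrams` with `ngram_range=(1,2)` should return the surviving unigrams followed by bigram tuples built from them (note that stop words are removed *before* the n-grams are constructed):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# illustrative example: stop words are removed before n-grams are built\n",
    "print(extract_ngrams(\"the new president was sworn in today\",\n",
    "                     ngram_range=(1,2), stop_words=stop_words))\n",
    "# ['new', 'president', 'sworn', 'today', ('new', 'president'),\n",
    "#  ('president', 'sworn'), ('sworn', 'today')]"
   ]
  },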
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create a vocabulary of n-grams\n",
    "\n",
    "Then the `get_vocab` function will be used to (1) create a vocabulary of ngrams; (2) count the document frequencies of ngrams; and (3) count their raw frequencies. It takes as input:\n",
    "- `X_raw`: a list of strings each corresponding to the raw text of a document\n",
    "- `ngram_range`: a tuple of two integers denoting the type of ngrams you want to extract, e.g. (1,2) denotes extracting unigrams and bigrams.\n",
    "- `token_pattern`: a string to be used within a regular expression to extract all tokens. Note that data is already tokenised so you could opt for a simple white space tokenisation.\n",
    "- `stop_words`: a list of stop words\n",
    "- `min_df`: keep ngrams with a minimum document frequency.\n",
    "- `keep_topN`: keep top-N more frequent ngrams.\n",
    "\n",
    "and returns:\n",
    "\n",
    "- `vocab`: a set of the n-grams that will be used as features.\n",
    "- `df`: a Counter (or dict) that contains ngrams as keys and their corresponding document frequency as values.\n",
    "- `ngram_counts`: counts of each ngram in vocab\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:26:42.563876Z",
     "start_time": "2020-04-02T14:26:42.557967Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def get_vocab(X_raw, ngram_range=(1,3), token_pattern=r'\\b[A-Za-z][A-Za-z]+\\b',\n",
    "              min_df=0, keep_topN=0, stop_words=[]):\n",
    "   \n",
    "   \n",
    "    tokenRE = re.compile(token_pattern)\n",
    "   \n",
    "    df = Counter()\n",
    "    ngram_counts = Counter()\n",
    "    vocab = set()\n",
    "   \n",
    "    # iterate through each raw text\n",
    "    for x in X_raw:\n",
    "       \n",
    "        x_ngram = extract_ngrams(x, ngram_range=ngram_range, token_pattern=token_pattern, stop_words=stop_words)\n",
    "       \n",
    "        #update doc and ngram frequencies\n",
    "        df.update(list(set(x_ngram)))\n",
    "        ngram_counts.update(x_ngram)\n",
    "\n",
    "    # obtain a vocabulary as a set.\n",
    "# Keep elements with doc frequency >= minimum doc freq (min_df).\n",
    "# Note that df contains all ngrams seen in the corpus\n",
    "    vocab = set([w for w in df if df[w]>=min_df])\n",
    "   \n",
    "    # keep the top N most frequent\n",
    "    if keep_topN>0:\n",
    "        vocab = set([w[0] for w in ngram_counts.most_common(keep_topN) if w[0] in vocab])\n",
    "   \n",
    "   \n",
    "    return vocab, df, ngram_counts"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now you should use `get_vocab` to create your vocabulary and get document and raw frequencies of unigrams:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:26:43.577997Z",
     "start_time": "2020-04-02T14:26:43.478950Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# get the vocab of all the data\n",
    "vocab, df, ngram_counts = get_vocab(all_df['text'], ngram_range=(1,3), token_pattern=r'\\b[A-Za-z][A-Za-z]+\\b',\n",
    "              min_df=0, keep_topN=0, stop_words=stop_words)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then, you need to create vocabulary id -> word and word -> vocabulary id dictionaries for reference:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:26:44.069661Z",
     "start_time": "2020-04-02T14:26:44.065058Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# the dict from word to index\n",
    "word_id_dict = dict(zip(list(vocab),list(range(len(vocab)))))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "# the dict from index to word\n",
    "id_word_dict = dict(zip(list(range(len(vocab))),list(vocab)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Convert the list of unigrams  into a list of vocabulary indices"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Storing actual one-hot vectors in memory for all words in the entire data set is prohibitive. Instead, we will store word indices in the vocabulary and look up the weight matrix. This is equivalent to taking the dot product between a one-hot vector and the weight matrix. \n",
    "\n",
    "First, represent documents in train, dev and test sets as lists of words in the vocabulary:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:26:45.047887Z",
     "start_time": "2020-04-02T14:26:44.920631Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# iterate through each raw text\n",
    "def get_indices(data):\n",
    "    indices = []\n",
    "    # get all the indices in the data\n",
    "    for x in data:\n",
    "        # get ngram of a row of data\n",
    "        x_ngram = extract_ngrams(x, ngram_range=(1,3),token_pattern=r'\\b[A-Za-z][A-Za-z]+\\b', stop_words=stop_words)\n",
    "        # convert the ngram to indices\n",
    "        x_indices = []\n",
    "        for i in x_ngram:\n",
    "            if i in word_id_dict.keys():\n",
    "                x_indices.append(word_id_dict[i])\n",
    "        indices.append(x_indices)\n",
    "    # return the result\n",
    "    return indices        "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then convert them into lists of indices in the vocabulary:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:26:45.752658Z",
     "start_time": "2020-04-02T14:26:45.730409Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# vocabulary indices for the three splits\n",
    "train_indices = get_indices(train_df['text'])\n",
    "test_indices = get_indices(test_df['text'])\n",
    "dev_indices = get_indices(dev_df['text'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Put the labels `Y` for train, dev and test sets into arrays: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T15:03:13.183996Z",
     "start_time": "2020-04-02T15:03:13.077575Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# the labels of each split as arrays\n",
    "train_Y = train_df['label'].values\n",
    "test_Y = test_df['label'].values\n",
    "dev_Y = dev_df['label'].values"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Network Architecture\n",
    "\n",
    "Your network should pass each word index into its corresponding embedding by looking-up on the embedding matrix and then compute the first hidden layer $\\mathbf{h}_1$:\n",
    "\n",
    "$$\\mathbf{h}_1 = \\frac{1}{|x|}\\sum_i W^e_i, i \\in x$$\n",
    "\n",
    "where $|x|$ is the number of words in the document and $W^e$ is an embedding matrix $|V|\\times d$, $|V|$ is the size of the vocabulary and $d$ the embedding size.\n",
    "\n",
    "Then $\\mathbf{h}_1$ should be passed through a ReLU activation function:\n",
    "\n",
    "$$\\mathbf{a}_1 = relu(\\mathbf{h}_1)$$\n",
    "\n",
    "Finally the hidden layer is passed to the output layer:\n",
    "\n",
    "\n",
    "$$\\mathbf{y} = \\text{softmax}(\\mathbf{a}_1W) $$ \n",
    "where $W$ is a matrix $d \\times |{\\cal Y}|$, $|{\\cal Y}|$ is the number of classes.\n",
    "\n",
    "During training, $\\mathbf{a}_1$ should be multiplied with a dropout mask vector (elementwise) for regularisation before it is passed to the output layer.\n",
    "\n",
    "You can extend to a deeper architecture by passing a hidden layer to another one:\n",
    "\n",
    "$$\\mathbf{h_i} = \\mathbf{a}_{i-1}W_i $$\n",
    "\n",
    "$$\\mathbf{a_i} = relu(\\mathbf{h_i}) $$\n",
    "\n"
   ]
  },
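  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the first hidden layer concrete, here is a tiny worked example with made-up numbers (a toy $3\\times 2$ embedding matrix, not the real weights): averaging the embedding rows of the words in the document gives $\\mathbf{h}_1$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# toy example of the mean-embedding hidden layer (numbers are made up)\n",
    "We_toy = np.array([[1., 2.], [3., 4.], [5., 6.]])  # |V|=3, d=2\n",
    "x_toy = [0, 2]                     # a document containing words 0 and 2\n",
    "h1_toy = np.mean(We_toy[x_toy], axis=0)\n",
    "print(h1_toy)                      # [3. 4.]"
   ]
  },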
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Network Training\n",
    "\n",
    "First we need to define the parameters of our network by initialising the weight matrices. For that purpose, you should implement the `network_weights` function that takes as input:\n",
    "\n",
    "- `vocab_size`: the size of the vocabulary\n",
    "- `embedding_dim`: the size of the word embeddings\n",
    "- `hidden_dim`: a list of the sizes of any subsequent hidden layers. Empty if there are no hidden layers between the average embedding and the output layer \n",
    "- `num_classes`: the number of the classes for the output layer\n",
    "\n",
    "and returns:\n",
    "\n",
    "- `W`: a dictionary mapping from layer index (e.g. 0 for the embedding matrix) to the corresponding weight matrix initialised with small random numbers (hint: use numpy.random.uniform with values from -0.1 to 0.1)\n",
    "\n",
    "Make sure that the dimensionality of each weight matrix is compatible with the previous and next weight matrix, otherwise you won't be able to perform forward and backward passes. Consider also using np.float32 precision to save memory."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T15:41:20.918617Z",
     "start_time": "2020-04-02T15:41:20.915597Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def network_weights(vocab_size=1000, embedding_dim=300, \n",
    "                    hidden_dim=[], num_classes=3, init_val = 0.5):\n",
    "    # list of weight matrices, indexed by layer (0 is the embedding matrix)\n",
    "    W = []\n",
    "    # the embedding weight matrix\n",
    "    W0 = np.float32(np.random.uniform(-0.1, 0.1, (vocab_size, embedding_dim)))\n",
    "    W.append(W0)\n",
    "    \n",
    "    # the weights of any additional hidden layers\n",
    "    for i in hidden_dim:\n",
    "        Wi = np.float32(np.random.uniform(-0.1, 0.1, (W[-1].shape[1], i)))\n",
    "        W.append(Wi)\n",
    "        \n",
    "    # the weights of the output layer\n",
    "    Wn = np.float32(np.random.uniform(-0.1, 0.1, (W[-1].shape[1], num_classes)))\n",
    "    W.append(Wn)\n",
    "    return W"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-01T10:31:57.970152Z",
     "start_time": "2020-04-01T10:31:57.966123Z"
    }
   },
   "source": [
    "Then you need to develop a `softmax` function (same as in Assignment 1) to be used in the output layer. \n",
    "\n",
    "It takes as input `z` (array of real numbers) and returns `sig` (the softmax of `z`)\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "def softmax(z):\n",
    "    # subtract the max before exponentiating for numerical stability;\n",
    "    # this does not change the softmax output\n",
    "    z_exp = np.exp(z - np.max(z))\n",
    "    # normalise so the outputs sum to 1\n",
    "    sig = z_exp / np.sum(z_exp)\n",
    "    return sig"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now you need to implement the categorical cross entropy loss by slightly modifying the function from Assignment 1 to depend only on the true label `y` and the class probabilities vector `y_preds`:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "def categorical_loss(y, y_preds):\n",
    "    # with a one-hot true label, the categorical cross-entropy\n",
    "    # loss = -sum_i y_i*log(y_pred_i) reduces to -log of the\n",
    "    # probability assigned to the true class (labels are 1-indexed)\n",
    "    loss = -np.log(y_preds[y - 1])\n",
    "    return loss"
   ]
  },
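  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check (made-up probabilities; recall that the labels are 1-indexed): a confident correct prediction should give a small loss, and a less confident one a larger loss."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# -log(0.9) is about 0.105, -log(0.3) is about 1.204\n",
    "print(categorical_loss(1, [0.9, 0.05, 0.05]))\n",
    "print(categorical_loss(2, [0.4, 0.3, 0.3]))"
   ]
  },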
  {
   "cell_type": "markdown",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-03-31T15:02:56.149535Z",
     "start_time": "2020-03-31T15:02:56.145738Z"
    }
   },
   "source": [
    "Then, implement the `relu` function to introduce non-linearity after each hidden layer of your network \n",
    "(during the forward pass): \n",
    "\n",
    "$$relu(z_i)= max(z_i,0)$$\n",
    "\n",
    "and the `relu_derivative` function to compute its derivative (used in the backward pass):\n",
    "\n",
    "  \n",
    "$$relu\\_derivative(z_i)= \\begin{cases} 0 & \\text{if } z_i \\leq 0 \\\\ 1 & \\text{otherwise} \\end{cases}$$\n",
    "  \n",
    "\n",
    "\n",
    "Note that both functions take as input a vector $z$.\n",
    "\n",
    "Hint: use `.copy()` to avoid in-place changes to the array `z`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:26:52.665236Z",
     "start_time": "2020-04-02T14:26:52.661519Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def relu(z):\n",
    "    # copy to avoid modifying z in place, then zero out negative entries\n",
    "    a = z.copy()\n",
    "    a[a < 0] = 0\n",
    "    return a\n",
    "    \n",
    "def relu_derivative(z):\n",
    "    # the gradient of relu is 1 where z > 0 and 0 elsewhere\n",
    "    dz = np.zeros(len(z))\n",
    "    dz[z > 0] = 1\n",
    "    return dz"
   ]
  },
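  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check on a toy vector: `relu` should zero out the negative entries, and `relu_derivative` should be 1 exactly where $z_i > 0$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "z_toy = np.array([-2., 0., 3.])\n",
    "print(relu(z_toy))             # expected: [0. 0. 3.]\n",
    "print(relu_derivative(z_toy))  # expected: [0. 0. 1.]"
   ]
  },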
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "During training you should also apply a dropout mask element-wise after the activation function (i.e. vector of ones with a random percentage set to zero). The `dropout_mask` function takes as input:\n",
    "\n",
    "- `size`: the size of the vector that we want to apply dropout\n",
    "- `dropout_rate`: the percentage of elements that will be randomly set to zeros\n",
    "\n",
    "and returns:\n",
    "\n",
    "- `dropout_vec`: a vector with binary values (0 or 1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:26:53.429192Z",
     "start_time": "2020-04-02T14:26:53.425301Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def dropout_mask(size, dropout_rate):\n",
    "    # number of elements to set to zero\n",
    "    n_zeros = int(np.around(size * dropout_rate))\n",
    "    # build a vector with n_zeros zeros and (size - n_zeros) ones\n",
    "    dropout_vec = np.append(np.zeros(n_zeros), np.ones(size - n_zeros))\n",
    "    # shuffle so the zeroed positions are chosen at random\n",
    "    np.random.shuffle(dropout_vec)\n",
    "    return dropout_vec"
   ]
  },
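  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, a mask of size 10 with `dropout_rate=0.2` should contain exactly two zeros (so its sum is 8), with the zeroed positions chosen at random:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mask_toy = dropout_mask(10, 0.2)\n",
    "print(mask_toy, mask_toy.sum())  # 8 ones and 2 zeros, so the sum is 8.0"
   ]
  },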
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now you need to implement the `forward_pass` function that passes the input x through the network up to the output layer for computing the probability for each class using the weight matrices in `W`. The ReLU activation function should be applied on each hidden layer. \n",
    "\n",
    "- `x`: a list of vocabulary indices each corresponding to a word in the document (input)\n",
    "- `W`: a list of weight matrices connecting each part of the network, e.g. for a network with a hidden and an output layer: W[0] is the weight matrix that connects the input to the first hidden layer, W[1] is the weight matrix that connects the hidden layer to the output layer.\n",
    "- `dropout_rate`: the dropout rate that is used to generate a random dropout mask vector applied after each hidden layer for regularisation.\n",
    "\n",
    "and returns:\n",
    "\n",
    "- `out_vals`: a dictionary of output values from each layer: h (the vector before the activation function), a (the resulting vector after passing h from the activation function), its dropout mask vector; and the prediction vector (probability for each class) from the output layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:26:54.761268Z",
     "start_time": "2020-04-02T14:26:54.753402Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def forward_pass(x, W, dropout_rate=0.2):\n",
    "    # pre-activation output of each layer\n",
    "    H = []\n",
    "    # post-activation (and dropout) output of each layer\n",
    "    A = []\n",
    "    # the dropout masks applied after each hidden layer\n",
    "    dropout_vecs = []\n",
    "    # dictionary of intermediate outputs to be returned\n",
    "    out_vals_dic = {}\n",
    "    \n",
    "    # embedding layer: look up the embedding of each word in x\n",
    "    # and average them to obtain the first hidden layer\n",
    "    h = np.mean([W[0][i] for i in x], axis=0)\n",
    "    H.append([h])\n",
    "    # ReLU activation followed by an element-wise dropout mask\n",
    "    mask = dropout_mask(len(h), dropout_rate)\n",
    "    dropout_vecs.append(mask)\n",
    "    A.append([relu(h) * mask])\n",
    "    \n",
    "    # the remaining layers\n",
    "    for layer in range(1, len(W)):\n",
    "        h = np.dot(A[-1][0], W[layer])\n",
    "        H.append([h])\n",
    "        if layer < len(W) - 1:\n",
    "            # hidden layer: ReLU activation then dropout\n",
    "            mask = dropout_mask(len(h), dropout_rate)\n",
    "            dropout_vecs.append(mask)\n",
    "            A.append([relu(h) * mask])\n",
    "        else:\n",
    "            # output layer: keep the raw scores, softmax is applied below\n",
    "            A.append([h])\n",
    "    \n",
    "    # class probabilities from the output layer\n",
    "    out_vec = [softmax(i) for i in A[-1]]\n",
    "    \n",
    "    out_vals_dic['h'] = H\n",
    "    out_vals_dic['a'] = A\n",
    "    out_vals_dic['drop'] = dropout_vecs\n",
    "    out_vals = [out_vals_dic, out_vec]\n",
    "    \n",
    "    return out_vals"
   ]
  },
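  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick check with toy sizes (the dimensions below are made up): the prediction from the output layer should be a valid probability distribution, i.e. its values should sum to 1."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# toy network: |V|=5, d=4, no extra hidden layers, 3 classes\n",
    "W_toy = network_weights(vocab_size=5, embedding_dim=4, hidden_dim=[], num_classes=3)\n",
    "out_toy = forward_pass([0, 2, 4], W_toy, dropout_rate=0.2)\n",
    "print(out_toy[-1])  # class probabilities, summing to 1"
   ]
  },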
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `backward_pass` function computes the gradients and updates the weights for each matrix in the network from the output to the input. It takes as input \n",
    "\n",
    "- `x`: a list of vocabulary indices each corresponding to a word in the document (input)\n",
    "- `y`: the true label\n",
    "- `W`: a list of weight matrices connecting the parts of the network; e.g. for a network with one hidden layer and an output layer, `W[0]` connects the input to the hidden layer and `W[1]` connects the hidden layer to the output layer.\n",
    "- `out_vals`: a dictionary of output values from a forward pass.\n",
    "- `learning_rate`: the learning rate for updating the weights.\n",
    "- `freeze_emb`: boolean value indicating whether the embedding weights will be updated.\n",
    "\n",
    "and returns:\n",
    "\n",
    "- `W`: the updated weights of the network.\n",
    "\n",
    "Hint: the gradients on the output layer are similar to the multiclass logistic regression."
   ]
  },
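  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the hint concrete: with softmax outputs $\\hat{y} = \\mathrm{softmax}(z)$ and the categorical cross-entropy $L = -\\sum_k y_k \\log \\hat{y}_k$ for a one-hot target $y$, the two derivatives combine so that the gradient at the output layer reduces to\n",
    "\n",
    "$$\\frac{\\partial L}{\\partial z} = \\hat{y} - y,$$\n",
    "\n",
    "exactly as in multiclass logistic regression; no additional activation derivative is applied at the output layer."
   ]
  },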
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-05-11T08:24:13.732705Z",
     "start_time": "2020-05-11T08:24:13.729741Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def backward_pass(x, y, W, out_vals, lr=0.001, freeze_emb=False):\n",
    "    y_pre = out_vals[-1]\n",
    "    y_one_hot = np.zeros(len(y_pre[0]))\n",
    "    # chage y one dimension to two dimension\n",
    "    y_one_hot[y - 1] = 1\n",
    "    #compute the error between predict label and the true label\n",
    "    error = y_pre - y_one_hot\n",
    "    layers = len(out_vals[0]['h'])\n",
    "    # compute relu_derivative\n",
    "    relu11 = [relu_derivative(i) for i in error]\n",
    "    D = [error * np.asarray(relu11)]\n",
    "    Wp = W.copy()\n",
    "    #loop every layer\n",
    "    for layer in range(layers-1,0,-1):\n",
    "        #compute delta\n",
    "        delta = np.dot(D[-1],Wp[layer].T)\n",
    "        # compute relu_derivative\n",
    "        tmp =[relu_derivative(i) for i in out_vals[0]['a'][layer-1]]\n",
    "        # update delta\n",
    "        delta = delta*np.array(tmp)\n",
    "        D.append(delta)\n",
    "    D=D[::-1]\n",
    "    # chage x indices into matrix\n",
    "    x_mat = np.zeros((1,Wp[0].shape[0]))\n",
    "    for index in x:\n",
    "        x_mat[0][index-1] = 1\n",
    "\n",
    "    Wnew = []\n",
    "    \n",
    "    for layer in range(0,len(Wp)):\n",
    "        if freeze_emb != False and layer == 0:\n",
    "            Wnew.append(W[0])\n",
    "            continue\n",
    "        elif layer == 0:\n",
    "            tmp = np.dot(x_mat.T,D[layer])\n",
    "        else:\n",
    "            tmp =np.dot(np.array(out_vals[0]['a'][layer-1]).T,D[layer])\n",
    "        Wnew.append(W[layer] - lr * tmp)\n",
    "\n",
    "    return Wnew\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-02-15T14:08:59.937442Z",
     "start_time": "2020-02-15T14:08:59.932221Z"
    }
   },
   "source": [
    "Finally you need to modify SGD to support back-propagation by using the `forward_pass` and `backward_pass` functions.\n",
    "\n",
    "The `SGD` function takes as input:\n",
    "\n",
    "- `X_tr`: array of training data (vectors)\n",
    "- `Y_tr`: labels of `X_tr`\n",
    "- `W`: the weights of the network (dictionary)\n",
    "- `X_dev`: array of development (i.e. validation) data (vectors)\n",
    "- `Y_dev`: labels of `X_dev`\n",
    "- `lr`: learning rate\n",
    "- `dropout`: regularisation strength\n",
    "- `epochs`: number of full passes over the training data\n",
    "- `tolerance`: stop training if the difference between the current and previous validation loss is smaller than a threshold\n",
    "- `freeze_emb`: boolean value indicating whether the embedding weights will be updated (to be used by the backward pass function).\n",
    "- `print_progress`: flag for printing the training progress (train/validation loss)\n",
    "\n",
    "\n",
    "and returns:\n",
    "\n",
    "- `weights`: the weights learned\n",
    "- `training_loss_history`: an array with the average losses of the whole training set after each epoch\n",
    "- `validation_loss_history`: an array with the average losses of the whole development set after each epoch"
   ]
  },
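  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One easy mistake inside SGD is shuffling `X_tr` in place, which silently breaks the pairing with `Y_tr`. A minimal sketch of shuffling through a shared index permutation instead (the toy `X_tr`/`Y_tr` values here are purely illustrative):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "X_tr = [[10], [20], [30], [40]]\n",
    "Y_tr = [1, 2, 3, 4]\n",
    "\n",
    "# one random permutation drives both lists, so each (x, y) pair stays aligned\n",
    "idx = np.random.permutation(len(X_tr))\n",
    "X_shuf = [X_tr[i] for i in idx]\n",
    "Y_shuf = [Y_tr[i] for i in idx]\n",
    "```"
   ]
  },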
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T15:09:19.021428Z",
     "start_time": "2020-04-02T15:09:19.017835Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def SGD(X_tr, Y_tr, W, X_dev=[], Y_dev=[], lr=0.001, \n",
    "        dropout=0.2, epochs=5, tolerance=0.001, freeze_emb=False, \n",
    "        print_progress=True):\n",
    "    training_loss_history = []\n",
    "    validation_loss_history = []\n",
    "    \n",
    "    if X_dev:\n",
    "        n_test = len(X_dev)\n",
    "        n = len(X_tr)\n",
    "    for i in range(epochs):\n",
    "        # shuffle the train data\n",
    "        random.shuffle(X_tr)\n",
    "        train_loss = []\n",
    "        for j in range(len(X_tr)):\n",
    "            print(str(j) + \" times updata parameter\")\n",
    "            # input one data to change the network parameter\n",
    "            train_out_vals = forward_pass(X_tr[j], W, dropout)\n",
    "            # compute categorical_loss\n",
    "            loss = categorical_loss(Y_tr[j],train_out_vals[1][0])\n",
    "            train_loss.append(loss)\n",
    "            training_loss_history.append(np.average(train_loss))\n",
    "            # backward pass\n",
    "            Wnew = backward_pass(X_tr[j],Y_tr[j],W,train_out_vals,lr)\n",
    "            c = W\n",
    "            if freeze_emb != False:\n",
    "                W = []\n",
    "                W.append(c[0])\n",
    "                W.extend(Wnew[1:])\n",
    "            else:\n",
    "                W = Wnew\n",
    "            WE = np.array(W)-np.array(c)\n",
    "            validation_loss = []\n",
    "            if X_dev!=None:\n",
    "                for k in range(len(X_dev)):\n",
    "                    #compute the dev forward_pass\n",
    "                    dev_out_vals = forward_pass(X_dev[k], W, dropout)\n",
    "                    validation_loss.append(categorical_loss(Y_dev[k],dev_out_vals[1][0]))\n",
    "                validation_loss_history.append(np.average(validation_loss))\n",
    "                #computer the backward_pass\n",
    "                if len(validation_loss_history)==1:\n",
    "                    W = backward_pass(X_tr[j],Y_tr[j],W,train_out_vals,lr)\n",
    "                elif abs(validation_loss_history[-1]- validation_loss_history[-2] ) >= tolerance:\n",
    "                    W = backward_pass(X_tr[j],Y_tr[j],W,train_out_vals,lr)\n",
    "                else:\n",
    "                    #print progress\n",
    "                    if print_progress:\n",
    "                        plot_progress(range(len(training_loss_history)),training_loss_history,'train')\n",
    "                        plot_progress(range(len(validation_loss_history)),validation_loss_history,'dev')\n",
    "                    return W, training_loss_history, validation_loss_history\n",
    "            "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pylab as pl\n",
    "def plot_progress(x,y,title):\n",
    "    fig = plt.figure(figsize=(7,5))\n",
    "    ax1 = fig.add_subplot(1,1,1)\n",
    "    p1 = pl.plot(x,y,label=u''+ title)\n",
    "    pl.legend()\n",
    "    pl.xlabel(u'iters')\n",
    "    pl.ylabel(u'loss')\n",
    "    plt.title(title + ' loss plot')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-02-15T14:10:15.772383Z",
     "start_time": "2020-02-15T14:10:15.767855Z"
    }
   },
   "source": [
    "Now you are ready to train and evaluate your neural net. First, you need to define your network using the `network_weights` function followed by SGD with backprop:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "W = network_weights(vocab_size=len(vocab),embedding_dim=300,\n",
    "                    hidden_dim=[], num_classes=3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T15:09:33.643515Z",
     "start_time": "2020-04-02T15:09:33.640943Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 times updata parameter\n",
      "1 times updata parameter\n",
      "2 times updata parameter\n",
      "3 times updata parameter\n"
     ]
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAc0AAAFNCAYAAABi9TTFAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+17YcXAAAgAElEQVR4nO3deXyV9Zn//9eVhYRAFkjCloBh3wJSjIiKCm4VsKKtrV2s1bYy0367OR2nrUu1Ko7WX211OtWxM9bahTqjglRRqVoFF7TBsoRFEvawhi1hJ8v1++PchEgTOEBO7iTn/Xw8zsNz7vX65GDeuT/3fX9uc3dERETkxBLCLkBERKStUGiKiIhESaEpIiISJYWmiIhIlBSaIiIiUVJoioiIREmhKdJGmNnjZnbnKa77ppl9vblrOhVmttbMLg27DpFTodAUaQHNERTu/s/ufm9z1dTamdl4MysPuw6RhhSaIq2AmSWFXYOInJhCUyTGzOx3QB/gz2a218z+zcwKzMzN7Gtmth54I1j2/8xsi5lVmtlcMxveYDtPmdl9wfvxZlZuZt83s21mttnMboqyngQzu8PM1gXrPm1mmcG8VDP7vZntMLPdZvY3M+sezLvRzFab2R4zW2NmX2pi+3eb2bNm9kyw7IdmdmYTy6aY2S/MbFPw+kUwrRPwMtAr+JntNbNeUf/QRWJEoSkSY+7+ZWA98Cl37+zuP20w+yJgKPDJ4PPLwECgG/Ah8IfjbLoHkAnkAV8D/tPMukRR0o3BawLQD+gM/DKY95Vgm72BbOCfgQNBiD0KTHT3dOA8YOFx9jEF+D+gK/BHYKaZJTey3O3AWGAUcCYwBrjD3fcBE4FNwc+ss7tviqJtIjGl0BQJ193uvs/dDwC4+5PuvsfdDwF3A2ceOQpsRDVwj7tXu/tsYC8wOIp9fgl42N1Xu/te4EfA54Mu4moiYTnA3WvdfYG7VwXr1QGFZtbR3Te7+9Lj7GOBuz/r7tXAw0AqkXBsrJZ73H2bu1cAPwG+HEUbREKh0BQJ14Yjb8ws0cweMLNVZlYFrA1m5TSx7g53r2nweT+Ro8YT6QWsa/B5HZAEdAd+B7wK/CnoLv2pmSUHR37XETny3GxmL5nZkGja5e51QHmw32hqUTestFoKTZGW0dTjhBpO/yKRbs1LiXSRFgTTrZlr2QSc0eBzH6AG2Boctf7E3YcR6YK9ErgBwN1fdffLgJ7ACuDXx9lH7yNvzCwByA/2G00tR5bTI5ik1VFoirSMrUTOHx5POnAI2AGkAffHqJbpwC1m1tfMOgf7ecbda8xsgpmNMLNEoIpId22tmXU3s6uCc5uHiHQF1x5nH2eZ2aeDLt/vBevMb6KWO8ws18xygB8Dvw/mbQWyj9M9LdLiFJoiLePfiYTDbjP71yaWeZpI9+RGYBmNh0xzeJJIN+xcYA1wEPh2MK8H8CyRwFwOvEUkxBKA7xM5CtxJ5AKmbx5nHy8Q6c7dReQc5aeD85vHug8oBhYDS4hc/HQfgLuvIBKqq4Ofm7ptJXSmh1CLSHMys7uJXEh0fdi1iDQ3HWmKiIhESaEpIiISJXXPioiIRElHmiIiIlFSaIqIiEQprp+skJOT4wUFBWGXISIirciCBQu2u3tuY/PiOjQLCgooLi4OuwwREWlFzGxdU/PUPSsiIhIlhaaIiEiUFJoiIiJRiutzmiIi8nHV1dWUl5dz8ODBsEuJudTUVPLz80lObuz56I1TaIqISL3y8nLS09MpKCjArLmfStd6uDs7duygvLycvn37Rr2eumdFRKTewYMHyc7ObteBCWBmZGdnn/QRtUJTREQ+pr0H5hGn0k6FpoiItCq7d+/mV7/61UmvN2nSJHbv3h2Dio5SaIqISKvSVGjW1tYed73Zs2eTlZUVq7IAXQh02l5espnkxARG5mfSLSM17HJERNq8H/7wh6xatYpRo0aRnJxM
586d6dmzJwsXLmTZsmVcffXVbNiwgYMHD/Ld736XqVOnAkdHedu7dy8TJ05k3LhxvPvuu+Tl5fHCCy/QsWPH065NoXmafvaXlZRt2wtA94wURuRlMiIvi5H5mRTmZZKbnhJyhSIibcsDDzxASUkJCxcu5M0332Ty5MmUlJTUX+X65JNP0rVrVw4cOMDZZ5/NZz7zGbKzsz+2jdLSUqZPn86vf/1rPve5z/Hcc89x/fXXn3ZtCs3TNOtb57NsUxVLNlaypLySxRsreX3FNo48prRnZioj8jIZmZ/JiPwsRuRl0rVTh3CLFhGJwk/+vJRlm6qadZvDemVw16eGn9Q6Y8aM+dhtIY8++igzZswAYMOGDZSWlv5DaPbt25dRo0YBcNZZZ7F27drTKzyg0DxNaR2SKCroSlFB1/ppew/VsGxTFYvLd9eH6ZxlW+vn52V1DEI0MzgyzSQrTUEqItKYTp061b9/8803ee2113jvvfdIS0tj/Pjxjd42kpJytJcvMTGRAwcONEstCs0Y6JySxJi+XRnT92iQVh2sZunGKpZs3M3i8kqWbKzk5ZIt9fP7dE2rD9GReZkMz8sks2P0o1SIiDS3kz0ibC7p6ens2bOn0XmVlZV06dKFtLQ0VqxYwfz581u0NoVmC8lITebc/tmc2/9oF0Ll/mpKNlUGIbqbxeW7eWnx5vr5fXM6URiE6Ij8TIb3yiA9VUEqIu1bdnY2559/PoWFhXTs2JHu3bvXz7viiit4/PHHGTlyJIMHD2bs2LEtWpv5kZNvcaioqMhb2/M0d+07HOnSDbp1l2ysZOPuSLeCWSRIIyEaudhoWM8MOqXobx8RaR7Lly9n6NChYZfRYhprr5ktcPeixpbXb9tWpkunDlw4KJcLBx19aPj2vYdYsrGSkuBCo/mrdzJz4SYgEqQDcjszIv/oEemwnpl07JAYVhNERNothWYbkNM5hQmDuzFhcLf6adv2HKRkY9C1W17JvNLtPP/hRgASDAZ1T49cZBScJx3aM4PUZAWpiMjpUGi2Ud3SU7l4SCoXD4n09bs7W6sOBd26u1m8sZI3Vmzj/xaUA5CUYB8L0pH5mQzukU5KkoJURCRaCs12wszokZlKj8xULht2NEg3Vx5scKFRJXOWbeGZ4g0AJCcag3uk1w/GMCIvk0Hd0+mQpNEVReKZu8fFoO2nck2PQrMdMzN6ZXWkV1ZHrijsAUT+kZTvOsCSoGu3ZGMlLy3exPQP1gPQITGBoT3Tg3OkWRTmZTKwe2eSExWkIvEgNTWVHTt2tPvHgx15nmZq6skNf6qrZ1vZ1bNhcHfW79xfH6JH/rvnUA0AKUkJDOuVUX/V7oi8TAZ060xiQvv9H0okXlVXV1NeXn7Sz5lsi1JTU8nPzyc5+eO38h3v6lmFpkKzUXV1ztod+z42PODSjZXsOxx5ykDH5ESG98o4OiBDfiZ9cxSkItL2KTSboNA8ObV1zprt+46OalReydJNVRyojgRppw6JDO919EKjEXmZFGR3IkFBKiJtSGj3aZrZk8CVwDZ3L2xkvgGPAJOA/cCN7v5hMO9BYHKw6L3u/kww/RLgISLPAt0brFNmZn2A3wJZQCLwQ3efHcv2xZvEBGNAt84M6NaZaz6RD0SCdFXF3iBEI2Pt/n7+Og7V1AGQnpLE8LwMRgbduiPzM+nTNa1dnysRkfYrpkeaZnYhkWB7uonQnAR8m0hongM84u7nmNlk4HvARCAFeAu42N2rzGwlMMXdl5vZN4Ex7n6jmT0B/N3dHzOzYcBsdy84Xn060oyNmto6SrftrR/RaPHGSpZvquJwbSRIM1KTgm7do1ft5nfpqCAVkVYhtCNNd59rZgXHWWQKkUB1YL6ZZZlZT2AY8Ja71wA1ZrYIuAL4X8CBjGD9TGDTkd01MV1aWFJiAkN7ZjC0ZwafO7s3AIdr6li5dc/Hhgj8n7dXU10b+aMtKy356CPUgguOemWmKkhFpFUJ+5aTPGBDg8/l
wbRFwF1m9jCQBkwAlgXLfB2YbWYHgCrgyGi9dwNzzOzbQCfg0phXL1HrkJRAYV7kwdxfCKYdqqnloy17jl5sVF7Jf721mpq6SJBmd+rwscenjczPontGioJUREITdmg29tvP3X2OmZ0NvAtUAO8BNcH8W4BJ7v6+md0KPEwkSL8APOXuPzOzc4HfmVmhu9d9bIdmU4GpAH369IlJoyQ6KUmJjMzPYmR+VqRzHjhYXcuKLXsioxoF3bvzSrdTGwRpbnpKgxCN/LdbxsndZyUicqrCDs1yoHeDz/kE3aruPg2YBmBmfwRKzSwXONPd3w+WfwZ4JXj/NSJduLj7e2aWCuQA2xru0N2fAJ6AyDnNGLRJTkNqciKjemcxqndW/bQDh2tZtrmqfnjAko2VvPnRNoIcpXtGytHzo0GQ5nROaWIPIiKnLuzQnAV8y8z+RORYo9LdN5tZIpDl7jvMbCQwEpgTrJNpZoPcfSVwGbA8mL4euAR4ysyGAqlEjlKljevYIZGzzujCWWd0qZ+271ANyzZXNRiQYTevr9jKkevaemWmBre+REY1GpGXSddOHUJqgYi0F7G+5WQ6MB7IMbNy4C4gGcDdHwdmE7lytozILSc3BasmA/OCc1dVwPXBRUGY2c3Ac2ZWB+wCvhqs833g12Z2C5GLgm70eL4JtZ3rlJLE2QVdObuga/20PQerWbqp6ujTXzZW8urSrfXz87t0ZGR+ZvBg78gtMJlpeqi3iERPgxvolpN2rfJANUs3Hr31ZUl5Jet37q+ff0Z2WhCika7dwrxMMlIVpCLxTA+hlriV2TGZ8wbkcN6AnPppu/cfpmRjFYs37mZJeSUL1+/mpcWb6+f3zenEiLxMzuydxbgBOQzq3llX7IoIoNCUOJSV1oFxA3MYN/BokO7cd/jos0jLKyleu5NZiyK3+nZLT2HcwBwuHJjL+QNyyE3XRUYi8UqhKQJ07dSBiwblctGg3Pppm3Yf4O3S7cwtreCvK7bx/IcbARjaM4MLB+ZwwcBcigq6kJqsB3mLxAud09Q5TYlCXZ2zdFMVc0srmFdawYJ1u6iudVKSEhjTtysXDsxl3MAchvRIV1euSBunp5w0QaEpp2rfoRo+WLOTuaUVvF26ndJte4HI4AsXDMip7/7tlq6BF0TaGl0IJNLMOqUkMWFINyYM6QbA5soDzCvdztul23lzZQXP/z3SlTukRzoXBF25Y/p2VVeuSBunI00daUozq6tzlm2uYl7pduaVVlC8dheHa+vokJTAOX27Mm5AJESH9EjXs0ZFWiF1zzZBoSktYf/hGt5fs5O3gxBduTXSlZvTOYVxA7K5YGAuFwzM0Ri6Iq2EumdFQpTWIYkJg7sxYXCkK3dL5UHeLosE6LzS7cxcGLm1ZXD3oCt3UC5jCrrSsYO6ckVaGx1p6khTQlRX5yzfcrQr929rd3G4po4OiQmc3bcLFwzMZdyAHIb1zFBXrkgLUfdsExSa0tocOFzLB2t3Mm9lBW+XbWfFlj1A5Nmi4wbm1J8P7ZGprlyRWFH3rEgb0bFD4scGWdhadZC3S7fXd+e+EHTlDuremXEDcrlgUA7n9O1KWgf9ryzSEnSkqSNNaSPq6pwVW/bwdlnkXOj7a3bWd+UWFXSpH+pPXbkip0fds01QaEpbdrC6lr+t3cm80u3MXVlR35XbtVMHzh+QE9wfmkPPzI4hVyrStig0m6DQlPZk256DvFO2nXkrtzOvbDsVew4BMKBbZy4IjkLP6aeuXJETUWg2QaEp7ZW789HWPcxbGRlw/oM1OzlUU0dyonHWGV3q7w0t7JWprlyRYyg0m6DQlHhxsLqW4rW76u8NXba5CoAuacmcPyCnfsD5XlnqyhVRaDZBoSnxqmLPoUhXbnB/6LagK7d/bqf6o9Cx/bLplKKuXIk/Cs0mKDRFIl25K7furT8KfX/NDg5WR7pyR/fpUj/gfGFeJonqypU4oNBsgkJT5B8drK7lw3W7mBschS7d
FOnKzUpL5vz+kStyxw3MIb9LWsiVisSGQrMJCk2RE9u+92hX7tul29lSdRCAfjmd6o9Cx/bPprO6cqWdUGg2QaEpcnLcnbJte+uPQt9fvZMD1bUkJTToyh2Uywh15UobptBsgkJT5PQcqqllwbpd9UehJZsqcYfMjsmcHzz2bNyAHHp3VVeutB2hhKaZPQlcCWxz98JG5hvwCDAJ2A/c6O4fBvMeBCYHi97r7s8E0y8BHgISgL3BOmXBvM8BdwMOLHL3L56oRoWmSPPasfcQ76zawdvBRUWbKyNduX2DrtxxA3I4t3826anJIVcq0rSwQvNCIsH2dBOhOQn4NpHQPAd4xN3PMbPJwPeAiUAK8BZwsbtXmdlKYIq7LzezbwJj3P1GMxsI/G+w3C4z6+bu205Uo0JTJHbcnVUV++qvyp2/egf7D9eSmGCM7pNVP+D8yLxMkhITwi5XpF4oTzlx97lmVnCcRaYQCVQH5ptZlpn1BIYBb7l7DVBjZouAK4iEogMZwfqZwKbg/c3Af7r7rmDfJwxMEYktM2NAt84M6NaZm87vy+GaOj5cf3SAhV+8vpKfv7aSjNQkzh+QUz/gvLpypTUL83K3PGBDg8/lwbRFwF1m9jCQBkwAlgXLfB2YbWYHgCpgbDB9EICZvQMkAne7+ysxb4GIRK1DUgJj+2Uztl82t34Sdu07zDurgrFySyt4uWQLAAXZaYwLrso9t382GerKlVYkzNBs7NI6d/c5ZnY28C5QAbwH1ATzbwEmufv7ZnYr8DCRIE0CBgLjgXxgnpkVuvvuf9ip2VRgKkCfPn2at0UiErUunTpw5cheXDmyF+7O6u376h++PePDjfx+/noSE4xRvbPqb205M19duRKumF49G3TPvtjEOc3/At509+nB54+A8e6++Zjl/gj8HvgbMN/d+wfT+wCvuPswM3s8mPdUMO914Ifu/rfj1adzmiKt0+GaOv6+fhdvl21nbul2Fpfvxh3SU5M4r3/kqtwLB+bSJ1tdudL8QjmnGYVZwLfM7E9ELgSqdPfNZpYIZLn7DjMbCYwE5gTrZJrZIHdfCVwGLA+mzwS+ADxlZjlEumtXt2RjRKT5dEhK4Jx+2ZzTL5vvXz6Y3fsP807ZDt4uq2Duyu28unQrAH26ptUfhZ7bP5vMjurKldiKWWia2XQi3aU5ZlYO3AUkA7j748BsIlfOlhG55eSmYNVkIt2rEDlveX1wURBmdjPwnJnVAbuArwbrvApcbmbLgFrgVnffEau2iUjLykrrwOSRPZk8sifuztod+5lXGgnQmX/fyB/eX0+CEXTlRgacH9U7S1250uw0uIG6Z0XatOraOhZu2M28lRX1Xbl1DukpSZzbP7v+SPSM7DSCP8ZFjksjAjVBoSnS/lTur+bdVdvrh/or33UAgN5dOzJuQC4XDcrhsmE9NMyfNEmh2QSFpkj75u6sC7py55Vu571VO9hzqIavnt+XH39qWNjlSSvVWi8EEhGJKTOjIKcTBTmd+PK5BVTX1vHjF0p46t01fHp0HoV5mWGXKG2MzpKLSNxITkzghxOH0rVTCrfNWEJtXfz2tMmpUWiKSFzJ7JjMnVcOZXF5Jb+fvy7scqSNUWiKSNy56sxeXDAwh4de/YitwUO1RaKh0BSRuGNm3DulkMO1ddzz4rITryASUGiKSFwqyOnEtyYM4KXFm3nzIz0YSaKj0BSRuPVPF/WjX24n7nyhhAOHa8MuR9oAhaaIxK2UpESmXT2CDTsP8Mu/loZdjrQBCk0RiWvn9s/mM6PzeWLuakq37gm7HGnlFJoiEvdumzSETilJ3D6jhDrduynHodAUkbiX3TmFH00cwgdrd/Lsh+VhlyOtmEJTRAT47Fm9ObugC/8+ezk79x0OuxxppRSaIiJAQoIx7ZoR7DlYw/2zl594BYlLCk0RkcCg7uncfGE/nl1QzvzVeo69/COFpohIA9+5eCD5XTpy+4wlHK6pC7scaWUUmiIiDXTs
kMi9UwpZVbGPJ+auCrscaWUUmiIix5gwpBuTRvTgP94oY92OfWGXI62IQlNEpBE/vnI4yYkJ3DGzBHfduykRCk0RkUb0yEzl+5cPYl7pdl5cvDnscqSVUGiKiDThhnMLGJGXyT0vLqPyQHXY5UgroNAUEWlCYoJx/zUj2LH3ED+b81HY5UgroNAUETmOEfmZ3HBuAb+bv46FG3aHXY6ELKahaWZPmtk2MytpYr6Z2aNmVmZmi81sdIN5D5pZSfC6rsH0S8zsQzNbaGZvm9mAY7Z5rZm5mRXFrmUiEk++f/kguqWncNvzS6ip1b2b8SzWR5pPAVccZ/5EYGDwmgo8BmBmk4HRwCjgHOBWM8sI1nkM+JK7jwL+CNxxZGNmlg58B3i/WVshInEtPTWZuz41nGWbq/jte+vCLkdCFNPQdPe5wM7jLDIFeNoj5gNZZtYTGAa85e417r4PWMTR8HXgSIBmApsabO9e4KfAwWZshogIEwt7MGFwLg/P+YjNlQfCLkdCEvY5zTxgQ4PP5cG0RcBEM0szsxxgAtA7WObrwGwzKwe+DDwAYGafAHq7+4stVbyIxA8z454phdS6c/espWGXIyEJOzStkWnu7nOA2cC7wHTgPaAmmH8LMMnd84HfAA+bWQLwc+D7J9yh2VQzKzaz4oqKiuZog4jEid5d0/jOJQN5delWXlu2NexyJARhh2Y5R48gAfIJulvdfZq7j3L3y4iEa6mZ5QJnuvuRc5bPAOcB6UAh8KaZrQXGArMauxjI3Z9w9yJ3L8rNzY1Vu0Sknbr5gn4M6t6Zu2YtZf/hmhOvIO1K2KE5C7ghuIp2LFDp7pvNLNHMsgHMbCQwEpgD7AIyzWxQsP5lwHJ3r3T3HHcvcPcCYD5wlbsXt3iLRKRdS05MYNo1I9i4+wCPvFYadjnSwpJiuXEzmw6MB3KCc5B3AckA7v44kS7YSUAZsB+4KVg1GZhnZgBVwPXuXhNs82bgOTOrIxKiX41lG0REjnV2QVeuK+rNf7+9hqs/kcfQnhknXknaBYvngYiLioq8uFgHoyJy8nbtO8wlD79FQXYaz/7zeSQkNHaJhrRFZrbA3Ru91z/s7lkRkTapS6cO3D5pKB+u382f/rbhxCtIu6DQFBE5RZ8encfYfl154OXlVOw5FHY50gIUmiIip8jMuO/qERyoruX+2cvDLkdagEJTROQ0DOjWmW9c1J8Zf9/IO2Xbwy5HYkyhKSJymr45YQBnZKdxx8wSDlbXhl2OxJBCU0TkNKUmJ3Lf1YWs2b6Px99aFXY5EkMKTRGRZnDBwFyuOrMXv/rrKlZX7A27HIkRhaaISDO548qhpCQncMfMEuL5Hvj2TKEpItJMuqWn8m9XDOHdVTuYuXBj2OVIDCg0RUSa0ZfG9GFU7yzue3E5lfurwy5HmplCU0SkGSUkGNOuKWT3gWoeeGVF2OVIM1Noiog0s+G9MrnpvAKmf7CeBet2hl2ONCOFpohIDNxy2SB6ZaZy+4wSqmvrwi5HmolCU0QkBjqlJHH3VcNZsWUPT769JuxypJkoNEVEYuTy4T24dGh3fvFaKeW79oddjjQDhaaISAz9ZMpwzOCuF5bq3s12QKEpIhJDeVkdueXSQby+YhuvLt0adjlymhSaIiIxduP5BQzpkc7ds5ay91BN2OXIaVBoiojEWHJiAvd/egRb9xzk539ZGXY5choUmiIiLWB0ny58cUwffvPOGko2VoZdjpwihaaISAv5t08OoWunDtw+Ywm1dbooqC1SaIqItJDMtGTuvHIYi8or+cP768IuR06BQlNEpAVddWYvxg3I4aFXPmJb1cGwy5GTFLPQNLMnzWybmZU0Md/M7FEzKzOzxWY2usG8B82sJHhd12D6JWb2oZktNLO3zWxAMP1fzGxZsJ3XzeyMWLVLROR0mBn3Xl3Iodo67nlxWdjlyEmK5ZHmU8AVx5k/ERgYvKYCjwGY2WRgNDAKOAe41cwygnUeA77k7qOA
PwJ3BNP/DhS5+0jgWeCnzdoSEZFm1DenE/9v/ABeXLyZt1ZWhF2OnISYhaa7zwWON7z/FOBpj5gPZJlZT2AY8Ja717j7PmARR8PXgSMBmglsCvb1V3c/MkbVfCC/eVsjItK8/nl8P/rlduLOmSUcrK4NuxyJUpjnNPOADQ0+lwfTFgETzSzNzHKACUDvYJmvA7PNrBz4MvBAI9v9GvByzKoWEWkGKUmJ3Hd1Iet37ueXb5SFXY5EKczQtEamubvPAWYD7wLTgfeAI0No3AJMcvd84DfAwx/boNn1QBHwUJM7NZtqZsVmVlxRoW4REQnPef1z+PQn8vivuaso27Yn7HIkCmGGZjlHjyAh0qV6pLt1mruPcvfLiIRrqZnlAme6+/vB8s8A5x1Z2cwuBW4HrnL3Q03t1N2fcPcidy/Kzc1t3haJiJyk2yYPJa1DErfNKNGA7m1AVKFpZt81s4zgitf/Ca5gvfw09z0LuCHY5lig0t03m1mimWUH+x0JjATmALuATDMbFKx/GbA8WO4TwH8RCcxtp1mXiEiLyemcwo8mDuGDNTt5dkF52OXICSRFudxX3f0RM/skkAvcRKR7dE5TK5jZdGA8kBOcg7wLSAZw98eJdMFOAsqA/cE2CZaZZ2YAVcD17l4TbPNm4DkzqyMSol8N1nkI6Az8X7Deene/Ksq2iYiE6nNFvXl2QTn3z17OJUO707VTh7BLkiZYNN0BZrbY3Uea2SPAm+4+w8z+7u6fiH2JsVNUVOTFxcVhlyEiwkdb9jD50Xl8enQeP732zLDLiWtmtsDdixqbF+05zQVmNofIkeGrZpYO1DVXgSIi8W5wj3S+fkE//re4nA/WHO9uPQlTtKH5NeCHwNnB/ZDJHO1OFRGRZvCdSwaQl9WR22Ys4XCNjktao2hD81zgI3ffHdzWcQegZ9uIiDSjtA5J3Hv1cMq27eXX81aHXY40ItrQfAzYb2ZnAv8GrAOejllVIiJx6uIh3ZlY2INHXy9l/Y79J15BWlS0oVnjkSuGpgCPuPsjQHrsyhIRiV93fWo4SQnGnS/o3s3WJtrQ3GNmPyIydN1LZpZIcPuIiIg0rx6ZqXz/8sG8tbKCl5ZsDrscaSDa0LwOOETkfs0tRMaIbRyxNKgAABimSURBVHKoOhEROT1fOa+AwrwM7vnzMqoOVoddjgSiCs0gKP9AZESeK4GD7q5zmiIiMZKYYNx/zQgq9h7iZ69+FHY5Eoh2GL3PAR8AnwU+B7xvZtfGsjARkXg3Mj+LG8aewdPz17Fow+6wyxGi7569ncg9ml9x9xuAMcCdsStLREQAvv/JweR2TuG2GUuoqdW9m2GLNjQTjhkIfcdJrCsiIqcoIzWZuz41nKWbqnj6vXVhlxP3og2+V8zsVTO70cxuBF4iMuC6iIjE2KQRPbhoUC4/m/MRmysPhF1OXIv2QqBbgSeIPKbrTOAJd/9BLAsTEZEIM+PeKYXU1Dn3/HlZ2OXEtWgfDYa7Pwc8F8NaRESkCX2y0/jOJQN56NWPeGPFVi4e0j3skuLScY80zWyPmVU18tpjZlUtVaSIiMDNF/RjYLfO3DlzKfsP14RdTlw6bmi6e7q7ZzTySnf3jJYqUkREoENSAtOuGcHG3Qd45PXSsMuJS7oCVkSkDRnTtyufK8rnf+atYcUWdfi1NIWmiEgb86OJQ0lPTeL2GSXU1WlA95ak0BQRaWO6dOrAbZOGsmDdLp4p3hB2OXFFoSki0gZde1Y+5/TtygMvr2D73kNhlxM3FJoiIm2QmTHtmkL2H67h/peWh11O3FBoioi0UQO6pfNPF/bn+b9v5N2y7WGXExcUmiIibdi3Lh7AGdlp3DGzhEM1tWGX0+4pNEVE2rDU5ETumVLI6u37ePzN1WGX0+7FNDTN7Ekz22ZmJU3MNzN71MzKzGyxmY1uMO9BMysJXtc1mH6JmX1oZgvN7G0zGxBMTzGzZ4JtvW9mBbFsm4hIa3HRoFyuHNmT/3yz
jDXb94VdTrsW6yPNp4ArjjN/IjAweE0FHgMws8nAaGAUcA5wq5kdGYHoMeBL7j4K+CNwRzD9a8Audx8A/Bx4sFlbIiLSiv34ymGkJCZw58wS3HXvZqzENDTdfS6w8ziLTAGe9oj5QJaZ9QSGAW+5e4277wMWcTR8HTgSoJnApgbb+m3w/lngEjOz5muNiEjr1S0jlX+7YjBvl21n1qJNJ15BTknY5zTzgIZ35pYH0xYBE80szcxygAlA72CZrwOzzawc+DLwwLHbcvcaoBLIPnaHZjbVzIrNrLiioiIGTRIRCccXzzmDM3tnce+Ly6jcXx12Oe1S2KHZ2JGgu/scIg+5fheYDrwHHBnS/xZgkrvnA78BHj7ethrZ+BPuXuTuRbm5uadbv4hIq5GYYEy7upCd+w7z4Ksrwi6nXQo7NMs5egQJkE/Q3eru09x9lLtfRiQQS80sFzjT3d8Pln8GOO/YbZlZEpGu2+N1DYuItDuFeZncdH5f/vj+ehas2xV2Oe1O2KE5C7ghuIp2LFDp7pvNLNHMsgHMbCQwEpgD7AIyzWxQsP5lwPIG2/pK8P5a4A3X2XARiUO3XDaInpmp3D5jCdW1dWGX064kxXLjZjYdGA/kBOcg7wKSAdz9cSJdsJOAMmA/cFOwajIwL7iOpwq4PjhPiZndDDxnZnVEQvSrwTr/A/zOzMqIHGF+PpZtExFprTqnJHHXp4bzz79fwG/eWcPUC/uHXVK7YfF8MFZUVOTFxcVhlyEi0uzcnZufLuadsh289v2LyMvqGHZJbYaZLXD3osbmhd09KyIiMWBm3H3VcADuemFpyNW0HwpNEZF2Kr9LGt+7dCCvLd/Kq0u3hF1Ou6DQFBFpx746ri9DeqRz96yl7DtUc+IV5LgUmiIi7VhyYgLTrhnBlqqD/PwvK8Mup81TaIqItHNnndGFL4zpw2/eXcvSTZVhl9OmKTRFROLADz45hC5pydw2o4Tauvi9a+J0KTRFROJAZloyd0wexqINu/njB+vDLqfNUmiKiMSJKaN6cf6AbH76ygq27TkYdjltkkJTRCROmBn3TinkUHUd9764/MQryD9QaIqIxJF+uZ355oT+/HnRJuau1OMRT5ZCU0QkznxjfH/65XTizhdKOFhdG3Y5bYpCU0QkzqQkJXLf1YWs27Gf//xrWdjltCkKTRGROHTegByu+UQej7+1irJte8Mup81QaIqIxKnbJw+lY3Iit89YQjw/8epkKDRFROJUTucUfjhxKO+v2clzH24Mu5w2QaEpIhLHPn92b846owv3z17Orn2Hwy6n1VNoiojEsYQEY9o1hVQdqOaBl1eEXU6rp9AUEYlzQ3pk8LUL+vJM8QY+WLMz7HJaNYWmiIjw3UsGkpfVkTtmLuFwTV3Y5bRaCk0RESGtQxL3TBnOyq17+e+3V4ddTqul0BQREQAuGdqdK4b34NHXS9mwc3/Y5bRKCk0REal311XDSDTjzhdKdO9mIxSaIiJSr2dmR/7l8sG8+VEFL5dsCbucVidmoWlmT5rZNjMraWK+mdmjZlZmZovNbHSDeQ+aWUnwuq7B9HlmtjB4bTKzmcH0TDP7s5ktMrOlZnZTrNolItLefeXcMxjeK4Of/Hkpew5Wh11OqxLLI82ngCuOM38iMDB4TQUeAzCzycBoYBRwDnCrmWUAuPsF7j7K3UcB7wHPB9v6f8Aydz8TGA/8zMw6NHeDRETiQVJiAtOuGcG2PYf42ZyVYZfTqsQsNN19LnC8G36mAE97xHwgy8x6AsOAt9y9xt33AYs4JnzNLB24GJh5ZHdAupkZ0DnYb02zNkhEJI6M6p3Fl8eewdPvrWVx+e6wy2k1wjynmQdsaPC5PJi2CJhoZmlmlgNMAHofs+41wOvuXhV8/iUwFNgELAG+6+660UhE5DT86ycHk9M5hdtnlFBbp4uCINzQtEamubvPAWYD7wLTiXTDHnvU+IVg3hGfBBYCvYh06/7ySJfuP+zUbKqZ
FZtZcUWFnlouItKUjNRkfvypYSzZWMnT760Nu5xWIczQLOfjR5D5RI4UcfdpwbnLy4iEa+mRhcwsGxgDvNRg3ZuA54Ou3jJgDTCksZ26+xPuXuTuRbm5uc3aIBGR9mbyiJ5cOCiXn81ZyZbKg2GXE7owQ3MWcENwFe1YoNLdN5tZYhCMmNlIYCQwp8F6nwVedPeG39564JJgne7AYEBDWoiInCYz474phVTX1nHPi0vDLid0SbHasJlNJ3Ila46ZlQN3AckA7v44kS7YSUAZsJ/I0SLBMvMi1/RQBVzv7g27Zz8PPHDM7u4FnjKzJUSOTH/g7ttj0CwRkbjTJzuN71wykIde/Yi/rtjGhCHdwi4pNBbPIz4UFRV5cXFx2GWIiLR6h2vqmPToPA5W1/KXWy6iY4fEsEuKGTNb4O5Fjc3TiEAiInJCHZISmHZ1IeW7DvDoG6UnXqGdUmiKiEhUzumXzWfPyufXc1fz0ZY9YZcTCoWmiIhE7UeThpKemsTtM5ZQF4f3bio0RUQkal07deBHk4ZSvG4X/7dgw4lXaGcUmiIiclI+e1Y+Y/p25d9fXsGOvYfCLqdFKTRFROSkmBn3X1PIvkM1TJu9POxyWpRCU0RETtqAbulMvbAfz3+4kXdXxc9t8QpNERE5Jd++eCB9uqZxx8wSDtXUhl1Oi1BoiojIKUlNTuSeKcNZXbGP/3orPkYuVWiKiMgpGz+4G5NH9uSXfy1j7fZ9YZcTcwpNERE5LXddOYyUxATufKGE9j40q0JTREROS7eMVG69YjDzSrcza9GmsMuJKYWmiIicti+dcwYj8zO598XlVB6oDrucmFFoiojIaUtMMO6/ZgQ79x3ioVdXhF1OzCg0RUSkWRTmZXLjeX35w/vr+fv6XWGXExMKTRERaTb/cvkguqenctuMEmpq68Iup9kpNEVEpNl0Tkni7quGsXxzFU+9uzbscpqdQlNERJrVJ4f34JIh3Xj4LyvZuPtA2OU0K4WmiIg0KzPjJ1OG4w53z1oadjnNSqEpIiLNLr9LGt+9dCB/WbaVOUu3hF1Os1FoiohITHxtXF+G9Ejn7llL2XeoJuxymoVCU0REYiI5MYFp1xSyqfIgv3htZdjlNAuFpoiIxMxZZ3TlC2N68+Q7a1m2qSrsck5bzELTzJ40s21mVtLEfDOzR82szMwWm9noBvMeNLOS4HVdg+nzzGxh8NpkZjMbzBsfTF9qZm/Fql0iInJyfnDFELI6JnP7zCXU1bXtAd1jeaT5FHDFceZPBAYGr6nAYwBmNhkYDYwCzgFuNbMMAHe/wN1Hufso4D3g+WCdLOBXwFXuPhz4bCwaJCIiJy8rrQN3XDmUv6/fzR8/WB92OaclZqHp7nOBncdZZArwtEfMB7LMrCcwDHjL3WvcfR+wiGPC18zSgYuBI0eaXwSed/f1wb63NW9rRETkdFw9Ko/z+mfz4CsrqNhzKOxyTlmY5zTzgA0NPpcH0xYBE80szcxygAlA72PWvQZ43d2PdJAPArqY2ZtmtsDMbohx7SIichLMjHuvLuRQdR33vbQs7HJOWZihaY1Mc3efA8wG3gWmE+mGPfZa5S8E845IAs4CJgOfBO40s0GN7tRsqpkVm1lxRUXFaTZBRESi1T+3M98Y358XFm5iXmnb/P0bZmiW8/EjyHxgE4C7TwvOXV5GJFxLjyxkZtnAGOClY7b1irvvc/ftwFzgzMZ26u5PuHuRuxfl5uY2a4NEROT4vjG+P31zOnHnzBIOVteGXc5JCzM0ZwE3BFfRjgUq3X2zmSUGwYiZjQRGAnMarPdZ4EV3P9hg2gvABWaWZGZpRC4gWt4yzRARkWilJidy75RC1u7Yz6/eXBV2OSctKVYbNrPpwHggx8zKgbuAZAB3f5xIF+wkoAzYD9wUrJoMzDMzgCrgendv2D37eeCBhvty9+Vm9gqwGKgD/tvdG73VRUREwjVuYA5Xj+rF42+uYsqoXvTP7Rx2SVEz97Z9
z8zpKCoq8uLi4rDLEBGJOxV7DnHJz95kWK8Mpt88luBAqVUwswXuXtTYPI0IJCIiLS43PYUfTBzC/NU7mfH3jWGXEzWFpoiIhOILZ/dhdJ8spr20nN37D4ddTlQUmiIiEoqEBGPaNSPYfaCaB15eEXY5UVFoiohIaIb2zOBr4/ryp79toHjt8QaRax0UmiIiEqrvXTqQvKyO3D6jhOraurDLOS6FpoiIhCqtQxI/uWo4H23dw3/PWxN2Ocel0BQRkdBdOqw7lw/rziOvr2TDzv1hl9MkhaaIiLQKd181nEQz7pq1lNY6hoBCU0REWoVeWR255bJBvLFiG6+UbAm7nEYpNEVEpNW48bwChvXM4O4/L2XPweqwy/kHCk0REWk1khITuP/TI9i25xAP/2Vl2OX8A4WmiIi0KqN6Z3H9OWfw23fXUrKxMuxyPkahKSIirc6tVwwmu3MKt81YQm1d67koSKEpIiKtTkZqMndeOYzF5ZX8fv66sMupp9AUEZFW6VMje3LBwBweevUjtlYdDLscQKEpIiKtlJlx39WFHK6t454/Lwu7HEChKSIirdgZ2Z349oQBvLRkM3/9aFvY5Sg0RUSkdZt6UT/653bixy+UcOBwbai1KDRFRKRVS0lKZNo1I9iw8wD/8UZpqLUoNEVEpNUb2y+ba8/K54m5q1m5dU9odSg0RUSkTbht0lA6pyZxx4wS6kK6d1OhKSIibULXTh24beJQPli7k2cXlIdSg0JTRETajGvPymdMQVfuf3k5O/YeavH9KzRFRKTNSEgw7rumkL0Ha/j3l1e0/P5jtWEze9LMtplZSRPzzcweNbMyM1tsZqMbzHvQzEqC13UNps8zs4XBa5OZzTxmm2ebWa2ZXRurdomISLgGdU9n6oX9eHZBOfNX72jRfcfySPMp4IrjzJ8IDAxeU4HHAMxsMjAaGAWcA9xqZhkA7n6Bu49y91HAe8DzRzZmZonAg8Crzd4SERFpVb598UB6d+3I7TOWcKim5e7djFlouvtcYOdxFpkCPO0R84EsM+sJDAPecvcad98HLOKY8DWzdOBioOGR5reB54Dwh4wQEZGY6tghkXumFLKqYh+/nru6xfYb5jnNPGBDg8/lwbRFwEQzSzOzHGAC0PuYda8BXnf3KgAzywumPR7zqkVEpFWYMLgbk0f05D/eKGPdjn0tss8wQ9MamebuPgeYDbwLTCfSDVtzzHJfCOYd8QvgB+5+wmN0M5tqZsVmVlxRUXFqlYuISKvw408NIzkxgTtmluAe+3s3wwzNcj5+BJkPbAJw92nBucvLiIRr/bhJZpYNjAFearBuEfAnM1sLXAv8ysyubmyn7v6Euxe5e1Fubm5ztkdERFpY94xU/vXyQcwr3c6LizfHfH9hhuYs4IbgKtqxQKW7bzazxCAYMbORwEhgToP1Pgu86O71D1dz977uXuDuBcCzwDfd/WNX1oqISPv05XMLGJmfyc//sjLmIwUlxWrDZjYdGA/kmFk5cBeQDODujxPpgp0ElAH7gZuCVZOBeWYGUAVc7+4Nu2c/DzwQq7pFRKRtSUwwfn7dKDqnJJGQ0NiZv+ZjLdEH3FoVFRV5cXFx2GWIiEgrYmYL3L2osXkaEUhERCRKCk0REZEoKTRFRESipNAUERGJkkJTREQkSgpNERGRKCk0RUREoqTQFBERiZJCU0REJEoKTRERkSjF9TB6ZlYBrGuGTeUA25thO22B2tp+xVN71db2qbnaeoa7N/oYrLgOzeZiZsVNjVPY3qit7Vc8tVdtbZ9aoq3qnhUREYmSQlNERCRKCs3m8UTYBbQgtbX9iqf2qq3tU8zbqnOaIiIiUdKRpoiISJQUmifBzK4ws4/MrMzMftjI/BQzeyaY/76ZFbR8lc0jirbeaGYVZrYweH09jDqbg5k9aWbbzKykiflmZo8GP4vFZja6pWtsLlG0dbyZVTb4Xn/c0jU2BzPrbWZ/NbPlZrbUzL7byDLt6XuNpr3t
5btNNbMPzGxR0NafNLJM7H4Xu7teUbyARGAV0A/oACwChh2zzDeBx4P3nweeCbvuGLb1RuCXYdfaTO29EBgNlDQxfxLwMmDAWOD9sGuOYVvHAy+GXWcztLMnMDp4nw6sbOTfcHv6XqNpb3v5bg3oHLxPBt4Hxh6zTMx+F+tIM3pjgDJ3X+3uh4E/AVOOWWYK8Nvg/bPAJWZmLVhjc4mmre2Gu88Fdh5nkSnA0x4xH8gys54tU13ziqKt7YK7b3b3D4P3e4DlQN4xi7Wn7zWa9rYLwfe1N/iYHLyOvTgnZr+LFZrRywM2NPhczj/+o6xfxt1rgEogu0Wqa17RtBXgM0G31rNm1rtlSgtFtD+P9uLcoOvrZTMbHnYxpyvomvsEkSOShtrl93qc9kI7+W7NLNHMFgLbgL+4e5PfbXP/LlZoRq+xv1KO/esmmmXagmja8WegwN1HAq9x9K+69qi9fK/R+JDIEGJnAv8BzAy5ntNiZp2B54DvuXvVsbMbWaVNf68naG+7+W7dvdbdRwH5wBgzKzxmkZh9twrN6JUDDY+m8oFNTS1jZklAJm2zK+yEbXX3He5+KPj4a+CsFqotDNF89+2Cu1cd6fpy99lAspnlhFzWKTGzZCIB8gd3f76RRdrV93qi9ran7/YId98NvAlcccysmP0uVmhG72/AQDPra2YdiJxcnnXMMrOArwTvrwXe8OBMdBtzwrYec+7nKiLnUNqrWcANwdWWY4FKd98cdlGxYGY9jpz7MbMxRH5H7Ai3qpMXtOF/gOXu/nATi7Wb7zWa9raj7zbXzLKC9x2BS4EVxywWs9/FSc2xkXjg7jVm9i3gVSJXlz7p7kvN7B6g2N1nEflH+zszKyPyV83nw6v41EXZ1u+Y2VVADZG23hhawafJzKYTubIwx8zKgbuIXFyAuz8OzCZypWUZsB+4KZxKT18Ubb0W+IaZ1QAHgM+30T/8zge+DCwJzn0B3Ab0gfb3vRJde9vLd9sT+K2ZJRIJ/v919xdb6nexRgQSERGJkrpnRUREoqTQFBERiZJCU0REJEoKTRERkSgpNEVERKKk0BRpx8zs3eC/BWb2xbDrEWnrFJoi7Zi7nxe8LQBOKjSD++BEpAGFpkg7ZmZHngbxAHBB8BzFW4IBrx8ys78Fg+7/U7D8+OC5jH8kcqN8JzN7KRjku8TMrgutMSKtgEYEEokPPwT+1d2vBDCzqUSGjTvbzFKAd8xsTrDsGKDQ3deY2WeATe4+OVgvM4ziRVoLHWmKxKfLiYy7upDII6SygYHBvA/cfU3wfglwqZk9aGYXuHtlCLWKtBoKTZH4ZMC33X1U8Orr7keONPcdWcjdVxJ5gs0S4N/N7Mch1CrSaig0ReLDHiC9wedXiQzenQxgZoPMrNOxK5lZL2C/u/8e+P+A0S1RrEhrpXOaIvFhMVBjZouAp4BHiFxR+2HwuKgK4OpG1hsBPGRmdUA18I0WqVakldJTTkRERKKk7lkREZEoKTRFRESipNAUERGJkkJTREQkSgpNERGRKCk0RUREoqTQFBERiZJCU0REJEr/Pw2bpz7O6sAjAAAAAElFTkSuQmCC\n",
      "text/plain": [
       "<Figure size 504x360 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAdkAAAFNCAYAAABMsBVXAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+17YcXAAAgAElEQVR4nO3dd3xVVbr/8c+TQkJJ0dADCAjSpAgBUcGxd8U6wNhgdOw6/hzniqNzx3Gce/WOM86IFQugotgVxTI6dlRIKCLdACqhSCihCAkkPL8/zgZDTEKAnOzk5Pt+vfLynL3XXutZ52CerLX32tvcHREREal+cWEHICIiEquUZEVERKJESVZERCRKlGRFRESiRElWREQkSpRkRUREokRJVqQOMrNxZnZXFOr91sxOqO5694WZuZl1CjsOkf2hJCsidZqZjTCzz8KOQ6Q8SrIiIiJRoiQrUgeY2WFmNsPMNpnZ80Bymf1nmNksMysws8/NrFewfZSZvVSm7L/M7P4qtJlkZv80sxXBzz/NLCnY19TM3gzaW2dmn5pZXLDvFjNbHsS60MyOr6D+cWb2iJm9F5T92MwOqqBsmpk9ZWb5Zvadmd1uZnFm1g14BDjCzDabWUEVPk6RGqMkK1LLmVkD4DXgaeBA4EXgvFL7+wJPAlcCGcCjwKQgIT4HnGZmqUHZeOCXwLNVaPo2YCDQB+gNDABuD/b9DsgDmgEtgD8AbmZdgOuA/u6eApwMfFtJGxcCfwGaArOACRWUGw2kAR2BXwCXACPdfT5wFfCFuzdx9/Qq9EukxijJViMzu8DM5prZDjPLqqTcKcFf+LlmNqrU9uOC0cocMxtvZgnB9jQze8PMvgrqH1nqmHZm9m8zm29m88ysfbB9QtDGHDN70swSo9dzibKBQCLwT3ff7u4vAdml9v8GeNTdp7p7ibuPB4qAge7+HTADODsoexywxd2/rEK7FwJ3uvtqd88H/gxcHOzbDrQCDgpi+tQjN0IvAZKA7maW6O7fuvviStqY7O6fuHsRkaR+hJm1LV0g+MNgKHCru29y92+Bv5eKRaTWUpLdR2Z2jJmNK7N5DnAu8Eklx8UDDwKnAt2B4WbWPZhqGw8Mc/dDge+AS4PDrgXmuXtv4Bjg78HoBuAp4G/u3o3ISGN1sH0C0BXoCTQELt/33krIWgPLffeneXxX6vVBwO+CqduCYMq0bXAcREatw4PXv6Jqo9id7ZZu57tSdf4NyAX+bWZLdv6x6O65wI3AHcBqM5toZq2p2LKdL9x9M7CuVBs7NQUalBNLZhX7IRIaJdlq5O7z3X3hHooNAHLdfYm7bwMmAkOITPMVufuioNx7/DQl6ECKmRnQhMgvomIz6w4kuPt7Qfub3X1L8PotDwDTgDbV11OpYSuBzOD736ldqdfLgL+6e3qpn0bu/lyw/0XgGDNrA5xD1ZPsCiIJvHSbKwCCEeXv3L0jcCZw085zr+7+rLsPCo514J5K2tg1ajWzJkSmw1eUKbOGyMi5bCzLg9d6lJjUWkqyNS+TUn+9EzmvlUnkF0liqWnm8/npF9ADQDciv3y+Bn7r7juAQ4ACM3vFzGaa2d+CkfIuwTTxxcA70eqQRN0XQDFwg5klmNm5RP5Y2+kx4CozO9wiGpvZ6WaWAhBM9X4EjAWWBucxq+I54HYza2ZmTYH/Bp6BXRdadQoS/0Yi08QlZtYlOO2RBBQCW4N9FTnNzAYFMzN/Aaa6e+n/P3D3EuAF4K9mlhJcHHXTzliAH4A2pWZ3RGoNJdm9ZGZTzWwW8DhwlkWu6JxlZidXtYpytu0ccQ4D7jOzacAmIr9YIXLxyCwi02h9gAeCC1kSgMHAzUB/IheFjChT90PAJ+7+aVX7KLVLMONxLpHvdj2R85OvlNqfQ+S87APB/lx+/u/gWeAEqj6KBbgLyAFmE/njbkawDaAz8D6wmcgfAQ+5+0dEzsfe
TeSPxlVAcyIXRVXkWeBPRGZn+hE5D1ye64EfgSXAZ8FxTwb7PgDmAqvMbM1e9E8k6kwPbd83ZnYMMMLdR5Sz7yPg5uCXX9l9RwB3uPvJwftbAdz9f8uUOwm43N1/aWaTgbt3Jkoz+wAYReSPpLvd/Zhg+8VELna5Nnj/J+Aw4Nxg5CtSawTXNOS5++17KitSV2kkW/Oygc5m1iGY3hoGTAIws+bBf5OAW4is/wP4Hjg+2NcC6ELkL/ps4AAzaxaUOw6YF5S7nMgIeLgSrIhIOJRkq5GZnWNmecARwGQzezfY3trM3gJw92Ii6wjfBeYDL7j73KCK35vZfCLTc2+4+wfB9r8AR5rZ18B/gFvcfU1wrupm4D/BPiNyfg4iCboF8EUwnf3f0e29iIiUpeliERGRKNFIVkREJEqUZEVERKIkIewA6pKmTZt6+/btww5DRERqkenTp69x92bl7VOS3Qvt27cnJ+dnq3JERKQeM7PvKtqn6WIREZEoUZIVERGJEiVZERGRKNE5WRERqXbbt28nLy+PwsLCsEOpNsnJybRp04bExKo/nltJVkREql1eXh4pKSm0b9+e3Z/SWDe5O2vXriUvL48OHTpU+ThNF4uISLUrLCwkIyMjJhIsgJmRkZGx1yNzJVkREYmKWEmwO+1Lf5RkRUQk5t1xxx3ce++9Nd6ukqyIiEiUKMnWsJxv1/HxovywwxARiXl//etf6dKlCyeccAILFy4EYPHixZxyyin069ePwYMHs2DBAjZs2ED79u3ZsSPy6O0tW7bQtm1btm/fvt8xRDXJmtkpZrbQzHLNbFQ5+5PM7Plg/1Qza19q363B9oVmdvKe6gwegj7VzL4J6mxQWRtmlmhm483sazObb2a3Ru+T+MnoD3IZMXYa/3hvESU79JhBEZFomD59OhMnTmTmzJm88sorZGdnA3DFFVcwevRopk+fzr333ss111xDWloavXv35uOPPwbgjTfe4OSTT96rpToVidoSHjOLBx4ETgTygGwzm+Tu80oVuwxY7+6dzGwYcA8w1My6A8OAHkBr4H0zOyQ4pqI67wHuc/eJZvZIUPfDFbUBXAAkuXtPM2sEzDOz59z922h9JgCPXNSP21+bw/3/+YaZ36/nn0P7kNEkKZpNioiE6s9vzGXeio3VWmf31qn86cweFe7/9NNPOeecc2jUqBEAZ511FoWFhXz++edccMEFu8oVFRUBMHToUJ5//nmOPfZYJk6cyDXXXFMtcUZzJDsAyHX3Je6+DZgIDClTZggwPnj9EnC8RS7fGgJMdPcid18K5Ab1lVtncMxxQR0EdZ69hzYcaGxmCUBDYBtQvf8KytGwQTz3XtCLe87rydSl6zhj9GdM/259tJsVEal3yl4NvGPHDtLT05k1a9aun/nz5wORJPz222+zbt06pk+fznHHHVctMUTzZhSZwLJS7/OAwysq4+7FZrYByAi2f1nm2MzgdXl1ZgAF7l5cTvmK2niJSAJeCTQC/p+7r9unnu4lM2No/3b0aJ3GNRNmMPTRL/jDad0YeVRsLNoWESmtshFntBx99NGMGDGCUaNGUVxczBtvvMGVV15Jhw4dePHFF7ngggtwd2bPnk3v3r1p0qQJAwYM4Le//S1nnHEG8fHx1RJHNEey5WWLsichKypTXdsra2MAUEJkOroD8Dsz61i2oJldYWY5ZpaTn1+9FywdmpnGG9cP4tiuzbnzzXlc9+xMNhXu/4l2EZH6rm/fvgwdOpQ+ffpw3nnnMXjwYAAmTJjAE088Qe/evenRowevv/76rmOGDh3KM888w9ChQ6stjmiOZPOAtqXetwFWVFAmL5i2TQPW7eHY8ravAdLNLCEYzZYuX1EbvwLecfftwGozmwJkAUtKB+juY4AxAFlZWdV+pVJaw0TGXNyPMZ8s4f/eXcj8lRt56KK+dG2ZWt1NiYjUK7fddhu33Xbbz7a/88475ZY///zzca/eX/PRHMlmA52Dq34bELmQ
aVKZMpOAS4PX5wMfeKSHk4BhwZXBHYDOwLSK6gyO+TCog6DO1/fQxvfAcRbRGBgILKjG/leZmXHlLw7m2csPZ1NRMWc/OIVXZuSFEYqIiFSjqCXZYER5HfAuMB94wd3nmtmdZnZWUOwJIMPMcoGbgFHBsXOBF4B5wDvAte5eUlGdQV23ADcFdWUEdVfYBpGrlJsAc4gk77HuPjsKH0WVHd4xg8k3DKJP23RueuErbn3lawq3l4QZkoiI7Aer7qFxLMvKyvKcnJyot1NcsoO/v7eIhz9azKGZqTx8YT/aHtgo6u2KiFSX+fPn061bt7DDqHbl9cvMprt7VnnldcenWighPo5bTunK45dk8f3aLZx+/6e8P++HsMMSEdkrsTaI25f+KMnWYid0b8HkGwbTLqMRlz+Vw91vL6C4ZEfYYYmI7FFycjJr166NmUS783myycnJe3WcHtpey7U9sBEvXXUkd745j0c+XszM79cz+leH0Txl775oEZGa1KZNG/Ly8qjupY9hSk5Opk2bNnt1jM7J7oWaOidbkVdm5PGHV78mJTmR0cMPY2DHjNBiERGRCJ2TjRHn9m3D69cOIiUpgQsfn8ojHy+OmakYEZFYpCRbx3RpmcKk6wdxyqEtufvtBfzmqels2Kq7RImI1EZKsnVQk6QEHhh+GH86szsfLVzNGaM/Zc7yDWGHJSIiZSjJ1lFmxsijOvD8lUdQXOKc+/DnPDfte00fi4jUIkqydVy/gw5g8g2DObzDgdz6ytfc/OJstm7TXaJERGoDJdkYcGDjBowbOYAbT+jMKzPzOOehKSzJ3xx2WCIi9Z6SbIyIjzNuPOEQxo8cwA8bCznrgSm89fXKsMMSEanXlGRjzNGHNGPyDYPp3KIJ10yYwZ1vzGNbse4SJSISBiXZGNQ6vSHPX3EEI49qz5NTljJszBes3LA17LBEROodJdkY1SAhjj+d2YMHfnUYC1dt4vT7P+PTb2Ln9mYiInWBkmyMO6NXayZdP4imTRpwyZPT+Nf737Bjh5b5iIjUBCXZeuDgZk147dqjOKdPJve9v4gR47JZ9+O2sMMSEYl5SrL1RKMGCfz9l735n3N68uXitZxx/6fM/H592GGJiMQ0Jdl6xMz41eHtePnqI4mLM3756BeM//xb3SVKRCRKlGTroZ5t0ph8/WCO7tyMP02ayw0TZ7G5qDjssEREYo6SbD2V1iiRxy7J4r9O6cLk2SsY8sBnLPphU9hhiYjEFCXZeiwuzrjmmE5MuHwgG7YWM+SBKbw2c3nYYYmIxAwlWeGIgzN464ZB9GyTxo3Pz+L2176mqFgPGRAR2V9KsgJA89Rknr38cK78RUee+fJ7LnjkC5at2xJ2WCIidZqSrOySEB/Hrad2Y8zF/Vi65kfOGP0ZHyz4IeywRETqLCVZ+ZmTerTkzesHkZnekF+Py+Fv7y6gRHeJEhHZa0qyUq6DMhrzyjVHMnxAWx78cDEXPzGV/E1FYYclIlKnKMlKhZIT4/nfc3tx7wW9mfH9ek6//1OmLV0XdlgiInWGkqzs0fn92vDqNUfROCmB4Y99yZhPFusuUSIiVaAkK1XSrVUqr193FCd1b8H/vLWAK5+ezsbC7WGHJSJSqynJSpWlJify0IV9+eMZ3flgwWrOHP0Zc1dsCDssEZFaS0lW9oqZcdmgDjx/5UCKtu/g3Ic+54XsZWGHJSJSKynJyj7pd9CBvHnDIPq3P5D/enk2v3/xK7Zu012iRERKU5KVfda0SRLjfz2AG47rxIvT8zjnoSl8u+bHsMMSEak1lGRlv8THGTed1IWxI/uzamMhZ47+jHfmrAo7LBGRWkFJVqrFsV2aM/mGwXRs3oSrnpnOXyfPY3vJjrDDEhEJlZKsVJvM9Ia8cOVALj3iIB77dCnDx3zJqg2FYYclIhIaJVmpVkkJ8fx5yKHcP/ww5q3cyBmjP2VK7pqwwxIRCYWSrETFWb1bM+m6o0hv1ICLn5jKAx98ww49
ZEBE6hklWYmaTs1TeP3aozizd2vu/fciLhufTcGWbWGHJSJSY6KaZM3sFDNbaGa5ZjaqnP1JZvZ8sH+qmbUvte/WYPtCMzt5T3WaWYegjm+COhtU1oaZXWhms0r97DCzPtH7NOqnxkkJ/HNoH/5y9qFMyV3L6fd/xlfLCsIOS0SkRkQtyZpZPPAgcCrQHRhuZt3LFLsMWO/unYD7gHuCY7sDw4AewCnAQ2YWv4c67wHuc/fOwPqg7grbcPcJ7t7H3fsAFwPfuvus6v4cJHKXqIsHHsSLVx0BwAWPfMHTX3yrhwyISMyL5kh2AJDr7kvcfRswERhSpswQYHzw+iXgeDOzYPtEdy9y96VAblBfuXUGxxwX1EFQ59l7aKO04cBz+91jqVTvtum8ef0gjuqUwR9fn8uNz8/ix6LisMMSEYmaaCbZTKD0TW3zgm3llnH3YmADkFHJsRVtzwAKgjrKtlVRG6UNRUm2RhzQuAFPXNqf35/chTe+WsHZD04hd/WmsMMSEYmKaCbZsqNFgLLzgxWVqa7te4zDzA4Htrj7nHLKYWZXmFmOmeXk5+eXV0T2Ulycce2xnXjmssNZv2UbZz0whUlfrQg7LBGRahfNJJsHtC31vg1Q9jfprjJmlgCkAesqObai7WuA9KCOsm1V1MZOw6hkFOvuY9w9y92zmjVrVkl3ZW8d2akpb14/mO6tUrnhuZn89+tzKCrWQwZEJHZEM8lmA52Dq34bEElmk8qUmQRcGrw+H/jAI1fDTAKGBVcGdwA6A9MqqjM45sOgDoI6X99DG5hZHHABkXO7EoKWack8d8VAfjO4A0998R2/fPRL8tZvCTssEZFqEbUkG5z/vA54F5gPvODuc83sTjM7Kyj2BJBhZrnATcCo4Ni5wAvAPOAd4Fp3L6mozqCuW4CbgroygrorbCNwNJDn7kuq/xOQqkqMj+O207vzyEV9WbJ6M2eM/oyPFq4OOywRkf1mWkZRdVlZWZ6TkxN2GDHt2zU/cvWEGSxYtZHrj+3Eb084hPi48k6ri4jUDmY23d2zytunOz5JrdK+aWNeveZILujXhvs/yOWSJ6eyZnNR2GGJiOwTJVmpdZIT4/m/83vzf+f1Iufb9Zxx/2dM/27dng8UEalllGSl1vpl/7a8cs2RJCXGMfTRL3n80yW6S5SI1ClKslKr9WidxhvXD+L4bs25a/J8rpkwg02F28MOS0SkSpRkpdZLTU7kkYv6cdtp3fj3vB8464EpzF+5MeywRET2SElW6gQz4zdHd2TiFQP5saiYcx6awkvT88IOS0SkUkqyUqf0b38gk28YzGFtD+DmF79i1MuzKdyuu0SJSO2kJCt1TrOUJJ65/HCuO7YTE7OXcd7Dn/Pd2h/DDktE5GeUZKVOio8zbj65C0+OyCJv/VbOGP0Z/567KuywRER2oyQrddpxXVvw5vWD6NC0MVc8PZ3/fWs+xSU7wg5LRARQkpUY0PbARrx41RFcNLAdj36yhF89PpXVGwvDDktERElWYkNSQjx3nd2Tfw7tw9d5Gzjt/s/4YvHasMMSkXpOSVZiytmHZTLpuqNIa5jAhY9/yUMf5bJjh+4SJSLhUJKVmNO5RQqvXzeI03q24v/eWchvnsphwxbdJUpEap6SrMSkJkkJjB5+GH8+qweffJPP6aM/5eu8DWGHJSL1jJKsxCwz49Ij2/PClUewY4dz3sOfM2Hqd3rIgIjUGCVZiXmHtTuAyTcM5oiDM7jt1Tn87oWv2LKtOOywRKQeUJKVeuGAxg0YO6I/N514CK/OWs7ZD05hcf7msMMSkRinJCv1RlycccPxnXn614ezZvM2zhr9GW/OXhF2WCISw5Rkpd4Z1Lkpk28YRJeWKVz37EzumDSXbcW6S5SIVD8lWamXWqU15Pkrj+CyQR0Y9/m3DB3zBSsKtoYdlojEGCVZqbcS4+P44xndefjCvnzzw2ZOv/9TPl6UH3ZYIhJDlGSl3ju1ZysmXXcULVKT
GTF2Gve9t4gS3SVKRKqBkqwI0LFZE1695ijOOSyTf/3nG0aMnca6H7eFHZaI1HFKsiKBhg3i+fsFvbn73J5MXbqO0+//lBnfrw87LBGpw5RkRUoxM4YNaMcrVx9JYnwcv3zkC8ZOWaq7RInIPlGSFSnHoZlpvHH9II7p0pw/vzGP656byeYi3SVKRPaOkqxIBdIaJvLYJf0YdWpX3pmzirMe+IyFqzaFHZaI1CFKsiKVMDOu+sXBTLj8cDYVFjPkwc94ZUZe2GGJSB2hJCtSBQM7ZjD5hkH0aZvOTS98xR9e/ZrC7SVhhyUitZySrEgVNU9J5pnLDufqYw7m2anfc/4jn7Ns3ZawwxKRWkxJVmQvJMTHccspXXn8kiy+X7uF0+//lPfn/RB2WCJSSynJiuyDE7q34M3rB9MuoxGXP5XDPe8soLhEDxkQkd0pyYrso3YZjXjpqiP51eHtePijxVz0xFQ2Fm4POywRqUWUZEX2Q3JiPP9zTk/+fkFvcr5dz9XPTNdj80RkFyVZkWpwXr823HNeL6bkrmXUy7N1hygRASAh7ABEYsV5/dqwomArf39vEa3TG3LzyV3CDklEQqYkK1KNrjuuE8sLtvLAh7lkHtCQ4QPahR2SiIRISVakGpkZfzn7UFZuKOT21+bQMi2ZY7s0DzssEQmJzsmKVLPE+DgevLAvXVumcO2EGXydtyHskEQkJFFNsmZ2ipktNLNcMxtVzv4kM3s+2D/VzNqX2ndrsH2hmZ28pzrNrENQxzdBnQ2q0EYvM/vCzOaa2ddmlhydT0LqmyZJCYwd0Z8DGjVg5Lhs3RlKpJ6KWpI1s3jgQeBUoDsw3My6lyl2GbDe3TsB9wH3BMd2B4YBPYBTgIfMLH4Pdd4D3OfunYH1Qd2VtZEAPANc5e49gGMALXKUatM8NZlxI/uzrbiEEWOnUbBlW9ghiUgNi+ZIdgCQ6+5L3H0bMBEYUqbMEGB88Pol4Hgzs2D7RHcvcvelQG5QX7l1BsccF9RBUOfZe2jjJGC2u38F4O5r3V13fJdq1blFCmMuyWLZuq1c8fR0ior1T0ykPolmks0ElpV6nxdsK7eMuxcDG4CMSo6taHsGUBDUUbatito4BHAze9fMZpjZf+1zT0UqMbBjBvf+sjfTlq7jdy98xY4dWkMrUl9E8+piK2db2d8uFZWpaHt5fxRUVr6yNhKAQUB/YAvwHzOb7u7/2S1AsyuAKwDatdNyDNk3Z/VuzYqCrdz99gIy0xty62ndwg5JRGpANEeyeUDbUu/bACsqKhOcI00D1lVybEXb1wDpQR1l26qsjY/dfY27bwHeAvqW7YS7j3H3LHfPatasWZU7L1LWlUd35OKBB/HoJ0t46otvww5HRGpANJNsNtA5uOq3AZELmSaVKTMJuDR4fT7wgUfuRzcJGBZcGdwB6AxMq6jO4JgPgzoI6nx9D228C/Qys0ZB8v0FMK8a+y+yGzPjT2d254Ruzblj0lz+PXdV2CGJSJRFLckG5z+vI5LM5gMvuPtcM7vTzM4Kij0BZJhZLnATMCo4di7wApGk9w5wrbuXVFRnUNctwE1BXRlB3ZW1sR74B5HEPQuY4e6To/NpiEQkxMdx//DD6JmZxg0TZzLz+/VhhyQiUWS6kXnVZWVleU5OTthhSAxYs7mIcx/6nB+LinnlmiM5KKNx2CGJyD4KrufJKm+f7vgkEoKmTZIYN7I/Je6MGJvNuh+1hlYkFinJioSkY7MmPH5JFssLtnL5+GwKt2sNrUisUZIVCVFW+wP519A+zFxWwI0TZ1GiNbQiMUVJViRkp/Zsxe2nd+eduau4a7IucBeJJXrUnUgtcNmgDixfv5UnpywlM70hlw/uGHZIIlINlGRFaonbT+/Gyg1b+etb82md3pDTerYKOyQR2U+aLhapJeLijPuG9qFvuwO48flZ5Hy7LuyQRGQ/KcmK1CLJifE8dklWZMr4qRwW528OOyQR2Q9KsiK1zIGNGzBu
ZH/izRgxdhr5m4rCDklE9pGSrEgtdFBGY54Y0Z/8TUVcNj6bLduK93yQiNQ6SrIitVSftumMHt6XOcs3cP2zMyku2RF2SCKyl6qUZM3st2aWahFPBA85PynawYnUdyd2b8Gfz+rBfxas5o435qJ7jYvULVUdyf7a3TcCJwHNgJHA3VGLSkR2ufiI9lz5i4488+X3PPLxkrDDEZG9UNV1shb89zRgrLt/ZWZW2QEiUn1uObkrKwoKueedBbROT2ZIn8ywQxKRKqhqkp1uZv8GOgC3mlkKoBNEIjUkLs6494Je/LCxkN+/OJsWqckM7JgRdlgisgdVnS6+jMjDzvu7+xYgkciUsYjUkKSEeB67OIt2GY244qkcvvlhU9ghicgeVDXJHgEsdPcCM7sIuB3YEL2wRKQ8aY0SGTeyP0mJ8YwYm80PGwvDDklEKlHVJPswsMXMegP/BXwHPBW1qESkQm0OaMTYEf1Zv2UbI8dms7lIa2hFaquqJtlij6wdGAL8y93/BaRELywRqcyhmWk8eGFfFv6wiWsmzGC71tCK1EpVTbKbzOxW4GJgspnFEzkvKyIhObZLc/569qF8siif21+dozW0IrVQVZPsUKCIyHrZVUAm8LeoRSUiVTJsQDuuP64Tz+csY/QHuWGHIyJlVCnJBol1ApBmZmcAhe6uc7IitcBNJx7CuX0z+cd7i3hpel7Y4YhIKVW9reIvgWnABcAvgalmdn40AxORqjEz7j63F0d1ymDUy7P59Jv8sEMSkUBVp4tvI7JG9lJ3vwQYAPwxemGJyN5okBDHwxf1o1PzJlz9zAzmr9wYdkgiQtWTbJy7ry71fu1eHCsiNSA1OZGxI/vTJCmBkWOzWblha9ghidR7VU2U75jZu2Y2wsxGAJOBt6IXlojsi1ZpDRk7sj8/FhUz4slsNhZuDzskkXqtqhc+/R4YA/QCegNj3P2WaAYmIvumW6tUHrm4H4vzN3P1M9PZVqw1tCJhqfKUr7u/7O43ufv/c/dXoxmUiOyfozo15Z7zejEldy2jXp6tNbQiIan0KTxmtgko7/9OA9zdU6MSlYjst/P6tWF5wVb+8d4iMg9oyO9O6hJ2SCL1TqVJ1t1160SROuz64zqxomAroz/IpSYTu34AABl+SURBVHV6Q4YPaBd2SCL1SlWfJysidZCZ8ZezD2XlhkJuf20OLdOSObZL87DDEqk3tAxHJMYlxsfx4IV96doyhWsnzODrPD2lUqSmKMmK1ANNkhIYO6I/BzRqwK/HZ7Ns3ZawQxKpF5RkReqJ5qnJjBvZn6LtJYwcl82GLVpDKxJtSrIi9UjnFimMuSSL79du4TdP51BUXBJ2SCIxTUlWpJ4Z2DGDv13Qi2lL1/G7F75ixw6toRWJFl1dLFIPDemTyYqCQu55ZwGZBzTk1lO7hR2SSExSkhWpp676RUeWF2zh0Y+XkJnekEuOaB92SCIxR0lWpJ4yM+44swerNhRyx6S5tEpryIndW4QdlkhMieo5WTM7xcwWmlmumY0qZ3+SmT0f7J9qZu1L7bs12L7QzE7eU51m1iGo45ugzgaVtWFm7c1sq5nNCn4eid4nIVI7JcTHcf/ww+iZmcb1z81g5vfrww5JJKZELcmaWTzwIHAq0B0YbmbdyxS7DFjv7p2A+4B7gmO7A8OAHsApwENmFr+HOu8B7nP3zsD6oO4K2wgsdvc+wc9V1dh9kTqjUYMEnhjRn2YpSVw+Pofv1v4YdkgiMSOaI9kBQK67L3H3bcBEYEiZMkOA8cHrl4DjzcyC7RPdvcjdlwK5QX3l1hkcc1xQB0GdZ++hDREJNG2SxLiRAyhxZ8TYbNb9uC3skERiQjSTbCawrNT7vGBbuWXcvRjYAGRUcmxF2zOAgqCOsm1V1AZABzObaWYfm9ngfeumSGw4uFkTHr8ki+UFW7l8fDaF27WGVmR/RTPJljdaLLsgr6Iy1bW9sjZWAu3c/TDgJuBZM/vZo/vM7AozyzGz
nPz8/HKqEokdWe0P5F9D+zBzWQE3TpxFidbQiuyXaCbZPKBtqfdtgBUVlTGzBCANWFfJsRVtXwOkB3WUbavcNoKp6LUA7j4dWAwcUrYT7j7G3bPcPatZs2ZV7rxIXXVqz1bcfnp33pm7irsmzws7HJE6LZpJNhvoHFz124DIhUyTypSZBFwavD4f+MDdPdg+LLgyuAPQGZhWUZ3BMR8GdRDU+XplbZhZs+BCKsysY9DGkmrsv0idddmgDow8qj1jp3zLE58tDTsckTorautk3b3YzK4D3gXigSfdfa6Z3QnkuPsk4AngaTPLJTKCHRYcO9fMXgDmAcXAte5eAlBenUGTtwATzewuYGZQNxW1ARwN3GlmxUAJcJW7r4vW5yFS19x+endWFhRy1+R5tE5L5tSercIOSaTOscggUKoiKyvLc3Jywg5DpMYUbi/hwsen8vXyDTx7+eFktT8w7JBEah0zm+7uWeXt0wMCRKRCyYnxPHZJFpnpDbn8qRwW528OOySROkVJVkQqdWDjBowb2Z94M0aMnUb+pqKwQxKpM5RkRWSPDspozBMj+pO/qYjLx2ezZVvxng8SESVZEamaPm3TGT28L18v38ANz82kuGRH2CGJ1HpKsiJSZSd2b8Gfz+rB+/NXc8cbc9GFkyKV06PuRGSvXHxEe/IKtgbPoW3E1cccHHZIIrWWkqyI7LVbTu7KioJC7nlnAa3TkxnSp+xtyUUElGRFZB/ExRn3XtCLHzYW8vsXZ9MiNZmBHTP2fKBIPaNzsiKyT5IS4nns4izaZTTiiqdy+OaHTWGHJFLrKMmKyD5La5TIuJH9SUqMZ8TYbH7YWBh2SCK1ipKsiOyXNgc0YuyI/qzfso1fj8tmc5HW0IrspCQrIvvt0Mw0HrywLwtWbeLaCTPYrjW0IoCSrIhUk2O7NOevZx/Kx4vyuf3VOVpDK4KuLhaRajRsQDuWF2xl9Ae5ZB7QkBuO7xx2SCKhUpIVkWp104mHsHz9Vv7x3iJapzfk/H5twg5JJDRKsiJSrcyMu8/rxQ+bChn18mxapiYzqHPTsMMSCYXOyYpItWuQEMfDF/WjU/MmXPXMdOav3Bh2SCKhUJIVkahITU5k7Mj+NElKYOTYbFZu2Bp2SCI1TklWRKKmVVpDxo7sz+aiYkY8mc3Gwu1hhyRSo5RkRSSqurVK5eGL+rI4fzNXPzOdbcVaQyv1h5KsiETd4M7NuPu8XkzJXcuoV2ZrDa3UG7q6WERqxPn92rCiILK0JzO9Ib87qUvYIYlEnZKsiNSY64/rxPL1kZtVtE5vyPAB7cIOSSSqlGRFpMaYGXedcygrNxZy+2tzaJmWzLFdmocdlkjU6JysiNSoxPg4HrqwL11bpnDthBnMWb4h7JBEokZJVkRqXJOkBMaO6M8BjRowclw2y9ZtCTskkahQkhWRUDRPTWbcyP4UbS9h5LhsNmzRGlqJPUqyIhKazi1SGHNJFt+v3cJvns6hqLgk7JBEqpWSrIiEamDHDP52QS+mLV3HzS/OZscOraGV2KGri0UkdEP6ZLKioJB73llA6/Rkbj21W9ghiVQLJVkRqRWu+kVHlhds4dGPl5CZ3pBLjmgfdkgi+01JVkRqBTPjjjN7sGpDIXdMmkurtIac2L1F2GGJ7BedkxWRWiMhPo77hx9Gz8w0rn9uBrOWFYQdksh+UZIVkVqlUYMEHr+0P81SkrhsXDbfrf0x7JBE9pmSrIjUOs1Skhg3cgAl7owYm826H7eFHZLIPlGSFZFa6eBmTXj8kiyWF2zl8vHZFG7XGlqpe5RkRaTWymp/IP8c2oeZywq4ceIsSrSGVuoYJVkRqdVO69mK207rxjtzV/HXyfPDDkdkr2gJj4jUepcP7sjygq08OWUpmQc05LJBHcIOSaRKlGRFpE64/fTurCwo5K7J82idlsypPVuFHZLIHkV1utjMTjGzhWaWa2ajytmfZGbPB/unmln7UvtuDbYvNLOT91SnmXUI
6vgmqLPBntoI9rczs81mdnP1fwIiUl3i44x/DuvDYW3T+e3zs5j+3bqwQxLZo6glWTOLBx4ETgW6A8PNrHuZYpcB6929E3AfcE9wbHdgGNADOAV4yMzi91DnPcB97t4ZWB/UXWEbpdwHvF09vRaRaEpOjOfxS/uTmd6Qy8fnsCR/c9ghiVQqmiPZAUCuuy9x923ARGBImTJDgPHB65eA483Mgu0T3b3I3ZcCuUF95dYZHHNcUAdBnWfvoQ3M7GxgCTC3GvstIlF0YOMGjBvZnzgzRozNZs3morBDEqlQNJNsJrCs1Pu8YFu5Zdy9GNgAZFRybEXbM4CCoI6ybZXbhpk1Bm4B/lxZJ8zsCjPLMbOc/Pz8PXRZRGrCQRmNeWJEf1ZvKuSycdls2Va854NEQhDNJGvlbCu7yK2iMtW1vbI2/kxkernS+SZ3H+PuWe6e1axZs8qKikgN6tM2ndHD+/L18g3c8NxMikt2hB2SyM9EM8nmAW1LvW8DrKiojJklAGnAukqOrWj7GiA9qKNsWxW1cTjwf2b2LXAj8Aczu27fuioiYTixewvuOKsH789fzR1vzMVdN6uQ2iWaSTYb6Bxc9duAyIVMk8qUmQRcGrw+H/jAI/+XTAKGBVcGdwA6A9MqqjM45sOgDoI6X6+sDXcf7O7t3b098E/gf9z9ger8AEQk+i45oj1XHt2RZ778nkc/WRJ2OCK7ido6WXcvDkaG7wLxwJPuPtfM7gRy3H0S8ATwtJnlEhldDguOnWtmLwDzgGLgWncvASivzqDJW4CJZnYXMDOom4raEJHYccspXVmxoZC7315Aq7RkhvQpe/mHSDhM0ytVl5WV5Tk5OWGHISLlKCou4eInpjHr+wKeumwAAztmhB2S1BNmNt3ds8rbp3sXi0hMSEqI57GLs2iX0Ygrnsrhmx82hR2SiJKsiMSOtEaJjB3Rn6TEeEaMzWb1xsKwQ5J6TklWRGJK2wMbMXZEf9Zv2cbIcdlsLtIaWgmPkqyIxJxDM9N48MK+LFi1iWsnzGC71tBKSJRkRSQmHdulOXedfSgfL8rnj6/N0RpaCYUedSciMWv4gHYsX7+VBz7MJTO9Idcf3znskKSeUZIVkZj2u5MOYUXBVv7+3iJapzfkvH5twg5J6hElWRGJaWbG3ef14odNhdzy8mxapCYzqHPTsMOSekLnZEUk5jVIiOPhi/pxcLMmXPXMdOav3Bh2SFJPKMmKSL2QmpzI2JH9aZKUwMix2azcsDXskKQeUJIVkXqjdXpDxo7sz+aiYkaOzWZj4fawQ5IYpyQrIvVKt1apPHxRX3JXb+bqZ6azrVhraCV6lGRFpN4Z3LkZd5/Xiym5axn1ymytoZWo0dXFIlIvnd+vDSsKtvKP9xbRJr0hN53UJeyQJAYpyYpIvXX9cZ1Yvn4r93+QS+v0hgwb0C7skCTGKMmKSL1lZtx1zqGs3FjIba/NoUVaMsd2aR52WBJDdE5WROq1xPg4HrqwL11bpnDthBnMWb4h7JAkhijJiki91yQpgSdH9OeARg0YOS6bvPVbwg5JYoSSrIgI0CI1mbEj+1O4vYQRY7PZsEVraGX/KcmKiAQOaZHCmIuz+H7tFq54Ooei4pKwQ5I6TklWRKSUIw7O4G8X9GLq0nXc/OJsduzQGlrZd7q6WESkjCF9MllRUMg97yygdXoyt57aLeyQpI5SkhURKcdVv+jI8oItPPrxEhau2kSvzDS6tkqlW6tUDjqwEXFxFnaIUgcoyYqIlMPMuOPMHiQlxPPxonw+WZTPzpnjhonxdGmZQrdWKXRrlUrXlql0bZVCanJiuEFLrWO6Z2fVZWVleU5OTthhiEgICreX8M0Pm5m/ciPzV22M/HflJjZs/ekq5Mz0hnRrlVoq+aZwUEZj4jXqjWlmNt3ds8rbp5GsiEgVJCfG07NNGj3bpO3a5u6s2ljIgpWbmLdyIwtWbWL+yo18sOCH3Ua9h7RMoXurFLq2jEw3d2mZ
QlpDjXrrA41k94JGsiJSFbtGvcGId8HKTcxftZGCLWVHvbtPN7fXqLdO0khWRKQGVTTq/WFjUanp5k0sWLmRDxfmUxIMe5MT4+jSIiWYco5MN3dtlapRbx2mJCsiUgPMjJZpybRMS+bYrj89hKBwewm5qzdHpptXRqab35m7ionZy3aV2Tnq3Tni7dYqVaPeOkJJVkQkRMmJ8RyamcahmeWMektPN1cw6o2c542MeLu1TCWtkUa9tYmSrIhILbPbqLfLz0e980tdZPXveat4PuenUW/rtOTIVHOp870dmmrUGxYlWRGROqKiUe/qTUW7lhQtCEa/Hy36adSblBAXWddbarpZo96aoSQrIlKHmRktUpNpkZrMMaVGvUXFkSucd454F6zayHvzf/jZqLdrsK535/IijXqrl5KsiEgMSkoof9Sbv6lotzW9C1Zu4pNF+RSXGfV2bfnTdHO3VimkN2oQVlfqNCVZEZF6wsxonppM83JGvZFzvZFlRfNXbeT9+at5ISdvV5lWO8/1tkzZdVer9hmNSYjXw9wqoyQrIlLPJSXE06N1Gj1a/3zUO3/XiDdyzrfsqPeQFim7TTdr1Ls7JVkREfmZ0qPeXxzSbNf2naPencuKFqzaxH/KjHpbpib/dDerVql0r8ejXiVZERGpsgpHvZuLfppuDpLvp9+s+dmod9e53laRq50PaBzbo14lWRER2S9mRvOUZJqn7D7q3Va8o9S63sh08wcLVvPi9N1HvV1LPbWoe3CFc6yMeqOaZM3sFOBfQDzwuLvfXWZ/EvAU0A9YCwx192+DfbcClwElwA3u/m5ldZpZB2AicCAwA7jY3bdV1IaZDQDG7AwFuMPdX43KByEiUg81SIije+tUurdO3W376k2RJxftTLzzV27ks1Kj3gYJcRzSoslP53mD0W9dHPVG7Sk8ZhYPLAJOBPKAbGC4u88rVeYaoJe7X2Vmw4Bz3H2omXUHngMGAK2B94FDgsPKrdPMXgBecfeJZvYI8JW7P1xJG42Abe5ebGatgK+A1u5eXFGf9BQeEZHo2Fa8g8X5u9/Nav7KTazZXLSrTIvUpN2WFXVrlUrHWjDqDespPAOAXHdfEgQxERgCzCtVZghwR/D6JeABM7Ng+0R3LwKWmlluUB/l1Wlm84HjgF8FZcYH9T5cURvuvqVUHMmAnvknIhKSBglxu54+VFr+pqJdd7Ha+dzeKblr2F7y06i3c/Mmu003d22VyoG1ZNQbzSSbCSwr9T4POLyiMsGIcgOQEWz/ssyxmcHr8urMAApKjUJLl6+ojTVmdjjwJHAQkenlCkexIiJS85qlJNEspRmDO+9+rndx/ubdpps/WpjPS6XO9bZITdptWdHOu1kl1vCoN5pJtrz7cpUdLVZUpqLt5X06lZWvNA53nwr0MLNuwHgze9vdC3cL0OwK4AqAdu3alVOViIjUpNKj3nMO+2n7zlHvzuVF81dt4vPFS34a9cbH0XnXud5I4j2iYwZxUbyNZDSTbB7QttT7NsCKCsrkmVkCkAas28Ox5W1fA6SbWUIwGi1dvqI2dnH3+Wb2I3AokFNm3xiCC6SysrI0pSwiUktVNOpdsmbzbtPNn3yTz8sz8jigUSIz/nhiVGOKZpLNBjoHV/0uB4bx0znTnSYBlwJfAOcDH7i7m9kk4Fkz+weRC586A9OIjEp/VmdwzIdBHRODOl/fQxsdgGXBFPJBQBfg2yh8DiIiEpIGCXGRh923TIVSo941m4tYWVBI5DKg6Ilakg2S13XAu0SW2zzp7nPN7E4gx90nAU8ATwcXNq0jkjQJyr1A5CKpYuBady8BKK/OoMlbgIlmdhcwM6ibitoABgGjzGw7sAO4xt3XROvzEBGR2qNpkySaNkmKejtRW8ITi7SER0REyqpsCU9s3FJDRESkFlKSFRERiRIlWRERkShRkhUREYkSJVkREZEoUZIVERGJEiVZERGRKFGSFRERiRIlWRERkSjRHZ/2gpnl
A99VQ1VNiTzUoD5QX2NXfeqv+hqbqquvB7l7s/J2KMmGwMxyKroFV6xRX2NXfeqv+hqbaqKvmi4WERGJEiVZERGRKFGSDceYsAOoQepr7KpP/VVfY1PU+6pzsiIiIlGikayIiEiUKMlGkZmdYmYLzSzXzEaVsz/JzJ4P9k81s/Y1H2X1qEJfR5hZvpnNCn4uDyPO6mBmT5rZajObU8F+M7P7g89itpn1rekYq0sV+nqMmW0o9b3+d03HWF3MrK2ZfWhm881srpn9tpwyMfHdVrGvMfHdmlmymU0zs6+Cvv65nDLR+13s7vqJwg8QDywGOgINgK+A7mXKXAM8ErweBjwfdtxR7OsI4IGwY62m/h4N9AXmVLD/NOBtwICBwNSwY45iX48B3gw7zmrqayugb/A6BVhUzr/jmPhuq9jXmPhug++qSfA6EZgKDCxTJmq/izWSjZ4BQK67L3H3bcBEYEiZMkOA8cHrl4DjzcxqMMbqUpW+xgx3/wRYV0mRIcBTHvElkG5mrWomuupVhb7GDHdf6e4zgtebgPlAZpliMfHdVrGvMSH4rjYHbxODn7IXI0Xtd7GSbPRkAstKvc/j5/+Id5Vx92JgA5BRI9FVr6r0FeC8YIrtJTNrWzOhhaKqn0esOCKYinvbzHqEHUx1CKYLDyMy6ikt5r7bSvoKMfLdmlm8mc0CVgPvuXuF32t1/y5Wko2e8v4KKvvXU1XK1AVV6ccbQHt37wW8z09/NcaiWPleq2IGkVvK9QZGA6+FHM9+M7MmwMvAje6+sezucg6ps9/tHvoaM9+tu5e4ex+gDTDAzA4tUyRq36uSbPTkAaVHa22AFRWVMbMEII26OTW3x766+1p3LwrePgb0q6HYwlCV7z4muPvGnVNx7v4WkGhmTUMOa5+ZWSKRpDPB3V8pp0jMfLd76musfbcA7l4AfAScUmZX1H4XK8lGTzbQ2cw6mFkDIifTJ5UpMwm4NHh9PvCBB2fe65g99rXMeauziJwDilWTgEuCK1EHAhvcfWXYQUWDmbXcee7KzAYQ+Z2yNtyo9k3QjyeA+e7+jwqKxcR3W5W+xsp3a2bNzCw9eN0QOAFYUKZY1H4XJ1RHJfJz7l5sZtcB7xK5+vZJd59rZncCOe4+icg/8qfNLJfIX03Dwot431WxrzeY2VlAMZG+jggt4P1kZs8RufKyqZnlAX8icjEF7v4I8BaRq1BzgS3AyHAi3X9V6Ov5wNVmVgxsBYbV0T8UAY4CLga+Ds7fAfwBaAcx991Wpa+x8t22AsabWTyRPxRecPc3a+p3se74JCIiEiWaLhYREYkSJVkREZEoUZIVERGJEiVZERGRKFGSFRERiRIlWRHZjZl9Hvy3vZn9Kux4ROoyJVkR2Y27Hxm8bA/sVZIN1iKKSEBJVkR2Y2Y7n1hyNzA4eJbo/wtusv43M8sOHvRwZVD+mODZpM8SublBYzObHNxYfo6ZDQ2tMyIh0x2fRKQio4Cb3f0MADO7gshtBPubWRIwxcz+HZQdABzq7kvN7DxghbufHhyXFkbwIrWBRrIiUlUnEblv7ywij0XLADoH+6a5+9Lg9dfACWZ2j5kNdvcNIcQqUisoyYpIVRlwvbv3CX46uPvOkeyPOwu5+yIiT1n6GvhfM/vvEGIVqRWUZEWkIpuAlFLv3yVyw/hEADM7xMwalz3IzFoDW9z9GeBeoG9NBCtSG+mcrIhUZDZQbGZfAeOAfxG54nhG8Ai0fODsco7rCfzNzHYA24GrayRakVpIT+ERERGJEk0Xi4iIRImSrIiISJQoyYqIiESJkqyIiEiUKMmKiIhEiZKsiIhIlCjJioiIRImSrIiISJT8fwRCiiXUhWXLAAAAAElFTkSuQmCC\n",
      "text/plain": [
       "<Figure size 504x360 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "\n",
    "\n",
    "W, loss_tr, dev_loss = SGD(train_indices,train_Y,\n",
    "                            W,\n",
    "                            X_dev=dev_indices, \n",
    "                            Y_dev=dev_Y,\n",
    "                            lr=0.3, \n",
    "                            dropout=0.5,\n",
    "                            freeze_emb=False,\n",
    "                            tolerance=0.000001,\n",
    "                            epochs=100)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Plot the learning process:"
   ]
  },
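  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal plotting sketch (an assumption, not an original cell): it assumes\n",
    "# matplotlib.pyplot is imported as plt and that loss_tr and dev_loss are the\n",
    "# per-epoch loss lists returned by SGD above.\n",
    "plt.plot(range(len(loss_tr)), loss_tr, label='Training loss')\n",
    "plt.plot(range(len(dev_loss)), dev_loss, label='Validation loss')\n",
    "plt.xlabel('Epoch')\n",
    "plt.ylabel('Loss')\n",
    "plt.legend()\n",
    "plt.show()"
   ]
  },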
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compute accuracy, precision, recall and F1-Score:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T15:10:11.037495Z",
     "start_time": "2020-04-02T15:10:11.034999Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy: 0.07555555555555556\n",
      "Precision: 0.20833333333333331\n",
      "Recall: 0.05666666666666667\n",
      "F1-Score: 0.07055852644087937\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "c:\\users\\18127\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\sklearn\\metrics\\classification.py:1437: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 in labels with no predicted samples.\n",
      "  'precision', 'predicted', average, warn_for)\n",
      "c:\\users\\18127\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\sklearn\\metrics\\classification.py:1439: UndefinedMetricWarning: Recall is ill-defined and being set to 0.0 in labels with no true samples.\n",
      "  'recall', 'true', average, warn_for)\n",
      "c:\\users\\18127\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\sklearn\\metrics\\classification.py:1437: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.\n",
      "  'precision', 'predicted', average, warn_for)\n",
      "c:\\users\\18127\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\sklearn\\metrics\\classification.py:1439: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.\n",
      "  'recall', 'true', average, warn_for)\n"
     ]
    }
   ],
   "source": [
    "preds_te = [np.argmax(forward_pass(x, W, dropout_rate=0.0)[1]) \n",
    "            for x,y in zip(np.array(test_indices),np.array(test_Y))]\n",
    "\n",
    "print('Accuracy:', accuracy_score(test_Y,preds_te))\n",
    "print('Precision:', precision_score(test_Y,preds_te,average='macro'))\n",
    "print('Recall:', recall_score(test_Y,preds_te,average='macro'))\n",
    "print('F1-Score:', f1_score(test_Y,preds_te,average='macro'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "del W"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Discuss how did you choose model hyperparameters ? "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "scrolled": true
   },
   "source": [
    "# Discuss\n",
    "learning rate: should be small in order to learn the parameters better, but too small the train will be slow,so we should keep the balance.\n",
    "embedding size: should between the size of vocabulary and the size of output, {e.g. 50, 300, 500},\n",
    "the dropout rate: should between 0 and 1, chose 0.5 may be the best,because it means drop out 50% parameters. {e.g. 0.2, 0.5} \n",
    "Please use tables or graphs to show training and validation performance for each hyperparameter combination"
   ]
  },
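  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A hedged sketch of the tuning loop behind the discussion above (an assumption,\n",
    "# not the original experiment): it reuses the network_weights and SGD functions\n",
    "# defined earlier and prints the final training/validation loss per combination.\n",
    "for lr in [0.01, 0.1, 0.3]:\n",
    "    for emb_size in [50, 300, 500]:\n",
    "        for dropout in [0.2, 0.5]:\n",
    "            W_tmp = network_weights(vocab_size=len(vocab), embedding_dim=emb_size,\n",
    "                                    hidden_dim=[], num_classes=3)\n",
    "            W_tmp, tr_loss, dv_loss = SGD(train_indices, train_Y, W_tmp,\n",
    "                                          X_dev=dev_indices, Y_dev=dev_Y,\n",
    "                                          lr=lr, dropout=dropout, epochs=20)\n",
    "            print(lr, emb_size, dropout, tr_loss[-1], dv_loss[-1])"
   ]
  },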
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "the model is underfit"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Use Pre-trained Embeddings\n",
    "\n",
    "Now re-train the network using GloVe pre-trained embeddings. You need to modify the `backward_pass` function above to stop computing gradients and updating weights of the embedding matrix.\n",
    "\n",
    "Use the function below to obtain the embedding martix for your vocabulary. Generally, that should work without any problem. If you get errors, you can modify it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:27:32.020697Z",
     "start_time": "2020-04-02T14:27:32.015733Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def get_glove_embeddings(f_zip, f_txt, word2id, emb_size=300):\n",
    "    \n",
    "    w_emb = np.zeros((len(word2id), emb_size))\n",
    "    \n",
    "    with zipfile.ZipFile(f_zip) as z:\n",
    "        with z.open(f_txt) as f:\n",
    "            for line in f:\n",
    "                line = line.decode('utf-8')\n",
    "                word = line.split()[0]\n",
    "                     \n",
    "                if word in vocab:\n",
    "                    emb = np.array(line.strip('\\n').split()[1:]).astype(np.float32)\n",
    "                    w_emb[word2id[word]] +=emb\n",
    "    return w_emb"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:28:54.548613Z",
     "start_time": "2020-04-02T14:27:32.780248Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# get glove embedding\n",
    "w_glove = get_glove_embeddings(\"glove.840B.300d.zip\",\"glove.840B.300d.txt\",word2id=word_id_dict)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, initialise the weights of your network using the `network_weights` function. Second, replace the weigths of the embedding matrix with `w_glove`. Finally, train the network by freezing the embedding weights: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:30:11.121198Z",
     "start_time": "2020-04-02T14:29:24.946124Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "#initial network_weights with glove as embedding \n",
    "W = network_weights(vocab_size=len(vocab),embedding_dim=w_glove.shape[1],\n",
    "                    hidden_dim=[], num_classes=3)\n",
    "W[0] = w_glove"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 times updata parameter\n",
      "1 times updata parameter\n",
      "2 times updata parameter\n",
      "3 times updata parameter\n",
      "4 times updata parameter\n",
      "5 times updata parameter\n",
      "6 times updata parameter\n",
      "7 times updata parameter\n",
      "8 times updata parameter\n",
      "9 times updata parameter\n",
      "10 times updata parameter\n",
      "11 times updata parameter\n",
      "12 times updata parameter\n",
      "13 times updata parameter\n",
      "14 times updata parameter\n",
      "15 times updata parameter\n",
      "16 times updata parameter\n",
      "17 times updata parameter\n",
      "18 times updata parameter\n",
      "19 times updata parameter\n",
      "20 times updata parameter\n",
      "21 times updata parameter\n",
      "22 times updata parameter\n",
      "23 times updata parameter\n",
      "24 times updata parameter\n",
      "25 times updata parameter\n",
      "26 times updata parameter\n",
      "27 times updata parameter\n",
      "28 times updata parameter\n",
      "29 times updata parameter\n",
      "30 times updata parameter\n",
      "31 times updata parameter\n",
      "32 times updata parameter\n",
      "33 times updata parameter\n",
      "34 times updata parameter\n",
      "35 times updata parameter\n",
      "36 times updata parameter\n",
      "37 times updata parameter\n",
      "38 times updata parameter\n",
      "39 times updata parameter\n",
      "40 times updata parameter\n",
      "41 times updata parameter\n",
      "42 times updata parameter\n",
      "43 times updata parameter\n",
      "44 times updata parameter\n",
      "45 times updata parameter\n",
      "46 times updata parameter\n",
      "47 times updata parameter\n",
      "48 times updata parameter\n",
      "49 times updata parameter\n",
      "50 times updata parameter\n",
      "51 times updata parameter\n",
      "52 times updata parameter\n",
      "53 times updata parameter\n",
      "54 times updata parameter\n",
      "55 times updata parameter\n",
      "56 times updata parameter\n",
      "57 times updata parameter\n",
      "58 times updata parameter\n",
      "59 times updata parameter\n",
      "60 times updata parameter\n",
      "61 times updata parameter\n",
      "62 times updata parameter\n",
      "63 times updata parameter\n",
      "64 times updata parameter\n",
      "65 times updata parameter\n",
      "66 times updata parameter\n",
      "67 times updata parameter\n",
      "68 times updata parameter\n",
      "69 times updata parameter\n",
      "70 times updata parameter\n",
      "71 times updata parameter\n",
      "72 times updata parameter\n",
      "73 times updata parameter\n",
      "74 times updata parameter\n",
      "75 times updata parameter\n",
      "76 times updata parameter\n",
      "77 times updata parameter\n",
      "78 times updata parameter\n",
      "79 times updata parameter\n",
      "80 times updata parameter\n",
      "81 times updata parameter\n",
      "82 times updata parameter\n",
      "83 times updata parameter\n",
      "84 times updata parameter\n",
      "85 times updata parameter\n",
      "86 times updata parameter\n",
      "87 times updata parameter\n",
      "88 times updata parameter\n",
      "89 times updata parameter\n",
      "90 times updata parameter\n",
      "91 times updata parameter\n",
      "92 times updata parameter\n",
      "93 times updata parameter\n",
      "94 times updata parameter\n",
      "95 times updata parameter\n",
      "96 times updata parameter\n",
      "97 times updata parameter\n",
      "98 times updata parameter\n",
      "99 times updata parameter\n",
      "100 times updata parameter\n",
      "101 times updata parameter\n",
      "102 times updata parameter\n",
      "103 times updata parameter\n",
      "104 times updata parameter\n",
      "105 times updata parameter\n",
      "106 times updata parameter\n",
      "107 times updata parameter\n",
      "108 times updata parameter\n",
      "109 times updata parameter\n",
      "110 times updata parameter\n",
      "111 times updata parameter\n",
      "112 times updata parameter\n",
      "113 times updata parameter\n",
      "114 times updata parameter\n",
      "115 times updata parameter\n",
      "116 times updata parameter\n",
      "117 times updata parameter\n",
      "118 times updata parameter\n",
      "119 times updata parameter\n",
      "120 times updata parameter\n",
      "121 times updata parameter\n",
      "122 times updata parameter\n",
      "123 times updata parameter\n",
      "124 times updata parameter\n",
      "125 times updata parameter\n",
      "126 times updata parameter\n",
      "127 times updata parameter\n",
      "128 times updata parameter\n",
      "129 times updata parameter\n",
      "130 times updata parameter\n",
      "131 times updata parameter\n",
      "132 times updata parameter\n",
      "133 times updata parameter\n",
      "134 times updata parameter\n",
      "135 times updata parameter\n",
      "136 times updata parameter\n",
      "137 times updata parameter\n",
      "138 times updata parameter\n",
      "139 times updata parameter\n",
      "140 times updata parameter\n",
      "141 times updata parameter\n",
      "142 times updata parameter\n",
      "143 times updata parameter\n",
      "144 times updata parameter\n",
      "145 times updata parameter\n",
      "146 times updata parameter\n",
      "147 times updata parameter\n",
      "148 times updata parameter\n",
      "149 times updata parameter\n",
      "150 times updata parameter\n",
      "151 times updata parameter\n",
      "152 times updata parameter\n",
      "153 times updata parameter\n",
      "154 times updata parameter\n",
      "155 times updata parameter\n",
      "156 times updata parameter\n",
      "157 times updata parameter\n",
      "158 times updata parameter\n",
      "159 times updata parameter\n",
      "160 times updata parameter\n",
      "161 times updata parameter\n",
      "162 times updata parameter\n",
      "163 times updata parameter\n",
      "164 times updata parameter\n",
      "165 times updata parameter\n",
      "166 times updata parameter\n",
      "167 times updata parameter\n",
      "168 times updata parameter\n",
      "169 times updata parameter\n",
      "170 times updata parameter\n",
      "171 times updata parameter\n",
      "172 times updata parameter\n",
      "173 times updata parameter\n",
      "174 times updata parameter\n",
      "175 times updata parameter\n",
      "176 times updata parameter\n",
      "177 times updata parameter\n",
      "178 times updata parameter\n",
      "179 times updata parameter\n",
      "180 times updata parameter\n",
      "181 times updata parameter\n",
      "182 times updata parameter\n",
      "183 times updata parameter\n",
      "184 times updata parameter\n",
      "185 times updata parameter\n",
      "186 times updata parameter\n",
      "187 times updata parameter\n",
      "188 times updata parameter\n",
      "189 times updata parameter\n",
      "190 times updata parameter\n",
      "191 times updata parameter\n",
      "192 times updata parameter\n",
      "193 times updata parameter\n",
      "194 times updata parameter\n",
      "195 times updata parameter\n",
      "196 times updata parameter\n",
      "197 times updata parameter\n",
      "198 times updata parameter\n",
      "199 times updata parameter\n",
      "200 times updata parameter\n",
      "201 times updata parameter\n",
      "202 times updata parameter\n",
      "203 times updata parameter\n",
      "204 times updata parameter\n",
      "205 times updata parameter\n",
      "206 times updata parameter\n",
      "207 times updata parameter\n",
      "208 times updata parameter\n",
      "209 times updata parameter\n",
      "210 times updata parameter\n",
      "211 times updata parameter\n",
      "212 times updata parameter\n",
      "213 times updata parameter\n",
      "214 times updata parameter\n",
      "215 times updata parameter\n",
      "216 times updata parameter\n",
      "217 times updata parameter\n",
      "218 times updata parameter\n",
      "219 times updata parameter\n",
      "220 times updata parameter\n",
      "221 times updata parameter\n",
      "222 times updata parameter\n",
      "223 times updata parameter\n",
      "224 times updata parameter\n",
      "225 times updata parameter\n",
      "226 times updata parameter\n",
      "227 times updata parameter\n",
      "228 times updata parameter\n",
      "229 times updata parameter\n",
      "230 times updata parameter\n",
      "231 times updata parameter\n",
      "232 times updata parameter\n",
      "233 times updata parameter\n",
      "234 times updata parameter\n",
      "235 times updata parameter\n",
      "236 times updata parameter\n",
      "237 times updata parameter\n",
      "238 times updata parameter\n",
      "239 times updata parameter\n",
      "240 times updata parameter\n",
      "241 times updata parameter\n",
      "242 times updata parameter\n",
      "243 times updata parameter\n",
      "244 times updata parameter\n",
      "245 times updata parameter\n",
      "246 times updata parameter\n",
      "247 times updata parameter\n",
      "248 times updata parameter\n",
      "249 times updata parameter\n",
      "250 times updata parameter\n",
      "251 times updata parameter\n",
      "252 times updata parameter\n",
      "253 times updata parameter\n",
      "254 times updata parameter\n",
      "255 times updata parameter\n",
      "256 times updata parameter\n",
      "257 times updata parameter\n",
      "258 times updata parameter\n",
      "259 times updata parameter\n",
      "260 times updata parameter\n",
      "261 times updata parameter\n",
      "262 times updata parameter\n",
      "263 times updata parameter\n",
      "264 times updata parameter\n",
      "265 times updata parameter\n",
      "266 times updata parameter\n",
      "267 times updata parameter\n",
      "268 times updata parameter\n",
      "269 times updata parameter\n",
      "270 times updata parameter\n",
      "271 times updata parameter\n",
      "272 times updata parameter\n",
      "273 times updata parameter\n",
      "274 times updata parameter\n",
      "275 times updata parameter\n",
      "276 times updata parameter\n",
      "277 times updata parameter\n",
      "278 times updata parameter\n",
      "279 times updata parameter\n",
      "280 times updata parameter\n",
      "281 times updata parameter\n",
      "282 times updata parameter\n",
      "283 times updata parameter\n",
      "284 times updata parameter\n",
      "285 times updata parameter\n",
      "286 times updata parameter\n",
      "287 times updata parameter\n",
      "288 times updata parameter\n",
      "289 times updata parameter\n",
      "290 times updata parameter\n",
      "291 times updata parameter\n",
      "292 times updata parameter\n",
      "293 times updata parameter\n",
      "294 times updata parameter\n",
      "295 times updata parameter\n",
      "296 times updata parameter\n",
      "297 times updata parameter\n",
      "298 times updata parameter\n",
      "299 times updata parameter\n",
      "300 times updata parameter\n",
      "301 times updata parameter\n",
      "302 times updata parameter\n",
      "303 times updata parameter\n",
      "304 times updata parameter\n",
      "305 times updata parameter\n",
      "306 times updata parameter\n",
      "307 times updata parameter\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "308 times updata parameter\n",
      "309 times updata parameter\n",
      "310 times updata parameter\n",
      "311 times updata parameter\n",
      "312 times updata parameter\n",
      "313 times updata parameter\n",
      "314 times updata parameter\n",
      "315 times updata parameter\n",
      "316 times updata parameter\n",
      "317 times updata parameter\n",
      "318 times updata parameter\n",
      "319 times updata parameter\n",
      "320 times updata parameter\n",
      "321 times updata parameter\n",
      "322 times updata parameter\n",
      "323 times updata parameter\n",
      "324 times updata parameter\n",
      "325 times updata parameter\n",
      "326 times updata parameter\n",
      "327 times updata parameter\n",
      "328 times updata parameter\n",
      "329 times updata parameter\n",
      "330 times updata parameter\n",
      "331 times updata parameter\n",
      "332 times updata parameter\n",
      "333 times updata parameter\n",
      "334 times updata parameter\n",
      "335 times updata parameter\n",
      "336 times updata parameter\n",
      "337 times updata parameter\n",
      "338 times updata parameter\n",
      "339 times updata parameter\n",
      "340 times updata parameter\n",
      "341 times updata parameter\n",
      "342 times updata parameter\n",
      "343 times updata parameter\n",
      "344 times updata parameter\n",
      "345 times updata parameter\n",
      "346 times updata parameter\n",
      "347 times updata parameter\n",
      "348 times updata parameter\n",
      "349 times updata parameter\n",
      "350 times updata parameter\n",
      "351 times updata parameter\n",
      "352 times updata parameter\n",
      "353 times updata parameter\n",
      "354 times updata parameter\n",
      "355 times updata parameter\n",
      "356 times updata parameter\n",
      "357 times updata parameter\n",
      "358 times updata parameter\n",
      "359 times updata parameter\n",
      "360 times updata parameter\n",
      "361 times updata parameter\n",
      "362 times updata parameter\n",
      "363 times updata parameter\n",
      "364 times updata parameter\n",
      "365 times updata parameter\n",
      "366 times updata parameter\n",
      "367 times updata parameter\n",
      "368 times updata parameter\n",
      "369 times updata parameter\n",
      "370 times updata parameter\n",
      "371 times updata parameter\n",
      "372 times updata parameter\n",
      "373 times updata parameter\n",
      "374 times updata parameter\n",
      "375 times updata parameter\n",
      "376 times updata parameter\n",
      "377 times updata parameter\n",
      "378 times updata parameter\n",
      "379 times updata parameter\n",
      "380 times updata parameter\n",
      "381 times updata parameter\n",
      "382 times updata parameter\n",
      "383 times updata parameter\n",
      "384 times updata parameter\n",
      "385 times updata parameter\n",
      "386 times updata parameter\n",
      "387 times updata parameter\n",
      "388 times updata parameter\n",
      "389 times updata parameter\n",
      "390 times updata parameter\n",
      "391 times updata parameter\n",
      "392 times updata parameter\n",
      "393 times updata parameter\n",
      "394 times updata parameter\n",
      "395 times updata parameter\n",
      "396 times updata parameter\n",
      "397 times updata parameter\n",
      "398 times updata parameter\n",
      "399 times updata parameter\n",
      "400 times updata parameter\n",
      "401 times updata parameter\n",
      "402 times updata parameter\n",
      "403 times updata parameter\n",
      "404 times updata parameter\n",
      "405 times updata parameter\n",
      "406 times updata parameter\n",
      "407 times updata parameter\n",
      "408 times updata parameter\n",
      "409 times updata parameter\n",
      "410 times updata parameter\n",
      "411 times updata parameter\n",
      "412 times updata parameter\n",
      "413 times updata parameter\n",
      "414 times updata parameter\n",
      "415 times updata parameter\n",
      "416 times updata parameter\n",
      "417 times updata parameter\n",
      "418 times updata parameter\n",
      "419 times updata parameter\n",
      "420 times updata parameter\n",
      "421 times updata parameter\n",
      "422 times updata parameter\n",
      "423 times updata parameter\n",
      "424 times updata parameter\n",
      "425 times updata parameter\n",
      "426 times updata parameter\n",
      "427 times updata parameter\n",
      "428 times updata parameter\n",
      "429 times updata parameter\n",
      "430 times updata parameter\n",
      "431 times updata parameter\n",
      "432 times updata parameter\n"
     ]
    }
   ],
   "source": [
    "\n",
    "W, loss_tr, dev_loss = SGD(train_indices,train_Y,\n",
    "                            W,\n",
    "                            X_dev=dev_indices, \n",
    "                            Y_dev=dev_Y,\n",
    "                            lr=0.3, \n",
    "                            dropout=0.5,\n",
    "                            freeze_emb=True,\n",
    "                            tolerance=0.00001,\n",
    "                            epochs=100)"
   ]
  },
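  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the learning-curve plot the assignment asks for, assuming `loss_tr` and `dev_loss` returned by `SGD` above are lists holding one loss value per epoch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# Plot training vs. validation loss per epoch to check for over/underfitting.\n",
    "epochs = range(1, len(loss_tr) + 1)\n",
    "plt.plot(epochs, loss_tr, label='training loss')\n",
    "plt.plot(epochs, dev_loss, label='validation loss')\n",
    "plt.xlabel('Epoch')\n",
    "plt.ylabel('Categorical cross-entropy loss')\n",
    "plt.legend()\n",
    "plt.show()"
   ]
  },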
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T15:12:00.815184Z",
     "start_time": "2020-04-02T15:12:00.812563Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "preds_te = [np.argmax(forward_pass(x, W, dropout_rate=0.0)[1]) \n",
    "            for x,y in zip(test_indices,test_Y)]\n",
    "\n",
    "print('Accuracy:', accuracy_score(test_Y,preds_te))\n",
    "print('Precision:', precision_score(test_Y,preds_te,average='macro'))\n",
    "print('Recall:', recall_score(test_Y,preds_te,average='macro'))\n",
    "print('F1-Score:', f1_score(test_Y,preds_te,average='macro'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "del W"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Discuss how did you choose model hyperparameters ? "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Extend to support deeper architectures \n",
    "\n",
    "Extend the network to support back-propagation for more hidden layers. You need to modify the `backward_pass` function above to compute gradients and update the weights between intermediate hidden layers. Finally, train and evaluate a network with a deeper architecture. Do deeper architectures increase performance?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T14:58:51.764619Z",
     "start_time": "2020-04-02T14:58:47.483690Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# initial network_weights add one hidden layer with 1000 dim\n",
    "W = network_weights(vocab_size=len(vocab),embedding_dim=w_glove.shape[1],\n",
    "                    hidden_dim=[1000], num_classes=3)\n",
    "# use glove as Embedding\n",
    "W[0] = w_glove"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# train the network\n",
    "W, loss_tr, dev_loss = SGD(train_indices,train_Y,\n",
    "                            W,\n",
    "                            X_dev=dev_indices, \n",
    "                            Y_dev=dev_Y,\n",
    "                            lr=0.3, \n",
    "                            dropout=0.5,\n",
    "                            freeze_emb=True,\n",
    "                            tolerance=0.00001,\n",
    "                            epochs=100)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-02T15:11:51.994986Z",
     "start_time": "2020-04-02T15:11:51.992563Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "preds_te = [np.argmax(forward_pass(x, W, dropout_rate=0.0)[1]) \n",
    "            for x,y in zip(test_indices,test_Y)]\n",
    "\n",
    "print('Accuracy:', accuracy_score(test_Y,preds_te))\n",
    "print('Precision:', precision_score(test_Y,preds_te,average='macro'))\n",
    "print('Recall:', recall_score(test_Y,preds_te,average='macro'))\n",
    "print('F1-Score:', f1_score(test_Y,preds_te,average='macro'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Discuss how did you choose model hyperparameters ? "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
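  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A hedged sketch of one way to search the suggested hyperparameter grid, re-using the `network_weights` initialisation and `SGD` call from the cells above; each (learning rate, dropout) pair is scored by its lowest validation loss. The grid values follow the ranges suggested in the assignment brief and are illustrative, not tuned results:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: grid search over learning rate and dropout rate.\n",
    "# Assumes vocab, w_glove, the train/dev splits and SGD from the cells above.\n",
    "results = {}\n",
    "for lr in [0.1, 0.3]:\n",
    "    for dropout in [0.2, 0.5]:\n",
    "        W = network_weights(vocab_size=len(vocab),\n",
    "                            embedding_dim=w_glove.shape[1],\n",
    "                            hidden_dim=[1000], num_classes=3)\n",
    "        W[0] = w_glove\n",
    "        W, loss_tr, dev_loss = SGD(train_indices, train_Y, W,\n",
    "                                   X_dev=dev_indices, Y_dev=dev_Y,\n",
    "                                   lr=lr, dropout=dropout,\n",
    "                                   freeze_emb=True,\n",
    "                                   tolerance=0.00001, epochs=100)\n",
    "        results[(lr, dropout)] = min(dev_loss)\n",
    "results"
   ]
  },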
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Full Results\n",
    "\n",
    "Add your final results here:\n",
    "\n",
    "| Model | Precision  | Recall  | F1-Score  | Accuracy\n",
    "|:-:|:-:|:-:|:-:|:-:|\n",
    "| Average Embedding  |  0.20833333333333331 | 0.05666666666666667  | 0.07055852644087937  | 0.07555555555555556  |\n",
    "| Average Embedding (Pre-trained)  |   |   |   |   |\n",
    "| Average Embedding (Pre-trained) + X hidden layers    |   |   |   |   |\n",
    "\n",
    "\n",
    "Please discuss why your best performing model is better than the rest."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
