{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "intro_to_sparse_data_and_embeddings.ipynb",
      "version": "0.3.2",
      "provenance": [],
      "collapsed_sections": [
        "mNCLhxsXyOIS",
        "eQS5KQzBybTY",
        "copyright-notice"
      ]
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "TPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "copyright-notice"
      },
      "source": [
        "#### Copyright 2017 Google LLC."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "both",
        "colab_type": "code",
        "id": "copyright-notice2",
        "colab": {}
      },
      "source": [
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PTaAdgy3LS8W",
        "colab_type": "text"
      },
      "source": [
        " # Intro to Sparse Data and Embeddings\n",
        "\n",
        "**Learning Objectives:**\n",
        "* Convert movie-review string data to a sparse feature vector\n",
        "* Implement a sentiment-analysis linear model using a sparse feature vector\n",
        "* Implement a sentiment-analysis DNN model with an embedding that projects the data into two dimensions\n",
        "* Visualize the embedding to see what the model has learned about the relationships between words\n",
        "\n",
        "In this exercise, we'll explore sparse data and work with embeddings using text data from movie reviews (from the [ACL 2011 IMDB dataset](http://ai.stanford.edu/~amaas/data/sentiment/)). This data has already been processed into `tf.Example` format."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2AKGtmwNosU8",
        "colab_type": "text"
      },
      "source": [
        " ## Setup\n",
        "\n",
        "Let's import our dependencies and download the training and test data. [`tf.keras`](https://www.tensorflow.org/api_docs/python/tf/keras) includes a file download and caching tool that we can use to retrieve the datasets."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "jGWqDqFFL_NZ",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 89
        },
        "outputId": "3c266453-1980-4b04-e526-b2963ba918e5"
      },
      "source": [
        "from __future__ import print_function\n",
        "\n",
        "import collections\n",
        "import io\n",
        "import math\n",
        "\n",
        "import matplotlib.pyplot as plt\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "import tensorflow as tf\n",
        "from IPython import display\n",
        "from sklearn import metrics\n",
        "\n",
        "tf.logging.set_verbosity(tf.logging.ERROR)\n",
        "train_url = 'https://download.mlcc.google.cn/mledu-datasets/sparse-data-embedding/train.tfrecord'\n",
        "train_path = tf.keras.utils.get_file(train_url.split('/')[-1], train_url)\n",
        "test_url = 'https://download.mlcc.google.cn/mledu-datasets/sparse-data-embedding/test.tfrecord'\n",
        "test_path = tf.keras.utils.get_file(test_url.split('/')[-1], test_url)"
      ],
      "execution_count": 1,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Downloading data from https://download.mlcc.google.cn/mledu-datasets/sparse-data-embedding/train.tfrecord\n",
            "41631744/41625533 [==============================] - 2s 0us/step\n",
            "Downloading data from https://download.mlcc.google.cn/mledu-datasets/sparse-data-embedding/test.tfrecord\n",
            "40689664/40688441 [==============================] - 2s 0us/step\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6W7aZ9qspZVj",
        "colab_type": "text"
      },
      "source": [
        " ## Building a Sentiment Analysis Model"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jieA0k_NLS8a",
        "colab_type": "text"
      },
      "source": [
        " Let's train a sentiment-analysis model on this data that predicts whether a review is generally *favorable* (label of 1) or *unfavorable* (label of 0).\n",
        "\n",
        "To do so, we'll turn the string-value `terms` into feature vectors by using a *vocabulary*: a list of each term we expect to see in our data. For the purposes of this exercise, we've created a small vocabulary that focuses on a limited set of terms. Most of these terms strongly indicate *favorable* or *unfavorable*, but some were just added because they're interesting.\n",
        "\n",
        "Each term in the vocabulary maps to a coordinate in the feature vector. To convert the string-value `terms` for an example into this vector format, we encode each string so that a coordinate gets a value of 0 if the vocabulary term does not appear in the example string, and a value of 1 if it does. Terms in an example that are not in the vocabulary are thrown away."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2HSfklfnLS8b",
        "colab_type": "text"
      },
      "source": [
        " **NOTE**: *We could of course use a larger vocabulary, and there are special tools for creating one. In addition, instead of just dropping terms that are not in the vocabulary, we could introduce a small number of OOV (out-of-vocabulary) buckets into which such terms are hashed. We could also use a __feature hashing__ approach that hashes each term instead of creating an explicit vocabulary. This works well in practice, but loses interpretability, which is useful for this exercise. See the tf.feature_column module for tools for handling this.*"
      ]
    },
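    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hashBucketNote1",
        "colab_type": "text"
      },
      "source": [
        " As a minimal sketch of the feature-hashing alternative mentioned above (this cell is purely illustrative and is not used by the rest of the exercise; the bucket count of 1000 is an arbitrary choice), we could replace the explicit vocabulary with [`categorical_column_with_hash_bucket`](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "hashBucketCode1",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Illustrative only: hash every term into one of 1000 buckets instead of\n",
        "# using an explicit vocabulary. Collisions between terms are possible, and\n",
        "# we lose the ability to inspect which term maps to which coordinate.\n",
        "hashed_terms_column = tf.feature_column.categorical_column_with_hash_bucket(\n",
        "    key=\"terms\", hash_bucket_size=1000)"
      ],
      "execution_count": 0,
      "outputs": []
    },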
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Uvoa2HyDtgqe",
        "colab_type": "text"
      },
      "source": [
        " ## Building the Input Pipeline"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "O20vMEOurDol",
        "colab_type": "text"
      },
      "source": [
        " First, let's configure the input pipeline to import our data into a TensorFlow model. We can use the following function to parse the training and test data (which is in [TFRecord](https://www.tensorflow.org/guide/datasets#consuming_tfrecord_data) format) and return a dict of the features and the corresponding labels."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "SxxNIEniPq2z",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def _parse_function(record):\n",
        "  \"\"\"Extracts features and labels.\n",
        "  \n",
        "  Args:\n",
        "    record: A serialized `tf.Example` protocol buffer, read from a TFRecord file\n",
        "  Returns:\n",
        "    A `tuple` `(features, labels)`:\n",
        "      features: A dict of tensors representing the features\n",
        "      labels: A tensor with the corresponding labels.\n",
        "  \"\"\"\n",
        "  features = {\n",
        "    \"terms\": tf.VarLenFeature(dtype=tf.string), # terms are strings of varying lengths\n",
        "    \"labels\": tf.FixedLenFeature(shape=[1], dtype=tf.float32) # labels are 0 or 1\n",
        "  }\n",
        "  \n",
        "  parsed_features = tf.parse_single_example(record, features)\n",
        "  \n",
        "  terms = parsed_features['terms'].values\n",
        "  labels = parsed_features['labels']\n",
        "\n",
        "  return  {'terms':terms}, labels"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SXhTeeYMrp-l",
        "colab_type": "text"
      },
      "source": [
        " To confirm our function is working as expected, let's construct a `TFRecordDataset` for the training data, and map the data to features and labels using the function above."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "oF4YWXR0Omt0",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 35
        },
        "outputId": "e393d7b8-ca49-4f1d-8082-36eb69526c70"
      },
      "source": [
        "# Create the Dataset object.\n",
        "ds = tf.data.TFRecordDataset(train_path)\n",
        "# Map features and labels with the parse function.\n",
        "ds = ds.map(_parse_function)\n",
        "\n",
        "ds"
      ],
      "execution_count": 3,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<DatasetV1Adapter shapes: ({terms: (?,)}, (1,)), types: ({terms: tf.string}, tf.float32)>"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 3
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bUoMvK-9tVXP",
        "colab_type": "text"
      },
      "source": [
        " Run the following cell to retrieve the first example from the training data set."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Z6QE2DWRUc4E",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 611
        },
        "outputId": "a222df6e-1896-4a29-bf8e-5829f0d4b983"
      },
      "source": [
        "n = ds.make_one_shot_iterator().get_next()\n",
        "sess = tf.Session()\n",
        "sess.run(n)"
      ],
      "execution_count": 4,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "({'terms': array([b'but', b'it', b'does', b'have', b'some', b'good', b'action',\n",
              "         b'and', b'a', b'plot', b'that', b'is', b'somewhat', b'interesting',\n",
              "         b'.', b'nevsky', b'acts', b'like', b'a', b'body', b'builder',\n",
              "         b'and', b'he', b'isn', b\"'\", b't', b'all', b'that', b'attractive',\n",
              "         b',', b'in', b'fact', b',', b'imo', b',', b'he', b'is', b'ugly',\n",
              "         b'.', b'(', b'his', b'acting', b'skills', b'lack', b'everything',\n",
              "         b'!', b')', b'sascha', b'is', b'played', b'very', b'well', b'by',\n",
              "         b'joanna', b'pacula', b',', b'but', b'she', b'needed', b'more',\n",
              "         b'lines', b'than', b'she', b'was', b'given', b',', b'her',\n",
              "         b'character', b'needed', b'to', b'be', b'developed', b'.',\n",
              "         b'there', b'are', b'way', b'too', b'many', b'men', b'in', b'this',\n",
              "         b'story', b',', b'there', b'is', b'zero', b'romance', b',', b'too',\n",
              "         b'much', b'action', b',', b'and', b'way', b'too', b'dumb', b'of',\n",
              "         b'an', b'ending', b'.', b'it', b'is', b'very', b'violent', b'.',\n",
              "         b'i', b'did', b'however', b'love', b'the', b'scenery', b',',\n",
              "         b'this', b'movie', b'takes', b'you', b'all', b'over', b'the',\n",
              "         b'world', b',', b'and', b'that', b'is', b'a', b'bonus', b'.', b'i',\n",
              "         b'also', b'liked', b'how', b'it', b'had', b'some', b'stuff',\n",
              "         b'about', b'the', b'mafia', b'in', b'it', b',', b'not', b'too',\n",
              "         b'much', b'or', b'too', b'little', b',', b'but', b'enough',\n",
              "         b'that', b'it', b'got', b'my', b'attention', b'.', b'the',\n",
              "         b'actors', b'needed', b'to', b'be', b'more', b'handsome', b'.',\n",
              "         b'.', b'.', b'the', b'biggest', b'problem', b'i', b'had', b'was',\n",
              "         b'that', b'nevsky', b'was', b'just', b'too', b'normal', b',',\n",
              "         b'not', b'sexy', b'enough', b'.', b'i', b'think', b'for', b'most',\n",
              "         b'guys', b',', b'sascha', b'will', b'be', b'hot', b'enough', b',',\n",
              "         b'but', b'for', b'us', b'ladies', b'that', b'are', b'fans', b'of',\n",
              "         b'action', b',', b'nevsky', b'just', b'doesn', b\"'\", b't', b'cut',\n",
              "         b'it', b'.', b'overall', b',', b'this', b'movie', b'was', b'fine',\n",
              "         b',', b'i', b'didn', b\"'\", b't', b'love', b'it', b'nor', b'did',\n",
              "         b'i', b'hate', b'it', b',', b'just', b'found', b'it', b'to', b'be',\n",
              "         b'another', b'normal', b'action', b'flick', b'.'], dtype=object)},\n",
              " array([0.], dtype=float32))"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 4
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jBU39UeFty9S",
        "colab_type": "text"
      },
      "source": [
        " Now, let's build a formal input function that we can pass to the `train()` method of a TensorFlow Estimator object."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "5_C5-ueNYIn_",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Create an input_fn that parses the tf.Examples from the given files,\n",
        "# and splits them into features and targets.\n",
        "def _input_fn(input_filenames, num_epochs=None, shuffle=True):\n",
        "  \n",
        "  # Same code as above; create a dataset and map features and labels.\n",
        "  ds = tf.data.TFRecordDataset(input_filenames)\n",
        "  ds = ds.map(_parse_function)\n",
        "\n",
        "  if shuffle:\n",
        "    ds = ds.shuffle(10000)\n",
        "\n",
        "  # Our feature data is variable-length, so we pad and batch\n",
        "  # each field of the dataset structure to whatever size is necessary.     \n",
        "  ds = ds.padded_batch(25, ds.output_shapes)\n",
        "  \n",
        "  ds = ds.repeat(num_epochs)\n",
        "\n",
        "  \n",
        "  # Return the next batch of data.\n",
        "  features, labels = ds.make_one_shot_iterator().get_next()\n",
        "  return features, labels"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Y170tVlrLS8c",
        "colab_type": "text"
      },
      "source": [
        " ## Task 1: Use a Linear Model with Sparse Inputs and an Explicit Vocabulary\n",
        "\n",
        "For our first model, we'll build a [`LinearClassifier`](https://www.tensorflow.org/api_docs/python/tf/estimator/LinearClassifier) model using 50 informative terms; always start simple!\n",
        "\n",
        "The following code constructs the feature column for our terms. The [`categorical_column_with_vocabulary_list`](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list) function creates a feature column with the string-to-feature-vector mapping."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "B5gdxuWsvPcx",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# 50 informative terms that compose our model vocabulary. \n",
        "informative_terms = (\"bad\", \"great\", \"best\", \"worst\", \"fun\", \"beautiful\",\n",
        "                     \"excellent\", \"poor\", \"boring\", \"awful\", \"terrible\",\n",
        "                     \"definitely\", \"perfect\", \"liked\", \"worse\", \"waste\",\n",
        "                     \"entertaining\", \"loved\", \"unfortunately\", \"amazing\",\n",
        "                     \"enjoyed\", \"favorite\", \"horrible\", \"brilliant\", \"highly\",\n",
        "                     \"simple\", \"annoying\", \"today\", \"hilarious\", \"enjoyable\",\n",
        "                     \"dull\", \"fantastic\", \"poorly\", \"fails\", \"disappointing\",\n",
        "                     \"disappointment\", \"not\", \"him\", \"her\", \"good\", \"time\",\n",
        "                     \"?\", \".\", \"!\", \"movie\", \"film\", \"action\", \"comedy\",\n",
        "                     \"drama\", \"family\")\n",
        "\n",
        "terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_list(key=\"terms\", vocabulary_list=informative_terms)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eTiDwyorwd3P",
        "colab_type": "text"
      },
      "source": [
        " Next, we'll construct the `LinearClassifier`, train it on the training set, and evaluate it on the evaluation set. After you read through the code, run it and see how you do."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HYKKpGLqLS8d",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 611
        },
        "outputId": "50c7c5ad-0543-4d82-c87c-d6cd4beebf97"
      },
      "source": [
        "my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)\n",
        "my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)\n",
        "\n",
        "feature_columns = [ terms_feature_column ]\n",
        "\n",
        "\n",
        "classifier = tf.estimator.LinearClassifier(\n",
        "  feature_columns=feature_columns,\n",
        "  optimizer=my_optimizer,\n",
        ")\n",
        "\n",
        "classifier.train(\n",
        "  input_fn=lambda: _input_fn([train_path]),\n",
        "  steps=1000)\n",
        "\n",
        "evaluation_metrics = classifier.evaluate(\n",
        "  input_fn=lambda: _input_fn([train_path]),\n",
        "  steps=1000)\n",
        "print(\"Training set metrics:\")\n",
        "for m in evaluation_metrics:\n",
        "  print(m, evaluation_metrics[m])\n",
        "print(\"---\")\n",
        "\n",
        "evaluation_metrics = classifier.evaluate(\n",
        "  input_fn=lambda: _input_fn([test_path]),\n",
        "  steps=1000)\n",
        "\n",
        "print(\"Test set metrics:\")\n",
        "for m in evaluation_metrics:\n",
        "  print(m, evaluation_metrics[m])\n",
        "print(\"---\")"
      ],
      "execution_count": 7,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "\n",
            "WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.\n",
            "For more information, please see:\n",
            "  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md\n",
            "  * https://github.com/tensorflow/addons\n",
            "If you depend on functionality not listed there, please file an issue.\n",
            "\n",
            "Training set metrics:\n",
            "accuracy 0.77796\n",
            "accuracy_baseline 0.5\n",
            "auc 0.8696172\n",
            "auc_precision_recall 0.8638412\n",
            "average_loss 0.46821982\n",
            "label/mean 0.5\n",
            "loss 11.705495\n",
            "precision 0.8160648\n",
            "prediction/mean 0.43909755\n",
            "recall 0.71768\n",
            "global_step 1000\n",
            "---\n",
            "Test set metrics:\n",
            "accuracy 0.7734\n",
            "accuracy_baseline 0.5\n",
            "auc 0.8669497\n",
            "auc_precision_recall 0.86137354\n",
            "average_loss 0.47071296\n",
            "label/mean 0.5\n",
            "loss 11.767824\n",
            "precision 0.812072\n",
            "prediction/mean 0.43893698\n",
            "recall 0.71144\n",
            "global_step 1000\n",
            "---\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "J0ubn9gULS8g",
        "colab_type": "text"
      },
      "source": [
        " ## Task 2: Use a Deep Neural Network (DNN) Model\n",
        "\n",
        "The above model is a linear model, and it works quite well. But can we do better with a DNN model?\n",
        "\n",
        "Let's swap in a [`DNNClassifier`](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNClassifier) for the `LinearClassifier`. Run the following cell, and see how you do."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "jcgOPfEALS8h",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 485
        },
        "outputId": "89ed10fa-d3d7-45fe-b503-a95f4137f684"
      },
      "source": [
        "##################### Here's what we changed ##################################\n",
        "classifier = tf.estimator.DNNClassifier(                                      #\n",
        "  feature_columns=[tf.feature_column.indicator_column(terms_feature_column)], #\n",
        "  hidden_units=[20,20],                                                       #\n",
        "  optimizer=my_optimizer,                                                     #\n",
        ")                                                                             #\n",
        "###############################################################################\n",
        "\n",
        "try:\n",
        "  classifier.train(\n",
        "    input_fn=lambda: _input_fn([train_path]),\n",
        "    steps=1000)\n",
        "\n",
        "  evaluation_metrics = classifier.evaluate(\n",
        "    input_fn=lambda: _input_fn([train_path]),\n",
        "    steps=1)\n",
        "  print(\"Training set metrics:\")\n",
        "  for m in evaluation_metrics:\n",
        "    print(m, evaluation_metrics[m])\n",
        "  print(\"---\")\n",
        "\n",
        "  evaluation_metrics = classifier.evaluate(\n",
        "    input_fn=lambda: _input_fn([test_path]),\n",
        "    steps=1)\n",
        "\n",
        "  print(\"Test set metrics:\")\n",
        "  for m in evaluation_metrics:\n",
        "    print(m, evaluation_metrics[m])\n",
        "  print(\"---\")\n",
        "except ValueError as err:\n",
        "  print(err)"
      ],
      "execution_count": 8,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Training set metrics:\n",
            "accuracy 0.84\n",
            "accuracy_baseline 0.6\n",
            "auc 0.9533333\n",
            "auc_precision_recall 0.9394331\n",
            "average_loss 0.33938858\n",
            "label/mean 0.4\n",
            "loss 8.4847145\n",
            "precision 0.71428573\n",
            "prediction/mean 0.47088137\n",
            "recall 1.0\n",
            "global_step 1000\n",
            "---\n",
            "Test set metrics:\n",
            "accuracy 0.88\n",
            "accuracy_baseline 0.56\n",
            "auc 0.85389614\n",
            "auc_precision_recall 0.7883935\n",
            "average_loss 0.3904149\n",
            "label/mean 0.56\n",
            "loss 9.760372\n",
            "precision 0.8235294\n",
            "prediction/mean 0.5631626\n",
            "recall 1.0\n",
            "global_step 1000\n",
            "---\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cZz68luxLS8j",
        "colab_type": "text"
      },
      "source": [
        " ## Task 3: Use an Embedding with a DNN Model\n",
        "\n",
        "In this task, we'll implement our DNN model using an embedding column. An embedding column takes sparse data as input and returns a lower-dimensional dense vector as output."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AliRzhvJLS8k",
        "colab_type": "text"
      },
      "source": [
        " **NOTE**: *An embedding_column is usually the most computationally efficient option for training a model on sparse data. In an [optional section](#scrollTo=XDMlGgRfKSVz) at the end of this exercise, we'll discuss in more depth the implementational differences between using an `embedding_column` and an `indicator_column`, and the tradeoffs of selecting one over the other.*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "F-as3PtALS8l",
        "colab_type": "text"
      },
      "source": [
        " In the following code, do the following:\n",
        "\n",
        "* Define the feature columns for the model using an `embedding_column` that projects the data into 2 dimensions (see the [TF docs](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) for more details on the function signature of `embedding_column`)\n",
        "* Define a `DNNClassifier` with the following specifications:\n",
        "  * Two hidden layers of 20 units each\n",
        "  * Adagrad optimization with a learning rate of 0.1\n",
        "  * A `gradient_clip_norm` of 5.0"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UlPZ-Q9bLS8m",
        "colab_type": "text"
      },
      "source": [
        " **NOTE**: *In practice, we might project to a dimensionality higher than 2, like 50 or 100. But for now, 2 dimensions is easy to visualize.*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mNCLhxsXyOIS",
        "colab_type": "text"
      },
      "source": [
        " ### Hint"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "L67xYD7hLS8m",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Here's an example code snippet you might use to define the feature columns:\n",
        "\n",
        "terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)\n",
        "feature_columns = [ terms_embedding_column ]"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iv1UBsJxyV37",
        "colab_type": "text"
      },
      "source": [
        " ### Complete the Code Below"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "5PG_yhNGLS8u",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "########################## YOUR CODE HERE ######################################\n",
        "terms_embedding_column = # Define the embedding column\n",
        "feature_columns = # Define the feature columns\n",
        "\n",
        "classifier = # Define the DNNClassifier\n",
        "################################################################################\n",
        "\n",
        "classifier.train(\n",
        "  input_fn=lambda: _input_fn([train_path]),\n",
        "  steps=1000)\n",
        "\n",
        "evaluation_metrics = classifier.evaluate(\n",
        "  input_fn=lambda: _input_fn([train_path]),\n",
        "  steps=1000)\n",
        "print(\"Training set metrics:\")\n",
        "for m in evaluation_metrics:\n",
        "  print(m, evaluation_metrics[m])\n",
        "print(\"---\")\n",
        "\n",
        "evaluation_metrics = classifier.evaluate(\n",
        "  input_fn=lambda: _input_fn([test_path]),\n",
        "  steps=1000)\n",
        "\n",
        "print(\"Test set metrics:\")\n",
        "for m in evaluation_metrics:\n",
        "  print(m, evaluation_metrics[m])\n",
        "print(\"---\")"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eQS5KQzBybTY",
        "colab_type": "text"
      },
      "source": [
        " ### Solution\n",
        "\n",
        "Click below for the solution."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "R5xOdYeQydi5",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 485
        },
        "outputId": "89bc59c7-ef43-49ef-ef02-289bd837169d"
      },
      "source": [
        "########################## SOLUTION CODE ########################################\n",
        "terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)\n",
        "feature_columns = [ terms_embedding_column ]\n",
        "\n",
        "my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)\n",
        "my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)\n",
        "\n",
        "classifier = tf.estimator.DNNClassifier(\n",
        "  feature_columns=feature_columns,\n",
        "  hidden_units=[20,20],\n",
        "  optimizer=my_optimizer\n",
        ")\n",
        "#################################################################################\n",
        "\n",
        "classifier.train(\n",
        "  input_fn=lambda: _input_fn([train_path]),\n",
        "  steps=1000)\n",
        "\n",
        "evaluation_metrics = classifier.evaluate(\n",
        "  input_fn=lambda: _input_fn([train_path]),\n",
        "  steps=1000)\n",
        "print(\"Training set metrics:\")\n",
        "for m in evaluation_metrics:\n",
        "  print(m, evaluation_metrics[m])\n",
        "print(\"---\")\n",
        "\n",
        "evaluation_metrics = classifier.evaluate(\n",
        "  input_fn=lambda: _input_fn([test_path]),\n",
        "  steps=1000)\n",
        "\n",
        "print(\"Test set metrics:\")\n",
        "for m in evaluation_metrics:\n",
        "  print(m, evaluation_metrics[m])\n",
        "print(\"---\")"
      ],
      "execution_count": 9,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Training set metrics:\n",
            "accuracy 0.78516\n",
            "accuracy_baseline 0.5\n",
            "auc 0.8695219\n",
            "auc_precision_recall 0.85847235\n",
            "average_loss 0.45467508\n",
            "label/mean 0.5\n",
            "loss 11.366877\n",
            "precision 0.75171244\n",
            "prediction/mean 0.52686185\n",
            "recall 0.8516\n",
            "global_step 1000\n",
            "---\n",
            "Test set metrics:\n",
            "accuracy 0.78204\n",
            "accuracy_baseline 0.5\n",
            "auc 0.86866707\n",
            "auc_precision_recall 0.8578871\n",
            "average_loss 0.4563216\n",
            "label/mean 0.5\n",
            "loss 11.40804\n",
            "precision 0.7503728\n",
            "prediction/mean 0.5258527\n",
            "recall 0.84528\n",
            "global_step 1000\n",
            "---\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aiHnnVtzLS8w",
        "colab_type": "text"
      },
      "source": [
        " ## Task 4: Convince Yourself There's Actually an Embedding in There\n",
        "\n",
        "The above model used an `embedding_column`, and it seemed to work, but this doesn't tell us much about what's going on internally. How can we check that the model is actually using an embedding inside?\n",
        "\n",
        "To start, let's look at the tensors in the model:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "h1jNgLdQLS8w",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 287
        },
        "outputId": "6518c75d-de99-4a8d-b05e-08dd96fd0e6f"
      },
      "source": [
        "classifier.get_variable_names()"
      ],
      "execution_count": 10,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "['dnn/hiddenlayer_0/bias',\n",
              " 'dnn/hiddenlayer_0/bias/t_0/Adagrad',\n",
              " 'dnn/hiddenlayer_0/kernel',\n",
              " 'dnn/hiddenlayer_0/kernel/t_0/Adagrad',\n",
              " 'dnn/hiddenlayer_1/bias',\n",
              " 'dnn/hiddenlayer_1/bias/t_0/Adagrad',\n",
              " 'dnn/hiddenlayer_1/kernel',\n",
              " 'dnn/hiddenlayer_1/kernel/t_0/Adagrad',\n",
              " 'dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights',\n",
              " 'dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights/t_0/Adagrad',\n",
              " 'dnn/logits/bias',\n",
              " 'dnn/logits/bias/t_0/Adagrad',\n",
              " 'dnn/logits/kernel',\n",
              " 'dnn/logits/kernel/t_0/Adagrad',\n",
              " 'global_step']"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 10
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Sl4-VctMLS8z",
        "colab_type": "text"
      },
      "source": [
        " OK, we can see that there is an embedding layer in there: `'dnn/input_from_feature_columns/input_layer/terms_embedding/...'`. (What's interesting here, by the way, is that this layer is trainable along with the rest of the model, just as any hidden layer is.)\n",
        "\n",
        "Is the embedding layer the correct shape? Run the following code to find out."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JNFxyQUiLS80",
        "colab_type": "text"
      },
      "source": [
        " **NOTE**: *Remember that in our case, the embedding is a matrix that allows us to project a 50-dimensional vector down to 2 dimensions.*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1xMbpcEjLS80",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 35
        },
        "outputId": "3f9be787-2eb8-4cf3-b34c-75a9ffbb9a7a"
      },
      "source": [
        "classifier.get_variable_value('dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights').shape"
      ],
      "execution_count": 11,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "(50, 2)"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 11
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MnLCIogjLS82",
        "colab_type": "text"
      },
      "source": [
        " Take some time to manually check the various layers and their shapes to make sure everything is connected together the way you would expect."
      ]
    },
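    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        " One way to do that check (a sketch, assuming the trained `classifier` from the earlier tasks is still in scope) is to loop over the model's variable names and print each shape:\n",
        "\n",
        "```python\n",
        "for name in classifier.get_variable_names():\n",
        "  print(name, classifier.get_variable_value(name).shape)\n",
        "```"
      ]
    },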
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rkKAaRWDLS83",
        "colab_type": "text"
      },
      "source": [
        " ## Task 5: Examine the Embedding\n",
        "\n",
        "Let's now take a look at the actual embedding space, and see where the terms end up in it. Do the following:\n",
        "1. Run the following code to see the embedding we trained in **Task 3**. Do things end up where you expected?\n",
        "\n",
        "2. Re-train the model by rerunning the code in **Task 3**, and then run the embedding visualization below again. What stays the same? What changes?\n",
        "\n",
        "3. Finally, re-train the model again using only 10 steps (which will yield a terrible model). Run the embedding visualization below again. What do you see now, and why?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "s4NNu7KqLS84",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 269
        },
        "outputId": "0a1e61e4-ee67-4c6e-dc77-c5b3415bcc1a"
      },
      "source": [
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "\n",
        "# Do a little setup to make sure the plot displays nicely. (Set the figure\n",
        "# size before any plotting call, since rcParams changes don't resize a\n",
        "# figure that has already been created.)\n",
        "plt.rcParams[\"figure.figsize\"] = (15, 15)\n",
        "\n",
        "embedding_matrix = classifier.get_variable_value('dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights')\n",
        "\n",
        "for term_index in range(len(informative_terms)):\n",
        "  # Create a one-hot encoding for our term. It has 0s everywhere, except for\n",
        "  # a single 1 in the coordinate that corresponds to that term.\n",
        "  term_vector = np.zeros(len(informative_terms))\n",
        "  term_vector[term_index] = 1\n",
        "  # We'll now project that one-hot vector into the embedding space.\n",
        "  # (This matmul is equivalent to looking up row term_index of the matrix.)\n",
        "  embedding_xy = np.matmul(term_vector, embedding_matrix)\n",
        "  plt.text(embedding_xy[0],\n",
        "           embedding_xy[1],\n",
        "           informative_terms[term_index])\n",
        "\n",
        "plt.xlim(1.2 * embedding_matrix.min(), 1.2 * embedding_matrix.max())\n",
        "plt.ylim(1.2 * embedding_matrix.min(), 1.2 * embedding_matrix.max())\n",
        "plt.show()"
      ],
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXYAAAD8CAYAAABjAo9vAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAIABJREFUeJzs3XdUVNf2B/DvgHQQUNRYkKICAlPo\nICBNQIMRFVGx9xKxJbZoImiMIWILJkps4LPyxFijeYqCAuoTEKyggGDBQlE6hrZ/f8yP+0BBEAcL\nns9arOXM3HvuufNeNpdT9uYRERiGYZjWQ+pDd4BhGIaRLBbYGYZhWhkW2BmGYVoZFtgZhmFaGRbY\nGYZhWhkW2BmGYVoZFtgZhmFaGRbYGYZhWhkW2BmGYVqZNh/iohoaGqStrf0hLs0wDPPJSkhIyCWi\nDo0d90ECu7a2NuLj4z/EpRmGYT5ZPB7vflOOY0MxDMMwrQwL7AzDMK0MC+wMwzCtDAvsDMMwrQwL\n7AzDMK0MC+wMwzCtDAvsDMMwrQwL7AzTgnJycmBlZQUTExNER0e/1blJSUk4efJkC/WMac1YYGeY\nFlJZWYmzZ8+Cz+cjMTER9vb2b3U+C+xMc7HAzjBvkJmZCQMDA4wePRq9e/fGsGHDUFpaioSEBDg4\nOMDMzAzu7u548uQJAMDR0RHz5s2Dubk5fv31VyxatAhHjx6FSCRCWVkZTp8+DRsbG5iamsLb2xvF\nxcUAgLi4OPTp0wdCoRCWlpYoKCjA8uXLERYWBpFIhLCwsA/5NTCfGiJ67z9mZmbEMJ+CjIwMAkAx\nMTFERDRx4kRas2YN2djYUHZ2NhERHThwgCZOnEhERA4ODjRz5kzu/JCQEJo1axYREeXk5JC9vT0V\nFxcTEVFAQACtWLGC/vnnH9LR0aErV64QEVFBQQFVVFTUOZdhiIgAxFMTYuwHyRXDMJ8STU1N2Nra\nAgDGjBmD1atX4+bNm3B1dQUAVFVVoXPnztzxI0aMqLedy5cv4/bt21xb5eXlsLGxwZ07d9C5c2dY\nWFgAANq2bduSt8N8BlhgZ5hG8Hi8Oq9VVFRgZGSES5cu1Xu8kpJSve8TEVxdXbF///4679+4cUMy\nHWWY/8fG2BmmEQ8ePOCC+L59+2BtbY2cnBzuvYqKCty6davRdqytrREbG4u0tDQAQElJCe7evQt9\nfX08efIEcXFxAICioiJUVlZCRUUFRUVFLXRXTGvGAjvDNEJfXx+///47evfujRcvXmD27NkIDw/H\n4sWLIRQKIRKJcPHixUbb6dChA0JDQ+Hj4wOBQAAbGxukpKRAVlYWYWFhmD17NoRCIVxdXfHy5Us4\nOTnh9u3bbPKUeWs88Xj8+2Vubk4sHzvzKcjMzMTAgQNx8+bND90VhgGPx0sgIvPGjmNP7AzDMK0M\nC+wM8wba2trsaZ355LxzYOfxeJo8Hi+Sx+Pd5vF4t3g83lxJdIxhmKYLDQ2Fr68vAMDf3x9r165t\nVjuZmZnYt2+fJLvGfACSeGKvBPAtERkCsAYwi8fjGUqgXYZh3jMW2FuHdw7sRPSEiK7+/7+LACQD\n6Pqu7TJMa7Rnzx5YWlpCJBJh+vTpuH//Pnr16oXc3FxUV1fD3t4ep0+fBgD861//gkAggFAoxNix\nYwGIk4p5eXnBwsICFhYWiI2NfeP10tPT0b9/f5iZmcHe3h4pKSkAgAkTJmDOnDno06cPdHV1ER4e\nDgBYsmQJoqOjIRKJsGHDhhb8JpiWJNENSjweTxuACYD/1vPZNADTAKB79+6SvCzDfBKSk5MRFhaG\n2NhYyMjI4Ouvv8b58+exePFizJw5E5aWljA0NISbmxtu3bqFVatW4eLFi9DQ0MDz588BAHPnzsX8\n+fNhZ2eHBw8ewN3dHcnJyQ1e
c9q0aQgODkavXr3w3//+F19//TXOnTsHAHjy5AliYmKQkpKCQYMG\nYdiwYQgICMDatWtx4sSJ9/KdMC1DYoGdx+MpAzgEYB4RFb76ORFtBbAVEC93lNR1GeZTcfbsWSQk\nJHCpA8rKytCxY0f4+/vj4MGDCA4ORlJSEgDg3Llz8Pb2hoaGBgCgXbt2AICIiAjcvn2ba7OwsJBL\nJPaq4uJiXLx4Ed7e3tx7//zzD/fvwYMHQ0pKCoaGhnj27Jlkb5b5oCQS2Hk8ngzEQX0vEf0piTYZ\nprUhIowfPx4///xznfdLS0vx6NEjAOJgrKKi0mAb1dXVuHz5MuTl5Ru9XnV1NdTU1LhfFq+Sk5Or\n0zem9ZDEqhgegB0Akolo/bt3iWFaJxcXF4SHhyM7OxsA8Pz5c9y/fx+LFy/G6NGjsXLlSkydOhUA\n4OzsjIMHDyIvL487FgDc3NywadMmrs2GgjYgTiamo6ODgwcPAhAH72vXrr2xj5JMY9CnTx+JtMO8\nPUmsirEFMBaAM4/HS/r/ny8l0C7DtCqGhoZYtWoV3NzcIBAI4OrqiszMTMTFxXHBXVZWFiEhITAy\nMsKyZcvg4OAAoVCIb775BgAQFBSE+Ph4CAQCGBoaIjg4+I3X3Lt3L3bs2AGhUAgjIyMcPXr0jccL\nBAJIS0tDKBS+8+RpU9IsMC2kKbl9Jf3D8rEzzIfj6elJpqamZGhoSH/88QcRESkpKdGCBQvI0NCQ\nXFxc6L///S85ODiQjo4OHT16lIjEuent7OzIxMSETExMKDY2loiIfvjhBxIKhSQUCqlLly40YcIE\nrk0iosjISHJwcCAvLy/S19enUaNGUXV1NRER/fXXX6Svr0+mpqY0e/Zs8vDweN9fxycFTczHzgI7\nw3xm8vLyiIgoOTmZ5OTkKDc3lwDQyZMniYi4QFteXk5JSUkkFAqJiKikpITKysqIiMjX15e6du1a\np90XL16QsbExKSgoEFHdwN62bVt6+PAhVVVVkbW1NUVHR1NZWRl169aN7t27R0REI0eOZIG9EU0N\n7CylAMN8ZoKCgiAUCjFkyBBUVFQgNTUVsrKy6N+/PwBg+PDhGDp0KGRkZMDn85GZmQlAnJ546tSp\n4PP5OHjwYJ2VNESEMWPG4JtvvoGU1OthxdLSEt26dYOUlBREIhEyMzORkpICXV1d6OjoAAB8fHxa\n/uY/EyywM8xnJCoqChEREbh06RJOnToFeXl5rFq1ChUVFXB3d0dZWRmOHTuGO3fuAAD+/vtvFBYW\nwszMDG5ubrh48SKuXbuGadOmoaqqCo6OjtDV1cWAAQPQrVs3TJw4kbvWP//8gyNHjgAQr8AZPXo0\njh49CmlpaVRWVn6Q+/9csMDOMJ+RgoICqKurQ1FREenp6SgtLcXgwYOhqKgINTU1HDp0iDv25cuX\nmD59OuTl5ZGQkID8/HzIyclBSkoK165dAxHhP//5D3788UecPXsW69atq3OtNm3aIDQ0FID4af/i\nxYvw8PDgPtfX18e9e/e4vwhYznnJYYGdYT4j/fv3R2VlJXr37o1ffvkFCgoK6NmzJwDAzMyMC7IA\nuKGSmqGV+fPn49GjRxAKhcjNzYWMjAzk5OSwfft2AICFhQVEIhHKy8sBANLS0khNTUV+fj4eP34M\nLy8vtGnzv60zCgoK2Lx5M5fyQEVFBaqqqu/pm2jdWGBnGAlJSkrCyZMnP3Q33khOTg6nTp1CcnIy\ntm7dCl1dXTg6OqK4uJgbIhGJRBg0aBB3Ts3O1i5duqBv3764du0aXF1dsXr1agBAZGQk9PX1cfLk\nSSQlJUFWVpY7b9y4ccjIyICioiImTZoEAPjtt98wYcIEAICTkxNSUlIQHx8PKSkpmJs3WkOCaQIW\n2BmmHs0ZA/4UAvvbkMRQyYQJE7Bx40YA4nX8r9q2bRtEIhGMjIxQUFCA6dOnv1OfGTGJJgFjmE
+B\nsrIyFi9ejD179qBDhw7Q1NSEmZkZTpw4AZFIhJiYGPj4+GDcuHGYMWMGHjx4AADYuHEjbG1tceXK\nFcydOxcvX76EgoICQkJCoKOjg+XLl6OsrAz79u3D+PHjP/nsiLWHSpSUlLgcN2+jU6dO6N27NwYP\nHlzv5/Pnz8f8+fPftavMq5qyJlLSP2wdO/MhKSgokFAopLKyMiosLKSePXtSYGAgOTg40MyZM7nj\nfHx8KDo6moiI7t+/TwYGBkREVFBQQBUVFUREdObMGRo6dCgREYWEhNCsWbNo/PjxdPDgwfd8Vy2j\nqKiIiIiqq6tp5syZtH79+rc6v6SkhHR1dSk/P78luvfZQRPXsbMnduazU1VVBU9PT8jJyeGHH37A\nixcvsG7dOqirq2PFihUYOXIkxo4dy2VSfPDgAdq2bYvy8nLMnTsXUVFRyMzMhJKSEtq3b4/y8nL4\n+vri0KFDaNOmDfh8/oe+RYnZtm0bdu3ahfLycpiYmLzVUElERAQmT56M+fPns0nR94xHHyCrm7m5\nOcXHx7/36zIMIJ5AXLJkCQQCAYKDg2FkZARVVVX88ssvOHLkCMrKynDkyBH89ddfuHfvHoyMjHD3\n7l3s3r0b2dnZSEtLg0AgwL59+7BhwwZ4eXlBKBTCx8cH0dHROHz4MLZv345hw4Z96FtlWhkej5dA\nRI3OMLMnduazIy0tjePHj+P58+cYMmQINm7ciGnTpkFVVRW3bt3CrFmzMHfuXLi4uGDOnDno27cv\nFBQU8O9//xsPHjzAs2fPcP78eQDA1q1b8fLlS/j4+EBVVRXV1dVwdnb+wHfIfO7YqhjmsyMlJYVB\ngwZhz5492LBhA/h8fp2hAnl5eTg6OuKrr75CZGQkYmNjYWhoiHv37mHTpk34+++/ISMjAzU1NWhp\naXHnOTk54fbt2zh37lyjJesYpiWxJ3bms7RgwQIIBAJs3rwZmZmZ0NXVhby8PEaPHg0AGDFiBLZv\n347q6mqkp6dDVlYWW7duxZYtW3Dw4EHcvXsXd+/eRdeuXWFqaoo//vgD48ePx7Fjx2BoaAhbW9sP\nfIfM54wFduazNG3aNNy6dQsPHz6EjIwM5s2bhzVr1uCLL74AIC5oMXbsWHh6enIbbqZMmYLMzEyY\nmpqCiNChQwccOXIEQ4YMwblz52BoaIju3bvDxsbmQ94aw7DljgzzvmzZsoV27dol0TatrKxIV1f3\ntfdrll7W+OGHH+jMmTNEROTg4EBxcXFERKSlpUU5OTlERGRjY9PsfoSEhFBWVlazz2eaBmy5I8N8\nXGbMmPFBrltVVYWVK1c2ety7VDwKDQ2FsbExunTp0uw2GMlhk6cM8w727NkDS0tLiEQiTJ8+HVVV\nVVBWVsayZcsgFAphbW3N5S339/fH2rVrAYjTD1hbW0MgEGDIkCF48eIF0tPTYWpqyrWdmprKvV65\nciUsLCxgbGyMadOmiavk/L/nz59DXV0dcnJycHZ2RmlpKRYsWICLFy/C1NQUO3bsgJaWFnr27AkL\nCwsUFBTUey/KysoAxDleXFxcYGpqCj6fz5XTy8zMRO/evTF16lQYGRnBzc0NZWVlCA8PR3x8PEaP\nHg2RSISysjLJf9HMW2GBnWGaKTk5GWFhYYiNjUVSUhKkpaWxd+9elJSUwNraGteuXUPfvn2xbdu2\n184dN24cfvnlF1y/fh18Ph8rVqxAjx49oKqqyhWoDgkJ4fKb+/r6Ii4uDjdv3kRZWRlOnDjBtZWf\nn48TJ07gzJkzSExMxObNmwGIV/dcvXoVUVFRMDQ0REBAAA4dOoS7d+++8b7k5eVx+PBhXL16FZGR\nkfj222+5XySpqamYNWsWbt26xaX5HTZsGMzNzbF3714kJSVBQUFBIt8v03xsKIZhmuns2bNISEjg\ncqiUlZWhY8eOkJWVxcCBAwGIU+GeOXOmznkFBQXIz8+Hg4
MDAGD8+PHw9vYGIJ6gDQkJwfr16xEW\nFoYrV64AEGdQXLNmDUpLS/H8+XMYGRnhq6++AgB06NCBW4UjKyuLyMhIAECvXr0AiHeAVlZWIjk5\nGWpqaqisrERpaWmD90VEWLp0KS5cuAApKSlkZWVxf3Xo6OhAJBJx91Y7zS/z8WCBnWGaiYgwfvx4\n/Pzzz3XeX7t2LXg8HgA0WC2oqqoKxsbGuHnzZp33vby8sGLFCjg7O8PMzAzt27fHy5cv8fXXXyM+\nPh6amprw9/fHy5cvuXNqrlX7dVVVFe7fvw8AqK6uhpWVFXg8Hk6ePIm+fftiwYIFKC8vR0lJyWt9\n27t3L3JycjBnzhwkJSUhPz+fu56cnBx3nLS0NBt2+UixoRiGgbgOaO/evbl17E3h4uKC8PBwLF26\nFIB4rLsmmL6Jqqoq2rZti5KSEoSGhmLz5s3c07uvry8sLCwwc+ZMbhimJqhqaGiguLgY4eHhddrL\nzs7GpUuXEBMTg4qKCjg5OaG6uhr37t0DIF66mZOTw+VDz87OBiAe51dSUnqtfwUFBejYsSPatGmD\nR48eNemeVFRUUFRU1OhxzPvBAjvDANi8eTPOnDmDvXv3NvkcQ0NDrFq1Cr/88gsEAgFcXV3x5MmT\nN55T83S9bt06PH36FFOmTMGWLVu4MnUzZ87E7du3kZubiw0bNuDJkydQU1ODubk51NXV0alTJxQV\nFaGiogIAkJ6eDiUlJXh4eMDFxQUFBQWYOXMmSkpK8PTpU4hEIvTu3RtZWVkYO3Ys9PX1kZaWhtu3\nb0MkEqGyshImJibIzc0FAMTHx+Pf//434uPj8f333+POnTswMDBo9LuYMGECZsyYwSZPPxZNWRMp\n6R+2jp35mEyfPp1kZGTI2NiYAgICyNramkQiEdnY2FBKSgoRiddpDxkyhNzd3alnz560cOFCIiJa\nvHgxSUlJkVAopFGjRhERkaenJ5mampKhoSH98ccfRERUWVlJBgYG1LVrVzI2Nqbvv/+eAJCCggLp\n6elRu3bt6KeffqK2bdvS1KlT6fvvv6clS5ZQu3btSCAQkL29PdffZcuWUVBQEBHRaymClZSUiIgo\nMjKSPDw8uPdrv371s9pr2ePi4sjBwYG759pr4ZkPD2wdO8M0TXBwMP7++29ERkZCVlYW3377Ldq0\naYOIiAgsXbqUK/CclJSE6dOnIyMjA7///jtmz56NgIAA/Pbbb9xKFgDYuXMn2rVrh7KyMlhYWMDL\nywuLFy/G48ePce/ePbRv3x7Xr1/Hrl27oKuri7Vr16KwsBCrV69GYWEhdu3aBR0dHaSnp8Pa2hrR\n0dE4fvw47O3tkZ+fj+LiYri7u3+or4v5BLChGIappaCgAN7e3jA2Nsb8+fNx69Yt7jMXFxeEhITg\n7Nmz6Nu3b4Njz0FBQdwa9ocPHyI1NRWBgYHQ0NCAv78//v77bygrK7826amiogIlJSV069YNgYGB\nGD58OKKjowEAY8eOxZgxY3Djxg3Iycnh0aNHAIA2bdqguroagHiStKaQ9Kt27tzJDbe8qnYbtSdl\nmU8XC+wMU8sPP/wAJycn3Lx5E8ePH68T6C5duoR79+5hwIABiIiIqDMeb2xsjMzMTBw4cABr1qyB\nqakpKisrIS0tjYKCAqirq+PatWtwdHREcHAwlixZggcPHqCwsBAAsG/fPlhbW6OyspK7ZnV1NfeL\npU2bNhgyZAgqKiq4pYcAoK2tjYSEBADAsWPHuLH32pOZVVVVmDRpEjQ0NOq959pt1Px10hz5+fnc\nGvqmmjBhwmuTwcy7Y4Gd+eQ1Z0VLfQICApCamoquXbvC0dERq1evrvO5o6MjunTpgsjISOjo6HDv\ny8jIcBt4CgsLUVZWhrlz5+LQoUMoKCjAhQsXkJubi+rqanh5eWHVqlW4efMm9PX1kZ2djaFDh+LF\nixeYPXs29PT0kJeXh3
HjxuHAgQPw9PREWVkZdHR0YGxsDFtbWygqKnLXTk1Nxe+//w55eXkEBgZy\nq1y8vLyQmZkJBQUFTJw4EQEBAdzEbkJCAqKjo8Hn8zFp0iR89913mDt3LuTk5PDPP/8AEE+iBgQE\nAADOnz8PkUgEkUgEExOTBle/NCewMy2kKQPxkv5hk6eMJOnr69PDhw/fqY2aCcSLFy9Sr169SElJ\niSZOnEhaWlpE9L+JxJrjevXqRTNmzCAiokWLFpGsrCx5enpSSkoKKSoqkoGBAXl6epKOjg5NmjSJ\nkpKSyMTEhIRCIQmFQjp58iQREYWHh5Oenh4JhUIqLS0lKysrkpKSosTERDp58iSpqamRpqYmde7c\nmZskrZ3EKy8vj4jEk7MODg507do17n5++eUX7v5qJlnLysqoW7dudOfOHSIiGjt2LG3YsKHOd0BU\ndxJ14MCBFBMTQ0TiGqg19V5fNWLECJKXlyehUEgLFiygBQsWkJGRERkbG9OBAweISFw7ddasWaSn\np0cuLi40YMAA7r5WrFhB5ubmZGRkRFOnTqXq6mpKS0sjExMT7hp3796t8/pzgyZOnrLAznzSmrqi\nxdPTk/r160daWlq0adMmWrduHYlEIrKysuKCY+0VJjXBc8eOHTR37lzueu3ataPp06fTjz/+WCdw\n9ujRgzIyMigjI4OMjIy49wMDA8nPz6/J95ORkUE9e/bkXgcEBNCPP/5Yb9+IxBkjTUxMiM/nk4aG\nBu3fv5+IxEE6MzOTa6fm/KSkpDorbCIiImjIkCHcOfUF9p9//pksLS3p119/feMv0Nr3Hh4eTv36\n9aPKykp6+vQpaWpq0uPHj+nQoUPc+1lZWaSqqsrdV83/DkREY8aMoWPHjhERkaOjIyUmJhIR0Xff\nfcetCPocNTWwS2Qohsfj7eTxeNk8Hu9m40czjOQEBwdzwyMzZ85EdHQ0EhMTsXLlSm7jEADcvHkT\nf/75J+Li4rBs2TIoKioiMTERNjY2+Ne//tVg+8OHD8fx48e5seuSkhKMGjUK2trauHr1KgDg6tWr\nyMjIkNg9vbq7s76dqwCQkZGBtWvX4uzZs7h+/To8PDzqzAnUt/noTRqaRF2yZAm2b9+OsrIy2Nra\nIiUlpdG2YmJi4OPjA2lpaXTq1AkODg6Ii4vDhQsXuPe7dOlSp4xgZGQkrKyswOfzce7cOW5+oSbN\nQlVVFcLCwjBq1Ki3uq/PkaTG2EMB9JdQWwzTLG9a0eLk5AQVFRV06NABqqqqXJ4VPp//xnwnysrK\ncHZ2xokTJ5CSkgIigqGhIby8vLicLb/99hv09PQkfj+ZmZlcNsj6FBYWQklJCaqqqnj27BlOnTr1\n2jFRUVF10vHq6+sjMzMTaWlpAIDdu3dzu14bmkRNT08Hn8/H4sWLYWFh0aTA/rZq0iaEh4fjxo0b\nmDp1KvfLxcvLC6dOncKJEye4NAvMm0kksBPRBQDPJdEWwzTXm1a01H4KlpKS4l5LSUk1+ERcY8qU\nKQgNDUVISAg2bNgADQ0NKCgo4PTp07h16xZ27tyJ5ORkaGtrQ1tbu07+lwULFsDf31+yN/r/hEIh\nTExMYGBggFGjRtVbju/VwC4vL4+QkBB4e3uDz+dDSkqKyxPv5+eHuXPnwtzcHNLS0tw5GzduhLGx\nMQQCAWRkZDBgwIB6+1N7JY69vT3CwsJQVVWFnJwcXLhwAZaWlujbty/3/pMnT7iEZW9KmyAvLw93\nd/c6aRaYN3tvG5R4PN40ANMAoHv37u/rssxnpKCgAF27dgUgLvwgKVZWVnj48CGuXr2K69evS6zd\n+rz6i0FNTQ2pqam4evUqiouL8eWXX2LdunX45ptvUFxcDA0NDZw/fx6dO3dGUFAQ1qxZg/Xr18Pa\n2hrFxcUIDg6GtLQ0OnTogMmTJwMQr8dPTEx87dr29vb1pvTdtGnTG/scFBSELVu2wNTU
FLa2tjA2\nNsaAAQMgEAggFArB4/G4soMNlRFUU1PD1KlTYWxsjC+++ILLmAmIV+g8e/YMUlJSkJWVxcWLF9Gn\nT59mfb+fi/cW2IloK4CtAGBubk6NHM4wb23RokUYP348Vq1aBW1tbTx48OC1Y5YvX96sTTjDhw9H\nUlIS1NXVm3T8kSNHoKenB0NDw7e6TlRUFGRlZbnAdefOHezYsQO2traYNGkSfv/9dxw+fBgzZ85E\nVlYWdHR0sGzZMuzcuRMBAQHIyMiAnJwc8vPzUVpainbt2mHSpElYsGDBW99zU23evBkRERHo1q3b\na58FBgbWec3j8fDbb7/V286qVauwatWqOu9VVlbC3NwclpaW0NfXR3R0NJSVlVlgb0xTZlib8gNA\nG8DNphzLVsUwLW3//v2krKwssfY8PDwoIiKiyce/msOlKSoqKsjPz48CAwOJSLzKRFNTk/v87Nmz\n5OLiQioqKtyySWNjY3J1dSUiInd3d/Ly8qLdu3dTUVEREVGd9lpCU1YlWVlZ0c2bN7lzalb15OXl\nkaenJ/H5fLKysuKWavr5+dGYMWOoT58+NHLkSLKzsyMVFRVKSEigTp06UZcuXUgoFNKFCxcoOzub\nhg4dSubm5mRubs4ty2yt8L6XO7LAzjRk165dxOfzSSAQ0JgxYygjI4OcnJyIz+eTs7Mz3b9/n4jE\nwXDGjBlkZWVFOjo6FBkZSRMnTiQDAwMaP348195//vMfsra2JhMTExo2bBgXxE6dOkX6+vpkYmJC\n48ePJyUlJZo8eTLJysqSg4MDlZaW0rhx46hTp06UnZ1NWlpatGTJEhIKhWRmZkYJCQnk5uZGurq6\ntGXLFiIievHiBfXq1YuGDRtGu3fvJgsLCxIKhTRt2jSqrKwkJSUlWrp0KQkEArKysqKnT59SbGws\nqaurk7a2NgmFQkpLS6O0tDRyd3cnU1NTsrOzo+TkZO6ep0+fTpaWljRkyJA6gSssLIw0NDTI0tKS\nRCIRmZqaUv/+/cna2rpOgq7x48fT7Nmzydramjp37kweHh5kYGBAqamp1KFDBwoMDGwwiRkR0fbt\n26lXr15kYWFBU6ZMeevEXzXLJAsKCrg17mfOnKGhQ4cSEdH69etp+fLlRET0+PFj0tPTIyIiX19f\n8vf3JyLxLy2hUEhE4sBuampKpaWlRFQ3admrv6h8fHwoOjqaiIju379PBgYGb9X3T817DewA9gN4\nAqACwCMAk990PAvsn4+bN29Sr169uPXReXl5NHDgQAoNDSUioh07dpCnpycRiQPUiBEjqLq6mo4c\nOUIqKip0/fp1qqqqIlNTU0pMTKScnByyt7en4uJiIhKv816xYgW38ebu3btUXV1NX375JQGgxMRE\n8vf3J6FQSLt376Z+/fqRlZV07+VBAAAgAElEQVQVEYkD0ubNm4mIaN68ecTn86mwsJCys7OpY8eO\nde7j9u3bNHDgQCovLyciopkzZ9KuXbsIALfeeuHChfTjjz9y91L7id3Z2Znu3r1LRESXL18mJycn\n7jgPDw+qrKwkInrtiR0AxcbGEhGRnZ0d2dnZUY8ePWjZsmU0a9YsKi8vJ09PT/Ly8qL09HS6desW\n9ejRgzp37kzXrl2jTp060fLlyykkJIR0dHQoPz+fysrKqHv37vTgwQPKysoiLS0tysvLo/LycrKz\ns2t2YH/w4AENHjyY25Skr69PRESPHj0iQ0NDIiLauHEjLV26lIiIRCIRpaenc+1069aNCgoKyM/P\njwv4RG8O7B06dOD+ehEKhdSlSxfuF31r1NTALpExdiLykUQ7TOtz7tw5eHt7c3lK2rVrh0uXLuHP\nP/8EIE5utWjRIu74r776CjweD3w+H506dQKfzwcAGBkZITMzE48ePcLt27e5FSDl5eWwsbFBSkoK\ndHR0uHJwgwcPRlRUFEQiEdq3b4+tW7dyy/xGjhzJ
XW/QoEEAxMsei4uLoaKiAhUVFW6cWk1NDUDz\ny+AB4uLQFy9e5MrfAeC27gOAt7d3nVUotWlra8PLywsvXryAjIwMrK2tER4ejhEjRiAvLw+RkZFQ\nVVXF9OnTMW7cOBQUFCAjIwM//fQT2rZti7Zt2+Lw4cPIz8+HUCiEqqoqAHEu+V9//RUVFRVwcHBA\nu3btuL40VhO1ITWrkg4fPozMzEzY2dlxVaJqMlqGhYUhODi40baauga/uroaly9fhry8fLP63Fqx\nXDHMR6X2MsRXlyhWVlaCiODq6oqkpCQkJSXh9u3b2LFjR71tSUmJ/++tqakJFRUVpKamIjc3l6vZ\n2ZTr1SASl8Grue6dO3fg7+8PGRmZRsvgVVdXQ01NjTs3KSkJycnJ3OcNBTFtbW1oaWnhjz/+wMuX\nL7mNUiKRCN999x1GjhyJW7duQU9PD0pKSoiJicGNGzegoKCAJUuWABDXQL1+/TpWrlwJLS0trm1p\naWkMHDgQTk5O9V67Od60KmnEiBFYs2YNCgoKIBAIAIhX4dQkUouKioKGhgbatm37xmu8WqnJzc2t\nzqqd2umTP2cssDMtytnZGQcPHkReXh4Acfm4Pn364MCBAwDE9TXt7e2b3J61tTViY2O5DTYlJSW4\ne/cuDAwMkJmZifT0dADA8ePH65xnaWmJP//8E9ra2g0+Hb9JTRm8mrJyjZXBqx2A2rZtCx0dHRw8\neBCA+JfEtWvXGj0PqBssd+3a1Wg/9+zZg7KyMohEIixduhREBGVlZRw6dAgHDhyAtbU1lx0yNDQU\nCQkJOH/+PC5cuAArKyssXboUJ0+exIsXL5Ceng5TU1Ou7dTUVO51QkICHBwcYGZmhmfPnuHp06dY\ntGgRvvnmGygoKGDLli117mPYsGE4cOAAhg8fzr3n7++PhIQECAQCLFmypEn399VXX+Hw4cMQiUSI\njo5GUFAQ4uPjIRAIYGho2KS/Bj4HrNAG06KMjIywbNkyODg4QFpaGiYmJti0aRMmTpyIwMBAdOjQ\nASEhIU1ur0OHDggNDYWPjw83nLFq1Sro6elh69at8PDwgKKiIoRC4Wv9CAsLQ8+ePZt1HzVl8Nzc\n3FBdXQ0ZGRn8/vvvDR4/cuRITJ06FUFBQQgPD8fevXsxc+ZMrFq1ChUVFRg5cuRrfQTEgWvYsGE4\nevQoNm3aBH9/f3h7e0NdXR3Ozs5vTF2QnJyMsLAwyMvLIykpCWPHjkVBQQFKSkrQo0cPaGlpQVFR\nEdu2bePOUVVVxdKlS+Hq6gpdXV14eXkhPT0dK1aswMaNG6GqqoqkpCSIRCKEhIRg4sSJqKiowOzZ\ns3H06FF06NABYWFhWL9+PXbu3AlVVVXs3r0bffv2xcKFC7ndsJ06dXrtr5l27drhyJEjr93Hqxu6\nHB0d4ejoCADQ09N7bS9BWFhYg9/JZ6spA/GS/mGTp8z7FhcXR3Z2dh+6Gy1q06ZN1LlzZ24iUU9P\nj/z8/EhWVpaqq6uJiOjAgQM0efJkIvrfROSjR49IU1OTKioqaODAgbR582Yug+KePXtozpw5VFlZ\nSbq6upSbm0s3btyod8nlixcv6izPvHbtWp2EaMy7w/tMAsYwH7OAgAB4eXnh559//tBdaVHUzHmA\ngIAAPH36FMbGxtDR0YGrqyv3WX15WogIRkZG3HVu3LiB06dPv9d7Zd6MBXam1VuyZAnu378POzu7\nD92VFvW28wA1Nm3ahN69e2Pbtm0ICgrCnj17uMRg9eVp0dfXR05ODi5dugQAqKiowK1bt6CmpgY1\nNTXExMQAQJ0KU8z7xcbYGaaVeNt5AADck/yuXbswY8YMlJaWQldXt868x+jRo3H48GG4ubkBEK+0\nCQ8Px5w5c1BQUIDKykrMmzcPRkZGCAkJwaRJk8Dj8bjjmfePJx62eb/Mzc0pPj7+vV+XYVpSfn4+\n9u3bh6+//hqP
Hz/GnDlzPup6nrNnz4apqWmjGRPXrl2LgoIC/PjjjxK57pQpU/DNN9+8dR6d+igr\nK6O4uFgCvfo08Hi8BCIyb/Q4FtgZRjIyMzMxcODAOtkZP1Y//PAD/vOf/+DUqVNvzG8+ZMgQpKen\n49y5cw0Ww/6QWGCvHxtjZxgJWbJkCdLT0yESibiCH4B4vfjgwYPh6uoKbW1t/Pbbb1i/fj1MTExg\nbW2N58/FpQzS09PRv39/mJmZwd7evkUKWtT48ccfceXKlUaLVhw+fBjXr19vdlAvKSmBh4cHhEIh\njI2NERYWBkdHR9Q82CkrK2PhwoUwMjJCv379cOXKFTg6OkJXVxfHjh0DIP7+PD094ejoiF69emHF\nihX1XiswMBAWFhYQCATw8/NrVn9bjaYsnZH0D1vuyLRGtWt+1v53SEgI9ejRg8tD07ZtWy7J2Lx5\n87hi0g3lk/mUhYeH05QpU7jX+fn5dWq2AuAKew8ePJhcXV2pvLyckpKSuKRgISEh9MUXX1Bubi6V\nlpaSkZERd76SkhIRiRPD1RTArqqqIg8PDzp//vz7vNX3Amy5I8O0rKCgIPTu3Rvq6uoICAgAAGRn\nZ9dbzq6x0ny188mIRCJMnz4dT548ea/30xL4fD7OnDmDxYsXIzo6mstVU0NWVhb9+/fnjnVwcICM\njMxrJQtdXV3Rvn17KCgoYOjQodzKmxqnT5/G6dOnYWJiAlNTU6SkpCA1NbXF7+9jxVbFMEwzvVpg\n4k21UxsrzVc7n0xroqenh6tXr+LkyZP4/vvv4eLiUufz2mvs31SysOaYhl4TEb777jtMnz69JW7j\nk8Oe2BmmGWbMmIF79+5hwIAB2LBhA3x9faGiolIna2NGRgbmz5+PFStWYN++fYiLi8PQoUORlZWF\n1atX12nvbfLJfEoeP34MRUVFjBkzBgsXLsTVq1eb1c6ZM2fw/PlzlJWV4ciRI6/Vd3V3d8fOnTu5\nidSsrCxuPf/niAV2hmmG4OBgdOnSBZGRkTh//jwSExPRvn17dO/eHWvXrsXChQsBiIca/Pz8YGRk\nBE9PT/z+++/o0qULDhw4wCVGq7F3717s2LEDQqEQRkZGOHr06Ie4NYm6ceMGLC0tIRKJsGLFCnz/\n/ffNasfS0hJeXl4QCATw8vKCuXndhSFubm4YNWoUbGxswOfzMWzYsDpJyD47TRmIl/QPmzxlWoOa\nAhOenp7Up08fIqpbCMLBwYEr1Xb27Fnq168fd669vT0lJia+/05/gmpXi/rcgU2eMkz9vvzySzx+\n/Pi9XKuxfO+ZmZncskiGkRQ2ecp8dk6ePPmhuyARlZWVaNOm9f8nPGHCBEyYMOFDd+OTwp7YGeYD\nq6qqwtSpU2FkZAQ3NzeUlZU1uFlpwoQJmDFjBqysrOqUFGSYOpoyXiPpHzbGzjBiGRkZJC0tzY23\ne3t70+7du5tc/FoS12c50z8deJ/FrBmGaT4dHR2uDquZmRkyMzObXfyaYQA2xs4w7yw4OBiKiooY\nN25cs86vPakqLS2NZ8+eNbhZ6dq1azh9+jS2bdsGTU1NmJmZoV+/flzK3R49emDnzp1QV1dHUlJS\nve8nJCRg0qRJAMBS67ZSbIydYd7RjBkzmh3U69PQZqW4uDjcv38fa9euxalTp7hEWuPGjcMvv/yC\n69evg8/nc0myGnp/4sSJ2LRpU6vYAMXUjwV2hvkI1bdZKTY2Ft27d4esrCxUVFTw1VdfoaSkBPn5\n+VzFo/Hjx+PChQsoKCio9/38/Hzk5+ejb9++AICxY8d+sHtkWg4bimGYD0hbW7tO/vYFCxZw//77\n77/rHLtx40Z4enpi2LBh761/zKeJPbEzzCfC1tYWx48fx8uXL1FcXIwTJ05ASUkJ6urqiI6OBgDs\n3r0bDg4OUFVVrff9z7Euqb+/f70ZN2vk5OTAysoKJiYmiI6Oxpdffon8/Pw3tr
l8+XJEREQAEP/C\nLS0tbbQftfPQtzT2xM4wnwgLCwsMGjQIAoEAnTp1Ap/Ph6qqaoP1Sht6n9Ulrevs2bPg8/nYvn07\nAMDe3r7Rc1auXMn9e+PGjRgzZgwUFRVbrI9vrSlrIiX9w9axM0zzFBUVERFRSUkJmZmZUUJCQote\nr3buGyKiyMhIio2N5V5v2bKFdu3a1aJ9aI5Vq1ZRr169yNbWlkaOHEmBgYGUlpZG7u7uZGpqSnZ2\ndpScnEyJiYmkqalJGhoaJBQKqbS0lMsBlJGRQQYGBjRlyhQyNDQkV1dXKi0tJSLxfoKDBw/Sr7/+\nSjIyMmRsbEyOjo5EJC76YW1tTSYmJjRs2DDuf7OaAiM7duyguXPncn3dunUrzZs3r0n3BZYrhmFa\nn2nTpkEkEsHU1BReXl4wNTV9r9ePiorihncAya8IkoSEhAQcOHAASUlJOHnyJOLi4gCIv7tNmzYh\nISEBa9euxddffw2RSISVK1dixIgRSEpKgoKCQp22UlNTMWvWLNy6dQtqamo4dOhQnc/nzJnDZfmM\njIxEbm4uVq1ahYiICFy9ehXm5uZYv359nXOGDx+O48ePo6KiAsD//oKSJIkMxfB4vP4AfgUgDWA7\nEQVIol2GYerat29fi7Q7ePBgPHz4EC9fvkSvXr1w+/ZtyMjI4PHjx1BQUMBPP/2EIUOGYPfu3VBU\nVERwcDAUFBTw4sULKCsrIyYmBk+ePMHAgQNRUlICaWlpKCgo4Ndff8XcuXPx8uVLlJeXo2PHjti5\ncyciIyNx/PhxlJWVoU+fPvjjjz9eK57RXNHR0RgyZAg3NDJo0CC8fPnyjZu+GlLf5rE3uXz5Mm7f\nvs3liy8vL4eNjU2dY5SVleHs7IwTJ06gd+/eqKioAJ/Pf5tbbNQ7B3YejycN4HcArgAeAYjj8XjH\niOj2u7bNMMz7sXPnTrRr1w6xsbFwcXFBcnIy7OzsoKqqCl9fXxw5cgTy8vJYtmwZlJWVMXnyZKip\nqWHFihW4desW1qxZgzNnzqBfv35IS0uDg4MDMjMz4eXlBWNjY/Tp0wcnTpzAyJEjMWPGDNjY2HBP\n0mPHjsWJEye4coGN0dbWRnx8/FsV2G5uhapXN4+VlZW98XgigqurK/bv3//G46ZMmYLVq1fDwMAA\nEydOfKs+NYUkhmIsAaQR0T0iKgdwAICnBNplGKYFhYaGcumLg4KCIBQK4e3tDSLCnj174OjoiKFD\nhwIA2rRpgxEjRnDnPnr0CO7u7tiyZQvOnz+Pa9euIT8/H9ra2hgwYAAmTZqE27fFz3bt2rXDuHHj\nMHnyZMjKyiI3NxdZWVmwsrICn8/HuXPncOvWLYndV9++fXHkyBGUlZWhqKgIx48fh6KiYotVqFJR\nUeGKelhbWyM2NhZpaWkAgJKSEty9e/e1c6ysrPDw4UPs27cPPj4+EulHbZII7F0BPKz1+tH/v8cw\nzEesJrBHRUUhIiICly5dwqJFi/DFF19www+1KSkpcf+ePXs2fH19MXPmTHh5eeHly5fcZzVPuTVD\nKzweD3l5eQgODkZ1dTUqKytx/vx5hIeHIyAgADweD8+fP0dOTg68vLxgYWEBCwsLxMbGAgDy8vLg\n5uYGIyMjTJkyBeI5xIaZmppixIgREAqFGDBgACwsLAC0XIWqadOmoX///nByckKHDh0QGhoKHx8f\nCAQC2NjYcJk5XzV8+HDY2tpCXV1dIv2o7b0td+TxeNMATAOA7t27v6/LMsxnIzMzEwMGDICdnR0u\nXryIrl274ujRo7hz585rOWPOnj2L+Ph4jB49GuXl5ejduzcUFRWhqamJhw8forCwEFFRUZCRkYGv\nry9XWFpFRQWFhYUoKChA165dcfXqVcTHx3Pr6e/fvw8jIyNu3XxiYuJr/awJzLGxsQgKCoKKigoU\nFRUxd+5czJ8/H3Z2dnjw4AHc3d2RnJyMFS
tWwM7ODsuXL8dff/2FHTt2NPpdLFu2DMuWLXvt/Vc3\nfQGv53uvGUfX0NBocPNYaGgo9+/Zs2dj9uzZ3GtnZ2dumKm2qKioOq9jYmIwf/78xm6lWSTxxJ4F\nQLPW627//14dRLSViMyJyLxDhw4SuCzTEhrbzBEVFYWBAwcCEP+f29fX9311jWmC+lZx1JczZtiw\nYTA3N8fevXuRkpICIkLv3r2xe/duaGlpYcmSJVBTU0NBQQHWr1/PDZV89dVXOHz4MPLz8/HVV19h\n69at3JP8rl27cObMGaxbtw5JSUlYvnx5vX2UkpKCoqIiJkyYgOrqalhbWwMAIiIi4OvrC5FIhEGD\nBqGwsBDFxcW4cOECxowZAwDw8PBokSfc9yk/Px96enpQUFCAi4tLi1xDEk/scQB68Xg8HYgD+kgA\noyTQLsMwb+nVVRzp6emv5YypvTIEEA+dnDp1qsnXuH79eoOfPXr0qM5rJSUlREVFITMzEx07doSf\nnx+0tLRw6NAh3Lt3D7/99htXmPq3337D5cuXIS8v3+S+fIrU1NTqHXeXpHd+YieiSgC+AP4DIBnA\nv4lIcjMhTIv76aefoKenBzs7O9y5cwdA3e3Pubm50NbW/oA9ZJrq1VUcjW2N/1Bqgvu4ceO4vwbc\n3NywadMm7piaFSx9+/bllnmeOnUKL168eP8d/sRIZIMSEZ0kIj0i6kFEP0miTeb9aGgzB9M6NJQz\nBqi7muN9eP78Of71r39xrw0MDLB37154e3sjPT0dQUFBiI+Ph0AggKGhIYKDgwEAfn5+uHDhAoyM\njPDnn3+yObomYLliPnP1beZgWpeGcsbU1E9VUFDApUuXXtt1KQnFxcUAxGvPa5ZW1lwbAExMTLhl\nkQAQFhb2Whvt27fH6dOnJd631owFdqZebdq0QXV1NQDUWcrGfLzelAL48uXLrx3v5eUFLy+vetvK\nzMxE//79YW1tjYsXL8LCwgITJ06En58fsrOzsXfvXvTs2ROTJk3CvXv3oKioiK1bt8LY2Bi6urpI\nSkqCmpoaAKBXr16IiYnBli1boKysjAULFiA9PR2zZs1CTk4OFBUVsW3bNhgYGEj4G/l8sVwxn7n6\nNnMA4iCRkJAAAAgPD/+QXWQ+kLS0NHz77bdISUlBSkoK9u3bh5iYGKxduxarV6+Gn58fTExMcP36\ndaxevRrjxo2DlJQUPD09cfjwYQDAf//7X2hpaaFTp0512q4vbwsjOeyJ/TNXezNHx44duc0cCxYs\nwPDhw7F161Z4eHh84F4yH4KOjg6Xw8TIyAguLi7g8Xjg8/nIzMzE/fv3uaRYzs7OyMvLQ2FhIUaM\nGIGVK1di4sSJOHDgQJ0dq4B4eKY5eVuYpmOB/SPl7+8PZWVlFBYWom/fvujXr1+LXauhzRw1y9q+\n/PJLboWCo6MjHB0dAdTd2BEaGgo3Nzd06dKlxfr5Pq/D1F1hIyUlxb2WkpJCZWUlZGRk6j3PxsYG\naWlpyMnJwZEjR/D999/X+by5eVuYpmNDMR+5lStXtmhQb4qTJ09y46UNqZ13pCW9r+swjbO3t+cq\nMEVFRUFDQwNt27YFj8fDkCFD8M0336B3795o3759nfMaKtbNSA4L7B+R+taTT5gwgRvjXrJkCQwN\nDSEQCLiJsePHj3Nlvfr164dnz54BED/xjx07FjY2NujVqxe2bdsGQPwfYN++feHh4QF9fX3MmDGD\nmyTdv38/+Hw+jI2NsXjxYq5f2trayM3NRWZmJnr37o2pU6fCyMgIbm5uKCsrQ3h4OLc9XSQSoays\nDNra2vjuu+8gEolgbm6Oq1evwt3dHT169OCWsQFAYGAgLCwsIBAI4OfnBwBvdR3mw/H390dCQgIE\nAgGWLFmCXbt2cZ+NGDECe/bseW0YpkZL5W1h/l9TqnFI+odVUHpdfHw8GRsbU0lJCRUUFFCPHj0o\nMDCQq9
SSm5tLenp6VF1dTUREL168ICKi58+fc+9t27aNvvnmGyISV74RCARUWlpKOTk51K1bN8rK\nyqLIyEiSk5Oj9PR0qqyspH79+tHBgwcpKyuLNDU1KTs7myoqKsjJyYkOHz5MRFSnooy0tDQlJiYS\nEZG3tzft3r2biP5XHaaGlpYWbd68mYiI5s2bR3w+nwoLCyk7O5s6duxIROJKM1OnTqXq6mqqqqoi\nDw8POn/+/Ftdh2E+J2hiBSU2xv6RaGw9uaqqKuTl5TF58mQMHDiQy9fy6NEjjBgxAk+ePEF5eTl0\ndHS4czw9PaGgoAAFBQU4OTnhypUrUFNTg6WlJXR1dQEAPj4+iImJgYyMDBwdHVGTx2f06NG4cOEC\nBg8eXKcfb1N4oOYe+Hw+iouLoaKiAhUVFcjJySE/Px+nT5/G6dOnYWJiAkA8qZaamoru3bu/dYED\nhmH+hw3FfCLatGmDK1euYNiwYThx4gT69+8P4H/pU2/cuIE//vijzprzVyvS1E6jWt/7TfHqlvWa\nrH9vOrb2xFvN68rKShARvvvuOyQlJSEpKQlpaWmYPHnyW1+HYZi6WGD/SDS0nrxGcXExCgoK8OWX\nX2LDhg3cZFNN+lQAdcY4AeDo0aN4+fIl8vLyEBUVxS1lvHLlCjIyMlBdXY2wsDDY2dnB0tIS58+f\nR25uLqqqqrB//35u63lTNGd7uru7O3bu3MntTszKykJ2drbEr8Mwnxs2FPORaGg9eY2ioiJ4enri\n5cuXICKuQK6/vz+8vb2hrq4OZ2dnZGRkcOcIBAI4OTkhNzcXP/zwA7p06YK7d+/CwsICvr6+SEtL\ng5OTE4YMGQIpKSkEBATAyckJRAQPDw94eja9ENar29Obws3NDcnJyVxNSGVlZezZswfS0tJNvk5L\nbINnmE8djxqpRtISzM3NqSZzINMyatbB195WDohXxaxduxYnTpz4QD1jGKa5eDxeAhGZN3YcG4ph\nGIZpZdgTO8MwzCeCPbEzzCtqCgxv2LChyeccOXKkTlpZSdq4cSNKS0sbPa520ROGaQoW2JnPwtOn\nTxEXF4fr1683uYBwZWXlRxHYGeZtscDOfJIyMzNhbGzMvV67di38/f3h6OiIfv36QUFBAW3btuUq\nB7m5uSErKwsikQjbtm2Dj48PrK2toampCV1dXa7cmqOjI+bNmwdzc3P88ssvOHbsGBYuXAiRSIT0\n9PQGSwaGhoZi6NCh6N+/P3r16oVFixZxfZs5cybMzc1hZGTEpU0ICgrC48eP4eTkBCcnJwDA6dOn\nYWNjA1NTU3h7e3PLQGvs3LkT8+bN415v27atxarcM5+4pmxPlfQPSynQelRWVn6Q62ZkZJCRkRH3\nOjAwkPz8/MjBwYHU1dXp4cOH9Ndff5GLi0u9x/P5fIqKiqKQkBAyNzenuXPnEpE4ZcHMmTO542pS\nOtSondIgJyeHtLS0iIgoJCSEdHR0KD8/n8rKyqh79+704MEDIiLKy8sjIvF35eDgQNeuXSOi/6Vq\nqGnL3t6eiouLiYgoICCAVqxYUeeaRUVFpKurS+Xl5UREZGNjQ9evX5fE18l8ItDElALsif0zFhgY\niKCgIADA/Pnz4ezsDAA4d+4cRo8e3WBSMGVlZXz77bcQCoW4dOlSvcnJcnJy4OXlBQsLC1hYWCA2\nNva93NPdu3dRVFSEAQMG4OLFi4iJiYGJiQm8vLy4nN8nTpxAamoqtwHLwMAAFy5cwMGDBxEXF4fT\np0+jb9++b31tFxcXLvWDoaEh7t+/DwD497//DVNTU5iYmODWrVv1Du1cvnwZt2/fhq2tLUQiEXbt\n2sWdX0NZWRnOzs44ceIEUlJSUFFRweVLZ5ja2Aalz5i9vT3WrVuHOXPmID4+Hv/88w8qKioQHR0N\nPT09LF68GAkJCVBXV4ebmxuOHDmCwYMHo6SkBFZWVli3bh3y8vIwefJk
pKSkgMfjIT8/HwAwd+5c\nzJ8/H3Z2dnjw4AHc3d2RnJwssb7XLt0H/K98n56eHgAgMjISRUVF2L17NxITE7Fnz55Gq/SsXLkS\nAoEAmzZtQs+ePRu97qslA+tLg5CRkYG1a9ciLi4O6urqmDBhQr2lBokIrq6u2L9//xv7OGXKFKxe\nvRoGBgaYOHHiG49tSEpKCiZNmoSioiK0a9cOhw4dgoaGRrPaYj5O7In9M2ZmZoaEhAQUFhZCTk4O\nNjY2iI+PR3R0NNTU1LikYG3atOGSggHioFVTK7N2crI///yTS2IWEREBX19fiEQiDBo0CIWFha+N\nGb+LTp06ITs7G3l5efjnn3/q3XBVWFiInJwcGBsbY9WqVdwTu7KyMmRkZLjx9zt37sDBwQG2trZI\nSUnB4cOHUVVVBeD1FAZvWzKwsLAQSkpKUFVVxbNnz3Dq1Cnus9ptW1tbIzY2FmlpaQCAkpIS3L17\n97X2rKys8PDhQ+zbtw8+Pj5N+q7qs2fPHty4cQN9+vSpk0aZaR1YYP+MycjIQEdHB6GhoejTpw/s\n7e0RGRmJtLQ0blKwPnJychAKhQAaTk5WXV2Ny5cvcwm+srKyoKysDABYvnw5IiIi3rnvy5cvh6Wl\nJVxdXesthPzzzz9DXtsvMVAAABqoSURBVF4eN2/exPbt20G19myIRCIsXLgQP/zwA3Jzc7F8+XIE\nBwdDR0cHz549g5mZGfLy8jBy5EgEBgbCxMQE6enpWLBgAbZs2QITExPk5uY22k+hUAgTExMYGBhg\n1KhRsLW15T6bNm0a+vfvDycnJ3To0AGhoaHckkwbGxukpKTU2+bw4cNha2sLdXX1Znxz4qGnmuye\n//zzD+Tl5ZvVDvMRa8pAvKR/2OTpx8PPz480NTXpzJkz9PTpU9LU1KTBgwfT48ePqXv37pSTk0OV\nlZXk4uJCR44cISIiRUVFbiKyqKiInj17RkRE+fn51K5dOyIi8vHxoTVr1nDXqcmt/j4mW2smJQcP\nHkzh4eHcfdZMdEZGRpKHhwcRiSc9Z82aRUREaWlpXBvm5uZcnz82Hh4eFBER8c7t/P3332RgYMDl\n9mc+fmCTp0xT2Nvb48mTJ7CxsUGnTp0gLy8Pe3t7dO7cmUsKJhQKYWZmVicpWGVlJUaPHs09jRob\nG0MkEkFZWZmb0Lty5QoEAgFkZWUxYcIEmJqa4uDBg3WqQmlra8PPzw+mpqbg8/ncU2pOTg5cXV1h\nZGSEKVOmQEtLq0lPyLUtWrQI3333HUxMTJqU9nfhwoXcZHGfPn24v0o+Fvn5+dDT04OCggJcXFze\nqa3q6mpMnjwZx44da7TsIfMJakr0l/QPe2L/tGVkZBAAiomJISKiiRMn/l97dx4V9Xk1cPz7iCIK\nhsXELYoSY5BtEEEFcdRq0kBL0KrkuLZu2YwxpomxxvjSNKbHBI2JNnGpUU8jalyi0bxNVVACRCxI\nCxYUgxMXXmONaQQFwQWe9w9kDgqWZYCR4X7O4Rxn5rfcZ+Dc8/P3e5579dtvv627d++uT548qbXW\nesqUKXr58uVa6/Ir6Hfffde8f+UphD179tQrVqzQWmv90Ucf6RkzZmittX7xxRf1H//4R6211l99\n9ZUGzFMDG0J0dLSOiYnRixYt0gcOHGiw49ZHeHh4jVfNGzZs0OfPnze/njFjhs7Ozq73OfPy8rS3\nt3e99xfWgVyxi8bUo0cP8/3iyZMnEx8fj4eHh3lWym9+8xvzw1bgnr0vAcaMGQPc2SkpOTmZ8ePH\nAxAWFlbv+8k1aa7NwtetW4e3t3e9z+nq6sqyZcvqvb+4v0liF/Vyd9elmhKTo6PjPT+rmCbY2J2S\nGrNZ+Jo1a4CmaxZeeQWsk5MTCxcuxN/fn+DgYHOMJpOJ4OBg/Pz8ePPNN80Pr6G8Qcu6desa7bsW\n1iWJXdTLuXPnzA01Nm/eTFBQEGfO
nDFP1/v000/r1IHpbqGhoWzbtg0oX2pfseS/vtLT09m6dSsZ\nGRmsWrWKnTt3snnzZnbt2sXSpUvJy8sjNjYWe3t7tNZcuHCB69evM2TIEN555x0ATpw4wciRI83T\nJj/77DNCQkJo164db7zxhvmKOjU1lZUrV3L8+HFMJhOff/4533//PfPnz+fgwYNkZGSQlpbG7t27\nq8SZm5vLiy++SHZ2Ni4uLuzcuZNx48YRFBREbGwsGRkZVZqLFBUVERwcTGZmJkOHDuXPf/4zUL6W\n4OWXX+Zf//oX3bt3v2Ofbt261Wq6pmieJLGLevH09OSjjz7Cy8uLy5cv88orr7BhwwaioqLw8/Oj\nVatWPP/88/U+fnR0NPv378fX15ft27fTpUsXOnToUO/jVW4W3qFDB27evElISIj5vfXr13Px4kV6\n9+7NW2+9BcCqVaswmUxERERw7do1nJycuHz5MqtWrQKgffv2dOnShWPHjvHLX/6S1NRUAHOzcDs7\nO3Oz8LS0tHuuC6isPk287e3tzc3NK++TkpJCVFQUABMnTqz3dyeaH4tWniqlooDfA17AQK211BZt\nAXr16lXtHOuRI0fyz3/+s8r7dyenjRs3VvtZUFAQCQkJQPnCp3379tG6dWtSUlJIS0u7Y2WnpZyc\nnPDw8CArKwuj0UhCQgIDBw5kxowZ7Nixg+PHj1NcXMymTZvo3bs32dnZJCQk8PLLL5OYmIjBYADu\nfHbQGM3Ci4uLa9ynTZs25nNI428Bll+xZwFjgKqXHkJY4Ny5cwwYMAB/f3/mzJljvr1QX5WbhRcW\nFlZZ1u/k5MStW7fMzcJNJhNQ3kTc3t4eqNos/Nq1a2RnZ/PMM8/cl83Cg4OD2blzJwBbt26t076i\nebPoil1rfQLqdkUiRG306dOn2qv/+qrcLNzZ2Zlbt26Zi2wlJyczYMAAli9fTt++fWnTpg0Gg4Fh\nw4YxYcIEJkyYgK+vL+Hh4Vy8eJHp06dz+fJl7O3tiY6OJj8//75sFv7BBx8wefJk3nnnHcLCwnB2\ndq7XdyeaodrMiazpB0gAgmq7vcxjF41h1KhRun///trb21uvWbNGa621o6OjfuONN7TBYNCDBg3S\n//73v/Xp06f1Aw88oD09PXW7du10+/bt9aZNm3RcXJz29/fXDz74oHZxcdE+Pj5669atOi4uTru6\nump3d3c9bdo0XVJSon19fbWjo6PevXu3eRVrdHS0DgsL025ubtrDw0N/+OGH5tj+8Ic/6Mcee0yH\nhobq8ePH65iYmEb/PoqKinRZWZnWWustW7boyMjIRj+naFzUch57bZJ2HOW3XO7+GaXrkNiBZ4Gj\nwFF3d/em+h5EC1JR9/zatWvax8dH//jjjxrQe/bs0VprPW/ePP3222/r06dPaxcXFz1u3DhdWlqq\ns7Ozde/evbXWWu/YsUM//vjj+tatW+YSC99//71OSEjQo0aN0lqXl05wdnbW77777h3lCaKjo7W3\nt7cODw/Xly5d0m5ubvrGjRs6NTVV+/v76+LiYn3lyhX96KOPNkliT0xM1AaDQfv5+Wmj0ahzc3Mb\n/ZyicdU2sdd4K0Zr3SCrN7TWa4G1UN7MuiGOKURlK1asYNeuXQDk5eWRm5tbZcbIgQMHzNuPHj2a\nVq1a4e3tbZ77nZyczIQJE7Czs6Nz584MGzaMtLQ0IiMjmTVrFpcuXWLnzp3MnDmT119/3fywt8LE\niRNZuHAhAJ06deLixYt88803jBo1CgcHBxwcHHjqqaea4NsoLxeRmZnZJOcS9xeZ7iianbvb4kH5\nwqC4uDhSUlLIzMwkICCAkpKSameM9OrVi1GjRplnoOzZs4cbN24AsG/fvmobRyckJFBWVsamTZvY\nsGED06dPrza26mqyC9HULErsSqlfKaX+DwgB/lcpta9hwhKibgoKCnB1daV9+/bk5ORw5MiRarer\n
3JwDyouZRUZG0qZNG6C8zvvhw4cpLS3l0qVLJCYmMnDgQAC6d+/OBx98AFCn5fyhoaHs3buXkpIS\nCgsLq60dL0RDsnRWzC5gVwPFIkStlZaW8swzz3D48GEefvhhtm3bxnvvvUe7du2wt7fHycnJPD1w\n+PDh9OvXj7179+Lm5sbUqVNJSUnhm2++4fDhwxgMBvNqUnd3d3766SecnJwoKytj7ty5dOnShZyc\nHNq2bYuXlxfh4eFMnz6drKwsfvrppxrr2AwYMIDIyEgMBgOdO3fGz89PZqiIRiW3YkSzdPfS+y+/\n/JKrV6/yt7/9jYKCAmbNmkV8fLy5a9ONGzcwmUykpaUB5dMpc3JyeP/99wF47rnngPKpu127dqWo\nqIisrCxiY2PNc95LS0vJzc3l7NmzjBgxgtTUVI4ePUpBQQFFRUX8/ve/N9eYAcjKyjI3LHnttdf4\n9ttv2bdvH2fPniUwMLDeY6/uVlRT7CuaD0nsolm6e+m9yWQiPz/fvOinpuqSUVFR2NnZVXvsp59+\nmlatWtGnTx8eeeQRcnJySE9P5+uvv+all14iMTGRJUuW0K9fP4YPH05JSQnnzp37r/E+++yz9OvX\nj/79+zN27Fj69+9vyfCF+K+kmbVolioeUk6dOpWysjI6duz4X7e/u7rkf6s2WV1JgMDAQEaMGMHc\nuXP59NNP2blzJ56enrWOd/PmzbXetjYqGp384x//wMfHh7/85S8sXbqUvXv3UlxczODBg1mzZg1K\nKdLT080Pe3/+8583aBzi/iRX7KLZqmg4DeW1ZVxdXc0Nqi2pLrl9+3bKysowmUx89913VRL4k08+\nycqVKyvWZzToCtnaOnnyJLNmzeLEiRM88MADfPzxx8yePZu0tDSysrIoLi42P6SdNm0aK1eulKmP\nLYgkdtEkzpw5Q9++fZk0aRJeXl6MGzeOa9euER8fT0BAAH5+fkyfPt38EPNe7/fq1YslS5ZgMpnY\nvn27+finT5/Gzc2NefPmYTAY2L9/P7m5ufWK1d3dnYEDBxIeHs7q1aurNHtetGgRN2/exGAw4OPj\nw6JFi+r5rdTf3Y1OkpOTOXToEIMGDcLPz4+DBw+SnZ1Nfn4++fn5DB06FIApU6Y0eazCCmqziqmh\nf6SkQMtTl3Z6xcXFdW6zV1ZWpj09PfUPP/ygtS5vpl2x4tTWnD59WldevR0fH69Hjx6tO3XqpM+d\nO6e1Ll8FGx0drS9fvqx79Ohh3jYzM9PciFw0P0hrPHG/qW07vZMnT9a5zZ5SiilTprBp0yby8/NJ\nSUkhPDy8CUZlHXc3OhkyZAgADz74IIWFheYmGi4uLri4uJCcnAxAbGysdQIWTUoenoomU107vf/8\n5z91Ps69HnxOmzaNp556CgcHB6Kiomjd2nb/vCsanUyfPh1vb29eeOEFLl++jK+vL126dDGXEAbM\nK2WVUvLwtIWw3b98cd+puMoMCQkxt9Nbs2YNp06d4tFHHzU/8PT09DS32av8fk26detGt27dWLx4\nMXFxcU0wIuu4V6OTxYsXs3jx4irvBwYG3vHg9L333mvU+IT1ya0Y0WRq207PwcGh3m32Jk2aRI8e\nPfDy8mrk0Qhx/1JaN32hxaCgIF1doSVhu86cOUNERARZWVmNep7Zs2cTEBDAjBkzGvU8QliDUipd\nax1U03ZyK0bYjMDAQBwdHVm2bJm1QxHCqiSxiybRq1evRr9aT09Pb9TjC9FcyD12IYSwMZLYhRDC\nxkhiF0IIGyOJXQghbIwkdiGEsDGS2EWLk5+fz8cff2zxcWbOnMnx48cBcHJyqnabqVOnmuu2CNFU\nJLGLFqeuiV1rXaUJdmlpKevWratTU2shmookdtHi/O53v8NkMtGvXz/mzZtHTEwMAwYMwGAwEB0d\nDZSvlPX09OTXv/41vr6+5OXl4eTkxKuvvoq/vz8pKSkMHz6cyi
uoX3nlFXx8fBg5ciSXLl2qct70\n9HSGDRtGYGAgTz75JBcuXGiyMYuWRRK7aHGWLFlC7969ycjI4IknniA3N5fU1FQyMjJIT083lwjO\nzc1l1qxZZGdn07NnT4qKihg0aBCZmZnmMrkVioqKCAoKIjs7m2HDhvHWW2/d8fnNmzd56aWX2LFj\nh7lV3cKFC5tszKJlkZWnokXbv38/+/fvJyAgAIDCwkJyc3Nxd3enZ8+eBAcHm7e1s7Nj7Nix1R6n\nVatW5jrxkydPZsyYMXd8fvLkSbKysnjiiSeA8ls5Xbt2bYwhCSGJXbRsWmsWLFjAc889d8f7Z86c\nqVL33cHBATs7u1od9+7a81prfHx8zM0xhGhMcitGtDgdOnTg6tWrQHlj6vXr11NYWAjA+fPn+eGH\nH+p8zLKyMvPsl8odjSp4enpy6dIlc2K/efMm2dnZlgxDiHuSK3bR4nTs2JHQ0FB8fX0JDw9n4sSJ\nhISEAOXTFjdt2lTrK/MKjo6OpKamsnjxYjp16sRnn312x+f29vbs2LGDOXPmUFBQwK1bt5g7dy4+\nPj4NNi4hKkg9diHuISkpieeff542bdqQkpJCu3btqt1u+PDhLF26lKCgGstkC2GR2tZjl1sxQtxD\nbGwsCxYsICMj455JXYj7kSR20SKMHj2awMBAfHx8WLt2Ldu3b+e3v/0tAB9++CGPPPIIAN999x2h\noaGsW7eObdu2sWjRIiZNmkRCQgIRERHm482ePZuNGzdaYyhC1EjusYsWYf369bi5uVFcXMyAAQPY\nt2+fualzUlISHTt25Pz58yQlJTF06FBmzpxJcnIyERERjBs3joSEBOsOQIg6sOiKXSkVo5TKUUod\nU0rtUkq5NFRgQjSkFStW4O/vT3BwMHl5eeTl5VFYWMjVq1fJy8tj4sSJJCYmkpSUhNFotHa4QljE\n0lsxBwBfrbUB+BZYYHlIQjSshIQE4uLiSElJITMzk4CAAEpKShg8eDAbNmzA09MTo9FIUlISKSkp\nhIaGVjlG69at76gXU1JS0pRDEKJOLErsWuv9Wutbt18eAbpbHpIQDaugoABXV1fat29PTk4OR44c\nAcBoNLJ06VKGDh1KQEAAhw4dom3btjg7O1c5Rs+ePTl+/DjXr18nPz+f+Pj4ph6GELXWkPfYpwOf\n1biVEE0sLCyM1atX4+Xlhaenp7lMgNFoJC8vj6FDh2JnZ0ePHj3o27dvtcfo0aMHTz/9NL6+vnh4\neJhLEAhxP6pxHrtSKg7oUs1HC7XWX9zeZiEQBIzR9zigUupZ4FkAd3f3wLNnz1oStxBCtDi1ncde\n4xW71vrxGk40FYgARt4rqd8+zlpgLZQvUKrpvEIIIerHolsxSqkw4HVgmNb6WsOEJIQQwhKWzor5\nE9ABOKCUylBKrW6AmIS47+3evdvcFk+I+41FV+xa60cbKhAhmpPdu3cTEREhrfHEfUlKCogWIyYm\nhhUrVgDlbexGjBgBwMGDB5k0aRIvvPACQUFB+Pj4mFvkQXkrPW9vbwwGA6+99hqHDx9mz549zJs3\nj379+mEymTCZTISFhREYGIjRaCQnJ8cqYxQCpKSAaEGMRiPLli1jzpw5HD16lOvXr3Pz5k1zGYGo\nqCjc3NwoLS1l5MiRHDt2jIcffphdu3aRk5ODUor8/HxcXFyIjIw0lxsAGDlyJKtXr6ZPnz78/e9/\nZ9asWRw8eNDKIxYtlSR20WIEBgaSnp7OlStXaNu2Lf379+fo0aMkJSWxYsUKtm3bxtq1a7l16xYX\nLlzg+PHjeHt74+DgwIwZM4iIiLijEFiFwsJCDh8+TFRUlPm969evN+XQhLiDJHbRYrRp0wYPDw82\nbtzI4MGDMRgMHDp0iFOnTtGuXTuWLl1KWloarq6uTJ06lZKSElq3bk1qairx8fHs2LGDP/3pT1Wu\nxMvKynBxcSEjI8NKIxPiTn
KPXbQolcsIGI1GVq9eTUBAAFeuXMHR0RFnZ2cuXrzIV199BZRfjRcU\nFPCLX/yC5cuXk5mZCdzZXu+BBx7Aw8OD7du3A+X9TSu2E8IaJLGLFsVoNHLhwgVCQkLo3LkzDg4O\nGI1G/P39CQgIoG/fvkycONFcCOzq1atERERgMBgYMmQI77//PgDjx48nJiaGgIAATCYTsbGxfPLJ\nJ/j7++Pj48MXX3xhzWGKFk5a4wkhRDMhrfGEaGZk0ZNoKJLYhWhipaWl1b4viV00FEnsQtRBTYuc\ntmzZgp+fH76+vsyfP9+8n5OTE6+++ir+/v6kpKTUatGTEPUliV2IOqjotARw9OhRCgsLzYucHnvs\nMebPn8/BgwfJyMggLS2N3bt3A1BUVMSgQYPIzMzEy8uLXbt2kZ2dzbFjx3jzzTcZPHgwkZGRxMTE\nkJGRQe/eva05TNHMSWIXog7uXuQUEhJiXuTk4uLC8OHDeeihh2jdujWTJk0iMTERADs7O8aOHQuA\ns7OzedHT559/Tvv27a05JGGDJLELUQd3L3IyGo3mRU69evW6534ODg7Y2dkBmBc9jRs3ji+//JKw\nsLAmil60FJLYhaijey1yGjhwIF9//TU//vgjpaWlbNmyhWHDhlXZvzaLnoSwhCR2IeroXoucunbt\nypIlS/jZz36Gv78/gYGBjBo1qsr+tV30JER9yQIlIYRoJmSBkhBCtFCS2IUQwsZIYhdCCBsjiV0I\nIWyMJHYhhLAxktiFEMLGSGIXQggbI4ldCCFsjCR2IYSwMZLYhRDCxkhiF0IIGyOJXQghbIwkdiGE\nsDGS2IUQwsZYlNiVUm8rpY4ppTKUUvuVUt0aKjAhhBD1Y+kVe4zW2qC17gd8CfxPA8QkhBDCAhYl\ndq31lUovHYGm79ohhBDiDq0tPYBS6h3g10AB8DOLIxJCCGGRGlvjKaXigC7VfLRQa/1Fpe0WAA5a\n6+h7HOdZ4NnbL32BrHpF3Dw9CPxo7SCakIzXtrWk8d5vY+2ptX6opo0arOepUsod+KvW2rcW2x6t\nTd8+WyHjtW0yXtvVXMdq6ayYPpVejgJyLAtHCCGEpSy9x75EKeUJlAFngectD0kIIYQlLErsWuux\n9dx1rSXnbYZkvLZNxmu7muVYG+weuxBCiPuDlBQQQggbY7XE3tLKESilYpRSObfHvEsp5WLtmBqT\nUipKKZWtlCpTSjW7WQW1oZQKU0qdVEqdUkr9ztrxNCal1Hql1A9KqRYxTVkp1UMpdUgpdfz23/HL\n1o6pLqx5xd7SyhEcAHy11gbgW2CBleNpbFnAGCDR2oE0BqWUHfAREA54AxOUUt7WjapRbQTCrB1E\nE7oFvKq19gaCgReb0+/Xaom9pZUj0Frv11rfuv3yCNDdmvE0Nq31Ca31SWvH0YgGAqe01t9prW8A\nWymf8muTtNaJwE/WjqOpaK0vaK3/cfvfV4ETwMPWjar2LC4pYIkWXI5gOvCZtYMQFnkYyKv0+v+A\nQVaKRTQipVQvIAD4u3Ujqb1GTew1lSPQWi8EFt4uRzAbqLYcQXNRm/ILSqmFlP83L7YpY2sMtS03\nIURzpZRyAnYCc++6y3Bfa9TErrV+vJabxgJ/pZkn9prGq5SaCkQAI7UNzDOtw+/XFp0HelR63f32\ne8JGKKXaUJ7UY7XWn1s7nrqw5qyYFlWOQCkVBrwORGqtr1k7HmGxNKCPUspDKWUPjAf2WDkm0UCU\nUgr4BDihtX7f2vHUldUWKCmldgJ3lCPQWtvsFY9S6hTQFvjP7beOaK1ttgSDUupXwErgISAfyNBa\nP2ndqBqWUuoXwAeAHbBea/2OlUNqNEqpLcBwyqsdXgSitdafWDWoRqSUGgIkAf+iPEcBvKG1/qv1\noqo9WXkqhBA2RlaeCiGEjZHELoQQNkYSuxBC2BhJ7EIIYWMksQshhI2RxC6EEDZGErsQQtgY
SexC\nCGFj/h/NTGr1tQ2Y1AAAAABJRU5ErkJggg==\n",
            "text/plain": [
              "<Figure size 432x288 with 1 Axes>"
            ]
          },
          "metadata": {
            "tags": []
          }
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pUb3L7pqLS86",
        "colab_type": "text"
      },
      "source": [
        " ## 任务 6：尝试改进模型的效果\n",
        "\n",
        "看看您能否优化该模型以改进其效果。您可以尝试以下几种做法：\n",
        "\n",
        "* **更改超参数**或**使用其他优化工具**，比如 Adam（通过遵循这些策略，您的准确率可能只会提高一两个百分点）。\n",
        "* **向 `informative_terms` 中添加其他术语。**此数据集有一个完整的词汇表文件，其中包含 30716 个术语，您可以在以下位置找到该文件：https://download.mlcc.google.cn/mledu-datasets/sparse-data-embedding/terms.txt 您可以从该词汇表文件中挑选出其他术语，也可以通过 `categorical_column_with_vocabulary_file` 特征列使用整个词汇表文件。"
      ]
    },
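    {
      "cell_type": "markdown",
      "metadata": {
        "id": "adam-sketch-note",
        "colab_type": "text"
      },
      "source": [
        " One way to start on the first suggestion is to swap the `AdagradOptimizer` used earlier for Adam, keeping the gradient clipping. The cell below is a minimal sketch, not a tuned setting; the learning rate of 0.001 is an assumption you would want to experiment with."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "adam-sketch-code",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Sketch: replace Adagrad with Adam (TF 1.x API); the learning rate\n",
        "# below is a placeholder to tune, not a recommended value.\n",
        "my_optimizer = tf.train.AdamOptimizer(learning_rate=0.001)\n",
        "my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)"
      ],
      "execution_count": 0,
      "outputs": []
    },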
    {
      "cell_type": "code",
      "metadata": {
        "id": "6-b3BqXvLS86",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 53
        },
        "outputId": "eb74dda1-1f6a-452a-97e0-f21ce3ee1807"
      },
      "source": [
        "# Download the vocabulary file.\n",
        "terms_url = 'https://download.mlcc.google.cn/mledu-datasets/sparse-data-embedding/terms.txt'\n",
        "terms_path = tf.keras.utils.get_file(terms_url.split('/')[-1], terms_url)"
      ],
      "execution_count": 13,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Downloading data from https://download.mlcc.google.cn/mledu-datasets/sparse-data-embedding/terms.txt\n",
            "253952/253538 [==============================] - 0s 1us/step\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0jbJlwW5LS8-",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 485
        },
        "outputId": "d5b477e2-8ed8-48a3-a1d6-fb8f2c3926ea"
      },
      "source": [
        "# Create a feature column from \"terms\", using a full vocabulary file.\n",
        "informative_terms = None\n",
        "with io.open(terms_path, 'r', encoding='utf8') as f:\n",
        "  # Convert it to a set first to remove duplicates.\n",
        "  informative_terms = list(set(f.read().split()))\n",
        "  \n",
        "terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_list(key=\"terms\", \n",
        "                                                                                 vocabulary_list=informative_terms)\n",
        "\n",
        "terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)\n",
        "feature_columns = [ terms_embedding_column ]\n",
        "\n",
        "my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)\n",
        "my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)\n",
        "\n",
        "classifier = tf.estimator.DNNClassifier(\n",
        "  feature_columns=feature_columns,\n",
        "  hidden_units=[10,10],\n",
        "  optimizer=my_optimizer\n",
        ")\n",
        "\n",
        "classifier.train(\n",
        "  input_fn=lambda: _input_fn([train_path]),\n",
        "  steps=1000)\n",
        "\n",
        "evaluation_metrics = classifier.evaluate(\n",
        "  input_fn=lambda: _input_fn([train_path]),\n",
        "  steps=1000)\n",
        "print(\"Training set metrics:\")\n",
        "for m in evaluation_metrics:\n",
        "  print(m, evaluation_metrics[m])\n",
        "print(\"---\")\n",
        "\n",
        "evaluation_metrics = classifier.evaluate(\n",
        "  input_fn=lambda: _input_fn([test_path]),\n",
        "  steps=1000)\n",
        "\n",
        "print(\"Test set metrics:\")\n",
        "for m in evaluation_metrics:\n",
        "  print(m, evaluation_metrics[m])\n",
        "print(\"---\")"
      ],
      "execution_count": 14,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Training set metrics:\n",
            "accuracy 0.83532\n",
            "accuracy_baseline 0.5\n",
            "auc 0.9099081\n",
            "auc_precision_recall 0.9069108\n",
            "average_loss 0.3823823\n",
            "label/mean 0.5\n",
            "loss 9.559558\n",
            "precision 0.8268736\n",
            "prediction/mean 0.5115482\n",
            "recall 0.84824\n",
            "global_step 1000\n",
            "---\n",
            "Test set metrics:\n",
            "accuracy 0.81764\n",
            "accuracy_baseline 0.5\n",
            "auc 0.8946147\n",
            "auc_precision_recall 0.89046174\n",
            "average_loss 0.41108462\n",
            "label/mean 0.5\n",
            "loss 10.277115\n",
            "precision 0.812269\n",
            "prediction/mean 0.5075213\n",
            "recall 0.82624\n",
            "global_step 1000\n",
            "---\n"
          ],
          "name": "stdout"
        }
      ]
    },
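    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vocab-file-note",
        "colab_type": "text"
      },
      "source": [
        " Instead of reading the vocabulary file into a Python list, the task also suggests pointing a `categorical_column_with_vocabulary_file` feature column at the file directly. A minimal sketch (assuming `terms_path` from the download cell above; TensorFlow then handles the term-to-index mapping itself):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "vocab-file-code",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Build the sparse column straight from the vocabulary file instead of\n",
        "# an in-memory list; the rest of the pipeline stays unchanged.\n",
        "terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_file(\n",
        "  key=\"terms\", vocabulary_file=terms_path)"
      ],
      "execution_count": 0,
      "outputs": []
    },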
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ew3kwGM-LS9B",
        "colab_type": "text"
      },
      "source": [
        " ## 总结\n",
        "\n",
        "我们可能获得了比我们原来的线性模型更好且具有嵌入的 DNN 解决方案，但线性模型也相当不错，而且训练速度快得多。线性模型的训练速度之所以更快，是因为它们没有太多要更新的参数或要反向传播的层。\n",
        "\n",
        "在有些应用中，线性模型的速度可能非常关键，或者从质量的角度来看，线性模型可能完全够用。在其他领域，DNN 提供的额外模型复杂性和能力可能更重要。在定义模型架构时，请记得要充分探讨您的问题，以便知道自己所处的情形。"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9MquXy9zLS9B",
        "colab_type": "text"
      },
      "source": [
        " ### *可选内容：*在 `embedding_column` 与 `indicator_column` 之间进行权衡\n",
        "\n",
        "从概念上讲，在训练 `LinearClassifier` 或 `DNNClassifier` 时，需要根据实际情况使用稀疏列。TF 提供了两个选项：`embedding_column` 或 `indicator_column`。\n",
        "\n",
        "在训练 LinearClassifier（如**任务 1** 中所示）时，系统在后台使用了 `embedding_column`。正如**任务 2** 中所示，在训练 `DNNClassifier` 时，您必须明确选择 `embedding_column` 或 `indicator_column`。本部分通过一个简单的示例讨论了这两者之间的区别，以及如何在二者之间进行权衡。"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "M_3XuZ_LLS9C",
        "colab_type": "text"
      },
      "source": [
        " 假设我们的稀疏数据包含 `\"great\"`、`\"beautiful\"` 和 `\"excellent\"` 这几个值。由于我们在此处使用的词汇表大小为 $V = 50$，因此第一层中的每个单元（神经元）的权重个数将为 50。我们用 $s$ 表示稀疏输入中的项数。对于此示例稀疏数据，$s = 3$。对于具有 $V$ 个可能值的输入层，带有 $d$ 个单元的隐藏层需要运行一次“矢量 - 矩阵”乘法运算：$(1 \\times V) * (V \\times d)$。此运算会产生 $O(V * d)$ 的计算成本。请注意，此成本与隐藏层中的权重个数成正比，而与 $s$ 无关。\n",
        "\n",
        "如果输入使用 [`indicator_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/indicator_column) 进行了独热编码（长度为 $V$ 的布尔型矢量，存在用 1 表示，其余则为 0），这表示很多零进行了相乘和相加运算。"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "I7mR4Wa2LS9C",
        "colab_type": "text"
      },
      "source": [
        " 当我们通过使用大小为 $d$ 的 [`embedding_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) 获得完全相同的结果时，我们将仅查询与示例输入中存在的 3 个特征 `\"great\"`、`\"beautiful\"` 和 `\"excellent\"` 相对应的嵌入并将这三个嵌入相加：$(1 \\times d) + (1 \\times d) + (1 \\times d)$。由于不存在的特征的权重在“矢量-矩阵”乘法中与 0 相乘，因此对结果没有任何影响；而存在的特征的权重在“矢量-矩阵”乘法中与 1 相乘。因此，将通过嵌入查询获得的权重相加会获得与“矢量-矩阵”乘法相同的结果。\n",
        "\n",
        "当使用嵌入时，计算嵌入查询是一个 $O(s * d)$ 计算；从计算方面而言，它比稀疏数据中的 `indicator_column` 的 $O(V * d)$ 更具成本效益，因为 $s$ 远远小于 $V$。（请注意，这些嵌入是临时学习的结果。在任何指定的训练迭代中，都是当前查询的权重。"
      ]
    },
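    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lookup-check-note",
        "colab_type": "text"
      },
      "source": [
        " The equivalence claimed above is easy to check numerically. The cell below is a small NumPy sketch (the vocabulary size, embedding dimension, and term indices are made up for illustration): it compares the one-hot vector-matrix multiply against summing the looked-up embedding rows."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "lookup-check-code",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import numpy as np\n",
        "\n",
        "V, d = 50, 2                     # vocabulary size, embedding dimension\n",
        "np.random.seed(0)\n",
        "W = np.random.normal(size=(V, d))  # the V x d weight (embedding) matrix\n",
        "\n",
        "present = [4, 17, 23]            # indices of the s = 3 terms present\n",
        "one_hot = np.zeros(V)\n",
        "one_hot[present] = 1.0\n",
        "\n",
        "dense = one_hot.dot(W)           # O(V * d): multiplies mostly zeros\n",
        "lookup = W[present].sum(axis=0)  # O(s * d): sums s embedding rows\n",
        "\n",
        "print(np.allclose(dense, lookup))  # True: the two computations agree"
      ],
      "execution_count": 0,
      "outputs": []
    },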
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "etZ9qf0kLS9D",
        "colab_type": "text"
      },
      "source": [
        " 正如我们在**任务 3** 中看到的，通过在训练 `DNNClassifier` 过程中使用 `embedding_column`，我们的模型学习了特征的低维度表示法，其中点积定义了一个针对目标任务的相似性指标。在本例中，影评中使用的相似术语（例如 `\"great\"` 和 `\"excellent\"`）在嵌入空间中彼此之间距离较近（即具有较大的点积），而相异的术语（例如 `\"great\"` 和 `\"bad\"`）在嵌入空间中彼此之间距离较远（即具有较小的点积）。"
      ]
    }
  ]
}