{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright (c) Microsoft Corporation. All rights reserved.\n",
    "\n",
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Training GenSen on AzureML with SNLI Dataset\n",
    "**GenSen: Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning** [\\[1\\]](#References)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction\n",
    "GenSen is a technique for learning general-purpose, fixed-length representations of sentences via multi-task training. The model combines the benefits of diverse sentence-representation learning objectives into a single multi-task framework. As described in the paper **Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning**, it is \"the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors\" [\\[1\\]](#References). These representations are useful for transfer and low-resource learning. GenSen is trained on several data sources with multiple training objectives, on over 100 million sentences.\n",
    "\n",
    "GenSen yields state-of-the-art results on multiple sentence-similarity datasets, such as MRPC, SICK-R, SICK-E, and STS. The reported results, compared with other models, are as follows [\\[3\\]](#References):\n",
    "\n",
    "| Model | MRPC | SICK-R | SICK-E | STS |\n",
    "| --- | --- | --- | --- | --- |\n",
    "| GenSen (Subramanian et al., 2018) | 78.6/84.4 | 0.888 | 87.8 | 78.9/78.6 |\n",
    "| [InferSent](https://arxiv.org/abs/1705.02364) (Conneau et al., 2017) | 76.2/83.1 | 0.884 | 86.3 | 75.8/75.5 |\n",
    "| [TF-KLD](https://www.aclweb.org/anthology/D13-1090) (Ji and Eisenstein, 2013) | 80.4/85.9 | - | - | - |\n",
    "\n",
    "This notebook serves as an introduction to an end-to-end NLP solution for sentence similarity by demonstrating how to train and tune GenSen on the AzureML platform. We show the advantages of AzureML when training large NLP models with GPU.\n",
    "\n",
    "For more information on **AzureML**, see these resources:\n",
    "* [Quickstart notebook](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-create-workspace-with-python)\n",
    "* [Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Background: Sequence-to-Sequence Learning\n",
    "![Sequence to sequence learning examples: machine translation (left) and constituent parsing (right)](https://nlpbp.blob.core.windows.net/images/seq2seq.png)**Sequence to sequence learning examples: machine translation (left) and constituent parsing (right)**\n",
    "\n",
    "The GenSen model is most similar to that of Luong et al. (2015) [\\[4\\]](#References), who train a many-to-many **sequence-to-sequence** model on a diverse set of weakly related tasks that includes machine translation, constituency parsing, image captioning, sequence autoencoding, and intra-sentence skip-thoughts. \n",
    "\n",
    "Sequence-to-sequence learning, or seq2seq, aims to directly model the conditional probability $p(y|x)$ of mapping an input sequence, $x_1,...,x_n$, into an output sequence, $y_1,...,y_m$. This is done using an encoder-decoder framework. As illustrated in the figure above, the *encoder* computes a representation $s$ for each input sequence, which the *decoder* uses to generate the output sequence. This decomposes the conditional probability as [\\[4\\]](#References):\n",
    "$$\n",
    "\\log p(y|x)=\\sum_{j=1}^{m} \\log p(y_j|y_{<j}, x, s)\n",
    "$$\n",
    "\n",
    "It is worth noting that the GenSen model deviates from Luong et al.'s seq2seq approach in two key ways. First, Luong et al.'s model uses an attention mechanism, so it does not produce a fixed-length vector representation for each sentence, whereas GenSen does. Second, Luong et al. optimize for improvements on the same tasks the model is trained on, whereas GenSen optimizes for transferability to different tasks and domains. [\\[1\\]](#References)"
   ]
  },
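  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a concrete illustration of the decomposition (with hypothetical numbers), suppose the decoder assigns conditional probabilities $0.5$, $0.25$, and $0.8$ to the three tokens of an output sequence, given the input and the encoder representation $s$. The sequence log-probability is then simply the sum of the per-token log-probabilities:\n",
    "$$\n",
    "\\log p(y|x) = \\log 0.5 + \\log 0.25 + \\log 0.8 \\approx -0.693 - 1.386 - 0.223 = -2.302\n",
    "$$"
   ]
  },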
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Azure ML Compute vs. Local\n",
    "We ran a comparative study to make it easier to choose between a GPU-enabled Azure VM \n",
    "and Azure ML Compute. The table below shows the cost vs. performance trade-off for \n",
    "each choice. With distributed training on AzureML, the model converges faster and reaches a lower training loss in a similar amount of training time.\n",
    "\n",
    "* The \"Azure VM\" column refers to the running time of the [gensen local](gensen_local.ipynb) notebook. All the other columns refer to the current notebook.\n",
    "* Both the Azure VM and each Azure ML Compute node are Standard_NC6 with 1 NVIDIA Tesla K80 GPU with 12 GB GPU memory. \n",
    "* Total time in the table is the training time plus the setup time.\n",
    "* Cost is the estimated cost of running the Azure ML Compute Job or the VM up-time.\n",
    "\n",
    "**Please note:** These were the estimated costs of running these notebooks as of July 1, 2019. Please \n",
    "look at the [Azure Pricing Calculator](https://azure.microsoft.com/en-us/pricing/calculator/) to see the most up to date pricing information. \n",
    "\n",
    "| |Azure VM| AML 1 Node| AML 2 Nodes | AML 4 Nodes | AML 8 Nodes|\n",
    "|---|---|---|---|---|---|\n",
    "|Training Loss​|4.91​|4.81​|4.78​|4.77​|4.58​|\n",
    "|Total Time​|1h 05m|1h 54m|1h 44m​|1h 26m​|1h 07m​|\n",
    "|Cost|\\$1.12​|\\$2.71​|\\$4.68​|\\$7.9​|\\$12.1​|"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Table of Contents\n",
    "0. [Global Settings](#0-Global-Settings)\n",
    "1. [Data Loading and Preprocessing](#1-Data-Loading-and-Preprocessing)    \n",
    "    * 1.1. [Load SNLI](#1.1-Load-SNLI)  \n",
    "    * 1.2. [Tokenize](#1.2-Tokenize)  \n",
    "    * 1.3. [Preprocess](#1.3-Preprocess)  \n",
    "    * 1.4. [Upload to Azure Blob Storage](#1.4-Upload-to-Azure-Blob-Storage)  \n",
    "2. [Train GenSen with Distributed Pytorch and Horovod on AzureML](#2-Train-GenSen-with-Distributed-Pytorch-and-Horovod-on-AzureML)  \n",
    "    * 2.1 [Create or Attach a Remote Compute Target](#2.1-Create-or-Attach-a-Remote-Compute-Target)  \n",
    "    * 2.2. [Prepare the Training Script](#2.2-Prepare-the-Training-Script)  \n",
    "    * 2.3. [Define the Estimator and Experiment](#2.3-Define-the-Estimator-and-Experiment)  \n",
    "        * 2.3.1 [Create a PyTorch Estimator](#2.3.1-Create-a-PyTorch-Estimator)\n",
    "        * 2.3.2 [Create the Experiment](#2.3.2-Create-the-Experiment)\n",
    "    * 2.4. [Submit the Training Job to the Compute Target](#2.4-Submit-the-Training-Job-to-the-Compute-Target)\n",
    "        * 2.4.1 [Monitor the Run](#2.4.1-Monitor-the-Run)\n",
    "        * 2.4.2 [Interpret the Training Results](#2.4.2-Interpret-the-Training-Results)\n",
    "3. [Tune Model Hyperparameters](#3-Tune-Model-Hyperparameters)\n",
    "    * 3.1 [Start a Hyperparameter Sweep](#3.1-Start-a-Hyperparameter-Sweep)\n",
    "    * 3.2 [Monitor HyperDrive Runs](#3.2-Monitor-HyperDrive-Runs)\n",
    "    * 3.3 [Find the Best Model](#3.3-Find-the-Best-Model)\n",
    "- [References](#References)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 0 Global Settings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 108,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "System version: 3.6.8 |Anaconda, Inc.| (default, Dec 29 2018, 19:04:46) \n",
      "[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]\n",
      "Azure ML SDK Version: 1.0.48\n",
      "Pandas version: 0.23.4\n"
     ]
    }
   ],
   "source": [
    "import sys\n",
    "import time\n",
    "import os\n",
    "import pandas as pd\n",
    "import shutil\n",
    "import papermill as pm\n",
    "import scrapbook as sb\n",
    "\n",
    "sys.path.append(\"../../\")\n",
    "from utils_nlp.dataset import snli, preprocess, Split\n",
    "from utils_nlp.azureml import azureml_utils\n",
    "from utils_nlp.models.gensen.preprocess_utils import gensen_preprocess\n",
    "\n",
    "import azureml as aml\n",
    "import azureml.train.hyperdrive as hd\n",
    "from azureml.telemetry import set_diagnostics_collection\n",
    "import azureml.data\n",
    "from azureml.data.azure_storage_datastore import AzureFileDatastore\n",
    "from azureml.core.compute import ComputeTarget, AmlCompute\n",
    "from azureml.core.compute_target import ComputeTargetException\n",
    "from azureml.core import Experiment, get_run\n",
    "from azureml.core.runconfig import MpiConfiguration\n",
    "from azureml.train.dnn import PyTorch\n",
    "from azureml.train.estimator import Estimator\n",
    "from azureml.train.hyperdrive import (\n",
    "    RandomParameterSampling,\n",
    "    BanditPolicy,\n",
    "    HyperDriveConfig,\n",
    "    uniform,\n",
    "    PrimaryMetricGoal,\n",
    ")\n",
    "from azureml.widgets import RunDetails\n",
    "\n",
    "print(\"System version: {}\".format(sys.version))\n",
    "print(\"Azure ML SDK Version:\", aml.core.VERSION)\n",
    "print(\"Pandas version: {}\".format(pd.__version__))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 109,
   "metadata": {
    "tags": [
     "parameters"
    ]
   },
   "outputs": [],
   "source": [
    "# Model configuration\n",
    "NROWS = None\n",
    "CACHE_DIR = \"./temp\"\n",
    "AZUREML_CONFIG_PATH = \"./.azureml\"\n",
    "AZUREML_VERBOSE = False  # Prints verbose azureml logs when True\n",
    "MAX_EPOCH = 2  # defaults to None\n",
    "TRAIN_SCRIPT = \"gensen_train.py\"\n",
    "CONFIG_PATH = \"gensen_config.json\"\n",
    "EXPERIMENT_NAME = \"NLP-SS-GenSen-deepdive\"\n",
    "UTIL_NLP_PATH = \"../../utils_nlp\"\n",
    "MAX_TOTAL_RUNS = 8\n",
    "MAX_CONCURRENT_RUNS = 4"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this notebook we use the Azure Machine Learning Python SDK to facilitate remote training and computation. To get started, we must first initialize an AzureML [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace), a centralized resource for managing experiment runs, compute resources, datastores, and other machine learning artifacts on the cloud. \n",
    "\n",
    "The following cells set up the connection to your [Azure Machine Learning service Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace). You can choose to connect to an existing workspace or create a new one. \n",
    "\n",
    "**To access an existing workspace:**\n",
    "1. If you have a `config.json` file, you do not need to provide the workspace information; you only need to point the `AZUREML_CONFIG_PATH` variable defined above to the directory that contains the file.\n",
    "2. Otherwise, you will need to supply the following:\n",
    "    * The name of your workspace\n",
    "    * Your subscription id\n",
    "    * The resource group name\n",
    "\n",
    "**To create a new workspace:**\n",
    "\n",
    "Set the following information:\n",
    "* A name for your workspace\n",
    "* Your subscription id\n",
    "* The resource group name\n",
    "* [Azure region](https://azure.microsoft.com/en-us/global-infrastructure/regions/) to create the workspace in, such as `eastus2`. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 110,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Azure resources\n",
    "subscription_id = \"YOUR_SUBSCRIPTION_ID\"\n",
    "resource_group = \"YOUR_RESOURCE_GROUP_NAME\"  \n",
    "workspace_name = \"YOUR_WORKSPACE_NAME\"  \n",
    "workspace_region = \"YOUR_WORKSPACE_REGION\"  # e.g. eastus, eastus2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "ws = azureml_utils.get_or_create_workspace(\n",
    "    config_path=AZUREML_CONFIG_PATH,\n",
    "    subscription_id=subscription_id,\n",
    "    resource_group=resource_group,\n",
    "    workspace_name=workspace_name,\n",
    "    workspace_region=workspace_region,\n",
    ")\n",
    "\n",
    "print(\n",
    "    \"Workspace name: \" + ws.name,\n",
    "    \"Azure region: \" + ws.location,\n",
    "    \"Subscription id: \" + ws.subscription_id,\n",
    "    \"Resource group: \" + ws.resource_group,\n",
    "    sep=\"\\n\",\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Opt in to diagnostics collection for better experience, quality, and security of future releases."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 114,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Turning diagnostics collection on. \n"
     ]
    }
   ],
   "source": [
    "set_diagnostics_collection(send_diagnostics=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 1 Data Loading and Preprocessing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We use the [SNLI](https://nlp.stanford.edu/projects/snli/) dataset in this example.\n",
    "\n",
    "Note: The datasets used in the original paper can be downloaded by running the bash script [here](https://github.com/Maluuba/gensen/blob/master/get_data.sh). Training on the original datasets will reproduce the results in the [paper](https://arxiv.org/abs/1804.00079), but **will take about 20 hours of training time**. For the purposes of this example we use SNLI, a subset of the original data, as the only training dataset."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1.1 Load SNLI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 115,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 92.3k/92.3k [00:34<00:00, 2.69kKB/s]\n"
     ]
    }
   ],
   "source": [
    "data_dir = os.path.join(CACHE_DIR, \"data\")\n",
    "train = snli.load_pandas_df(data_dir, file_split=Split.TRAIN, nrows=NROWS)\n",
    "dev = snli.load_pandas_df(data_dir, file_split=Split.DEV, nrows=NROWS)\n",
    "test = snli.load_pandas_df(data_dir, file_split=Split.TEST, nrows=NROWS)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 116,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>gold_label</th>\n",
       "      <th>sentence1_binary_parse</th>\n",
       "      <th>sentence2_binary_parse</th>\n",
       "      <th>sentence1_parse</th>\n",
       "      <th>sentence2_parse</th>\n",
       "      <th>sentence1</th>\n",
       "      <th>sentence2</th>\n",
       "      <th>captionID</th>\n",
       "      <th>pairID</th>\n",
       "      <th>label1</th>\n",
       "      <th>label2</th>\n",
       "      <th>label3</th>\n",
       "      <th>label4</th>\n",
       "      <th>label5</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>neutral</td>\n",
       "      <td>( ( ( A person ) ( on ( a horse ) ) ) ( ( jump...</td>\n",
       "      <td>( ( A person ) ( ( is ( ( training ( his horse...</td>\n",
       "      <td>(ROOT (S (NP (NP (DT A) (NN person)) (PP (IN o...</td>\n",
       "      <td>(ROOT (S (NP (DT A) (NN person)) (VP (VBZ is) ...</td>\n",
       "      <td>A person on a horse jumps over a broken down a...</td>\n",
       "      <td>A person is training his horse for a competition.</td>\n",
       "      <td>3416050480.jpg#4</td>\n",
       "      <td>3416050480.jpg#4r1n</td>\n",
       "      <td>neutral</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>contradiction</td>\n",
       "      <td>( ( ( A person ) ( on ( a horse ) ) ) ( ( jump...</td>\n",
       "      <td>( ( A person ) ( ( ( ( is ( at ( a diner ) ) )...</td>\n",
       "      <td>(ROOT (S (NP (NP (DT A) (NN person)) (PP (IN o...</td>\n",
       "      <td>(ROOT (S (NP (DT A) (NN person)) (VP (VBZ is) ...</td>\n",
       "      <td>A person on a horse jumps over a broken down a...</td>\n",
       "      <td>A person is at a diner, ordering an omelette.</td>\n",
       "      <td>3416050480.jpg#4</td>\n",
       "      <td>3416050480.jpg#4r1c</td>\n",
       "      <td>contradiction</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>entailment</td>\n",
       "      <td>( ( ( A person ) ( on ( a horse ) ) ) ( ( jump...</td>\n",
       "      <td>( ( A person ) ( ( ( ( is outdoors ) , ) ( on ...</td>\n",
       "      <td>(ROOT (S (NP (NP (DT A) (NN person)) (PP (IN o...</td>\n",
       "      <td>(ROOT (S (NP (DT A) (NN person)) (VP (VBZ is) ...</td>\n",
       "      <td>A person on a horse jumps over a broken down a...</td>\n",
       "      <td>A person is outdoors, on a horse.</td>\n",
       "      <td>3416050480.jpg#4</td>\n",
       "      <td>3416050480.jpg#4r1e</td>\n",
       "      <td>entailment</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>neutral</td>\n",
       "      <td>( Children ( ( ( smiling and ) waving ) ( at c...</td>\n",
       "      <td>( They ( are ( smiling ( at ( their parents ) ...</td>\n",
       "      <td>(ROOT (NP (S (NP (NNP Children)) (VP (VBG smil...</td>\n",
       "      <td>(ROOT (S (NP (PRP They)) (VP (VBP are) (VP (VB...</td>\n",
       "      <td>Children smiling and waving at camera</td>\n",
       "      <td>They are smiling at their parents</td>\n",
       "      <td>2267923837.jpg#2</td>\n",
       "      <td>2267923837.jpg#2r1n</td>\n",
       "      <td>neutral</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>entailment</td>\n",
       "      <td>( Children ( ( ( smiling and ) waving ) ( at c...</td>\n",
       "      <td>( There ( ( are children ) present ) )</td>\n",
       "      <td>(ROOT (NP (S (NP (NNP Children)) (VP (VBG smil...</td>\n",
       "      <td>(ROOT (S (NP (EX There)) (VP (VBP are) (NP (NN...</td>\n",
       "      <td>Children smiling and waving at camera</td>\n",
       "      <td>There are children present</td>\n",
       "      <td>2267923837.jpg#2</td>\n",
       "      <td>2267923837.jpg#2r1e</td>\n",
       "      <td>entailment</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "      gold_label                             sentence1_binary_parse  \\\n",
       "0        neutral  ( ( ( A person ) ( on ( a horse ) ) ) ( ( jump...   \n",
       "1  contradiction  ( ( ( A person ) ( on ( a horse ) ) ) ( ( jump...   \n",
       "2     entailment  ( ( ( A person ) ( on ( a horse ) ) ) ( ( jump...   \n",
       "3        neutral  ( Children ( ( ( smiling and ) waving ) ( at c...   \n",
       "4     entailment  ( Children ( ( ( smiling and ) waving ) ( at c...   \n",
       "\n",
       "                              sentence2_binary_parse  \\\n",
       "0  ( ( A person ) ( ( is ( ( training ( his horse...   \n",
       "1  ( ( A person ) ( ( ( ( is ( at ( a diner ) ) )...   \n",
       "2  ( ( A person ) ( ( ( ( is outdoors ) , ) ( on ...   \n",
       "3  ( They ( are ( smiling ( at ( their parents ) ...   \n",
       "4             ( There ( ( are children ) present ) )   \n",
       "\n",
       "                                     sentence1_parse  \\\n",
       "0  (ROOT (S (NP (NP (DT A) (NN person)) (PP (IN o...   \n",
       "1  (ROOT (S (NP (NP (DT A) (NN person)) (PP (IN o...   \n",
       "2  (ROOT (S (NP (NP (DT A) (NN person)) (PP (IN o...   \n",
       "3  (ROOT (NP (S (NP (NNP Children)) (VP (VBG smil...   \n",
       "4  (ROOT (NP (S (NP (NNP Children)) (VP (VBG smil...   \n",
       "\n",
       "                                     sentence2_parse  \\\n",
       "0  (ROOT (S (NP (DT A) (NN person)) (VP (VBZ is) ...   \n",
       "1  (ROOT (S (NP (DT A) (NN person)) (VP (VBZ is) ...   \n",
       "2  (ROOT (S (NP (DT A) (NN person)) (VP (VBZ is) ...   \n",
       "3  (ROOT (S (NP (PRP They)) (VP (VBP are) (VP (VB...   \n",
       "4  (ROOT (S (NP (EX There)) (VP (VBP are) (NP (NN...   \n",
       "\n",
       "                                           sentence1  \\\n",
       "0  A person on a horse jumps over a broken down a...   \n",
       "1  A person on a horse jumps over a broken down a...   \n",
       "2  A person on a horse jumps over a broken down a...   \n",
       "3              Children smiling and waving at camera   \n",
       "4              Children smiling and waving at camera   \n",
       "\n",
       "                                           sentence2         captionID  \\\n",
       "0  A person is training his horse for a competition.  3416050480.jpg#4   \n",
       "1      A person is at a diner, ordering an omelette.  3416050480.jpg#4   \n",
       "2                  A person is outdoors, on a horse.  3416050480.jpg#4   \n",
       "3                  They are smiling at their parents  2267923837.jpg#2   \n",
       "4                         There are children present  2267923837.jpg#2   \n",
       "\n",
       "                pairID         label1 label2 label3 label4 label5  \n",
       "0  3416050480.jpg#4r1n        neutral    NaN    NaN    NaN    NaN  \n",
       "1  3416050480.jpg#4r1c  contradiction    NaN    NaN    NaN    NaN  \n",
       "2  3416050480.jpg#4r1e     entailment    NaN    NaN    NaN    NaN  \n",
       "3  2267923837.jpg#2r1n        neutral    NaN    NaN    NaN    NaN  \n",
       "4  2267923837.jpg#2r1e     entailment    NaN    NaN    NaN    NaN  "
      ]
     },
     "execution_count": 116,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1.2 Tokenize"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here we clean the dataframes, do lowercase standardization, and tokenize the text using the [NLTK](https://www.nltk.org/) library."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 117,
   "metadata": {},
   "outputs": [],
   "source": [
    "def clean_and_tokenize(df):\n",
    "    df = snli.clean_cols(df)\n",
    "    df = snli.clean_rows(df)\n",
    "    df = preprocess.to_lowercase(df)\n",
    "    df = preprocess.to_nltk_tokens(df)\n",
    "    return df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `clean_and_tokenize` function is relatively slow; running the following cell takes around 5 to 10 minutes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 118,
   "metadata": {},
   "outputs": [],
   "source": [
    "train = clean_and_tokenize(train)\n",
    "dev = clean_and_tokenize(dev)\n",
    "test = clean_and_tokenize(test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 119,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>score</th>\n",
       "      <th>sentence1</th>\n",
       "      <th>sentence2</th>\n",
       "      <th>sentence1_tokens</th>\n",
       "      <th>sentence2_tokens</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>neutral</td>\n",
       "      <td>a person on a horse jumps over a broken down a...</td>\n",
       "      <td>a person is training his horse for a competition.</td>\n",
       "      <td>[a, person, on, a, horse, jumps, over, a, brok...</td>\n",
       "      <td>[a, person, is, training, his, horse, for, a, ...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>contradiction</td>\n",
       "      <td>a person on a horse jumps over a broken down a...</td>\n",
       "      <td>a person is at a diner, ordering an omelette.</td>\n",
       "      <td>[a, person, on, a, horse, jumps, over, a, brok...</td>\n",
       "      <td>[a, person, is, at, a, diner, ,, ordering, an,...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>entailment</td>\n",
       "      <td>a person on a horse jumps over a broken down a...</td>\n",
       "      <td>a person is outdoors, on a horse.</td>\n",
       "      <td>[a, person, on, a, horse, jumps, over, a, brok...</td>\n",
       "      <td>[a, person, is, outdoors, ,, on, a, horse, .]</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>neutral</td>\n",
       "      <td>children smiling and waving at camera</td>\n",
       "      <td>they are smiling at their parents</td>\n",
       "      <td>[children, smiling, and, waving, at, camera]</td>\n",
       "      <td>[they, are, smiling, at, their, parents]</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>entailment</td>\n",
       "      <td>children smiling and waving at camera</td>\n",
       "      <td>there are children present</td>\n",
       "      <td>[children, smiling, and, waving, at, camera]</td>\n",
       "      <td>[there, are, children, present]</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "           score                                          sentence1  \\\n",
       "0        neutral  a person on a horse jumps over a broken down a...   \n",
       "1  contradiction  a person on a horse jumps over a broken down a...   \n",
       "2     entailment  a person on a horse jumps over a broken down a...   \n",
       "3        neutral              children smiling and waving at camera   \n",
       "4     entailment              children smiling and waving at camera   \n",
       "\n",
       "                                           sentence2  \\\n",
       "0  a person is training his horse for a competition.   \n",
       "1      a person is at a diner, ordering an omelette.   \n",
       "2                  a person is outdoors, on a horse.   \n",
       "3                  they are smiling at their parents   \n",
       "4                         there are children present   \n",
       "\n",
       "                                    sentence1_tokens  \\\n",
       "0  [a, person, on, a, horse, jumps, over, a, brok...   \n",
       "1  [a, person, on, a, horse, jumps, over, a, brok...   \n",
       "2  [a, person, on, a, horse, jumps, over, a, brok...   \n",
       "3       [children, smiling, and, waving, at, camera]   \n",
       "4       [children, smiling, and, waving, at, camera]   \n",
       "\n",
       "                                    sentence2_tokens  \n",
       "0  [a, person, is, training, his, horse, for, a, ...  \n",
       "1  [a, person, is, at, a, diner, ,, ordering, an,...  \n",
       "2      [a, person, is, outdoors, ,, on, a, horse, .]  \n",
       "3           [they, are, smiling, at, their, parents]  \n",
       "4                    [there, are, children, present]  "
      ]
     },
     "execution_count": 119,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1.3 Preprocess\n",
    "We format our data in a specific way so that the GenSen model can ingest it. We do this by:\n",
    "* Saving the tokens for each split in a `snli_1.0_{split}.txt.clean` file, with the sentence pairs and scores tab-separated and the tokens separated by a single space. Since some of the samples have invalid scores (\"-\"), we filter those out and save them separately in a `snli_1.0_{split}.txt.clean.noblank` file.\n",
    "* Saving the tokenized sentence and labels separately, in the form `snli_1.0_{split}.txt.s1.tok` or `snli_1.0_{split}.txt.s2.tok` or `snli_1.0_{split}.txt.lab`."
   ]
  },
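  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hypothetical illustration of the `snli_1.0_{split}.txt.clean` format described above, a single line would look like the following, where `<tab>` stands for a literal tab character and tokens within a sentence are separated by single spaces:\n",
    "\n",
    "```\n",
    "children smiling and waving at camera<tab>there are children present<tab>entailment\n",
    "```"
   ]
  },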
  {
   "cell_type": "code",
   "execution_count": 120,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing input data to ./temp/data/clean/snli_1.0\n"
     ]
    }
   ],
   "source": [
    "preprocessed_data_dir = gensen_preprocess(train, dev, test, data_dir)\n",
    "print(\"Writing input data to {}\".format(preprocessed_data_dir))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1.4 Upload to Azure Blob Storage\n",
    "We upload the data from the local machine into the datastore so that it can be accessed for remote training. The datastore is a reference that points to a storage account, e.g. the Azure Blob Storage service. It can be attached to an AzureML workspace to facilitate data management operations such as uploading/downloading data or interacting with data from remote compute targets.\n",
    "\n",
    "**Note: If you already have the preprocessed files under `clean/snli_1.0/` in your default datastore, you DO NOT need to redo this section.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 121,
   "metadata": {},
   "outputs": [],
   "source": [
    "ds = ws.get_default_datastore()\n",
    "\n",
    "if AZUREML_VERBOSE:\n",
    "    print(\"Datastore type: {}\".format(ds.datastore_type))\n",
    "    print(\"Datastore account: {}\".format(ds.account_name))\n",
    "    print(\"Datastore container: {}\".format(ds.container_name))\n",
    "    print(\"Data reference: {}\".format(ds.as_mount()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 122,
   "metadata": {},
   "outputs": [],
   "source": [
    "_ = ds.upload(\n",
    "    src_dir=os.path.join(data_dir, \"clean/snli_1.0\"),\n",
    "    overwrite=False,\n",
    "    show_progress=AZUREML_VERBOSE,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 2 Train GenSen with Distributed PyTorch and Horovod on AzureML\n",
    "In this tutorial, we train a GenSen model with PyTorch on AzureML using distributed training across a GPU cluster.\n",
    "\n",
    "After creating the workspace and setting up the development environment, training a model in Azure Machine Learning involves the following steps:\n",
    "1. Creating a remote compute target\n",
    "2. Preparing the training data and uploading it to datastore (Note that this was done in Section 1.4)\n",
    "3. Preparing the training script\n",
    "4. Creating Estimator and Experiment objects\n",
    "5. Submitting the Estimator to an Experiment attached to the AzureML workspace"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.1 Create or Attach a Remote Compute Target\n",
    "We create and attach a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training the model. Here we use the AzureML-managed compute target ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)) as our remote training compute resource. Our cluster autoscales from 0 up to 8 `STANDARD_NC6` GPU nodes.\n",
    "\n",
    "Creating and configuring the AmlCompute cluster takes approximately 5 minutes the first time around. Once a cluster with the given configuration is created, it does not need to be created again.\n",
    "\n",
    "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Read more about the default limits and how to request more quota [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 123,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Found existing compute target gensen-aml\n"
     ]
    }
   ],
   "source": [
    "cluster_name = \"gensen-aml\"\n",
    "\n",
    "try:\n",
    "    compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
    "    print(\"Found existing compute target {}\".format(cluster_name))\n",
    "except ComputeTargetException:\n",
    "    print(\"Creating a new compute target {}...\".format(cluster_name))\n",
    "    compute_config = AmlCompute.provisioning_configuration(\n",
    "        vm_size=\"STANDARD_NC6\", max_nodes=8\n",
    "    )\n",
    "    # create the cluster\n",
    "    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
    "    compute_target.wait_for_completion(show_output=AZUREML_VERBOSE)\n",
    "\n",
    "if AZUREML_VERBOSE:\n",
    "    print(compute_target.get_status().serialize())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.2 Prepare the Training Script\n",
    "The training process involves the following steps:\n",
    "1. Create or load the dataset vocabulary\n",
    "2. Train on the training dataset in minibatches (batch size = 48)\n",
    "3. Evaluate on the validation dataset every 10 epochs\n",
    "4. Watch for a local minimum in the validation loss\n",
    "5. Save the best model and stop the training process\n",
    "\n",
    "In this section, we define the training script and move all necessary dependencies to `project_folder`, which will eventually be submitted to the remote compute target. Note that the size of the folder cannot exceed 300 MB, so large dependencies such as pre-trained embeddings must be accessed from the datastore."
   ]
  },
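  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The early-stopping logic in steps 3-5 can be sketched as follows. This is an illustrative simplification; the function and parameter names are hypothetical, not taken from `gensen_train.py`:\n",
    "\n",
    "```python\n",
    "def train_with_early_stopping(epochs, validate, save_model, eval_every=10, patience=3):\n",
    "    # Stop once the validation loss stops improving (a local minimum),\n",
    "    # keeping the best checkpoint seen so far.\n",
    "    best_loss = float('inf')\n",
    "    bad_evals = 0\n",
    "    for epoch in range(epochs):\n",
    "        # ... one epoch of minibatch updates would run here ...\n",
    "        if (epoch + 1) % eval_every == 0:\n",
    "            val_loss = validate(epoch)\n",
    "            if val_loss < best_loss:\n",
    "                best_loss, bad_evals = val_loss, 0\n",
    "                save_model(epoch)  # save the best model\n",
    "            else:\n",
    "                bad_evals += 1\n",
    "                if bad_evals >= patience:\n",
    "                    break  # validation loss has stopped improving\n",
    "    return best_loss\n",
    "```"
   ]
  },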
  {
   "cell_type": "code",
   "execution_count": 124,
   "metadata": {},
   "outputs": [],
   "source": [
    "project_folder = os.path.join(CACHE_DIR, \"gensen\")\n",
    "os.makedirs(project_folder, exist_ok=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The script for distributed GenSen training is provided at `./gensen_train.py`.\n",
    "\n",
    "In this example, we use MLflow to log metrics. We also use the [AzureML-Mlflow](https://pypi.org/project/azureml-mlflow/) package to persist these metrics to the AzureML workspace. This is done with no change to the provided training script! Note that logging is done for loss *per minibatch*."
   ]
  },
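  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For illustration, per-minibatch loss logging with MLflow might look like the sketch below. The minibatch stream and training step are toy stand-ins, not code from `gensen_train.py`:\n",
    "\n",
    "```python\n",
    "import mlflow\n",
    "\n",
    "minibatches = [0.9, 0.7, 0.5]  # toy data: pretend each item is one batch\n",
    "def training_step(batch):\n",
    "    return batch  # hypothetical helper: would run forward/backward and return the loss\n",
    "\n",
    "# Log the training loss once per minibatch. With the azureml-mlflow\n",
    "# package installed, the same mlflow calls persist the metrics to the\n",
    "# AzureML workspace.\n",
    "with mlflow.start_run() as run:\n",
    "    for step, batch in enumerate(minibatches):\n",
    "        mlflow.log_metric('train_loss', training_step(batch), step=step)\n",
    "```"
   ]
  },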
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copy the training script `gensen_train.py` and config file `gensen_config.json` into the project folder."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 125,
   "metadata": {},
   "outputs": [],
   "source": [
    "utils_folder = os.path.join(project_folder, \"utils_nlp\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 126,
   "metadata": {},
   "outputs": [],
   "source": [
    "_ = shutil.copytree(UTIL_NLP_PATH, utils_folder)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 133,
   "metadata": {},
   "outputs": [],
   "source": [
    "_ = shutil.copy(TRAIN_SCRIPT, project_folder)\n",
    "_ = shutil.copy(CONFIG_PATH, project_folder)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.3 Define the Estimator and Experiment"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.3.1 Create a PyTorch Estimator\n",
    "The Azure ML SDK's PyTorch Estimator allows us to submit PyTorch training jobs for both single-node and distributed runs. For more information on the PyTorch estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-pytorch).\n",
    "\n",
    "Note that `gensen_config.json` defines all the hyperparameters and paths used when training the GenSen model. The trained model is saved in the `models` folder on Azure Blob Storage. **Remember to clean the `models` folder before saving new models.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 128,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "if MAX_EPOCH:\n",
    "    script_params = {\n",
    "        \"--config\": CONFIG_PATH,\n",
    "        \"--data_folder\": ws.get_default_datastore().as_mount(),\n",
    "        \"--max_epoch\": MAX_EPOCH,\n",
    "    }\n",
    "else:\n",
    "    script_params = {\n",
    "        \"--config\": CONFIG_PATH,\n",
    "        \"--data_folder\": ws.get_default_datastore().as_mount(),\n",
    "    }\n",
    "\n",
    "estimator = PyTorch(\n",
    "    source_directory=project_folder,\n",
    "    script_params=script_params,\n",
    "    compute_target=compute_target,\n",
    "    entry_script=TRAIN_SCRIPT,\n",
    "    node_count=2,\n",
    "    process_count_per_node=1,\n",
    "    distributed_training=MpiConfiguration(),\n",
    "    use_gpu=True,\n",
    "    framework_version=\"1.1\",\n",
    "    conda_packages=[\"scikit-learn=0.20.3\", \"h5py\", \"nltk\"],\n",
    "    pip_packages=[\"azureml-mlflow>=1.0.43.1\", \"numpy>=1.16.0\"],\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This Estimator specifies that the training script will run on `2` nodes, with one worker per node. In order to execute a distributed run on GPU, we must set `use_gpu=True` and pass `distributed_training=MpiConfiguration()` to use MPI/Horovod. PyTorch, Horovod, and other necessary dependencies are installed automatically. If the training script makes use of packages that are not already defined in `.azureml/conda_dependencies.yml`, we must explicitly tell the estimator to install them via the constructor's `pip_packages` or `conda_packages` parameters.\n",
    "\n",
    "Note that if the estimator is being created for the first time, this step will take longer to run because the conda dependencies found under `.azureml/conda_dependencies.yml` must be installed from scratch. After the first run, it will use the existing conda environment and run the code directly. \n",
    "\n",
    "Training takes around **2 hours** with the default `max_epoch=None`, in which case training stops once a local minimum in the validation loss has been found. Users can instead specify the number of epochs to train.\n",
    "\n",
    "**Requirements:**\n",
    "- python=3.6.2\n",
    "- numpy=1.15.1\n",
    "- numpy-base=1.15.1\n",
    "- pip=10.0.1\n",
    "- python=3.6.6\n",
    "- python-dateutil=2.7.3\n",
    "- scikit-learn=0.20.3\n",
    "- azureml-defaults\n",
    "- h5py\n",
    "- nltk"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.3.2 Create the Experiment\n",
    "Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in the AzureML workspace for this tutorial."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 129,
   "metadata": {},
   "outputs": [],
   "source": [
    "experiment_name = EXPERIMENT_NAME\n",
    "experiment = Experiment(ws, name=experiment_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.4 Submit the Training Job to the Compute Target\n",
    "We can run the experiment by simply submitting the Estimator object to the compute target. Note that this call is asynchronous."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 130,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "run = experiment.submit(estimator)\n",
    "if AZUREML_VERBOSE:\n",
    "    print(run)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.4.1 Monitor the Run\n",
    "We can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes. The widget automatically plots and visualizes the loss metric that we logged to the AzureML workspace."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 131,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9d4a6faff9e449cbab55e8f73b557f2f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "_UserRunWidget(widget_settings={'childWidgetDisplay': 'popup', 'send_telemetry': True, 'log_level': 'INFO', 's…"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "RunDetails(run).show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 96,
   "metadata": {},
   "outputs": [],
   "source": [
    "_ = run.wait_for_completion(show_output=AZUREML_VERBOSE) # Block until the script has completed training."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.4.2 Interpret the Training Results\n",
    "The following chart shows the model validation loss with different node configurations on AmlCompute. We find that the minimum validation loss decreases as the number of nodes increases; that is, the performance scales with the number of nodes in the cluster.\n",
    "\n",
    "| Standard_NC6 | AML_1node | AML_2nodes | AML_4nodes | AML_8nodes |\n",
    "| --- | --- | --- | --- | --- |\n",
    "| min_val_loss | 4.81 | 4.78 | 4.77 | 4.58 |\n",
    "\n",
    "We also observe common tradeoffs associated with distributed training. We make use of [Horovod](https://github.com/horovod/horovod), a distributed training tool for many popular deep learning frameworks that parallelizes work across the nodes in the cluster. In theory, distributed training decreases the time it takes for the model to converge, but it also adds inter-node communication overhead. This overhead becomes negligible as the dataset grows, but it is worth keeping in mind when choosing a node configuration for smaller datasets."
   ]
  },
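  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A toy back-of-the-envelope model makes the tradeoff concrete: per-epoch time is compute (which parallelizes across nodes) plus a fixed per-step allreduce cost (which does not). All numbers below are illustrative, not measured:\n",
    "\n",
    "```python\n",
    "def epoch_time(compute_s, nodes, comm_s_per_step, steps):\n",
    "    # Compute splits across nodes; communication cost does not.\n",
    "    return compute_s / nodes + comm_s_per_step * steps\n",
    "\n",
    "small = [epoch_time(100, n, 0.05, 1000) for n in (1, 2, 4, 8)]\n",
    "large = [epoch_time(10000, n, 0.05, 1000) for n in (1, 2, 4, 8)]\n",
    "# Small dataset: going from 1 to 8 nodes gives only ~2.4x speedup,\n",
    "# because the fixed 50 s of communication dominates.\n",
    "# Large dataset: the same cluster gives ~7.7x, close to linear.\n",
    "```"
   ]
  },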
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3 Tune Model Hyperparameters\n",
    "Now that we've seen how to do a simple PyTorch training run using the SDK, let's see if we can further improve the accuracy of our model. We can optimize our model's hyperparameters using Azure Machine Learning's hyperparameter tuning capabilities."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3.1 Start a Hyperparameter Sweep\n",
    "First, we define the hyperparameter space to sweep over. Since the training script uses a learning rate schedule that decays the learning rate every few epochs, we tune the initial learning rate. In this example we use random sampling to try different hyperparameter configurations in order to minimize our primary metric, the minimum validation loss.\n",
    "\n",
    "Then, we specify an early termination policy to stop poorly performing runs early. Here we use the `BanditPolicy`, which terminates any run that doesn't fall within the slack factor of our primary evaluation metric. In this tutorial, we apply this policy every epoch (since we report the validation loss metric every epoch and `evaluation_interval=1`). Note that we explicitly set `delay_evaluation` so that the first policy evaluation does not occur until after the 10th epoch.\n",
    "\n",
    "Refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-tune-hyperparameters#specify-an-early-termination-policy) for more information on the BanditPolicy and other policies available."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 97,
   "metadata": {},
   "outputs": [],
   "source": [
    "param_sampling = RandomParameterSampling({\"learning_rate\": uniform(0.0001, 0.001)})\n",
    "\n",
    "early_termination_policy = BanditPolicy(\n",
    "    slack_factor=0.15, evaluation_interval=1, delay_evaluation=10\n",
    ")\n",
    "\n",
    "hyperdrive_config = HyperDriveConfig(\n",
    "    estimator=estimator,\n",
    "    hyperparameter_sampling=param_sampling,\n",
    "    policy=early_termination_policy,\n",
    "    primary_metric_name=\"min_val_loss\",\n",
    "    primary_metric_goal=PrimaryMetricGoal.MINIMIZE,\n",
    "    max_total_runs=MAX_TOTAL_RUNS,\n",
    "    max_concurrent_runs=MAX_CONCURRENT_RUNS,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, launch the hyperparameter tuning job."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "metadata": {},
   "outputs": [],
   "source": [
    "hyperdrive_run = experiment.submit(hyperdrive_config) # Start the HyperDrive run"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3.2 Monitor HyperDrive Runs\n",
    "We can monitor the progress of the runs with a Jupyter widget, or again block until the run has completed. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 99,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "557c35e41681426b961877c9cb9c998b",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "_HyperDriveWidget(widget_settings={'childWidgetDisplay': 'popup', 'send_telemetry': True, 'log_level': 'INFO',…"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "RunDetails(hyperdrive_run).show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "_ = hyperdrive_run.wait_for_completion(show_output=AZUREML_VERBOSE) # Block until complete"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2.1 Interpret the Tuning Results\n",
    "\n",
    "The charts below show 4 runs executing in parallel with different learning rates, out of 8 total runs. We pick the best learning rate by minimizing the validation loss. The HyperDrive run automatically produces tracking charts (examples below) to facilitate visualization of the tuning process.\n",
    "\n",
    "![Tuning](https://nlpbp.blob.core.windows.net/images/gensen_tune1.PNG)\n",
    "![Tuning](https://nlpbp.blob.core.windows.net/images/gensen_tune2.PNG)\n",
    "\n",
    "**From the results in section [2.4.1 Monitor the Run](#2.4.1-Monitor-the-Run), the best validation loss for 1 node is 4.81, but with tuning we can achieve better performance of around 4.65.**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3.3 Find the Best Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once all the runs complete, we can find the run that produced the model with the lowest loss."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Best Run:\n",
      "  Validation loss: 6.23771 \n",
      "  Learning rate: 0.00066 \n",
      "\n"
     ]
    }
   ],
   "source": [
    "best_run = hyperdrive_run.get_best_run_by_primary_metric()\n",
    "best_run_metrics = best_run.get_metrics()\n",
    "print(\n",
    "    \"Best Run:\\n  Validation loss: {0:.5f} \\n  Learning rate: {1:.5f} \\n\".format(\n",
    "        best_run_metrics[\"min_val_loss\"], best_run_metrics[\"learning_rate\"]\n",
    "    )\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/scrapbook.scrap.json+json": {
       "data": 6.237707614898682,
       "encoder": "json",
       "name": "min_val_loss",
       "version": 1
      }
     },
     "metadata": {
      "scrapbook": {
       "data": true,
       "display": false,
       "name": "min_val_loss"
      }
     },
     "output_type": "display_data"
    },
    {
     "data": {
      "application/scrapbook.scrap.json+json": {
       "data": 0.000660701847879559,
       "encoder": "json",
       "name": "learning_rate",
       "version": 1
      }
     },
     "metadata": {
      "scrapbook": {
       "data": true,
       "display": false,
       "name": "learning_rate"
      }
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Persist properties of the run so we can access the logged metrics later\n",
    "sb.glue(\"min_val_loss\", best_run_metrics['min_val_loss'])\n",
    "sb.glue(\"learning_rate\", best_run_metrics['learning_rate'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## References\n",
    "\n",
    "1. Subramanian, Sandeep and Trischler, Adam and Bengio, Yoshua and Pal, Christopher J, [*Learning general purpose distributed sentence representations via large scale multi-task learning*](https://arxiv.org/abs/1804.00079), ICLR, 2018.\n",
    "2. A. Conneau, D. Kiela, [*SentEval: An Evaluation Toolkit for Universal Sentence Representations*](https://arxiv.org/abs/1803.05449), 2018.\n",
    "3. [*Semantic textual similarity*](http://nlpprogress.com/english/semantic_textual_similarity.html), NLP-progress.\n",
    "4. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. [*Multi-task sequence to sequence learning*](https://arxiv.org/abs/1511.06114), 2015.\n",
    "5. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. [*Learned in translation: Contextualized word vectors*](https://arxiv.org/abs/1708.00107), 2017."
   ]
  }
 ],
 "metadata": {
  "authors": [
   {
    "name": "minxia"
   }
  ],
  "celltoolbar": "Tags",
  "kernelspec": {
   "display_name": "Python (nlp_cpu)",
   "language": "python",
   "name": "nlp_cpu"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  },
  "msauthor": "minxia"
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
