{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Movie genre prediction with Object2Vec Algorithm\n",
    "\n",
    "1. [Introduction](#Introduction)\n",
    "2. [Install and import dependencies](#Install-and-import-dependencies)\n",
    "3. [Preprocessing](#Preprocessing)\n",
    "  1. [Build the vocabulary](#Build-the-vocabulary)\n",
    "  2. [Split data into train, validation and test](#Split-data-into-train,-validation-and-test)\n",
    "  3. [Negative sampling](#Negative-sampling)\n",
    "  4. [Tokenization](#Tokenization)\n",
    "  5. [Download pretrained word embeddings](#Download-pretrained-word-embeddings)\n",
    "4. [Sagemaker Training](#Sagemaker-Training)\n",
    "  1. [Upload data to S3](#Upload-data-to-S3)\n",
    "  1. [Training hyperparameters](#Training-hyperparameters)\n",
    "5. [Evaluation with Batch inference](#Evaluation-with-Batch-inference)\n",
    "6. [Online inference demo](#Online-inference-demo)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction\n",
    "\n",
    "In this notebook, we will explore how ObjectToVec algorithm can be used in a multi label prediction setting \n",
    "to predict the genre of a movie from its plot description. We will be using a dataset provided from imdb.\n",
    "\n",
    "\n",
    "At a high level, the network architecture that we use for this task is illustrated in the diagram below.\n",
    "\n",
    "<img src=\"image.png\" width=\"500\">\n",
    "\n",
    "We cast the problem of multi-label prediction as a binary classification problem. A positive example is the tuple of movie plot description, and a movie genre that applies to the movie in the labeled data. If a movie has multiple genres, we create multiple positive examples for the movie, one for each genre. A negative example is a pair where the genre does not apply to the movie. The negative examples are generated by picking a random subset of genres which do not apply to the movie, as determined by the labeled dataset.\n",
    "\n",
    "Let us first start with downloading the data.\n",
    "\n",
    "<div class=\"alert alert-warning\">\n",
    "Important: Before you begin downloading, please read the following README file using your browser and make sure you are okay with the license.\n",
    "ftp://ftp.fu-berlin.de/pub/misc/movies/database/frozendata/README\n",
    "</div>"
   ]
  },
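  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The pairing scheme above can be sketched with a toy example. Note that the genres, plot and sample counts below are hypothetical placeholders, not the IMDb data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy sketch (hypothetical data): build positive and negative (plot, genre) pairs.\n",
    "import random\n",
    "\n",
    "random.seed(0)\n",
    "all_genres = [\"Action\", \"Comedy\", \"Drama\", \"Horror\", \"Romance\"]\n",
    "movie = {\"plot\": \"A retired agent returns for one last mission.\",\n",
    "         \"genres\": {\"Action\", \"Drama\"}}\n",
    "\n",
    "# One positive example per genre the movie actually has ...\n",
    "positives = [(movie[\"plot\"], g, 1) for g in movie[\"genres\"]]\n",
    "# ... and a random subset of the remaining genres as negatives.\n",
    "negative_pool = [g for g in all_genres if g not in movie[\"genres\"]]\n",
    "negatives = [(movie[\"plot\"], g, 0) for g in random.sample(negative_pool, 2)]\n",
    "positives + negatives"
   ]
  },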
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget ftp://ftp.fu-berlin.de/pub/misc/movies/database/frozendata/genres.list.gz\n",
    "!gunzip genres.list.gz"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget ftp://ftp.fu-berlin.de/pub/misc/movies/database/frozendata/plot.list.gz\n",
    "!gunzip plot.list.gz"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Install and import dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install langdetect\n",
    "!pip install nltk\n",
    "!conda upgrade -y sqlite\n",
    "!pip install jsonlines"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import sys\n",
    "from collections import Counter\n",
    "from collections import defaultdict\n",
    "from itertools import chain, islice\n",
    "\n",
    "import boto3\n",
    "import jsonlines\n",
    "import matplotlib\n",
    "import matplotlib.pyplot as plt\n",
    "import nltk\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import sagemaker\n",
    "import seaborn as sns\n",
    "from langdetect import detect\n",
    "from nltk.corpus import stopwords\n",
    "from nltk.tokenize import TreebankWordTokenizer, sent_tokenize\n",
    "from sagemaker import get_execution_role\n",
    "from sagemaker.amazon.amazon_estimator import get_image_uri\n",
    "from sagemaker.session import s3_input\n",
    "from sklearn.model_selection import StratifiedShuffleSplit\n",
    "\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Preprocessing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_genres(filename):\n",
    "    genres = defaultdict(list)\n",
    "    unique_genres = set()\n",
    "    with open(filename, \"r\", errors='ignore') as f:\n",
    "        for line in f:\n",
    "            if line.startswith('\"'):\n",
    "                data = line.split('\\t')\n",
    "                movie = data[0]\n",
    "                genre = data[-1].strip()\n",
    "                genres[movie].append(genre)\n",
    "                unique_genres.add(genre)\n",
    "    unique_genres = sorted(unique_genres)\n",
    "    data = []\n",
    "    for movie in genres:\n",
    "        row = [0]*len(unique_genres)\n",
    "        for g in genres[movie]:\n",
    "            row[unique_genres.index(g)] = 1\n",
    "        row.insert(0, movie)\n",
    "        data.append(row)\n",
    "    genres_df = pd.DataFrame(data)\n",
    "    genres_df.columns = ['short_title'] + unique_genres\n",
    "    return genres_df\n",
    "    \n",
    "genres_df = get_genres(\"genres.list\")\n",
    "genres_df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_plots(filename):\n",
    "    with open(filename, \"r\", errors='ignore') as f:\n",
    "        data = []\n",
    "        inside = False\n",
    "        plot = ''\n",
    "        full_title = ''\n",
    "        for line in f:\n",
    "            if line.startswith(\"MV:\") and not inside:\n",
    "                inside = True\n",
    "                full_title = line.split(\"MV:\")[1].strip()\n",
    "\n",
    "            elif line.startswith(\"PL:\") and inside:\n",
    "                plot += line.split(\"PL:\")[1].replace(\"\\n\", \"\")\n",
    "\n",
    "            elif line.startswith(\"MV:\") and inside:\n",
    "                short_title = full_title.split('{')[0].strip()\n",
    "                data.append((short_title, full_title, plot))\n",
    "                plot = ''\n",
    "                inside = False\n",
    "    plots_df = pd.DataFrame(data)\n",
    "    plots_df.columns = ['short_title', 'title', 'plot']\n",
    "    return plots_df\n",
    "\n",
    "plots_df = get_plots(\"plot.list\")\n",
    "plots_df.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now join the genre and the plot dataframes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_df = plots_df.merge(genres_df, how='inner', on='short_title')\n",
    "data_df.dropna(inplace=True)\n",
    "data_df.drop('short_title', axis=1, inplace=True)\n",
    "data_df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "genres = list(data_df.columns)[2:]\n",
    "counts = []\n",
    "for genre in genres:\n",
    "    counts.append((genre, data_df[genre].sum()))\n",
    "distribution = pd.DataFrame(counts, columns=['genre', 'count'])\n",
    "distribution"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Remove the genres with 0 movies\n",
    "data_df.drop('Lifestyle', axis=1, inplace=True)\n",
    "data_df.drop('Sci-fi', axis=1, inplace=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we select all the movies whose description are in English. Note that this will take about 12 minutes to run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_df['plot_lang'] = data_df.apply(lambda row: detect(row['plot']), axis=1)\n",
    "data_df['plot_lang'].value_counts()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df = data_df[data_df.plot_lang.isin(['en'])]\n",
    "df.to_csv(\"movies_genres_en.csv\", sep='\\t', encoding='utf-8')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df = pd.read_csv(\"movies_genres_en.csv\", delimiter='\\t', encoding='utf-8', index_col=0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Build the vocabulary\n",
    "\n",
    "Lets define a few functions to tokenize our data and build the vocabulary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "nltk.download('punkt')\n",
    "tokenizer = TreebankWordTokenizer()\n",
    "\n",
    "def tokenize_plot_summary(summary):\n",
    "    for sent in sent_tokenize(summary):\n",
    "        for token in tokenizer.tokenize(sent):\n",
    "            yield token\n",
    "\n",
    "UNKNOWN = '<unk>'\n",
    "def build_vocab(data, max_vocab_size=None):\n",
    "    vocab = Counter()\n",
    "    total = len(data)\n",
    "    for i, row in enumerate(data.itertuples()):\n",
    "        vocab.update(tokenize_plot_summary(row.plot))\n",
    "        if (i+1)%1000 == 0:\n",
    "            sys.stdout.write(\".\")\n",
    "            sys.stdout.flush()\n",
    "    final_vocab = {word:i for i, (word, count) in enumerate(vocab.most_common(max_vocab_size))}\n",
    "    final_vocab[UNKNOWN]=len(final_vocab)+1\n",
    "    return final_vocab"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "vocab = build_vocab(df)\n",
    "print(\"Vocab size: \", len(vocab))\n",
    "with open(\"vocab.json\", \"w\") as f:\n",
    "    json.dump(vocab, f)\n",
    "    print(\"Saved vocabulary file to vocab.json\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Split data into train, validation and test\n",
    "\n",
    "Now we show how to prepare the data for training. First we define a function to convert a dataframe into a jsonlines format which can be used by the algorithm to train.\n",
    "\n",
    "First we split the dataframe into train, validation and test partitions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def split(df, test_size):\n",
    "    data = df.values\n",
    "    data_y = df.drop(['title', 'plot', 'plot_lang'], axis=1).values\n",
    "    #StratifiedShuffleSplit does not work with one hot encoded / multiple labels. Doing the split on basis of arg max labels.\n",
    "    data_y = np.argmax(data_y, axis=1)\n",
    "    data_y.shape\n",
    "    stratified_split = StratifiedShuffleSplit(n_splits=2, test_size=test_size, random_state=42)\n",
    "    for train_index, test_index in stratified_split.split(data, data_y):\n",
    "        train, test = df.iloc[train_index], df.iloc[test_index]\n",
    "    return train, test\n",
    "\n",
    "train, test = split(df, 0.33)\n",
    "#Split the train further into train and validation\n",
    "train, validation = split(train, 0.2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Negative sampling\n",
    "\n",
    "The object2vec algorithm is setup as a binary classification problem. The true examples are the movie, genre pairs present in the dataset. In order to train the algorithm, we also need to provide negative examples. One option is to add all the genres to which the movie does not belong. However this strategy will create a highly skewed dataset with large percentage of negative example, as there are 27 classes present. Instead we choose to have 5 negative examples per positive example, as has been reported in related works like word2vec.\n",
    "\n",
    "Lets look at the class distribution and figure out the how much we should sample the negative examples to achieve a balanced distribution of positive and negative examples."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "genres = list(train.columns)[2:-1]\n",
    "print (\"Number of genres: \", len(genres))\n",
    "agg = {genre:'sum' for genre in genres}\n",
    "agg_by_genre = train.agg(agg)\n",
    "agg_by_genre"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "total_positive_samples = agg_by_genre.sum()\n",
    "total_negative_samples = len(train)*len(genres) - total_positive_samples\n",
    "\n",
    "NEGATIVE_TO_POSITIVE_RATIO = 5\n",
    "sampling_percent = NEGATIVE_TO_POSITIVE_RATIO * total_positive_samples / total_negative_samples\n",
    "print(\"total positive examples: \", total_positive_samples)\n",
    "print(\"total negative samples\", total_negative_samples)\n",
    "print(\"negative sampling needed: \", sampling_percent )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Tokenization\n",
    "\n",
    "Now we can proceed to create the tokenized jsonlines dataset for training, validation and test partitions. We will use negative sampling of 0.4 for the training set, and add all the negatives for validation and test sets."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "nltk.download('stopwords')\n",
    "def tokenize(df, vocab, filename, negative_frac=1.0, use_stopwords=False):\n",
    "    # Rename the columns so that they are valid python identifiers\n",
    "    df = df.rename(lambda x:x.replace(\"-\", \"_\") ,axis='columns')\n",
    "    genres = list(df.columns)[2:-1]\n",
    "    max_seq_length = 0\n",
    "    total = len(df)\n",
    "    stop_words = set()\n",
    "    if use_stopwords:\n",
    "        stop_words = set(stopwords.words('english'))\n",
    "    with jsonlines.open(filename, mode='w') as writer:\n",
    "        for j, row in enumerate(df.itertuples()):\n",
    "            tokens = [token for token in tokenize_plot_summary(row.plot) if token not in stop_words]\n",
    "            plot_token_ids = [vocab[token] if token in vocab else vocab[UNKNOWN] for token in tokens]\n",
    "            for i, genre in enumerate(genres):\n",
    "                label = getattr(row, genre)\n",
    "                if label == 1 or np.random.rand() < negative_frac:\n",
    "                    # All positive examples and fraction of negative examples are picked.\n",
    "                    writer.write({\"in0\": plot_token_ids, \"in1\": [i], \"label\": label})\n",
    "            max_seq_length = max(len(plot_token_ids), max_seq_length)\n",
    "            if (j+1)%1000==0:\n",
    "                sys.stdout.write(\".\")\n",
    "                sys.stdout.flush()\n",
    "        print(\"Finished tokenizing data. Max sequence length of the tokenized data: \", max_seq_length)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tokenize(df=train, vocab=vocab, filename=\"tokenized_movie_genres_train.jsonl\", negative_frac=0.4, use_stopwords=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tokenize(df=validation, vocab=vocab, filename=\"tokenized_movie_genres_validation.jsonl\", use_stopwords=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tokenize(df=test, vocab=vocab, filename=\"tokenized_movie_genres_test.jsonl\", use_stopwords=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For better performance, the training dataset needs to be shuffled."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!shuf tokenized_movie_genres_train.jsonl > tokenized_movie_genres_train_shuffled.jsonl"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Download pretrained word embeddings\n",
    "\n",
    "We will make use of pretrained word embeddings from https://nlp.stanford.edu/projects/glove/. \n",
    "\n",
    "<div class=\"alert alert-warning\">\n",
    "Important: Before you begin downloading, please read the following  and make sure you are okay with the license.\n",
    "https://opendatacommons.org/licenses/pddl/1.0/\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir /tmp/glove\n",
    "!wget -P /tmp/glove/ http://nlp.stanford.edu/data/glove.840B.300d.zip\n",
    "!unzip -d /tmp/glove /tmp/glove/glove.840B.300d.zip\n",
    "!rm /tmp/glove/glove.840B.300d.zip"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Sagemaker Training\n",
    "\n",
    "Let us start with defining some configurations "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "bucket='<<bucket-name>>' # customize to your bucket\n",
    "\n",
    "prefix = 'object2vec-movie-genre-prediction'\n",
    "\n",
    "container = get_image_uri(boto3.Session().region_name, 'object2vec')\n",
    "\n",
    "train_s3_path = \"s3://{}/{}/data/train/\".format(bucket, prefix)\n",
    "validation_s3_path = \"s3://{}/{}/data/validation/\".format(bucket, prefix)\n",
    "test_s3_path = \"s3://{}/{}/data/test/\".format(bucket, prefix)\n",
    "auxiliary_s3_path = \"s3://{}/{}/data/auxiliary/\".format(bucket, prefix)\n",
    "prediction_s3_path = \"s3://{}/{}/predictions/\".format(bucket, prefix)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Upload data to S3"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!aws s3 cp tokenized_movie_genres_train_shuffled.jsonl {train_s3_path}\n",
    "!aws s3 cp tokenized_movie_genres_validation.jsonl {validation_s3_path}\n",
    "!aws s3 cp tokenized_movie_genres_test.jsonl {test_s3_path}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!aws s3 cp vocab.json {auxiliary_s3_path}\n",
    "!aws s3 cp /tmp/glove/glove.840B.300d.txt {auxiliary_s3_path}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Training hyperparameters\n",
    "\n",
    "The object2vec is a customizable algorithm and hence it has quite a few hyperparameters. Lets review some of the important ones:\n",
    "\n",
    "* **enc_dim**: The dimension of the encoder. Both the movie plot description and genre embeddings are mapped to this dimension. \n",
    "* **mlp_dim**: The dimension of the output from multilayer perceptron (MLP) layers.\n",
    "* **mlp_activation**: Type of activation function for the multilayer perceptron (MLP) layer.\n",
    "* **mlp_layers**: The number of multilayer perceptron (MLP) layers in the network.\n",
    "* **output_layer**: The type of output layer. We choose 'softmax' as it is a classification problem.\n",
    "* **bucket_width**: The allowed difference between data sequence length when bucketing is enabled. Bucketing is enabled when a non-zero value is specified for this parameter.\n",
    "* **num_classes**: The number of classes for classification training, which is 2 for our case.\n",
    "\n",
    "The **enc0** encodes the movie plot description which is a sequence, and **enc1** encodes the movie genre which is a single token. The encoder parameters:\n",
    "\n",
    "* **max_seq_len**: The maximum sequence length that will be considered. Any input tokens beyond max_seq_len will be truncated and ignored. We choose a value of 500 for enc\n",
    "* **network**: Network model. We choose hcnn for both enc0 and enc1.\n",
    "* **cnn_filter_width**: The filter width of the hcnn encoder.\n",
    "* **layers**: The number of layers. We choose 2 layers for enc0, as we want to capture richer structures in the movie plot description which is a sequence input. For enc1, we choose 1 layer.\n",
    "* **token_embedding_dim**: The output dimension of  token embedding layer. We choose a dimension of 300 for encoder 0, consistent with the dimension of the glove embdeddings. For enc1, we choose 10.\n",
    "* **pretrained_embedding_file**: The filename of pretrained token embedding file present in the auxiliary data channel. We use the glove embeddings for enc0. For enc1, the embeddings will be learned by the algorithm.\n",
    "* **freeze_pretrained_embedding**: Whether to freeze  pretrained embedding weights. We set this to True for enc0.\n",
    "* **vocab_file**: The vocabulary file for mapping pretrained token embeddings to vocabulary IDs. This is specified only for enc0, as we use pretrained embeddings only for enc0.\n",
    "* **vocab_size**: The vocabulary size of the tokens. For enc0, it is the number of words appearing the dataset. For enc1, it is the number of genres."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "hyperparameters = {\n",
    " 'enc_dim': 4096, \n",
    " 'mlp_dim': 512, \n",
    " 'mlp_activation': 'relu', \n",
    " 'mlp_layers': 2, \n",
    " 'output_layer': 'softmax',\n",
    " 'bucket_width': 10, \n",
    " 'num_classes': 2,\n",
    " \n",
    " 'mini_batch_size': 256,\n",
    " \n",
    " 'enc0_max_seq_len': 500,\n",
    " 'enc1_max_seq_len': 2,\n",
    " \n",
    " 'enc0_network': 'hcnn',\n",
    " 'enc1_network': 'hcnn',\n",
    "    \n",
    " 'enc0_layers': '2',\n",
    " 'enc1_layers': '1',\n",
    "    \n",
    " 'enc0_cnn_filter_width': 2,\n",
    " 'enc1_cnn_filter_width': 1,\n",
    " \n",
    " 'enc0_token_embedding_dim': 300,\n",
    " 'enc1_token_embedding_dim': 10,\n",
    " \n",
    " 'enc0_pretrained_embedding_file' : \"glove.840B.300d.txt\",\n",
    " \n",
    " 'enc0_freeze_pretrained_embedding': 'true',\n",
    " \n",
    " 'enc0_vocab_file': 'vocab.json',\n",
    " 'enc1_vocab_file': '',\n",
    " \n",
    " 'enc0_vocab_size': len(vocab),\n",
    " 'enc1_vocab_size': len(genres),\n",
    "}\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div class=\"alert alert-warning\">\n",
    "Note that the training will take approximately 1.5 hours to complete on the ml.p2.8xlarge instance type\n",
    "</div>\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "o2v = sagemaker.estimator.Estimator(container,\n",
    "                                    get_execution_role(), \n",
    "                                    train_instance_count=1, \n",
    "                                    train_instance_type='ml.p3.8xlarge',\n",
    "                                    output_path=\"s3://{}/{}/output\".format(bucket, prefix),\n",
    "                                   )\n",
    "o2v.set_hyperparameters(**hyperparameters)\n",
    "input_data = {\n",
    "    \"train\": s3_input(train_s3_path, content_type=\"application/jsonlines\"),\n",
    "    \"validation\": s3_input(validation_s3_path, content_type=\"application/jsonlines\"),\n",
    "    \"auxiliary\": s3_input(auxiliary_s3_path)\n",
    "}\n",
    "o2v.fit(input_data)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluation with Batch inference\n",
    "\n",
    "<div class=\"alert alert-warning\">\n",
    "Note that the batch inference will take approximately 30 minutes to complete on the ml.p2.8xlarge instance type\n",
    "</div>\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "transformer = o2v.transformer(instance_count=1, \n",
    "                              instance_type=\"ml.p3.8xlarge\", \n",
    "                              output_path=prediction_s3_path)\n",
    "transformer.transform(data=test_s3_path, content_type=\"application/jsonlines\", split_type=\"Line\")\n",
    "transformer.wait()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Download the predictions from s3 to perform the evaluation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!aws s3 cp --recursive {prediction_s3_path} ."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def evaluate(filename, predictions, genre_dict, threshold=0.5):\n",
    "    metrics = {g:{\"genre\": g, \"tp\":0, \"tn\":0, \"fp\":0, \"fn\":0} for g in genre_dict.values()}\n",
    "    with jsonlines.open(filename, \"r\") as reader, jsonlines.open(predictions, \"r\") as preds:\n",
    "        for row, preds in zip(reader, preds):\n",
    "            prediction = preds[\"scores\"][1] > threshold\n",
    "            label = row[\"label\"]\n",
    "            g = genre_dict[row[\"in1\"][0]]\n",
    "            if prediction == 1:\n",
    "                if label == prediction:\n",
    "                    metrics[g][\"tp\"] +=1\n",
    "                else:\n",
    "                    metrics[g][\"fp\"]+=1\n",
    "            elif prediction == 0:\n",
    "                if label == prediction:\n",
    "                    metrics[g][\"tn\"]+=1\n",
    "                else:\n",
    "                    metrics[g][\"fn\"]+=1\n",
    "    summary = pd.DataFrame(list(metrics.values())).set_index('genre')\n",
    "    summary['accuracy'] = summary.apply(lambda row: (row.tp + row.tn) / (row.tp + row.tn + row.fp + row.fn), axis=1)\n",
    "    summary['precision'] = summary.apply(lambda row: row.tp / (row.tp + row.fp), axis=1)\n",
    "    summary['recall'] = summary.apply(lambda row: row.tp / (row.tp + row.fn), axis=1)\n",
    "    summary['f1'] = summary.apply(lambda row: 2 * (row.precision * row.recall) / (row.precision + row.recall), axis=1)\n",
    "    return summary"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "genre_dict = {i:genre for i, genre in enumerate(genres)}\n",
    "summary =evaluate(\"tokenized_movie_genres_test.jsonl\", \"tokenized_movie_genres_test.jsonl.out\", genre_dict, threshold=0.6)\n",
    "summary"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tp_sum = summary[\"tp\"].sum()\n",
    "fp_sum = summary[\"fp\"].sum()\n",
    "tn_sum = summary[\"tn\"].sum()\n",
    "fn_sum = summary[\"fn\"].sum()\n",
    "precision = (tp_sum) / (tp_sum + fp_sum)\n",
    "recall = (tp_sum) / (tp_sum + fn_sum)\n",
    "\n",
    "print(\"Accuracy: \", (tp_sum + tn_sum) / (tp_sum + fp_sum + tn_sum + fn_sum))\n",
    "print(\"Micro Precision: \", precision)\n",
    "print(\"Micro Recall: \", recall)\n",
    "print(\"Micro F1: \", 2*precision*recall/(precision + recall))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We compared the performance with [fastText](https://fasttext.cc/). Fasttext does not perform multi-label predictions, so to do a fair comparison we trained 28 binary classification models with fastText for each of the movie genres and combined the results of each predictor. While training the fastText models we set **wordNgrams** to 2, **dim** to 300 and  **pretrainedVectors** to the glove embeddings.\n",
    "\n",
    "<img src=\"comparison.png\">"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Online inference demo\n",
    "\n",
    "In this section we setup a online inference endpoint and perform inference for a few recently released movies."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "predictor = o2v.deploy(initial_instance_count=1, instance_type=\"ml.m4.xlarge\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_movie_genre_predictions(movie_summary, genre_dict, vocab, predictor, threshold=0.5):\n",
    "    plot_token_ids = [vocab[token] if token in vocab else vocab[UNKNOWN] for token in tokenize_plot_summary(movie_summary)]\n",
    "    batch = [{\"in0\": plot_token_ids, \"in1\": [genre_id]} for genre_id in range(len(genre_dict))]\n",
    "    request = {\"instances\": batch}\n",
    "    response = predictor.predict(data=json.dumps(request))\n",
    "    scores = [score[\"scores\"] for score in json.loads(response)[\"predictions\"]]\n",
    "    predictions = [genre_dict[i] for i, score in enumerate(scores) if score[1] > threshold]\n",
    "    return predictions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "star_trek = \"Ten years before Kirk, Spock and the Enterprise, theUSS Discovery discovers new worlds and lifeforms \\\n",
    "as one Starfleet officer learns to understand all things alien.\"\n",
    "\n",
    "get_movie_genre_predictions(star_trek, genre_dict, vocab, predictor)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "nun = \"A priest with a haunted past and a novice on the threshold of her final vows are sent by the Vatican \\\n",
    "to investigate the death of a young nun in Romania and confront a malevolent force in the form of a demonic nun.\"\n",
    "get_movie_genre_predictions(nun, genre_dict, vocab, predictor)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "fantastic_beasts = \"The second installment of the 'Fantastic Beasts' series set in J.K. Rowling's Wizarding World \\\n",
    "featuring the adventures of magizoologist Newt Scamander.\"\n",
    "get_movie_genre_predictions(fantastic_beasts, genre_dict, vocab, predictor)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "predictor.delete_endpoint()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
