{
    "cells": [
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "<i>Copyright (c) Recommenders contributors.</i>\n",
                "\n",
                "<i>Licensed under the MIT License.</i>"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "# Hyperparameter tuning (Spark based recommender)"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "Hyperparameter tuning for a Spark-based recommender algorithm is important for selecting a model with optimal performance. This notebook introduces good practices for performing hyperparameter tuning when building recommender models with the utility functions provided in the [Microsoft/Recommenders](https://github.com/recommenders-team/recommenders.git) repository.\n",
                "\n",
                "Three different approaches are introduced and compared.\n",
                "* Spark native/custom constructs (`ParamGridBuilder`, `TrainValidationSplit`).\n",
                "* `hyperopt` package with the Tree of Parzen Estimators (TPE) algorithm.\n",
                "* Brute-force random search over parameter values sampled from a pre-defined space."
            ]
        },
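        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "As a preview of the last approach, a brute-force random search simply draws parameter combinations repeatedly from pre-defined candidate lists (a minimal, framework-free sketch; the `RANK` and `REG` lists mirror the ones defined below):\n",
                "\n",
                "```python\n",
                "import random\n",
                "\n",
                "random.seed(42)  # for reproducibility of the sampled combinations\n",
                "RANK = [10, 15, 20, 30, 40]\n",
                "REG = [0.1, 0.01, 0.001, 0.0001, 0.00001]\n",
                "\n",
                "# Draw 5 random (rank, regParam) combinations from the search space.\n",
                "samples = [(random.choice(RANK), random.choice(REG)) for _ in range(5)]\n",
                "```\n",
                "\n",
                "Each sampled combination would then be used to train and validate one model, keeping the best-scoring one."
            ]
        },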
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 0 Global settings and import"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 1,
            "metadata": {},
            "outputs": [
                {
                    "name": "stdout",
                    "output_type": "stream",
                    "text": [
                        "System version: 3.5.5 |Anaconda custom (64-bit)| (default, May 13 2018, 21:12:35) \n",
                        "[GCC 7.2.0]\n",
                        "Pandas version: 0.23.0\n",
                        "PySpark version: 2.3.1\n"
                    ]
                }
            ],
            "source": [
                "import sys\n",
                "import numpy as np\n",
                "import pandas as pd\n",
                "from hyperopt import fmin, tpe, hp, STATUS_OK, Trials\n",
                "from hyperopt.pyll.stochastic import sample\n",
                "import matplotlib.pyplot as plt\n",
                "%matplotlib notebook\n",
                "\n",
                "import pyspark\n",
                "import pyspark.sql.functions as F\n",
                "from pyspark.ml.recommendation import ALS\n",
                "from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit\n",
                "from pyspark.ml.evaluation import Evaluator, RegressionEvaluator\n",
                "from pyspark.ml.pipeline import Estimator, Model\n",
                "from pyspark import keyword_only  \n",
                "from pyspark.ml.param.shared import *\n",
                "from pyspark.ml.util import *\n",
                "from pyspark.mllib.evaluation import RankingMetrics\n",
                "from pyspark.sql.types import ArrayType, IntegerType\n",
                "\n",
                "from recommenders.utils.timer import Timer\n",
                "from recommenders.utils.spark_utils import start_or_get_spark\n",
                "from recommenders.evaluation.spark_evaluation import SparkRankingEvaluation, SparkRatingEvaluation\n",
                "from recommenders.datasets.movielens import load_spark_df\n",
                "from recommenders.datasets.spark_splitters import spark_random_split\n",
                "\n",
                "print(\"System version: {}\".format(sys.version))\n",
                "print(\"Pandas version: {}\".format(pd.__version__))\n",
                "print(\"PySpark version: {}\".format(pyspark.__version__))"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 2,
            "metadata": {
                "tags": [
                    "parameters"
                ]
            },
            "outputs": [],
            "source": [
                "MOVIELENS_DATA_SIZE = \"100k\"\n",
                "\n",
                "NUMBER_CORES = 1\n",
                "NUMBER_ITERATIONS = 25\n",
                "\n",
                "COL_USER = \"userID\"\n",
                "COL_ITEM = \"itemID\"\n",
                "COL_TIMESTAMP = \"timestamp\"\n",
                "COL_RATING = \"rating\"\n",
                "COL_PREDICTION = \"prediction\"\n",
                "\n",
                "HEADER = {\n",
                "    \"col_user\": COL_USER,\n",
                "    \"col_item\": COL_ITEM,\n",
                "    \"col_rating\": COL_RATING,\n",
                "    \"col_prediction\": COL_PREDICTION,\n",
                "}\n",
                "\n",
                "HEADER_ALS = {\n",
                "    \"userCol\": COL_USER,\n",
                "    \"itemCol\": COL_ITEM,\n",
                "    \"ratingCol\": COL_RATING\n",
                "}\n",
                "\n",
                "SUBSET_RATIO = 0.5\n",
                "\n",
                "RANK = [10, 15, 20, 30, 40]\n",
                "REG = [0.1, 0.01, 0.001, 0.0001, 0.00001]"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 1 Data preparation"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "A Spark session is created. Note that, in this case, to study the running time of the different approaches, the Spark session runs in local mode with only one core. This eliminates the impact of parallelization on the parameter tuning comparison. "
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 3,
            "metadata": {},
            "outputs": [],
            "source": [
                "spark = start_or_get_spark(url=\"local[{}]\".format(NUMBER_CORES))"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "The MovieLens 100k dataset is used for the demonstration."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 4,
            "metadata": {},
            "outputs": [
                {
                    "name": "stderr",
                    "output_type": "stream",
                    "text": [
                        "100%|██████████| 4.81k/4.81k [00:01<00:00, 2.47kKB/s]\n"
                    ]
                }
            ],
            "source": [
                "data = load_spark_df(spark, size=MOVIELENS_DATA_SIZE, header=(COL_USER, COL_ITEM, COL_RATING))"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "To reduce the time spent on the comparative study, 50% of the data is used for the experimentation below."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 5,
            "metadata": {},
            "outputs": [],
            "source": [
                "data, _ = spark_random_split(data, ratio=SUBSET_RATIO)"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "The dataset is randomly split into 3 subsets with a given split ratio. Hyperparameter tuning is performed on the training and validation data, and the selected optimal recommender is then evaluated on the test data."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 6,
            "metadata": {},
            "outputs": [],
            "source": [
                "train, valid, test = spark_random_split(data, ratio=[3, 1, 1])"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 2 Hyperparameter tuning with Azure Machine Learning Services"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "The `hyperdrive` module in the [Azure Machine Learning Services](https://azure.microsoft.com/en-us/services/machine-learning-service/) runs [hyperparameter tuning and optimization for machine learning model selection](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters). At the moment, the service supports running hyperparameter tuning on heterogeneous computing targets such as clusters of commodity compute nodes with or without GPU devices (see the detailed documentation [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets)). It is feasible to run parameter tuning on a cluster of VM nodes. In this case, the service containerizes an individual, independent Spark session on each node of the cluster to run the parameter tuning jobs in parallel, instead of running inside a single Spark session where training is executed in a distributed manner.  \n",
                "\n",
                "Detailed instructions for tuning hyperparameters of non-Spark workloads with Azure Machine Learning Services can be found in [this](./hypertune_aml_wide_and_deep_quickstart.ipynb) notebook. "
            ]
        },
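        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "As a rough illustration, a `hyperdrive` run over the same ALS search space might be configured as below. This is a hedged sketch assuming the v1 `azureml-sdk`; `script_run_config` (the training script configuration) is hypothetical, and the snippet cannot run without a configured Azure Machine Learning workspace.\n",
                "\n",
                "```python\n",
                "from azureml.train.hyperdrive import (\n",
                "    HyperDriveConfig, RandomParameterSampling, PrimaryMetricGoal, choice\n",
                ")\n",
                "\n",
                "# Randomly sample rank and regParam from the same candidate lists used below.\n",
                "param_sampling = RandomParameterSampling({\n",
                "    \"--rank\": choice(10, 15, 20, 30, 40),\n",
                "    \"--reg\": choice(0.1, 0.01, 0.001, 0.0001, 0.00001),\n",
                "})\n",
                "\n",
                "hyperdrive_config = HyperDriveConfig(\n",
                "    run_config=script_run_config,  # hypothetical training script config\n",
                "    hyperparameter_sampling=param_sampling,\n",
                "    primary_metric_name=\"rmse\",  # the training script must log this metric\n",
                "    primary_metric_goal=PrimaryMetricGoal.MINIMIZE,\n",
                "    max_total_runs=25,\n",
                ")\n",
                "```"
            ]
        },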
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 3 Hyperparameter tuning with Spark ML constructs"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "### 3.1 Spark native construct"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "Spark MLlib implements modules such as `CrossValidator` and `TrainValidationSplit` for tuning hyperparameters (see [here](https://spark.apache.org/docs/2.2.0/ml-tuning.html)). However, by default, it does not support custom machine learning algorithms, data splitting methods, or evaluation metrics such as those offered as utility functions in the Recommenders repository. \n",
                "\n",
                "For example, the Spark native construct can be used for tuning a recommender against the `rmse` metric, which is one of the available regression metrics in Spark."
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "First, a Spark estimator needs to be created; for illustration purposes, an ALS model object is used here."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 7,
            "metadata": {},
            "outputs": [],
            "source": [
                "# NOTE the parameters of interest, rank and regParam, are left unset, \n",
                "# because their values will be assigned in the parameter grid builder.\n",
                "als = ALS(\n",
                "    maxIter=15,\n",
                "    implicitPrefs=False,\n",
                "    alpha=0.1,\n",
                "    coldStartStrategy='drop',\n",
                "    nonnegative=False,\n",
                "    **HEADER_ALS\n",
                ")"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "Then, a parameter grid can be defined as follows. Without loss of generality, only `rank` and `regParam` are considered."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 8,
            "metadata": {},
            "outputs": [],
            "source": [
                "paramGrid = ParamGridBuilder() \\\n",
                "    .addGrid(als.rank, RANK) \\\n",
                "    .addGrid(als.regParam, REG) \\\n",
                "    .build()"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "Given the settings above, a `TrainValidationSplit` object can be created for fitting the best model within the given parameter range. In this case, the `RegressionEvaluator` uses `RMSE` as its default evaluation metric. \n",
                "\n",
                "Since the data splitter is embedded in the `TrainValidationSplit` object, to make sure the splitting ratio is consistent across the different approaches, the split ratio is set to 0.75 and the training and validation datasets are combined for model fitting. "
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 9,
            "metadata": {},
            "outputs": [],
            "source": [
                "tvs = TrainValidationSplit(\n",
                "    estimator=als,\n",
                "    estimatorParamMaps=paramGrid,\n",
                "    # A regression evaluation method is used. \n",
                "    evaluator=RegressionEvaluator(labelCol='rating'),\n",
                "    # 75% of the data will be used for training, 25% for validation.\n",
                "    # NOTE here the splitting is random. The Spark splitting utilities (e.g. chrono splitter)\n",
                "    # are therefore not available here. \n",
                "    trainRatio=0.75\n",
                ")"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 10,
            "metadata": {},
            "outputs": [],
            "source": [
                "with Timer() as time_spark:\n",
                "    # Run TrainValidationSplit and choose the best set of parameters.\n",
                "    # NOTE train and valid are unioned because TrainValidationSplit does the splitting itself.\n",
                "    model = tvs.fit(train.union(valid))"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "The model parameters in the grid and the corresponding validation metrics can then be retrieved. "
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 11,
            "metadata": {},
            "outputs": [
                {
                    "name": "stdout",
                    "output_type": "stream",
                    "text": [
                        "Run 0:\n",
                        "\tValidation Metric: 1.0505385750367227\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 10\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.1\n",
                        "Run 1:\n",
                        "\tValidation Metric: 1.0444319735752456\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 15\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.1\n",
                        "Run 2:\n",
                        "\tValidation Metric: 1.040060458376737\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 20\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.1\n",
                        "Run 3:\n",
                        "\tValidation Metric: 1.0293843505140208\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 30\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.1\n",
                        "Run 4:\n",
                        "\tValidation Metric: 1.0216137585741758\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 40\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.1\n",
                        "Run 5:\n",
                        "\tValidation Metric: 1.4359689750708238\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 10\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.01\n",
                        "Run 6:\n",
                        "\tValidation Metric: 1.4607579006632527\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 15\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.01\n",
                        "Run 7:\n",
                        "\tValidation Metric: 1.462040851503185\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 20\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.01\n",
                        "Run 8:\n",
                        "\tValidation Metric: 1.3991557293601262\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 30\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.01\n",
                        "Run 9:\n",
                        "\tValidation Metric: 1.3410599805798034\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 40\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.01\n",
                        "Run 10:\n",
                        "\tValidation Metric: 1.988790687632913\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 10\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.001\n",
                        "Run 11:\n",
                        "\tValidation Metric: 1.87653183445857\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 15\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.001\n",
                        "Run 12:\n",
                        "\tValidation Metric: 1.9156975302076813\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 20\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.001\n",
                        "Run 13:\n",
                        "\tValidation Metric: 1.8551322988886354\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 30\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.001\n",
                        "Run 14:\n",
                        "\tValidation Metric: 1.8640288497082245\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 40\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.001\n",
                        "Run 15:\n",
                        "\tValidation Metric: 3.0577123716478947\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 10\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.0001\n",
                        "Run 16:\n",
                        "\tValidation Metric: 2.849817587112537\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 15\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.0001\n",
                        "Run 17:\n",
                        "\tValidation Metric: 2.5637113568725733\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 20\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.0001\n",
                        "Run 18:\n",
                        "\tValidation Metric: 2.549950528259629\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 30\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.0001\n",
                        "Run 19:\n",
                        "\tValidation Metric: 2.9097893609292096\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 40\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 0.0001\n",
                        "Run 20:\n",
                        "\tValidation Metric: 4.9701569583584115\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 10\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 1e-05\n",
                        "Run 21:\n",
                        "\tValidation Metric: 4.230109404473068\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 15\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 1e-05\n",
                        "Run 22:\n",
                        "\tValidation Metric: 3.5723074921735707\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 20\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 1e-05\n",
                        "Run 23:\n",
                        "\tValidation Metric: 3.469775656947356\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 30\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 1e-05\n",
                        "Run 24:\n",
                        "\tValidation Metric: 4.426604995574413\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='rank', doc='rank of the factorization'): 40\n",
                        "\tParam(parent='ALS_496eab0f4b1e8da03092', name='regParam', doc='regularization parameter (>= 0).'): 1e-05\n"
                    ]
                }
            ],
            "source": [
                "for idx, item in enumerate(model.getEstimatorParamMaps()):\n",
                "    print('Run {}:'.format(idx))\n",
                "    print('\\tValidation Metric: {}'.format(model.validationMetrics[idx]))\n",
                "    for key, value in item.items():\n",
                "        print('\\t{0}: {1}'.format(repr(key), value))"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 12,
            "metadata": {},
            "outputs": [
                {
                    "data": {
                        "text/plain": [
                            "[1.0505385750367227,\n",
                            " 1.0444319735752456,\n",
                            " 1.040060458376737,\n",
                            " 1.0293843505140208,\n",
                            " 1.0216137585741758,\n",
                            " 1.4359689750708238,\n",
                            " 1.4607579006632527,\n",
                            " 1.462040851503185,\n",
                            " 1.3991557293601262,\n",
                            " 1.3410599805798034,\n",
                            " 1.988790687632913,\n",
                            " 1.87653183445857,\n",
                            " 1.9156975302076813,\n",
                            " 1.8551322988886354,\n",
                            " 1.8640288497082245,\n",
                            " 3.0577123716478947,\n",
                            " 2.849817587112537,\n",
                            " 2.5637113568725733,\n",
                            " 2.549950528259629,\n",
                            " 2.9097893609292096,\n",
                            " 4.9701569583584115,\n",
                            " 4.230109404473068,\n",
                            " 3.5723074921735707,\n",
                            " 3.469775656947356,\n",
                            " 4.426604995574413]"
                        ]
                    },
                    "execution_count": 12,
                    "metadata": {},
                    "output_type": "execute_result"
                }
            ],
            "source": [
                "model.validationMetrics"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "To get the best model, simply access the `bestModel` attribute:"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 13,
            "metadata": {},
            "outputs": [],
            "source": [
                "model_best_spark = model.bestModel"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "### 3.2 Custom `Estimator`, `Transformer`, and `Evaluator` for Spark ALS\n",
                "\n",
                "One can also customize Spark modules to allow tuning hyperparameters for a desired model and evaluation metric, given that the native Spark ALS does not allow tuning hyperparameters against ranking metrics such as precision@k, recall@k, etc. This can be done by creating a custom `Estimator`, `Transformer`, and `Evaluator`. The benefit is that, after the customization, the tuning process can use `TrainValidationSplit` directly, which distributes the tuning within a Spark session."
            ]
        },
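        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "To make the ranking-metric objective concrete, precision@k can be sketched in plain Python (an illustrative helper only, not the Recommenders implementation; the `SparkRankingEvaluation` utility imported above computes such metrics at scale):\n",
                "\n",
                "```python\n",
                "def precision_at_k(recommended, relevant, k=10):\n",
                "    # Fraction of the top-k recommended items that are relevant.\n",
                "    top_k = recommended[:k]\n",
                "    return len(set(top_k) & set(relevant)) / k\n",
                "\n",
                "# Of the top-2 recommendations [1, 2], only item 2 is relevant -> 0.5.\n",
                "precision_at_k([1, 2, 3, 4], relevant=[2, 4, 9], k=2)\n",
                "```"
            ]
        },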
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "#### Customized `Estimator` and `Transformer` for a top-k recommender based on Spark ALS\n",
                "\n",
                "The following shows how to implement a PySpark `Estimator` and `Transformer` for recommending the top k items from an ALS model; the latter generates the top k recommendations from the fitted model object. Both are designed to follow the protocol of the Spark APIs, to make sure they can be run with the hyperparameter tuning constructs in Spark."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 14,
            "metadata": {},
            "outputs": [],
            "source": [
                "class ALSTopK(\n",
                "    ALS,\n",
                "    Estimator,\n",
                "    HasInputCol,\n",
                "    HasPredictionCol\n",
                "):    \n",
                "    rank = Param(Params._dummy(), \"rank\", \"rank of the factorization\",\n",
                "                 typeConverter=TypeConverters.toInt)\n",
                "    numUserBlocks = Param(Params._dummy(), \"numUserBlocks\", \"number of user blocks\",\n",
                "                          typeConverter=TypeConverters.toInt)\n",
                "    numItemBlocks = Param(Params._dummy(), \"numItemBlocks\", \"number of item blocks\",\n",
                "                          typeConverter=TypeConverters.toInt)\n",
                "    implicitPrefs = Param(Params._dummy(), \"implicitPrefs\", \"whether to use implicit preference\",\n",
                "                          typeConverter=TypeConverters.toBoolean)\n",
                "    alpha = Param(Params._dummy(), \"alpha\", \"alpha for implicit preference\",\n",
                "                  typeConverter=TypeConverters.toFloat)\n",
                "    userCol = Param(Params._dummy(), \"userCol\", \"column name for user ids. Ids must be within \" +\n",
                "                    \"the integer value range.\", typeConverter=TypeConverters.toString)\n",
                "    itemCol = Param(Params._dummy(), \"itemCol\", \"column name for item ids. Ids must be within \" +\n",
                "                    \"the integer value range.\", typeConverter=TypeConverters.toString)\n",
                "    ratingCol = Param(Params._dummy(), \"ratingCol\", \"column name for ratings\",\n",
                "                      typeConverter=TypeConverters.toString)\n",
                "    nonnegative = Param(Params._dummy(), \"nonnegative\",\n",
                "                        \"whether to use nonnegative constraint for least squares\",\n",
                "                        typeConverter=TypeConverters.toBoolean)\n",
                "    intermediateStorageLevel = Param(Params._dummy(), \"intermediateStorageLevel\",\n",
                "                                     \"StorageLevel for intermediate datasets. Cannot be 'NONE'.\",\n",
                "                                     typeConverter=TypeConverters.toString)\n",
                "    finalStorageLevel = Param(Params._dummy(), \"finalStorageLevel\",\n",
                "                              \"StorageLevel for ALS model factors.\",\n",
                "                              typeConverter=TypeConverters.toString)\n",
                "    coldStartStrategy = Param(Params._dummy(), \"coldStartStrategy\", \"strategy for dealing with \" +\n",
                "                              \"unknown or new users/items at prediction time. This may be useful \" +\n",
                "                              \"in cross-validation or production scenarios, for handling \" +\n",
                "                              \"user/item ids the model has not seen in the training data. \" +\n",
                "                              \"Supported values: 'nan', 'drop'.\",\n",
                "                              typeConverter=TypeConverters.toString)\n",
                "\n",
                "    @keyword_only\n",
                "    def __init__(\n",
                "        self,\n",
                "        rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10,\n",
                "        implicitPrefs=False, alpha=1.0, userCol=\"user\", itemCol=\"item\", seed=None, k=10,\n",
                "        ratingCol=\"rating\", nonnegative=False, checkpointInterval=10,\n",
                "        intermediateStorageLevel=\"MEMORY_AND_DISK\",\n",
                "        finalStorageLevel=\"MEMORY_AND_DISK\", coldStartStrategy=\"nan\"\n",
                "    ):\n",
                "        super(ALS, self).__init__()\n",
                "        self._java_obj = self._new_java_obj(\"org.apache.spark.ml.recommendation.ALS\", self.uid)\n",
                "        self._setDefault(rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10,\n",
                "                         implicitPrefs=False, alpha=1.0, userCol=\"user\", itemCol=\"item\",\n",
                "                         ratingCol=\"rating\", nonnegative=False, checkpointInterval=10,\n",
                "                         intermediateStorageLevel=\"MEMORY_AND_DISK\",\n",
                "                         finalStorageLevel=\"MEMORY_AND_DISK\", coldStartStrategy=\"nan\")\n",
                "\n",
                "        kwargs = self._input_kwargs \n",
                "        kwargs = {x: kwargs[x] for x in kwargs if x not in {'k'}}\n",
                "        self.setParams(**kwargs)\n",
                "        \n",
                "        # The manually added parameter k is not present in the ALS Java implementation.\n",
                "        self.k = k\n",
                "        \n",
                "    def setRank(self, value):\n",
                "        \"\"\"\n",
                "        Sets the value of :py:attr:`rank`.\n",
                "        \"\"\"\n",
                "        return self._set(rank=value)\n",
                "\n",
                "    def getRank(self):\n",
                "        \"\"\"\n",
                "        Gets the value of rank or its default value.\n",
                "        \"\"\"\n",
                "        return self.getOrDefault(self.rank)\n",
                "    \n",
                "    def setParams(self, rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10,\n",
                "                  implicitPrefs=False, alpha=1.0, userCol=\"user\", itemCol=\"item\", seed=None,\n",
                "                  ratingCol=\"rating\", nonnegative=False, checkpointInterval=10,\n",
                "                  intermediateStorageLevel=\"MEMORY_AND_DISK\",\n",
                "                  finalStorageLevel=\"MEMORY_AND_DISK\", coldStartStrategy=\"nan\"):\n",
                "        \"\"\"\n",
                "        setParams(self, rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10, \\\n",
                "                 implicitPrefs=False, alpha=1.0, userCol=\"user\", itemCol=\"item\", seed=None, \\\n",
                "                 ratingCol=\"rating\", nonnegative=False, checkpointInterval=10, \\\n",
                "                 intermediateStorageLevel=\"MEMORY_AND_DISK\", \\\n",
                "                 finalStorageLevel=\"MEMORY_AND_DISK\", coldStartStrategy=\"nan\")\n",
                "        Sets params for ALS.\n",
                "        \"\"\"\n",
                "        kwargs = self._input_kwargs\n",
                "        kwargs = {x: kwargs[x] for x in kwargs if x not in {'k'}}\n",
                "        return self._set(**kwargs)\n",
                "        \n",
                "    def _fit(self, dataset):\n",
                "        kwargs = self._input_kwargs    \n",
                "        # Exclude k as it is not a parameter for ALS.\n",
                "        kwargs = {x: kwargs[x] for x in kwargs if x not in {'k'}}\n",
                "        kwargs['rank'] = self.getRank()\n",
                "        kwargs['regParam'] = self.getOrDefault(self.regParam)\n",
                "        als = ALS(\n",
                "            **kwargs\n",
                "        )\n",
                "        als_model = als.fit(dataset)\n",
                "        \n",
                "        user_col = kwargs['userCol']\n",
                "        item_col = kwargs['itemCol']\n",
                "        \n",
                "        k = self.k\n",
                "                     \n",
                "        topk_model = ALSTopKModel()\n",
                "        topk_model.setParams(\n",
                "            als_model,\n",
                "            user_col, \n",
                "            item_col, \n",
                "            k\n",
                "        )\n",
                "        \n",
                "        return topk_model\n",
                "    \n",
                "    \n",
                "class ALSTopKModel(\n",
                "    Model,\n",
                "    HasInputCol,\n",
                "    HasPredictionCol,\n",
                "    HasLabelCol\n",
                "):    \n",
                "    def setParams(self, model, userCol, itemCol, k):\n",
                "        self.model = model\n",
                "        self.userCol = userCol\n",
                "        self.itemCol = itemCol\n",
                "        self.k = k\n",
                "    \n",
                "    def _transform(self, dataset):\n",
                "        predictionCol = self.getPredictionCol()\n",
                "        labelCol = self.getLabelCol()\n",
                "        \n",
                "        users = dataset.select(self.userCol).distinct()\n",
                "        topk_recommendation = self.model.recommendForUserSubset(users, self.k)  \n",
                "        \n",
                "        extract_value = F.udf((lambda x: [y[0] for y in x]), ArrayType(IntegerType()))\n",
                "        topk_recommendation = topk_recommendation.withColumn(predictionCol, extract_value(F.col(\"recommendations\")))        \n",
                "        \n",
                "        dataset = (\n",
                "            dataset\n",
                "            .groupBy(self.userCol)\n",
                "            .agg(F.collect_list(F.col(self.itemCol)).alias(labelCol))\n",
                "        )\n",
                "            \n",
                "        topk_recommendation_all = dataset.join(\n",
                "            topk_recommendation, \n",
                "            on=self.userCol,\n",
                "            how=\"outer\"\n",
                "        )\n",
                "        \n",
                "        return topk_recommendation_all.select(self.userCol, labelCol, predictionCol)"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "#### Customized precision@k evaluation metric\n",
                "\n",
                "In addition to the custom `Estimator` and `Transformer`, it may also be desirable to customize an `Evaluator` to allow \"beyond-rating\" metrics. The code below illustrates a precision@k evaluator; other types of evaluators can be developed in a similar way."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 15,
            "metadata": {},
            "outputs": [],
            "source": [
                "# Define a custom Evaluator. Here precision@k is used.\n",
                "class PrecisionAtKEvaluator(Evaluator):\n",
                "\n",
                "    def __init__(self, predictionCol=\"prediction\", labelCol=\"label\", k=10):\n",
                "        self.predictionCol = predictionCol\n",
                "        self.labelCol = labelCol\n",
                "        self.k = k\n",
                "\n",
                "    def _evaluate(self, dataset):\n",
                "        \"\"\"\n",
                "        Computes precision@k of the top-k recommendations\n",
                "        against the ground-truth labels.\n",
                "        \"\"\"\n",
                "        # Drop Nulls.\n",
                "        dataset = dataset.na.drop()\n",
                "        metrics = RankingMetrics(dataset.select(self.predictionCol, self.labelCol).rdd)\n",
                "        return metrics.precisionAt(self.k)\n",
                "\n",
                "    def isLargerBetter(self):\n",
                "        return True"
            ]
        },
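        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "The evaluator above delegates to Spark's `RankingMetrics`, which expects an RDD of (predicted ranking, ground-truth labels) pairs. As a minimal standalone sketch (assuming an active `SparkContext` named `sc`), precision@k can be checked on toy data:"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "from pyspark.mllib.evaluation import RankingMetrics\n",
                "\n",
                "# Each pair is (predicted item ids in ranked order, relevant item ids).\n",
                "toy = sc.parallelize([\n",
                "    ([1, 2, 3, 4], [1, 3]),\n",
                "    ([5, 6, 7, 8], [9])\n",
                "])\n",
                "# precisionAt(2) averages per-user precision@2: (1/2 + 0/2) / 2 = 0.25\n",
                "RankingMetrics(toy).precisionAt(2)"
            ]
        },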
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "Then a new ALS top-k recommender can be created, and the Spark native `TrainValidationSplit` construct can be used to find the optimal model w.r.t. the precision@k metric."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 16,
            "metadata": {},
            "outputs": [],
            "source": [
                "alstopk = ALSTopK(\n",
                "    userCol=COL_USER,\n",
                "    itemCol=COL_ITEM,\n",
                "    ratingCol=COL_RATING,\n",
                "    k=10\n",
                ")\n",
                "\n",
                "# For illustration purposes, a small grid is used.\n",
                "paramGrid = ParamGridBuilder() \\\n",
                "    .addGrid(alstopk.rank, [10, 20]) \\\n",
                "    .addGrid(alstopk.regParam, [0.1, 0.01]) \\\n",
                "    .build()\n",
                "\n",
                "tvs = TrainValidationSplit(\n",
                "    estimator=alstopk,\n",
                "    estimatorParamMaps=paramGrid,\n",
                "    # The custom ranking evaluator (precision@k) is used.\n",
                "    evaluator=PrecisionAtKEvaluator(),\n",
                "    # 75% of the data will be used for training, 25% for validation.\n",
                "    # NOTE here the splitting is random. The Spark splitting utilities (e.g. chrono splitter)\n",
                "    # are therefore not available here. \n",
                "    trainRatio=0.75\n",
                ")"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 17,
            "metadata": {},
            "outputs": [
                {
                    "data": {
                        "text/plain": [
                            "[{Param(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='rank', doc='rank of the factorization'): 10,\n",
                            "  Param(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='regParam', doc='regularization parameter (>= 0).'): 0.1},\n",
                            " {Param(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='rank', doc='rank of the factorization'): 10,\n",
                            "  Param(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='regParam', doc='regularization parameter (>= 0).'): 0.01},\n",
                            " {Param(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='rank', doc='rank of the factorization'): 20,\n",
                            "  Param(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='regParam', doc='regularization parameter (>= 0).'): 0.1},\n",
                            " {Param(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='rank', doc='rank of the factorization'): 20,\n",
                            "  Param(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='regParam', doc='regularization parameter (>= 0).'): 0.01}]"
                        ]
                    },
                    "execution_count": 17,
                    "metadata": {},
                    "output_type": "execute_result"
                }
            ],
            "source": [
                "# Run TrainValidationSplit, and choose the best set of parameters.\n",
                "# NOTE train and valid are unioned because TrainValidationSplit performs its own random split.\n",
                "model_precision = tvs.fit(train.union(valid))\n",
                "\n",
                "model_precision.getEstimatorParamMaps()"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 18,
            "metadata": {},
            "outputs": [],
            "source": [
                "def best_param(model, is_larger_better=True):\n",
                "    if is_larger_better:\n",
                "        best_metric = max(model.validationMetrics)\n",
                "    else:\n",
                "        best_metric = min(model.validationMetrics)\n",
                "        \n",
                "    parameters = model.getEstimatorParamMaps()[model.validationMetrics.index(best_metric)]\n",
                "     \n",
                "    return list(parameters.values())"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 19,
            "metadata": {},
            "outputs": [],
            "source": [
                "params = best_param(model_precision)"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 20,
            "metadata": {},
            "outputs": [
                {
                    "name": "stdout",
                    "output_type": "stream",
                    "text": [
                        "+------+--------------------+--------------------+\n",
                        "|userID|               label|          prediction|\n",
                        "+------+--------------------+--------------------+\n",
                        "|   148|     [116, 135, 189]|[126, 735, 222, 7...|\n",
                        "|   463|[15, 100, 103, 12...|[87, 922, 343, 52...|\n",
                        "|   471|      [95, 102, 946]|[663, 114, 488, 9...|\n",
                        "|   496|[94, 143, 196, 28...|[475, 645, 69, 18...|\n",
                        "|   833|[50, 68, 79, 92, ...|[428, 20, 135, 18...|\n",
                        "+------+--------------------+--------------------+\n",
                        "\n",
                        "Run 0:\n",
                        "\tValidation Metric: 0.01411637931034483\n",
                        "\tParam(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='rank', doc='rank of the factorization'): 10\n",
                        "\tParam(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='regParam', doc='regularization parameter (>= 0).'): 0.1\n",
                        "Run 1:\n",
                        "\tValidation Metric: 0.007866379310344828\n",
                        "\tParam(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='rank', doc='rank of the factorization'): 10\n",
                        "\tParam(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='regParam', doc='regularization parameter (>= 0).'): 0.01\n",
                        "Run 2:\n",
                        "\tValidation Metric: 0.01799568965517241\n",
                        "\tParam(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='rank', doc='rank of the factorization'): 20\n",
                        "\tParam(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='regParam', doc='regularization parameter (>= 0).'): 0.1\n",
                        "Run 3:\n",
                        "\tValidation Metric: 0.018103448275862084\n",
                        "\tParam(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='rank', doc='rank of the factorization'): 20\n",
                        "\tParam(parent='ALSTopK_4f48b7cc6cf2badfcea7', name='regParam', doc='regularization parameter (>= 0).'): 0.01\n"
                    ]
                }
            ],
            "source": [
                "model_precision.bestModel.transform(valid).limit(5).show()\n",
                "\n",
                "for idx, item in enumerate(model_precision.getEstimatorParamMaps()):\n",
                "    print('Run {}:'.format(idx))\n",
                "    print('\\tValidation Metric: {}'.format(model_precision.validationMetrics[idx]))\n",
                "    for key, value in item.items():\n",
                "        print('\\t{0}: {1}'.format(repr(key), value))"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 4 Hyperparameter tuning with `hyperopt`"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "`hyperopt` is an open-source Python package designed to tune the parameters of a generic function with respect to any pre-defined loss. More information about `hyperopt` can be found [here](https://github.com/hyperopt/hyperopt). `hyperopt` supports parallelization via MongoDB but not Spark; in our case, the tuning is performed sequentially on a local computer.\n",
                "\n",
                "In `hyperopt`, an *objective* function is defined for optimizing the hyperparameters. In this case, the objective is similar to that in the Spark native construct situation: *to minimize the RMSE metric for an ALS recommender*. `rank` and `regParam` are used as hyperparameters. \n",
                "\n",
                "The objective function shown below computes the RMSE loss for an ALS recommender. "
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 21,
            "metadata": {},
            "outputs": [],
            "source": [
                "# Customize an objective function\n",
                "def objective(params):\n",
                "    with Timer() as time_run_start:\n",
                "    \n",
                "        rank = params['rank']\n",
                "        reg = params['reg']\n",
                "        train = params['train'] \n",
                "        valid = params['valid'] \n",
                "        col_user = params['col_user'] \n",
                "        col_item = params['col_item']\n",
                "        col_rating = params['col_rating'] \n",
                "        col_prediction = params['col_prediction'] \n",
                "        k = params['k']\n",
                "        relevancy_method = params['relevancy_method']\n",
                "\n",
                "        als = ALS(\n",
                "            rank=rank,\n",
                "            maxIter=15,\n",
                "            implicitPrefs=False,\n",
                "            alpha=0.1,\n",
                "            regParam=reg,\n",
                "            coldStartStrategy='drop',\n",
                "            nonnegative=False,\n",
                "            seed=42,\n",
                "            **HEADER_ALS\n",
                "        )\n",
                "\n",
                "        model = als.fit(train) \n",
                "        prediction = model.transform(valid)\n",
                "\n",
                "        rating_eval = SparkRatingEvaluation(\n",
                "            valid, \n",
                "            prediction, \n",
                "            **HEADER\n",
                "        )\n",
                "\n",
                "        rmse = rating_eval.rmse()\n",
                "    \n",
                "    # Return the objective function result.\n",
                "    return {\n",
                "        'loss': rmse,\n",
                "        'status': STATUS_OK,\n",
                "        'eval_time': time_run_start.interval\n",
                "    }"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "A search space is usually defined for hyperparameter exploration. Designing the search space is empirical, and depends on an understanding of how the distribution of each parameter of interest affects the model performance measured by the loss function. \n",
                "\n",
                "In the ALS algorithm, the two hyperparameters, `rank` and `reg`, affect model performance as follows:\n",
                "* The higher the rank, the better the model performance, but also the higher the risk of overfitting.\n",
                "* The regularization parameter `reg` helps prevent overfitting. \n",
                "\n",
                "Therefore, in this case, a uniform distribution and a log-uniform distribution are used as sampling spaces for `rank` and `reg`, respectively. A narrow search space is used for illustration purposes: `rank` ranges from 10 to 40 (in steps of 5), while `reg` ranges from $e^{-5}$ to $e^{-1}$. Together with the randomly sampled hyperparameters, the other parameters used for building / evaluating the recommender, such as `k`, column names, data, etc., are kept constant."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 22,
            "metadata": {},
            "outputs": [],
            "source": [
                "# define a search space\n",
                "space = {\n",
                "    'rank': hp.quniform('rank', 10, 40, 5),\n",
                "    'reg': hp.loguniform('reg', -5, -1),\n",
                "    'train': train, \n",
                "    'valid': valid, \n",
                "    'col_user': COL_USER, \n",
                "    'col_item': COL_ITEM, \n",
                "    'col_rating': COL_RATING, \n",
                "    'col_prediction': \"prediction\", \n",
                "    'k': 10,\n",
                "    'relevancy_method': \"top_k\"\n",
                "}"
            ]
        },
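        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "As a quick sanity check of the sampling spaces, a few candidate values can be drawn directly (a sketch using `hyperopt.pyll.stochastic.sample`, which evaluates an `hp` expression once):"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "from hyperopt.pyll.stochastic import sample\n",
                "\n",
                "# Draw a few (rank, reg) samples to see what values the trials will explore.\n",
                "[(sample(hp.quniform('rank', 10, 40, 5)), sample(hp.loguniform('reg', -5, -1)))\n",
                " for _ in range(3)]"
            ]
        },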
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "### 4.1 Hyperparameter tuning with TPE"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "The `fmin` function of `hyperopt` runs the trials that search for optimal hyperparameters. `hyperopt` offers several strategies for intelligently optimizing hyperparameters; for example, it provides the [Tree of Parzen Estimators (TPE) method](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf). \n",
                "\n",
                "The TPE method models a surface response of $p(x|y)$ by transforming a generative process, replacing the distributions of the configuration prior with non-parametric densities, where $p$ is the probability of configuration space $x$ given the loss $y$. For different types of configuration space, the TPE method makes different replacements: uniform $\\to$ truncated Gaussian mixture, log-uniform $\\to$ exponentiated truncated Gaussian mixture, categorical $\\to$ re-weighted categorical, etc. Using different observations $\\{x^{(1)}, \\ldots, x^{(k)}\\}$ in the non-parametric densities, these substitutions represent a learning algorithm that can produce a variety of densities over the configuration space $X$. By maintaining sorted lists of observed variables in $H$, the runtime of each iteration of the TPE algorithm can scale linearly in $|H|$ and linearly in the number of variables (dimensions) being optimized. In a nutshell, the algorithm recognizes the irrelevant variables in the configuration space, and thus reduces the iterations needed to search for the optimal ones. Details of the TPE algorithm can be found in the reference paper.\n",
                "\n",
                "The following runs the trials with the pre-defined objective function and search space, using TPE as the optimization method. In total, `NUMBER_ITERATIONS` evaluations are run in search of the best parameters."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 23,
            "metadata": {},
            "outputs": [],
            "source": [
                "with Timer() as time_hyperopt:\n",
                "    # Trials for recording each iteration of the hyperparameter searching.\n",
                "    trials = Trials()\n",
                "\n",
                "    best = fmin(\n",
                "        fn=objective,\n",
                "        space=space,\n",
                "        algo=tpe.suggest,\n",
                "        trials=trials,\n",
                "        max_evals=NUMBER_ITERATIONS\n",
                "    )"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 24,
            "metadata": {},
            "outputs": [
                {
                    "data": {
                        "text/plain": [
                            "{'book_time': datetime.datetime(2019, 7, 17, 12, 28, 19, 108000),\n",
                            " 'exp_key': None,\n",
                            " 'misc': {'cmd': ('domain_attachment', 'FMinIter_Domain'),\n",
                            "  'idxs': {'rank': [21], 'reg': [21]},\n",
                            "  'tid': 21,\n",
                            "  'vals': {'rank': [35.0], 'reg': [0.20807325457673764]},\n",
                            "  'workdir': None},\n",
                            " 'owner': None,\n",
                            " 'refresh_time': datetime.datetime(2019, 7, 17, 12, 28, 28, 873000),\n",
                            " 'result': {'eval_time': 9.763921976089478,\n",
                            "  'loss': 0.9948670591255364,\n",
                            "  'status': 'ok'},\n",
                            " 'spec': None,\n",
                            " 'state': 2,\n",
                            " 'tid': 21,\n",
                            " 'version': 0}"
                        ]
                    },
                    "execution_count": 24,
                    "metadata": {},
                    "output_type": "execute_result"
                }
            ],
            "source": [
                "trials.best_trial"
            ]
        },
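        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "In addition to `trials.best_trial`, the `best` dictionary returned by `fmin` holds the raw sampled values of the winning trial, and `trials.losses()` lists the loss of every evaluation in order. A short sketch (assuming the `trials` and `best` objects from the run above):"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "# Loss (RMSE) of each evaluation, in the order the trials ran.\n",
                "print(\"Losses:\", trials.losses())\n",
                "\n",
                "# Raw sampled values of the best trial.\n",
                "print(\"Best parameters:\", best)"
            ]
        },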
        {
            "cell_type": "code",
            "execution_count": 25,
            "metadata": {},
            "outputs": [
                {
                    "data": {
                        "application/javascript": "/* Put everything inside the global mpl namespace */\nwindow.mpl = {};\n\n\nmpl.get_websocket_type = function() {\n    if (typeof(WebSocket) !== 'undefined') {\n        return WebSocket;\n    } else if (typeof(MozWebSocket) !== 'undefined') {\n        return MozWebSocket;\n    } else {\n        alert('Your browser does not have WebSocket support.' +\n              'Please try Chrome, Safari or Firefox ≥ 6. ' +\n              'Firefox 4 and 5 are also supported but you ' +\n              'have to enable WebSockets in about:config.');\n    };\n}\n\nmpl.figure = function(figure_id, websocket, ondownload, parent_element) {\n    this.id = figure_id;\n\n    this.ws = websocket;\n\n    this.supports_binary = (this.ws.binaryType != undefined);\n\n    if (!this.supports_binary) {\n        var warnings = document.getElementById(\"mpl-warnings\");\n        if (warnings) {\n            warnings.style.display = 'block';\n            warnings.textContent = (\n                \"This browser does not support binary websocket messages. 
\" +\n                    \"Performance may be slow.\");\n        }\n    }\n\n    this.imageObj = new Image();\n\n    this.context = undefined;\n    this.message = undefined;\n    this.canvas = undefined;\n    this.rubberband_canvas = undefined;\n    this.rubberband_context = undefined;\n    this.format_dropdown = undefined;\n\n    this.image_mode = 'full';\n\n    this.root = $('<div/>');\n    this._root_extra_style(this.root)\n    this.root.attr('style', 'display: inline-block');\n\n    $(parent_element).append(this.root);\n\n    this._init_header(this);\n    this._init_canvas(this);\n    this._init_toolbar(this);\n\n    var fig = this;\n\n    this.waiting = false;\n\n    this.ws.onopen =  function () {\n            fig.send_message(\"supports_binary\", {value: fig.supports_binary});\n            fig.send_message(\"send_image_mode\", {});\n            if (mpl.ratio != 1) {\n                fig.send_message(\"set_dpi_ratio\", {'dpi_ratio': mpl.ratio});\n            }\n            fig.send_message(\"refresh\", {});\n        }\n\n    this.imageObj.onload = function() {\n            if (fig.image_mode == 'full') {\n                // Full images could contain transparency (where diff images\n                // almost always do), so we need to clear the canvas so that\n                // there is no ghosting.\n                fig.context.clearRect(0, 0, fig.canvas.width, fig.canvas.height);\n            }\n            fig.context.drawImage(fig.imageObj, 0, 0);\n        };\n\n    this.imageObj.onunload = function() {\n        fig.ws.close();\n    }\n\n    this.ws.onmessage = this._make_on_message_function(this);\n\n    this.ondownload = ondownload;\n}\n\nmpl.figure.prototype._init_header = function() {\n    var titlebar = $(\n        '<div class=\"ui-dialog-titlebar ui-widget-header ui-corner-all ' +\n        'ui-helper-clearfix\"/>');\n    var titletext = $(\n        '<div class=\"ui-dialog-title\" style=\"width: 100%; ' +\n        'text-align: center; padding: 
",
                        "text/plain": [
                            "<IPython.core.display.Javascript object>"
                        ]
                    },
                    "metadata": {},
                    "output_type": "display_data"
                },
                {
                    "data": {
                        "text/html": [
                            "<img src=\"\" width=\"1500\">"
                        ],
                        "text/plain": [
                            "<IPython.core.display.HTML object>"
                        ]
                    },
                    "metadata": {},
                    "output_type": "display_data"
                }
            ],
            "source": [
                "parameters = ['rank', 'reg']\n",
                "cols = len(parameters)\n",
                "f, axes = plt.subplots(nrows=1, ncols=cols, figsize=(15,5))\n",
                "cmap = plt.cm.jet\n",
                "for i, val in enumerate(parameters):\n",
                "    xs = np.array([t['misc']['vals'][val] for t in trials.trials]).ravel()\n",
                "    ys = [t['result']['loss'] for t in trials.trials]\n",
                "    xs, ys = zip(*sorted(zip(xs, ys)))\n",
                "    ys = np.array(ys)\n",
                "    axes[i].scatter(xs, ys, s=20, linewidth=0.01, alpha=0.75, c=cmap(float(i)/len(parameters)))\n",
                "    axes[i].set_title(val)"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "It can be seen from the above plot that\n",
                "* The impact of `rank` is in line with intuition: the smaller the value, the better the result.\n",
                "* Interestingly, the optimal value of `reg` lies roughly between 0.1 and 0.15."
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "Get the best model."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 26,
            "metadata": {},
            "outputs": [],
            "source": [
                "als = ALS(\n",
                "    rank=best[\"rank\"],\n",
                "    regParam=best[\"reg\"],\n",
                "    maxIter=15,\n",
                "    implicitPrefs=False,\n",
                "    alpha=0.1,\n",
                "    coldStartStrategy='drop',\n",
                "    nonnegative=False,\n",
                "    seed=42,\n",
                "    **HEADER_ALS\n",
                ")\n",
                "\n",
                "model_best_hyperopt = als.fit(train)"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "Tuning parameters against other metrics can be done simply by modifying the `objective` function. The following shows an objective function for tuning \"precision@k\". Since `fmin` in `hyperopt` only supports minimization, while the goal here is to maximize \"precision@k\", `-precision` instead of `precision` is returned as the loss from the `objective` function."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 27,
            "metadata": {},
            "outputs": [],
            "source": [
                "# Customize an objective function\n",
                "def objective_precision(params):\n",
                "    with Timer() as time_run_start:\n",
                "\n",
                "        rank = params['rank']\n",
                "        reg = params['reg']\n",
                "        train = params['train'] \n",
                "        valid = params['valid'] \n",
                "        col_user = params['col_user'] \n",
                "        col_item = params['col_item']\n",
                "        col_rating = params['col_rating'] \n",
                "        col_prediction = params['col_prediction'] \n",
                "        k = params['k']\n",
                "        relevancy_method = params['relevancy_method']\n",
                "\n",
                "        header = {\n",
                "            \"userCol\": col_user,\n",
                "            \"itemCol\": col_item,\n",
                "            \"ratingCol\": col_rating,\n",
                "        }\n",
                "\n",
                "        als = ALS(\n",
                "            rank=rank,\n",
                "            maxIter=15,\n",
                "            implicitPrefs=False,\n",
                "            alpha=0.1,\n",
                "            regParam=reg,\n",
                "            coldStartStrategy='drop',\n",
                "            nonnegative=False,\n",
                "            seed=42,\n",
                "            **header\n",
                "        )\n",
                "\n",
                "        model = als.fit(train)\n",
                "    \n",
                "        users = train.select(col_user).distinct()\n",
                "        items = train.select(col_item).distinct()\n",
                "        user_item = users.crossJoin(items)\n",
                "        dfs_pred = model.transform(user_item)\n",
                "\n",
                "        # Remove seen items.\n",
                "        dfs_pred_exclude_train = dfs_pred.alias(\"pred\").join(\n",
                "            train.alias(\"train\"),\n",
                "            (dfs_pred[col_user] == train[col_user]) & (dfs_pred[col_item] == train[col_item]),\n",
                "            how='outer'\n",
                "        )\n",
                "\n",
                "        top_all = dfs_pred_exclude_train.filter(dfs_pred_exclude_train[\"train.\" + col_rating].isNull()) \\\n",
                "            .select(\"pred.\" + col_user, \"pred.\" + col_item, \"pred.\" + col_prediction)\n",
                "\n",
                "        top_all.cache().count()\n",
                "\n",
                "        rank_eval = SparkRankingEvaluation(\n",
                "            valid, \n",
                "            top_all, \n",
                "            k=k, \n",
                "            col_user=col_user, \n",
                "            col_item=col_item, \n",
                "            col_rating=col_rating, \n",
                "            col_prediction=col_prediction, \n",
                "            relevancy_method=relevancy_method\n",
                "        )\n",
                "\n",
                "        precision = rank_eval.precision_at_k()\n",
                "\n",
                "    # Return the objective function result.\n",
                "    return {\n",
                "        'loss': -precision,\n",
                "        'status': STATUS_OK,\n",
                "        'eval_time': time_run_start.interval\n",
                "    }"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "### 4.2 Hyperparameter tuning with `hyperopt` sampling methods"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "Though `hyperopt` works well on a single-node machine, its features (e.g., the `Trials` module) do not support the Spark environment, which makes it hard to perform the tuning tasks in a distributed/parallel manner. It is still useful to use `hyperopt` to sample parameter values from the pre-defined search space, and then run the model training on a Spark cluster with the sampled parameter combinations.\n",
                "\n",
                "The downside of this method is that the intelligent search algorithm (i.e., TPE) of `hyperopt` cannot be used. The approach introduced here is therefore equivalent to random search."
            ]
        },
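        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "`hyperopt.pyll.stochastic.sample` draws one concrete parameter combination from a search space each time it is called. The cell below is a minimal, self-contained sketch of the sampling step, using a toy space that mirrors the `rank`/`reg` space defined earlier; the actual random search below uses the real `space` and `objective`."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "from hyperopt import hp\n",
                "from hyperopt.pyll.stochastic import sample\n",
                "\n",
                "# Toy search space mirroring the rank/reg space used in this notebook.\n",
                "space_demo = {\n",
                "    \"rank\": hp.choice(\"rank\", [10, 20, 40]),\n",
                "    \"reg\": hp.uniform(\"reg\", 0.01, 0.5),\n",
                "}\n",
                "\n",
                "# Draw a handful of random combinations; this is exactly what the\n",
                "# random-search cell below does before evaluating each combination.\n",
                "samples = [sample(space_demo) for _ in range(5)]"
            ]
        },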
        {
            "cell_type": "code",
            "execution_count": 28,
            "metadata": {},
            "outputs": [],
            "source": [
                "with Timer() as time_sample:\n",
                "    # Sample the parameter combinations from the pre-defined search space.\n",
                "    sample_params = [sample(space) for _ in range(NUMBER_ITERATIONS)]\n",
                "    \n",
                "    # Evaluate the pre-defined objective function on each sampled combination.\n",
                "    results_map = [objective(params) for params in sample_params]\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 30,
            "metadata": {},
            "outputs": [
                {
                    "data": {
                        "text/plain": [
                            "[{'eval_time': 9.468051671981812, 'loss': 1.027085217204854, 'status': 'ok'},\n",
                            " {'eval_time': 8.947720766067505, 'loss': 1.017764730532703, 'status': 'ok'},\n",
                            " {'eval_time': 9.599841117858887, 'loss': 1.2995721337596726, 'status': 'ok'},\n",
                            " {'eval_time': 8.645057439804077, 'loss': 0.998792404289471, 'status': 'ok'},\n",
                            " {'eval_time': 9.246882200241089, 'loss': 1.159882028048988, 'status': 'ok'},\n",
                            " {'eval_time': 9.159096479415894, 'loss': 1.3707996773718212, 'status': 'ok'},\n",
                            " {'eval_time': 9.555922508239746, 'loss': 1.4255606225971154, 'status': 'ok'},\n",
                            " {'eval_time': 9.555083751678467, 'loss': 0.9974149852593205, 'status': 'ok'},\n",
                            " {'eval_time': 9.14759874343872, 'loss': 1.0233910316377184, 'status': 'ok'},\n",
                            " {'eval_time': 9.288854122161865, 'loss': 1.1079683856151636, 'status': 'ok'},\n",
                            " {'eval_time': 9.035703420639038, 'loss': 1.4973257401273627, 'status': 'ok'},\n",
                            " {'eval_time': 9.43152904510498, 'loss': 1.3660611992616116, 'status': 'ok'},\n",
                            " {'eval_time': 9.249063491821289, 'loss': 1.3212144812805433, 'status': 'ok'},\n",
                            " {'eval_time': 9.086166143417358, 'loss': 1.1874756727128037, 'status': 'ok'},\n",
                            " {'eval_time': 8.880879878997803, 'loss': 1.017539254275622, 'status': 'ok'},\n",
                            " {'eval_time': 9.382610559463501, 'loss': 1.0726440276761462, 'status': 'ok'},\n",
                            " {'eval_time': 9.176624774932861, 'loss': 1.4048578426830673, 'status': 'ok'},\n",
                            " {'eval_time': 9.292484045028687, 'loss': 1.4308198992737957, 'status': 'ok'},\n",
                            " {'eval_time': 8.882294178009033, 'loss': 1.2712116101690774, 'status': 'ok'},\n",
                            " {'eval_time': 9.89118218421936, 'loss': 1.0887572322216503, 'status': 'ok'},\n",
                            " {'eval_time': 9.102333545684814, 'loss': 1.449849882363006, 'status': 'ok'},\n",
                            " {'eval_time': 9.27437686920166, 'loss': 1.0465564408902348, 'status': 'ok'},\n",
                            " {'eval_time': 9.215168714523315, 'loss': 1.248519446580608, 'status': 'ok'},\n",
                            " {'eval_time': 9.225409269332886, 'loss': 1.0265498645574211, 'status': 'ok'},\n",
                            " {'eval_time': 9.08506464958191, 'loss': 1.254533287299843, 'status': 'ok'}]"
                        ]
                    },
                    "execution_count": 30,
                    "metadata": {},
                    "output_type": "execute_result"
                }
            ],
            "source": [
                "results_map"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "Get the best model."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 31,
            "metadata": {},
            "outputs": [],
            "source": [
                "loss_metrics = np.array([x['loss'] for x in results_map])\n",
                "best_loss = np.where(loss_metrics == min(loss_metrics))"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 32,
            "metadata": {},
            "outputs": [],
            "source": [
                "best_param = sample_params[int(best_loss[0][0])]"
            ]
        },
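        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "A compact alternative is `np.argmin`, which returns the index of the smallest loss directly. The cell below is a self-contained toy illustration with stand-in results; in the notebook, `results_map` and `sample_params` produced above play these roles."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "import numpy as np\n",
                "\n",
                "# Stand-ins for the results_map / sample_params produced above.\n",
                "demo_results = [{\"loss\": 1.2}, {\"loss\": 0.9}, {\"loss\": 1.5}]\n",
                "demo_params = [{\"rank\": 10}, {\"rank\": 20}, {\"rank\": 40}]\n",
                "\n",
                "# argmin gives the position of the smallest loss in one step.\n",
                "demo_losses = np.array([r[\"loss\"] for r in demo_results])\n",
                "demo_best = demo_params[int(np.argmin(demo_losses))]  # {\"rank\": 20}"
            ]
        },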
        {
            "cell_type": "code",
            "execution_count": 33,
            "metadata": {},
            "outputs": [],
            "source": [
                "als = ALS(\n",
                "    rank=best_param[\"rank\"],\n",
                "    regParam=best_param[\"reg\"],\n",
                "    maxIter=15,\n",
                "    implicitPrefs=False,\n",
                "    alpha=0.1,\n",
                "    coldStartStrategy='drop',\n",
                "    nonnegative=False,\n",
                "    seed=42,\n",
                "    **HEADER_ALS\n",
                ")\n",
                "\n",
                "model_best_sample = als.fit(train)"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 5 Evaluation on testing data"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "The optimal parameters can then be used to build a recommender, which is then evaluated on the testing data.\n",
                "\n",
                "The following code generates evaluation results on the testing dataset with the optimal model selected against the pre-defined loss. Without loss of generality, the model that performs best w.r.t. the regression loss (i.e., the RMSE metric) is used here. Other metrics, such as precision@k as illustrated in the sections above, can equally be used to evaluate the optimal model on the testing dataset."
            ]
        },
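        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "For intuition, precision@k for a single user is the fraction of the top-k recommended items that appear in the user's relevant (test) set. A small pure-Python sketch (the helper name is hypothetical, not one of the repository utilities):\n",
                "\n",
                "```python\n",
                "def precision_at_k(recommended, relevant, k):\n",
                "    # Fraction of the top-k recommended items found in the relevant set\n",
                "    top_k = recommended[:k]\n",
                "    hits = sum(1 for item in top_k if item in relevant)\n",
                "    return hits / k\n",
                "\n",
                "# Items 1 and 3 of the top 3 are relevant, so precision@3 = 2/3\n",
                "score = precision_at_k([1, 2, 3, 4], {1, 3}, k=3)\n",
                "```"
            ]
        },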
        {
            "cell_type": "code",
            "execution_count": 34,
            "metadata": {},
            "outputs": [],
            "source": [
                "# Get prediction results with the optimal models from the different approaches.\n",
                "prediction_spark = model_best_spark.transform(test)\n",
                "prediction_hyperopt = model_best_hyperopt.transform(test)\n",
                "prediction_sample = model_best_sample.transform(test)\n",
                "\n",
                "predictions = [prediction_spark, prediction_hyperopt, prediction_sample]\n",
                "elapsed = [time_spark.interval, time_hyperopt.interval, time_sample.interval]\n",
                "\n",
                "approaches = ['spark', 'hyperopt', 'sample']\n",
                "test_evaluations = pd.DataFrame()\n",
                "for ind, approach in enumerate(approaches):    \n",
                "    rating_eval = SparkRatingEvaluation(\n",
                "        test, \n",
                "        predictions[ind],\n",
                "        **HEADER\n",
                "    )\n",
                "    \n",
                "    result = pd.DataFrame({\n",
                "        'Approach': approach,\n",
                "        'RMSE': rating_eval.rmse(),\n",
                "        'MAE': rating_eval.mae(),\n",
                "        'Explained variance': rating_eval.exp_var(),\n",
                "        'R squared': rating_eval.rsquared(),\n",
                "        'Elapsed': elapsed[ind]\n",
                "    }, index=[0])\n",
                "    \n",
                "    test_evaluations = pd.concat([test_evaluations, result])"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 35,
            "metadata": {},
            "outputs": [
                {
                    "data": {
                        "text/html": [
                            "<div>\n",
                            "<style scoped>\n",
                            "    .dataframe tbody tr th:only-of-type {\n",
                            "        vertical-align: middle;\n",
                            "    }\n",
                            "\n",
                            "    .dataframe tbody tr th {\n",
                            "        vertical-align: top;\n",
                            "    }\n",
                            "\n",
                            "    .dataframe thead th {\n",
                            "        text-align: right;\n",
                            "    }\n",
                            "</style>\n",
                            "<table border=\"1\" class=\"dataframe\">\n",
                            "  <thead>\n",
                            "    <tr style=\"text-align: right;\">\n",
                            "      <th></th>\n",
                            "      <th>Approach</th>\n",
                            "      <th>Elapsed</th>\n",
                            "      <th>Explained variance</th>\n",
                            "      <th>MAE</th>\n",
                            "      <th>R squared</th>\n",
                            "      <th>RMSE</th>\n",
                            "    </tr>\n",
                            "  </thead>\n",
                            "  <tbody>\n",
                            "    <tr>\n",
                            "      <th>0</th>\n",
                            "      <td>spark</td>\n",
                            "      <td>133.150098</td>\n",
                            "      <td>0.289853</td>\n",
                            "      <td>0.776693</td>\n",
                            "      <td>0.252805</td>\n",
                            "      <td>0.976022</td>\n",
                            "    </tr>\n",
                            "    <tr>\n",
                            "      <th>0</th>\n",
                            "      <td>hyperopt</td>\n",
                            "      <td>235.779974</td>\n",
                            "      <td>0.299736</td>\n",
                            "      <td>0.790172</td>\n",
                            "      <td>0.240981</td>\n",
                            "      <td>0.983563</td>\n",
                            "    </tr>\n",
                            "    <tr>\n",
                            "      <th>0</th>\n",
                            "      <td>sample</td>\n",
                            "      <td>230.902271</td>\n",
                            "      <td>0.287638</td>\n",
                            "      <td>0.791199</td>\n",
                            "      <td>0.232688</td>\n",
                            "      <td>0.988922</td>\n",
                            "    </tr>\n",
                            "  </tbody>\n",
                            "</table>\n",
                            "</div>"
                        ],
                        "text/plain": [
                            "   Approach     Elapsed  Explained variance       MAE  R squared      RMSE\n",
                            "0     spark  133.150098            0.289853  0.776693   0.252805  0.976022\n",
                            "0  hyperopt  235.779974            0.299736  0.790172   0.240981  0.983563\n",
                            "0    sample  230.902271            0.287638  0.791199   0.232688  0.988922"
                        ]
                    },
                    "execution_count": 35,
                    "metadata": {},
                    "output_type": "execute_result"
                }
            ],
            "source": [
                "test_evaluations"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "From the results it can be seen that, *with the same number of iterations*, the Spark native construct based approach takes the least amount of time, even without parallel computing. This is because the Spark native constructs run the actual analytics on the highly efficient underlying JVM code. Interestingly, the run times for `hyperopt` with the TPE algorithm and for random search are almost the same. A possible reason is that, although the TPE algorithm searches the parameter space intelligently, it runs the tuning iterations sequentially; the advantage of TPE may only become apparent in a higher-dimensional hyperparameter space.\n",
                "\n",
                "All three approaches use the same RMSE loss. On this measure, the native Spark construct performs best and the `hyperopt` based approach second best, though the margins are very small. Note that these differences may be due to many factors, such as the characteristics of the dataset, the dimensionality of the hyperparameter space, the sampling size of the search, and the randomness of intermediate steps in the tuning process. In practice, multiple runs are required to generate statistically robust comparisons. We ran the comparison above 5 times, and the results aligned well with each other in terms of both objective metric values and elapsed time."
            ]
        },
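        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "To aggregate such repeated runs, one can report the mean and standard deviation of the metric across runs. A minimal sketch with hypothetical RMSE values:\n",
                "\n",
                "```python\n",
                "import statistics\n",
                "\n",
                "# Hypothetical RMSE values from five repeated tuning runs\n",
                "rmse_runs = [0.976, 0.978, 0.975, 0.977, 0.979]\n",
                "mean_rmse = statistics.mean(rmse_runs)  # 0.977\n",
                "std_rmse = statistics.stdev(rmse_runs)\n",
                "```"
            ]
        },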
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "# Conclusions"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "In summary, there are mainly three approaches for hyperparameter tuning of Spark based recommendation algorithms. They are compared as follows."
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "|Approach|Distributed (on Spark)|Param sampling|Advanced hyperparam searching algo|Custom evaluation metrics|Custom data split|\n",
                "|---------|-------------|--------------|--------------------------|--------------|------------|\n",
                "|AzureML Services|Parallelized Spark sessions on a multi-node cluster, or a single Spark session on one VM node.|Random, Grid, and Bayesian sampling for discrete and continuous variables.|Bandit policy, median stopping policy, and truncation selection policy.|Yes|Yes|\n",
                "|Spark native construct|Distributed in a single-node standalone Spark environment or a multi-node Spark cluster.|No|No|Need to re-engineer Spark modules.|Need to re-engineer Spark modules.|\n",
                "|`hyperopt`|No (only supports parallelization via MongoDB)|Random sampling for discrete and continuous variables.|Tree of Parzen Estimators|Yes|Yes|"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 36,
            "metadata": {},
            "outputs": [],
            "source": [
                "# cleanup spark instance\n",
                "spark.stop()"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "# References\n",
                "\n",
                "* Azure Machine Learning Services, url: https://azure.microsoft.com/en-us/services/machine-learning-service/\n",
                "* Lisha Li, *et al*, Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization, The Journal of Machine Learning Research, Volume 18, Issue 1, pp 6765-6816, January 2017.\n",
                "* James Bergstra, *et al*, Algorithms for Hyper-Parameter Optimization, Proc. of the 25th NIPS, 2011.\n",
                "* `hyperopt`, url: http://hyperopt.github.io/hyperopt/.\n",
                "* Bergstra, J., Yamins, D., Cox, D. D. (2013) Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures. Proc. of the 30th International Conference on Machine Learning (ICML 2013).\n",
                "* Kris Wright, \"Hyper parameter tuning with hyperopt\", url: https://districtdatalabs.silvrback.com/parameter-tuning-with-hyperopt"
            ]
        }
    ],
    "metadata": {
        "celltoolbar": "Tags",
        "kernelspec": {
            "display_name": "Python (reco_pyspark)",
            "language": "python",
            "name": "reco_pyspark"
        },
        "language_info": {
            "codemirror_mode": {
                "name": "ipython",
                "version": 3
            },
            "file_extension": ".py",
            "mimetype": "text/x-python",
            "name": "python",
            "nbconvert_exporter": "python",
            "pygments_lexer": "ipython3",
            "version": "3.6.0"
        }
    },
    "nbformat": 4,
    "nbformat_minor": 2
}
