{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Multi-Armed Bandits and Reinforcement Learning with Amazon SageMaker\n",
    "\n",
    "We demonstrate how to manage your own contextual multi-armed bandit workflow on SageMaker, using the built-in [AWS Reinforcement Learning Container](https://github.com/aws/sagemaker-rl-container) to train and deploy contextual bandit models. We show how to train models that interact with a live environment (simulated here by a client application) and how to continuously update them with efficient exploration.\n",
    "\n",
    "### Why Contextual Bandits?\n",
    "\n",
    "Wherever we look to personalize content for a user (content layout, ads, search, product recommendations, etc.), contextual bandits come in handy. Traditional personalization methods collect a training dataset, build a model, and deploy it to generate recommendations. However, the training algorithm does not tell us how to collect this dataset, especially in a production system where generating poor recommendations leads to loss of revenue. Contextual bandit algorithms help us collect this data strategically by trading off between exploiting known information and exploring recommendations that may yield higher benefits. The collected data is used to update the personalization model in an online manner. Therefore, contextual bandits help us train a personalization model while minimizing the impact of poor recommendations."
   ]
  },
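  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The exploration-exploitation trade-off can be illustrated with a minimal epsilon-greedy sketch.  This is a simplification for intuition only; the actual experiment below uses Vowpal Wabbit exploration policies such as `cover`, and the reward estimates here are purely illustrative:\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "def epsilon_greedy(estimated_rewards, epsilon=0.1):\n",
    "    # With probability epsilon, explore a uniformly random arm;\n",
    "    # otherwise exploit the arm with the highest estimated reward.\n",
    "    if random.random() < epsilon:\n",
    "        return random.randrange(len(estimated_rewards))\n",
    "    return max(range(len(estimated_rewards)), key=lambda a: estimated_rewards[a])\n",
    "\n",
    "# Illustrative: arm 1 currently looks best, but arm 0 still gets roughly 5% of traffic\n",
    "counts = [0, 0]\n",
    "for _ in range(1000):\n",
    "    counts[epsilon_greedy([0.3, 0.7], epsilon=0.1)] += 1\n",
    "```"
   ]
  },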
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](img/multi_armed_bandit_maximize_reward.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To implement the exploration-exploitation strategy, we need an iterative training and deployment system that: (1) recommends an action using the contextual bandit model based on user context, (2) captures the implicit feedback over time, and (3) continuously trains the model with incremental interaction data. In this notebook, we show how to set up the infrastructure needed for such an iterative learning system. While the example demonstrates a bandits application, these continual learning systems are useful more generally in dynamic scenarios where models must be continually updated to capture recent trends in the data (e.g., tracking evolving fraud behavior or tracking user interests over time).\n",
    "\n",
    "In a typical supervised learning setup, the model is trained with a SageMaker training job and hosted behind a SageMaker endpoint. The client application calls the endpoint for inference and receives a response. In bandits, the client application also sends back a reward (a score assigned to each recommendation generated by the model), and these rewards become part of the dataset for subsequent model training."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Relevant Links\n",
    "\n",
    "In Practice\n",
    "* [AWS Blog Post on Contextual Multi-Armed Bandits](https://aws.amazon.com/blogs/machine-learning/power-contextual-bandits-using-continual-learning-with-amazon-sagemaker-rl/)\n",
    "* [Multi-Armed Bandits at StitchFix](https://multithreaded.stitchfix.com/blog/2020/08/05/bandits/)\n",
    "* [Introduction to Contextual Bandits](https://getstream.io/blog/introduction-contextual-bandits/)\n",
    "* [Vowpal Wabbit Contextual Bandit Algorithms](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Contextual-Bandit-algorithms)\n",
    "\n",
    "Theory\n",
    "* [Learning to Interact](https://hunch.net/~jl/interact.pdf)\n",
    "* [Contextual Bandit Bake-Off](https://arxiv.org/pdf/1802.04064.pdf)\n",
    "* [Doubly-Robust Policy Evaluation and Learning](https://arxiv.org/pdf/1103.4601.pdf)\n",
    "\n",
    "Code\n",
    "* [AWS Open Source Reinforcement Learning Containers](https://github.com/aws/sagemaker-rl-container)\n",
    "* [AWS Open Source Bandit Experiment Manager](./common/sagemaker_rl/orchestrator/workflow/manager)\n",
    "* [Vowpal Wabbit Reinforcement Learning Framework](https://github.com/VowpalWabbit/)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# AWS Open Source Bandit `ExperimentManager` Library"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](img/multi_armed_bandit_traffic_shift.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The bandit model is implemented by the open source [**Bandit Experiment Manager**](./common/sagemaker_rl/orchestrator/workflow/manager/) provided with this example.  This implementation continuously updates a Vowpal Wabbit reinforcement learning model using Amazon SageMaker, DynamoDB, Kinesis, and S3.\n",
    "\n",
    "The client application, a recommender system with a review service in our case, calls the SageMaker hosting endpoint that serves the bandit model.  The application sends an `event` with the `context` (i.e., user, product, and review text) to the bandit model and receives a recommended action in return.  In our case, the action is 1 of the 2 BERT models that we are testing.  The bandit model stores this event data (given context and recommended action) in S3 using Amazon Kinesis.  _Note:  The context makes this a \"contextual bandit\" and differentiates this implementation from a regular multi-armed bandit._\n",
    "\n",
    "The client application uses the recommended BERT model to classify the review text as star rating 1 through 5 and compares the predicted star rating to the user-selected star rating.  If the BERT model correctly predicts the star rating of the review text (i.e., matches the user-selected star rating), then the bandit model is rewarded with `reward=1`.  If the BERT model incorrectly classifies the star rating, the bandit model receives `reward=0`.\n",
    "\n",
    "The client application stores the rewards data in S3 using Amazon Kinesis.  Periodically (e.g., every 100 rewards), we incrementally train an updated bandit model with the latest reward and event data.  This updated bandit model is evaluated against the current model using a holdout dataset of rewards and events.  If the updated model's accuracy exceeds a given threshold relative to the existing model, it is automatically deployed in a blue/green manner with no downtime.  SageMaker RL supports offline evaluation by performing counterfactual analysis (CFA).  By default, we apply the [**doubly robust (DR)**](https://arxiv.org/pdf/1103.4601.pdf) estimation method. The bandit model tries to minimize the cost (`1 - reward`), so a smaller evaluation score indicates better bandit model performance.\n",
    "\n",
    "Unlike traditional A/B tests, the bandit model learns the best BERT model (action) for a given context over time and begins to shift traffic to the best model.  Depending on the aggressiveness of the selected bandit algorithm, the bandit model continues to explore the under-performing models, but starts to favor and exploit the over-performing models.  Also unlike A/B tests, multi-armed bandits allow you to add a new action (i.e., a new BERT model) dynamically throughout the life of the experiment.  When the bandit model sees the new BERT model, it starts sending traffic to it and exploring its accuracy alongside the existing BERT models in the experiment.\n",
    "\n",
    "#### Local Mode\n",
    "\n",
    "To facilitate experimentation, we provide a `local_mode` that runs the contextual bandit example on the SageMaker Notebook instance itself instead of on SageMaker training and hosting cluster instances.  The workflow remains the same in `local_mode`, but runs much faster for small datasets, making it a useful tool for experimenting and debugging.  However, it will not scale to production use cases with high throughput and large datasets.  In `local_mode`, training, evaluation, and hosting are all done in the local [SageMaker Vowpal Wabbit Docker Container](https://github.com/aws/sagemaker-rl-container)."
   ]
  },
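  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The reward logic above can be sketched as follows.  This is a simplified stand-in for the client application; the record fields shown are illustrative and not the Experiment Manager's actual schema:\n",
    "\n",
    "```python\n",
    "def compute_reward(predicted_star_rating, true_star_rating):\n",
    "    # The bandit is rewarded only when the recommended BERT model\n",
    "    # predicts the user-selected star rating exactly.\n",
    "    return 1 if predicted_star_rating == true_star_rating else 0\n",
    "\n",
    "# Illustrative event and reward records, joined later on event_id\n",
    "event = {'event_id': 'abc-123', 'action': 1, 'action_prob': 0.85}\n",
    "reward = {'event_id': 'abc-123', 'reward': compute_reward(5, 5)}\n",
    "```"
   ]
  },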
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import boto3\n",
    "import sagemaker\n",
    "import pandas as pd\n",
    "\n",
    "sess   = sagemaker.Session()\n",
    "bucket = sess.default_bucket()\n",
    "role = sagemaker.get_execution_role()\n",
    "region = boto3.Session().region_name\n",
    "\n",
    "sm = boto3.Session().client(service_name='sagemaker', region_name=region)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "%store -r tensorflow_endpoint_name"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[OK]\n"
     ]
    }
   ],
   "source": [
    "try:\n",
    "    tensorflow_endpoint_name\n",
    "    print('[OK]')\n",
    "except NameError:\n",
    "    print('+++++++++++++++++++++++++++++++')\n",
    "    print('[ERROR] Please run the notebooks in this section before you continue.')\n",
    "    print('+++++++++++++++++++++++++++++++')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensorflow-training-2021-01-23-06-16-08-737-tf-1611432312\n"
     ]
    }
   ],
   "source": [
    "print(tensorflow_endpoint_name)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "%store -r pytorch_endpoint_name"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[OK]\n"
     ]
    }
   ],
   "source": [
    "try:\n",
    "    pytorch_endpoint_name\n",
    "    print('[OK]')    \n",
    "except NameError:\n",
    "    print('+++++++++++++++++++++++++++++++')\n",
    "    print('[ERROR] Please run the notebooks in this section before you continue.')\n",
    "    print('+++++++++++++++++++++++++++++++')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensorflow-training-2021-01-23-06-16-08-737-pt-1611433340\n"
     ]
    }
   ],
   "source": [
    "print(pytorch_endpoint_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Configure the 2 BERT Models to Test with our Bandit Experiment"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that the last trained bandit model is deployed as a SageMaker Endpoint, the client application sends the context to the endpoint and receives the recommended action.  The bandit model recommends 1 of 2 actions in our example, `1` or `2`, which correspond to BERT model 1 and BERT model 2, respectively.  Let's configure these 2 BERT models below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "model1_endpoint_name = tensorflow_endpoint_name"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensorflow-training-2021-01-23-06-16-08-737-tf-1611432312\n"
     ]
    }
   ],
   "source": [
    "print(model1_endpoint_name)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "try:\n",
    "    waiter = sm.get_waiter('endpoint_in_service')\n",
    "    waiter.wait(EndpointName=model1_endpoint_name)\n",
    "except Exception:\n",
    "    print('###################')\n",
    "    print('The endpoint is not running.')\n",
    "    print('Please re-run the previous section to deploy the endpoint.')\n",
    "    print('###################')    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "content_type is a no-op in sagemaker>=2.\n",
      "See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.\n"
     ]
    }
   ],
   "source": [
    "import json\n",
    "from sagemaker.tensorflow.model import TensorFlowPredictor\n",
    "\n",
    "model1_predictor = TensorFlowPredictor(endpoint_name=model1_endpoint_name,\n",
    "                                       sagemaker_session=sess,\n",
    "                                       model_name='saved_model',\n",
    "                                       model_version=0,\n",
    "                                       content_type='application/jsonlines',\n",
    "                                       accept_type='application/jsonlines')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Predicted star_rating: 5 for review_body \"This is great!\"\n",
      "Predicted star_rating: 3 for review_body \"This is bad.\"\n"
     ]
    }
   ],
   "source": [
    "inputs = [\n",
    "    {\"review_body\": \"This is great!\"},\n",
    "    {\"review_body\": \"This is bad.\"}\n",
    "]\n",
    "\n",
    "predicted1_classes_str = model1_predictor.predict(inputs)\n",
    "predicted1_classes = predicted1_classes_str.splitlines()\n",
    "\n",
    "for predicted1_class_json, input_data in zip(predicted1_classes, inputs):\n",
    "    predicted1_class = json.loads(predicted1_class_json)['predicted_label']\n",
    "    print('Predicted star_rating: {} for review_body \"{}\"'.format(predicted1_class, input_data[\"review_body\"]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<b>Review <a target=\"blank\" href=\"https://console.aws.amazon.com/sagemaker/home?region=us-east-1#/endpoints/tensorflow-training-2021-01-23-06-16-08-737-tf-1611432312\">Model 1 SageMaker REST Endpoint</a></b>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from IPython.core.display import display, HTML\n",
    "\n",
    "display(HTML('<b>Review <a target=\"blank\" href=\"https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}\">Model 1 SageMaker REST Endpoint</a></b>'.format(region, model1_endpoint_name)))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "model2_endpoint_name = pytorch_endpoint_name"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensorflow-training-2021-01-23-06-16-08-737-pt-1611433340\n"
     ]
    }
   ],
   "source": [
    "print(model2_endpoint_name)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "try:\n",
    "    waiter = sm.get_waiter('endpoint_in_service')\n",
    "    waiter.wait(EndpointName=model2_endpoint_name)\n",
    "except Exception:\n",
    "    print('###################')\n",
    "    print('The endpoint is not running.')\n",
    "    print('Please re-run the previous section to deploy the endpoint.')\n",
    "    print('###################')    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "content_type is a no-op in sagemaker>=2.\n",
      "See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.\n"
     ]
    }
   ],
   "source": [
    "import json\n",
    "from sagemaker.predictor import Predictor\n",
    "from sagemaker.serializers import JSONSerializer\n",
    "from sagemaker.deserializers import JSONDeserializer\n",
    "        \n",
    "model2_predictor = Predictor(endpoint_name=model2_endpoint_name,\n",
    "                             sagemaker_session=sess,\n",
    "                             serializer=JSONSerializer(), \n",
    "                             deserializer=JSONDeserializer(),\n",
    "                             content_type='application/jsonlines',\n",
    "                             accept_type='application/jsonlines')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Predicted star_rating: 4 for review_body \"This is great!\"\n",
      "Predicted star_rating: 4 for review_body \"This is bad.\"\n"
     ]
    }
   ],
   "source": [
    "inputs = [\n",
    "    {\"review_body\": \"This is great!\"},\n",
    "    {\"review_body\": \"This is bad.\"}\n",
    "]\n",
    "\n",
    "predicted2_classes_str = model2_predictor.predict(inputs)\n",
    "predicted2_classes = predicted2_classes_str.splitlines()\n",
    "\n",
    "for predicted2_class_json, input_data in zip(predicted2_classes, inputs):\n",
    "    predicted2_class = json.loads(predicted2_class_json)['predicted_label']\n",
    "    print('Predicted star_rating: {} for review_body \"{}\"'.format(predicted2_class, input_data[\"review_body\"]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<b>Review <a target=\"blank\" href=\"https://console.aws.amazon.com/sagemaker/home?region=us-east-1#/endpoints/tensorflow-training-2021-01-23-06-16-08-737-pt-1611433340\">Model 2 SageMaker REST Endpoint</a></b>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from IPython.core.display import display, HTML\n",
    "\n",
    "display(HTML('<b>Review <a target=\"blank\" href=\"https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}\">Model 2 SageMaker REST Endpoint</a></b>'.format(region, model2_endpoint_name)))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "import yaml\n",
    "import sys\n",
    "import numpy as np\n",
    "import time\n",
    "import sagemaker\n",
    "\n",
    "sys.path.append('common')\n",
    "sys.path.append('common/sagemaker_rl')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import matplotlib.pyplot as plt\n",
    "from pylab import rcParams\n",
    "\n",
    "%matplotlib inline\n",
    "%config InlineBackend.figure_format='retina'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Configuration\n",
    "\n",
    "The configuration for the bandits application is specified in a `config.yaml` file, shown below. It configures the AWS resources needed: the DynamoDB tables store metadata related to experiments, models, and data joins, while `private_resource` specifies the SageMaker instance types and counts used for training, evaluation, and hosting. The `image` entry specifies the SageMaker container image used for the bandits application. This config file also contains algorithm- and SageMaker-specific settings.  Note that all the data generated and used by the bandits application will be stored in `s3://sagemaker-{REGION}-{AWS_ACCOUNT_ID}/{experiment_id}/`.\n",
    "\n",
    "Please make sure that the `num_arms` parameter in the config is equal to the number of actions in the client application (which is defined in the cell below).\n",
    "\n",
    "The Docker image is defined here:  https://github.com/aws/sagemaker-rl-container/blob/master/vw/docker/8.7.0/Dockerfile"
   ]
  },
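  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check, you can verify that `num_arms` in the parsed config matches the number of client-side actions.  A minimal sketch, using a stand-in dict for the parsed `config.yaml` (the real notebook loads the file with `yaml.load` a few cells below):\n",
    "\n",
    "```python\n",
    "# 2 BERT model endpoints => 2 bandit actions (arms)\n",
    "client_actions = ['bert-model-1', 'bert-model-2']\n",
    "\n",
    "# Stand-in for the parsed config.yaml shown in the next cell\n",
    "config = {'algor': {'algorithms_parameters': {'num_arms': 2}}}\n",
    "\n",
    "num_arms = config['algor']['algorithms_parameters']['num_arms']\n",
    "assert num_arms == len(client_actions), 'num_arms must equal the number of actions'\n",
    "```"
   ]
  },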
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[94mresource\u001b[39;49;00m:\n",
      "  \u001b[94mshared_resource\u001b[39;49;00m:\n",
      "    \u001b[94mresources_cf_stack_name\u001b[39;49;00m: \u001b[33m\"\u001b[39;49;00m\u001b[33mBanditsSharedResourceStack\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m \u001b[37m# cloud formation stack\u001b[39;49;00m\n",
      "    \u001b[94mexperiment_db\u001b[39;49;00m:\n",
      "      \u001b[94mtable_name\u001b[39;49;00m: \u001b[33m\"\u001b[39;49;00m\u001b[33mBanditsExperimentTable\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m \u001b[37m# Dynamo table for status of an experiment\u001b[39;49;00m\n",
      "    \u001b[94mmodel_db\u001b[39;49;00m:\n",
      "      \u001b[94mtable_name\u001b[39;49;00m: \u001b[33m\"\u001b[39;49;00m\u001b[33mBanditsModelTable\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m \u001b[37m# Dynamo table for status of all models trained\u001b[39;49;00m\n",
      "    \u001b[94mjoin_db\u001b[39;49;00m:\n",
      "      \u001b[94mtable_name\u001b[39;49;00m: \u001b[33m\"\u001b[39;49;00m\u001b[33mBanditsJoinTable\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m \u001b[37m# Dynamo table for status of all joining job for reward ingestion\u001b[39;49;00m\n",
      "    \u001b[94miam_role\u001b[39;49;00m:\n",
      "      \u001b[94mrole_name\u001b[39;49;00m: \u001b[33m\"\u001b[39;49;00m\u001b[33mBanditsIAMRole\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m\n",
      "  \u001b[94mprivate_resource\u001b[39;49;00m:\n",
      "    \u001b[94mhosting_fleet\u001b[39;49;00m:\n",
      "      \u001b[94minstance_type\u001b[39;49;00m: \u001b[33m\"\u001b[39;49;00m\u001b[33mml.t2.medium\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m\n",
      "      \u001b[94minstance_count\u001b[39;49;00m: 1\n",
      "    \u001b[94mtraining_fleet\u001b[39;49;00m:\n",
      "      \u001b[94minstance_type\u001b[39;49;00m: \u001b[33m\"\u001b[39;49;00m\u001b[33mml.c5.4xlarge\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m\n",
      "    \u001b[94mevaluation_fleet\u001b[39;49;00m:\n",
      "      \u001b[94minstance_type\u001b[39;49;00m: \u001b[33m\"\u001b[39;49;00m\u001b[33mml.c5.4xlarge\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m\n",
      "\u001b[94mimage\u001b[39;49;00m: \u001b[33m\"\u001b[39;49;00m\u001b[33m462105765813.dkr.ecr.{AWS_REGION}.amazonaws.com/sagemaker-rl-vw-container:vw-8.7.0-cpu\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m \u001b[37m# Vowpal Wabbit container\u001b[39;49;00m\n",
      "\u001b[94malgor\u001b[39;49;00m: \u001b[37m# Vowpal Wabbit algorithm parameters\u001b[39;49;00m\n",
      "  \u001b[94malgorithms_parameters\u001b[39;49;00m:\n",
      "    \u001b[94mexploration_policy\u001b[39;49;00m: \u001b[33m\"\u001b[39;49;00m\u001b[33mcover\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m \u001b[37m# supports \"egreedy\", \"bag\", \"cover\"\u001b[39;49;00m\n",
      "    \u001b[94mepsilon\u001b[39;49;00m: 0.10 \u001b[37m# percent to explore with egreedy exploration policy\u001b[39;49;00m\n",
      "    \u001b[94mnum_policies\u001b[39;49;00m: 3 \u001b[37m# number of nested policies to create when using bag or cover exploration policy\u001b[39;49;00m\n",
      "    \u001b[94mnum_arms\u001b[39;49;00m: 2\n",
      "    \u001b[94mcfa_type\u001b[39;49;00m: \u001b[33m\"\u001b[39;49;00m\u001b[33mdr\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m \u001b[37m# supports \"dr\", \"ips\"\u001b[39;49;00m\n",
      "\u001b[94mlocal_mode\u001b[39;49;00m: false \u001b[37m# use local mode?\u001b[39;49;00m\n",
      "\u001b[94msoft_deployment\u001b[39;49;00m: true \u001b[37m# use the same endpoint with updated model using a blue-green deployment?\u001b[39;49;00m\n",
      " \n"
     ]
    }
   ],
   "source": [
    "!pygmentize 'config.yaml'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "config_file = 'config.yaml'\n",
    "with open(config_file, 'r') as yaml_file:\n",
    "    config = yaml.load(yaml_file, Loader=yaml.FullLoader)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Additional permissions for the IAM role\n",
    "The IAM role requires additional permissions for [AWS CloudFormation](https://aws.amazon.com/cloudformation/), [Amazon DynamoDB](https://aws.amazon.com/dynamodb/), [Amazon Kinesis Data Firehose](https://aws.amazon.com/kinesis/data-firehose/), and [Amazon Athena](https://aws.amazon.com/athena/). Make sure the SageMaker role you are using has these permissions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "# from markdown_helper import *\n",
    "# from IPython.display import Markdown\n",
    "\n",
    "# display(Markdown(generate_help_for_experiment_manager_permissions(role)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Client Application (Environment)\n",
    "The client application simulates a live environment that uses the bandit model to recommend a BERT model to classify review text submitted by the application user. \n",
    "\n",
    "The logic of reward generation resides in the client application.  We simulate the online learning loop with feedback.  The data consists of 2 actions, 1 for each BERT model under test.  If the recommended BERT model predicts the correct star rating, the bandit model is rewarded with `reward=1`.  Otherwise, the bandit model receives `reward=0`.\n",
    "\n",
    "The workflow of the client application is as follows:\n",
    "- Our client application picks sample review text at random, which is sent to the bandit model (SageMaker endpoint) to recommend an action (BERT model) to classify the review text into star rating 1 through 5.\n",
    "- The bandit model returns an action, an action probability, and an `event_id` for this prediction event.\n",
    "- Since the client application uses the Amazon Customer Reviews Dataset, we know the true star rating for the review text.\n",
    "- The client application compares the predicted and true star rating and assigns a reward to the bandit model using Amazon Kinesis, S3, and DynamoDB.  (The `event_id` is used to join the event and reward data.)\n",
    "\n",
    "`event_id` is a unique identifier for each interaction. It is used to join inference data `<state, action, action_probability>` with the reward data. \n",
    "\n",
    "In a later cell of this notebook, we illustrate how the client application interacts with the bandit model endpoint and receives the recommended action (BERT model)."
   ]
  },
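  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The join on `event_id` can be sketched with pandas.  This is illustrative only; the actual join is performed by the Experiment Manager using Kinesis, S3, and Athena:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "events = pd.DataFrame([\n",
    "    {'event_id': 'e1', 'action': 1, 'action_prob': 0.9},\n",
    "    {'event_id': 'e2', 'action': 2, 'action_prob': 0.6},\n",
    "])\n",
    "rewards = pd.DataFrame([\n",
    "    {'event_id': 'e1', 'reward': 1},\n",
    "    {'event_id': 'e2', 'reward': 0},\n",
    "])\n",
    "\n",
    "# The inner join yields <state, action, action_probability, reward> training records\n",
    "joined = events.merge(rewards, on='event_id', how='inner')\n",
    "```"
   ]
  },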
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step-by-step bandits model development\n",
    "\n",
    "[**Bandit Experiment Manager**](./common/sagemaker_rl/orchestrator/workflow/manager/) is the top-level class for all of the bandits/RL and continual learning workflows. Similar to the estimators in the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk), `ExperimentManager` contains methods for training, deployment, and evaluation. It keeps track of job status and reflects the current progress in the workflow.\n",
    "\n",
    "Start the application using the `ExperimentManager` class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "timestamp = int(time.time())\n",
    "\n",
    "bandit_experiment_name = 'bandits-{}'.format(timestamp)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# `ExperimentManager` will create an AWS CloudFormation stack of additional resources needed for the Bandit experiment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:orchestrator.resource_manager:Creating a new CloudFormation stack for Shared Resources. You can always reuse this StackName in your other experiments\n",
      "INFO:orchestrator.resource_manager:[\n",
      "    {\n",
      "        \"ParameterKey\": \"IAMRoleName\",\n",
      "        \"ParameterValue\": \"BanditsIAMRole\",\n",
      "        \"UsePreviousValue\": true,\n",
      "        \"ResolvedValue\": \"string\"\n",
      "    },\n",
      "    {\n",
      "        \"ParameterKey\": \"ExperimentDbName\",\n",
      "        \"ParameterValue\": \"BanditsExperimentTable\",\n",
      "        \"UsePreviousValue\": true,\n",
      "        \"ResolvedValue\": \"string\"\n",
      "    },\n",
      "    {\n",
      "        \"ParameterKey\": \"ExperimentDbRCU\",\n",
      "        \"ParameterValue\": \"5\",\n",
      "        \"UsePreviousValue\": true,\n",
      "        \"ResolvedValue\": \"string\"\n",
      "    },\n",
      "    {\n",
      "        \"ParameterKey\": \"ExperimentDbWCU\",\n",
      "        \"ParameterValue\": \"5\",\n",
      "        \"UsePreviousValue\": true,\n",
      "        \"ResolvedValue\": \"string\"\n",
      "    },\n",
      "    {\n",
      "        \"ParameterKey\": \"ModelDbName\",\n",
      "        \"ParameterValue\": \"BanditsModelTable\",\n",
      "        \"UsePreviousValue\": true,\n",
      "        \"ResolvedValue\": \"string\"\n",
      "    },\n",
      "    {\n",
      "        \"ParameterKey\": \"ModelDbRCU\",\n",
      "        \"ParameterValue\": \"5\",\n",
      "        \"UsePreviousValue\": true,\n",
      "        \"ResolvedValue\": \"string\"\n",
      "    },\n",
      "    {\n",
      "        \"ParameterKey\": \"ModelDbWCU\",\n",
      "        \"ParameterValue\": \"5\",\n",
      "        \"UsePreviousValue\": true,\n",
      "        \"ResolvedValue\": \"string\"\n",
      "    },\n",
      "    {\n",
      "        \"ParameterKey\": \"JoinDbName\",\n",
      "        \"ParameterValue\": \"BanditsJoinTable\",\n",
      "        \"UsePreviousValue\": true,\n",
      "        \"ResolvedValue\": \"string\"\n",
      "    },\n",
      "    {\n",
      "        \"ParameterKey\": \"JoinDbRCU\",\n",
      "        \"ParameterValue\": \"5\",\n",
      "        \"UsePreviousValue\": true,\n",
      "        \"ResolvedValue\": \"string\"\n",
      "    },\n",
      "    {\n",
      "        \"ParameterKey\": \"JoinDbWCU\",\n",
      "        \"ParameterValue\": \"5\",\n",
      "        \"UsePreviousValue\": true,\n",
      "        \"ResolvedValue\": \"string\"\n",
      "    }\n",
      "]\n",
      "INFO:orchestrator.resource_manager:Creating CloudFormation Stack for shared resource!\n",
      "INFO:orchestrator.resource_manager:Waiting for stack to get to CREATE_COMPLETE state....\n"
     ]
    }
   ],
   "source": [
    "from orchestrator.workflow.manager.experiment_manager import ExperimentManager\n",
    "\n",
    "bandit_experiment_manager = ExperimentManager(config, experiment_id=bandit_experiment_name)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:orchestrator.resource_manager:Deleting firehose stream 'bandits-1611645944'...\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Ignore any errors.  Errors are OK.\n"
     ]
    }
   ],
   "source": [
    "try:\n",
    "    bandit_experiment_manager.clean_resource(experiment_id=bandit_experiment_manager.experiment_id)\n",
    "    bandit_experiment_manager.clean_table_records(experiment_id=bandit_experiment_manager.experiment_id)\n",
    "except Exception:\n",
    "    print('Ignore any errors.  Errors are OK.')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:orchestrator.resource_manager:Using Resources in CloudFormation stack named: BanditsSharedResourceStack for Shared Resources.\n",
      "WARNING:orchestrator:Experiment with name bandits-1611645944 already exists. Reusing current state from ExperimentDb.\n"
     ]
    }
   ],
   "source": [
    "bandit_experiment_manager = ExperimentManager(config, experiment_id=bandit_experiment_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Initialize the Bandit Model\n",
    "To start a new experiment, we need to initialize the first bandit model or \"policy\" in reinforcement learning terminology.  \n",
    "\n",
    "If we have historical data in the format `(state, action, action probability, reward)`, we can perform a \"warm start\" and learn the bandit model offline.  \n",
    "\n",
    "However, let's assume we are starting with no historical data and initialize a random bandit model using `initialize_first_model()`."
   ]
  },
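The `(state, action, action probability, reward)` warm-start format can be sketched as follows. This is an illustrative helper only — `to_warm_start_record` and the sample log are hypothetical, not part of the orchestrator API — but the field names mirror the joined-data format used later in this notebook.

```python
# Hypothetical sketch: shape logged interactions into the
# (state, action, action probability, reward) records a warm start would consume.
def to_warm_start_record(state, action, action_prob, reward):
    """Build one offline-learning record from a logged interaction."""
    return {
        "observation": state,        # context features
        "action": action,            # action that was shown
        "action_prob": action_prob,  # probability the logging policy chose it
        "reward": reward,            # observed feedback
    }

# A tiny made-up interaction log: (state, action, action_prob, reward) tuples.
historical_log = [
    ([54], 1, 0.6, 1),
    ([17], 2, 0.4, 0),
]
warm_start_data = [to_warm_start_record(*row) for row in historical_log]
```

With such records on hand, a "warm start" would fit the first policy offline instead of calling `initialize_first_model()`.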
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:orchestrator:Next Model name would be bandits-1611645944-model-id-1611645987\n",
      "INFO:orchestrator:Start training job for model 'bandits-1611645944-model-id-1611645987''\n",
      "INFO:orchestrator:Training job will be executed in 'SageMaker' mode\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-01-26 07:26:28 Starting - Starting the training job.."
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611645987. This exception will be ignored, and retried.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      ".\n",
      "2021-01-26 07:26:53 Starting - Launching requested ML instancesProfilerReport-1611645987: InProgress\n",
      "..."
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611645987. This exception will be ignored, and retried.\n",
      "ERROR:orchestrator:Failed to start new Model Training job for ModelId {next_model_to_train_id}\n",
      "ERROR:orchestrator:An error occurred (ThrottlingException) when calling the DescribeTrainingJob operation (reached max retries: 4): Rate exceeded\n",
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611645987. This exception will be ignored, and retried.\n",
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611645987. This exception will be ignored, and retried.\n",
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611645987. This exception will be ignored, and retried.\n",
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611645987. This exception will be ignored, and retried.\n"
     ]
    }
   ],
   "source": [
    "bandit_experiment_manager.initialize_first_model()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# ^^ Ignore `Failed to delete: /tmp/...` message above.  This is OK. ^^"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Check Experiment State:  TRAINED\n",
    "`training_state`: `TRAINED`\n",
    "\n",
    "Note the `last_trained_model_id` variable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'evaluation_workflow_metadata': {'evaluation_state': None,\n",
      "                                  'last_evaluation_job_id': None,\n",
      "                                  'next_evaluation_job_id': None},\n",
      " 'experiment_id': 'bandits-1611645944',\n",
      " 'hosting_workflow_metadata': {'hosting_endpoint': None,\n",
      "                               'hosting_state': None,\n",
      "                               'last_hosted_model_id': None,\n",
      "                               'next_model_to_host_id': None},\n",
      " 'joining_workflow_metadata': {'joining_state': None,\n",
      "                               'last_joined_job_id': None,\n",
      "                               'next_join_job_id': None},\n",
      " 'training_workflow_metadata': {'last_trained_model_id': 'bandits-1611645944-model-id-1611645987',\n",
      "                                'next_model_to_train_id': None,\n",
      "                                'training_state': 'TRAINED'}}\n"
     ]
    }
   ],
   "source": [
    "from pprint import pprint\n",
    "\n",
    "pprint(bandit_experiment_manager._jsonify())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Deploy the Bandit Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once training and evaluation is done, we can deploy the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Deploying newly-trained bandit model: bandits-1611645944-model-id-1611645987\n"
     ]
    }
   ],
   "source": [
    "print('Deploying newly-trained bandit model: {}'.format(bandit_experiment_manager.last_trained_model_id))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Deploying bandit model_id bandits-1611645944-model-id-1611645987\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:orchestrator:Model 'bandits-1611645944-model-id-1611645987' is ready to deploy.\n",
      "INFO:orchestrator:No hosting endpoint found, creating a new hosting endpoint.\n",
      "INFO:orchestrator.resource_manager:Successfully create S3 bucket 'sagemaker-us-east-1-835319576252' for storing sagemaker data\n",
      "INFO:orchestrator.resource_manager:Creating firehose delivery stream...\n",
      "INFO:orchestrator.resource_manager:Creating firehose delivery stream...\n",
      "INFO:orchestrator.resource_manager:Creating firehose delivery stream...\n",
      "INFO:orchestrator.resource_manager:Creating firehose delivery stream...\n",
      "INFO:orchestrator.resource_manager:Creating firehose delivery stream...\n",
      "INFO:orchestrator.resource_manager:Successfully created delivery stream 'bandits-1611645944'\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-----------------!"
     ]
    }
   ],
   "source": [
    "print('Deploying bandit model_id {}'.format(bandit_experiment_manager.last_trained_model_id))\n",
    "\n",
    "bandit_experiment_manager.deploy_model(model_id=bandit_experiment_manager.last_trained_model_id) \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Check Experiment State:  DEPLOYED\n",
    "`hosting_state`: `DEPLOYED`\n",
    "\n",
    "The `last_trained_model_id` and `last_hosted_model_id` are now the same as we just deployed the bandit model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'evaluation_workflow_metadata': {'evaluation_state': None,\n",
      "                                  'last_evaluation_job_id': None,\n",
      "                                  'next_evaluation_job_id': None},\n",
      " 'experiment_id': 'bandits-1611645944',\n",
      " 'hosting_workflow_metadata': {'hosting_endpoint': 'arn:aws:sagemaker:us-east-1:835319576252:endpoint/bandits-1611645944',\n",
      "                               'hosting_state': 'DEPLOYED',\n",
      "                               'last_hosted_model_id': 'bandits-1611645944-model-id-1611645987',\n",
      "                               'next_model_to_host_id': None},\n",
      " 'joining_workflow_metadata': {'joining_state': None,\n",
      "                               'last_joined_job_id': None,\n",
      "                               'next_join_job_id': None},\n",
      " 'training_workflow_metadata': {'last_trained_model_id': 'bandits-1611645944-model-id-1611645987',\n",
      "                                'next_model_to_train_id': None,\n",
      "                                'training_state': 'TRAINED'}}\n"
     ]
    }
   ],
   "source": [
    "from pprint import pprint\n",
    "\n",
    "pprint(bandit_experiment_manager._jsonify())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Initialize the Client Application"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [],
   "source": [
    "import csv\n",
    "import numpy as np\n",
    "\n",
    "class ClientApp():\n",
    "    def __init__(self, data, num_events, bandit_model, bert_model_map):\n",
    "        self.bandit_model = bandit_model\n",
    "        self.bert_model_map = bert_model_map\n",
    "        \n",
    "        self.num_actions = 2\n",
    "\n",
    "        df_reviews = pd.read_csv(data, \n",
    "                                 delimiter='\\t', \n",
    "                                 quoting=csv.QUOTE_NONE,\n",
    "                                 compression='gzip')\n",
    "        df_scrubbed = df_reviews[['review_body', 'star_rating']].sample(n=num_events) # .query('star_rating == 1')\n",
    "        df_scrubbed = df_scrubbed.reset_index()\n",
    "        df_scrubbed.shape\n",
    "        np_reviews = df_scrubbed.to_numpy()\n",
    "\n",
    "        np_reviews = np.delete(np_reviews, 0, 1)\n",
    "        \n",
    "        # Last column is the label, the rest are the features (contexts)\n",
    "        self.labels = np_reviews[:, -1]\n",
    "        self.contexts = np_reviews[:, :-1].tolist()\n",
    "\n",
    "        self.optimal_rewards = [1]\n",
    "        self.rewards_tmp_buffer = []\n",
    "        self.joined_data_tmp_buffer = []\n",
    "        self.all_joined_data_buffer = []\n",
    "        \n",
    "        self.action_count = {}\n",
    "\n",
    "    def increment_action_count(self, action):\n",
    "        try:\n",
    "            action_count = self.action_count[action]\n",
    "        except:\n",
    "            self.action_count[action] = 0\n",
    "            action_count = 0\n",
    "            \n",
    "        self.action_count[action] = action_count + 1\n",
    "                \n",
    "    def choose_random_context(self):\n",
    "        context_index = np.random.choice(len(self.contexts))\n",
    "        context = self.contexts[context_index]\n",
    "        return context_index, context    \n",
    "\n",
    "    def clear_tmp_buffers(self):\n",
    "        self.rewards_tmp_buffer.clear()\n",
    "        self.joined_data_tmp_buffer.clear()\n",
    "\n",
    "    def get_reward(self, \n",
    "                   context_index, \n",
    "                   action, \n",
    "                   event_id, \n",
    "                   bandit_model_id, \n",
    "                   action_prob, \n",
    "                   sample_prob, \n",
    "                   local_mode):\n",
    "\n",
    "        context_to_predict = self.contexts[context_index][0]\n",
    "    \n",
    "        label = self.labels[context_index]\n",
    "        \n",
    "        bert_model = self.bert_model_map[action]\n",
    "\n",
    "        self.increment_action_count(action)\n",
    "        \n",
    "        inputs = [\n",
    "            {\"review_body\": context_to_predict},\n",
    "        ]\n",
    "\n",
    "        predicted_classes_str = bert_model.predict(inputs)\n",
    "        predicted_classes = predicted_classes_str.splitlines()\n",
    "\n",
    "        for predicted_class_json, input_data in zip(predicted_classes, inputs):\n",
    "            predicted_class = json.loads(predicted_class_json)['predicted_label']\n",
    "            print('Predicted star_rating: {}, actual star_rating {}, review_body \"{}\"'.format(predicted_class, label, input_data[\"review_body\"]))\n",
    "               \n",
    "        # Calculate difference between predicted and actual label\n",
    "        if abs(int(predicted_class) - int(label)) == 0:\n",
    "            reward = 1\n",
    "        else:\n",
    "            reward = 0\n",
    "\n",
    "        if local_mode:\n",
    "            json_blob = {\n",
    "                         \"reward\": reward,\n",
    "                         \"event_id\": event_id,\n",
    "                         \"action\": action,\n",
    "                         \"action_prob\": action_prob,\n",
    "                         \"model_id\": bandit_model_id,\n",
    "                         \"observation\": [context_index],\n",
    "                         \"sample_prob\": sample_prob\n",
    "                        }\n",
    "            \n",
    "            self.joined_data_tmp_buffer.append(json_blob)            \n",
    "        else:\n",
    "            json_blob = {\n",
    "                         \"reward\": reward, \n",
    "                         \"event_id\": event_id\n",
    "                        }\n",
    "            self.rewards_tmp_buffer.append(json_blob)\n",
    "        \n",
    "        return reward\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<orchestrator.resource_manager.Predictor object at 0x7f5445a5ea90>\n"
     ]
    }
   ],
   "source": [
    "bandit_model = bandit_experiment_manager.predictor\n",
    "print(bandit_model)\n",
    "\n",
    "if not bandit_model:\n",
    "    raise Exception(\"No predictor\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [],
   "source": [
    "client_app = ClientApp(data='./data/amazon_reviews_us_Digital_Software_v1_00.tsv.gz',\n",
    "                       num_events=100,\n",
    "                       bandit_model=bandit_model,\n",
    "                       bert_model_map={\n",
    "                         1: model1_predictor,\n",
    "                         2: model2_predictor\n",
    "                       })"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Make sure that `num_arms` specified in `config.yaml` is equal to the total unique actions in the simulation application."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [],
   "source": [
    "# print('Testing {} BERT models'.format(client_app.num_actions))\n",
    "\n",
    "# assert client_app.num_actions == bandit_experiment_manager.config[\"algor\"][\"algorithms_parameters\"][\"num_arms\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "# import time\n",
    "\n",
    "# context_index, context = client_app.choose_random_context()\n",
    "# action, event_id, bandit_model_id, action_prob, sample_prob = bandit_model.get_action(obs=[context_index])\n",
    "\n",
    "# print('event ID: {}\\nbert_model_id: {}\\naction_probability: {}'.format(event_id, action, action_prob, bandit_model_id))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Generate Sample Events to Test the Bandit `ExperimentManager`\n",
    "Thsi will generated sample contexts to pass as events to the bandit using the Amazon Customer Reviews Dataset.  The bandit model will recommend an action (BERT model) based on the context and current state of the bandit.  We will assign a reward using the star ratings from Amazon Customer Reviews Dataset."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Client application generates a reward after receiving the recommended action and stores the tuple `<eventID, reward>` in S3. In this case, reward is 1 if predicted action is the true class, and 0 otherwise. SageMaker hosting endpoint saves all the inferences `<eventID, state, action, action probability>` to S3 using [**Kinesis Firehose**](https://aws.amazon.com/kinesis/data-firehose/). The `ExperimentManager` joins the reward with state, action and action probability using [**Amazon Athena**](https://aws.amazon.com/athena/). "
   ]
  },
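For intuition, the event-level join that Athena performs over the S3 data can be sketched in memory. This is illustrative only — `join_on_event_id` and the sample records below are hypothetical — but the keys match the inference and reward tuples described above.

```python
# Sketch of the reward/inference join, keyed on event_id (inner join).
# In the real workflow this join runs in Amazon Athena over S3 data.
def join_on_event_id(inferences, rewards):
    """Attach each observed reward to its inference record via event_id."""
    rewards_by_id = {r["event_id"]: r["reward"] for r in rewards}
    return [
        {**inf, "reward": rewards_by_id[inf["event_id"]]}
        for inf in inferences
        if inf["event_id"] in rewards_by_id
    ]

# Made-up sample data: two inferences, only one of which received a reward.
inferences = [
    {"event_id": "e1", "observation": [54], "action": 1, "action_prob": 0.9995},
    {"event_id": "e2", "observation": [17], "action": 2, "action_prob": 0.5},
]
rewards = [{"event_id": "e1", "reward": 1}]

joined = join_on_event_id(inferences, rewards)
```

Inferences without a matching reward (here `e2`) drop out of the joined training data, just as an inner join would discard them.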
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [],
   "source": [
    "# local_mode = bandit_experiment_manager.local_mode\n",
    "\n",
    "# num_events = 100 \n",
    "\n",
    "# print('Generating {} sample events...'.format(num_events))\n",
    "\n",
    "# for i in range(num_events):\n",
    "#     context_index, context = client_app.choose_random_context()\n",
    "#     action, event_id, bandit_model_id, action_prob, sample_prob = bandit_model.get_action(obs=[context_index])\n",
    "\n",
    "#     reward = client_app.get_reward(context_index=context_index, \n",
    "#                                    action=action, \n",
    "#                                    event_id=event_id, \n",
    "#                                    bandit_model_id=bandit_model_id, \n",
    "#                                    action_prob=action_prob, \n",
    "#                                    sample_prob=sample_prob, \n",
    "#                                    local_mode=local_mode)    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Create Bandit Model Training Data\n",
    "\n",
    "Join `Event` and `Reward` data and upload to S3 in the following format:\n",
    "\n",
    "```\n",
    "{\n",
    " 'reward': 0, # 0 if the model is wrong, +1 if the model is correct\n",
    " 'event_id': 131181492351609994318271340276526219266, # unique event id\n",
    " 'action': 1, # suggested action (bert_model_id 1 or 2)\n",
    " 'action_prob': 0.9995, # probability with which the bandit chose this action (its propensity)\n",
    " 'model_id': 'bandits-1597631299-model-id-1597631304', # unique bandit_model_id\n",
    " 'observation': [54], # feature (review_id)\n",
    " 'sample_prob': 0.43410828171830174 \n",
    "}\n",
    "```\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# if local_mode:\n",
    "#     print('Using local mode with memory buffers.')\n",
    "#     print()\n",
    "#     print(client_app.joined_data_tmp_buffer)\n",
    "#     bandit_experiment_manager.ingest_joined_data(client_app.joined_data_tmp_buffer)\n",
    "# else:\n",
    "#     print(\"Using production mode with Kinesis Firehose.  Waiting to flush to S3...\")\n",
    "#     print()\n",
    "#     time.sleep(60) # Wait for firehose to flush data to S3\n",
    "#     rewards_s3_prefix = bandit_experiment_manager.ingest_rewards(client_app.rewards_tmp_buffer)\n",
    "#     bandit_experiment_manager.join(rewards_s3_prefix)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Check Experiment Status:  JOINED\n",
    "`joining_workflow_metadata: {'joining_state': 'SUCCEEDED'}`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [],
   "source": [
    "# from pprint import pprint\n",
    "\n",
    "# pprint(bandit_experiment_manager._jsonify())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Review Bandit Model Training Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [],
   "source": [
    "# print('Bandit model training data {}'.format(bandit_experiment_manager.last_joined_job_train_data))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [],
   "source": [
    "# from sagemaker.s3 import S3Downloader\n",
    "\n",
    "# bandit_model_train_data_s3_uri = S3Downloader.list(bandit_experiment_manager.last_joined_job_train_data)[0]\n",
    "# print(bandit_model_train_data_s3_uri)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [],
   "source": [
    "# from sagemaker.s3 import S3Downloader\n",
    "\n",
    "# bandit_model_train_data = S3Downloader.read_file(bandit_model_train_data_s3_uri)\n",
    "# print(bandit_model_train_data)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Train the Bandit Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can train a new model with newly collected experiences, and host the resulting model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [],
   "source": [
    "# print('Trained bandit model id {}'.format(bandit_experiment_manager.last_trained_model_id))\n",
    "\n",
    "# bandit_experiment_manager.train_next_model(input_data_s3_prefix=bandit_experiment_manager.last_joined_job_train_data)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Ignore ^^ `Failed to delete` Error Above ^^ "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Deploy the Bandit Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [],
   "source": [
    "# print('Deploying bandit model id {}'.format(bandit_experiment_manager.last_trained_model_id))\n",
    "\n",
    "# bandit_experiment_manager.deploy_model(model_id=bandit_experiment_manager.last_trained_model_id)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Check Experiment Status:  DEPLOYED\n",
    "`deploying_state`:  `SUCCEEDED`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [],
   "source": [
    "# from pprint import pprint\n",
    "\n",
    "# pprint(bandit_experiment_manager._jsonify())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Continuously Train, Evaluate, and Deploy Bandit Models\n",
    "The above cells explained the individual steps in the training workflow. To train a model to convergence, we will continually train the model based on data collected with client application interactions. We demonstrate the continual training and evaluation loop in a single cell below.\n",
    "\n",
    "_**Train and Evaluate**_:\n",
    "After every training cycle, we evaluate if the newly trained model (`last_trained_model_id`) would perform better than the one currently deployed (`last_hosted_model_id`) using a holdout evaluation dataset.  Details of the join, train, and evaluation steps are tracked in the `BanditsJoinTable` and `BanditsModelTable` DynamoDB tables.  When you have multiple experiments, you can compare them in the `BanditsExperimentTable` DynamoDB table.\n",
    "\n",
    "_**Deploy**_: If the new bandit model is better than the current bandit model (based on offline evaluation), we will automatically deploy the new bandit model using a blue-green deployment to avoid downtime."
   ]
  },
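The loop below drives real SageMaker training, evaluation, and hosting jobs. As a rough sketch of the same control flow without AWS, here is a stub version; `StubExperimentManager` is a stand-in (the method names mirror the manager calls used in this notebook, but the bodies are fakes).

```python
# Stubbed sketch of the continual train -> evaluate -> deploy loop.
# StubExperimentManager fakes the orchestrator so the flow runs anywhere.
class StubExperimentManager:
    def __init__(self):
        self.last_trained_model_id = "model-0"
        self.last_hosted_model_id = "model-0"
        self._n = 0

    def train_next_model(self, input_data_s3_prefix):
        # Real version launches a SageMaker training job on the joined data.
        self._n += 1
        self.last_trained_model_id = "model-{}".format(self._n)

    def new_model_is_better(self):
        # Real version compares offline-evaluation scores on holdout data;
        # here we assume the new model always wins.
        return True

    def deploy_model(self, model_id):
        # Real version does a blue-green endpoint update.
        self.last_hosted_model_id = model_id

manager = StubExperimentManager()
for loop in range(3):
    joined_data = "s3://bucket/joined/loop-{}".format(loop)  # stand-in for the Athena join output
    manager.train_next_model(input_data_s3_prefix=joined_data)
    if manager.new_model_is_better():
        manager.deploy_model(model_id=manager.last_trained_model_id)
```

After three iterations the stub's hosted model tracks the newest trained model, which is exactly the invariant the real loop maintains.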
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "    ################################\n",
      "    # Incremental Training Loop 1\n",
      "    ################################\n",
      "    \n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I have had other internet security suites that seemed to work well.  This one works as well as any I have purchased or downloaded.  I even forget I have it until it warns me of a problem.  Matched with their anti-virus software makes this the best I have found to keep my laptop running without problems.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"A very good product. Meets my expectations. Why do you require so many words? I hope this is enough. Well?\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"Have used TurboTax for Federal & PA state returns for many tears.  Since most forms are for same payers and payees as previous year, most of the work is carried over from the past. And that's one of the best features - namely, the ability to to transfer data from last year's returns to the present year. Also appreciate the TurboTax ability to check for mistakes or misinterpretations, and flag things which are illogical or which could cause problems. I can't foresee anything that would tempt me to leave TurboTax for some competitor of theirs. It makes sense, and it works perfectly for me. And it remembers any carry-overs from previous years.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"It's very helpful for any high school student and easy to use.\"\n",
      "Predicted star_rating: 2, actual star_rating 2, review_body \"First of all, all the video's that you download are based on the Apple environment, not on Windows.<br />System requirements are Vista, 7, 8 and XP, that implies I would think that what I get is going to help me learn on those environment's. The video files that are downloaded are mp4 format, these files work on Apple Mac correct? so why does it say “System requirements are Vista, 7, 8 and XP”.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"It's very helpful for any high school student and easy to use.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"I don't mind paying my share of taxes. I have lived in underdeveloped, impoverished countries. Taxes are the price of a free, just and prosperous society. Oh but how I hate doing taxes. I hate filling out forms. I hate the fog form filling.<br /><br />I have used this software for several years. This year (2014) they have stupified it. Now it is more confusing and difficult to go in and just enter the data from 1099's and other forms. I can't see how to enter the HSA data. If you deviate from their entry order, you have even more problems. They just want to boot you to the end. Thus, if only some of the data has arrived in the mail, and you enter what you have, it gets more confuding later when you want to enter the other data.<br /><br />If you have kids in college, the colleges want financial aid data early. They actually ask for it in November (before you have earned it!!!!!) and again in January to March. Thus, with this software, trying to do an estimate using partial data adds confusion, as if the IRS does not make it difficult enough. Also there is no apparent way to contact the company through email - so here it is. Taxact, why do you  waste 3 pages of my paper to print a half page receipt? This just is irksome. Why no email contact - I guess you  don't want to here from your customers. So I posted my comments here\"\n",
      "Predicted star_rating: 5, actual star_rating 5, review_body \"Very easy to download. I love it.\"\n",
      "Predicted star_rating: 1, actual star_rating 5, review_body \"I had trouble because it downloaded partially so when I tried to redownload it, I would get a message that I needed to close the version I already had.  I did not have a way to close the version because the program had deleted it.  Quicken 2014 would delete itself the next day and a couple of times after that.  After doing a good bit of searching, I realized I needed to go to uninstall programs and delete the program, then start over with the new download.  The problem is corrected now and everything is great.  The problem was probably me and nowt the download.  I am very pleased now.  Thank you.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"The best security for home and small businesses I have ever used.  Easy to use and understand.  The price is very reasonable also.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"Hi. I am getting a little frustrated. Your sales people forced me to buy a bloated bundle of outsourced applications that you neither develop, nor adequately support, because (I was told)  the standalone 2013 will not run without the support package. You send out response questionnaires, and don't bother to read them until days later, if at all, or respond to them. Not until I wrote this review, finally, someone helpful called.<br /><br />I was told 24/7 support, but at 8\\\\\"00 last Saturday, your support line was punting callers.  Do you assume these epic glitches in your software will go away on their own? Do you suppose your client-base is as s*** happy as your team at Intuit? I really don't think so. That is a sure sign of big corporate mentality.<br /><br />As I said. The background synch issues a bad PW to my online services. After a few rejections, Chase locks the online service, and I have to call to unlock it.<br /><br />Hi, just checking in to see if you guys might get around to looking at this ticket. Not for nothing, we sent home our bookkeeping department, and did manual payroll, all because you unilaterally canceled my registration. I have no Intuit access to my banking. Um, this is getting pathetic. Do you guys outsource any higher bandwidth consulting techs who might be able to work through this disaster? I could call them for you, and even dial the phone for you.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I completed my taxes in record time because I understood what I needed to do to meet the requirements of the tax code.  I felt confident that I was doing my taxes properly.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"Quicken 2014 on Windows 8 is nothing but problems.  Every time you try to save a transaction it takes 20 seconds.<br />I have used Quicken for 15 years and never had a problem like this. I have re-installed the program. Used Quicken's<br />uninstall programs. Loaded the program as an Administrator. Read multiple fixes on Quicken's site.<br />Nothing works. Quicken has done nothing to fix this problem. This is poor customer service.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"Quicken 2014 on Windows 8 is nothing but problems.  Every time you try to save a transaction it takes 20 seconds.<br />I have used Quicken for 15 years and never had a problem like this. I have re-installed the program. Used Quicken's<br />uninstall programs. Loaded the program as an Administrator. Read multiple fixes on Quicken's site.<br />Nothing works. Quicken has done nothing to fix this problem. This is poor customer service.\"\n",
      "Predicted star_rating: 3, actual star_rating 4, review_body \"Quicken Deluxe 2015 converted very easily from Deluxe 2014. It runs smoothly and seems to work well while updating with the banks. The credit score is nice to have. I am satisfied with my purchase.\"\n",
      "Predicted star_rating: 5, actual star_rating 5, review_body \"Totally Love this program and the ease of the download\"\n",
      "Predicted star_rating: 5, actual star_rating 5, review_body \"Love Avast! Finds all threats & viruses, great protection! Thank you so much\"\n",
      "Predicted star_rating: 5, actual star_rating 5, review_body \"so good and its free\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I completed my taxes in record time because I understood what I needed to do to meet the requirements of the tax code.  I felt confident that I was doing my taxes properly.\"\n",
      "Predicted star_rating: 3, actual star_rating 3, review_body \"I was disappointed that the deluxe edition cost more and did not include items from previous years.  They did provide an upgrade without additional charge.\"\n",
      "Predicted star_rating: 1, actual star_rating 5, review_body \"I got the map updates at Amazon because it was cheaper then going directly with Garmin.  I had already tried to have the Garmin site scan my device but waited and waited and it never completed the scan to tell me the status.  I gave up and went to Amazon and it uploaded no problem while I slept on to my computer.  Then I transferred it to my GPS and I was up and running again.  Very Happy Customer now.\"\n",
      "Predicted star_rating: 3, actual star_rating 3, review_body \"I was disappointed that the deluxe edition cost more and did not include items from previous years.  They did provide an upgrade without additional charge.\"\n",
      "Predicted star_rating: 5, actual star_rating 4, review_body \"Works good\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"My software was all jacked up, and I did not want to bring it to Geek Squad, but this tool fixed my issues and I'm now back to work. Thanks!\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"The Amazon down-loader worked fine; the software was successfully downloaded/saved?, With the help of Angel in Digital Services,it was proven<br /> that the software would download successfully.However, the software application would not open. Please ask the provider to fix this problem?<br />Please see Angel for details?\"\n",
      "Predicted star_rating: 3, actual star_rating 4, review_body \"They keep improving the products to make them easier to understand and input information. It retrieves information from previous files and populates new forms.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"After downloading this 4 times it would not install I have a MacBook with Mac OS X version 10.7.5 which the description said would be compatible with the software. I'm so bummed! If I could give 0 stars I would!\"\n",
      "Predicted star_rating: 3, actual star_rating 4, review_body \"I DOWNLOADED THE QUICKBOOKS PROGAM WITH NO PROBLEMS AT ALL AND I AM VERY HAPPY WITH IT I PLAN ON BUYING THE PAY ROLL PROGAM TO GO WITH IT .<br />THANK YOU<br />DEWAYNE\"\n",
      "Predicted star_rating: 3, actual star_rating 4, review_body \"State is too expensive.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"Quicken 2014 on Windows 8 is nothing but problems.  Every time you try to save a transaction it takes 20 seconds.<br />I have used Quicken for 15 years and never had a problem like this. I have re-installed the program. Used Quicken's<br />uninstall programs. Loaded the program as an Administrator. Read multiple fixes on Quicken's site.<br />Nothing works. Quicken has done nothing to fix this problem. This is poor customer service.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"Why pay for antivirus software when you can get an excellent checker for FREE? And, if you love it, you can upgrade to more options and/or other software they have available. This is the way all software should be distributed/sold.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I bought this for my grandson who is 9. I think somebody that is a good fast typist will be able to do anything in today's world better. Typing is in every place in our lives now. He is using it and getting better fast. I'd recommend it to anyone that wants to learn the right way to type!\"\n",
      "Predicted star_rating: 1, actual star_rating 3, review_body \"If you have done your research, you know that painter is in a league of its own, as far as natural media emulation goes. The reason I titled this review BEWARE, is because the management team at Corel has zero respect for the users of its product. I can say without exaggerating that Corel has the worst customer service of any company I have ever dealt with. If you are lucky and the software works well, without hitches on your machine, congratulations. If like me, you encounter memory leakage (due to engineering faults of the software) and you want their help figuring out how to navigate the situation, you had better have a plan B. Many days after I filled out their cumbersome form to communicate with \\\\\"help staff\\\\\" I got a cookie cutter response that they would get in touch with me within 12 hours. It never happened. I wrote to them again, and I might as well have put a message in a bottle and thrown it into the Atlantic. The software is very good at what it does, when it works, but the firm that owns it, and takes customer money easily, does NOTHING to help with any technical difficulties.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"Hi. I am getting a little frustrated. Your sales people forced me to buy a bloated bundle of outsourced applications that you neither develop, nor adequately support, because (I was told)  the standalone 2013 will not run without the support package. You send out response questionnaires, and don't bother to read them until days later, if at all, or respond to them. Not until I wrote this review, finally, someone helpful called.<br /><br />I was told 24/7 support, but at 8\\\\\"00 last Saturday, your support line was punting callers.  Do you assume these epic glitches in your software will go away on their own? Do you suppose your client-base is as s*** happy as your team at Intuit? I really don't think so. That is a sure sign of big corporate mentality.<br /><br />As I said. The background synch issues a bad PW to my online services. After a few rejections, Chase locks the online service, and I have to call to unlock it.<br /><br />Hi, just checking in to see if you guys might get around to looking at this ticket. Not for nothing, we sent home our bookkeeping department, and did manual payroll, all because you unilaterally canceled my registration. I have no Intuit access to my banking. Um, this is getting pathetic. Do you guys outsource any higher bandwidth consulting techs who might be able to work through this disaster? I could call them for you, and even dial the phone for you.\"\n",
      "Predicted star_rating: 3, actual star_rating 4, review_body \"After a problem with MyDVD/VideoWave the coral help line had me uninstall  Autodesk 360 and all worked well.\"\n",
      "Predicted star_rating: 5, actual star_rating 5, review_body \"Works great!\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I completed my taxes in record time because I understood what I needed to do to meet the requirements of the tax code.  I felt confident that I was doing my taxes properly.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"love using turbo tax and will continue to us the product.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"Quicken 2014 on Windows 8 is nothing but problems.  Every time you try to save a transaction it takes 20 seconds.<br />I have used Quicken for 15 years and never had a problem like this. I have re-installed the program. Used Quicken's<br />uninstall programs. Loaded the program as an Administrator. Read multiple fixes on Quicken's site.<br />Nothing works. Quicken has done nothing to fix this problem. This is poor customer service.\"\n",
      "Predicted star_rating: 3, actual star_rating 3, review_body \"Update 4/27/14: Quickbooks Pro 2014 works pretty much the same as the 2011 version I upgraded from. The installation was smooth and trouble free. I restored my 2011 company files and it converted them and everything runs fine. So no surprises there. After several weeks running on Windows 7 Pro 64 bit it has not crashed once. The 2014 version introduces lots of little tweaks: some cosmetic, some functional, some annoying, but too many to mention. In fact I would have rated it 5 stars if not for some of these questionable tweaks. The most glaring tweak for me was the new display format. I am a long-time user starting with QB Pro 2000 many years ago. I have settled on a certain Quickbooks desktop arrangement that suits my work flow. It has lots of accounts, lists, and other windows opened simultaneously and arranged on the desktop. However the 2014 version now uses a more \\\\\"spacious\\\\\" display with larger fonts and line spacing. The result is much less information displayed in given window size. In other words there is a lot more green space - or wasted space if you will - in the new display format. This might be desirable on small screens. But I use a large 27\\\\\" monitor and find it annoying and inefficient. Intuit should at least make this customizable for information freaks like me who prefer the older \\\\\"high density\\\\\" display format. Another annoyance is the new improved Company -> To-Do List. Intuit added more columns to the To-Do List but no way to remove them if not needed. The result for users who neither want or need the new columns is a lot of wasted display and a much wider To-Do List window that cannot be made any narrow than about half the width of the entire monitor. The To-Do List used to be a permanent resident on my desktop; now because of its bloated size I have to call it up only when needed... which happens to be many time throughout the day. These gripes may seem trivial, but I don't see it that way. It is frustrating and annoying when companies introducechange just for the sake of change, with no added value whatsoever and no consideration for impact to existing users. Intuit should have at least provided a \\\\\"classic\\\\\" display option. Is that too much to ask? Even Microsoft got that one right.. well, prior to Windows 8 anyway.\"\n",
      "Predicted star_rating: 3, actual star_rating 3, review_body \"It works and it,s automatic. It has a lot of bells and whistles that will probably never be used. The average computer user will just set and forget it.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"My IT guy suggested this as a replacement for my expiring Trend Micro product.  Great price and seems to be working fine.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"The very best antivirus out there at the unbelievable price FREE! How can the rest compete with that?\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"Absolutely the best free anti-virus I've ever used, combined with malwarebytes your windows operating system will be bullet-proof. A must have.\"\n",
      "Predicted star_rating: 3, actual star_rating 4, review_body \"I have tried other tax software but always returned to Turbo Tax because it has the least amount of errors and for me is the most user friendly.  By user friendly I mean you can easily find forms to manually correct if needed and check for errors or make changes.  This was the first year I used Amazon's download and the process was very easy and the cost was cheaper, also they store the software, for a time, if I need to re-download (I still backup the download).\"\n",
      "Predicted star_rating: 3, actual star_rating 3, review_body \"I am an Amazon EC2 user as ell as a Prime member.  Although Amazon is not the friendliest implementation or the most helpful support, their bleeding edge approach to trying new things is awesome... I just wish they devoted more time to improving the neat tools they created so competitors would not pass them by.  Making great capabilities available to its Amazon community is also a terrific business step, but some of those products lose their compelling story as competitors bypass their capabilities.  That is certainly the case with Amazon's Cloud Drive and Cloud Drive Desktop.<br /><br />On the plus side:<br />1.  The product makes use of Amazon's incredible infrastructure to deliver massive capabilities especially as far as the volume of data stored.<br />2.  That same infrastructure provides security, reliability and scalability that may not be surpassed except by other equally large companies like Google (ie Google Drive).<br />3.  With a Prime membership, you can store unlimited photographs, and 5GB of other files.  That is more than my Google Drive which limits me to 20GB total.<br /><br />On the bad side:<br />1.  The software is buggy.  For example, until this week many users could not download and install the Desktop product without getting an error something like &#34;Something went wrong and its not your fault.  Try again in a few minutes.&#34;  Really?  I am a technical guy and I did not know what to do with this.<br />2.  The apparent capabilities are lagging far behind the competition.  For example, the Desktop will allow you to load all of your directories to Cloud Drive; however, it does not allow you to specify the destination folder.  Again, this is an example of poor architectural planning for the products being offered.  Amazon's work around was to load all of the directories and then to select them and move them to the correct folder structure.  Really?  Is that an appropriate solution?  What if you are managing a huge number of images like a photographer or a studio?<br />3.  There is little effort made on significantly improving the product once it is released.  I encourage you to Google Amazon Cloud Drive reviews and see for yourself that these same problems were reported...  years ago!<br /><br />I love Amazon's innovation.  At one time I thought it was the most innovative company on the planet -- surpassing Google.  However, it seems to be moving so fast that it forgets to keep the quality of products up-to-speed.  It is just as important to continue improving innovations to keep ahead of competition as it is to find the next cool thing.<br /><br />Anyway, I hope this review was helpful and I welcome comments.  With warmest regards, I remain<br /><br />Very truly yours,<br />JDD\"\n",
      "Predicted star_rating: 2, actual star_rating 2, review_body \"I've been a long time quicken user (since 1996) and periodically update the product to get new features.  However, this release seems to have been pushed out without adequate testing.  Many things seem to have broken with this update, as previous saved reports and auto transations have been deleted, and some of my accounts no longer sync because of 'password errors' despite the passwords being correct and unchanged since before the update.  Also, the mobile integration seems extremely slapdash.  Many errors when trying to sync to/from 'quicken cloud'.  Finally, it seems to get stuck in some weird refresh loops occasionally where it will repeatedly 'compare transactions to register' over and over and over again.<br /><br />I'd stay away from this release, at least until there are reports that the bugs have been fixed.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I have had good luck with PC Matic. Computers are doing great.<br />I have had PC Matic for 2 years now and all my computers have been Virus FREE and running at peak performance.<br />I would recommend you switching to PC Matic right now, do not wait. All American too!\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"This product downloaded very smoothly and installed on present computer without any problems.<br />I was worried about not having a disk but amazon has a very good set up with the Key # in case of a problem.<br />I am well pleased.<br /> Gil\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I bought this for my grandson who is 9. I think somebody that is a good fast typist will be able to do anything in today's world better. Typing is in every place in our lives now. He is using it and getting better fast. I'd recommend it to anyone that wants to learn the right way to type!\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"Quicken 2014 on Windows 8 is nothing but problems.  Every time you try to save a transaction it takes 20 seconds.<br />I have used Quicken for 15 years and never had a problem like this. I have re-installed the program. Used Quicken's<br />uninstall programs. Loaded the program as an Administrator. Read multiple fixes on Quicken's site.<br />Nothing works. Quicken has done nothing to fix this problem. This is poor customer service.\"\n",
      "Predicted star_rating: 3, actual star_rating 3, review_body \"I am an Amazon EC2 user as ell as a Prime member.  Although Amazon is not the friendliest implementation or the most helpful support, their bleeding edge approach to trying new things is awesome... I just wish they devoted more time to improving the neat tools they created so competitors would not pass them by.  Making great capabilities available to its Amazon community is also a terrific business step, but some of those products lose their compelling story as competitors bypass their capabilities.  That is certainly the case with Amazon's Cloud Drive and Cloud Drive Desktop.<br /><br />On the plus side:<br />1.  The product makes use of Amazon's incredible infrastructure to deliver massive capabilities especially as far as the volume of data stored.<br />2.  That same infrastructure provides security, reliability and scalability that may not be surpassed except by other equally large companies like Google (ie Google Drive).<br />3.  With a Prime membership, you can store unlimited photographs, and 5GB of other files.  That is more than my Google Drive which limits me to 20GB total.<br /><br />On the bad side:<br />1.  The software is buggy.  For example, until this week many users could not download and install the Desktop product without getting an error something like &#34;Something went wrong and its not your fault.  Try again in a few minutes.&#34;  Really?  I am a technical guy and I did not know what to do with this.<br />2.  The apparent capabilities are lagging far behind the competition.  For example, the Desktop will allow you to load all of your directories to Cloud Drive; however, it does not allow you to specify the destination folder.  Again, this is an example of poor architectural planning for the products being offered.  Amazon's work around was to load all of the directories and then to select them and move them to the correct folder structure.  Really?  Is that an appropriate solution?  What if you are managing a huge number of images like a photographer or a studio?<br />3.  There is little effort made on significantly improving the product once it is released.  I encourage you to Google Amazon Cloud Drive reviews and see for yourself that these same problems were reported...  years ago!<br /><br />I love Amazon's innovation.  At one time I thought it was the most innovative company on the planet -- surpassing Google.  However, it seems to be moving so fast that it forgets to keep the quality of products up-to-speed.  It is just as important to continue improving innovations to keep ahead of competition as it is to find the next cool thing.<br /><br />Anyway, I hope this review was helpful and I welcome comments.  With warmest regards, I remain<br /><br />Very truly yours,<br />JDD\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"Horrible product. The interface sucked. It doesn't feel user friendly. Kept deleting programs I have used for years.<br /><br />Right now my PC is being backed up and Windows being installed because even though the scan didn't show it it has malware and a Trojan in the PC. Very disappointed in the product.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"It's very helpful for any high school student and easy to use.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"Why pay for antivirus software when you can get an excellent checker for FREE? And, if you love it, you can upgrade to more options and/or other software they have available. This is the way all software should be distributed/sold.\"\n",
      "Predicted star_rating: 5, actual star_rating 5, review_body \"Love Avast! Finds all threats & viruses, great protection! Thank you so much\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I'm a new developer to Access and find that Access 2010 makes doing more complex things (forms, switchboards, etc.) much easier.  I always like how easily Microsoft made it in this version to publish Web Databases to any Sharepoint Server providers.\"\n",
      "Predicted star_rating: 2, actual star_rating 1, review_body \"Last year the TurboTax system had a glitch in it. When I added my husband's 1099-MISC, the first went through under his name but each subsequent one came up under my name. I tried repeatedly to get TT to help with the issue. They promised that waiting two weeks for the update would fix the problem. It did not. I finally just submitted the taxes with the glitch. This year the same problem occurred. I opted to call TT and was put on hold for an hour and twenty-two minutes before I got to speak with someone. After less than a two minute conversation, the representative put me on hold again (telling me to wait &#34;just one moment&#34;). After another 7 minutes on hold, the phone call was disconnected. After being a loyal customer for almost 10 years, I am completely finished with TurboTax.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"I cannot uninstall it completely and it interferes with my Firewalls Anti-Virus. I tried a re-install and uninstall. No good. I got rid of it all except something that looks like a bolt and still have the problem. I don't think contact with them is possible. I managed to see a Board of Directors with many CXX acronyms and smiley faces. This one way non-communication or in-between payment processors (Like Avangate who I cannot stand and will always avoid now) is a the path to Out Of Business. Soon. Any suggestions from Amazon who have become similar. I almost listened to a song due to the very strange reviews of a supposed &#34;Genius: on Mr. Besos site and would have had not someone said that it was not music, it was the sound of an abortion. I tried in vain to communicate this to the upper world of this place. Ps. I also had a gorrible drop ship experience and they pointed to him and his &#34;Policy&#34; is to point to them. Further, I could not even &#34;Click&#34; doe to a &#34;off line&#34; problem. Now I see &#34;Buy it again&#34; everywhere. I would have to in several cases where I never received something. I do not have the time for non-sense for others profit or anything else so&#34; important&#34; Wow.. I would like to know who and which OS first offered this Internet. It is becomine flawed, like Wikipedia, and that means Defective. I think I noticed something like what goes on at WikiP at the Encyclopedia Brittanicca. Email is full of one wayers and No-Replyers. I do not need misinformation, which I can prove in WikiP and bizarre opinions not to mention getting Double Splattered on an &#34;Internet WALL&#34; (whatever THAT IS) AND IN MY CASE, that may be tested with a small lawsuit as there are multiple ones about unfold, again, from others that started massive madness in my life with betrayals of 4 decades in so many directions it may be, as I warned, &#34;A Spectacle&#34; if not much, much more. People cannot change Life or History then think they can vanish into a vaccumme after lying to The Supreme Court. I see what is there and The Corp, only 350+ YEARS OLD (Harvard, 1650) OR AT LEAST SOME HUGE AND DANGEROUS NAMES I WILL GO AT DIRECTLY. They are killing This Planet. I will do anything if I am drawn back into that path and it is starting now. Earth is facing Extinction and I know what is really worth what and will risk my very life.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"Last year, I used the Deluxe version to file my taxes including schedule E.  When I reached that form in this year's version, I found out that I needed to spend $30 additional to upgrade to the Premier version.  Had I known, I could have purchased that version for only $6 more on Amazon. If the product is supposed help you do your taxes, it should do all the routine schedules.\"\n",
      "Predicted star_rating: 2, actual star_rating 2, review_body \"I've been a long time quicken user (since 1996) and periodically update the product to get new features.  However, this release seems to have been pushed out without adequate testing.  Many things seem to have broken with this update, as previous saved reports and auto transations have been deleted, and some of my accounts no longer sync because of 'password errors' despite the passwords being correct and unchanged since before the update.  Also, the mobile integration seems extremely slapdash.  Many errors when trying to sync to/from 'quicken cloud'.  Finally, it seems to get stuck in some weird refresh loops occasionally where it will repeatedly 'compare transactions to register' over and over and over again.<br /><br />I'd stay away from this release, at least until there are reports that the bugs have been fixed.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"Finally go system.  Hacks had quick books for a long time\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"My software was all jacked up, and I did not want to bring it to Geek Squad, but this tool fixed my issues and I'm now back to work. Thanks!\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"The enhanced payroll system is perfect for small business.  You can file and pay Federal and state taxes in seconds.  Bam!  Done!  On to other things.\"\n",
      "Predicted star_rating: 3, actual star_rating 4, review_body \"Have been using H&R Block software for the past five years for my federal tax preparation with good results; I doubt the 2014 version will be any different.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"Finally go system.  Hacks had quick books for a long time\"\n",
      "Predicted star_rating: 3, actual star_rating 3, review_body \"Confusing dialogue and progression.\"\n",
      "Predicted star_rating: 3, actual star_rating 3, review_body \"I am an Amazon EC2 user as ell as a Prime member.  Although Amazon is not the friendliest implementation or the most helpful support, their bleeding edge approach to trying new things is awesome... I just wish they devoted more time to improving the neat tools they created so competitors would not pass them by.  Making great capabilities available to its Amazon community is also a terrific business step, but some of those products lose their compelling story as competitors bypass their capabilities.  That is certainly the case with Amazon's Cloud Drive and Cloud Drive Desktop.<br /><br />On the plus side:<br />1.  The product makes use of Amazon's incredible infrastructure to deliver massive capabilities especially as far as the volume of data stored.<br />2.  That same infrastructure provides security, reliability and scalability that may not be surpassed except by other equally large companies like Google (ie Google Drive).<br />3.  With a Prime membership, you can store unlimited photographs, and 5GB of other files.  That is more than my Google Drive which limits me to 20GB total.<br /><br />On the bad side:<br />1.  The software is buggy.  For example, until this week many users could not download and install the Desktop product without getting an error something like &#34;Something went wrong and its not your fault.  Try again in a few minutes.&#34;  Really?  I am a technical guy and I did not know what to do with this.<br />2.  The apparent capabilities are lagging far behind the competition.  For example, the Desktop will allow you to load all of your directories to Cloud Drive; however, it does not allow you to specify the destination folder.  Again, this is an example of poor architectural planning for the products being offered.  Amazon's work around was to load all of the directories and then to select them and move them to the correct folder structure.  Really?  Is that an appropriate solution?  What if you are managing a huge number of images like a photographer or a studio?<br />3.  There is little effort made on significantly improving the product once it is released.  I encourage you to Google Amazon Cloud Drive reviews and see for yourself that these same problems were reported...  years ago!<br /><br />I love Amazon's innovation.  At one time I thought it was the most innovative company on the planet -- surpassing Google.  However, it seems to be moving so fast that it forgets to keep the quality of products up-to-speed.  It is just as important to continue improving innovations to keep ahead of competition as it is to find the next cool thing.<br /><br />Anyway, I hope this review was helpful and I welcome comments.  With warmest regards, I remain<br /><br />Very truly yours,<br />JDD\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"Why pay for antivirus software when you can get an excellent checker for FREE? And, if you love it, you can upgrade to more options and/or other software they have available. This is the way all software should be distributed/sold.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"I don't mind paying my share of taxes. I have lived in underdeveloped, impoverished countries. Taxes are the price of a free, just and prosperous society. Oh but how I hate doing taxes. I hate filling out forms. I hate the fog form filling.<br /><br />I have used this software for several years. This year (2014) they have stupified it. Now it is more confusing and difficult to go in and just enter the data from 1099's and other forms. I can't see how to enter the HSA data. If you deviate from their entry order, you have even more problems. They just want to boot you to the end. Thus, if only some of the data has arrived in the mail, and you enter what you have, it gets more confuding later when you want to enter the other data.<br /><br />If you have kids in college, the colleges want financial aid data early. They actually ask for it in November (before you have earned it!!!!!) and again in January to March. Thus, with this software, trying to do an estimate using partial data adds confusion, as if the IRS does not make it difficult enough. Also there is no apparent way to contact the company through email - so here it is. Taxact, why do you  waste 3 pages of my paper to print a half page receipt? This just is irksome. Why no email contact - I guess you  don't want to here from your customers. So I posted my comments here\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"I tried this program only to have issues for over a week with little service or even the means to try to fix the problem myself. For a week of waiting all I got was a copy paste email that did not even take my issue into consideration. Avoid.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"Finally go system.  Hacks had quick books for a long time\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"Worked better than any of the other programs this year.  Easy to navigate.  Self Explanatory.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"My IT guy suggested this as a replacement for my expiring Trend Micro product.  Great price and seems to be working fine.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"Product downloaded quickly however after adding all my pictures, started my project and saved it.  The software would not recognized the file afterwards so I was able to complete my project.  I started a new project and the same thing occurred again this time after completing the project.  Big waste of money and time.  I don't recommend anyone purchase this program.  I like to get my money back.\"\n",
      "Predicted star_rating: 3, actual star_rating 4, review_body \"I have tried other tax software but always returned to Turbo Tax because it has the least amount of errors and for me is the most user friendly.  By user friendly I mean you can easily find forms to manually correct if needed and check for errors or make changes.  This was the first year I used Amazon's download and the process was very easy and the cost was cheaper, also they store the software, for a time, if I need to re-download (I still backup the download).\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"The enhanced payroll system is perfect for small business.  You can file and pay Federal and state taxes in seconds.  Bam!  Done!  On to other things.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I have had other internet security suites that seemed to work well.  This one works as well as any I have purchased or downloaded.  I even forget I have it until it warns me of a problem.  Matched with their anti-virus software makes this the best I have found to keep my laptop running without problems.\"\n",
      "Predicted star_rating: 5, actual star_rating 5, review_body \"Works great!\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"Been using Avast for several years. It's better than any of the other virus protection programs. Can't believe it's still free.\"\n",
      "Predicted star_rating: 3, actual star_rating 3, review_body \"I was disappointed that the deluxe edition cost more and did not include items from previous years.  They did provide an upgrade without additional charge.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"The best security for home and small businesses I have ever used.  Easy to use and understand.  The price is very reasonable also.\"\n",
      "Predicted star_rating: 3, actual star_rating 1, review_body \"In one sentence, the software simply does not work - in practical term. I am happy to see Amazon's unlimited storage offer (comparing to others) with very competitive pricing and no limit on photo size or type of files. I have over 6TB of photos (most in raw format) and videos created over the years and have been looking for online back-up solutions. I signed up the 3-month trial and subsequently decided to stay with it. Guess what, after 4 months, I am still unable to complete my first round of uploading. Why? The software sucks! Big time. It constantly gets stuck and requires attention, either pause and resume, or restart again. It becomes a daily chore. Don't tell me this is because of my internet connection. I have other sites (like smugmug, photoshelter, flicker, you name it), all you need to do is to drop the files and walk away. It may take hours but they will get the job done for you. Another example is to upload big video files to the video sites like Vimeo or YouTube, I can upload a multi-GB video file in one shot without any problem. While on Amazon, 90% chance is that you will get a spinning wheel and you have to try it again and again. Tech support is none existence. Seriously, any smart highschool kid can create better software than this. What a shame.\"\n",
      "Predicted star_rating: 2, actual star_rating 2, review_body \"I've been a long time quicken user (since 1996) and periodically update the product to get new features.  However, this release seems to have been pushed out without adequate testing.  Many things seem to have broken with this update, as previous saved reports and auto transations have been deleted, and some of my accounts no longer sync because of 'password errors' despite the passwords being correct and unchanged since before the update.  Also, the mobile integration seems extremely slapdash.  Many errors when trying to sync to/from 'quicken cloud'.  Finally, it seems to get stuck in some weird refresh loops occasionally where it will repeatedly 'compare transactions to register' over and over and over again.<br /><br />I'd stay away from this release, at least until there are reports that the bugs have been fixed.\"\n",
      "Predicted star_rating: 5, actual star_rating 5, review_body \"Good\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"No longer allows you to deduct a home office in this edition, plus more hidden fees than ever in e-filing.  Will be looking at other options next year.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I'm a new developer to Access and find that Access 2010 makes doing more complex things (forms, switchboards, etc.) much easier.  I always like how easily Microsoft made it in this version to publish Web Databases to any Sharepoint Server providers.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"Product downloaded quickly however after adding all my pictures, started my project and saved it.  The software would not recognized the file afterwards so I was able to complete my project.  I started a new project and the same thing occurred again this time after completing the project.  Big waste of money and time.  I don't recommend anyone purchase this program.  I like to get my money back.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I've been using Avast for several years and it has caught any virus that has tried to attack my PC.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"Light and efficient. My computer works well and remains fast.\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"I don't mind paying my share of taxes. I have lived in underdeveloped, impoverished countries. Taxes are the price of a free, just and prosperous society. Oh but how I hate doing taxes. I hate filling out forms. I hate the fog form filling.<br /><br />I have used this software for several years. This year (2014) they have stupified it. Now it is more confusing and difficult to go in and just enter the data from 1099's and other forms. I can't see how to enter the HSA data. If you deviate from their entry order, you have even more problems. They just want to boot you to the end. Thus, if only some of the data has arrived in the mail, and you enter what you have, it gets more confuding later when you want to enter the other data.<br /><br />If you have kids in college, the colleges want financial aid data early. They actually ask for it in November (before you have earned it!!!!!) and again in January to March. Thus, with this software, trying to do an estimate using partial data adds confusion, as if the IRS does not make it difficult enough. Also there is no apparent way to contact the company through email - so here it is. Taxact, why do you  waste 3 pages of my paper to print a half page receipt? This just is irksome. Why no email contact - I guess you  don't want to here from your customers. So I posted my comments here\"\n",
      "Predicted star_rating: 2, actual star_rating 2, review_body \"First of all, all the video's that you download are based on the Apple environment, not on Windows.<br />System requirements are Vista, 7, 8 and XP, that implies I would think that what I get is going to help me learn on those environment's. The video files that are downloaded are mp4 format, these files work on Apple Mac correct? so why does it say “System requirements are Vista, 7, 8 and XP”.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"My software was all jacked up, and I did not want to bring it to Geek Squad, but this tool fixed my issues and I'm now back to work. Thanks!\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I have had good luck with PC Matic. Computers are doing great.<br />I have had PC Matic for 2 years now and all my computers have been Virus FREE and running at peak performance.<br />I would recommend you switching to PC Matic right now, do not wait. All American too!\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"I tried this program only to have issues for over a week with little service or even the means to try to fix the problem myself. For a week of waiting all I got was a copy paste email that did not even take my issue into consideration. Avoid.\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I have had good luck with PC Matic. Computers are doing great.<br />I have had PC Matic for 2 years now and all my computers have been Virus FREE and running at peak performance.<br />I would recommend you switching to PC Matic right now, do not wait. All American too!\"\n",
      "Predicted star_rating: 1, actual star_rating 1, review_body \"The Amazon down-loader worked fine; the software was successfully downloaded/saved?, With the help of Angel in Digital Services,it was proven<br /> that the software would download successfully.However, the software application would not open. Please ask the provider to fix this problem?<br />Please see Angel for details?\"\n",
      "Predicted star_rating: 3, actual star_rating 5, review_body \"I love Adobe Acrobat. It increases my productivity since I can save files and can then open and edit without shuffling them back and forth between programs. I recently had occasion to create a flyer for an education session and it required several drafts. I tried sending it back and forth via email in its original format but the file was too big. Acrobat to the rescue....Once I converted it to a PDF I was able to share with with several colleagues and make the changes instantly. We made posters for the same session and I was able to create tiles to glue together so that the finished article was easy to read from a few feet away.<br />I know that I may never use all the features in Acrobat XI but I consider it such great value for those that I do use.\"\n",
      "Predicted star_rating: 5, actual star_rating 5, review_body \"Works great!\"\n",
      "Predicted star_rating: 2, actual star_rating 5, review_body \"I've recently reimaged 2 computer and gotten 2 others for family members. The only things that I require on all my systems are: Norton 360, Microsoft SkyDrive (for free automatic backup/sharing/synching with other computers), and Microsoft Office (now using 365).  Norton has never let me down.\"\n",
      "Waiting for firehose to flush data to s3...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:orchestrator.resource_manager:Successfully create S3 bucket 'sagemaker-us-east-1-835319576252' for storing sagemaker data\n",
      "INFO:orchestrator:Waiting for reward data to be uploaded.\n",
      "INFO:orchestrator:Successfully upload reward files to s3 bucket path s3://sagemaker-us-east-1-835319576252/bandits-1611645944/rewards_data/bandits-1611645944-1611646854/rewards-1611646854\n",
      "WARNING:orchestrator:Start a join job to join reward data under 's3://sagemaker-us-east-1-835319576252/bandits-1611645944/rewards_data/bandits-1611645944-1611646854' with all the observation data\n",
      "INFO:orchestrator:Creating resource for joining job...\n",
      "INFO:orchestrator:Successfully create S3 bucket 'sagemaker-us-east-1-835319576252' for athena queries\n",
      "INFO:orchestrator:Started joining job...\n",
      "INFO:orchestrator:Splitting data into train/evaluation set with ratio of 0.9\n",
      "INFO:orchestrator:Joined data will be stored under s3://sagemaker-us-east-1-835319576252/bandits-1611645944/joined_data/bandits-1611645944-join-job-id-1611646854\n",
      "INFO:orchestrator:Use last trained model bandits-1611645944-model-id-1611645987 as pre-trained model for training\n",
      "INFO:orchestrator:Starting training job for ModelId 'bandits-1611645944-model-id-1611646910''\n",
      "INFO:orchestrator:Training job will be executed in 'SageMaker' mode\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-01-26 07:41:51 Starting - Starting the training job.."
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611646910. This exception will be ignored, and retried.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "."
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611646910. This exception will be ignored, and retried.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "2021-01-26 07:42:15 Starting - Launching requested ML instancesProfilerReport-1611646911: InProgress\n",
      "..."
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "ERROR:orchestrator:An error occurred (ThrottlingException) when calling the DescribeTrainingJob operation (reached max retries: 4): Rate exceeded\n",
      "WARNING:orchestrator:Sync Thread trying to update ExperimentDb with old state. This should get fixed in next run!\n",
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611646910. This exception will be ignored, and retried.\n",
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611646910. This exception will be ignored, and retried.\n",
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611646910. This exception will be ignored, and retried.\n",
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611646910. This exception will be ignored, and retried.\n",
      "WARNING:orchestrator:Failed to check SageMaker Training Job state for ModelId bandits-1611645944-model-id-1611646910. This exception will be ignored, and retried.\n",
      "INFO:orchestrator:Model 'bandits-1611645944-model-id-1611646910' is ready to deploy.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Deploying new bandit model id bandits-1611645944-model-id-1611646910 in loop 0\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:orchestrator:Sync Thread trying to update ExperimentDb with old state. This should get fixed in next run!\n"
     ]
    }
   ],
   "source": [
    "do_evaluation = False\n",
    "total_loops = 1 # Increase for higher accuracy\n",
     "retrain_batch_size = 100 # Model is retrained after every `retrain_batch_size` data instances\n",
    "rewards_list = []\n",
    "event_list = []\n",
    "\n",
    "all_joined_train_data_s3_uri_list = []\n",
    "all_joined_eval_data_s3_uri_list = []\n",
    "\n",
    "local_mode = bandit_experiment_manager.local_mode\n",
    "\n",
    "start_time = time.time()\n",
    "for loop_no in range(total_loops):\n",
    "    print(f\"\"\"\n",
    "    ################################\n",
    "    # Incremental Training Loop {loop_no+1}\n",
    "    ################################\n",
    "    \"\"\")\n",
    "    \n",
    "    # Generate experiences and log them\n",
    "    for i in range(retrain_batch_size):\n",
    "        context_index, context = client_app.choose_random_context()\n",
    "        action, event_id, bandit_model_id, action_prob, sample_prob = bandit_model.get_action(obs=[context_index])\n",
    "\n",
    "        reward = client_app.get_reward(context_index=context_index, \n",
    "                                       action=action, \n",
    "                                       event_id=event_id, \n",
    "                                       bandit_model_id=bandit_model_id, \n",
    "                                       action_prob=action_prob, \n",
    "                                       sample_prob=sample_prob, \n",
    "                                       local_mode=local_mode)\n",
    "\n",
    "        rewards_list.append(reward)\n",
    "        \n",
     "    # Publish the mean reward for this batch to CloudWatch for monitoring\n",
    "    bandit_experiment_manager.cw_logger.publish_rewards_for_simulation(\n",
    "        bandit_experiment_manager.experiment_id,\n",
    "        sum(rewards_list[-retrain_batch_size:])/retrain_batch_size\n",
    "    )\n",
    "    \n",
    "    # Join the events and rewards data to use for the next bandit-model training job\n",
     "    # Use 90% as the training dataset and 10% as the holdout evaluation dataset\n",
    "    if local_mode:        \n",
    "        bandit_experiment_manager.ingest_joined_data(client_app.joined_data_tmp_buffer,\n",
    "                                                     ratio=0.90)\n",
    "    else:\n",
    "        # Kinesis Firehose => S3 => Athena\n",
    "        print('Waiting for firehose to flush data to s3...')\n",
    "        time.sleep(60) \n",
    "        rewards_s3_prefix = bandit_experiment_manager.ingest_rewards(client_app.rewards_tmp_buffer)\n",
    "        bandit_experiment_manager.join(rewards_s3_prefix, ratio=0.90)\n",
    "            \n",
    "    # Train \n",
    "    bandit_experiment_manager.train_next_model(\n",
    "        input_data_s3_prefix=bandit_experiment_manager.last_joined_job_train_data)\n",
    "\n",
    "    all_joined_train_data_s3_uri_list.append(bandit_experiment_manager.last_joined_job_train_data)\n",
    "\n",
    "    # Evaluate and/or deploy the new bandit model\n",
    "    if do_evaluation:\n",
    "        bandit_experiment_manager.evaluate_model(\n",
    "            input_data_s3_prefix=bandit_experiment_manager.last_joined_job_eval_data,\n",
    "            evaluate_model_id=bandit_experiment_manager.last_trained_model_id)\n",
    "\n",
    "        eval_score_last_trained_model = bandit_experiment_manager.get_eval_score(\n",
    "            evaluate_model_id=bandit_experiment_manager.last_trained_model_id,\n",
    "            eval_data_path=bandit_experiment_manager.last_joined_job_eval_data)\n",
    "\n",
    "        bandit_experiment_manager.evaluate_model(\n",
    "            input_data_s3_prefix=bandit_experiment_manager.last_joined_job_eval_data,\n",
    "            evaluate_model_id=bandit_experiment_manager.last_hosted_model_id) \n",
    "\n",
    "        all_joined_eval_data_s3_uri_list.append(bandit_experiment_manager.last_joined_job_eval_data)\n",
    "    \n",
    "        # Eval score is a measure of `regret`, so a lower eval score is better\n",
    "        eval_score_last_hosted_model = bandit_experiment_manager.get_eval_score(\n",
    "            evaluate_model_id=bandit_experiment_manager.last_hosted_model_id, \n",
    "            eval_data_path=bandit_experiment_manager.last_joined_job_eval_data)\n",
    "    \n",
     "        print('New bandit model evaluation score {}'.format(eval_score_last_trained_model))\n",
     "        print('Current bandit model evaluation score {}'.format(eval_score_last_hosted_model))\n",
    "\n",
    "        if eval_score_last_trained_model <= eval_score_last_hosted_model:\n",
    "            print('Deploying new bandit model id {} in loop {}'.format(bandit_experiment_manager.last_trained_model_id, loop_no))\n",
    "            bandit_experiment_manager.deploy_model(model_id=bandit_experiment_manager.last_trained_model_id)\n",
    "        else:\n",
    "            print('Not deploying bandit model id {} in loop {}'.format(bandit_experiment_manager.last_trained_model_id, loop_no))\n",
    "    else:\n",
     "        # Deploy the new bandit model without evaluating it against the previous model\n",
    "        print('Deploying new bandit model id {} in loop {}'.format(bandit_experiment_manager.last_trained_model_id, loop_no))\n",
    "        bandit_experiment_manager.deploy_model(model_id=bandit_experiment_manager.last_trained_model_id)\n",
    "    \n",
    "    client_app.clear_tmp_buffers()\n",
    "    \n",
    "print(f'Total time taken to complete {total_loops} loops: {time.time() - start_time}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# _Ignore Any Errors ^^ Above ^^_\n",
     "\n",
     "The `ThrottlingException` and `Failed to check SageMaker Training Job state` messages above are transient API-rate warnings. The orchestrator retries these calls automatically, so they do not affect the training job."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Check Experiment Status:  EVALUATED\n",
    "\n",
    "`evaluation_state`: `EVALUATED`\n",
    "\n",
    "The same bandit_model_id will appear in both `last_trained_model_id` and `last_evaluation_job_id` fields below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from pprint import pprint\n",
    "\n",
    "pprint(bandit_experiment_manager._jsonify())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Check Experiment Status:  JOINED\n",
    "`joining_state`:  `SUCCEEDED`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pprint import pprint\n",
    "\n",
    "pprint(bandit_experiment_manager._jsonify())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Check Experiment Status:  DEPLOYED\n",
    "`deploying_state`:  `SUCCEEDED`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pprint import pprint\n",
    "\n",
    "pprint(bandit_experiment_manager._jsonify())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "from IPython.display import display, HTML\n",
     "\n",
     "display(HTML('<b>Review <a target=\"_blank\" href=\"https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}\">Bandit SageMaker REST Endpoint</a></b>'.format(region, bandit_experiment_name)))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Copy All Joined Event and Reward Data from S3 to Local Notebook"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "import csv\n",
     "\n",
     "import pandas as pd\n",
     "from sagemaker.s3 import S3Downloader\n",
     "\n",
    "all_joined_data_s3_uri_list = all_joined_train_data_s3_uri_list + all_joined_eval_data_s3_uri_list\n",
    "\n",
    "df_list = []\n",
    "\n",
    "for joined_data_s3_prefix_uri in all_joined_data_s3_uri_list:\n",
    "    joined_data_s3_uri_file_path = './'\n",
    "\n",
    "    joined_data_s3_uri = S3Downloader.list(joined_data_s3_prefix_uri)[0]    \n",
    "    S3Downloader.download(joined_data_s3_uri, joined_data_s3_uri_file_path)\n",
    "    joined_data_local_file_path = joined_data_s3_uri.split('/')[-1]\n",
    "\n",
    "    df = pd.read_csv(joined_data_local_file_path, \n",
    "                     delimiter=',', \n",
    "                     quoting=csv.QUOTE_ALL)\n",
    "    \n",
    "    df_list.append(df)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "all_joined_data_df = pd.concat(df_list, ignore_index=True)\n",
    "all_joined_data_df.tail(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Review Invocations of BERT Model 1 and 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Total Invocations of BERT Model 1:  {}'.format(client_app.action_count[1]))\n",
    "print('Total Invocations of BERT Model 2:  {}'.format(client_app.action_count[2]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datetime import datetime, timedelta\n",
    "\n",
    "import boto3\n",
    "import pandas as pd\n",
    "\n",
    "cw = boto3.Session().client(service_name='cloudwatch', region_name=region)\n",
    "\n",
    "def get_invocation_metrics_for_endpoint_variant(endpoint_name,\n",
    "                                                namespace_name,\n",
    "                                                metric_name,\n",
    "                                                variant_name,\n",
    "                                                start_time,\n",
    "                                                end_time):\n",
    "    metrics = cw.get_metric_statistics(\n",
    "        Namespace=namespace_name,\n",
    "        MetricName=metric_name,\n",
    "        StartTime=start_time,\n",
    "        EndTime=end_time,\n",
    "        Period=60,\n",
    "        Statistics=[\"Sum\"],\n",
    "        Dimensions=[\n",
    "            {\n",
    "                \"Name\": \"EndpointName\",\n",
    "                \"Value\": endpoint_name\n",
    "            },\n",
    "            {\n",
    "                \"Name\": \"VariantName\",\n",
    "                \"Value\": variant_name\n",
    "            }\n",
    "        ]\n",
    "    )\n",
    "\n",
    "    if metrics['Datapoints']:\n",
    "        return pd.DataFrame(metrics[\"Datapoints\"])\\\n",
    "                .sort_values(\"Timestamp\")\\\n",
    "                .set_index(\"Timestamp\")\\\n",
    "                .drop(\"Unit\", axis=1)\\\n",
    "                .rename(columns={\"Sum\": variant_name})\n",
    "    else:\n",
    "        return pd.DataFrame()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Gather BERT Model 1 Invocations Metrics\n",
    "_Please be patient.  This will take 1-2 minutes._"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "%config InlineBackend.figure_format='retina'\n",
    "\n",
    "time.sleep(75)\n",
    "\n",
     "# Reuse the training-loop start time if it is set; otherwise look back 60 minutes\n",
     "start_time = start_time or datetime.now() - timedelta(minutes=60)\n",
    "end_time = datetime.now()\n",
    "        \n",
    "model1_endpoint_invocations = get_invocation_metrics_for_endpoint_variant(\n",
    "                                    endpoint_name=model1_endpoint_name,\n",
    "                                    namespace_name='AWS/SageMaker',                                   \n",
    "                                    metric_name='Invocations',\n",
    "                                    variant_name='AllTraffic',\n",
    "                                    start_time=start_time, \n",
    "                                    end_time=end_time)\n",
    "\n",
    "model1_endpoint_invocations"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Gather BERT Model 2 Invocations Metrics\n",
    "_Please be patient.  This will take 1-2 minutes._"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "%config InlineBackend.figure_format='retina'\n",
    "\n",
    "time.sleep(75)\n",
    "\n",
     "# Reuse the training-loop start time if it is set; otherwise look back 60 minutes\n",
     "start_time = start_time or datetime.now() - timedelta(minutes=60)\n",
    "end_time = datetime.now()\n",
    "        \n",
    "model2_endpoint_invocations = get_invocation_metrics_for_endpoint_variant(\n",
    "                                    endpoint_name=model2_endpoint_name,\n",
    "                                    namespace_name='AWS/SageMaker',                                   \n",
    "                                    metric_name='Invocations',\n",
    "                                    variant_name='AllTraffic',\n",
    "                                    start_time=start_time, \n",
    "                                    end_time=end_time)\n",
    "\n",
    "model2_endpoint_invocations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "plt.rcParams['figure.figsize'] = 15, 10\n",
    "\n",
    "x1 = range(0, model1_endpoint_invocations.size)\n",
    "y1 = model1_endpoint_invocations['AllTraffic']\n",
    "plt.plot(x1, y1, label=\"BERT Model 1\")\n",
    "\n",
     "x2 = range(0, model2_endpoint_invocations.size)\n",
     "y2 = model2_endpoint_invocations['AllTraffic']\n",
     "plt.plot(x2, y2, label=\"BERT Model 2\")\n",
    "\n",
    "plt.legend(loc=0, prop={'size': 20})\n",
    "plt.xlabel('Time (Minutes)')\n",
    "plt.ylabel('Number of Invocations')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Check the Invocation Metrics for the BERT Models"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "from IPython.display import display, HTML\n",
     "\n",
     "display(HTML('<b>Review <a target=\"_blank\" href=\"https://console.aws.amazon.com/cloudwatch/home?region={}#metricsV2:namespace=AWS/SageMaker;dimensions=EndpointName,VariantName;search={}\">Model 1 SageMaker REST Endpoint</a></b>'.format(region, model1_endpoint_name)))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "from IPython.display import display, HTML\n",
     "\n",
     "display(HTML('<b>Review <a target=\"_blank\" href=\"https://console.aws.amazon.com/cloudwatch/home?region={}#metricsV2:namespace=AWS/SageMaker;dimensions=EndpointName,VariantName;search={}\">Model 2 SageMaker REST Endpoint</a></b>'.format(region, model2_endpoint_name)))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Visualize Bandit Action Probabilities\n",
    "This is the probability that the bandit model will choose a particular BERT model (action)."
   ]
  },
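   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As an illustration only (not necessarily the exact exploration policy used by the RL container), an epsilon-greedy policy assigns action probabilities like this; `epsilon_greedy_probs` and its inputs are hypothetical:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def epsilon_greedy_probs(estimated_rewards, epsilon=0.1):\n",
     "    # Spread epsilon uniformly over all actions, then give the\n",
     "    # remaining (1 - epsilon) mass to the current best action.\n",
     "    n_actions = len(estimated_rewards)\n",
     "    probs = np.full(n_actions, epsilon / n_actions)\n",
     "    probs[np.argmax(estimated_rewards)] += 1.0 - epsilon\n",
     "    return probs\n",
     "\n",
     "# Two actions (BERT Model 1 and 2); model 2 currently looks better\n",
     "print(epsilon_greedy_probs(np.array([0.4, 0.6])))  # [0.05 0.95]\n",
     "```"
    ]
   },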
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rcParams['figure.figsize'] = 15, 10\n",
    "\n",
    "x1 = all_joined_data_df.query('action==1').index\n",
    "y1 = all_joined_data_df.query('action==1').action_prob\n",
    "plt.scatter(x1, y1, label=\"BERT Model 1\")\n",
    "\n",
    "x2 = all_joined_data_df.query('action==2').index\n",
    "y2 = all_joined_data_df.query('action==2').action_prob\n",
    "plt.scatter(x2, y2, label=\"BERT Model 2\")\n",
    "\n",
    "plt.legend(loc=3, prop={'size': 20})\n",
    "plt.xlabel('Bandit Model Training Instances')\n",
    "plt.ylabel('Action Probability')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "print('Mean action probability for BERT Model 1: {}'.format(all_joined_data_df.query('action==1')['action_prob'].mean()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "print('Mean action probability for BERT Model 2: {}'.format(all_joined_data_df.query('action==2')['action_prob'].mean()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Visualize Bandit Sample Probabilities\n",
     "Regardless of which action currently has the highest action probability, the bandit still samples across all actions (BERT models) to keep exploring.  Below is the sample probability under which each chosen BERT model was selected."
   ]
  },
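   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A minimal sketch of how an action can be sampled in proportion to the action probabilities, recording the propensity (`sample_prob`) of the sampled action for later off-policy learning; `action_probs` here is a hypothetical example, not the model's actual output:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "rng = np.random.default_rng(0)\n",
     "\n",
     "# Hypothetical action probabilities for BERT Model 1 and BERT Model 2\n",
     "action_probs = np.array([0.25, 0.75])\n",
     "actions = np.array([1, 2])\n",
     "\n",
     "# Sample an action in proportion to its probability...\n",
     "idx = rng.choice(len(actions), p=action_probs)\n",
     "action = actions[idx]\n",
     "\n",
     "# ...and record the probability it was sampled with, which a\n",
     "# training job can use for counterfactual (off-policy) updates\n",
     "sample_prob = action_probs[idx]\n",
     "print(action, sample_prob)\n",
     "```"
    ]
   },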
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rcParams['figure.figsize'] = 15, 10\n",
    "\n",
    "x1 = all_joined_data_df.query('action==1').index\n",
    "y1 = all_joined_data_df.query('action==1').sample_prob\n",
    "plt.scatter(x1, y1, label=\"BERT Model 1\")\n",
    "\n",
    "x2 = all_joined_data_df.query('action==2').index\n",
    "y2 = all_joined_data_df.query('action==2').sample_prob\n",
    "plt.scatter(x2, y2, label=\"BERT Model 2\")\n",
    "\n",
    "plt.legend(loc=0, prop={'size': 20})\n",
    "plt.xlabel('Bandit Model Training Instances')\n",
    "plt.ylabel('Sample Probability')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "print('Mean sample probability for BERT Model 1: {}'.format(all_joined_data_df.query('action==1')['sample_prob'].mean()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Mean sample probability for BERT Model 2: {}'.format(all_joined_data_df.query('action==2')['sample_prob'].mean()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Visualize Bandit Rewards"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can visualize the bandit-model training performance by plotting the rolling mean reward across client interactions.\n",
    "\n",
    "Here rolling mean reward is calculated on the last `rolling_window` number of data instances, where each data instance corresponds to a single client interaction."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rolling_window = 100\n",
    "\n",
    "rcParams['figure.figsize'] = 15, 10\n",
    "lwd = 5\n",
    "cmap = plt.get_cmap('tab20')\n",
    "colors=plt.cm.tab20(np.linspace(0, 1, 20))\n",
    "\n",
    "rewards_df = pd.DataFrame(rewards_list, columns=['bandit']).rolling(rolling_window).mean()\n",
    "#rewards_df['perfect'] = sum(client_app.optimal_rewards) / len(client_app.optimal_rewards)\n",
    "\n",
    "rewards_df.tail(10)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rewards_df.plot(y=['bandit'],  #, 'perfect'], \n",
    "                linewidth=lwd)\n",
    "plt.legend(loc=4, prop={'size': 20})\n",
    "plt.tick_params(axis='both', which='major', labelsize=15)\n",
    "plt.yticks([0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00])\n",
    "plt.xticks([100, 200, 300, 400, 500, 600, 700, 800, 900, 1000])\n",
    "\n",
    "plt.xlabel('Training Instances (Model is Updated Every %s Instances)' % retrain_batch_size, size=20)\n",
    "plt.ylabel('Rolling {} Mean Reward'.format(rolling_window), size=30)\n",
    "plt.grid()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rewards_df['bandit'].mean()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Monitor the Bandit Model in CloudWatch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from markdown_helper import *\n",
    "from IPython.display import Markdown\n",
    "\n",
    "display(Markdown(bandit_experiment_manager.get_cloudwatch_dashboard_details()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Review the DynamoDB Tables and S3 Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from IPython.core.display import display, HTML\n",
    "\n",
    "display(HTML('<b>Review <a target=\"blank\" href=\"https://console.aws.amazon.com/dynamodb/home?region={}#tables:selected=BanditsExperimentTable;tab=items\">Bandits Experiment DynamoDB Table</a></b>'.format(region)))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.core.display import display, HTML\n",
    "\n",
    "display(HTML('<b>Review <a target=\"blank\" href=\"https://s3.console.aws.amazon.com/s3/buckets/{}/{}/?region={}&tab=overview\">Bandits Experiment S3 Data</a></b>'.format(bucket, bandit_experiment_manager.experiment_id, region)))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%store bandit_experiment_name"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Release Resources"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We have three DynamoDB tables from the bandits application above.  To better maintain them, we should remove the related records if the experiment has finished. \n",
    "\n",
    "Only execute the clean up cells below when you've finished the current experiment and want to deprecate everything associated with it. \n",
    "\n",
    "_The CloudWatch metrics will be removed during this cleanup step._"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# try:\n",
    "#     sm.delete_endpoint(\n",
    "#          EndpointName=bandit_experiment_name\n",
    "#     )\n",
    "# except:\n",
    "#     pass"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Cleaning up experiment_id {}'.format(bandit_experiment_manager.experiment_id))\n",
    "try:\n",
    "    bandit_experiment_manager.clean_resource(experiment_id=bandit_experiment_manager.experiment_id)\n",
    "    bandit_experiment_manager.clean_table_records(experiment_id=bandit_experiment_manager.experiment_id)\n",
    "except:\n",
    "    print('Ignore any errors.  Errors are OK.')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%html\n",
    "\n",
    "<p><b>Shutting down your kernel for this notebook to release resources.</b></p>\n",
    "<button class=\"sm-command-button\" data-commandlinker-command=\"kernelmenu:shutdown\" style=\"display:none;\">Shutdown Kernel</button>\n",
    "        \n",
    "<script>\n",
    "try {\n",
    "    els = document.getElementsByClassName(\"sm-command-button\");\n",
    "    els[0].click();\n",
    "}\n",
    "catch(err) {\n",
    "    // NoOp\n",
    "}    \n",
    "</script>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%javascript\n",
    "\n",
    "try {\n",
    "    Jupyter.notebook.save_checkpoint();\n",
    "    Jupyter.notebook.session.delete();\n",
    "}\n",
    "catch(err) {\n",
    "    // NoOp\n",
    "}"
   ]
  }
 ],
 "metadata": {
  "hide_input": false,
  "kernelspec": {
   "display_name": "Python 3 (Data Science)",
   "language": "python",
   "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {
    "height": "calc(100% - 180px)",
    "left": "10px",
    "top": "150px",
    "width": "550.4px"
   },
   "toc_section_display": true,
   "toc_window_display": false
  },
  "varInspector": {
   "cols": {
    "lenName": 16,
    "lenType": 16,
    "lenVar": 40
   },
   "kernels_config": {
    "python": {
     "delete_cmd_postfix": "",
     "delete_cmd_prefix": "del ",
     "library": "var_list.py",
     "varRefreshCmd": "print(var_dic_list())"
    },
    "r": {
     "delete_cmd_postfix": ") ",
     "delete_cmd_prefix": "rm(",
     "library": "var_list.r",
     "varRefreshCmd": "cat(var_dic_list()) "
    }
   },
   "types_to_exclude": [
    "module",
    "function",
    "builtin_function_or_method",
    "instance",
    "_Feature"
   ],
   "window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
