{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "f11e1b3d",
      "metadata": {
        "collapsed": true,
        "id": "f11e1b3d",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "# Copyright 2021 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "aa8753fd",
      "metadata": {
        "id": "aa8753fd"
      },
      "source": [
        "# Step by Step Guide to Building Reinforcement Learning Applications using Vertex AI"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "b36aad4b",
      "metadata": {
        "id": "b36aad4b"
      },
      "source": [
        "\u003ctable align=\"left\"\u003e\n",
        "\n",
        "  \u003ctd\u003e\n",
        "    \u003ca href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/step_by_step_sdk_tf_agents_bandits_movie_recommendation/step_by_step_sdk_tf_agents_bandits_movie_recommendation.ipynb\"\u003e\n",
        "      \u003cimg src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"\u003e Run in Colab\n",
        "    \u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/step_by_step_sdk_tf_agents_bandits_movie_recommendation/step_by_step_sdk_tf_agents_bandits_movie_recommendation.ipynb\"\u003e\n",
        "      \u003cimg src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"\u003e\n",
        "      View on GitHub\n",
        "    \u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "\u003c/table\u003e"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "a27e7ec1",
      "metadata": {
        "id": "a27e7ec1"
      },
      "source": [
        "## Overview\n",
        "This demo showcases the use of [TF-Agents](https://www.tensorflow.org/agents) and [Vertex AI](https://cloud.google.com/vertex-ai) in building a movie recommendation system with reinforcement learning. The demo is intended for developers who want to create reinforcement learning applications using TensorFlow and the TF-Agents library, leveraging Vertex AI services (including custom training, custom prediction, model deployment over managed endpoints, and prediction fetching). Developers should be familiar with basic reinforcement learning theory, particularly the contextual bandits formulation, and with the TF-Agents interface. Note that contextual bandits are a special case of RL in which the actions taken by the agent do not alter the state of the environment. “Contextual” refers to the fact that the agent chooses among a set of actions with knowledge of the context (the environment observation).\n",
        "\n",
        "### Dataset\n",
        "This demo uses the [MovieLens 100K](https://www.kaggle.com/prajitdatta/movielens-100k-dataset) dataset to simulate an environment with users and their respective preferences. It is available at `gs://cloud-samples-data/vertex-ai/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/u.data`.\n",
        "\n",
        "### Objective\n",
        "In this notebook, you will learn how to build a reinforcement learning application based on TF-Agents (particularly its bandits module), using the custom training, custom prediction, and endpoint deployment services of Vertex AI.\n",
        "For custom training, you will implement on-policy training, in which you actively interact with a simulation environment based on MovieLens to (1) obtain environment observations, (2) choose actions using the data-collecting policy given those observations, and (3) obtain environment feedback in the form of rewards corresponding to (1) and (2). These pieces of data form the training data records. This process differs from off-policy training, where you do not necessarily have training data associated with the actual actions output by the policy.\n",
        "\n",
        "This demo consists of 2 main steps:\n",
        "1. Run locally with a [TF-Agents](https://www.tensorflow.org/agents) implementation.\n",
        "2. Execute in [Vertex AI](https://cloud.google.com/vertex-ai).\n",
        "\n",
        "In addition to the training, prediction, and endpoint deployment workflow, the demo also showcases the following optimizations:\n",
        "1. Hyperparameter tuning with Vertex AI\n",
        "2. Profiling of the training process and resource usage with the TensorBoard Profiler, which can inform design decisions such as speed improvements and scaling\n",
        "\n",
        "This demo references code from [this TF-Agents example](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/agents/examples/v2/train_eval_movielens.py), [this Vertex AI SDK custom container training example](https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/unofficial/sdk/AI_Platform_(Unified)_SDK_BigQuery_Custom_Container_Training.ipynb), and [this Vertex AI SDK custom container prediction example](https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/unofficial/sdk/AI_Platform_(Unified)_SDK_Custom_Container_Prediction.ipynb).\n",
        "\n",
        "### Costs\n",
        "\n",
        "This tutorial uses billable components of Google Cloud:\n",
        "\n",
        "* Vertex AI\n",
        "* Cloud Build\n",
        "* Cloud Storage\n",
        "* Container Registry\n",
        "\n",
        "Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing), [Cloud Build pricing](https://cloud.google.com/build/pricing), [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and [Container Registry pricing](https://cloud.google.com/container-registry/pricing),\n",
        "and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage."
      ]
    },
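    {
      "cell_type": "markdown",
      "id": "d4b1e0f7",
      "metadata": {
        "id": "d4b1e0f7"
      },
      "source": [
        "To make the contextual bandits loop concrete, the following cell sketches steps (1)–(3) above in plain NumPy with a hypothetical linear reward model. The names and the reward function here are illustrative only; they are not part of the MovieLens environment or TF-Agents."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "e5c2a1b8",
      "metadata": {
        "id": "e5c2a1b8"
      },
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "\n",
        "rng = np.random.default_rng(0)\n",
        "num_actions, context_dim = 4, 8\n",
        "# Hypothetical ground-truth weights, one vector per action (illustration only).\n",
        "true_theta = rng.normal(size=(num_actions, context_dim))\n",
        "\n",
        "total_reward = 0.0\n",
        "for _ in range(100):\n",
        "  context = rng.normal(size=context_dim)  # (1) environment observation\n",
        "  action = int(rng.integers(num_actions))  # (2) data-collecting policy (random here)\n",
        "  reward = float(true_theta[action] @ context)  # (3) feedback for the chosen action only\n",
        "  total_reward += reward\n",
        "  # The action does not change the environment state: the next context is\n",
        "  # drawn independently, which is what makes this a (contextual) bandit.\n",
        "print(total_reward / 100)"
      ]
    },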
    {
      "cell_type": "markdown",
      "id": "8f78d1d3",
      "metadata": {
        "id": "8f78d1d3"
      },
      "source": [
        "### Set up your local development environment\n",
        "\n",
        "**If you are using Colab or Google Cloud Notebooks**, your environment already meets\n",
        "all the requirements to run this notebook. You can skip this step."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "17c51ff4",
      "metadata": {
        "id": "17c51ff4"
      },
      "source": [
        "**Otherwise**, make sure your environment meets this notebook's requirements.\n",
        "You need the following:\n",
        "\n",
        "* The Google Cloud SDK\n",
        "* Git\n",
        "* Python 3\n",
        "* virtualenv\n",
        "* Jupyter notebook running in a virtual environment with Python 3\n",
        "\n",
        "The Google Cloud guide to [Setting up a Python development\n",
        "environment](https://cloud.google.com/python/setup) and the [Jupyter\n",
        "installation guide](https://jupyter.org/install) provide detailed instructions\n",
        "for meeting these requirements. The following steps provide a condensed set of\n",
        "instructions:\n",
        "\n",
        "1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)\n",
        "\n",
        "1. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)\n",
        "\n",
        "1. [Install\n",
        "   virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv)\n",
        "   and create a virtual environment that uses Python 3. Activate the virtual environment.\n",
        "\n",
        "1. To install Jupyter, run `pip3 install jupyter` on the\n",
        "command-line in a terminal shell.\n",
        "\n",
        "1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.\n",
        "\n",
        "1. Open this notebook in the Jupyter Notebook Dashboard."
      ]
    },
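    {
      "cell_type": "markdown",
      "id": "f6a3b2c9",
      "metadata": {
        "id": "f6a3b2c9"
      },
      "source": [
        "As a rough sketch, the condensed steps above correspond to shell commands like the following (the environment name `venv` is illustrative):\n",
        "\n",
        "```bash\n",
        "# Install virtualenv, then create and activate a Python 3 environment.\n",
        "pip3 install virtualenv\n",
        "virtualenv -p python3 venv\n",
        "source venv/bin/activate\n",
        "# Install and launch Jupyter inside the virtual environment.\n",
        "pip3 install jupyter\n",
        "jupyter notebook\n",
        "```"
      ]
    },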
    {
      "cell_type": "markdown",
      "id": "525895b8",
      "metadata": {
        "id": "525895b8"
      },
      "source": [
        "### Install additional packages\n",
        "\n",
        "Install additional package dependencies not installed in your notebook environment, such as the Vertex AI SDK and TF-Agents. Use the latest major GA version of each package."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "e36c4ded",
      "metadata": {
        "id": "e36c4ded"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "# The Google Cloud Notebook product has specific requirements\n",
        "IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n",
        "\n",
        "# Google Cloud Notebook requires dependencies to be installed with '--user'\n",
        "USER_FLAG = \"\"\n",
        "if IS_GOOGLE_CLOUD_NOTEBOOK:\n",
        "  USER_FLAG = \"--user\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "ede5245e",
      "metadata": {
        "id": "ede5245e"
      },
      "outputs": [],
      "source": [
        "! pip3 install {USER_FLAG} google-cloud-aiplatform==1.0.1\n",
        "! pip3 install {USER_FLAG} google-cloud-storage==1.39.0\n",
        "! pip3 install {USER_FLAG} numpy==1.20.3\n",
        "! pip3 install {USER_FLAG} tf-agents==0.8.0\n",
        "! pip3 install {USER_FLAG} --upgrade tensorflow\n",
        "! pip3 install {USER_FLAG} --upgrade tensorboard-plugin-profile"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "ed24f0a0",
      "metadata": {
        "id": "ed24f0a0"
      },
      "source": [
        "### Restart the kernel\n",
        "\n",
        "After you install the additional packages, you need to restart the notebook kernel so it can find the packages."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "3ef50bba",
      "metadata": {
        "collapsed": true,
        "id": "3ef50bba",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "# Automatically restart kernel after installs\n",
        "import os\n",
        "\n",
        "if not os.getenv(\"IS_TESTING\"):\n",
        "  # Automatically restart kernel after installs\n",
        "  import IPython\n",
        "\n",
        "  app = IPython.Application.instance()\n",
        "  app.kernel.do_shutdown(True)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "a1c2d29f",
      "metadata": {
        "id": "a1c2d29f"
      },
      "source": [
        "## Before you begin\n",
        "\n",
        "### Select a GPU runtime\n",
        "\n",
        "**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select \"Runtime \u003e Change runtime type \u003e GPU\"**"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "8c8c8359",
      "metadata": {
        "id": "8c8c8359"
      },
      "source": [
        "### Set up your Google Cloud project\n",
        "\n",
        "**The following steps are required, regardless of your notebook environment.**\n",
        "\n",
        "1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.\n",
        "\n",
        "1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n",
        "\n",
        "1. [Enable the Vertex AI API, Cloud Build API, Cloud Storage API, and Container Registry API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,cloudbuild.googleapis.com,storage.googleapis.com,containerregistry.googleapis.com).\n",
        "\n",
        "1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).\n",
        "\n",
        "1. Enter your project ID in the cell below. Then run the cell to make sure the\n",
        "Cloud SDK uses the right project for all the commands in this notebook.\n",
        "\n",
        "**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "df1b5637",
      "metadata": {
        "id": "df1b5637"
      },
      "source": [
        "#### Set your project ID\n",
        "\n",
        "**If you don't know your project ID**, you may be able to get it using `gcloud`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "503e374c",
      "metadata": {
        "collapsed": true,
        "id": "503e374c",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "# Get your Google Cloud project ID from gcloud\n",
        "if not os.getenv(\"IS_TESTING\"):\n",
        "  shell_output=!gcloud config list --format 'value(core.project)' 2\u003e/dev/null\n",
        "  PROJECT_ID = shell_output[0]\n",
        "  print(\"Project ID: \", PROJECT_ID)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "40209c4d",
      "metadata": {
        "id": "40209c4d"
      },
      "source": [
        "Otherwise, set your project ID here."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "2f665b78",
      "metadata": {
        "collapsed": true,
        "id": "2f665b78",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "if \"PROJECT_ID\" not in globals() or PROJECT_ID == \"\" or PROJECT_ID is None:\n",
        "  PROJECT_ID = \"[your-project-id]\"  # @param {type:\"string\"}"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6d3d2d6b",
      "metadata": {
        "id": "6d3d2d6b"
      },
      "source": [
        "#### Timestamp\n",
        "\n",
        "If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, create a timestamp for each session and append it to the name of each resource you create in this tutorial."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "9954b4b2",
      "metadata": {
        "collapsed": true,
        "id": "9954b4b2",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "from datetime import datetime\n",
        "\n",
        "TIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "ba11364f",
      "metadata": {
        "id": "ba11364f"
      },
      "source": [
        "### Authenticate your Google Cloud account\n",
        "\n",
        "**If you are using Google Cloud Notebooks**, your environment is already\n",
        "authenticated. Skip this step."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "5620ef36",
      "metadata": {
        "id": "5620ef36"
      },
      "source": [
        "**If you are using Colab**, run the cell below and follow the instructions\n",
        "when prompted to authenticate your account via OAuth.\n",
        "\n",
        "**Otherwise**, follow these steps:\n",
        "\n",
        "1. In the Cloud Console, go to the [**Create service account key**\n",
        "   page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).\n",
        "\n",
        "2. Click **Create service account**.\n",
        "\n",
        "3. In the **Service account name** field, enter a name, and\n",
        "   click **Create**.\n",
        "\n",
        "4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type \"Vertex AI\"\n",
        "into the filter box, and select\n",
        "   **Vertex AI Administrator**. Type \"Storage Object Admin\" into the filter box, and select **Storage Object Admin**.\n",
        "\n",
        "5. Click **Create**. A JSON file that contains your key downloads to your\n",
        "local environment.\n",
        "\n",
        "6. Enter the path to your service account key as the\n",
        "`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "de998f81",
      "metadata": {
        "collapsed": true,
        "id": "de998f81",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "import os\n",
        "import sys\n",
        "\n",
        "# If you are running this notebook in Colab, run this cell and follow the\n",
        "# instructions to authenticate your GCP account. This provides access to your\n",
        "# Cloud Storage bucket and lets you submit training jobs and prediction\n",
        "# requests.\n",
        "\n",
        "# The Google Cloud Notebook product has specific requirements\n",
        "IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n",
        "\n",
        "# If on Google Cloud Notebooks, then don't execute this code\n",
        "if not IS_GOOGLE_CLOUD_NOTEBOOK:\n",
        "  if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth as google_auth\n",
        "\n",
        "    google_auth.authenticate_user()\n",
        "\n",
        "  # If you are running this notebook locally, replace the string below with the\n",
        "  # path to your service account key and run this cell to authenticate your GCP\n",
        "  # account.\n",
        "  elif not os.getenv(\"IS_TESTING\"):\n",
        "    %env GOOGLE_APPLICATION_CREDENTIALS ''"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "1275cce1",
      "metadata": {
        "id": "1275cce1"
      },
      "source": [
        "### Create a Cloud Storage bucket\n",
        "\n",
        "**The following steps are required, regardless of your notebook environment.**\n",
        "\n",
        "In this tutorial, a Cloud Storage bucket holds the MovieLens dataset files to be used for model\n",
        "training. Vertex AI also saves the trained model that results from your training job in the same\n",
        "bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in\n",
        "order to serve online predictions.\n",
        "\n",
        "Set the name of your Cloud Storage bucket below. It must be unique across all\n",
        "Cloud Storage buckets.\n",
        "\n",
        "You may also change the `REGION` variable, which is used for operations\n",
        "throughout the rest of this notebook. Make sure to [choose a region where Vertex AI services are\n",
        "available](https://cloud.google.com/vertex-ai/docs/general/locations#available_regions). You may\n",
        "not use a Multi-Regional Storage bucket for training with Vertex AI."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "a686c328",
      "metadata": {
        "collapsed": true,
        "id": "a686c328",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "BUCKET_NAME = \"gs://[your-bucket-name]\"  # @param {type:\"string\"} The bucket must be in the same region used for Vertex AI and must not be multi-regional for custom training jobs to work.\n",
        "REGION = \"[your-region]\"  # @param {type:\"string\"}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "4acc461c",
      "metadata": {
        "collapsed": true,
        "id": "4acc461c",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "if BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n",
        "  BUCKET_NAME = \"gs://\" + PROJECT_ID + \"-aip-\" + TIMESTAMP"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "443d5920",
      "metadata": {
        "id": "443d5920"
      },
      "source": [
        "**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "2f75ab50",
      "metadata": {
        "collapsed": true,
        "id": "2f75ab50",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "! gsutil mb -l $REGION $BUCKET_NAME"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "1d0bfd8d",
      "metadata": {
        "id": "1d0bfd8d"
      },
      "source": [
        "Finally, validate access to your Cloud Storage bucket by examining its contents:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "39a302fb",
      "metadata": {
        "collapsed": true,
        "id": "39a302fb",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "! gsutil ls -al $BUCKET_NAME"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "d5eee5de",
      "metadata": {
        "id": "d5eee5de"
      },
      "source": [
        "### Import libraries and define constants"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "fcf2a9c4",
      "metadata": {
        "id": "fcf2a9c4"
      },
      "outputs": [],
      "source": [
        "from collections import defaultdict\n",
        "import functools\n",
        "import json\n",
        "import matplotlib.pyplot as plt\n",
        "import numpy as np\n",
        "import os\n",
        "import time\n",
        "from typing import Callable, Dict, List, Optional, TypeVar\n",
        "\n",
        "from google.cloud import storage\n",
        "\n",
        "import tensorflow as tf\n",
        "from tf_agents.agents import TFAgent\n",
        "from tf_agents.bandits.agents import lin_ucb_agent\n",
        "from tf_agents.bandits.agents.examples.v2 import trainer\n",
        "from tf_agents.bandits.environments import environment_utilities\n",
        "from tf_agents.bandits.environments import movielens_py_environment\n",
        "from tf_agents.bandits.metrics import tf_metrics as tf_bandit_metrics\n",
        "from tf_agents.drivers import dynamic_step_driver\n",
        "from tf_agents.environments import TFEnvironment\n",
        "from tf_agents.environments import tf_py_environment\n",
        "from tf_agents.eval import metric_utils\n",
        "from tf_agents.metrics import tf_metrics\n",
        "from tf_agents.metrics.tf_metric import TFStepMetric\n",
        "from tf_agents.policies import policy_saver\n",
        "\n",
        "if tf.__version__[0] != \"2\":\n",
        "  raise Exception(\"The trainer only runs with TensorFlow version 2.\")\n",
        "\n",
        "T = TypeVar(\"T\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "1b5689db",
      "metadata": {
        "id": "1b5689db"
      },
      "outputs": [],
      "source": [
        "ROOT_DIR = f\"{BUCKET_NAME}/artifacts\"  # @param {type:\"string\"} Root directory for writing logs/summaries/checkpoints.\n",
        "ARTIFACTS_DIR = f\"{BUCKET_NAME}/artifacts\"  # @param {type:\"string\"} Where the trained model will be saved and restored.\n",
        "PROFILER_DIR = f\"{BUCKET_NAME}/profiler\"  # @param {type:\"string\"} Directory for TensorBoard Profiler artifacts.\n",
        "DATA_PATH = \"gs://cloud-samples-data/vertex-ai/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/u.data\"  # Location of the MovieLens 100K dataset's \"u.data\" file.\n",
        "RAW_BUCKET_NAME = BUCKET_NAME[5:]  # Remove the prefix `gs://`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "a6cdf060",
      "metadata": {
        "id": "a6cdf060"
      },
      "outputs": [],
      "source": [
        "# Set hyperparameters.\n",
        "BATCH_SIZE = 8  # @param {type:\"integer\"} Training and prediction batch size.\n",
        "TRAINING_LOOPS = 5  # @param {type:\"integer\"} Number of training iterations.\n",
        "STEPS_PER_LOOP = 2  # @param {type:\"integer\"} Number of driver steps per training iteration.\n",
        "\n",
        "# Set MovieLens simulation environment parameters.\n",
        "RANK_K = 20  # @param {type:\"integer\"} Rank for matrix factorization in the MovieLens environment; also the observation dimension.\n",
        "NUM_ACTIONS = 20  # @param {type:\"integer\"} Number of actions (movie items) to choose from.\n",
        "PER_ARM = False  # Use the non-per-arm version of the MovieLens environment.\n",
        "\n",
        "# Set agent parameters.\n",
        "TIKHONOV_WEIGHT = 0.001  # @param {type:\"number\"} LinUCB Tikhonov regularization weight.\n",
        "AGENT_ALPHA = 10.0  # @param {type:\"number\"} LinUCB exploration parameter that multiplies the confidence intervals."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "a172d4ef",
      "metadata": {
        "id": "a172d4ef"
      },
      "source": [
        "## Implement and execute locally (optional)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "c4554cda",
      "metadata": {
        "id": "c4554cda"
      },
      "source": [
        "### Define RL modules [locally]\n",
        "\n",
        "Define a [MovieLens-specific bandits environment](https://www.tensorflow.org/agents/api_docs/python/tf_agents/bandits/environments/movielens_py_environment/MovieLensPyEnvironment), a [Linear UCB agent](https://www.tensorflow.org/agents/api_docs/python/tf_agents/bandits/agents/lin_ucb_agent) and the [regret metric](https://www.tensorflow.org/agents/api_docs/python/tf_agents/bandits/metrics/tf_metrics/RegretMetric)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "797db45e",
      "metadata": {
        "collapsed": true,
        "id": "797db45e",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "# Define RL environment.\n",
        "env = movielens_py_environment.MovieLensPyEnvironment(\n",
        "    DATA_PATH, RANK_K, BATCH_SIZE, num_movies=NUM_ACTIONS, csv_delimiter=\"\\t\")\n",
        "environment = tf_py_environment.TFPyEnvironment(env)\n",
        "\n",
        "# Define RL agent/algorithm.\n",
        "agent = lin_ucb_agent.LinearUCBAgent(\n",
        "    time_step_spec=environment.time_step_spec(),\n",
        "    action_spec=environment.action_spec(),\n",
        "    tikhonov_weight=TIKHONOV_WEIGHT,\n",
        "    alpha=AGENT_ALPHA,\n",
        "    dtype=tf.float32,\n",
        "    accepts_per_arm_features=PER_ARM)\n",
        "print(\"TimeStep Spec (for each batch):\\n\", agent.time_step_spec, \"\\n\")\n",
        "print(\"Action Spec (for each batch):\\n\", agent.action_spec, \"\\n\")\n",
        "print(\"Reward Spec (for each batch):\\n\", environment.reward_spec(), \"\\n\")\n",
        "\n",
        "# Define RL metric.\n",
        "optimal_reward_fn = functools.partial(\n",
        "    environment_utilities.compute_optimal_reward_with_movielens_environment,\n",
        "    environment=environment)\n",
        "regret_metric = tf_bandit_metrics.RegretMetric(optimal_reward_fn)\n",
        "metrics = [regret_metric]"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "5d59e7c7",
      "metadata": {
        "id": "5d59e7c7"
      },
      "source": [
        "### Train the model [locally]\n",
        "\n",
        "Define the training logic (on-policy training). The following function is the same as [trainer.train](https://github.com/tensorflow/agents/blob/r0.8.0/tf_agents/bandits/agents/examples/v2/trainer.py#L104), but it keeps track of intermediate metric values and saves different artifacts to different locations. Alternatively, you can invoke [trainer.train](https://github.com/tensorflow/agents/blob/r0.8.0/tf_agents/bandits/agents/examples/v2/trainer.py#L104) directly to train the policy."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "3ba2b6e7",
      "metadata": {
        "collapsed": true,
        "id": "3ba2b6e7",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "def train(\n",
        "    root_dir: str,\n",
        "    agent: TFAgent,\n",
        "    environment: TFEnvironment,\n",
        "    training_loops: int,\n",
        "    steps_per_loop: int,\n",
        "    additional_metrics: Optional[List[TFStepMetric]] = None,\n",
        "    training_data_spec_transformation_fn: Optional[Callable[[T], T]] = None,\n",
        ") -\u003e Dict[str, List[float]]:\n",
        "  \"\"\"Performs `training_loops` iterations of training on the agent's policy.\n",
        "\n",
        "  Uses the `environment` as the problem formulation and source of immediate\n",
        "  feedback, together with the agent's algorithm, to perform `training_loops`\n",
        "  iterations of on-policy training on the policy.\n",
        "\n",
        "  Args:\n",
        "    root_dir: Path to the directory where training artifacts are written.\n",
        "    agent: An instance of `TFAgent`.\n",
        "    environment: An instance of `TFEnvironment`.\n",
        "    training_loops: An integer indicating how many training loops should be run.\n",
        "    steps_per_loop: An integer indicating how many driver steps should be\n",
        "      executed and presented to the trainer during each training loop.\n",
        "    additional_metrics: Optional; list of metric objects to log, in addition to\n",
        "      default metrics `NumberOfEpisodes`, `AverageReturnMetric`, and\n",
        "      `AverageEpisodeLengthMetric`.\n",
        "    training_data_spec_transformation_fn: Optional; function that transforms\n",
        "      the data items before they get to the replay buffer.\n",
        "\n",
        "  Returns:\n",
        "    A dict mapping metric names (e.g. \"AverageReturnMetric\") to a list of\n",
        "    intermediate metric values over `training_loops` iterations of training.\n",
        "  \"\"\"\n",
        "  if training_data_spec_transformation_fn is None:\n",
        "    data_spec = agent.policy.trajectory_spec\n",
        "  else:\n",
        "    data_spec = training_data_spec_transformation_fn(\n",
        "        agent.policy.trajectory_spec)\n",
        "  replay_buffer = trainer.get_replay_buffer(data_spec, environment.batch_size,\n",
        "                                            steps_per_loop)\n",
        "\n",
        "  # `step_metric` records the number of individual rounds of bandit interaction;\n",
        "  # that is, (number of trajectories) * batch_size.\n",
        "  step_metric = tf_metrics.EnvironmentSteps()\n",
        "  metrics = [\n",
        "      tf_metrics.NumberOfEpisodes(),\n",
        "      tf_metrics.AverageEpisodeLengthMetric(batch_size=environment.batch_size)\n",
        "  ]\n",
        "  if additional_metrics:\n",
        "    metrics += additional_metrics\n",
        "\n",
        "  if isinstance(environment.reward_spec(), dict):\n",
        "    metrics += [tf_metrics.AverageReturnMultiMetric(\n",
        "        reward_spec=environment.reward_spec(),\n",
        "        batch_size=environment.batch_size)]\n",
        "  else:\n",
        "    metrics += [\n",
        "        tf_metrics.AverageReturnMetric(batch_size=environment.batch_size)]\n",
        "\n",
        "  # Store intermediate metric results, indexed by metric names.\n",
        "  metric_results = defaultdict(list)\n",
        "\n",
        "  if training_data_spec_transformation_fn is not None:\n",
        "    add_batch_fn = lambda data: replay_buffer.add_batch(\n",
        "        training_data_spec_transformation_fn(data))\n",
        "  else:\n",
        "    add_batch_fn = replay_buffer.add_batch\n",
        "\n",
        "  observers = [add_batch_fn, step_metric] + metrics\n",
        "\n",
        "  driver = dynamic_step_driver.DynamicStepDriver(\n",
        "      env=environment,\n",
        "      policy=agent.collect_policy,\n",
        "      num_steps=steps_per_loop * environment.batch_size,\n",
        "      observers=observers)\n",
        "\n",
        "  training_loop = trainer.get_training_loop_fn(\n",
        "      driver, replay_buffer, agent, steps_per_loop)\n",
        "  saver = policy_saver.PolicySaver(agent.policy)\n",
        "\n",
        "  for _ in range(training_loops):\n",
        "    training_loop()\n",
        "    metric_utils.log_metrics(metrics)\n",
        "    for metric in metrics:\n",
        "      metric.tf_summaries(train_step=step_metric.result())\n",
        "      metric_results[type(metric).__name__].append(metric.result().numpy())\n",
        "  saver.save(root_dir)\n",
        "  return metric_results"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "35fb761a",
      "metadata": {
        "id": "35fb761a"
      },
      "source": [
        "Train the RL policy and gather intermediate metric results. At the same time, use [TensorBoard Profiler](https://www.tensorflow.org/guide/profiler) to profile the training process and resources."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "416255c2",
      "metadata": {
        "collapsed": true,
        "id": "416255c2",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "tf.profiler.experimental.start(PROFILER_DIR)\n",
        "\n",
        "metric_results = train(\n",
        "    root_dir=ROOT_DIR,\n",
        "    agent=agent,\n",
        "    environment=environment,\n",
        "    training_loops=TRAINING_LOOPS,\n",
        "    steps_per_loop=STEPS_PER_LOOP,\n",
        "    additional_metrics=metrics)\n",
        "\n",
        "tf.profiler.experimental.stop()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "9b8cd364",
      "metadata": {
        "id": "9b8cd364"
      },
      "source": [
        "### Evaluate RL metrics [locally]\n",
        "\n",
        "You can visualize how the regret and average return metrics evolve over training steps."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "02da0069",
      "metadata": {
        "collapsed": true,
        "id": "02da0069",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "def plot(metric_results, metric_name):\n",
        "  plt.plot(metric_results[metric_name])\n",
        "  plt.ylabel(metric_name)\n",
        "  plt.xlabel(\"Step\")\n",
        "  plt.title(\"{} versus Step\".format(metric_name))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "37c36cb2",
      "metadata": {
        "collapsed": true,
        "id": "37c36cb2",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "plot(metric_results, \"RegretMetric\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "c58d38a2",
      "metadata": {
        "collapsed": true,
        "id": "c58d38a2",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "plot(metric_results, \"AverageReturnMetric\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "df762eee",
      "metadata": {
        "id": "df762eee"
      },
      "source": [
        "### Profile training [optional]\n",
        "\n",
        "Load the [TensorBoard Profiler](https://www.tensorflow.org/guide/profiler) artifacts for the training process and resources, and visualize information such as per-device operation statistics and operation traces [1]. This information can help you identify training-performance bottlenecks and point to potential improvements in speed and scalability."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "ab99a957",
      "metadata": {
        "id": "ab99a957"
      },
      "outputs": [],
      "source": [
        "# If on Google Cloud Notebooks, then don't execute this code.\n",
        "if not IS_GOOGLE_CLOUD_NOTEBOOK:\n",
        "  if \"google.colab\" in sys.modules:\n",
        "\n",
        "    # Load the TensorBoard notebook extension.\n",
        "    %load_ext tensorboard"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "6c51d88c",
      "metadata": {
        "id": "6c51d88c"
      },
      "outputs": [],
      "source": [
        "# If on Google Cloud Notebooks, then don't execute this code.\n",
        "if not IS_GOOGLE_CLOUD_NOTEBOOK:\n",
        "  if \"google.colab\" in sys.modules:\n",
        "\n",
        "    %tensorboard --logdir $PROFILER_DIR"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "7abd938a",
      "metadata": {
        "id": "7abd938a"
      },
      "source": [
        "[1] For Google Cloud Notebooks, you can do the following:\n",
        "\n",
        "1. Open [Cloud Shell](https://cloud.google.com/shell) from the GCP Console.\n",
        "2. Install dependencies: `pip3 install tensorflow==2.5.0 tensorboard-plugin-profile==2.5.0`.\n",
        "3. Run the following command: `tensorboard --logdir \u003cPROFILER_DIR\u003e`. You will see output like \"TensorBoard 2.5.0 at http://localhost:\u003cPORT\u003e/ (Press CTRL+C to quit)\". Take note of the port number.\n",
        "4. Click the [Web Preview](https://cloud.google.com/shell/docs/using-web-preview) button to view the TensorBoard dashboard and profiling results. Configure Web Preview to use the same port number you noted in step 3."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "18f04365",
      "metadata": {
        "id": "18f04365"
      },
      "source": [
        "## Execute in Vertex AI\n",
        "\n",
        "This section consists of the following steps:\n",
        "1.   Run unit tests on `policy_util` and `task` modules\n",
        "2.   Create hyperparameter tuning and training custom container\n",
        "3.   Submit hyperparameter tuning job [optional]\n",
        "4.   Create custom prediction container\n",
        "5.   Submit custom container training job\n",
        "6.   Deploy trained model to Endpoint\n",
        "7.   Predict on the Endpoint"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "db60f0d0",
      "metadata": {
        "id": "db60f0d0"
      },
      "source": [
        "### Run unit tests on `policy_util` and `task` modules\n",
        "\n",
        "Run unit tests on the modules in `src/training/`.\n",
        "\n",
        "Locate the tests in `src/tests/`, and fill in the configurations that are marked with \"FILL IN\" in the test files."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "262fc060",
      "metadata": {
        "id": "262fc060"
      },
      "outputs": [],
      "source": [
        "! python3 -m unittest src/tests/test_policy_util.py"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "b60196e2",
      "metadata": {
        "id": "b60196e2"
      },
      "outputs": [],
      "source": [
        "! python3 -m unittest src/tests/test_task.py"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "3726c506",
      "metadata": {
        "id": "3726c506"
      },
      "source": [
        "### Create hyperparameter tuning and training custom container\n",
        "\n",
        "Create a custom container that can be used for both hyperparameter tuning and training. The associated source code, in `src/training/`, serves as the container's entry-point script.\n",
        "As before, the training function is based on [trainer.train](https://github.com/tensorflow/agents/blob/r0.8.0/tf_agents/bandits/agents/examples/v2/trainer.py#L104), but it additionally keeps track of intermediate metric values, supports hyperparameter tuning, and (for training) saves artifacts to different locations. Hyperparameter tuning and training share the same training logic.\n",
        "\n",
        "#### Execute hyperparameter tuning:\n",
        "- The code does not save model artifacts. It receives hyperparameter values as command-line arguments from the Vertex AI hyperparameter tuning service, and reports the resulting training metric back to Vertex AI at each trial using [cloudml-hypertune](https://github.com/GoogleCloudPlatform/cloudml-hypertune).\n",
        "- If you do decide to save model artifacts, saving them all to the same directory can cause overwrites when the hyperparameter tuning job runs parallel trials. The recommended approach is to save each trial's artifacts to its own sub-directory; this also lets you recover the artifacts from every trial and can potentially save you from re-training.\n",
        "- Read more about hyperparameter tuning for custom containers [here](https://cloud.google.com/vertex-ai/docs/training/containers-overview#hyperparameter_tuning_with_custom_containers); read about hyperparameter tuning support [here](https://cloud.google.com/vertex-ai/docs/training/hyperparameter-tuning-overview).\n",
        "\n",
        "#### Execute training:\n",
        "- The code saves model artifacts to `os.environ[\"AIP_MODEL_DIR\"]` in addition to `ARTIFACTS_DIR`, as required [here](https://github.com/googleapis/python-aiplatform/blob/v0.8.0/google/cloud/aiplatform/training_jobs.py#L2202).\n",
        "- If you want to make changes to the function, make sure to still save the trained policy as a SavedModel to clean directories, and avoid saving checkpoints and other artifacts, so that deploying the model to endpoints works."
      ]
    },
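    {
      "cell_type": "markdown",
      "id": "1a2b3c4d",
      "metadata": {
        "id": "1a2b3c4d"
      },
      "source": [
        "The metric-reporting step described above can be sketched as follows. This is a minimal illustration of the [cloudml-hypertune](https://github.com/GoogleCloudPlatform/cloudml-hypertune) reporting call, not the actual contents of `src/training/task.py`; `final_average_return` is a hypothetical stand-in for whatever value your training loop computes, and the cell only runs where the `cloudml-hypertune` package is installed."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "2b3c4d5e",
      "metadata": {
        "id": "2b3c4d5e"
      },
      "outputs": [],
      "source": [
        "# Sketch only: how each trial reports its result metric to Vertex AI.\n",
        "import hypertune\n",
        "\n",
        "# Hypothetical stand-in for the value computed by the training loop.\n",
        "final_average_return = metric_results[\"AverageReturnMetric\"][-1]\n",
        "\n",
        "hpt = hypertune.HyperTune()\n",
        "hpt.report_hyperparameter_tuning_metric(\n",
        "    hyperparameter_metric_tag=\"final_average_return\",  # Must match the study spec's `metric_id`.\n",
        "    metric_value=final_average_return,\n",
        "    global_step=TRAINING_LOOPS)"
      ]
    },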
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "bc407245",
      "metadata": {
        "id": "bc407245"
      },
      "outputs": [],
      "source": [
        "HPTUNING_TRAINING_CONTAINER = \"hptuning-training-custom-container\"  # @param {type:\"string\"} Name of the container image."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "0cb2b0eb",
      "metadata": {
        "id": "0cb2b0eb"
      },
      "source": [
        "#### Create a Cloud Build YAML file\n",
        "\n",
        "Use [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build the hyperparameter-tuning/training container. You can apply caching and specify the build machine type. Alternatively, you can also use Docker build."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "29bd8084",
      "metadata": {
        "id": "29bd8084"
      },
      "outputs": [],
      "source": [
        "cloudbuild_yaml = \"\"\"steps:\n",
        "- name: 'gcr.io/kaniko-project/executor:latest'\n",
        "  args: ['--destination=gcr.io/{PROJECT_ID}/{HPTUNING_TRAINING_CONTAINER}:latest',\n",
        "         '--cache=true',\n",
        "         '--cache-ttl=99h']\n",
        "options:\n",
        "  machineType: 'E2_HIGHCPU_8'\"\"\".format(\n",
        "    PROJECT_ID=PROJECT_ID,\n",
        "    HPTUNING_TRAINING_CONTAINER=HPTUNING_TRAINING_CONTAINER,\n",
        ")\n",
        "\n",
        "with open(\"cloudbuild.yaml\", \"w\") as fp:\n",
        "  fp.write(cloudbuild_yaml)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "eb8520d3",
      "metadata": {
        "id": "eb8520d3"
      },
      "source": [
        "#### Write a Dockerfile\n",
        "\n",
        "- Use the [cloudml-hypertune](https://github.com/GoogleCloudPlatform/cloudml-hypertune) Python package to report training metrics to Vertex AI for hyperparameter tuning.\n",
        "- Use the Google [Cloud Storage client library](https://cloud.google.com/storage/docs/reference/libraries) to read the best hyperparameters learned from a previous hyperparameter tuning job during training."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "7105edd4",
      "metadata": {
        "id": "7105edd4"
      },
      "outputs": [],
      "source": [
        "%%writefile Dockerfile\n",
        "\n",
        "# Specifies base image and tag.\n",
        "FROM gcr.io/google-appengine/python\n",
        "WORKDIR /root\n",
        "\n",
        "# Installs additional packages.\n",
        "RUN pip3 install cloudml-hypertune==0.1.0.dev6\n",
        "RUN pip3 install google-cloud-storage==1.39.0\n",
        "RUN pip3 install tensorflow==2.5.0\n",
        "RUN pip3 install tensorboard-plugin-profile==2.5.0\n",
        "RUN pip3 install tf-agents==0.8.0\n",
        "RUN pip3 install matplotlib==3.4.2\n",
        "\n",
        "# Copies training code to the Docker image.\n",
        "COPY src/training /root/src/training\n",
        "\n",
        "# Sets up the entry point to invoke the task.\n",
        "ENTRYPOINT [\"python3\", \"-m\", \"src.training.task\"]"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "e8dcc653",
      "metadata": {
        "id": "e8dcc653"
      },
      "source": [
        "#### Build the custom container with Cloud Build"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "313967e3",
      "metadata": {
        "id": "313967e3",
        "scrolled": true
      },
      "outputs": [],
      "source": [
        "! gcloud builds submit --config cloudbuild.yaml"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "2bb1b30a",
      "metadata": {
        "id": "2bb1b30a"
      },
      "source": [
        "### Submit hyperparameter tuning job [optional]\n",
        "\n",
        "- Submit a hyperparameter tuning job with the custom container. For an example of using a Python package instead of a custom container, see [here](https://cloud.google.com/vertex-ai/docs/training/using-hyperparameter-tuning#create).\n",
        "- Define the hyperparameter(s), max trial count, parallel trial count, parameter search algorithm, machine spec, accelerators, worker pool, etc."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "9fb3158b",
      "metadata": {
        "id": "9fb3158b"
      },
      "outputs": [],
      "source": [
        "RUN_HYPERPARAMETER_TUNING = True  # Execute hyperparameter tuning instead of regular training.\n",
        "TRAIN_WITH_BEST_HYPERPARAMETERS = False  # Do not train.\n",
        "\n",
        "HPTUNING_RESULT_DIR = \"hptuning/\"  # @param {type: \"string\"} Directory to store the best hyperparameter(s) in `BUCKET_NAME` and locally (temporarily).\n",
        "HPTUNING_RESULT_PATH = os.path.join(HPTUNING_RESULT_DIR, \"result.json\")  # @param {type: \"string\"} Path to the file containing the best hyperparameter(s)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "f1222fe3",
      "metadata": {
        "id": "f1222fe3"
      },
      "outputs": [],
      "source": [
        "from google.cloud import aiplatform\n",
        "from google.cloud.aiplatform_v1.types import HyperparameterTuningJob\n",
        "\n",
        "aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "6dbfdf0f",
      "metadata": {
        "id": "6dbfdf0f"
      },
      "outputs": [],
      "source": [
        "def create_hyperparameter_tuning_job_sample(\n",
        "    project: str,\n",
        "    display_name: str,\n",
        "    image_uri: str,\n",
        "    args: List[str],\n",
        "    location: str = \"us-central1\",\n",
        "    api_endpoint: str = \"us-central1-aiplatform.googleapis.com\") -\u003e str:\n",
        "  \"\"\"Creates a hyperparameter tuning job using a custom container.\n",
        "\n",
        "  Args:\n",
        "    project: GCP project ID.\n",
        "    display_name: GCP console display name for the hyperparameter tuning job in\n",
        "      Vertex AI.\n",
        "    image_uri: URI to the hyperparameter tuning container image in Container\n",
        "      Registry.\n",
        "    args: Arguments passed to the container.\n",
        "    location: Service location.\n",
        "    api_endpoint: API endpoint, eg. `\u003clocation\u003e-aiplatform.googleapis.com`.\n",
        "\n",
        "  Returns:\n",
        "    A string of the hyperparameter tuning job ID.\n",
        "  \"\"\"\n",
        "  # The AI Platform services require regional API endpoints.\n",
        "  client_options = {\"api_endpoint\": api_endpoint}\n",
        "  # Initialize client that will be used to create and send requests.\n",
        "  # This client only needs to be created once, and can be reused for multiple requests.\n",
        "  client = aiplatform.gapic.JobServiceClient(client_options=client_options)\n",
        "\n",
        "  # study_spec\n",
        "  # Metric used to decide which combination of hyperparameter(s) is best.\n",
        "  metric = {\n",
        "      \"metric_id\": \"final_average_return\",  # Metric you report to Vertex AI.\n",
        "      \"goal\": aiplatform.gapic.StudySpec.MetricSpec.GoalType.MAXIMIZE,\n",
        "  }\n",
        "\n",
        "  # Hyperparameter(s) to tune\n",
        "  training_loops = {\n",
        "      \"parameter_id\": \"training-loops\",\n",
        "      \"discrete_value_spec\": {\"values\": [4, 16]},\n",
        "      \"scale_type\": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,\n",
        "  }\n",
        "  steps_per_loop = {\n",
        "      \"parameter_id\": \"steps-per-loop\",\n",
        "      \"discrete_value_spec\": {\"values\": [1, 2]},\n",
        "      \"scale_type\": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,\n",
        "  }\n",
        "\n",
        "  # trial_job_spec\n",
        "  machine_spec = {\n",
        "      \"machine_type\": \"n1-standard-4\",\n",
        "      \"accelerator_type\": aiplatform.gapic.AcceleratorType.ACCELERATOR_TYPE_UNSPECIFIED,\n",
        "      \"accelerator_count\": None,\n",
        "  }\n",
        "  worker_pool_spec = {\n",
        "      \"machine_spec\": machine_spec,\n",
        "      \"replica_count\": 1,\n",
        "      \"container_spec\": {\n",
        "          \"image_uri\": image_uri,\n",
        "          \"args\": args,\n",
        "      },\n",
        "  }\n",
        "\n",
        "  # hyperparameter_tuning_job\n",
        "  hyperparameter_tuning_job = {\n",
        "      \"display_name\": display_name,\n",
        "      \"max_trial_count\": 4,\n",
        "      \"parallel_trial_count\": 2,\n",
        "      \"study_spec\": {\n",
        "          \"metrics\": [metric],\n",
        "          \"parameters\": [training_loops, steps_per_loop],\n",
        "          \"algorithm\": aiplatform.gapic.StudySpec.Algorithm.RANDOM_SEARCH,\n",
        "      },\n",
        "      \"trial_job_spec\": {\"worker_pool_specs\": [worker_pool_spec]},\n",
        "  }\n",
        "  parent = f\"projects/{project}/locations/{location}\"\n",
        "\n",
        "  # Create job\n",
        "  response = client.create_hyperparameter_tuning_job(\n",
        "      parent=parent,\n",
        "      hyperparameter_tuning_job=hyperparameter_tuning_job)\n",
        "  job_id = response.name.split(\"/\")[-1]\n",
        "  print(\"Job ID:\", job_id)\n",
        "  print(\"Job config:\", response)\n",
        "\n",
        "  return job_id"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "aa831e5e",
      "metadata": {
        "id": "aa831e5e"
      },
      "outputs": [],
      "source": [
        "args = [\n",
        "    f\"--data-path={DATA_PATH}\",\n",
        "    f\"--batch-size={BATCH_SIZE}\",\n",
        "    f\"--rank-k={RANK_K}\",\n",
        "    f\"--num-actions={NUM_ACTIONS}\",\n",
        "    f\"--tikhonov-weight={TIKHONOV_WEIGHT}\",\n",
        "    f\"--agent-alpha={AGENT_ALPHA}\",\n",
        "]\n",
        "if RUN_HYPERPARAMETER_TUNING:\n",
        "  args.append(\"--run-hyperparameter-tuning\")\n",
        "elif TRAIN_WITH_BEST_HYPERPARAMETERS:\n",
        "  args.append(\"--train-with-best-hyperparameters\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "763c9af3",
      "metadata": {
        "id": "763c9af3"
      },
      "outputs": [],
      "source": [
        "job_id = create_hyperparameter_tuning_job_sample(\n",
        "    project=PROJECT_ID,\n",
        "    display_name=\"movielens-hyperparameter-tuning-job\",\n",
        "    image_uri=f\"gcr.io/{PROJECT_ID}/{HPTUNING_TRAINING_CONTAINER}:latest\",\n",
        "    args=args,\n",
        "    location=REGION,\n",
        "    api_endpoint=f\"{REGION}-aiplatform.googleapis.com\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "bafc0e05",
      "metadata": {
        "id": "bafc0e05"
      },
      "source": [
        "#### Check hyperparameter tuning job status\n",
        "\n",
        "- Read more about managing jobs [here](https://cloud.google.com/vertex-ai/docs/training/using-hyperparameter-tuning#manage)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "c959a9f3",
      "metadata": {
        "id": "c959a9f3"
      },
      "outputs": [],
      "source": [
        "def get_hyperparameter_tuning_job_sample(\n",
        "    project: str,\n",
        "    hyperparameter_tuning_job_id: str,\n",
        "    location: str = \"us-central1\",\n",
        "    api_endpoint: str = \"us-central1-aiplatform.googleapis.com\",\n",
        ") -\u003e HyperparameterTuningJob:\n",
        "  \"\"\"Gets the current status of a hyperparameter tuning job.\n",
        "\n",
        "  Args:\n",
        "    project: GCP project ID.\n",
        "    hyperparameter_tuning_job_id: Hyperparameter tuning job ID.\n",
        "    location: Service location.\n",
        "    api_endpoint: API endpoint, eg. `\u003clocation\u003e-aiplatform.googleapis.com`.\n",
        "\n",
        "  Returns:\n",
        "    Details of the hyperparameter tuning job, such as its running status,\n",
        "    results of its trials, etc.\n",
        "  \"\"\"\n",
        "  # The AI Platform services require regional API endpoints.\n",
        "  client_options = {\"api_endpoint\": api_endpoint}\n",
        "  # Initialize client that will be used to create and send requests.\n",
        "  # This client only needs to be created once, and can be reused for multiple requests.\n",
        "  client = aiplatform.gapic.JobServiceClient(client_options=client_options)\n",
        "  name = client.hyperparameter_tuning_job_path(\n",
        "      project=project,\n",
        "      location=location,\n",
        "      hyperparameter_tuning_job=hyperparameter_tuning_job_id)\n",
        "  response = client.get_hyperparameter_tuning_job(name=name)\n",
        "  return response"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "7742abe2",
      "metadata": {
        "id": "7742abe2",
        "scrolled": true
      },
      "outputs": [],
      "source": [
        "trials = None\n",
        "while True:\n",
        "  response = get_hyperparameter_tuning_job_sample(\n",
        "      project=PROJECT_ID,\n",
        "      hyperparameter_tuning_job_id=job_id,\n",
        "      location=REGION,\n",
        "      api_endpoint=f\"{REGION}-aiplatform.googleapis.com\")\n",
        "  if response.state.name == 'JOB_STATE_SUCCEEDED':\n",
        "    print(\"Job succeeded.\\nJob Time:\", response.update_time - response.create_time)\n",
        "    trials = response.trials\n",
        "    print(\"Trials:\", trials)\n",
        "    break\n",
        "  elif response.state.name == \"JOB_STATE_FAILED\":\n",
        "    print(\"Job failed.\")\n",
        "    break\n",
        "  elif response.state.name == \"JOB_STATE_CANCELLED\":\n",
        "    print(\"Job cancelled.\")\n",
        "    break\n",
        "  else:\n",
        "    print(f\"Current job status: {response.state.name}.\")\n",
        "  time.sleep(60)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "72e0a69d",
      "metadata": {
        "id": "72e0a69d"
      },
      "source": [
        "#### Find the best combination(s) of hyperparameter(s) for each metric"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "389e131c",
      "metadata": {
        "id": "389e131c"
      },
      "outputs": [],
      "source": [
        "if trials:\n",
        "  # Dict mapping from metric names to the best metric values seen so far\n",
        "  best_objective_values = dict.fromkeys(\n",
        "      [metric.metric_id for metric in trials[0].final_measurement.metrics],\n",
        "      -np.inf)\n",
        "  # Dict mapping from metric names to a list of the best combination(s) of\n",
        "  # hyperparameter(s). Each combination is a dict mapping from hyperparameter\n",
        "  # names to their values.\n",
        "  best_params = defaultdict(list)\n",
        "  for trial in trials:\n",
        "    # `final_measurement` and `parameters` are `RepeatedComposite` objects.\n",
        "    # Reference the structure above to extract the value of your interest.\n",
        "    params = dict(\n",
        "        [(param.parameter_id, param.value) for param in trial.parameters])\n",
        "    for metric in trial.final_measurement.metrics:\n",
        "      if metric.value \u003e best_objective_values[metric.metric_id]:\n",
        "        best_objective_values[metric.metric_id] = metric.value\n",
        "        best_params[metric.metric_id] = [params]\n",
        "      elif metric.value == best_objective_values[metric.metric_id]:\n",
        "        # Handle cases where multiple hyperparameter combinations lead to\n",
        "        # the same performance.\n",
        "        best_params[metric.metric_id].append(params)\n",
        "  print(\"Best hyperparameter value(s):\")\n",
        "  for metric_name, params_list in best_params.items():\n",
        "    print(f\"Metric={metric_name}: {params_list}\")\n",
        "else:\n",
        "  print(\"No hyperparameter tuning job trials found.\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "a0497a5c",
      "metadata": {
        "id": "a0497a5c"
      },
      "source": [
        "#### Convert a combination of best hyperparameter(s) for a metric of interest to JSON"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "ebced166",
      "metadata": {
        "id": "ebced166"
      },
      "outputs": [],
      "source": [
        "! mkdir -p $HPTUNING_RESULT_DIR\n",
        "\n",
        "with open(HPTUNING_RESULT_PATH, \"w\") as f:\n",
        "  json.dump(best_params[\"final_average_return\"][0], f)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "4a38871a",
      "metadata": {
        "id": "4a38871a"
      },
      "source": [
        "#### Upload the best hyperparameter(s) to GCS for use in training"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "0a674285",
      "metadata": {
        "id": "0a674285"
      },
      "outputs": [],
      "source": [
        "storage_client = storage.Client(project=PROJECT_ID)\n",
        "bucket = storage_client.bucket(RAW_BUCKET_NAME)\n",
        "blob = bucket.blob(HPTUNING_RESULT_PATH)\n",
        "blob.upload_from_filename(HPTUNING_RESULT_PATH)"
      ]
    },
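    {
      "cell_type": "markdown",
      "id": "3c4d5e6f",
      "metadata": {
        "id": "3c4d5e6f"
      },
      "source": [
        "For reference, the training container can later read these values back with the same Cloud Storage client library. The following is a sketch of that download step, mirroring the upload above; the actual logic lives in `src/training/`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "4d5e6f70",
      "metadata": {
        "id": "4d5e6f70"
      },
      "outputs": [],
      "source": [
        "# Sketch only: download and parse the best hyperparameter(s) from GCS.\n",
        "best_blob = storage_client.bucket(RAW_BUCKET_NAME).blob(HPTUNING_RESULT_PATH)\n",
        "best_hyperparameters = json.loads(best_blob.download_as_text())\n",
        "print(\"Best hyperparameter(s):\", best_hyperparameters)"
      ]
    },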
    {
      "cell_type": "markdown",
      "id": "2b402962",
      "metadata": {
        "id": "2b402962"
      },
      "source": [
        "### Create custom prediction container\n",
        "\n",
        "As with training, create a custom prediction container. This container handles the TF-Agents-specific logic that differs from serving a regular TensorFlow model: namely, it computes the predicted action using a trained policy. The associated source code is in `src/prediction/`.\n",
        "See other options for Vertex AI predictions [here](https://cloud.google.com/vertex-ai/docs/predictions/getting-predictions).\n",
        "\n",
        "#### Serve predictions:\n",
        "- Use [`tensorflow.saved_model.load`](https://www.tensorflow.org/agents/api_docs/python/tf_agents/policies/PolicySaver#usage), instead of [`tf_agents.policies.policy_loader.load`](https://github.com/tensorflow/agents/blob/r0.8.0/tf_agents/policies/policy_loader.py#L26), to load the trained policy, because the latter produces an object of type [`SavedModelPyTFEagerPolicy`](https://github.com/tensorflow/agents/blob/402b8aa81ca1b578ec1f687725d4ccb4115386d2/tf_agents/policies/py_tf_eager_policy.py#L137) whose `action()` is not compatible for use here.\n",
        "- Note that prediction requests contain only observation data, not rewards. A prediction is a standalone request that requires no prior knowledge of the system state, and end users only know what they observe at the moment; the reward only becomes known after an action has been taken. To handle a prediction request, you create a [`TimeStep`](https://www.tensorflow.org/agents/api_docs/python/tf_agents/trajectories/TimeStep) object (consisting of `observation`, `reward`, `discount`, and `step_type`) using the [`restart()`](https://www.tensorflow.org/agents/api_docs/python/tf_agents/trajectories/restart) function, which takes in an `observation`. This function creates the *first* `TimeStep` in a trajectory of steps, where reward is 0, discount is 1, and the step type is marked as the first timestep. In other words, each prediction request forms the first `TimeStep` of a brand new trajectory.\n",
        "- For the prediction response, avoid using NumPy-typed values; instead, convert them to native Python values using methods such as [`tolist()`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.tolist.html) as opposed to `list()`.\n",
        "- There is a prestart script in `src/prediction` that FastAPI executes before starting the server. The `PORT` environment variable is set to `AIP_HTTP_PORT` so that FastAPI runs on the port expected by Vertex AI."
      ]
    },
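    {
      "cell_type": "markdown",
      "id": "a7f3c2e1",
      "metadata": {
        "id": "a7f3c2e1"
      },
      "source": [
        "The NumPy-to-native conversion mentioned above can be sketched as follows; `action` and `values` are hypothetical stand-ins for values a policy might produce:\n",
        "\n",
        "```python\n",
        "import json\n",
        "import numpy as np\n",
        "\n",
        "action = np.int64(3)      # e.g., a predicted movie-item index\n",
        "values = np.ones((2, 3))  # e.g., auxiliary prediction values\n",
        "\n",
        "# json.dumps() rejects NumPy scalars and arrays; convert with item()/tolist().\n",
        "# Note that list(values) would still contain NumPy rows, hence tolist().\n",
        "payload = {\"predicted_action\": action.item(), \"values\": values.tolist()}\n",
        "response_body = json.dumps(payload)\n",
        "```"
      ]
    },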
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "1723a927",
      "metadata": {
        "id": "1723a927"
      },
      "outputs": [],
      "source": [
        "PREDICTION_CONTAINER = \"prediction-custom-container\"  # @param {type:\"string\"} Name of the container image."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "e3dc8994",
      "metadata": {
        "id": "e3dc8994"
      },
      "source": [
        "#### Create a Cloud Build YAML file\n",
        "\n",
        "Use [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build the custom prediction container."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "15ca50b2",
      "metadata": {
        "id": "15ca50b2"
      },
      "outputs": [],
      "source": [
        "cloudbuild_yaml = \"\"\"steps:\n",
        "- name: 'gcr.io/kaniko-project/executor:latest'\n",
        "  args: ['--destination=gcr.io/{PROJECT_ID}/{PREDICTION_CONTAINER}:latest',\n",
        "         '--cache=true',\n",
        "         '--cache-ttl=99h']\n",
        "  env: ['AIP_STORAGE_URI={ARTIFACTS_DIR}']\n",
        "options:\n",
        "  machineType: 'E2_HIGHCPU_8'\"\"\".format(\n",
        "    PROJECT_ID=PROJECT_ID,\n",
        "    PREDICTION_CONTAINER=PREDICTION_CONTAINER,\n",
        "    ARTIFACTS_DIR=ARTIFACTS_DIR\n",
        ")\n",
        "\n",
        "with open(\"cloudbuild.yaml\", \"w\") as fp:\n",
        "  fp.write(cloudbuild_yaml)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "65e32312",
      "metadata": {
        "id": "65e32312"
      },
      "source": [
        "#### Define dependencies\n",
        "\n",
        "- Note that the dependencies should be compatible with one another (e.g., tensorflow==2.5.0 requires numpy\u003c=1.19.2)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "4945bc35",
      "metadata": {
        "id": "4945bc35"
      },
      "outputs": [],
      "source": [
        "%%writefile requirements.txt\n",
        "\n",
        "numpy~=1.19.2\n",
        "six~=1.15.0\n",
        "typing-extensions~=3.7.4\n",
        "tf-agents==0.8.0\n",
        "tensorflow==2.5.0"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "201c8b24",
      "metadata": {
        "id": "201c8b24"
      },
      "source": [
        "#### Write a Dockerfile\n",
        "\n",
        "Note: keep the server code in the `app` directory; the FastAPI base image serves the application from `/app`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "3f8b0df0",
      "metadata": {
        "id": "3f8b0df0"
      },
      "outputs": [],
      "source": [
        "%%writefile Dockerfile\n",
        "\n",
        "FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7\n",
        "\n",
        "COPY src/prediction /app\n",
        "COPY requirements.txt /app/requirements.txt\n",
        "\n",
        "RUN pip3 install -r /app/requirements.txt"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "9dd4cffc",
      "metadata": {
        "id": "9dd4cffc"
      },
      "source": [
        "#### Build the prediction container with Cloud Build"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "9d4417c4",
      "metadata": {
        "id": "9d4417c4",
        "scrolled": true
      },
      "outputs": [],
      "source": [
        "! gcloud builds submit --config cloudbuild.yaml"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6895e6d5",
      "metadata": {
        "id": "6895e6d5"
      },
      "source": [
        "### Submit custom container training job\n",
        "\n",
        "- Note again that the bucket must be in the same region as the Vertex AI service location; it must not be multi-regional.\n",
        "- Read the source code of `CustomContainerTrainingJob` [here](https://github.com/googleapis/python-aiplatform/blob/v0.8.0/google/cloud/aiplatform/training_jobs.py#L2153).\n",
        "- As with local execution, you can use TensorBoard Profiler to track the training process and resources, and visualize the corresponding artifacts with the command: `%tensorboard --logdir $PROFILER_DIR`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "9f05d08a",
      "metadata": {
        "id": "9f05d08a"
      },
      "outputs": [],
      "source": [
        "from google.cloud import aiplatform\n",
        "\n",
        "aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "ab31a8da",
      "metadata": {
        "id": "ab31a8da"
      },
      "outputs": [],
      "source": [
        "RUN_HYPERPARAMETER_TUNING = False  # Execute regular training instead of hyperparameter tuning.\n",
        "TRAIN_WITH_BEST_HYPERPARAMETERS = True  # @param {type:\"bool\"} Whether to use learned hyperparameters in training."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "218eca21",
      "metadata": {
        "id": "218eca21"
      },
      "outputs": [],
      "source": [
        "args = [\n",
        "    f\"--artifacts-dir={ARTIFACTS_DIR}\",\n",
        "    f\"--profiler-dir={PROFILER_DIR}\",\n",
        "    f\"--data-path={DATA_PATH}\",\n",
        "    f\"--batch-size={BATCH_SIZE}\",\n",
        "    f\"--rank-k={RANK_K}\",\n",
        "    f\"--num-actions={NUM_ACTIONS}\",\n",
        "    f\"--tikhonov-weight={TIKHONOV_WEIGHT}\",\n",
        "    f\"--agent-alpha={AGENT_ALPHA}\",\n",
        "]\n",
        "if RUN_HYPERPARAMETER_TUNING:\n",
        "  args.append(\"--run-hyperparameter-tuning\")\n",
        "elif TRAIN_WITH_BEST_HYPERPARAMETERS:\n",
        "  args.append(\"--train-with-best-hyperparameters\")\n",
        "  args.append(f\"--best-hyperparameters-bucket={RAW_BUCKET_NAME}\")\n",
        "  args.append(f\"--best-hyperparameters-path={HPTUNING_RESULT_PATH}\")"
      ]
    },
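    {
      "cell_type": "markdown",
      "id": "b2d9e4f7",
      "metadata": {
        "id": "b2d9e4f7"
      },
      "source": [
        "On the training side, `src.training.task` is expected to parse these flags. A minimal `argparse` sketch under that assumption (the flag names mirror the list above; the actual parser lives in the training package):\n",
        "\n",
        "```python\n",
        "import argparse\n",
        "\n",
        "# Hypothetical parser mirroring a subset of the flags assembled above.\n",
        "parser = argparse.ArgumentParser()\n",
        "parser.add_argument(\"--artifacts-dir\", type=str)\n",
        "parser.add_argument(\"--batch-size\", type=int, default=8)\n",
        "parser.add_argument(\"--rank-k\", type=int, default=20)\n",
        "parser.add_argument(\"--run-hyperparameter-tuning\", action=\"store_true\")\n",
        "\n",
        "parsed = parser.parse_args(\n",
        "    [\"--artifacts-dir=gs://my-bucket/artifacts\", \"--batch-size=8\"])\n",
        "print(parsed.batch_size, parsed.run_hyperparameter_tuning)  # 8 False\n",
        "```"
      ]
    },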
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "e8080155",
      "metadata": {
        "id": "e8080155"
      },
      "outputs": [],
      "source": [
        "job = aiplatform.CustomContainerTrainingJob(\n",
        "    display_name=\"train-movielens\",\n",
        "    container_uri=f\"gcr.io/{PROJECT_ID}/{HPTUNING_TRAINING_CONTAINER}:latest\",\n",
        "    command=[\"python3\", \"-m\", \"src.training.task\"] + args,  # Pass in training arguments, including hyperparameters.\n",
        "    model_serving_container_image_uri=f\"gcr.io/{PROJECT_ID}/{PREDICTION_CONTAINER}:latest\",\n",
        "    model_serving_container_predict_route=\"/predict\",\n",
        "    model_serving_container_health_route=\"/health\")\n",
        "\n",
        "print(\"Training Spec:\", job._managed_model)\n",
        "\n",
        "model = job.run(\n",
        "    model_display_name=\"movielens-model\",\n",
        "    replica_count=1,\n",
        "    machine_type=\"n1-standard-4\",\n",
        "    accelerator_type=\"ACCELERATOR_TYPE_UNSPECIFIED\",\n",
        "    accelerator_count=0)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "3971b79d",
      "metadata": {
        "id": "3971b79d"
      },
      "outputs": [],
      "source": [
        "print(\"Model display name:\", model.display_name)\n",
        "print(\"Model ID:\", model.name)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "bfd47f03",
      "metadata": {
        "id": "bfd47f03"
      },
      "source": [
        "### Deploy trained model to an Endpoint"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "1c2fd020",
      "metadata": {
        "id": "1c2fd020"
      },
      "outputs": [],
      "source": [
        "endpoint = model.deploy(machine_type=\"n1-standard-4\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "cffc159f",
      "metadata": {
        "id": "cffc159f"
      },
      "outputs": [],
      "source": [
        "print(\"Endpoint display name:\", endpoint.display_name)\n",
        "print(\"Endpoint ID:\", endpoint.name)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "c85b31b0",
      "metadata": {
        "id": "c85b31b0"
      },
      "source": [
        "### Predict on the Endpoint\n",
        "- Put prediction input(s) into a list named `instances`. The observation should be of dimension `(BATCH_SIZE, RANK_K)`. Read more about the MovieLens simulation environment observation [here](https://github.com/tensorflow/agents/blob/v0.8.0/tf_agents/bandits/environments/movielens_py_environment.py#L32-L138).\n",
        "- Read more about the endpoint prediction API [here](https://cloud.google.com/sdk/gcloud/reference/ai/endpoints/predict)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "670e5055",
      "metadata": {
        "id": "670e5055"
      },
      "outputs": [],
      "source": [
        "endpoint.predict(\n",
        "    instances=[\n",
        "        {\"observation\": [list(np.ones(20)) for _ in range(8)]},\n",
        "    ]\n",
        ")"
      ]
    },
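    {
      "cell_type": "markdown",
      "id": "c5e8a1d3",
      "metadata": {
        "id": "c5e8a1d3"
      },
      "source": [
        "It can help to sanity-check the payload shape before sending a request; a small sketch, assuming `BATCH_SIZE = 8` and `RANK_K = 20` as earlier in this notebook:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "BATCH_SIZE, RANK_K = 8, 20  # assumed values from earlier in the notebook\n",
        "\n",
        "observation = np.ones((BATCH_SIZE, RANK_K))\n",
        "instances = [{\"observation\": observation.tolist()}]\n",
        "\n",
        "# Each observation must have shape (BATCH_SIZE, RANK_K).\n",
        "assert len(instances[0][\"observation\"]) == BATCH_SIZE\n",
        "assert all(len(row) == RANK_K for row in instances[0][\"observation\"])\n",
        "```"
      ]
    },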
    {
      "cell_type": "markdown",
      "id": "51780141",
      "metadata": {
        "id": "51780141"
      },
      "source": [
        "## Summary\n",
        "\n",
        "### What exactly is the purpose of the MovieLens simulation environment?\n",
        "\n",
        "The MovieLens environment *simulates* a real-world environment containing users and their respective preferences. Internally, the MovieLens simulation environment takes the user-by-movie-item rating matrix and performs a rank-`RANK_K` matrix factorization on it, in order to address the sparsity of the matrix. After this construction step, the environment can generate user vectors of dimension `RANK_K` to represent users in the simulation, and can determine the approximate reward for any user and movie item pair. In RL terms, user vectors are observations, recommended movie items are actions, and approximate ratings are rewards. This environment therefore defines the RL problem at hand: how to recommend movies that maximize user ratings, in a simulated world of users whose preferences are defined by the MovieLens dataset, while having zero knowledge of the internal mechanism of the environment.\n",
        "\n",
        "Note that the user vectors may not have the same dimension as in the original rating matrix, and the approximate ratings (introduced to address the sparsity of the rating data) may not equal the original ratings. The individual entries in the user vectors do not correspond to real-world attributes such as user age. In prediction requests, the observations are user vectors that lie in the same space as those generated by the MovieLens simulation environment; in other words, they represent users in the same way as the user vectors/observations generated by the environment.\n",
        "\n",
        "This demo adopts the MovieLens environment so that it can build on a public dataset without communicating with the real world; such communication would add overhead to the essential steps of the demo and would likely rely on a specific implementation that is difficult to generalize to your production requirements.\n",
        "\n",
        "### How to apply this demo in production\n",
        "\n",
        "#### Step 0: Demo\n",
        "\n",
        "Walk through this demo, which uses the MovieLens simulation environment.\n",
        "\n",
        "#### Step 1: Offline Simulation\n",
        "\n",
        "To evaluate the performance of your RL model, you may need to run an offline simulation first to determine whether the model meets production criteria. In this case, you may have a static dataset, similar to the MovieLens dataset but potentially larger, from which you can construct a custom simulation environment to use in place of the MovieLens one. In the custom environment, you decide how to formulate observations and rewards, such as how to represent users with user vectors and what those vectors look like, perhaps via an embedding layer in a neural network. You can then apply the rest of the steps and code in this demo just as you did for MovieLens, and evaluate your model. After offline simulation, you may proceed to the next steps of launching your model, such as A/B testing.\n",
        "\n",
        "#### Step 2: Real-World System\n",
        "\n",
        "When you deploy the steps in this demo in production, you would replace the MovieLens simulation environment with a real-world system or communication mechanism that binds to the real world. In training, you pull user vectors/observations and ratings/rewards from the real-world environment. Now, the individual entries in the user vectors may have actual meanings such as user age. Again, you may decide how to formulate observations and rewards. In prediction, the observations packaged in prediction requests are again the same kind of user vectors as in training, with the same real-world meanings; you would generate them with the same mechanism.\n",
        "\n",
        "Your goal for prediction would again be to determine what movie items to recommend for a particular user. You would represent said user with a user vector using the mechanism you determined, send that vector in as the observation, and obtain the recommended movie item in the response.\n",
        "\n",
        "### Performance and scalability analysis\n",
        "\n",
        "You can use TensorBoard Profiler, as well as other TensorBoard features, to analyze training performance and find solutions to speed up and/or better scale your application."
      ]
    },
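    {
      "cell_type": "markdown",
      "id": "d4f7b2a9",
      "metadata": {
        "id": "d4f7b2a9"
      },
      "source": [
        "As an illustration of the factorization idea (a sketch, not the environment's exact implementation), a truncated SVD of a toy rating matrix yields `RANK_K`-dimensional user vectors and a dense matrix of approximate ratings:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# Toy rating matrix: 5 users x 4 movie items (0 = unrated).\n",
        "ratings = np.array([[5, 3, 0, 1],\n",
        "                    [4, 0, 0, 1],\n",
        "                    [1, 1, 0, 5],\n",
        "                    [1, 0, 0, 4],\n",
        "                    [0, 1, 5, 4]], dtype=float)\n",
        "\n",
        "RANK_K = 2\n",
        "u, s, vt = np.linalg.svd(ratings, full_matrices=False)\n",
        "user_vectors = u[:, :RANK_K] * s[:RANK_K]       # one RANK_K-dim vector per user\n",
        "approx_ratings = user_vectors @ vt[:RANK_K, :]  # dense approximate rewards\n",
        "\n",
        "print(user_vectors.shape, approx_ratings.shape)  # (5, 2) (5, 4)\n",
        "```"
      ]
    },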
    {
      "cell_type": "markdown",
      "id": "f532a8c5",
      "metadata": {
        "id": "f532a8c5"
      },
      "source": [
        "## Cleaning up\n",
        "\n",
        "To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n",
        "project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
        "\n",
        "Otherwise, you can delete the individual resources you created in this tutorial:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "4eb6a875",
      "metadata": {
        "collapsed": true,
        "id": "4eb6a875",
        "jupyter": {
          "outputs_hidden": true
        }
      },
      "outputs": [],
      "source": [
        "# Undeploy models, then delete the endpoint resource\n",
        "endpoint.undeploy_all()\n",
        "! gcloud ai endpoints delete $endpoint.name --quiet --region $REGION\n",
        "\n",
        "# Delete model resource\n",
        "! gcloud ai models delete $model.name --quiet --region $REGION\n",
        "\n",
        "# Delete Cloud Storage objects that were created\n",
        "! gsutil -m rm -r $ARTIFACTS_DIR"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "collapsed_sections": [],
      "name": "step_by_step_reinforcement_learning_vertex_ai.ipynb",
      "private_outputs": true,
      "provenance": [
        {
          "file_id": "/piper/depot/google3/cloud/ml/growth/experiments/reinforcement_learning/third_party/step_by_step_reinforcement_learning_vertex_ai/step_by_step_reinforcement_learning_vertex_ai.ipynb?workspaceId=feiyangyu:demo-links::citc",
          "timestamp": 1628009119653
        },
        {
          "file_id": "1A-ZX7uEygYxSq8f5YJ8ufaiZWzT0knxg",
          "timestamp": 1628008887399
        },
        {
          "file_id": "1IDq02qclyKBFz4nW2HvR7lDdFwyoefVq",
          "timestamp": 1627928438876
        },
        {
          "file_id": "1ycVkm86lYd_cJwB1PQP1EpE-kF5wSgfi",
          "timestamp": 1627596187736
        },
        {
          "file_id": "/piper/depot/google3/cloud/ml/growth/experiments/reinforcement_learning/third_party/step_by_step_reinforcement_learning_vertex_ai/step_by_step_reinforcement_learning_vertex_ai.ipynb?cl=386471622",
          "timestamp": 1627434677797
        },
        {
          "file_id": "1GS2vQHQoOHS9wjHRYQciD-FLap_PfY7J",
          "timestamp": 1626967846885
        },
        {
          "file_id": "1BAPXVLDIkA_hhdMbBG7E7chEXDhxt1Fi",
          "timestamp": 1625783156182
        },
        {
          "file_id": "1I3Pre8-VsAkOwhCbX5TfCZ-u0bYqAn-8",
          "timestamp": 1625239981619
        },
        {
          "file_id": "1zZpp0FLBBxwXiukXSOamv7vp_Knw5sKZ",
          "timestamp": 1624979514157
        },
        {
          "file_id": "1cFN0t6cI-Y8Rlk2iIgOFS1JPjBj5p2Vh",
          "timestamp": 1624652911478
        },
        {
          "file_id": "1XXLIQ5xxRUBBK73zo77RoLdBwTuhxxNB",
          "timestamp": 1623684870462
        },
        {
          "file_id": "1qNUNjJkUMNrPKUOgclSo5CcEWObqG-K-",
          "timestamp": 1623117299546
        },
        {
          "file_id": "116LxGpHYdJ1SN4FsZ5pKli-7wQ6TWPlk",
          "timestamp": 1622827200717
        }
      ]
    },
    "environment": {
      "name": "common-cu110.m69",
      "type": "gcloud",
      "uri": "gcr.io/deeplearning-platform-release/base-cu110:m69"
    },
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.7.10"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}
