{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Cascade Design Pattern\n",
    "\n",
     "This notebook demonstrates using the Cascade design pattern to train a model that predicts the distance a bicycle will be ridden.\n",
     "Let's assume that the distances of rides longer than 4 hours are important to us, but such rides are rare.\n",
    "So, we train a Cascade of ML models.\n",
    "\n",
    "The first model classifies trips into Typical trips and Long trips.\n",
    "Then, we create two training datasets based on the prediction of the first model.\n",
    "Next, we train two regression models to predict distance.\n",
    "Finally, we combine the two models in order to evaluate the Cascade as a whole.\n",
    "\n",
    "<img src=\"pipeline.png\" />"
   ]
  },
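  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At prediction time, the Cascade routes each trip through the classifier first, then hands it to whichever regression model was trained on that segment. A minimal sketch of that routing logic in plain Python (the `classifier`, `typical_model`, and `long_model` callables are hypothetical stand-ins for the BigQuery ML models trained below):\n",
    "\n",
    "```python\n",
    "def cascade_predict(trip, classifier, typical_model, long_model):\n",
    "    # Stage 1: classify the trip as 'Typical' or 'Long'.\n",
    "    trip_type = classifier(trip)\n",
    "    # Stage 2: route to the regression model trained on that segment.\n",
    "    model = long_model if trip_type == 'Long' else typical_model\n",
    "    return model(trip)\n",
    "```"
   ]
  },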
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "To try out this notebook:\n",
     "* Create an instance of AI Platform Pipelines by following the [Setting up AI Platform Pipelines](https://cloud.google.com/ai-platform/pipelines/docs/setting-up) how-to guide. Make sure to enable access to the https://www.googleapis.com/auth/cloud-platform scope when creating the GKE cluster.\n",
     "* Change the following cell to reflect your setup."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
     "# CHANGE the following settings to reflect your setup\n",
     "PROJECT_ID = 'ai-analytics-solutions'\n",
     "KFPHOST = '20844794c6e37538-dot-us-central2.pipelines.googleusercontent.com'  # from the Settings button in AI Platform Pipelines"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Dataset ai-analytics-solutions:mlpatterns\n",
      "\n",
      "   Last modified                              ACLs                              Labels  \n",
      " ----------------- ----------------------------------------------------------- -------- \n",
      "  10 Apr 05:46:16   Owners:                                                             \n",
      "                      kfpdemo@ai-analytics-solutions.iam.gserviceaccount.com,           \n",
      "                      projectOwners                                                     \n",
      "                    Writers:                                                            \n",
      "                      projectWriters                                                    \n",
      "                    Readers:                                                            \n",
      "                      projectReaders                                                    \n",
      "\n"
     ]
    }
   ],
   "source": [
    "!bq show mlpatterns || bq mk mlpatterns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import kfp\n",
    "import kfp.components as comp"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Help on function Bigquery - Query:\n",
      "\n",
      "Bigquery - Query(query:str, project_id:'GCPProjectID', dataset_id:str='', table_id:str='', output_gcs_path:'GCSPath'='', dataset_location:str='US', job_config:dict='')\n",
      "    Bigquery - Query\n",
      "    A Kubeflow Pipeline component to submit a query to Google Cloud Bigquery \n",
      "    service and dump outputs to a Google Cloud Storage blob.\n",
      "\n"
     ]
    }
   ],
   "source": [
    "bigquery_query_op = comp.load_component_from_url(\n",
    "    'https://raw.githubusercontent.com/kubeflow/pipelines/0e794e8a0eff6f81ddc857946ee8311c7c431ec2/components/gcp/bigquery/query/component.yaml')\n",
    "help(bigquery_query_op)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "import kfp.dsl as dsl\n",
    "from typing import NamedTuple\n",
    "import json\n",
    "import os\n",
    "\n",
    "def run_bigquery_ddl(project_id: str, query_string: str, location: str) -> NamedTuple(\n",
    "    'DDLOutput', [('created_table', str), ('query', str)]):\n",
    "    \"\"\"\n",
     "    Runs a BigQuery DDL statement and returns the name of the created table/model\n",
    "    \"\"\"\n",
    "    print(query_string)\n",
    "        \n",
     "    from google.cloud import bigquery\n",
     "    from google.api_core.future import polling\n",
     "    from google.cloud.bigquery import retry as bq_retry\n",
    "    \n",
    "    bqclient = bigquery.Client(project=project_id, location=location)\n",
    "    job = bqclient.query(query_string, retry=bq_retry.DEFAULT_RETRY)\n",
    "    job._retry = polling.DEFAULT_RETRY\n",
    "    \n",
     "    from time import sleep\n",
     "    while job.running():\n",
     "        sleep(0.1)\n",
     "        print('Running ...')\n",
    "        \n",
    "    tblname = job.ddl_target_table\n",
    "    tblname = '{}.{}'.format(tblname.dataset_id, tblname.table_id)\n",
    "    print('{} created in {}'.format(tblname, job.ended - job.started))\n",
    "    \n",
    "    from collections import namedtuple\n",
    "    result_tuple = namedtuple('DDLOutput', ['created_table', 'query'])\n",
    "    return result_tuple(tblname, query_string)\n",
    "\n",
    "\n",
    "def train_classification_model(ddlop, project_id):\n",
    "    query = \"\"\"\n",
    "        CREATE OR REPLACE MODEL mlpatterns.classify_trips\n",
    "        TRANSFORM(\n",
    "          trip_type,\n",
    "          EXTRACT (HOUR FROM start_date) AS start_hour,\n",
    "          EXTRACT (DAYOFWEEK FROM start_date) AS day_of_week,\n",
    "          start_station_name,\n",
    "          subscriber_type,\n",
    "          ML.QUANTILE_BUCKETIZE(member_birth_year, 10) OVER() AS bucketized_age,\n",
    "          member_gender\n",
    "        )\n",
    "        OPTIONS(model_type='logistic_reg', \n",
    "                auto_class_weights=True,\n",
    "                input_label_cols=['trip_type']) AS\n",
    "\n",
    "        SELECT\n",
    "          start_date, start_station_name, subscriber_type, member_birth_year, member_gender,\n",
    "          IF(duration_sec > 3600*4, 'Long', 'Typical') AS trip_type\n",
    "        FROM `bigquery-public-data.san_francisco_bikeshare.bikeshare_trips`\n",
    "    \"\"\"\n",
    "    print(query)\n",
    "    return ddlop(project_id, query, 'US')\n",
    "\n",
    "def create_training_data(ddlop, project_id, model_name, segment):\n",
    "    query = \"\"\"\n",
    "        CREATE OR REPLACE TABLE mlpatterns.{0}_trips AS\n",
    "        SELECT \n",
    "          * EXCEPT(predicted_trip_type_probs, predicted_trip_type)\n",
    "        FROM\n",
    "        ML.PREDICT(MODEL {1}, -- mlpatterns.classify_trips\n",
    "          (SELECT\n",
    "          start_date, start_station_name, subscriber_type, member_birth_year, member_gender,\n",
    "          ST_Distance(start_station_geom, end_station_geom) AS distance\n",
    "          FROM `bigquery-public-data.san_francisco_bikeshare.bikeshare_trips`)\n",
    "        )\n",
    "        WHERE predicted_trip_type = '{0}' AND distance IS NOT NULL\n",
    "    \"\"\".format(segment, model_name)\n",
    "    print(query)\n",
    "    return ddlop(project_id, query, 'US')\n",
    "\n",
    "def train_distance_model(ddlop, project_id, train_table_name, segment):\n",
    "    query = \"\"\"\n",
    "        CREATE OR REPLACE MODEL mlpatterns.predict_distance_{0}\n",
    "        TRANSFORM(\n",
    "          distance,\n",
    "          EXTRACT (HOUR FROM start_date) AS start_hour,\n",
    "          EXTRACT (DAYOFWEEK FROM start_date) AS day_of_week,\n",
    "          start_station_name,\n",
    "          subscriber_type,\n",
    "          ML.QUANTILE_BUCKETIZE(member_birth_year, 10) OVER() AS bucketized_age,\n",
    "          member_gender\n",
    "        )\n",
    "        OPTIONS(model_type='linear_reg', input_label_cols=['distance']) AS\n",
    "\n",
    "        SELECT\n",
    "          *\n",
    "        FROM \n",
    "          {1} -- mlpatterns.{0}_trips\n",
    "        \n",
    "    \"\"\".format(segment, train_table_name)\n",
    "    print(query)\n",
    "    return ddlop(project_id, query, 'US')\n",
    "\n",
    "\n",
    "def evaluate(project_id: str,\n",
    "             classification_model: str, typical_trip_model: str, long_trip_model: str) -> float:\n",
    "    query = \"\"\"\n",
    "        WITH input_data AS (\n",
    "           SELECT start_date, start_station_name, subscriber_type, member_birth_year, member_gender,\n",
    "                  ST_Distance(start_station_geom, end_station_geom) AS distance\n",
    "           FROM `bigquery-public-data.san_francisco_bikeshare.bikeshare_trips`\n",
    "        ),\n",
    "\n",
    "        classified AS (\n",
    "        SELECT \n",
    "          * EXCEPT(predicted_trip_type_probs)\n",
    "        FROM ML.PREDICT(\n",
    "          MODEL {0},\n",
    "          (SELECT * from input_data)\n",
    "        )\n",
    "        ),\n",
    "\n",
    "        evals AS (\n",
    "\n",
    "        SELECT\n",
    "          distance, predicted_distance\n",
    "        FROM ML.PREDICT(\n",
    "          MODEL {1},\n",
    "          (SELECT * from classified WHERE predicted_trip_type = 'Typical')\n",
    "        )\n",
    "        UNION ALL\n",
    "        SELECT\n",
    "          distance, predicted_distance\n",
    "        FROM ML.PREDICT(\n",
    "          MODEL {2},\n",
    "          (SELECT * from classified WHERE predicted_trip_type = 'Long')\n",
    "        )\n",
    "\n",
    "        )\n",
    "\n",
    "        SELECT\n",
    "           APPROX_QUANTILES(ABS(distance - predicted_distance), 10)[OFFSET(5)] AS median_absolute_error\n",
    "        FROM\n",
    "           evals\n",
    "    \"\"\".format(classification_model, typical_trip_model, long_trip_model)\n",
    "    print(query)\n",
    "    from google.cloud import bigquery\n",
    "    bqclient = bigquery.Client(project=project_id, location='US')\n",
    "    df = bqclient.query(query).result().to_dataframe()\n",
    "    return df['median_absolute_error'][0]\n",
    "\n",
    "\n",
    "@dsl.pipeline(\n",
    "    name='Cascade pipeline on SF bikeshare',\n",
    "    description='Cascade pipeline on SF bikeshare'\n",
    ")\n",
     "def cascade_pipeline(\n",
     "    project_id: str = PROJECT_ID\n",
     "):\n",
     "    ddlop = comp.func_to_container_op(run_bigquery_ddl, packages_to_install=['google-cloud-bigquery'])\n",
     "\n",
     "    c1 = train_classification_model(ddlop, project_id)\n",
     "    c1_model_name = c1.outputs['created_table']\n",
     "\n",
     "    c2a_input = create_training_data(ddlop, project_id, c1_model_name, 'Typical')\n",
     "    c2b_input = create_training_data(ddlop, project_id, c1_model_name, 'Long')\n",
     "\n",
     "    c3a_model = train_distance_model(ddlop, project_id, c2a_input.outputs['created_table'], 'Typical')\n",
     "    c3b_model = train_distance_model(ddlop, project_id, c2b_input.outputs['created_table'], 'Long')\n",
     "\n",
     "    evalop = comp.func_to_container_op(evaluate, packages_to_install=['google-cloud-bigquery', 'pandas'])\n",
     "    error = evalop(project_id, c1_model_name, c3a_model.outputs['created_table'], c3b_model.outputs['created_table'])\n",
     "    print(error.output)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "        CREATE OR REPLACE MODEL mlpatterns.classify_trips\n",
      "        TRANSFORM(\n",
      "          trip_type,\n",
      "          EXTRACT (HOUR FROM start_date) AS start_hour,\n",
      "          EXTRACT (DAYOFWEEK FROM start_date) AS day_of_week,\n",
      "          start_station_name,\n",
      "          subscriber_type,\n",
      "          ML.QUANTILE_BUCKETIZE(member_birth_year, 10) OVER() AS bucketized_age,\n",
      "          member_gender\n",
      "        )\n",
      "        OPTIONS(model_type='logistic_reg', \n",
      "                auto_class_weights=True,\n",
      "                input_label_cols=['trip_type']) AS\n",
      "\n",
      "        SELECT\n",
      "          start_date, start_station_name, subscriber_type, member_birth_year, member_gender,\n",
      "          IF(duration_sec > 3600*4, 'Long', 'Typical') AS trip_type\n",
      "        FROM `bigquery-public-data.san_francisco_bikeshare.bikeshare_trips`\n",
      "    \n",
      "\n",
      "        CREATE OR REPLACE TABLE mlpatterns.Typical_trips AS\n",
      "        SELECT \n",
      "          * EXCEPT(predicted_trip_type_probs, predicted_trip_type)\n",
      "        FROM\n",
      "        ML.PREDICT(MODEL {{pipelineparam:op=Run bigquery ddl;name=created_table}}, -- mlpatterns.classify_trips\n",
      "          (SELECT\n",
      "          start_date, start_station_name, subscriber_type, member_birth_year, member_gender,\n",
      "          ST_Distance(start_station_geom, end_station_geom) AS distance\n",
      "          FROM `bigquery-public-data.san_francisco_bikeshare.bikeshare_trips`)\n",
      "        )\n",
      "        WHERE predicted_trip_type = 'Typical' AND distance IS NOT NULL\n",
      "    \n",
      "\n",
      "        CREATE OR REPLACE TABLE mlpatterns.Long_trips AS\n",
      "        SELECT \n",
      "          * EXCEPT(predicted_trip_type_probs, predicted_trip_type)\n",
      "        FROM\n",
      "        ML.PREDICT(MODEL {{pipelineparam:op=Run bigquery ddl;name=created_table}}, -- mlpatterns.classify_trips\n",
      "          (SELECT\n",
      "          start_date, start_station_name, subscriber_type, member_birth_year, member_gender,\n",
      "          ST_Distance(start_station_geom, end_station_geom) AS distance\n",
      "          FROM `bigquery-public-data.san_francisco_bikeshare.bikeshare_trips`)\n",
      "        )\n",
      "        WHERE predicted_trip_type = 'Long' AND distance IS NOT NULL\n",
      "    \n",
      "\n",
      "        CREATE OR REPLACE MODEL mlpatterns.predict_distance_Typical\n",
      "        TRANSFORM(\n",
      "          distance,\n",
      "          EXTRACT (HOUR FROM start_date) AS start_hour,\n",
      "          EXTRACT (DAYOFWEEK FROM start_date) AS day_of_week,\n",
      "          start_station_name,\n",
      "          subscriber_type,\n",
      "          ML.QUANTILE_BUCKETIZE(member_birth_year, 10) OVER() AS bucketized_age,\n",
      "          member_gender\n",
      "        )\n",
      "        OPTIONS(model_type='linear_reg', input_label_cols=['distance']) AS\n",
      "\n",
      "        SELECT\n",
      "          *\n",
      "        FROM \n",
      "          {{pipelineparam:op=Run bigquery ddl 2;name=created_table}} -- mlpatterns.Typical_trips\n",
      "        \n",
      "    \n",
      "\n",
      "        CREATE OR REPLACE MODEL mlpatterns.predict_distance_Long\n",
      "        TRANSFORM(\n",
      "          distance,\n",
      "          EXTRACT (HOUR FROM start_date) AS start_hour,\n",
      "          EXTRACT (DAYOFWEEK FROM start_date) AS day_of_week,\n",
      "          start_station_name,\n",
      "          subscriber_type,\n",
      "          ML.QUANTILE_BUCKETIZE(member_birth_year, 10) OVER() AS bucketized_age,\n",
      "          member_gender\n",
      "        )\n",
      "        OPTIONS(model_type='linear_reg', input_label_cols=['distance']) AS\n",
      "\n",
      "        SELECT\n",
      "          *\n",
      "        FROM \n",
      "          {{pipelineparam:op=Run bigquery ddl 3;name=created_table}} -- mlpatterns.Long_trips\n",
      "        \n",
      "    \n",
      "{{pipelineparam:op=Evaluate;name=output}}\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "Experiment link <a href=\"http://20844794c6e37538-dot-us-central2.pipelines.googleusercontent.com/#/experiments/details/c381cd88-f819-4a4c-a74f-061d63ba7b97\" target=\"_blank\" >here</a>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "Run link <a href=\"http://20844794c6e37538-dot-us-central2.pipelines.googleusercontent.com/#/runs/details/88cd19ee-3fcc-48ee-82bb-eee7c384a0f5\" target=\"_blank\" >here</a>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "pipeline_func = cascade_pipeline\n",
    "pipeline_filename = pipeline_func.__name__ + '.zip'\n",
    "import kfp.compiler as compiler\n",
    "compiler.Compiler().compile(pipeline_func, pipeline_filename)\n",
    "\n",
     "# Specify pipeline argument values\n",
     "arguments = {}\n",
     "\n",
     "# Get or create an experiment\n",
     "client = kfp.Client(KFPHOST)\n",
     "experiment = client.create_experiment('cascade_experiment')\n",
     "\n",
     "# Submit a pipeline run\n",
    "run_name = pipeline_func.__name__ + ' run'\n",
    "run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
   ]
  }
 ],
 "metadata": {
  "environment": {
   "name": "tf2-gpu.2-1.m54",
   "type": "gcloud",
   "uri": "gcr.io/deeplearning-platform-release/tf2-gpu.2-1:m54"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
