{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Automated model training and evaluation using\n",
    "* #### Scikit-learn\n",
    "* #### Google Cloud Composer\n",
    "* #### Google Cloud AI Platform Training\n",
    "* #### MLflow\n",
    "\n",
    "This notebook goes through these major steps:\n",
    "* Step 1: Creates a Cloud AI Platform Training package for the Chicago taxi fare prediction model.\n",
    "* Step 2: Creates and deploys an Airflow DAG to manage the training process.\n",
    "\n",
    "> Costs. You might be charged for the operations in this tutorial. Refer to the [Cloud AI Platform Training pricing page](https://cloud.google.com/ai-platform/training/pricing) for more information."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from IPython.core.display import display, HTML\n",
    "import mlflow\n",
    "import pymysql\n",
    "from datetime import datetime"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Jupyter cell magic that renders the cell through a Jinja template and writes the result to a Python file.\n",
    "# Dictionaries available for substitution: env[] for OS environment variables and var[] for notebook globals.\n",
    "from IPython.core.magic import register_line_cell_magic\n",
    "from jinja2 import Template\n",
    "\n",
    "@register_line_cell_magic\n",
    "def writetemplate(line, cell):\n",
    "    dirname = os.path.dirname(line)\n",
    "    if len(dirname)>0 and not os.path.exists(dirname):\n",
    "        os.makedirs(dirname)\n",
    "    with open(line, 'w') as f:\n",
    "        f.write(Template(cell).render({'env' : os.environ, 'var' : globals()}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 0.1 Global parameters of the training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Name of the experiment in MLFlow tracking and name in MLFlow model registry\n",
    "experiment_name = 'chicago-taxi-m3'\n",
    "# How many parallel trainings to execute, each with randomized training parameters\n",
    "number_of_parallel_trainings = 3\n",
    "# Training module version; composed into the package name, e.g. 'taxi-fare-trainer-0.2'\n",
    "training_module_version = '0.2'\n",
    "\n",
    "# Lower and upper limits for the randomized number of RandomForestRegressor estimators.\n",
    "range_of_estimators_lower = 20\n",
    "range_of_estimators_upper = 200"
   ]
  },
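  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch (not part of the pipeline) of how the DAG created below uses these limits: each parallel training draws its own randomized estimator count from the configured range.\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "# Each parallel training draws a random n_estimators value\n",
    "# between the configured lower and upper limits.\n",
    "n_estimators = random.randrange(20, 200)\n",
    "assert 20 <= n_estimators < 200\n",
    "```"
   ]
  },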
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 0.2 Print environment variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# MLflow public URI\n",
    "MLFLOW_TRACKING_EXTERNAL_URI = os.environ['MLFLOW_TRACKING_EXTERNAL_URI']\n",
    "\n",
    "REGION=os.environ['MLOPS_REGION']\n",
    "ML_IMAGE_URI = os.environ['ML_IMAGE_URI']\n",
    "COMPOSER_NAME = os.environ['MLOPS_COMPOSER_NAME']\n",
    "MLFLOW_GCS_ROOT_URI = os.environ['MLFLOW_GCS_ROOT_URI']\n",
    "\n",
    "print(f'Cloud Composer instance name: {COMPOSER_NAME}')\n",
    "print(f'Cloud Composer region: {REGION}')\n",
    "print(f'MLflow tracking server URI: {mlflow.get_tracking_uri()}')\n",
    "print(f'MLflow GCS root: {MLFLOW_GCS_ROOT_URI}')\n",
    "\n",
    "experiment_path = MLFLOW_GCS_ROOT_URI.replace('gs://','')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 1: Create training package\n",
    "### 1.1 Create training package folder and module static content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir -p ./package/training\n",
    "!touch ./package/training/__init__.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.2 Write setup.py to define package dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writetemplate ./package/setup.py\n",
    "from setuptools import find_packages\n",
    "from setuptools import setup\n",
    "\n",
    "REQUIRED_PACKAGES = ['mlflow==1.13.1','PyMySQL==0.9.3']\n",
    "\n",
    "setup(\n",
    "    name='taxi-fare-trainer',\n",
    "    version='{{ var[\"training_module_version\"] }}',\n",
    "    install_requires=REQUIRED_PACKAGES,\n",
    "    packages=find_packages(),\n",
    "    include_package_data=True,\n",
    "    description='Custom training setup for Chicago taxi fare prediction.'\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.3 Create training task Python file\n",
    "\n",
    "This file contains the core trainer routine that is executed in the Cloud AI Platform Training environment.\n",
    "An experimental version of the training routine is in 'ChicagoTaxiTrainer.ipynb', where you can adjust and test changes more easily.\n",
    "\n",
    "#### About the approach\n",
    "This example uses the Scikit-learn [RandomForestRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) in a short training pipeline. \n",
    "> Note: Cloud AI Platform Training passes input parameters as CLI arguments to the main() method. For example, 'number_of_estimators' is passed in the argument list; it is set in the Airflow DAG's BashOperator, which invokes the 'gcloud ai-platform jobs submit training' command with 'number_of_estimators'.\n",
    "\n",
    "Training input parameters are:\n",
    "* 'number_of_estimators' - The number of trees in the random forest.\n",
    "* 'max_features' - The number of features to consider when looking for the best split.\n",
    "\n",
    "Training metrics:\n",
    "* 'train_cross_valid_score_rmse_mean' - RMSE score on training data split\n",
    "* 'eval_cross_valid_score_rmse_mean' - RMSE score on test/eval data split"
   ]
  },
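  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small illustration (with made-up score values) of how the RMSE metrics are derived: cross_val_score with scoring='neg_mean_squared_error' returns negated MSE values, which the trainer negates and square-roots before averaging.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Example negated MSE scores as returned by cross_val_score\n",
    "neg_mse_scores = np.array([-4.0, -9.0, -16.0])\n",
    "rmse_mean = np.sqrt(-neg_mse_scores).mean()\n",
    "print(rmse_mean)  # -> 3.0\n",
    "```"
   ]
  },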
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writetemplate ./package/training/task.py\n",
    "\n",
    "import sys, stat\n",
    "import argparse\n",
    "import os\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import glob\n",
    "from scipy import stats\n",
    "\n",
    "from sklearn.linear_model import LogisticRegression # Only for train_test\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "from sklearn.compose import ColumnTransformer\n",
    "from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.preprocessing import OneHotEncoder, StandardScaler\n",
    "\n",
    "import mlflow\n",
    "import mlflow.sklearn\n",
    "from mlflow.models.signature import infer_signature\n",
    "\n",
    "from joblib import dump, load\n",
    "from google.cloud import storage\n",
    "\n",
    "csv_delimiter = '|'\n",
    "\n",
    "def copy_local_directory_to_gcs(local_path, gcs_uri):\n",
    "    \"\"\" \n",
    "    Uploads a local folder structure to a GCS bucket folder. Used to upload the trained model to GCS.\n",
    "    \"\"\"\n",
    "    assert os.path.isdir(local_path)\n",
    "    job_dir = gcs_uri.replace('gs://', '')\n",
    "    bucket_id = job_dir.split('/')[0]\n",
    "    # Note: slicing is used instead of str.lstrip(), which strips a character set, not a prefix.\n",
    "    bucket_path = job_dir[len(bucket_id) + 1:]\n",
    "    bucket = storage.Client().bucket(bucket_id)\n",
    "    _upload_local_to_gcs(local_path, bucket, bucket_path)\n",
    "        \n",
    "def _upload_local_to_gcs(local_path, bucket, bucket_path):\n",
    "    \"\"\" Recursive file and folder upload from starting folder \"\"\"\n",
    "    for local_file in glob.glob(local_path + '/**'):\n",
    "        if not os.path.isfile(local_file):\n",
    "           _upload_local_to_gcs(local_file, bucket, bucket_path + '/' + os.path.basename(local_file))\n",
    "        else:\n",
    "           remote_path = os.path.join(bucket_path, local_file[1 + len(local_path):])\n",
    "           blob = bucket.blob(remote_path)\n",
    "           blob.upload_from_filename(local_file)\n",
    "\n",
    "def feature_engineering(data):\n",
    "    \"\"\" Prepares the preloaded dataset for final training \"\"\"\n",
    "    # Add 'N/A' for missing 'company' and 0 for missing 'tolls'\n",
    "    data.fillna(value={'company':'N/A','tolls':0}, inplace=True)\n",
    "    # Drop rows containing null values.\n",
    "    data.dropna(how='any', axis='rows', inplace=True)\n",
    "    # Pickup and dropoff locations distance\n",
    "    data['abs_distance'] = (np.hypot(data['dropoff_latitude']-data['pickup_latitude'], data['dropoff_longitude']-data['pickup_longitude']))*100\n",
    "\n",
    "    # Drop extremes and outliers\n",
    "    possible_outliers_cols = ['trip_seconds', 'trip_miles', 'fare', 'abs_distance']\n",
    "    data = data[(np.abs(stats.zscore(data[possible_outliers_cols])) < 3).all(axis=1)].copy()\n",
    "    # Reduce location accuracy to improve training speed\n",
    "    data = data.round({'pickup_latitude': 3, 'pickup_longitude': 3, 'dropoff_latitude':3, 'dropoff_longitude':3})\n",
    "\n",
    "    # Return training-only features (X) and the fare target (y)\n",
    "    return (\n",
    "        data.drop(['fare', 'trip_start_timestamp'], axis=1),\n",
    "        data['fare']\n",
    "    )\n",
    "\n",
    "def build_pipeline(number_of_estimators = 20, max_features = 'auto'):\n",
    "    \"\"\" Defines the scikit training steps \"\"\"\n",
    "    ct_pipe = ColumnTransformer(transformers=[\n",
    "    ('hourly_cat', OneHotEncoder(categories=[range(0,24)], sparse = False), ['trip_start_hour']),\n",
    "    ('dow', OneHotEncoder(categories=[['Mon', 'Tue', 'Sun', 'Wed', 'Sat', 'Fri', 'Thu']], sparse = False), ['trip_start_day_of_week']),\n",
    "    ('std_scaler', StandardScaler(), [\n",
    "        'trip_start_year',\n",
    "        'abs_distance',\n",
    "        'pickup_longitude',\n",
    "        'pickup_latitude',\n",
    "        'dropoff_longitude',\n",
    "        'dropoff_latitude',\n",
    "        'trip_miles',\n",
    "        'trip_seconds'])\n",
    "    ])\n",
    "    rfr_pipe = Pipeline([\n",
    "        ('ct', ct_pipe),\n",
    "        ('forest_reg', RandomForestRegressor(n_estimators = number_of_estimators, max_features = max_features, n_jobs = -1, random_state = 3))\n",
    "    ])\n",
    "    return rfr_pipe\n",
    "\n",
    "def train_model(args):\n",
    "    \"\"\" Main training logic \"\"\"\n",
    "    print('Taxi fare estimation model training step started...')\n",
    "    # Addresses experiment by name for following tracking context\n",
    "    mlflow.set_experiment(args.experiment_name)\n",
    "    \n",
    "    # To save training parameters and metrics automatically use autolog()\n",
    "    # mlflow.sklearn.autolog()\n",
    "    with mlflow.start_run(nested=True) as mlflow_run:\n",
    "        mlflow.log_param('number_of_estimators', args.number_of_estimators)\n",
    "        mlflow.set_tag('version', args.version_tag)\n",
    "        mlflow.set_tag('job_name', args.job_name)\n",
    "        mlflow.set_tag('gcs_train_source', args.gcs_train_source)\n",
    "        mlflow.set_tag('gcs_eval_source', args.gcs_eval_source)\n",
    "\n",
    "        df = pd.read_csv(args.gcs_train_source, sep=csv_delimiter)\n",
    "        mlflow.log_param('training_size', f'{df.shape[0]}')\n",
    "        \n",
    "        # Fix and drop invalid data rows.\n",
    "        X_train, y_train = feature_engineering(df)\n",
    "        # Create training pipeline.\n",
    "        rfr_pipe = build_pipeline(number_of_estimators=args.number_of_estimators)\n",
    "        \n",
    "        rfr_score = cross_val_score(rfr_pipe, X_train, y_train, scoring = 'neg_mean_squared_error', cv=5)\n",
    "        mlflow.log_metric('train_cross_valid_score_rmse_mean', np.sqrt(-rfr_score).mean())\n",
    "        \n",
    "        # Train the model\n",
    "        final_model = rfr_pipe.fit(X_train, y_train)\n",
    "        mlflow.sklearn.log_model(final_model, 'chicago_rnd_forest')\n",
    "\n",
    "        # Evaluate the model on the eval set\n",
    "        df = pd.read_csv(args.gcs_eval_source, sep=csv_delimiter)\n",
    "        mlflow.log_param('eval_size',f'{df.shape[0]}')\n",
    "        X_eval, y_eval = feature_engineering(df)\n",
    "        X_eval['fare_pred'] = final_model.predict(X_eval)\n",
    "        rfr_score = cross_val_score(final_model, X_eval, y_eval, scoring='neg_mean_squared_error', cv=5)\n",
    "        mlflow.log_metric('eval_cross_valid_score_rmse_mean', np.sqrt(-rfr_score).mean())\n",
    "        \n",
    "        # Save model\n",
    "        model_file_name = f'{args.version_tag}.joblib'\n",
    "        mlflow.sklearn.save_model(final_model, model_file_name)\n",
    "        copy_local_directory_to_gcs(model_file_name, args.job_dir)\n",
    "        mlflow.set_tag('model_file', args.job_dir+'/'+model_file_name)\n",
    "\n",
    "    print('Training finished.')\n",
    "\n",
    "def main():\n",
    "    print('Training arguments: ' + ' '.join(sys.argv[1:]))\n",
    "    parser = argparse.ArgumentParser()\n",
    "    parser.add_argument('--number_of_estimators', type=int)\n",
    "    parser.add_argument('--job-dir', type=str)\n",
    "    parser.add_argument('--local_data', type=str)\n",
    "    parser.add_argument('--gcs_train_source', type=str)\n",
    "    parser.add_argument('--gcs_eval_source', type=str)\n",
    "    parser.add_argument('--experiment_name', type=str)\n",
    "    parser.add_argument('--version_tag', type=str)\n",
    "    parser.add_argument('--job_name', type=str)\n",
    "    \n",
    "    args, unknown_args = parser.parse_known_args()\n",
    "\n",
    "    if not args.gcs_train_source:\n",
    "        print('Missing GCS training source URI')\n",
    "        return\n",
    "    if not args.gcs_eval_source:\n",
    "        print('Missing GCS evaluation source URI')\n",
    "        return\n",
    "    # The CLOUD_ML_JOB environment variable contains other CAIP Training runtime parameters as a JSON object\n",
    "    # job = os.environ['CLOUD_ML_JOB']\n",
    "    \n",
    "    # The MLflow tracking server is reachable locally from the training container\n",
    "    mlflow.set_tracking_uri('http://127.0.0.1:80')\n",
    "\n",
    "    train_model(args)\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    main()"
   ]
  },
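  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small illustration (with a made-up URI) of splitting a GCS URI into bucket id and path, as copy_local_directory_to_gcs above needs to do; plain slicing avoids the pitfall that str.lstrip() strips a character set rather than a prefix.\n",
    "\n",
    "```python\n",
    "# Hypothetical GCS URI used only for illustration\n",
    "gcs_uri = 'gs://my-bucket/experiments/run-1'\n",
    "job_dir = gcs_uri.replace('gs://', '')\n",
    "bucket_id = job_dir.split('/')[0]\n",
    "bucket_path = job_dir[len(bucket_id) + 1:]\n",
    "print(bucket_id, bucket_path)  # -> my-bucket experiments/run-1\n",
    "```"
   ]
  },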
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.4 Package the training modules\n",
    "Create the training package and import (copy) it into the Cloud Composer environment's 'data' folder, so that the compressed package file is available to the DAG code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create the trainer package\n",
    "!cd package && python ./setup.py sdist\n",
    "\n",
    "# Copy to Composer data folder\n",
    "!gcloud composer environments storage data import \\\n",
    "    --environment {COMPOSER_NAME} \\\n",
    "    --location {REGION} \\\n",
    "    --source ./package/dist \\\n",
    "    --destination multi_model_trainer_dag"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 2: Create model trainer Airflow DAG\n",
    "Notice: the entire next cell is a template that will be written to the 'multi_model_trainer_dag.py' file.\n",
    "The 'writetemplate' magic uses Jinja templating, while Airflow also applies Jinja templating to runtime parameters.\n",
    "Airflow parameters therefore have to be wrapped as {{ \"{{ ts_nodash }}\" }} because of this 'template in a template' mechanism.\n",
    "\n",
    "### 2.1 Write out the Cloud Composer/Airflow DAG file"
   ]
  },
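  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the 'template in a template' behavior: the first Jinja pass (the writetemplate magic) evaluates the outer expression, and the quoted string inside survives as a placeholder for Airflow's own Jinja pass.\n",
    "\n",
    "```python\n",
    "from jinja2 import Template\n",
    "\n",
    "# The outer {{ ... }} is evaluated on the first render;\n",
    "# the quoted string inside is emitted verbatim for Airflow.\n",
    "rendered = Template('job_{{ \"{{ ts_nodash }}\" }}').render()\n",
    "print(rendered)  # -> job_{{ ts_nodash }}\n",
    "```"
   ]
  },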
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writetemplate multi_model_trainer_dag.py\n",
    "\n",
    "import os\n",
    "import logging\n",
    "import random\n",
    "import uuid\n",
    "from datetime import (datetime, timedelta)\n",
    "\n",
    "import mlflow\n",
    "import mlflow.sklearn\n",
    "\n",
    "import airflow\n",
    "from airflow import DAG\n",
    "from airflow.operators.bash_operator import BashOperator\n",
    "from airflow.operators.python_operator import PythonOperator\n",
    "from airflow.contrib.operators.bigquery_operator import BigQueryOperator\n",
    "from airflow.contrib.operators.bigquery_table_delete_operator import BigQueryTableDeleteOperator\n",
    "from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator\n",
    "from airflow.providers.google.cloud.operators.mlengine import MLEngineStartTrainingJobOperator\n",
    "\n",
    "csv_delimiter = '|'\n",
    "experiment_name = '{{ var[\"experiment_name\"] }}'\n",
    "ML_IMAGE_URI = '{{ var[\"ML_IMAGE_URI\"] }}'\n",
    "job_experiment_root = f'{{ var[\"MLFLOW_GCS_ROOT_URI\"] }}/experiments/{experiment_name}'\n",
    "\n",
    "PROJECT_ID = os.getenv('GCP_PROJECT', default='edgeml-demo')\n",
    "REGION = os.getenv('COMPOSER_LOCATION', default='us-central1')\n",
    "\n",
    "# Postfixes for temporary BQ tables and output CSV files\n",
    "TRAINING_POSTFIX = '_training'\n",
    "EVAL_POSTFIX = '_eval'\n",
    "VALIDATION_POSTFIX = '_validation'\n",
    "\n",
    "BQ_DATASET = 'chicago_taxi_trips'\n",
    "BQ_TABLE = 'taxi_trips'\n",
    "\n",
    "# Query to create training and evaluation dataset from public taxi_trips table.\n",
    "# Some key aspects:\n",
    "# - Localize trip dates to Chicago time zone.\n",
    "# - Create a hint (is_airport) to categorize airport to/from travel cases.\n",
    "# - Filter out inappropriate rows (null or zero values)\n",
    "# - Add separate, less granular training features for year, month, day, hour and day_of_week instead of the compound datetime field \n",
    "BQ_QUERY = \"\"\"\n",
    "with tmp_table as (\n",
    "SELECT trip_seconds, trip_miles, fare, tolls, \n",
    "    company, pickup_latitude, pickup_longitude, dropoff_latitude, dropoff_longitude,\n",
    "    DATETIME(trip_start_timestamp, 'America/Chicago') trip_start_timestamp,\n",
    "    DATETIME(trip_end_timestamp, 'America/Chicago') trip_end_timestamp,\n",
    "    CASE WHEN (pickup_community_area IN (56, 64, 76)) OR (dropoff_community_area IN (56, 64, 76)) THEN 1 else 0 END is_airport,\n",
    "FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`\n",
    "WHERE\n",
    "    dropoff_latitude IS NOT NULL and\n",
    "    dropoff_longitude IS NOT NULL and\n",
    "    pickup_latitude IS NOT NULL and\n",
    "    pickup_longitude IS NOT NULL and\n",
    "    fare > 0 and \n",
    "    trip_miles > 0 and\n",
    "    MOD(ABS(FARM_FINGERPRINT(unique_key)), 100) {}\n",
    "ORDER BY RAND()\n",
    "LIMIT {})\n",
    "SELECT *,\n",
    "    EXTRACT(YEAR FROM trip_start_timestamp) trip_start_year,\n",
    "    EXTRACT(MONTH FROM trip_start_timestamp) trip_start_month,\n",
    "    EXTRACT(DAY FROM trip_start_timestamp) trip_start_day,\n",
    "    EXTRACT(HOUR FROM trip_start_timestamp) trip_start_hour,\n",
    "    FORMAT_DATE('%a', DATE(trip_start_timestamp)) trip_start_day_of_week\n",
    "FROM tmp_table\n",
    "\"\"\"\n",
    "\n",
    "def joiner_func(training_gcs_file_name, eval_gcs_file_name, **kwargs):\n",
    "    \"\"\"\n",
    "    No-op method to synchronize pipeline branches\n",
    "    \"\"\"\n",
    "    logging.info('Joining %s, eval GCS files %s', training_gcs_file_name, eval_gcs_file_name)\n",
    "    return None\n",
    "\n",
    "def fake_model_tracking(**kwargs):\n",
    "    \"\"\"\n",
    "    Simulated training: lets you test the environment without the real, time-consuming training\n",
    "    while still seeing random metrics in MLflow\n",
    "    \"\"\"\n",
    "    job_name = kwargs.get('templates_dict').get('job_name')\n",
    "    print(f\"Fake model tracking: '{job_name}'\")\n",
    "    mlflow.set_experiment(experiment_name)\n",
    "    with mlflow.start_run(nested=True) as run:\n",
    "        mlflow.log_param('number_of_estimators', 0)\n",
    "        mlflow.set_tag('version', 'fake')\n",
    "        mlflow.set_tag('job_name', job_name)\n",
    "        mlflow.log_metric('train_cross_valid_score_rmse_mean', 1+random.random())\n",
    "        mlflow.log_metric('eval_cross_valid_score_rmse_mean', 1+random.random())\n",
    "    return None\n",
    "\n",
    "def register_model(run_id, model_name):\n",
    "    \"\"\"\n",
    "    Register model to MLflow\n",
    "    \"\"\"\n",
    "    model_uri = f'runs:/{run_id}/{model_name}'\n",
    "    registered_model = mlflow.register_model(model_uri, model_name)\n",
    "    print(registered_model)\n",
    "\n",
    "def compare_to_registered_model(model_name, best_run, metric_to_compare):\n",
    "    \"\"\"\n",
    "    Compare the actual training results with latest registered model.\n",
    "    Latest registered model is the previous best.\n",
    "    \"\"\"\n",
    "    mlflow_client = mlflow.tracking.MlflowClient()\n",
    "    registered_models=mlflow_client.search_registered_models(filter_string=f\"name='{model_name}'\", max_results=1, order_by=['timestamp DESC'])\n",
    "    if len(registered_models)==0:\n",
    "        # No previous training job.\n",
    "        register_model(best_run.run_id, model_name)\n",
    "    else:\n",
    "        last_version = registered_models[0].latest_versions[0]\n",
    "        run = mlflow_client.get_run(last_version.run_id)\n",
    "        if not run:\n",
    "            print('Registered version run missing!')\n",
    "            return None\n",
    "        \n",
    "        # Assume the last registered model is the best in the training history\n",
    "        last_registered_metric=run.data.metrics[metric_to_compare]\n",
    "        best_run_metric=best_run['metrics.'+metric_to_compare]\n",
    "        # Smaller value is better\n",
    "        if last_registered_metric>best_run_metric:\n",
    "            print(f'Register better version with metric: {best_run_metric}')\n",
    "            register_model(best_run.run_id, experiment_name)\n",
    "        else:\n",
    "            print(f'Registered version still better. Metric: {last_registered_metric}')    \n",
    "\n",
    "def model_blessing(**kwargs):\n",
    "    \"\"\"\n",
    "    Compare all parallel trainings and select the best one based on 'eval_cross_valid_score_rmse_mean'.\n",
    "    \"\"\"\n",
    "    job_name = kwargs.get('templates_dict').get('job_name')\n",
    "    print(f'Model blessing: \"{job_name}\"')\n",
    "\n",
    "    # Query results of current training jobs \n",
    "    experiment = mlflow.get_experiment_by_name(experiment_name)\n",
    "    filter_string = f\"tags.job_name ILIKE '{job_name}_%'\"\n",
    "    df = mlflow.search_runs([experiment.experiment_id], filter_string=filter_string)\n",
    "\n",
    "    # Compare the newly trained models and select the best one (smaller RMSE is better).\n",
    "    eval_best = df.loc[df['metrics.eval_cross_valid_score_rmse_mean'].idxmin()]\n",
    "    # The training metric can be an alternative basis for comparison:\n",
    "    # train_best = df.loc[df['metrics.train_cross_valid_score_rmse_mean'].idxmin()]\n",
    "    \n",
    "    compare_to_registered_model(experiment_name, eval_best, 'eval_cross_valid_score_rmse_mean')\n",
    "\n",
    "with DAG('multi_model_trainer',\n",
    "         description = 'Train, evaluate and validate multiple models on the taxi fare dataset; select the best one and register it in MLflow. v0.1',\n",
    "         schedule_interval = '*/15 * * * *', # '*/15 ...' -> every 15 minutes,  None -> manual trigger\n",
    "         start_date = datetime(2021, 1, 1),\n",
    "         max_active_runs = 3,\n",
    "         catchup = False,\n",
    "         default_args = { 'provide_context': True}\n",
    "         ) as dag:\n",
    "\n",
    "    # Dataset split ratio and limit of query records\n",
    "    tasks = {\n",
    "        'training' : {\n",
    "            'dataset_range' : 'between 0 and 80',\n",
    "            'limit' : random.randrange(2000, 8000, 100)\n",
    "            },\n",
    "        'eval':{\n",
    "            'dataset_range' : 'between 80 and 100',\n",
    "            'limit' : random.randrange(1000, 2000, 100)\n",
    "        }}\n",
    "\n",
    "    # Define task list for preparation\n",
    "    for task_key in tasks.keys():\n",
    "        # Note: fixed table names cause a race condition when a DAG run is triggered before the previous one finishes.\n",
    "        table_name = f'{PROJECT_ID}.{BQ_DATASET}.{BQ_TABLE}_{task_key}'\n",
    "        task = tasks[task_key]\n",
    "        task['gcs_file_name'] = f'{job_experiment_root}/data/ds_{task_key}.csv'\n",
    "        \n",
    "        # Delete temporary tables from the previous training run\n",
    "        task['delete_table'] = BigQueryTableDeleteOperator(\n",
    "            task_id = 'delete_table_' + task_key,\n",
    "            deletion_dataset_table = table_name,\n",
    "            ignore_if_missing = True)\n",
    "\n",
    "        # Split and copy the source BQ table into 'dataset_range'-sized segments\n",
    "        task['split_table'] = BigQueryOperator(\n",
    "            task_id = 'split_table_' + task_key,\n",
    "            use_legacy_sql=False,\n",
    "            destination_dataset_table = table_name,\n",
    "            sql = BQ_QUERY.format(task['dataset_range'],task['limit']),\n",
    "            location = REGION)\n",
    "        \n",
    "        # Extract split tables to CSV files in GCS\n",
    "        task['extract_to_gcs'] = BigQueryToCloudStorageOperator(\n",
    "            task_id = 'extract_to_gcs_' + task_key,\n",
    "            source_project_dataset_table = table_name,\n",
    "            destination_cloud_storage_uris = [task['gcs_file_name']],\n",
    "            field_delimiter = csv_delimiter)\n",
    "\n",
    "    joiner_1 = PythonOperator(\n",
    "        task_id = 'joiner_1',\n",
    "        python_callable = joiner_func,\n",
    "        op_kwargs={ 'training_gcs_file_name': tasks['training']['gcs_file_name'],\n",
    "                    'eval_gcs_file_name': tasks['eval']['gcs_file_name']})\n",
    "\n",
    "    # Create a unique job name\n",
    "    submit_time = datetime.now().strftime('%Y%m%d_%H%M%S')\n",
    "    job_name = f'training_job_{submit_time}'\n",
    "    job_dir = f'{job_experiment_root}/dmt_{submit_time}'\n",
    "    \n",
    "    # Train model in Cloud AI Platform Training.\n",
    "    # There are 3 options to run the training job:\n",
    "    # 1 - gcloud CLI command with a BashOperator\n",
    "    # 2 - API client (https://cloud.google.com/ai-platform/training/docs/python-client-library) from a PythonOperator\n",
    "    # 3 - The native MLEngineStartTrainingJobOperator Airflow operator\n",
    "    #\n",
    "    # This example uses the 1st option, because it is the only way to\n",
    "    # specify a custom trainer Docker image.\n",
    "    \n",
    "    # Template combining str.format() placeholders ({variable}) with Jinja placeholders ({{variable}})\n",
    "    training_command_tmpl=\"\"\"gcloud ai-platform jobs submit training {job_name} \\\n",
    "        --region {region} \\\n",
    "        --scale-tier BASIC \\\n",
    "        --job-dir {job_dir} \\\n",
    "        --package-path /home/airflow/gcs/data/multi_model_trainer_dag/package/training/ \\\n",
    "        --module-name training.task \\\n",
    "        --master-image-uri {ml_image_uri} \\\n",
    "        --stream-logs \\\n",
    "        -- \\\n",
    "        --experiment_name {experiment_name} \\\n",
    "        --gcs_train_source {gcs_train_source} \\\n",
    "        --gcs_eval_source {gcs_eval_source} \\\n",
    "        --version_tag {version_tag} \\\n",
    "        --number_of_estimators {number_of_estimators} \\\n",
    "        --job_name {job_name}\"\"\"\n",
    "\n",
    "    training_tasks = []\n",
    "    for training_id in range(0, {{var['number_of_parallel_trainings']}}):\n",
    "        # Simulated training: test the environment without real training, logging random metrics instead\n",
    "        # trainer = PythonOperator(\n",
    "        #    task_id = f'trainer_{training_id}',\n",
    "        #    python_callable = fake_model_tracking,\n",
    "        #    templates_dict={'job_name': 'training_job_{{ \"{{ ts_nodash }}\" }}'+f'_{training_id}'})\n",
    "        \n",
    "        trainer = BashOperator(\n",
    "            task_id=f'trainer_{training_id}',\n",
    "            bash_command=training_command_tmpl.format(\n",
    "                 region = REGION,\n",
    "                 job_name = 'training_job_{{ \"{{ ts_nodash }}\" }}'+f'_{training_id}',\n",
    "                 job_dir = job_dir+f'_{training_id}',\n",
    "                 ml_image_uri = ML_IMAGE_URI,\n",
    "                 gcs_train_source = tasks['training']['gcs_file_name'],\n",
    "                 gcs_eval_source = tasks['eval']['gcs_file_name'],\n",
    "                 experiment_name = experiment_name,\n",
    "                 version_tag = f'trainer_{training_id}',\n",
    "                 # The only difference between the trainings:\n",
    "                 number_of_estimators = random.randrange({{var['range_of_estimators_lower']}} , {{var['range_of_estimators_upper']}}))\n",
    "        )\n",
    "\n",
    "        training_tasks.append(trainer)\n",
    "    \n",
    "    # Select the best model of this run\n",
    "    # Use a distinct variable name so the operator does not shadow the model_blessing() callable\n",
    "    model_blessing_op = PythonOperator(\n",
    "        task_id = 'model_blessing',\n",
    "        python_callable = model_blessing,\n",
    "        templates_dict={'job_name': 'training_job_{{ \"{{ ts_nodash }}\" }}'})\n",
    "\n",
    "    # Chain the data preparation tasks\n",
    "    for task_key, task in tasks.items():\n",
    "        task['delete_table'] >> task['split_table'] >> task['extract_to_gcs'] >> joiner_1\n",
    "\n",
    "    # Branching and merging of the training tasks\n",
    "    for trainer in training_tasks:\n",
    "        trainer.set_upstream(joiner_1)\n",
    "        model_blessing_op.set_upstream(trainer)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2 Copy DAG file to Cloud Composer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!gcloud composer environments storage dags import \\\n",
    "  --environment {COMPOSER_NAME}  \\\n",
    "  --location {REGION} \\\n",
    "  --source multi_model_trainer_dag.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.3 Start training pipeline\n",
    "\n",
    "* Navigate to Cloud Composer and click on 'Airflow' in the [Composer environment list](http://console.cloud.google.com/composer/environments)\n",
    "* Start the pipeline by enabling it (off -> on)\n",
    "* You can follow the pipeline's progress on the 'Tree View' page\n",
    "* Check the training jobs at https://console.cloud.google.com/ai-platform/jobs\n",
    "* When all jobs and the pipeline finish, you can check the results of this run in MLflow and in the GCS folder\n",
    "\n",
    "(next cell creates links to MLflow and GCS)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "display(HTML(f'<h4><a href=\"{MLFLOW_TRACKING_EXTERNAL_URI}\" rel=\"noopener noreferrer\" target=\"_blank\">Open MLflow UI and check metrics</a></h4>'))\n",
    "display(HTML(f'<h4><a href=\"https://console.cloud.google.com/storage/browser/{experiment_path}/experiments/{experiment_name}\" rel=\"noopener noreferrer\" target=\"_blank\">Open \"{experiment_name}\" experiment folder in GCS</a></h4>'))\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
