{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Convolutional Neural Network on pixel neighborhoods\n",
    "\n",
    "This notebook reads the pixel-neighborhood data written out by the Dataflow program of [1_explore.ipynb](./1_explore.ipynb) and trains a simple convnet model on Cloud ML Engine.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Collecting cloudml-hypertune\n",
      "Installing collected packages: cloudml-hypertune\n",
      "Successfully installed cloudml-hypertune-0.1.0.dev5\n",
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "%pip install cloudml-hypertune"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "BUCKET = 'cloud-training-demos-ml'\n",
    "PROJECT = 'cloud-training-demos'\n",
    "REGION = 'us-central1'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ['BUCKET'] = BUCKET\n",
    "os.environ['PROJECT'] = PROJECT\n",
    "os.environ['REGION'] = REGION"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Updated property [core/project].\n",
      "Updated property [compute/region].\n"
     ]
    }
   ],
   "source": [
    "%%bash\n",
    "gcloud config set project $PROJECT\n",
    "gcloud config set compute/region $REGION"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "%load_ext autoreload\n",
    "%aimport ltgpred"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.13.1\n"
     ]
    }
   ],
   "source": [
    "import tensorflow as tf\n",
    "print(tf.__version__)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train CNN model locally"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Copying gs://cloud-training-demos-ml/lightning/preproc_0.02_32_2/tfrecord/eval-00000-of-00253...\n",
      "Copying gs://cloud-training-demos-ml/lightning/preproc_0.02_32_2/tfrecord/train-00000-of-00522...\n",
      "| [2 files][  5.4 GiB/  5.4 GiB]   77.5 MiB/s                                   \n",
      "Operation completed over 2 objects/5.4 GiB.                                      \n"
     ]
    }
   ],
   "source": [
    "!mkdir -p preproc/tfrecord\n",
    "!gcloud storage cp gs://$BUCKET/lightning/preproc_0.02_32_2/tfrecord/*-00000-* preproc/tfrecord"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "export PYTHONPATH=${PWD}/ltgpred/\n",
    "OUTDIR=${PWD}/cnn_trained\n",
    "DATADIR=${PWD}/preproc/tfrecord\n",
    "rm -rf $OUTDIR\n",
    "mkdir -p $OUTDIR\n",
    "python3 -m trainer.train_cnn \\\n",
    "    --train_steps=10 --num_eval_records=512 --train_batch_size=16 --num_cores=1 --nlayers=5 --arch=convnet \\\n",
    "    --job-dir=$OUTDIR --train_data_path=${DATADIR}/train* --eval_data_path=${DATADIR}/eval*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/\n",
      "/home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/exporter\n",
      "/home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/exporter/1556043088\n",
      "/home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/exporter/1556043088/variables\n",
      "/home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/exporter/1556043088/variables/variables.data-00000-of-00001\n",
      "/home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/exporter/1556043088/variables/checkpoint\n",
      "/home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/exporter/1556043088/variables/variables.index\n",
      "/home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/exporter/1556043088/assets\n",
      "/home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/exporter/1556043088/assets/saved_model.json\n",
      "/home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/exporter/1556043088/saved_model.pb\n"
     ]
    }
   ],
   "source": [
    "!find /home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:\n",
      "\n",
      "signature_def['__saved_model_init_op']:\n",
      "  The given SavedModel SignatureDef contains the following input(s):\n",
      "  The given SavedModel SignatureDef contains the following output(s):\n",
      "    outputs['__saved_model_init_op'] tensor_info:\n",
      "        dtype: DT_INVALID\n",
      "        shape: unknown_rank\n",
      "        name: init_1\n",
      "  Method name is: \n",
      "\n",
      "signature_def['serving_default']:\n",
      "  The given SavedModel SignatureDef contains the following input(s):\n",
      "    inputs['ltg'] tensor_info:\n",
      "        dtype: DT_FLOAT\n",
      "        shape: (-1, 4225)\n",
      "        name: ltg:0\n",
      "    inputs['ref'] tensor_info:\n",
      "        dtype: DT_FLOAT\n",
      "        shape: (-1, 4225)\n",
      "        name: ref:0\n",
      "  The given SavedModel SignatureDef contains the following output(s):\n",
      "    outputs['model'] tensor_info:\n",
      "        dtype: DT_FLOAT\n",
      "        shape: (-1, 1)\n",
      "        name: model/dense_8/Sigmoid:0\n",
      "  Method name is: tensorflow/serving/predict\n"
     ]
    }
   ],
   "source": [
    "%%bash\n",
    "saved_model_cli show --all --dir $(ls -d -1 /home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/exporter/* | tail -1)"
   ]
  },
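  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional sanity check, the exported SavedModel can be invoked locally with `saved_model_cli run`. The zero-filled `ltg` and `ref` inputs below are placeholders that merely match the (-1, 4225) signature shown above, not real radar data, so this only exercises the serving plumbing:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "saved_model_cli run \\\n",
    "    --dir $(ls -d -1 /home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/exporter/* | tail -1) \\\n",
    "    --tag_set serve --signature_def serving_default \\\n",
    "    --input_exprs 'ltg=np.zeros((1,4225));ref=np.zeros((1,4225))'"
   ]
  },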
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "export CLOUDSDK_PYTHON=$(which python3)\n",
    "OUTDIR=${PWD}/cnn_trained\n",
    "DATADIR=${PWD}/preproc/tfrecord\n",
    "rm -rf $OUTDIR\n",
    "gcloud ml-engine local train \\\n",
    "    --module-name=trainer.train_cnn --package-path=${PWD}/ltgpred/trainer \\\n",
    "    -- \\\n",
    "    --train_steps=10 --num_eval_records=512 --train_batch_size=16 --num_cores=1 --nlayers=5 \\\n",
    "    --job-dir=$OUTDIR --train_data_path=${DATADIR}/train* --eval_data_path=${DATADIR}/eval*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training lighting prediction model on CMLE using GPU"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "custom_model_m_gpu is a machine with 4 K-80 GPUs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile largemachine.yaml\n",
    "trainingInput:\n",
    "  scaleTier: CUSTOM\n",
    "  masterType: complex_model_m_p100"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "#DATADIR=gs://$BUCKET/lightning/preproc/tfrecord\n",
    "DATADIR=gs://$BUCKET/lightning/preproc_0.02_32_2/tfrecord\n",
    "\n",
    "#for ARCH in feateng convnet dnn resnet; do\n",
    "for ARCH in convnet; do\n",
    "  JOBNAME=ltgpred_${ARCH}_$(date -u +%y%m%d_%H%M%S)\n",
    "  OUTDIR=gs://${BUCKET}/lightning/${ARCH}_trained_gpu\n",
    "  gcloud storage rm --recursive --continue-on-error $OUTDIR\n",
    "  gcloud ml-engine jobs submit training $JOBNAME \\\n",
    "      --module-name trainer.train_cnn --package-path ${PWD}/ltgpred/trainer --job-dir=$OUTDIR \\\n",
    "      --region=${REGION} --scale-tier CUSTOM --config largemachine.yaml \\\n",
    "      --python-version 3.5 --runtime-version 1.10 \\\n",
    "      -- \\\n",
    "      --train_data_path ${DATADIR}/train-* --eval_data_path ${DATADIR}/eval-* \\\n",
    "      --train_steps 5000 --train_batch_size 256 --num_cores 4 --arch $ARCH \\\n",
    "      --num_eval_records 1024000 --nlayers 5 --dprob 0 --ksize 3 --nfil 10 --learning_rate 0.01 \n",
    "done"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "### Results (Dropout=0)\n",
    "| Architecture | Training time | Validation RMSE | Validation Accuracy |\n",
    "| --- | --- | --- | --- |\n",
    "| feateng | 23 min | 0.2620 | 0.8233 |\n",
    "| dnn | 62 min | 0.2752 | 0.8272 |\n",
    "| convnet | 24 min | 0.2261 | 0.8462 |\n",
    "| resnet | 63 min | 0.3088 | 0.7142 |\n",
    "\n",
    "### Results (Dropout=0.05)\n",
    "| Architecture | Training time | Validation RMSE | Validation Accuracy |\n",
    "| --- | --- | --- | --- |\n",
    "| feateng | 20 min | 0.2641 | 0.8258 |\n",
    "| dnn | 58 min | 0.2284 | 0.8412 |\n",
    "| convnet | 23 min | 0.2268 | 0.8459 |\n",
    "| resnet | 80 min | 0.3005 | 0.6887 |\n",
    "\n",
    "\n",
    "Other than for the dnn, dropout doesn't seem to help.  Based on these results, let's train a <b> convnet with no dropout </b>.\n",
    "\n",
    "All the results above are for \n",
    "<pre>\n",
    "--train_steps=5000 --train_batch_size=256 --num_cores=4 --arch=$ARCH \\\n",
    "--num_eval_records=1024000 --nlayers=5 --dprob=... --ksize=3 --nfil=10 --learning_rate=0.01\n",
    "</pre>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Hyperparameter tuning on GPU"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting hyperparam_gpu.yaml\n"
     ]
    }
   ],
   "source": [
    "%%writefile hyperparam_gpu.yaml\n",
    "trainingInput:\n",
    "  scaleTier: CUSTOM\n",
    "  masterType: complex_model_m_p100\n",
    "  hyperparameters:\n",
    "    goal: MAXIMIZE\n",
    "    maxTrials: 30\n",
    "    maxParallelTrials: 2\n",
    "    hyperparameterMetricTag: val_acc\n",
    "    params:\n",
    "    - parameterName: learning_rate\n",
    "      type: DOUBLE\n",
    "      minValue: 0.01\n",
    "      maxValue: 0.1\n",
    "      scaleType: UNIT_LOG_SCALE\n",
    "    - parameterName: nfil\n",
    "      type: INTEGER\n",
    "      minValue: 5\n",
    "      maxValue: 30\n",
    "      scaleType: UNIT_LINEAR_SCALE\n",
    "    - parameterName: nlayers\n",
    "      type: INTEGER\n",
    "      minValue: 1\n",
    "      maxValue: 5\n",
    "      scaleType: UNIT_LINEAR_SCALE     \n",
    "    - parameterName: train_batch_size # has to be multiple of 128\n",
    "      type: DISCRETE\n",
    "      discreteValues: [128, 256, 512, 1024, 2048, 4096]       \n",
    "\n",
    "#    - parameterName: arch\n",
    "#      type: CATEGORICAL\n",
    "#      categoricalValues: [\"convnet\", \"feateng\", \"resnet\", \"dnn\"]       "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "OUTDIR=gs://${BUCKET}/lightning/convnet_trained_gpu_hparam\n",
    "DATADIR=gs://$BUCKET/lightning/preproc_0.02_32_2/tfrecord\n",
    "JOBNAME=ltgpred_hparam_$(date -u +%y%m%d_%H%M%S)\n",
    "gcloud storage rm --recursive --continue-on-error $OUTDIR\n",
    "gcloud ml-engine jobs submit training $JOBNAME \\\n",
    "    --module-name=ltgpred.trainer.train_cnn --package-path=${PWD}/ltgpred --job-dir=$OUTDIR \\\n",
    "    --region=${REGION} --scale-tier=CUSTOM --config=hyperparam_gpu.yaml \\\n",
    "    --python-version=3.5 --runtime-version=1.10 \\\n",
    "    -- \\\n",
    "    --train_data_path=${DATADIR}/train-* --eval_data_path=${DATADIR}/eval-* \\\n",
    "    --train_steps=5000 --train_batch_size=256 --num_cores=4 --arch=convnet \\\n",
    "    --num_eval_records=1024000 --nlayers=5 --dprob=0 --ksize=3 --nfil=10 --learning_rate=0.01 --skipexport"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The hyperparameter training took 7.5 hours for me, cost 215 ML units (about 110USD list price) and had this as the best set of parameters:\n",
    "<pre>\n",
    "{\n",
    "      \"trialId\": \"2\",\n",
    "      \"hyperparameters\": {\n",
    "        \"nfil\": \"10\",\n",
    "        \"learning_rate\": \"0.02735530997243607\",\n",
    "        \"train_batch_size\": \"1024\",\n",
    "        \"nlayers\": \"3\"\n",
    "      },\n",
    "      \"finalMetric\": {\n",
    "        \"trainingStep\": \"1\",\n",
    "        \"objectiveValue\": 0.846787109375\n",
    "      }\n",
    "    },\n",
    "</pre>"
   ]
  },
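  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Trial results like the one above can be retrieved with `gcloud ml-engine jobs describe`; substitute the actual job name echoed by the submission cell (the name below is a made-up example of the generated pattern):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "# Replace with the JOBNAME printed by the hyperparameter-tuning submission above.\n",
    "gcloud ml-engine jobs describe ltgpred_hparam_190423_000000 --format=json"
   ]
  },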
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training lightning prediction model on CMLE using TPUs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, let's train on the TPU. Because our batch size is 4x, we can train for 4x fewer steps"
   ]
  },
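  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick arithmetic check of that claim, using the batch sizes from the GPU and TPU training cells (256 and 1024):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 5000 GPU steps at batch size 256 and 1250 TPU steps at batch size 1024\n",
    "# process the same number of training examples.\n",
    "assert 5000 * 256 == 1250 * 1024\n",
    "print(5000 * 256)  # 1,280,000 examples either way"
   ]
  },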
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "OUTDIR=gs://${BUCKET}/lightning/cnn_trained_tpu\n",
    "DATADIR=gs://$BUCKET/lightning/preproc_0.02_32_2/tfrecord\n",
    "JOBNAME=ltgpred_cnn_$(date -u +%y%m%d_%H%M%S)\n",
    "gcloud storage rm --recursive --continue-on-error $OUTDIR\n",
    "gcloud ml-engine jobs submit training $JOBNAME \\\n",
    "    --module-name ltgpred.trainer.train_cnn --package-path ${PWD}/ltgpred --job-dir=$OUTDIR \\\n",
    "    --region ${REGION} --scale-tier BASIC_TPU \\\n",
    "    --python-version 3.5 --runtime-version 1.12 \\\n",
    "    -- \\\n",
    "    --train_data_path ${DATADIR}/train* --eval_data_path ${DATADIR}/eval* \\\n",
    "    --train_steps 1250 --train_batch_size 1024 --num_cores 8  --use_tpu \\\n",
    "    --num_eval_records 1024000 --nlayers 5 --dprob 0 --ksize 3 --nfil 10 --learning_rate 0.01"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When I ran it, training finished with accuracy=0.82 (no change)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## More data, harder problem"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "#DATADIR=gs://$BUCKET/lightning/preproc/tfrecord\n",
    "#DATADIR=gs://$BUCKET/lightning/preproc_0.02_32_2/tfrecord\n",
    "DATADIR=gs://$BUCKET/lightning/preproc_0.02_32_1/tfrecord  # also 5-min validity\n",
    "\n",
    "#for ARCH in feateng convnet dnn resnet; do\n",
    "for ARCH in feateng; do\n",
    "  JOBNAME=ltgpred_${ARCH}_$(date -u +%y%m%d_%H%M%S)\n",
    "  OUTDIR=gs://${BUCKET}/lightning/${ARCH}_trained_gpu\n",
    "  gcloud storage rm --recursive --continue-on-error $OUTDIR\n",
    "  gcloud ml-engine jobs submit training $JOBNAME \\\n",
    "      --module-name ltgpred.trainer.train_cnn --package-path ${PWD}/ltgpred --job-dir=$OUTDIR \\\n",
    "      --region=${REGION} --scale-tier CUSTOM --config largemachine.yaml \\\n",
    "      --python-version 3.5 --runtime-version 1.12 \\\n",
    "      -- \\\n",
    "      --train_data_path ${DATADIR}/train-* --eval_data_path ${DATADIR}/eval-* \\\n",
    "      --train_steps 5000 --train_batch_size 256 --num_cores 4 --arch $ARCH \\\n",
    "      --num_eval_records 1024000 --nlayers 5 --dprob 0 --ksize 3 --nfil 10 --learning_rate 0.01 \n",
    "done"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With 100 days of data:\n",
    "\n",
    "| Architecture | Training time | Validation RMSE | Validation Accuracy |\n",
    "| --- | --- | --- | --- |\n",
    "| feateng | 26 min | 0.2986 | 0.7901 |\n",
    "| dnn | 40 min | 0.2995 | 0.7916 |\n",
    "| convnet | 28 min | 0.2735 | 0.8141 |\n",
    "| resnet | 68 min | 0.5135 | 0.4852 |\n",
    "\n",
    "With full year of data:\n",
    "\n",
    "| Architecture | Training time | Validation RMSE | Validation Accuracy |\n",
    "| --- | --- | --- | --- |\n",
    "| feateng | 26 min | 0.2986 | 0.7901 |\n",
    "| dnn | 240 min | 0.2682 | 0.8114 |\n",
    "| convnet | 150 min | 0.2557 | 0.8192 |\n",
    "| resnet | 68 min | 0.5135 | 0.4852 |\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright 2018 Google Inc. Licensed under the Apache License, Version 2.0 (the \\\"License\\\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
