{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook walks you through how to use accelerators for Kubeflow Pipelines steps.\n",
    "\n",
    "# Preparation\n",
    "\n",
    "If you installed Kubeflow via [kfctl](https://www.kubeflow.org/docs/gke/customizing-gke/#common-customizations), these steps will have already been done, and you can skip this section.\n",
    "\n",
    "If you installed Kubeflow Pipelines via the [Google Cloud AI Platform Pipelines UI](https://console.cloud.google.com/ai-platform/pipelines/) or the [standalone manifest](https://github.com/kubeflow/pipelines/tree/master/manifests/kustomize), you will need to follow these steps to set up your GPU environment.\n",
    "\n",
    "## Add GPU nodes to your cluster\n",
    "\n",
    "To see which accelerators are available in each zone, run the following command or check the [documentation](https://cloud.google.com/compute/docs/gpus#gpus-list).\n",
    "\n",
    "```\n",
    "gcloud compute accelerator-types list\n",
    "```\n",
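    "\n",
    "You can also filter the list to a single zone (the zone below is illustrative; substitute your own):\n",
    "\n",
    "```shell\n",
    "gcloud compute accelerator-types list --filter=zone:us-west1-a\n",
    "```\n",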
    "\n",
    "You may also want to check, and if necessary increase, your GCP **GPU quota** to make sure you still have GPU quota available in the region.\n",
    "\n",
    "To reduce costs, you may want to create a zero-sized GPU node pool and enable autoscaling.\n",
    "\n",
    "Here is an example that creates a P100 GPU node pool for an existing cluster.\n",
    "\n",
    "```shell\n",
    "# You may customize these parameters.\n",
    "export GPU_POOL_NAME=p100pool\n",
    "export CLUSTER_NAME=existingClusterName\n",
    "export CLUSTER_ZONE=us-west1-a\n",
    "export GPU_TYPE=nvidia-tesla-p100\n",
    "export GPU_COUNT=1\n",
    "export MACHINE_TYPE=n1-highmem-16\n",
    "\n",
    "\n",
    "# Node pool creation may take several minutes.\n",
    "gcloud container node-pools create ${GPU_POOL_NAME} \\\n",
    "  --accelerator type=${GPU_TYPE},count=${GPU_COUNT} \\\n",
    "  --zone ${CLUSTER_ZONE} --cluster ${CLUSTER_NAME} \\\n",
    "  --num-nodes=0 --machine-type=${MACHINE_TYPE} --min-nodes=0 --max-nodes=5 --enable-autoscaling \\\n",
    "  --scopes=cloud-platform\n",
    "```\n",
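    "\n",
    "After the command completes, you can confirm the pool and its accelerator and autoscaling settings:\n",
    "\n",
    "```shell\n",
    "gcloud container node-pools describe ${GPU_POOL_NAME} \\\n",
    "  --cluster ${CLUSTER_NAME} --zone ${CLUSTER_ZONE}\n",
    "```\n",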
    "\n",
    "In this sample we specify **--scopes=cloud-platform** ([more info](https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create#--scopes)). This scope allows jobs on the node pool to use the Compute Engine default service account to access GCP APIs (such as GCS). Alternatively, you can use [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) or [Application Default Credentials](https://cloud.google.com/docs/authentication/production) instead of **--scopes=cloud-platform**.\n",
    "\n",
    "## Install NVIDIA device driver to the cluster\n",
    "\n",
    "After adding GPU nodes to your cluster, you need to install NVIDIA’s device drivers to the nodes. Google provides a GKE `DaemonSet` that automatically installs the drivers for you.\n",
    "\n",
    "To deploy the installation DaemonSet, run the following command. You can run this command any time (even before you create your node pool), and you only need to do this once per cluster.\n",
    "\n",
    "```shell\n",
    "kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml\n",
    "```\n",
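    "\n",
    "To verify that the driver installer pods are running on your GPU nodes, you can list them by label (the label below matches the DaemonSet in that manifest at the time of writing; adjust it if the manifest changes):\n",
    "\n",
    "```shell\n",
    "kubectl get pods -n kube-system -l k8s-app=nvidia-driver-installer\n",
    "```\n",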
    "\n",
    "# Consume GPU via Kubeflow Pipelines SDK\n",
    "\n",
    "Once your cluster is set up to support GPUs, the next step is to indicate which steps in your pipelines should use accelerators, and what type they should use. \n",
    "Here is a [document](https://www.kubeflow.org/docs/gke/pipelines/enable-gpu-and-tpu/) that describes the options.\n",
    "\n",
    "The following is an example 'smoke test' pipeline, to see if your cluster setup is working properly.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import kfp\n",
    "from kfp import dsl\n",
    "\n",
    "def gpu_smoke_check_op():\n",
    "    op = dsl.ContainerOp(\n",
    "        name='check',\n",
    "        image='tensorflow/tensorflow:latest-gpu',\n",
    "        command=['sh', '-c'],\n",
    "        arguments=['nvidia-smi'])\n",
    "    op.container.set_gpu_limit(1)\n",
    "    return op\n",
    "\n",
    "@dsl.pipeline(\n",
    "    name='GPU smoke check',\n",
    "    description='Smoke check as to whether the GPU environment is ready.'\n",
    ")\n",
    "def gpu_pipeline():\n",
    "    gpu_smoke_check = gpu_smoke_check_op()\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    kfp.compiler.Compiler().compile(gpu_pipeline, 'gpu_smoking_check.yaml')"
   ]
  },
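  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can upload the compiled `gpu_smoking_check.yaml` through the Kubeflow Pipelines UI, or submit it programmatically. The host below is a placeholder, not a real endpoint; point it at your own KFP installation:\n",
    "\n",
    "```python\n",
    "import kfp\n",
    "\n",
    "# Placeholder endpoint; replace with your Kubeflow Pipelines host URL.\n",
    "client = kfp.Client(host='http://localhost:8080')\n",
    "client.create_run_from_pipeline_package('gpu_smoking_check.yaml', arguments={})\n",
    "```\n"
   ]
  },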
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You may see a warning in the Kubeflow Pipelines logs saying \"Insufficient nvidia.com/gpu\". If so, your GPU-enabled node is probably still spinning up; wait a few minutes. You can check the current nodes in your cluster with:\n",
    "\n",
    "```\n",
    "kubectl get nodes -o wide\n",
    "```\n",
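    "\n",
    "Once the node is up, the GPU should appear on it as an allocatable `nvidia.com/gpu` resource:\n",
    "\n",
    "```\n",
    "kubectl describe nodes | grep -A 2 'nvidia.com/gpu'\n",
    "```\n",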
    "\n",
    "If everything runs as expected, the `nvidia-smi` command should list the CUDA version, GPU type, usage, etc. (See the logs panel in the pipeline UI to view output).\n",
    "\n",
    "> You may also notice that after the pipeline step's GKE pod has finished, the new GPU node is still there. The GKE cluster autoscaler will remove the node after it has been idle for a certain period. More info is [here](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Multiple GPUs pool in one cluster\n",
    "\n",
    "You may want a single cluster to support more than one type of GPU:\n",
    "\n",
    "- There are several types of GPUs.\n",
    "- Each zone supports only a subset of the GPU types ([documentation](https://cloud.google.com/compute/docs/gpus#gpus-list)).\n",
    "\n",
    "Since a GPU node pool can be created with `--num-nodes=0` (so it costs nothing while there is no workload), you can create multiple node pools, one for each GPU type.\n",
    "\n",
    "## Add additional GPU nodes to your cluster\n",
    "\n",
    "\n",
    "In a previous section, we added a node pool for P100s. Here we add another pool for V100s.\n",
    "\n",
    "```shell\n",
    "# You may customize these parameters.\n",
    "export GPU_POOL_NAME=v100pool\n",
    "export CLUSTER_NAME=existingClusterName\n",
    "export CLUSTER_ZONE=us-west1-a\n",
    "export GPU_TYPE=nvidia-tesla-v100\n",
    "export GPU_COUNT=1\n",
    "export MACHINE_TYPE=n1-highmem-8\n",
    "\n",
    "\n",
    "# Node pool creation may take several minutes.\n",
    "gcloud container node-pools create ${GPU_POOL_NAME} \\\n",
    "  --accelerator type=${GPU_TYPE},count=${GPU_COUNT} \\\n",
    "  --zone ${CLUSTER_ZONE} --cluster ${CLUSTER_NAME} \\\n",
    "  --num-nodes=0 --machine-type=${MACHINE_TYPE} --min-nodes=0 --max-nodes=5 --enable-autoscaling\n",
    "```\n",
    "\n",
    "## Consume a specific GPU type via the Kubeflow Pipelines SDK\n",
    "\n",
    "If your cluster has multiple GPU node pools, you can explicitly specify that a given pipeline step should use a particular type of accelerator.\n",
    "This example shows how to use P100s for one pipeline step, and V100s for another."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import kfp\n",
    "from kfp import dsl\n",
    "\n",
    "def gpu_p100_op():\n",
    "    op = dsl.ContainerOp(\n",
    "        name='check_p100',\n",
    "        image='tensorflow/tensorflow:latest-gpu',\n",
    "        command=['sh', '-c'],\n",
    "        arguments=['nvidia-smi'])\n",
    "    op.container.set_gpu_limit(1)\n",
    "    op.add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-p100')\n",
    "    return op\n",
    "\n",
    "def gpu_v100_op():\n",
    "    op = dsl.ContainerOp(\n",
    "        name='check_v100',\n",
    "        image='tensorflow/tensorflow:latest-gpu',\n",
    "        command=['sh', '-c'],\n",
    "        arguments=['nvidia-smi'])\n",
    "    op.container.set_gpu_limit(1)\n",
    "    op.add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-v100')\n",
    "    return op\n",
    "\n",
    "@dsl.pipeline(\n",
    "    name='GPU smoke check',\n",
    "    description='Smoke check as to whether GPU env is ready.'\n",
    ")\n",
    "def gpu_pipeline():\n",
    "    gpu_p100 = gpu_p100_op()\n",
    "    gpu_v100 = gpu_v100_op()\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    kfp.compiler.Compiler().compile(gpu_pipeline, 'gpu_smoking_check.yaml')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You should see different `nvidia-smi` logs from the two pipeline steps."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using Preemptible GPUs\n",
    "\n",
    "A [preemptible GPU resource](https://cloud.google.com/compute/docs/instances/preemptible#preemptible_with_gpu) is cheaper, but using these instances means that a pipeline step can be aborted and then retried. Pipeline steps run on preemptible instances must therefore be idempotent (the step gives the same results if run again), or must create some kind of checkpoint so that a retry can pick up where it left off. To use preemptible GPUs, create a node pool as follows; when specifying a pipeline, you can then indicate that a step should use the preemptible node pool.\n",
    "\n",
    "The only difference in the following node-pool creation example is that the **--preemptible** and **--node-taints=preemptible=true:NoSchedule** parameters have been added.\n",
    "\n",
    "```shell\n",
    "export GPU_POOL_NAME=v100pool-preemptible\n",
    "export CLUSTER_NAME=existingClusterName\n",
    "export CLUSTER_ZONE=us-west1-a\n",
    "export GPU_TYPE=nvidia-tesla-v100\n",
    "export GPU_COUNT=1\n",
    "export MACHINE_TYPE=n1-highmem-8\n",
    "\n",
    "gcloud container node-pools create ${GPU_POOL_NAME} \\\n",
    "  --accelerator type=${GPU_TYPE},count=${GPU_COUNT} \\\n",
    "  --zone ${CLUSTER_ZONE} --cluster ${CLUSTER_NAME} \\\n",
    "  --preemptible \\\n",
    "  --node-taints=preemptible=true:NoSchedule \\\n",
    "  --num-nodes=0 --machine-type=${MACHINE_TYPE} --min-nodes=0 --max-nodes=5 --enable-autoscaling\n",
    "```\n",
    "\n",
    "Then, you can define a pipeline as follows (note the use of `use_preemptible_nodepool()`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "import kfp\n",
    "import kfp.gcp as gcp\n",
    "from kfp import dsl\n",
    "\n",
    "def gpu_p100_op():\n",
    "    op = dsl.ContainerOp(\n",
    "        name='check_p100',\n",
    "        image='tensorflow/tensorflow:latest-gpu',\n",
    "        command=['sh', '-c'],\n",
    "        arguments=['nvidia-smi'])\n",
    "    op.container.set_gpu_limit(1)\n",
    "    op.add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-p100')\n",
    "    return op\n",
    "\n",
    "def gpu_v100_op():\n",
    "    op = dsl.ContainerOp(\n",
    "        name='check_v100',\n",
    "        image='tensorflow/tensorflow:latest-gpu',\n",
    "        command=['sh', '-c'],\n",
    "        arguments=['nvidia-smi'])\n",
    "    op.container.set_gpu_limit(1)\n",
    "    op.add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-v100')\n",
    "    return op\n",
    "\n",
    "def gpu_v100_preemptible_op():\n",
    "    v100_op = dsl.ContainerOp(\n",
    "        name='check_v100_preemptible',\n",
    "        image='tensorflow/tensorflow:latest-gpu',\n",
    "        command=['sh', '-c'],\n",
    "        arguments=['nvidia-smi'])\n",
    "    v100_op.container.set_gpu_limit(1)\n",
    "    v100_op.add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-v100')\n",
    "    v100_op.apply(gcp.use_preemptible_nodepool(hard_constraint=True))\n",
    "    return v100_op\n",
    "\n",
    "@dsl.pipeline(\n",
    "    name='GPU smoke check',\n",
    "    description='Smoke check as to whether the GPU environment is ready.'\n",
    ")\n",
    "def gpu_pipeline():\n",
    "    gpu_p100 = gpu_p100_op()\n",
    "    gpu_v100 = gpu_v100_op()\n",
    "    gpu_v100_preemptible = gpu_v100_preemptible_op()\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    kfp.compiler.Compiler().compile(gpu_pipeline, 'gpu_smoking_check.yaml')"
   ]
  },
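  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As noted above, a step running on a preemptible node should be able to resume after being preempted. The following standalone sketch (not KFP-specific; the local file path stands in for durable storage such as GCS) illustrates the checkpoint-resume pattern:\n",
    "\n",
    "```python\n",
    "import json\n",
    "import os\n",
    "\n",
    "CHECKPOINT = 'checkpoint.json'  # in a real step, use durable storage such as GCS\n",
    "\n",
    "def load_checkpoint():\n",
    "    # Return the number of completed work units, or 0 on a fresh start.\n",
    "    if os.path.exists(CHECKPOINT):\n",
    "        with open(CHECKPOINT) as f:\n",
    "            return json.load(f)['done']\n",
    "    return 0\n",
    "\n",
    "def save_checkpoint(done):\n",
    "    with open(CHECKPOINT, 'w') as f:\n",
    "        json.dump({'done': done}, f)\n",
    "\n",
    "# If the step is preempted and retried, it resumes from the checkpoint\n",
    "# instead of redoing all 10 units of work.\n",
    "for unit in range(load_checkpoint(), 10):\n",
    "    # ... do one idempotent unit of work ...\n",
    "    save_checkpoint(unit + 1)\n",
    "```\n"
   ]
  },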
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# TPU\n",
    "Google's TPUs can be faster and have a lower total cost of ownership (TCO) than GPUs for some workloads. To consume TPUs, there is no need to create a node pool; just use the KFP SDK as described in this [doc](https://www.kubeflow.org/docs/gke/pipelines/enable-gpu-and-tpu/#configure-containerop-to-consume-tpus). Note that TPUs are not yet available in all regions.\n",
    "\n"
   ]
  },
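  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, per the linked doc, a step can request TPUs by applying `kfp.gcp.use_tpu`; the core count, TPU version, and TF version below are illustrative and should be set to match your workload:\n",
    "\n",
    "```python\n",
    "import kfp.gcp as gcp\n",
    "from kfp import dsl\n",
    "\n",
    "def tpu_check_op():\n",
    "    # Illustrative values: 8 cores of a v2 TPU, for TF 1.14.\n",
    "    return dsl.ContainerOp(\n",
    "        name='check_tpu',\n",
    "        image='tensorflow/tensorflow:1.14.0',\n",
    "        command=['sh', '-c'],\n",
    "        arguments=['echo TPU step']\n",
    "    ).apply(gcp.use_tpu(tpu_cores=8, tpu_resource='v2', tf_version='1.14'))\n",
    "```\n"
   ]
  },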
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}