{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright 2020 NVIDIA Corporation. All Rights Reserved.\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     http://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "# =============================================================================="
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Training Tabular Deep Learning Models with Keras on GPU\n",
    "Deep learning has revolutionized the fields of computer vision (CV) and natural language processing (NLP) in the last few years, providing a fast and general framework for solving a host of difficult problems with unprecedented accuracy. Part and parcel of this revolution has been the development of APIs like [Keras](https://www.tensorflow.org/api_docs/python/tf/keras) for NVIDIA GPUs, allowing practitioners to quickly iterate on new and interesting ideas and receive feedback on their efficacy in shorter and shorter intervals.\n",
    "\n",
    "One class of problem which has remained largely immune to this revolution, however, is the class involving tabular data. Part of this difficulty is that, unlike CV or NLP, where different datasets are underpinned by similar phenomena and therefore can be solved with similar mechanisms, \"tabular datasets\" span a vast array of phenomena, semantic meanings, and problem statements, from product and video recommendation to particle discovery and loan default prediction. This diversity makes universally useful components difficult to find or even define, and is only exacerbated by the notorious lack of standard, industrial-scale benchmark datasets in the tabular space. As a result, deep learning models are frequently bested by their machine learning analogues on these important tasks, particularly on smaller scale datasets.\n",
    "\n",
    "Yet this diversity is also what makes tools like Keras all the more valuable. Architecture components can be quickly swapped in and out for different tasks like the implementation details they are, and new components can be built and tested with ease. Importantly, domain experts can interact with models at a high level and build *a priori* knowledge into model architectures, without having to spend their time becoming Python programming wizards.\n",
    "\n",
    "However, most out-of-the-box APIs suffer from a lack of acceleration that reduces the rate at which new components can be tested and makes production deployment of deep learning systems cost-prohibitive. In this example, we will walk through some recent advancements made by NVIDIA's [NVTabular](https://github.com/nvidia/nvtabular) data loading library that can alleviate existing bottlenecks and bring to bear the full power of GPU acceleration.\n",
    "\n",
    "#### What to Keep an Eye Out For\n",
    "The point of this walkthrough will be to show how common components of existing TensorFlow tabular-learning pipelines can be drop-in replaced by NVTabular components for cheap-as-free acceleration with minimal overhead. To do this, we'll start by examining a pipeline for fitting the [DLRM](https://arxiv.org/abs/1906.00091) architecture on the [Criteo Terabyte Dataset](https://labs.criteo.com/2013/12/download-terabyte-click-logs/) using Keras/TensorFlow's native tools on both CPU and GPU, and discuss why the acceleration we observe on GPU is not particularly impressive. Then we'll examine what an identical pipeline would look like using NVTabular and why it overcomes those bottlenecks.\n",
    "\n",
    "Since the Criteo Terabyte Dataset is large, and you and I both have better things to do than sit around for hours waiting to train a model we have no intention of ever using, I'll restrict the training to 1000 steps in order to illustrate the similarities in convergence and the expected acceleration. Of course, there may well exist alternative choices of architectures and hyperparameters that will lead to better or faster convergence, but I trust that you, clever data scientist that you are, are more than capable of finding these yourself should you wish. I intend only to demonstrate how NVTabular can help you achieve that convergence more quickly, in the hopes that you will find it easy to apply the same methods to the dataset that really matters: your own.\n",
    "\n",
    "I will assume at least some familiarity with the relevant tabular deep learning methods (in particular what I mean by \"tabular data\" and how it is distinct from, say, image data; continuous vs. categorical variables; learned categorical embeddings; and online vs. offline preprocessing) and a passing familiarity with TensorFlow and Keras. If you are green or rusty on any of these points, it won't make this discussion illegible, but I'll put links in the relevant places just in case.\n",
    "\n",
    "We'll build, step by step, the functions that a dataset-agnostic pipeline might need in order to train a model in Keras. In each function, we'll include an `accelerated` kwarg that will be used to show the difference between what such a function might look like in native TensorFlow vs. using NVTabular. Let's start here by doing our imports and defining some hyperparameters for training (which won't change from one implementation to the next)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from itertools import filterfalse\n",
    "import re\n",
    "\n",
    "import tensorflow as tf\n",
    "from tensorflow.keras.mixed_precision import experimental as mixed_precision\n",
    "\n",
    "# this is a good habit to get in now: TensorFlow's default behavior\n",
    "# is to claim all of the GPU memory that it can for itself. This\n",
    "# is a problem when it needs to run alongside another GPU library\n",
    "# like NVTabular. To get around this, NVTabular will configure\n",
    "# TensorFlow to use this fraction of available GPU memory up front.\n",
    "# Make sure, however, that you do this before you do anything\n",
    "# with TensorFlow: as soon as it's initialized, that memory is gone\n",
    "# for good\n",
    "os.environ[\"TF_MEMORY_ALLOCATION\"] = \"0.5\"\n",
    "import nvtabular as nvt\n",
    "from nvtabular.loader.tensorflow import KerasSequenceLoader\n",
    "from nvtabular.framework_utils.tensorflow import layers, make_feature_column_workflow\n",
    "\n",
    "# import custom callback for monitoring throughput\n",
    "from callbacks import ThroughputLogger"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "DATA_DIR = os.environ.get(\"DATA_DIR\", \"/data\")\n",
    "TFRECORD_DIR = os.environ.get(\"TFRECORD_DIR\", \"/tfrecords\")\n",
    "LOG_DIR = os.environ.get(\"LOG_DIR\", \"logs/\")\n",
    "\n",
    "TFRECORDS = os.path.join(TFRECORD_DIR, \"train\", \"*.tfrecords\")\n",
    "PARQUETS = os.path.join(DATA_DIR, \"train\", \"*.parquet\")\n",
    "\n",
    "# TODO: reimplement the preproc from criteo-example here\n",
    "# Alternatively, make criteo its own folder, and split preproc\n",
    "# and training into separate notebooks, then execute the\n",
    "# preproc notebook from here?\n",
    "NUMERIC_FEATURE_NAMES = [f\"I{i}\" for i in range(1, 14)]\n",
    "CATEGORICAL_FEATURE_NAMES = [f\"C{i}\" for i in range(1, 27)]\n",
    "CATEGORY_COUNTS = [\n",
    "    7599500, 33521, 17022, 7339, 20046, 3, 7068, 1377, 63, 5345303,\n",
    "    561810, 242827, 11, 2209, 10616, 100, 4, 968, 14, 7838519,\n",
    "    2580502, 6878028, 298771, 11951, 97, 35\n",
    "]\n",
    "LABEL_NAME = \"label\"\n",
    "\n",
    "# optimization params\n",
    "BATCH_SIZE = 65536\n",
    "STEPS = 1000\n",
    "LEARNING_RATE = 0.001\n",
    "\n",
    "# architecture params\n",
    "EMBEDDING_DIM = 8\n",
    "TOP_MLP_HIDDEN_DIMS = [1024, 512, 256]\n",
    "BOTTOM_MLP_HIDDEN_DIMS = [1024, 1024, 512, 256]\n",
    "\n",
    "# I'll get sloppy with warnings because just like\n",
    "# Steven Tyler sometimes you gotta live on the edge\n",
    "tf.get_logger().setLevel('ERROR')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What Does Your Data Look Like?\n",
    "As we discussed before, \"tabular data\" is an umbrella term referring to data collected from a vast array of problems and phenomena. Perhaps Bob's dataset has 192 features, 54 of which are continuous variables recorded as 32 bit floating point numbers, and the remainder of which are categorical variables which he has encoded as strings. Alice, on the other hand, may have a dataset consisting of 3271 features, most of which are continuous, but a handful of which are integer IDs which can take on one of millions of possible values. We can't expect the same model to be able to handle this kind of variety unless we give it some description of what sorts of inputs to expect.\n",
    "\n",
    "Moreover, the format in which the data gets read from disk will rarely be the one the model finds useful. Bob's string categories will be of no use to a neural network which lives in the world of continuous functions of real numbers; they will need to be converted to integer lookup table indices before being ingested. For certain types of these **transformations**, Bob may want to do this conversion once, up front, before training begins, and then be done with it. However, this may not always be possible. Bob may wish to hyperparameter search over the parameters of such a transformation (if, for instance, he is using a hash function to map to indices and wants to play with the number of buckets to use). Or perhaps he wants to retain the pre-transformed values, but finds the cost of storing an entire second dataset of the transformed values prohibitive. In this case, he'll need to perform the transformations *online*, between when the data is read from disk and when it gets fed to the network.\n",
    "\n",
    "Finally, in the case of categorical variables, these lookup indices will need to, well, *look up* an embedding vector that finally puts us in the continuous space our network prefers. Therefore, we also need to define how large of an embedding vector we want to use for a given feature.\n",
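    "\n",
    "To make those two steps concrete, here is a minimal, framework-free sketch: a raw string category is hashed into a lookup index, and that index fetches an embedding vector. The bucket count, embedding width, and `md5`-based hash are illustrative choices only, not what TensorFlow or NVTabular actually use internally (and in a real model the table entries would be learned, not random).\n",
    "\n",
    "```python\n",
    "import hashlib\n",
    "import random\n",
    "\n",
    "NUM_BUCKETS = 16       # illustrative hash-bucket count\n",
    "EMBEDDING_DIM = 4      # illustrative embedding width\n",
    "\n",
    "def hash_bucket(category: str, num_buckets: int = NUM_BUCKETS) -> int:\n",
    "    # transformation: map a raw string category to a stable integer index\n",
    "    digest = hashlib.md5(category.encode()).hexdigest()\n",
    "    return int(digest, 16) % num_buckets\n",
    "\n",
    "# embedding table: one vector per bucket, randomly initialized here\n",
    "rng = random.Random(0)\n",
    "embedding_table = [\n",
    "    [rng.uniform(-1, 1) for _ in range(EMBEDDING_DIM)]\n",
    "    for _ in range(NUM_BUCKETS)\n",
    "]\n",
    "\n",
    "def embed(category: str) -> list:\n",
    "    # embedding: look up the continuous vector for the hashed index\n",
    "    return embedding_table[hash_bucket(category)]\n",
    "\n",
    "vector = embed(\"sports\")\n",
    "assert len(vector) == EMBEDDING_DIM\n",
    "assert embed(\"sports\") == vector  # same category, same vector every time\n",
    "```\n",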
    "\n",
    "TensorFlow provides a convenient module to record this information about the names of features to expect, their type (categorical or numeric), their data type, common transformations to perform on them, and the size of embedding table to use in the case of categorical variables: the [`feature_column` module](https://www.tensorflow.org/tutorials/structured_data/feature_columns). (Note: as of [TensorFlow 2.3](https://github.com/tensorflow/tensorflow/releases/tag/v2.3.0-rc0) these are being deprecated and replaced with Keras layers with similar functionality. Most of the arguments made here will still apply, the code will just look a bit different.) These objects provide both stateless representations of feature information, as well as the code that performs the transformations and embeddings at train time.\n",
    "\n",
    "While `feature_column`s are a handy and robust representation format, their transformation and embedding implementations are poorly suited for GPUs. We'll see how this looks in terms of TensorFlow profile traces later, but the upshot comes down to two basic points:\n",
    "- Many of the transformations involve ops that either don't have a GPU kernel, or have one which is unoptimized. The involvement of ops without GPU kernels means that you're spending a lot of your train step moving data around to the device which can run the current op. Many of the ops that *do* have a GPU kernel are small and don't involve much math, which drowns the math-hungry parallel computing model of GPUs in kernel launch overhead.\n",
    "- The embeddings use sparse tensor machinery that is unoptimized on GPUs and is unnecessary for one-hot categoricals, the only type we'll focus on here. This is a good time to mention that the techniques we'll cover today *do not generalize to multi-hot categorical data*, which isn't currently supported by NVTabular. However, there is active work to support this being done and we hope to have it seamlessly integrated in the near future.\n",
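    "\n",
    "To be precise about the distinction that second point hinges on, here is a small framework-free sketch contrasting one-hot and multi-hot categorical features; the category count and example indices are made up for illustration.\n",
    "\n",
    "```python\n",
    "NUM_CATEGORIES = 5  # illustrative cardinality for a single categorical feature\n",
    "\n",
    "def one_hot(index: int, num_categories: int = NUM_CATEGORIES) -> list:\n",
    "    # one-hot: exactly one category applies per example (e.g. \"device type\")\n",
    "    return [1 if i == index else 0 for i in range(num_categories)]\n",
    "\n",
    "def multi_hot(indices: list, num_categories: int = NUM_CATEGORIES) -> list:\n",
    "    # multi-hot: a variable number of categories per example (e.g. \"genres watched\"),\n",
    "    # which is what typically requires sparse/ragged tensor machinery\n",
    "    active = set(indices)\n",
    "    return [1 if i in active else 0 for i in range(num_categories)]\n",
    "\n",
    "assert one_hot(2) == [0, 0, 1, 0, 0]\n",
    "assert multi_hot([0, 3]) == [1, 0, 0, 1, 0]\n",
    "```\n",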
    "\n",
    "As we'll see later, one difficulty in addressing the second issue is that the same Keras layer which performs the embeddings *also* performs the transformations. So even if you know that all your categoricals are one-hot and want to build an accelerated embedding layer that leverages this information, you would be out of luck: there is no layer that performs the embedding alone and leaves the transformations to you. One way to get around this is to move your transformations to NVTabular, which will do them all on the GPU at data-loading time, so that all Keras needs to handle is the embedding, using a layer like `tf.keras.layers.DenseFeatures` or, for even more acceleration, NVTabular's equivalent `layers.DenseFeatures` layer.\n",
    "\n",
    "The good news is, as of NVTabular 0.2, you don't need to change the feature columns you use to represent your inputs and preprocessing in order to enjoy GPU acceleration. The `make_feature_column_workflow` utility will take care of creating an NVTabular `Workflow` object which performs all of the requisite preprocessing on the GPU, then hands the preprocessed columns off to TensorFlow as tensors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_feature_columns():\n",
    "    columns = [tf.feature_column.numeric_column(name, (1,)) for name in NUMERIC_FEATURE_NAMES]\n",
    "    for feature_name, count in zip(CATEGORICAL_FEATURE_NAMES, CATEGORY_COUNTS):\n",
    "        categorical_column = tf.feature_column.categorical_column_with_hash_bucket(\n",
    "            feature_name, int(0.75*count), dtype=tf.int64)\n",
    "        embedding_column = tf.feature_column.embedding_column(categorical_column, EMBEDDING_DIM)\n",
    "        columns.append(embedding_column)\n",
    "    return columns"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## A Data By Any Other Format: TFRecords and Tabular Representation\n",
    "By running the Criteo preprocessing example above, we generated a dataset in the Parquet data format. Why Parquet? Well, besides the fact that NVTabular can read Parquet files exceptionally quickly, Parquet is a widely used tabular data format that can be read by libraries like pandas or cuDF to quickly search, filter, and manipulate data using high-level abstractions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>I5</th>\n",
       "      <th>I4</th>\n",
       "      <th>I6</th>\n",
       "      <th>I11</th>\n",
       "      <th>I2</th>\n",
       "      <th>I8</th>\n",
       "      <th>I12</th>\n",
       "      <th>I13</th>\n",
       "      <th>I1</th>\n",
       "      <th>I3</th>\n",
       "      <th>...</th>\n",
       "      <th>C16</th>\n",
       "      <th>C2</th>\n",
       "      <th>C17</th>\n",
       "      <th>C25</th>\n",
       "      <th>C3</th>\n",
       "      <th>C26</th>\n",
       "      <th>C9</th>\n",
       "      <th>C13</th>\n",
       "      <th>C14</th>\n",
       "      <th>label</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>-1.059381</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>0.406506</td>\n",
       "      <td>0.991578</td>\n",
       "      <td>1.030196</td>\n",
       "      <td>0.039582</td>\n",
       "      <td>-0.363446</td>\n",
       "      <td>0.113603</td>\n",
       "      <td>...</td>\n",
       "      <td>76</td>\n",
       "      <td>5611</td>\n",
       "      <td>1</td>\n",
       "      <td>45</td>\n",
       "      <td>5884</td>\n",
       "      <td>12</td>\n",
       "      <td>36</td>\n",
       "      <td>8</td>\n",
       "      <td>512</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>2.595875</td>\n",
       "      <td>0.674505</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>1.589269</td>\n",
       "      <td>0.881184</td>\n",
       "      <td>-1.092583</td>\n",
       "      <td>0.211819</td>\n",
       "      <td>1.143488</td>\n",
       "      <td>0.387689</td>\n",
       "      <td>0.323043</td>\n",
       "      <td>...</td>\n",
       "      <td>68</td>\n",
       "      <td>32452</td>\n",
       "      <td>1</td>\n",
       "      <td>61</td>\n",
       "      <td>7465</td>\n",
       "      <td>23</td>\n",
       "      <td>36</td>\n",
       "      <td>5</td>\n",
       "      <td>142</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>-0.113255</td>\n",
       "      <td>1.034299</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>0.410145</td>\n",
       "      <td>0.898900</td>\n",
       "      <td>0.917925</td>\n",
       "      <td>0.198978</td>\n",
       "      <td>-0.213917</td>\n",
       "      <td>1.099744</td>\n",
       "      <td>-0.156412</td>\n",
       "      <td>...</td>\n",
       "      <td>58</td>\n",
       "      <td>4183</td>\n",
       "      <td>3</td>\n",
       "      <td>45</td>\n",
       "      <td>715</td>\n",
       "      <td>23</td>\n",
       "      <td>36</td>\n",
       "      <td>2</td>\n",
       "      <td>1199</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>-1.059381</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>0.099380</td>\n",
       "      <td>-1.092583</td>\n",
       "      <td>-0.495383</td>\n",
       "      <td>0.236211</td>\n",
       "      <td>-1.311273</td>\n",
       "      <td>0.323043</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>3149</td>\n",
       "      <td>0</td>\n",
       "      <td>61</td>\n",
       "      <td>6167</td>\n",
       "      <td>6</td>\n",
       "      <td>62</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>-1.059381</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>0.561786</td>\n",
       "      <td>-1.092583</td>\n",
       "      <td>-0.043296</td>\n",
       "      <td>-1.181990</td>\n",
       "      <td>-1.311273</td>\n",
       "      <td>-1.187559</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>3149</td>\n",
       "      <td>0</td>\n",
       "      <td>45</td>\n",
       "      <td>7419</td>\n",
       "      <td>6</td>\n",
       "      <td>36</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>999995</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>-1.059381</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>1.146024</td>\n",
       "      <td>-1.092583</td>\n",
       "      <td>0.327294</td>\n",
       "      <td>-1.181990</td>\n",
       "      <td>-1.311273</td>\n",
       "      <td>-1.187559</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>10231</td>\n",
       "      <td>0</td>\n",
       "      <td>61</td>\n",
       "      <td>13518</td>\n",
       "      <td>23</td>\n",
       "      <td>36</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>999996</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>-1.059381</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>0.574969</td>\n",
       "      <td>0.733282</td>\n",
       "      <td>-0.717885</td>\n",
       "      <td>-1.181990</td>\n",
       "      <td>0.263033</td>\n",
       "      <td>-1.187559</td>\n",
       "      <td>...</td>\n",
       "      <td>76</td>\n",
       "      <td>12699</td>\n",
       "      <td>1</td>\n",
       "      <td>61</td>\n",
       "      <td>896</td>\n",
       "      <td>13</td>\n",
       "      <td>9</td>\n",
       "      <td>1</td>\n",
       "      <td>512</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>999997</th>\n",
       "      <td>-0.402953</td>\n",
       "      <td>1.149989</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.077293</td>\n",
       "      <td>-0.033020</td>\n",
       "      <td>2.420449</td>\n",
       "      <td>1.056442</td>\n",
       "      <td>-0.571204</td>\n",
       "      <td>-0.837359</td>\n",
       "      <td>-0.536978</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>15240</td>\n",
       "      <td>0</td>\n",
       "      <td>61</td>\n",
       "      <td>7290</td>\n",
       "      <td>13</td>\n",
       "      <td>21</td>\n",
       "      <td>7</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>999998</th>\n",
       "      <td>0.092289</td>\n",
       "      <td>0.988406</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.077293</td>\n",
       "      <td>-0.267567</td>\n",
       "      <td>-0.333486</td>\n",
       "      <td>0.442404</td>\n",
       "      <td>1.925359</td>\n",
       "      <td>-0.210880</td>\n",
       "      <td>1.289434</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>528</td>\n",
       "      <td>0</td>\n",
       "      <td>61</td>\n",
       "      <td>8663</td>\n",
       "      <td>6</td>\n",
       "      <td>36</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>999999</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>0.272316</td>\n",
       "      <td>0.407738</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>-2.140079</td>\n",
       "      <td>-0.333486</td>\n",
       "      <td>-0.684663</td>\n",
       "      <td>1.314574</td>\n",
       "      <td>1.464903</td>\n",
       "      <td>1.414765</td>\n",
       "      <td>...</td>\n",
       "      <td>76</td>\n",
       "      <td>24626</td>\n",
       "      <td>1</td>\n",
       "      <td>8</td>\n",
       "      <td>4736</td>\n",
       "      <td>12</td>\n",
       "      <td>21</td>\n",
       "      <td>1</td>\n",
       "      <td>512</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>1000000 rows × 40 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "              I5        I4        I6       I11        I2        I8       I12  \\\n",
       "0      -0.898195 -1.059381 -0.488376 -0.910574  0.406506  0.991578  1.030196   \n",
       "1       2.595875  0.674505 -0.488376  1.589269  0.881184 -1.092583  0.211819   \n",
       "2      -0.113255  1.034299 -0.488376  0.410145  0.898900  0.917925  0.198978   \n",
       "3      -0.898195 -1.059381 -0.488376 -0.910574  0.099380 -1.092583 -0.495383   \n",
       "4      -0.898195 -1.059381 -0.488376 -0.910574  0.561786 -1.092583 -0.043296   \n",
       "...          ...       ...       ...       ...       ...       ...       ...   \n",
       "999995 -0.898195 -1.059381 -0.488376 -0.910574  1.146024 -1.092583  0.327294   \n",
       "999996 -0.898195 -1.059381 -0.488376 -0.910574  0.574969  0.733282 -0.717885   \n",
       "999997 -0.402953  1.149989 -0.488376 -0.077293 -0.033020  2.420449  1.056442   \n",
       "999998  0.092289  0.988406 -0.488376 -0.077293 -0.267567 -0.333486  0.442404   \n",
       "999999 -0.898195  0.272316  0.407738 -0.910574 -2.140079 -0.333486 -0.684663   \n",
       "\n",
       "             I13        I1        I3  ...  C16     C2  C17  C25     C3  C26  \\\n",
       "0       0.039582 -0.363446  0.113603  ...   76   5611    1   45   5884   12   \n",
       "1       1.143488  0.387689  0.323043  ...   68  32452    1   61   7465   23   \n",
       "2      -0.213917  1.099744 -0.156412  ...   58   4183    3   45    715   23   \n",
       "3       0.236211 -1.311273  0.323043  ...    0   3149    0   61   6167    6   \n",
       "4      -1.181990 -1.311273 -1.187559  ...    0   3149    0   45   7419    6   \n",
       "...          ...       ...       ...  ...  ...    ...  ...  ...    ...  ...   \n",
       "999995 -1.181990 -1.311273 -1.187559  ...    0  10231    0   61  13518   23   \n",
       "999996 -1.181990  0.263033 -1.187559  ...   76  12699    1   61    896   13   \n",
       "999997 -0.571204 -0.837359 -0.536978  ...    0  15240    0   61   7290   13   \n",
       "999998  1.925359 -0.210880  1.289434  ...    0    528    0   61   8663    6   \n",
       "999999  1.314574  1.464903  1.414765  ...   76  24626    1    8   4736   12   \n",
       "\n",
       "        C9  C13   C14  label  \n",
       "0       36    8   512      0  \n",
       "1       36    5   142      0  \n",
       "2       36    2  1199      0  \n",
       "3       62    4     0      0  \n",
       "4       36    2     0      0  \n",
       "...     ..  ...   ...    ...  \n",
       "999995  36    1     0      0  \n",
       "999996   9    1   512      0  \n",
       "999997  21    7     0      0  \n",
       "999998  36    5     0      0  \n",
       "999999  21    1   512      0  \n",
       "\n",
       "[1000000 rows x 40 columns]"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import cudf, glob\n",
    "filename = glob.glob(os.path.join(DATA_DIR, 'train', '*.parquet'))[0]\n",
    "df = cudf.read_parquet(filename, num_rows=1000000)\n",
    "df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>I5</th>\n",
       "      <th>I4</th>\n",
       "      <th>I6</th>\n",
       "      <th>I11</th>\n",
       "      <th>I2</th>\n",
       "      <th>I8</th>\n",
       "      <th>I12</th>\n",
       "      <th>I13</th>\n",
       "      <th>I1</th>\n",
       "      <th>I3</th>\n",
       "      <th>...</th>\n",
       "      <th>C16</th>\n",
       "      <th>C2</th>\n",
       "      <th>C17</th>\n",
       "      <th>C25</th>\n",
       "      <th>C3</th>\n",
       "      <th>C26</th>\n",
       "      <th>C9</th>\n",
       "      <th>C13</th>\n",
       "      <th>C14</th>\n",
       "      <th>label</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>-1.059381</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>0.406506</td>\n",
       "      <td>0.991578</td>\n",
       "      <td>1.030196</td>\n",
       "      <td>0.039582</td>\n",
       "      <td>-0.363446</td>\n",
       "      <td>0.113603</td>\n",
       "      <td>...</td>\n",
       "      <td>76</td>\n",
       "      <td>5611</td>\n",
       "      <td>1</td>\n",
       "      <td>45</td>\n",
       "      <td>5884</td>\n",
       "      <td>12</td>\n",
       "      <td>36</td>\n",
       "      <td>8</td>\n",
       "      <td>512</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>-1.059381</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>0.099380</td>\n",
       "      <td>-1.092583</td>\n",
       "      <td>-0.495383</td>\n",
       "      <td>0.236211</td>\n",
       "      <td>-1.311273</td>\n",
       "      <td>0.323043</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>3149</td>\n",
       "      <td>0</td>\n",
       "      <td>61</td>\n",
       "      <td>6167</td>\n",
       "      <td>6</td>\n",
       "      <td>62</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>-1.059381</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>0.561786</td>\n",
       "      <td>-1.092583</td>\n",
       "      <td>-0.043296</td>\n",
       "      <td>-1.181990</td>\n",
       "      <td>-1.311273</td>\n",
       "      <td>-1.187559</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>3149</td>\n",
       "      <td>0</td>\n",
       "      <td>45</td>\n",
       "      <td>7419</td>\n",
       "      <td>6</td>\n",
       "      <td>36</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>-1.059381</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>-0.187813</td>\n",
       "      <td>1.165454</td>\n",
       "      <td>2.195199</td>\n",
       "      <td>1.871938</td>\n",
       "      <td>-1.311273</td>\n",
       "      <td>2.065346</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>12554</td>\n",
       "      <td>0</td>\n",
       "      <td>8</td>\n",
       "      <td>13182</td>\n",
       "      <td>6</td>\n",
       "      <td>36</td>\n",
       "      <td>6</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>-0.542999</td>\n",
       "      <td>0.407738</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>-2.140079</td>\n",
       "      <td>-0.574419</td>\n",
       "      <td>-0.581672</td>\n",
       "      <td>-1.181990</td>\n",
       "      <td>-0.837359</td>\n",
       "      <td>-1.187559</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>24999</td>\n",
       "      <td>0</td>\n",
       "      <td>61</td>\n",
       "      <td>5079</td>\n",
       "      <td>13</td>\n",
       "      <td>36</td>\n",
       "      <td>6</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>999985</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>-1.059381</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>0.083191</td>\n",
       "      <td>-0.574419</td>\n",
       "      <td>1.745785</td>\n",
       "      <td>-0.213917</td>\n",
       "      <td>-0.837359</td>\n",
       "      <td>-0.156412</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>13613</td>\n",
       "      <td>0</td>\n",
       "      <td>24</td>\n",
       "      <td>6240</td>\n",
       "      <td>6</td>\n",
       "      <td>21</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>999986</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>0.067703</td>\n",
       "      <td>0.931930</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>0.486195</td>\n",
       "      <td>-0.765658</td>\n",
       "      <td>-1.964434</td>\n",
       "      <td>0.930981</td>\n",
       "      <td>0.770305</td>\n",
       "      <td>1.063082</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>7452</td>\n",
       "      <td>0</td>\n",
       "      <td>45</td>\n",
       "      <td>8665</td>\n",
       "      <td>11</td>\n",
       "      <td>36</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>999990</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>-1.059381</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>-0.210349</td>\n",
       "      <td>-1.092583</td>\n",
       "      <td>1.758998</td>\n",
       "      <td>-1.181990</td>\n",
       "      <td>-0.837359</td>\n",
       "      <td>-1.187559</td>\n",
       "      <td>...</td>\n",
       "      <td>60</td>\n",
       "      <td>22810</td>\n",
       "      <td>3</td>\n",
       "      <td>61</td>\n",
       "      <td>16817</td>\n",
       "      <td>23</td>\n",
       "      <td>36</td>\n",
       "      <td>6</td>\n",
       "      <td>1614</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>999992</th>\n",
       "      <td>0.492124</td>\n",
       "      <td>1.426266</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>0.410145</td>\n",
       "      <td>0.322983</td>\n",
       "      <td>-1.092583</td>\n",
       "      <td>0.473773</td>\n",
       "      <td>-0.213917</td>\n",
       "      <td>-0.560138</td>\n",
       "      <td>-0.156412</td>\n",
       "      <td>...</td>\n",
       "      <td>0</td>\n",
       "      <td>31072</td>\n",
       "      <td>0</td>\n",
       "      <td>61</td>\n",
       "      <td>3920</td>\n",
       "      <td>6</td>\n",
       "      <td>21</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>999993</th>\n",
       "      <td>-0.898195</td>\n",
       "      <td>1.039733</td>\n",
       "      <td>-0.488376</td>\n",
       "      <td>-0.910574</td>\n",
       "      <td>0.325448</td>\n",
       "      <td>-0.247494</td>\n",
       "      <td>0.073857</td>\n",
       "      <td>0.930981</td>\n",
       "      <td>2.196103</td>\n",
       "      <td>0.638853</td>\n",
       "      <td>...</td>\n",
       "      <td>76</td>\n",
       "      <td>8228</td>\n",
       "      <td>1</td>\n",
       "      <td>45</td>\n",
       "      <td>4708</td>\n",
       "      <td>6</td>\n",
       "      <td>62</td>\n",
       "      <td>6</td>\n",
       "      <td>512</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>499183 rows × 40 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "              I5        I4        I6       I11        I2        I8       I12  \\\n",
       "0      -0.898195 -1.059381 -0.488376 -0.910574  0.406506  0.991578  1.030196   \n",
       "3      -0.898195 -1.059381 -0.488376 -0.910574  0.099380 -1.092583 -0.495383   \n",
       "4      -0.898195 -1.059381 -0.488376 -0.910574  0.561786 -1.092583 -0.043296   \n",
       "9      -0.898195 -1.059381 -0.488376 -0.910574 -0.187813  1.165454  2.195199   \n",
       "10     -0.898195 -0.542999  0.407738 -0.910574 -2.140079 -0.574419 -0.581672   \n",
       "...          ...       ...       ...       ...       ...       ...       ...   \n",
       "999985 -0.898195 -1.059381 -0.488376 -0.910574  0.083191 -0.574419  1.745785   \n",
       "999986 -0.898195  0.067703  0.931930 -0.910574  0.486195 -0.765658 -1.964434   \n",
       "999990 -0.898195 -1.059381 -0.488376 -0.910574 -0.210349 -1.092583  1.758998   \n",
       "999992  0.492124  1.426266 -0.488376  0.410145  0.322983 -1.092583  0.473773   \n",
       "999993 -0.898195  1.039733 -0.488376 -0.910574  0.325448 -0.247494  0.073857   \n",
       "\n",
       "             I13        I1        I3  ...  C16     C2  C17  C25     C3  C26  \\\n",
       "0       0.039582 -0.363446  0.113603  ...   76   5611    1   45   5884   12   \n",
       "3       0.236211 -1.311273  0.323043  ...    0   3149    0   61   6167    6   \n",
       "4      -1.181990 -1.311273 -1.187559  ...    0   3149    0   45   7419    6   \n",
       "9       1.871938 -1.311273  2.065346  ...    0  12554    0    8  13182    6   \n",
       "10     -1.181990 -0.837359 -1.187559  ...    0  24999    0   61   5079   13   \n",
       "...          ...       ...       ...  ...  ...    ...  ...  ...    ...  ...   \n",
       "999985 -0.213917 -0.837359 -0.156412  ...    0  13613    0   24   6240    6   \n",
       "999986  0.930981  0.770305  1.063082  ...    0   7452    0   45   8665   11   \n",
       "999990 -1.181990 -0.837359 -1.187559  ...   60  22810    3   61  16817   23   \n",
       "999992 -0.213917 -0.560138 -0.156412  ...    0  31072    0   61   3920    6   \n",
       "999993  0.930981  2.196103  0.638853  ...   76   8228    1   45   4708    6   \n",
       "\n",
       "        C9  C13   C14  label  \n",
       "0       36    8   512      0  \n",
       "3       62    4     0      0  \n",
       "4       36    2     0      0  \n",
       "9       36    6     0      0  \n",
       "10      36    6     0      0  \n",
       "...     ..  ...   ...    ...  \n",
       "999985  21    3     0      0  \n",
       "999986  36    2     0      0  \n",
       "999990  36    6  1614      0  \n",
       "999992  21    5     0      0  \n",
       "999993  62    6   512      0  \n",
       "\n",
       "[499183 rows x 40 columns]"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# do some filtering or whatever\n",
    "df[df['C18'] == 228]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is great news for data scientists: formats like parquet are the bread and butter of any sort of data exploration. You almost certainly want to keep at least *one* version of your dataset in a format like this. If your dataset is large enough, and storage gets expensive, it's probably the *only* format you want to keep your dataset in.\n",
    "\n",
    "Unfortunately, TensorFlow does not have fast native readers for formats like this that can read larger-than-memory datasets in an online fashion. TensorFlow's preferred, and fastest, data format is the [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord), a binary format which associates all field names and their values with every example in your dataset. For tabular data, where small float or int features take up fewer bytes than their string field names, the memory footprint of such a representation can get really big, really fast.\n",
    "\n",
    "More importantly, TFRecords require reading and parsing in batches using user-provided data schema descriptions. This makes doing the sorts of manipulations described above difficult, if not near impossible, and requires an enormous amount of work to change the values corresponding to a single field in your dataset. For this reason, you almost never want to use TFRecords as the *only* means of representing your data, which means you have to generate and store an entire copy of your dataset every time it needs to update. This can cost an enormous amount of time and resources, prolonging the gap between the conception of a feature and testing it in a model.\n",
    "\n",
    "The main advantage of TFRecords is the speed with which TensorFlow can read them (and its APIs for doing this online), and their support for multi-hot categorical features. While NVTabular is still working on addressing the latter, we'll show below that reading parquet files in batch using NVTabular is substantially faster than the existing TFRecord readers. In order to do this, we'll need to generate a TFRecord version of the parquet dataset we generated before. I'm going to restrict this to generating just the 1000 steps we'll need to do our training demo, but if you have a few days and a couple terabytes of storage lying around feel free to run the whole thing.\n",
    "\n",
    "Don't worry too much about the code below: it's a bit dense (and frankly still isn't fully robust to string features) and doesn't have much to do with what follows. I'm sure there are ways to make it cleaner/faster/etc., but if anything, it should make clear how nontrivial the process of building and writing TFRecords is. I'm also going to keep it commented out for now since the disk space required is so high, and the casual user clicking through cells might accidentally exhaust their allotment. If you feel like running the comparisons below to keep me honest, uncomment this cell and run it first.\n",
    "\n",
    "The last thing I'll note is that the astute and experienced TensorFlow user will at this point object that there exist ways to make reading TFRecords for tabular data faster than what I'm about to present. Among these are pre-batching examples (which, I would point out, more or less enforces a fixed valency for all categorical features) and combining all fixed valency categorical and continuous features into vectorized fields in records which can all be parsed at once. And while it's true that methods like this will accelerate TFRecord reading, they still fail to overtake NVTabular's parquet reader. Perhaps more importantly (at least from my workflow-centric view), they only compound the problems I've outlined so far of the difficulty of doing data analysis with TFRecords, and would almost certainly require the code below to be even more brittle and complicated. And this is actually a point worth emphasizing: with NVTabular data loading, you're getting better performance *and* less programming overhead, the holy grail of GPU-based DL software."
   ]
  },
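  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a back-of-the-envelope illustration of that field-name overhead (a sketch only: it counts this dataset's 13 numeric and 26 categorical column names against a nominal 4 bytes per value, and ignores protobuf framing and varint encoding):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# rough per-example cost of repeating every field name in every record\n",
    "numeric_names = ['I' + str(i) for i in range(1, 14)]      # I1..I13\n",
    "categorical_names = ['C' + str(i) for i in range(1, 27)]  # C1..C26\n",
    "names = numeric_names + categorical_names\n",
    "\n",
    "value_bytes = 4 * len(names)             # one ~4-byte value per field\n",
    "name_bytes = sum(len(n) for n in names)  # names ride along with every example\n",
    "print('payload bytes per example:', value_bytes)\n",
    "print('field-name bytes per example:', name_bytes)\n",
    "print('overhead fraction:', round(name_bytes / value_bytes, 2))"
   ]
  },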
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# import multiprocessing as mp\n",
    "# from glob import glob\n",
    "# from itertools import repeat\n",
    "# from tqdm.notebook import trange\n",
    "\n",
    "# def pool_initializer(num_cols, cat_cols):\n",
    "#     global numeric_columns\n",
    "#     global categorical_columns\n",
    "#     numeric_columns = num_cols\n",
    "#     categorical_columns = cat_cols\n",
    "\n",
    "# def build_and_serialize_example(data):\n",
    "#     numeric_values, categorical_values = data\n",
    "#     feature = {}\n",
    "#     if numeric_values is not None:\n",
    "#         feature.update({\n",
    "#             col: tf.train.Feature(float_list=tf.train.FloatList(value=[val]))\n",
    "#                 for col, val in zip(numeric_columns, numeric_values)\n",
    "#     })\n",
    "#     if categorical_values is not None:\n",
    "#         feature.update({\n",
    "#             col: tf.train.Feature(int64_list=tf.train.Int64List(value=[val]))\n",
    "#                 for col, val in zip(categorical_columns, categorical_values)\n",
    "#     })\n",
    "#     return tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()\n",
    "\n",
    "# def get_writer(write_dir, file_idx):\n",
    "#     filename = str(file_idx).zfill(5) + '.tfrecords'\n",
    "#     return tf.io.TFRecordWriter(os.path.join(write_dir, filename))\n",
    "\n",
    "\n",
    "# _EXAMPLES_PER_RECORD = 20000000\n",
    "# write_dir = os.path.dirname(TFRECORDS)\n",
    "# if not os.path.exists(write_dir):\n",
    "#     os.makedirs(write_dir)\n",
    "# file_idx, example_idx = 0, 0\n",
    "# writer = get_writer(write_dir, file_idx)\n",
    "\n",
    "# do_break = False\n",
    "# column_names = [NUMERIC_FEATURE_NAMES, CATEGORICAL_FEATURE_NAMES+[LABEL_NAME]]\n",
    "# with mp.Pool(8, pool_initializer, column_names) as pool:\n",
    "#     fnames = glob(PARQUETS)\n",
    "#     ds_iterator = nvt.dataset(fnames, gpu_memory_frac=0.1)\n",
    "#     pbar = trange(BATCH_SIZE*STEPS)\n",
    "\n",
    "#     for df in ds_iterator:\n",
    "#         data = []\n",
    "#         for col_names in column_names:\n",
    "#             if len(col_names) == 0:\n",
    "#                 data.append(repeat(None))\n",
    "#             else:\n",
    "#                 data.append(df[col_names].to_pandas().values)\n",
    "#         data = zip(*data)\n",
    "\n",
    "#         record_map = pool.imap(build_and_serialize_example, data, chunksize=200)\n",
    "#         for record in record_map:\n",
    "#             writer.write(record)\n",
    "#             example_idx += 1\n",
    "\n",
    "#             if example_idx == _EXAMPLES_PER_RECORD:\n",
    "#                 writer.close()\n",
    "#                 file_idx += 1\n",
    "#                 writer = get_writer(write_dir, file_idx)\n",
    "#                 example_idx = 0\n",
    "#             pbar.update(1)\n",
    "#             if pbar.n == BATCH_SIZE*STEPS:\n",
    "#                 do_break = True\n",
    "#                 break\n",
    "#         if do_break:\n",
    "#             del df\n",
    "#             break"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Ok, now that we have our data set up the way that we need it, we're ready to get training! TensorFlow provides a handy utility for building an online dataloader that we'll use to parse the tfrecords. Meanwhile, on the NVTabular side, we'll use the `KerasSequenceLoader` for reading chunks of parquet files. We'll also use the `make_feature_column_workflow` utility to build an NVTabular `Workflow` that handles hash bucketing online on the GPU. It will also return a simplified set of feature columns that _don't_ include the preprocessing steps.\n",
    "\n",
    "Take a look below to see the similarities in the API. What's great about using NVTabular `Workflow`s for online preprocessing is that it makes doing arbitrary preprocessing reasonably simple by using `DFlambda` ops, and the `Op` class API allows for extension to more complicated, stat-driven preprocessing as well.\n",
    "\n",
    "One potentially important difference between these dataset classes is the way in which shuffling is handled. The TensorFlow data loader maintains a buffer of size `shuffle_buffer_size` from which batch elements are randomly selected, with the buffer then sequentially replenished by the next `batch_size` elements in the TFRecord. Large shuffle buffers, while allowing for better epoch-to-epoch randomness and hence generalization, can be hard to maintain given the slow read times. The limitation this enforces on your buffer size isn't as big a deal for datasets which are uniformly shuffled in the TFRecord and only require one or two epochs to converge, but many datasets are ordered by some feature (whether it's time or some categorical groupby), and in this case the windowed shuffle buffer can lead to biased sampling and hence poorer quality gradients.\n",
    "\n",
    "On the other hand, the `KerasSequenceLoader` manages shuffling by loading in chunks of data from different parts of the full dataset, concatenating them and then shuffling, then iterating through this super-chunk sequentially in batches. The number of \"parts\" of the dataset that get sampled, or \"partitions\", is controlled by the `parts_per_chunk` kwarg, while the size of each one of these parts is controlled by the `buffer_size` kwarg, which refers to a fraction of available GPU memory. Using more chunks leads to better randomness, especially at the epoch level where physically disparate samples can be brought into the same batch, but can impact throughput if you use too many. In any case, the speed of the parquet reader makes much larger buffer sizes feasible.\n",
    "\n",
    "The key thing to keep in mind is that, due to the asynchronous nature of the data loader, there will be `parts_per_chunk*buffer_size*3` rows of data floating around the GPU at any one time, so your goal should be to balance `parts_per_chunk` and `buffer_size` in such a way as to leverage as much GPU memory as possible without going out-of-memory (OOM) while still meeting your randomness and throughput needs.\n",
    "\n",
    "Finally, remember that once the data is loaded, it doesn't just pass to TensorFlow untouched: we also apply concatenation, shuffling, and preprocessing operations which will take memory to execute. The takeaway is that just because TensorFlow is only occupying 50% of the GPU memory doesn't mean we can algebraically balance `parts_per_chunk` and `buffer_size` to exactly occupy the remaining 50%. This might take a bit of tuning for your workload, but once you know the right combination you can use it forever. (Or at least until you get a bigger GPU!)"
   ]
  },
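  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the windowed-shuffle concern concrete, here's a small pure-Python simulation of a sequentially-replenished shuffle buffer (a sketch of the general algorithm, not TensorFlow's implementation). The invariant it demonstrates: an element can surface at most `buffer_size` positions earlier than it appeared in the input, so if your file is sorted by, say, time, early batches can only ever contain early rows."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "\n",
    "def windowed_shuffle(data, buffer_size, seed=0):\n",
    "    # keep a buffer of `buffer_size` elements, emit a random one,\n",
    "    # refill from the input stream, repeat\n",
    "    rng = random.Random(seed)\n",
    "    buf, out = [], []\n",
    "    for x in data:\n",
    "        buf.append(x)\n",
    "        if len(buf) > buffer_size:\n",
    "            out.append(buf.pop(rng.randrange(len(buf))))\n",
    "    while buf:\n",
    "        out.append(buf.pop(rng.randrange(len(buf))))\n",
    "    return out\n",
    "\n",
    "n, buffer_size = 10000, 100\n",
    "shuffled = windowed_shuffle(range(n), buffer_size)\n",
    "\n",
    "# how far ahead of its input position can an element ever appear?\n",
    "max_jump_forward = max(x - pos for pos, x in enumerate(shuffled))\n",
    "print('max forward displacement:', max_jump_forward)  # bounded by buffer_size"
   ]
  },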
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "def make_dataset(file_pattern, columns, accelerated=False):\n",
    "    # make a tfrecord features dataset\n",
    "    if not accelerated:\n",
    "        # feature spec tells us how to parse tfrecords\n",
    "        # using FixedLenFeatures keeps from using sparse machinery,\n",
    "        # but obviously wouldn't extend to multi-hot categoricals\n",
    "        feature_spec = {LABEL_NAME: tf.io.FixedLenFeature((1,), tf.int64)}\n",
    "        for column in columns:\n",
    "            column = getattr(column, \"categorical_column\", column)\n",
    "            dtype = getattr(column, \"dtype\", tf.int64)\n",
    "            feature_spec[column.name] = tf.io.FixedLenFeature((1,), dtype)\n",
    "\n",
    "        dataset = tf.data.experimental.make_batched_features_dataset(\n",
    "            file_pattern,\n",
    "            BATCH_SIZE,\n",
    "            feature_spec,\n",
    "            label_key=LABEL_NAME,\n",
    "            num_epochs=1,\n",
    "            shuffle=True,\n",
    "            shuffle_buffer_size=4*BATCH_SIZE,\n",
    "        )\n",
    "\n",
    "    # make an nvtabular KerasSequenceLoader and add\n",
    "    # a hash bucketing workflow for online preproc\n",
    "    else:\n",
    "        dataset = KerasSequenceLoader(\n",
    "            file_pattern,\n",
    "            batch_size=BATCH_SIZE,\n",
    "            label_names=[LABEL_NAME],\n",
    "            feature_columns=columns,\n",
    "            shuffle=True,\n",
    "            buffer_size=0.06,\n",
    "            parts_per_chunk=1\n",
    "        )\n",
    "        workflow, columns = make_feature_column_workflow(columns, LABEL_NAME)\n",
    "        dataset.map(workflow)\n",
    "    return dataset, columns"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Living In The Continuous World\n",
    "So at this point, we have a description of our dataset schema contained in our `feature_column`s, and we have a `dataset` object which can load some particular materialization of this schema (our dataset) in an online fashion (with the bytes encoding that materialization organized according to either the TFRecord or Parquet standard).\n",
    "\n",
    "Once the data is loaded, it needs to get run through a neural network, which will use it to produce predictions of interaction likelihoods, compare its predictions to the labelled answers, and improve its future guesses using this comparison through the magic of backpropagation. Easy as pie.\n",
    "\n",
    "Unfortunately, the magic of backpropagation relies on a trick of calculus which, by its nature, requires that the functions represented by the neural network be *continuous*. Whether or not you fully understand exactly what that means, you can probably imagine that this is incongruous with the *categorical* features our dataset contains. Less fundamentally, but from an equally practical standpoint, much of the algebra that our network will perform on our tabular features goes much (read: *MUCH*) faster if we do it in parallel as matrix algebra.\n",
    "\n",
    "For these reasons, we'll want to convert our tabular continuous and categorical features into purely continuous vectors that can be consumed by the network and processed efficiently. For categorical features, this means using the categorical index to look up a (typically learned) vector from some lower-dimensional space to pass to the network. The exact mechanism by which your network embeds and combines these values will depend on your choice of architecture. But the fundamental operation of looking up and concatenating (or stacking) is ubiquitous across almost all tabular deep learning architectures.\n",
    "\n",
    "The go-to Keras layer for doing this sort of operation is the `DenseFeatures` layer, which will also perform any transformations defined by your `feature_column`s. The downside of using the `DenseFeatures` layer, as we'll investigate more fully in a bit, is that its GPU performance is handicapped by the use of lots of small ops for doing things that aren't necessarily worth doing on an accelerator like a GPU, e.g. checking for in-range values. This drowns the compute itself in kernel launch overhead. Moreover, `DenseFeatures` has no mechanism for identifying one-hot categorical features, instead using `SparseTensor` machinery for all categorical columns for the sake of robustness. Many sparse TensorFlow ops aren't optimized for GPU, particularly for leveraging those Tensor Cores you're paying for by using mixed precision compute, and this further bottlenecks GPU performance.\n",
    "\n",
    "Because we're now doing all our transformations in NVTabular, and we *know* all of our categorical features are one-hot, we can use a better-optimized embedding layer, NVTabular's `DenseFeatures` layer, that leverages this information. Below, we'll see how we can use such a layer to implement the input ingestion pattern of the DLRM architecture. Note how the numeric and categorical features are handled entirely separately: this is a peculiarity of DLRM, and it's worth noting that our `DenseFeatures` layer makes no assumptions about the combinations of categorical and continuous inputs. As a helpful exercise, I would encourage the reader to think of *other* input ingestion patterns that might capture information that DLRM's does not, and use these same building blocks to mock up an example."
   ]
  },
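  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before looking at the layer itself, it's worth seeing how little is going on in that fundamental operation. Stripped of learning, lookup-and-stack is just row indexing into per-feature tables (a NumPy sketch with made-up feature names and dimensions; in the real layers the tables are learned `tf.Variable`s):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "batch_size, embedding_dim = 4, 8\n",
    "cardinalities = {'C1': 1000, 'C2': 500, 'C3': 2500}\n",
    "\n",
    "# one (in practice, learned) table per categorical feature\n",
    "rng = np.random.default_rng(0)\n",
    "tables = {name: rng.normal(size=(card, embedding_dim))\n",
    "          for name, card in cardinalities.items()}\n",
    "\n",
    "# a batch of categorical indices, one column per feature\n",
    "indices = {name: rng.integers(0, card, size=batch_size)\n",
    "           for name, card in cardinalities.items()}\n",
    "\n",
    "# 'lookup' is row indexing; 'stack' yields (batch, n_features, dim),\n",
    "# the shape DLRM's interaction term wants\n",
    "embedded = np.stack([tables[name][indices[name]] for name in cardinalities], axis=1)\n",
    "print(embedded.shape)  # (4, 3, 8)"
   ]
  },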
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "class DLRMEmbedding(tf.keras.layers.Layer):\n",
    "    def __init__(self, columns, accelerated=False, **kwargs):\n",
    "        is_cat = lambda col: hasattr(col, \"categorical_column\")\n",
    "        embedding_columns = list(filter(is_cat, columns))\n",
    "        numeric_columns = list(filterfalse(is_cat, columns))\n",
    "\n",
    "        self.categorical_feature_names = [col.categorical_column.name for col in embedding_columns]\n",
    "        self.numeric_feature_names = [col.name for col in numeric_columns]\n",
    "\n",
    "        if not accelerated:\n",
    "            # need DenseFeatures layer to perform transformations,\n",
    "            # so we're stuck with the whole thing\n",
    "            self.categorical_densifier = tf.keras.layers.DenseFeatures(embedding_columns)\n",
    "            self.categorical_reshape = tf.keras.layers.Reshape((len(embedding_columns), -1))\n",
    "            self.numeric_densifier = tf.keras.layers.DenseFeatures(numeric_columns)\n",
    "        else:\n",
    "            # otherwise we can do a much faster embedding that\n",
    "            # doesn't break out the SparseTensor machinery\n",
    "            self.categorical_densifier = layers.DenseFeatures(embedding_columns, aggregation=\"stack\")\n",
    "            self.categorical_reshape = None\n",
    "            self.numeric_densifier = layers.DenseFeatures(numeric_columns, aggregation=\"concat\")\n",
    "        super(DLRMEmbedding, self).__init__(**kwargs)\n",
    "\n",
    "    def call(self, inputs):\n",
    "        if not isinstance(inputs, dict):\n",
    "            raise TypeError(\"Expected a dict!\")\n",
    "\n",
    "        categorical_inputs = {name: inputs[name] for name in self.categorical_feature_names}\n",
    "        numeric_inputs = {name: inputs[name] for name in self.numeric_feature_names}\n",
    "\n",
    "        fm_x = self.categorical_densifier(categorical_inputs)\n",
    "        dense_x = self.numeric_densifier(numeric_inputs)\n",
    "        if self.categorical_reshape is not None:\n",
    "            fm_x = self.categorical_reshape(fm_x)\n",
    "        return fm_x, dense_x\n",
    "\n",
    "    def get_config(self):\n",
    "        # I'm going to be lazy here. Sue me.\n",
    "        return {}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Putting Our Differences Aside\n",
    "As a practical matter, that *does it* for the differences between a typical TensorFlow pipeline and an NVTabular accelerated pipeline. Let's review where they've diverged so far:\n",
    "- We needed different feature columns because we're no longer using TensorFlow's transformation code for the hash bucketing\n",
    "- We needed a different data loader because we're reading parquet files instead of tfrecords (and using NVTabular to hash that data online)\n",
    "- We needed a different embedding layer because the existing one is suboptimal and we don't need most of its functionality\n",
    "\n",
    "Once the data is ready to be consumed by the network, we really *shouldn't* be doing anything different. So from here on out we'll just define the DLRM architecture using Keras, and then define a training function which uses the components we've built so far to string together a functional training run! Note that we'll use a layer implemented by NVTabular, `DotProductInteraction`, which computes the FM component of the DLRM architecture (and can generalize to parameterized variants of the interactions proposed in the [FibiNet](https://arxiv.org/abs/1905.09433) architecture as well)."
   ]
  },
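  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition about what `DotProductInteraction` computes, the operation is easy to state: given stacked `(batch, n_features, dim)` embeddings, take the dot product of every pair of feature vectors and keep each pair once (the strict upper triangle). A NumPy sketch of that math (not NVTabular's implementation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def dot_product_interaction(x):\n",
    "    # x: (batch, n_features, dim) stacked embeddings\n",
    "    # all pairwise dot products between feature vectors\n",
    "    pairwise = np.matmul(x, np.transpose(x, (0, 2, 1)))  # (batch, n, n)\n",
    "    rows, cols = np.triu_indices(x.shape[1], k=1)        # each pair exactly once\n",
    "    return pairwise[:, rows, cols]                       # (batch, n*(n-1)/2)\n",
    "\n",
    "batch, n_features, dim = 4, 27, 8\n",
    "x = np.random.default_rng(0).normal(size=(batch, n_features, dim))\n",
    "out = dot_product_interaction(x)\n",
    "print(out.shape)  # (4, 351)"
   ]
  },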
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ReLUMLP(tf.keras.layers.Layer):\n",
    "    def __init__(self, dims, output_activation, **kwargs):\n",
    "        self.layers = []\n",
    "        for dim in dims[:-1]:\n",
    "            self.layers.append(tf.keras.layers.Dense(dim, activation=\"relu\"))\n",
    "        self.layers.append(tf.keras.layers.Dense(dims[-1], activation=output_activation))\n",
    "        super(ReLUMLP, self).__init__(**kwargs)\n",
    "\n",
    "    def call(self, x):\n",
    "        for layer in self.layers:\n",
    "            x = layer(x)\n",
    "        return x\n",
    "\n",
    "    def get_config(self):\n",
    "        return {\n",
    "            \"dims\": [layer.units for layer in self.layers],\n",
    "            \"output_activation\": self.layers[-1].activation\n",
    "        }\n",
    "\n",
    "class DLRM(tf.keras.layers.Layer):\n",
    "    def __init__(self, embedding_dim, top_mlp_hidden_dims, bottom_mlp_hidden_dims, **kwargs):\n",
    "        self.top_mlp = ReLUMLP(top_mlp_hidden_dims + [embedding_dim], \"linear\", name=\"top_mlp\")\n",
    "        self.bottom_mlp = ReLUMLP(bottom_mlp_hidden_dims + [1], \"linear\", name=\"bottom_mlp\")\n",
    "        self.interaction = layers.DotProductInteraction()\n",
    "\n",
    "        # adding in an activation layer for stability for mixed precision training\n",
    "        # not strictly necessary, but worth pointing out\n",
    "        self.activation = tf.keras.layers.Activation(\"sigmoid\", dtype=\"float32\")\n",
    "        self.double_check = tf.keras.layers.Lambda(\n",
    "            lambda x: tf.clip_by_value(x, 0., 1.), dtype=\"float32\")\n",
    "        super(DLRM, self).__init__(**kwargs)\n",
    "\n",
    "    def call(self, inputs):\n",
    "        dense_x, fm_x = inputs\n",
    "        dense_x = self.top_mlp(dense_x)\n",
    "        dense_x_expanded = tf.expand_dims(dense_x, axis=1)\n",
    "\n",
    "        x = tf.concat([fm_x, dense_x_expanded], axis=1)\n",
    "        x = self.interaction(x)\n",
    "        x = tf.concat([x, dense_x], axis=1)\n",
    "        x = self.bottom_mlp(x)\n",
    "\n",
    "        # stuff I'm adding in for mixed precision stability\n",
    "        # not actually related to DLRM at all\n",
    "        x = self.activation(x)\n",
    "        x = self.double_check(x)\n",
    "        return x\n",
    "\n",
    "    def get_config(self):\n",
    "        return {\n",
    "            \"embedding_dim\": self.top_mlp.layers[-1].units,\n",
    "            \"top_mlp_hidden_dims\": [layer.units for layer in self.top_mlp.layers[:-1]],\n",
    "            \"bottom_mlp_hidden_dims\": [layer.units for layer in self.bottom_mlp.layers[:-1]]\n",
    "        }"
   ]
  },
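  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the interaction step above concrete: given a `(batch, num_features, embedding_dim)` stack of embedding vectors, `DotProductInteraction` produces the unique pairwise dot products between them. NVTabular provides this as a layer, but here is a minimal sketch of an equivalent computation in plain TensorFlow (illustrative only, not the library's implementation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf  # already imported earlier in the notebook\n",
    "\n",
    "def dot_product_interaction(x):\n",
    "    # x: (batch, num_features, embedding_dim)\n",
    "    # all pairwise dot products: (batch, num_features, num_features)\n",
    "    interactions = tf.matmul(x, x, transpose_b=True)\n",
    "\n",
    "    # keep only the strictly lower triangular entries, i.e. the\n",
    "    # num_features * (num_features - 1) / 2 unique pairs, flattened per sample\n",
    "    num_features = x.shape[1]\n",
    "    idx = tf.range(num_features)\n",
    "    mask = idx[:, None] > idx[None, :]\n",
    "    return tf.boolean_mask(interactions, mask, axis=1)"
   ]
  },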
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is an ugly little function I have for giving a more useful reporting of the model parameter count, since the embedding parameters will dominate the total count yet account for very little of the actual learning capacity. Unless you're curious, just execute the cell and keep moving."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "def print_param_counts(model):\n",
    "    # I want to go on record as saying I abhor\n",
    "    # importing inside a function, but I didn't want to\n",
    "    # make anyone think these imports were strictly\n",
    "    # *necessary* for a normal training pipeline\n",
    "    from functools import reduce\n",
    "\n",
    "    num_embedding_params, num_network_params = 0, 0\n",
    "    for weight in model.trainable_weights:\n",
    "        weight_param_count = reduce(lambda x,y: x*y, weight.shape)\n",
    "        if re.search(\"/embedding_weights:[0-9]+$\", weight.name) is not None:\n",
    "            num_embedding_params += weight_param_count\n",
    "        else:\n",
    "            num_network_params += weight_param_count\n",
    "\n",
    "    print(\"Embedding parameter count: {}\".format(num_embedding_params))\n",
    "    print(\"Non-embedding parameter count: {}\".format(num_network_params))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We'll also include some callbacks to use TensorFlow's incredible TensorBoard tool, both to track training metrics and to profile our GPU performance to diagnose and remove bottlenecks. We'll also use a custom summary metric to monitor throughput in samples per second, to get a sense for the acceleration our improvements bring us. I'm building a function for this just because, like the function above, it's not strictly *necessary*, particularly the throughput hook, so I don't want to muddle the clarity of the actual training function by doing this there."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_callbacks(device, accelerated=False):\n",
    "    run_name = device + \"_\" + (\"accelerated\" if accelerated else \"native\")\n",
    "    if mixed_precision.global_policy().name == \"mixed_float16\":\n",
    "        run_name += \"_mixed-precision\"\n",
    "\n",
    "    log_dir = os.path.join(LOG_DIR, run_name)\n",
    "    file_writer = tf.summary.create_file_writer(os.path.join(log_dir, \"metrics\"))\n",
    "    file_writer.set_as_default()\n",
    "\n",
    "    # note that we're going to be doing some profiling from batches 90-100, and so\n",
    "    # should expect to see a throughput dip there (since both the profiling itself\n",
    "    # and the export of the stats it gathers will eat up time). Thus, as a rule,\n",
    "    # it's not always necessary or desirable to be profiling every training run\n",
    "    # you do\n",
    "    return [\n",
    "        ThroughputLogger(BATCH_SIZE),\n",
    "        tf.keras.callbacks.TensorBoard(log_dir, update_freq=20, profile_batch=\"90,100\")\n",
    "    ]"
   ]
  },
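  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `ThroughputLogger` used above was defined earlier in the notebook. If you're adapting this pipeline, a minimal version of such a callback can be sketched as follows (a simplified stand-in, not necessarily the exact implementation used here): it times each batch and writes a samples-per-second scalar to the default summary writer that `get_callbacks` sets up."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "import tensorflow as tf  # already imported earlier in the notebook\n",
    "\n",
    "class SimpleThroughputLogger(tf.keras.callbacks.Callback):\n",
    "    # hypothetical minimal stand-in for the ThroughputLogger used above\n",
    "    def __init__(self, batch_size):\n",
    "        super().__init__()\n",
    "        self.batch_size = batch_size\n",
    "        self._start = None\n",
    "\n",
    "    def on_train_batch_begin(self, batch, logs=None):\n",
    "        self._start = time.time()\n",
    "\n",
    "    def on_train_batch_end(self, batch, logs=None):\n",
    "        # samples/sec for this batch, written to the default file writer\n",
    "        throughput = self.batch_size / (time.time() - self._start)\n",
    "        tf.summary.scalar(\"Throughput\", throughput, step=batch)"
   ]
  },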
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So, finally, below we will define our training pipeline from end to end. Take a look at the comments to see how each component we've built so far plugs in. What's great about such a pipeline is that it's more or less agnostic to what the schema returned by `get_feature_columns` looks like (subject of course to the constraint that there are no multi-hot categorical or vectorized continuous features, which aren't supported yet). In fact, from a certain point of view it would make sense to make the columns and filenames an *input* to this function (and possibly even the architecture itself as well). But I'll leave that level of robustness to you for when you build your own pipeline.\n",
    "\n",
    "The last thing I'll mention is that we're just going to do training below. The validation picture gets slightly complicated by the fact that `model.fit` doesn't accept Keras `Sequence` objects as validation data. To support this, we've built an extremely lightweight Keras callback to handle validation, `KerasSequenceValidater`. To see how to use it, consult the [Rossmann Store Sales example notebook](../rossmann-store-sales-example.ipynb) in the directory above this, and consider extending its functionality to support more exotic validation metrics."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "def fit_a_model(accelerated=False, cpu=False):\n",
    "    # get our columns to describe our dataset\n",
    "    columns = get_feature_columns()\n",
    "\n",
    "    # build a dataset from those descriptions\n",
    "    file_pattern = PARQUETS if accelerated else TFRECORDS\n",
    "    train_dataset, columns = make_dataset(file_pattern, columns, accelerated=accelerated)\n",
    "\n",
    "    # build our Keras model, using column descriptions to build input tensors\n",
    "    inputs = {}\n",
    "    for column in columns:\n",
    "        column = getattr(column, \"categorical_column\", column)\n",
    "        dtype = getattr(column, \"dtype\", tf.int64)\n",
    "        input = tf.keras.Input(name=column.name, shape=(1,), dtype=dtype)\n",
    "        inputs[column.name] = input\n",
    "\n",
    "    fm_x, dense_x = DLRMEmbedding(columns, accelerated=accelerated)(inputs)\n",
    "    x = DLRM(EMBEDDING_DIM, TOP_MLP_HIDDEN_DIMS, BOTTOM_MLP_HIDDEN_DIMS)([dense_x, fm_x])\n",
    "    model = tf.keras.Model(inputs=list(inputs.values()), outputs=x)\n",
    "\n",
    "    # compile our Keras model with our desired loss, optimizer, and metrics\n",
    "    optimizer = tf.keras.optimizers.Adam(LEARNING_RATE)\n",
    "    metrics = [tf.keras.metrics.AUC(curve=\"ROC\", name=\"auroc\")]\n",
    "    model.compile(optimizer, \"binary_crossentropy\", metrics=metrics)\n",
    "    print_param_counts(model)\n",
    "\n",
    "    # name our run and grab our callbacks\n",
    "    device = \"cpu\" if cpu else \"gpu\"\n",
    "    callbacks = get_callbacks(device, accelerated=accelerated)\n",
    "\n",
    "    # now fit the model\n",
    "    model.fit(train_dataset, epochs=1, steps_per_epoch=STEPS, callbacks=callbacks)\n",
    "\n",
    "    # just because I'm doing multiple runs back-to-back, I'm going to\n",
    "    # clear the Keras session to free up memory now that we're done.\n",
    "    # You don't need to do this in a typical training script\n",
    "    tf.keras.backend.clear_session()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One particularly cool feature of TensorFlow's TensorBoard tool is that we can embed it directly into this notebook. This way, we can monitor training metrics, including throughput, as well as take a look at the in-depth profiles the most recent versions of TensorBoard can generate, without every having to leave the comfort of this browser tab."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Reusing TensorBoard on port 6006 (pid 370), started 0:01:41 ago. (Use '!kill 370' to kill it.)"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "\n",
       "      <iframe id=\"tensorboard-frame-5bc20f560ebc98fb\" width=\"100%\" height=\"800\" frameborder=\"0\">\n",
       "      </iframe>\n",
       "      <script>\n",
       "        (function() {\n",
       "          const frame = document.getElementById(\"tensorboard-frame-5bc20f560ebc98fb\");\n",
       "          const url = new URL(\"/\", window.location);\n",
       "          const port = 6006;\n",
       "          if (port) {\n",
       "            url.port = port;\n",
       "          }\n",
       "          frame.src = url;\n",
       "        })();\n",
       "      </script>\n",
       "    "
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "if not os.path.exists(LOG_DIR):\n",
    "    os.mkdir(LOG_DIR)\n",
    "\n",
    "%load_ext tensorboard\n",
    "%tensorboard --logdir /home/docker/logs --host 0.0.0.0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We'll start by doing a training run on CPU using all the default TensorFlow tools. Since I'm less concerned about profiling this run, we'll just note the throughput and then move on."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Embedding parameter count: 188746160\n",
      "Non-embedding parameter count: 2747145\n",
      "1000/1000 [==============================] - 2483s 2s/step - loss: 0.1317 - auroc: 0.7485\n"
     ]
    }
   ],
   "source": [
    "with tf.device(\"/CPU:0\"):\n",
    "    fit_a_model(accelerated=False, cpu=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, let's do the exact same run, but this time on GPU. This will give us some indication of the \"out-of-the-box\" acceleration generated by GPU-based training. To spoil the surprise, we'll find that it's not particularly impressive, and we'll start to get an indication of *why* that is."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Embedding parameter count: 188746160\n",
      "Non-embedding parameter count: 2747145\n",
      "1000/1000 [==============================] - 406s 406ms/step - loss: 0.1307 - auroc: 0.7474\n"
     ]
    }
   ],
   "source": [
    "fit_a_model(accelerated=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you look at the \"Throughput\" metric in your TensorBoard instance above, you should see something like this\n",
    "<img src=\"imgs/cpu-native_vs_gpu-native.PNG\"></img>\n",
    "\n",
    "This shows a roughly 3-4x improvement in throughput attained simply by moving native TensorFlow code from CPU to GPU. While this is OK, anyone who has ever trained a convolutional model on both CPU and GPU will be disappointed by that figure. Shouldn't parallel computing be able to help a lot more than that?\n",
    "\n",
    "To understand why this is, switch to the \"Profile\" tab on Tensorboard and take a look at the trace view for your `gpu_native` model\n",
    "<img src=\"imgs/gpu-native-trace.PNG\"></img>\n",
    "\n",
    "This trace view shows us when individual ops take place during the course of a training step, which piece of hardware (CPU or GPU, aka the \"host\" or \"device\") is used to execute them, and how long that execution takes. This is useful because it not only can show us which ops are taking the longest (and so motivate ways to accelerate or remove them), but also when ops aren't running at all! Let's zoom in on this portion of one training step.\n",
    "<img src=\"imgs/gpu-native-trace-zoom.PNG\"></img>\n",
    "\n",
    "Here we see compute being done by the GPU for the first ~120 ms of our training step. Notice anything missing?\n",
    "\n",
    "The issue here is that many of the ops being implemented by `feature_column`s either don't have GPU kernels, requiring data to be passed back and forth between the host and the GPU, or are so small as to not be worth a kernel launch in the first place. Moreover, the `categorical_column_with_hash_bucket`'s in particular implements a costly string mapping for integer categories before hashing.\n",
    "\n",
    "Taken together, these deficiencies provide a enormous drag on GPU acceleration. By contrast, NVTabular's fast parquet data loaders get your data on the GPU as soon as possible, and use super fast GPU-based preprocessing operations to keep it their waiting to be consumed by your network. By leveraging this fact to write faster, more efficient embedding layers, we can shift the training bottleneck to the math-heavy matrix algebra GPUs are best at.\n",
    "\n",
    "With this in mind, let's try training with NVTabular's accelerated tools and get a sense for the speed up we can expect."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Embedding parameter count: 188746160\n",
      "Non-embedding parameter count: 2747145\n",
      "1000/1000 [==============================] - 160s 160ms/step - loss: 0.1290 - auroc: 0.7666\n"
     ]
    }
   ],
   "source": [
    "fit_a_model(accelerated=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Our \"Throughput\" metric should now look like\n",
    "<img src=\"imgs/cpu-native_vs_gpu-native_vs_gpu-accelerated.PNG\"></img>\n",
    "\n",
    "The first thing to note is that this gets us a 2.5-3x boost over native GPU performance, translating to a ~10x improvement over CPU. That's beginning to get closer to the value we should expect GPU training to bring. To get a picture of why this is, let's take a look at the trace view again\n",
    "<img src=\"imgs/gpu-accelerated-trace.PNG\"></img>\n",
    "\n",
    "There's almost no blank space on the GPU portion of the trace, and the ops that *are* on the trace actually occupy a reasonable amount of time, more effectively leveraging GPU resources. You can see this if you watch the output of `nvidia-smi` during training too: GPU utilization is higher and more consistent when using NVTabular for training, which is great, since usually you're paying for the whole GPU whether you're utilizing it all or not. Think of this as just getting more bang for your buck.\n",
    "\n",
    "The story doesn't end here, either. If you're using a Volta, T4, or Ampere GPU, you have silicon optimized for FP16 compute called Tensor Cores. This lower precision compute is particularly valuable if the majority of your training time is spent on math heavy ops like matrix multiplications. Since we saw that using NVTabular for data loading and preprocessing moves the training bottleneck from data loading to network compute, we should expect to see some pretty good throughput gains from switching to **mixed precision** training. Luckily, Keras has APIs that make changing this compute style extremely simple."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "# update our precision policy to use mixed\n",
    "policy = mixed_precision.Policy(\"mixed_float16\")\n",
    "mixed_precision.set_policy(policy)"
   ]
  },
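  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What \"mixed\" means here: under this policy, layers compute in float16 but keep their variables in float32, which is what keeps the weight updates numerically sane. We can confirm this through the `Policy` object's public attributes (a quick sanity check; note that the import path for `mixed_precision` varies between TensorFlow versions):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "policy = mixed_precision.Policy(\"mixed_float16\")\n",
    "print(\"compute dtype: \", policy.compute_dtype)   # float16\n",
    "print(\"variable dtype:\", policy.variable_dtype)  # float32"
   ]
  },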
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So now let's compare the advantage wrought by mixed precision training in both the native and accelerated pipelines. One thing I'll note right now is that this architecture has some stability issues in lower precision, and the loss may diverge or nan-out. Increasing numeric stability across model architectures is an ongoing project for NVIDIA, and coverage for most popular tabular architectures and their components should be there soon. So while from a practical standpoint mixed precision compute may not be able to help you *today*, it's still good to know that it's a powerful options to keep an eye on for the near future."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Embedding parameter count: 188746160\n",
      "Non-embedding parameter count: 2747145\n",
      "1000/1000 [==============================] - 394s 394ms/step - loss: 0.6790 - auroc: 0.4979\n"
     ]
    }
   ],
   "source": [
    "fit_a_model(accelerated=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now our \"Throughput\" metric should show\n",
    "<img src=\"imgs/cpu-native_vs_gpu-native_vs_gpu-accelerated_vs_gpu-native-mp.PNG\"></img>\n",
    "\n",
    "As we expected, adding mixed precision compute to the native pipeline doesn't help much, since our training was bottlenecked by things like CPU compute, data transfer, and kernel overhead, none of which reduced-precision GPU compute does anything to address. Let's see what the gains look like when we remove these bottlenecks using NVTabular."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Embedding parameter count: 188746160\n",
      "Non-embedding parameter count: 2747145\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/conda/envs/rapids/lib/python3.7/site-packages/tensorflow/python/framework/indexed_slices.py:432: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.\n",
      "  \"Converting sparse IndexedSlices to a dense Tensor of unknown shape. \"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1000/1000 [==============================] - 82s 82ms/step - loss: 0.2073 - auroc: 0.5284\n"
     ]
    }
   ],
   "source": [
    "fit_a_model(accelerated=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now our \"Throughput\" metric should look like this:\n",
    "<img src=\"imgs/cpu-native_vs_gpu-native_vs_gpu-accelerated_vs_gpu-native-mp_vs_gpu-accelerated-mp.PNG\"></img>\n",
    "\n",
    "By adding in two lines of code to our accelerated pipeline, we can get an over 2x additional improvement in throughput! And again, this should stand to reason, since removing the data loading and preprocessing bottlenecks now makes the most costly parts of our pipeline the matrix multiplies in the dense layers, which are ripe for acceleration via FP16.\n",
    "\n",
    "Take for example the matmul in the second layer of the bottom MLP. We can take find it on the trace view and click on it for a timing breakdown at full precision:\n",
    "<img src=\"imgs/full-precision-matmul.PNG\"></img>\n",
    "\n",
    "So it takes around 9 ms to run. Let's take a look at the same measurement when using mixed precision:\n",
    "<img src=\"imgs/mixed-precision-matmul.PNG\"></img>\n",
    "That's a factor of over 6x improvement! Not bad for an extra line or two of code.\n",
    "\n",
    "\n",
    "As a final tip for interested mixed precision users, the particularly astute observer might have noticed that the matmul in the first layer of the bottom MLP (the `dense_4` layer) didn't enjoy the same level acceleration as the one in this second layer. Why is that?\n",
    "\n",
    "This is getting a bit beyond the scope of this tutorial, but it's worth noting here that reduced precision kernels require all relevant dimensions to be multiples of 16 in order to be accelerated. The dimension of the input to the bottom MLP, however, can't be controlled directy and is decided by the size of your data. For example, if you have $N$ categorical features and an embedding dimension of $k$, in the DLRM architecture the dimension of this vector will be $\\frac{(N+1)N}{2} + k$. As an exercise, try padding this vector with 0s to the nearest multiple of 16 and see what sort of acceleration FP16 compute provides then."
   ]
  },
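  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hint for that exercise, here is one way to sketch the padding (a hypothetical helper, not part of the pipeline above); in the DLRM `call` method you would apply it to `x` right before the `self.bottom_mlp(x)` call:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf  # already imported earlier in the notebook\n",
    "\n",
    "def pad_to_multiple_of_16(x):\n",
    "    # zero-pad the last dimension up to the next multiple of 16 so\n",
    "    # downstream matmuls can hit Tensor Core-friendly shapes\n",
    "    dim = x.shape[-1]\n",
    "    padded_dim = ((dim + 15) // 16) * 16\n",
    "    return tf.pad(x, [[0, 0], [0, padded_dim - dim]])"
   ]
  },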
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Conclusions\n",
    "Keras represents an incredibly robust and powerful way to rapidly iterate on new ideas for representing relationships between variables in tabular deep learning models, leading to better learning and, hopefully, to a better understanding of the systems we're trying to model. However, inefficiencies in certain modules related to data loading and preprocessing have so far limited the ability of GPUs to provide useful acceleration to these models. By leveraging NVTabular to replace these modules, we can not only achieve stellar acceleration with minimal coding overhead, but also shift our training bottlenecks in order to introduce the possibility of further acceleration farther down the pipeline."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
