{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Data Loading & Preprocessing for Deep Learning with the tf.data API\n",
    "\n",
     "Training a Deep Learning model on a very large dataset involves loading the data into memory (RAM) and preprocessing it, which includes type casting, scaling, and augmentation. The problem is that the large datasets used for training Deep Learning models usually do not fit into RAM. Thus, we need to dynamically read and preprocess batches of data.\n",
    "\n",
    "\n",
     "An efficient tool for dynamically loading and preprocessing data is **TensorFlow's Data API (tf.data)**. This API works with tf.keras. The tf.data API makes it possible to build complex input pipelines that aggregate data from multiple files stored on disk, perform per-element transformations (e.g., normalization), apply data augmentations (e.g., resizing, rotation, zooming), create batches of data, and more.\n",
    "\n",
    "The Data API is able to read from text files (such as CSV files), binary files with fixed-size records (e.g., .mat files), and binary files that use TensorFlow’s TFRecord format, which supports records of varying sizes. \n",
    "\n",
     "The tf.data API introduces a **tf.data.Dataset** abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. The Dataset is used to represent a very large set of elements.\n",
    "\n",
    "\n",
    "\n",
    "## Dataset Object for Image Recognition\n",
    "\n",
     "In practical Deep Learning experiments, data is typically stored on disks. For example, in object recognition problems the image data is stored locally on the disk. First, we need to **construct** a Dataset object from the local image repository. Then, the Dataset object needs to be **transformed** to load image-label pairs. Prior to loading, we need to decode each encoded image (e.g., PNG or JPEG) into a Tensor object, type-cast it (e.g., to float32), scale it, and obtain the image label (typically from the nested structure of the image directories). Finally, the images have to be put into batches for training the model. These steps are described in a later notebook.  \n",
    "\n",
    "In this notebook, we present Dataset techniques for loading and preprocessing a simple data artifact (i.e., a Python list). Specifically, we describe two steps.\n",
    "- Constructing a Dataset object\n",
    "- Transforming a Dataset object\n",
    "\n",
    "\n",
    "## Constructing a Dataset\n",
    "\n",
    "To construct a Dataset object from the data artifact in memory, we may use the following methods. \n",
    "\n",
    "- tf.data.Dataset.from_tensors(): constructs a Dataset with a single element, comprising the given tensors.\n",
    "\n",
    "- tf.data.Dataset.from_tensor_slices(): constructs a Dataset whose elements are slices of the given tensors.\n",
    "\n",
    "- tf.data.Dataset.list_files(): constructs a Dataset from input files matching one or more glob patterns.\n",
    "\n",
    "Alternatively, if the input data is stored in a file in the recommended TFRecord format, we may use the tf.data.TFRecordDataset() method.\n",
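     "\n",
     "As a quick illustration (using a hypothetical toy tensor), the first two constructors differ in how they slice the input:\n",
     "\n",
     "```python\n",
     "import tensorflow as tf\n",
     "\n",
     "X = tf.constant([[1, 2], [3, 4], [5, 6]])\n",
     "\n",
     "# from_tensors(): ONE element containing the whole tensor\n",
     "ds_single = tf.data.Dataset.from_tensors(X)\n",
     "\n",
     "# from_tensor_slices(): one element PER row (slices along the first dimension)\n",
     "ds_slices = tf.data.Dataset.from_tensor_slices(X)\n",
     "\n",
     "print(len(list(ds_single)))  # 1 element, of shape (3, 2)\n",
     "print(len(list(ds_slices)))  # 3 elements, each of shape (2,)\n",
     "```\n",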
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "## Transforming a Dataset \n",
    "\n",
    "\n",
    "Once we have a Dataset object, we can transform it into a new Dataset by chaining method calls on the Dataset object. There are generally two types of transformations that we can apply.\n",
    "\n",
    "- Per-element transformation: using the map() method (e.g., for loading data as image tensor and label pairs, scaling, augmentation, etc.)\n",
    "- Multi-element transformation: using the batch() method\n",
    "\n",
    "\n",
     "In addition to these two methods, the following methods are used for preprocessing or preparing the dataset for training: cache(), shuffle(), repeat(), prefetch(), and interleave().\n",
    "\n",
    "Below we describe these methods briefly.\n",
    "\n",
    "\n",
    "- cache(filename)\n",
    "\n",
     "It stores the elements of the Dataset, which is useful for future reuse. The first time the Dataset is iterated over, its elements will be cached either in the specified file or in memory (the default behavior). Subsequent iterations will use the cached Dataset.\n",
    "\n",
    "With a small enough dataset, the cache method makes the training extra fast because the data is saved in memory after the first epoch. For larger datasets, it may be possible to cache the data to a file.\n",
    "\n",
    "\n",
    "- map(map_func, num_parallel_calls=None)\n",
    "\n",
    "It applies the given transformation function \"map_func\" to the input Dataset. We can parallelize this process by setting the \"num_parallel_calls\" parameter. For example, we may set the \"num_parallel_calls\" to the number of threads/processes that can be used for transformation. Alternatively, we may use the value tf.data.AUTOTUNE, which dynamically sets the number of parallel calls based on available CPU.\n",
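     "\n",
     "A minimal sketch of a parallel map on toy data (tf.data.AUTOTUNE picks the degree of parallelism; with the default deterministic behavior, element order is preserved):\n",
     "\n",
     "```python\n",
     "import tensorflow as tf\n",
     "\n",
     "ds = tf.data.Dataset.from_tensor_slices(tf.range(4))\n",
     "# Apply a per-element scaling function, parallelized across elements\n",
     "ds = ds.map(lambda x: tf.cast(x, tf.float32) / 10.0,\n",
     "            num_parallel_calls=tf.data.AUTOTUNE)\n",
     "print(list(ds.as_numpy_iterator()))  # [0.0, 0.1, 0.2, 0.3]\n",
     "```\n",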
    "\n",
    "\n",
    "- shuffle(buffer_size, seed=None, reshuffle_each_iteration=None)\n",
    "\n",
    "The shuffle method randomly shuffles the elements of the Dataset. It fills a buffer with buffer_size elements, then randomly samples elements from the buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer_size greater than or equal to the full size of the Dataset is required.\n",
    "\n",
    "For instance, if the Dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer.\n",
    "\n",
     "By default, the \"reshuffle_each_iteration\" argument is None (treated as True). As a consequence, when batching is performed, the Dataset will produce different batches during each epoch. To produce the same batches in every epoch, its value should be set to False.\n",
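     "\n",
     "A small sketch of this behavior (toy data, with a fixed seed assumed for reproducibility):\n",
     "\n",
     "```python\n",
     "import tensorflow as tf\n",
     "\n",
     "# With reshuffle_each_iteration=False, every pass over the dataset\n",
     "# yields the same shuffled order.\n",
     "ds = tf.data.Dataset.range(10).shuffle(buffer_size=10, seed=42,\n",
     "                                       reshuffle_each_iteration=False)\n",
     "first_pass = list(ds.as_numpy_iterator())\n",
     "second_pass = list(ds.as_numpy_iterator())\n",
     "print(first_pass == second_pass)  # True\n",
     "```\n",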
    "\n",
    "\n",
    "- repeat(count) \n",
    "\n",
     "It is used to repeat the Dataset \"count\" times. It is useful in scenarios where the data is exhausted but training should continue: when repeat is set, reading restarts from the beginning of the Dataset. The \"count\" argument is typically set equal to the number of epochs.\n",
    "\n",
    "\n",
    "- batch(batch_size, drop_remainder=False, num_parallel_calls=None) \n",
    "\n",
     "It splits the Dataset into subsets (batches) of the given batch_size. By setting \"drop_remainder\" to True, emitting batches of exactly the same size can be guaranteed. It does so by removing enough training examples so that the size of the training set is divisible by the batch_size. Also, this process can be parallelized by using \"num_parallel_calls\". Typically, we set it to tf.data.AUTOTUNE, which will prompt the tf.data runtime to tune the value dynamically at runtime.\n",
    "\n",
    "\n",
    "- prefetch(buffer_size) \n",
    "\n",
    "It creates a Dataset that prefetches elements from the given Dataset. It is used to prefetch a batch to decouple the time when data is produced from the time when data is consumed. The transformation uses a background thread and an internal buffer to prefetch elements from the input Dataset ahead of the time they are requested. \n",
    "This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.\n",
    "\n",
     "The number of elements to prefetch should be equal to (or possibly greater than) the number of batches consumed by a single training step. Instead of manually tuning this value, we set it to tf.data.AUTOTUNE, \n",
    "which will prompt the tf.data runtime to tune the value dynamically at runtime.\n",
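     "\n",
     "A minimal sketch on toy data (prefetching changes when elements are prepared, not their values, so the batches come out unchanged):\n",
     "\n",
     "```python\n",
     "import tensorflow as tf\n",
     "\n",
     "# Batch first, then prefetch; AUTOTUNE lets tf.data pick the buffer size.\n",
     "ds = tf.data.Dataset.range(10).batch(3).prefetch(tf.data.AUTOTUNE)\n",
     "for batch in ds.as_numpy_iterator():\n",
     "    print(batch)  # [0 1 2], then [3 4 5], [6 7 8], [9]\n",
     "```\n",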
    "\n",
    "\n",
    "- interleave(map_func, cycle_length=None, block_length=None, num_parallel_calls=None) \n",
    "\n",
     "It reads data from different files and parallelizes this process. It applies the \"map_func\" function across the Dataset and interleaves the results.\n",
    "\n",
    "The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator.\n"
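,
     "\n",
     "A minimal sketch with toy in-memory datasets (in practice, map_func would typically open a file, e.g., build a tf.data.TFRecordDataset from a filename):\n",
     "\n",
     "```python\n",
     "import tensorflow as tf\n",
     "\n",
     "ds = tf.data.Dataset.range(1, 3)  # two toy 'sources': 1 and 2\n",
     "interleaved = ds.interleave(\n",
     "    lambda x: tf.data.Dataset.from_tensors(x).repeat(4),\n",
     "    cycle_length=2,   # process 2 input elements concurrently\n",
     "    block_length=2)   # take 2 consecutive elements from each before cycling\n",
     "print(list(interleaved.as_numpy_iterator()))  # [1, 1, 2, 2, 1, 1, 2, 2]\n",
     "```\n"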
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "TensorFlow Version:  2.5.0\n"
     ]
    }
   ],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "print(\"TensorFlow Version: \", tf.__version__) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Demo: Dataset from Toy Data\n",
    "\n",
    "\n",
    "Say that our data source is a tensor object (e.g., a Python list) called X, which is small enough to fit into memory. \n",
    "\n",
    "We construct a Dataset object from X by using the **from_tensor_slices()** method. Its elements are all the slices of X (along the first dimension). This dataset is called the TensorSliceDataset.\n",
    "\n",
    "The Dataset object is a Python iterable. This makes it possible to consume its elements using a for loop."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Construct a Dataset and Display Its Elements"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Dataset Object and its type:  <TensorSliceDataset shapes: (), types: tf.int32>\n",
      "\n",
      "Element Specification:\n",
      " TensorSpec(shape=(), dtype=tf.int32, name=None)\n",
      "\n",
      "Print all elements with their shape and data type information: \n",
      "tf.Tensor(0, shape=(), dtype=int32)\n",
      "tf.Tensor(1, shape=(), dtype=int32)\n",
      "tf.Tensor(2, shape=(), dtype=int32)\n",
      "tf.Tensor(3, shape=(), dtype=int32)\n",
      "tf.Tensor(4, shape=(), dtype=int32)\n",
      "tf.Tensor(5, shape=(), dtype=int32)\n",
      "tf.Tensor(6, shape=(), dtype=int32)\n",
      "tf.Tensor(7, shape=(), dtype=int32)\n",
      "tf.Tensor(8, shape=(), dtype=int32)\n",
      "tf.Tensor(9, shape=(), dtype=int32)\n",
      "\n",
      "Option 1: Print the list of all elements: \n",
      "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
      "\n",
      "Option 2: Print each element independently: \n",
      "0\n",
      "1\n",
      "2\n",
      "3\n",
      "4\n",
      "5\n",
      "6\n",
      "7\n",
      "8\n",
      "9\n"
     ]
    }
   ],
   "source": [
    "# Create a Tensor X (using one of the two techniques below)\n",
    "#X = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n",
    "X = tf.range(10)\n",
    "\n",
    "# Create a \"TensorSliceDataset\" Dataset object\n",
    "dataset = tf.data.Dataset.from_tensor_slices(X)\n",
    "print(\"Dataset Object and its type: \", dataset)\n",
    "\n",
    "# Print the type specification of an element of this dataset\n",
    "print(\"\\nElement Specification:\\n\", dataset.element_spec)\n",
    "\n",
    "\n",
    "'''\n",
    "There are two ways to inspect the dataset.\n",
    "- Technique 1: Print dataset elements directly along with element shapes and types\n",
    "- Technique 2: Print only the dataset elements\n",
    "'''\n",
    "\n",
    "# Technique 1: Print dataset elements directly along with element shapes and types\n",
    "# This is possible because the Dataset object is a Python iterable\n",
    "print(\"\\nPrint all elements with their shape and data type information: \")\n",
    "for i in dataset:\n",
    "    print(i)\n",
    "\n",
    "    \n",
    "# Technique 2: Print only the dataset elements   \n",
    "'''\n",
    "Get the content of the dataset by the as_numpy_iterator() method. \n",
    "The as_numpy_iterator() method returns an iterator.\n",
    "The iterator converts all elements of the dataset to numpy.\n",
    "This method requires that we are running in eager mode and \n",
    "the dataset's element_spec contains only TensorSpec components.\n",
    "We have two options.\n",
    "- Option 1: Print the list\n",
    "- Option 2: Print each element independently\n",
    "'''\n",
    "\n",
    "# Option 1: Print the list\n",
    "print(\"\\nOption 1: Print the list of all elements: \")\n",
    "print(list(dataset.as_numpy_iterator()))\n",
    "\n",
    "# Option 2: Print each element independently\n",
    "print(\"\\nOption 2: Print each element independently: \")\n",
    "for element in dataset.as_numpy_iterator():\n",
    "    print(element)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Transform a Dataset\n",
    "\n",
     "We will apply various transformations to our toy Dataset. Multiple transformation methods can be applied via method chaining.\n",
    "\n",
     "For each transformation method, we briefly describe its behavior and, where methods are chained, the optimal combination.\n",
    "\n",
    "\n",
    "Specifically, we describe the following methods and method combinations.\n",
    "\n",
    "- cache\n",
    "- shuffle (small buffer)\n",
    "- shuffle (large buffer)\n",
    "- batch\n",
    "- repeat\n",
    "- batch --> repeat\n",
    "- repeat --> batch\n",
    "- shuffle --> repeat --> batch\n",
    "- batch --> shuffle --> repeat "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "---------------Cache ---------------------------\n",
      "\n",
      "Original Dataset:\n",
      "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
      "\n",
      "Transformed Dataset is stored in cache:\n",
      "[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\n",
      "\n",
      "Subsequent iterations read the transformed Dataset from cache:\n",
      "[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\n"
     ]
    }
   ],
   "source": [
    "print(\"\\n---------------Cache ---------------------------\")\n",
    "\n",
    "'''\n",
     "The \"cache()\" method stores dataset elements in memory (by default) or in a file for future reuse.\n",
    "- The first time the dataset is iterated over (i.e., first epoch), \n",
    "its elements will be cached either in the specified file or in memory. \n",
    "- Subsequent iterations (i.e., epochs) will use the cached data.\n",
    "This will save some operations (e.g., file opening, data reading, parsing, transforming, etc.) \n",
    "from being executed during each epoch.\n",
    "\n",
    "Caching should be used judiciously.\n",
     "- Smaller dataset (that fits into memory): use the cache method. \n",
     "- Large dataset: typically sharded (split into multiple files) and does not fit in memory.\n",
     "Thus, it should not be cached in memory.\n",
    "'''\n",
    "\n",
    "print(\"\\nOriginal Dataset:\")\n",
    "print(list(dataset.as_numpy_iterator()))\n",
    "\n",
    "'''\n",
    "Example:\n",
    "After loading the dataset, we transform its elements by raising their power to 2.\n",
    "Then, we cache the transformed dataset.\n",
    "'''\n",
    "dataset_1 = dataset.map(lambda x: x**2)\n",
    "dataset_1 = dataset_1.cache()\n",
    "\n",
    "print(\"\\nTransformed Dataset is stored in cache:\")\n",
    "print(list(dataset_1.as_numpy_iterator()))\n",
    "\n",
    "'''\n",
    "Subsequent iterations read from the cache.\n",
    "'''\n",
    "print(\"\\nSubsequent iterations read the transformed Dataset from cache:\")\n",
    "print(list(dataset_1.as_numpy_iterator()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "---------------Shuffle ---------------------------\n",
      "\n",
      "Original Dataset:\n",
      "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
      "\n",
      "---------------Shuffle (small buffer)---------------------------\n",
      "\n",
      "Output of 10 iterations (epochs): Partial Randomness\n",
      "[1, 2, 0, 3, 5, 4, 7, 8, 6, 9]\n",
      "[1, 2, 3, 4, 5, 0, 7, 8, 6, 9]\n",
      "[1, 0, 2, 4, 5, 3, 7, 6, 9, 8]\n",
      "[1, 0, 3, 2, 4, 6, 7, 8, 5, 9]\n",
      "[1, 2, 0, 3, 4, 5, 7, 6, 9, 8]\n",
      "[0, 2, 1, 3, 4, 5, 7, 6, 8, 9]\n",
      "[0, 1, 2, 4, 3, 5, 7, 6, 9, 8]\n",
      "[1, 0, 2, 3, 5, 4, 7, 6, 8, 9]\n",
      "[1, 2, 3, 0, 5, 6, 4, 7, 9, 8]\n",
      "[0, 1, 2, 3, 4, 5, 6, 7, 9, 8]\n",
      "\n",
      "---------------Shuffle (large buffer)---------------------------\n",
      "\n",
      "Output of 10 iterations (epochs): Full Randomness\n",
      "[1, 2, 3, 9, 8, 0, 4, 7, 6, 5]\n",
      "[8, 1, 7, 4, 6, 9, 3, 0, 5, 2]\n",
      "[7, 3, 9, 6, 0, 2, 8, 5, 4, 1]\n",
      "[5, 4, 2, 6, 3, 0, 9, 8, 1, 7]\n",
      "[4, 1, 7, 8, 0, 3, 5, 9, 2, 6]\n",
      "[6, 9, 4, 5, 0, 7, 3, 2, 1, 8]\n",
      "[3, 9, 2, 8, 5, 1, 4, 6, 0, 7]\n",
      "[3, 8, 6, 2, 1, 9, 7, 5, 0, 4]\n",
      "[0, 2, 8, 3, 7, 5, 4, 1, 9, 6]\n",
      "[9, 5, 0, 1, 7, 2, 4, 3, 6, 8]\n"
     ]
    }
   ],
   "source": [
    "print(\"\\n---------------Shuffle ---------------------------\")\n",
    "\n",
    "print(\"\\nOriginal Dataset:\")\n",
    "print(list(dataset.as_numpy_iterator()))\n",
    "\n",
    "'''\n",
    "The cache() method will produce exactly the same elements during each iteration (epoch) through the dataset. \n",
    "For randomizing the iteration order, we need to call the shuffle() method after calling cache().\n",
    "\n",
     "The shuffle() method first fills a buffer with \"buffer_size\" elements.\n",
     "Then, it randomly samples elements from this buffer, replacing the selected elements with new ones.\n",
    "\n",
    "The value of \"buffer_size\" influences the dataset randomization.\n",
    "- Small buffer (smaller than the length of dataset)\n",
    "- Large buffer (greater than or equal to the length of dataset)\n",
    "'''\n",
    "\n",
    "\n",
    "print(\"\\n---------------Shuffle (small buffer)---------------------------\")\n",
    "\n",
    "'''\n",
    "If the buffer size is smaller than the length of the dataset, its elements are not completely randomized.\n",
    "In this example, the dataset contains 10 elements but buffer_size is set to 2.\n",
    "Thus, shuffle will initially select a random element from the first 2 elements in the buffer. \n",
    "Once an element is selected, its space in the buffer is replaced by the next (i.e., 3rd) element, \n",
    "maintaining the 2 element buffer.\n",
    "\n",
     "Observe from the output of 10 iterations that the order of the dataset elements is not purely random.\n",
    "'''\n",
    "\n",
    "dataset_2 = dataset.shuffle(buffer_size=2)\n",
    "\n",
    "print(\"\\nOutput of 10 iterations (epochs): Partial Randomness\")\n",
    "for i in range(10):\n",
    "    print(list(dataset_2.as_numpy_iterator()))\n",
    "    \n",
    "    \n",
    "print(\"\\n---------------Shuffle (large buffer)---------------------------\")\n",
    "\n",
    "'''\n",
    "For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required.\n",
    "'''\n",
    "dataset_3 = dataset.shuffle(buffer_size=10)\n",
    "\n",
    "print(\"\\nOutput of 10 iterations (epochs): Full Randomness\")\n",
    "for i in range(10):\n",
    "    print(list(dataset_3.as_numpy_iterator()))\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "---------------Batch---------------------------\n",
      "<BatchDataset shapes: (None,), types: tf.int32>\n",
      "\n",
      "The component of the batch dataset has an additional outer dimension.\n",
      "tf.Tensor([0 1 2], shape=(3,), dtype=int32)\n",
      "tf.Tensor([3 4 5], shape=(3,), dtype=int32)\n",
      "tf.Tensor([6 7 8], shape=(3,), dtype=int32)\n",
      "tf.Tensor([9], shape=(1,), dtype=int32)\n",
      "\n",
      "Display each batch as a list:\n",
      "[0 1 2]\n",
      "[3 4 5]\n",
      "[6 7 8]\n",
      "[9]\n",
      "\n",
      "---------------Batch (same length)---------------------------\n",
      "\n",
      "Display each batch as a list (all batches have the same length):\n",
      "[0 1 2]\n",
      "[3 4 5]\n",
      "[6 7 8]\n"
     ]
    }
   ],
   "source": [
    "print(\"\\n---------------Batch---------------------------\")\n",
    "\n",
    "'''\n",
    "The batch() method combines consecutive elements of the dataset into batches.\n",
    "The size of the batches is determined by the \"batch_size\" parameter.\n",
    "\n",
    "The components of the resulting element will have an additional outer dimension, \n",
    "which will be batch_size (or N % batch_size for the last element if batch_size \n",
    "does not divide the number of input elements N evenly and drop_remainder parameter is False). \n",
    "'''\n",
    "dataset_4 = dataset.batch(batch_size=3, drop_remainder=False)\n",
    "\n",
    "print(dataset_4)\n",
    "print(\"\\nThe component of the batch dataset has an additional outer dimension.\") \n",
    "for i in dataset_4:\n",
    "    print(i)\n",
    "\n",
    "\n",
    "print(\"\\nDisplay each batch as a list:\")\n",
    "for element in dataset_4.as_numpy_iterator():\n",
    "    print(element)\n",
    "    \n",
    "    \n",
    "print(\"\\n---------------Batch (same length)---------------------------\")\n",
    "'''\n",
    "To create the batches with the same outer dimension or same length, \n",
    "set the \"drop_remainder\" parameter to True.\n",
    "It prevents the smaller batch from being produced, by removing enough training examples. \n",
    "Consequently, the size of the training set will be divisible by the batch_size. \n",
    "'''\n",
    "dataset_5 = dataset.batch(batch_size=3, drop_remainder=True)\n",
    "\n",
    "print(\"\\nDisplay each batch as a list (all batches have the same length):\")\n",
    "for element in dataset_5.as_numpy_iterator():\n",
    "    print(element)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "--------------------Repeat------------------------------\n",
      "\n",
      "Original Dataset:\n",
      "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
      "\n",
      "Dataset repeated 2 times:\n",
      "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
      "\n",
      "---------------Repeat --> Batch---------------------------\n",
      "\n",
      "Original Dataset:\n",
      "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
      "\n",
      "No Repeat (batch size = 3): (runs for 1 epoch)\n",
      "[0 1 2]\n",
      "[3 4 5]\n",
      "[6 7 8]\n",
      "\n",
      "Repeat (batch size = 3): (runs up to 7 epochs)\n",
      "\n",
      "Epoch: 1\n",
      "[0 1 2]\n",
      "[3 4 5]\n",
      "[6 7 8]\n",
      "\n",
      "Epoch: 2\n",
      "[9 0 1]\n",
      "[2 3 4]\n",
      "[5 6 7]\n",
      "\n",
      "Epoch: 3\n",
      "[8 9 0]\n",
      "[1 2 3]\n",
      "[4 5 6]\n",
      "\n",
      "Epoch: 4\n",
      "[7 8 9]\n",
      "[0 1 2]\n",
      "[3 4 5]\n",
      "\n",
      "Epoch: 5\n",
      "[6 7 8]\n",
      "[9 0 1]\n",
      "[2 3 4]\n",
      "\n",
      "Epoch: 6\n",
      "[5 6 7]\n",
      "[8 9 0]\n",
      "[1 2 3]\n",
      "\n",
      "Epoch: 7\n",
      "[4 5 6]\n",
      "[7 8 9]\n",
      "\n",
       "---------------Batch --> Repeat---------------------------\n",
      "\n",
      "Repeat (batch size = 3): (runs up to 6 epochs): Same batches/per epoch\n",
      "\n",
      "Epoch: 1\n",
      "[0 1 2]\n",
      "[3 4 5]\n",
      "[6 7 8]\n",
      "\n",
      "Epoch: 2\n",
      "[0 1 2]\n",
      "[3 4 5]\n",
      "[6 7 8]\n",
      "\n",
      "Epoch: 3\n",
      "[0 1 2]\n",
      "[3 4 5]\n",
      "[6 7 8]\n",
      "\n",
      "Epoch: 4\n",
      "[0 1 2]\n",
      "[3 4 5]\n",
      "[6 7 8]\n",
      "\n",
      "Epoch: 5\n",
      "[0 1 2]\n",
      "[3 4 5]\n",
      "[6 7 8]\n",
      "\n",
      "Epoch: 6\n",
      "[0 1 2]\n",
      "[3 4 5]\n",
      "[6 7 8]\n"
     ]
    }
   ],
   "source": [
    "print(\"\\n--------------------Repeat------------------------------\")\n",
    "\n",
    "'''\n",
    "The repeat() method is used to repeat the dataset.\n",
    "By default (if no argument is used), the dataset is repeated indefinitely.\n",
    "However, if the \"count\" parameter is set,\n",
    "then the dataset is repeated \"count\" number of times.\n",
    "'''\n",
    "\n",
    "print(\"\\nOriginal Dataset:\")\n",
    "print(list(dataset.as_numpy_iterator()))\n",
    "\n",
    "print(\"\\nDataset repeated 2 times:\")\n",
    "dataset_6 = dataset.repeat(count=2)\n",
    "print(list(dataset_6.as_numpy_iterator()))\n",
    "\n",
    "\n",
    "print(\"\\n---------------Repeat --> Batch---------------------------\")\n",
    "\n",
    "'''\n",
     "The repeat() method should be used in scenarios where the data is exhausted but training should continue.\n",
    "For example, if we have 10 samples batched for training and we want to continue the training for 6 epochs,\n",
    "then we need to repeat the dataset at least 6 times.\n",
    "\n",
     "During each epoch, the model uses the whole dataset by breaking it into batches.\n",
    "In this example, batch size is 3, so we get 3 batches to run in 1 epoch.\n",
    "To train the model for 6 epochs, we must repeat the dataset at least 6 times.\n",
    "\n",
    "For training deep learning models, dataset should be repeated based on the number of epochs.\n",
    "If the dataset is repeated indefinitely, then we need to set the step size argument of a model's fit() method,\n",
    "which is determined by (dataset length)/(batch size)\n",
    "'''\n",
    "\n",
    "print(\"\\nOriginal Dataset:\")\n",
    "print(list(dataset.as_numpy_iterator()))\n",
    "\n",
    "print(\"\\nNo Repeat (batch size = 3): (runs for 1 epoch)\")\n",
    "dataset_7 = dataset.batch(batch_size=3, drop_remainder=True)\n",
    "for element in dataset_7.as_numpy_iterator():\n",
    "    print(element)\n",
    "\n",
    "    \n",
    "'''\n",
     "The repeat() method concatenates the repeated copies of the dataset without signaling the end of one epoch \n",
    "and the beginning of the next epoch. \n",
    "Because of this, a batch() method applied after repeat() will yield batches that straddle epoch boundaries.\n",
    "'''\n",
    "print(\"\\nRepeat (batch size = 3): (runs up to 7 epochs)\")\n",
    "dataset_8 = dataset.repeat(6).batch(batch_size=3, drop_remainder=True)\n",
    "\n",
    "iteration = 0\n",
    "epoch_count = 0\n",
    "for element in dataset_8.as_numpy_iterator():\n",
    "    if (iteration%3 == 0):\n",
    "        epoch_count += 1\n",
    "        print(\"\\nEpoch: %d\" % epoch_count)\n",
    "    print(element)\n",
    "    iteration += 1\n",
    "    \n",
    "    \n",
     "print(\"\\n---------------Batch --> Repeat---------------------------\")\n",
    "\n",
    "'''\n",
     "For a clear separation of epochs, we need to put the batch() method before repeat().\n",
    "'''\n",
    "    \n",
    "print(\"\\nRepeat (batch size = 3): (runs up to 6 epochs): Same batches/per epoch\")\n",
    "dataset_9 = dataset.batch(batch_size=3, drop_remainder=True).repeat(6)\n",
    "\n",
    "iteration = 0\n",
    "epoch_count = 0\n",
    "for element in dataset_9.as_numpy_iterator():\n",
    "    if (iteration%3 == 0):\n",
    "        epoch_count += 1\n",
    "        print(\"\\nEpoch: %d\" % epoch_count)\n",
    "    print(element)\n",
    "    iteration += 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "---------------Shuffle --> Repeat ---> Batch ---------------------------\n",
      "\n",
      "Optimal Ordering: shuffle --> repeat --> batch\n",
      "\n",
      "Epoch: 1\n",
      "[6 4 2]\n",
      "[8 3 9]\n",
      "[0 7 5]\n",
      "\n",
      "Epoch: 2\n",
      "[1 9 1]\n",
      "[5 7 2]\n",
      "[0 4 3]\n",
      "\n",
      "Epoch: 3\n",
      "[6 8 6]\n",
      "[5 8 9]\n",
      "[0 2 4]\n",
      "\n",
      "Epoch: 4\n",
      "[3 1 7]\n",
      "[4 6 9]\n",
      "[5 0 3]\n",
      "\n",
      "Epoch: 5\n",
      "[7 2 1]\n",
      "[8 2 8]\n",
      "[9 6 5]\n",
      "\n",
      "Epoch: 6\n",
      "[3 4 0]\n",
      "[1 7 4]\n",
      "[0 7 3]\n",
      "\n",
      "Epoch: 7\n",
      "[5 9 6]\n",
      "[8 2 1]\n",
      "\n",
      "Non-optimal Ordering: batch --> shuffle --> repeat\n",
      "\n",
      "Epoch: 1\n",
      "[0 1 2]\n",
      "[6 7 8]\n",
      "[3 4 5]\n",
      "\n",
      "Epoch: 2\n",
      "[3 4 5]\n",
      "[0 1 2]\n",
      "[6 7 8]\n",
      "\n",
      "Epoch: 3\n",
      "[3 4 5]\n",
      "[0 1 2]\n",
      "[6 7 8]\n",
      "\n",
      "Epoch: 4\n",
      "[3 4 5]\n",
      "[0 1 2]\n",
      "[6 7 8]\n",
      "\n",
      "Epoch: 5\n",
      "[6 7 8]\n",
      "[0 1 2]\n",
      "[3 4 5]\n",
      "\n",
      "Epoch: 6\n",
      "[6 7 8]\n",
      "[3 4 5]\n",
      "[0 1 2]\n"
     ]
    }
   ],
   "source": [
    "print(\"\\n---------------Shuffle --> Repeat ---> Batch ---------------------------\")\n",
    "'''\n",
     "The repeat() method should be applied after shuffle(),\n",
     "because shuffle() doesn't signal the end of an epoch until the shuffle buffer is empty. \n",
     "So, to show every element of one epoch before moving to the next,\n",
     "shuffle() should be placed before repeat().\n",
    "\n",
    "This ordering of shuffle --> repeat puts batch after repeat, which is the optimal ordering.\n",
    "It ensures that the batches are unique.\n",
    "\n",
     "On the other hand, the batch --> shuffle --> repeat ordering emits batches with exactly the same elements.\n",
    "'''\n",
    "    \n",
    "print(\"\\nOptimal Ordering: shuffle --> repeat --> batch\")\n",
    "dataset_10 = dataset.shuffle(buffer_size=10).repeat(6).batch(batch_size=3, drop_remainder=True)\n",
    "\n",
    "iteration = 0\n",
    "epoch_count = 0\n",
    "for element in dataset_10.as_numpy_iterator():\n",
    "    if (iteration%3 == 0):\n",
    "        epoch_count += 1\n",
    "        print(\"\\nEpoch: %d\" % epoch_count)\n",
    "    print(element)\n",
    "    iteration += 1\n",
    "    \n",
    "    \n",
    "print(\"\\nNon-optimal Ordering: batch --> shuffle --> repeat\")\n",
    "dataset_11 = dataset.batch(batch_size=3, drop_remainder=True).shuffle(buffer_size=10).repeat(6)\n",
    "\n",
    "iteration = 0\n",
    "epoch_count = 0\n",
    "for element in dataset_11.as_numpy_iterator():\n",
    "    if (iteration%3 == 0):\n",
    "        epoch_count += 1\n",
    "        print(\"\\nEpoch: %d\" % epoch_count)\n",
    "    print(element)\n",
    "    iteration += 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Method Chaining: Template for Deep Learning Pre-processing Pipeline\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "------map (scale) --> cache --> shuffle --> repeat --> batch --> map (augment)---------\n",
      "\n",
      "Epoch: 1\n",
      "[0.   0.16 0.01]\n",
      "[0.04 0.25 0.64]\n",
      "[0.49 0.81 0.09]\n",
      "\n",
      "Epoch: 2\n",
      "[0.36 0.25 0.16]\n",
      "[0.49 0.64 0.  ]\n",
      "[0.36 0.09 0.81]\n",
      "\n",
      "Epoch: 3\n",
      "[0.04 0.01 0.36]\n",
      "[0.81 0.09 0.04]\n",
      "[0.25 0.49 0.64]\n",
      "\n",
      "Epoch: 4\n",
      "[0.16 0.01 0.  ]\n",
      "[0.01 0.49 0.64]\n",
      "[0.09 0.16 0.04]\n",
      "\n",
      "Epoch: 5\n",
      "[0.36 0.25 0.81]\n",
      "[0.   0.01 0.04]\n",
      "[0.25 0.09 0.49]\n",
      "\n",
      "Epoch: 6\n",
      "[0.16 0.   0.64]\n",
      "[0.36 0.81 0.49]\n",
      "[0.25 0.81 0.04]\n",
      "\n",
      "Epoch: 7\n",
      "[0.16 0.09 0.36]\n",
      "[0.64 0.01 0.  ]\n"
     ]
    }
   ],
   "source": [
    "print(\"\\n------map (scale) --> cache --> shuffle --> repeat --> batch --> map (augment)---------\")\n",
    "\n",
    "'''\n",
    "In a Deep Learning pre-processing pipeline, \n",
     "we typically need to apply some transformations to the Dataset:\n",
    "- Per-element\n",
    "- Per-batch\n",
    "\n",
    "For example, we want to scale each element of the Dataset (dividing by 10.0) \n",
     "and augment each batch by raising the batch elements to the power of 2.\n",
    "\n",
     "Below we show how to perform these two transformations along with the previously discussed pre-processing\n",
     "transformations such as cache, shuffle, repeat, and batch.\n",
    "\n",
     "The optimal order of these transformations is:\n",
    "map (scale) --> cache --> shuffle --> repeat --> batch --> map (augment)\n",
    "'''\n",
    "\n",
    "\n",
    "# Function for per-element transformation\n",
    "def scale(x):\n",
    "    return x/10\n",
    "\n",
    "\n",
    "# Function for per-batch transformation\n",
    "def augment(x):\n",
    "    return x**2\n",
    "\n",
    "\n",
    "buffer_size = 10 # shuffle buffer\n",
    "count = 6 # repeat count\n",
    "batch_size = 3\n",
    "\n",
    "\n",
    "\n",
    "dataset_12 = dataset.map(lambda x: (scale(x))).cache().shuffle(buffer_size)\\\n",
    "            .repeat(count).batch(batch_size, drop_remainder=True).map(lambda y: (augment(y)))\n",
    "\n",
    "\n",
    "iteration = 0\n",
    "epoch_count = 0\n",
    "for element in dataset_12.as_numpy_iterator():\n",
    "    if (iteration%3 == 0):\n",
    "        epoch_count += 1\n",
    "        print(\"\\nEpoch: %d\" % epoch_count)\n",
    "    print(element)\n",
    "    iteration += 1"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
