{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright 2019 Google LLC\n",
    "# \n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     https://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a target=\"_blank\" href=\"https://colab.research.google.com/github/GoogleCloudPlatform/keras-idiomatic-programmer/blob/master/community-labs/Community Lab - Encoders for CNN.ipynb\">\n",
    "<img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
    "\n",
    "For best performance using Colab, once the notebook is launched, from dropdown menu select **Runtime -> Change Runtime Type**, and select **GPU** for **Hardware Accelerator**."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Composable \"Design Pattern\" for AutoML friendly models\n",
    "\n",
    "## Community Lab 1: Training Encoder for CNN\n",
    "\n",
    "### Objective\n",
    "\n",
    "To replace a traditional \"stem convolution group\" of higher input dimensionality with lower dimensionality encoding, learned from first training the dataset on an autoencoder. Goal is that by using a lower dimensionality encoding, one can substantially increase training time of a model.\n",
    "\n",
    "*Question*: Can one achieve the same accuracy as using the original input image?\n",
    "\n",
    "*Question*: How fast can we speed up training?\n",
    "\n",
    "### Approach\n",
    "\n",
    "We will use the composable design pattern, and prebuilt units from the Google Cloud AI Developer Relations repo: [Model Zoo](https://github.com/GoogleCloudPlatform/keras-idiomatic-programmer/tree/master/zoo)\n",
    "\n",
    "If you are not familiar with the Composable design pattern, we recommemd you review the [ResNet](https://github.com/GoogleCloudPlatform/keras-idiomatic-programmer/tree/master/zoo/resnet) model in our zoo. Then review the [AutoEncoder](https://github.com/GoogleCloudPlatform/keras-idiomatic-programmer/tree/master/zoo/autoencoder) model.\n",
    "\n",
    "We recommend a constant set for hyperparameters, where batch_size is 32 and initial learning rate is 0.001 -- but you may use any value for hyperparameters you prefer.\n",
    "\n",
    "We will use the metaparameters feature in the composable design pattern for the macro architecture search -- sort of a 'human assisted AutoML'.\n",
    "\n",
    "We recommend using a warmup training to find most optimal numerical stabilization of weights.\n",
    "\n",
    "### Reporting Findings\n",
    "\n",
    "You can contact us on your findings via the twitter account: @andrewferlitsch\n",
    "\n",
    "### Dataset\n",
    "\n",
    "In this notebook, we use the CIFAR-10 datasets which consist of images 32x32x3 for 10 classes -- but you may use any dataset you prefer.\n",
    "\n",
    "### Steps\n",
    "\n",
    "1. Build and Train an AutoEncoder for CIFAR10 (or your dataset).\n",
    "\n",
    "2. Extract the pretrained Encoder network from the trained AutoEncoder.\n",
    "\n",
    "3. Preprocess the training and test data with the Encoder.\n",
    "\n",
    "4. Build a composable model for CIFAR10 using the Encoder embedding.\n",
    "\n",
    "5. Use warmup to initialize the weights on the model.\n",
    "\n",
    "6. Train the model with the encoded training set.\n",
    "\n",
    "7. Evaluate the model with the endoded test set.\n",
    "\n",
    "8. Repeat making macro architecture modifications to the AutoEncoder and/or model."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Lab\n",
    "\n",
    "### Imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "from tensorflow.keras import Input, Model\n",
    "from tensorflow.keras.layers import Conv2D, Flatten, Conv2DTranspose, ReLU, Add, Dense, Dropout, Activation\n",
    "from tensorflow.keras.layers import BatchNormalization, GlobalAveragePooling2D, ZeroPadding2D, MaxPooling2D\n",
    "from tensorflow.keras.optimizers import Adam\n",
    "from tensorflow.keras.regularizers import l2\n",
    "from tensorflow.keras.callbacks import LearningRateScheduler\n",
    "from tensorflow.keras.datasets import cifar10\n",
    "from tensorflow.keras.utils import to_categorical\n",
    "import numpy as np"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Import Composable class\n",
    "\n",
    "Composable is a super (base) class that is inherited by our models which are coded using the Composable design pattern. It provides many abstracted functions in the construction and training of the models. Don't concern yourself about the details; it's not necessary to know how the underlying base works for the purpose of this lab."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# from models_c.py\n",
    "class Composable(object):\n",
    "    ''' Composable base (super) class for Models '''\n",
    "    init_weights = 'he_normal'\t# weight initialization\n",
    "    reg          = None         # kernel regularizer\n",
    "    relu         = None         # ReLU max value\n",
    "    bias         = True         # whether to use bias in dense/conv layers\n",
    "\n",
    "    def __init__(self, init_weights=None, reg=None, relu=None, bias=True):\n",
    "        \"\"\" Constructor\n",
    "            init_weights : kernel initializer\n",
    "            reg          : kernel regularizer\n",
    "            relu         : clip value for ReLU\n",
    "            bias         : whether to use bias\n",
    "        \"\"\"\n",
    "        if init_weights is not None:\n",
    "            self.init_weights = init_weights\n",
    "        if reg is not None:\n",
    "            self.reg = reg\n",
    "        if relu is not None:\n",
    "            self.relu = relu\n",
    "        if bias is not None:\n",
    "            self.bias = bias\n",
    "\n",
    "        # Feature maps encoding at the bottleneck layer in classifier (high dimensionality)\n",
    "        self._encoding = None\n",
    "        # Pooled and flattened encodings at the bottleneck layer (low dimensionality)\n",
    "        self._embedding = None\n",
    "        # Pre-activation conditional probabilities for classifier\n",
    "        self._probabilities = None\n",
    "        # Post-activation conditional probabilities for classifier\n",
    "        self._softmax = None\n",
    "\n",
    "        self._model = None\n",
    "\n",
    "    @property\n",
    "    def model(self):\n",
    "        return self._model\n",
    "\n",
    "    @model.setter\n",
    "    def model(self, _model):\n",
    "        self._model = _model\n",
    "\n",
    "    @property\n",
    "    def encoding(self):\n",
    "        return self._encoding\n",
    "\n",
    "    @encoding.setter\n",
    "    def encoding(self, layer):\n",
    "        self._encoding = layer\n",
    "\n",
    "    @property\n",
    "    def embedding(self):\n",
    "        return self._embedding\n",
    "\n",
    "    @embedding.setter\n",
    "    def embedding(self, layer):\n",
    "        self._embedding = layer\n",
    "\n",
    "    @property\n",
    "    def probabilities(self):\n",
    "        return self._probabilities\n",
    "\n",
    "    @probabilities.setter\n",
    "    def probabilities(self, layer):\n",
    "        self._probabilities = layer\n",
    "\n",
    "    def classifier(self, x, n_classes, **metaparameters):\n",
    "      \"\"\" Construct the Classifier Group \n",
    "          x         : input to the classifier\n",
    "          n_classes : number of output classes\n",
    "          pooling   : type of feature map pooling\n",
    "      \"\"\"\n",
    "      if 'pooling' in metaparameters:\n",
    "          pooling = metaparameters['pooling']\n",
    "      else:\n",
    "          pooling = GlobalAveragePooling2D\n",
    "      if 'dropout' in metaparameters:\n",
    "          dropout = metaparameters['dropout']\n",
    "      else:\n",
    "          dropout = None\n",
    "\n",
    "      if pooling is not None:\n",
    "          # Save the encoding layer (high dimensionality)\n",
    "          self.encoding = x\n",
    "\n",
    "          # Pooling at the end of all the convolutional groups\n",
    "          x = pooling()(x)\n",
    "\n",
    "          # Save the embedding layer (low dimensionality)\n",
    "          self.embedding = x\n",
    "\n",
    "      if dropout is not None:\n",
    "          x = Dropout(dropout)(x)\n",
    "\n",
    "      # Final Dense Outputting Layer for the outputs\n",
    "      x = self.Dense(x, n_classes, use_bias=True, **metaparameters)\n",
    "      \n",
    "      # Save the pre-activation probabilities layer\n",
    "      self.probabilities = x\n",
    "      outputs = Activation('softmax')(x)\n",
    "      # Save the post-activation probabilities layer\n",
    "      self.softmax = outputs\n",
    "      return outputs\n",
    "\n",
    "    def Dense(self, x, units, activation=None, use_bias=True, **hyperparameters):\n",
    "        \"\"\" Construct Dense Layer\n",
    "            x           : input to layer\n",
    "            activation  : activation function\n",
    "            use_bias    : whether to use bias\n",
    "            init_weights: kernel initializer\n",
    "            reg         : kernel regularizer\n",
    "        \"\"\"\n",
    "        if 'reg' in hyperparameters:\n",
    "            reg = hyperparameters['reg']\n",
    "        else:\n",
    "            reg = self.reg\n",
    "        if 'init_weights' in hyperparameters:\n",
    "            init_weights = hyperparameters['init_weights']\n",
    "        else:\n",
    "            init_weights = self.init_weights\n",
    "            \n",
    "        x = Dense(units, activation, use_bias=use_bias,\n",
    "                  kernel_initializer=init_weights, kernel_regularizer=reg)(x)\n",
    "        return x\n",
    "\n",
    "    def Conv2D(self, x, n_filters, kernel_size, strides=(1, 1), padding='valid', activation=None, **hyperparameters):\n",
    "        \"\"\" Construct a Conv2D layer\n",
    "            x           : input to layer\n",
    "            n_filters   : number of filters\n",
    "            kernel_size : kernel (filter) size\n",
    "            strides     : strides\n",
    "            padding     : how to pad when filter overlaps the edge\n",
    "            activation  : activation function\n",
    "            use_bias    : whether to include the bias\n",
    "            init_weights: kernel initializer\n",
    "            reg         : kernel regularizer\n",
    "        \"\"\"\n",
    "        if 'reg' in hyperparameters:\n",
    "            reg = hyperparameters['reg']\n",
    "        else:\n",
    "            reg = self.reg\n",
    "        if 'init_weights' in hyperparameters:\n",
    "            init_weights = hyperparameters['init_weights']\n",
    "        else:\n",
    "            init_weights = self.init_weights\n",
    "        if 'bias' in hyperparameters:\n",
    "            bias = hyperparameters['bias']\n",
    "        else:\n",
    "            bias = self.bias\n",
    "\n",
    "        x = Conv2D(n_filters, kernel_size, strides=strides, padding=padding, activation=activation,\n",
    "                   use_bias=bias, kernel_initializer=init_weights, kernel_regularizer=reg)(x)\n",
    "        return x\n",
    "\n",
    "    def Conv2DTranspose(self, x, n_filters, kernel_size, strides=(1, 1), padding='valid', activation=None, **hyperparameters):\n",
    "        \"\"\" Construct a Conv2DTranspose layer\n",
    "            x           : input to layer\n",
    "            n_filters   : number of filters\n",
    "            kernel_size : kernel (filter) size\n",
    "            strides     : strides\n",
    "            padding     : how to pad when filter overlaps the edge\n",
    "            activation  : activation function\n",
    "            use_bias    : whether to include the bias\n",
    "            init_weights: kernel initializer\n",
    "            reg         : kernel regularizer\n",
    "        \"\"\"\n",
    "        if 'reg' in hyperparameters:\n",
    "            reg = hyperparameters['reg']\n",
    "        else:\n",
    "            reg = self.reg\n",
    "        if 'init_weights' in hyperparameters:\n",
    "            init_weights = hyperparameters['init_weights']\n",
    "        else:\n",
    "            init_weights = self.init_weights \n",
    "        if 'bias' in hyperparameters:\n",
    "            bias = hyperparameters['bias']\n",
    "        else:\n",
    "            bias = self.bias\n",
    "\n",
    "        x = Conv2DTranspose(n_filters, kernel_size, strides=strides, padding=padding, activation=activation, \n",
    "                            use_bias=bias, kernel_initializer=init_weights, kernel_regularizer=reg)(x)\n",
    "        return x\n",
    "\n",
    "\n",
    "\n",
    "    def ReLU(self, x):\n",
    "        \"\"\" Construct ReLU activation function\n",
    "            x  : input to activation function\n",
    "        \"\"\"\n",
    "        x = ReLU(self.relu)(x)\n",
    "        return x\n",
    "\n",
    "\n",
    "    def BatchNormalization(self, x, **params):\n",
    "        \"\"\" Construct a Batch Normalization function\n",
    "            x : input to function\n",
    "        \"\"\"\n",
    "        x = BatchNormalization(epsilon=1.001e-5, **params)(x)\n",
    "        return x\n",
    "\n",
    "    ###\n",
    "    # Preprocessing\n",
    "    ###\n",
    "\n",
    "    def normalization(self, x_train, x_test=None, centered=False):\n",
    "        \"\"\" Normalize the input\n",
    "            x_train : training images\n",
    "            y_train : test images\n",
    "        \"\"\"\n",
    "        if x_train.dtype == np.uint8:\n",
    "            if centered:\n",
    "                x_train = ((x_train - 1) / 127.5).astype(np.float32)\n",
    "                if x_test:\n",
    "                    x_test  = ((x_test  - 1) / 127.5).astype(np.float32)\n",
    "            else:\n",
    "                x_train = (x_train / 255.0).astype(np.float32)\n",
    "                if x_test:\n",
    "                    x_test  = (x_test  / 255.0).astype(np.float32)\n",
    "        return x_train, x_test\n",
    "\n",
    "    def standardization(self, x_train, x_test):\n",
    "        \"\"\" Standardize the input\n",
    "            x_train : training images\n",
    "            x_test  : test images\n",
    "        \"\"\"\n",
    "        self.mean = np.mean(x_train)\n",
    "        self.std  = np.std(x_train)\n",
    "        x_train = ((x_train - self.mean) / self.std).astype(np.float32)\n",
    "        x_test  = ((x_test  - self.mean) / self.std).astype(np.float32)\n",
    "        return x_train, x_test\n",
    "\n",
    "    def label_smoothing(self, y_train, n_classes, factor=0.1):\n",
    "        \"\"\" Convert a matrix of one-hot row-vector labels into smoothed versions. \n",
    "            y_train  : training labels\n",
    "            n_classes: number of classes\n",
    "            factor   : smoothing factor (between 0 and 1)\n",
    "        \"\"\"\n",
    "        if 0 <= factor <= 1:\n",
    "            # label smoothing ref: https://www.robots.ox.ac.uk/~vgg/rg/papers/reinception.pdf\n",
    "            y_train *= 1 - factor\n",
    "            y_train += factor / n_classes\n",
    "        else:\n",
    "            raise Exception('Invalid label smoothing factor: ' + str(factor))\n",
    "        return y_train\n",
    "\n",
    "    ###\n",
    "    # Training\n",
    "    ###\n",
    "\n",
    "    def compile(self, loss='categorical_crossentropy', optimizer=Adam(lr=0.001, decay=1e-5), metrics=['acc']):\n",
    "        \"\"\" Compile the model for training\n",
    "            loss     : the loss function\n",
    "            optimizer: the optimizer\n",
    "            metrics  : metrics to report\n",
    "        \"\"\"\n",
    "        self.model.compile(loss=loss, optimizer=optimizer, metrics=metrics)\n",
    "        \n",
    "    def warmup_scheduler(self, epoch, lr):\n",
    "        \"\"\" learning rate schedular for warmup training\n",
    "            epoch : current epoch iteration\n",
    "            lr    : current learning rate\n",
    "        \"\"\"\n",
    "        if epoch == 0:\n",
    "           return lr\n",
    "        return epoch * self.w_lr / self.w_epochs\n",
    "\n",
    "    def warmup(self, x_train, y_train, epochs=5, s_lr=1e-6, e_lr=0.001):\n",
    "        \"\"\" Warmup for numerical stability\n",
    "            x_train : training images\n",
    "            y_train : training labels\n",
    "            epochs  : number of epochs for warmup\n",
    "            s_lr    : start warmup learning rate\n",
    "            e_lr    : end warmup learning rate\n",
    "        \"\"\"\n",
    "        print(\"*** Warmup\")\n",
    "        # Setup learning rate scheduler\n",
    "        self.compile(optimizer=Adam(s_lr))\n",
    "        lrate = LearningRateScheduler(self.warmup_scheduler, verbose=1)\n",
    "        self.w_epochs = epochs\n",
    "        self.w_lr     = e_lr - s_lr\n",
    "\n",
    "        # Train the model\n",
    "        self.model.fit(x_train, y_train, epochs=epochs, batch_size=32, verbose=1,\n",
    "                       callbacks=[lrate])\n",
    "        \n",
    "    def cosine_decay(self, epoch, lr, alpha=0.0):\n",
    "        \"\"\" Cosine Decay\n",
    "        \"\"\"\n",
    "        cosine_decay = 0.5 * (1 + np.cos(np.pi * (self.e_steps * epoch) / self.t_steps))\n",
    "        decayed = (1 - alpha) * cosine_decay + alpha\n",
    "        return lr * decayed\n",
    "\n",
    "    def training_scheduler(self, epoch, lr):\n",
    "        \"\"\" Learning Rate scheduler for full-training\n",
    "            epoch : epoch number\n",
    "            lr    : current learning rate\n",
    "        \"\"\"\n",
    "        # First epoch (not started) - do nothing\n",
    "        if epoch == 0:\n",
    "            return lr\n",
    "\n",
    "        # Decay the learning rate\n",
    "        if self.t_decay > 0:\n",
    "            lr -= self.t_decay\n",
    "            self.t_decay *= 0.9 # decrease the decay\n",
    "        else:\n",
    "            lr = self.cosine_decay(epoch, lr)\n",
    "        return lr\n",
    "\n",
    "    def training(self, x_train, y_train, epochs=10, batch_size=32, lr=0.001, decay=0):\n",
    "        \"\"\" Full Training of the Model\n",
    "            x_train    : training images\n",
    "            y_train    : training labels\n",
    "            epochs     : number of epochs\n",
    "            batch_size : size of batch\n",
    "            lr         : learning rate\n",
    "            decay      : step-wise learning rate decay\n",
    "        \"\"\"\n",
    "\n",
    "        # Check for hidden dropout layer in classifier\n",
    "        for layer in self.model.layers:\n",
    "            if isinstance(layer, Dropout):\n",
    "                self.hidden_dropout = layer\n",
    "                break    \n",
    "\n",
    "        self.t_decay = decay\n",
    "        self.e_steps = x_train.shape[0] // batch_size\n",
    "        self.t_steps = self.e_steps * epochs\n",
    "        self.compile(optimizer=Adam(lr=lr, decay=decay))\n",
    "\n",
    "        lrate = LearningRateScheduler(self.training_scheduler, verbose=1)\n",
    "        self.model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, validation_split=0.1, verbose=1,\n",
    "                       callbacks=[lrate])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Get the Dataset\n",
    "\n",
    "Load the dataset into memory as numpy arrays, and then normalize the image data (preprocessing)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from tensorflow.keras.datasets import cifar10\n",
    "(x_train, y_train), (x_test, y_test) = cifar10.load_data()\n",
    "x_train = (x_train / 255.0).astype(np.float32)\n",
    "x_test  = (x_test / 255.0).astype(np.float32)\n",
    "print(x_train.shape)\n",
    "\n",
    "y_train = to_categorical(y_train, 10)\n",
    "y_test  = to_categorical(y_test, 10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Build the AutoEncoder for CIFAR-10\n",
    "\n",
    "Now, let's build the AutoEncoder for the dataset.\n",
    "\n",
    "In our example, the dimensionality of the input (3072 pixels) is reduced down to 512 at the bottleneck layer (ReLU (None, 4, 4, 32))."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# from autoencoder/autoencoder_c.py\n",
    "\n",
    "class AutoEncoder(Composable):\n",
    "    ''' Construct an AutoEncoder '''\n",
    "    # metaparameter: number of filters per layer\n",
    "    layers = [ {'n_filters': 64 }, { 'n_filters': 32 }, { 'n_filters': 16 } ]\n",
    "\n",
    "    def __init__(self, layers=None, input_shape=(32, 32, 3),\n",
    "                 init_weights='he_normal', reg=None, relu=None, bias=True):\n",
    "        ''' Construct an AutoEncoder\n",
    "            input_shape : input shape to the autoencoder\n",
    "            layers      : the number of filters per layer\n",
    "            init_weights: kernel initializer\n",
    "            reg         : kernel regularizer\n",
    "            relu        : clip value for ReLU\n",
    "            bias        : whether to use bias\n",
    "        '''\n",
    "        # Configure base (super) class\n",
    "        super().__init__(init_weights=init_weights, reg=reg, relu=relu, bias=bias)\n",
    "\n",
    "        if layers is None:\n",
    "           layers = self.layers\n",
    "\n",
    "        # remember the layers\n",
    "        self.layers = layers\n",
    "\n",
    "        # remember the input shape\n",
    "        self.input_shape = input_shape\n",
    "\n",
    "        inputs = Input(input_shape)\n",
    "        encoder = self.encoder(inputs, layers=layers)\n",
    "        outputs = self.decoder(encoder, layers=layers)\n",
    "        self._model = Model(inputs, outputs)\n",
    "\n",
    "    def encoder(self, x, **metaparameters):\n",
    "        ''' Construct the Encoder \n",
    "            x     : input to the encoder\n",
    "            layers: number of filters per layer\n",
    "        '''\n",
    "        layers = metaparameters['layers']\n",
    "\n",
    "        # Progressive Feature Pooling\n",
    "        for layer in layers:\n",
    "            n_filters = layer['n_filters']\n",
    "            x = self.Conv2D(x, n_filters, (3, 3), strides=2, padding='same')\n",
    "            x = self.BatchNormalization(x)\n",
    "            x = self.ReLU(x)\n",
    "\n",
    "        # The Encoding\n",
    "        return x\n",
    "\n",
    "    def decoder(self, x, init_weights=None, **metaparameters):\n",
    "        ''' Construct the Decoder\n",
    "            x     : input to the decoder\n",
    "            layers: filters per layer\n",
    "        '''\n",
    "        layers = metaparameters['layers']\n",
    "\n",
    "        # Progressive Feature Unpooling\n",
    "        for _ in range(len(layers)-1, 0, -1):\n",
    "            n_filters = layers[_]['n_filters']\n",
    "            x = self.Conv2DTranspose(x, n_filters, (3, 3), strides=2, padding='same')\n",
    "            x = self.BatchNormalization(x)\n",
    "            x = self.ReLU(x)\n",
    "\n",
    "        # Last unpooling and match shape to input\n",
    "        x = self.Conv2DTranspose(x, 3, (3, 3), strides=2, padding='same')\n",
    "        x = self.BatchNormalization(x)\n",
    "        x = self.ReLU(x)\n",
    "\n",
    "        # The decoded image\n",
    "        return x\n",
    "\n",
    "    def compile(self, optimizer='adam'):\n",
    "        ''' Compile the model using Mean Square Error loss '''\n",
    "        self._model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])\n",
    "\n",
    "    def extract(self):\n",
    "        ''' Extract the pretrained encoder\n",
    "        '''\n",
    "        # Get the trained weights from the autoencoder\n",
    "        weights = self._model.get_weights()\n",
    "\n",
    "        # Extract out the weights for just the encoder  (6 sets per layer)\n",
    "        encoder_weights = weights[0 : int((6 * len(self.layers)))]\n",
    "  \n",
    "        # Construct a copy the encoder\n",
    "        inputs = Input(self.input_shape)\n",
    "        outputs = self.encoder(inputs, layers=self.layers)\n",
    "        encoder = Model(inputs, outputs)\n",
    "\n",
    "        # Initialize the encoder with the pretrained weights\n",
    "        encoder.set_weights(encoder_weights)\n",
    "\n",
    "        return encoder"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "autoencoder = AutoEncoder(input_shape=(32, 32, 3), layers=[{'n_filters': 64}, {'n_filters': 32}, {'n_filters': 32}])\n",
    "autoencoder.model.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Warmup Training for AutoEncoder\n",
    "\n",
    "Let's numerical stabilize the weights (which are initialized from a random draw from a random distribution (i.e., he_normal) using warmup.\n",
    "\n",
    "We will start with a very low learning rate (1e-6) and over five epochs incremently step it up to out target learning rate (0.001)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "autoencoder.warmup(x_train, x_train, epochs=5, s_lr=1e-6, e_lr=0.001)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Train the AutoEncoder\n",
    "\n",
    "Let's now fully train the autoencoder on our image data for 20 epochs -- but you may choose to use more.\n",
    "\n",
    "*When using colab with runtime=GPU, this takes about 4 minutes*\n",
    "*You should see a validation accuracy ~80%*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "autoencoder.compile(optimizer='adam')\n",
    "autoencoder.training(x_train, x_train, epochs=20, batch_size=32)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's see what the accuracy is on the test (holdout) data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "autoencoder.model.evaluate(x_test, x_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Extract the pre-trained Encoder\n",
    "\n",
    "Next, we will extract from the pretrained encoder from our trained autoencoder."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "encoder = autoencoder.extract()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Encode the CIFAR-10 Training Data\n",
    "\n",
    "Next, we will encode the higher dimensional training data (*x_train*) into the lower dimensional encoding (*e_train*)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "e_train = encoder.predict(x_train)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Build mini-ResNet with Encoding as input (no stem convolution)\n",
    "\n",
    "Let's now use the composable design pattern for ResNet to build a mini-resnet model (*e_resnet*)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ResNetV2(Composable):\n",
    "    \"\"\" Construct a Residual Convolution Network Network V2 \"\"\"\n",
    "    # Meta-parameter: list of groups: number of filters and number of blocks\n",
    "    groups = { 50 : [ { 'n_filters' : 64, 'n_blocks': 3 },\n",
    "                      { 'n_filters': 128, 'n_blocks': 4 },\n",
    "                      { 'n_filters': 256, 'n_blocks': 6 },\n",
    "                      { 'n_filters': 512, 'n_blocks': 3 } ],            # ResNet50\n",
    "               101: [ { 'n_filters' : 64, 'n_blocks': 3 },\n",
    "                      { 'n_filters': 128, 'n_blocks': 4 },\n",
    "                      { 'n_filters': 256, 'n_blocks': 23 },\n",
    "                      { 'n_filters': 512, 'n_blocks': 3 } ],            # ResNet101\n",
    "               152: [ { 'n_filters' : 64, 'n_blocks': 3 },\n",
    "                      { 'n_filters': 128, 'n_blocks': 8 },\n",
    "                      { 'n_filters': 256, 'n_blocks': 36 },\n",
    "                      { 'n_filters': 512, 'n_blocks': 3 } ]             # ResNet152\n",
    "             }\n",
    "\n",
    "    def __init__(self, n_layers, input_shape=(224, 224, 3), n_classes=1000, \n",
    "                 reg=l2(0.001), relu=None, init_weights='he_normal', bias=False):\n",
    "        \"\"\" Construct a Residual Convolutional Neural Network V2\n",
    "            n_layers    : number of layers\n",
    "            input_shape : input shape\n",
    "            n_classes   : number of output classes\n",
    "            reg         : kernel regularizer\n",
    "            init_weights: kernel initializer\n",
    "            relu        : max value for ReLU\n",
    "            bias        : whether to include a bias in the dense/conv layers\n",
    "        \"\"\"\n",
    "        # Configure base (super) class\n",
    "        super().__init__(reg=reg, init_weights=init_weights, relu=relu, bias=bias)\n",
    "\n",
    "        # predefined\n",
    "        if isinstance(n_layers, int):\n",
    "            if n_layers not in [50, 101, 152]:\n",
    "                raise Exception(\"ResNet: Invalid value for n_layers\")\n",
    "            groups = self.groups[n_layers]\n",
    "        # user defined\n",
    "        else:\n",
    "            groups = n_layers\n",
    "\n",
    "        # The input tensor\n",
    "        inputs = Input(input_shape)\n",
    "\n",
    "        # The stem convolutional group\n",
    "        x = self.stem(inputs)\n",
    "\n",
    "        # The learner\n",
    "        x = self.learner(x, groups=groups)\n",
    "\n",
    "        # The classifier \n",
    "        # Add hidden dropout for training-time regularization\n",
    "        outputs = self.classifier(x, n_classes, dropout=0.0)\n",
    "\n",
    "        # Instantiate the Model\n",
    "        self._model = Model(inputs, outputs)\n",
    "\n",
    "    def stem(self, inputs):\n",
    "        \"\"\" Construct the Stem Convolutional Group \n",
    "            inputs : the input vector\n",
    "        \"\"\"\n",
    "        # The 224x224 images are zero padded (black - no signal) to be 230x230 images prior to the first convolution\n",
    "        x = ZeroPadding2D(padding=(3, 3))(inputs)\n",
    "    \n",
    "        # First Convolutional layer uses large (coarse) filter\n",
    "        x = self.Conv2D(x, 64, (7, 7), strides=(2, 2), padding='valid')\n",
    "        x = self.BatchNormalization(x)\n",
    "        x = self.ReLU(x)\n",
    "    \n",
    "        # Pooled feature maps will be reduced by 75%\n",
    "        x = ZeroPadding2D(padding=(1, 1))(x)\n",
    "        x = MaxPooling2D((3, 3), strides=(2, 2))(x)\n",
    "        return x\n",
    "\n",
    "    def learner(self, x, **metaparameters):\n",
    "        \"\"\" Construct the Learner\n",
    "            x     : input to the learner\n",
    "            groups: list of groups: number of filters and blocks\n",
    "        \"\"\"\n",
    "        groups = metaparameters['groups']\n",
    "\n",
    "        # First Residual Block Group (not strided)\n",
    "        x = self.group(x, strides=(1, 1), **groups.pop(0))\n",
    "\n",
    "        # Remaining Residual Block Groups (strided)\n",
    "        for group in groups:\n",
    "            x = self.group(x, **group)\n",
    "        return x\n",
    "    \n",
    "    def group(self, x, strides=(2, 2), **metaparameters):\n",
    "        \"\"\" Construct a Residual Group\n",
    "            x         : input into the group\n",
    "            strides   : whether the projection block is a strided convolution\n",
    "            n_blocks  : number of residual blocks with identity link\n",
    "        \"\"\"\n",
    "        n_blocks  = metaparameters['n_blocks']\n",
    "\n",
    "        # Double the size of filters to fit the first Residual Block\n",
    "        x = self.projection_block(x, strides=strides, **metaparameters)\n",
    "\n",
    "        # Identity residual blocks\n",
    "        for _ in range(n_blocks):\n",
    "            x = self.identity_block(x, **metaparameters)\n",
    "        return x\n",
    "\n",
    "    def identity_block(self, x, **metaparameters):\n",
    "        \"\"\" Construct a Bottleneck Residual Block with Identity Link\n",
    "            x        : input into the block\n",
    "            n_filters: number of filters\n",
    "        \"\"\"\n",
    "        n_filters = metaparameters['n_filters']\n",
    "        del metaparameters['n_filters']\n",
    "    \n",
    "        # Save input vector (feature maps) for the identity link\n",
    "        shortcut = x\n",
    "    \n",
    "        ## Construct the 1x1, 3x3, 1x1 convolution block\n",
    "    \n",
    "        # Dimensionality reduction\n",
    "        x = self.BatchNormalization(x)\n",
    "        x = self.ReLU(x)\n",
    "        x = self.Conv2D(x, n_filters, (1, 1), strides=(1, 1), **metaparameters)\n",
    "\n",
    "        # Bottleneck layer\n",
    "        x = self.BatchNormalization(x)\n",
    "        x = self.ReLU(x)\n",
    "        x = self.Conv2D(x, n_filters, (3, 3), strides=(1, 1), padding=\"same\", **metaparameters)\n",
    "\n",
    "        # Dimensionality restoration - increase the number of output filters by 4X\n",
    "        x = self.BatchNormalization(x)\n",
    "        x = self.ReLU(x)\n",
    "        x = self.Conv2D(x, n_filters * 4, (1, 1), strides=(1, 1), **metaparameters)\n",
    "\n",
    "        # Add the identity link (input) to the output of the residual block\n",
    "        x = Add()([shortcut, x])\n",
    "        return x\n",
    "\n",
    "    def projection_block(self, x, strides=(2,2), **metaparameters):\n",
    "        \"\"\" Construct a Bottleneck Residual Block of Convolutions with Projection Shortcut\n",
    "            Increase the number of filters by 4X\n",
    "            x        : input into the block\n",
    "            strides  : whether the first convolution is strided\n",
    "            n_filters: number of filters\n",
    "            reg      : kernel regularizer\n",
    "        \"\"\"\n",
    "        n_filters = metaparameters['n_filters']\n",
    "        del metaparameters['n_filters']\n",
    "\n",
    "        # Construct the projection shortcut\n",
    "        # Increase filters by 4X to match shape when added to output of block\n",
    "        shortcut = self.BatchNormalization(x)\n",
    "        shortcut = self.Conv2D(shortcut, 4 * n_filters, (1, 1), strides=strides, **metaparameters)\n",
    "\n",
    "        ## Construct the 1x1, 3x3, 1x1 convolution block\n",
    "    \n",
    "        # Dimensionality reduction\n",
    "        x = self.BatchNormalization(x)\n",
    "        x = self.ReLU(x)\n",
    "        x = self.Conv2D(x, n_filters, (1, 1), strides=(1,1), **metaparameters)\n",
    "\n",
    "        # Bottleneck layer\n",
    "        # Feature pooling when strides=(2, 2)\n",
    "        x = self.BatchNormalization(x)\n",
    "        x = self.ReLU(x)\n",
    "        x = self.Conv2D(x, n_filters, (3, 3), strides=strides, padding='same', **metaparameters)\n",
    "\n",
    "        # Dimensionality restoration - increase the number of filters by 4X\n",
    "        x = self.BatchNormalization(x)\n",
    "        x = self.ReLU(x)\n",
    "        x = self.Conv2D(x, 4 * n_filters, (1, 1), strides=(1, 1), **metaparameters)\n",
    "\n",
    "        # Add the projection shortcut to the output of the residual block\n",
    "        x = Add()([x, shortcut])\n",
    "        return x\n",
    "    \n",
    "groups = [ { 'n_filters' : 64, 'n_blocks': 1 },\n",
    "           { 'n_filters': 128, 'n_blocks': 2 },\n",
    "           { 'n_filters': 256, 'n_blocks': 2 }]\n",
    "e_resnet = ResNetV2(groups, input_shape=(4, 4, 32), n_classes=10)\n",
    "e_resnet.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])\n",
    "e_resnet.model.summary()"
   ]
  },
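  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "*Aside*: the residual blocks above follow the V2 (pre-activation) ordering: BatchNormalization and ReLU are applied *before* each convolution rather than after. A minimal sketch of one pre-activation bottleneck in plain Keras (hypothetical `n_filters`; layers from `tensorflow.keras.layers`, not the class methods above):\n",
    "\n",
    "```python\n",
    "shortcut = x\n",
    "x = BatchNormalization()(x)\n",
    "x = ReLU()(x)\n",
    "x = Conv2D(n_filters, (1, 1))(x)                   # dimensionality reduction\n",
    "x = BatchNormalization()(x)\n",
    "x = ReLU()(x)\n",
    "x = Conv2D(n_filters, (3, 3), padding='same')(x)   # bottleneck\n",
    "x = BatchNormalization()(x)\n",
    "x = ReLU()(x)\n",
    "x = Conv2D(4 * n_filters, (1, 1))(x)               # dimensionality restoration (4X)\n",
    "x = Add()([shortcut, x])\n",
    "```"
   ]
  },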
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Train the Model\n",
    "\n",
    "Let's now train our mini-resnet model (*e_resnet*) with the encoded training data (*e_train*).\n",
    "\n",
    "*When using colab with runtime=GPU, this takes about 4 minutes*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "e_resnet.training(e_train, y_train, epochs=20, batch_size=32)"
   ]
  },
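  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "*Note*: `training()` is a convenience method on the composable base class. Assuming it wraps Keras `fit()`, the equivalent direct call would be roughly:\n",
    "\n",
    "```python\n",
    "e_resnet.model.fit(e_train, y_train, epochs=20, batch_size=32)\n",
    "```"
   ]
  },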
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Evaluate the Model\n",
    "\n",
    "Let's convert our test (holdout) data into an encoding (*e_test*) using our pretrained encoder (*encoder*), and evaluate our model (*e_resnet*)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "e_test = encoder.predict(x_test)\n",
    "e_resnet.model.evaluate(e_test, y_test)"
   ]
  },
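  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the model was compiled with `metrics=['acc']`, `evaluate()` returns the loss followed by the accuracy, so the result can be unpacked directly:\n",
    "\n",
    "```python\n",
    "loss, acc = e_resnet.model.evaluate(e_test, y_test)\n",
    "print(f'test accuracy: {acc:.3f}')\n",
    "```"
   ]
  },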
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Next\n",
    "\n",
    "If you followed this lab as-is, our encoded model overfits the encoded training data, and plateaus on accuracy on the encoded test data at ~61% (50% with V2).\n",
    "\n",
    "Think how you can modify this experiment, to meet the objectives."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
