{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright 2020 Google LLC\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     https://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a target=\"_blank\" href=\"https://colab.research.google.com/github/GoogleCloudPlatform/keras-idiomatic-programmer/blob/master/books/deep-learning-design-patterns/Workshops/Junior/Deep%20Learning%20Design%20Patterns%20-%20Workshop%20-%20Chapter%202.ipynb\">\n",
    "<img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Deep Learning Design Patterns - Code Labs\n",
    "\n",
    "## Lab Exercise #6 - Get Familiar with Wide Convolutional Models\n",
    "\n",
    "## Prerequisites:\n",
    "\n",
    "    1. Familiar with Python\n",
    "    2. Completed Chapter 2: Wide Convolutional Models\n",
    "\n",
    "## Objectives:\n",
    "\n",
    "    1. Code a Naive Inception module\n",
    "    2. Code an Inception V1 block\n",
    "    3. Refactor an Inception V1 block\n",
    "    4. Code a mini Wide Residual Network (WRN)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Code a Naive Inception Module\n",
    "\n",
    "Let's code a naive Inception module:\n",
    "\n",
    "<img src='https://github.com/GoogleCloudPlatform/keras-idiomatic-programmer/blob/master/books/deep-learning-design-patterns/Workshops/Junior/naive-inception.jpg?raw=true'>\n",
    "\n",
    "    \n",
    "Fill in the blanks (replace the ??) and make sure the code passes the Python interpreter.\n",
    "\n",
    "You will need to:\n",
    "\n",
    "    1. Create the four branches.\n",
    "    2. Implement each parallel branch.\n",
    "    3. Concatenate the output from each branch into a single output for the module."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from tensorflow.keras import Input, Model\n",
    "from tensorflow.keras.layers import Conv2D, ReLU, BatchNormalization, MaxPooling2D, Concatenate, SeparableConv2D\n",
    "\n",
    "def naive_inception(inputs):\n",
    "    # pooling branch\n",
    "    # HINT: The feature map output must stay the same, so don't downsample it, and remember the padding\n",
    "    x1 = MaxPooling2D((2, 2), ??)(inputs)\n",
    "    \n",
    "    # 1x1 branch\n",
    "    x2 = Conv2D(64, (1, 1), strides=1, padding='same', activation='relu')(inputs)\n",
    "    \n",
    "    # 3x3 branch\n",
    "    # HINT: should look like the 1x1 convolution, except it uses a 3x3\n",
    "    x3 = ??\n",
    "    \n",
    "    # 5x5 branch\n",
    "    x4 = Conv2D(64, (5, 5), strides=1, padding='same', activation='relu')(inputs)\n",
    "    \n",
    "    # Concatenate the output from the four branches together\n",
    "    # HINT: Should be a list of the four branch outputs (x...)\n",
    "    outputs = Concatenate()([??])\n",
    "    return outputs\n",
    "    \n",
    "inputs = Input((32, 32, 3))\n",
    "outputs = naive_inception(inputs)\n",
    "model = Model(inputs, outputs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Verify the module using the summary method\n",
    "\n",
    "It should look like below:\n",
    "\n",
    "```\n",
    "Layer (type)                    Output Shape         Param #     Connected to                     \n",
    "==================================================================================================\n",
    "input_3 (InputLayer)            [(None, 32, 32, 3)]  0                                            \n",
    "__________________________________________________________________________________________________\n",
    "max_pooling2d_2 (MaxPooling2D)  (None, 32, 32, 3)    0           input_3[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_6 (Conv2D)               (None, 32, 32, 64)   256         input_3[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_7 (Conv2D)               (None, 32, 32, 64)   1792        input_3[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_8 (Conv2D)               (None, 32, 32, 64)   4864        input_3[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_2 (Concatenate)     (None, 32, 32, 195)  0           max_pooling2d_2[0][0]            \n",
    "                                                                 conv2d_6[0][0]                   \n",
    "                                                                 conv2d_7[0][0]                   \n",
    "                                                                 conv2d_8[0][0]                   \n",
    "==================================================================================================\n",
    "Total params: 6,912\n",
    "Trainable params: 6,912\n",
    "Non-trainable params: 0\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.summary()"
   ]
  },
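  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can sanity-check the parameter counts above with a little arithmetic. As an illustrative sketch (the `conv2d_params` helper below is not part of the lab), a `Conv2D` layer has kernel_height x kernel_width x input_channels x filters weights, plus one bias per filter:\n",
    "\n",
    "```python\n",
    "# Parameter count of a Conv2D layer: weights plus one bias per filter\n",
    "def conv2d_params(kh, kw, in_channels, filters):\n",
    "    return kh * kw * in_channels * filters + filters\n",
    "\n",
    "print(conv2d_params(1, 1, 3, 64))  # 256  -- the 1x1 branch\n",
    "print(conv2d_params(3, 3, 3, 64))  # 1792 -- the 3x3 branch\n",
    "print(conv2d_params(5, 5, 3, 64))  # 4864 -- the 5x5 branch\n",
    "print(3 + 64 + 64 + 64)            # 195  -- channels after Concatenate\n",
    "```\n",
    "\n",
    "The pooling branch adds no parameters, so the module total is 256 + 1792 + 4864 = 6,912, matching the summary."
   ]
  },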
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Code an Inception V1 Block\n",
    "\n",
    "Let's now code an Inception V1 block (referred to as a module in the paper). Remember, the V1 module used factorization to reduce complexity (parameters) while maintaining representational equivalence.\n",
    "\n",
    "<img src='https://github.com/GoogleCloudPlatform/keras-idiomatic-programmer/blob/master/books/deep-learning-design-patterns/Workshops/Junior/block-v1.jpg?raw=true'>\n",
    "\n",
    "You will need to:\n",
    "\n",
    "    1. Add 1x1 bottleneck convolutions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def inception_block(inputs):\n",
    "    # pooling branch\n",
    "    x1 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(inputs)\n",
    "    # Add a 1x1 bottleneck convolution with 64 filters\n",
    "    # HINT: the output shape should not change (think of strides and padding)\n",
    "    x1 = Conv2D(64, (1, 1), ??)(x1)\n",
    "    \n",
    "    # 1x1 branch\n",
    "    x2 = Conv2D(64, (1, 1), strides=(1, 1), padding='same', activation='relu')(inputs)\n",
    "    \n",
    "    # 3x3 branch\n",
    "    # Add 1x1 bottleneck convolution of 64 filters\n",
    "    # HINT: the input should be the input to the block\n",
    "    x3 = ??\n",
    "    x3 = Conv2D(96, (3, 3), strides=(1, 1), padding='same', activation='relu')(x3)\n",
    "    \n",
    "    # 5x5 branch\n",
    "    # Add 1x1 bottleneck convolution of 64 filters\n",
    "    # HINT: the input should be the input to the block\n",
    "    x4 = ??\n",
    "    x4 = Conv2D(48, (5, 5), strides=(1, 1), padding='same', activation='relu')(x4)\n",
    "    \n",
    "    outputs = Concatenate()([x1, x2, x3, x4])\n",
    "    return outputs\n",
    "    \n",
    "inputs = Input((32, 32, 3))\n",
    "outputs = inception_block(inputs)\n",
    "model = Model(inputs, outputs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Verify the module using the summary method\n",
    "\n",
    "It should look like below:\n",
    "\n",
    "```\n",
    "Model: \"model_3\"\n",
    "__________________________________________________________________________________________________\n",
    "Layer (type)                    Output Shape         Param #     Connected to                     \n",
    "==================================================================================================\n",
    "input_8 (InputLayer)            [(None, 32, 32, 3)]  0                                            \n",
    "__________________________________________________________________________________________________\n",
    "max_pooling2d_7 (MaxPooling2D)  (None, 32, 32, 3)    0           input_8[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_29 (Conv2D)              (None, 32, 32, 64)   256         input_8[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_31 (Conv2D)              (None, 32, 32, 64)   256         input_8[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_27 (Conv2D)              (None, 32, 32, 64)   256         max_pooling2d_7[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_28 (Conv2D)              (None, 32, 32, 64)   256         input_8[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_30 (Conv2D)              (None, 32, 32, 96)   55392       conv2d_29[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_32 (Conv2D)              (None, 32, 32, 48)   76848       conv2d_31[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_7 (Concatenate)     (None, 32, 32, 272)  0           conv2d_27[0][0]                  \n",
    "                                                                 conv2d_28[0][0]                  \n",
    "                                                                 conv2d_30[0][0]                  \n",
    "                                                                 conv2d_32[0][0]                  \n",
    "==================================================================================================\n",
    "Total params: 133,264\n",
    "Trainable params: 133,264\n",
    "Non-trainable params: 0\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.summary()"
   ]
  },
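  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see why the 1x1 bottleneck pays off, compare parameter counts on a hypothetical 192-channel input (typical of deeper Inception layers; note that with this lab's 3-channel toy input the bottleneck actually adds parameters). An illustrative sketch, not part of the lab:\n",
    "\n",
    "```python\n",
    "def conv2d_params(kh, kw, in_channels, filters):\n",
    "    return kh * kw * in_channels * filters + filters\n",
    "\n",
    "# Direct 5x5 convolution with 48 filters on a 192-channel input\n",
    "direct = conv2d_params(5, 5, 192, 48)\n",
    "# 1x1 bottleneck down to 64 channels, then the same 5x5 convolution\n",
    "bottleneck = conv2d_params(1, 1, 192, 64) + conv2d_params(5, 5, 64, 48)\n",
    "print(direct)      # 230448\n",
    "print(bottleneck)  # 89200\n",
    "```\n",
    "\n",
    "The bottleneck version needs roughly 2.6x fewer parameters for the branch."
   ]
  },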
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Refactor an Inception V1 Block\n",
    "\n",
    "Let's refactor the Inception V1 block, where:\n",
    "\n",
    "    1. Replace the 5x5 parallel convolution with two sequential 3x3 convolutions (B(3,3))\n",
    "    2. Replace the 3x3 convolution with a spatially separable convolution (3x1, 1x3)\n",
    "\n",
    "\n",
    "You will need to:\n",
    "\n",
    "    1. Add the parallel spatially separable 3x1 and 1x3 convolutions.\n",
    "    2. Concatenate the outputs together from the separable convolutions.\n",
    "    3. Add the sequential two 3x3 convolutions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def inception_block(inputs):\n",
    "    # pooling branch\n",
    "    x1 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(inputs)\n",
    "    x1 = Conv2D(64, (1, 1), strides=(1, 1), padding='same', activation='relu')(x1)\n",
    "    \n",
    "    # 1x1 branch\n",
    "    x2 = Conv2D(64, (1, 1), strides=(1, 1), padding='same', activation='relu')(inputs)\n",
    "    \n",
    "    # 3x3 branch\n",
    "    x3 = Conv2D(64, (1, 1), strides=(1, 1), padding='same', activation='relu')(inputs)\n",
    "    # Add two parallel spatially separable convolutions for 3x1 and 1x3 with 96 filters\n",
    "    # HINT: Use SeparableConv2D. The input to both convolutions is the same, i.e., the output from\n",
    "    # the prior 1x1 bottleneck.\n",
    "    x3_a = ??\n",
    "    x3_b = ??\n",
    "    # Concatenate the outputs together from the spatially separable convolutions\n",
    "    # HINT: x3 was split into a and b, let's put them back together.\n",
    "    x3 = Concatenate()([??])\n",
    "    \n",
    "    # 5x5 branch replaced by two 3x3\n",
    "    x4 = Conv2D(64, (1, 1), strides=(1, 1), padding='same', activation='relu')(inputs)\n",
    "    # Add two sequential 3x3 normal convolutions with 48 filters\n",
    "    # HINT: apply them sequentially; each takes the current x4 as input (x4 is reassigned).\n",
    "    x4 = ??\n",
    "    x4 = ??\n",
    "    \n",
    "    outputs = Concatenate()([x1, x2, x3, x4])\n",
    "    return outputs\n",
    "    \n",
    "inputs = Input((32, 32, 3))\n",
    "outputs = inception_block(inputs)\n",
    "model = Model(inputs, outputs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Verify the module using the summary method\n",
    "\n",
    "It should look like below. Note how the refactoring cut the number of parameters roughly in half (62,368 vs. 133,264).\n",
    "\n",
    "```\n",
    "Layer (type)                    Output Shape         Param #     Connected to                     \n",
    "==================================================================================================\n",
    "input_3 (InputLayer)            [(None, 32, 32, 3)]  0                                            \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_11 (Conv2D)              (None, 32, 32, 64)   256         input_3[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_12 (Conv2D)              (None, 32, 32, 64)   256         input_3[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "separable_conv2d (SeparableConv (None, 32, 32, 96)   6432        conv2d_11[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "separable_conv2d_1 (SeparableCo (None, 32, 32, 96)   6432        conv2d_11[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_13 (Conv2D)              (None, 32, 32, 48)   27696       conv2d_12[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_9 (Conv2D)               (None, 32, 32, 64)   256         input_3[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_10 (Conv2D)              (None, 32, 32, 64)   256         input_3[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_1 (Concatenate)     (None, 32, 32, 192)  0           separable_conv2d[0][0]           \n",
    "                                                                 separable_conv2d_1[0][0]         \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_14 (Conv2D)              (None, 32, 32, 48)   20784       conv2d_13[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_2 (Concatenate)     (None, 32, 32, 368)  0           conv2d_9[0][0]                   \n",
    "                                                                 conv2d_10[0][0]                  \n",
    "                                                                 concatenate_1[0][0]              \n",
    "                                                                 conv2d_14[0][0]                  \n",
    "==================================================================================================\n",
    "Total params: 62,368\n",
    "Trainable params: 62,368\n",
    "Non-trainable params: 0\n",
    "__________________________________________________________________________________________________\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.summary()"
   ]
  },
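  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The savings from the separable convolutions can also be checked by hand. A `SeparableConv2D` factors a convolution into a depthwise pass (one small kernel per input channel) followed by a 1x1 pointwise pass, with one bias per output filter. An illustrative sketch (these helpers are not part of the lab):\n",
    "\n",
    "```python\n",
    "def separable_params(kh, kw, in_channels, filters):\n",
    "    # depthwise kernels + pointwise 1x1 weights + biases\n",
    "    return kh * kw * in_channels + in_channels * filters + filters\n",
    "\n",
    "def conv2d_params(kh, kw, in_channels, filters):\n",
    "    return kh * kw * in_channels * filters + filters\n",
    "\n",
    "print(separable_params(3, 1, 64, 96))  # 6432  -- matches the summary above\n",
    "print(conv2d_params(3, 3, 64, 96))     # 55392 -- the regular 3x3 in the V1 block\n",
    "```"
   ]
  },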
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Code a Wide Residual Network\n",
    "\n",
    "Let's now code a mini version of a WRN:\n",
    "\n",
    "    1. Stem\n",
    "    2. Single Group of two residual blocks\n",
    "    3. Classifier\n",
    "    \n",
    "You will need to:\n",
    "\n",
    "    1. Get the value for k (width factor) from kwargs\n",
    "    2. Pass the width factor along with block params to the block method.\n",
    "    3. Determine the number of input channels (feature maps) for the block.\n",
    "    4. Complete the residual link.\n",
    "    5. Add the activation function for the classifier."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from tensorflow.keras import Input, Model\n",
    "from tensorflow.keras.layers import Conv2D, BatchNormalization, ReLU, GlobalAveragePooling2D, Dense, Add\n",
    "\n",
    "def stem(inputs):\n",
    "    # 3x3 16 filter stem convolution with post-activation batch norm (CONV-BN-RE)\n",
    "    outputs = Conv2D(16, (3, 3), strides=(1, 1), padding='same')(inputs)\n",
    "    outputs = BatchNormalization()(outputs)\n",
    "    outputs = ReLU()(outputs)\n",
    "    return outputs\n",
    "\n",
    "def group(inputs, **params):\n",
    "    # Get the kwarg blocks info.\n",
    "    blocks = params['blocks']\n",
    "    # Get the kwarg k (width factor)\n",
    "    # HINT: it's the value of the key 'k'\n",
    "    k = params[??]\n",
    "    \n",
    "    # Construct each block for this group\n",
    "    outputs = inputs\n",
    "    for block_params in blocks:\n",
    "        # Pass the global width parameter along with the block parameters\n",
    "        # HINT: You extracted the key-value above\n",
    "        outputs = block(outputs, **block_params, k=??)\n",
    "    return outputs\n",
    "\n",
    "def block(inputs, **params):\n",
    "    n_filters = params['n_filters']\n",
    "    k = params['k']\n",
    "    \n",
    "    # The input will not match the output shape,\n",
    "    # so do a 1x1 linear projection to match the shapes.\n",
    "    # HINT: the channel axis is the last dimension. Input is a 4D tensor: (batch, height, width, channels)\n",
    "    in_channels = inputs.shape[??]\n",
    "    if in_channels != n_filters:\n",
    "        inputs = BatchNormalization()(inputs)\n",
    "        inputs = Conv2D(n_filters, (1, 1), strides=(1, 1), padding='same')(inputs)\n",
    "        \n",
    "    \n",
    "    # Dimensionality expansion\n",
    "    outputs = BatchNormalization()(inputs)\n",
    "    outputs = ReLU()(outputs)\n",
    "    # Set the number of expanded filters\n",
    "    # HINT: multiply the number of filters for the block by the width factor\n",
    "    outputs = Conv2D(??, (3, 3), strides=(1, 1), padding='same')(outputs)\n",
    "    \n",
    "    # Dimensionality reduction\n",
    "    outputs = BatchNormalization()(outputs)\n",
    "    outputs = ReLU()(outputs)\n",
    "    outputs = Conv2D(n_filters, (3, 3), strides=(1, 1), padding='same')(outputs)\n",
    "    \n",
    "    # Add the residual link to the outputs\n",
    "    # HINT: the residual link is the inputs to the block\n",
    "    outputs = Add()([??])\n",
    "    return outputs\n",
    "\n",
    "def classifier(inputs, n_classes):\n",
    "    # Pool and flatten the feature maps into a 1D vector of C values\n",
    "    outputs = GlobalAveragePooling2D()(inputs)\n",
    "    \n",
    "    # Add the activation method to the classifier\n",
    "    # HINT: what activation is used for a multi-class classifier?\n",
    "    outputs = Dense(n_classes, activation=??)(outputs)\n",
    "    return outputs\n",
    "    \n",
    "\n",
    "inputs = Input((32, 32, 3))\n",
    "outputs = stem(inputs)\n",
    "outputs = group(outputs, **{ 'blocks': [ { 'n_filters': 32 }, { 'n_filters': 64 }], 'k': 4 })\n",
    "outputs = classifier(outputs, 10)\n",
    "\n",
    "model = Model(inputs, outputs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Verify the module using the summary method\n",
    "\n",
    "It should look like below:\n",
    "\n",
    "```\n",
    "Layer (type)                    Output Shape         Param #     Connected to                     \n",
    "==================================================================================================\n",
    "input_6 (InputLayer)            [(None, 32, 32, 3)]  0                                            \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_18 (Conv2D)              (None, 32, 32, 16)   448         input_6[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_18 (BatchNo (None, 32, 32, 16)   64          conv2d_18[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_17 (ReLU)                 (None, 32, 32, 16)   0           batch_normalization_18[0][0]     \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_19 (BatchNo (None, 32, 32, 16)   64          re_lu_17[0][0]                   \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_19 (Conv2D)              (None, 32, 32, 32)   544         batch_normalization_19[0][0]     \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_20 (BatchNo (None, 32, 32, 32)   128         conv2d_19[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_18 (ReLU)                 (None, 32, 32, 32)   0           batch_normalization_20[0][0]     \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_20 (Conv2D)              (None, 32, 32, 128)  36992       re_lu_18[0][0]                   \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_21 (BatchNo (None, 32, 32, 128)  512         conv2d_20[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_19 (ReLU)                 (None, 32, 32, 128)  0           batch_normalization_21[0][0]     \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_21 (Conv2D)              (None, 32, 32, 32)   36896       re_lu_19[0][0]                   \n",
    "__________________________________________________________________________________________________\n",
    "add_6 (Add)                     (None, 32, 32, 32)   0           conv2d_19[0][0]                  \n",
    "                                                                 conv2d_21[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_22 (BatchNo (None, 32, 32, 32)   128         add_6[0][0]                      \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_22 (Conv2D)              (None, 32, 32, 64)   2112        batch_normalization_22[0][0]     \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_23 (BatchNo (None, 32, 32, 64)   256         conv2d_22[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_20 (ReLU)                 (None, 32, 32, 64)   0           batch_normalization_23[0][0]     \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_23 (Conv2D)              (None, 32, 32, 256)  147712      re_lu_20[0][0]                   \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_24 (BatchNo (None, 32, 32, 256)  1024        conv2d_23[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_21 (ReLU)                 (None, 32, 32, 256)  0           batch_normalization_24[0][0]     \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_24 (Conv2D)              (None, 32, 32, 64)   147520      re_lu_21[0][0]                   \n",
    "__________________________________________________________________________________________________\n",
    "add_7 (Add)                     (None, 32, 32, 64)   0           conv2d_22[0][0]                  \n",
    "                                                                 conv2d_24[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "global_average_pooling2d (Globa (None, 64)           0           add_7[0][0]                      \n",
    "__________________________________________________________________________________________________\n",
    "dense (Dense)                   (None, 10)           650         global_average_pooling2d[0][0]   \n",
    "==================================================================================================\n",
    "Total params: 375,050\n",
    "Trainable params: 373,962\n",
    "Non-trainable params: 1,088\n",
    "__________________________________________________________________________________________________\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.summary()"
   ]
  },
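  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The width factor k multiplies only the filters of the expansion convolution, so it widens the block without changing its output shape. For the first block (n_filters=32, k=4) the two 3x3 convolutions work out as follows (an illustrative sketch, not part of the lab):\n",
    "\n",
    "```python\n",
    "def conv2d_params(kh, kw, in_channels, filters):\n",
    "    return kh * kw * in_channels * filters + filters\n",
    "\n",
    "k, n_filters = 4, 32\n",
    "print(conv2d_params(3, 3, n_filters, n_filters * k))  # 36992 -- expansion to 128\n",
    "print(conv2d_params(3, 3, n_filters * k, n_filters))  # 36896 -- reduction back to 32\n",
    "```"
   ]
  },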
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training\n",
    "\n",
    "Finally, let's do a bit of training with your WRN model.\n",
    "\n",
    "### Dataset\n",
    "\n",
    "Let's get the tf.Keras built-in dataset for CIFAR-10. These are 32x32 color images (3 channels) of 10 classes (airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks). We will preprocess the image data (not covered yet)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from tensorflow.keras.datasets import cifar10\n",
    "import numpy as np\n",
    "\n",
    "(x_train, y_train), (x_test, y_test) = cifar10.load_data()\n",
    "x_train = (x_train / 255.0).astype(np.float32)\n",
    "x_test  = (x_test / 255.0).astype(np.float32)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Results\n",
    "Let's train the model for 3 epochs.\n",
    "\n",
    "Because it's just a few epochs, your test accuracy may vary from run to run. For me, it was 52.8%."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['acc'])\n",
    "model.fit(x_train, y_train, epochs=3, batch_size=32, validation_split=0.1, verbose=1)\n",
    "model.evaluate(x_test, y_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## End of Lab Exercise"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
