{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright 2019 Google LLC\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     https://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a target=\"_blank\" href=\"https://colab.research.google.com/github/GoogleCloudPlatform/keras-idiomatic-programmer/blob/master/workshops/Idiomatic%20Programmer%20-%20handbook%201%20-%20Codelab%204.ipynb\">\n",
    "<img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Idiomatic Programmer Code Labs\n",
    "\n",
    "## Code Labs #4 - Get Familiar with Advanced CNN Designs\n",
    "\n",
    "## Prerequisites:\n",
    "\n",
    "    1. Familiar with Python\n",
    "    2. Completed Handbook 1/Part 4: Advanced Convolutional Neural Networks\n",
    "\n",
    "## Objectives:\n",
    "\n",
    "    1. Architecture Changes - Pre-stems\n",
    "    2. Dense connections across sublayers in DenseNet\n",
    "    3. Xception Redesigned Macro-Architecture for CNN"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Pre-Stem Groups for Handling Different Input Sizes\n",
    "\n",
    "Let's create a pre-stem to handle an input size different than what the neural network was designed for.\n",
    "\n",
    "We will use these approaches:\n",
    "\n",
    "    1. Calculate the difference in size between the expected input and the actual size of\n",
    "       the input (here we assume the actual size is smaller than the expected size).\n",
    "       A. Expected = (230, 230, 3)\n",
    "       B. Actual   = (224, 224, 3)\n",
    "    2. Pad the inputs to fit into the expected size.\n",
    "    \n",
    "You fill in the blanks (replace the ??), make sure the code passes the Python interpreter, and then verify its correctness with the summary output."
   ]
  },
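  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The padding arithmetic above can be sanity-checked in plain Python (a minimal sketch; the helper name is our own):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def per_side_padding(expected, actual):\n",
    "    \"\"\" Symmetric padding per side (left/right or top/bottom) \"\"\"\n",
    "    diff = expected - actual\n",
    "    assert diff >= 0 and diff % 2 == 0, \"difference must be non-negative and even\"\n",
    "    return diff // 2\n",
    "\n",
    "# Expected (230, 230, 3) vs. actual (224, 224, 3)\n",
    "print(per_side_padding(230, 224))  # 3 pixels on each side"
   ]
  },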
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from keras import layers, Input\n",
    "\n",
    "# Note: this is not the input shape expected by the stem, which is (230, 230, 3)\n",
    "inputs = Input(shape=(224, 224, 3))\n",
    "\n",
    "# Add a pre-stem and pad (224, 224, 3) to (230, 230, 3)\n",
    "# HINT: Since the pad is on both sides (left/right, top/bottom) you want to divide the\n",
    "# difference by two (half goes to the left, half goes to the right, etc)\n",
    "inputs = layers.ZeroPadding2D(??)(inputs)\n",
    "\n",
    "# This stem's expected shape is (230, 230, 3)\n",
    "x = layers.Conv2D(64, (7, 7), strides=(2,2))(inputs)\n",
    "x = layers.BatchNormalization()(x)\n",
    "x = layers.ReLU()(x)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Verify that the actual input is padded to the expected size\n",
    "\n",
    "You should get the following output for the shapes of the inputs and outputs:\n",
    "\n",
    "```\n",
    "inputs (?, 230, 230, 3)\n",
    "outputs (?, 112, 112, 64)\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# this will output: (?, 230, 230, 3)\n",
    "print(\"inputs\", inputs.shape)\n",
    "\n",
    "# this will output: (?, 112, 112, 64)\n",
    "print(\"outputs\", x.shape)"
   ]
  },
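  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The expected output shapes follow from standard convolution arithmetic. As a rough check (the helper name is our own):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def conv_out(size, kernel, stride, pad=0):\n",
    "    \"\"\" Spatial output size of a VALID convolution after padding \"\"\"\n",
    "    return (size + 2 * pad - kernel) // stride + 1\n",
    "\n",
    "# 224 x 224 input padded by 3 on each side (230 x 230),\n",
    "# then the 7 x 7 stride-2 stem convolution:\n",
    "print(conv_out(224, 7, 2, pad=3))  # 112"
   ]
  },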
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## DenseNet with the Functional API\n",
    "\n",
    "Let's create a DenseNet-121:\n",
    "\n",
    "We will use these approaches:\n",
    "\n",
    "    1. Add a pre-stem step of padding by 1 pixel so a 230x230x3 input results in 7x7 \n",
    "       feature maps at the global average (bottleneck) layer.\n",
    "    2. Use average pooling (subsampling) in transition blocks.\n",
    "    3. Accumulate feature maps through residual blocks by concatenating the input to the\n",
    "       output, and making that the new output.\n",
    "    4. Use compression to reduce feature map sizes between dense blocks."
   ]
  },
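  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before filling in the blanks, it helps to trace how the channel counts evolve. The sketch below (helper names are our own; growth rate 32, as in DenseNet-121) follows the feature maps through the dense and transition blocks:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def dense_block_channels(in_ch, nblocks, growth_rate=32):\n",
    "    \"\"\" Each residual block concatenates growth_rate new feature maps \"\"\"\n",
    "    return in_ch + nblocks * growth_rate\n",
    "\n",
    "def compressed(ch, reduce_by=0.5):\n",
    "    \"\"\" A transition block compresses the channel count (DenseNet-C) \"\"\"\n",
    "    return int(ch * reduce_by)\n",
    "\n",
    "ch = 64  # channels leaving the stem\n",
    "for nblocks in [6, 12, 24]:\n",
    "    ch = compressed(dense_block_channels(ch, nblocks))\n",
    "# the last dense block has no following transition block\n",
    "ch = dense_block_channels(ch, 16)\n",
    "print(ch)  # 1024 feature maps at the global average pooling layer"
   ]
  },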
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from keras import layers, Input, Model\n",
    "\n",
    "def stem(inputs):\n",
    "    \"\"\" The Stem Convolution Group\n",
    "        inputs : input tensor\n",
    "    \"\"\"\n",
    "    # First large convolution for abstract features for input 230 x 230 and output\n",
    "    # 112 x 112\n",
    "    x = layers.Conv2D(64, (7, 7), strides=2)(inputs)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "    x = layers.ReLU()(x)\n",
    "    # Add padding so when downsampling we fit shape 56 x 56\n",
    "    # Hint: we want to pad one pixel all around.\n",
    "    x = layers.ZeroPadding2D(padding=(??, ??))(x)\n",
    "    x = layers.MaxPooling2D((3, 3), strides=2)(x)\n",
    "    return x\n",
    "\n",
    "def dense_block(x, nblocks, nb_filters):\n",
    "    \"\"\" Construct a Dense Block\n",
    "        x         : input layer\n",
    "        nblocks   : number of residual blocks in dense block\n",
    "        nb_filters: number of filters in convolution layer in residual block\n",
    "    \"\"\"\n",
    "    # Construct a group of residual blocks\n",
    "    for _ in range(nblocks):\n",
    "        x = residual_block(x, nb_filters)\n",
    "    return x\n",
    "\n",
    "def residual_block(x, nb_filters):\n",
    "    \"\"\" Construct Residual Block\n",
    "        x         : input layer\n",
    "        nb_filters: number of filters in convolution layer in residual block\n",
    "    \"\"\"\n",
    "    shortcut = x # remember input tensor into residual block\n",
    "\n",
    "    # Bottleneck convolution, expand filters by 4 (DenseNet-B)\n",
    "    x = layers.Conv2D(4 * nb_filters, (1, 1), strides=(1, 1))(x)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "    x = layers.ReLU()(x)\n",
    "\n",
    "    # 3 x 3 convolution with padding=same to preserve same shape of feature maps\n",
    "    x = layers.Conv2D(nb_filters, (3, 3), strides=(1, 1), padding='same')(x)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "    x = layers.ReLU()(x)\n",
    "\n",
    "    # Concatenate the input (identity) with the output of the residual block\n",
    "    # Concatenation (vs. merging) provides Feature Reuse between layers\n",
    "    # HINT: Use a list which includes the remembered input and the output from the residual block - which becomes the new output\n",
    "    x = layers.concatenate([??])\n",
    "    return x\n",
    "\n",
    "def trans_block(x, reduce_by):\n",
    "    \"\"\" Construct a Transition Block\n",
    "        x        : input layer\n",
    "        reduce_by: percentage of reduction of feature maps\n",
    "    \"\"\"\n",
    "    # Reduce (compression) the number of feature maps (DenseNet-C)\n",
    "    # x.shape[3] may return a Dimension object rather than an int, so we cast it\n",
    "    # with int() to get the integer channel count\n",
    "    # HINT: the compression is a percentage (~0.5) that was passed as a parameter to this function\n",
    "    nb_filters = int( int(x.shape[3]) * ?? )\n",
    "\n",
    "    # Bottleneck convolution\n",
    "    x = layers.Conv2D(nb_filters, (1, 1), strides=(1, 1))(x)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "    x = layers.ReLU()(x)\n",
    "\n",
    "    # Use the mean value (average) instead of the max value when pooling,\n",
    "    # reducing the feature map area by 75%\n",
    "    # HINT: instead of Max Pooling (downsampling) we use Average Pooling (subsampling)\n",
    "    x = layers.??Pooling2D((2, 2), strides=(2, 2))(x)\n",
    "    return x\n",
    "\n",
    "inputs = Input(shape=(230, 230, 3))\n",
    "\n",
    "# Create the Stem Convolution Group\n",
    "x = stem(inputs)\n",
    "\n",
    "# number of residual blocks in each dense block\n",
    "blocks = [6, 12, 24, 16]\n",
    "\n",
    "# pop the last dense block off the list\n",
    "last   = blocks.pop()\n",
    "\n",
    "# amount to reduce feature maps by (compression) during transition blocks\n",
    "reduce_by = 0.5\n",
    "\n",
    "# number of filters in a convolution block within a residual block\n",
    "nb_filters = 32\n",
    "\n",
    "# Create the dense blocks and intervening transition blocks\n",
    "for nblocks in blocks:\n",
    "    x = dense_block(x, nblocks, nb_filters)\n",
    "    x = trans_block(x, reduce_by)\n",
    "\n",
    "# Add the last dense block w/o a following transition block\n",
    "x = dense_block(x, last, nb_filters)\n",
    "\n",
    "# Classifier\n",
    "# Global Average Pooling will flatten the 7x7 feature maps into 1D feature maps\n",
    "x = layers.GlobalAveragePooling2D()(x)\n",
    "# Fully connected output layer (classification)\n",
    "outputs = layers.Dense(1000, activation='softmax')(x)\n",
    "\n",
    "model = Model(inputs, outputs)"
   ]
  },
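  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The parameter counts in the summary below can be spot-checked with the standard Conv2D parameter formula (a small sketch; the helper name is our own):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def conv2d_params(in_ch, out_ch, kh, kw):\n",
    "    \"\"\" Kernel weights plus one bias per output filter \"\"\"\n",
    "    return (in_ch * kh * kw + 1) * out_ch\n",
    "\n",
    "# 1 x 1 bottleneck right after the stem: 64 -> 128 filters\n",
    "print(conv2d_params(64, 128, 1, 1))   # 8320\n",
    "# 3 x 3 convolution in a residual block: 128 -> 32 filters\n",
    "print(conv2d_params(128, 32, 3, 3))   # 36896"
   ]
  },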
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Verify the model architecture using the summary() method\n",
    "\n",
    "It should look like the following:\n",
    "\n",
    "```\n",
    "Layer (type)                    Output Shape         Param #     Connected to                     \n",
    "==================================================================================================\n",
    "input_3 (InputLayer)            (None, 230, 230, 3)  0                                            \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_241 (Conv2D)             (None, 112, 112, 64) 9472        input_3[0][0]                    \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_241 (BatchN (None, 112, 112, 64) 256         conv2d_241[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_241 (ReLU)                (None, 112, 112, 64) 0           batch_normalization_241[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "zero_padding2d_2 (ZeroPadding2D (None, 114, 114, 64) 0           re_lu_241[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "max_pooling2d_3 (MaxPooling2D)  (None, 56, 56, 64)   0           zero_padding2d_2[0][0]           \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_242 (Conv2D)             (None, 56, 56, 128)  8320        max_pooling2d_3[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_242 (BatchN (None, 56, 56, 128)  512         conv2d_242[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_242 (ReLU)                (None, 56, 56, 128)  0           batch_normalization_242[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_243 (Conv2D)             (None, 56, 56, 32)   36896       re_lu_242[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_243 (BatchN (None, 56, 56, 32)   128         conv2d_243[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_243 (ReLU)                (None, 56, 56, 32)   0           batch_normalization_243[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_117 (Concatenate)   (None, 56, 56, 96)   0           max_pooling2d_3[0][0]            \n",
    "                                                                 re_lu_243[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_244 (Conv2D)             (None, 56, 56, 128)  12416       concatenate_117[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_244 (BatchN (None, 56, 56, 128)  512         conv2d_244[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_244 (ReLU)                (None, 56, 56, 128)  0           batch_normalization_244[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_245 (Conv2D)             (None, 56, 56, 32)   36896       re_lu_244[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_245 (BatchN (None, 56, 56, 32)   128         conv2d_245[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_245 (ReLU)                (None, 56, 56, 32)   0           batch_normalization_245[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_118 (Concatenate)   (None, 56, 56, 128)  0           concatenate_117[0][0]            \n",
    "                                                                 re_lu_245[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_246 (Conv2D)             (None, 56, 56, 128)  16512       concatenate_118[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_246 (BatchN (None, 56, 56, 128)  512         conv2d_246[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_246 (ReLU)                (None, 56, 56, 128)  0           batch_normalization_246[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_247 (Conv2D)             (None, 56, 56, 32)   36896       re_lu_246[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_247 (BatchN (None, 56, 56, 32)   128         conv2d_247[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_247 (ReLU)                (None, 56, 56, 32)   0           batch_normalization_247[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_119 (Concatenate)   (None, 56, 56, 160)  0           concatenate_118[0][0]            \n",
    "                                                                 re_lu_247[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_248 (Conv2D)             (None, 56, 56, 128)  20608       concatenate_119[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_248 (BatchN (None, 56, 56, 128)  512         conv2d_248[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_248 (ReLU)                (None, 56, 56, 128)  0           batch_normalization_248[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_249 (Conv2D)             (None, 56, 56, 32)   36896       re_lu_248[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_249 (BatchN (None, 56, 56, 32)   128         conv2d_249[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_249 (ReLU)                (None, 56, 56, 32)   0           batch_normalization_249[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_120 (Concatenate)   (None, 56, 56, 192)  0           concatenate_119[0][0]            \n",
    "                                                                 re_lu_249[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_250 (Conv2D)             (None, 56, 56, 128)  24704       concatenate_120[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_250 (BatchN (None, 56, 56, 128)  512         conv2d_250[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_250 (ReLU)                (None, 56, 56, 128)  0           batch_normalization_250[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_251 (Conv2D)             (None, 56, 56, 32)   36896       re_lu_250[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_251 (BatchN (None, 56, 56, 32)   128         conv2d_251[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_251 (ReLU)                (None, 56, 56, 32)   0           batch_normalization_251[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_121 (Concatenate)   (None, 56, 56, 224)  0           concatenate_120[0][0]            \n",
    "                                                                 re_lu_251[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_252 (Conv2D)             (None, 56, 56, 128)  28800       concatenate_121[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_252 (BatchN (None, 56, 56, 128)  512         conv2d_252[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_252 (ReLU)                (None, 56, 56, 128)  0           batch_normalization_252[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_253 (Conv2D)             (None, 56, 56, 32)   36896       re_lu_252[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_253 (BatchN (None, 56, 56, 32)   128         conv2d_253[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_253 (ReLU)                (None, 56, 56, 32)   0           batch_normalization_253[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_122 (Concatenate)   (None, 56, 56, 256)  0           concatenate_121[0][0]            \n",
    "                                                                 re_lu_253[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_254 (Conv2D)             (None, 56, 56, 128)  32896       concatenate_122[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_254 (BatchN (None, 56, 56, 128)  512         conv2d_254[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_254 (ReLU)                (None, 56, 56, 128)  0           batch_normalization_254[0][0]    \n",
    "\n",
    "REMOVED for BREVITY ...\n",
    "__________________________________________________________________________________________________\n",
    "average_pooling2d_9 (AveragePoo (None, 7, 7, 512)    0           re_lu_328[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_329 (Conv2D)             (None, 7, 7, 128)    65664       average_pooling2d_9[0][0]        \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_329 (BatchN (None, 7, 7, 128)    512         conv2d_329[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_329 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_329[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_330 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_329[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_330 (BatchN (None, 7, 7, 32)     128         conv2d_330[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_330 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_330[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_159 (Concatenate)   (None, 7, 7, 544)    0           average_pooling2d_9[0][0]        \n",
    "                                                                 re_lu_330[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_331 (Conv2D)             (None, 7, 7, 128)    69760       concatenate_159[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_331 (BatchN (None, 7, 7, 128)    512         conv2d_331[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_331 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_331[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_332 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_331[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_332 (BatchN (None, 7, 7, 32)     128         conv2d_332[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_332 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_332[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_160 (Concatenate)   (None, 7, 7, 576)    0           concatenate_159[0][0]            \n",
    "                                                                 re_lu_332[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_333 (Conv2D)             (None, 7, 7, 128)    73856       concatenate_160[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_333 (BatchN (None, 7, 7, 128)    512         conv2d_333[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_333 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_333[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_334 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_333[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_334 (BatchN (None, 7, 7, 32)     128         conv2d_334[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_334 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_334[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_161 (Concatenate)   (None, 7, 7, 608)    0           concatenate_160[0][0]            \n",
    "                                                                 re_lu_334[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_335 (Conv2D)             (None, 7, 7, 128)    77952       concatenate_161[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_335 (BatchN (None, 7, 7, 128)    512         conv2d_335[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_335 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_335[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_336 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_335[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_336 (BatchN (None, 7, 7, 32)     128         conv2d_336[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_336 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_336[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_162 (Concatenate)   (None, 7, 7, 640)    0           concatenate_161[0][0]            \n",
    "                                                                 re_lu_336[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_337 (Conv2D)             (None, 7, 7, 128)    82048       concatenate_162[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_337 (BatchN (None, 7, 7, 128)    512         conv2d_337[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_337 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_337[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_338 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_337[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_338 (BatchN (None, 7, 7, 32)     128         conv2d_338[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_338 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_338[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_163 (Concatenate)   (None, 7, 7, 672)    0           concatenate_162[0][0]            \n",
    "                                                                 re_lu_338[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_339 (Conv2D)             (None, 7, 7, 128)    86144       concatenate_163[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_339 (BatchN (None, 7, 7, 128)    512         conv2d_339[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_339 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_339[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_340 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_339[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_340 (BatchN (None, 7, 7, 32)     128         conv2d_340[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_340 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_340[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_164 (Concatenate)   (None, 7, 7, 704)    0           concatenate_163[0][0]            \n",
    "                                                                 re_lu_340[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_341 (Conv2D)             (None, 7, 7, 128)    90240       concatenate_164[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_341 (BatchN (None, 7, 7, 128)    512         conv2d_341[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_341 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_341[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_342 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_341[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_342 (BatchN (None, 7, 7, 32)     128         conv2d_342[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_342 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_342[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_165 (Concatenate)   (None, 7, 7, 736)    0           concatenate_164[0][0]            \n",
    "                                                                 re_lu_342[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_343 (Conv2D)             (None, 7, 7, 128)    94336       concatenate_165[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_343 (BatchN (None, 7, 7, 128)    512         conv2d_343[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_343 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_343[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_344 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_343[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_344 (BatchN (None, 7, 7, 32)     128         conv2d_344[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_344 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_344[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_166 (Concatenate)   (None, 7, 7, 768)    0           concatenate_165[0][0]            \n",
    "                                                                 re_lu_344[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_345 (Conv2D)             (None, 7, 7, 128)    98432       concatenate_166[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_345 (BatchN (None, 7, 7, 128)    512         conv2d_345[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_345 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_345[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_346 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_345[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_346 (BatchN (None, 7, 7, 32)     128         conv2d_346[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_346 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_346[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_167 (Concatenate)   (None, 7, 7, 800)    0           concatenate_166[0][0]            \n",
    "                                                                 re_lu_346[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_347 (Conv2D)             (None, 7, 7, 128)    102528      concatenate_167[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_347 (BatchN (None, 7, 7, 128)    512         conv2d_347[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_347 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_347[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_348 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_347[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_348 (BatchN (None, 7, 7, 32)     128         conv2d_348[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_348 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_348[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_168 (Concatenate)   (None, 7, 7, 832)    0           concatenate_167[0][0]            \n",
    "                                                                 re_lu_348[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_349 (Conv2D)             (None, 7, 7, 128)    106624      concatenate_168[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_349 (BatchN (None, 7, 7, 128)    512         conv2d_349[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_349 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_349[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_350 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_349[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_350 (BatchN (None, 7, 7, 32)     128         conv2d_350[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_350 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_350[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_169 (Concatenate)   (None, 7, 7, 864)    0           concatenate_168[0][0]            \n",
    "                                                                 re_lu_350[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_351 (Conv2D)             (None, 7, 7, 128)    110720      concatenate_169[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_351 (BatchN (None, 7, 7, 128)    512         conv2d_351[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_351 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_351[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_352 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_351[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_352 (BatchN (None, 7, 7, 32)     128         conv2d_352[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_352 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_352[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_170 (Concatenate)   (None, 7, 7, 896)    0           concatenate_169[0][0]            \n",
    "                                                                 re_lu_352[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_353 (Conv2D)             (None, 7, 7, 128)    114816      concatenate_170[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_353 (BatchN (None, 7, 7, 128)    512         conv2d_353[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_353 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_353[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_354 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_353[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_354 (BatchN (None, 7, 7, 32)     128         conv2d_354[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_354 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_354[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_171 (Concatenate)   (None, 7, 7, 928)    0           concatenate_170[0][0]            \n",
    "                                                                 re_lu_354[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_355 (Conv2D)             (None, 7, 7, 128)    118912      concatenate_171[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_355 (BatchN (None, 7, 7, 128)    512         conv2d_355[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_355 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_355[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_356 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_355[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_356 (BatchN (None, 7, 7, 32)     128         conv2d_356[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_356 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_356[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_172 (Concatenate)   (None, 7, 7, 960)    0           concatenate_171[0][0]            \n",
    "                                                                 re_lu_356[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_357 (Conv2D)             (None, 7, 7, 128)    123008      concatenate_172[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_357 (BatchN (None, 7, 7, 128)    512         conv2d_357[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_357 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_357[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_358 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_357[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_358 (BatchN (None, 7, 7, 32)     128         conv2d_358[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_358 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_358[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_173 (Concatenate)   (None, 7, 7, 992)    0           concatenate_172[0][0]            \n",
    "                                                                 re_lu_358[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_359 (Conv2D)             (None, 7, 7, 128)    127104      concatenate_173[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_359 (BatchN (None, 7, 7, 128)    512         conv2d_359[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_359 (ReLU)                (None, 7, 7, 128)    0           batch_normalization_359[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "conv2d_360 (Conv2D)             (None, 7, 7, 32)     36896       re_lu_359[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "batch_normalization_360 (BatchN (None, 7, 7, 32)     128         conv2d_360[0][0]                 \n",
    "__________________________________________________________________________________________________\n",
    "re_lu_360 (ReLU)                (None, 7, 7, 32)     0           batch_normalization_360[0][0]    \n",
    "__________________________________________________________________________________________________\n",
    "concatenate_174 (Concatenate)   (None, 7, 7, 1024)   0           concatenate_173[0][0]            \n",
    "                                                                 re_lu_360[0][0]                  \n",
    "__________________________________________________________________________________________________\n",
    "global_average_pooling2d_3 (Glo (None, 1024)         0           concatenate_174[0][0]            \n",
    "__________________________________________________________________________________________________\n",
    "dense_3 (Dense)                 (None, 1000)         1025000     global_average_pooling2d_3[0][0] \n",
    "==================================================================================================\n",
    "Total params: 7,946,408\n",
    "Trainable params: 7,925,928\n",
    "Non-trainable params: 20,480\n",
    "__________________________________________________________________________________________________\n",
    "```"
   ]
  },
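  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the summary above you can see the DenseNet pattern: each sublayer runs a 1x1 bottleneck convolution (128 filters = 4x the growth rate) followed by a 3x3 convolution that produces 32 (the growth rate) new feature maps, which are concatenated onto everything before them, so the channel count grows by 32 each time (704, 736, 768, ...). A minimal sketch of one such sublayer (assuming `tensorflow.keras`; the helper name `dense_sublayer` is our own):\n",
    "\n",
    "```python\n",
    "from tensorflow.keras import layers, Input\n",
    "\n",
    "def dense_sublayer(x, growth_rate=32):\n",
    "    # 1x1 bottleneck convolution with 4 * growth_rate filters\n",
    "    b = layers.Conv2D(4 * growth_rate, (1, 1))(x)\n",
    "    b = layers.BatchNormalization()(b)\n",
    "    b = layers.ReLU()(b)\n",
    "    # 3x3 convolution producing growth_rate new feature maps\n",
    "    b = layers.Conv2D(growth_rate, (3, 3), padding='same')(b)\n",
    "    b = layers.BatchNormalization()(b)\n",
    "    b = layers.ReLU()(b)\n",
    "    # Concatenate the new maps onto the input: channels grow by growth_rate\n",
    "    return layers.Concatenate()([x, b])\n",
    "\n",
    "inputs = Input(shape=(7, 7, 704))\n",
    "outputs = dense_sublayer(inputs)\n",
    "print(outputs.shape)  # (None, 7, 7, 736)\n",
    "```"
   ]
  },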
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Xception Architecture using Functional API\n",
    "\n",
    "Let's lay out a CNN using the Xception architecture pattern.\n",
    "\n",
    "We will use these approaches:\n",
    "\n",
    "    1. Decompose the network into a stem, entry, middle and exit module.\n",
    "    2. The stem performs the initial sequential convolutional layers on the input.\n",
    "    3. The entry flow does the coarse filter learning.\n",
    "    4. The middle flow does the detailed filter learning.\n",
    "    5. The exit flow does the classification.\n",
    "    \n",
    "We won't build a full Xception, just a mini-example to practice the layout."
   ]
  },
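  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Xception builds its residual blocks from depthwise separable convolutions (`SeparableConv2D`), which factor a regular convolution into a per-channel (depthwise) 3x3 convolution followed by a 1x1 (pointwise) convolution, at a fraction of the parameters. A quick comparison (assuming `tensorflow.keras`; the shapes are just for illustration):\n",
    "\n",
    "```python\n",
    "from tensorflow.keras import layers, Input, Model\n",
    "\n",
    "inp = Input(shape=(28, 28, 64))\n",
    "\n",
    "# Regular 3x3 convolution: 3*3*64*128 + 128 bias = 73,856 parameters\n",
    "regular = Model(inp, layers.Conv2D(128, (3, 3), padding='same')(inp))\n",
    "\n",
    "# Separable: depthwise 3*3*64 + pointwise 64*128 + 128 bias = 8,896 parameters\n",
    "separable = Model(inp, layers.SeparableConv2D(128, (3, 3), padding='same')(inp))\n",
    "\n",
    "print(regular.count_params(), separable.count_params())  # 73856 8896\n",
    "```"
   ]
  },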
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from keras import layers, Input, Model\n",
    "\n",
    "def entryFlow(inputs):\n",
    "    \"\"\" Create the entry flow section\n",
    "        inputs : input tensor to neural network\n",
    "    \"\"\"\n",
    "\n",
    "    def stem(inputs):\n",
    "        \"\"\" Create the stem entry into the neural network\n",
    "            inputs : input tensor to neural network\n",
    "        \"\"\"\n",
    "        # The stem uses two 3x3 convolutions.\n",
    "        # The first one downsamples and the second one doubles the number of filters\n",
    "        \n",
    "        # First convolution\n",
    "        x = layers.Conv2D(32, (3, 3), strides=(2, 2))(inputs)\n",
    "        x = layers.BatchNormalization()(x)\n",
    "        x = layers.ReLU()(x)\n",
    "\n",
    "        # Second convolution, double the number of filters (no downsampling)\n",
    "        # HINT: when stride > 1 you are downsampling (also known as strided convolution)\n",
    "        x = layers.Conv2D(??, (3, 3), strides=??)(x)\n",
    "        x = layers.BatchNormalization()(x)\n",
    "        x = layers.ReLU()(x)\n",
    "        return x\n",
    "        \n",
    "    # Create the stem to the neural network\n",
    "    x = stem(inputs)\n",
    "\n",
    "    # Create three residual blocks\n",
    "    for nb_filters in [128, 256, 728]:\n",
    "        x = residual_block_entry(x, nb_filters)\n",
    "\n",
    "    return x\n",
    "\n",
    "def middleFlow(x):\n",
    "    \"\"\" Create the middle flow section\n",
    "        x : input tensor into section\n",
    "    \"\"\"\n",
    "    # Create 8 residual blocks, each with 728 filters\n",
    "    for _ in range(8):\n",
    "        x = residual_block_middle(x, ??)\n",
    "    return x\n",
    "\n",
    "def exitFlow(x):\n",
    "    \"\"\" Create the exit flow section\n",
    "        x : input tensor into section\n",
    "    \"\"\"\n",
    "    def classifier(x):\n",
    "        \"\"\" The output classifier\n",
    "            x : input tensor\n",
    "        \"\"\"\n",
    "        # Global Average Pooling reduces each 10x10 feature map to a single\n",
    "        # value, flattening the output into a 1D feature vector\n",
    "        x = layers.??()(x)\n",
    "        # Fully connected output layer (classification)\n",
    "        x = layers.Dense(1000, activation='softmax')(x)\n",
    "        return x\n",
    "\n",
    "    shortcut = x\n",
    "\n",
    "    # First Depthwise Separable Convolution\n",
    "    x = layers.SeparableConv2D(728, (3, 3), padding='same')(x)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "\n",
    "    # Second Depthwise Separable Convolution\n",
    "    x = layers.SeparableConv2D(1024, (3, 3), padding='same')(x)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "    x = layers.ReLU()(x)\n",
    "\n",
    "    # Create pooled feature maps, reduce size by 75%\n",
    "    x = layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)\n",
    "\n",
    "    # Use a strided convolution on the identity link to increase the number of\n",
    "    # filters and downsample, matching the residual output for the add operation\n",
    "    shortcut = layers.Conv2D(1024, (1, 1), strides=(2, 2),\n",
    "                             padding='same')(shortcut)\n",
    "    shortcut = layers.BatchNormalization()(shortcut)\n",
    "\n",
    "    x = layers.add([x, shortcut])\n",
    "\n",
    "    # Third Depthwise Separable Convolution\n",
    "    x = layers.SeparableConv2D(1536, (3, 3), padding='same')(x)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "    x = layers.ReLU()(x)\n",
    "\n",
    "    # Fourth Depthwise Separable Convolution\n",
    "    x = layers.SeparableConv2D(2048, (3, 3), padding='same')(x)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "    x = layers.ReLU()(x)\n",
    "\n",
    "    # Create classifier section\n",
    "    x = classifier(x)\n",
    "\n",
    "    return x\n",
    "\n",
    "def residual_block_entry(x, nb_filters):\n",
    "    \"\"\" Create a residual block using Depthwise Separable Convolutions\n",
    "        x         : input into residual block\n",
    "        nb_filters: number of filters\n",
    "    \"\"\"\n",
    "    shortcut = x\n",
    "\n",
    "    # First Depthwise Separable Convolution\n",
    "    x = layers.SeparableConv2D(nb_filters, (3, 3), padding='same')(x)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "    x = layers.ReLU()(x)\n",
    "\n",
    "    # Second Depthwise Separable Convolution\n",
    "    x = layers.SeparableConv2D(nb_filters, (3, 3), padding='same')(x)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "    x = layers.ReLU()(x)\n",
    "\n",
    "    # Create pooled feature maps, reduce size by 75%\n",
    "    x = layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)\n",
    "\n",
    "    # Use a strided convolution on the identity link to increase the number of\n",
    "    # filters and downsample, matching the residual output for the add operation\n",
    "    # HINT: this is the identity branch, so what should be the input?\n",
    "    shortcut = layers.Conv2D(nb_filters, (1, 1), strides=(2, 2),\n",
    "                             padding='same')(??)\n",
    "    shortcut = layers.BatchNormalization()(shortcut)\n",
    "\n",
    "    x = layers.add([x, shortcut])\n",
    "\n",
    "    return x\n",
    "\n",
    "def residual_block_middle(x, nb_filters):\n",
    "    \"\"\" Create a residual block using Depthwise Separable Convolutions\n",
    "        x         : input into residual block\n",
    "        nb_filters: number of filters\n",
    "    \"\"\"\n",
    "    # Remember to save the input for the identity link\n",
    "    # HINT: it's in the params!\n",
    "    shortcut = ??\n",
    "\n",
    "    # First Depthwise Separable Convolution\n",
    "    x = layers.SeparableConv2D(nb_filters, (3, 3), padding='same')(x)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "    x = layers.ReLU()(x)\n",
    "\n",
    "    # Second Depthwise Separable Convolution\n",
    "    x = layers.SeparableConv2D(nb_filters, (3, 3), padding='same')(x)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "    x = layers.ReLU()(x)\n",
    "\n",
    "    # Third Depthwise Separable Convolution\n",
    "    x = layers.SeparableConv2D(nb_filters, (3, 3), padding='same')(x)\n",
    "    x = layers.BatchNormalization()(x)\n",
    "    x = layers.ReLU()(x)\n",
    "    \n",
    "    x = layers.add([x, shortcut])\n",
    "    return x\n",
    "\n",
    "inputs = Input(shape=(299, 299, 3))\n",
    "\n",
    "# Create entry section\n",
    "x = entryFlow(inputs)\n",
    "# Create the middle section\n",
    "x = middleFlow(x)\n",
    "# Create the exit section\n",
    "outputs = exitFlow(x)\n",
    "\n",
    "model = Model(inputs, outputs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Verify the model architecture using summary method\n",
    "\n",
    "The end of the summary should look like this:\n",
    "\n",
    "```\n",
    "global_average_pooling2d_1 (Glo (None, 2048)         0           re_lu_37[0][0]                   \n",
    "__________________________________________________________________________________________________\n",
    "dense_1 (Dense)                 (None, 1000)         2049000     global_average_pooling2d_1[0][0] \n",
    "==================================================================================================\n",
    "Total params: 22,920,016\n",
    "Trainable params: 22,865,552\n",
    "Non-trainable params: 54,464\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## End of Code Lab"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
