{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "models as layers_F.ipynb",
      "provenance": [],
      "collapsed_sections": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_o0d2wU8ZJ0w"
      },
      "source": [
        "# Models as layers\n",
        "A whole pretrained model can be picked up and used as a layer.\n",
        "\n",
        "As with layer sharing, the model's layers bring along their current weights, which are reused.\n",
        "\n",
        "Calling an instance, whether it's a layer instance or a model instance, will always reuse the existing learned representations of that instance."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "YeKq3Nt4ZaUI"
      },
      "source": [
        "\n",
        "# This means you can call a model on an input tensor and retrieve an output tensor:\n",
        "y = model(x)  # e.g., model here is a Model instance\n",
        "\n",
        "#If the model has multiple input tensors and multiple output tensors, it should be called with a list of tensors:\n",
        "y1, y2 = model([x1, x2])\n",
        "\n",
        "# When you call a model instance, you’re reusing the weights of the model"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yI3WqZ6GZYr9"
      },
      "source": [
        "One simple practical example of what you can build by reusing a model instance is a vision model that uses a dual camera as its input: two parallel cameras, a few centimeters (one inch) apart. Such a model can perceive depth, which can be useful in many applications. You shouldn’t need two independent models to extract visual features from the left camera and the right camera before merging the two feeds. Such low-level processing can be shared across the two inputs: that is, done via layers that use the same weights and thus share the same representations. Here’s how you’d implement a Siamese vision model (shared convolutional base) in Keras:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Lr6BZSH0ZUSz"
      },
      "source": [
        "from keras import layers\n",
        "from keras import applications\n",
        "from keras import Input\n",
        "\n",
        "# The base image-processing model is the Xception network (convolutional base only)\n",
        "xception_base = applications.Xception(weights=None,include_top=False)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ywfv6MQ_ZnZ1"
      },
      "source": [
        "\n",
        "left_input = Input(shape=(250, 250, 3))  #The inputs are 250 × 250 RGB images.\n",
        "right_input = Input(shape=(250, 250, 3)) #Calls the same vision model twice\n",
        "\n",
        "left_features = xception_base(left_input)\n",
        "right_features = xception_base(right_input)  # reusing the same base, so the weights are shared\n",
        "\n",
        "merged_features = layers.concatenate([left_features, right_features], axis=-1)\n",
        "# The merged features contain information from the right visual feed and the left visual feed."
      ],
      "execution_count": null,
      "outputs": []
    },
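    {
      "cell_type": "markdown",
      "metadata": {
        "id": "siamese_head_note"
      },
      "source": [
        "To round the sketch off, the merged features can be fed into a small head and wrapped in a `Model`. This is a minimal illustration, not from the book; the pooling layer and the single-unit sigmoid output (e.g. for a binary same/different task) are assumptions."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "siamese_head_code"
      },
      "source": [
        "from keras import Model\n",
        "\n",
        "# Hypothetical head on top of the merged features (binary output is an assumption)\n",
        "pooled = layers.GlobalAveragePooling2D()(merged_features)\n",
        "predictions = layers.Dense(1, activation='sigmoid')(pooled)\n",
        "\n",
        "siamese_model = Model([left_input, right_input], predictions)\n",
        "siamese_model.summary()"
      ],
      "execution_count": null,
      "outputs": []
    },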
    {
      "cell_type": "code",
      "metadata": {
        "id": "FmoM8dbvbSfj"
      },
      "source": [
        "#############################################################"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lsfS7c-OZpN9"
      },
      "source": [
        "The code above is from the book; now let's implement it ourselves."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IbBlcqz2bU1j"
      },
      "source": [
        "# Mnist_Fashion\n",
        "Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes.\n",
        "\n",
        "The training and test data sets have 785 columns. The first column consists of the class labels and represents the article of clothing. The rest of the columns contain the pixel-values of the associated image."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "CkUpJq6XZoc9",
        "outputId": "0176b97f-056e-49fd-c86f-bda254eaccdf",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 207
        }
      },
      "source": [
        "import numpy as np\n",
        "import pandas as pd\n",
        "import tensorflow as tf\n",
        "import matplotlib.pyplot as plt\n",
        "\n",
        "from tensorflow.keras.datasets import fashion_mnist\n",
        "(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()\n",
        "print()\n",
        "\n",
        "class_names = [\"0.T-shirt/top\",\"1.Trouser\",\"2.Pullover\",\"3.Dress\",\"4.Coat\",\"5.Sandal\",\"6.Shirt\",\"7.Sneaker\",\"8.Bag\",\"9.Ankle boot\"]\n",
        "display(class_names)"
      ],
      "execution_count": 1,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "\n"
          ],
          "name": "stdout"
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "['0.T-shirt/top',\n",
              " '1.Trouser',\n",
              " '2.Pullover',\n",
              " '3.Dress',\n",
              " '4.Coat',\n",
              " '5.Sandal',\n",
              " '6.Shirt',\n",
              " '7.Sneaker',\n",
              " '8.Bag',\n",
              " '9.Ankle boot']"
            ]
          },
          "metadata": {
            "tags": []
          }
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "tI5LQxH6bwup",
        "outputId": "dd23f6ab-ea6e-4db8-a6be-6256923763de",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 282
        }
      },
      "source": [
        "# Visualizing data\n",
        "pic = x_test[7]\n",
        "plt.imshow(pic,cmap = plt.cm.binary)\n",
        "plt.show()\n",
        "\n",
        "# Actual Label\n",
        "print(\"Actual Label :\",y_test[7])"
      ],
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD4CAYAAAAq5pAIAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAUKUlEQVR4nO3da2xVVdoH8P9Daa3Y4dYbpcBUlIsVBOkJQiSiErkZA/rBCInyirzMB01mdD5omA/4TTQOxJg3JowS8M3IhAQUNERHgYBGQAuWcil3C20pbancC5TLMx+6mVTsflY9l+6j6/9Lmrbn39WzzqYP+5yz9lpLVBVE9PvXLeoOEFHXYLETeYLFTuQJFjuRJ1jsRJ7o3pV3lpeXpyUlJV15l95rbm4284sXL5q5a7QmIyPDzLOzs0OzvLw8sy39etXV1Th16pR0lCVU7CIyFcA7ADIAvK+qi6yfLykpQXl5eSJ3+Zt048YNM0+0oCwffvihmW/dutXMr127ZuZ9+vQx8+HDh4dmc+fONdu6uI6bSId/80kR5X1bYrFYaBb303gRyQDwfwCmASgFMEtESuP9fUSUWom8Zh8L4LCqHlXVVgD/AjAjOd0iomRLpNiLAdS0+742uO1nRGS+iJSLSHlTU1MCd0dEiUj5u/GqulRVY6oay8/PT/XdEVGIRIq9DsDAdt8PCG4jojSUSLF/D2CIiNwpIlkAngGwLjndIqJki3voTVWvichLAL5A29DbMlXdm7Se/Y5065baV0uVlZWh2Zw5c8y248ePN3NX3zMzM818yZIloZmrb64hR9fwljU8lujQWFRDa4lIaJxdVdcDWJ+kvhBRCvFyWSJPsNiJPMFiJ/IEi53IEyx2Ik+w2Ik80aXz2alj+/fvN/OGhgYzLygoCM22b99utl24cKGZnz171sxvv/12M3///fdDsy1btphtv/nmGzN/9dVXzTwrK8vMfcMzO5EnWOxEnmCxE3mCxU7kCRY7kSdY7ESe4NBbEuzYscPMP/nkEzM/ceKEmT/44INmfubMmdCsb9++Ztthw4aZeWNjo5m7ht5GjRoVmrW2tppte/bsaeZvvfWWmU+cODE0u+eee8y2v8dlrnlmJ/IEi53IEyx2Ik+w2Ik8wWIn8gSLncgTLHYiT4hrN8pkisVi+lvdxdWaTjlp0iSzbVVVlZnn5uaa+YgRI8y8uro6NFu/3l78t6yszMyvX79u5leuXDHz8+fPh2ZTpkwx27qm127bts3Mrb7n5OSYbWfOnGnmQ4YMMfOoxGIxlJeXd7jONc/sRJ5gsRN5gsVO5AkWO5EnWOxEnmCxE3mCxU7kCc5nD+zevdvM160L33r+zTffNNuWlJSYeffu9j/D4MGD4/79p0+fNts+//zzZn706FEzb2lpMfOKiorQ7IEHHkjod/fv39/Mi4uL4+oXACxevNjM33vvPTNPRwkVu4hUAzgP4DqAa6oaS0aniCj5knFmf0RVTyXh9xBRCvE1O5EnEi12BfBvEdkhIvM7+gERmS8i5SJS3tTUlODdEVG8Ei32Cao6BsA0AC+KyEO3/oCqLlXVmKrG8vPzE7w7IopXQsWuqnXB50YAHwMYm4xOEVHyxV3sInKHiPzh5tcAJgPYk6yOEVFyJfJufCGAj0Xk5u/5SFU/T0qvIuBa+/3zz8Mf2rJly8y2a9euNXPXfHbXGufWls+ffvqp2fbcuXNmbs2VB9zbSR86dCg0KywsNNseOHDAzF3XAFhr5peWlpptH3/8cTP/LYq72FX1KIDwHQCIKK1w6I3IEyx2Ik+w2Ik8wWIn8gSLncgTnOIa2Lhxo5nfeeedodno0aPNtr169TJz17LGrqWkjx07FpoVFRWZbR999FEzP3LkiJlfvXrVzK2pw65tkV3Deq6hO1ffLLW1tWZ+6pQ99ysdt3zmmZ3IEyx2Ik+w2Ik8wWIn8gSLnc
gTLHYiT7DYiTzBcfaAa6pnTU1NaBaL2Yvqusa6L1++bOa9e/c2c2u56MzMTLOta+th17bJPXr0MHNrmqq1nTPgftzZ2dlmPnHixNBs9erVZltrai4ANDc3mznH2YkoMix2Ik+w2Ik8wWIn8gSLncgTLHYiT7DYiTzBcfZAImPZ69evN9u6dsK5dOmSmffr18/MreWeXUtBu3JrmWrAXq4ZsJd7njdvntn2xIkTZu7adnnz5s2h2bfffmu2dV0/cOXKFTNPRzyzE3mCxU7kCRY7kSdY7ESeYLETeYLFTuQJFjuRJzjOHigrKzPzOXPmhGauMVvX1sI//fSTmdfX15u5dQ3AhQsXzLZnzpwxc9ec8+vXr5u5Ne/btTa7a055S0uLmVtru7vWIHBdd+G6viAdOc/sIrJMRBpFZE+72/qKyJcicij43Ce13SSiRHXmafxyAFNvue01ABtUdQiADcH3RJTGnMWuqlsA3Po8cwaAFcHXKwDMTHK/iCjJ4n2DrlBVb76QPAkgdNMtEZkvIuUiUt7U1BTn3RFRohJ+N15VFYAa+VJVjalqzDUhhIhSJ95ibxCRIgAIPjcmr0tElArxFvs6ADfHouYAWJuc7hBRqjjH2UVkJYCHAeSJSC2AhQAWAVglIi8AOAbg6VR2MhmsfcIBYOXKlWY+a9as0OzGjRtm22vXrpl5ovu3W+1bW1vNtq48kT3OASA3Nzc0cz3u7t3tP89u3exzVVZWVmg2deqtA0w/d/LkSTPftGmTmT/77LNmHgVnsatq2F/5pCT3hYhSiJfLEnmCxU7kCRY7kSdY7ESeYLETecKbKa4XL140c9dQy/Lly0Mz11LSCxcuNPOhQ4eaeWFh6NXIAOzhsbq6OrPt+PHjzdw1vFVQUGDm1lRQ13bRrt9tTe0FgCeffDI0q6qqMtvu2rXLzMeMGWPm6Tj0xjM7kSdY7ESeYLETeYLFTuQJFjuRJ1jsRJ5gsRN5wptx9tLSUjN/4403zHzy5MmhmWsFntWrV5u5a6rngAEDzNwaC//oo4/MtoMHDzbzI0eOmLlrmeuvv/46NOvTx16UuKamxsxdy1xbpk+fbuaPPPKImbv+ntIRz+xEnmCxE3mCxU7kCRY7kSdY7ESeYLETeYLFTuQJb8bZXdv/Hjx40MwzMjJCs8ZGe48M11LSrm2PL126ZOZW31xj1Xv37jXz/fv3m/mVK1fMvG3DoI65lqk+fvy4mbu2ur733ntDM9caAa6/l8rKSjO/7777zDwKPLMTeYLFTuQJFjuRJ1jsRJ5gsRN5gsVO5AkWO5EnOM4eyM7ONnNrTHjVqlVm20WLFpm5NR4MAL179zZza8voHj16mG1nz55t5j/88IOZu47bjz/+GJpNmzbNbOta0941zv7yyy+HZq7H1dLSYuaZmZlmfubMGTN3/ZumgvPMLiLLRKRRRPa0u+11EakTkYrgw14JgIgi15mn8csBdLRz/RJVHR182FuiEFHknMWuqlsA2M+XiCjtJfIG3UsiUhk8zQ9dTExE5otIuYiUNzU1JXB3RJSIeIv9PQB3ARgNoB7A38N+UFWXqmpMVWOuhRmJKHXiKnZVbVDV66p6A8A/AIxNbreIKNniKnYRKWr37ZMA9oT9LBGlB+c4u4isBPAwgDwRqQWwEMDDIjIagAKoBvCnFPYxKXbs2GHm1j7iANDc3ByaHThwwGzbvbt9mDdu3Gjmw4YNM/MLFy6EZps3bzbb3n///WbumufvGk+2jttDDz1ktt26dauZZ2VlmfmgQYNCM9c4e3FxsZmfOnXKzF3vT0Uxzu4sdlWd1cHNH6SgL0SUQrxclsgTLHYiT7DYiTzBYifyBIudyBPeTHF1TZccN26cme/ZE34pwYQJE8y2rq2Jd+/ebeatra1mbk1xtZZyBtzLWLumyLqGmKzf71pi27XUtGvoLScnJzQ7d+6c2XbkyJFm7touuqCgwMyjwDM7kSdY7ESeYLETeYLFTuQJFjuRJ1
jsRJ5gsRN5wptx9oqKCjO/++67427vmg5ZX19v5nV1dWZeVFRk5taYr2vb49raWjO3loLuTHtrSeaGhgazravvubm5Zj506NDQzHXtwoABA8z82LFjZn769Gkz79Wrl5mnAs/sRJ5gsRN5gsVO5AkWO5EnWOxEnmCxE3mCxU7kCW/G2T/77DMzd837fuedd0KzKVOmmG3LysrMvFs3+//cMWPGmHlNTU1oNnasvX+Ha7toa6484B4vtuasjxo1ymzr2pLZtU6ANdf+lVdeMdu6lgd3XRuxYMECMy8pKTHzVOCZncgTLHYiT7DYiTzBYifyBIudyBMsdiJPsNiJPOHNOPvbb79t5q515a1tke+66y6zrWtbY9f66NnZ2WZubf/br18/s61rLr5rnP3EiRNmbq3PLiJm24EDB5r55cuXzdyasz5v3jyzrWsvANdxcbWPgvPMLiIDRWSTiOwTkb0i8ufg9r4i8qWIHAo+21c4EFGkOvM0/hqAv6pqKYBxAF4UkVIArwHYoKpDAGwIvieiNOUsdlWtV9WdwdfnAVQBKAYwA8CK4MdWAJiZqk4SUeJ+1Rt0IlIC4H4A2wEUqurNxdVOAigMaTNfRMpFpNy1LxgRpU6ni11EcgCsBvAXVf3Zuy7aNoukw5kkqrpUVWOqGsvPz0+os0QUv04Vu4hkoq3Q/6mqa4KbG0SkKMiLADSmpotElAzOoTdpGx/5AECVqi5uF60DMAfAouDz2pT0MEmOHj1q5q7hLWvr4WHDhpltN2zYYOZr1qwx8507d5q5Nfy1fPlys61ryWNr+iwAVFVVmbk1POYatnMt/93c3GzmkydPDs1cLyldy1y7lqJ2DbdG8Sy3M+PsDwJ4FsBuEbl59BegrchXicgLAI4BeDo1XSSiZHAWu6p+AyDs6odJye0OEaUKL5cl8gSLncgTLHYiT7DYiTzBYifyhDdTXC9evGjmrnFXK4/FYmZb11LQQ4YMMXPXdMldu3aFZq7rB5555hkz37t3r5m7Hpt1fcLs2bPNtq7j6lpqeurUqaGZ63FZU5oB99/TpUuXzDwKPLMTeYLFTuQJFjuRJ1jsRJ5gsRN5gsVO5AkWO5EnvBlnP3/+vJm75m0fPnw4NOvRo4fZ9osvvjBzaywasLc9BoCTJ0+GZqWlpWZbF9djGzlypJlb6whYS2ADQEFBgZm75pzX19eHZjk5OWbb48ePm7nr78m1zHUUeGYn8gSLncgTLHYiT7DYiTzBYifyBIudyBMsdiJPeDPO7hoPHjdunJkfPHgwNMvMzDTbWtsWA0BWVpaZnz171sy3bt0amuXl5Zltv/rqKzN3zesePHiwmW/fvj00e+yxx8y2rmsfqqurzXzo0KGh2cSJE822+/btM/OePXuauWsb7yjwzE7kCRY7kSdY7ESeYLETeYLFTuQJFjuRJ1jsRJ7ozP7sAwF8CKAQgAJYqqrviMjrAP4XwM0F1Reo6vpUdTRRgwYNMnPXHurW/OZu3ez/MysrK828f//+Zt7S0mLm1nhz3759zbYurrn0rvXRrdw1J9z1uF3j8Koamt12221mW9dc+eLiYjPv06ePmUehMxfVXAPwV1XdKSJ/ALBDRL4MsiWq+nbqukdEydKZ/dnrAdQHX58XkSoA9n9rRJR2ftVrdhEpAXA/gJvXQL4kIpUiskxEOnzeIiLzRaRcRMpdWywRUep0uthFJAfAagB/UdVzAN4DcBeA0Wg78/+9o3aqulRVY6oay8/PT0KXiSgenSp2EclEW6H/U1XXAICqNqjqdVW9AeAfAMamrptElChnsYuIAPgAQJWqLm53e1G7H3sSwJ7kd4+IkqUz78Y/COBZALtFpCK4bQGAWSIyGm3DcdUA/pSSHiaJa4rru+++a+bfffdd3Pf93HPPmfm2bdvMPCMjw8ytaai5ublm2yNHjpi5a/qua3jMyl1Dlq2trWbuGt4aPnx4aOYaDnXlJSUlZt52jkwvnXk3/hsAHfU8bc
fUieiXeAUdkSdY7ESeYLETeYLFTuQJFjuRJ1jsRJ7wZinp7t3th/rUU0+Zeb9+/eK+7xEjRiSUu8ydOzc0KysrM9tevXrVzF3Tb13jzUVFRaGZaztp1+9+4oknzNziOi6uawAGDhxo5uk4zs4zO5EnWOxEnmCxE3mCxU7kCRY7kSdY7ESeYLETeUKs5XaTfmciTQCOtbspD8CpLuvAr5OufUvXfgHsW7yS2bc/qmqH6791abH/4s5FylU1FlkHDOnat3TtF8C+xaur+san8USeYLETeSLqYl8a8f1b0rVv6dovgH2LV5f0LdLX7ETUdaI+sxNRF2GxE3kikmIXkakickBEDovIa1H0IYyIVIvIbhGpEJHyiPuyTEQaRWRPu9v6isiXInIo+BzJ3sAhfXtdROqCY1chItMj6ttAEdkkIvtEZK+I/Dm4PdJjZ/SrS45bl79mF5EMAAcBPAagFsD3AGap6r4u7UgIEakGEFPVyC/AEJGHAFwA8KGqjghuewvAT6q6KPiPso+qvpomfXsdwIWot/EOdisqar/NOICZAP4HER47o19PowuOWxRn9rEADqvqUVVtBfAvADMi6EfaU9UtAH665eYZAFYEX69A2x9LlwvpW1pQ1XpV3Rl8fR7AzW3GIz12Rr+6RBTFXgygpt33tUiv/d4VwL9FZIeIzI+6Mx0oVNX64OuTAAqj7EwHnNt4d6VbthlPm2MXz/bnieIbdL80QVXHAJgG4MXg6Wpa0rbXYOk0dtqpbby7SgfbjP9XlMcu3u3PExVFsdcBaL9a34DgtrSgqnXB50YAHyP9tqJuuLmDbvC5MeL+/Fc6bePd0TbjSINjF+X251EU+/cAhojInSKSBeAZAOsi6McviMgdwRsnEJE7AExG+m1FvQ7AnODrOQDWRtiXn0mXbbzDthlHxMcu8u3PVbXLPwBMR9s78kcA/C2KPoT0azCAXcHH3qj7BmAl2p7WXUXbexsvAMgFsAHAIQBfAeibRn37fwC7AVSirbCKIurbBLQ9Ra8EUBF8TI/62Bn96pLjxstliTzBN+iIPMFiJ/IEi53IEyx2Ik+w2Ik8wWIn8gSLncgT/wGOocZ0IzvtIgAAAABJRU5ErkJggg==\n",
            "text/plain": [
              "<Figure size 432x288 with 1 Axes>"
            ]
          },
          "metadata": {
            "tags": [],
            "needs_background": "light"
          }
        },
        {
          "output_type": "stream",
          "text": [
            "Actual Label : 6\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "nhlFb4wRbzK4",
        "outputId": "ee85c08c-03f2-4c32-a088-978978c48621",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 51
        }
      },
      "source": [
        "# Normalizing data\n",
        "x_train, x_test = x_train/255.0, x_test/255.0\n",
        "\n",
        "# One hot encoding\n",
        "from tensorflow.keras.utils import to_categorical\n",
        "y_train = to_categorical(y_train) \n",
        "y_test = to_categorical(y_test)\n",
        "\n",
        "print(x_train.shape)\n",
        "print(x_test.shape)"
      ],
      "execution_count": 3,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "(60000, 28, 28)\n",
            "(10000, 28, 28)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nBQx0so9cqrj"
      },
      "source": [
        "# We have 2 problems here:\n",
        "1. Our pretrained model requires an input shape of (32, 32) but our images are (28, 28).\n",
        "\n",
        "2. Our data is grayscale (1 channel) but the pretrained model requires 3-channel (RGB) input."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HG4dwem4dkHS"
      },
      "source": [
        "# 1st Problem Fix (Padding)\n",
        "\n",
        "https://stackoverflow.com/questions/61309432/padding-mnist-images-from-28-28-1-to-32-32-1\n",
        "\n",
        "The pad_width argument to np.pad works like this: ((axis 1 pad before, axis 1 pad after), ...) so if you want to pad 1 pixel on each side you should do ((0,0), (1,1), (1,1)). (Your code is padding all axes 2 on each side.)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "RFqXJQzGcq4B",
        "outputId": "8aa6d638-b021-4af7-8d7b-2b45d3c91d6a",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 103
        }
      },
      "source": [
        "print(x_train.shape)\n",
        "print(x_test.shape,\"\\n\")\n",
        "\n",
        "new_x_train = np.pad(x_train, ((0,0),(2,2),(2,2)), 'constant')\n",
        "new_x_test = np.pad(x_test, ((0,0),(2,2),(2,2)), 'constant')\n",
        "\n",
        "print(new_x_train.shape)\n",
        "print(new_x_test.shape)"
      ],
      "execution_count": 4,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "(60000, 28, 28)\n",
            "(10000, 28, 28) \n",
            "\n",
            "(60000, 32, 32)\n",
            "(10000, 32, 32)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GyFDImNvep8y"
      },
      "source": [
        "# 2nd problem fix\n",
        "\n",
        "Our data is grayscale but the pretrained model expects RGB,\n",
        "so we make the shapes match by repeating the same data across 3 channels.\n",
        "\n",
        "https://stackoverflow.com/questions/51995977/how-can-i-use-a-pre-trained-neural-network-with-grayscale-images"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ZKDIfE0ofXEr",
        "outputId": "eea74946-6769-44a7-a937-cdd1fbc10a2f",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 103
        }
      },
      "source": [
        "print(new_x_train.shape)\n",
        "print(new_x_test.shape,\"\\n\")\n",
        "\n",
        "new_x_train = np.repeat(new_x_train[..., np.newaxis], 3, -1)\n",
        "new_x_test = np.repeat(new_x_test[..., np.newaxis], 3, -1)\n",
        "\n",
        "print(new_x_train.shape)\n",
        "print(new_x_test.shape)"
      ],
      "execution_count": 5,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "(60000, 32, 32)\n",
            "(10000, 32, 32) \n",
            "\n",
            "(60000, 32, 32, 3)\n",
            "(10000, 32, 32, 3)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eHAOQWb1gCSC"
      },
      "source": [
        "# Importing pre-trained model"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "6_fsAgpnfXCF",
        "outputId": "c62f33db-927a-4d24-9925-3844822a369f",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 814
        }
      },
      "source": [
        "from tensorflow.keras import Input\n",
        "input_tensor = Input(shape=(32, 32, 3))\n",
        "\n",
        "from tensorflow.keras.applications import VGG16\n",
        "conv_base = VGG16(weights='imagenet',include_top=False,\n",
        "                  input_tensor = input_tensor)\n",
        "\n",
        "conv_base.summary()"
      ],
      "execution_count": 6,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"vgg16\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_1 (InputLayer)         [(None, 32, 32, 3)]       0         \n",
            "_________________________________________________________________\n",
            "block1_conv1 (Conv2D)        (None, 32, 32, 64)        1792      \n",
            "_________________________________________________________________\n",
            "block1_conv2 (Conv2D)        (None, 32, 32, 64)        36928     \n",
            "_________________________________________________________________\n",
            "block1_pool (MaxPooling2D)   (None, 16, 16, 64)        0         \n",
            "_________________________________________________________________\n",
            "block2_conv1 (Conv2D)        (None, 16, 16, 128)       73856     \n",
            "_________________________________________________________________\n",
            "block2_conv2 (Conv2D)        (None, 16, 16, 128)       147584    \n",
            "_________________________________________________________________\n",
            "block2_pool (MaxPooling2D)   (None, 8, 8, 128)         0         \n",
            "_________________________________________________________________\n",
            "block3_conv1 (Conv2D)        (None, 8, 8, 256)         295168    \n",
            "_________________________________________________________________\n",
            "block3_conv2 (Conv2D)        (None, 8, 8, 256)         590080    \n",
            "_________________________________________________________________\n",
            "block3_conv3 (Conv2D)        (None, 8, 8, 256)         590080    \n",
            "_________________________________________________________________\n",
            "block3_pool (MaxPooling2D)   (None, 4, 4, 256)         0         \n",
            "_________________________________________________________________\n",
            "block4_conv1 (Conv2D)        (None, 4, 4, 512)         1180160   \n",
            "_________________________________________________________________\n",
            "block4_conv2 (Conv2D)        (None, 4, 4, 512)         2359808   \n",
            "_________________________________________________________________\n",
            "block4_conv3 (Conv2D)        (None, 4, 4, 512)         2359808   \n",
            "_________________________________________________________________\n",
            "block4_pool (MaxPooling2D)   (None, 2, 2, 512)         0         \n",
            "_________________________________________________________________\n",
            "block5_conv1 (Conv2D)        (None, 2, 2, 512)         2359808   \n",
            "_________________________________________________________________\n",
            "block5_conv2 (Conv2D)        (None, 2, 2, 512)         2359808   \n",
            "_________________________________________________________________\n",
            "block5_conv3 (Conv2D)        (None, 2, 2, 512)         2359808   \n",
            "_________________________________________________________________\n",
            "block5_pool (MaxPooling2D)   (None, 1, 1, 512)         0         \n",
            "=================================================================\n",
            "Total params: 14,714,688\n",
            "Trainable params: 14,714,688\n",
            "Non-trainable params: 0\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YK0VfjwHhBpx"
      },
      "source": [
        "# Now using the pretrained model as a layer"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "VeU6n4ETf_-W",
        "outputId": "46fc4625-227d-4073-f5af-babd76e414c3",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 433
        }
      },
      "source": [
        "from tensorflow.keras import layers, optimizers, Input, Model\n",
        "\n",
        "input_tensor = Input(shape=(32, 32, 3))\n",
        "x = conv_base(input_tensor)\n",
        "y = layers.Flatten()(x)\n",
        "z = layers.Dense(256, activation='relu')(y)\n",
        "z = layers.Dense(128, activation='relu')(z)\n",
        "z = layers.Dense(64, activation='relu')(z)\n",
        "z = layers.Dense(32, activation='relu')(z)\n",
        "output_tensor = layers.Dense(10, activation='softmax')(z)\n",
        "\n",
        "model = Model(input_tensor, output_tensor)\n",
        "\n",
        "#SGD #RMSprop #Adam #Adadelta #Adagrad ##Adamax ###Nadam #Ftrl\n",
        "opt = optimizers.Adam(lr=1e-3)\n",
        "model.compile(optimizer = opt, \n",
        "              loss = \"categorical_crossentropy\",\n",
        "              metrics = [\"accuracy\"])\n",
        "\n",
        "model.summary()"
      ],
      "execution_count": 7,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"functional_1\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_2 (InputLayer)         [(None, 32, 32, 3)]       0         \n",
            "_________________________________________________________________\n",
            "vgg16 (Functional)           (None, 1, 1, 512)         14714688  \n",
            "_________________________________________________________________\n",
            "flatten (Flatten)            (None, 512)               0         \n",
            "_________________________________________________________________\n",
            "dense (Dense)                (None, 256)               131328    \n",
            "_________________________________________________________________\n",
            "dense_1 (Dense)              (None, 128)               32896     \n",
            "_________________________________________________________________\n",
            "dense_2 (Dense)              (None, 64)                8256      \n",
            "_________________________________________________________________\n",
            "dense_3 (Dense)              (None, 32)                2080      \n",
            "_________________________________________________________________\n",
            "dense_4 (Dense)              (None, 10)                330       \n",
            "=================================================================\n",
            "Total params: 14,889,578\n",
            "Trainable params: 14,889,578\n",
            "Non-trainable params: 0\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pte_KOsGjvQ_"
      },
      "source": [
        "# Freezing the convolutional base\n",
        "In Keras, you freeze a network by setting its trainable attribute to False:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "zs_PvDoVjxjX",
        "outputId": "f5b5c8c5-a4b4-41d0-a0b0-86758f0bbaf8",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 51
        }
      },
      "source": [
        "print('This is the number of trainable weights ''before freezing the conv base:', len(model.trainable_weights))\n",
        "\n",
        "conv_base.trainable = False\n",
        "print('This is the number of trainable weights ''after freezing the conv base:', len(model.trainable_weights))"
      ],
      "execution_count": 8,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "This is the number of trainable weights before freezing the conv base: 36\n",
            "This is the number of trainable weights after freezing the conv base: 10\n"
          ],
          "name": "stdout"
        }
      ]
    },
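    {
      "cell_type": "markdown",
      "metadata": {
        "id": "recompile_note"
      },
      "source": [
        "Note: in Keras, changes to a layer's `trainable` attribute only take effect on the next call to `compile()`, so as a precaution the model should be recompiled after freezing the base. This cell is a sketch added here, not from the book; the optimizer settings simply repeat the ones used above."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "recompile_code"
      },
      "source": [
        "# Recompile so the frozen conv base is actually excluded from training\n",
        "model.compile(optimizer=optimizers.Adam(lr=1e-3),\n",
        "              loss='categorical_crossentropy',\n",
        "              metrics=['accuracy'])"
      ],
      "execution_count": null,
      "outputs": []
    },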
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-vAOnHYOtQNc"
      },
      "source": [
        "# Or: freezing the convolutional base with fine-tuning\n",
        "Unfreeze some layers in the base network.\n",
        "\n",
        "Freeze all layers up to a specific one."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FgOw8fd_tsdL"
      },
      "source": [
        "#set_trainable = False\n",
        "#for layer in conv_base.layers:\n",
        "#    if layer.name == 'block5_conv1':\n",
        "#        set_trainable = True\n",
        "#    if set_trainable:\n",
        "#        layer.trainable = True\n",
        "#    else:\n",
        "#        layer.trainable = False\n",
        "#\n",
        "#print('This is the number of trainable weights ''after Un-freezing some layers of conv base:', len(model.trainable_weights))"
      ],
      "execution_count": 9,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7XMKYQ5_gWCS",
        "outputId": "ce41f90c-b770-4943-976d-958f67160c61",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 401
        }
      },
      "source": [
        "batch_size = 512\n",
        "epochs = 10\n",
        "\n",
        "history = model.fit(new_x_train, y_train,batch_size=batch_size, epochs = epochs, validation_split = 0.2)"
      ],
      "execution_count": 10,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Epoch 1/10\n",
            " 2/94 [..............................] - ETA: 4s - loss: 2.4511 - accuracy: 0.0791WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0362s vs `on_train_batch_end` time: 0.0617s). Check your callbacks.\n",
            "94/94 [==============================] - 11s 120ms/step - loss: 1.8913 - accuracy: 0.2211 - val_loss: 1.4827 - val_accuracy: 0.3508\n",
            "Epoch 2/10\n",
            "94/94 [==============================] - 10s 103ms/step - loss: 0.9975 - accuracy: 0.5780 - val_loss: 0.6064 - val_accuracy: 0.7747\n",
            "Epoch 3/10\n",
            "94/94 [==============================] - 10s 103ms/step - loss: 0.5036 - accuracy: 0.8165 - val_loss: 0.3955 - val_accuracy: 0.8622\n",
            "Epoch 4/10\n",
            "94/94 [==============================] - 10s 103ms/step - loss: 0.3443 - accuracy: 0.8800 - val_loss: 0.3278 - val_accuracy: 0.8863\n",
            "Epoch 5/10\n",
            "94/94 [==============================] - 10s 103ms/step - loss: 0.2811 - accuracy: 0.9018 - val_loss: 0.2690 - val_accuracy: 0.9030\n",
            "Epoch 6/10\n",
            "94/94 [==============================] - 10s 103ms/step - loss: 0.2339 - accuracy: 0.9167 - val_loss: 0.2428 - val_accuracy: 0.9138\n",
            "Epoch 7/10\n",
            "94/94 [==============================] - 10s 103ms/step - loss: 0.2148 - accuracy: 0.9245 - val_loss: 0.2322 - val_accuracy: 0.9197\n",
            "Epoch 8/10\n",
            "94/94 [==============================] - 10s 103ms/step - loss: 0.1788 - accuracy: 0.9375 - val_loss: 0.2435 - val_accuracy: 0.9187\n",
            "Epoch 9/10\n",
            "94/94 [==============================] - 10s 103ms/step - loss: 0.1615 - accuracy: 0.9437 - val_loss: 0.2346 - val_accuracy: 0.9231\n",
            "Epoch 10/10\n",
            "94/94 [==============================] - 10s 103ms/step - loss: 0.1482 - accuracy: 0.9476 - val_loss: 0.2166 - val_accuracy: 0.9270\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3QbLDjWvhV4i"
      },
      "source": [
        "# Done. Now let's save our model and use it as a layer too"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "mZEr91nVgWFE"
      },
      "source": [
        "model.save('mnist_fasion_1.h5')"
      ],
      "execution_count": 11,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "kAdA5mDegWLG",
        "outputId": "3ba54d71-e656-421c-cf24-6dc0df47668b",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 450
        }
      },
      "source": [
        "# loading saved model\n",
        "from tensorflow.keras.models import load_model\n",
        "mnist_fasion = '/content/mnist_fasion_1.h5'\n",
        "mnist_fasion_model = load_model(mnist_fasion)\n",
        "mnist_fasion_model.summary()"
      ],
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.\n",
            "Model: \"functional_1\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_2 (InputLayer)         [(None, 32, 32, 3)]       0         \n",
            "_________________________________________________________________\n",
            "vgg16 (Functional)           (None, 1, 1, 512)         14714688  \n",
            "_________________________________________________________________\n",
            "flatten (Flatten)            (None, 512)               0         \n",
            "_________________________________________________________________\n",
            "dense (Dense)                (None, 256)               131328    \n",
            "_________________________________________________________________\n",
            "dense_1 (Dense)              (None, 128)               32896     \n",
            "_________________________________________________________________\n",
            "dense_2 (Dense)              (None, 64)                8256      \n",
            "_________________________________________________________________\n",
            "dense_3 (Dense)              (None, 32)                2080      \n",
            "_________________________________________________________________\n",
            "dense_4 (Dense)              (None, 10)                330       \n",
            "=================================================================\n",
            "Total params: 14,889,578\n",
            "Trainable params: 174,890\n",
            "Non-trainable params: 14,714,688\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aibJsuuan3Nq"
      },
      "source": [
        "# Remove layers\n",
        "We can remove layers from our saved model if we want to."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "T_0yf_Atm3vi",
        "outputId": "c3075d05-51a4-4b4f-eb01-58be97bbae98",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 398
        }
      },
      "source": [
        "from tensorflow.keras import Model\n",
        "\n",
        "new_model = Model(mnist_fasion_model.inputs, mnist_fasion_model.layers[-2].output) # remove the last layer\n",
        "new_model.summary()\n",
        "# not used at the moment; shown for educational purposes only"
      ],
      "execution_count": 13,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"functional_3\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_2 (InputLayer)         [(None, 32, 32, 3)]       0         \n",
            "_________________________________________________________________\n",
            "vgg16 (Functional)           (None, 1, 1, 512)         14714688  \n",
            "_________________________________________________________________\n",
            "flatten (Flatten)            (None, 512)               0         \n",
            "_________________________________________________________________\n",
            "dense (Dense)                (None, 256)               131328    \n",
            "_________________________________________________________________\n",
            "dense_1 (Dense)              (None, 128)               32896     \n",
            "_________________________________________________________________\n",
            "dense_2 (Dense)              (None, 64)                8256      \n",
            "_________________________________________________________________\n",
            "dense_3 (Dense)              (None, 32)                2080      \n",
            "=================================================================\n",
            "Total params: 14,889,248\n",
            "Trainable params: 174,560\n",
            "Non-trainable params: 14,714,688\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Bbnmxu4IibvR",
        "outputId": "138de1cf-fc2e-486e-d409-c4317add8e22",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 433
        }
      },
      "source": [
        "from tensorflow.keras import layers, optimizers, Input, Model\n",
        "\n",
        "input_tensor = Input(shape=(32, 32, 3))\n",
        "x = mnist_fasion_model(input_tensor)       # this is our model\n",
        "y = layers.Flatten()(x)\n",
        "z = layers.Dense(256, activation='relu')(y)\n",
        "z = layers.Dense(128, activation='relu')(z)\n",
        "z = layers.Dense(64, activation='relu')(z)\n",
        "z = layers.Dense(32, activation='relu')(z)\n",
        "output_tensor = layers.Dense(10, activation='softmax')(z)\n",
        "\n",
        "model = Model(input_tensor, output_tensor)\n",
        "\n",
        "#SGD #RMSprop #Adam #Adadelta #Adagrad ##Adamax ###Nadam #Ftrl\n",
        "opt = optimizers.Adam(learning_rate=1e-3)\n",
        "model.compile(optimizer = opt, \n",
        "              loss = \"categorical_crossentropy\",\n",
        "              metrics = [\"accuracy\"])\n",
        "\n",
        "model.summary()"
      ],
      "execution_count": 14,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"functional_5\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_3 (InputLayer)         [(None, 32, 32, 3)]       0         \n",
            "_________________________________________________________________\n",
            "functional_1 (Functional)    (None, 10)                14889578  \n",
            "_________________________________________________________________\n",
            "flatten_1 (Flatten)          (None, 10)                0         \n",
            "_________________________________________________________________\n",
            "dense_5 (Dense)              (None, 256)               2816      \n",
            "_________________________________________________________________\n",
            "dense_6 (Dense)              (None, 128)               32896     \n",
            "_________________________________________________________________\n",
            "dense_7 (Dense)              (None, 64)                8256      \n",
            "_________________________________________________________________\n",
            "dense_8 (Dense)              (None, 32)                2080      \n",
            "_________________________________________________________________\n",
            "dense_9 (Dense)              (None, 10)                330       \n",
            "=================================================================\n",
            "Total params: 14,935,956\n",
            "Trainable params: 221,268\n",
            "Non-trainable params: 14,714,688\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "-tlBnldribtn",
        "outputId": "7d71fcff-313f-444f-dd76-173c4b68070b",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 363
        }
      },
      "source": [
        "batch_size = 512\n",
        "epochs = 10\n",
        "\n",
        "history = model.fit(new_x_train, y_train,batch_size=batch_size, epochs = epochs, validation_split = 0.2)"
      ],
      "execution_count": 15,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Epoch 1/10\n",
            "94/94 [==============================] - 4s 43ms/step - loss: 0.7388 - accuracy: 0.8816 - val_loss: 0.2951 - val_accuracy: 0.9256\n",
            "Epoch 2/10\n",
            "94/94 [==============================] - 4s 39ms/step - loss: 0.1258 - accuracy: 0.9631 - val_loss: 0.2491 - val_accuracy: 0.9262\n",
            "Epoch 3/10\n",
            "94/94 [==============================] - 4s 39ms/step - loss: 0.1129 - accuracy: 0.9633 - val_loss: 0.2450 - val_accuracy: 0.9272\n",
            "Epoch 4/10\n",
            "94/94 [==============================] - 4s 39ms/step - loss: 0.1096 - accuracy: 0.9640 - val_loss: 0.2488 - val_accuracy: 0.9268\n",
            "Epoch 5/10\n",
            "94/94 [==============================] - 4s 39ms/step - loss: 0.1097 - accuracy: 0.9640 - val_loss: 0.2432 - val_accuracy: 0.9281\n",
            "Epoch 6/10\n",
            "94/94 [==============================] - 4s 39ms/step - loss: 0.1087 - accuracy: 0.9639 - val_loss: 0.2544 - val_accuracy: 0.9257\n",
            "Epoch 7/10\n",
            "94/94 [==============================] - 4s 39ms/step - loss: 0.1086 - accuracy: 0.9636 - val_loss: 0.2372 - val_accuracy: 0.9289\n",
            "Epoch 8/10\n",
            "94/94 [==============================] - 4s 39ms/step - loss: 0.1071 - accuracy: 0.9647 - val_loss: 0.2492 - val_accuracy: 0.9286\n",
            "Epoch 9/10\n",
            "94/94 [==============================] - 4s 39ms/step - loss: 0.1079 - accuracy: 0.9645 - val_loss: 0.2502 - val_accuracy: 0.9269\n",
            "Epoch 10/10\n",
            "94/94 [==============================] - 4s 39ms/step - loss: 0.1061 - accuracy: 0.9645 - val_loss: 0.2460 - val_accuracy: 0.9257\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lTuufC1rY_eB"
      },
      "source": [
        "This approach doesn't improve results much, likely because the new dense layers are stacked on the full model's 10-way softmax output rather than on the convolutional base's features, which is what we fed them previously."
      ]
    },
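    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The point can be seen on a toy classifier (a sketch; layer names and sizes are illustrative): tapping a hidden layer yields a feature vector, while the model's own output is just the softmax scores — stacking new dense layers on those 10 scores gives them very little to learn from:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import numpy as np\n",
        "from tensorflow.keras import layers, Input, Model\n",
        "\n",
        "inp = Input(shape=(8,))\n",
        "h = layers.Dense(16, activation='relu', name='features')(inp)\n",
        "out = layers.Dense(10, activation='softmax', name='probs')(h)\n",
        "clf = Model(inp, out)\n",
        "\n",
        "# extractor shares clf's weights but stops before the classifier head\n",
        "extractor = Model(clf.inputs, clf.get_layer('features').output)\n",
        "print(extractor.predict(np.zeros((1, 8))).shape)   # 16 features, not 10 scores"
      ],
      "execution_count": null,
      "outputs": []
    },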
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0H0UL-KtqO-m"
      },
      "source": [
        "# Our model conv Base\n",
        "Let's build our own model with a convolutional base, save it, and then reuse only the conv base."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "kGTOqyveutEG",
        "outputId": "4b9b98e7-be34-4a83-c749-07c6793c7d7e",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 814
        }
      },
      "source": [
        "from tensorflow.keras import layers, optimizers, Input, Model\n",
        "from tensorflow.keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Dropout, Activation\n",
        "\n",
        "input_tensor = Input(shape=(32, 32, 3))\n",
        "\n",
        "#mlpconv block1\n",
        "x = Conv2D(32, (5, 5), activation='relu',padding='valid')(input_tensor)\n",
        "x = Conv2D(32, (1, 1), activation='relu')(x)\n",
        "x = Conv2D(32, (1, 1), activation='relu')(x)\n",
        "x = MaxPooling2D((2,2))(x)\n",
        "x = Dropout(0.5)(x)\n",
        "\n",
        "#mlpconv block2\n",
        "x = Conv2D(64, (3, 3), activation='relu',padding='valid')(x)\n",
        "x = Conv2D(64, (1, 1), activation='relu')(x)\n",
        "x = Conv2D(64, (1, 1), activation='relu')(x)\n",
        "x = MaxPooling2D((2,2))(x)\n",
        "x = Dropout(0.5)(x)\n",
        "\n",
        "#mlpconv block3\n",
        "x = Conv2D(128, (3, 3), activation='relu',padding='valid')(x)\n",
        "x = Conv2D(32, (1, 1), activation='relu')(x)\n",
        "x = Conv2D(10, (1, 1), activation='relu')(x)\n",
        "x = GlobalAveragePooling2D()(x)\n",
        "\n",
        "y = layers.Flatten()(x)\n",
        "z = layers.Dense(256, activation='relu')(y)\n",
        "z = layers.Dense(64, activation='relu')(z)\n",
        "output_tensor = layers.Dense(10, activation='softmax')(z)\n",
        "\n",
        "model = Model(input_tensor, output_tensor)\n",
        "\n",
        "#SGD #RMSprop #Adam #Adadelta #Adagrad ##Adamax ###Nadam #Ftrl\n",
        "opt = optimizers.Adam(learning_rate=1e-3)\n",
        "model.compile(optimizer = opt, \n",
        "              loss = \"categorical_crossentropy\",\n",
        "              metrics = [\"accuracy\"])\n",
        "\n",
        "model.summary()"
      ],
      "execution_count": 16,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"functional_7\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_4 (InputLayer)         [(None, 32, 32, 3)]       0         \n",
            "_________________________________________________________________\n",
            "conv2d (Conv2D)              (None, 28, 28, 32)        2432      \n",
            "_________________________________________________________________\n",
            "conv2d_1 (Conv2D)            (None, 28, 28, 32)        1056      \n",
            "_________________________________________________________________\n",
            "conv2d_2 (Conv2D)            (None, 28, 28, 32)        1056      \n",
            "_________________________________________________________________\n",
            "max_pooling2d (MaxPooling2D) (None, 14, 14, 32)        0         \n",
            "_________________________________________________________________\n",
            "dropout (Dropout)            (None, 14, 14, 32)        0         \n",
            "_________________________________________________________________\n",
            "conv2d_3 (Conv2D)            (None, 12, 12, 64)        18496     \n",
            "_________________________________________________________________\n",
            "conv2d_4 (Conv2D)            (None, 12, 12, 64)        4160      \n",
            "_________________________________________________________________\n",
            "conv2d_5 (Conv2D)            (None, 12, 12, 64)        4160      \n",
            "_________________________________________________________________\n",
            "max_pooling2d_1 (MaxPooling2 (None, 6, 6, 64)          0         \n",
            "_________________________________________________________________\n",
            "dropout_1 (Dropout)          (None, 6, 6, 64)          0         \n",
            "_________________________________________________________________\n",
            "conv2d_6 (Conv2D)            (None, 4, 4, 128)         73856     \n",
            "_________________________________________________________________\n",
            "conv2d_7 (Conv2D)            (None, 4, 4, 32)          4128      \n",
            "_________________________________________________________________\n",
            "conv2d_8 (Conv2D)            (None, 4, 4, 10)          330       \n",
            "_________________________________________________________________\n",
            "global_average_pooling2d (Gl (None, 10)                0         \n",
            "_________________________________________________________________\n",
            "flatten_2 (Flatten)          (None, 10)                0         \n",
            "_________________________________________________________________\n",
            "dense_10 (Dense)             (None, 256)               2816      \n",
            "_________________________________________________________________\n",
            "dense_11 (Dense)             (None, 64)                16448     \n",
            "_________________________________________________________________\n",
            "dense_12 (Dense)             (None, 10)                650       \n",
            "=================================================================\n",
            "Total params: 129,588\n",
            "Trainable params: 129,588\n",
            "Non-trainable params: 0\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "pKuTjXVEr2ss",
        "outputId": "27f8356d-78fd-43b1-a443-50dd81f860a2",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 710
        }
      },
      "source": [
        "batch_size = 512\n",
        "epochs = 20\n",
        "\n",
        "history = model.fit(new_x_train, y_train,batch_size=batch_size, epochs = epochs, validation_split = 0.2)"
      ],
      "execution_count": 17,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Epoch 1/20\n",
            "94/94 [==============================] - 2s 25ms/step - loss: 1.3854 - accuracy: 0.4724 - val_loss: 0.8555 - val_accuracy: 0.6852\n",
            "Epoch 2/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.8343 - accuracy: 0.6823 - val_loss: 0.7139 - val_accuracy: 0.7262\n",
            "Epoch 3/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.7285 - accuracy: 0.7229 - val_loss: 0.6691 - val_accuracy: 0.7443\n",
            "Epoch 4/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.6518 - accuracy: 0.7506 - val_loss: 0.5716 - val_accuracy: 0.7826\n",
            "Epoch 5/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.5972 - accuracy: 0.7726 - val_loss: 0.5745 - val_accuracy: 0.7719\n",
            "Epoch 6/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.5534 - accuracy: 0.7921 - val_loss: 0.5000 - val_accuracy: 0.8172\n",
            "Epoch 7/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.5313 - accuracy: 0.8024 - val_loss: 0.4795 - val_accuracy: 0.8209\n",
            "Epoch 8/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.5035 - accuracy: 0.8152 - val_loss: 0.4708 - val_accuracy: 0.8288\n",
            "Epoch 9/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.4791 - accuracy: 0.8232 - val_loss: 0.4268 - val_accuracy: 0.8426\n",
            "Epoch 10/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.4608 - accuracy: 0.8305 - val_loss: 0.4283 - val_accuracy: 0.8418\n",
            "Epoch 11/20\n",
            "94/94 [==============================] - 2s 21ms/step - loss: 0.4511 - accuracy: 0.8331 - val_loss: 0.4017 - val_accuracy: 0.8524\n",
            "Epoch 12/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.4328 - accuracy: 0.8389 - val_loss: 0.3989 - val_accuracy: 0.8583\n",
            "Epoch 13/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.4172 - accuracy: 0.8463 - val_loss: 0.3781 - val_accuracy: 0.8641\n",
            "Epoch 14/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.4039 - accuracy: 0.8517 - val_loss: 0.3583 - val_accuracy: 0.8712\n",
            "Epoch 15/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.3976 - accuracy: 0.8545 - val_loss: 0.3538 - val_accuracy: 0.8718\n",
            "Epoch 16/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.3839 - accuracy: 0.8604 - val_loss: 0.3675 - val_accuracy: 0.8626\n",
            "Epoch 17/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.3815 - accuracy: 0.8606 - val_loss: 0.3543 - val_accuracy: 0.8693\n",
            "Epoch 18/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.3726 - accuracy: 0.8624 - val_loss: 0.3327 - val_accuracy: 0.8783\n",
            "Epoch 19/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.3595 - accuracy: 0.8663 - val_loss: 0.3335 - val_accuracy: 0.8768\n",
            "Epoch 20/20\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.3529 - accuracy: 0.8697 - val_loss: 0.3379 - val_accuracy: 0.8746\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XAV-2KWDsFEA"
      },
      "source": [
        "# saving our model\n",
        "model.save('mnist_fasion_2.h5')"
      ],
      "execution_count": 18,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "hHgDz372s1ay",
        "outputId": "03216da4-6da8-46fe-cd39-c70a8f0917e9",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 814
        }
      },
      "source": [
        "# loading saved model\n",
        "from tensorflow.keras.models import load_model\n",
        "mnist_fasion = '/content/mnist_fasion_2.h5'\n",
        "mnist_fasion_model = load_model(mnist_fasion)\n",
        "mnist_fasion_model.summary()"
      ],
      "execution_count": 19,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"functional_7\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_4 (InputLayer)         [(None, 32, 32, 3)]       0         \n",
            "_________________________________________________________________\n",
            "conv2d (Conv2D)              (None, 28, 28, 32)        2432      \n",
            "_________________________________________________________________\n",
            "conv2d_1 (Conv2D)            (None, 28, 28, 32)        1056      \n",
            "_________________________________________________________________\n",
            "conv2d_2 (Conv2D)            (None, 28, 28, 32)        1056      \n",
            "_________________________________________________________________\n",
            "max_pooling2d (MaxPooling2D) (None, 14, 14, 32)        0         \n",
            "_________________________________________________________________\n",
            "dropout (Dropout)            (None, 14, 14, 32)        0         \n",
            "_________________________________________________________________\n",
            "conv2d_3 (Conv2D)            (None, 12, 12, 64)        18496     \n",
            "_________________________________________________________________\n",
            "conv2d_4 (Conv2D)            (None, 12, 12, 64)        4160      \n",
            "_________________________________________________________________\n",
            "conv2d_5 (Conv2D)            (None, 12, 12, 64)        4160      \n",
            "_________________________________________________________________\n",
            "max_pooling2d_1 (MaxPooling2 (None, 6, 6, 64)          0         \n",
            "_________________________________________________________________\n",
            "dropout_1 (Dropout)          (None, 6, 6, 64)          0         \n",
            "_________________________________________________________________\n",
            "conv2d_6 (Conv2D)            (None, 4, 4, 128)         73856     \n",
            "_________________________________________________________________\n",
            "conv2d_7 (Conv2D)            (None, 4, 4, 32)          4128      \n",
            "_________________________________________________________________\n",
            "conv2d_8 (Conv2D)            (None, 4, 4, 10)          330       \n",
            "_________________________________________________________________\n",
            "global_average_pooling2d (Gl (None, 10)                0         \n",
            "_________________________________________________________________\n",
            "flatten_2 (Flatten)          (None, 10)                0         \n",
            "_________________________________________________________________\n",
            "dense_10 (Dense)             (None, 256)               2816      \n",
            "_________________________________________________________________\n",
            "dense_11 (Dense)             (None, 64)                16448     \n",
            "_________________________________________________________________\n",
            "dense_12 (Dense)             (None, 10)                650       \n",
            "=================================================================\n",
            "Total params: 129,588\n",
            "Trainable params: 129,588\n",
            "Non-trainable params: 0\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "gW2mZLQ_s7j-",
        "outputId": "8cf5ecaf-f61b-4607-e17f-a295b4cfed16",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 675
        }
      },
      "source": [
        "# Removing layers\n",
        "\n",
        "new_model = Model(mnist_fasion_model.inputs, mnist_fasion_model.layers[-5].output) # removing layers\n",
        "new_model.summary()\n",
        "# kept only the conv base (everything up to the global average pooling layer)"
      ],
      "execution_count": 20,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"functional_9\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_4 (InputLayer)         [(None, 32, 32, 3)]       0         \n",
            "_________________________________________________________________\n",
            "conv2d (Conv2D)              (None, 28, 28, 32)        2432      \n",
            "_________________________________________________________________\n",
            "conv2d_1 (Conv2D)            (None, 28, 28, 32)        1056      \n",
            "_________________________________________________________________\n",
            "conv2d_2 (Conv2D)            (None, 28, 28, 32)        1056      \n",
            "_________________________________________________________________\n",
            "max_pooling2d (MaxPooling2D) (None, 14, 14, 32)        0         \n",
            "_________________________________________________________________\n",
            "dropout (Dropout)            (None, 14, 14, 32)        0         \n",
            "_________________________________________________________________\n",
            "conv2d_3 (Conv2D)            (None, 12, 12, 64)        18496     \n",
            "_________________________________________________________________\n",
            "conv2d_4 (Conv2D)            (None, 12, 12, 64)        4160      \n",
            "_________________________________________________________________\n",
            "conv2d_5 (Conv2D)            (None, 12, 12, 64)        4160      \n",
            "_________________________________________________________________\n",
            "max_pooling2d_1 (MaxPooling2 (None, 6, 6, 64)          0         \n",
            "_________________________________________________________________\n",
            "dropout_1 (Dropout)          (None, 6, 6, 64)          0         \n",
            "_________________________________________________________________\n",
            "conv2d_6 (Conv2D)            (None, 4, 4, 128)         73856     \n",
            "_________________________________________________________________\n",
            "conv2d_7 (Conv2D)            (None, 4, 4, 32)          4128      \n",
            "_________________________________________________________________\n",
            "conv2d_8 (Conv2D)            (None, 4, 4, 10)          330       \n",
            "_________________________________________________________________\n",
            "global_average_pooling2d (Gl (None, 10)                0         \n",
            "=================================================================\n",
            "Total params: 109,674\n",
            "Trainable params: 109,674\n",
            "Non-trainable params: 0\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JnOZIRMktf9y"
      },
      "source": [
        "Now let's use our conv base"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "cw7Kh0P4tP62",
        "outputId": "c44560f3-b577-4e0e-ef96-7d3da2839878",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 363
        }
      },
      "source": [
        "from tensorflow.keras import layers, optimizers, Input, Model\n",
        "\n",
        "input_tensor = Input(shape=(32, 32, 3))\n",
        "x = new_model(input_tensor)       # this is our old model\n",
        "y = layers.Flatten()(x)\n",
        "z = layers.Dense(256, activation='relu')(y)\n",
        "z = layers.Dense(64, activation='relu')(z)\n",
        "output_tensor = layers.Dense(10, activation='softmax')(z)\n",
        "\n",
        "model = Model(input_tensor, output_tensor)\n",
        "\n",
        "#SGD #RMSprop #Adam #Adadelta #Adagrad ##Adamax ###Nadam #Ftrl\n",
        "opt = optimizers.Adam(learning_rate=1e-3)\n",
        "model.compile(optimizer = opt, \n",
        "              loss = \"categorical_crossentropy\",\n",
        "              metrics = [\"accuracy\"])\n",
        "\n",
        "model.summary()"
      ],
      "execution_count": 21,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"functional_11\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_5 (InputLayer)         [(None, 32, 32, 3)]       0         \n",
            "_________________________________________________________________\n",
            "functional_9 (Functional)    (None, 10)                109674    \n",
            "_________________________________________________________________\n",
            "flatten_3 (Flatten)          (None, 10)                0         \n",
            "_________________________________________________________________\n",
            "dense_13 (Dense)             (None, 256)               2816      \n",
            "_________________________________________________________________\n",
            "dense_14 (Dense)             (None, 64)                16448     \n",
            "_________________________________________________________________\n",
            "dense_15 (Dense)             (None, 10)                650       \n",
            "=================================================================\n",
            "Total params: 129,588\n",
            "Trainable params: 129,588\n",
            "Non-trainable params: 0\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ustpReOUujsa"
      },
      "source": [
        "# Freezing the convolutional base with fine-tuning\n",
        "Unfreeze some of the top layers in the base network while keeping the earlier layers frozen.\n",
        "\n",
        "We do this by freezing all layers up to a specific one and marking every layer after it as trainable."
      ]
    },
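    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FrzSketchMd01"
      },
      "source": [
        "As a self-contained sketch of this pattern (a toy model with illustrative layer names, not this notebook's `new_model`), freezing everything before a named layer looks like this:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FrzSketchCd01"
      },
      "source": [
        "# Toy example: freeze all layers before 'block_b', unfreeze from there onward.\n",
        "# The model and layer names here are illustrative, not from this notebook.\n",
        "from tensorflow.keras import layers, Sequential, Input\n",
        "\n",
        "toy = Sequential([\n",
        "    Input(shape=(8,)),\n",
        "    layers.Dense(4, name='block_a'),\n",
        "    layers.Dense(4, name='block_b'),\n",
        "    layers.Dense(2, name='head'),\n",
        "])\n",
        "\n",
        "set_trainable = False\n",
        "for layer in toy.layers:\n",
        "    if layer.name == 'block_b':   # unfreeze from this layer onward\n",
        "        set_trainable = True\n",
        "    layer.trainable = set_trainable\n",
        "\n",
        "print([layer.trainable for layer in toy.layers])   # [False, True, True]\n",
        "print(len(toy.trainable_weights))                  # 4 (kernel + bias for the two unfrozen layers)\n"
      ],
      "execution_count": null,
      "outputs": []
    },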
    {
      "cell_type": "code",
      "metadata": {
        "id": "zGKjYlb12aNd",
        "outputId": "b337a7df-4fcd-4719-d0ef-5bd27f6e1ddb",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 277
        }
      },
      "source": [
        "# list the names of all layers in the pretrained model\n",
        "for layer in new_model.layers:\n",
        "    print(layer.name)"
      ],
      "execution_count": 22,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "input_4\n",
            "conv2d\n",
            "conv2d_1\n",
            "conv2d_2\n",
            "max_pooling2d\n",
            "dropout\n",
            "conv2d_3\n",
            "conv2d_4\n",
            "conv2d_5\n",
            "max_pooling2d_1\n",
            "dropout_1\n",
            "conv2d_6\n",
            "conv2d_7\n",
            "conv2d_8\n",
            "global_average_pooling2d\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "VktUjtVFalts",
        "outputId": "acc0b9eb-df43-489d-bded-11232d778545",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 51
        }
      },
      "source": [
        "print('This is the number of trainable weights before freezing the conv base:', len(new_model.trainable_weights))\n",
        "\n",
        "set_trainable = False\n",
        "for layer in new_model.layers:\n",
        "    if layer.name == 'conv2d_7':   # unfreeze from this layer onward\n",
        "        set_trainable = True\n",
        "    layer.trainable = set_trainable\n",
        "\n",
        "print('This is the number of trainable weights after Un-freezing some layers of conv base:', len(new_model.trainable_weights))"
      ],
      "execution_count": 23,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "This is the number of trainable weights before freezing the conv base: 18\n",
            "This is the number of trainable weights after Un-freezing some layers of conv base: 4\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_t3SiPc-2aV7",
        "outputId": "9672d3b9-7cd3-443a-ddcd-afa6ee15920c",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 363
        }
      },
      "source": [
        "batch_size = 512\n",
        "epochs = 10\n",
        "\n",
        "history = model.fit(new_x_train, y_train,batch_size=batch_size, epochs = epochs, validation_split = 0.2)"
      ],
      "execution_count": 24,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Epoch 1/10\n",
            "94/94 [==============================] - 2s 23ms/step - loss: 0.7384 - accuracy: 0.7534 - val_loss: 0.3625 - val_accuracy: 0.8697\n",
            "Epoch 2/10\n",
            "94/94 [==============================] - 2s 22ms/step - loss: 0.3888 - accuracy: 0.8592 - val_loss: 0.3480 - val_accuracy: 0.8767\n",
            "Epoch 3/10\n",
            "94/94 [==============================] - 2s 21ms/step - loss: 0.3626 - accuracy: 0.8681 - val_loss: 0.3356 - val_accuracy: 0.8798\n",
            "Epoch 4/10\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.3566 - accuracy: 0.8687 - val_loss: 0.3165 - val_accuracy: 0.8858\n",
            "Epoch 5/10\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.3463 - accuracy: 0.8722 - val_loss: 0.3051 - val_accuracy: 0.8912\n",
            "Epoch 6/10\n",
            "94/94 [==============================] - 2s 21ms/step - loss: 0.3354 - accuracy: 0.8782 - val_loss: 0.3080 - val_accuracy: 0.8870\n",
            "Epoch 7/10\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.3300 - accuracy: 0.8802 - val_loss: 0.2992 - val_accuracy: 0.8905\n",
            "Epoch 8/10\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.3237 - accuracy: 0.8806 - val_loss: 0.2892 - val_accuracy: 0.8958\n",
            "Epoch 9/10\n",
            "94/94 [==============================] - 2s 21ms/step - loss: 0.3193 - accuracy: 0.8834 - val_loss: 0.2832 - val_accuracy: 0.8986\n",
            "Epoch 10/10\n",
            "94/94 [==============================] - 2s 21ms/step - loss: 0.3172 - accuracy: 0.8843 - val_loss: 0.2996 - val_accuracy: 0.8901\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "VXgt2BBsTeMr"
      },
      "source": [
        "# saving our model\n",
        "model.save('mnist_fasion_3.h5')"
      ],
      "execution_count": 25,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AajHxjxfPdnn"
      },
      "source": [
        "# Updating weights\n",
        "Let's try a different approach: instead of wrapping the pretrained model as a layer, we rebuild the same architecture from scratch and copy the pretrained weights into it with `set_weights`.\n"
      ]
    },
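    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WtsSketchMd01"
      },
      "source": [
        "The idea in miniature: `get_weights()` returns every kernel and bias as a list of NumPy arrays, and `set_weights()` loads such a list into any model with an identical architecture. A minimal sketch with toy models (illustrative, not this notebook's models):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "WtsSketchCd01"
      },
      "source": [
        "# Toy example of copying weights between two identically shaped models.\n",
        "import numpy as np\n",
        "from tensorflow.keras import layers, Sequential, Input\n",
        "\n",
        "def make_toy():\n",
        "    return Sequential([Input(shape=(8,)), layers.Dense(4), layers.Dense(2)])\n",
        "\n",
        "source_model, target_model = make_toy(), make_toy()\n",
        "target_model.set_weights(source_model.get_weights())   # copies kernels and biases in order\n",
        "\n",
        "# the two models now compute identical outputs\n",
        "x = np.random.rand(3, 8).astype('float32')\n",
        "print(np.allclose(source_model(x).numpy(), target_model(x).numpy()))   # True\n"
      ],
      "execution_count": null,
      "outputs": []
    },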
    {
      "cell_type": "code",
      "metadata": {
        "id": "6iI5zmedRFvg",
        "outputId": "f2c3c1bd-926c-43fb-b282-0481d37bd0ec",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 381
        }
      },
      "source": [
        "# loading saved model\n",
        "from tensorflow.keras.models import load_model\n",
        "mnist_fasion = '/content/mnist_fasion_3.h5'\n",
        "mnist_fasion_model = load_model(mnist_fasion)\n",
        "mnist_fasion_model.summary()"
      ],
      "execution_count": 26,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.\n",
            "Model: \"functional_11\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_5 (InputLayer)         [(None, 32, 32, 3)]       0         \n",
            "_________________________________________________________________\n",
            "functional_9 (Functional)    (None, 10)                109674    \n",
            "_________________________________________________________________\n",
            "flatten_3 (Flatten)          (None, 10)                0         \n",
            "_________________________________________________________________\n",
            "dense_13 (Dense)             (None, 256)               2816      \n",
            "_________________________________________________________________\n",
            "dense_14 (Dense)             (None, 64)                16448     \n",
            "_________________________________________________________________\n",
            "dense_15 (Dense)             (None, 10)                650       \n",
            "=================================================================\n",
            "Total params: 129,588\n",
            "Trainable params: 24,372\n",
            "Non-trainable params: 105,216\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FJ5CcY-a3ohL",
        "outputId": "26e9a114-5334-4464-faa5-c2cac676b7c3",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 814
        }
      },
      "source": [
        "from tensorflow.keras import layers, optimizers, Input, Model\n",
        "from tensorflow.keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Dropout, Activation\n",
        "\n",
        "input_tensor = Input(shape=(32, 32, 3))\n",
        "\n",
        "#mlpconv block1\n",
        "x = Conv2D(32, (5, 5), activation='relu',padding='valid')(input_tensor)\n",
        "x = Conv2D(32, (1, 1), activation='relu')(x)\n",
        "x = Conv2D(32, (1, 1), activation='relu')(x)\n",
        "x = MaxPooling2D((2,2))(x)\n",
        "x = Dropout(0.5)(x)\n",
        "\n",
        "#mlpconv block2\n",
        "x = Conv2D(64, (3, 3), activation='relu',padding='valid')(x)\n",
        "x = Conv2D(64, (1, 1), activation='relu')(x)\n",
        "x = Conv2D(64, (1, 1), activation='relu')(x)\n",
        "x = MaxPooling2D((2,2))(x)\n",
        "x = Dropout(0.5)(x)\n",
        "\n",
        "#mlpconv block3\n",
        "x = Conv2D(128, (3, 3), activation='relu',padding='valid')(x)\n",
        "x = Conv2D(32, (1, 1), activation='relu')(x)\n",
        "x = Conv2D(10, (1, 1), activation='relu')(x)\n",
        "x = GlobalAveragePooling2D()(x)\n",
        "\n",
        "y = layers.Flatten()(x)\n",
        "z = layers.Dense(256, activation='relu')(y)\n",
        "z = layers.Dense(64, activation='relu')(z)\n",
        "output_tensor = layers.Dense(10, activation='softmax')(z)\n",
        "\n",
        "model = Model(input_tensor, output_tensor)\n",
        "\n",
        "# available optimizers: SGD, RMSprop, Adam, Adadelta, Adagrad, Adamax, Nadam, Ftrl\n",
        "opt = optimizers.Adam(learning_rate=1e-3)   # learning_rate replaces the deprecated lr argument\n",
        "model.compile(optimizer = opt, \n",
        "              loss = \"categorical_crossentropy\",\n",
        "              metrics = [\"accuracy\"])\n",
        "\n",
        "model.set_weights(mnist_fasion_model.get_weights())   # using pretrained model weights\n",
        "\n",
        "model.summary()"
      ],
      "execution_count": 27,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Model: \"functional_13\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_6 (InputLayer)         [(None, 32, 32, 3)]       0         \n",
            "_________________________________________________________________\n",
            "conv2d_9 (Conv2D)            (None, 28, 28, 32)        2432      \n",
            "_________________________________________________________________\n",
            "conv2d_10 (Conv2D)           (None, 28, 28, 32)        1056      \n",
            "_________________________________________________________________\n",
            "conv2d_11 (Conv2D)           (None, 28, 28, 32)        1056      \n",
            "_________________________________________________________________\n",
            "max_pooling2d_2 (MaxPooling2 (None, 14, 14, 32)        0         \n",
            "_________________________________________________________________\n",
            "dropout_2 (Dropout)          (None, 14, 14, 32)        0         \n",
            "_________________________________________________________________\n",
            "conv2d_12 (Conv2D)           (None, 12, 12, 64)        18496     \n",
            "_________________________________________________________________\n",
            "conv2d_13 (Conv2D)           (None, 12, 12, 64)        4160      \n",
            "_________________________________________________________________\n",
            "conv2d_14 (Conv2D)           (None, 12, 12, 64)        4160      \n",
            "_________________________________________________________________\n",
            "max_pooling2d_3 (MaxPooling2 (None, 6, 6, 64)          0         \n",
            "_________________________________________________________________\n",
            "dropout_3 (Dropout)          (None, 6, 6, 64)          0         \n",
            "_________________________________________________________________\n",
            "conv2d_15 (Conv2D)           (None, 4, 4, 128)         73856     \n",
            "_________________________________________________________________\n",
            "conv2d_16 (Conv2D)           (None, 4, 4, 32)          4128      \n",
            "_________________________________________________________________\n",
            "conv2d_17 (Conv2D)           (None, 4, 4, 10)          330       \n",
            "_________________________________________________________________\n",
            "global_average_pooling2d_1 ( (None, 10)                0         \n",
            "_________________________________________________________________\n",
            "flatten_4 (Flatten)          (None, 10)                0         \n",
            "_________________________________________________________________\n",
            "dense_16 (Dense)             (None, 256)               2816      \n",
            "_________________________________________________________________\n",
            "dense_17 (Dense)             (None, 64)                16448     \n",
            "_________________________________________________________________\n",
            "dense_18 (Dense)             (None, 10)                650       \n",
            "=================================================================\n",
            "Total params: 129,588\n",
            "Trainable params: 129,588\n",
            "Non-trainable params: 0\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
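    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WtsCheckMd01"
      },
      "source": [
        "A quick sanity check (assuming the cell above has run): every weight array in the new model should now equal the corresponding array in the loaded pretrained model."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "WtsCheckCd01"
      },
      "source": [
        "import numpy as np\n",
        "\n",
        "# compare each copied array against the pretrained model's, position by position\n",
        "for w_new, w_old in zip(model.get_weights(), mnist_fasion_model.get_weights()):\n",
        "    assert np.allclose(w_new, w_old)\n",
        "print('all weights match')\n"
      ],
      "execution_count": null,
      "outputs": []
    },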
    {
      "cell_type": "code",
      "metadata": {
        "id": "PgkIfCC6VkZr",
        "outputId": "be8046e5-3bfe-4d16-894e-9ba2d63d17d9",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 363
        }
      },
      "source": [
        "batch_size = 512\n",
        "epochs = 10\n",
        "\n",
        "history = model.fit(new_x_train, y_train,batch_size=batch_size, epochs = epochs, validation_split = 0.2)"
      ],
      "execution_count": 28,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Epoch 1/10\n",
            "94/94 [==============================] - 2s 23ms/step - loss: 0.3252 - accuracy: 0.8806 - val_loss: 0.2788 - val_accuracy: 0.8987\n",
            "Epoch 2/10\n",
            "94/94 [==============================] - 2s 21ms/step - loss: 0.3050 - accuracy: 0.8884 - val_loss: 0.2971 - val_accuracy: 0.8941\n",
            "Epoch 3/10\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.3068 - accuracy: 0.8877 - val_loss: 0.2751 - val_accuracy: 0.8979\n",
            "Epoch 4/10\n",
            "94/94 [==============================] - 2s 21ms/step - loss: 0.3029 - accuracy: 0.8883 - val_loss: 0.2761 - val_accuracy: 0.8996\n",
            "Epoch 5/10\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.2955 - accuracy: 0.8921 - val_loss: 0.2804 - val_accuracy: 0.8962\n",
            "Epoch 6/10\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.2982 - accuracy: 0.8903 - val_loss: 0.2656 - val_accuracy: 0.9032\n",
            "Epoch 7/10\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.2952 - accuracy: 0.8921 - val_loss: 0.2811 - val_accuracy: 0.8962\n",
            "Epoch 8/10\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.2832 - accuracy: 0.8963 - val_loss: 0.3101 - val_accuracy: 0.8827\n",
            "Epoch 9/10\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.2838 - accuracy: 0.8951 - val_loss: 0.2640 - val_accuracy: 0.9028\n",
            "Epoch 10/10\n",
            "94/94 [==============================] - 2s 20ms/step - loss: 0.2797 - accuracy: 0.8956 - val_loss: 0.2659 - val_accuracy: 0.9028\n"
          ],
          "name": "stdout"
        }
      ]
    }
  ]
}