{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6fPJf4rM_l6w"
      },
      "source": [
        "##### Copyright 2020 Google LLC"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "r54Ih2I-_q6p"
      },
      "outputs": [],
      "source": [
        "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XrBMn0OIlUVN"
      },
      "source": [
        "# Adversarial Learning: Building Robust Image Classifiers\n",
        "\n",
        "\u003cbr\u003e\n",
        "\n",
        "\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView source on GitHub\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "\u003c/table\u003e"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "85lXAvQDz7K-"
      },
      "source": [
        "\n",
        "## Overview\n",
        "\n",
        "In this tutorial, we will explore the use of adversarial learning\n",
        "([Goodfellow et al., 2014](https://arxiv.org/abs/1412.6572)) for image\n",
        "classification using Neural Structured Learning (NSL).\n",
        "\n",
        "Adversarial attacks intentionally introduce some noise in the form of perturbations to input images to fool the deep learning model. For example, in a classification system, by adding an imperceptibly small vector whose elements are equal to the sign of the elements of the gradient of the loss function with respect to the input, we can change the model's classification of the image. \n",
        "\n",
        "\n",
        "## CNN Classifier\n",
        "\n",
        "The most popular deep learning models leveraged for computer vision problems are convolutional neural networks (CNNs)!\n",
        "\n",
        "![](https://i.imgur.com/32WEbHg.png)\n",
        "\u003cfont size=2\u003eCreated by: Dipanjan Sarkar\u003c/font\u003e\n",
        "\n",
        "In this notebook, we will build, train, and evaluate a multi-class CNN classifier, and also perform adversarial learning.\n",
        "\n",
        "\n",
        "## Transfer Learning\n",
        "\n",
        "The idea is to leverage a pre-trained model instead of building a CNN from scratch for our image classification problem.\n",
        "\n",
        "![](https://i.imgur.com/WcUabml.png)\n",
        "\u003cfont size=2\u003eSource: [CNN Essentials](https://github.com/dipanjanS/convolutional_neural_networks_essentials/tree/master/presentation)\u003c/font\u003e\n",
        "\n",
        "## Tutorial Outline\n",
        "\n",
        "In this tutorial, we illustrate the following procedure for applying adversarial learning to a CNN model to obtain robust models with the Neural Structured Learning framework:\n",
        "\n",
        "1. Create a neural network as a base model. In this tutorial, the base model is created with the `tf.keras` sequential API by wrapping a pre-trained `VGG19` model, which we fine-tune using transfer learning.\n",
        "2. Train and evaluate the base model's performance on organic FashionMNIST data.\n",
        "3. Generate perturbations using the fast gradient sign method (FGSM) and examine the base model's weaknesses.\n",
        "4. Wrap the base model with the **`nsl.keras.AdversarialRegularization`** wrapper class, which is provided by the NSL framework, to create a new `tf.keras.Model` instance. This new model includes the __adversarial loss__ as a regularization term in its training objective.\n",
        "5. Convert the examples in the training data to a `tf.data.Dataset` for training.\n",
        "6. Train and evaluate the adversarial-regularized model.\n",
        "7. Generate a perturbed dataset from the test data using FGSM and evaluate the base model's performance on it.\n",
        "8. Evaluate the adversarial model's performance on both the organic and the perturbed test datasets."
      ]
    },
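    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Steps 4-6 above can be sketched as follows. This is a minimal, illustrative outline only: the tiny stand-in base model, random data, and hyperparameter values here are placeholders, not the ones used later in this notebook.\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "import tensorflow as tf\n",
        "import neural_structured_learning as nsl\n",
        "\n",
        "# tiny stand-in base model and data, for illustration only\n",
        "base_model = tf.keras.Sequential([\n",
        "    tf.keras.layers.Input(shape=(8,)),\n",
        "    tf.keras.layers.Dense(3)])\n",
        "x_train = np.random.rand(64, 8).astype('float32')\n",
        "y_train = np.random.randint(0, 3, size=(64,))\n",
        "\n",
        "# wrap the base model with adversarial regularization\n",
        "adv_config = nsl.configs.make_adv_reg_config(\n",
        "    multiplier=0.2, adv_step_size=0.05, adv_grad_norm='l2')\n",
        "adv_model = nsl.keras.AdversarialRegularization(\n",
        "    base_model, label_keys=['label'], adv_config=adv_config)\n",
        "adv_model.compile(\n",
        "    optimizer='adam',\n",
        "    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
        "    metrics=['accuracy'])\n",
        "\n",
        "# NSL expects features and labels packed together in a dict\n",
        "train_ds = tf.data.Dataset.from_tensor_slices(\n",
        "    {'feature': x_train, 'label': y_train}).batch(16)\n",
        "adv_model.fit(train_ds, epochs=1, verbose=0)\n",
        "```"
      ]
    },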
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6R-3UHdk23HV"
      },
      "source": [
        "# Load Dependencies\n",
        "\n",
        "This notebook uses the __`tf.keras`__ API, so we recommend running it on TensorFlow 2.x."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "49d7nXjJ4T_j"
      },
      "outputs": [],
      "source": [
        "# To prevent unnecessary warnings (e.g. FutureWarnings in TensorFlow)\n",
        "import warnings\n",
        "warnings.simplefilter(action='ignore', category=FutureWarning)\n",
        "\n",
        "# TensorFlow and tf.keras\n",
        "import tensorflow as tf\n",
        "\n",
        "# Helper libraries\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "import os\n",
        "import subprocess\n",
        "import json\n",
        "import requests\n",
        "from tqdm import tqdm\n",
        "\n",
        "print(tf.__version__)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BNWq4-tI3MyT"
      },
      "source": [
        "# Main Objective: Building an Apparel Classifier \u0026 Performing Adversarial Learning\n",
        "\n",
        "- We will keep the key objective simple: we will build an apparel classifier by training models on the well-known [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image associated with a label from 10 classes. The task is to classify these images into one of the 10 apparel categories on which we train our models.\n",
        "\n",
        "- The second main objective is to perturb these apparel images with intentional noise to try to fool our classification model.\n",
        "\n",
        "- The third main objective is to build an adversarially regularized model on top of our base model, training it on perturbed images so that it holds up better under adversarial attacks.\n",
        "\n",
        "Here's an example of how the data looks (each class takes three rows):\n",
        "\n",
        "\u003ctable\u003e\n",
        "  \u003ctr\u003e\u003ctd\u003e\n",
        "    \u003cimg src=\"https://raw.githubusercontent.com/zalandoresearch/fashion-mnist/master/doc/img/fashion-mnist-sprite.png\"\n",
        "         alt=\"Fashion MNIST sprite\"  width=\"600\"\u003e\n",
        "  \u003c/td\u003e\u003c/tr\u003e\n",
        "  \u003ctr\u003e\u003ctd align=\"center\"\u003e\n",
        "    \u003ca href=\"https://github.com/zalandoresearch/fashion-mnist\"\u003eFashion-MNIST samples\u003c/a\u003e (by Zalando, MIT License).\u003cbr/\u003e\u0026nbsp;\n",
        "  \u003c/td\u003e\u003c/tr\u003e\n",
        "\u003c/table\u003e\n",
        "\n",
        "Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the \"Hello, World\" of machine learning programs for computer vision. You can access the Fashion MNIST dataset directly from TensorFlow.\n",
        "\n",
        "__Note:__ Although these are really images, they are loaded as NumPy arrays and not binary image objects."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ojdYNyra3ynO"
      },
      "source": [
        "We will build the following two deep learning CNN (Convolutional Neural Network) classifiers in this notebook.\n",
        "- Fine-tuned pre-trained VGG-19 CNN (Base Model)\n",
        "- Adversarial Regularization Trained VGG-19 CNN Model (Adversarial Model)\n",
        "\n",
        "The idea is to use transfer learning to fine-tune a pre-trained model so that it classifies images from your dataset, and then to build a robust classifier that can handle adversarial attacks using adversarial learning."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rcGTAx-J4JiC"
      },
      "source": [
        "# Load Dataset"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qKNH32fRBE57"
      },
      "outputs": [],
      "source": [
        "fashion_mnist = tf.keras.datasets.fashion_mnist\n",
        "(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()\n",
        "\n",
        "class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',\n",
        "               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']\n",
        "\n",
        "print('\\nTrain_images.shape: {}, of {}'.format(train_images.shape, train_images.dtype))\n",
        "print('Test_images.shape: {}, of {}'.format(test_images.shape, test_images.dtype))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Rvy45oW04Kjx"
      },
      "source": [
        "# Fine-tuning a pre-trained VGG-19 CNN Model - Base Model\n",
        "\n",
        "Here, we take a VGG-19 model that was pre-trained on the ImageNet dataset and fine-tune it on the Fashion-MNIST dataset."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NhYXykze4R8x"
      },
      "source": [
        "## Model Architecture Details\n",
        "\n",
        "![](https://i.imgur.com/1VZ7MlO.png)\n",
        "\u003cfont size=2\u003eSource: [CNN Essentials](https://github.com/dipanjanS/convolutional_neural_networks_essentials/tree/master/presentation)\u003c/font\u003e"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "r-hvmAoe4Wz2"
      },
      "source": [
        "## Reshaping Image Data for Modeling\n",
        "\n",
        "We need to reshape our data before training our model. Here we convert the grayscale images to 3-channel images (image pixel tensors), since the VGG model was originally trained on RGB images."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "R4jLZ-HiBIw3"
      },
      "outputs": [],
      "source": [
        "train_images_3ch = np.stack([train_images]*3, axis=-1)\n",
        "test_images_3ch = np.stack([test_images]*3, axis=-1)\n",
        "\n",
        "print('\\nTrain_images.shape: {}, of {}'.format(train_images_3ch.shape, train_images_3ch.dtype))\n",
        "print('Test_images.shape: {}, of {}'.format(test_images_3ch.shape, test_images_3ch.dtype))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bCHF22OG4X7j"
      },
      "source": [
        "## Resizing Image Data for Modeling\n",
        "\n",
        "The minimum image size expected by the VGG model is 32x32, so we need to resize our images."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "mQ-192wqBYnE"
      },
      "outputs": [],
      "source": [
        "def resize_image_array(img, img_size_dims):\n",
        "  img = tf.image.resize(\n",
        "      img, img_size_dims, method=tf.image.ResizeMethod.BICUBIC)\n",
        "  img = np.array(img, dtype=np.float32)\n",
        "  return img"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GP0f5DSCBcx_"
      },
      "outputs": [],
      "source": [
        "%%time\n",
        "\n",
        "IMG_DIMS = (32, 32)\n",
        "\n",
        "train_images_3ch = np.array([resize_image_array(img, img_size_dims=IMG_DIMS) for img in train_images_3ch])\n",
        "test_images_3ch = np.array([resize_image_array(img, img_size_dims=IMG_DIMS) for img in test_images_3ch])\n",
        "\n",
        "print('\\nTrain_images.shape: {}, of {}'.format(train_images_3ch.shape, train_images_3ch.dtype))\n",
        "print('Test_images.shape: {}, of {}'.format(test_images_3ch.shape, test_images_3ch.dtype))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pPkhVHcE4fLP"
      },
      "source": [
        "## View Sample Data"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Q9OQCxNSBkuU"
      },
      "outputs": [],
      "source": [
        "fig, ax = plt.subplots(2, 5, figsize=(12, 6))\n",
        "c = 0\n",
        "for i in range(10):\n",
        "  idx = i // 5\n",
        "  idy = i % 5 \n",
        "  ax[idx, idy].imshow(train_images_3ch[i]/255.)\n",
        "  ax[idx, idy].set_title(class_names[train_labels[i]])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vG8LrkTo4g0u"
      },
      "source": [
        "## Build CNN Model Architecture\n",
        "\n",
        "We will now build our CNN model architecture customizing the VGG-19 model."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CutujgiR4kDs"
      },
      "source": [
        "### Build Cut-VGG19 Model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ubUn6yJdBtNt"
      },
      "outputs": [],
      "source": [
        "# define input shape\n",
        "INPUT_SHAPE = (32, 32, 3)\n",
        "\n",
        "# get the VGG19 model\n",
        "vgg_layers = tf.keras.applications.vgg19.VGG19(weights='imagenet', include_top=False, \n",
        "                                               input_shape=INPUT_SHAPE)\n",
        "\n",
        "vgg_layers.summary()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kxCorQL14mo8"
      },
      "source": [
        "### Set layers to trainable to enable fine-tuning"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "W1Aw1KUkCBUy"
      },
      "outputs": [],
      "source": [
        "# Fine-tune all the layers\n",
        "for layer in vgg_layers.layers:\n",
        "  layer.trainable = True\n",
        "\n",
        "# Check the trainable status of the individual layers\n",
        "for layer in vgg_layers.layers:\n",
        "  print(layer, layer.trainable)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TBXVsTvx4pUi"
      },
      "source": [
        "### Build CNN model on top of VGG19"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "osZnOlkSCEuC"
      },
      "outputs": [],
      "source": [
        "# define sequential model\n",
        "model = tf.keras.models.Sequential()\n",
        "\n",
        "# Add the vgg convolutional base model\n",
        "model.add(vgg_layers)\n",
        "\n",
        "# add flatten layer\n",
        "model.add(tf.keras.layers.Flatten())\n",
        "\n",
        "# add dense layers with some dropout\n",
        "model.add(tf.keras.layers.Dense(256, activation='relu'))\n",
        "model.add(tf.keras.layers.Dropout(rate=0.3))\n",
        "model.add(tf.keras.layers.Dense(256, activation='relu'))\n",
        "model.add(tf.keras.layers.Dropout(rate=0.3))\n",
        "\n",
        "# add output layer\n",
        "model.add(tf.keras.layers.Dense(10))\n",
        "\n",
        "# compile model\n",
        "model.compile(\n",
        "    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),\n",
        "    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
        "    metrics=['accuracy'])\n",
        "\n",
        "# view model layers\n",
        "model.summary()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iFH_dNmG4qvJ"
      },
      "source": [
        "## Train CNN Model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YAp2GuDPCkIg"
      },
      "outputs": [],
      "source": [
        "EPOCHS = 100\n",
        "train_images_3ch_scaled = train_images_3ch / 255.\n",
        "es_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', \n",
        "                                               patience=2, \n",
        "                                               restore_best_weights=True,\n",
        "                                               verbose=1)\n",
        "\n",
        "history = model.fit(train_images_3ch_scaled, train_labels,\n",
        "                    batch_size=32,\n",
        "                    callbacks=[es_callback], \n",
        "                    validation_split=0.1, epochs=EPOCHS,\n",
        "                    verbose=1)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RvSWc4CN43Rd"
      },
      "source": [
        "## Plot Learning Curves"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "lKbd8QntCo3c"
      },
      "outputs": [],
      "source": [
        "import pandas as pd\n",
        "\n",
        "fig, ax = plt.subplots(1, 2, figsize=(10, 4))\n",
        "\n",
        "history_df = pd.DataFrame(history.history)\n",
        "history_df[['loss', 'val_loss']].plot(kind='line', \n",
        "                                      ax=ax[0])\n",
        "history_df[['accuracy', 'val_accuracy']].plot(kind='line', \n",
        "                                              ax=ax[1]);"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Z-j1rfSu45eT"
      },
      "source": [
        "## Evaluate Model Performance on Organic Test Data\n",
        "\n",
        "Here we check the performance of our pre-trained CNN model on the organic test data (without introducing any perturbations)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "VRZHrV_IDtlB"
      },
      "outputs": [],
      "source": [
        "test_images_3ch_scaled = test_images_3ch / 255.\n",
        "predictions = model.predict(test_images_3ch_scaled)\n",
        "predictions[:5]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Tf3eqduADvqO"
      },
      "outputs": [],
      "source": [
        "prediction_labels = np.argmax(predictions, axis=1)\n",
        "prediction_labels[:5]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8G9ZUwAvDyWF"
      },
      "outputs": [],
      "source": [
        "from sklearn.metrics import confusion_matrix, classification_report\n",
        "\n",
        "print(classification_report(test_labels, prediction_labels, \n",
        "                            target_names=class_names))\n",
        "pd.DataFrame(confusion_matrix(test_labels, prediction_labels), \n",
        "             index=class_names, columns=class_names)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "q6EQErUK49mH"
      },
      "source": [
        "# Adversarial Attacks with Fast Gradient Sign Method (FGSM)\n",
        "\n",
        "## What is an adversarial example?\n",
        "\n",
        "Adversarial examples are specialised inputs created with the purpose of confusing a neural network, resulting in the misclassification of a given input. These notorious inputs are indistinguishable to the human eye but cause the network to fail to identify the contents of the image. There are several types of such attacks; here, however, the focus is on the fast gradient sign method attack, a *white box* attack whose goal is to ensure misclassification. A white box attack is one where the attacker has complete access to the model being attacked. One of the most famous examples of an adversarial image, shown below, is taken from the aforementioned paper.\n",
        "\n",
        "![Adversarial Example](https://i.imgur.com/FyYq2Q0.png)\n",
        "\u003cfont size=2\u003eSource: [Explaining and Harnessing Adversarial Examples, Goodfellow et al., 2014](https://arxiv.org/abs/1412.6572)\u003c/font\u003e\n",
        "\n",
        "Here, starting with the image of a panda, the attacker adds small perturbations (distortions) to the original image, which results in the model labelling this image as a gibbon, with high confidence. The process of adding these perturbations is explained below.\n",
        "\n",
        "## Fast gradient sign method\n",
        "\n",
        "The fast gradient sign method works by using the gradients of the neural network to create an adversarial example. For an input image, the method uses the gradients of the loss with respect to the input image to create a new image that maximises the loss. This new image is called the adversarial image. This can be summarised using the following expression:\n",
        "$$adv\\_x = x + \\epsilon*\\text{sign}(\\nabla_xJ(\\theta, x, y))$$\n",
        "\n",
        "where \n",
        "\n",
        "*   adv_x : Adversarial image.\n",
        "*   x : Original input image.\n",
        "*   y : Original input label.\n",
        "*   $\\epsilon$ : Multiplier to ensure the perturbations are small.\n",
        "*   $\\theta$ : Model parameters.\n",
        "*   $J$ : Loss.\n",
        "\n",
        "The gradients are taken with respect to the input image because the objective is to create an image that maximizes the loss. A method to accomplish this is to find how much each pixel in the image contributes to the loss value, and add a perturbation accordingly. This works pretty fast because it is easy to find how much each input pixel contributes to the loss by using the chain rule and finding the required gradients. Since our goal here is to attack a model that has already been trained, the gradient is not taken with respect to the trainable variables, i.e., the model parameters, which are now frozen.\n",
        "\n",
        "So let's try and fool our pretrained VGG19 model."
      ]
    },
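    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The update above can be sketched numerically. This toy example uses a hand-picked gradient vector (not one computed from any model) just to show the arithmetic of the perturbation and the clipping back to the valid pixel range:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "x = np.array([0.2, 0.5, 0.9])       # original pixels in [0, 1]\n",
        "grad = np.array([-0.3, 0.0, 0.7])   # hand-picked stand-in for grad_x J\n",
        "eps = 0.05                          # perturbation magnitude\n",
        "\n",
        "# adv_x = x + eps * sign(grad_x J), clipped back to the valid range\n",
        "adv_x = np.clip(x + eps * np.sign(grad), 0.0, 1.0)\n",
        "print(adv_x)                        # [0.15 0.5  0.95]\n",
        "```"
      ]
    },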
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hvIozwLu5rV5"
      },
      "source": [
        "## Utility Functions for FGSM\n",
        "\n",
        "1. __`get_model_preds(...)`__: Helps in getting the top predicted class label and probability of an input image based on a specific trained CNN model\n",
        "\n",
        "2. __`generate_adversarial_pattern(...)`__: Helps in getting the gradients and the signs of the gradients w.r.t. the input image for the trained CNN model\n",
        "\n",
        "3. __`perform_adversarial_attack_fgsm(...)`__: Creates perturbations that distort the original image into an adversarial image, by adding epsilon times the sign of the gradients (the raw gradients can also be used), and then showcases the model's performance on it"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ecHH5CvIuas2"
      },
      "outputs": [],
      "source": [
        "def get_model_preds(input_image, class_names_map, model):\n",
        "  preds = model.predict(input_image)\n",
        "  # Convert logits to probabilities by taking softmax.\n",
        "  probs = np.exp(preds) / np.sum(np.exp(preds))\n",
        "  top_idx = np.argsort(-probs)[0][0]\n",
        "  top_prob = -np.sort(-probs)[0][0]\n",
        "  top_class = np.array(class_names_map)[top_idx]\n",
        "  return top_class, top_prob\n",
        "\n",
        "\n",
        "def generate_adversarial_pattern(input_image, image_label_idx, model, loss_func):\n",
        "  with tf.GradientTape() as tape:\n",
        "    tape.watch(input_image)\n",
        "    prediction = model(input_image)\n",
        "    loss = loss_func(image_label_idx, prediction)\n",
        "  # Get the gradients of the loss w.r.t to the input image.\n",
        "  gradient = tape.gradient(loss, input_image)\n",
        "  # Get the sign of the gradients to create the perturbation\n",
        "  signed_grad = tf.sign(gradient)\n",
        "  return signed_grad\n",
        "\n",
        "\n",
        "def perform_adversarial_attack_fgsm(input_image, image_label_idx, cnn_model, class_names_map, loss_func, eps=0.01):\n",
        "  # basic image shaping\n",
        "  input_image = np.array([input_image])\n",
        "  tf_img = tf.convert_to_tensor(input_image)\n",
        "  # predict class before adversarial attack\n",
        "  ba_pred_class, ba_pred_prob = get_model_preds(tf_img, class_names_map, cnn_model)\n",
        "  # generate adversarial image\n",
        "  adv_pattern = generate_adversarial_pattern(tf_img, image_label_idx, cnn_model, loss_func)\n",
        "  clip_adv_pattern = tf.clip_by_value(adv_pattern, clip_value_min=0., clip_value_max=1.)\n",
        "\n",
        "  perturbed_img = tf_img + (eps * adv_pattern)\n",
        "  perturbed_img = tf.clip_by_value(perturbed_img, clip_value_min=0., clip_value_max=1.)\n",
        "  # predict class after adversarial attack\n",
        "  aa_pred_class, aa_pred_prob = get_model_preds(perturbed_img, class_names_map, cnn_model)\n",
        "\n",
        "  # visualize results\n",
        "  fig, ax = plt.subplots(1, 3, figsize=(15, 4))\n",
        "  ax[0].imshow(tf_img[0].numpy())\n",
        "  ax[0].set_title('Before Adversarial Attack\\nTrue:{}  Pred:{}  Prob:{:.3f}'.format(class_names_map[image_label_idx],\n",
        "                                                                                    ba_pred_class,\n",
        "                                                                                    round(ba_pred_prob, 3)))\n",
        "  \n",
        "  ax[1].imshow(clip_adv_pattern[0].numpy())\n",
        "  ax[1].set_title('Adversarial Pattern -  EPS:{}'.format(eps))\n",
        "  \n",
        "  ax[2].imshow(perturbed_img[0].numpy())\n",
        "  ax[2].set_title('After Adversarial Attack\\nTrue:{}  Pred:{}  Prob:{:.3f}'.format(class_names_map[image_label_idx],\n",
        "                                                                                    aa_pred_class,\n",
        "                                                                                    aa_pred_prob))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "At3MM0C967Tv"
      },
      "source": [
        "## Get Loss Function for our problem\n",
        "\n",
        "We use sparse categorical crossentropy here since this is a multi-class classification problem with integer labels."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Xxadfpn2jexV"
      },
      "outputs": [],
      "source": [
        "scc = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "W2KeGFC96_n5"
      },
      "source": [
        "## Adversarial Attack Examples\n",
        "\n",
        "Here we look at a few examples of applying the FGSM adversarial attack to sample apparel images and see how it affects our model's predictions. We create a simple wrapper over our `perform_adversarial_attack_fgsm` function to try it out on sample images."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "d-yy8_Sc7Ihy"
      },
      "outputs": [],
      "source": [
        "def show_adv_attack_example(image_idx, image_dataset, \n",
        "                            image_labels, cnn_model,\n",
        "                            class_names, loss_fn, eps):\n",
        "  sample_apparel_img = image_dataset[image_idx]\n",
        "  sample_apparel_labelidx = image_labels[image_idx]\n",
        "  perform_adversarial_attack_fgsm(input_image=sample_apparel_img, \n",
        "                                  image_label_idx=sample_apparel_labelidx, \n",
        "                                  cnn_model=cnn_model, \n",
        "                                  class_names_map=class_names,\n",
        "                                  loss_func=loss_fn, eps=eps)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "nnvwU1KJvIcc"
      },
      "outputs": [],
      "source": [
        "show_adv_attack_example(6, test_images_3ch_scaled, \n",
        "                        test_labels, model,\n",
        "                        class_names, scc, 0.05)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ubz5GhmjB_VG"
      },
      "outputs": [],
      "source": [
        "show_adv_attack_example(60, test_images_3ch_scaled, \n",
        "                        test_labels, model,\n",
        "                        class_names, scc, 0.05)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "273SaMunCUfF"
      },
      "outputs": [],
      "source": [
        "show_adv_attack_example(500, test_images_3ch_scaled, \n",
        "                        test_labels, model,\n",
        "                        class_names, scc, 0.05)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "K1eWaNaVXFCR"
      },
      "outputs": [],
      "source": [
        "show_adv_attack_example(560, test_images_3ch_scaled, \n",
        "                        test_labels, model,\n",
        "                        class_names, scc, 0.05)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gjJpzCOO7f8O"
      },
      "source": [
        "# Adversarial Learning with Neural Structured Learning\n",
        "\n",
        "We will now leverage Neural Structured Learning (NSL) to train an adversarial-regularized VGG-19 model."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SkAKUGxq9Wes"
      },
      "source": [
        "# Install NSL Dependency"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bBobTSU1J3-O"
      },
      "outputs": [],
      "source": [
        "!pip install neural-structured-learning"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ZFSrRBf6Fdmr"
      },
      "outputs": [],
      "source": [
        "import neural_structured_learning as nsl"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PTVVz5mH9aSH"
      },
      "source": [
        "# Adversarial Learning Configs\n",
        "\n",
        "*   **`adv_multiplier`**: The weight of adversarial loss in the training\n",
        "objective, relative to the labeled loss.\n",
        "*   **`adv_step_size`**: The magnitude of adversarial perturbation.\n",
        "*   **`adv_grad_norm`**: The norm used to measure the magnitude of the adversarial\n",
        "perturbation.\n",
        "\n",
        "Adversarial neighbors are generated using the config settings above:\n",
        "\n",
        "__`adv_neighbor = input_features + adv_step_size * gradient`__\n",
        "\n",
        "where __`adv_step_size`__ is the step size (analogous to a learning rate) used when searching for adversarial neighbors."
      ]
    },
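    {
      "cell_type": "markdown",
      "metadata": {
        "id": "advNbrSketch01"
      },
      "source": [
        "To make the neighbor formula concrete, here is a minimal NumPy sketch (an illustration only, not the NSL implementation; the feature and gradient values are made up) of how an L2-normalized gradient is scaled by `adv_step_size` and added to the input:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "adv_step_size = 0.95\n",
        "x = np.array([0.2, 0.4, 0.6])      # input features\n",
        "grad = np.array([3.0, 0.0, 4.0])   # hypothetical loss gradient w.r.t. x\n",
        "\n",
        "# with adv_grad_norm='l2', the gradient is normalized to unit L2 norm\n",
        "unit_grad = grad / np.linalg.norm(grad)\n",
        "adv_neighbor = x + adv_step_size * unit_grad\n",
        "print(adv_neighbor)\n",
        "```\n",
        "\n",
        "Note that the resulting perturbation always has an L2 norm equal to `adv_step_size`."
      ]
    },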
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tp1xpfqAJ268"
      },
      "outputs": [],
      "source": [
        "adv_multiplier = 0.45\n",
        "adv_step_size = 0.95\n",
        "adv_grad_norm = 'l2'\n",
        "\n",
        "adversarial_config = nsl.configs.make_adv_reg_config(\n",
        "  multiplier=adv_multiplier,\n",
        "  adv_step_size=adv_step_size,\n",
        "  adv_grad_norm=adv_grad_norm\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "mdNwNBdxLfPR"
      },
      "outputs": [],
      "source": [
        "adversarial_config"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RyprtLa9y1w5"
      },
      "source": [
        "#### Feel free to play around with the hyperparameters and observe the model's performance"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PBLHRAW497OF"
      },
      "source": [
        "# Fine-tuning VGG-19 CNN Model with Adversarial Learning - Adversarial Model\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nw01eSoU-BvZ"
      },
      "source": [
        "## Create Base Model Architecture"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "eCrYcNReOoS9"
      },
      "outputs": [],
      "source": [
        "vgg_layers = tf.keras.applications.vgg19.VGG19(weights='imagenet', include_top=False, \n",
        "                                               input_shape=INPUT_SHAPE)\n",
        "\n",
        "# Fine-tune all the layers\n",
        "for layer in vgg_layers.layers:\n",
        "  layer.trainable = True\n",
        "\n",
        "# Check the trainable status of the individual layers\n",
        "for layer in vgg_layers.layers:\n",
        "  print(layer, layer.trainable)\n",
        "\n",
        "# define sequential model\n",
        "base_model = tf.keras.models.Sequential()\n",
        "\n",
        "# Add the vgg convolutional base model\n",
        "base_model.add(vgg_layers)\n",
        "\n",
        "# add flatten layer\n",
        "base_model.add(tf.keras.layers.Flatten())\n",
        "\n",
        "# add dense layers with some dropout\n",
        "base_model.add(tf.keras.layers.Dense(256, activation='relu'))\n",
        "base_model.add(tf.keras.layers.Dropout(rate=0.3))\n",
        "base_model.add(tf.keras.layers.Dense(256, activation='relu'))\n",
        "base_model.add(tf.keras.layers.Dropout(rate=0.3))\n",
        "\n",
        "# add output layer\n",
        "base_model.add(tf.keras.layers.Dense(10))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xGAbFJWt-EdQ"
      },
      "source": [
        "## Setup Adversarial Model with Adversarial Regularization on Base Model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "QNeQkOwnLgUM"
      },
      "outputs": [],
      "source": [
        "adv_model = nsl.keras.AdversarialRegularization(\n",
        "  base_model,\n",
        "  label_keys=['label'],\n",
        "  adv_config=adversarial_config\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "EI3kU0ZJMFur"
      },
      "outputs": [],
      "source": [
        "adv_model.compile(\n",
        "    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),\n",
        "    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
        "    metrics=['accuracy'])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "V1DhRPVc-Lnv"
      },
      "source": [
        "## Format Training / Validation data into TF Datasets"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0ZNxIiNcPaGm"
      },
      "outputs": [],
      "source": [
        "from sklearn.model_selection import train_test_split\n",
        "\n",
        "X_train, X_val, y_train, y_val = train_test_split(train_images_3ch_scaled, \n",
        "                                                  train_labels, \n",
        "                                                  test_size=0.1, \n",
        "                                                  random_state=42)\n",
        "batch_size = 256\n",
        "\n",
        "train_data = tf.data.Dataset.from_tensor_slices(\n",
        "  {'input': X_train, \n",
        "    'label': tf.convert_to_tensor(y_train, dtype='float32')}).batch(batch_size)\n",
        "\n",
        "val_data = tf.data.Dataset.from_tensor_slices(\n",
        "  {'input': X_val, \n",
        "    'label': tf.convert_to_tensor(y_val, dtype='float32')}).batch(batch_size)\n",
        "\n",
        "\n",
        "# validation_steps must be an integer number of batches\n",
        "val_steps = X_val.shape[0] // batch_size"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "U58BFzps-Qxy"
      },
      "source": [
        "## Train Model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8UFG2YmEMKSD"
      },
      "outputs": [],
      "source": [
        "EPOCHS = 100\n",
        "\n",
        "es_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', \n",
        "                                               patience=2, \n",
        "                                               restore_best_weights=False,\n",
        "                                               verbose=1)\n",
        "\n",
        "history = adv_model.fit(train_data, validation_data=val_data,\n",
        "                        validation_steps=val_steps, \n",
        "                        batch_size=batch_size,\n",
        "                        callbacks=[es_callback], \n",
        "                        epochs=EPOCHS,\n",
        "                        verbose=1)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-COJiiKj-THJ"
      },
      "source": [
        "## Visualize Learning Curves"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YNvZC8iIMh7r"
      },
      "outputs": [],
      "source": [
        "fig, ax = plt.subplots(1, 2, figsize=(10, 4))\n",
        "\n",
        "history_df = pd.DataFrame(history.history)\n",
        "history_df[['loss', 'val_loss']].plot(kind='line', \n",
        "                                      ax=ax[0])\n",
        "history_df[['sparse_categorical_accuracy', \n",
        "            'val_sparse_categorical_accuracy']].plot(kind='line', \n",
        "                                                     ax=ax[1]);"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "q5qBXLpJ-VOa"
      },
      "source": [
        "## VGG-19 Adversarial Model Performance on Organic Test Dataset\n",
        "\n",
        "Here we check the performance of our adversarially trained CNN model on the organic test data (without introducing any perturbations)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Wy-tqqzHNWJV"
      },
      "outputs": [],
      "source": [
        "predictions = adv_model.base_model.predict(test_images_3ch_scaled)\n",
        "prediction_labels = np.argmax(predictions, axis=1)\n",
        "print(classification_report(test_labels, prediction_labels, \n",
        "                            target_names=class_names))\n",
        "pd.DataFrame(confusion_matrix(test_labels, prediction_labels), \n",
        "             index=class_names, columns=class_names)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mEqFDKAKy1xU"
      },
      "source": [
        "Nearly identical performance to our non-adversarially trained CNN model!"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0ZojtMft7LRd"
      },
      "source": [
        "## Generate Adversarial Attacks (FGSM) on Test Data to create Perturbed Test Dataset\n",
        "\n",
        "Here we define a helper function that creates a perturbed version of a dataset for a given adversarial epsilon multiplier."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "-NvQb89fDv8D"
      },
      "outputs": [],
      "source": [
        "def generate_perturbed_images(input_images, image_label_idxs, model, loss_func, eps=0.01):\n",
        "  perturbed_images = []\n",
        "  # wrapping in list() just to show a progress bar; avoid this on large datasets\n",
        "  for image, label in tqdm(list(zip(input_images, image_label_idxs))): \n",
        "    image = tf.convert_to_tensor(np.array([image]))\n",
        "    adv_pattern = generate_adversarial_pattern(image, label, model, loss_func)\n",
        "    perturbed_img = image + (eps * adv_pattern)\n",
        "    perturbed_img = tf.clip_by_value(perturbed_img, clip_value_min=0., clip_value_max=1.)[0]\n",
        "    perturbed_images.append(perturbed_img)\n",
        "\n",
        "  return tf.convert_to_tensor(perturbed_images)"
      ]
    },
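    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fgsmClipSketch01"
      },
      "source": [
        "The core of the helper above is the perturb-and-clip step: `perturbed = clip(image + eps * pattern, 0, 1)`. A minimal NumPy sketch (with a made-up sign-of-gradient pattern standing in for the real `adv_pattern`):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "eps = 0.05\n",
        "image = np.array([[0.0, 0.5], [0.98, 1.0]])          # toy 2x2 'image' in [0, 1]\n",
        "adv_pattern = np.array([[1.0, -1.0], [1.0, -1.0]])   # hypothetical sign of the loss gradient\n",
        "\n",
        "# clip keeps perturbed pixels inside the valid [0, 1] range\n",
        "perturbed = np.clip(image + eps * adv_pattern, 0.0, 1.0)\n",
        "print(perturbed)\n",
        "```\n",
        "\n",
        "The clip matters: the pixel at 0.98 would otherwise land at 1.03, outside the valid pixel range."
      ]
    },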
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "X92mpBdV7Vl3"
      },
      "source": [
        "# Generate a Perturbed Test Dataset\n",
        "\n",
        "We generate a perturbed version of the test dataset using an epsilon multiplier of 0.05; we will shortly use it to compare the base VGG-19 model against the adversarially trained model."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "01KvCY1uE42D"
      },
      "outputs": [],
      "source": [
        "perturbed_test_imgs = generate_perturbed_images(input_images=test_images_3ch_scaled, \n",
        "                                                image_label_idxs=test_labels, model=model, \n",
        "                                                loss_func=scc, eps=0.05)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oqIE9cTI-dbn"
      },
      "source": [
        "# VGG-19 Base Model performance on Perturbed Test Dataset\n",
        "\n",
        "#### Let's look at the performance of our base VGG-19 model on the perturbed dataset."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xXD4kcT2rtcI"
      },
      "outputs": [],
      "source": [
        "predictions = model.predict(perturbed_test_imgs)\n",
        "prediction_labels = np.argmax(predictions, axis=1)\n",
        "print(classification_report(test_labels, prediction_labels, \n",
        "                            target_names=class_names))\n",
        "pd.DataFrame(confusion_matrix(test_labels, prediction_labels), \n",
        "             index=class_names, columns=class_names)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "U1p6jOOp-wMh"
      },
      "source": [
        "We can see that the performance of the base (non-adversarially trained) VGG-19 model drops by almost 50% on the perturbed test dataset, bringing a powerful ImageNet-winning model to its knees!"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VWAtWpi4-gfM"
      },
      "source": [
        "# VGG-19 Adversarial Model performance on Perturbed Test Dataset\n",
        "\n",
        "#### Evaluating our adversarially trained CNN model on the perturbed test dataset, we see an approximately 38% jump in performance over the base model!"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Xy3rG9lqU1Qr"
      },
      "outputs": [],
      "source": [
        "predictions = adv_model.base_model.predict(perturbed_test_imgs)\n",
        "prediction_labels = np.argmax(predictions, axis=1)\n",
        "print(classification_report(test_labels, prediction_labels, \n",
        "                            target_names=class_names))\n",
        "pd.DataFrame(confusion_matrix(test_labels, prediction_labels), \n",
        "             index=class_names, columns=class_names)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "--jVksXQ-jKK"
      },
      "source": [
        "# Compare Model Performances on Sample Perturbed Test Examples"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "LVETtaCXnahv"
      },
      "outputs": [],
      "source": [
        "f, ax = plt.subplots(2, 5, figsize=(30, 15))\n",
        "for idx, i in enumerate([6, 7, 8, 9, 10, 11, 95, 99, 29, 33]):\n",
        "  idx_x = idx // 5\n",
        "  idx_y = idx % 5 \n",
        "\n",
        "  sample_apparel_idx = i\n",
        "  sample_apparel_img =  tf.convert_to_tensor([perturbed_test_imgs[sample_apparel_idx]])\n",
        "  sample_apparel_labelidx = test_labels[sample_apparel_idx]\n",
        "\n",
        "  bm_pred = get_model_preds(input_image=sample_apparel_img, \n",
        "                            class_names_map=class_names, \n",
        "                            model=model)[0]\n",
        "  am_pred = get_model_preds(input_image=sample_apparel_img, \n",
        "                            class_names_map=class_names, \n",
        "                            model=adv_model.base_model)[0]\n",
        "\n",
        "  ax[idx_x, idx_y].imshow(sample_apparel_img[0])\n",
        "  ax[idx_x, idx_y].set_title('True Label:{}\\nBase VGG Model Pred:{}\\nAdversarial Reg. Model Pred:{}'.format(class_names[sample_apparel_labelidx],\n",
        "                                                                                                            bm_pred,\n",
        "                                                                                                            am_pred))\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lB6ibsi26yAC"
      },
      "source": [
        "You can clearly see that the adversarially trained model is better at predicting the correct class for the sample images shown above."
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "collapsed_sections": [],
      "name": "adversarial_cnn_transfer_learning_fashionmnist.ipynb",
      "private_outputs": true,
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.7.6"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
