{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Ic4_occAAiAT"
   },
   "source": [
    "##### Copyright 2020 The TensorFlow Authors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "both",
    "colab": {},
    "colab_type": "code",
    "id": "ioaprt5q5US7"
   },
   "outputs": [],
   "source": [
    "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "# https://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "sPrjeJhFQBmu"
   },
   "source": [
    "# Integrated gradients"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can run or view this tutorial in the following environments:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
    " <td>\n",
    "  <a target=\"_blank\" href=\"https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/blogs/integrated_gradients/integrated_gradients.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
    " </td>\n",
    " <td>\n",
    "  <a target=\"_blank\" href=\"https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/blogs/integrated_gradients/integrated_gradients.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
    " </td>\n",
    " <td>\n",
    "  <a href=\"https://github.com/GoogleCloudPlatform/training-data-analyst/raw/master/blogs/integrated_gradients/integrated_gradients.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n",
    " </td>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "hKY4XMc9o8iB"
   },
   "source": [
    "A shorter version of this notebook is also available as a TensorFlow tutorial:\n",
    "\n",
    "<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
    "  <td>\n",
    "    <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/interpretability/integrated_gradients\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n",
    "  </td>\n",
    "  <td>\n",
    "    <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/interpretability/integrated_gradients.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
    "  </td>\n",
    "  <td>\n",
    "    <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/interpretability/integrated_gradients.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
    "  </td>\n",
    "  <td>\n",
    "    <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/interpretability/integrated_gradients.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n",
    "  </td>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "NG17_Wp6ikKf"
   },
   "source": [
    "This tutorial demonstrates how to implement **Integrated Gradients (IG)**, an explainable AI technique described in the paper [Axiomatic Attribution for Deep Networks](https://arxiv.org/abs/1703.01365). IG aims to explain the relationship between a model's predictions and its features. It has many use cases, including understanding feature importances, identifying data skew, and debugging model performance.\n",
    "\n",
    "IG has become a popular interpretability technique due to its broad applicability to any differentiable model, ease of implementation, theoretical justifications, and computational efficiency relative to alternative approaches, which allows it to scale to large networks and feature spaces such as images.\n",
    "\n",
    "You will start by walking through an implementation of IG step by step. Next, you will apply IG attributions to understand the pixel feature importances of an image classifier and explore applied machine learning use cases. Lastly, you will conclude with a discussion of IG's properties, limitations, and suggestions for next steps in your learning journey.\n",
    "\n",
    "To motivate this tutorial, here is the result of using IG to highlight important pixels that were used to classify this [image](https://commons.wikimedia.org/wiki/File:San_Francisco_fireboat_showing_off.jpg) as a fireboat.\n",
    "\n",
    "![Output Image 1](./images/IG_fireboat.png)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "ppydw6ZbKzM1"
   },
   "source": [
    "## Explaining an image classifier"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "cbUMIubipgg0"
   },
   "outputs": [],
   "source": [
    "import matplotlib.pylab as plt\n",
    "import numpy as np\n",
    "import math\n",
    "import sys\n",
    "import tensorflow as tf\n",
    "import tensorflow_hub as hub"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "GVVV4BGrABkA"
   },
   "source": [
    "### Download Inception V1 from TF-Hub"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "7kwwJ35xmtoK"
   },
   "source": [
    "**TensorFlow Hub Module**\n",
    "\n",
    "IG can be applied to any neural network. To mirror the paper's implementation, you will use a pre-trained version of [Inception V1](https://arxiv.org/abs/1409.4842) from [TensorFlow Hub](https://tfhub.dev/google/imagenet/inception_v1/classification/4).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "cHjTloLzG1hJ"
   },
   "outputs": [],
   "source": [
    "inception_v1_url = \"https://tfhub.dev/google/imagenet/inception_v1/classification/4\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 187
    },
    "colab_type": "code",
    "id": "14APZcfHolKj",
    "outputId": "15ee0232-987f-4aab-b23c-fe2888569c85"
   },
   "outputs": [],
   "source": [
    "inception_v1_classifier = tf.keras.Sequential([\n",
    "    hub.KerasLayer(name='inception_v1', \n",
    "                   handle=inception_v1_url, \n",
    "                   trainable=False),\n",
    "])\n",
    "inception_v1_classifier.build([None, 224, 224, 3])\n",
    "inception_v1_classifier.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "GjLRn2e5xFOb"
   },
   "source": [
    "From the TF Hub module page, you need to keep in mind the following about Inception V1 for image classification:\n",
    "\n",
    "**Inputs**: The expected input shape for the model is `(None, 224, 224, 3)`. This is a dense 4D tensor of dtype float32 and shape `(batch_size, height, width, RGB channels)` whose elements are RGB color values of pixels normalized to the range [0, 1]. The first element is `None` to indicate that the model can take any integer batch size.\n",
    "\n",
    "**Outputs**: A `tf.Tensor` of logits in the shape of `(n_images, 1001)`. Each row represents the model's predicted score for each of ImageNet's 1,001 classes. For the model's top predicted class index you can use `tf.argmax(predictions, axis=-1)`. Furthermore, you can also convert the model's logit output to predicted probabilities across all classes using `tf.nn.softmax(predictions, axis=-1)` to quantify the model's uncertainty as well as explore similar predicted classes for debugging."
   ]
  },
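  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the logits-to-probabilities step concrete, here is a small NumPy sketch. The toy `logits` array is a hypothetical stand-in for the model's `(n_images, 1001)` output; the operations mirror `tf.nn.softmax` and `tf.argmax`:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Toy logits for a batch of 2 \"images\" over 5 classes.\n",
    "logits = np.array([[2.0, 1.0, 0.1, -1.0, 0.5],\n",
    "                   [0.2, 0.3, 3.0, 0.1, -0.5]])\n",
    "\n",
    "# Mirrors tf.nn.softmax(predictions, axis=-1): logits -> probabilities.\n",
    "exp = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stabilized\n",
    "probs = exp / exp.sum(axis=-1, keepdims=True)\n",
    "\n",
    "# Mirrors tf.argmax(predictions, axis=-1): top predicted class per image.\n",
    "top_class = probs.argmax(axis=-1)\n",
    "```"
   ]
  },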
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "huZnb_O0L9mw"
   },
   "outputs": [],
   "source": [
    "def load_imagenet_labels(file_path):\n",
    "  \"\"\"\n",
    "  Args:\n",
    "    file_path(str): A URL download path.\n",
    "  Returns:\n",
    "    imagenet_label_array(numpy.ndarray): Array of strings with shape (1001,).\n",
    "  \"\"\"\n",
    "  labels_file = tf.keras.utils.get_file('ImageNetLabels.txt', file_path)\n",
    "  with open(labels_file, \"r\") as reader:\n",
    "    f = reader.read()\n",
    "    labels = f.splitlines()\n",
    "    imagenet_label_array = np.array(labels)\n",
    "\n",
    "  return imagenet_label_array"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "Rtrl-u7T6NEk"
   },
   "outputs": [],
   "source": [
    "imagenet_label_vocab = load_imagenet_labels('https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "STpIJr1Z5r_u"
   },
   "source": [
    "### Load and preprocess images with `tf.image`\n",
    "\n",
    "You will illustrate IG using several images. Links to the original images are as follows ([Fireboat](https://commons.wikimedia.org/wiki/File:San_Francisco_fireboat_showing_off.jpg), [School Bus](https://commons.wikimedia.org/wiki/File:Thomas_School_Bus_Bus.jpg), [Giant Panda](https://commons.wikimedia.org/wiki/File:Giant_Panda_2.JPG), [Black Beetle](https://commons.wikimedia.org/wiki/File:Lucanus.JPG), [Golden Retriever](https://commons.wikimedia.org/wiki/File:Golden_retriever.jpg), [General Ulysses S. Grant](https://commons.wikimedia.org/wiki/Category:Ulysses_S._Grant#/media/File:Portrait_of_Maj._Gen._Ulysses_S._Grant,_officer_of_the_Federal_Army_LOC_cwpb.06941.jpg), [Greece Presidential Guard](https://commons.wikimedia.org/wiki/File:Greek_guard_uniforms_1.jpg)).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "YOb0Adq-rU5J"
   },
   "outputs": [],
   "source": [
    "def parse_image(file_name):\n",
    "  \"\"\"\n",
    "This function reads and standardizes input JPEG images for the \n",
    "    inception_v1 model. It applies the following processing:\n",
    "    - Reads the JPEG file.\n",
    "    - Decodes the JPEG file into a color image.\n",
    "    - Converts the data type to standard tf.float32.\n",
    "    - Resizes the image to the expected Inception V1 input dimensions of\n",
    "      (224, 224, 3) with the aspect ratio preserved (i.e. the image is not\n",
    "      stretched).\n",
    "    - Pads the image to a (224, 224, 3) shape with black pixels.\n",
    "  Args:\n",
    "    file_name(str): Path to a local JPEG image file.\n",
    "  Returns:\n",
    "    image(Tensor): A Tensor of floats with shape (224, 224, 3).\n",
    "  \"\"\"\n",
    "  image = tf.io.read_file(file_name)\n",
    "  image = tf.image.decode_jpeg(image, channels=3)\n",
    "  image = tf.image.convert_image_dtype(image, tf.float32)\n",
    "  image = tf.image.resize(image, (224, 224), preserve_aspect_ratio=True)\n",
    "  image = tf.image.resize_with_pad(image, target_height=224, target_width=224)\n",
    "\n",
    "  return image"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "_khLTN75CLMJ"
   },
   "outputs": [],
   "source": [
    "# img_name_url {image_name: origin_url}\n",
    "img_name_url = {\n",
    "    'Fireboat': 'https://storage.googleapis.com/applied-dl/temp/San_Francisco_fireboat_showing_off.jpg',\n",
    "    'School Bus': 'https://storage.googleapis.com/applied-dl/temp/Thomas_School_Bus_Bus.jpg',\n",
    "    'Giant Panda': 'https://storage.googleapis.com/applied-dl/temp/Giant_Panda_2.jpeg',\n",
    "    'Black Beetle': 'https://storage.googleapis.com/applied-dl/temp/Lucanus.jpeg',\n",
    "    'Golden Retriever': 'https://storage.googleapis.com/applied-dl/temp/Golden_retriever.jpg',\n",
    "    'Yellow Labrador Retriever': 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg',\n",
    "    'Military Uniform (Grace Hopper)': 'https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg',\n",
    "    'Military Uniform (General Ulysses S. Grant)': 'https://storage.googleapis.com/applied-dl/temp/General_Ulysses_S._Grant%2C_Union_Army_(6186252896).jpg',\n",
    "    'Military Uniform (Greek Presidential Guard)': 'https://storage.googleapis.com/applied-dl/temp/Greek_guard_uniforms_1.jpg',\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "N5iftCDgE1Fo"
   },
   "outputs": [],
   "source": [
    "# img_name_path {image_name: downloaded_image_local_path}\n",
    "img_name_path = {name: tf.keras.utils.get_file(name, url) for (name, url) in img_name_url.items()}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "YIBce9ccJvZ7"
   },
   "outputs": [],
   "source": [
    "# img_name_tensors {image_name: parsed_image_tensor}\n",
    "img_name_tensors = {name: parse_image(img_path) for (name, img_path) in img_name_path.items()}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 1000
    },
    "colab_type": "code",
    "id": "AYIeu8rMLN-8",
    "outputId": "6cc735b6-da5b-4632-f384-9960dd8b9a29"
   },
   "outputs": [],
   "source": [
    "plt.figure(figsize=(14,14))\n",
    "for n, (name, img_tensors) in enumerate(img_name_tensors.items()):\n",
    "  ax = plt.subplot(3,3,n+1)\n",
    "  ax.imshow(img_tensors)\n",
    "  ax.set_title(name)\n",
    "  ax.axis('off')\n",
    "plt.tight_layout()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "v-lH1N4timM2"
   },
   "source": [
    "## Applying integrated gradients"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "FV7pfZHANaz1"
   },
   "source": [
    "IG is an elegant and simple idea to explain a model's predictions in relation to its input. The basic intuition is to measure a feature's importance to your model by incrementally increasing that feature's intensity between its absence (the baseline) and its input value, computing the change in your model's predictions with respect to the original feature at each step, and averaging these incremental changes together. To gain a deeper understanding of how IG works, you will walk through its application in the sub-sections below."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "MsCPIPdgK5Jb"
   },
   "source": [
    "### Step 1: Identify model input and output tensors"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "-E_DubntvPy9"
   },
   "source": [
    "IG is a post-hoc explanatory method that works with any differentiable model regardless of its implementation. As such, you can pass any input example tensor to a model to generate an output prediction tensor. Note that Inception V1 outputs a multiclass un-normalized logits prediction tensor, so you will use a softmax operator to turn the logits tensor into a tensor of predicted probabilities to use when computing IG feature attributions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 34
    },
    "colab_type": "code",
    "id": "QWygBFWSUdb_",
    "outputId": "59eaf361-ac6d-4629-bf2b-61b6b198da61"
   },
   "outputs": [],
   "source": [
    "# stack images into a batch for processing.\n",
    "image_titles = tf.convert_to_tensor(list(img_name_tensors.keys()))\n",
    "image_batch = tf.convert_to_tensor(list(img_name_tensors.values()))\n",
    "image_batch.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "Do4lfHWQ27YV"
   },
   "outputs": [],
   "source": [
    "def top_k_predictions_scores_labels(model, img, label_vocab, top_k=3):\n",
    "  \"\"\"\n",
    "  Args:\n",
    "    model(tf.keras.Model): Trained Keras model.\n",
    "    img(tf.Tensor): A 4D tensor of floats with the shape \n",
    "      (img_n, img_height, img_width, 3).\n",
    "    label_vocab(numpy.ndarray): An array of strings with shape (1001,).\n",
    "    top_k(int): Number of results to return.\n",
    "  Returns:\n",
    "    k_predictions_idx(tf.Tensor): A tf.Tensor [n_images, top_k] of tf.int32 \n",
    "      prediction indices.\n",
    "    k_predictions_label(tf.Tensor): A tf.Tensor [n_images, top_k] of tf.string \n",
    "      prediction labels.\n",
    "    k_predictions_proba(tf.Tensor): A tf.Tensor [n_images, top_k] of tf.float32 \n",
    "      prediction probabilities.\n",
    "  \"\"\"\n",
    "  # These are logits (unnormalized scores).\n",
    "  predictions = model(img)\n",
    "  # Convert logits into probabilities.\n",
    "  predictions_proba = tf.nn.softmax(predictions, axis=-1)\n",
    "  # Filter top k prediction probabilities and indices.\n",
    "  k_predictions_proba, k_predictions_idx = tf.math.top_k(\n",
    "      input=predictions_proba, k=top_k)\n",
    "  # Lookup top k prediction labels in label_vocab array.\n",
    "  k_predictions_label = tf.convert_to_tensor(\n",
    "      label_vocab[k_predictions_idx.numpy()], \n",
    "      dtype=tf.string)\n",
    "\n",
    "  return k_predictions_idx, k_predictions_label, k_predictions_proba"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "DFBBf8zT7OJo"
   },
   "outputs": [],
   "source": [
    "def plot_img_predictions(model, img, img_titles, label_vocab, top_k=3):\n",
    "  \"\"\"Plot images with top_k predictions.\n",
    "  Args:\n",
    "    model(tf.keras.Model): Trained Keras model.\n",
    "    img(Tensor): A 4D Tensor of floats with the shape \n",
    "      (img_n, img_height, img_width, 3).\n",
    "    img_titles(Tensor): A 1D Tensor of strings with the shape (img_n,).\n",
    "    label_vocab(numpy.ndarray): An array of strings with shape (1001,).\n",
    "    top_k(int): Number of results to return.\n",
    "  Returns:\n",
    "    fig(matplotlib.pyplot.figure): fig object to utilize for displaying, saving \n",
    "      plots.\n",
    "  \"\"\"\n",
    "  pred_idx, pred_label, pred_proba = \\\n",
    "  top_k_predictions_scores_labels(\n",
    "      model=model, \n",
    "      img=img, \n",
    "      label_vocab=label_vocab, \n",
    "      top_k=top_k)\n",
    "  \n",
    "  img_arr = img.numpy()\n",
    "  title_arr = img_titles.numpy()\n",
    "  pred_idx_arr = pred_idx.numpy()\n",
    "  pred_label_arr = pred_label.numpy()\n",
    "  pred_proba_arr = pred_proba.numpy()\n",
    "\n",
    "  n_rows = img_arr.shape[0]\n",
    "  # Preserve image height by converting pixels to inches based on dpi.\n",
    "  size = n_rows * (224 // 48)\n",
    "  \n",
    "  fig, axs = plt.subplots(nrows=img_arr.shape[0], ncols=1, figsize=(size, size), squeeze=False)\n",
    "  for idx, image in enumerate(img_arr):\n",
    "    axs[idx, 0].imshow(image)\n",
    "    axs[idx, 0].set_title(title_arr[idx].decode('utf-8'), fontweight='bold')\n",
    "    axs[idx, 0].axis('off')\n",
    "    for k in range(top_k):\n",
    "      k_idx = pred_idx_arr[idx][k]\n",
    "      k_label = pred_label_arr[idx][k].decode('utf-8')\n",
    "      k_proba = pred_proba_arr[idx][k]\n",
    "      if k==0:\n",
    "        s = 'Prediction {:}: ({:}-{:}) Score: {:.1%}'.format(k+1, k_idx, k_label, k_proba)\n",
    "        axs[idx, 0].text(244 + size, 102+(k*40), s, fontsize=12, fontweight='bold')\n",
    "      else:\n",
    "        s = 'Prediction {:}: ({:}-{:}) Score: {:.1%}'.format(k+1, k_idx, k_label, k_proba)\n",
    "        axs[idx, 0].text(244 + size, 102+(k*20), s, fontsize=12)\n",
    "\n",
    "  plt.tight_layout()      \n",
    "\n",
    "  return fig\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 1000
    },
    "colab_type": "code",
    "id": "6086yb8TY_YK",
    "outputId": "05465d73-bf04-459f-d5ad-910872176c83"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_predictions(\n",
    "    model=inception_v1_classifier,\n",
    "    img=image_batch,\n",
    "    img_titles=image_titles,\n",
    "    label_vocab=imagenet_label_vocab, \n",
    "    top_k=5\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "28MU35BCLM-s"
   },
   "source": [
    "### Step 2: Establish a baseline to compare inputs against"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "MIPG5yYfkydQ"
   },
   "source": [
    "Defining missingness, or a starting point in feature space for comparison, is at the core of machine learning interpretability methods. For IG, this concept is encoded as a baseline. A **baseline** is an uninformative input used as a starting point for defining IG attributions; it is essential for interpreting IG prediction attributions as a function of individual input features.\n",
    "\n",
    "When selecting a baseline for neural networks, the goal is to choose a baseline such that the prediction at the baseline is near zero, to minimize the baseline's impact on the interpretation of the prediction attributions.\n",
    "\n",
    "For image classification networks, a baseline image with its pixels set to 0 meets this objective. For text networks, an all-zero input embedding vector makes for a good baseline. Models trained on structured data, which typically involves a mix of continuous numeric features, often use the observed median value as a baseline because 0 is an informative value for these features. Note, however, that this changes the interpretation of the features to their importance in relation to the baseline value as opposed to the input data directly. The paper's authors provide additional guidance on baseline selection for different input feature data types and models in the [How to Use Integrated Gradients Guide](https://github.com/ankurtaly/Integrated-Gradients/blob/master/howto.md#sanity-checking-baselines) on GitHub."
   ]
  },
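  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration of the structured-data guidance above, the following NumPy sketch computes a per-feature median baseline; the feature matrix `X` is hypothetical:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Hypothetical tabular features: rows = examples, columns = two numeric features.\n",
    "X = np.array([[1.0, 200.0],\n",
    "              [3.0, 180.0],\n",
    "              [5.0, 260.0]])\n",
    "\n",
    "# Observed median per feature, used instead of an all-zero baseline\n",
    "# because 0 carries meaning for these features.\n",
    "median_baseline = np.median(X, axis=0)\n",
    "```"
   ]
  },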
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "kozx1p3hSDGz"
   },
   "outputs": [],
   "source": [
    "# name_baseline_tensors. Set random seed for reproducibility of random baseline image and associated attributions.\n",
    "tf.random.set_seed(42)\n",
    "name_baseline_tensors = {\n",
    "    'Baseline Image: Black': tf.zeros(shape=(224,224,3)),\n",
    "    'Baseline Image: Random': tf.random.uniform(shape=(224,224,3), minval=0.0, maxval=1.0),\n",
    "    'Baseline Image: White': tf.ones(shape=(224,224,3)),\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 321
    },
    "colab_type": "code",
    "id": "vXRYwBWQS19B",
    "outputId": "1da6e2ac-af7f-47e8-8174-d59675e2d6db"
   },
   "outputs": [],
   "source": [
    "plt.figure(figsize=(12,12))\n",
    "for n, (name, baseline_tensor) in enumerate(name_baseline_tensors.items()):\n",
    "  ax = plt.subplot(1,3,n+1)\n",
    "  ax.imshow(baseline_tensor)\n",
    "  ax.set_title(name)\n",
    "  ax.axis('off')\n",
    "plt.tight_layout()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "xphryu2mGAk8"
   },
   "source": [
    "### Step 3: Integrated gradients in TensorFlow 2.x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "QstMR0IcbfFA"
   },
   "source": [
    "The exact formula for Integrated Gradients from the original paper is the following:\n",
    "\n",
    "$IntegratedGradients_{i}(x) ::= (x_{i} - x'_{i})\\times\\int_{\\alpha=0}^1\\frac{\\partial F(x'+\\alpha \\times (x - x'))}{\\partial x_i}{d\\alpha}$\n",
    "\n",
    "where:\n",
    "\n",
    "$_{i}$ = feature   \n",
    "$x$ = input    \n",
    "$x'$ = baseline   \n",
    "$\alpha$ = interpolation constant to perturb features by"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "_7w8SD5YMqvi"
   },
   "source": [
    "However, in practice, computing a definite integral is not always numerically possible and can be computationally costly, so you compute the following numerical approximation:\n",
    "\n",
    "$IntegratedGrads^{approx}_{i}(x)::=(x_{i}-x'_{i})\\times\\sum_{k=1}^{m}\\frac{\\partial F(x' + \\frac{k}{m}\\times(x - x'))}{\\partial x_{i}} \\times \\frac{1}{m}$\n",
    "\n",
    "where:\n",
    "\n",
    "$_{i}$ = feature (individual pixel)   \n",
    "$x$ = input (image tensor)  \n",
    "$x'$ = baseline (image tensor)  \n",
    "$k$ = scaled feature perturbation constant  \n",
    "$m$ = number of steps in the Riemann sum approximation of the integral. This is covered in depth in the section *Compute integral approximation* below.\n"
   ]
  },
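  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see the approximation at work before applying it to Inception V1, here is a minimal NumPy sketch for a toy differentiable function $F(x) = \sum_i x_i^2$, whose gradient $2x_i$ is known analytically. IG's completeness axiom says the attributions should sum approximately to $F(x) - F(x')$:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Toy differentiable \"model\": F(x) = sum(x_i^2), so dF/dx_i = 2 * x_i.\n",
    "def F(x):\n",
    "  return np.sum(x ** 2)\n",
    "\n",
    "def grad_F(x):\n",
    "  return 2 * x\n",
    "\n",
    "def integrated_gradients_approx(x, baseline, m_steps=50):\n",
    "  # Riemann sum over k = 1..m of the gradient at the interpolated inputs.\n",
    "  total = np.zeros_like(x)\n",
    "  for k in range(1, m_steps + 1):\n",
    "    interpolated = baseline + (k / m_steps) * (x - baseline)\n",
    "    total += grad_F(interpolated)\n",
    "  return (x - baseline) * total / m_steps\n",
    "\n",
    "x = np.array([1.0, 2.0])\n",
    "baseline = np.zeros_like(x)\n",
    "ig = integrated_gradients_approx(x, baseline)\n",
    "```\n",
    "\n",
    "With `m_steps=50`, `ig` sums to within a few percent of $F(x) - F(x') = 5$, as completeness requires; the error shrinks as `m_steps` grows."
   ]
  },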
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "-3KcfyCYV1ZG"
   },
   "source": [
    "You will walk through the intuition and implementation of the above equation in the sections below."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "5aPG68RssS2h"
   },
   "source": [
    "#### Generate interpolated path inputs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "pPrldEYsIR4M"
   },
   "source": [
    "$IntegratedGrads^{approx}_{i}(x)::=(x_{i}-x'_{i})\\times\\sum_{k=1}^{m}\\frac{\\partial F(\\overbrace{x' + \\frac{k}{m}\\times(x - x')}^\\text{generate m interpolated images at k intervals})}{\\partial x_{i}} \\times \\frac{1}{m}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "y4r-ZrIIsdbI"
   },
   "source": [
    "The first step is to generate a [linear interpolation](https://en.wikipedia.org/wiki/Linear_interpolation) path between your known baseline and input images. You can think of the interpolated images as small steps in the feature space between your baseline and input images. These steps are represented by $\alpha$ in the original equation. You will revisit $\alpha$ in greater depth in the subsequent section *Compute integral approximation*, as its values are tied to your choice of integration approximation method.\n",
    "\n",
    "For now, you can use the handy `tf.linspace` function to generate a 1D `Tensor` of `m_steps+1` values at uniform intervals between 0 and 1 as an input to the `generate_path_inputs` function below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "I42mBKXyjcIc"
   },
   "outputs": [],
   "source": [
    "m_steps=20\n",
    "alphas = tf.linspace(start=0.0, stop=1.0, num=m_steps+1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "7SWLSFOHsbgh"
   },
   "outputs": [],
   "source": [
    "def generate_path_inputs(baseline,\n",
    "                         input,\n",
    "                         alphas):\n",
    "  \"\"\"Generate m interpolated inputs between baseline and input features.\n",
    "  Args:\n",
    "    baseline(Tensor): A 3D image tensor of floats with the shape \n",
    "      (img_height, img_width, 3).\n",
    "    input(Tensor): A 3D image tensor of floats with the shape \n",
    "      (img_height, img_width, 3).\n",
    "    alphas(Tensor): A 1D tensor of uniformly spaced floats with the shape \n",
    "      (m_steps + 1,).\n",
    "  Returns:\n",
    "    path_inputs(Tensor): A 4D tensor of floats with the shape \n",
    "      (m_steps + 1, img_height, img_width, 3).\n",
    "  \"\"\"\n",
    "  # Expand dimensions for vectorized computation of interpolations.\n",
    "  alphas_x = alphas[:, tf.newaxis, tf.newaxis, tf.newaxis]\n",
    "  baseline_x = tf.expand_dims(baseline, axis=0)\n",
    "  input_x = tf.expand_dims(input, axis=0) \n",
    "  delta = input_x - baseline_x\n",
    "  path_inputs = baseline_x +  alphas_x * delta\n",
    "  \n",
    "  return path_inputs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "s4zFzbUBj684"
   },
   "source": [
    "Generate interpolated images along a linear path at alpha intervals between a black baseline image and the example \"Giant Panda\" image."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "NgVx8swDQtTl"
   },
   "outputs": [],
   "source": [
    "path_inputs = generate_path_inputs(\n",
    "    baseline=name_baseline_tensors['Baseline Image: Black'], \n",
    "    input=img_name_tensors['Giant Panda'],\n",
    "    alphas=alphas)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "path_inputs.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "QABFsuCvkO1h"
   },
   "source": [
    "The interpolated images are visualized below. Note that another way of thinking about the $\\alpha$ constant is that it is monotonically and consistently increasing each interpolated image's intensity."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 346
    },
    "colab_type": "code",
    "id": "j9hqBz-dSDd2",
    "outputId": "0201cbb6-7d48-4697-d971-77787b2b12b4"
   },
   "outputs": [],
   "source": [
    "fig, axs = plt.subplots(nrows=1, ncols=5, squeeze=False, figsize=(24, 24))\n",
    "\n",
    "axs[0,0].set_title('Baseline \\n alpha: {:.2f}'.format(alphas[0]))\n",
    "axs[0,0].imshow(path_inputs[0])\n",
    "axs[0,0].axis('off')\n",
    "\n",
    "axs[0,1].set_title('=> Interpolated Image # 1 \\n alpha: {:.2f}'.format(alphas[1]))\n",
    "axs[0,1].imshow(path_inputs[1])\n",
    "axs[0,1].axis('off')\n",
    "\n",
    "axs[0,2].set_title('=> Interpolated Image # 2 \\n alpha: {:.2f}'.format(alphas[2]))\n",
    "axs[0,2].imshow(path_inputs[2])\n",
    "axs[0,2].axis('off')\n",
    "\n",
    "axs[0,3].set_title('... => Interpolated Image # 10 \\n alpha: {:.2f}'.format(alphas[10]))\n",
    "axs[0,3].imshow(path_inputs[10])\n",
    "axs[0,3].axis('off')\n",
    "\n",
    "axs[0,4].set_title('... => Input Image \\n alpha: {:.2f}'.format(alphas[-1]))\n",
    "axs[0,4].imshow(path_inputs[-1])\n",
    "axs[0,4].axis('off')\n",
    "\n",
    "plt.tight_layout();"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "h7T0f1cqsaxA"
   },
   "source": [
    "#### Compute gradients"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "tps0eWc0REqL"
   },
   "source": [
    "Now that you have generated 20 interpolated images between a black baseline and your example \"Giant Panda\" photo, let's take a look at how to calculate [gradients](https://en.wikipedia.org/wiki/Gradient) to measure the relationship between changes to your feature pixels and changes in your model's predictions.\n",
    "\n",
    "\n",
    "The gradient of $F$, your Inception V1 model function, represents the direction of maximum increase in your model's predictions with respect to your input. In the case of images, the gradient tells you which pixels have the steepest local effect on your model's predicted class probabilities."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "ouuVIsdfgukW"
   },
   "source": [
    "$IntegratedGrads^{approx}_{i}(x)::=(x_{i}-x'_{i})\\times\\sum_{k=1}^{m}\\frac{\\overbrace{\\partial F(\\text{interpolated images})}^\\text{Compute gradients}}{\\partial x_{i}} \\times \\frac{1}{m}$\n",
    "\n",
    "where:  \n",
    "$F()$ = your model's prediction function  \n",
    "$\\frac{\\partial{F}}{\\partial{x_i}}$ = gradient (vector of partial derivatives $\\partial$) of your model F's prediction function relative to each feature $x_i$  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "hY_Ok3CoJW1W"
   },
   "source": [
    "TensorFlow 2.x makes computing gradients easy with the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context manager, which records operations so that their gradients can be computed efficiently."
   ]
  },
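  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of the mechanics (using a hypothetical toy scalar function rather than Inception V1), `tf.GradientTape` records the operations run inside its context so that `tape.gradient` can return the derivative of an output with respect to a watched input:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = tf.constant(3.0)\n",
    "with tf.GradientTape() as tape:\n",
    "  tape.watch(x)  # Constant tensors must be watched explicitly, as path_inputs is below.\n",
    "  y = x * x\n",
    "tape.gradient(y, x)  # dy/dx = 2x, evaluated at x = 3.0."
   ]
  },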
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "JW1O9qEsxZOP"
   },
   "outputs": [],
   "source": [
    "def compute_gradients(model, path_inputs, target_class_idx):\n",
    "  \"\"\"Compute gradients of model predicted probabilities with respect to inputs.\n",
    "  Args:\n",
    "    model(tf.keras.Model): Trained Keras model.\n",
    "    path_inputs(Tensor): A 4D tensor of floats with the shape \n",
    "      (m_steps, img_height, img_width, 3).\n",
    "    target_class_idx(Tensor): A 0D tensor of an int corresponding to the correct\n",
    "      ImageNet target class index.\n",
    "  Returns:\n",
    "    gradients(Tensor): A 4D tensor of floats with the shape \n",
    "      (m_steps, img_height, img_width, 3).\n",
    "  \"\"\"\n",
    "  with tf.GradientTape() as tape:\n",
    "    tape.watch(path_inputs)\n",
    "    predictions = model(path_inputs)\n",
    "    # Note: IG requires softmax probabilities; converting Inception V1 logits.\n",
    "    outputs = tf.nn.softmax(predictions, axis=-1)[:, target_class_idx]      \n",
    "  gradients = tape.gradient(outputs, path_inputs)\n",
    "\n",
    "  return gradients"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "9BfRuzx4-c87"
   },
   "source": [
    "Compute the gradients of your Inception V1 model's predicted probability for the target class on each interpolated image with respect to that interpolated input. Recall that your model returns a `(1, 1001)` shaped `Tensor` of logits that you will convert to predicted probabilities for every class. You need to pass the correct ImageNet target class index to the `compute_gradients` function below in order to identify the specific output tensor you wish to explain in relation to your input and baseline."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "kHIR58rNJ3q_"
   },
   "outputs": [],
   "source": [
    "path_gradients = compute_gradients(\n",
    "    model=inception_v1_classifier, \n",
    "    path_inputs=path_inputs, \n",
    "    target_class_idx=389)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "JJodSbDUQ3T_"
   },
   "source": [
    "Note the output shape `(n_interpolated_images, img_height, img_width, RGB)`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "SRqPLR3P-08b"
   },
   "source": [
    "Below you can see the local gradients visualized for the first 5 interpolated inputs relative to the input \"Giant Panda\" image as a series of ghostly shapes. You can think of these gradients as measuring the change in your model's predictions for each small step in the feature space. *The largest gradient magnitudes generally occur at the lowest alphas.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 319
    },
    "colab_type": "code",
    "id": "W_EVPinNTnSz",
    "outputId": "3ce213ee-a107-403c-c9df-d1ee4dfa4759"
   },
   "outputs": [],
   "source": [
    "fig, axs = plt.subplots(nrows=1, ncols=5, squeeze=False, figsize=(24, 24))\n",
    "for i in range(5):\n",
    "  axs[0,i].imshow(tf.cast(255 * path_gradients[i], tf.uint8), cmap=plt.cm.inferno)\n",
    "  axs[0,i].axis('off')\n",
    "plt.tight_layout()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "a5ki-WUCsfj-"
   },
   "source": [
    "**Why not just use gradients for attribution? Saturation**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "zmaPpr5bUtbr"
   },
   "source": [
    "You may be wondering at this point, why not just compute the gradients of the predictions with respect to the input as feature attributions? Why bother with slowly changing the intensity of the input image at all? The reason is that networks can *saturate*: the magnitude of the local feature gradients can become extremely small and go toward zero, even for important features. *The implication is that saturation can result in discontinuous feature importances and miss important features.*\n",
    "\n",
    "This concept is visualized in the 2 graphs below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 295
    },
    "colab_type": "code",
    "id": "mCH8sAf3TTJ2",
    "outputId": "223ca020-75bd-433f-c22d-f7c666750fa8"
   },
   "outputs": [],
   "source": [
    "pred = inception_v1_classifier(path_inputs)\n",
    "pred_proba = tf.nn.softmax(pred, axis=-1)[:, 389]\n",
    "\n",
    "plt.figure(figsize=(10,4))\n",
    "ax1 = plt.subplot(1,2,1)\n",
    "ax1.plot(alphas, pred_proba)\n",
    "ax1.set_title('Target class predicted probability over alpha')\n",
    "ax1.set_ylabel('model p(target class)')\n",
    "ax1.set_xlabel('alpha')\n",
    "ax1.set_ylim([0,1])\n",
    "\n",
    "ax2 = plt.subplot(1,2,2)\n",
    "# Average gradients across pixels at each interpolation step\n",
    "average_grads = tf.math.reduce_mean(path_gradients, axis=[1,2,3])\n",
    "# Normalize average gradients to 0 to 1 scale. E.g. (x - min(x))/(max(x)-min(x))\n",
    "average_grads_norm = (average_grads-tf.math.reduce_min(average_grads))/(tf.math.reduce_max(average_grads)-tf.reduce_min(average_grads))\n",
    "ax2.plot(alphas, average_grads_norm)\n",
    "ax2.set_title('Average pixel gradients (normalized) over alpha')\n",
    "ax2.set_ylabel('Average pixel gradients')\n",
    "ax2.set_xlabel('alpha')\n",
    "ax2.set_ylim([0,1]);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "-ntMpA87jNN6"
   },
   "source": [
    "Notice in the left plot above how the model prediction function quickly learns the correct \"Giant Panda\" class when alpha is between 0.0 and 0.3 and then largely flattens between 0.3 and 1.0. There could still be features that the model relies on for correct prediction that differ from the baseline, but the magnitudes of those feature gradients become very small and bounce around 0 from 0.3 to 1.0. \n",
    "\n",
    "Similarly, in the right plot of the average pixel gradients plotted over alpha, you can see the peak \"aha\" moment where the model learns the target \"Giant Panda\" class, but also that the gradient magnitudes quickly shrink toward 0 and even become discontinuous briefly around 0.6. In practice, this can cause gradient attributions to miss important features that differ between input and baseline and to focus on irrelevant features.\n",
    "\n",
    "**The beauty of IG is that it solves the problem of discontinuous gradient feature importances by taking small steps in the feature space to compute local gradients between predictions and inputs across the feature space and then averaging these gradients together to produce IG feature attributions.**\n"
   ]
  },
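  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the saturation intuition concrete, here is a small 1D sketch. It uses a hypothetical saturating function $f(x) = 1 - e^{-5x}$ with a baseline of 0 as a stand-in for the model, not Inception V1 itself. The local gradient at the input is nearly zero because the function has saturated, yet averaging the gradients along the interpolation path recovers nearly the full prediction difference $f(x) - f(x')$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def toy_f(x):  # Hypothetical saturating model output.\n",
    "  return 1.0 - tf.exp(-5.0 * x)\n",
    "\n",
    "def toy_grad(x):  # Analytic gradient of the toy model.\n",
    "  return 5.0 * tf.exp(-5.0 * x)\n",
    "\n",
    "toy_x = tf.constant(1.0)  # Input; the baseline is 0.0.\n",
    "\n",
    "# The local gradient at the input is tiny: the toy model has saturated there.\n",
    "print('Local gradient at input: {:.3f}'.format(float(toy_grad(toy_x))))\n",
    "\n",
    "# Averaging gradients along the path (trapezoidal, m_steps=50) recovers nearly\n",
    "# the full prediction difference between input and baseline.\n",
    "toy_alphas = tf.linspace(0.0, 1.0, 51)\n",
    "toy_grads = toy_grad(toy_alphas * toy_x)\n",
    "toy_avg = tf.reduce_mean((toy_grads[:-1] + toy_grads[1:]) / 2.0)\n",
    "print('IG attribution: {:.3f}'.format(float(toy_x * toy_avg)))\n",
    "print('f(input) - f(baseline): {:.3f}'.format(float(toy_f(toy_x) - toy_f(0.0))))"
   ]
  },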
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "LQdACCM6sJdW"
   },
   "source": [
    "#### Compute integral approximation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "QHopk9evmO5P"
   },
   "source": [
    "There are many different ways you can go about computing the numeric approximation of an integral for IG, with different tradeoffs in accuracy and convergence across varying functions. A popular class of methods is called [Riemann sums](https://en.wikipedia.org/wiki/Riemann_sum). For intuition, the code below visualizes the geometric interpretation of Left, Right, Midpoint, and Trapezoidal Riemann Sums:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "1oTeYgzOV6cE"
   },
   "outputs": [],
   "source": [
    "def plot_riemann_sums(fn, start_val, end_val, m_steps=10):\n",
    "  \"\"\"\n",
    "  Plot Riemann Sum integral approximations for single variable functions.\n",
    "  Args:\n",
    "    fn(function): Any single variable function.\n",
    "    start_val(int): Minimum function value constraint.\n",
    "    end_val(int): Maximum function value constraint.\n",
    "    m_steps(int): Linear interpolation steps for approximation.\n",
    "  Returns:\n",
    "    fig(matplotlib.pyplot.figure): fig object to utilize for displaying, saving \n",
    "      plots.\n",
    "  \"\"\"\n",
    "  # fn plot args\n",
    "  x = tf.linspace(start_val, end_val, m_steps**2+1)\n",
    "  y = fn(x)\n",
    "\n",
    "  fig = plt.figure(figsize=(16,4))\n",
    "\n",
    "  # Left Riemann Sum\n",
    "  lr_ax = plt.subplot(1,4,1)\n",
    "  lr_ax.plot(x, y)\n",
    "  lr_x = tf.linspace(0.0, 1.0, m_steps+1)\n",
    "  lr_point = lr_x[:-1]\n",
    "  lr_height = fn(lr_x[:-1])\n",
    "  lr_ax.plot(lr_point, lr_height, 'b.', markersize=10)\n",
    "  lr_ax.bar(lr_point, lr_height, width=(end_val-start_val)/m_steps, alpha=0.2, align='edge', edgecolor='b')\n",
    "  lr_ax.set_title('Left Riemann Sum \\n m_steps = {}'.format(m_steps))\n",
    "  lr_ax.set_xlabel('alpha')\n",
    "  # Right Riemann Sum\n",
    "  rr_ax = plt.subplot(1,4,2)\n",
    "  rr_ax.plot(x, y)\n",
    "  rr_x = tf.linspace(0.0, 1.0, m_steps+1)\n",
    "  rr_point = rr_x[1:]\n",
    "  rr_height = fn(rr_x[1:])\n",
    "  rr_ax.plot(rr_point, rr_height, 'b.', markersize=10)\n",
    "  rr_ax.bar(rr_point, rr_height, width=-(end_val-start_val)/m_steps, alpha=0.2, align='edge', edgecolor='b')\n",
    "  rr_ax.set_title('Right Riemann Sum \\n m_steps = {}'.format(m_steps))\n",
    "  rr_ax.set_xlabel('alpha')\n",
    "  # Midpoint Riemann Sum\n",
    "  mr_ax = plt.subplot(1,4,3)\n",
    "  mr_ax.plot(x, y)\n",
    "  mr_x = tf.linspace(0.0, 1.0, m_steps+1)\n",
    "  mr_point = (mr_x[:-1] + mr_x[1:])/2\n",
    "  mr_height = fn(mr_point)\n",
    "  mr_ax.plot(mr_point, mr_height, 'b.', markersize=10)\n",
    "  mr_ax.bar(mr_point, mr_height, width=(end_val-start_val)/m_steps, alpha=0.2, edgecolor='b')\n",
    "  mr_ax.set_title('Midpoint Riemann Sum \\n m_steps = {}'.format(m_steps))\n",
    "  mr_ax.set_xlabel('alpha')\n",
    "  # Trapezoidal Riemann Sum\n",
    "  tp_ax = plt.subplot(1,4,4)\n",
    "  tp_ax.plot(x, y)\n",
    "  tp_x = tf.linspace(0.0, 1.0, m_steps+1)\n",
    "  tp_y = fn(tp_x)\n",
    "  for i in range(m_steps):\n",
    "    xs = [tp_x[i], tp_x[i], tp_x[i+1], tp_x[i+1]]\n",
    "    ys = [0, tp_y[i], tp_y[i+1], 0]\n",
    "    tp_ax.plot(tp_x,tp_y,'b.',markersize=10)\n",
    "    tp_ax.fill_between(xs, ys, color='C0', edgecolor='blue', alpha=0.2)\n",
    "  tp_ax.set_title('Trapezoidal Riemann Sum \\n m_steps = {}'.format(m_steps))\n",
    "  tp_ax.set_xlabel('alpha')\n",
    "\n",
    "  return fig"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "ZgvuOJ8VhB35"
   },
   "source": [
    "Recall that a feature's gradient will vary in magnitude over the interpolated images between the baseline and input. You want to choose a method to best approximate the area of difference, also known as the [integral](https://en.wikipedia.org/wiki/Integral), between your baseline and input in the feature space. Let's consider the downward-arching function $y = \\sin(x\\pi)$ varying between 0 and 1 as a proxy for how a feature gradient could vary in magnitude and sign over different alphas. To implement IG, you care about approximation accuracy and convergence. Left, Right, and Midpoint Riemann Sums use rectangles to approximate areas under the function, while Trapezoidal Riemann Sums use trapezoids."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 310
    },
    "colab_type": "code",
    "id": "ebvkxHuWB7sX",
    "outputId": "c6dda5b0-a9d8-4831-f007-786e9f6eb7ff"
   },
   "outputs": [],
   "source": [
    "_ = plot_riemann_sums(lambda x: tf.math.sin(x*math.pi), 0.0, 1.0, m_steps=5)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 310
    },
    "colab_type": "code",
    "id": "HLGm7y4pCHcz",
    "outputId": "f9181fd1-9ebf-4c1d-de83-ab68f2f3e885"
   },
   "outputs": [],
   "source": [
    "_ = plot_riemann_sums(lambda x: tf.math.sin(x*math.pi), 0.0, 1.0, m_steps=10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Frdu5n68XNhN"
   },
   "source": [
    "**Which integral approximation method should you choose for IG?**\n",
    "\n",
    "From the Riemann sum plots above you can see that the Trapezoidal Riemann Sum clearly provides a more accurate approximation and converges more quickly over `m_steps` than the alternatives (there is less white space under the function left uncovered by the shapes). Consequently, it is presented as the default method in the code below, while the alternative methods are also shown for further study. Additional support for the Trapezoidal Riemann approximation for IG is presented in section 4 of [\"Computing Linear Restrictions of Neural Networks\"](https://arxiv.org/abs/1908.06214)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "kmaJAqxYR2ds"
   },
   "source": [
    "Let us return to the $\\alpha$ constant previously introduced in the *Generate interpolated path inputs* section for varying the intensity of the interpolated images between the baseline and input image. In the `generate_alphas` function below, you can see that $\\alpha$ changes with each approximation method to reflect different start and end points and the underlying geometric shape, either a rectangle or a trapezoid, used to approximate the integral area. The function takes a `method` parameter and an `m_steps` parameter that controls the accuracy of the integral approximation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "QfaQp33rU0BC"
   },
   "outputs": [],
   "source": [
    "def generate_alphas(m_steps=50,\n",
    "                    method='riemann_trapezoidal'):\n",
    "  \"\"\"\n",
    "  Args:\n",
    "    m_steps(Tensor): A 0D tensor of an int corresponding to the number of linear\n",
    "      interpolation steps for computing an approximate integral. Default is 50.\n",
    "    method(str): A string representing the integral approximation method. The \n",
    "      following methods are implemented:\n",
    "      - riemann_trapezoidal(default)\n",
    "      - riemann_left\n",
    "      - riemann_midpoint\n",
    "      - riemann_right\n",
    "  Returns:\n",
    "    alphas(Tensor): A 1D tensor of uniformly spaced floats with the shape \n",
    "      (m_steps,).\n",
    "  \"\"\"\n",
    "  m_steps_float = tf.cast(m_steps, float) # cast to float for division operations.\n",
    "\n",
    "  if method == 'riemann_trapezoidal':\n",
    "    alphas = tf.linspace(0.0, 1.0, m_steps+1) # needed to make m_steps intervals.\n",
    "  elif method == 'riemann_left':\n",
    "    alphas = tf.linspace(0.0, 1.0 - (1.0 / m_steps_float), m_steps)\n",
    "  elif method == 'riemann_midpoint':\n",
    "    alphas = tf.linspace(1.0 / (2.0 * m_steps_float), 1.0 - 1.0 / (2.0 * m_steps_float), m_steps)\n",
    "  elif method == 'riemann_right':    \n",
    "    alphas = tf.linspace(1.0 / m_steps_float, 1.0, m_steps)\n",
    "  else:\n",
    "    raise AssertionError(\"Provided Riemann approximation method is not valid.\")\n",
    "\n",
    "  return alphas"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "alphas = generate_alphas(m_steps=20, method='riemann_trapezoidal')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "alphas.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "GshPZQgROs80"
   },
   "source": [
    "$IntegratedGrads^{approx}_{i}(x)::=(x_{i}-x'_{i})\\times \\overbrace{\\sum_{k=1}^{m}}^\\text{4. Sum m local gradients}\n",
    "\\text{gradients(interpolated images)} \\times \\overbrace{\\frac{1}{m}}^\\text{4. Divide by m steps}$\n",
    "\n",
    "From the equation, you can see you are summing over m gradients and dividing by m steps. You can implement the two operations together for step 4 as an *average of the local gradients across the m interpolated images*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "1cMVl-Grx3lp"
   },
   "outputs": [],
   "source": [
    "def integral_approximation(gradients, \n",
    "                           method='riemann_trapezoidal'):\n",
    "  \"\"\"Compute numerical approximation of integral from gradients.\n",
    "\n",
    "  Args:\n",
    "    gradients(Tensor): A 4D tensor of floats with the shape \n",
    "      (m_steps, img_height, img_width, 3).\n",
    "    method(str): A string representing the integral approximation method. The \n",
    "      following methods are implemented:\n",
    "      - riemann_trapezoidal(default)\n",
    "      - riemann_left\n",
    "      - riemann_midpoint\n",
    "      - riemann_right \n",
    "  Returns:\n",
    "    integrated_gradients(Tensor): A 3D tensor of floats with the shape\n",
    "      (img_height, img_width, 3).\n",
    "  \"\"\"\n",
    "  if method == 'riemann_trapezoidal':  \n",
    "    grads = (gradients[:-1] + gradients[1:]) / tf.constant(2.0)\n",
    "  elif method == 'riemann_left':\n",
    "    grads = gradients\n",
    "  elif method == 'riemann_midpoint':\n",
    "    grads = gradients\n",
    "  elif method == 'riemann_right':    \n",
    "    grads = gradients\n",
    "  else:\n",
    "    raise AssertionError(\"Provided Riemann approximation method is not valid.\")\n",
    "\n",
    "  # Average integration approximation.\n",
    "  integrated_gradients = tf.math.reduce_mean(grads, axis=0)\n",
    "\n",
    "\n",
    "  return integrated_gradients"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "QVQAHunkW79t"
   },
   "source": [
    "The `integral_approximation` function takes the gradients of the predicted probability of the \"Giant Panda\" class with respect to the interpolated images between the baseline and \"Giant Panda\" image."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "JeF01fydNq0I"
   },
   "outputs": [],
   "source": [
    "ig = integral_approximation(\n",
    "    gradients=path_gradients,\n",
    "    method='riemann_trapezoidal')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "7XVItLpurOAM"
   },
   "source": [
    "You can confirm averaging across the gradients of m interpolated images returns an integrated gradients tensor with the same shape as the original \"Giant Panda\" image."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 34
    },
    "colab_type": "code",
    "id": "z1bP6l3ahfyn",
    "outputId": "830673a2-9a00-4226-da36-621cb90560e9"
   },
   "outputs": [],
   "source": [
    "ig.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "C1ODXUevyGxL"
   },
   "source": [
    "#### Putting it all together"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "NcaTR-x8v1At"
   },
   "source": [
    "Now you will combine the previous steps together into an `integrated_gradients` function. To recap: "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "5YdWHscoovhk"
   },
   "source": [
    "$IntegratedGrads^{approx}_{i}(x)::=\\overbrace{(x_{i}-x'_{i})}^\\text{5.}\\times \\overbrace{\\sum_{k=1}^{m}}^\\text{4.} \\frac{\\partial \\overbrace{F(\\overbrace{x' + \\overbrace{\\frac{k}{m}}^\\text{1.}\\times(x - x'))}^\\text{2.}}^\\text{3.}}{\\partial x_{i}} \\times \\overbrace{\\frac{1}{m}}^\\text{4.}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "pjdfjp_3pHiY"
   },
   "source": [
    "1. Generate alphas $\\alpha$\n",
    "\n",
    "2. Generate interpolated path inputs = $(x' + \\frac{k}{m}\\times(x - x'))$\n",
    "\n",
    "3. Compute gradients between model output predictions with respect to input features = $\\frac{\\partial F(\\text{interpolated path inputs})}{\\partial x_{i}}$\n",
    "\n",
    "4. Integral approximation through averaging = $\\sum_{k=1}^m \\text{gradients} \\times \\frac{1}{m}$\n",
    "\n",
    "5. Scale integrated gradients with respect to original image = $(x_{i}-x'_{i}) \\times \\text{average gradients}$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "O_H3k9Eu7Rl5"
   },
   "outputs": [],
   "source": [
    "@tf.function\n",
    "def integrated_gradients(model,\n",
    "                         baseline, \n",
    "                         input,  \n",
    "                         target_class_idx,\n",
    "                         m_steps=50,\n",
    "                         method='riemann_trapezoidal',\n",
    "                         batch_size=32\n",
    "                        ):\n",
    "  \"\"\"\n",
    "  Args:\n",
    "    model(keras.Model): A trained model to generate predictions and inspect.\n",
    "    baseline(Tensor): A 3D image tensor with the shape \n",
    "      (image_height, image_width, 3) with the same shape as the input tensor.\n",
    "    input(Tensor): A 3D image tensor with the shape \n",
    "      (image_height, image_width, 3).\n",
    "    target_class_idx(Tensor): An integer that corresponds to the correct \n",
    "      ImageNet class index in the model's output predictions tensor.\n",
    "    m_steps(Tensor): A 0D tensor of an integer corresponding to the number of \n",
    "      linear interpolation steps for computing an approximate integral. Default \n",
    "      value is 50 steps.\n",
    "    method(str): A string representing the integral approximation method. The \n",
    "      following methods are implemented:\n",
    "      - riemann_trapezoidal(default)\n",
    "      - riemann_left\n",
    "      - riemann_midpoint\n",
    "      - riemann_right\n",
    "    batch_size(Tensor): A 0D tensor of an integer corresponding to a batch\n",
    "      size for alpha to scale computation and prevent OOM errors. Note: needs to\n",
    "      be tf.int64 and should be < m_steps. Default value is 32.\n",
    "  Returns:\n",
    "    integrated_gradients(Tensor): A 3D tensor of floats with the same \n",
    "      shape as the input tensor (image_height, image_width, 3).\n",
    "  \"\"\"\n",
    "\n",
    "  # 1. Generate alphas.\n",
    "  alphas = generate_alphas(m_steps=m_steps,\n",
    "                           method=method)\n",
    "\n",
    "  # Initialize TensorArray outside loop to collect gradients. Note: this data structure\n",
    "  # is similar to a Python list but more performant and supports backpropagation.\n",
    "  # See https://www.tensorflow.org/api_docs/python/tf/TensorArray for additional details.\n",
    "  gradient_batches = tf.TensorArray(tf.float32, size=m_steps+1)\n",
    "\n",
    "  # Iterate alphas range and batch computation for speed, memory efficiency, and scaling to larger m_steps.\n",
    "  # Note: this implementation opted for lightweight tf.range iteration with @tf.function.\n",
    "  # Alternatively, you could also use tf.data, which adds performance overhead for the IG \n",
    "  # algorithm but provides more functionality for working with tensors and image data pipelines.\n",
    "  for alpha in tf.range(0, len(alphas), batch_size):\n",
    "    from_ = alpha\n",
    "    to = tf.minimum(from_ + batch_size, len(alphas))\n",
    "    alpha_batch = alphas[from_:to]\n",
    "\n",
    "    # 2. Generate interpolated inputs between baseline and input.\n",
    "    interpolated_path_input_batch = generate_path_inputs(baseline=baseline,\n",
    "                                                         input=input,\n",
    "                                                         alphas=alpha_batch)\n",
    "\n",
    "    # 3. Compute gradients between model outputs and interpolated inputs.\n",
    "    gradient_batch = compute_gradients(model=model,\n",
    "                                       path_inputs=interpolated_path_input_batch,\n",
    "                                       target_class_idx=target_class_idx)\n",
    "    \n",
    "    # Write batch indices and gradients to TensorArray. Note: writing batch indices with\n",
    "    # scatter() allows for uneven batch sizes. Note: this operation is similar to a Python list extend().\n",
    "    # See https://www.tensorflow.org/api_docs/python/tf/TensorArray#scatter for additional details.\n",
    "    gradient_batches = gradient_batches.scatter(tf.range(from_, to), gradient_batch)    \n",
    "  \n",
    "  # Stack path gradients together row-wise into single tensor.\n",
    "  total_gradients = gradient_batches.stack()\n",
    "    \n",
    "  # 4. Integral approximation through averaging gradients.\n",
    "  avg_gradients = integral_approximation(gradients=total_gradients,\n",
    "                                         method=method)\n",
    "    \n",
    "  # 5. Scale integrated gradients with respect to input.\n",
    "  integrated_gradients = (input - baseline) * avg_gradients\n",
    "\n",
    "  return integrated_gradients"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "8G0ELl_wRrd0"
   },
   "outputs": [],
   "source": [
    "ig_attributions = integrated_gradients(model=inception_v1_classifier,\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Black'],\n",
    "                          input=img_name_tensors['Giant Panda'],\n",
    "                          target_class_idx=389,\n",
    "                          m_steps=55,\n",
    "                          method='riemann_trapezoidal')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ig_attributions.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "P-LSHD2sajFf"
   },
   "source": [
    "Again, you can check that the IG feature attributions have the same shape as the input \"Giant Panda\" image."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "bhoKWJqiGKgn"
   },
   "source": [
    "### Step 4: checks to pick number of steps for IG approximation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Vfj0CkaZXLPb"
   },
   "source": [
    "One of IG's nice theoretical properties is **completeness**. It is desirable because it guarantees that the IG feature attributions account for the model's entire output prediction: each feature importance score captures that feature's individual contribution to the prediction, and when added together, the attributions recover the full difference between the model's prediction on the input and on the baseline. This provides a principled means to select the `m_steps` hyperparameter for IG.\n",
    "\n",
    "$\\sum_{i}{IntegratedGrads_{i}(x)} = F(x) - F(x')$\n",
    "\n",
    "where:\n",
    "\n",
    "$F(x)$ = model's predictions on input at target class  \n",
    "$F(x')$ = model's predictions on baseline at target class\n",
    "\n",
    "You can translate this formula into a numeric score, with 0 representing convergence, as follows:\n",
    "\n",
    "$\\delta = \\sum_{i}{IntegratedGrads_{i}(x)} - (F(x) - F(x'))$"
   ]
  },
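  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a small 1D illustration of how $\\delta$ behaves, the sketch below uses a hypothetical saturating function $f(x) = 1 - e^{-5x}$ with a baseline of 0 as a stand-in for the model (not Inception V1), computing IG with analytic gradients. The convergence delta shrinks toward 0 as `m_steps` grows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def demo_f(x):  # Hypothetical saturating model output; the baseline is 0.0.\n",
    "  return 1.0 - tf.exp(-5.0 * x)\n",
    "\n",
    "def demo_ig(x, m_steps):  # 1D trapezoidal IG using analytic gradients.\n",
    "  alphas = tf.linspace(0.0, 1.0, m_steps + 1)\n",
    "  grads = 5.0 * tf.exp(-5.0 * alphas * x)\n",
    "  return x * tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0)\n",
    "\n",
    "demo_x = 1.0\n",
    "demo_target = demo_f(demo_x) - demo_f(0.0)  # Completeness target: F(x) - F(x').\n",
    "for m in (5, 20, 50, 200):\n",
    "  delta = float(demo_ig(demo_x, m) - demo_target)\n",
    "  print('m_steps = {:>3}, delta = {:.5f}'.format(m, delta))"
   ]
  },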
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "ieU835ooUQs5"
   },
   "source": [
    "\n",
    "The original paper suggests between 20 and 300 steps for the integral approximation, depending upon the example and application. In practice, this can vary up to a few thousand `m_steps` to achieve an integral approximation within 5% error of the actual integral. Visual convergence of the results can generally be achieved with far fewer steps."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "9X8DA3pTR_hP"
   },
   "outputs": [],
   "source": [
    "def convergence_check(model, attributions, baseline, input, target_class_idx):\n",
    "  \"\"\"\n",
    "  Args:\n",
    "    model(keras.Model): A trained model to generate predictions and inspect.\n",
    "    attributions(Tensor): A 3D tensor of IG feature attributions with the \n",
    "      shape (image_height, image_width, 3).\n",
    "    baseline(Tensor): A 3D image tensor with the shape \n",
    "      (image_height, image_width, 3) with the same shape as the input tensor.\n",
    "    input(Tensor): A 3D image tensor with the shape \n",
    "      (image_height, image_width, 3).\n",
    "    target_class_idx(Tensor): An integer that corresponds to the correct \n",
    "      ImageNet class index in the model's output predictions tensor.\n",
    "  Returns:\n",
    "    (none): Prints scores and convergence delta to sys.stdout.\n",
    "  \"\"\"\n",
    "  # Your model's prediction on the baseline tensor. Ideally, the baseline score\n",
    "  # should be close to zero.\n",
    "  baseline_prediction = model(tf.expand_dims(baseline, 0))\n",
    "  baseline_score = tf.nn.softmax(tf.squeeze(baseline_prediction))[target_class_idx]\n",
    "  # Your model's prediction and score on the input tensor.\n",
    "  input_prediction = model(tf.expand_dims(input, 0))\n",
    "  input_score = tf.nn.softmax(tf.squeeze(input_prediction))[target_class_idx]\n",
    "  # Sum of your IG prediction attributions.\n",
    "  ig_score = tf.math.reduce_sum(attributions)\n",
    "  delta = ig_score - (input_score - baseline_score)\n",
    "  try:\n",
    "    # Check that the summed IG attributions are within 5% (rtol) of the \n",
    "    # input minus baseline score.\n",
    "    tf.debugging.assert_near(ig_score, (input_score - baseline_score), rtol=0.05)\n",
    "    tf.print('Approximation accuracy within 5%.', output_stream=sys.stdout)\n",
    "  except tf.errors.InvalidArgumentError:\n",
    "    tf.print('Increase m_steps to improve approximation accuracy.', output_stream=sys.stdout)\n",
    "  \n",
    "  # Pass tensors directly to tf.print; str.format cannot apply a numeric \n",
    "  # format spec to an EagerTensor.\n",
    "  tf.print('Baseline score:', baseline_score)\n",
    "  tf.print('Input score:', input_score)\n",
    "  tf.print('IG score:', ig_score)\n",
    "  tf.print('Convergence delta:', delta)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "convergence_check(model=inception_v1_classifier,\n",
    "                  attributions=ig_attributions, \n",
    "                  baseline=name_baseline_tensors['Baseline Image: Black'], \n",
    "                  input=img_name_tensors['Giant Panda'], \n",
    "                  target_class_idx=389)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "_LD7Qw9ub0Vd"
   },
   "source": [
    "Using the completeness axiom and the `convergence_check` function above, you can verify that about 50 steps are enough to approximate feature importances within 5% error for the \"Giant Panda\" image."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "o55W6NYXGSZ8"
   },
   "source": [
    "### Step 5: visualize IG attributions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "XSQ6Y-DZvrQu"
   },
   "source": [
    "Finally, you are ready to visualize IG attributions. The plotting code below sums the absolute values of the IG attributions across the color channels to produce a greyscale attribution mask, suitable both for standalone visualization and for overlaying on the original image. This method captures the relative impact of pixels on the model's predictions well. Another visualization option to try is to preserve the sign of the gradients (positive or negative) on different channels, which more accurately represents how the features might combine."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "4QN2cEA_WFym"
   },
   "outputs": [],
   "source": [
    "def plot_img_attributions(model,\n",
    "                          baseline,                          \n",
    "                          img,  \n",
    "                          target_class_idx,\n",
    "                          m_steps=50,                           \n",
    "                          cmap=None,\n",
    "                          overlay_alpha=0.4):\n",
    "  \"\"\"\n",
    "  Args:\n",
    "    model(keras.Model): A trained model to generate predictions and inspect.\n",
    "    baseline(Tensor): A 3D image tensor with the shape \n",
    "      (image_height, image_width, 3) with the same shape as the input tensor.\n",
    "    img(Tensor): A 3D image tensor with the shape \n",
    "      (image_height, image_width, 3).\n",
    "    target_class_idx(Tensor): An integer that corresponds to the correct \n",
    "      ImageNet class index in the model's output predictions tensor.\n",
    "    m_steps(Tensor): A 0D tensor of an integer corresponding to the number of \n",
    "      linear interpolation steps for computing an approximate integral. \n",
    "      Defaults to 50.\n",
    "    cmap(matplotlib.cm): Defaults to None. Reference for colormap options -\n",
    "      https://matplotlib.org/3.2.1/tutorials/colors/colormaps.html. Interesting\n",
    "      options to try are None and high contrast 'inferno'.\n",
    "    overlay_alpha(float): A float between 0 and 1 that represents the intensity\n",
    "      of the original image overlay.    \n",
    "  Returns:\n",
    "    fig(matplotlib.pyplot.figure): fig object to utilize for displaying, saving \n",
    "      plots.\n",
    "  \"\"\"\n",
    "  # Attributions\n",
    "  ig_attributions = integrated_gradients(model=model,\n",
    "                          baseline=baseline,\n",
    "                          input=img,\n",
    "                          target_class_idx=target_class_idx,\n",
    "                          m_steps=m_steps)\n",
    "\n",
    "  convergence_check(model, ig_attributions, baseline, img, target_class_idx)\n",
    "  \n",
    "  # Per the original paper, take the absolute sum of the attributions across \n",
    "  # color channels for visualization. The attribution mask shape is a greyscale image\n",
    "  # with shape (224, 224).\n",
    "  attribution_mask = tf.reduce_sum(tf.math.abs(ig_attributions), axis=-1)\n",
    "\n",
    "  # Visualization\n",
    "  fig, axs = plt.subplots(nrows=2, ncols=2, squeeze=False, figsize=(8, 8))\n",
    "\n",
    "  axs[0,0].set_title('Baseline Image')\n",
    "  axs[0,0].imshow(baseline)\n",
    "  axs[0,0].axis('off')\n",
    "\n",
    "  axs[0,1].set_title('Original Image')\n",
    "  axs[0,1].imshow(img)\n",
    "  axs[0,1].axis('off') \n",
    "\n",
    "  axs[1,0].set_title('IG Attribution Mask')\n",
    "  axs[1,0].imshow(attribution_mask, cmap=cmap)\n",
    "  axs[1,0].axis('off')  \n",
    "\n",
    "  axs[1,1].set_title('Original + IG Attribution Mask Overlay')\n",
    "  axs[1,1].imshow(attribution_mask, cmap=cmap)\n",
    "  axs[1,1].imshow(img, alpha=overlay_alpha)\n",
    "  axs[1,1].axis('off')\n",
    "\n",
    "  plt.tight_layout()\n",
    "\n",
    "  return fig"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "n73VxzbxeMvD"
   },
   "source": [
    "Visual inspection of the IG attributions on the \"Fireboat\" image shows that Inception V1 identifies the water cannons and spouts as contributing to its correct prediction."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 585
    },
    "colab_type": "code",
    "id": "vxCQFx96iDVs",
    "outputId": "dc7610bd-c637-4761-88b7-fc7e6afb0a56"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_attributions(model=inception_v1_classifier,\n",
    "                          img=img_name_tensors['Fireboat'],\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Black'],                          \n",
    "                          target_class_idx=555,\n",
    "                          m_steps=240,\n",
    "                          cmap=plt.cm.inferno,\n",
    "                          overlay_alpha=0.4)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "vfDxeuk9552f"
   },
   "source": [
    "IG attributions on the \"School Bus\" image highlight the shape, front lighting, and front stop sign."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 585
    },
    "colab_type": "code",
    "id": "S5ahYTXNhnxx",
    "outputId": "ea3ac12f-7a69-44ed-9bc2-ca21a882118d"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_attributions(model=inception_v1_classifier,\n",
    "                          img=img_name_tensors['School Bus'],\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Black'],                          \n",
    "                          target_class_idx=780,\n",
    "                          m_steps=100,\n",
    "                          cmap=None,\n",
    "                          overlay_alpha=0.2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Lo4SncDZfTw0"
   },
   "source": [
    "Returning to the \"Giant Panda\" image, IG attributions highlight the texture, nose shape, and white fur of the panda's face."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 585
    },
    "colab_type": "code",
    "id": "TcpGLJWuHnYl",
    "outputId": "1403d7c1-7ccd-468c-cc55-d82d0b688873"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_attributions(model=inception_v1_classifier,\n",
    "                          img=img_name_tensors['Giant Panda'],\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Black'],                          \n",
    "                          target_class_idx=389,\n",
    "                          m_steps=55,\n",
    "                          cmap=None,\n",
    "                          overlay_alpha=0.5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "7F367uG_WzQQ"
   },
   "source": [
    "### How do different baselines impact interpretation of IG attributions?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "ylCa5fbxlz-N"
   },
   "source": [
    "In the section **Step 2: Establish baseline to compare against inputs**, the explanation from the original IG paper and discussion recommended a black baseline image to \"ignore\" and allow for interpretation of the predictions solely as a function of the input pixels. To motivate the choice of a black baseline image for interpretation, let's take a look at how a random baseline influences IG attributions."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "fZguYzcVYlBQ"
   },
   "source": [
    "Recall from above that with a black baseline on the fireboat image, the IG attributions were primarily focused on the right water cannon. With a random baseline, the interpretation is much less clear. The IG attribution mask below shows a hazy attribution cloud of varying pixel intensity around the entire region of the water cannon streams. Are these truly significant features identified by the model, or artifacts of random dark pixels from the random baseline? That is inconclusive without more investigation. The random baseline changes the interpretation of the pixel intensities from being solely in relation to the input features to input features plus spurious attributions from the baseline."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 585
    },
    "colab_type": "code",
    "id": "aUJwzFG4jHgX",
    "outputId": "d5fddb60-bbdc-442f-ee12-dc213c6fe328"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_attributions(model=inception_v1_classifier,\n",
    "                          img=img_name_tensors['Fireboat'],\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Random'],                          \n",
    "                          target_class_idx=555,\n",
    "                          m_steps=240,\n",
    "                          cmap=None,\n",
    "                          overlay_alpha=0.3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "jeiUtM875tT6"
   },
   "source": [
    "Returning to the school bus image, a black baseline strongly highlighted the school bus shape and stop sign as distinguishing features. In contrast, a random noise baseline makes interpretation of the IG attribution mask significantly more difficult. In particular, this attribution mask would wrongly lead you to believe that the model found a small area of pixels along the side of the bus significant."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 585
    },
    "colab_type": "code",
    "id": "uzY_9oeSjbn2",
    "outputId": "0a578103-4224-40ae-f242-463120af619d"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_attributions(model=inception_v1_classifier,\n",
    "                          img=img_name_tensors['School Bus'],\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Random'],                          \n",
    "                          target_class_idx=780,\n",
    "                          m_steps=100,\n",
    "                          cmap=None,\n",
    "                          overlay_alpha=0.3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "-KwaT2FfJTzK"
   },
   "source": [
    "**Are there any scenarios where you prefer a non-black baseline? Yes.** \n",
    "\n",
    "Consider the photo below of an all-black beetle on a white background. With a black baseline, the beetle receives almost no pixel attribution; IG highlights only the small bright portions of the beetle caused by glare, some spurious background pixels, and the colored leg pixels. *For this example, black pixels are meaningful and do not provide an uninformative baseline.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 585
    },
    "colab_type": "code",
    "id": "uRNTWw6R6K_s",
    "outputId": "f005edef-d7a7-4ed7-a9e8-a5f73a12f676"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_attributions(model=inception_v1_classifier,\n",
    "                          img=img_name_tensors['Black Beetle'],\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Black'],                          \n",
    "                          target_class_idx=307,\n",
    "                          m_steps=200,\n",
    "                          cmap=None,\n",
    "                          overlay_alpha=0.3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "8ZX_LHr8cJaS"
   },
   "source": [
    "A white baseline is a better contrastive choice here to highlight the important pixels on the beetle."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 585
    },
    "colab_type": "code",
    "id": "GmUyLVZXbmBG",
    "outputId": "e423f9fd-d77b-4e66-9f64-705c9c74c514"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_attributions(model=inception_v1_classifier,\n",
    "                          img=img_name_tensors['Black Beetle'],\n",
    "                          baseline=name_baseline_tensors['Baseline Image: White'],                          \n",
    "                          target_class_idx=307,\n",
    "                          m_steps=200,\n",
    "                          cmap=None,\n",
    "                          overlay_alpha=0.3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "7CkU6rHGS7nF"
   },
   "source": [
    "Ultimately, picking any constant color baseline has potential interpretation problems from visual inspection alone, without consideration of the underlying values and their signs. Baseline selection is still an area of active research, with various proposals (e.g. averaging multiple random baselines, blurred inputs) discussed in depth in the distill.pub article [Visualizing the Impact of Feature Attribution Baselines](https://distill.pub/2020/attribution-baselines/)."
   ]
  },
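  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The averaging idea can be sketched schematically. Here `toy_attributions` is a stand-in (an assumption for illustration) for a real call such as `integrated_gradients(...)`; the loop-and-average structure is the point:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "def toy_attributions(img, baseline):\n",
    "  # Placeholder for real IG attributions w.r.t. a given baseline.\n",
    "  return img - baseline\n",
    "\n",
    "img = rng.random((4, 4, 3))\n",
    "n_baselines = 16\n",
    "attr_sum = np.zeros_like(img)\n",
    "for _ in range(n_baselines):\n",
    "  baseline = rng.random(img.shape)  # fresh random baseline each pass\n",
    "  attr_sum += toy_attributions(img, baseline)\n",
    "avg_attributions = attr_sum / n_baselines\n",
    "print(avg_attributions.shape)\n",
    "```\n",
    "\n",
    "Averaging over many random baselines smooths out the spurious per-baseline attributions discussed above, at the cost of roughly `n_baselines` times the compute."
   ]
  },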
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "_J-XtD47lICs"
   },
   "source": [
    "## Use cases"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "zrmcn_uHtVIB"
   },
   "source": [
    "IG is a model-agnostic interpretability method that can be applied to any differentiable model (e.g. neural networks) to understand its predictions in terms of its input features; whether they be images, video, text, or structured data.\n",
    "\n",
    "**At Google, IG has been applied in 20+ product areas to recommender system, classification, and regression models for feature importance and selection, model error analysis, train-test data skew monitoring, and explaining model behavior to stakeholders.**\n",
    "\n",
    "The subsections below present a non-exhaustive list of the most common use cases for IG biased toward production machine learning workflows."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "XiACh_tYvFMi"
   },
   "source": [
    "### Use case: understanding feature importances"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "1i6azOGmpArS"
   },
   "source": [
    "IG relative feature importances give both model builders and stakeholders a better understanding of your model's learned features, provide insight into the underlying data it was trained on, and offer a basis for feature selection. Let's take a look at an example of how IG relative feature importances can provide insight into the underlying input data."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "NyXYyANp2fhS"
   },
   "source": [
    "**What is the difference between a Golden Retriever and Labrador Retriever?**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "JiIPPlrzv4fx"
   },
   "source": [
    "Consider again the example images of the [Golden Retriever](https://en.wikipedia.org/wiki/Golden_Retriever) and the Yellow [Labrador Retriever](https://en.wikipedia.org/wiki/Labrador_Retriever) below. If you are not a domain expert familiar with these breeds, you might reasonably conclude these are 2 images of the same type of dog. They both have similar face and body shapes as well as coloring."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "tL_XB5vi-y4k"
   },
   "source": [
    "Your model, Inception V1, already correctly identifies a Golden Retriever and a Labrador Retriever. In fact, it is quite confident about the Golden Retriever in the top image, even though there is a bit of lingering doubt about the Labrador Retriever, which appears in prediction #4. In comparison, the model is relatively less confident about its correct prediction of the Labrador Retriever in the second image and also sees some similarity with the Golden Retriever, which makes an appearance in the top 5 predictions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 639
    },
    "colab_type": "code",
    "id": "1YEdi-0s8lvU",
    "outputId": "c107e97d-991d-4377-9736-4ef1d1ef41aa"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_predictions(\n",
    "    model=inception_v1_classifier,\n",
    "    img=tf.stack([img_name_tensors['Golden Retriever'], \n",
    "                  img_name_tensors['Yellow Labrador Retriever']]),\n",
    "    img_titles=tf.stack(['Golden Retriever', \n",
    "                         'Yellow Labrador Retriever']), \n",
    "    label_vocab=imagenet_label_vocab, \n",
    "    top_k=5\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "kw-YkX5Xyq_o"
   },
   "source": [
    "Without any prior understanding of how to differentiate these dogs or the features to do so, what can you learn from IG's feature importances?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "KqsPWTZ_7qW7"
   },
   "source": [
    "Review the Golden Retriever IG attribution mask and IG overlay of the original image below. Notice how the pixel intensities are primarily highlighted on the face and shape of the dog but are brightest on the front and back legs and tail, in areas of *lengthy and wavy fur*. A quick Google search validates that this is indeed a key distinguishing feature of Golden Retrievers compared to Labrador Retrievers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 585
    },
    "colab_type": "code",
    "id": "ahKsV1t8Bq87",
    "outputId": "7fc806a0-d165-49c3-b799-d473d0b8a34f"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_attributions(model=inception_v1_classifier,\n",
    "                          img=img_name_tensors['Golden Retriever'],\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Black'],                          \n",
    "                          target_class_idx=208,\n",
    "                          m_steps=200,\n",
    "                          cmap=None,\n",
    "                          overlay_alpha=0.3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "toLalXjVEMmC"
   },
   "source": [
    "Comparatively, IG also highlights the face and body shape of the Labrador Retriever with a density of bright pixels on its *straight and short hair coat*. This provides additional evidence toward the length and texture of the coats being key differentiators between these 2 breeds."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 585
    },
    "colab_type": "code",
    "id": "HhgExu4zEF8d",
    "outputId": "888c4e24-d0bf-4bbf-f4f4-ef18443ed68b"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_attributions(model=inception_v1_classifier,\n",
    "                          img=img_name_tensors['Yellow Labrador Retriever'],\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Black'],                          \n",
    "                          target_class_idx=209,\n",
    "                          m_steps=100,\n",
    "                          cmap=None,\n",
    "                          overlay_alpha=0.3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "1-2pKdoRkBEe"
   },
   "source": [
    "From visual inspection of the IG attributions, you now have insight into the features that distinguish Golden Retrievers from Yellow Labrador Retrievers without any prior knowledge. Going forward, you can use this insight to further improve your model's performance by retraining with additional examples of each dog breed and augmenting your training data through random perturbations of each dog's coat textures and colors."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "cB4PpufN8hfQ"
   },
   "source": [
    "### Use case: debugging data skew"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "akhpe3iJnmms"
   },
   "source": [
    "Training-serving data skew, a difference between model performance during training and during serving, is a hard-to-detect and widely prevalent issue impacting the performance of production machine learning systems. ML systems require dense samplings of their input spaces in the training data to learn representations that generalize well to unseen data. To complement existing production ML monitoring of dataset and model performance statistics, tracking IG feature importances across time (e.g. \"next day\" splits) and data splits (e.g. train/dev/test splits) allows for meaningful monitoring of training-serving feature drift and skew."
   ]
  },
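  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One minimal way to operationalize this monitoring (with synthetic attribution data and an assumed threshold, purely for illustration) is to compare per-feature mean absolute attributions between splits and flag large differences:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def mean_abs_attributions(attr_batch):\n",
    "  # attr_batch: (num_examples, num_features) IG attributions per example.\n",
    "  return np.mean(np.abs(attr_batch), axis=0)\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "train_attrs = rng.normal(0.0, 1.0, size=(1000, 5))\n",
    "serve_attrs = rng.normal(0.0, 1.0, size=(1000, 5))\n",
    "serve_attrs[:, 2] *= 3.0  # simulate drift in feature 2\n",
    "\n",
    "drift = np.abs(mean_abs_attributions(serve_attrs) -\n",
    "               mean_abs_attributions(train_attrs))\n",
    "flagged = np.flatnonzero(drift > 0.5)  # threshold is an assumption\n",
    "print(flagged)\n",
    "```\n",
    "\n",
    "In production you would replace the synthetic arrays with real per-example IG attributions from each split and tune the threshold against historical drift."
   ]
  },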
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "6ppUUdgzHlMA"
   },
   "source": [
    "**Military uniforms change across space and time.** Recall from this tutorial's section on ImageNet that each class (e.g. military uniform) in the ILSVRC-2012-CLS training dataset is represented by an average of 1,000 images that Inception V1 could learn from. At present, about 195 countries around the world have significantly different military uniforms by service branch, climate, occasion, etc. Additionally, military uniforms have changed significantly over time within the same country. As a result, the potential input space for military uniforms is enormous, with many uniforms over-represented (e.g. the present-day US military), others sparsely represented (e.g. the US Union Army), and some absent from the training data altogether (e.g. the Greek Presidential Guard)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 927
    },
    "colab_type": "code",
    "id": "Nce31CpmlRZW",
    "outputId": "477750ba-eb8a-4a5d-9b8b-0b4d2b08c1d4"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_predictions(\n",
    "    model=inception_v1_classifier,\n",
    "    img=tf.stack([img_name_tensors['Military Uniform (Grace Hopper)'], \n",
    "                  img_name_tensors['Military Uniform (General Ulysses S. Grant)'], \n",
    "                  img_name_tensors['Military Uniform (Greek Presidential Guard)']]),\n",
    "    img_titles=tf.stack(['Military Uniform (Grace Hopper)',\n",
    "                        'Military Uniform (General Ulysses S. Grant)',\n",
    "                        'Military Uniform (Greek Presidential Guard)']),\n",
    "    label_vocab=imagenet_label_vocab, \n",
    "    top_k=5\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "7juRV7iOcMad"
   },
   "source": [
    "Inception V1 correctly classifies this image of [United States Rear Admiral and Computer Scientist, Grace Hopper](https://en.wikipedia.org/wiki/Grace_Hopper) under the class \"military uniform\" above. From visual inspection of the IG feature attributions, you can see that the brightest pixels are focused around the shirt collar and tie, the military insignia on the jacket and hat, and various areas around her face. Note that there are potentially spurious pixels also highlighted in the background, worth investigating empirically to refine the model's learned representation of military uniforms. However, IG does not provide insight into how these pixels were combined into the final prediction, so it's possible these pixels helped the model distinguish between a military uniform and other similar classes such as the Windsor tie and suit."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 585
    },
    "colab_type": "code",
    "id": "Z7o9y3bxmH5c",
    "outputId": "797c6098-bb7f-4cd3-a3da-0d222f0bfacc"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_attributions(model=inception_v1_classifier,\n",
    "                          img=img_name_tensors['Military Uniform (Grace Hopper)'],\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Black'],                          \n",
    "                          target_class_idx=653,\n",
    "                          m_steps=200,\n",
    "                          cmap=None,\n",
    "                          overlay_alpha=0.3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "sRs5oG4i2OJ0"
   },
   "source": [
    "Below is an image of the [United States General Ulysses S. Grant](https://en.wikipedia.org/wiki/Ulysses_S._Grant) circa 1865. He is wearing a military uniform from the same country as Rear Admiral Hopper above, but how well can the model identify a military uniform in this image, with its different coloring and taken 120+ years earlier? From the model predictions above, you can see: not very well, as the model incorrectly ranks a trench coat and suit above a military uniform.\n",
    "\n",
    "From visual inspection of the IG attribution mask, it is clear the model struggled to identify a military uniform in the faded black and white image, which lacks the contrastive range of a color image. Since this is a faded black and white image with prominent darker features, a white baseline is a better choice. The IG overlay of the original image does suggest that the model identified the military insignia patch on the right shoulder, the face, collar, jacket buttons, and pixels around the edges of the coat. Using this insight, you can improve model performance by adding data augmentation to your input data pipeline to include additional colorless images and image translations, as well as additional example images with military coats."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 585
    },
    "colab_type": "code",
    "id": "gxqwppaRj1a3",
    "outputId": "60bc462f-abdd-44c4-fb94-064e91ef1eb9"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_attributions(model=inception_v1_classifier,\n",
    "                          img=img_name_tensors['Military Uniform (General Ulysses S. Grant)'],\n",
    "                          baseline=name_baseline_tensors['Baseline Image: White'],                          \n",
    "                          target_class_idx=870,\n",
    "                          m_steps=200,\n",
    "                          cmap=None,\n",
    "                          overlay_alpha=0.3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "GPOMrmUR2FQR"
   },
   "source": [
    "Yikes! Inception V1 incorrectly predicted the image of a [Greek Presidential Guard](https://en.wikipedia.org/wiki/Presidential_Guard_(Greece)) as a vestment with low confidence. The underlying training data does not appear to have sufficient representation and density of Greek military uniforms. In fact, the lack of geo-diversity in large public image datasets, including ImageNet, was studied in the paper S. Shankar, Y. Halpern, E. Breck, J. Atwood, J. Wilson, and D. Sculley. [\"No classification without representation: Assessing geodiversity issues in open data\n",
    "sets for the developing world.\"](https://arxiv.org/abs/1711.08536), 2017. The authors found \"observable amerocentric and eurocentric representation bias\" and strong differences in relative model performance across geographic areas."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 585
    },
    "colab_type": "code",
    "id": "ra96MuLA1-Ks",
    "outputId": "b0b7b563-2938-4d13-bfe3-f7db7d8331dd"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_attributions(model=inception_v1_classifier,\n",
    "                          img=img_name_tensors['Military Uniform (Greek Presidential Guard)'],\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Black'],                          \n",
    "                          target_class_idx=653,\n",
    "                          m_steps=200,\n",
    "                          cmap=None,\n",
    "                          overlay_alpha=0.3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "17hgnW0x9WGm"
   },
   "source": [
    "Using the IG attributions above, you can see the model focused primarily on the face and the high-contrast white wavy kilt and vest rather than on the military insignia on the red hat or the sword hilt. While IG attributions alone will not identify or fix data skew or bias, when combined with model evaluation metrics and dataset statistics they give you a guided path toward collecting more diverse data to improve model performance.\n",
    "\n",
    "Re-training the model on a more diverse sample of Greek military uniforms, in particular images that emphasize military insignia, as well as applying weighting strategies during training, can help mitigate biased data and further improve model performance and generalization."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "y7mxSb3-LDCh"
   },
   "source": [
    "### Use case: debugging model performance"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "SPLZm9AT3VSS"
   },
   "source": [
    "IG feature attributions provide a useful debugging complement to dataset statistics and model performance evaluation metrics to better understand model quality.\n",
    "\n",
    "When using IG feature attributions for debugging, you are looking for insights into the following questions:\n",
    "\n",
    "*   Which features are important? \n",
    "*   How well do the model's learned features generalize?\n",
    "*   Does the model learn \"incorrect\" or spurious features in the image beyond the true class object?\n",
    "*   What features did my model miss?\n",
    "*   Comparing correct and incorrect examples of the same class, what is the difference in the feature attributions?\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "yLc-P2b3HATZ"
   },
   "source": [
    "IG feature attributions are well suited for counterfactual reasoning to gain insight into your model's performance and limitations. This involves comparing feature attributions for images of the same class that receive different predictions. When combined with model performance metrics and dataset statistics, IG feature attributions give greater insight into model errors during debugging: you can understand which features contributed to an incorrect prediction by comparing them with the feature attributions on correct predictions. To go deeper on model debugging, see the Google AI [What-if tool](https://pair-code.github.io/what-if-tool/) to interactively inspect your dataset, model, and IG feature attributions."
   ]
  },
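  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Beyond visual inspection, one way to quantify how similar two attribution maps are is a simple overlap score. The sketch below is illustrative only: it uses random arrays as stand-ins for real IG attribution maps, and the `attribution_overlap` helper is a hypothetical function, not part of this tutorial's code. A score near 1.0 means the two predictions relied on largely the same pixels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "# Hypothetical per-pixel IG attribution maps (8x8 stand-ins for real maps).\n",
    "correct_attr = rng.random((8, 8))\n",
    "incorrect_attr = rng.random((8, 8))\n",
    "\n",
    "def attribution_overlap(a, b):\n",
    "    # Cosine similarity between flattened absolute attribution maps:\n",
    "    # 1.0 means both predictions relied on exactly the same pixels.\n",
    "    a, b = np.abs(a).ravel(), np.abs(b).ravel()\n",
    "    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
    "\n",
    "score = attribution_overlap(correct_attr, incorrect_attr)\n",
    "print(score)"
   ]
  },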
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "KhoY5v3IQPWf"
   },
   "source": [
    "In the example below, you will apply three transformations to the \"Yellow Labrador Retriever\" image and contrast correct and incorrect IG feature attributions to gain insight into your model's limitations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "orGL2fyQLUrn"
   },
   "outputs": [],
   "source": [
    "rotate90_labrador_retriever_img = tf.image.rot90(img_name_tensors['Yellow Labrador Retriever'])\n",
    "upsidedown_labrador_retriever_img = tf.image.flip_up_down(img_name_tensors['Yellow Labrador Retriever'])\n",
    "zoom_labrador_retriever_img = tf.keras.preprocessing.image.random_zoom(x=img_name_tensors['Yellow Labrador Retriever'], zoom_range=(0.45,0.45))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 1000
    },
    "colab_type": "code",
    "id": "3a9iv5VXCeja",
    "outputId": "cd865ccd-8c55-43b4-9179-7c71bc2e19e2"
   },
   "outputs": [],
   "source": [
    "_ = plot_img_predictions(\n",
    "    model=inception_v1_classifier,\n",
    "    img=tf.stack([img_name_tensors['Yellow Labrador Retriever'],\n",
    "                  rotate90_labrador_retriever_img, \n",
    "                  upsidedown_labrador_retriever_img,\n",
    "                  zoom_labrador_retriever_img]),\n",
    "    img_titles=tf.stack(['Yellow Labrador Retriever (original)',\n",
    "                         'Yellow Labrador Retriever (rotated 90 degrees)',\n",
    "                         'Yellow Labrador Retriever (flipped upsidedown)',\n",
    "                         'Yellow Labrador Retriever (zoomed in)']),\n",
    "    label_vocab=imagenet_label_vocab, \n",
    "    top_k=5\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "eru6VhHSI8Oa"
   },
   "source": [
    "These rotation and zooming examples highlight an important limitation of convolutional neural networks such as Inception V1: *CNNs are not inherently rotation or scale invariant.* All of these transformed examples resulted in incorrect predictions. Next, you will see how comparing two example attributions, one from an incorrect prediction and one from a known correct prediction, gives a deeper feature-level insight into why the model made an error so you can take corrective action."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "pbVEFA6rZfDz"
   },
   "outputs": [],
   "source": [
    "labrador_retriever_attributions = integrated_gradients(model=inception_v1_classifier,\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Black'],\n",
    "                          input=img_name_tensors['Yellow Labrador Retriever'],\n",
    "                          target_class_idx=209,\n",
    "                          m_steps=200,\n",
    "                          method='riemann_trapezoidal')\n",
    "\n",
    "zoom_labrador_retriever_attributions = integrated_gradients(model=inception_v1_classifier,\n",
    "                          baseline=name_baseline_tensors['Baseline Image: Black'],\n",
    "                          input=zoom_labrador_retriever_img,\n",
    "                          target_class_idx=209,\n",
    "                          m_steps=200,\n",
    "                          method='riemann_trapezoidal')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "g9cNHiOh1MKu"
   },
   "source": [
    "Zooming in on the Labrador Retriever image causes Inception V1 to incorrectly predict a different dog breed, a [Saluki](https://en.wikipedia.org/wiki/Saluki). Compare the IG attributions on the incorrect and correct predictions below. You can see the IG attributions on the zoomed image still focus on the legs, but they are now much further apart and the midsection is proportionally narrower. Compared to the IG attributions on the original image, the visible head size is significantly smaller as well. Armed with this deeper feature-level understanding of your model's error, you can improve model performance by pursuing strategies such as training data augmentation, to make your model more robust to changes in object proportions, or by checking that your image preprocessing code is the same during training and serving, to prevent data skew introduced by zooming or resizing operations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://8080-dot-14301553-dot-devshell.appspot.com/",
     "height": 395
    },
    "colab_type": "code",
    "id": "ep6-g9yVrQpe",
    "outputId": "2ab85b06-f663-4cab-c4ce-466660166419"
   },
   "outputs": [],
   "source": [
    "fig, axs = plt.subplots(nrows=1, ncols=3, squeeze=False, figsize=(16, 12))\n",
    "\n",
    "axs[0,0].set_title('IG Attributions - Incorrect Prediction: Saluki')\n",
    "axs[0,0].imshow(tf.reduce_sum(tf.abs(zoom_labrador_retriever_attributions), axis=-1), cmap=plt.cm.inferno)\n",
    "axs[0,0].axis('off')\n",
    "\n",
    "axs[0,1].set_title('IG Attributions - Correct Prediction: Labrador Retriever')\n",
    "axs[0,1].imshow(tf.reduce_sum(tf.abs(labrador_retriever_attributions), axis=-1), cmap=None)\n",
    "axs[0,1].axis('off')\n",
    "\n",
    "axs[0,2].set_title('IG Attributions - both predictions overlayed')\n",
    "axs[0,2].imshow(tf.reduce_sum(tf.abs(zoom_labrador_retriever_attributions), axis=-1), cmap=plt.cm.inferno, alpha=0.99)\n",
    "axs[0,2].imshow(tf.reduce_sum(tf.abs(labrador_retriever_attributions), axis=-1), cmap=None, alpha=0.5)\n",
    "axs[0,2].axis('off')\n",
    "\n",
    "plt.tight_layout();"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "f8yzHajlI3Ud"
   },
   "source": [
    "## Properties"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "_Ptn0P60Zeth"
   },
   "source": [
    "To summarize, IG is a popular Explainable AI method because of its broad applicability to any differentiable model, ease of implementation as this tutorial demonstrated, and relative computational efficiency compared to alternative explainability approaches that allows it to scale to large networks and feature spaces (e.g. images).\n",
    "\n",
    "Explainable AI techniques are challenging to empirically evaluate and compare against each other. A secondary contribution of the original IG paper was to establish several axioms to evaluate explainability approaches and guide their own development of IG. Below are the axioms related to this tutorial and restated for clarity:\n",
    "\n",
    "*   **Completeness**: the sum of IG attributions over all features equals the difference between your model's output for the input and its output for the baseline. The practical implication is that you have a theoretical basis to determine how well IG's integral approximation converged and how you should adjust the number-of-steps hyperparameter, as discussed previously in *Step 4: Checks to pick number of steps for IG approximation*.\n",
    "\n",
    "*   **Sensitivity**: every input feature that differs between the input and baseline and results in a different prediction will receive a non-zero attribution from IG. Conversely, any feature that does not impact the model's function will receive no attribution. The practical implication of sensitivity is that you can count on IG to identify all important features and to be more resistant to spurious attributions than raw gradients. The key is proper selection of the baseline, as discussed in *Step 2: Establish baseline to compare inputs against* and *How do different baselines impact interpretation of IG attributions?*\n",
    "\n",
    "*   **Implementation Invariance**: IG attributions will be the same for functionally equivalent models that output the same value for any given input. This is important to note when interpreting and communicating results, because models with vastly different architectures and hyperparameters can have identical IG attributions.\n",
    "\n",
    "*   **Linearity**: gradients are a natural analog to coefficients in linear models, and IG attributions preserve linear relationships in the model. This includes linear combinations of different models; for such ensembles, the attributions are the weighted sum of the individual models' attributions."
   ]
  },
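  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a concrete illustration of the completeness axiom, the following sketch approximates IG for a toy differentiable function with a known analytic gradient (a stand-in for a real model; `f`, `grad_f`, and `toy_integrated_gradients` are illustrative helpers, not part of this tutorial's code) and checks that the attributions sum to the difference between the function's output at the input and at the baseline."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def f(x):\n",
    "    # Toy differentiable 'model': f(x) = x0**2 + 3*x1.\n",
    "    return x[0] ** 2 + 3.0 * x[1]\n",
    "\n",
    "def grad_f(x):\n",
    "    # Analytic gradient of f.\n",
    "    return np.array([2.0 * x[0], 3.0])\n",
    "\n",
    "def toy_integrated_gradients(baseline, x, m_steps=200):\n",
    "    # Midpoint Riemann approximation of the IG path integral.\n",
    "    alphas = (np.arange(m_steps) + 0.5) / m_steps\n",
    "    grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])\n",
    "    return (x - baseline) * grads.mean(axis=0)\n",
    "\n",
    "baseline = np.zeros(2)\n",
    "x = np.array([2.0, 1.0])\n",
    "attributions = toy_integrated_gradients(baseline, x)\n",
    "\n",
    "# Completeness: attributions sum to f(x) - f(baseline).\n",
    "print(attributions.sum(), f(x) - f(baseline))"
   ]
  },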
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "H3etJZHuI6hX"
   },
   "source": [
    "## Limitations"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "xEX4Jh2uLvxA"
   },
   "source": [
    "\n",
    "*   **IG provides local, not global, interpretability**: IG provides a theoretically sound understanding of feature importances on individual examples. However, it does not provide a relative global feature importance for understanding overall model performance across datasets. For image data, there is no clear way to aggregate feature attributions of individual image pixels for global model interpretability; individual pixels by themselves carry little meaning, and it is only through their combination with other pixels into higher-order features (e.g. edges, shapes) that meaning emerges. For structured numeric and text data, you can apply statistical aggregations (e.g. the mean) across the absolute value of IG attributions over sets of examples to get a sense of a model's global feature importances relative to selected baselines. Keep in mind, though, that interpretation of these results can be misleading and is sensitive to feature baseline selection and relationships between features.\n",
    "\n",
    "*   **IG explains network predictions in relation to individual features, not feature interactions and combinations**: Deep neural networks are powerful universal function approximators due to the flexible function-fitting capability that non-linearities (e.g. activation functions) introduce. However, IG performs a first-order linear approximation of the relationship between model outputs and individual input features, so you still do not know how individual features interact, which are correlated, or how the network combines features to make its prediction.\n",
    "\n",
    "*   **IG can only be applied to differentiable ML models**: IG can be applied to any differentiable model, such as a neural network. However, it cannot be applied without modification to other types of ML models, such as tree-based models or model ensembles that involve non-differentiable parts.\n",
    "\n",
    "*   **Limitations of baseline selection and visual inspection**: As this tutorial highlighted, proper interpretation of IG feature attributions depends upon selection of a good baseline. A black image is a great choice in most scenarios to limit interpretation of the prediction to the input features without any artifacts from the baseline but is limited when pixels important to the prediction are black themselves. As a result, visual inspection of the IG prediction attributions does not by itself always highlight all pixels of importance and may require trying out a few different baselines and contrastive explanations to fully understand the model's learned representation.\n",
    "\n",
    "\n"
   ]
  },
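  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the structured-data aggregation mentioned above concrete, the following sketch computes the mean absolute IG attribution per feature as a rough proxy for global feature importance. The attribution matrix and feature names are made up for illustration; in practice the matrix would hold the per-example output of your IG implementation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hypothetical IG attributions: 4 examples x 3 tabular features.\n",
    "ig_attributions = np.array([[0.8, -0.1, 0.3],\n",
    "                            [0.5, 0.2, -0.4],\n",
    "                            [0.9, -0.3, 0.1],\n",
    "                            [0.7, 0.1, 0.2]])\n",
    "\n",
    "# Proxy for global importance: mean absolute attribution per feature.\n",
    "global_importance = np.abs(ig_attributions).mean(axis=0)\n",
    "feature_names = ['age', 'income', 'tenure']  # hypothetical features\n",
    "ranking = sorted(zip(feature_names, global_importance),\n",
    "                 key=lambda t: t[1], reverse=True)\n",
    "print(ranking)"
   ]
  },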
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Ejc2Ho_8i162"
   },
   "source": [
    "## Next steps"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "G8Ra5ijj7pEc"
   },
   "source": [
    "In this tutorial, you walked through an application of IG to understand an image classification network and use cases for IG in production machine learning workflows. As a next step, use this notebook to try out IG with different models, images, and baselines for yourself.\n",
    "\n",
    "To go deeper with your understanding of IG, review the original IG paper [Axiomatic Attribution for Deep Networks](https://arxiv.org/abs/1703.01365) by Mukund Sundararajan et al., as well as section 4 of [Computing Linear Restrictions of Neural Networks](https://arxiv.org/abs/1908.06214) by Matthew Sotoudeh and Aditya V. Thakur for a study of various methods to approximate IG. You can also review the paper authors' [Integrated Gradients Github repository](https://github.com/ankurtaly/Integrated-Gradients) for additional resources and TensorFlow 1.x IG and visualization code. Within that repository, the [How to Use Integrated Gradients Guide](https://github.com/ankurtaly/Integrated-Gradients/blob/master/howto.md#sanity-checking-baselines) provides additional details on baseline selection for different problem framings and feature types. To see how IG compares to other image attribution techniques, check out the Google PAIR research group's [Saliency Github repository](https://github.com/PAIR-code/saliency).\n",
    "\n",
    "Interested in incorporating IG into your production machine learning workflows for feature importances, model error analysis, and data skew monitoring? Check out Google Cloud's [Explainable AI](https://cloud.google.com/explainable-ai) product that supports IG attributions and read through the excellent accompanying [AI Explainability Whitepaper](https://storage.googleapis.com/cloud-ai-whitepapers/AI%20Explainability%20Whitepaper.pdf) to learn more. The Google AI PAIR research group also open-sourced the [What-if tool](https://pair-code.github.io/what-if-tool/index.html#about) which can be used for model debugging, including visualizing IG feature attributions."
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "collapsed_sections": [],
   "machine_shape": "hm",
   "name": "integrated_gradients_tutorial_ASL.ipynb",
   "provenance": [],
   "toc_visible": true
  },
  "environment": {
   "name": "tf2-gpu.2-1.m56",
   "type": "gcloud",
   "uri": "gcr.io/deeplearning-platform-release/tf2-gpu.2-1:m56"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
