{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "DDJwQPZcupab"
   },
   "source": [
    "# EECS 498-007/598-005 Assignment 4-3: Network Visualization\n",
    "\n",
    "Before we start, please put your name and UMID in following format\n",
    "\n",
    ": Firstname LASTNAME, #00000000   //   e.g.) Justin JOHNSON, #12345678"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "2KMxqLt1h2kx"
   },
   "source": [
    "**Your Answer:**   \n",
    "Hello WORLD, #XXXXXXXX"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true,
    "id": "mU0tSHf82M4D",
    "tags": [
     "pdf-title"
    ]
   },
   "source": [
    "# Network Visualization\n",
    "\n",
    "In this notebook we will explore the use of *image gradients* for generating new images.\n",
    "\n",
    "When training a model, we define a loss function which measures our current unhappiness with the model's performance; we then use backpropagation to compute the gradient of the loss with respect to the model parameters, and perform gradient descent on the model parameters to minimize the loss.\n",
    "\n",
    "Here we will do something slightly different. We will start from a convolutional neural network model which has been pretrained to perform image classification on the ImageNet dataset. We will use this model to define a loss function which quantifies our current unhappiness with our image, then use backpropagation to compute the gradient of this loss with respect to the pixels of the image. We will then keep the model fixed, and perform gradient descent *on the image* to synthesize a new image which minimizes the loss.\n",
    "\n",
    "In this notebook we will explore three techniques for image generation:\n",
    "\n",
    "1. **Saliency Maps**: Saliency maps are a quick way to tell which part of the image influenced the classification decision made by the network.\n",
    "2. **Adversarial Attack**: We can perturb an input image so that it appears the same to humans, but will be misclassified by the pretrained network.\n",
    "3. **Class Visualization**: We can synthesize an image to maximize the classification score of a particular class; this can give us some sense of what the network is looking for when it classifies images of that class.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "34wOHak5_2zk"
   },
   "source": [
    "# Setup Code\n",
    "Before getting started we need to run some boilerplate code to set up our environment. You'll need to rerun this setup code each time you start the notebook.\n",
    "\n",
    "First, run this cell load the [autoreload](https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html?highlight=autoreload) extension. This allows us to edit `.py` source files, and re-import them into the notebook for a seamless editing and debugging experience."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "G0EjT2fP_8Iy"
   },
   "outputs": [],
   "source": [
    "%load_ext autoreload\n",
    "%autoreload 2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HCcwwNq2ABEu"
   },
   "source": [
    "### Google Colab Setup\n",
    "Next we need to run a few commands to set up our environment on Google Colab. If you are running this notebook on a local machine you can skip this section.\n",
    "\n",
    "Run the following cell to mount your Google Drive. Follow the link, sign in to your Google account (the same account you used to store this notebook!) and copy the authorization code into the text box that appears below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "PbYp2Bf4-VGG"
   },
   "outputs": [],
   "source": [
    "from google.colab import drive\n",
    "drive.mount('/content/drive')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "8EjW6hQkANqX"
   },
   "source": [
    "Now recall the path in your Google Drive where you uploaded this notebook, fill it in below. If everything is working correctly then running the folowing cell should print the filenames from the assignment:\n",
    "\n",
    "```\n",
    "['eecs598', 'network_visualization.py', 'style_transfer.py',  'network_visualization.ipynb', 'a4_helper.py', 'pytorch_autograd_and_nn.py', 'pytorch_autograd_and_nn.ipynb', 'style_transfer.ipynb', 'rnn_lstm_attention_captioning.ipynb',  'rnn_lstm_attention_captioning.py']\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "oTx4qnMnAPpK"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# TODO: Fill in the Google Drive path where you uploaded the assignment\n",
    "# Example: If you create a 2020FA folder and put all the files under A1 folder, then '2020FA/A1'\n",
    "GOOGLE_DRIVE_PATH_AFTER_MYDRIVE = None\n",
    "GOOGLE_DRIVE_PATH = os.path.join('drive', 'My Drive', GOOGLE_DRIVE_PATH_AFTER_MYDRIVE)\n",
    "print(os.listdir(GOOGLE_DRIVE_PATH))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "fZqKEDc0A9UZ"
   },
   "source": [
    "Once you have successfully mounted your Google Drive and located the path to this assignment, run th following cell to allow us to import from the `.py` files of this assignment. If it works correctly, it should print the message:\n",
    "\n",
    "```\n",
    "Hello from network_visualization.py!\n",
    "```\n",
    "\n",
    "as well as the last edit time for the file `network_visualization.py`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "TZSrCDV6BegK"
   },
   "outputs": [],
   "source": [
    "import sys\n",
    "sys.path.append(GOOGLE_DRIVE_PATH)\n",
    "\n",
    "import time, os\n",
    "os.environ[\"TZ\"] = \"US/Eastern\"\n",
    "time.tzset()\n",
    "\n",
    "from network_visualization import *\n",
    "from a4_helper import *\n",
    "hello()\n",
    "\n",
    "py_path = os.path.join(GOOGLE_DRIVE_PATH, 'network_visualization.py')\n",
    "py_edit_time = time.ctime(os.path.getmtime(py_path))\n",
    "print('knn.py last edited on %s' % py_edit_time)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "7XEsGSj4Pdwy"
   },
   "source": [
    "Run some setup code for this notebook: Import some useful packages and increase the default figure size."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "inj07qEW2M4G",
    "tags": [
     "pdf-ignore"
    ]
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import torch\n",
    "import torchvision\n",
    "import random\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from PIL import Image\n",
    "\n",
    "%matplotlib inline\n",
    "plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\n",
    "plt.rcParams['image.interpolation'] = 'nearest'\n",
    "plt.rcParams['image.cmap'] = 'gray'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Ngf-vkPDu-XI"
   },
   "source": [
    "We will use GPUs to accelerate our computation in this notebook. Run the following to make sure GPUs are enabled:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "uorkfrB0u_uB"
   },
   "outputs": [],
   "source": [
    "if torch.cuda.is_available:\n",
    "  print('Good to go!')\n",
    "else:\n",
    "  print('Please set GPU via Edit -> Notebook Settings.')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ILDAjhi72M4S"
   },
   "source": [
    "# Pretrained Model\n",
    "\n",
    "For all of our image generation experiments, we will start with a convolutional neural network which was pretrained to perform image classification on ImageNet. We can use any model here, but for the purposes of this assignment we will use SqueezeNet [1], which achieves accuracies comparable to AlexNet but with a significantly reduced parameter count and computational complexity.\n",
    "\n",
    "Using SqueezeNet rather than AlexNet or VGG or ResNet means that we can easily perform all image generation experiments without heavy computation.\n",
    "\n",
    "[1] Iandola et al, \"SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5MB model size\", arXiv 2016"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "rTd2DO5c2M4T"
   },
   "outputs": [],
   "source": [
    "print('Download and load the pretrained SqueezeNet model.')\n",
    "model = torchvision.models.squeezenet1_1(pretrained=True).to(device='cuda')\n",
    "\n",
    "# We don't want to train the model, so tell PyTorch not to compute gradients\n",
    "# with respect to model parameters.\n",
    "for param in model.parameters():\n",
    "  param.requires_grad = False\n",
    "    \n",
    "# Make sure the model is in \"eval\" mode\n",
    "model.eval()\n",
    "\n",
    "# you may see warning regarding initialization deprecated, that's fine, please continue to next steps"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1zzn631C2M4Y",
    "tags": [
     "pdf-ignore"
    ]
   },
   "source": [
    "## Load some ImageNet images\n",
    "We have provided a few example images from the validation set of the ImageNet ILSVRC 2012 Classification dataset.\n",
    "\n",
    "Since they come from the validation set, our pretrained model did not see these images during training.\n",
    "\n",
    "Run the following cells to visualize some of these images, along with their ground-truth labels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "XmiHk1srXY_B"
   },
   "outputs": [],
   "source": [
    "# download imagenet_val\n",
    "if os.path.isfile('imagenet_val_25.npz'):\n",
    "  print('ImageNet val images exist')\n",
    "else:\n",
    "  print('download ImageNet val images')\n",
    "  !wget http://web.eecs.umich.edu/~justincj/teaching/eecs498/imagenet_val_25.npz -P ./datasets/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "d08ggh9B2M4Z",
    "tags": [
     "pdf-ignore"
    ]
   },
   "outputs": [],
   "source": [
    "X, y, class_names = load_imagenet_val(num=5, path='./datasets/imagenet_val_25.npz')\n",
    "\n",
    "plt.figure(figsize=(12, 6))\n",
    "for i in range(5):\n",
    "  plt.subplot(1, 5, i + 1)\n",
    "  plt.imshow(X[i])\n",
    "  plt.title(class_names[y[i]])\n",
    "  plt.axis('off')\n",
    "plt.gcf().tight_layout()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "SX_3nqhM2M4f"
   },
   "source": [
    "# Saliency Maps\n",
    "Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [2].\n",
    "\n",
    "A **saliency map** tells us the degree to which each pixel in the image affects the classification score for that image. To compute it, we compute the gradient of the unnormalized score corresponding to the correct class (which is a scalar) with respect to the pixels of the image. If the image has shape `(3, H, W)` then this gradient will also have shape `(3, H, W)`; for each pixel in the image, this gradient tells us the amount by which the classification score will change if the pixel changes by a small amount. To compute the saliency map, we take the absolute value of this gradient, then take the maximum value over the 3 input channels; the final saliency map thus has shape `(H, W)` and all entries are nonnegative.\n",
    "\n",
    "Implement the `compute_saliency_maps` funciton and run the following to visualize some class saliency maps on our example images from the ImageNet validation set. You will be submitting the results as part of your submission.\n",
    "\n",
    "[2] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. \"Deep Inside Convolutional Networks: Visualising\n",
    "Image Classification Models and Saliency Maps\", ICLR Workshop 2014."
   ]
  },
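  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The cell below is a minimal sketch of the computation described above, included only to illustrate the shapes involved; your graded implementation belongs in `network_visualization.py` and may differ in its details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch only: assumes X is a preprocessed (N, 3, H, W) tensor on the same\n",
    "# device as the model, and y is a (N,) tensor of correct class labels.\n",
    "def saliency_sketch(X, y, model):\n",
    "  X = X.clone().requires_grad_(True)\n",
    "  scores = model(X)                                # (N, num_classes)\n",
    "  correct = scores.gather(1, y.view(-1, 1)).sum()  # sum of correct-class scores\n",
    "  correct.backward()                               # gradient of score w.r.t. pixels\n",
    "  return X.grad.abs().max(dim=1).values            # (N, H, W), nonnegative"
   ]
  },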
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "pEYpSXBz2M4t"
   },
   "outputs": [],
   "source": [
    "def show_saliency_maps(X, y):\n",
    "  # Convert X and y from numpy arrays to Torch Tensors\n",
    "  X_tensor = torch.cat([preprocess(Image.fromarray(x)) for x in X], dim=0).to(device='cuda')\n",
    "  y_tensor = torch.tensor(y, device='cuda')\n",
    "\n",
    "  # YOUR_TURN: Impelement the compute_saliency_maps function\n",
    "  saliency = compute_saliency_maps(X_tensor, y_tensor, model)\n",
    "\n",
    "  # Convert the saliency map from Torch Tensor to numpy array and show images\n",
    "  # and saliency maps together.\n",
    "  saliency = saliency.to('cpu').numpy()\n",
    "  N = X.shape[0]\n",
    "  for i in range(N):\n",
    "    plt.subplot(2, N, i + 1)\n",
    "    plt.imshow(X[i])\n",
    "    plt.axis('off')\n",
    "    plt.title(class_names[y[i]])\n",
    "    plt.subplot(2, N, N + i + 1)\n",
    "    plt.imshow(saliency[i], cmap=plt.cm.hot)\n",
    "    plt.axis('off')\n",
    "    plt.gcf().set_size_inches(12, 5)\n",
    "  plt.savefig(os.path.join(GOOGLE_DRIVE_PATH,'saliency_maps_results.jpg'))\n",
    "  plt.show()\n",
    "\n",
    "show_saliency_maps(X, y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Rogq37Rb2M40"
   },
   "source": [
    "# Adversarial Attack\n",
    "We can also use image gradients to generate \"adversarial attacks\" as discussed in [3].\n",
    "Given an image and a target class, we can perform gradient **ascent** over the image to maximize the target class, stopping when the network classifies the image as the target class. \n",
    "\n",
    "Implement the `make_adversarial_attack` function and run the following cell to generate a $\\ell_{2}$  adversarial attack. You should ideally see at first glance no major difference between the original and attacked images, and the network should now make an incorrect prediction on the attacked one. However you should see a bit of random noise if you look at the 10x magnified difference between the original and attacked images. Feel free to change the `idx` variable to explore other images.\n",
    "\n",
    "[3] Szegedy et al, \"Intriguing properties of neural networks\", ICLR 2014"
   ]
  },
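  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough guide, one common form of this targeted attack is normalized gradient ascent on the target-class score. The sketch below assumes an illustrative learning rate and an early-stopping check; your graded implementation in `network_visualization.py` may differ."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch only: X is a (1, 3, H, W) preprocessed image, target_y the class to force.\n",
    "def attack_sketch(X, target_y, model, max_iter=100, lr=1.0):\n",
    "  X_adv = X.clone().requires_grad_(True)\n",
    "  for it in range(max_iter):\n",
    "    scores = model(X_adv)\n",
    "    if scores.argmax(dim=1).item() == target_y:\n",
    "      break                          # the model is fooled; stop early\n",
    "    scores[0, target_y].backward()   # gradient of target score w.r.t. pixels\n",
    "    with torch.no_grad():\n",
    "      g = X_adv.grad\n",
    "      X_adv += lr * g / g.norm()     # normalized gradient ascent step\n",
    "      X_adv.grad.zero_()\n",
    "  return X_adv.detach()"
   ]
  },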
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "zDEZa19G2M45"
   },
   "outputs": [],
   "source": [
    "idx = 0\n",
    "target_y = 6\n",
    "\n",
    "X_tensor = torch.cat([preprocess(Image.fromarray(x)) for x in X], dim=0).to(device='cuda')\n",
    "# In make_adversarial_attack, print your progress in a format like:\n",
    "#   'Iteration %d: target score %.3f, max score %.3f'\n",
    "# The model is fooled once the target score and the max score coincide.\n",
    "# YOUR_TURN: Implement the make_adversarial_attack function\n",
    "X_adv = make_adversarial_attack(X_tensor[idx:idx+1], target_y, model, max_iter=100)\n",
    "\n",
    "scores = model(X_adv)\n",
    "assert target_y == scores.data.max(1)[1][0].item(), 'The model is not fooled!'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "-tSd5yuf2M48"
   },
   "source": [
    "After generating an adversarially attacked image, run the following cell to visualize the original image, the attacked image, as well as the difference between them. You will be submitting the results as part of your submission."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Yfd7jjHb2M49"
   },
   "outputs": [],
   "source": [
    "# checkout a4_helper.py for the implementation details of deprocess\n",
    "from a4_helper import deprocess\n",
    "\n",
    "X_adv = X_adv.to('cpu')\n",
    "X_adv_np = deprocess(X_adv.clone())\n",
    "X_adv_np = np.asarray(X_adv_np).astype(np.uint8)\n",
    "\n",
    "plt.subplot(1, 4, 1)\n",
    "plt.imshow(X[idx])\n",
    "plt.title(class_names[y[idx]])\n",
    "plt.axis('off')\n",
    "\n",
    "plt.subplot(1, 4, 2)\n",
    "plt.imshow(X_adv_np)\n",
    "plt.title(class_names[target_y])\n",
    "plt.axis('off')\n",
    "\n",
    "plt.subplot(1, 4, 3)\n",
    "X_pre = preprocess(Image.fromarray(X[idx]))\n",
    "diff = np.asarray(deprocess(X_adv - X_pre, should_rescale=False))\n",
    "plt.imshow(diff)\n",
    "plt.title('Difference')\n",
    "plt.axis('off')\n",
    "\n",
    "plt.subplot(1, 4, 4)\n",
    "diff = np.asarray(deprocess(10 * (X_adv - X_pre), should_rescale=False))\n",
    "plt.imshow(diff)\n",
    "plt.title('Magnified difference (10x)')\n",
    "plt.axis('off')\n",
    "\n",
    "plt.gcf().set_size_inches(12, 5)\n",
    "plt.savefig(os.path.join(GOOGLE_DRIVE_PATH,'adversarial_attacks_results.jpg'))\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "oJNRllhg2M5B"
   },
   "source": [
    "# Class visualization\n",
    "By starting with a random noise image and performing gradient ascent on a target class, we can generate an image that the network will recognize as the target class. This idea was first presented in [2]; [3] extended this idea by suggesting several regularization techniques that can improve the quality of the generated image.\n",
    "\n",
    "Concretely, let $I$ be an image and let $y$ be a target class. Let $s_y(I)$ be the score that a convolutional network assigns to the image $I$ for class $y$; note that these are raw unnormalized scores, not class probabilities. We wish to generate an image $I^*$ that achieves a high score for the class $y$ by solving the problem\n",
    "\n",
    "$$\n",
    "I^* = \\arg\\max_I (s_y(I) - R(I))\n",
    "$$\n",
    "\n",
    "where $R$ is a (possibly implicit) regularizer (note the sign of $R(I)$ in the argmax: we want to minimize this regularization term). We can solve this optimization problem using gradient ascent, computing gradients with respect to the generated image. We will use (explicit) L2 regularization of the form\n",
    "\n",
    "$$\n",
    "R(I) = \\lambda \\|I\\|_2^2\n",
    "$$\n",
    "\n",
    "**and** implicit regularization as suggested by [3] by periodically blurring the generated image. We can solve this problem using gradient ascent on the generated image.\n",
    "\n",
    "[2] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. \"Deep Inside Convolutional Networks: Visualising\n",
    "Image Classification Models and Saliency Maps\", ICLR Workshop 2014.\n",
    "\n",
    "[3] Yosinski et al, \"Understanding Neural Networks Through Deep Visualization\", ICML 2015 Deep Learning Workshop"
   ]
  },
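  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A single update step for this objective can be sketched as follows; the learning rate and regularization strength here are illustrative assumptions, and your graded `class_visualization_step` in `network_visualization.py` may differ."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch only: one gradient-ascent step on s_y(I) - lambda * ||I||_2^2,\n",
    "# where img is a (1, 3, H, W) tensor with requires_grad=True.\n",
    "def class_viz_step_sketch(img, target_y, model, lr=25.0, l2_reg=1e-3):\n",
    "  score = model(img)[0, target_y] - l2_reg * img.pow(2).sum()\n",
    "  score.backward()\n",
    "  with torch.no_grad():\n",
    "    img += lr * img.grad / img.grad.norm()\n",
    "    img.grad.zero_()\n",
    "  return img"
   ]
  },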
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "kFZPnQMGJ-Vl"
   },
   "outputs": [],
   "source": [
    "def create_class_visualization(target_y, model, class_names, device='cpu', save_fig=False, **kwargs):\n",
    "  \"\"\"\n",
    "  Generate an image to maximize the score of target_y under a pretrained model.\n",
    "  \n",
    "  Inputs:\n",
    "  - target_y: Integer in the range [0, 1000) giving the index of the class\n",
    "  - model: A pretrained CNN that will be used to generate the image\n",
    "  - class_names: Dictionary for class names\n",
    "  - save_fig: saves the final figure for submission\n",
    "  - device: 'cpu' or 'cuda'\n",
    "  \n",
    "  Keyword arguments:\n",
    "  - num_iterations: How many iterations to use\n",
    "  - blur_every: How often to blur the image as an implicit regularizer\n",
    "  - max_jitter: How much to jitter the image as an implicit regularizer\n",
    "  - show_every: How often to show the intermediate result\n",
    "  \"\"\"\n",
    "  num_iterations = kwargs.pop('num_iterations', 100)\n",
    "  blur_every = kwargs.pop('blur_every', 10)\n",
    "  max_jitter = kwargs.pop('max_jitter', 16)\n",
    "  show_every = kwargs.pop('show_every', 25)\n",
    "\n",
    "  # Randomly initialize the image as a PyTorch Tensor, and make it requires gradient.\n",
    "  img = torch.randn((1, 3, 224, 224), device=device).mul_(1.0).requires_grad_()\n",
    "\n",
    "  for t in range(num_iterations):\n",
    "    # Randomly jitter the image a bit; this gives slightly nicer results\n",
    "    ox, oy = random.randint(0, max_jitter), random.randint(0, max_jitter)\n",
    "    img.data.copy_(jitter(img.data, ox, oy))\n",
    "\n",
    "    # YOUR_TURN: Impelement the create_class_visualization function to perform \n",
    "    # gradient step\n",
    "    img = class_visualization_step(img, target_y, model) \n",
    "    \n",
    "    # Undo the random jitter\n",
    "    img.data.copy_(jitter(img.data, -ox, -oy))\n",
    "    # As regularizer, clamp and periodically blur the image\n",
    "    for c in range(3):\n",
    "      lo = float(-SQUEEZENET_MEAN[c] / SQUEEZENET_STD[c])\n",
    "      hi = float((1.0 - SQUEEZENET_MEAN[c]) / SQUEEZENET_STD[c])\n",
    "      img.data[:, c].clamp_(min=lo, max=hi)\n",
    "    if t % blur_every == 0:\n",
    "      blur_image(img.data, sigma=0.5)\n",
    "\n",
    "    # Periodically show the image\n",
    "    if t == 0 or (t + 1) % show_every == 0 or t == num_iterations - 1:\n",
    "      plt.imshow(deprocess(img.data.clone().cpu()))\n",
    "      class_name = class_names[target_y]\n",
    "      plt.title('%s\\nIteration %d / %d' % (class_name, t + 1, num_iterations))\n",
    "      plt.gcf().set_size_inches(4, 4)\n",
    "      plt.axis('off')\n",
    "      if save_fig:\n",
    "        plt.savefig(os.path.join(GOOGLE_DRIVE_PATH,'class_viz_result.jpg'))\n",
    "      plt.show()\n",
    "\n",
    "  return deprocess(img.data.cpu())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "hY3W11HfVg5j"
   },
   "source": [
    "Implement the `class_visualization_step` function for performing gradient step update for class visualization. We are going to use the `create_class_visualization` (above) to generate an image that the network recognizes as the target class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "D_97sqLp2M5K"
   },
   "outputs": [],
   "source": [
    "target_y = 76 # Tarantula\n",
    "# target_y = 78 # Tick\n",
    "# target_y = 187 # Yorkshire Terrier\n",
    "# target_y = 683 # Oboe\n",
    "# target_y = 366 # Gorilla\n",
    "# target_y = 604 # Hourglass\n",
    "# YOUR_TURN: make sure you have implemented the class_visualization_step function\n",
    "out = create_class_visualization(target_y, model, class_names, save_fig=True, device='cuda')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "qkq8oPEQ2M5N"
   },
   "source": [
    "Try out your class visualization on other classes! You should also feel free to play with various hyperparameters to try and improve the quality of the generated image, but this is not required."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "RCX-_h4Q2M5O"
   },
   "outputs": [],
   "source": [
    "# target_y = 78 # Tick\n",
    "# target_y = 187 # Yorkshire Terrier\n",
    "# target_y = 683 # Oboe\n",
    "# target_y = 366 # Gorilla\n",
    "# target_y = 604 # Hourglass\n",
    "target_y = random.randint(0,999) # [0,999]\n",
    "print(class_names[target_y])\n",
    "out = create_class_visualization(target_y, model, class_names, save_fig=False, device='cuda')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0vTPNahZySVj"
   },
   "source": [
    "# Final checks\n",
    "Make sure you run \"Runtime -> Restart and run all...\" to double check Network Visualization before submitting."
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "collapsed_sections": [],
   "name": "network_visualization.ipynb",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
