{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Tutorial 4: Encrypted Convolution on MNIST\n",
    "\n",
    "Welcome to tutorial 4, where we will perform encrypted evaluation on MNIST examples using a convolutional neural network. If you haven't used TenSEAL before, or need a quick overview of what homomorphic encryption is, I would suggest going through [Tutorial 0 - Getting Started](./Tutorial%200%20-%20Getting%20Started.ipynb) first.\n",
    "\n",
    "We will be using CKKS extensively in this tutorial, so if you don't know how it works, I would recommend checking [Tutorial 2 - Working with Approximate Numbers](./Tutorial%202%20-%20Working%20with%20Approximate%20Numbers.ipynb) first.\n",
    "\n",
    "We will start by explaining how the different layers can be evaluated on encrypted data. Next, we train a PyTorch model on MNIST, then implement an equivalent model using TenSEAL that can evaluate encrypted inputs.\n",
    "\n",
    "Authors:\n",
    "- Ayoub Benaissa - Twitter: [@y0uben11](https://twitter.com/y0uben11)\n",
    "- Bilal Retiat - Twitter: [@philomath213](https://twitter.com/philomath213)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Machine Learning Model\n",
    "\n",
    "With the MNIST dataset in hand, we can use a simple neural network composed of a convolutional layer followed by two linear layers. We use the square activation function for simplicity, given the limited number of multiplications allowed by the CKKS scheme.\n",
    "\n",
    "Keep in mind that the input to the model needs to be encrypted using CKKS, but the parameters of the model don't; they are kept in plaintext during the whole protocol.\n",
    "\n",
    "### Model Description\n",
    "The model is the sequence of the below layers:\n",
    "\n",
    "- **Conv:** Convolution with 4 kernels. Shape of the kernel is 7x7. Strides are 3x3.\n",
    "- **Activation:** Square activation function.\n",
    "- **Linear Layer 1:** Input size: 256. Output size: 64.\n",
    "- **Activation:** Square activation function.\n",
    "- **Linear Layer 2:** Input size: 64. Output size: 10.\n",
    "\n",
    "\n",
    "### Input Representation\n",
    "\n",
    "In order to keep memory usage and computation as low as possible, we will mostly try to use a single ciphertext. This isn't always possible, and we often lose some flexibility. For this model, there are two different representations: one for the convolution, and one for the linear layers. The former will be explained briefly in the convolution section. The latter is simply the input vector of the linear layer, replicated enough times to fill the slots of the ciphertext, so that a single ciphertext contains the whole input of the linear layer.\n",
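    "\n",
    "To make the replicated layout concrete, here is a toy, unencrypted sketch in plain Python (the slot count is illustrative; real CKKS ciphertexts hold thousands of slots):\n",
    "\n",
    "```python\n",
    "# Toy sketch (no encryption) of the replicated input layout for linear layers.\n",
    "# A CKKS ciphertext holds a fixed number of slots; pretend it is 16 here.\n",
    "n_slots = 16\n",
    "x = [1.0, 2.0, 3.0, 4.0]  # input vector of a linear layer\n",
    "\n",
    "# replicate the vector enough times to fill all the slots\n",
    "replicated = (x * (n_slots // len(x)))[:n_slots]\n",
    "print(replicated)  # the whole input now sits in a single ciphertext\n",
    "```\n",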
    "\n",
    "\n",
    "### Convolution\n",
    "\n",
    "There are different ways of doing convolution, and one of them is a well-known algorithm that translates a 2D convolution into a single matrix multiplication. This operation is often referred to as image-to-column (im2col) convolution and is depicted in *Figure1*.\n",
    "\n",
    "<div align=\"center\">\n",
    "<img src=\"assets/im2col_conv2d.png\" width=\"50%\"/>\n",
    "<div><b>Figure1:</b> Image to column convolution</div>\n",
    "</div>\n",
    "\n",
    "However, this requires arranging the elements of the input matrix in a special way, and since we can't easily do that with a ciphertext, we have to do it as a pre-processing step before encryption. This also means that only a single convolution can be performed. To perform the convolution, we first apply the *im2col* encoding to the input matrix and encrypt it into a single ciphertext. It's worth noting that the matrix is translated into a vector using a vertical scan. We then perform a matrix multiplication between an encrypted matrix (the input image encoded in a ciphertext) and a plain vector (the flattened kernel of the convolution). This is done by first constructing a new flattened kernel, which replicates every element of the kernel $n$ times, where $n$ is the number of windows. We then perform a ciphertext-plaintext multiplication, and continue with a sequence of rotate-and-sum operations in order to sum the elements belonging to the same window. The process is depicted in *Figure2* and *Figure3*.\n",
    "\n",
    "<div align=\"center\">\n",
    "<img src=\"assets/im2col_conv2d_ckks1.png\" width=\"50%\"/>\n",
    "<div><b>Figure2:</b> Image to column convolution with CKKS - step 1</div>\n",
    "</div>\n",
    "\n",
    "<div align=\"center\">\n",
    "<img src=\"assets/im2col_conv2d_ckks2.png\" width=\"50%\"/>\n",
    "<div><b>Figure3:</b> Image to column convolution with CKKS - step 2</div>\n",
    "</div>\n",
    "\n",
    "If multiple kernels are used, we need to perform this operation multiple times, yielding different output ciphertexts. These ciphertexts are later combined (using a single multiplication) into a flattened vector. Every convolution outputs a ciphertext containing 64 useful slots, and combining the 4 kernel outputs yields a ciphertext with 256 useful slots, which will be the input of the first linear layer. The algorithm requires a single multiplication and $\\log_2(n)$ ciphertext rotations, where $n$ is the number of windows in the convolution.\n",
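    "\n",
    "The whole pipeline (im2col encoding, a single ciphertext-plaintext multiplication, then rotate-and-sum) can be mimicked on unencrypted data. The following toy NumPy sketch uses a 4x4 image, a 2x2 kernel and stride 2 (so 4 windows); none of these names are TenSEAL API, the sketch only mirrors the arithmetic:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Toy, unencrypted sketch of im2col convolution as one multiplication\n",
    "# followed by rotate-and-sum. 4x4 image, 2x2 kernel, stride 2 -> 4 windows.\n",
    "image = np.arange(16.0).reshape(4, 4)\n",
    "kernel = np.array([[1.0, 0.0], [0.0, 1.0]])\n",
    "n_windows = 4\n",
    "\n",
    "# im2col with vertical scan: group the same kernel position across windows\n",
    "cols = np.array([\n",
    "    image[i:i+2, j:j+2].flatten(order='F')\n",
    "    for i in (0, 2) for j in (0, 2)\n",
    "]).T\n",
    "vec = cols.flatten()  # this is the vector that would be encrypted\n",
    "\n",
    "# replicate every kernel element n_windows times, then a single multiplication\n",
    "flat_kernel = np.repeat(kernel.flatten(order='F'), n_windows)\n",
    "prod = vec * flat_kernel\n",
    "\n",
    "# rotate-and-sum: a few rotations add up the elements of each window\n",
    "out = prod.copy()\n",
    "for shift in (8, 4):\n",
    "    out = out + np.roll(out, -shift)\n",
    "print(out[:n_windows])  # one output value per window\n",
    "```\n",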
    "\n",
    "### Linear Layer\n",
    "A linear layer boils down to a vector-matrix multiplication and the addition of a bias. The matrix and the bias are not encrypted. The vector-matrix multiplication is implemented using the [Halevi and Shoup](https://link.springer.com/chapter/10.1007/978-3-662-44371-2_31) diagonal method: an accumulation of multiple ciphertext-plaintext multiplications with slightly different rotations. We iterate over every diagonal of the plain matrix and multiply it with the ciphertext rotated $n$ slots to the left, where $n$ is the index (0-indexed) of the diagonal. The process is depicted in *Figure4*. The algorithm runs in $O(n)$ multiplications, where $n$ is the size of the encrypted vector.\n",
    "\n",
    "<div align=\"center\">\n",
    "<img src=\"assets/vec-matmul.png\" width=\"65%\"/>\n",
    "<div><b>Figure4:</b> Vector-Matrix Multiplication</div>\n",
    "</div>\n",
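    "\n",
    "The diagonal method is easy to reproduce on unencrypted data. Here is a toy NumPy sketch (plain arithmetic only, not TenSEAL API); for rectangular matrices like our 256x64 layer, the replicated input layout described earlier is what lets the left rotations wrap around correctly:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Toy, unencrypted sketch of the Halevi-Shoup diagonal method for x @ M.\n",
    "x = np.array([1.0, 2.0, 3.0, 4.0])\n",
    "M = np.arange(16.0).reshape(4, 4)\n",
    "n = len(x)\n",
    "\n",
    "acc = np.zeros(n)\n",
    "for i in range(n):\n",
    "    # i-th (generalized) diagonal of M\n",
    "    diag = np.array([M[(j + i) % n, j] for j in range(n)])\n",
    "    # rotate x by i slots to the left, multiply, accumulate\n",
    "    acc = acc + np.roll(x, -i) * diag\n",
    "print(acc)    # matches x @ M\n",
    "```\n",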
    "\n",
    "### Square Activation\n",
    "The square activation is pretty straightforward. We just multiply a ciphertext by itself.\n",
    "\n",
    "\n",
    "Building on these operations, we now know that this evaluation requires exactly 6 multiplications: 2 for the convolution (one for the kernel multiplication and one for packing the channel outputs into a single ciphertext), 1 for the first square activation, 1 for the first linear layer, 1 for the second square activation, and 1 for the last linear layer."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training\n",
    "\n",
    "Now that we know how such a model can be implemented with homomorphic encryption, we will use a library called [TenSEAL](https://github.com/OpenMined/TenSEAL) that implements all the operations described above. But first, we need to train a plain PyTorch model to classify the MNIST dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 1 \tTraining Loss: 0.392145\n",
      "Epoch: 2 \tTraining Loss: 0.131439\n",
      "Epoch: 3 \tTraining Loss: 0.090824\n",
      "Epoch: 4 \tTraining Loss: 0.070182\n",
      "Epoch: 5 \tTraining Loss: 0.059312\n",
      "Epoch: 6 \tTraining Loss: 0.049881\n",
      "Epoch: 7 \tTraining Loss: 0.045489\n",
      "Epoch: 8 \tTraining Loss: 0.038426\n",
      "Epoch: 9 \tTraining Loss: 0.035883\n",
      "Epoch: 10 \tTraining Loss: 0.031704\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "from torchvision import datasets\n",
    "import torchvision.transforms as transforms\n",
    "import numpy as np\n",
    "\n",
    "torch.manual_seed(73)\n",
    "\n",
    "train_data = datasets.MNIST('data', train=True, download=True, transform=transforms.ToTensor())\n",
    "test_data = datasets.MNIST('data', train=False, download=True, transform=transforms.ToTensor())\n",
    "\n",
    "batch_size = 64\n",
    "\n",
    "train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)\n",
    "test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, shuffle=True)\n",
    "\n",
    "class ConvNet(torch.nn.Module):\n",
    "    def __init__(self, hidden=64, output=10):\n",
    "        super(ConvNet, self).__init__()        \n",
    "        self.conv1 = torch.nn.Conv2d(1, 4, kernel_size=7, padding=0, stride=3)\n",
    "        self.fc1 = torch.nn.Linear(256, hidden)\n",
    "        self.fc2 = torch.nn.Linear(hidden, output)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.conv1(x)\n",
    "        # the model uses the square activation function\n",
    "        x = x * x\n",
    "        # flattening while keeping the batch axis\n",
    "        x = x.view(-1, 256)\n",
    "        x = self.fc1(x)\n",
    "        x = x * x\n",
    "        x = self.fc2(x)\n",
    "        return x\n",
    "\n",
    "\n",
    "def train(model, train_loader, criterion, optimizer, n_epochs=10):\n",
    "    # model in training mode\n",
    "    model.train()\n",
    "    for epoch in range(1, n_epochs+1):\n",
    "\n",
    "        train_loss = 0.0\n",
    "        for data, target in train_loader:\n",
    "            optimizer.zero_grad()\n",
    "            output = model(data)\n",
    "            loss = criterion(output, target)\n",
    "            loss.backward()\n",
    "            optimizer.step()\n",
    "            train_loss += loss.item()\n",
    "\n",
    "        # calculate average losses\n",
    "        train_loss = train_loss / len(train_loader)\n",
    "\n",
    "        print('Epoch: {} \\tTraining Loss: {:.6f}'.format(epoch, train_loss))\n",
    "    \n",
    "    # model in evaluation mode\n",
    "    model.eval()\n",
    "    return model\n",
    "\n",
    "\n",
    "model = ConvNet()\n",
    "criterion = torch.nn.CrossEntropyLoss()\n",
    "optimizer = torch.optim.Adam(model.parameters(), lr=0.001)\n",
    "model = train(model, train_loader, criterion, optimizer, 10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then test its accuracy on the test set:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Test Loss: 0.099073\n",
      "\n",
      "Test Accuracy of 0: 99% (971/980)\n",
      "Test Accuracy of 1: 99% (1130/1135)\n",
      "Test Accuracy of 2: 97% (1005/1032)\n",
      "Test Accuracy of 3: 98% (995/1010)\n",
      "Test Accuracy of 4: 97% (960/982)\n",
      "Test Accuracy of 5: 97% (869/892)\n",
      "Test Accuracy of 6: 97% (938/958)\n",
      "Test Accuracy of 7: 96% (994/1028)\n",
      "Test Accuracy of 8: 96% (937/974)\n",
      "Test Accuracy of 9: 96% (978/1009)\n",
      "\n",
      "Test Accuracy (Overall): 97% (9777/10000)\n"
     ]
    }
   ],
   "source": [
    "def test(model, test_loader, criterion):\n",
    "    # initialize lists to monitor test loss and accuracy\n",
    "    test_loss = 0.0\n",
    "    class_correct = list(0. for i in range(10))\n",
    "    class_total = list(0. for i in range(10))\n",
    "\n",
    "    # model in evaluation mode\n",
    "    model.eval()\n",
    "\n",
    "    for data, target in test_loader:\n",
    "        output = model(data)\n",
    "        loss = criterion(output, target)\n",
    "        test_loss += loss.item()\n",
    "        # convert output probabilities to predicted class\n",
    "        _, pred = torch.max(output, 1)\n",
    "        # compare predictions to true label\n",
    "        correct = np.squeeze(pred.eq(target.data.view_as(pred)))\n",
    "        # calculate test accuracy for each object class\n",
    "        for i in range(len(target)):\n",
    "            label = target.data[i]\n",
    "            class_correct[label] += correct[i].item()\n",
    "            class_total[label] += 1\n",
    "\n",
    "    # calculate and print avg test loss\n",
    "    test_loss = test_loss/len(test_loader)\n",
    "    print(f'Test Loss: {test_loss:.6f}\\n')\n",
    "\n",
    "    for label in range(10):\n",
    "        print(\n",
    "            f'Test Accuracy of {label}: {int(100 * class_correct[label] / class_total[label])}% '\n",
    "            f'({int(np.sum(class_correct[label]))}/{int(np.sum(class_total[label]))})'\n",
    "        )\n",
    "\n",
    "    print(\n",
    "        f'\\nTest Accuracy (Overall): {int(100 * np.sum(class_correct) / np.sum(class_total))}% ' \n",
    "        f'({int(np.sum(class_correct))}/{int(np.sum(class_total))})'\n",
    "    )\n",
    "    \n",
    "test(model, test_loader, criterion)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Encrypted Evaluation\n",
    "\n",
    "Now we start the encrypted evaluation, using the pre-trained model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "A PyTorch-like model built from operations implemented in TenSEAL:\n",
    "    - the .mm() method performs the vector-matrix multiplication explained above.\n",
    "    - the + operator adds a plain vector as a bias.\n",
    "    - the .conv2d_im2col() method performs a single convolution.\n",
    "    - .square_() squares the encrypted vector in place.\n",
    "\"\"\"\n",
    "\n",
    "import tenseal as ts\n",
    "\n",
    "\n",
    "class EncConvNet:\n",
    "    def __init__(self, torch_nn):\n",
    "        self.conv1_weight = torch_nn.conv1.weight.data.view(\n",
    "            torch_nn.conv1.out_channels, torch_nn.conv1.kernel_size[0],\n",
    "            torch_nn.conv1.kernel_size[1]\n",
    "        ).tolist()\n",
    "        self.conv1_bias = torch_nn.conv1.bias.data.tolist()\n",
    "        \n",
    "        self.fc1_weight = torch_nn.fc1.weight.T.data.tolist()\n",
    "        self.fc1_bias = torch_nn.fc1.bias.data.tolist()\n",
    "        \n",
    "        self.fc2_weight = torch_nn.fc2.weight.T.data.tolist()\n",
    "        self.fc2_bias = torch_nn.fc2.bias.data.tolist()\n",
    "        \n",
    "        \n",
    "    def forward(self, enc_x, windows_nb):\n",
    "        # conv layer\n",
    "        enc_channels = []\n",
    "        for kernel, bias in zip(self.conv1_weight, self.conv1_bias):\n",
    "            y = enc_x.conv2d_im2col(kernel, windows_nb) + bias\n",
    "            enc_channels.append(y)\n",
    "        # pack all channels into a single flattened vector\n",
    "        enc_x = ts.CKKSVector.pack_vectors(enc_channels)\n",
    "        # square activation\n",
    "        enc_x.square_()\n",
    "        # fc1 layer\n",
    "        enc_x = enc_x.mm(self.fc1_weight) + self.fc1_bias\n",
    "        # square activation\n",
    "        enc_x.square_()\n",
    "        # fc2 layer\n",
    "        enc_x = enc_x.mm(self.fc2_weight) + self.fc2_bias\n",
    "        return enc_x\n",
    "    \n",
    "    def __call__(self, *args, **kwargs):\n",
    "        return self.forward(*args, **kwargs)\n",
    "\n",
    "    \n",
    "def enc_test(context, model, test_loader, criterion, kernel_shape, stride):\n",
    "    # initialize lists to monitor test loss and accuracy\n",
    "    test_loss = 0.0\n",
    "    class_correct = list(0. for i in range(10))\n",
    "    class_total = list(0. for i in range(10))\n",
    "\n",
    "    for data, target in test_loader:\n",
    "        # Encoding and encryption\n",
    "        x_enc, windows_nb = ts.im2col_encoding(\n",
    "            context, data.view(28, 28).tolist(), kernel_shape[0],\n",
    "            kernel_shape[1], stride\n",
    "        )\n",
    "        # Encrypted evaluation\n",
    "        enc_output = model(x_enc, windows_nb)\n",
    "        # Decryption of result\n",
    "        output = enc_output.decrypt()\n",
    "        output = torch.tensor(output).view(1, -1)\n",
    "\n",
    "        # compute loss\n",
    "        loss = criterion(output, target)\n",
    "        test_loss += loss.item()\n",
    "        \n",
    "        # convert output probabilities to predicted class\n",
    "        _, pred = torch.max(output, 1)\n",
    "        # compare predictions to true label\n",
    "        correct = np.squeeze(pred.eq(target.data.view_as(pred)))\n",
    "        # calculate test accuracy for each object class\n",
    "        label = target.data[0]\n",
    "        class_correct[label] += correct.item()\n",
    "        class_total[label] += 1\n",
    "\n",
    "\n",
    "    # calculate and print avg test loss\n",
    "    test_loss = test_loss / sum(class_total)\n",
    "    print(f'Test Loss: {test_loss:.6f}\\n')\n",
    "\n",
    "    for label in range(10):\n",
    "        print(\n",
    "            f'Test Accuracy of {label}: {int(100 * class_correct[label] / class_total[label])}% '\n",
    "            f'({int(np.sum(class_correct[label]))}/{int(np.sum(class_total[label]))})'\n",
    "        )\n",
    "\n",
    "    print(\n",
    "        f'\\nTest Accuracy (Overall): {int(100 * np.sum(class_correct) / np.sum(class_total))}% ' \n",
    "        f'({int(np.sum(class_correct))}/{int(np.sum(class_total))})'\n",
    "    )\n",
    "\n",
    "\n",
    "# Load one element at a time\n",
    "test_loader = torch.utils.data.DataLoader(test_data, batch_size=1, shuffle=True)\n",
    "# required for encoding\n",
    "kernel_shape = model.conv1.kernel_size\n",
    "stride = model.conv1.stride[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Choosing the encryption parameters isn't easy, so here is some intuition for why we chose these exact values:\n",
    "\n",
    "1. For a given security level (e.g. 128-bits security) and a polynomial modulus degree (e.g. 8192) there is an upper bound for the bit count of the coefficient modulus (`sum(coeff_mod_bit_sizes)`). If the upper bound is surpassed, there is a need to use a higher polynomial modulus degree (e.g. 16384) in order to make sure we still have the required security level.\n",
    "2. The multiplicative depth is controlled by the number of primes constituting our coefficient modulus.\n",
    "3. All elements of `coeff_mod_bit_sizes[1:-1]` should be equal in TenSEAL, since it takes care of rescaling ciphertexts. We also want the scale used during encryption to have that same number of bits (e.g. a scale of $2^{26}$ with 26-bit middle primes).\n",
    "4. The scale is what controls the precision of the fractional part, since it's the value that plaintexts are multiplied with before being encoded into a polynomial of integer coefficients.\n",
    "\n",
    "Starting with a scale of more than 20 bits, all the middle primes need that same bit count, so we are already over 120 bits of coefficient modulus. With this lower bound on the coefficient modulus and a 128-bit security level, we need a polynomial modulus degree of at least 8192. For that degree, the upper bound on the coefficient modulus before a higher degree is required is 218 bits. Trying different values for the precision and adjusting the coefficient modulus while monitoring the loss and accuracy, we end up with a 26-bit scale and 26-bit middle primes. The last prime has 31 bits, which leaves 5 bits (31 - 26) for the integer part; this should be enough for our use case, since the output values aren't that big."
   ]
  },
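  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of this reasoning, we can tally the bit budget in plain Python (218 bits is the coefficient modulus bound for a degree of 8192 at 128-bit security; the numbers below simply restate the parameters chosen in the next cell):\n",
    "\n",
    "```python\n",
    "# Sanity check of the coefficient modulus budget.\n",
    "bits_scale = 26\n",
    "coeff_mod_bit_sizes = [31] + [bits_scale] * 6 + [31]\n",
    "\n",
    "total_bits = sum(coeff_mod_bit_sizes)\n",
    "print(total_bits)  # 218: exactly at the bound\n",
    "\n",
    "# one middle prime is consumed per rescaling, i.e. per multiplication\n",
    "print(len(coeff_mod_bit_sizes) - 2)  # 6, matching the depth of our model\n",
    "```"
   ]
  },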
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Encryption Parameters\n",
    "\n",
    "# controls precision of the fractional part\n",
    "bits_scale = 26\n",
    "\n",
    "# Create TenSEAL context\n",
    "context = ts.context(\n",
    "    ts.SCHEME_TYPE.CKKS,\n",
    "    poly_modulus_degree=8192,\n",
    "    coeff_mod_bit_sizes=[31, bits_scale, bits_scale, bits_scale, bits_scale, bits_scale, bits_scale, 31]\n",
    ")\n",
    "\n",
    "# set the scale\n",
    "context.global_scale = pow(2, bits_scale)\n",
    "\n",
    "# galois keys are required to do ciphertext rotations\n",
    "context.generate_galois_keys()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This will now run the encrypted evaluation over the whole test set. It will take a while, but afterwards you can feel proud of having performed encrypted inference on a test set of 10000 elements. Congratulations!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Test Loss: 0.136371\n",
      "\n",
      "Test Accuracy of 0: 98% (962/980)\n",
      "Test Accuracy of 1: 99% (1130/1135)\n",
      "Test Accuracy of 2: 97% (1004/1032)\n",
      "Test Accuracy of 3: 98% (995/1010)\n",
      "Test Accuracy of 4: 97% (959/982)\n",
      "Test Accuracy of 5: 97% (869/892)\n",
      "Test Accuracy of 6: 96% (923/958)\n",
      "Test Accuracy of 7: 96% (995/1028)\n",
      "Test Accuracy of 8: 95% (934/974)\n",
      "Test Accuracy of 9: 96% (977/1009)\n",
      "\n",
      "Test Accuracy (Overall): 97% (9748/10000)\n"
     ]
    }
   ],
   "source": [
    "enc_model = EncConvNet(model)\n",
    "enc_test(context, enc_model, test_loader, criterion, kernel_shape, stride)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Cost of the Encrypted Inference\n",
    "\n",
    "To conclude, here are some numbers about the memory and computation costs of this specific use case. Running on a personal computer with an *Intel(R) Core(TM) i7-3612QM CPU @ 2.10GHz*, a single encrypted inference takes about 2 seconds. In a real-world deployment, the encrypted input would also have to be sent from the client to the server, and the encrypted result back from the server to the client, so the size of these objects matters: the encrypted input is about 476KB, while the encrypted result is only about 70KB."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Congratulations!!! - Time to Join the Community!\n",
    "\n",
    "Congratulations on completing this notebook tutorial! If you enjoyed it and would like to join the movement toward privacy-preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!\n",
    "\n",
    "### Star TenSEAL on GitHub\n",
    "\n",
    "The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.\n",
    "\n",
    "- [Star TenSEAL](https://github.com/OpenMined/TenSEAL)\n",
    "\n",
    "### Join our Slack!\n",
    "\n",
    "The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org). #lib_tenseal and #code_tenseal are the main channels for the TenSEAL project.\n",
    "\n",
    "### Join our Team!\n",
    "\n",
    "If you're excited about what we're working on with TenSEAL, and interested in working on homomorphic encryption related use cases, you should definitely join us!\n",
    "\n",
    "[Apply to the crypto team!](https://docs.google.com/forms/d/1T6MJ21V1lb7aEr4ilZOTYQXzxXP6KbpLumZVmTZMSuY/edit)\n",
    "\n",
    "\n",
    "### Donate\n",
    "\n",
    "If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!\n",
    "\n",
    "[OpenMined's Open Collective Page](https://opencollective.com/openmined)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
