{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div align=\"center\">\n",
    "    <a href=\"https://github.com/arogozhnikov/einops\">\n",
    "        <img src=\"http://arogozhnikov.github.io/images/einops/einops_logo_350x350.png\" alt=\"einops package logo\" width=\"150\" height=\"150\" style='padding: 50px 50px 25px;' />\n",
    "    </a>\n",
    "    <div>\n",
    "    <a href=\"https://github.com/arogozhnikov/einops\">[github]</a>, &nbsp;&nbsp;\n",
    "    tutorials\n",
    "    <a href=\"https://github.com/arogozhnikov/einops/blob/master/docs/1-einops-basics.ipynb\">[1]</a> and \n",
    "    <a href=\"https://github.com/arogozhnikov/einops/blob/master/docs/2-einops-for-deep-learning.ipynb\">[2]</a>    \n",
    "    <br />\n",
    "    <br />\n",
    "    </div>\n",
    "</div>\n",
    "\n",
    "# Writing better code with pytorch and einops\n",
    "\n",
    "\n",
    "<br /><br />\n",
    "\n",
    "## Rewriting building blocks of deep learning \n",
    "\n",
    "Now let's get to examples from the real world.\n",
    "These code fragments were taken from official tutorials and popular repositories.\n",
    "\n",
    "Learn how to improve code and how `einops` can help you.\n",
    "\n",
    "**Left**: as it was, **Right**: improved version"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "# start from importing some stuff\n",
    "import torch\n",
    "import torch.nn as nn \n",
    "import torch.nn.functional as F\n",
    "import numpy as np\n",
    "import math\n",
    "\n",
    "from einops import rearrange, reduce, asnumpy, parse_shape\n",
    "from einops.layers.torch import Rearrange, Reduce"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "def initialize(model):\n",
    "    for p in model.parameters():\n",
    "        p.data[:] = torch.from_numpy(np.random.RandomState(sum(p.shape)).randn(*p.shape))\n",
    "    return model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Simple ConvNet"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class Net(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(Net, self).__init__()\n",
    "        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n",
    "        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n",
    "        self.conv2_drop = nn.Dropout2d()\n",
    "        self.fc1 = nn.Linear(320, 50)\n",
    "        self.fc2 = nn.Linear(50, 10)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = F.relu(F.max_pool2d(self.conv1(x), 2))\n",
    "        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n",
    "        x = x.view(-1, 320)\n",
    "        x = F.relu(self.fc1(x))\n",
    "        x = F.dropout(x, training=self.training)\n",
    "        x = self.fc2(x)\n",
    "        return F.log_softmax(x, dim=1)\n",
    "\n",
    "conv_net_old = Net()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "conv_net_new = nn.Sequential(\n",
    "    nn.Conv2d(1, 10, kernel_size=5),\n",
    "    nn.MaxPool2d(kernel_size=2),\n",
    "    nn.ReLU(),\n",
    "    nn.Conv2d(10, 20, kernel_size=5),\n",
    "    nn.MaxPool2d(kernel_size=2),\n",
    "    nn.ReLU(),\n",
    "    nn.Dropout2d(),\n",
    "    Rearrange('b c h w -> b (c h w)'),\n",
    "    nn.Linear(320, 50),\n",
    "    nn.ReLU(),\n",
    "    nn.Dropout(),\n",
    "    nn.Linear(50, 10),\n",
    "    nn.LogSoftmax(dim=1)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Reasons to prefer the new implementation:\n",
    "\n",
    "- in the original code (on the left), if the input size is changed and the batch size is divisible by 16 (which is usually the case), we'll get something senseless after reshaping\n",
    "    - the new code explicitly raises an error in this case\n",
    "- with the new version we can't forget to pass the self.training flag to dropout: nn.Dropout handles it automatically\n",
    "- the code is straightforward to read and analyze\n",
    "- nn.Sequential makes printing / saving / passing trivial, and you don't need the model's source code to load it (which also has a number of benefits)\n",
    "- don't need logsoftmax? Now you can use `conv_net_new[:-1]`. One more reason to prefer `nn.Sequential`\n",
    "- ... and we could also make the ReLUs inplace\n"
   ]
  },
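  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate the first point, a minimal check (assuming the canonical 28x28 MNIST-style input that leaves 320 features before the linear layer): the new model fails loudly on a mismatched input instead of silently regrouping samples."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# correct input size: 28x28 leaves 20 channels x 4 x 4 = 320 features\n",
    "assert conv_net_new(torch.zeros([16, 1, 28, 28])).shape == (16, 10)\n",
    "\n",
    "# a 20x20 input leaves only 80 features; Rearrange + Linear raise immediately,\n",
    "# while the old view(-1, 320) would silently regroup 16 samples into 4\n",
    "try:\n",
    "    conv_net_new(torch.zeros([16, 1, 20, 20]))\n",
    "except RuntimeError as e:\n",
    "    print('shape mismatch caught:', type(e).__name__)"
   ]
  },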
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([4, 10])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "conv_net_old(torch.zeros([16, 1, 20, 20])).shape\n",
    "# conv_net_new(torch.zeros([16, 1, 20, 20])).shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Super-resolution\n",
    "\n",
    "<!-- minified https://github.com/pytorch/examples/tree/master/super_resolution, without initialization -->"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class SuperResolutionNetOld(nn.Module):\n",
    "    def __init__(self, upscale_factor):\n",
    "        super(SuperResolutionNetOld, self).__init__()\n",
    "\n",
    "        self.relu = nn.ReLU()\n",
    "        self.conv1 = nn.Conv2d(1, 64, (5, 5), (1, 1), (2, 2))\n",
    "        self.conv2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1))\n",
    "        self.conv3 = nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1))\n",
    "        self.conv4 = nn.Conv2d(32, upscale_factor ** 2, (3, 3), (1, 1), (1, 1))\n",
    "        self.pixel_shuffle = nn.PixelShuffle(upscale_factor)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.relu(self.conv1(x))\n",
    "        x = self.relu(self.conv2(x))\n",
    "        x = self.relu(self.conv3(x))\n",
    "        x = self.pixel_shuffle(self.conv4(x))\n",
    "        return x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "def SuperResolutionNetNew(upscale_factor):\n",
    "    return nn.Sequential(\n",
    "        nn.Conv2d(1, 64, kernel_size=5, padding=2),\n",
    "        nn.ReLU(inplace=True),\n",
    "        nn.Conv2d(64, 64, kernel_size=3, padding=1),\n",
    "        nn.ReLU(inplace=True),\n",
    "        nn.Conv2d(64, 32, kernel_size=3, padding=1),\n",
    "        nn.ReLU(inplace=True),\n",
    "        nn.Conv2d(32, upscale_factor ** 2, kernel_size=3, padding=1),\n",
    "        Rearrange('b (h2 w2) h w -> b (h h2) (w w2)', h2=upscale_factor, w2=upscale_factor),\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here is the difference:\n",
    "\n",
    "- no need for the special pixel_shuffle operation (and the result is transferable between frameworks)\n",
    "- the output doesn't contain a fake channel axis (and we could do the same for the input)\n",
    "- inplace ReLU is used now; for high-resolution pictures that becomes critical and saves a lot of memory\n",
    "- and all the benefits of nn.Sequential again"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "model1 = initialize(SuperResolutionNetOld(upscale_factor=3))\n",
    "model2 = initialize(SuperResolutionNetNew(upscale_factor=3))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "assert torch.allclose(model1(torch.zeros(1, 1, 30, 30)), model2(torch.zeros(1, 1, 30, 30))[None])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "## that's how this code was meant to be used\n",
    "\n",
    "# from PIL import Image\n",
    "# img = Image.open(opt.input_image).convert('YCbCr')\n",
    "# y, cb, cr = img.split()\n",
    "\n",
    "# model = torch.load(opt.model)\n",
    "# img_to_tensor = ToTensor()\n",
    "# input = img_to_tensor(y).view(1, -1, y.size[1], y.size[0])\n",
    "\n",
    "# if opt.cuda:\n",
    "#     model = model.cuda()\n",
    "#     input = input.cuda()\n",
    "\n",
    "# out = model(input)\n",
    "# out = out.cpu()\n",
    "# out_img_y = out[0].detach().numpy()\n",
    "# out_img_y *= 255.0\n",
    "# out_img_y = out_img_y.clip(0, 255)\n",
    "# out_img_y = Image.fromarray(np.uint8(out_img_y[0]), mode='L')\n",
    "\n",
    "# out_img_cb = cb.resize(out_img_y.size, Image.BICUBIC)\n",
    "# out_img_cr = cr.resize(out_img_y.size, Image.BICUBIC)\n",
    "# out_img = Image.merge('YCbCr', [out_img_y, out_img_cb, out_img_cr]).convert('RGB')\n",
    "\n",
    "## Benefits\n",
    "\n",
    "# - no need to remember the order of components in PIL.Image.size (as you can see, it is actually reversed)\n",
    "# - code explicitly shows the shapes passed in and out\n",
    "# - normalization to the [0, 1] range and back is also explicit (in the original code you have to remember that ToTensor divides by 255)\n",
    "\n",
    "input_image = '../../logo/einops_logo_350x350.png' \n",
    "from PIL import Image\n",
    "import numpy as np\n",
    "\n",
    "from torchvision.transforms import ToTensor\n",
    "model = SuperResolutionNetOld(upscale_factor=2)\n",
    "\n",
    "img = Image.open(input_image).convert('YCbCr')\n",
    "y, cb, cr = img.split()\n",
    "\n",
    "img_to_tensor = ToTensor()\n",
    "input = img_to_tensor(y).view(1, -1, y.size[1], y.size[0])\n",
    "out = model(input)\n",
    "\n",
    "out_img_y = out[0].detach().numpy()\n",
    "out_img_y = np.clip(out_img_y[0] * 255, 0, 255)\n",
    "\n",
    "model = SuperResolutionNetNew(upscale_factor=2)\n",
    "\n",
    "img = Image.open(input_image).convert('YCbCr')\n",
    "y, cb, cr = img.split()\n",
    "\n",
    "y = torch.from_numpy(np.asarray(y, dtype='float32') / 255)\n",
    "out = model(rearrange(y, 'h w -> () () h w'))\n",
    "\n",
    "out_img_y = asnumpy(rearrange(out, '() h w -> h w'))\n",
    "out_img_y = np.clip(out_img_y * 255, 0, 255)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Restyling Gram matrix for style transfer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<!-- from https://github.com/pytorch/examples/blob/29c2ed8ca6dc36fc78a3e74a5908615619987863/fast_neural_style/neural_style/utils.py#L21-L26 -->\n",
    "\n",
    "The original code is already good - the first line shows what kind of input is expected\n",
    "\n",
    "- the einsum operation should be read like:\n",
    "  - for each batch and for each pair of channels, we sum over h and w\n",
    "- I've also changed the normalization, because that's how the Gram matrix is defined; otherwise we should call it a normalized Gram matrix or the like"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "def gram_matrix_old(y):\n",
    "    (b, ch, h, w) = y.size()\n",
    "    features = y.view(b, ch, w * h)\n",
    "    features_t = features.transpose(1, 2)\n",
    "    gram = features.bmm(features_t) / (ch * h * w)\n",
    "    return gram"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "def gram_matrix_new(y):\n",
    "    b, ch, h, w = y.shape\n",
    "    return torch.einsum('bchw,bdhw->bcd', [y, y]) / (h * w)"
   ]
  },
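  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That reading of the einsum can be verified element-wise on a tiny tensor (an arbitrary 2x3x4x4 example):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y = torch.arange(2 * 3 * 4 * 4, dtype=torch.float32).reshape(2, 3, 4, 4)\n",
    "gram = torch.einsum('bchw,bdhw->bcd', [y, y]) / (4 * 4)\n",
    "# entry [b, c, d]: for batch b and channel pair (c, d), we sum over h and w\n",
    "assert torch.allclose(gram[0, 1, 2], (y[0, 1] * y[0, 2]).sum() / (4 * 4))"
   ]
  },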
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It would be great to use just `'b c1 h w,b c2 h w->b c1 c2'`, but einsum supports only one-letter axes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.randn([32, 128, 40, 40])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "7.58 ms ± 492 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n",
      "9.66 ms ± 258 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
     ]
    }
   ],
   "source": [
    "%timeit gram_matrix_old(x).sum()\n",
    "%timeit gram_matrix_new(x).sum()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "assert torch.allclose(gram_matrix_old(x), gram_matrix_new(x) / 128)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "# x = x.to('cuda')\n",
    "# %timeit -n100 gram_matrix_old(x).sum(); torch.cuda.synchronize()\n",
    "# %timeit -n100 gram_matrix_new(x).sum(); torch.cuda.synchronize()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Recurrent model\n",
    "\n",
    "All we did here was make the information about shapes explicit, so there is nothing left to decipher\n",
    "\n",
    "<!-- simplified version of https://github.com/pytorch/examples/blob/master/word_language_model/model.py -->\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class RNNModelOld(nn.Module):\n",
    "    \"\"\"Container module with an encoder, a recurrent module, and a decoder.\"\"\"\n",
    "    def __init__(self, ntoken, ninp, nhid, nlayers, dropout=0.5):\n",
    "        super(RNNModelOld, self).__init__()\n",
    "        self.drop = nn.Dropout(dropout)\n",
    "        self.encoder = nn.Embedding(ntoken, ninp)\n",
    "        self.rnn = nn.LSTM(ninp, nhid, nlayers, dropout=dropout)\n",
    "        self.decoder = nn.Linear(nhid, ntoken)\n",
    "\n",
    "    def forward(self, input, hidden):\n",
    "        emb = self.drop(self.encoder(input))\n",
    "        output, hidden = self.rnn(emb, hidden)\n",
    "        output = self.drop(output)\n",
    "        decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))\n",
    "        return decoded.view(output.size(0), output.size(1), decoded.size(1)), hidden\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "class RNNModelNew(nn.Module):\n",
    "    \"\"\"Container module with an encoder, a recurrent module, and a decoder.\"\"\"\n",
    "    def __init__(self, ntoken, ninp, nhid, nlayers, dropout=0.5):\n",
    "        super().__init__()\n",
    "        self.drop = nn.Dropout(p=dropout)\n",
    "        self.encoder = nn.Embedding(ntoken, ninp)\n",
    "        self.rnn = nn.LSTM(ninp, nhid, nlayers, dropout=dropout)\n",
    "        self.decoder = nn.Linear(nhid, ntoken)\n",
    "\n",
    "    def forward(self, input, hidden):\n",
    "        t, b = input.shape\n",
    "        emb = self.drop(self.encoder(input))\n",
    "        output, hidden = self.rnn(emb, hidden)\n",
    "        output = rearrange(self.drop(output), 't b nhid -> (t b) nhid')\n",
    "        decoded = rearrange(self.decoder(output), '(t b) token -> t b token', t=t, b=b)\n",
    "        return decoded, hidden"
   ]
  },
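  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sanity check with hypothetical sizes (a vocabulary of 1000 tokens, sequences of length 7, batch of 3); passing None as the hidden state lets nn.LSTM initialize it with zeros:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rnn_model = initialize(RNNModelNew(ntoken=1000, ninp=32, nhid=64, nlayers=2))\n",
    "tokens = torch.randint(0, 1000, [7, 3])  # [time, batch]\n",
    "decoded, hidden = rnn_model(tokens, None)\n",
    "assert decoded.shape == (7, 3, 1000)"
   ]
  },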
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Channel shuffle (from shufflenet) \n",
    "<!-- from https://github.com/jaxony/ShuffleNet/blob/master/model.py -->"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "def channel_shuffle_old(x, groups):\n",
    "    batchsize, num_channels, height, width = x.data.size()\n",
    "\n",
    "    channels_per_group = num_channels // groups\n",
    "    \n",
    "    # reshape\n",
    "    x = x.view(batchsize, groups, \n",
    "        channels_per_group, height, width)\n",
    "\n",
    "    # transpose\n",
    "    # - contiguous() required if transpose() is used before view().\n",
    "    #   See https://github.com/pytorch/pytorch/issues/764\n",
    "    x = torch.transpose(x, 1, 2).contiguous()\n",
    "\n",
    "    # flatten\n",
    "    x = x.view(batchsize, -1, height, width)\n",
    "\n",
    "    return x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "def channel_shuffle_new(x, groups):\n",
    "    return rearrange(x, 'b (c1 c2) h w -> b (c2 c1) h w', c1=groups)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "While progress is obvious, this is not the limit. As you'll see below, we don't even need to write these couple of lines."
   ]
  },
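  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before timing, a check on a small example tensor that the rearrange pattern reproduces the original shuffle exactly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# distinct values in every cell make any mis-shuffling visible\n",
    "x = torch.arange(2 * 6 * 3 * 3, dtype=torch.float32).reshape(2, 6, 3, 3)\n",
    "assert torch.allclose(channel_shuffle_old(x, 3), channel_shuffle_new(x, 3))"
   ]
  },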
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.zeros([32, 64, 100, 100])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "51.2 ms ± 2.18 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)\n",
      "49.9 ms ± 594 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
     ]
    }
   ],
   "source": [
    "%timeit -n100 channel_shuffle_old(x, 8); torch.cuda.synchronize()\n",
    "%timeit -n100 channel_shuffle_new(x, 8); torch.cuda.synchronize()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Shufflenet"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "def conv3x3(in_channels, out_channels, stride=1, \n",
    "            padding=1, bias=True, groups=1):    \n",
    "    \"\"\"3x3 convolution with padding\n",
    "    \"\"\"\n",
    "    return nn.Conv2d(\n",
    "        in_channels, \n",
    "        out_channels, \n",
    "        kernel_size=3, \n",
    "        stride=stride,\n",
    "        padding=padding,\n",
    "        bias=bias,\n",
    "        groups=groups)\n",
    "\n",
    "\n",
    "def conv1x1(in_channels, out_channels, groups=1):\n",
    "    \"\"\"1x1 convolution\n",
    "    - Normal pointwise convolution When groups == 1\n",
    "    - Grouped pointwise convolution when groups > 1\n",
    "    \"\"\"\n",
    "    return nn.Conv2d(\n",
    "        in_channels, \n",
    "        out_channels, \n",
    "        kernel_size=1, \n",
    "        groups=groups,\n",
    "        stride=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "from collections import OrderedDict\n",
    "\n",
    "def channel_shuffle(x, groups):\n",
    "    batchsize, num_channels, height, width = x.data.size()\n",
    "\n",
    "    channels_per_group = num_channels // groups\n",
    "    \n",
    "    # reshape\n",
    "    x = x.view(batchsize, groups, \n",
    "        channels_per_group, height, width)\n",
    "\n",
    "    # transpose\n",
    "    # - contiguous() required if transpose() is used before view().\n",
    "    #   See https://github.com/pytorch/pytorch/issues/764\n",
    "    x = torch.transpose(x, 1, 2).contiguous()\n",
    "\n",
    "    # flatten\n",
    "    x = x.view(batchsize, -1, height, width)\n",
    "\n",
    "    return x\n",
    "\n",
    "class ShuffleUnitOld(nn.Module):\n",
    "    def __init__(self, in_channels, out_channels, groups=3,\n",
    "                 grouped_conv=True, combine='add'):\n",
    "        \n",
    "        super(ShuffleUnitOld, self).__init__()\n",
    "\n",
    "        self.in_channels = in_channels\n",
    "        self.out_channels = out_channels\n",
    "        self.grouped_conv = grouped_conv\n",
    "        self.combine = combine\n",
    "        self.groups = groups\n",
    "        self.bottleneck_channels = self.out_channels // 4\n",
    "\n",
    "        # define the type of ShuffleUnit\n",
    "        if self.combine == 'add':\n",
    "            # ShuffleUnit Figure 2b\n",
    "            self.depthwise_stride = 1\n",
    "            self._combine_func = self._add\n",
    "        elif self.combine == 'concat':\n",
    "            # ShuffleUnit Figure 2c\n",
    "            self.depthwise_stride = 2\n",
    "            self._combine_func = self._concat\n",
    "            \n",
    "            # ensure output of concat has the same channels as \n",
    "            # original output channels.\n",
    "            self.out_channels -= self.in_channels\n",
    "        else:\n",
    "            raise ValueError(\"Cannot combine tensors with \\\"{}\\\"\" \\\n",
    "                             \"Only \\\"add\\\" and \\\"concat\\\" are\" \\\n",
    "                             \"supported\".format(self.combine))\n",
    "\n",
    "        # Use a 1x1 grouped or non-grouped convolution to reduce input channels\n",
    "        # to bottleneck channels, as in a ResNet bottleneck module.\n",
    "        # NOTE: Do not use group convolution for the first conv1x1 in Stage 2.\n",
    "        self.first_1x1_groups = self.groups if grouped_conv else 1\n",
    "\n",
    "        self.g_conv_1x1_compress = self._make_grouped_conv1x1(\n",
    "            self.in_channels,\n",
    "            self.bottleneck_channels,\n",
    "            self.first_1x1_groups,\n",
    "            batch_norm=True,\n",
    "            relu=True\n",
    "            )\n",
    "\n",
    "        # 3x3 depthwise convolution followed by batch normalization\n",
    "        self.depthwise_conv3x3 = conv3x3(\n",
    "            self.bottleneck_channels, self.bottleneck_channels,\n",
    "            stride=self.depthwise_stride, groups=self.bottleneck_channels)\n",
    "        self.bn_after_depthwise = nn.BatchNorm2d(self.bottleneck_channels)\n",
    "\n",
    "        # Use 1x1 grouped convolution to expand from \n",
    "        # bottleneck_channels to out_channels\n",
    "        self.g_conv_1x1_expand = self._make_grouped_conv1x1(\n",
    "            self.bottleneck_channels,\n",
    "            self.out_channels,\n",
    "            self.groups,\n",
    "            batch_norm=True,\n",
    "            relu=False\n",
    "            )\n",
    "\n",
    "\n",
    "    @staticmethod\n",
    "    def _add(x, out):\n",
    "        # residual connection\n",
    "        return x + out\n",
    "\n",
    "\n",
    "    @staticmethod\n",
    "    def _concat(x, out):\n",
    "        # concatenate along channel axis\n",
    "        return torch.cat((x, out), 1)\n",
    "\n",
    "\n",
    "    def _make_grouped_conv1x1(self, in_channels, out_channels, groups,\n",
    "        batch_norm=True, relu=False):\n",
    "\n",
    "        modules = OrderedDict()\n",
    "        conv = conv1x1(in_channels, out_channels, groups=groups)\n",
    "        modules['conv1x1'] = conv\n",
    "\n",
    "        if batch_norm:\n",
    "            modules['batch_norm'] = nn.BatchNorm2d(out_channels)\n",
    "        if relu:\n",
    "            modules['relu'] = nn.ReLU()\n",
    "        if len(modules) > 1:\n",
    "            return nn.Sequential(modules)\n",
    "        else:\n",
    "            return conv\n",
    "\n",
    "\n",
    "    def forward(self, x):\n",
    "        # save for combining later with output\n",
    "        residual = x\n",
    "        if self.combine == 'concat':\n",
    "            residual = F.avg_pool2d(residual, kernel_size=3, \n",
    "                stride=2, padding=1)\n",
    "\n",
    "        out = self.g_conv_1x1_compress(x)\n",
    "        out = channel_shuffle(out, self.groups)\n",
    "        out = self.depthwise_conv3x3(out)\n",
    "        out = self.bn_after_depthwise(out)\n",
    "        out = self.g_conv_1x1_expand(out)\n",
    "        \n",
    "        out = self._combine_func(residual, out)\n",
    "        return F.relu(out)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "class ShuffleUnitNew(nn.Module):\n",
    "    def __init__(self, in_channels, out_channels, groups=3, \n",
    "                 grouped_conv=True, combine='add'):\n",
    "        super().__init__()\n",
    "        first_1x1_groups = groups if grouped_conv else 1\n",
    "        bottleneck_channels = out_channels // 4\n",
    "        self.combine = combine\n",
    "        if combine == 'add':\n",
    "            # ShuffleUnit Figure 2b\n",
    "            self.left = Rearrange('...->...') # identity\n",
    "            depthwise_stride = 1\n",
    "        else:\n",
    "            # ShuffleUnit Figure 2c\n",
    "            self.left = nn.AvgPool2d(kernel_size=3, stride=2, padding=1)\n",
    "            depthwise_stride = 2\n",
    "            # ensure output of concat has the same channels as original output channels.\n",
    "            out_channels -= in_channels\n",
    "            assert out_channels > 0\n",
    "\n",
    "        self.right = nn.Sequential(\n",
    "            # Use a 1x1 grouped or non-grouped convolution to reduce input channels\n",
    "            # to bottleneck channels, as in a ResNet bottleneck module.\n",
    "            conv1x1(in_channels, bottleneck_channels, groups=first_1x1_groups),\n",
    "            nn.BatchNorm2d(bottleneck_channels),\n",
    "            nn.ReLU(inplace=True),\n",
    "            # channel shuffle\n",
    "            Rearrange('b (c1 c2) h w -> b (c2 c1) h w', c1=groups),\n",
    "            # 3x3 depthwise convolution followed by batch \n",
    "            conv3x3(bottleneck_channels, bottleneck_channels,\n",
    "                    stride=depthwise_stride, groups=bottleneck_channels),\n",
    "            nn.BatchNorm2d(bottleneck_channels),\n",
    "            # Use 1x1 grouped convolution to expand from \n",
    "            # bottleneck_channels to out_channels\n",
    "            conv1x1(bottleneck_channels, out_channels, groups=groups),\n",
    "            nn.BatchNorm2d(out_channels),\n",
    "        )        \n",
    "        \n",
    "    def forward(self, x):\n",
    "        if self.combine == 'add':\n",
    "            combined = self.left(x) + self.right(x)\n",
    "        else:\n",
    "            combined = torch.cat([self.left(x), self.right(x)], dim=1)\n",
    "        return F.relu(combined, inplace=True)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Rewriting the code helped to identify:\n",
    "\n",
    "- there is no sense in shuffling channels while not using groups in the first convolution\n",
    "  (indeed, the paper doesn't do that). However, the result is an equivalent model.\n",
    "- it is also strange that the first convolution may be non-grouped while the last convolution is always grouped\n",
    "  (and that differs from the paper)\n",
    "\n",
    "Other comments:\n",
    "\n",
    "- an identity layer for pytorch is introduced here (the trivial `Rearrange('...->...')`)\n",
    "- the last thing left is to get rid of the conv1x1 and conv3x3 helpers - those are no better than the standard layers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "model1 = ShuffleUnitOld(32, 32, groups=4, grouped_conv=True, combine='add')\n",
    "model2 = ShuffleUnitNew(32, 32, groups=4, grouped_conv=True, combine='add')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x = torch.randn(1, 32, 14, 14)\n",
    "initialize(model1)\n",
    "initialize(model2)\n",
    "torch.allclose(model1(x), model2(x))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pickle\n",
    "dump1 = pickle.dumps(model1._combine_func)\n",
    "dump2 = pickle.dumps(model2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Simplifying ResNet"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class ResNetOld(nn.Module):\n",
    "\n",
    "    def __init__(self, block, layers, num_classes=1000):\n",
    "        self.inplanes = 64\n",
    "        super(ResNetOld, self).__init__()\n",
    "        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,\n",
    "                               bias=False)\n",
    "        self.bn1 = nn.BatchNorm2d(64)\n",
    "        self.relu = nn.ReLU(inplace=True)\n",
    "        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n",
    "        self.layer1 = self._make_layer(block, 64, layers[0])\n",
    "        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)\n",
    "        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)\n",
    "        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)\n",
    "        self.avgpool = nn.AvgPool2d(7, stride=1)\n",
    "\n",
    "        self.fc = nn.Linear(512 * block.expansion, num_classes)\n",
    "\n",
    "        for m in self.modules():\n",
    "            if isinstance(m, nn.Conv2d):\n",
    "                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n",
    "                m.weight.data.normal_(0, math.sqrt(2. / n))\n",
    "            elif isinstance(m, nn.BatchNorm2d):\n",
    "                m.weight.data.fill_(1)\n",
    "                m.bias.data.zero_()\n",
    "\n",
    "    def _make_layer(self, block, planes, blocks, stride=1):\n",
    "        downsample = None\n",
    "        if stride != 1 or self.inplanes != planes * block.expansion:\n",
    "            downsample = nn.Sequential(\n",
    "                nn.Conv2d(self.inplanes, planes * block.expansion,\n",
    "                          kernel_size=1, stride=stride, bias=False),\n",
    "                nn.BatchNorm2d(planes * block.expansion),\n",
    "            )\n",
    "\n",
    "        layers = []\n",
    "        layers.append(block(self.inplanes, planes, stride, downsample))\n",
    "        self.inplanes = planes * block.expansion\n",
    "        for i in range(1, blocks):\n",
    "            layers.append(block(self.inplanes, planes))\n",
    "\n",
    "        return nn.Sequential(*layers)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.conv1(x)\n",
    "        x = self.bn1(x)\n",
    "        x = self.relu(x)\n",
    "        x = self.maxpool(x)\n",
    "\n",
    "        x = self.layer1(x)\n",
    "        x = self.layer2(x)\n",
    "        x = self.layer3(x)\n",
    "        x = self.layer4(x)\n",
    "        x = self.avgpool(x)\n",
    "        x = x.view(x.size(0), -1)\n",
    "        x = self.fc(x)\n",
    "\n",
    "        return x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "def make_layer(inplanes, planes, block, n_blocks, stride=1):\n",
    "    downsample = None\n",
    "    if stride != 1 or inplanes != planes * block.expansion:\n",
    "        # output size won't match input, so adjust residual\n",
    "        downsample = nn.Sequential(\n",
    "            nn.Conv2d(inplanes, planes * block.expansion,\n",
    "                      kernel_size=1, stride=stride, bias=False),\n",
    "            nn.BatchNorm2d(planes * block.expansion),\n",
    "        )\n",
    "    return nn.Sequential(\n",
    "        block(inplanes, planes, stride, downsample),\n",
    "        *[block(planes * block.expansion, planes) for _ in range(1, n_blocks)]\n",
    "    )\n",
    "\n",
    "\n",
    "def ResNetNew(block, layers, num_classes=1000):    \n",
    "    e = block.expansion\n",
    "    \n",
    "    resnet = nn.Sequential(\n",
    "        Rearrange('b c h w -> b c h w', c=3, h=224, w=224),\n",
    "        nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),\n",
    "        nn.BatchNorm2d(64),\n",
    "        nn.ReLU(inplace=True),\n",
    "        nn.MaxPool2d(kernel_size=3, stride=2, padding=1),\n",
    "        make_layer(64,      64,  block, layers[0], stride=1),\n",
    "        make_layer(64 * e,  128, block, layers[1], stride=2),\n",
    "        make_layer(128 * e, 256, block, layers[2], stride=2),\n",
    "        make_layer(256 * e, 512, block, layers[3], stride=2),\n",
    "        # combined AvgPool and view in one averaging operation\n",
    "        Reduce('b c h w -> b c', 'mean'),\n",
    "        nn.Linear(512 * e, num_classes),\n",
    "    )\n",
    "    \n",
    "    # initialization\n",
    "    for m in resnet.modules():\n",
    "        if isinstance(m, nn.Conv2d):\n",
    "            n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n",
    "            m.weight.data.normal_(0, math.sqrt(2. / n))\n",
    "        elif isinstance(m, nn.BatchNorm2d):\n",
    "            m.weight.data.fill_(1)\n",
    "            m.bias.data.zero_()\n",
    "    return resnet"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Changes:\n",
    "\n",
    "- explicit check of the input shape\n",
    "- no views and a simple sequential structure; the result is just an nn.Sequential, so it can always be saved/passed around/etc.\n",
    "- no need for AvgPool and additional views; this place is much clearer now\n",
    "- `make_layer` doesn't rely on internal state (that was quite an error-prone place)"
   ]
  },
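  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sanity check (on numpy arrays, which einops handles just like torch tensors) showing that the `Reduce('b c h w -> b c', 'mean')` layer above really performs global average pooling and flattening in one step:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from einops import reduce\n",
    "\n",
    "x = np.arange(2 * 3 * 4 * 4, dtype=float).reshape(2, 3, 4, 4)\n",
    "# one averaging operation instead of AvgPool2d + view\n",
    "pooled = reduce(x, 'b c h w -> b c', 'mean')\n",
    "assert pooled.shape == (2, 3)\n",
    "assert np.allclose(pooled, x.mean(axis=(2, 3)))\n",
    "```"
   ]
  },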
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torchvision.models.resnet import BasicBlock, Bottleneck, ResNet"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.randn(2, 3, 224, 224)\n",
    "with torch.no_grad():\n",
    "    model_old = ResNetOld(BasicBlock, layers=[2, 2, 2, 3])\n",
    "    model_new = ResNetNew(BasicBlock, layers=[2, 2, 2, 3])\n",
    "    initialize(model_old)\n",
    "    initialize(model_new)\n",
    "    assert torch.allclose(model_old(x), model_new(x), atol=1e-3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [],
   "source": [
    "# with torch.no_grad():\n",
    "#     x = torch.randn([2, 512, 7, 7])\n",
    "#     torch.allclose(nn.AvgPool2d(7)(x), reduce(x, 'b c h w -> b c', 'mean'), atol=1e-8)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Improving RNN language modelling"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class RNNOld(nn.Module):\n",
    "    def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers, bidirectional, dropout):\n",
    "        super().__init__()\n",
    "        \n",
    "        self.embedding = nn.Embedding(vocab_size, embedding_dim)\n",
    "        self.rnn = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers, \n",
    "                           bidirectional=bidirectional, dropout=dropout)\n",
    "        self.fc = nn.Linear(hidden_dim*2, output_dim)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "        \n",
    "    def forward(self, x):\n",
    "        #x = [sent len, batch size]\n",
    "        \n",
    "        embedded = self.dropout(self.embedding(x))\n",
    "        \n",
    "        #embedded = [sent len, batch size, emb dim]\n",
    "        \n",
    "        output, (hidden, cell) = self.rnn(embedded)\n",
    "        \n",
    "        #output = [sent len, batch size, hid dim * num directions]\n",
    "        #hidden = [num layers * num directions, batch size, hid dim]\n",
    "        #cell = [num layers * num directions, batch size, hid dim]\n",
    "        \n",
    "        #concat the final forward (hidden[-2,:,:]) and backward (hidden[-1,:,:]) hidden layers\n",
    "        #and apply dropout\n",
    "        \n",
    "        hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim=1))\n",
    "                \n",
    "        #hidden = [batch size, hid dim * num directions]\n",
    "            \n",
    "        return self.fc(hidden.squeeze(0))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "class RNNNew(nn.Module):\n",
    "    def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers, bidirectional, dropout):\n",
    "        super().__init__()\n",
    "        \n",
    "        self.embedding = nn.Embedding(vocab_size, embedding_dim)\n",
    "        self.rnn = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers, \n",
    "                           bidirectional=bidirectional, dropout=dropout)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "        self.directions = 2 if bidirectional else 1\n",
    "        self.fc = nn.Linear(hidden_dim * self.directions, output_dim)\n",
    "        \n",
    "    def forward(self, x):\n",
    "        #x = [sent len, batch size]        \n",
    "        embedded = self.dropout(self.embedding(x))\n",
    "        \n",
    "        #embedded = [sent len, batch size, emb dim]\n",
    "        output, (hidden, cell) = self.rnn(embedded)\n",
    "        \n",
    "        hidden = rearrange(hidden, '(layer dir) b c -> layer b (dir c)', \n",
    "                           dir=self.directions)\n",
    "        # take the final layer's hidden\n",
    "        return self.fc(self.dropout(hidden[-1]))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_old = initialize(RNNOld(10, 10, 10, output_dim=15, n_layers=2, bidirectional=True, dropout=0.1)).eval()\n",
    "model_new = initialize(RNNNew(10, 10, 10, output_dim=15, n_layers=2, bidirectional=True, dropout=0.1)).eval()\n",
    "\n",
    "x = torch.randint(0, 10, size=[23, 10]).long()\n",
    "\n",
    "assert torch.allclose(model_old(x), model_new(x))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [],
   "source": [
    "# this code fails\n",
    "# model_old = initialize(RNNOld(10, 10, 10, output_dim=15, n_layers=1, bidirectional=False, dropout=0.1)).eval()\n",
    "# model_old(x).shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- the original code misbehaves for non-bidirectional models\n",
    "- ... and fails when bidirectional=False and there is only one layer\n",
    "- the modified code shows both how hidden is structured and how it is transformed"
   ]
  },
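  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what the `rearrange` call in `RNNNew` does to the hidden state, here is a minimal numpy sketch (the `(layer dir)` packing is assumed to follow pytorch's convention, with layers varying slower than directions):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from einops import rearrange\n",
    "\n",
    "layers, dirs, batch, hid = 2, 2, 3, 5\n",
    "hidden = np.random.rand(layers * dirs, batch, hid)\n",
    "# unpack layers and directions, then concat the directions along features\n",
    "result = rearrange(hidden, '(layer dir) b c -> layer b (dir c)', dir=dirs)\n",
    "expected = hidden.reshape(layers, dirs, batch, hid).transpose(0, 2, 1, 3).reshape(layers, batch, dirs * hid)\n",
    "assert result.shape == (2, 3, 10)\n",
    "assert np.allclose(result, expected)\n",
    "```"
   ]
  },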
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Writing FastText faster\n",
    "\n",
    "<!-- from # https://github.com/bentrevett/pytorch-sentiment-analysis/blob/master/3%20-%20Faster%20Sentiment%20Analysis.ipynb -->"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class FastTextOld(nn.Module):\n",
    "    def __init__(self, vocab_size, embedding_dim, output_dim):\n",
    "        super().__init__()\n",
    "        \n",
    "        self.embedding = nn.Embedding(vocab_size, embedding_dim)\n",
    "        self.fc = nn.Linear(embedding_dim, output_dim)\n",
    "        \n",
    "    def forward(self, x):\n",
    "        \n",
    "        #x = [sent len, batch size]\n",
    "        \n",
    "        embedded = self.embedding(x)\n",
    "                \n",
    "        #embedded = [sent len, batch size, emb dim]\n",
    "        \n",
    "        embedded = embedded.permute(1, 0, 2)\n",
    "        \n",
    "        #embedded = [batch size, sent len, emb dim]\n",
    "        \n",
    "        pooled = F.avg_pool2d(embedded, (embedded.shape[1], 1)).squeeze(1) \n",
    "        \n",
    "        #pooled = [batch size, embedding_dim]\n",
    "                \n",
    "        return self.fc(pooled)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "def FastTextNew(vocab_size, embedding_dim, output_dim):\n",
    "    return nn.Sequential(\n",
    "        Rearrange('t b -> t b'),\n",
    "        nn.Embedding(vocab_size, embedding_dim),\n",
    "        Reduce('t b c -> b c', 'mean'),\n",
    "        nn.Linear(embedding_dim, output_dim),\n",
    "        Rearrange('b c -> b c'),\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some comments on the new code:\n",
    "\n",
    "- the first and last operations do nothing and can be removed\n",
    "    - they were added to explicitly show the expected input and output\n",
    "- this also gives you the flexibility to change the interface by editing a single line. Should you need to accept inputs as (batch, time), \n",
    "  just change the first line to `Rearrange('b t -> t b'),`"
   ]
  },
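  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of how such an 'identity' operation still validates input (shown on numpy arrays; the `Rearrange` layer behaves the same on torch tensors):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from einops import rearrange\n",
    "\n",
    "x = np.zeros((7, 2))            # (time, batch)\n",
    "y = rearrange(x, 't b -> t b')  # does nothing, but documents and checks the rank\n",
    "assert y.shape == (7, 2)\n",
    "\n",
    "try:\n",
    "    rearrange(np.zeros((7, 2, 3)), 't b -> t b')  # wrong rank\n",
    "    raised = False\n",
    "except Exception:\n",
    "    raised = True\n",
    "assert raised\n",
    "```"
   ]
  },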
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# CNNs for text classification"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class CNNOld(nn.Module):\n",
    "    def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim, dropout):\n",
    "        super().__init__()\n",
    "        self.embedding = nn.Embedding(vocab_size, embedding_dim)\n",
    "        self.conv_0 = nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=(filter_sizes[0],embedding_dim))\n",
    "        self.conv_1 = nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=(filter_sizes[1],embedding_dim))\n",
    "        self.conv_2 = nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=(filter_sizes[2],embedding_dim))\n",
    "        self.fc = nn.Linear(len(filter_sizes)*n_filters, output_dim)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "        \n",
    "    def forward(self, x):\n",
    "        \n",
    "        #x = [sent len, batch size]\n",
    "        \n",
    "        x = x.permute(1, 0)\n",
    "                \n",
    "        #x = [batch size, sent len]\n",
    "        \n",
    "        embedded = self.embedding(x)\n",
    "                \n",
    "        #embedded = [batch size, sent len, emb dim]\n",
    "        \n",
    "        embedded = embedded.unsqueeze(1)\n",
    "        \n",
    "        #embedded = [batch size, 1, sent len, emb dim]\n",
    "        \n",
    "        conved_0 = F.relu(self.conv_0(embedded).squeeze(3))\n",
    "        conved_1 = F.relu(self.conv_1(embedded).squeeze(3))\n",
    "        conved_2 = F.relu(self.conv_2(embedded).squeeze(3))\n",
    "            \n",
    "        #conv_n = [batch size, n_filters, sent len - filter_sizes[n]]\n",
    "        \n",
    "        pooled_0 = F.max_pool1d(conved_0, conved_0.shape[2]).squeeze(2)\n",
    "        pooled_1 = F.max_pool1d(conved_1, conved_1.shape[2]).squeeze(2)\n",
    "        pooled_2 = F.max_pool1d(conved_2, conved_2.shape[2]).squeeze(2)\n",
    "        \n",
    "        #pooled_n = [batch size, n_filters]\n",
    "        \n",
    "        cat = self.dropout(torch.cat((pooled_0, pooled_1, pooled_2), dim=1))\n",
    "\n",
    "        #cat = [batch size, n_filters * len(filter_sizes)]\n",
    "            \n",
    "        return self.fc(cat)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "class CNNNew(nn.Module):\n",
    "    def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim, dropout):\n",
    "        super().__init__()\n",
    "        self.embedding = nn.Embedding(vocab_size, embedding_dim)\n",
    "        self.convs = nn.ModuleList([\n",
    "            nn.Conv1d(embedding_dim, n_filters, kernel_size=size) for size in filter_sizes\n",
    "        ])\n",
    "        self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "        \n",
    "    def forward(self, x):\n",
    "        x = rearrange(x, 't b -> t b')\n",
    "        emb = rearrange(self.embedding(x), 't b c -> b c t')\n",
    "        pooled = [reduce(conv(emb), 'b c t -> b c', 'max') for conv in self.convs]\n",
    "        concatenated = rearrange(pooled, 'filter b c -> b (filter c)')\n",
    "        return self.fc(self.dropout(F.relu(concatenated)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- the original code misuses Conv2d, while Conv1d is the right choice\n",
    "- the fixed code works with any number of filter_sizes (and won't fail)\n",
    "- the first line of the new code does nothing, but was added for clarity"
   ]
  },
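  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One detail worth noting: `rearrange` accepts a list of tensors and treats the list index as the leftmost axis, which is what concatenates the per-filter results above. A numpy sketch:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from einops import rearrange\n",
    "\n",
    "pooled = [np.full((2, 4), i, dtype=float) for i in range(3)]  # 3 filters, each (batch, channels)\n",
    "cat = rearrange(pooled, 'filter b c -> b (filter c)')\n",
    "assert cat.shape == (2, 12)\n",
    "assert np.allclose(cat, np.concatenate(pooled, axis=1))\n",
    "```"
   ]
  },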
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [],
   "source": [
    "# old_model = initialize(CNNOld(32, 32, 32, [1, 2, 4], 32, dropout=0.1)).eval()\n",
    "# new_model = initialize(CNNNew(32, 32, 32, [1, 2, 4], 32, dropout=0.1)).eval()\n",
    "\n",
    "# x = torch.zeros([10, 20]).long()\n",
    "# assert torch.allclose(old_model(x), new_model(x), atol=1e-3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Highway convolutions\n",
    "\n",
    "- Highway convolutions are common in TTS systems. The code below makes the splitting a bit more explicit.\n",
    "- The splitting policy may turn out to be important if the input was previously grouped over the channel axis (group convolutions or bidirectional LSTMs/GRUs).\n",
    "- The same applies to GLU and gated units in general."
   ]
  },
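  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal numpy sketch of this splitting policy: `(split c)` takes the first half of the channels, then the second, matching `torch.chunk(L, 2, 1)` in the old code:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from einops import rearrange\n",
    "\n",
    "x = np.arange(6).reshape(1, 6, 1)  # (batch, channels, time)\n",
    "h1, h2 = rearrange(x, 'b (split c) t -> split b c t', split=2)\n",
    "# same halves as chunking along the channel axis\n",
    "assert np.array_equal(h1.flatten(), [0, 1, 2])\n",
    "assert np.array_equal(h2.flatten(), [3, 4, 5])\n",
    "```"
   ]
  },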
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class HighwayConv1dOld(nn.Conv1d):\n",
    "    def forward(self, inputs):\n",
    "        L = super(HighwayConv1dOld, self).forward(inputs)\n",
    "        H1, H2 = torch.chunk(L, 2, 1)  # chunk at the feature dim\n",
    "        torch.sigmoid_(H1)\n",
    "        return H1 * H2 + (1.0 - H1) * inputs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "class HighwayConv1dNew(nn.Conv1d):\n",
    "    def forward(self, inputs):\n",
    "        L = super().forward(inputs)\n",
    "        H1, H2 = rearrange(L, 'b (split c) t -> split b c t', split=2)\n",
    "        torch.sigmoid_(H1)\n",
    "        return H1 * H2 + (1.0 - H1) * inputs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [],
   "source": [
    "hc1 = HighwayConv1dOld(10, 20, kernel_size=3, padding=1)\n",
    "hc2 = HighwayConv1dNew(10, 20, kernel_size=3, padding=1)\n",
    "initialize(hc1)\n",
    "initialize(hc2)\n",
    "fw1 = hc1(torch.zeros(1, 10, 100))\n",
    "fw2 = hc2(torch.zeros(1, 10, 100))\n",
    "assert torch.allclose(fw1, fw2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Tacotron's CBHG module\n",
    "\n",
    "<!-- https://github.com/r9y9/tacotron_pytorch/blob/master/tacotron_pytorch/tacotron.py -->"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class CBHG_Old(nn.Module):\n",
    "    \"\"\"CBHG module: a recurrent neural network composed of:\n",
    "        - 1-d convolution banks\n",
    "        - Highway networks + residual connections\n",
    "        - Bidirectional gated recurrent units\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, in_dim, K=16, projections=[128, 128]):\n",
    "        super().__init__()\n",
    "        self.in_dim = in_dim\n",
    "        self.relu = nn.ReLU()\n",
    "        self.conv1d_banks = nn.ModuleList(\n",
    "            [BatchNormConv1d(in_dim, in_dim, kernel_size=k, stride=1,\n",
    "                             padding=k // 2, activation=self.relu)\n",
    "             for k in range(1, K + 1)])\n",
    "        self.max_pool1d = nn.MaxPool1d(kernel_size=2, stride=1, padding=1)\n",
    "\n",
    "        in_sizes = [K * in_dim] + projections[:-1]\n",
    "        activations = [self.relu] * (len(projections) - 1) + [None]\n",
    "        self.conv1d_projections = nn.ModuleList(\n",
    "            [BatchNormConv1d(in_size, out_size, kernel_size=3, stride=1,\n",
    "                             padding=1, activation=ac)\n",
    "             for (in_size, out_size, ac) in zip(\n",
    "                 in_sizes, projections, activations)])\n",
    "\n",
    "        self.pre_highway = nn.Linear(projections[-1], in_dim, bias=False)\n",
    "        self.highways = nn.ModuleList(\n",
    "            [Highway(in_dim, in_dim) for _ in range(4)])\n",
    "\n",
    "        self.gru = nn.GRU(\n",
    "            in_dim, in_dim, 1, batch_first=True, bidirectional=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "def forward_old(self, inputs):\n",
    "    # (B, T_in, in_dim)\n",
    "    x = inputs\n",
    "\n",
    "    # Needed to perform conv1d on time-axis\n",
    "    # (B, in_dim, T_in)\n",
    "    if x.size(-1) == self.in_dim:\n",
    "        x = x.transpose(1, 2)\n",
    "\n",
    "    T = x.size(-1)\n",
    "\n",
    "    # (B, in_dim*K, T_in)\n",
    "    # Concat conv1d bank outputs\n",
    "    x = torch.cat([conv1d(x)[:, :, :T] for conv1d in self.conv1d_banks], dim=1)\n",
    "    assert x.size(1) == self.in_dim * len(self.conv1d_banks)\n",
    "    x = self.max_pool1d(x)[:, :, :T]\n",
    "\n",
    "    for conv1d in self.conv1d_projections:\n",
    "        x = conv1d(x)\n",
    "\n",
    "    # (B, T_in, in_dim)\n",
    "    # Back to the original shape\n",
    "    x = x.transpose(1, 2)\n",
    "\n",
    "    if x.size(-1) != self.in_dim:\n",
    "        x = self.pre_highway(x)\n",
    "\n",
    "    # Residual connection\n",
    "    x += inputs\n",
    "    for highway in self.highways:\n",
    "        x = highway(x)\n",
    "\n",
    "    # (B, T_in, in_dim*2)\n",
    "    outputs, _ = self.gru(x)\n",
    "\n",
    "    return outputs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "def forward_new(self, inputs, input_lengths=None):\n",
    "    x = rearrange(inputs, 'b t c -> b c t')\n",
    "    _, _, T = x.shape\n",
    "    # Concat conv1d bank outputs\n",
    "    x = rearrange([conv1d(x)[:, :, :T] for conv1d in self.conv1d_banks], \n",
    "                 'bank b c t -> b (bank c) t', c=self.in_dim)\n",
    "    x = self.max_pool1d(x)[:, :, :T]\n",
    "\n",
    "    for conv1d in self.conv1d_projections:\n",
    "        x = conv1d(x)\n",
    "    x = rearrange(x, 'b c t -> b t c')\n",
    "    if x.size(-1) != self.in_dim:\n",
    "        x = self.pre_highway(x)\n",
    "\n",
    "    # Residual connection\n",
    "    x += inputs\n",
    "    for highway in self.highways:\n",
    "        x = highway(x)\n",
    "\n",
    "    # (B, T_in, in_dim*2)\n",
    "    outputs, _ = self.gru(x)\n",
    "\n",
    "    return outputs    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There is still large room for improvement, but in this example only the forward function was changed"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Simple attention\n",
    "Good news: there is no longer any need to guess the order of dimensions, neither for inputs nor for outputs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class Attention(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(Attention, self).__init__()\n",
    "    \n",
    "    def forward(self, K, V, Q):\n",
    "        A = torch.bmm(K.transpose(1,2), Q) / np.sqrt(Q.shape[1])\n",
    "        A = F.softmax(A, 1)\n",
    "        R = torch.bmm(V, A)\n",
    "        return torch.cat((R, Q), dim=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "def attention(K, V, Q):\n",
    "    _, n_channels, _ = K.shape\n",
    "    A = torch.einsum('bct,bcl->btl', [K, Q])\n",
    "    A = F.softmax(A * n_channels ** (-0.5), 1)\n",
    "    R = torch.einsum('bct,btl->bcl', [V, A])\n",
    "    return torch.cat((R, Q), dim=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "336 µs ± 7.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n",
      "336 µs ± 7.48 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
     ]
    }
   ],
   "source": [
    "args = dict(\n",
    "    K=torch.zeros(32, 128, 40).cuda(),\n",
    "    V=torch.zeros(32, 128, 40).cuda(),\n",
    "    Q=torch.zeros(32, 128, 30).cuda(), \n",
    ")\n",
    "    \n",
    "%timeit -n100 result_old = Attention()(**args); torch.cuda.synchronize()\n",
    "%timeit -n100 result_new = attention(**args); torch.cuda.synchronize()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [],
   "source": [
    "result_old = Attention()(**args); torch.cuda.synchronize()\n",
    "result_new = attention(**args); torch.cuda.synchronize()\n",
    "assert torch.allclose(result_old, result_new)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Transformer's attention needs more attention"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class ScaledDotProductAttention(nn.Module):\n",
    "    ''' Scaled Dot-Product Attention '''\n",
    "\n",
    "    def __init__(self, temperature, attn_dropout=0.1):\n",
    "        super().__init__()\n",
    "        self.temperature = temperature\n",
    "        self.dropout = nn.Dropout(attn_dropout)\n",
    "        self.softmax = nn.Softmax(dim=2)\n",
    "\n",
    "    def forward(self, q, k, v, mask=None):\n",
    "\n",
    "        attn = torch.bmm(q, k.transpose(1, 2))\n",
    "        attn = attn / self.temperature\n",
    "\n",
    "        if mask is not None:\n",
    "            attn = attn.masked_fill(mask, -np.inf)\n",
    "\n",
    "        attn = self.softmax(attn)\n",
    "        attn = self.dropout(attn)\n",
    "        output = torch.bmm(attn, v)\n",
    "\n",
    "        return output, attn\n",
    "\n",
    "\n",
    "\n",
    "class MultiHeadAttentionOld(nn.Module):\n",
    "    ''' Multi-Head Attention module '''\n",
    "\n",
    "    def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1):\n",
    "        super().__init__()\n",
    "\n",
    "        self.n_head = n_head\n",
    "        self.d_k = d_k\n",
    "        self.d_v = d_v\n",
    "\n",
    "        self.w_qs = nn.Linear(d_model, n_head * d_k)\n",
    "        self.w_ks = nn.Linear(d_model, n_head * d_k)\n",
    "        self.w_vs = nn.Linear(d_model, n_head * d_v)\n",
    "        nn.init.normal_(self.w_qs.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_k)))\n",
    "        nn.init.normal_(self.w_ks.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_k)))\n",
    "        nn.init.normal_(self.w_vs.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_v)))\n",
    "\n",
    "        self.attention = ScaledDotProductAttention(temperature=np.power(d_k, 0.5))\n",
    "        self.layer_norm = nn.LayerNorm(d_model)\n",
    "\n",
    "        self.fc = nn.Linear(n_head * d_v, d_model)\n",
    "        nn.init.xavier_normal_(self.fc.weight)\n",
    "\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "\n",
    "\n",
    "    def forward(self, q, k, v, mask=None):\n",
    "        \n",
    "        d_k, d_v, n_head = self.d_k, self.d_v, self.n_head\n",
    "        \n",
    "        sz_b, len_q, _ = q.size()\n",
    "        sz_b, len_k, _ = k.size()\n",
    "        sz_b, len_v, _ = v.size()\n",
    "        \n",
    "        residual = q\n",
    "        \n",
    "        q = self.w_qs(q).view(sz_b, len_q, n_head, d_k)\n",
    "        k = self.w_ks(k).view(sz_b, len_k, n_head, d_k)\n",
    "        v = self.w_vs(v).view(sz_b, len_v, n_head, d_v)\n",
    "        \n",
    "        q = q.permute(2, 0, 1, 3).contiguous().view(-1, len_q, d_k) # (n*b) x lq x dk\n",
    "        k = k.permute(2, 0, 1, 3).contiguous().view(-1, len_k, d_k) # (n*b) x lk x dk\n",
    "        v = v.permute(2, 0, 1, 3).contiguous().view(-1, len_v, d_v) # (n*b) x lv x dv\n",
    "        \n",
    "        mask = mask.repeat(n_head, 1, 1) # (n*b) x .. x ..\n",
    "        output, attn = self.attention(q, k, v, mask=mask)\n",
    "        \n",
    "        output = output.view(n_head, sz_b, len_q, d_v)\n",
    "        output = output.permute(1, 2, 0, 3).contiguous().view(sz_b, len_q, -1) # b x lq x (n*dv)\n",
    "        \n",
    "        output = self.dropout(self.fc(output))\n",
    "        output = self.layer_norm(output + residual)\n",
    "        \n",
    "        return output, attn\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "class MultiHeadAttentionNew(nn.Module):\n",
    "    def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1):\n",
    "        super().__init__()\n",
    "        self.n_head = n_head\n",
    "        \n",
    "        self.w_qs = nn.Linear(d_model, n_head * d_k)\n",
    "        self.w_ks = nn.Linear(d_model, n_head * d_k)\n",
    "        self.w_vs = nn.Linear(d_model, n_head * d_v)\n",
    "        \n",
    "        nn.init.normal_(self.w_qs.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_k)))\n",
    "        nn.init.normal_(self.w_ks.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_k)))\n",
    "        nn.init.normal_(self.w_vs.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_v)))\n",
    "        \n",
    "        self.fc = nn.Linear(n_head * d_v, d_model)\n",
    "        nn.init.xavier_normal_(self.fc.weight)\n",
    "        self.dropout = nn.Dropout(p=dropout)\n",
    "        self.layer_norm = nn.LayerNorm(d_model)\n",
    "\n",
    "    def forward(self, q, k, v, mask=None):\n",
    "        residual = q\n",
    "        q = rearrange(self.w_qs(q), 'b l (head k) -> head b l k', head=self.n_head)\n",
    "        k = rearrange(self.w_ks(k), 'b t (head k) -> head b t k', head=self.n_head)\n",
    "        v = rearrange(self.w_vs(v), 'b t (head v) -> head b t v', head=self.n_head)\n",
    "        attn = torch.einsum('hblk,hbtk->hblt', [q, k]) / np.sqrt(q.shape[-1])\n",
    "        if mask is not None:\n",
    "            attn = attn.masked_fill(mask[None], -np.inf)\n",
    "        attn = torch.softmax(attn, dim=3)\n",
    "        output = torch.einsum('hblt,hbtv->hblv', [attn, v])\n",
    "        output = rearrange(output, 'head b l v -> b l (head v)')\n",
    "        output = self.dropout(self.fc(output))\n",
    "        output = self.layer_norm(output + residual)\n",
    "        return output, attn\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Benefits of the new implementation:\n",
    "\n",
    "- we have one module, not two\n",
    "- the code no longer fails when mask is None\n",
    "- we removed a huge number of caveats from the original code. \n",
    "  Try erasing the comments and deciphering what happens there"
   ]
  },
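  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For readers less familiar with einsum notation, here is a minimal numpy sketch of the first contraction above; it is just a matrix multiplication batched over the head and batch axes:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "h, b, l, t, k = 2, 3, 4, 5, 6\n",
    "q = np.random.rand(h, b, l, k)\n",
    "keys = np.random.rand(h, b, t, k)\n",
    "attn = np.einsum('hblk,hbtk->hblt', q, keys)\n",
    "# equivalent batched matmul: q @ keys^T over the last two axes\n",
    "assert attn.shape == (h, b, l, t)\n",
    "assert np.allclose(attn, q @ keys.transpose(0, 1, 3, 2))\n",
    "```"
   ]
  },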
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [],
   "source": [
    "# due to the poor implementation of torch.einsum, the code below doesn't work\n",
    "class MultiHeadAttentionHard(nn.Module):\n",
    "    def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1):\n",
    "        super().__init__()\n",
    "        self.w_qs = nn.Parameter(torch.randn(d_model, n_head, d_k) * np.sqrt(2.0 / (d_model + d_k)))\n",
    "        self.w_ks = nn.Parameter(torch.randn(d_model, n_head, d_k) * np.sqrt(2.0 / (d_model + d_k)))\n",
    "        self.w_vs = nn.Parameter(torch.randn(d_model, n_head, d_v) * np.sqrt(2.0 / (d_model + d_v)))\n",
    "        self.w_fc = nn.Parameter(torch.randn(d_model, n_head, d_v) * np.sqrt(2.0 / (d_model + n_head * d_v)))\n",
    "\n",
    "        self.dropout = nn.Dropout(p=dropout)\n",
    "        self.layer_norm = nn.LayerNorm(d_model)\n",
    "\n",
    "    def forward(self, q, k, v, mask=None):\n",
    "        attn = torch.einsum('bld,dhc,bte,ehc->hblt', [q, self.w_qs, k, self.w_ks])\n",
    "        if mask is not None:\n",
    "            attn = attn.masked_fill(mask[None], -np.inf)\n",
    "        attn = torch.softmax(attn, dim=3)\n",
    "        output = torch.einsum('hblt,bte,ehv,dhv->hbd', [attn, v, self.w_vs, self.w_fc])\n",
    "        output = self.dropout(output)\n",
    "        output = self.layer_norm(output + q)\n",
    "        return output, attn    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {},
   "outputs": [],
   "source": [
    "n_heads = 8\n",
    "d_k = 32\n",
    "d_v = 64\n",
    "d_model = 100\n",
    "t = 51\n",
    "l = 53\n",
    "batch = 30\n",
    "\n",
    "layer1 = initialize(MultiHeadAttentionOld(n_heads, d_k=d_k, d_v=d_v, d_model=d_model)).eval().cuda()\n",
    "layer2 = initialize(MultiHeadAttentionNew(n_heads, d_k=d_k, d_v=d_v, d_model=d_model)).eval().cuda()\n",
    "\n",
    "args = dict(\n",
    "    q=torch.randn(batch, l, d_model),\n",
    "    k=torch.randn(batch, t, d_model) * 0.1,\n",
    "    v=torch.randn(batch, t, d_model),\n",
    "    mask=torch.randn(batch, l, t) > 0,\n",
    ")\n",
    "args = {k:v.cuda() for k, v in args.items()}\n",
    "\n",
    "o1, a1 = layer1(**args)\n",
    "o2, a2 = layer2(**args)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(torch.Size([240, 53, 51]), torch.Size([8, 30, 53, 51]))"
      ]
     },
     "execution_count": 57,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a1.shape, a2.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {},
   "outputs": [],
   "source": [
    "assert torch.allclose(o1, o2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "4.82 ms ± 73.9 µs per loop (mean ± std. dev. of 7 runs, 200 loops each)\n",
      "4.6 ms ± 116 µs per loop (mean ± std. dev. of 7 runs, 200 loops each)\n"
     ]
    }
   ],
   "source": [
    "%timeit -n 200 layer1(**args); torch.cuda.synchronize()\n",
    "%timeit -n 200 layer2(**args); torch.cuda.synchronize()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Self-attention GANs\n",
    "\n",
    "SAGANs are currently SotA for image generation, and can be simplified using the same tricks.\n",
    "<!-- If torch.einsum supported non-one letter axes, we could improve this solution further. -->\n",
    "\n",
    "<!-- from  https://github.com/heykeetae/Self-Attention-GAN/blob/master/sagan_models.py -->"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class Self_Attn_Old(nn.Module):\n",
    "    \"\"\" Self attention Layer\"\"\"\n",
    "    def __init__(self,in_dim,activation):\n",
    "        super(Self_Attn_Old,self).__init__()\n",
    "        self.chanel_in = in_dim\n",
    "        self.activation = activation\n",
    "        \n",
    "        self.query_conv = nn.Conv2d(in_channels = in_dim , out_channels = in_dim//8 , kernel_size= 1)\n",
    "        self.key_conv = nn.Conv2d(in_channels = in_dim , out_channels = in_dim//8 , kernel_size= 1)\n",
    "        self.value_conv = nn.Conv2d(in_channels = in_dim , out_channels = in_dim , kernel_size= 1)\n",
    "        self.gamma = nn.Parameter(torch.zeros(1))\n",
    "        \n",
    "        self.softmax  = nn.Softmax(dim=-1) #\n",
    "\n",
    "    def forward(self, x):\n",
    "        \"\"\"\n",
    "            inputs :\n",
    "                x : input feature maps( B X C X W X H)\n",
    "            returns :\n",
    "                out : self attention value + input feature \n",
    "                attention: B X N X N (N is Width*Height)\n",
    "        \"\"\"\n",
    "        \n",
    "        m_batchsize,C,width ,height = x.size()\n",
    "        proj_query  = self.query_conv(x).view(m_batchsize,-1,width*height).permute(0,2,1) # B X CX(N)\n",
    "        proj_key =  self.key_conv(x).view(m_batchsize,-1,width*height) # B X C x (*W*H)\n",
    "        energy =  torch.bmm(proj_query,proj_key) # transpose check\n",
    "        attention = self.softmax(energy) # BX (N) X (N) \n",
    "        proj_value = self.value_conv(x).view(m_batchsize,-1,width*height) # B X C X N\n",
    "\n",
    "        out = torch.bmm(proj_value,attention.permute(0,2,1) )\n",
    "        out = out.view(m_batchsize,C,width,height)\n",
    "        \n",
    "        out = self.gamma*out + x\n",
    "        return out,attention"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "class Self_Attn_New(nn.Module):\n",
    "    \"\"\" Self attention Layer\"\"\"\n",
    "    def __init__(self, in_dim):\n",
    "        super().__init__()\n",
    "        self.query_conv = nn.Conv2d(in_dim, out_channels=in_dim//8, kernel_size=1)\n",
    "        self.key_conv = nn.Conv2d(in_dim, out_channels=in_dim//8, kernel_size=1)\n",
    "        self.value_conv = nn.Conv2d(in_dim, out_channels=in_dim, kernel_size=1)\n",
    "        self.gamma = nn.Parameter(torch.zeros([1]))\n",
    "\n",
    "    def forward(self, x):\n",
    "        proj_query = rearrange(self.query_conv(x), 'b c h w -> b (h w) c')\n",
    "        proj_key = rearrange(self.key_conv(x), 'b c h w -> b c (h w)')\n",
    "        proj_value = rearrange(self.value_conv(x), 'b c h w -> b (h w) c')\n",
    "        energy = torch.bmm(proj_query, proj_key)\n",
    "        attention = F.softmax(energy, dim=2)\n",
    "        out = torch.bmm(attention, proj_value)\n",
    "        out = x + self.gamma * rearrange(out, 'b (h w) c -> b c h w',\n",
    "                                         **parse_shape(x, 'b c h w'))\n",
    "        return out, attention"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_old = initialize(Self_Attn_Old(128, None))\n",
    "model_new = initialize(Self_Attn_New(128))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.randn(2, 128, 30, 30)\n",
    "assert torch.allclose(model_old(x)[0], model_new(x)[0], atol=1e-4)\n",
    "# the returned attention maps should match as well\n",
    "assert torch.allclose(model_old(x)[1], model_new(x)[1], atol=1e-4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "15.9 ms ± 990 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n",
      "14.5 ms ± 81.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
     ]
    }
   ],
   "source": [
    "%timeit model_old(x)[0].sum().item()\n",
    "%timeit model_new(x)[0].sum().item()\n",
    "# surprise - the slowdown here came from the order of softmax axes, not from einops"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Improving time sequence prediction\n",
    "\n",
    "<!-- https://github.com/pytorch/examples/blob/master/time_sequence_prediction/train.py -->\n",
    "\n",
    "Though this example was meant to be simplistic, I had to analyze the surrounding code to understand what kind of input was expected.\n",
    "You can try it yourself. \n",
    "\n",
    "Additionally, the new code works with any dtype, not only double, and supports running on a GPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class SequencePredictionOld(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(SequencePredictionOld, self).__init__()\n",
    "        self.lstm1 = nn.LSTMCell(1, 51)\n",
    "        self.lstm2 = nn.LSTMCell(51, 51)\n",
    "        self.linear = nn.Linear(51, 1)\n",
    "\n",
    "    def forward(self, input, future = 0):\n",
    "        outputs = []\n",
    "        h_t = torch.zeros(input.size(0), 51, dtype=torch.double)\n",
    "        c_t = torch.zeros(input.size(0), 51, dtype=torch.double)\n",
    "        h_t2 = torch.zeros(input.size(0), 51, dtype=torch.double)\n",
    "        c_t2 = torch.zeros(input.size(0), 51, dtype=torch.double)\n",
    "\n",
    "        for i, input_t in enumerate(input.chunk(input.size(1), dim=1)):\n",
    "            h_t, c_t = self.lstm1(input_t, (h_t, c_t))\n",
    "            h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))\n",
    "            output = self.linear(h_t2)\n",
    "            outputs += [output]\n",
    "            \n",
    "        for i in range(future):# if we should predict the future\n",
    "            h_t, c_t = self.lstm1(output, (h_t, c_t))\n",
    "            h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))\n",
    "            output = self.linear(h_t2)\n",
    "            outputs += [output]\n",
    "        outputs = torch.stack(outputs, 1).squeeze(2)\n",
    "        return outputs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "class SequencePredictionNew(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.lstm1 = nn.LSTMCell(1, 51)\n",
    "        self.lstm2 = nn.LSTMCell(51, 51)\n",
    "        self.linear = nn.Linear(51, 1)\n",
    "\n",
    "    def forward(self, input, future=0):\n",
    "        b, t = input.shape\n",
    "        h_t, c_t, h_t2, c_t2 = torch.zeros(4, b, 51, dtype=self.linear.weight.dtype, \n",
    "                                           device=self.linear.weight.device)\n",
    "\n",
    "        outputs = []\n",
    "        for input_t in rearrange(input, 'b t -> t b ()'):\n",
    "            h_t, c_t = self.lstm1(input_t, (h_t, c_t))\n",
    "            h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))\n",
    "            output = self.linear(h_t2)\n",
    "            outputs += [output]\n",
    "            \n",
    "        for i in range(future): # if we should predict the future\n",
    "            h_t, c_t = self.lstm1(output, (h_t, c_t))\n",
    "            h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))\n",
    "            output = self.linear(h_t2)\n",
    "            outputs += [output]\n",
    "        return rearrange(outputs, 't b () -> b t')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {},
   "outputs": [],
   "source": [
    "seq_old = SequencePredictionOld().double()\n",
    "seq_new = SequencePredictionNew().double()\n",
    "initialize(seq_old)\n",
    "initialize(seq_new)\n",
    "x = torch.randn([10, 10], dtype=torch.double)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {},
   "outputs": [],
   "source": [
    "result_old = seq_old(x)\n",
    "result_new = seq_new(x)\n",
    "assert torch.allclose(result_old, result_new)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Transforming a spatial transformer network (STN)\n",
    "\n",
    "\n",
    "<!-- modified version of https://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html -->"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "class SpacialTransformOld(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(SpacialTransformOld, self).__init__()\n",
    "\n",
    "        # Spatial transformer localization-network\n",
    "        self.localization = nn.Sequential(\n",
    "            nn.Conv2d(1, 8, kernel_size=7),\n",
    "            nn.MaxPool2d(2, stride=2),\n",
    "            nn.ReLU(True),\n",
    "            nn.Conv2d(8, 10, kernel_size=5),\n",
    "            nn.MaxPool2d(2, stride=2),\n",
    "            nn.ReLU(True)\n",
    "        )\n",
    "\n",
    "        # Regressor for the 3 * 2 affine matrix\n",
    "        self.fc_loc = nn.Sequential(\n",
    "            nn.Linear(10 * 3 * 3, 32),\n",
    "            nn.ReLU(True),\n",
    "            nn.Linear(32, 3 * 2)\n",
    "        )\n",
    "\n",
    "        # Initialize the weights/bias with identity transformation\n",
    "        self.fc_loc[2].weight.data.zero_()\n",
    "        self.fc_loc[2].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))\n",
    "\n",
    "    # Spatial transformer network forward function\n",
    "    def stn(self, x):\n",
    "        xs = self.localization(x)\n",
    "        xs = xs.view(-1, 10 * 3 * 3)\n",
    "        theta = self.fc_loc(xs)\n",
    "        theta = theta.view(-1, 2, 3)\n",
    "\n",
    "        grid = F.affine_grid(theta, x.size())\n",
    "        x = F.grid_sample(x, grid)\n",
    "\n",
    "        return x\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "class SpacialTransformNew(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(SpacialTransformNew, self).__init__()\n",
    "        # Spatial transformer localization-network\n",
    "        linear = nn.Linear(32, 3 * 2)\n",
    "        # Initialize the weights/bias with identity transformation\n",
    "        linear.weight.data.zero_()\n",
    "        linear.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))\n",
    "        \n",
    "        self.compute_theta = nn.Sequential(\n",
    "            nn.Conv2d(1, 8, kernel_size=7),\n",
    "            nn.MaxPool2d(2, stride=2),\n",
    "            nn.ReLU(True),\n",
    "            nn.Conv2d(8, 10, kernel_size=5),\n",
    "            nn.MaxPool2d(2, stride=2),\n",
    "            nn.ReLU(True),\n",
    "            Rearrange('b c h w -> b (c h w)', h=3, w=3),\n",
    "            nn.Linear(10 * 3 * 3, 32),\n",
    "            nn.ReLU(True),\n",
    "            linear,\n",
    "            Rearrange('b (row col) -> b row col', row=2, col=3),\n",
    "        )\n",
    "\n",
    "    # Spatial transformer network forward function\n",
    "    def stn(self, x):\n",
    "        grid = F.affine_grid(self.compute_theta(x), x.size())\n",
    "        return F.grid_sample(x, grid)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- the new code gives a reasonable error when the input image size differs from the expected one\n",
    "- in the old code, whenever the batch size is divisible by 18, a wrongly sized input won't fail before reaching `affine_grid`."
   ]
  },
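  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sketch of that shape check (assuming the imports above): fixing `h=3, w=3` inside `Rearrange` acts as an assertion, while a plain `.view(-1, 90)` may silently accept wrongly sized inputs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "check = Rearrange('b c h w -> b (c h w)', h=3, w=3)\n",
    "print(check(torch.zeros(4, 10, 3, 3)).shape)  # torch.Size([4, 90])\n",
    "try:\n",
    "    check(torch.zeros(4, 10, 5, 5))  # wrong spatial size\n",
    "except Exception as e:\n",
    "    print('rejected:', type(e).__name__)"
   ]
  },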
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Improving GLOW\n",
    "\n",
    "That's good old depth-to-space, written manually!\n",
    "\n",
    "Since GLOW is invertible, it frequently relies on `rearrange`-like operations.\n",
    "\n",
    "<!-- from https://github.com/chaiyujin/glow-pytorch/blob/master/glow/modules.py -->"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "def unsqueeze2d_old(input, factor=2):\n",
    "    assert factor >= 1 and isinstance(factor, int)\n",
    "    factor2 = factor ** 2\n",
    "    if factor == 1:\n",
    "        return input\n",
    "    size = input.size()\n",
    "    B = size[0]\n",
    "    C = size[1]\n",
    "    H = size[2]\n",
    "    W = size[3]\n",
    "    assert C % (factor2) == 0, \"{}\".format(C)\n",
    "    x = input.view(B, C // factor2, factor, factor, H, W)\n",
    "    x = x.permute(0, 1, 4, 2, 5, 3).contiguous()\n",
    "    x = x.view(B, C // (factor2), H * factor, W * factor)\n",
    "    return x\n",
    "\n",
    "def squeeze2d_old(input, factor=2):\n",
    "    assert factor >= 1 and isinstance(factor, int)\n",
    "    if factor == 1:\n",
    "        return input\n",
    "    size = input.size()\n",
    "    B = size[0]\n",
    "    C = size[1]\n",
    "    H = size[2]\n",
    "    W = size[3]\n",
    "    assert H % factor == 0 and W % factor == 0, \"{}\".format((H, W))\n",
    "    x = input.view(B, C, H // factor, factor, W // factor, factor)\n",
    "    x = x.permute(0, 1, 3, 5, 2, 4).contiguous()\n",
    "    x = x.view(B, C * factor * factor, H // factor, W // factor)\n",
    "    return x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "def unsqueeze2d_new(input, factor=2):\n",
    "    return rearrange(input, 'b (c h2 w2) h w -> b c (h h2) (w w2)', h2=factor, w2=factor)\n",
    "\n",
    "def squeeze2d_new(input, factor=2):\n",
    "    return rearrange(input, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=factor, w2=factor)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- the term `squeeze` isn't very helpful here: which dimension is squeezed? There is also `torch.squeeze`, which does something very different.\n",
    "- in fact, we could skip defining these functions entirely: each is a single `einops` call anyway"
   ]
  },
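  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sanity sketch: since `rearrange` only reshuffles elements, the two functions are exact inverses of each other."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.randn(2, 12, 8, 8)\n",
    "assert torch.equal(unsqueeze2d_new(squeeze2d_new(x)), x)\n",
    "assert torch.equal(squeeze2d_new(unsqueeze2d_new(x)), x)"
   ]
  },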
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Detecting problems in YOLO detection\n",
    "\n",
    "<!-- mixture of \n",
    "    # https://github.com/BobLiu20/YOLOv3_PyTorch/blob/c6b483743598b5f64d520d81e7e5f47ba936d4c9/nets/yolo_loss.py#L28-L44\n",
    "    # https://github.com/BobLiu20/YOLOv3_PyTorch/blob/c6b483743598b5f64d520d81e7e5f47ba936d4c9/nets/yolo_loss.py#L70-L92\n",
    "-->"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [],
   "source": [
    "#left\n",
    "def YOLO_prediction_old(input, num_classes, num_anchors, anchors, stride_h, stride_w):\n",
    "    bs = input.size(0)\n",
    "    in_h = input.size(2)\n",
    "    in_w = input.size(3)\n",
    "    scaled_anchors = [(a_w / stride_w, a_h / stride_h) for a_w, a_h in anchors]\n",
    "\n",
    "    prediction = input.view(bs, num_anchors,\n",
    "                            5 + num_classes, in_h, in_w).permute(0, 1, 3, 4, 2).contiguous()\n",
    "    # Get outputs\n",
    "    x = torch.sigmoid(prediction[..., 0])  # Center x\n",
    "    y = torch.sigmoid(prediction[..., 1])  # Center y\n",
    "    w = prediction[..., 2]  # Width\n",
    "    h = prediction[..., 3]  # Height\n",
    "    conf = torch.sigmoid(prediction[..., 4])  # Conf\n",
    "    pred_cls = torch.sigmoid(prediction[..., 5:])  # Cls pred.\n",
    "\n",
    "    FloatTensor = torch.cuda.FloatTensor if x.is_cuda else torch.FloatTensor\n",
    "    LongTensor = torch.cuda.LongTensor if x.is_cuda else torch.LongTensor\n",
    "    # Calculate offsets for each grid\n",
    "    grid_x = torch.linspace(0, in_w - 1, in_w).repeat(in_w, 1).repeat(\n",
    "        bs * num_anchors, 1, 1).view(x.shape).type(FloatTensor)\n",
    "    grid_y = torch.linspace(0, in_h - 1, in_h).repeat(in_h, 1).t().repeat(\n",
    "        bs * num_anchors, 1, 1).view(y.shape).type(FloatTensor)\n",
    "    # Calculate anchor w, h\n",
    "    anchor_w = FloatTensor(scaled_anchors).index_select(1, LongTensor([0]))\n",
    "    anchor_h = FloatTensor(scaled_anchors).index_select(1, LongTensor([1]))\n",
    "    anchor_w = anchor_w.repeat(bs, 1).repeat(1, 1, in_h * in_w).view(w.shape)\n",
    "    anchor_h = anchor_h.repeat(bs, 1).repeat(1, 1, in_h * in_w).view(h.shape)\n",
    "    # Add offset and scale with anchors\n",
    "    pred_boxes = FloatTensor(prediction[..., :4].shape)\n",
    "    pred_boxes[..., 0] = x.data + grid_x\n",
    "    pred_boxes[..., 1] = y.data + grid_y\n",
    "    pred_boxes[..., 2] = torch.exp(w.data) * anchor_w\n",
    "    pred_boxes[..., 3] = torch.exp(h.data) * anchor_h\n",
    "    # Results\n",
    "    _scale = torch.Tensor([stride_w, stride_h] * 2).type(FloatTensor)\n",
    "    output = torch.cat((pred_boxes.view(bs, -1, 4) * _scale,\n",
    "                        conf.view(bs, -1, 1), pred_cls.view(bs, -1, num_classes)), -1)\n",
    "    return output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 74,
   "metadata": {},
   "outputs": [],
   "source": [
    "#right\n",
    "def YOLO_prediction_new(input, num_classes, num_anchors, anchors, stride_h, stride_w):\n",
    "    raw_predictions = rearrange(input, 'b (anchor prediction) h w -> prediction b anchor h w', \n",
    "                                anchor=num_anchors, prediction=5 + num_classes)\n",
    "    anchors = torch.FloatTensor(anchors).to(input.device)\n",
    "    anchor_sizes = rearrange(anchors, 'anchor dim -> dim () anchor () ()')\n",
    "\n",
    "    _, _, _, in_h, in_w = raw_predictions.shape\n",
    "    grid_h = rearrange(torch.arange(in_h).float(), 'h -> () () h ()').to(input.device)\n",
    "    grid_w = rearrange(torch.arange(in_w).float(), 'w -> () () () w').to(input.device)\n",
    "\n",
    "    predicted_bboxes = torch.zeros_like(raw_predictions)\n",
    "    predicted_bboxes[0] = (raw_predictions[0].sigmoid() + grid_w) * stride_w  # center x\n",
    "    predicted_bboxes[1] = (raw_predictions[1].sigmoid() + grid_h) * stride_h  # center y\n",
    "    predicted_bboxes[2:4] = (raw_predictions[2:4].exp()) * anchor_sizes  # bbox width and height\n",
    "    predicted_bboxes[4] = raw_predictions[4].sigmoid()  # confidence\n",
    "    predicted_bboxes[5:] = raw_predictions[5:].sigmoid()  # class predictions\n",
    "    # merging all predicted bboxes for each image\n",
    "    return rearrange(predicted_bboxes, 'prediction b anchor h w -> b (anchor h w) prediction')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We changed and fixed a lot:\n",
    "\n",
    "- the new code won't fail if the input is not on the first GPU\n",
    "- the old code computes wrong grid_x and grid_y for non-square images\n",
    "- the new code doesn't use replication where broadcasting is sufficient\n",
    "- the old code strangely takes `.data` in some places, but this has no real effect, since other branches preserve the gradient to the end\n",
    "    - if gradients aren't needed, `torch.no_grad` should be used instead, so taking `.data` is redundant"
   ]
  },
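  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate the broadcasting point: a grid with unit-length axes combines with a full tensor directly, no `.repeat` needed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "grid_w = rearrange(torch.arange(13).float(), 'w -> () () () w')\n",
    "x = torch.zeros(2, 3, 13, 13)\n",
    "print((x + grid_w).shape)  # torch.Size([2, 3, 13, 13])"
   ]
  },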
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Simpler output for a bunch of pictures\n",
    "\n",
    "Next time you need to display a grid of images from your generative model, you can use this trick.\n",
    "\n",
    "<!-- # from https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html -->"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "metadata": {},
   "outputs": [],
   "source": [
    "fake_batch = torch.rand([100, 3, 1, 1]) + torch.zeros([100, 3, 32, 32])\n",
    "from matplotlib import pyplot as plt\n",
    "import torchvision.utils as vutils"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<matplotlib.image.AxesImage at 0x7fdd8d606908>"
      ]
     },
     "execution_count": 76,
     "metadata": {},
     "output_type": "execute_result"
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAQUAAAD8CAYAAAB+fLH0AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvhp/UCwAAD4FJREFUeJzt3X2wXHV9x/H3tyiIiMNDNA0hBXRiNSgNMSIdHpShYhK1wdHhoVMakc7FTqhS6bQB2oJt0ar1oVQLxBIJVIVM0RIxIJBhBJ2i3MQYEiIQNRkSQ1IeFAYQBb79Y09gf+lN9t6793d3Ke/XzJlz9rdn9/vNyc1nzjm7N7/ITCRpu9/qdQOS+ouhIKlgKEgqGAqSCoaCpIKhIKlQLRQiYlZE3BMR6yNiQa06ksZW1PieQkTsBtwLvAPYBNwJnJqZd495MUljqtaZwhHA+sz8aWb+GrgamFuplqQx9JJK7zsZuL/t8SbgrTvbOSL8WqVU34OZ+apOO9UKhY4iYgAY6FV96UVo43B2qhUKm4EpbY8PbMaek5kLgYXw/JnCvzxwXqV2nveR3/44AD88/9nqtQ6/qHV1tvFVJ1WvddD/LAFg3cqV1Wu9YcYMAN5y1ceq17rztAsAuPnk+zvs2b13XNP6kb3sb9ZUr3XmP74RgA8v/Gb1WhcPvGdE+9e6p3AnMDUiDomI3YFTgKWVakkaQ1XOFDLz6Yg4C/g2sBuwKDPX1qglaWxVu6eQmcuAZbXeX1IdfqNRUsFQkFQwFCQVDAVJBUNBUsFQkFQwFCQVDAVJBUNBUsFQkFQwFCQVDAVJBUNBUsFQkFQwFCQVDAVJBUNBUsFQkFQwFCQVDAVJBUNBUqHKBLMjbsJp46TxsCIzZ3bayTMFSYWezSU5lA2fu6F6jYP/YjYA1/3ZrdVrzb3kOAD+ff9Dq9f604dac+089fH51Wvtcd4XW+sJx1av9dSDtwFw3RfeXb3W3LOuB2D6nu+sXmvVk98GYNvb/rx6rVd/519HtL9nCpIKhoKkgqEgqWAoSCoYCpIKhoKkgqEgqdDV9xQiYgPwGPAM8HRmzoyI/YBrgIOBDcBJmflId21KGi9jcaZwXGZOb/v65AJgeWZOBZY3jyW9QNS4fJgLLG62FwMnVqghqZJuQyGBmyJiRUQMNGMTM3NLs/0AMHGoF0bEQEQMRsRglz1IGkPd/u7D0Zm5OSJeDdwcET9ufzIzc2e/AZmZC4GF4G9JSv2kqzOFzNzcrLcB3wCOALZGxCSAZr2t2yYljZ9Rh0JE7BURe2/fBk4A1gBLgXnNbvOA67ptUtL46ebyYSLwjYjY/j5fzcwbI+JOYElEnAFsBE7qvk1J42XUoZCZPwV+b4jxh4Dju2lKUu/4jUZJBUNBUsFQkFQwFCQVDAVJBUNBUsFQkFQwFCQVnDZOevFw2jhJI9dX08Y9+eGrqtfY8+LTADhmTf2p3G5/Y2sqt31+/z+r1/rFf78fgE8d/sHqtf7qh4sA+K+lv6pe68Q/fBkAR16/snqtO949A4AvD55dvdbpMz8PwAWL6v9sfOyD7x/R/p4pSCoYCpIKhoKkgqEgqWAoSCoYCpIKhoKkgqEgqWAoSCoYCpIKhoKkgqEgqWAoSCoYCpIKhoKkgqEgqWAoSCoYCpIKhoKkgqEgqdAxFCJiUURsi4g1bWP7RcTNEXFfs963GY+IuDgi1kfE6oiYUbN5SWNvOGcKVwCzdhhbACzPzKnA8uYxwGxgarMMAJeMTZuSxkvHUMjM24CHdxieCyxuthcDJ7aNX5ktdwD7RMSksWpWUn2jvacwMTO3NNsPABOb7cnA/W37bWrG/o+IGIiIwYgYHGUPkmrIzI4LcDCwpu3xL3Z4/pFmfT1wdNv4cmDmMN4/XVxcqi+Dw/n3Ptozha3bLwua9bZmfDMwpW2/A5sxSS8Qo502bikwD/inZn1d2/hZEXE1
8Fbgl22XGR395tKfj7Kd4Xvphw4A4Ng511avdduy9wFw7mWfqF7rE2eeC8DZJw95tTamPn9NK+c/+cUPVK/11/OvAOCm7321eq0TjvojAP7hrnOq1/rbN30GgDcsWVG91rqT3jyi/TuGQkR8DXg7MCEiNgEX0AqDJRFxBrAROKnZfRkwB1gPPAGcPqJuJPVcx1DIzFN38tTxQ+ybwPxum5LUO36jUVLBUJBUMBQkFQwFSQVDQVLBUJBUMBQkFQwFSQVDQVLBUJBUMBQkFQwFSQVDQVLBUJBUMBQkFQwFSQVDQVLBUJBUMBQkFQwFSQVDQVLBUJBUiGbatt42EdH7JqT//1Zk5sxOO3mmIKkw2mnjqvjCgxur1zhrwkEArHrl8uq1pj/ami/nzoNur17rLRuPAeC+M/evXmvqZQ8BsPruw6rXOmzaagAmbl1ZvdbWiTMA+N3Hn6pe65699gDg8DOOqV7rh5eP7OfPMwVJBUNBUsFQkFQwFCQVDAVJhY6hEBGLImJbRKxpG7swIjZHxKpmmdP23LkRsT4i7omId9ZqXFIdwzlTuAKYNcT45zJzerMsA4iIacApwKHNa/4tInYbq2Yl1dcxFDLzNuDhYb7fXODqzHwqM38GrAeO6KI/SeOsm3sKZ0XE6ubyYt9mbDJwf9s+m5oxSS8Qow2FS4DXAtOBLcBnRvoGETEQEYMRMTjKHiRVMKpQyMytmflMZj4LfInnLxE2A1Padj2wGRvqPRZm5szh/IKGpPEzqlCIiEltD98LbP9kYilwSkTsERGHAFOBH3TXoqTx1PEXoiLia8DbgQkRsQm4AHh7REwHEtgAnAmQmWsjYglwN/A0MD8zn6nTuqQaOoZCZp46xPDlu9j/IuCibpqS1Dt+o1FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVHDaOOnFw2njJI1cX00b94mrrq9e49zT3g3AN0+eXb3We665AYDZ73l99Vo3fPPHAEyf9NHqtVZt+SwAnz1jYfVaH718AIAr5txWvdYHlh0LwIG/qj9F3aaXtaao+9a806vXetfiL49of88UJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUmFjqEQEVMi4taIuDsi1kbER5rx/SLi5oi4r1nv24xHRFwcEesjYnVEzKj9h5A0doZzpvA0cE5mTgOOBOZHxDRgAbA8M6cCy5vHALOBqc0yAFwy5l1LqqZjKGTmlsxc2Ww/BqwDJgNzgcXNbouBE5vtucCV2XIHsE9ETBrzziVVMaJ7ChFxMHA48H1gYmZuaZ56AJjYbE8G7m972aZmbMf3GoiIwYgYHGHPkioadihExCuAa4GzM/PR9ueyNaPMiCZ0ycyFmTlzOJNTSBo/wwqFiHgprUD4SmZ+vRneuv2yoFlva8Y3A1PaXn5gMybphSAzd7kAAVwJfH6H8U8DC5rtBcCnmu13ATc0rzsS+MEwaqSLi0v1ZbDTv8XM7DyXZEQcDdwO3AU82wyfR+u+whLgd4CNwEmZ+XBEBPAFYBbwBHB6Zu7yvoFzSUrjYlhzSfbVBLOvZ7fqtX7MMwAc9b5HO+zZve9d+0oA9vzJG6rXevK16wD4u0NfWb3W369tHbtZx3+seq0bl18AwMlP7Fu91jUvfwSAw7778eq1Vh99HgCXHvOu6rU+dPu3tm86waykkTMUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSoa8mg5FUlZPBSBq5l/S6gXZ/cNFD1Wvccv7+AOw/7+XVaz20+AkA7p35ePVarxvcC4A/vvH71Wv9x6y3AnDpgz+vXutDEw4AYOVNZ1SvNeOEywG499Z11Wu97rjWVIJvftN3qtda
cdfbRrS/ZwqSCoaCpIKhIKlgKEgqdAyFiJgSEbdGxN0RsTYiPtKMXxgRmyNiVbPMaXvNuRGxPiLuiYh31vwDSBpbw/n04WngnMxcGRF7Aysi4ubmuc9l5j+37xwR04BTgEOBA4BbIuJ1mfnMWDYuqY6OZwqZuSUzVzbbjwHrgMm7eMlc4OrMfCozfwasB44Yi2Yl1TeiewoRcTBwOLD9w/CzImJ1RCyKiH2bscnA/W0v28QQIRIRAxExGBGDI+5aUjXDDoWIeAVwLXB2Zj4KXAK8FpgObAE+M5LCmbkwM2cO52uXksbPsEIhIl5KKxC+kplfB8jMrZn5TGY+C3yJ5y8RNgNT2l5+YDMm6QVgOJ8+BHA5sC4zP9s2Pqltt/cCa5rtpcApEbFHRBwCTAV+MHYtS6ppOJ8+HAWcBtwVEauasfOAUyNiOpDABuBMgMxcGxFLgLtpfXIx308epBeOjqGQmd8FYoinlu3iNRcBF3XRl6Qe8RuNkgqGgqSCoSCpYChIKhgKkgqGgqSCoSCpYChIKhgKkgqGgqSCoSCp4LRx0ouH08ZJGrl+mTbuQeDxZt2PJtC/vYH9daOfe4Ox7e+g4ezUF5cPABEx2K//NVs/9wb2141+7g1605+XD5IKhoKkQj+FwsJeN7AL/dwb2F83+rk36EF/fXNPQVJ/6KczBUl9oOehEBGzmolo10fEgl73AxARGyLirmbi3MFmbL+IuDki7mvW+3Z6nzHsZ1FEbIuINW1jQ/YTLRc3x3N1RMzoQW99M/nwLiZI7vnx69vJmzOzZwuwG/AT4DXA7sCPgGm97KnpawMwYYexTwELmu0FwCfHsZ9jgRnAmk79AHOAG2j9D9xHAt/vQW8XAn85xL7Tmr/jPYBDmr/73Sr3NwmY0WzvDdzb9NHz47eL3np6/Hp9pnAEsD4zf5qZvwaupjVBbT+aCyxuthcDJ45X4cy8DXh4mP3MBa7MljuAfXaYuGc8etuZcZ98OHc+QXLPj98uetuZcTl+vQ6FYU1G2wMJ3BQRKyJioBmbmJlbmu0HgIm9ae05O+unX47pqCcfrmWHCZL76viN5eTN3ep1KPSrozNzBjAbmB8Rx7Y/ma1zub752Kbf+qHLyYdrGGKC5Of0+viN9eTN3ep1KPTlZLSZublZbwO+QesUbev208hmva13HcIu+un5Mc0+m3x4qAmS6ZPj14+TN/c6FO4EpkbEIRGxO3AKrQlqeyYi9oqIvbdvAyfQmjx3KTCv2W0ecF1vOnzOzvpZCvxJcxf9SOCXbafJ46KfJh/e2QTJ9MHx21lvPT9+Ne/8DvMO7Bxad11/ApzfB/28htYd3h8Ba7f3BOwPLAfuA24B9hvHnr5G6zTyN7SuI8/YWT+07pp/sTmedwEze9DbVU3t1c0P8qS2/c9versHmD0Ox+5oWpcGq4FVzTKnH47fLnrr6fHzG42SCr2+fJDUZwwFSQVDQVLBUJBUMBQkFQwFSQVDQVLBUJBU+F/ejqya4rYvVQAAAABJRU5ErkJggg==\n",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "#right\n",
    "device = 'cpu'\n",
    "plt.imshow(np.transpose(vutils.make_grid(fake_batch.to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<matplotlib.image.AxesImage at 0x7fdd8d595390>"
      ]
     },
     "execution_count": 77,
     "metadata": {},
     "output_type": "execute_result"
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAQUAAAD8CAYAAAB+fLH0AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvhp/UCwAAD4tJREFUeJzt3XvQXHV9x/H3VypIgQoYGmOIgjYqYaAhRqQjopUWkigGpzbCOBg004c6QaHa6QRoRW3jhRYviE0JQ4bAUCGtosGGa3QIOAVyIeYCBiKEkhgSLtYLgpbw7R97gvtLnye7eZ49u0t9v2bOnLO/Pbvfb06ST845u09+kZlI0k4v6XUDkvqLoSCpYChIKhgKkgqGgqSCoSCpUFsoRMSUiNgQERsjYk5ddSR1VtTxPYWI2At4APhTYDOwHDg9M+/reDFJHVXXmcKxwMbMfCgzfw1cC0yvqZakDvqdmt53LPBo0+PNwFuG2jki/FqlVL8nMvOQVjvVFQotRcQAMLDz8VceO7/2mue88rMArLrg+dprAUya+xI2HTKjK7UOe3wRAPevWlV7rSMmTQLgzVd9uvZayz94IQC3vP+/aq8FcNJ1r+ayv13blVpn/cNRAHzsshtqr3XJWacAPNLOvnWFwhZgXNPjQ6uxF2TmfGA+eKYg9ZO67iksB8ZHxOERsTdwGrC4plqSOqiWM4XMfC4izgZuBvYCFmTm+jpqSeqs2u4pZOYSYEld7y+pHn6jUVLBUJBUMBQkFQwFSQVDQVLBUJBUMBQkFQwFSQVDQVLBUJBUMBQkFQwFSQVDQVLBUJBUMBQkFQwFSQVDQVLBUJBUMBQkFQwFSYVa5pLc4yac90HqhpWZObnVTp4pSCr0bNq4XT38pRtrr3H4X00F4Fsf+W7ttQBOnfdOLh91ZFdq/cUTjWk1np07u/ZaL7vga431qBNqr/XsE8sA+NZX3117LYBTP/odJu57cldqrX7mZgC2vf2jtdcafftX297XMwVJBUNBUsFQkFQwFCQVDAVJBUNBUsFQkFQwFCQVRvTlpYjYBPwc2AE8l5mTI+Jg4DrgMGATMCMzfzKyNiV1SyfOFP44Myc2fad6DrA0M8cDS6vHkl4k6rh8mA4srLYXAqfWUENSTUYaCgncEhErI2KgGhudmVur7ceA0SOsIamLRvoDUcdn5paI+H3g1oj4YfOTmZlD/Vh0FSIDgz0nqXdGdKaQmVuq9XbgeuBYYFtEjAGo1tuHeO38zJzczs93S+qeYYdCROwXEQfs3AZOAtYBi4GZ1W4zgW+PtElJ3TOSy4fRwPURsfN9/jUzb4qI5cCiiJgFPALMGHmbkrpl2KGQmQ8BfzjI+JPAiSNpSlLv+I1GSQVDQVLBUJBUMBQkFQwFSQVDQVLBUJBUMBQkFZxLUvrt0dZckn0zbdwvP3Z17TV+95IzAHjb2u5M5XbHUes56I/+rSu1fvKffw7ARcd8uPZaf3PvAgCuX/xM7bXe+559ATjuhlW11wK465RJLFh+bldqffjNXwbgk1f8e+21PjPrfW3v6+WDpIKhIKlgKEgqGAqSCoaCpIKhIKlgKEgqGAqSCoaCpIKhIKlgKEgqGAqSCoaCpIKhIKlgKEgqGAqSCoaCpIKhIKlgKEgqGAqSCoaCpELLUIiIBRGxPSLWNY0dHBG3RsSD1fqgajwi4pKI2BgRayJiUp3NS+q8ds4UrgSm7DI2B1iameOBpdVjgKnA+GoZAOZ1pk1J3dIyFDJzGfDULsPTgYXV9kLg1Kbxq7LhLuDAiBjTqWYl1W+49xRGZ+bWavsxYHS1PRZ4tGm/zdWYpBeLzGy5AIcB65oe//cuz/+kWn8HOL5pfCkweYj3HABWVEu6uLjUvqxo5+/7cM8Utu28LKjW26vxLcC4pv0Orcb+j8ycn5mT25nbTlL3DHcuycXATODz1frbTeNnR8S1wFuAnzZd
ZuzWr+cNmh0dtfdHGlcyJ0yrf+4+gGVL3sd5l32uK7U+d9Z5AJzz/vqv1r5yXeP36vOXnll7rTlnXwnAzd+/pvZaACe/9QP8/dpPdKXW3x11MQBHXLei9lr3v7/9f3tbhkJEfB14BzAqIjYDF9IIg0URMQt4BJhR7b4EmAZsBH4JfGhPGpfUey1DITNPH+KpEwfZN4HZI21KUu/4jUZJBUNBUsFQkFQwFCQVDAVJBUNBUsFQkFQwFCQVDAVJBUNBUsFQkFQwFCQVDAVJBUNBUsFQkFQwFCQVDAVJBUNBUsFQkFQwFCQVDAVJBUNBUiGqKdx620RE75uQ/v9b2c6MbJ4pSCoMd9q4jrv08U211zj7kMMAuPf3ltZeC+CYn53IPa9Z1pVaxz5yAgAPDLyi9lqvn/8kAGvWH117raOPXAPA6MdW1l4LYNsr38QbfvFsV2pt2P9lABwz622117r3ijva3tczBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSoWUoRMSCiNgeEeuaxj4VEVsiYnW1TGt67ryI2BgRGyLi5Loal1SPds4UrgSmDDL+pcycWC1LACJiAnAacGT1mn+OiL061ayk+rUMhcxcBjzV5vtNB67NzF9l5sPARuDYEfQnqctGck/h7IhYU11eHFSNjQUebdpnczUm6UViuKEwD3gdMBHYCly8p28QEQMRsSIiVgyzB0k1GFYoZOa2zNyRmc8Dl/ObS4QtwLimXQ+txgZ7j/mZObmdn9qS1D3DCoWIGNP08L3Azk8mFgOnRcQ+EXE4MB64Z2QtSuqmlj8lGRFfB94BjIqIzcCFwDsiYiKQwCbgLIDMXB8Ri4D7gOeA2Zm5o57WJdWhZShk5umDDF+xm/3nAnNH0pSk3vEbjZIKhoKkgqEgqWAoSCoYCpIKhoKkgqEgqWAoSCoYCpIKThsn/fZw2jhJe65vpo377NU31F7j/DNOAWDxjKm11wJ4z6IbmXrKG7tS68YbfgjAxFd9vPZaq3/8RQC+OGt+7bU+fsUAAFdOu732WgBnLnk7hz67qiu1Nr9sEgDfmXlm7bXevfDKtvf1TEFSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBVahkJEjIuI70XEfRGxPiLOqcYPjohbI+LBan1QNR4RcUlEbIyINRExqe5fhKTOaedM4TngE5k5ATgOmB0RE4A5wNLMHA8srR4DTAXGV8sAMK/jXUuqTctQyMytmbmq2v45cD8wFpgOLKx2WwicWm1PB67KhruAAyNiTMc7l1SLPbqnEBGHAccAdwOjM3Nr9dRjwOhqeyzwaNPLNldjkl4E2p73ISL2B74BnJuZP4uIF57LzNzTWZ4iYoDG5YWkPtLWmUJEvJRGIFyTmd+shrftvCyo1tur8S3AuKaXH1qNFTJzfmZObmcaK0ldlJm7XYAArgK+vMv4PwJzqu05wEXV9ruAG6vXHQfc00aNdHFxqX1Z0ervYma2nmA2Io4H7gDWAs9Xw+fTuK+wCHg18AgwIzOfisZ1xaXAFOCXwIcyc0WLGvlG9tptH53wQ3YA8NY/+2nttQC+/42Xs+/GI7pS65k/uB+ATx758tprfWZ94/hNOfHTtde6aemFAMx4+uDaawEs2u8pjr7zs12pteb48wH4l7e9q/Zaf3nHf0CbE8y2vKeQmXfS+Fd/MCcOsn8Cs1u9r6T+5DcaJRUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVGg5Q1RXmtjDyWklDUtbM0R5piCp0PZU9HX7k7lP1F7jtgtGAfCKmfvVXgvgyYVPs2HyL7pS6w0r9gfgAzfdVXuta6YcB8C8x39ce62PHPIqAFbePKv2WgBvOvkKNnz3
vq7UesM7JzRqHnV77bVWrn172/t6piCpYChIKhgKkgqGgqRCy1CIiHER8b2IuC8i1kfEOdX4pyJiS0SsrpZpTa85LyI2RsSGiDi5zl+ApM5q59OH54BPZOaqiDgAWBkRt1bPfSkz/6l554iYAJwGHAm8CrgtIl6fmTs62bikerQ8U8jMrZm5qtr+OXA/MHY3L5kOXJuZv8rMh4GNwLGdaFZS/fbonkJEHAYcA9xdDZ0dEWsiYkFEHFSNjQUebXrZZnYfIpL6SNuhEBH7A98Azs3MnwHzgNcBE4GtwMV7UjgiBiJiRUSs2JPXSapXW6EQES+lEQjXZOY3ATJzW2buyMzngcv5zSXCFmBc08sPrcYKmTk/Mye3811sSd3TzqcPAVwB3J+ZX2waH9O023uBddX2YuC0iNgnIg4HxgP3dK5lSXVq59OHtwJnAGsjYnU1dj5wekRMBBLYBJwFkJnrI2IRcB+NTy5m+8mD9OLRMhQy804gBnlqyW5eMxeYO4K+JPWI32iUVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFQ0FSwVCQVDAUJBUMBUkFp42Tfnu0NW1cv8wQ9QTwdLXuV6Owv5Ho5/76uTfoXH+vaWenvjhTAIiIFf38H67Y38j0c3/93Bt0vz/vKUgqGAqSCv0UCvN73UAL9jcy/dxfP/cGXe6vb+4pSOoP/XSmIKkP9DwUImJKNefkxoiY0+t+ACJiU0SsrebIXFGNHRwRt0bEg9X6oFbv08F+FkTE9ohY1zQ2aD/RcEl1PNdExKQe9dc3c43uZj7UvjiGfTdfa2b2bAH2An4EvBbYG/gBMKGXPVV9bQJG7TJ2ETCn2p4DfKGL/ZwATALWteoHmAbcSOM/2z0OuLtH/X0K+OtB9p1Q/T7vAxxe/f7vVXN/Y4BJ1fYBwANVH31xDHfTX0+OYa/PFI4FNmbmQ5n5a+BaGnNR9qPpwMJqeyFwarcKZ+Yy4Kk2+5kOXJUNdwEH7jJHR7f6G0rX5xrNoedD7YtjuJv+hlLrMex1KPTrvJMJ3BIRKyNioBobnZlbq+3HgNG9ae0FQ/XTT8e07+Ya3WU+1L47hv0wX2uvQ6FfHZ+Zk4CpwOyIOKH5yWycw/XNxzb91k9lRHON1mGQ+VBf0A/HsNPztQ5Xr0OhrXknuy0zt1Tr7cD1NE7Ntu08hazW23vXIeymn744pjnCuUY7bbD5UOmjY1jHfK3D1etQWA6Mj4jDI2Jv4DQac1H2TETsFxEH7NwGTqIxT+ZiYGa120zg273p8AVD9bMY+GB1B/044KdNp8hd009zjQ41Hyp9cgyH6q9nx7DOu6pt3nmdRuNu64+AC/qgn9fSuLP7A2D9zp6AVwBLgQeB24CDu9jT12mcPv4PjevHWUP1Q+OO+deq47kWmNyj/q6u6q+p/hCPadr/gqq/DcDULvR3PI1LgzXA6mqZ1i/HcDf99eQY+o1GSYVeXz5I6jOGgqSCoSCpYChIKhgKkgqGgqSCoSCpYChIKvwvZzqwY4oVzloAAAAASUVORK5CYII=\n",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "#right\n",
    "padded = F.pad(fake_batch[:64], [1, 1, 1, 1])\n",
    "plt.imshow(rearrange(padded, '(b1 b2) c h w -> (b1 h) (b2 w) c', b1=8).cpu())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {},
   "outputs": [],
   "source": [
    "# TODO: Hierarchical softmax \n",
    "# TODO: some reinforcement stuff would also be needed"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Instead of conclusion\n",
    "\n",
    "Better code is a vague term; to be specific, code is expected to be:\n",
    "\n",
    "- reliable: does what is expected and does not fail; fails explicitly for wrong inputs\n",
    "- maintainable and modifiable\n",
    "- reusable: understanding and modifying code should be easier than writing from scratch\n",
    "- fast: in my measurements, the proposed versions run at a speed similar to the original code\n",
    "- readable: readability counts, as a means to achieve the previous goals\n",
    "\n",
    "The provided examples show how to improve deep learning code along these criteria. And `einops` helps a lot."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Links\n",
    "\n",
    "- [pytorch](http://github.com/pytorch/pytorch) and [einops](https://github.com/arogozhnikov/einops)\n",
    "- a significant part of the code was taken from the official [examples](https://github.com/pytorch/examples) and [tutorials](https://github.com/pytorch/tutorials); all code fragments are used for educational purposes\n",
    "- (references for other code are given in source of this html)\n",
    "- einops has a [tutorial](https://github.com/arogozhnikov/einops/tree/master/docs) for a gentler introduction"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
