{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/GMvandeVen/continual-learning/blob/master/hands_on_tutorial_InvictaSpringSchool.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Hands-on tutorial: Continual Learning\n",
        "\n",
        "**You can make your own copy of this notebook by selecting File->Save a copy in Drive from the menu bar above.**"
      ],
      "metadata": {
        "id": "pB15rzv-5A-i"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "This Colab Notebook was originally used for a hands-on tutorial at the [INVICTA Spring School](https://invicta.inesctec.pt/) in March 2024. The estimated time for this tutorial is 2 hours. At the school, this hands-on tutorial was preceded by a 3-hour interactive lecutre, for which the slides can be found [here](https://gmvandeven.github.io/files/slides/InvictaSpringSchool_Mar2024.pdf)."
      ],
      "metadata": {
        "id": "aBduCQ3QONvx"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Things you'll learn in this session:\n",
        "- How to set up a simple continual learning experiment\n",
        "- How to implement EWC, replay and EWC+replay\n",
        "- How to evaluate and visualize the results of a continual learning experiment"
      ],
      "metadata": {
        "id": "DvB7ZIPdQr5c"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "This tutorial is based on code from this repository: https://github.com/GMvandeVen/continual-learning.\n",
        "\n",
        "Other popular libraries for continual learning are [Avalanche](https://github.com/ContinualAI/avalanche) and [Mammoth](https://github.com/aimagelab/mammoth)."
      ],
      "metadata": {
        "id": "OS7kSVkOth4g"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Setup"
      ],
      "metadata": {
        "id": "WojfTislQ6dg"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Load required libraries\n",
        "First, let's load some packages that we are going to need. We use [PyTorch](https://pytorch.org/) as our main deep learning library."
      ],
      "metadata": {
        "id": "anTIGQYWnm--"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Standard libraries\n",
        "import numpy as np\n",
        "import copy\n",
        "import tqdm\n",
        "# Pytorch\n",
        "import torch\n",
        "from torch.nn import functional as F\n",
        "from torchvision import datasets, transforms\n",
        "# For visualization\n",
        "from torchvision.utils import make_grid\n",
        "import matplotlib.pyplot as plt"
      ],
      "metadata": {
        "id": "LNJi8AZPQ8HO"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Download data\n",
        "For this tutorial we will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/), to construct different types of conitnual learning experiments."
      ],
      "metadata": {
        "id": "lCOLAWSkyA5A"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "MNIST_trainset = datasets.MNIST(root='data/', train=True, download=True,\n",
        "                                transform=transforms.ToTensor())\n",
        "MNIST_testset = datasets.MNIST(root='data/', train=False, download=True,\n",
        "                               transform=transforms.ToTensor())\n",
        "config = {'size': 28, 'channels': 1, 'classes': 10}"
      ],
      "metadata": {
        "id": "9mngyfO-RzyP"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title Visualization functions\n",
        "def multi_context_barplot(axis, accs, title=None):\n",
        "    '''Generate barplot using the values in [accs].'''\n",
        "    contexts = len(accs)\n",
        "    axis.bar(range(contexts), accs, color='k')\n",
        "    axis.set_ylabel('Testing Accuracy (%)')\n",
        "    axis.set_xticks(range(contexts), [f'Context {i+1}' for i in range(contexts)])\n",
        "    if title is not None:\n",
        "        axis.set_title(title)\n",
        "\n",
        "def plot_examples(axis, dataset, context_id=None):\n",
        "    '''Plot 25 examples from [dataset].'''\n",
        "    data_loader = torch.utils.data.DataLoader(dataset, batch_size=25, shuffle=True)\n",
        "    image_tensor, _ = next(iter(data_loader))\n",
        "    image_grid = make_grid(image_tensor, nrow=5, pad_value=1) # pad_value=0 would give black borders\n",
        "    axis.imshow(np.transpose(image_grid.numpy(), (1,2,0)))\n",
        "    if context_id is not None:\n",
        "        axis.set_title(\"Context {}\".format(context_id+1))\n",
        "    axis.axis('off')"
      ],
      "metadata": {
        "id": "yv-AN2xn1mZd",
        "cellView": "form"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Part 1: Catastrophic forgetting - Permuted MNIST\n",
        "Let's start by trying to set up a simple continual learning experiment, to check whether there is indeed catastrophic forgetting."
      ],
      "metadata": {
        "id": "sFEmAjmVQfRs"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Set up the benchmark (Permuted MNIST)\n",
        "For this we will use \"Permuted MNIST\". In this continual learning experiment, in each context (or task), the neural network must learn to classify the ten MNIST digits. However, in each context a different permutation is applied to the pixels of all images.\n",
        "\n",
        "Permuted MNIST was first used in this paper: https://arxiv.org/abs/1312.6211."
      ],
      "metadata": {
        "id": "wdbrA_V6jrI_"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Let's start by specifying a function and a dataset class that we will use to create the various contexts (or tasks) of Permuted MNIST."
      ],
      "metadata": {
        "id": "tm0iox7zklFq"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Function to apply a given permutation the pixels of an image.\n",
        "def permutate_image_pixels(image, permutation):\n",
        "    '''Permutate the pixels of [image] according to [permutation].'''\n",
        "\n",
        "    if permutation is None:\n",
        "        return image\n",
        "    else:\n",
        "        c, h, w = image.size()\n",
        "        image = image.view(c, -1)\n",
        "        image = image[:, permutation]  #--> same permutation for each channel\n",
        "        image = image.view(c, h, w)\n",
        "        return image"
      ],
      "metadata": {
        "id": "DHjgX-YuyvFj"
      },
      "execution_count": null,
      "outputs": []
    },
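    {
      "cell_type": "markdown",
      "source": [
        "As a quick sanity check (an illustrative cell, not part of the original tutorial), we can verify that `permutate_image_pixels` only reorders pixel values: the sorted pixel values of input and output should be identical."
      ],
      "metadata": {
        "id": "perm-sanity-check-md"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Sanity check: a permutation reorders the pixels but keeps their values\n",
        "example_image = torch.rand(config['channels'], config['size'], config['size'])\n",
        "example_perm = np.random.permutation(config['size']**2)\n",
        "permuted_image = permutate_image_pixels(example_image, example_perm)\n",
        "assert permuted_image.shape == example_image.shape\n",
        "assert torch.equal(example_image.view(-1).sort()[0], permuted_image.view(-1).sort()[0])"
      ],
      "metadata": {
        "id": "perm-sanity-check-code"
      },
      "execution_count": null,
      "outputs": []
    },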
    {
      "cell_type": "code",
      "source": [
        "# Class to create a dataset with images that have all been transformed in the same way.\n",
        "class TransformedDataset(torch.utils.data.Dataset):\n",
        "    '''To modify an existing dataset with a transform.\n",
        "    Useful for creating different permutations of MNIST without loading the data multiple times.'''\n",
        "\n",
        "    def __init__(self, original_dataset, transform=None, target_transform=None):\n",
        "        super().__init__()\n",
        "        self.dataset = original_dataset\n",
        "        self.transform = transform\n",
        "        self.target_transform = target_transform\n",
        "\n",
        "    def __len__(self):\n",
        "        return len(self.dataset)\n",
        "\n",
        "    def __getitem__(self, index):\n",
        "        (input, target) = self.dataset[index]\n",
        "        if self.transform:\n",
        "            input = self.transform(input)\n",
        "        if self.target_transform:\n",
        "            target = self.target_transform(target)\n",
        "        return (input, target)"
      ],
      "metadata": {
        "id": "PlVMxtWVyhaa"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now let's use these tools to create a Permuted MNIST benchmark with 2 contexts."
      ],
      "metadata": {
        "id": "oGt1bpQkk8Kw"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "contexts = 2"
      ],
      "metadata": {
        "id": "EklsnmDolDbR"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Specify for each context the permutations to use (with no permutation for the first context)\n",
        "permutations = [None] + [np.random.permutation(config['size']**2) for _ in range(contexts-1)]"
      ],
      "metadata": {
        "id": "XGx9n5ezlvzf"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Specify for each context the transformed train- and testset\n",
        "train_datasets = []\n",
        "test_datasets = []\n",
        "for context_id, perm in enumerate(permutations):\n",
        "    train_datasets.append(TransformedDataset(\n",
        "        MNIST_trainset, transform=transforms.Lambda(lambda x, p=perm: permutate_image_pixels(x, p)),\n",
        "    ))\n",
        "    test_datasets.append(TransformedDataset(\n",
        "        MNIST_testset, transform=transforms.Lambda(lambda x, p=perm: permutate_image_pixels(x, p)),\n",
        "    ))"
      ],
      "metadata": {
        "id": "AVAF1WGnyVlY"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Visualize the contexts\n",
        "figure, axis = plt.subplots(1, contexts, figsize=(3*contexts, 4))\n",
        "\n",
        "for context_id in range(len(train_datasets)):\n",
        "    plot_examples(axis[context_id], train_datasets[context_id], context_id=context_id)"
      ],
      "metadata": {
        "id": "w0tTkEFZy-VU"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Set up the model\n",
        "Now it is time to define the neural network model that we will sequentially train on these two contexts."
      ],
      "metadata": {
        "id": "A9GSVyeAl1nh"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "We start by specifying some \"helper functions\" and \"helper code\" that make it easier to specify the model. If you are interested you can have a look at the code in the cell below, but that is not needed to follow the rest of this tutorial.\n",
        "\n",
        "**It is however needed to run the code in the below cell!**"
      ],
      "metadata": {
        "id": "xwfRu6HTmoSS"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@title Helper functions\n",
        "\n",
        "class Identity(torch.nn.Module):\n",
        "    '''A nn-module to simply pass on the input data.'''\n",
        "    def forward(self, x):\n",
        "        return x\n",
        "\n",
        "    def __repr__(self):\n",
        "        tmpstr = self.__class__.__name__ + '()'\n",
        "        return tmpstr\n",
        "\n",
        "\n",
        "class Flatten(torch.nn.Module):\n",
        "    '''A nn-module to flatten a multi-dimensional tensor to 2-dim tensor.'''\n",
        "    def forward(self, x):\n",
        "        batch_size = x.size(0)   # first dimenstion should be batch-dimension.\n",
        "        return x.view(batch_size, -1)\n",
        "\n",
        "    def __repr__(self):\n",
        "        tmpstr = self.__class__.__name__ + '()'\n",
        "        return tmpstr\n",
        "\n",
        "\n",
        "class fc_layer(torch.nn.Module):\n",
        "    '''Fully connected layer, with possibility of returning \"pre-activations\".\n",
        "\n",
        "    Input:  [batch_size] x ... x [in_size] tensor\n",
        "    Output: [batch_size] x ... x [out_size] tensor'''\n",
        "\n",
        "    def __init__(self, in_size, out_size, nl=torch.nn.ReLU(), bias=True):\n",
        "        super().__init__()\n",
        "        self.bias = bias\n",
        "        self.linear = torch.nn.Linear(in_size, out_size, bias=bias)\n",
        "        if isinstance(nl, torch.nn.Module):\n",
        "            self.nl = nl\n",
        "        elif nl==\"relu\":\n",
        "            self.nl = torch.nn.ReLU()\n",
        "        elif nl==\"leakyrelu\":\n",
        "          self.nl = torch.nn.LeakyReLU()\n",
        "\n",
        "    def forward(self, x):\n",
        "        pre_activ = self.linear(x)\n",
        "        output = self.nl(pre_activ) if hasattr(self, 'nl') else pre_activ\n",
        "        return output\n",
        "\n",
        "\n",
        "class MLP(torch.nn.Module):\n",
        "    '''Module for a multi-layer perceptron (MLP).\n",
        "\n",
        "    Input:  [batch_size] x ... x [size_per_layer[0]] tensor\n",
        "    Output: (tuple of) [batch_size] x ... x [size_per_layer[-1]] tensor'''\n",
        "\n",
        "    def __init__(self, input_size=1000, output_size=10, layers=2,\n",
        "                 hid_size=1000, hid_smooth=None, size_per_layer=None,\n",
        "                 nl=\"relu\", bias=True, output='normal'):\n",
        "        '''sizes: 0th=[input], 1st=[hid_size], ..., 1st-to-last=[hid_smooth], last=[output].\n",
        "        [input_size]       # of inputs\n",
        "        [output_size]      # of units in final layer\n",
        "        [layers]           # of layers\n",
        "        [hid_size]         # of units in each hidden layer\n",
        "        [hid_smooth]       if None, all hidden layers have [hid_size] units, else # of units linearly in-/decreases s.t.\n",
        "                             final hidden layer has [hid_smooth] units (if only 1 hidden layer, it has [hid_size] units)\n",
        "        [size_per_layer]   None or <list> with for each layer number of units (1st element = number of inputs)\n",
        "                                --> overwrites [input_size], [output_size], [layers], [hid_size] and [hid_smooth]\n",
        "        [nl]               <str>; type of non-linearity to be used (options: \"relu\", \"leakyrelu\", \"none\")\n",
        "        [output]           <str>; if - \"normal\", final layer is same as all others\n",
        "                                     - \"none\", final layer has no non-linearity\n",
        "                                     - \"sigmoid\", final layer has sigmoid non-linearity'''\n",
        "\n",
        "        super().__init__()\n",
        "        self.output = output\n",
        "\n",
        "        # get sizes of all layers\n",
        "        if size_per_layer is None:\n",
        "            hidden_sizes = []\n",
        "            if layers > 1:\n",
        "                if (hid_smooth is not None):\n",
        "                    hidden_sizes = [int(x) for x in np.linspace(hid_size, hid_smooth, num=layers-1)]\n",
        "                else:\n",
        "                    hidden_sizes = [int(x) for x in np.repeat(hid_size, layers - 1)]\n",
        "            size_per_layer = [input_size] + hidden_sizes + [output_size] if layers>0 else [input_size]\n",
        "        self.layers = len(size_per_layer)-1\n",
        "\n",
        "        # set label for this module\n",
        "        # -determine \"non-default options\"-label\n",
        "        nd_label = \"{bias}{nl}\".format(\n",
        "            bias=\"\" if bias else \"n\",\n",
        "            nl=\"l\" if nl==\"leakyrelu\" else (\"n\" if nl==\"none\" else \"\"),\n",
        "        )\n",
        "        nd_label = \"{}{}\".format(\"\" if nd_label==\"\" else \"-{}\".format(nd_label),\n",
        "                                 \"\" if output==\"normal\" else \"-{}\".format(output))\n",
        "        # -set label\n",
        "        size_statement = \"\"\n",
        "        for i in size_per_layer:\n",
        "            size_statement += \"{}{}\".format(\"-\" if size_statement==\"\" else \"x\", i)\n",
        "        self.label = \"F{}{}\".format(size_statement, nd_label) if self.layers>0 else \"\"\n",
        "\n",
        "        # set layers\n",
        "        for lay_id in range(1, self.layers+1):\n",
        "            # number of units of this layer's input and output\n",
        "            in_size = size_per_layer[lay_id-1]\n",
        "            out_size = size_per_layer[lay_id]\n",
        "            # define and set the fully connected layer\n",
        "            layer = fc_layer(\n",
        "                in_size, out_size, bias=bias,\n",
        "                nl=(\"none\" if output==\"none\" else nn.Sigmoid()) if (\n",
        "                    lay_id==self.layers and not output==\"normal\"\n",
        "                ) else nl,\n",
        "            )\n",
        "            setattr(self, 'fcLayer{}'.format(lay_id), layer)\n",
        "\n",
        "        # if no layers, add \"identity\"-module to indicate in this module's representation nothing happens\n",
        "        if self.layers<1:\n",
        "            self.noLayers = Identity()\n",
        "\n",
        "    def forward(self, x):\n",
        "        for lay_id in range(1, self.layers + 1):\n",
        "            x = getattr(self, \"fcLayer{}\".format(lay_id))(x)\n",
        "        return x\n",
        "\n",
        "    @property\n",
        "    def name(self):\n",
        "        return self.label"
      ],
      "metadata": {
        "cellView": "form",
        "id": "W5zR4KSamLJT"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now let's specify a class for a basic neural network classifier model."
      ],
      "metadata": {
        "id": "UOymyzR9p3ob"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "class Classifier(torch.nn.Module):\n",
        "    '''Model for classifying images.'''\n",
        "\n",
        "    def __init__(self, image_size, image_channels, output_units,\n",
        "                 fc_layers=3, fc_units=1000, fc_nl=\"relu\", bias=True):\n",
        "\n",
        "        super().__init__()\n",
        "\n",
        "       # Flatten image to 2D-tensor\n",
        "        self.flatten = Flatten()\n",
        "\n",
        "        # Specify the fully connected hidden layers\n",
        "        input_size = image_channels * image_size * image_size\n",
        "        self.fcE = MLP(input_size=input_size, output_size=fc_units, layers=fc_layers-1,\n",
        "                       hid_size=fc_units, nl=fc_nl, bias=bias)\n",
        "        mlp_output_size = fc_units if fc_layers>1 else self.input_size\n",
        "\n",
        "        # Specify the final linear classifier layer\n",
        "        self.classifier = fc_layer(mlp_output_size, output_units, nl='none')\n",
        "\n",
        "    def forward(self, x):\n",
        "        flatten_x = self.flatten(x)\n",
        "        final_features = self.fcE(flatten_x)\n",
        "        out = self.classifier(final_features)\n",
        "        return out"
      ],
      "metadata": {
        "id": "s5hXbEWan7w0"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Let's create an instance of such a classifier model, and print some details of it to screen."
      ],
      "metadata": {
        "id": "SoADM8X_qCSk"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Specify the architectural layout of the network to use\n",
        "fc_lay = 4        #--> number of fully-connected layers\n",
        "fc_units = 40     #--> number of units in each hidden layer\n",
        "fc_nl = \"relu\"    #--> what non-linearity to use?"
      ],
      "metadata": {
        "id": "pQJISF0xqJ2B"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Define the model\n",
        "model = Classifier(image_size=config['size'], image_channels=config['channels'],\n",
        "                   output_units=config['classes'],\n",
        "                   fc_layers=fc_lay, fc_units=fc_units, fc_nl=fc_nl)"
      ],
      "metadata": {
        "id": "JHKjU4MUqT4f"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Print the network architecture\n",
        "model"
      ],
      "metadata": {
        "id": "LsSDP19EqZHs"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Print info regarding number of parameters\n",
        "total_params = 0\n",
        "for param in model.parameters():\n",
        "    n_params = index_dims = 0\n",
        "    for dim in param.size():\n",
        "        n_params = dim if index_dims==0 else n_params*dim\n",
        "        index_dims += 1\n",
        "    total_params += n_params\n",
        "print( \"--> this network has {} parameters (~{}K)\"\n",
        "      .format(total_params, round(total_params / 1000)))"
      ],
      "metadata": {
        "id": "sPe52AA8qY4U"
      },
      "execution_count": null,
      "outputs": []
    },
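    {
      "cell_type": "markdown",
      "source": [
        "The same count can be obtained more concisely with `torch.Tensor.numel` (shown here only as an illustrative alternative to the loop above):"
      ],
      "metadata": {
        "id": "numel-param-count-md"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Concise alternative: numel() returns the number of elements in a tensor\n",
        "total_params_check = sum(p.numel() for p in model.parameters())\n",
        "print(\"--> this network has {} parameters (~{}K)\"\n",
        "      .format(total_params_check, round(total_params_check / 1000)))"
      ],
      "metadata": {
        "id": "numel-param-count-code"
      },
      "execution_count": null,
      "outputs": []
    },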
    {
      "cell_type": "markdown",
      "source": [
        "### Train on first context\n",
        "It's time to start training the model on the first context (i.e., the regular MNIST dataset)."
      ],
      "metadata": {
        "id": "Pg_98FO7q7SI"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "To do this, let's define a function to that can train a given neural network (i.e., `model`) on a particular dataset (i.e., `dataset`). We can then later re-use this function, for example to train the network on the dataset of the second context.\n",
        "\n",
        "We can also specify the number of iterations we want to train for, the learning rate and the mini batch-size."
      ],
      "metadata": {
        "id": "jbtG3Elh5wbV"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def train(model, dataset, iters, lr, batch_size):\n",
        "    # Define the optimizer\n",
        "    optimizer = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999))\n",
        "\n",
        "    # Set model in training-mode\n",
        "    model.train()\n",
        "\n",
        "    # Initialize # iters left on current data-loader(s)\n",
        "    iters_left = 1\n",
        "\n",
        "    # Define tqdm progress bar(s)\n",
        "    progress_bar = tqdm.tqdm(range(1, iters+1))\n",
        "\n",
        "    # Loop over all iterations\n",
        "    for batch_index in range(1, iters+1):\n",
        "\n",
        "        # Update # iters left on current data-loader(s) and, if needed, create new one(s)\n",
        "        iters_left -= 1\n",
        "        if iters_left==0:\n",
        "            data_loader = iter(torch.utils.data.DataLoader(dataset, batch_size=batch_size,\n",
        "                                                          shuffle=True, drop_last=True))\n",
        "            iters_left = len(data_loader)\n",
        "\n",
        "        # Sample training data of current context\n",
        "        x, y = next(data_loader)\n",
        "\n",
        "        # Reset optimizer\n",
        "        optimizer.zero_grad()\n",
        "\n",
        "        # Run model\n",
        "        y_hat = model(x)\n",
        "\n",
        "        # Calculate prediction loss\n",
        "        loss = torch.nn.functional.cross_entropy(input=y_hat, target=y, reduction='mean')\n",
        "\n",
        "        # Calculate training-accuracy (in %)\n",
        "        accuracy = (y == y_hat.max(1)[1]).sum().item()*100 / x.size(0)\n",
        "\n",
        "        # Backpropagate errors\n",
        "        loss.backward()\n",
        "\n",
        "        # Take the optimizer step\n",
        "        optimizer.step()\n",
        "\n",
        "        # Update progress bar\n",
        "        progress_bar.set_description(\n",
        "        '<CLASSIFIER> | training loss: {loss:.3} | training accuracy: {prec:.3}% |'\n",
        "            .format(loss=loss.item(), prec=accuracy)\n",
        "        )\n",
        "        progress_bar.update(1)\n",
        "\n",
        "    # Close the progress bar\n",
        "    progress_bar.close()"
      ],
      "metadata": {
        "id": "yNJOs8Vj54Ub"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now we need to choose the training hyperparameters (i.e., learning rate, batch size and number of iterations)."
      ],
      "metadata": {
        "id": "DlFBimOt7Tbx"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "iters = 200       # for how many iterations to train?\n",
        "lr = 0.01         # learning rate\n",
        "batch_size = 128  # size of mini-batches"
      ],
      "metadata": {
        "id": "ITZJbebd7cZY"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "And let's train our neural network on the first context."
      ],
      "metadata": {
        "id": "XyTV4Ee_7d9a"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Train the model on the first context\n",
        "train(model, dataset=train_datasets[0], iters=iters, lr=lr, batch_size=batch_size)"
      ],
      "metadata": {
        "id": "GJrEXhHwtO69"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Evaluate\n",
        "Did the training work? Let's find out by evaluating the performance of the trained model on the MNIST test set."
      ],
      "metadata": {
        "id": "_XeVQWFWtSo9"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Again, let's start by specifying a function to evaluate any given model (i.e., `model`) on a specific dataset (i.e., `dataset`)."
      ],
      "metadata": {
        "id": "gYkoS0NYyYC-"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def test_acc(model, dataset, test_size=None, batch_size=128):\n",
        "    '''Evaluate accuracy (% samples classified correctly) of a classifier ([model]) on [dataset].'''\n",
        "\n",
        "    # Set model to eval()-mode\n",
        "    mode = model.training\n",
        "    model.eval()\n",
        "\n",
        "    # Loop over batches in [dataset]\n",
        "    data_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,\n",
        "                                              shuffle=True, drop_last=False)\n",
        "    total_tested = total_correct = 0\n",
        "    for x, y in data_loader:\n",
        "        # -break on [test_size] (if \"None\", full dataset is used)\n",
        "        if test_size:\n",
        "            if total_tested >= test_size:\n",
        "                break\n",
        "        # -evaluate model\n",
        "        with torch.no_grad():\n",
        "            scores = model(x)\n",
        "        _, predicted = torch.max(scores, 1)\n",
        "        # -update statistics\n",
        "        total_correct += (predicted == y).sum().item()\n",
        "        total_tested += len(x)\n",
        "    accuracy = total_correct*100 / total_tested\n",
        "\n",
        "    # Set model back to its initial mode, print result on screen (if requested) and return it\n",
        "    model.train(mode=mode)\n",
        "\n",
        "    return accuracy"
      ],
      "metadata": {
        "id": "oeR-wzk7trnq"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now let's use this function to evaluate the performance of the model on both the test data from context 1 (i.e., regular MNIST, on which the model was just trained) and on the test data from context 2 (i.e., a permuted version of MNIST, on which the model has not yet been trained)."
      ],
      "metadata": {
        "id": "oy0jxo7GzTZS"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Evaluate accuracy per context and print to screen\n",
        "print(\"\\n Accuracy (in %) of the model on test-set of:\")\n",
        "context1_accs = []\n",
        "for i in range(contexts):\n",
        "    acc = test_acc(model, test_datasets[i], test_size=None)\n",
        "    print(\" - Context {}: {:.1f}\".format(i+1, acc))\n",
        "    context1_accs.append(acc)"
      ],
      "metadata": {
        "id": "RLbnEFYizT-T"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "The model does well on test data from context 1, but is around chance level for test data from context 2.\n",
        "\n",
        "This is not very surprising, as we have not yet trained the model on context 2!"
      ],
      "metadata": {
        "id": "tDX1C8e80Ous"
      }
    },
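    {
      "cell_type": "markdown",
      "source": [
        "To make the chance-level comparison concrete, we can evaluate a freshly initialized (i.e., untrained) network: with ten balanced classes, it should score roughly 10% on either context. (This illustrative cell assumes the model and evaluation cells above have been run.)"
      ],
      "metadata": {
        "id": "chance-level-md"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Accuracy of an untrained network should be near chance (100% / 10 classes = 10%)\n",
        "untrained_model = Classifier(image_size=config['size'], image_channels=config['channels'],\n",
        "                             output_units=config['classes'],\n",
        "                             fc_layers=fc_lay, fc_units=fc_units, fc_nl=fc_nl)\n",
        "for i in range(contexts):\n",
        "    print(\" - Context {}: {:.1f}\".format(i+1, test_acc(untrained_model, test_datasets[i])))"
      ],
      "metadata": {
        "id": "chance-level-code"
      },
      "execution_count": null,
      "outputs": []
    },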
    {
      "cell_type": "markdown",
      "source": [
        "#### Store model copy (for later)\n",
        "Before continuing to train the model on the second context, let's store a copy of the model after training on the first context. We will later use this model copy to try out different ways of training the same model on the second context (i.e., we will try out different continual learning methods)."
      ],
      "metadata": {
        "id": "nuSXFD-O00ZJ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "model_after_context1 = copy.deepcopy(model)"
      ],
      "metadata": {
        "id": "hfDXSlr903z-"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Train on second context\n",
        "Now let's continue to train our model on the second context, and see what happens."
      ],
      "metadata": {
        "id": "KG5vmXzrtVY-"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Continue to train the model on the second context\n",
        "train(model, dataset=train_datasets[1], iters=iters, lr=lr, batch_size=batch_size)"
      ],
      "metadata": {
        "id": "Qm8nl4vZ8Qg8"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Evaluate"
      ],
      "metadata": {
        "id": "gzz4tXXFtXbT"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Evaluate accuracy per context and print to screen\n",
        "print(\"\\n Accuracy (in %) of the model on test-set of:\")\n",
        "context2_accs = []\n",
        "for i in range(contexts):\n",
        "    acc = test_acc(model, test_datasets[i], test_size=None)\n",
        "    print(\" - Context {}: {:.1f}\".format(i+1, acc))\n",
        "    context2_accs.append(acc)"
      ],
      "metadata": {
        "id": "FHVaWONJtsps"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Train on both contexts at the same time"
      ],
      "metadata": {
        "id": "Rq6PsBLitcRr"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Define a new model with same architecture\n",
        "model_joint = Classifier(image_size=config['size'], image_channels=config['channels'],\n",
        "                         output_units=config['classes'],\n",
        "                         fc_layers=fc_lay, fc_units=fc_units, fc_nl=fc_nl)"
      ],
      "metadata": {
        "id": "ix_3i8cittOj"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Create a joint dataset with data from both contexts\n",
        "joint_trainset = torch.utils.data.ConcatDataset(train_datasets)"
      ],
      "metadata": {
        "id": "y4trFHRl9jEk"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "We will use the same training hyperparameters, except that we double the batch size."
      ],
      "metadata": {
        "id": "Ldr-Z3Xa9Vgq"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "batch_size_joint = 256"
      ],
      "metadata": {
        "id": "ON_nH7Ap94R6"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Train the joint model\n",
        "train(model_joint, dataset=joint_trainset, iters=iters, lr=lr, batch_size=batch_size_joint)"
      ],
      "metadata": {
        "id": "HP73reGQ-KlI"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Evaluate the model\n",
        "print(\"\\n Accuracy (in %) of the model on test-set of:\")\n",
        "joint_accs = []\n",
        "for i in range(contexts):\n",
        "    acc = test_acc(model_joint, test_datasets[i], test_size=None)\n",
        "    print(\" - Context {}: {:.1f}\".format(i+1, acc))\n",
        "    joint_accs.append(acc)"
      ],
      "metadata": {
        "id": "ngV6LI73-zKV"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Visualize results"
      ],
      "metadata": {
        "id": "jot0FyzK--uE"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Visualize\n",
        "figure, axis = plt.subplots(1, 4, figsize=(15, 5))\n",
        "\n",
        "title='After training on context 1, \\nbut not yet training on context 2'\n",
        "multi_context_barplot(axis[0], context1_accs, title)\n",
        "\n",
        "title='After first training on context 1, \\nand then training on context 2'\n",
        "multi_context_barplot(axis[1], context2_accs, title)\n",
        "\n",
        "axis[2].axis('off')\n",
        "\n",
        "title='After jointly training on both contexts'\n",
        "multi_context_barplot(axis[3], joint_accs, title)"
      ],
      "metadata": {
        "id": "OJEcM-8i_CAV"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "We have observed **catastrophic forgetting**! When our neural network model first learns context 1 and is then trained on context 2, the model's performance on data from context 1 substantially drops (left panels). When the same neural network model is instead trained on both context 1 and 2 at the same time, the model is able to learn both contexts well (right panel), demonstrating that the forgetting cannot be explained by limited model capacity."
      ],
      "metadata": {
        "id": "pWWeiBRcyaCH"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Part 2: Overcoming catastrophic forgetting - EWC & replay"
      ],
      "metadata": {
        "id": "HrAUoOovzzMf"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now that we found catastrophic forgetting, let's try out some methods to mitigate the forgetting.\n",
        "\n",
        "We will start by exploring two methods: EWC and replay. For both methods, the training on the first context is the same as before. We can therefore use as starting point the copy of the model that we stored after finishing training on the first context."
      ],
      "metadata": {
        "id": "q6ox4ZT10EGB"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "model_ewc = copy.deepcopy(model_after_context1)\n",
        "model_replay = copy.deepcopy(model_after_context1)"
      ],
      "metadata": {
        "id": "z9FGHT8Z161d"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Elastic Weight Consolidation (EWC)\n",
        "EWC is a popular parameter regularization strategy for continual learning. It was introduced in the paper \"[Overcoming catastrophic forgetting in neural networks](https://www.pnas.org/doi/abs/10.1073/pnas.1611835114)\" (Kirkpatrick et al., 2017; *PNAS*).\n",
        "\n",
        "EWC computes a diagonal approximation to the [Fisher Information matrix](https://en.wikipedia.org/wiki/Fisher_information) to estimate, for each parameter of the network, how important that parameter is for the performance on the previous context. During training on the next context, these parameter importance estimates are then used to penalize changes to the parameters, with changes to the most important parameters penalized most."
      ],
      "metadata": {
        "id": "s7dQ5itP0RJc"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "When training on context $k>1$, the EWC regularization term is given by:\n",
        "$$\n",
        "\\mathcal{L}^{(k)}_{\\text{regularization}_{\\text{EWC}}}\\left(\\boldsymbol{\\theta}\\right) = \\frac{1}{2} \\sum_{i=1}^{N_{\\text{params}}} \\tilde{F}_{ii}^{(k)} \\left(\\theta_i - \\hat{\\theta}_{i}^{(k)} \\right)^2\n",
        "$$\n",
        "whereby $\\hat{\\theta}_{i}^{(k)}$ is the $i^{\\text{th}}$ element of $\\hat{\\boldsymbol{\\theta}}^{\\left(k\\right)}$, which is the vector with parameter values at the end of training on context $k$, and $\\tilde{F}_{ii}^{(k)}$ is an approximation of $F_{ii}^{(k)}$, the $i^{\\text{th}}$ diagonal element of $\\boldsymbol{F}^{(k)}$, which is the Fisher Information matrix of context $k$ evaluated at $\\hat{\\boldsymbol{\\theta}}^{(k)}$."
      ],
      "metadata": {
        "id": "OcXpF4N6kyc_"
      }
    },
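    {
      "cell_type": "markdown",
      "source": [
        "As a minimal sketch of this penalty (with made-up parameter values, not taken from the model above), the contribution of a single parameter tensor could be computed as:\n",
        "```python\n",
        "import torch\n",
        "\n",
        "theta = torch.tensor([1.0, 2.0, 3.0])      # current parameter values\n",
        "theta_hat = torch.tensor([1.0, 1.0, 1.0])  # values at the end of training on the previous context\n",
        "fisher = torch.tensor([10.0, 0.1, 0.0])    # per-parameter importance estimates\n",
        "\n",
        "penalty = 0.5 * (fisher * (theta - theta_hat) ** 2).sum()\n",
        "print(penalty.item())  # changes to parameters with large Fisher values dominate\n",
        "```\n",
        "Note that a parameter with zero estimated Fisher Information can change freely without incurring any penalty."
      ],
      "metadata": {}
    },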
    {
      "cell_type": "markdown",
      "source": [
        "(Technically, the above regularization term is for '[Online EWC](https://arxiv.org/abs/1805.06370)'. The original version of EWC instead kept a separate quadratic penalty for each previous context, which is somewhat problematic, as explained in [this blog post](https://www.inference.vc/comment-on-overcoming-catastrophic-forgetting-in-nns-are-multiple-penalties-needed-2/).)"
      ],
      "metadata": {
        "id": "uRJG6pGDoj_O"
      }
    },
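    {
      "cell_type": "markdown",
      "source": [
        "As a minimal sketch of the 'online' aspect (with made-up Fisher values): instead of keeping one penalty per past context, a single penalty is kept, based on a running (possibly decayed) sum of the Fisher estimates. The decay hyperparameter is called `ewc_gamma` in the code below.\n",
        "```python\n",
        "import torch\n",
        "\n",
        "gamma = 1.0  # decay factor for Fisher estimates of older contexts\n",
        "fisher_context1 = torch.tensor([1.0, 0.5])\n",
        "fisher_context2 = torch.tensor([0.2, 2.0])\n",
        "\n",
        "# After finishing context 2, the single penalty uses the running sum of Fishers:\n",
        "fisher_running = fisher_context2 + gamma * fisher_context1\n",
        "print(fisher_running)  # tensor([1.2000, 2.5000])\n",
        "```"
      ],
      "metadata": {}
    },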
    {
      "cell_type": "markdown",
      "source": [
        "The Fisher Information matrix is defined as the covariance of the 'score', which is the gradient with respect to $\\boldsymbol{\\theta}$ of the natural logarithm of the likelihood function. The $i^{\\text{th}}$ diagonal element of the Fisher Information on context $k$ is therefore given by:\n",
        "\n",
        "$$\n",
        "F_{ii}^{(k)} = \\mathbb{E}_{\\boldsymbol{x}\\sim Q_{\\boldsymbol{x}}^{(k)}} \\left[ \\ \\mathbb{E}_{p_{\\hat{\\boldsymbol{\\theta}}^{(k)}}} \\left[ \\left(  \\left. \\frac{\\delta \\log{p_{\\boldsymbol{\\theta}}\\left(Y=y|\\boldsymbol{x}\\right)}}{\\delta \\theta_i} \\right\\rvert_{\\boldsymbol{\\theta}=\\hat{\\boldsymbol{\\theta}}^{(k)}} \\right)^2 \\right] \\right]\n",
        "$$\n",
        "\n",
        "whereby $Q_{\\boldsymbol{x}}^{(k)}$ is the input distribution of context $k$, $p_{\\boldsymbol{\\theta}}$ is the conditional distribution of $y$ given $\\boldsymbol{x}$ defined by the neural network with parameters $\\boldsymbol{\\theta}$, and $\\hat{\\boldsymbol{\\theta}}^{(k)}$ is the vector with parameter values after finishing training on context $k$."
      ],
      "metadata": {
        "id": "C_8xnOyI-D5_"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "The outer expectation in the above equation can be approximated using a random sample from the training set of context $k$. Because there is only a finite number of possible classes, the inner expectation can be calculated exactly for each sample:\n",
        "$$\n",
        "\\tilde{F}_{ii}^{(k)} = \\frac{1}{|S^{(k)}|} \\sum_{\\boldsymbol{x}\\in S^{(k)}} \\left( \\sum_{c=1}^{N_{\\text{classes}}} \\tilde{y}_c \\left( \\left. \\frac{\\delta\\log p_{\\boldsymbol{\\theta}}\\left(Y=c|\\boldsymbol{x}\\right)}{\\delta\\theta_i} \\right\\rvert_{\\boldsymbol{\\theta}=\\hat{\\boldsymbol{\\theta}}^{(k)}} \\right)^2 \\right)\n",
        "$$\n",
        "whereby $S^{(k)}$ is the random sample of training data of context $k$ and $\\tilde{y}_c = p_{\\hat{\\boldsymbol{\\theta}}^{(k)}}\\left(Y=c|\\boldsymbol{x}\\right)$ (i.e., the probability that input $\\boldsymbol{x}$ belongs to class $c$ as predicted by the model after finishing training on context $k$)."
      ],
      "metadata": {
        "id": "8MMh_ZZslXpd"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "**In the literature or on GitHub, often different, rather crude approximations of the Fisher Information are used ([example](https://github.com/ContinualAI/avalanche/blob/dbdc3804b11710b85b0e564b13034f487c7cf806/avalanche/training/plugins/ewc.py#L132-L186)). Be careful, as the quality of this approximation might influence the results!**"
      ],
      "metadata": {
        "id": "Uitfc1Hom_cC"
      }
    },
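    {
      "cell_type": "markdown",
      "source": [
        "To illustrate the difference (this sketch uses a tiny made-up linear softmax classifier, not the model from this tutorial): a common shortcut is the 'empirical Fisher', which only uses the squared gradient at a single observed label, whereas the expression above takes an expectation over all classes, weighted by the model's predicted probabilities.\n",
        "```python\n",
        "import torch\n",
        "import torch.nn.functional as F\n",
        "\n",
        "torch.manual_seed(0)\n",
        "W = torch.randn(3, 5, requires_grad=True)  # 3 classes, 5 input features\n",
        "x = torch.randn(1, 5)                      # one input sample\n",
        "\n",
        "# Exact inner expectation: weight squared gradients by predicted class probabilities\n",
        "probs = F.softmax(x @ W.t(), dim=1).detach()\n",
        "fisher_exact = torch.zeros(3, 5)\n",
        "for c in range(3):\n",
        "    W.grad = None\n",
        "    F.cross_entropy(x @ W.t(), torch.tensor([c])).backward()\n",
        "    fisher_exact += probs[0, c] * W.grad ** 2\n",
        "\n",
        "# 'Empirical Fisher': squared gradient at a single observed label only\n",
        "W.grad = None\n",
        "F.cross_entropy(x @ W.t(), torch.tensor([1])).backward()\n",
        "fisher_empirical = W.grad ** 2\n",
        "\n",
        "# The two estimates generally differ\n",
        "print(torch.allclose(fisher_exact, fisher_empirical))\n",
        "```"
      ],
      "metadata": {}
    },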
    {
      "cell_type": "code",
      "source": [
        "def estimate_fisher(model, dataset, n_samples, ewc_gamma=1.):\n",
        "    '''Estimate diagonal of Fisher Information matrix for [model] on [dataset] using [n_samples].'''\n",
        "\n",
        "    # Prepare <dict> to store estimated Fisher Information matrix\n",
        "    est_fisher_info = {}\n",
        "    for n, p in model.named_parameters():\n",
        "        n = n.replace('.', '__')\n",
        "        est_fisher_info[n] = p.detach().clone().zero_()\n",
        "\n",
        "    # Set model to evaluation mode\n",
        "    mode = model.training\n",
        "    model.eval()\n",
        "\n",
        "    # Create data-loader to give batches of size 1\n",
        "    data_loader = torch.utils.data.DataLoader(dataset, batch_size=1)\n",
        "\n",
        "    # Estimate the FI-matrix for [n_samples] batches of size 1\n",
        "    for index,(x,y) in enumerate(data_loader):\n",
        "        # break from for-loop if max number of samples has been reached\n",
        "        if n_samples is not None:\n",
        "            if index >= n_samples:\n",
        "                break\n",
        "        # run forward pass of model\n",
        "        output = model(x)\n",
        "        # calculate the FI-matrix\n",
        "        with torch.no_grad():\n",
        "            label_weights = F.softmax(output, dim=1)  #--> get weights, with no gradient tracked\n",
        "        # - loop over all classes\n",
        "        for label_index in range(output.shape[1]):\n",
        "            label = torch.LongTensor([label_index])\n",
        "            negloglikelihood = F.cross_entropy(output, label)\n",
        "            # Calculate gradient of negative loglikelihood for this class\n",
        "            model.zero_grad()\n",
        "            negloglikelihood.backward(retain_graph=True if (label_index+1)<output.shape[1] else False)\n",
        "            # Square gradients and keep running sum (using the weights)\n",
        "            for n, p in model.named_parameters():\n",
        "                n = n.replace('.', '__')\n",
        "                if p.grad is not None:\n",
        "                    est_fisher_info[n] += label_weights[0][label_index] * (p.grad.detach() ** 2)\n",
        "\n",
        "    # Normalize by sample size used for estimation\n",
        "    est_fisher_info = {n: p/index for n, p in est_fisher_info.items()}\n",
        "\n",
        "    # Store new values in the network\n",
        "    for n, p in model.named_parameters():\n",
        "        n = n.replace('.', '__')\n",
        "        # -mode (=MAP parameter estimate)\n",
        "        model.register_buffer('{}_EWC_param_values'.format(n,), p.detach().clone())\n",
        "        # -precision (approximated by diagonal Fisher Information matrix)\n",
        "        if hasattr(model, '{}_EWC_estimated_fisher'.format(n)):\n",
        "            existing_values = getattr(model, '{}_EWC_estimated_fisher'.format(n))\n",
        "            est_fisher_info[n] += ewc_gamma * existing_values\n",
        "        model.register_buffer('{}_EWC_estimated_fisher'.format(n), est_fisher_info[n])\n",
        "\n",
        "    # Set model back to its initial mode\n",
        "    model.train(mode=mode)"
      ],
      "metadata": {
        "id": "TdBqM3I3-PTc"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "We need to define a new training function for EWC, which adds the above regularization term to the loss."
      ],
      "metadata": {
        "id": "zr9yPQcQ9sNl"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# (only the steps that differ from the original `train`-function are commented)\n",
        "def train_ewc(model, dataset, iters, lr, batch_size, current_context, ewc_lambda=100.):\n",
        "    optimizer = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999))\n",
        "    model.train()\n",
        "    iters_left = 1\n",
        "    progress_bar = tqdm.tqdm(range(1, iters+1))\n",
        "\n",
        "    for batch_index in range(1, iters+1):\n",
        "        iters_left -= 1\n",
        "        if iters_left==0:\n",
        "            data_loader = iter(torch.utils.data.DataLoader(dataset, batch_size=batch_size,\n",
        "                                                           shuffle=True, drop_last=True))\n",
        "            iters_left = len(data_loader)\n",
        "        x, y = next(data_loader)\n",
        "        optimizer.zero_grad()\n",
        "        y_hat = model(x)\n",
        "        loss = torch.nn.functional.cross_entropy(input=y_hat, target=y, reduction='mean')\n",
        "\n",
        "        # Compute the EWC-regularization term, and add it to the loss (except if first context)\n",
        "        if current_context>1:\n",
        "            ewc_losses = []\n",
        "            for n, p in model.named_parameters():\n",
        "                # Retrieve stored mode (MAP estimate) and precision (Fisher Information matrix)\n",
        "                n = n.replace('.', '__')\n",
        "                mean = getattr(model, '{}_EWC_param_values'.format(n))\n",
        "                fisher = getattr(model, '{}_EWC_estimated_fisher'.format(n))\n",
        "                # Calculate weight regularization loss\n",
        "                ewc_losses.append((fisher * (p-mean)**2).sum())\n",
        "            ewc_loss = (1./2)*sum(ewc_losses)\n",
        "            total_loss = loss + ewc_lambda*ewc_loss\n",
        "        else:\n",
        "            total_loss = loss\n",
        "\n",
        "        accuracy = (y == y_hat.max(1)[1]).sum().item()*100 / x.size(0)\n",
        "        total_loss.backward()\n",
        "        optimizer.step()\n",
        "        progress_bar.set_description(\n",
        "        '<CLASSIFIER> | training loss: {loss:.3} | training accuracy: {prec:.3}% |'\n",
        "            .format(loss=total_loss.item(), prec=accuracy)\n",
        "        )\n",
        "        progress_bar.update(1)\n",
        "    progress_bar.close()"
      ],
      "metadata": {
        "id": "JjDl0lsg0Vs_"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Let's train the model on the second context using EWC."
      ],
      "metadata": {
        "id": "8ikYkBt-JMm6"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Estimate the FI-matrix (and store it as attribute in the network)\n",
        "estimate_fisher(model_ewc, train_datasets[0], n_samples=200)"
      ],
      "metadata": {
        "id": "mvW1FKRNJU2F"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Train on the second context using EWC parameter regularization\n",
        "ewc_lambda = 100   #--> this is a \"continual learning hyperparameter\"; setting these is a delicate\n",
        "                   #    business. Here we ignore that and just use one that gives good performance.\n",
        "train_ewc(model_ewc, train_datasets[1], iters=iters, lr=lr, batch_size=batch_size,\n",
        "          current_context=2, ewc_lambda=ewc_lambda)"
      ],
      "metadata": {
        "id": "9UjjXJtLJUg7"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "... and evaluate its performance."
      ],
      "metadata": {
        "id": "60zj34suJVYJ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Evaluate the model\n",
        "print(\"\\n Accuracy (in %) of the model on test-set of:\")\n",
        "ewc_accs = []\n",
        "for i in range(contexts):\n",
        "    acc = test_acc(model_ewc, test_datasets[i], test_size=None)\n",
        "    print(\" - Context {}: {:.1f}\".format(i+1, acc))\n",
        "    ewc_accs.append(acc)"
      ],
      "metadata": {
        "id": "jiNB3edeJZ-L"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "That worked well! The performance on the first context barely dropped while the network learned the second context."
      ],
      "metadata": {
        "id": "vSfLqS87Vm5e"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Experience Replay\n",
        "For comparison, now let's train another model copy on the second context using 'experience replay'.\n",
        "\n",
        "The typical approach is to store a relatively small number of samples from previous contexts, and to revisit those when training on a new context. We thus first need to populate a memory buffer with some samples from the first context. We select these samples using class-balanced random sampling from the training set (other approaches are possible here; how to optimally select the samples to store in the memory buffer is an active field of research)."
      ],
      "metadata": {
        "id": "bLC3-drp0TIk"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@title Helper dataset classes for constructing memory buffer\n",
        "class SubDataset(torch.utils.data.Dataset):\n",
        "    '''To sub-sample a dataset, taking only those samples with label in [sub_labels].\n",
        "\n",
        "    After this selection of samples has been made, it is possible to transform the target-labels,\n",
        "    which can be useful when doing continual learning with fixed number of output units.'''\n",
        "\n",
        "    def __init__(self, original_dataset, sub_labels, target_transform=None):\n",
        "        super().__init__()\n",
        "        self.dataset = original_dataset\n",
        "        self.sub_indeces = []\n",
        "        for index in range(len(self.dataset)):\n",
        "            if hasattr(original_dataset, \"targets\"):\n",
        "                if self.dataset.target_transform is None:\n",
        "                    label = self.dataset.targets[index]\n",
        "                else:\n",
        "                    label = self.dataset.target_transform(self.dataset.targets[index])\n",
        "            else:\n",
        "                label = self.dataset[index][1]\n",
        "            if label in sub_labels:\n",
        "                self.sub_indeces.append(index)\n",
        "        self.target_transform = target_transform\n",
        "\n",
        "    def __len__(self):\n",
        "        return len(self.sub_indeces)\n",
        "\n",
        "    def __getitem__(self, index):\n",
        "        sample = self.dataset[self.sub_indeces[index]]\n",
        "        if self.target_transform:\n",
        "            target = self.target_transform(sample[1])\n",
        "            sample = (sample[0], target)\n",
        "        return sample\n",
        "\n",
        "\n",
        "class MemorySetDataset(torch.utils.data.Dataset):\n",
        "    '''Create dataset from list of <np.arrays> with shape (N, C, H, W) (i.e., with N images each).\n",
        "\n",
        "    The images at the i-th entry of [memory_sets] belong to class [i],\n",
        "    unless a [target_transform] is specified\n",
        "    '''\n",
        "\n",
        "    def __init__(self, memory_sets, target_transform=None):\n",
        "        super().__init__()\n",
        "        self.memory_sets = memory_sets\n",
        "        self.target_transform = target_transform\n",
        "\n",
        "    def __len__(self):\n",
        "        total = 0\n",
        "        for class_id in range(len(self.memory_sets)):\n",
        "            total += len(self.memory_sets[class_id])\n",
        "        return total\n",
        "\n",
        "    def __getitem__(self, index):\n",
        "        total = 0\n",
        "        for class_id in range(len(self.memory_sets)):\n",
        "            examples_in_this_class = len(self.memory_sets[class_id])\n",
        "            if index < (total + examples_in_this_class):\n",
        "                class_id_to_return = class_id if self.target_transform is None else self.target_transform(class_id)\n",
        "                example_id = index - total\n",
        "                break\n",
        "            else:\n",
        "                total += examples_in_this_class\n",
        "        image = torch.from_numpy(self.memory_sets[class_id][example_id])\n",
        "        return (image, class_id_to_return)"
      ],
      "metadata": {
        "id": "06uwJLSlYfYs",
        "cellView": "form"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Fill the memory buffer using class-balanced random sampling\n",
        "def fill_memory_buffer(memory_sets, dataset, buffer_size_per_class, class_indeces):\n",
        "    '''This function is rather slow and can be optimized.'''\n",
        "    for class_id in class_indeces:\n",
        "        # Create dataset with only instances of one class\n",
        "        class_dataset = SubDataset(original_dataset=dataset, sub_labels=[class_id])\n",
        "\n",
        "        # Randomly select which indeces to store in the buffer\n",
        "        n_total = len(class_dataset)\n",
        "        indeces_selected = np.random.choice(n_total, size=min(buffer_size_per_class, n_total),\n",
        "                                            replace=False)\n",
        "\n",
        "        # Select those indeces\n",
        "        memory_set = []\n",
        "        for k in indeces_selected:\n",
        "            memory_set.append(class_dataset[k][0].numpy())\n",
        "\n",
        "        # Add this [memory_set] as a [n]x[ich]x[isz]x[isz] array to the list of [memory_sets]\n",
        "        memory_sets.append(np.array(memory_set))\n",
        "\n",
        "    return memory_sets"
      ],
      "metadata": {
        "id": "Ll7YdfMo0VSl"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "buffer_size_per_class = 20\n",
        "memory_sets = []\n",
        "# The next command is unnecessarily slow, apologies! Bonus question: optimize this implementation :)\n",
        "memory_sets = fill_memory_buffer(memory_sets, train_datasets[0],\n",
        "                                 buffer_size_per_class=buffer_size_per_class,\n",
        "                                 class_indeces=list(range(10)))\n",
        "buffer_dataset = MemorySetDataset(memory_sets)"
      ],
      "metadata": {
        "id": "0Bbg05iTOye4"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now we also need to define a new training function that revisits the data from the memory buffer while training on the data from the new context."
      ],
      "metadata": {
        "id": "XA5cDdKSO_Pq"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# (only the steps that differ from the original `train`-function are commented)\n",
        "def train_replay(model, dataset, iters, lr, batch_size, current_context, buffer_dataset=None):\n",
        "    optimizer = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999))\n",
        "    model.train()\n",
        "    iters_left = 1\n",
        "    iters_left_replay = 1\n",
        "    progress_bar = tqdm.tqdm(range(1, iters+1))\n",
        "\n",
        "    for batch_index in range(1, iters+1):\n",
        "        optimizer.zero_grad()\n",
        "\n",
        "        # Data from current context\n",
        "        iters_left -= 1\n",
        "        if iters_left==0:\n",
        "            data_loader = iter(torch.utils.data.DataLoader(dataset, batch_size=batch_size,\n",
        "                                                           shuffle=True, drop_last=True))\n",
        "            iters_left = len(data_loader)\n",
        "        x, y = next(data_loader)\n",
        "        y_hat = model(x)\n",
        "        loss = torch.nn.functional.cross_entropy(input=y_hat, target=y, reduction='mean')\n",
        "        accuracy = (y == y_hat.max(1)[1]).sum().item()*100 / x.size(0)\n",
        "\n",
        "        # Replay data from memory buffer\n",
        "        if buffer_dataset is not None:\n",
        "          iters_left_replay -= 1\n",
        "          if iters_left_replay==0:\n",
        "              batch_size_to_use = min(batch_size, len(buffer_dataset))\n",
        "              data_loader_replay = iter(torch.utils.data.DataLoader(buffer_dataset,\n",
        "                                                                    batch_size_to_use, shuffle=True,\n",
        "                                                                    drop_last=True))\n",
        "              iters_left_replay = len(data_loader_replay)\n",
        "          x_, y_ = next(data_loader_replay)\n",
        "          y_hat_ = model(x_)\n",
        "          loss_replay = torch.nn.functional.cross_entropy(input=y_hat_, target=y_, reduction='mean')\n",
        "\n",
        "        # Combine both losses to approximate the joint loss over both contexts\n",
        "        # (i.e., the loss on the replayed data has weight proportional to number of contexts so far)\n",
        "        if buffer_dataset is not None:\n",
        "            rnt = 1./current_context\n",
        "            total_loss = rnt*loss + (1-rnt)*loss_replay\n",
        "        else:\n",
        "            total_loss = loss\n",
        "\n",
        "        total_loss.backward()\n",
        "        optimizer.step()\n",
        "        progress_bar.set_description(\n",
        "        '<CLASSIFIER> | training loss: {loss:.3} | training accuracy: {prec:.3}% |'\n",
        "            .format(loss=total_loss.item(), prec=accuracy)\n",
        "        )\n",
        "        progress_bar.update(1)\n",
        "    progress_bar.close()"
      ],
      "metadata": {
        "id": "FVfyNqsMPIf4"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Train on the second context using experience replay\n",
        "train_replay(model_replay, train_datasets[1], iters=iters, lr=lr, batch_size=batch_size,\n",
        "             current_context=2, buffer_dataset=buffer_dataset)"
      ],
      "metadata": {
        "id": "XU8LLSWFUokA"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "... and evaluate its performance."
      ],
      "metadata": {
        "id": "H35hMk3zVFs_"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Evaluate the model\n",
        "print(\"\\n Accuracy (in %) of the model on test-set of:\")\n",
        "replay_accs = []\n",
        "for i in range(contexts):\n",
        "    acc = test_acc(model_replay, test_datasets[i], test_size=None)\n",
        "    print(\" - Context {}: {:.1f}\".format(i+1, acc))\n",
        "    replay_accs.append(acc)"
      ],
      "metadata": {
        "id": "hKuT_aFDVEn0"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "That also worked!"
      ],
      "metadata": {
        "id": "t_ZmSg9hWJiV"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Compare\n",
        "Let's compare the performance of naive fine-tuning, EWC and experience replay."
      ],
      "metadata": {
        "id": "Rp4rvzgZVLBT"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "figure, axis = plt.subplots(1, 3, figsize=(12, 4))\n",
        "\n",
        "title='Fine-tuning'\n",
        "multi_context_barplot(axis[0], context2_accs, title)\n",
        "\n",
        "title='EWC \\n(lambda: {})'.format(ewc_lambda)\n",
        "multi_context_barplot(axis[1], ewc_accs, title)\n",
        "\n",
        "title='Replay \\n(buffer: {} samples per class)'.format(buffer_size_per_class)\n",
        "multi_context_barplot(axis[2], replay_accs, title)"
      ],
      "metadata": {
        "id": "YWWDQqiFVSc0"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### **ASSIGNMENT**: Combine EWC and replay\n",
        "Train another model copy on the second context using *both* EWC and experience replay."
      ],
      "metadata": {
        "id": "kuzoBMh8fh2i"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "model_ewc_replay = copy.deepcopy(model_after_context1)"
      ],
      "metadata": {
        "id": "E-vo75W6f1tU"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Start by defining the training function that can be used to train the model on the new context using both EWC and experience replay."
      ],
      "metadata": {
        "id": "CxUvSooRgxiZ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def train_ewc_replay(model, dataset, buffer_dataset, iters, lr, batch_size, ewc_lambda):\n",
        "    pass\n",
        "    # TO BE COMPLETED (tip: use the above training functions as example / starting point)"
      ],
      "metadata": {
        "id": "5r96gtzkgWzZ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title Possible Answer\n",
        "def train_ewc_replay(model, dataset, iters, lr, batch_size, current_context,\n",
        "                     ewc_lambda=100., buffer_dataset=None):\n",
        "    optimizer = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999))\n",
        "    model.train()\n",
        "    iters_left = 1\n",
        "    iters_left_replay = 1\n",
        "    progress_bar = tqdm.tqdm(range(1, iters+1))\n",
        "\n",
        "    for batch_index in range(1, iters+1):\n",
        "        optimizer.zero_grad()\n",
        "\n",
        "        # Data from current context\n",
        "        iters_left -= 1\n",
        "        if iters_left==0:\n",
        "            data_loader = iter(torch.utils.data.DataLoader(dataset, batch_size=batch_size,\n",
        "                                                           shuffle=True, drop_last=True))\n",
        "            iters_left = len(data_loader)\n",
        "        x, y = next(data_loader)\n",
        "        y_hat = model(x)\n",
        "        loss = torch.nn.functional.cross_entropy(input=y_hat, target=y, reduction='mean')\n",
        "        accuracy = (y == y_hat.max(1)[1]).sum().item()*100 / x.size(0)\n",
        "\n",
        "        # Replay data from memory buffer\n",
        "        if buffer_dataset is not None:\n",
        "            iters_left_replay -= 1\n",
        "            if iters_left_replay==0:\n",
        "                batch_size_to_use = min(batch_size, len(buffer_dataset))\n",
        "                data_loader_replay = iter(torch.utils.data.DataLoader(buffer_dataset,\n",
        "                                                                      batch_size_to_use,\n",
        "                                                                      shuffle=True,\n",
        "                                                                      drop_last=True))\n",
        "                iters_left_replay = len(data_loader_replay)\n",
        "            x_, y_ = next(data_loader_replay)\n",
        "            y_hat_ = model(x_)\n",
        "            loss_replay = torch.nn.functional.cross_entropy(input=y_hat_, target=y_,\n",
        "                                                            reduction='mean')\n",
        "\n",
        "        # Compute the EWC-regularization term, and add it to the loss\n",
        "        if current_context>1:\n",
        "            ewc_losses = []\n",
        "            for n, p in model.named_parameters():\n",
        "                # Retrieve stored mode (MAP estimate) and precision (Fisher Information matrix)\n",
        "                n = n.replace('.', '__')\n",
        "                mean = getattr(model, '{}_EWC_param_values'.format(n))\n",
        "                fisher = getattr(model, '{}_EWC_estimated_fisher'.format(n))\n",
        "                # Calculate weight regularization loss\n",
        "                ewc_losses.append((fisher * (p-mean)**2).sum())\n",
        "            ewc_loss = (1./2)*sum(ewc_losses)\n",
        "        else:\n",
        "            ewc_loss = 0.\n",
        "\n",
        "        # Combine all three losses\n",
        "        if buffer_dataset is not None:\n",
        "            rnt = 1./current_context\n",
        "            total_loss = rnt*loss + (1-rnt)*loss_replay + ewc_lambda*ewc_loss\n",
        "        else:\n",
        "            total_loss = loss + ewc_lambda*ewc_loss\n",
        "\n",
        "        total_loss.backward()\n",
        "        optimizer.step()\n",
        "        progress_bar.set_description(\n",
        "        '<CLASSIFIER> | training loss: {loss:.3} | training accuracy: {prec:.3}% |'\n",
        "            .format(loss=total_loss.item(), prec=accuracy)\n",
        "        )\n",
        "        progress_bar.update(1)\n",
        "    progress_bar.close()"
      ],
      "metadata": {
        "id": "qON7O-O1ozuM",
        "cellView": "form"
      },
      "execution_count": null,
      "outputs": []
    },
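    {
      "cell_type": "markdown",
      "source": [
        "With replay, the total loss minimized in the possible answer above combines three terms:\n",
        "\n",
        "$$\\mathcal{L}_{\\text{total}} = \\frac{1}{K}\\,\\mathcal{L}_{\\text{current}} + \\Big(1-\\frac{1}{K}\\Big)\\mathcal{L}_{\\text{replay}} + \\frac{\\lambda}{2}\\sum_i F_i\\,(\\theta_i - \\theta_i^*)^2$$\n",
        "\n",
        "where $K$ is `current_context`, $\\lambda$ is `ewc_lambda`, $F_i$ is the estimated Fisher Information for parameter $\\theta_i$, and $\\theta_i^*$ is the parameter value stored after training on the previous context."
      ],
      "metadata": {
        "id": "ewcReplayLossNote"
      }
    },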
    {
      "cell_type": "markdown",
      "source": [
        "Now test your function by training the model, ..."
      ],
      "metadata": {
        "id": "1T2XW32xhAl3"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Select the hyperparameter for EWC\n",
        "ewc_lambda_with_replay = 100 # YOU CAN EXPLORE OTHER VALUES\n",
        "# (if you want to do a new try, first 'reset' [model_ewc_replay] by running the command\n",
        "#  `model_ewc_replay = copy.deepcopy(model_after_context1)` at the top of the assignment)\n",
        "\n",
        "# Compute the Fisher Information matrix (and store it as attribute in the network)\n",
        "estimate_fisher(model_ewc_replay, train_datasets[0], n_samples=200)\n",
        "\n",
        "# Train on the second context using EWC and experience replay\n",
        "train_ewc_replay(model_ewc_replay, train_datasets[1], iters=iters, lr=lr, batch_size=batch_size,\n",
        "                 current_context=2, ewc_lambda=ewc_lambda_with_replay,\n",
        "                 buffer_dataset=buffer_dataset)"
      ],
      "metadata": {
        "id": "bVKKz3xdhBa9"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "... evaluating it, ..."
      ],
      "metadata": {
        "id": "xCAXSuUGhu4B"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "print(\"\\n Accuracy (in %) of the model on test-set of:\")\n",
        "ewc_replay_accs = []\n",
        "for i in range(contexts):\n",
        "    acc = test_acc(model_ewc_replay, test_datasets[i], test_size=None)\n",
        "    print(\" - Context {}: {:.1f}\".format(i+1, acc))\n",
        "    ewc_replay_accs.append(acc)"
      ],
      "metadata": {
        "id": "Damfry4kh7d7"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "... and comparing its performance with the performance of the individual methods."
      ],
      "metadata": {
        "id": "E5zUtbJ3h1EG"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "figure, axis = plt.subplots(1, 4, figsize=(16, 4))\n",
        "\n",
        "title='Fine-tuning'\n",
        "multi_context_barplot(axis[0], context2_accs, title)\n",
        "\n",
        "title='EWC \\n(lambda: {})'.format(ewc_lambda)\n",
        "multi_context_barplot(axis[1], ewc_accs, title)\n",
        "\n",
        "title='Replay \\n(buffer: {} samples per class)'.format(buffer_size_per_class)\n",
        "multi_context_barplot(axis[2], replay_accs, title)\n",
        "\n",
        "title='EWC + replay \\n(lambda: {} - buffer: {} per class)'.format(ewc_lambda_with_replay,\n",
        "                                                                          buffer_size_per_class)\n",
        "multi_context_barplot(axis[3], ewc_replay_accs, title)"
      ],
      "metadata": {
        "id": "QWR5Nw4bh7Im"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Part 3: Class-incremental learning - Split MNIST\n",
        "We saw that on Permuted MNIST with two contexts, both EWC and experience replay (with a relatively small buffer of 20 samples per class) are able to succesfully prevent a large part of the catastrophic forgetting.\n",
        "\n",
        "Now let's look at a different type of continual learning problem. As discussed in the lecture, when it comes to supervised continual learning, three fundamental types - or 'scenarios' - can be distinguished: **task-incremental learning**, **domain-incremental learning** and **class-incremental learning**."
      ],
      "metadata": {
        "id": "tW8VSZ_90BDN"
      }
    },
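    {
      "cell_type": "markdown",
      "source": [
        "To make the distinction concrete, here is a small toy sketch (the helper `target_label` below is hypothetical, not part of the tutorial's codebase). Assuming contexts contain consecutive classes, as in Split MNIST below (context 1: classes {0,1}, context 2: classes {2,3}, ...), a task- or domain-incremental model predicts the *within-context* label, whereas a class-incremental model must predict the *global* class label."
      ],
      "metadata": {
        "id": "scenarioSketchMd"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Hypothetical sketch: the label to predict under each scenario, assuming\n",
        "# contexts contain consecutive classes (e.g. context 1: {0,1}, context 2: {2,3}).\n",
        "def target_label(global_label, scenario, classes_per_context=2):\n",
        "    if scenario == 'class-incremental':\n",
        "        # the global class label must be predicted (context identity inferred)\n",
        "        return global_label\n",
        "    # 'task-incremental' (context given) or 'domain-incremental' (context not\n",
        "    # needed): the within-context label suffices\n",
        "    return global_label % classes_per_context\n",
        "\n",
        "print(target_label(3, 'class-incremental'))   # -> 3\n",
        "print(target_label(3, 'domain-incremental'))  # -> 1"
      ],
      "metadata": {
        "id": "scenarioSketchCode"
      },
      "execution_count": null,
      "outputs": []
    },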
    {
      "cell_type": "markdown",
      "source": [
        "### ASSIGNMENT: What scenario was used for Permuted MNIST?\n",
        "\n",
        "What type of 'scenario' was the permuted MNIST problem that we explored above? Was it task-incremental, domain-incremental or class-incremental? Try to motivate your answer."
      ],
      "metadata": {
        "id": "R_oxTtVTrM5g"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@title Possible Answer\n",
        "\n",
        "'''\n",
        "The Permuted MNIST problem consisted of two contexts: normal MNIST (context 1) and MNIST\n",
        "with permuted input images (context 2).\n",
        "\n",
        "After learning both contexts, when the model was evaluated, the model was not told to\n",
        "which context an image belongs (i.e., the model was not told whether the image to be\n",
        "classified was permuted or not), but the model also did not need to identify to\n",
        "which context an image belongs (i.e., the model did not need to predict whether\n",
        "the image to be classified had permuted pixels or not; it only needed to predict\n",
        "the original digit displayed in the image).\n",
        "This thus means that the above Permuted MNIST problem was an example of a domain-incremental\n",
        "learning problem.\n",
        "\n",
        "Another way to motivate that this problem is an example of domain-incremental\n",
        "learning, is to say that in both context 1 (normal MNIST) and context 2 (MNIST with\n",
        "permuted input images), the 'type of problem' is the same (i.e., to identify the\n",
        "digit displayed in the original image), but the 'domain' or 'context' is changing (i.e.,\n",
        "the order/permutation in which the image pixels are presented).\n",
        "''';"
      ],
      "metadata": {
        "id": "CIvJrwERry3C",
        "cellView": "form"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now let's explore a **class-incremental learning** problem. For this we will no longer use Permuted MNIST (because it is a bit unintuitive to perform Permuted MNIST according to the class-incremental learning scenario), but we will use Split MNIST, which was introduced in the lecture."
      ],
      "metadata": {
        "id": "Hbp9I_kpstgv"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Set up the benchmark (Split MNIST)\n",
        "We will split the MNIST dataset up in five contexts with two different classes per context."
      ],
      "metadata": {
        "id": "Ho1dVPxFuKPj"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "contexts = 5\n",
        "classes_per_context = 2\n",
        "# Generate labels-per-context\n",
        "labels_per_context = [\n",
        "    list(np.array(range(classes_per_context))+classes_per_context*context_id) for context_id in range(contexts)\n",
        "]\n",
        "# Split the train and test datasets up into sub-datasets, one for each context\n",
        "train_datasets = []\n",
        "test_datasets = []\n",
        "for labels in labels_per_context:\n",
        "    train_datasets.append(SubDataset(MNIST_trainset, labels))\n",
        "    test_datasets.append(SubDataset(MNIST_testset, labels))"
      ],
      "metadata": {
        "id": "aBAcIMoG-voX"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Visualize the contexts\n",
        "figure, axis = plt.subplots(1, contexts, figsize=(3*contexts, 4))\n",
        "\n",
        "for context_id in range(len(train_datasets)):\n",
        "    plot_examples(axis[context_id], train_datasets[context_id], context_id=context_id)"
      ],
      "metadata": {
        "id": "kVr-Qc3LaJAw"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Set up the model\n",
        "We use the same network architecture as before."
      ],
      "metadata": {
        "id": "VLBAv9CyFRzc"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Specify the architectural layout of the network to use\n",
        "fc_lay = 4        #--> number of fully-connected layers\n",
        "fc_units = 40     #--> number of units in each hidden layer\n",
        "fc_nl = \"relu\"    #--> what non-linearity to use?"
      ],
      "metadata": {
        "id": "JBmaNwqAlXQ6"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Define the model\n",
        "model = Classifier(image_size=config['size'], image_channels=config['channels'],\n",
        "                   output_units=config['classes'],\n",
        "                   fc_layers=fc_lay, fc_units=fc_units, fc_nl=fc_nl)"
      ],
      "metadata": {
        "id": "vmCYbrRDliwp"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Prepare for evaluation throughout training\n",
        "Below we will use the same continual learning strategies as before (fine-tuning, EWC and replay) to train models on the five contexts of Split MNIST. When we do this, we want to keep track of the performance of the model while it is sequentially trained on these different contexts. For that we will define some functions here.\n",
        "\n",
        "As is common in the continual learning literature, we will evaluate the performance of the model only after finishing training on each new task. (But see [this paper](https://openreview.net/forum?id=Zy350cRstc6) for an interesting phenomenon that can be observed if we would evaluate the model on previous tasks after each training iteration on the new task.)"
      ],
      "metadata": {
        "id": "1zoLaBgvutQ9"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Define a function to initiate a dict-object in which performance throughout training is logged.\n",
        "def initiate_result_dict(n_contexts):\n",
        "    '''Initiate <dict> with accuracy-measures to keep track of.'''\n",
        "    result_dict = {}\n",
        "    result_dict[\"acc per context\"] = {}\n",
        "    for i in range(n_contexts):\n",
        "        result_dict[\"acc per context\"][\"context {}\".format(i+1)] = []\n",
        "    result_dict[\"average_contexts_so_far\"] = []  # average accuracy over all contexts so far\n",
        "    result_dict[\"average_all_contexts\"] = []     # average accuracy over all contexts\n",
        "    result_dict[\"context\"] = []                  # number of contexts so far\n",
        "    return result_dict"
      ],
      "metadata": {
        "id": "S5KOtI2Axyn8"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "def test_all(model, datasets, current_context, test_size=None, result_dict=None, verbose=False):\n",
        "    '''Evaluate accuracy of a classifier (=[model]) on all contexts in [datasets].'''\n",
        "\n",
        "    n_contexts = len(datasets)\n",
        "\n",
        "    # Evaluate accuracy of model on all contexts\n",
        "    precs = []\n",
        "    for i in range(n_contexts):\n",
        "        precs.append(test_acc(model, datasets[i], test_size=test_size))\n",
        "\n",
        "    # Compute average accuracy both for all contexts seen so far, and for all contexts\n",
        "    ave_so_far = sum([precs[context_id] for context_id in range(current_context)]) / current_context\n",
        "    ave_all = sum([precs[context_id] for context_id in range(n_contexts)]) / n_contexts\n",
        "\n",
        "    # Print results on screen\n",
        "    if verbose:\n",
        "        print(' => ave accuracy (contexts so far): {:.3f}'.format(ave_so_far))\n",
        "        print(' => ave accuracy (all contexts):    {:.3f}'.format(ave_all))\n",
        "\n",
        "    # Add results to [result_dict]\n",
        "    if result_dict is not None:\n",
        "        for i in range(n_contexts):\n",
        "            result_dict['acc per context']['context {}'.format(i+1)].append(precs[i])\n",
        "        result_dict['average_all_contexts'].append(ave_all)\n",
        "        result_dict['average_contexts_so_far'].append(ave_so_far)\n",
        "        result_dict['context'].append(current_context)"
      ],
      "metadata": {
        "id": "N_j_o9RCuyvB"
      },
      "execution_count": null,
      "outputs": []
    },
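    {
      "cell_type": "markdown",
      "source": [
        "The quantity logged in `average_contexts_so_far` (computed in `test_all` as `ave_so_far`) corresponds to the average accuracy commonly reported in the continual learning literature: after finishing training on context $k$,\n",
        "\n",
        "$$A_k = \\frac{1}{k}\\sum_{i=1}^{k} a_{k,i}$$\n",
        "\n",
        "where $a_{k,i}$ is the test accuracy on context $i$ after training on context $k$."
      ],
      "metadata": {
        "id": "avgAccNoteMd"
      }
    },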
    {
      "cell_type": "markdown",
      "source": [
        "### Compare fine-tuning, EWC and experience replay on Split MNIST"
      ],
      "metadata": {
        "id": "MooCLNx7luDs"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Fine-tuning"
      ],
      "metadata": {
        "id": "qRNzQe6vt0mf"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Create a copy from the base-model\n",
        "model_finetune = copy.deepcopy(model)"
      ],
      "metadata": {
        "id": "s78p_YQQtX-M"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Initiate a `results_dict` to keep track of performance throughout the continual training.\n",
        "result_dict_finetune = initiate_result_dict(contexts)"
      ],
      "metadata": {
        "id": "L0FKJoYRy7lp"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "For fine-tuning, we can simply re-use the `train`-function we had defined above to train the model on a given dataset in \"the standard way\" (i.e., without using any specific continual learning strategy)."
      ],
      "metadata": {
        "id": "WrMqMENnuHRg"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Sequentially train the model on all contexts using finetuning\n",
        "for context_id in range(contexts):\n",
        "    # train the model on this context\n",
        "    train(model_finetune, dataset=train_datasets[context_id], iters=iters, lr=lr,\n",
        "          batch_size=batch_size)\n",
        "    # evaluate the performance of the model after training on this context\n",
        "    test_all(model_finetune, test_datasets, context_id+1, test_size=None,\n",
        "             result_dict=result_dict_finetune, verbose=True)"
      ],
      "metadata": {
        "id": "g0y65f07tVie"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title Visualization function\n",
        "def plot_lines(list_with_lines, x_axes=None, line_names=None, colors=None, title=None,\n",
        "               title_top=None, xlabel=None, ylabel=None, ylim=None, figsize=None, list_with_errors=None, errors=\"shaded\",\n",
        "               x_log=False, with_dots=False, linestyle='solid', h_line=None, h_label=None, h_error=None,\n",
        "               h_lines=None, h_colors=None, h_labels=None, h_errors=None):\n",
        "    '''Generates a figure containing multiple lines in one plot.\n",
        "\n",
        "    :param list_with_lines: <list> of all lines to plot (with each line being a <list> as well)\n",
        "    :param x_axes:          <list> containing the values for the x-axis\n",
        "    :param line_names:      <list> containing the names of each line\n",
        "    :param colors:          <list> containing the colors of each line\n",
        "    :param title:           <str> title of plot\n",
        "    :param title_top:       <str> text to appear on top of the title\n",
        "    :return: f:             <figure>\n",
        "    '''\n",
        "\n",
        "    # if needed, generate default x-axis\n",
        "    if x_axes == None:\n",
        "        n_obs = len(list_with_lines[0])\n",
        "        x_axes = list(range(n_obs))\n",
        "\n",
        "    # if needed, generate default line-names\n",
        "    if line_names == None:\n",
        "        n_lines = len(list_with_lines)\n",
        "        line_names = [\"line \" + str(line_id) for line_id in range(n_lines)]\n",
        "\n",
        "    # make plot\n",
        "    size = (12,7) if figsize is None else figsize\n",
        "    f, axarr = plt.subplots(1, 1, figsize=size)\n",
        "\n",
        "    # add error-lines / shaded areas\n",
        "    if list_with_errors is not None:\n",
        "        for line_id, name in enumerate(line_names):\n",
        "            if errors==\"shaded\":\n",
        "                axarr.fill_between(x_axes, list(np.array(list_with_lines[line_id]) + np.array(list_with_errors[line_id])),\n",
        "                                   list(np.array(list_with_lines[line_id]) - np.array(list_with_errors[line_id])),\n",
        "                                   color=None if (colors is None) else colors[line_id], alpha=0.25)\n",
        "            else:\n",
        "                axarr.plot(x_axes, list(np.array(list_with_lines[line_id]) + np.array(list_with_errors[line_id])), label=None,\n",
        "                           color=None if (colors is None) else colors[line_id], linewidth=1, linestyle='dashed')\n",
        "                axarr.plot(x_axes, list(np.array(list_with_lines[line_id]) - np.array(list_with_errors[line_id])), label=None,\n",
        "                           color=None if (colors is None) else colors[line_id], linewidth=1, linestyle='dashed')\n",
        "\n",
        "    # mean lines\n",
        "    for line_id, name in enumerate(line_names):\n",
        "        axarr.plot(x_axes, list_with_lines[line_id], label=name,\n",
        "                   color=None if (colors is None) else colors[line_id],\n",
        "                   linewidth=4, marker='o' if with_dots else None, linestyle=linestyle if type(linestyle)==str else linestyle[line_id])\n",
        "\n",
        "    # add horizontal line\n",
        "    if h_line is not None:\n",
        "        axarr.axhline(y=h_line, label=h_label, color=\"grey\")\n",
        "        if h_error is not None:\n",
        "            if errors == \"shaded\":\n",
        "                axarr.fill_between([x_axes[0], x_axes[-1]],\n",
        "                                   [h_line + h_error, h_line + h_error], [h_line - h_error, h_line - h_error],\n",
        "                                   color=\"grey\", alpha=0.25)\n",
        "            else:\n",
        "                axarr.axhline(y=h_line + h_error, label=None, color=\"grey\", linewidth=1, linestyle='dashed')\n",
        "                axarr.axhline(y=h_line - h_error, label=None, color=\"grey\", linewidth=1, linestyle='dashed')\n",
        "\n",
        "    # add horizontal lines\n",
        "    if h_lines is not None:\n",
        "        h_colors = colors if h_colors is None else h_colors\n",
        "        for line_id, new_h_line in enumerate(h_lines):\n",
        "            axarr.axhline(y=new_h_line, label=None if h_labels is None else h_labels[line_id],\n",
        "                          color=None if (h_colors is None) else h_colors[line_id])\n",
        "            if h_errors is not None:\n",
        "                if errors == \"shaded\":\n",
        "                    axarr.fill_between([x_axes[0], x_axes[-1]],\n",
        "                                       [new_h_line + h_errors[line_id], new_h_line+h_errors[line_id]],\n",
        "                                       [new_h_line - h_errors[line_id], new_h_line - h_errors[line_id]],\n",
        "                                       color=None if (h_colors is None) else h_colors[line_id], alpha=0.25)\n",
        "                else:\n",
        "                    axarr.axhline(y=new_h_line+h_errors[line_id], label=None,\n",
        "                                  color=None if (h_colors is None) else h_colors[line_id], linewidth=1,\n",
        "                                  linestyle='dashed')\n",
        "                    axarr.axhline(y=new_h_line-h_errors[line_id], label=None,\n",
        "                                  color=None if (h_colors is None) else h_colors[line_id], linewidth=1,\n",
        "                                  linestyle='dashed')\n",
        "\n",
        "    # finish layout\n",
        "    # -set y-axis\n",
        "    if ylim is not None:\n",
        "        axarr.set_ylim(ylim)\n",
        "    # -add axis-labels\n",
        "    if xlabel is not None:\n",
        "        axarr.set_xlabel(xlabel)\n",
        "    if ylabel is not None:\n",
        "        axarr.set_ylabel(ylabel)\n",
        "    # -add title(s)\n",
        "    if title is not None:\n",
        "        axarr.set_title(title)\n",
        "    if title_top is not None:\n",
        "        f.suptitle(title_top)\n",
        "    # -add legend\n",
        "    if line_names is not None:\n",
        "        axarr.legend()\n",
        "    # -set x-axis to log-scale\n",
        "    if x_log:\n",
        "        axarr.set_xscale('log')\n",
        "\n",
        "    # return the figure\n",
        "    return f"
      ],
      "metadata": {
        "cellView": "form",
        "id": "tK2ztl4O2SeF"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Visualize performance on each context throughout the continual training\n",
        "plot_list = []\n",
        "for i in range(contexts):\n",
        "    plot_list.append(result_dict_finetune[\"acc per context\"][\"context {}\".format(i + 1)])\n",
        "figure = plot_lines(\n",
        "    plot_list, x_axes=result_dict_finetune[\"context\"],\n",
        "    line_names=['context {}'.format(i + 1) for i in range(contexts)],\n",
        "    title=\"Fine-tuning\", ylabel=\"Test Accuracy (%)\",\n",
        "    xlabel=\"Number of contexts trained so far\", figsize=(10,5),\n",
        ")"
      ],
      "metadata": {
        "id": "izJVfClC0Nwe"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "That doesn't look great! The performance on data from each context drops to zero as soon as the next context has been learned.\n",
        "\n",
        "Can this be fixed with EWC or replay?\n",
        "\n"
      ],
      "metadata": {
        "id": "XoOvQDxU0H7V"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### EWC"
      ],
      "metadata": {
        "id": "qZpWQSzct6-j"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Create a copy from the base-model\n",
        "model_ewc = copy.deepcopy(model)"
      ],
      "metadata": {
        "id": "A0zJrd324EYi"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Initiate a `results_dict` to keep track of performance throughout the continual training.\n",
        "result_dict_ewc = initiate_result_dict(contexts)"
      ],
      "metadata": {
        "id": "6b_toOZG4HK2"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Select a hyperparameter for EWC\n",
        "ewc_lambda = 100"
      ],
      "metadata": {
        "id": "KUvoqpvC462f"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Sequentially train the model on all contexts using EWC\n",
        "for context_id in range(contexts):\n",
        "    # Train the model on this context\n",
        "    train_ewc(model_ewc, train_datasets[context_id], iters=iters, lr=lr, batch_size=batch_size,\n",
        "              current_context=context_id+1, ewc_lambda=ewc_lambda)\n",
        "    # Estimate/update the FI-matrix (which is stored as attribute in the network)\n",
        "    estimate_fisher(model_ewc, train_datasets[context_id], n_samples=200)\n",
        "    # Evaluate the performance of the model after training on this context\n",
        "    test_all(model_ewc, test_datasets, context_id+1, test_size=None,\n",
        "             result_dict=result_dict_ewc, verbose=True)"
      ],
      "metadata": {
        "id": "UaTlFqCU4L23"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Visualize performance on each context throughout the continual training\n",
        "plot_list = []\n",
        "for i in range(contexts):\n",
        "    plot_list.append(result_dict_ewc[\"acc per context\"][\"context {}\".format(i + 1)])\n",
        "figure = plot_lines(\n",
        "    plot_list, x_axes=result_dict_ewc[\"context\"],\n",
        "    line_names=['context {}'.format(i + 1) for i in range(contexts)],\n",
        "    title=\"EWC\", ylabel=\"Test Accuracy (%)\",\n",
        "    xlabel=\"Number of contexts trained so far\", figsize=(10,5),\n",
        ")"
      ],
      "metadata": {
        "id": "cnwEr6D86XXz"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "On this problem, EWC does not seem to help!"
      ],
      "metadata": {
        "id": "A3veN7vFDgtg"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Experience Replay"
      ],
      "metadata": {
        "id": "xKga8yfYt8qI"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Create a copy from the base-model\n",
        "model_replay = copy.deepcopy(model)"
      ],
      "metadata": {
        "id": "PNUKP7xL7Mit"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Initiate a `results_dict` to keep track of performance throughout the continual training.\n",
        "result_dict_replay = initiate_result_dict(contexts)"
      ],
      "metadata": {
        "id": "w47-mCHp7PU1"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Select how many samples per class can be stored in the memory buffer\n",
        "buffer_size_per_class = 20"
      ],
      "metadata": {
        "id": "ysrAtTYN9Xq8"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Sequentially train the model on all contexts using Experience Replay\n",
        "memory_sets = []\n",
        "buffer_dataset = None\n",
        "for context_id in range(contexts):\n",
        "    # Train the model on this context\n",
        "    train_replay(model_replay, train_datasets[context_id], iters=iters, lr=lr,\n",
        "                 batch_size=batch_size, current_context=context_id+1, buffer_dataset=buffer_dataset)\n",
        "    # Update memory buffer\n",
        "    classes_in_this_context = list(range(classes_per_context*context_id,\n",
        "                                         classes_per_context*(context_id+1)))\n",
        "    memory_sets = fill_memory_buffer(memory_sets, train_datasets[context_id],\n",
        "                                     buffer_size_per_class=buffer_size_per_class,\n",
        "                                     class_indeces=classes_in_this_context)\n",
        "    buffer_dataset = MemorySetDataset(memory_sets)\n",
        "    # Evaluate the performance of the model after training on this context\n",
        "    test_all(model_replay, test_datasets, context_id+1, test_size=None,\n",
        "             result_dict=result_dict_replay, verbose=True)"
      ],
      "metadata": {
        "id": "PqEQ0psH7S5_"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Visualize performance on each context throughout the continual training\n",
        "plot_list = []\n",
        "for i in range(contexts):\n",
        "    plot_list.append(result_dict_replay[\"acc per context\"][\"context {}\".format(i + 1)])\n",
        "figure = plot_lines(\n",
        "    plot_list, x_axes=result_dict_replay[\"context\"],\n",
        "    line_names=['context {}'.format(i + 1) for i in range(contexts)],\n",
        "    title=\"Experience Replay ({} samples per class)\".format(buffer_size_per_class),\n",
        "    ylabel=\"Test Accuracy (%)\", xlabel=\"Number of contexts trained so far\", figsize=(10,5),\n",
        ")"
      ],
      "metadata": {
        "id": "Cd6cVn4596Z2"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Experience replay does help on this type of continual learning problem!"
      ],
      "metadata": {
        "id": "0h0_dqswDxIF"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Visual comparison fine-tuning, EWC and replay"
      ],
      "metadata": {
        "id": "PeRrxlPMD9Ih"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "key = \"average_contexts_so_far\"\n",
        "plot_list = [result_dict_finetune[key], result_dict_ewc[key], result_dict_replay[key]]\n",
        "line_names = ['Fine-tuning', 'EWC', 'Experience Replay']\n",
        "figure = plot_lines(\n",
        "    plot_list, x_axes=result_dict_replay[\"context\"], line_names=line_names,\n",
        "    title=\"Comparison (performance on all contexts so far)\",\n",
        "    ylabel=\"Test Accuracy (%)\", xlabel=\"Number of contexts trained so far\", figsize=(10,5),\n",
        ")"
      ],
      "metadata": {
        "id": "BlMgaQw3EDNG"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "key = \"average_all_contexts\"\n",
        "plot_list = [result_dict_finetune[key], result_dict_ewc[key], result_dict_replay[key]]\n",
        "line_names = ['Fine-tuning', 'EWC', 'Experience Replay']\n",
        "figure = plot_lines(\n",
        "    plot_list, x_axes=result_dict_replay[\"context\"], line_names=line_names,\n",
        "    title=\"Comparison (performance on all contexts)\",\n",
        "    ylabel=\"Test Accuracy (%)\", xlabel=\"Number of contexts trained so far\", figsize=(10,5),\n",
        ")"
      ],
      "metadata": {
        "id": "jDvPRbb6NunA"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "(Note that the lines of `fine-tuning` and `EWC` might well overlap almost completely.)"
      ],
      "metadata": {
        "id": "BnzPs8sLNiiW"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### **ASSIGNMENT**: Combine EWC and replay on class-incremental Split MNIST\n",
        "Train another model copy on the class-incremental version of Split MNIST, now again using *both* EWC and experience replay."
      ],
      "metadata": {
        "id": "nnHbEqUclFzL"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "model_ewc_replay = copy.deepcopy(model)\n",
        "result_dict_ewc_replay = initiate_result_dict(contexts)"
      ],
      "metadata": {
        "id": "9Z4mW-SGJ2Mz"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# TO BE COMPLETED"
      ],
      "metadata": {
        "id": "o8uoeyclJsol"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title Possible Answer\n",
        "\n",
        "# Select hyperparameter for EWC\n",
        "ewc_lambda_with_replay = 100\n",
        "\n",
        "# Sequentially train the model on all contexts using both EWC and Experience Replay\n",
        "memory_sets = []\n",
        "buffer_dataset = None\n",
        "for context_id in range(contexts):\n",
        "\n",
        "    # Train the model on this context\n",
        "    train_ewc_replay(model_ewc_replay, train_datasets[context_id], iters=iters, lr=lr,\n",
        "                     batch_size=batch_size, current_context=context_id+1,\n",
        "                     ewc_lambda=ewc_lambda_with_replay, buffer_dataset=buffer_dataset)\n",
        "    # NOTE: depending on how you had written your `train_ewc_replay` function, you might need to\n",
        "    #       adjust it to make it suitable for the more general case in which the function is used\n",
        "    #       for arbitrary contexts\n",
        "\n",
        "    # Estimate/update the FI-matrix (which is stored as attribute in the network)\n",
        "    estimate_fisher(model_ewc_replay, train_datasets[context_id], n_samples=200)\n",
        "\n",
        "    # Update memory buffer\n",
        "    classes_in_this_context = list(range(classes_per_context*context_id,\n",
        "                                         classes_per_context*(context_id+1)))\n",
        "    memory_sets = fill_memory_buffer(memory_sets, train_datasets[context_id],\n",
        "                                     buffer_size_per_class=buffer_size_per_class,\n",
        "                                     class_indeces=classes_in_this_context)\n",
        "    buffer_dataset = MemorySetDataset(memory_sets)\n",
        "\n",
        "    # Evaluate the performance of the model after training on this context\n",
        "    test_all(model_ewc_replay, test_datasets, context_id+1, test_size=None,\n",
        "             result_dict=result_dict_ewc_replay, verbose=True)"
      ],
      "metadata": {
        "id": "NCIl4npjKA4b"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Visualize performance on each context throughout the continual training\n",
        "plot_list = []\n",
        "for i in range(contexts):\n",
        "    plot_list.append(result_dict_ewc_replay[\"acc per context\"][\"context {}\".format(i + 1)])\n",
        "figure = plot_lines(\n",
        "    plot_list, x_axes=result_dict_ewc_replay[\"context\"],\n",
        "    line_names=['context {}'.format(i + 1) for i in range(contexts)],\n",
        "    title=\"EWC + Experience Replay ({} samples per class)\".format(buffer_size_per_class),\n",
        "    ylabel=\"Test Accuracy (%)\", xlabel=\"Number of contexts trained so far\", figsize=(10,5),\n",
        ")"
      ],
      "metadata": {
        "id": "H8Bv45h-LAOo"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Is there a benefit of \"EWC + Experience Replay\" over only \"Experience Replay\"? Let's compare them more directly."
      ],
      "metadata": {
        "id": "lYRQToDAOHCC"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "key = \"average_contexts_so_far\"\n",
        "plot_list = [result_dict_finetune[key], result_dict_ewc[key], result_dict_replay[key],\n",
        "             result_dict_ewc_replay[key]]\n",
        "line_names = ['Fine-tuning', 'EWC', 'Experience Replay', 'EWC + Experience Replay']\n",
        "figure = plot_lines(\n",
        "    plot_list, x_axes=result_dict_replay[\"context\"], line_names=line_names,\n",
        "    title=\"Comparison (performance on all contexts so far)\",\n",
        "    ylabel=\"Test Accuracy (%)\", xlabel=\"Number of contexts trained so far\", figsize=(10,5),\n",
        ")"
      ],
      "metadata": {
        "id": "Y7tYuOJqLm1F"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Bonus Part: Task- and domain-incremental learning version of Split MNIST\n",
        "As a *bonus exercise*, let's try to think about how to adapt the code above such that we can run the same experiment, except that we perform Split MNIST according to the task- and domain-incremental learning scenarios rather than the class-incremental learning scenario."
      ],
      "metadata": {
        "id": "tysftIoWOY-u"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Recall from the lecture that Split MNIST (just like any other sequence of classification tasks), can be performed according to each of the three scenarios:\n",
        "\n",
        "![Screenshot 2024-03-14 at 15.02.19.png]()\n",
        "\n"
      ],
      "metadata": {
        "id": "qnzRpciWOnSY"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Domain-incremental Split MNIST\n",
        "To create the domain-incrmental learning version of Split MNIST, the number of possible output classes needs to be changed from ten (one output class for each digit) to two (one output class for `odd` and one output class for `even`).\n",
        "\n",
        "To implement this, we need to change both the way the benchmark is defined (e.g., in the second context, the digits '2' and '3' should no longer be labelled with `y=2` and `y=3`, but instead with `y=0` and `y=1`), and we need to change the way the classifier is defined (as it should now have two output units rather than ten)."
      ],
      "metadata": {
        "id": "AS-70pNrv4E_"
      }
    },
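    {
      "cell_type": "markdown",
      "source": [
        "For illustration, here is a minimal sketch of how a context's dataset could be wrapped so that its targets are remapped to within-context labels. (This wrapper, and the name `RelabeledDataset`, are not part of the tutorial's code base; they are just one possible way to implement the relabelling. PyTorch only requires `__len__` and `__getitem__` of a map-style dataset, so a plain wrapper class like this can be passed to a `DataLoader` directly.)"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "class RelabeledDataset:\n",
        "    \"\"\"Wrap a map-style dataset so each target becomes its within-context label.\"\"\"\n",
        "\n",
        "    def __init__(self, dataset, classes_per_context=2):\n",
        "        self.dataset = dataset\n",
        "        self.classes_per_context = classes_per_context\n",
        "\n",
        "    def __len__(self):\n",
        "        return len(self.dataset)\n",
        "\n",
        "    def __getitem__(self, index):\n",
        "        x, y = self.dataset[index]\n",
        "        # Map the global label (e.g., digit '3') to its within-context label (e.g., 1)\n",
        "        return x, y % self.classes_per_context"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },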
    {
      "cell_type": "markdown",
      "source": [
        "### Task-incremental Split MNIST\n",
        "To create the task-incremental version of Split MNIST, the context label needs to be provided as input to the model. Usually, this context label will then be used to enable a \"multi-headed output layer\", meaning that there is a separate output layer per task.\n",
        "\n",
        "One option to implement this is to keep the number of output classes at ten and to still have the same, single output layer as in the class-incremental learning case, but to always mask out the outputs that are not in the current context.\n",
        "\n",
        "Another option to implement this is to set the number of output classes to two (as in the domain-incremental learning case) and to define a separate output layer per task.\n",
        "\n"
      ],
      "metadata": {
        "id": "7Ipjh-G0wjwh"
      }
    },
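    {
      "cell_type": "markdown",
      "source": [
        "As an illustrative sketch of the first (masking) option: keep the single ten-way output layer, but set the logits of all classes outside the active context to `-inf` before the softmax / cross-entropy, so that only the active context's classes compete. (The helper below, including the name `mask_logits`, is hypothetical and not part of the tutorial's code base.)"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import torch\n",
        "\n",
        "def mask_logits(logits, context_id, classes_per_context=2):\n",
        "    \"\"\"Set to -inf all output units not belonging to `context_id` (1-based).\"\"\"\n",
        "    active = list(range(classes_per_context*(context_id-1), classes_per_context*context_id))\n",
        "    mask = torch.full_like(logits, float('-inf'))\n",
        "    mask[:, active] = 0.\n",
        "    # Adding the mask leaves the active logits unchanged and sends the rest to -inf\n",
        "    return logits + mask"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },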
    {
      "cell_type": "markdown",
      "source": [
        "![image.png]()"
      ],
      "metadata": {
        "id": "oYI2Bbr4zCDE"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "$^{\\text{a}}$ With task-incremental learning, at the computational level, there is no difference between whether the algorithm must return the within-context label or the global label, because the within-context label can be combined with the context label (which is provided as input) to get the global label."
      ],
      "metadata": {
        "id": "vFDIFf_KzLu-"
      }
    }
  ]
}