{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "[<img src='https://github.com/jeshraghian/snntorch/blob/master/docs/_static/img/snntorch_alpha_w.png?raw=true' width=\"400\">](https://github.com/jeshraghian/snntorch/)\n",
        "\n",
        "# Binarized Spiking Neural Networks\n",
        "## By Erik Mercado\n",
        "\n",
        "\n",
        "<a href=\"https://colab.research.google.com/github/jeshraghian/snntorch/blob/master/examples/tutorial_BSNN.ipynb\">\n",
        "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
        "</a>\n",
        "\n",
        "[<img src='https://github.com/jeshraghian/snntorch/blob/master/docs/_static/img/GitHub-Mark-Light-120px-plus.png?raw=true' width=\"28\">](https://github.com/jeshraghian/snntorch/) [<img src='https://github.com/jeshraghian/snntorch/blob/master/docs/_static/img/GitHub_Logo_White.png?raw=true' width=\"80\">](https://github.com/jeshraghian/snntorch/)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "This tutorial is based on the following papers on binarized spiking neural networks. If you find these resources or code useful in your work, please consider citing the following sources:\n",
        "\n",
        "> <cite> [Jason K. Eshraghian, Xinxin Wang, and Wei D. Lu. \"Memristor-based Binarized Spiking Neural Networks: Challenges and Applications\". IEEE Nanotechnology Magazine, 16(2) April 2022.](https://ieeexplore.ieee.org/abstract/document/9693512/) </cite>\n",
        "\n",
        "> <cite> [Jason K. Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, and Wei D. Lu. \"Training Spiking Neural Networks Using Lessons From Deep Learning\". Proceedings of the IEEE, 111(9) September 2023.](https://ieeexplore.ieee.org/abstract/document/10242251) </cite>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "This tutorial will show you how to train a binarized spiking neural network (BSNN) in snnTorch. Weights are binarized and forced to take on values $w \\in \\{-1,+1\\}$. Weight binarization is applied during training with a straight-through estimator to side-step the non-differentiability of binarization. This is distinct from the surrogate gradient applied to the spiking function.\n",
        "\n",
        "> Note: The model you will train is an emulation of a binarized SNN. That is, while the weights are constrained to take on $+1$ and $-1$, they are stored in a full-precision format.\n",
        "\n",
        "In general, SNNs are thought to improve upon binarized (non-spiking) neural networks because: 1) they represent data over time, and the temporal dimension itself is not binarized, and 2) information can be stored in the state (membrane potential) of each neuron. BSNNs can be difficult to optimize, but there is plenty of room to close the gap between BSNNs and full-precision SNNs."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "igdvjFODtcXc"
      },
      "source": [
        "# 1. Environment Setup\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "rmPjrnjGtmOc"
      },
      "outputs": [],
      "source": [
        "! pip install snntorch --quiet"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Zgi2kblr99E0",
        "outputId": "80791742-3a9c-497e-db5e-80c079cacf76"
      },
      "outputs": [],
      "source": [
        "# PyTorch Imports\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "from torch.autograd import Function\n",
        "from torch.utils.data import DataLoader\n",
        "from torchvision import datasets, transforms\n",
        "from torch.optim import Adam\n",
        "from torch.utils.data import random_split\n",
        "\n",
        "# Additional Imports\n",
        "import snntorch as snn\n",
        "import matplotlib.pyplot as plt\n",
        "import numpy as np\n",
        "import time\n",
        "import os\n",
        "\n",
        "# Set the seed for reproducibility of results\n",
        "torch.manual_seed(0)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "F3ynOCqL8MSV"
      },
      "source": [
        "# 2. Data Preparation\n",
        "\n",
        "## Fashion-MNIST (FMNIST) Dataset Overview\n",
        "- **Total Images:** 70,000 grayscale images\n",
        "- **Image Dimensions:** 28x28 pixels\n",
        "- **Categories:** 10 categories of fashion products, with 7,000 images per category\n",
        "\n",
        "## Dataset Composition\n",
        "- **Training Set:** 60,000 images\n",
        "- **Test Set:** 10,000 images\n",
        "\n",
        "## Training and Validation Split\n",
        "- **Standard Practice:** Splitting the training set into separate training and validation sets\n",
        "- **Split Ratio:** 80% for training, 20% for validation\n",
        "- **In Practice for FMNIST:**\n",
        "  - **Training Set:** 48,000 images\n",
        "  - **Validation Set:** 12,000 images\n",
        "- **Purpose of Split:** Allows for hyperparameter tuning and model validation during training, while keeping the test set separate for final evaluation"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "IgQgyyS3_W9s"
      },
      "outputs": [],
      "source": [
        "# Load Fashion-MNIST data\n",
        "transform = transforms.Compose([\n",
        "    transforms.ToTensor(),\n",
        "    transforms.Normalize((0.5,), (0.5,))  # Normalize to [-1, 1]\n",
        "])\n",
        "\n",
        "# Download and load the Fashion-MNIST dataset\n",
        "train_set = datasets.FashionMNIST(root='./data', train=True, download=True, transform=transform)\n",
        "test_set = datasets.FashionMNIST(root='./data', train=False, download=True, transform=transform)\n",
        "\n",
        "# Split the train_set into new training and validation sets\n",
        "train_size = int(0.8 * len(train_set)) # 80% train_set\n",
        "val_size = len(train_set) - train_size # 20% train_set\n",
        "train_subset, val_subset = random_split(train_set, [train_size, val_size])\n",
        "\n",
        "# Create data loaders for the training, validation, and test sets\n",
        "train_loader = DataLoader(train_subset, batch_size=64, shuffle=True)\n",
        "val_loader = DataLoader(val_subset, batch_size=64, shuffle=False)\n",
        "test_loader = DataLoader(test_set, batch_size=64, shuffle=False)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "QC4r9d0z_xk8"
      },
      "outputs": [],
      "source": [
        "#@title Dataset Visualization Function\n",
        "def show_img_by_class(data_iter, classes):\n",
        "    class_images = {}\n",
        "    class_labels = {}\n",
        "\n",
        "    # loop through the data iterator until we find one instance of each class\n",
        "    while len(class_images) < len(classes):\n",
        "        images, labels = next(data_iter)\n",
        "        for i, label in enumerate(labels):\n",
        "            if label.item() not in class_images and len(class_images) < len(classes):\n",
        "                class_images[label.item()] = images[i]\n",
        "                class_labels[label.item()] = label\n",
        "\n",
        "    # Set a larger figure size to accommodate the images\n",
        "    plt.figure(figsize=(20, 20))\n",
        "\n",
        "    # plot each image and its label\n",
        "    for i, (label, image) in enumerate(class_images.items()):\n",
        "        ax = plt.subplot(1, len(classes), i + 1)\n",
        "        img = image / 2 + 0.5  # Unnormalize\n",
        "        npimg = img.numpy()\n",
        "        plt.imshow(npimg.squeeze(), cmap=\"gray\")  # Grayscale images don't need color channel adjustment\n",
        "        ax.set_title(f\"{classes[class_labels[label].item()]}\")\n",
        "        ax.axis(\"off\")\n",
        "\n",
        "    plt.show()\n",
        "\n",
        "    # print image size\n",
        "    print(f\"Image size: {images[0].size()}\")\n",
        "\n",
        "# get a batch of training images\n",
        "data_iter = iter(train_loader)\n",
        "\n",
        "# classes in Fashion-MNIST\n",
        "classes = (\"T-shirt/top\", \"Trouser\", \"Pullover\", \"Dress\", \"Coat\",\n",
        "           \"Sandal\", \"Shirt\", \"Sneaker\", \"Bag\", \"Ankle boot\")\n",
        "\n",
        "show_img_by_class(data_iter, classes)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dGzOpCsAu_7L"
      },
      "source": [
        "# 3. Define the BSNN Model\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OLxZfD2LMKMx"
      },
      "source": [
        "## 3.1 Binarization\n",
        "\n",
        "Binarization converts all weights greater than $0$ to $+1$, and all weights less than $0$ to $-1$. This thresholding function is non-differentiable. The following `torch.autograd.Function` applies binarization in the forward pass and bypasses it in the backward pass.\n",
        "\n",
        "- **Method: `forward`**\n",
        "  - **Purpose:** Performs binarization.\n",
        "  - **Operation:**\n",
        "    - `input.sign()`: Computes the sign of each element: $+1$ for positive values, $-1$ for negative values, and $0$ for exact zeros.\n",
        "    - `.clamp(min=-1)`: Guards the lower bound at $-1$. (An exact zero would remain $0$, but with full-precision latent weights this essentially never occurs.)\n",
        "\n",
        "- **Method: `backward`**\n",
        "  - **Purpose:** Defines the gradient computation for backpropagation.\n",
        "  - **Operation:**\n",
        "    - `gradient_out.clone()`: Passes the incoming gradient through unchanged. This is the Straight-Through Estimator (STE): binarization is treated as the identity function during backpropagation."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The following function will be applied to various layers in PyTorch."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "J9U6eZ9bNSQy"
      },
      "outputs": [],
      "source": [
        "class Binarize(Function):\n",
        "  @staticmethod\n",
        "  def forward(ctx, input):\n",
        "    # threshold at zero: negative weights -> -1, positive weights -> +1\n",
        "    return input.sign().clamp(min=-1)\n",
        "\n",
        "  @staticmethod\n",
        "  def backward(ctx, gradient_out):\n",
        "    # straight-through estimator: pass the gradient through unchanged\n",
        "    gradient_in = gradient_out.clone()\n",
        "    return gradient_in\n"
      ]
    },
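    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check, we can pass a small weight tensor through `Binarize` and confirm that the forward pass produces only $-1$ and $+1$, while the backward pass forwards gradients unchanged:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sanity check: forward pass binarizes, backward pass is the identity (STE)\n",
        "w = torch.tensor([-0.7, -0.2, 0.3, 1.5], requires_grad=True)\n",
        "w_bin = Binarize.apply(w)\n",
        "print(w_bin)   # values in {-1, +1}\n",
        "\n",
        "# Scale each binarized weight before summing so the gradient is non-trivial\n",
        "(w_bin * torch.arange(1., 5.)).sum().backward()\n",
        "print(w.grad)  # tensor([1., 2., 3., 4.]) -- passed through unchanged\n"
      ]
    },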
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3VSg7yu7RaCN"
      },
      "source": [
        "## 3.2 Custom Binary Conv2d Layer\n",
        "\n",
        "The above function will now be applied to various neural network layer types. \n",
        "Note that the full precision weights are stored in the background and all weight updates are applied to these full precision weights. \n",
        "Weights are binarized only during the forward-pass in order to generate a loss that is aware of the binarization function.\n",
        "\n",
        "- **Method: `forward`**\n",
        "  - **Purpose:** Conv2d with binarization.\n",
        "  - **Operation:**\n",
        "    - Binarizes the stored weights, then returns the output of a PyTorch convolution using those binarized weights. Note that `F.conv2d` is called with its default `stride=1` and `padding=0`, i.e., a \"valid\" convolution.\n",
        "\n",
        "- **Method: `reset_parameters`**\n",
        "  - **Purpose:** Initialize weights with the Xavier normal distribution.\n",
        "  - **Operation:**\n",
        "    - Initializes the weights to help mitigate vanishing or exploding gradients.\n",
        "    - Zero mean; standard deviation scaled by the layer's fan-in and fan-out."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hj7f51OENWIs"
      },
      "outputs": [],
      "source": [
        "class BinaryConv2d(nn.Conv2d):\n",
        "\n",
        "  def __init__(self, *kargs, **kwargs):\n",
        "        super(BinaryConv2d, self).__init__(*kargs, **kwargs)\n",
        "\n",
        "  def forward(self, input):\n",
        "    binarized_weights = Binarize.apply(self.weight)\n",
        "    # default stride=1 and padding=0 (valid convolution); no bias is applied\n",
        "    return F.conv2d(input, binarized_weights)\n",
        "\n",
        "  def reset_parameters(self):\n",
        "    # Xavier normal initialization\n",
        "    nn.init.xavier_normal_(self.weight)\n",
        "    if self.bias is not None:\n",
        "      # Initialize bias to zero\n",
        "      nn.init.constant_(self.bias, 0)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "z_j67uPZgEi3"
      },
      "source": [
        "## 3.3 Custom Binary Linear Layer\n",
        "- **Method: `forward`**\n",
        "  - **Purpose:** Fully connected layer with binarization.\n",
        "  - **Operation:**\n",
        "    - Returns a PyTorch linear transformation computed with binarized weights.\n",
        "    - The output vector represents the activation of each neuron in the binarized linear layer."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "T9_FloJsNY4m"
      },
      "outputs": [],
      "source": [
        "class BinaryLinear(nn.Linear):\n",
        "\n",
        "    def forward(self, input):\n",
        "        bin_weights = Binarize.apply(self.weight)\n",
        "        if self.bias is None:\n",
        "            return F.linear(input, bin_weights)\n",
        "        else:\n",
        "            return F.linear(input, bin_weights, self.bias)\n",
        "\n",
        "    def reset_parameters(self):\n",
        "        # Apply Xavier normal initialization\n",
        "        torch.nn.init.xavier_normal_(self.weight)\n",
        "        if self.bias is not None:\n",
        "            # Initialize bias to zero\n",
        "            torch.nn.init.constant_(self.bias, 0)\n"
      ]
    },
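    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick check, the stored weights of a small `BinaryLinear` layer remain full precision; the forward pass only ever sees their binarized counterparts:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# The latent weights stay full precision; the forward pass uses binary ones\n",
        "layer = BinaryLinear(in_features=4, out_features=2, bias=False)\n",
        "x = torch.randn(1, 4)\n",
        "out = layer(x)\n",
        "print(layer.weight)                  # full-precision latent weights\n",
        "print(Binarize.apply(layer.weight))  # weights actually used in the forward pass\n"
      ]
    },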
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rr16O79sg1Cc"
      },
      "source": [
        "## 3.4 Configuration\n",
        "Define the hyperparameters for the model and training loop."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "P5MNLt4eNPS3"
      },
      "outputs": [],
      "source": [
        "config = {\n",
        "    # SNN\n",
        "    \"threshold1\": 2.3599835635698114,\n",
        "    \"threshold2\": 7.985043705972782,\n",
        "    \"threshold3\": 3.849629060468402,\n",
        "    \"beta\": 0.44154740154430405,\n",
        "    \"num_steps\": 10,\n",
        "\n",
        "    # Network\n",
        "    \"batch_norm\": True,\n",
        "    \"dropout\": 0.3276864426153669,\n",
        "\n",
        "    # Hyper Params\n",
        "    \"lr\": 0.00713202055922571,\n",
        "\n",
        "    # Early Stopping\n",
        "    \"min_delta\": 1e-6,\n",
        "    \"patience_es\": 20,\n",
        "\n",
        "    # Training\n",
        "    \"epochs\": 100\n",
        "}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CrlaUAT7g-WP"
      },
      "source": [
        "## 3.5 BSNN Architecture"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "pFH88iPwNbgM"
      },
      "outputs": [],
      "source": [
        "class BSNN(nn.Module):\n",
        "  def __init__(self, config):\n",
        "    super(BSNN, self).__init__()\n",
        "\n",
        "    # Initialize configuration parameters\n",
        "      # LIF\n",
        "    self.thresh1 = config[\"threshold1\"]\n",
        "    self.thresh2 = config[\"threshold2\"]\n",
        "    self.thresh3 = config[\"threshold3\"]\n",
        "    self.beta = config[\"beta\"]\n",
        "    self.num_steps = config[\"num_steps\"]\n",
        "\n",
        "      # Hyper Params for Layers\n",
        "    self.batch_norm = config[\"batch_norm\"]\n",
        "    self.dropout_percent = config[\"dropout\"]\n",
        "\n",
        "      # Network Layers\n",
        "    self.bin_conv_1 = BinaryConv2d(in_channels=1, out_channels=16, kernel_size=3, bias=False)\n",
        "    self.batch_norm_1 = nn.BatchNorm2d(num_features=16)\n",
        "    self.max_pool_1 = nn.MaxPool2d(kernel_size=2)\n",
        "    self.lif1 = snn.Leaky(beta=self.beta, threshold=self.thresh1)\n",
        "\n",
        "    self.bin_conv_2 = BinaryConv2d(in_channels=16, out_channels=32, kernel_size=3, bias=False)\n",
        "    self.batch_norm_2 = nn.BatchNorm2d(num_features=32)\n",
        "    self.max_pool_2 = nn.MaxPool2d(kernel_size=2)\n",
        "    self.lif2 = snn.Leaky(beta=self.beta, threshold=self.thresh2)\n",
        "\n",
        "    self.flatten = nn.Flatten()\n",
        "    self.bin_fully_connected_1 = BinaryLinear(in_features=32*5*5, out_features=10)\n",
        "    self.dropout = nn.Dropout(self.dropout_percent)\n",
        "    self.lif3 = snn.Leaky(beta=self.beta, threshold=self.thresh3)\n",
        "\n",
        "\n",
        "    # Forward Pass\n",
        "  def forward(self, inpt):\n",
        "    mem1 = self.lif1.init_leaky()\n",
        "    mem2 = self.lif2.init_leaky()\n",
        "    mem3 = self.lif3.init_leaky()\n",
        "\n",
        "    spike3_rec = []\n",
        "    mem3_rec = []\n",
        "\n",
        "    for step in range(self.num_steps):\n",
        "      current1 = self.bin_conv_1(inpt)\n",
        "      current1 = self.batch_norm_1(current1) if self.batch_norm else current1\n",
        "      current1 = self.max_pool_1(current1)\n",
        "      spike1, mem1 = self.lif1(current1, mem1)\n",
        "\n",
        "      current2 = self.bin_conv_2(spike1)\n",
        "      current2 = self.batch_norm_2(current2) if self.batch_norm else current2\n",
        "      current2 = self.max_pool_2(current2)\n",
        "      spike2, mem2 = self.lif2(current2, mem2)\n",
        "\n",
        "      current3 = self.flatten(spike2)\n",
        "      current3 = self.bin_fully_connected_1(current3)\n",
        "      current3 = self.dropout(current3)\n",
        "      spike3, mem3 = self.lif3(current3, mem3)\n",
        "\n",
        "      spike3_rec.append(spike3)\n",
        "      mem3_rec.append(mem3)\n",
        "\n",
        "    return torch.stack(spike3_rec, dim=0), torch.stack(mem3_rec, dim=0)\n",
        "\n"
      ]
    },
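    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Because each `BinaryConv2d` forward pass applies a valid convolution (default stride and no padding), the feature maps shrink at every stage. A short sketch (the `conv_out`/`pool_out` helpers are defined here just for this check) traces the spatial dimensions to show where `in_features=32*5*5` comes from:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Trace spatial dimensions through the network (28x28 input)\n",
        "def conv_out(n, k=3):    # valid convolution: stride 1, no padding\n",
        "    return n - k + 1\n",
        "\n",
        "def pool_out(n, k=2):    # max-pool with kernel 2 (floor division)\n",
        "    return n // k\n",
        "\n",
        "n = 28\n",
        "n = pool_out(conv_out(n))  # 28 -> 26 -> 13\n",
        "n = pool_out(conv_out(n))  # 13 -> 11 -> 5\n",
        "print(32 * n * n)          # 800 = 32 * 5 * 5 features into the linear layer\n"
      ]
    },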
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HI6PilsEy1b4"
      },
      "source": [
        "# 4. Model Training Setup\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "M5DwoZM3q8hi"
      },
      "source": [
        "## 4.1 Early Stopping\n",
        "Use early stopping during the training loop: if validation loss does not improve for 20 consecutive epochs (by default), training stops, and the best checkpoint saved during training is used for final evaluation.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cdZcC3FDq28I"
      },
      "outputs": [],
      "source": [
        "class EarlyStopping:\n",
        "    def __init__(self, patience=config[\"patience_es\"], min_delta=config[\"min_delta\"]):\n",
        "        # Early stops the training if validation loss doesn't improve after a given patience.\n",
        "        self.patience = patience\n",
        "        self.min_delta = min_delta\n",
        "        self.counter = 0\n",
        "        self.best_score = None\n",
        "        self.early_stop = False\n",
        "\n",
        "    def __call__(self, val_loss):\n",
        "        if self.best_score is None:\n",
        "            self.best_score = val_loss\n",
        "        elif val_loss > self.best_score - self.min_delta:\n",
        "            self.counter += 1\n",
        "            print(f\"Earlystop {self.counter}/{self.patience}\\n\")\n",
        "            if self.counter >= self.patience:\n",
        "                self.early_stop = True\n",
        "        else:\n",
        "            self.best_score = val_loss\n",
        "            self.counter = 0"
      ]
    },
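    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see the stopping logic in isolation, we can drive `EarlyStopping` with a synthetic loss sequence (a small `patience` is used here purely for illustration):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Synthetic check: loss improves once, then stalls for two epochs\n",
        "stopper = EarlyStopping(patience=2, min_delta=0.0)\n",
        "for loss in [1.0, 0.9, 0.95, 0.96]:\n",
        "    stopper(loss)\n",
        "print(stopper.early_stop)  # True -- patience exhausted after two non-improving epochs\n"
      ]
    },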
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Ro1Qxww0rCe3"
      },
      "source": [
        "## 4.2 Training Set-Up"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "nGFYXEqnNmB7"
      },
      "outputs": [],
      "source": [
        "# Model initialization\n",
        "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
        "model = BSNN(config).to(device)\n",
        "\n",
        "# Optimizer and Loss Function\n",
        "optimizer = Adam(model.parameters(), lr=config[\"lr\"])\n",
        "criterion = nn.CrossEntropyLoss()\n",
        "\n",
        "# Early Stopping\n",
        "early_stopping = EarlyStopping(patience=config[\"patience_es\"], min_delta=config[\"min_delta\"])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OgO_JHkAIeE1"
      },
      "source": [
        "# 5. Training Loop\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "05j4YB4NIeFB"
      },
      "source": [
        "### Training Function"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9xkkQNjeKB5f"
      },
      "source": [
        "Below is the training function. Each epoch exposes the network to every training sample once; the total number of epochs is set by `config[\"epochs\"]`, with early stopping applied in the outer loop."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hr8gV1jwko7_"
      },
      "outputs": [],
      "source": [
        "def train(model, train_loader, optimizer, criterion, device):\n",
        "    model.train()\n",
        "    running_loss = 0.0\n",
        "    correct_train = 0\n",
        "    total_train = 0\n",
        "\n",
        "    for data, targets in train_loader:\n",
        "        data, targets = data.to(device), targets.to(device)\n",
        "\n",
        "        optimizer.zero_grad()\n",
        "        spike_out, _ = model(data)\n",
        "        output = spike_out.sum(dim=0)\n",
        "        loss = criterion(output, targets)\n",
        "        running_loss += loss.item()\n",
        "\n",
        "        _, predicted_train = torch.max(output.data, 1)\n",
        "        total_train += targets.size(0)\n",
        "        correct_train += (predicted_train == targets).sum().item()\n",
        "\n",
        "        loss.backward()\n",
        "        optimizer.step()\n",
        "\n",
        "    train_loss = running_loss / len(train_loader)\n",
        "    train_accuracy = 100 * correct_train / total_train\n",
        "    return train_loss, train_accuracy\n"
      ]
    },
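    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The line `output = spike_out.sum(dim=0)` turns the recorded spike trains into logits by counting spikes per class over time. A minimal illustration with a synthetic spike record:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Synthetic spike record with shape [num_steps, batch, num_classes]\n",
        "spikes = torch.tensor([[[1., 0., 0.]],\n",
        "                       [[1., 1., 0.]],\n",
        "                       [[1., 0., 0.]]])\n",
        "\n",
        "counts = spikes.sum(dim=0)        # spike count per class: tensor([[3., 1., 0.]])\n",
        "predicted = counts.argmax(dim=1)  # most active output neuron: tensor([0])\n",
        "print(counts, predicted)\n"
      ]
    },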
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4fA5P-5bIeFM"
      },
      "source": [
        "### Validation Function"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "jCzvjeCfX_sl"
      },
      "outputs": [],
      "source": [
        "def validate(model, val_loader, criterion, device):\n",
        "    model.eval()\n",
        "    val_loss = 0.0\n",
        "    correct_val = 0\n",
        "    total_val = 0\n",
        "\n",
        "    with torch.no_grad():\n",
        "        for data, targets in val_loader:\n",
        "            data, targets = data.to(device), targets.to(device)\n",
        "            spike_out, _ = model(data)\n",
        "            output = spike_out.sum(dim=0)\n",
        "            loss = criterion(output, targets)\n",
        "            val_loss += loss.item()\n",
        "\n",
        "            _, predicted_val = torch.max(output.data, 1)\n",
        "            total_val += targets.size(0)\n",
        "            correct_val += (predicted_val == targets).sum().item()\n",
        "\n",
        "    val_loss = val_loss / len(val_loader)\n",
        "    val_accuracy = 100 * correct_val / total_val\n",
        "    return val_loss, val_accuracy\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tI6_a5LXIeFP"
      },
      "source": [
        "### Training Loop"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wN5RyAcIK1fC"
      },
      "source": [
        "The loop below trains for up to `config[\"epochs\"]` epochs, records training and validation metrics, checkpoints the model whenever validation accuracy improves, and stops early once validation loss plateaus."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "i8aQrSBIqFyp",
        "outputId": "87ec3edf-002f-4c2f-cabb-ffbbe8cf2c89"
      },
      "outputs": [],
      "source": [
        "train_losses, train_accuracies, val_losses, val_accuracies = [], [], [], []\n",
        "best_val_accuracy = 0\n",
        "model_path = \"best_BSNN_model.pth\"\n",
        "\n",
        "for epoch in range(config[\"epochs\"]):\n",
        "    train_loss, train_accuracy = train(model, train_loader, optimizer, criterion, device)\n",
        "    train_losses.append(train_loss)\n",
        "    train_accuracies.append(train_accuracy)\n",
        "\n",
        "    val_loss, val_accuracy = validate(model, val_loader, criterion, device)\n",
        "    val_losses.append(val_loss)\n",
        "    val_accuracies.append(val_accuracy)\n",
        "\n",
        "    print(f\"Epoch: {epoch + 1}, Training Loss: {train_loss:.5f}, Training Accuracy: {train_accuracy:.2f}%, Validation Loss: {val_loss:.5f}, Validation Accuracy: {val_accuracy:.2f}%\\n\")\n",
        "\n",
        "    if val_accuracy > best_val_accuracy:\n",
        "        best_val_accuracy = val_accuracy\n",
        "        torch.save(model.state_dict(), model_path)\n",
        "        print(f\"Saved model with improved validation accuracy: {val_accuracy:.2f}% \\n\")\n",
        "\n",
        "    early_stopping(val_loss)\n",
        "    if early_stopping.early_stop:\n",
        "        print(\"\\nEarly stopping triggered\")\n",
        "        break"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HH3x3HgC0O5n"
      },
      "source": [
        "# 6. Visualization and Analysis\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-BVjh8NAKdFK"
      },
      "source": [
        "Plot the training and validation loss and accuracy curves. Each point is an average over one full epoch."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 957
        },
        "id": "x_9eVQYF9tR8",
        "outputId": "beccd0b8-d519-41d6-d200-ad2ca8ff6aa5"
      },
      "outputs": [],
      "source": [
        "# Plotting training, validation, and test losses\n",
        "plt.figure(figsize=(10, 5))\n",
        "plt.plot(train_losses, label='Training Loss')\n",
        "plt.plot(val_losses, label='Validation Loss')\n",
        "plt.title('Loss over Epochs')\n",
        "plt.xlabel('Epochs')\n",
        "plt.ylabel('Loss')\n",
        "plt.legend()\n",
        "plt.show()\n",
        "\n",
        "# Plotting training, validation, and test accuracies\n",
        "plt.figure(figsize=(10, 5))\n",
        "plt.plot(train_accuracies, label='Training Accuracy')\n",
        "plt.plot(val_accuracies, label='Validation Accuracy')\n",
        "plt.title('Accuracy over Epochs')\n",
        "plt.xlabel('Epochs')\n",
        "plt.ylabel('Accuracy (%)')\n",
        "plt.legend()\n",
        "plt.show()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TYnUEhDqz77k"
      },
      "source": [
        "# 7. Testing\n",
        "* Run the testing dataset through the model.\n",
        "* Collect and display metrics such as accuracy, loss"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "66hwnK84P03d"
      },
      "outputs": [],
      "source": [
        "def test(model, test_loader, criterion, device, model_path=\"best_BSNN_model.pth\"):\n",
        "\n",
        "    # Initialize variables for test loss and accuracy\n",
        "    test_loss = 0.0\n",
        "    correct_test = 0\n",
        "    total_test = 0\n",
        "\n",
        "    # Restore best BSNN Model\n",
        "    if os.path.isfile(model_path):\n",
        "        model.load_state_dict(torch.load(model_path))\n",
        "        print(f\"Loaded saved model from {model_path}\\n\")\n",
        "\n",
        "    # Switch model to evaluation mode\n",
        "    model.eval()\n",
        "\n",
        "    # Iterate over the test data\n",
        "    with torch.no_grad():\n",
        "        for data, targets in test_loader:\n",
        "            data, targets = data.to(device), targets.to(device)\n",
        "\n",
        "            # Forward pass\n",
        "            outputs, _ = model(data)\n",
        "            outputs = outputs.mean(dim=0)  # average spikes over time (argmax is equivalent to using the sum)\n",
        "\n",
        "            # Calculate loss\n",
        "            loss = criterion(outputs, targets)\n",
        "            test_loss += loss.item()\n",
        "\n",
        "            # Calculate accuracy\n",
        "            _, predicted = torch.max(outputs.data, 1)\n",
        "            total_test += targets.size(0)\n",
        "            correct_test += (predicted == targets).sum().item()\n",
        "\n",
        "    # Calculate average loss and accuracy\n",
        "    test_loss /= len(test_loader)\n",
        "    test_accuracy = 100 * correct_test / total_test\n",
        "\n",
        "    return test_loss, test_accuracy\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "xSzJxF_SQQa_",
        "outputId": "97100ee1-4d98-4f1d-9256-32ba61abc614"
      },
      "outputs": [],
      "source": [
        "test_loss, test_accuracy = test(model, test_loader, criterion, device)\n",
        "print(f\"Test Loss: {test_loss:.4f}, Test Accuracy: {test_accuracy:.2f}%\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Conclusion\n",
        "\n",
        "Accuracy should reach reasonably high values despite the severe constraint on the weights. One caveat: it can take a while to find good hyperparameters when training BSNNs.\n",
        "\n",
        "If you like this project, please consider starring ⭐ the repo on GitHub as it is the easiest and best way to support it.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Additional Resources\n",
        "* [Check out the snnTorch GitHub project here.](https://github.com/jeshraghian/snntorch)\n",
        "\n",
        "* More detail on BSNNs can be found in the corresponding paper here: [Jason K. Eshraghian, Xinxin Wang, and Wei D. Lu. \"Memristor-based Binarized Spiking Neural Networks: Challenges and Applications\". IEEE Nanotechnology Magazine, 16(2) April 2022.](https://ieeexplore.ieee.org/abstract/document/9693512/)"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "collapsed_sections": [
        "igdvjFODtcXc",
        "F3ynOCqL8MSV",
        "OLxZfD2LMKMx",
        "3VSg7yu7RaCN",
        "z_j67uPZgEi3",
        "CrlaUAT7g-WP"
      ],
      "gpuType": "T4",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.8.15"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
