{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "820bff2b",
   "metadata": {},
   "source": [
    "# Vision Transformer optimisation using TFMOT\n",
    "\n",
    "An example notebook demonstrating how TFMOT can be used to optimise complex transformer models such as the Vision Transformer (ViT)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b22944d9",
   "metadata": {},
   "source": [
    "## Background\n",
    "\n",
    "The [Vision Transformer (ViT)](https://arxiv.org/pdf/2010.11929.pdf) architecture uses stacked transformer encoder blocks to process images for tasks such as classification. The encoder blocks are architecturally similar to those of the popular [NLP transformers](https://arxiv.org/pdf/1706.03762.pdf). The inputs to the transformer encoders are embeddings of patches extracted from the image. For a classification task, an additional feed-forward network is attached to the end.\n",
    "\n",
    "<img src=\"https://github.com/google-research/vision_transformer/blob/main/vit_figure.png?raw=true\" alt=\"Vision Transformer architecture\" width=\"700\"/>\n",
    "\n",
    "In this notebook:\n",
    "1. First, a ViT model is created and trained from scratch on the MNIST dataset. In practice, pre-trained weights could also be loaded.\n",
    "2. Afterwards, unstructured weight pruning, clustering and quantisation aware training (QAT) techniques are applied sequentially using the collaborative optimisation features of the [TensorFlow Model Optimization Toolkit (TFMOT)](https://www.tensorflow.org/model_optimization).\n",
    "3. Finally, an integer-only TFLite model is generated and tested."
   ]
  },
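  {
   "cell_type": "markdown",
   "id": "3c9d0f1a",
   "metadata": {},
   "source": [
    "As a quick sanity check of the sizes involved (using the MNIST settings from later in this notebook): 28x28 images split into non-overlapping 4x4 patches give a sequence of 49 patch embeddings, plus one class token."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d8e1b2c",
   "metadata": {},
   "outputs": [],
   "source": [
    "image_size = (28, 28)  # MNIST image size used later in this notebook\n",
    "patch_size = (4, 4)\n",
    "\n",
    "# Number of non-overlapping patches, i.e. tokens entering the encoder\n",
    "num_patches = (image_size[0] // patch_size[0]) * (image_size[1] // patch_size[1])\n",
    "seq_len = num_patches + 1  # +1 for the class token\n",
    "print(num_patches, seq_len)  # 49 50"
   ]
  },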
  {
   "cell_type": "markdown",
   "id": "ae5a58ae",
   "metadata": {},
   "source": [
    "## TFMOT limitations\n",
    "- Subclassed models are not supported; only sequential and functional model definitions are. (Pruning, Clustering & QAT)\n",
    "- Custom subclassed layers are not supported. (Clustering & QAT)\n",
    "    - Clustering only works with a subclassed layer if the weight variables to be clustered are not nested within another layer (e.g. multi-head attention).\n",
    "    - QAT works correctly only if the subclassed layer performs a single operation.\n",
    "- Low-level TensorFlow operators such as `tf.linalg.matmul` are not supported. (QAT only)\n",
    "    - QAT expects every quantised layer to be a subclass of `tf.keras.layers.Layer`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5f09a1f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "import tensorflow_model_optimization as tfmot\n",
    "\n",
    "tf.random.set_seed(0)\n",
    "\n",
    "print(f'TensorFlow version: {tf.__version__}')\n",
    "print(f'TFMOT version: {tfmot.__version__}')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ea4a90ee",
   "metadata": {},
   "source": [
    "## Model definition\n",
    "\n",
    "Due to the limitations listed above, custom Keras layers must be defined for all of the low-level TensorFlow operators in order to perform QAT, and each such layer must contain only a single operation.\n",
    "\n",
    "Since none of these layers will have any prunable or clusterable weights, we first create a base class to extend instead of `tf.keras.layers.Layer`: it implements the `PrunableLayer` and `ClusterableLayer` interfaces with empty weight lists. If any of the weights in a custom layer should be pruned or clustered, return a list of those weights from the respective method instead. Refer to the TFMOT documentation for more details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b022e09f",
   "metadata": {},
   "outputs": [],
   "source": [
    "class PrunableClusterableLayer(tf.keras.layers.Layer,\n",
    "                               tfmot.sparsity.keras.PrunableLayer,\n",
    "                               tfmot.clustering.keras.ClusterableLayer):\n",
    "    def get_prunable_weights(self):\n",
    "        return []\n",
    "\n",
    "    def get_clusterable_weights(self):\n",
    "        return []"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bd2b1152",
   "metadata": {},
   "source": [
    "### 1. Define each of the TensorFlow operations ViT uses as a Keras subclassed layer:\n",
    "\n",
    "Note that some of these layers have trainable weights defined using the `add_weight` method. These weights will not be pruned or clustered."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "51101c26",
   "metadata": {},
   "outputs": [],
   "source": [
    "class MatMul(PrunableClusterableLayer):\n",
    "    def __init__(self, transpose_b=False, **kwargs):\n",
    "        super().__init__(**kwargs)\n",
    "        self.transpose_b = transpose_b\n",
    "\n",
    "    def call(self, inputs):\n",
    "        return tf.linalg.matmul(*inputs, transpose_b=self.transpose_b)\n",
    "\n",
    "    def get_config(self):\n",
    "        config = super().get_config()\n",
    "        config.update({'transpose_b': self.transpose_b})\n",
    "        return config\n",
    "\n",
    "class Multiply(PrunableClusterableLayer):\n",
    "    def call(self, inputs):\n",
    "        return tf.multiply(*inputs)\n",
    "\n",
    "# Calling Multiply with a scalar input will lead to an error.\n",
    "# Use the following ScalarMultiply class instead.\n",
    "class ScalarMultiply(PrunableClusterableLayer):\n",
    "    def __init__(self, scalar, **kwargs):\n",
    "        super().__init__(**kwargs)\n",
    "        self.scalar = scalar\n",
    "\n",
    "    def call(self, x):\n",
    "        return tf.math.multiply(x, self.scalar)\n",
    "\n",
    "    def get_config(self):\n",
    "        config = super().get_config()\n",
    "        config.update({'scalar': self.scalar})\n",
    "        return config\n",
    "\n",
    "class Add(PrunableClusterableLayer):\n",
    "    def call(self, inputs):\n",
    "        return tf.math.add(*inputs)\n",
    "\n",
    "# Calling Add with a scalar input will lead to an error.\n",
    "# Use the following ScalarAdd class instead.\n",
    "class ScalarAdd(PrunableClusterableLayer):\n",
    "    def __init__(self, scalar, **kwargs):\n",
    "        super().__init__(**kwargs)\n",
    "        self.scalar = scalar\n",
    "\n",
    "    def call(self, x):\n",
    "        return tf.math.add(x, self.scalar)\n",
    "\n",
    "    def get_config(self):\n",
    "        config = super().get_config()\n",
    "        config.update({'scalar': self.scalar})\n",
    "        return config\n",
    "\n",
    "class Slice(PrunableClusterableLayer):\n",
    "    def __init__(self, seq_idx, **kwargs):\n",
    "        super().__init__(**kwargs)\n",
    "        self.seq_idx = seq_idx\n",
    "\n",
    "    def call(self, x):\n",
    "        return x[:, self.seq_idx, ...]\n",
    "\n",
    "    def get_config(self):\n",
    "        config = super().get_config()\n",
    "        config.update({'seq_idx': self.seq_idx})\n",
    "        return config\n",
    "\n",
    "class Mean(PrunableClusterableLayer):\n",
    "    def __init__(self, axes=None, keepdims=True, **kwargs):\n",
    "        super().__init__(**kwargs)\n",
    "        self.axes = axes\n",
    "        self.keepdims = keepdims\n",
    "\n",
    "    def call(self, x):\n",
    "        return tf.math.reduce_mean(x, axis=self.axes, keepdims=self.keepdims)\n",
    "\n",
    "    def get_config(self):\n",
    "        config = super().get_config()\n",
    "        config.update({'axes': self.axes,\n",
    "                       'keepdims': self.keepdims})\n",
    "        return config\n",
    "\n",
    "class Subtract(PrunableClusterableLayer):\n",
    "    def call(self, inputs):\n",
    "        return tf.math.subtract(*inputs)\n",
    "\n",
    "class StopGradient(PrunableClusterableLayer):\n",
    "    def call(self, x):\n",
    "        return tf.stop_gradient(x)\n",
    "\n",
    "class RSqrt(PrunableClusterableLayer):\n",
    "    def call(self, x):\n",
    "        return tf.math.rsqrt(x)\n",
    "\n",
    "class ClipMin(PrunableClusterableLayer):\n",
    "    def __init__(self, min_val=0, **kwargs):\n",
    "        super().__init__(**kwargs)\n",
    "        self.min_val = min_val\n",
    "\n",
    "    def call(self, x):\n",
    "        return tf.math.maximum(x, self.min_val)\n",
    "\n",
    "    def get_config(self):\n",
    "        config = super().get_config()\n",
    "        config.update({'min_val': self.min_val})\n",
    "        return config\n",
    "\n",
    "class BroadcastToken(PrunableClusterableLayer):\n",
    "    \"\"\"Layer to broadcast the class token\"\"\"\n",
    "    def __init__(self, embedding_dim, **kwargs):\n",
    "        super().__init__(**kwargs)\n",
    "        self.embedding_dim = embedding_dim\n",
    "\n",
    "    def build(self, input_shape):\n",
    "        self.w = self.add_weight(shape=(1, 1, self.embedding_dim), initializer='zeros', \n",
    "                                 trainable=True, name='token')\n",
    "        super().build(input_shape)\n",
    "\n",
    "    def call(self, x):\n",
    "        batch_size = tf.shape(x)[0]\n",
    "        return tf.broadcast_to(self.w, [batch_size, 1, self.embedding_dim])\n",
    "\n",
    "    def get_config(self):\n",
    "        config = super().get_config()\n",
    "        config.update({'embedding_dim': self.embedding_dim})\n",
    "        return config\n",
    "\n",
    "class AddPositionalEmbedding(PrunableClusterableLayer):\n",
    "    \"\"\"Layer to add positional embeddings to the tokens\"\"\"\n",
    "    def __init__(self, seq_len, embedding_dim, **kwargs):\n",
    "        super().__init__(**kwargs)\n",
    "        self.embedding_dim = embedding_dim\n",
    "        self.seq_len = seq_len\n",
    "\n",
    "    def build(self, input_shape):\n",
    "        self.w = self.add_weight(shape=(1, self.seq_len, self.embedding_dim), initializer=None,\n",
    "                                 trainable=True, name='pos_emb')\n",
    "        super().build(input_shape)\n",
    "\n",
    "    def call(self, x):\n",
    "        return x + self.w\n",
    "\n",
    "    def get_config(self):\n",
    "        config = super().get_config()\n",
    "        config.update({'embedding_dim': self.embedding_dim, 'seq_len': self.seq_len})\n",
    "        return config\n",
    "\n",
    "class Scale(PrunableClusterableLayer):\n",
    "    \"\"\"Multiply with gamma (LayerNorm)\"\"\"\n",
    "    def __init__(self, axes, **kwargs):\n",
    "        super().__init__(**kwargs)\n",
    "        self.axes = axes\n",
    "\n",
    "    def build(self, input_shape):\n",
    "        param_shape = [input_shape[dim] for dim in self.axes]\n",
    "        self.w = self.add_weight(name='gamma', shape=param_shape,\n",
    "                                 trainable=True, initializer='ones')\n",
    "        super().build(input_shape)\n",
    "\n",
    "    def call(self, x):\n",
    "        return tf.multiply(x, self.w)\n",
    "\n",
    "    def get_config(self):\n",
    "        config = super().get_config()\n",
    "        config.update({'axes': self.axes})\n",
    "        return config\n",
    "\n",
    "class Centre(PrunableClusterableLayer):\n",
    "    \"\"\"Add beta (LayerNorm)\"\"\"\n",
    "    def __init__(self, axes, **kwargs):\n",
    "        super().__init__(**kwargs)\n",
    "        self.axes = axes\n",
    "\n",
    "    def build(self, input_shape):\n",
    "        param_shape = [input_shape[dim] for dim in self.axes]\n",
    "        self.w = self.add_weight(name='beta', shape=param_shape,\n",
    "                                 trainable=True, initializer='zeros')\n",
    "        super().build(input_shape)\n",
    "\n",
    "    def call(self, x):\n",
    "        return tf.math.add(x, self.w)\n",
    "\n",
    "    def get_config(self):\n",
    "        config = super().get_config()\n",
    "        config.update({'axes': self.axes})\n",
    "        return config"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5c0abcb4",
   "metadata": {},
   "source": [
    "### 2. Now that these low-level operators are defined as Keras layers, we can start writing ViT layers such as multi-head attention or layer normalisation functionally:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a2fd1ae8",
   "metadata": {},
   "outputs": [],
   "source": [
    "Tanh = tf.keras.layers.Activation('tanh')\n",
    "\n",
    "def patch_encoder(inp, patch_size, num_patches, embedding_dim):\n",
    "    \"\"\"\n",
    "    Patch encoder layer, extracts patches from the image, flattens them \n",
    "    and adds the class token and positional embedding vectors.\n",
    "    \"\"\"\n",
    "    x = tf.keras.layers.Conv2D(filters=embedding_dim, kernel_size=patch_size,\n",
    "                               strides=patch_size, name='patch_encoder/conv2d')(inp)\n",
    "    x = tf.keras.layers.Reshape((num_patches, embedding_dim))(x)\n",
    "\n",
    "    # add the class token\n",
    "    cls_token = BroadcastToken(embedding_dim=embedding_dim, name='patch_encoder/cls_token')(inp)\n",
    "    x = tf.keras.layers.Concatenate(axis=1)([cls_token, x])\n",
    "\n",
    "    x = AddPositionalEmbedding(seq_len=(num_patches + 1),  # +1 for the class token\n",
    "                               embedding_dim=embedding_dim,\n",
    "                               name='patch_encoder/add_pos_emb')(x)\n",
    "    return x\n",
    "\n",
    "def self_attention(x, n_heads, dim, name='mha'):\n",
    "    \"\"\"Multi-head attention layer\"\"\"\n",
    "    depth = dim // n_heads\n",
    "\n",
    "    q = tf.keras.layers.Dense(units=dim, name=f'{name}/query')(x)\n",
    "    k = tf.keras.layers.Dense(units=dim, name=f'{name}/key')(x)\n",
    "    v = tf.keras.layers.Dense(units=dim, name=f'{name}/value')(x)\n",
    "\n",
    "    q = tf.keras.layers.Reshape((-1, n_heads, depth))(q)\n",
    "    q = tf.keras.layers.Permute((2, 1, 3))(q)\n",
    "    k = tf.keras.layers.Reshape((-1, n_heads, depth))(k)\n",
    "    k = tf.keras.layers.Permute((2, 1, 3))(k)\n",
    "    v = tf.keras.layers.Reshape((-1, n_heads, depth))(v)\n",
    "    v = tf.keras.layers.Permute((2, 1, 3))(v)\n",
    "\n",
    "    qk = ScalarMultiply(depth ** -0.5)(MatMul(transpose_b=True)([q, k]))\n",
    "    attn_weights = tf.keras.layers.Softmax(axis=-1)(qk)\n",
    "\n",
    "    attn_out = MatMul()([attn_weights, v]) \n",
    "    attn_out = tf.keras.layers.Permute((2, 1, 3))(attn_out)\n",
    "    attn_out = tf.keras.layers.Reshape((-1, dim))(attn_out)\n",
    "    out = tf.keras.layers.Dense(dim, name=f'{name}/output_dense')(attn_out)\n",
    "\n",
    "    return out\n",
    "\n",
    "def layer_norm(x, axes=2, epsilon=0.001, name='layer_norm', trainable=True):\n",
    "    \"\"\"LayerNormalization\"\"\"\n",
    "    if isinstance(axes, int): axes = [axes]\n",
    "\n",
    "    mean = Mean(axes=axes)(x)\n",
    "    ## This block can be replaced with a squared_difference layer ##\n",
    "    diff = Subtract()([x, StopGradient()(mean)])                  ##\n",
    "    sq_diff = Multiply()([diff, diff])                            ##\n",
    "    ## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ##\n",
    "    variance = Mean(axes=axes, name=f'{name}/variance')(sq_diff)\n",
    "    if not trainable:\n",
    "        inv = RSqrt()(variance)\n",
    "        x = Multiply()([diff, inv])\n",
    "    else:\n",
    "        inv = RSqrt()(ClipMin(min_val=epsilon)(variance))  # ClipMin prevents division by 0.\n",
    "        x = Subtract(name=f'{name}/grad_subtract')([x, mean])  # This layer is removed for inference so it is named.\n",
    "        x = Multiply()([x, inv])\n",
    "\n",
    "    x = Scale(axes=axes)(x)\n",
    "    x = Centre(axes=axes)(x)\n",
    "\n",
    "    return x\n",
    "\n",
    "def gelu(x):\n",
    "    \"\"\"Functional definition of approximate GELU with Keras layers\"\"\"\n",
    "    res = Add()([x, ScalarMultiply(0.044715)(Multiply()([x, Multiply()([x, x])]))])\n",
    "    res = ScalarAdd(1.0)(Tanh(ScalarMultiply(math.sqrt(2 / math.pi))(res)))\n",
    "    res = ScalarMultiply(0.5)(res)\n",
    "    res = Multiply()([x, res])\n",
    "    return res\n",
    "\n",
    "def mlp(x, hidden_dim, out_dim):\n",
    "    \"\"\"Multi-layer perceptron block\"\"\"\n",
    "    x = tf.keras.layers.Dense(units=hidden_dim)(x)\n",
    "    x = gelu(x)\n",
    "    x = tf.keras.layers.Dense(units=out_dim)(x)\n",
    "    return x"
   ]
  },
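  {
   "cell_type": "markdown",
   "id": "5e2f3a4b",
   "metadata": {},
   "source": [
    "As a quick numerical sanity check of the `gelu` definition above, the same tanh approximation can be evaluated directly with NumPy (using the `math` and `np` imports from the top of the notebook) and compared against the exact GELU, `x * Phi(x)`, where `Phi` is the standard normal CDF computed here via `math.erf`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6f3a4b5c",
   "metadata": {},
   "outputs": [],
   "source": [
    "x = np.linspace(-4, 4, 101)\n",
    "\n",
    "# Tanh approximation of GELU, matching the layer-based gelu() above\n",
    "inner = math.sqrt(2 / math.pi) * (x + 0.044715 * x ** 3)\n",
    "gelu_approx = 0.5 * x * (1.0 + np.tanh(inner))\n",
    "\n",
    "# Exact GELU: x * Phi(x), with Phi the standard normal CDF\n",
    "gelu_exact = x * 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2)) for v in x]))\n",
    "\n",
    "print('max abs error: {:.1e}'.format(np.abs(gelu_approx - gelu_exact).max()))"
   ]
  },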
  {
   "cell_type": "markdown",
   "id": "0b2464d0",
   "metadata": {},
   "source": [
    "### 3. Full functional model definition:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5ce1116",
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_vision_transformer(input_shape,\n",
    "                           n_classes,\n",
    "                           patch_size,\n",
    "                           embedding_dim,\n",
    "                           n_layers,\n",
    "                           n_attention_heads,\n",
    "                           mlp_hidden_dim,\n",
    "                           trainable=True):\n",
    "    \"\"\"\n",
    "    Args:\n",
    "        input_shape (tuple): Shape of the inputs, including the batch size.\n",
    "        n_classes (int): Number of classes in the dataset.\n",
    "        patch_size (int / tuple of ints): Size of the patches to extract from the images.\n",
    "        embedding_dim (int): Size of the embedded patch vectors.\n",
    "        n_layers (int): Number of transformer encoder layers.\n",
    "        n_attention_heads (int): Number of attention heads.\n",
    "        mlp_hidden_dim (int): Hidden layer size for the intermediate MLPs.\n",
    "        trainable (bool): If False, the layer normalisation blocks are built in their simplified inference form (default True).\n",
    "\n",
    "    Returns:\n",
    "        model (tf.keras.Model): The Keras model.\n",
    "    \"\"\"\n",
    "\n",
    "    if isinstance(patch_size, int): patch_size = (patch_size, patch_size)\n",
    "\n",
    "    # Calculate the number of patches\n",
    "    num_patches = (input_shape[1] * input_shape[2]) // (patch_size[0] * patch_size[1])\n",
    "\n",
    "    inp = tf.keras.layers.Input(shape=input_shape[1:], batch_size=input_shape[0], name='image')\n",
    "\n",
    "    # Patch encoder layer\n",
    "    x = patch_encoder(inp, patch_size, num_patches, embedding_dim)\n",
    "\n",
    "    for block in range(n_layers):\n",
    "        # Attention block\n",
    "        x1 = layer_norm(x, name=(f'layer_norm_{2 * block}' if block != 0 else 'layer_norm'), trainable=trainable)\n",
    "        x1 = self_attention(x1, n_attention_heads, embedding_dim, name=(f'mha_{block}' if block != 0 else 'mha'))\n",
    "        x1 = tf.keras.layers.Add()([x1, x])\n",
    "\n",
    "        # MLP block\n",
    "        x2 = layer_norm(x1, name=f'layer_norm_{2 * block + 1}', trainable=trainable)\n",
    "        x2 = mlp(x2, mlp_hidden_dim, embedding_dim)\n",
    "        x = tf.keras.layers.Add()([x2, x1])\n",
    "\n",
    "    x = layer_norm(x, name=f'layer_norm_{2 * block + 2}', trainable=trainable)\n",
    "\n",
    "    ## ~ Classification head ~ ##\n",
    "    cls_head = Slice(0)(x)\n",
    "    out = tf.keras.layers.Dense(n_classes, kernel_initializer='zeros', name='cls_head')(cls_head)\n",
    "    ## ~~~~~~~~~~~~~~~~~~~~~~~ ##\n",
    "\n",
    "    model = tf.keras.Model(inputs=inp, outputs=out)\n",
    "\n",
    "    return model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9a9060af",
   "metadata": {},
   "source": [
    "## Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "13f55e4d",
   "metadata": {},
   "outputs": [],
   "source": [
    "BATCH_SIZE = 32\n",
    "\n",
    "# Load the MNIST dataset\n",
    "(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()\n",
    "X_train, X_test = (X_train[..., np.newaxis] / 255.0), (X_test[..., np.newaxis] / 255.0)\n",
    "train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(1000) \\\n",
    "                                                                 .batch(BATCH_SIZE, drop_remainder=True) \\\n",
    "                                                                 .prefetch(tf.data.AUTOTUNE)\n",
    "test_ds = tf.data.Dataset.from_tensor_slices((X_test, y_test)).batch(BATCH_SIZE, drop_remainder=True) \\\n",
    "                                                              .prefetch(tf.data.AUTOTUNE)\n",
    "\n",
    "def compile_and_fit(model, **kwargs):\n",
    "    model.compile(optimizer=\"adam\",\n",
    "                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
    "                  metrics=[\"accuracy\"])\n",
    "    model.fit(train_ds, validation_data=test_ds, **kwargs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "96da5e80",
   "metadata": {},
   "outputs": [],
   "source": [
    "model = get_vision_transformer(input_shape=(BATCH_SIZE, 28, 28, 1),  # (batch_size, height, width, channels)\n",
    "                               n_classes=10,\n",
    "                               patch_size=(4, 4),\n",
    "                               embedding_dim=16,\n",
    "                               n_layers=2,\n",
    "                               n_attention_heads=2,\n",
    "                               mlp_hidden_dim=16)\n",
    "\n",
    "compile_and_fit(model, epochs=3)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "edf7f692",
   "metadata": {},
   "source": [
    "## Pruning, Clustering & QAT"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d49a31f",
   "metadata": {},
   "source": [
    "### 1. Apply the pruning API"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5e3048bc",
   "metadata": {},
   "outputs": [],
   "source": [
    "prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude\n",
    "strip_pruning = tfmot.sparsity.keras.strip_pruning\n",
    "\n",
    "N_EPOCHS = 1\n",
    "pruning_params = {\n",
    "    'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.1, final_sparsity=0.5,\n",
    "                                                             begin_step=0, end_step=int(len(train_ds)*N_EPOCHS*0.7))\n",
    "}\n",
    "pruned_model = prune_low_magnitude(model, **pruning_params)\n",
    "# Fine-tune with pruning\n",
    "compile_and_fit(pruned_model, epochs=N_EPOCHS, callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])\n",
    "stripped_pruned_model = strip_pruning(pruned_model)\n",
    "print('Success')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bcc2c089",
   "metadata": {},
   "source": [
    "#### 1.1. Check that the weights are pruned"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "490e2923",
   "metadata": {},
   "outputs": [],
   "source": [
    "def print_sparsity(model):\n",
    "    for w in model.weights:\n",
    "        w_np = w.numpy()\n",
    "        sparsity = (w_np.size - np.count_nonzero(w_np)) / w_np.size * 100.0\n",
    "        if sparsity > 0:\n",
    "            print('    {} - {:.1f}% sparsity'.format(w.name, sparsity))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a6542138",
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Sparse weights:')\n",
    "print_sparsity(stripped_pruned_model)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7a084e9d",
   "metadata": {},
   "source": [
    "### 2. Apply the clustering API"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a24393d1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from tensorflow_model_optimization.python.core.clustering.keras.experimental import cluster\n",
    "\n",
    "cluster_weights = cluster.cluster_weights\n",
    "CentroidInitialization = tfmot.clustering.keras.CentroidInitialization\n",
    "strip_clustering = tfmot.clustering.keras.strip_clustering\n",
    "\n",
    "# Add sparsity-preserving clustering wrappers\n",
    "pruned_clustered_model = cluster_weights(stripped_pruned_model,\n",
    "                                         number_of_clusters=4,\n",
    "                                         cluster_centroids_init=CentroidInitialization.KMEANS_PLUS_PLUS,\n",
    "                                         preserve_sparsity=True)\n",
    "# Fine-tune with clustering\n",
    "compile_and_fit(pruned_clustered_model, epochs=1)\n",
    "stripped_pruned_clustered_model = strip_clustering(pruned_clustered_model)\n",
    "print('Success')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0180d56c",
   "metadata": {},
   "source": [
    "#### 2.1. Check that the weights are pruned and clustered"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6169783c",
   "metadata": {},
   "outputs": [],
   "source": [
    "def print_clusters(model):\n",
    "    for w in model.weights:\n",
    "        w_np = w.numpy()\n",
    "        n_unique = len(np.unique(w_np))\n",
    "        if n_unique < w_np.size:\n",
    "            print('    {} - {} unique weights'.format(w.name, n_unique))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2b1d8480",
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Sparse weights:')\n",
    "print_sparsity(stripped_pruned_clustered_model)\n",
    "print('Clustered weights:')\n",
    "print_clusters(stripped_pruned_clustered_model)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ec4c7b4",
   "metadata": {},
   "source": [
    "**Warning: The original model is modified after calling [`prune_low_magnitude`](https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/sparsity/keras/prune_low_magnitude) or [`cluster_weights`](https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/clustering/keras/cluster_weights).**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "209c2d0a",
   "metadata": {},
   "source": [
    "### 3. Quantisation-aware training API\n",
    "#### 3.1. To use the custom Keras layers we defined, we need to pass a [`QuantizeConfig`](https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/quantization/keras/QuantizeConfig) for each of these layers.\n",
    "\n",
    "For Keras layers which are already supported in TFMOT, a default `QuantizeConfig` class is assigned automatically. However, custom `QuantizeConfig` instances can also be created for these layers to give finer control over how they are quantised."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "40c11c58",
   "metadata": {},
   "outputs": [],
   "source": [
    "from tensorflow_model_optimization.quantization.keras import QuantizeConfig, quantizers\n",
    "\n",
    "LastValueQuantizer = quantizers.LastValueQuantizer\n",
    "MovingAverageQuantizer = quantizers.MovingAverageQuantizer\n",
    "AllValuesQuantizer = quantizers.AllValuesQuantizer\n",
    "\n",
    "class NoOpQuantizeConfig(QuantizeConfig):\n",
    "    \"\"\"QuantizeConfig which does not quantize any part of the layer.\"\"\"\n",
    "\n",
    "    def get_weights_and_quantizers(self, layer):\n",
    "        return []\n",
    "\n",
    "    def get_activations_and_quantizers(self, layer):\n",
    "        return []\n",
    "\n",
    "    def set_quantize_weights(self, layer, quantize_weights):\n",
    "        pass\n",
    "\n",
    "    def set_quantize_activations(self, layer, quantize_activations):\n",
    "        pass\n",
    "\n",
    "    def get_output_quantizers(self, layer):\n",
    "        return []\n",
    "\n",
    "    def get_config(self):\n",
    "        return {}\n",
    "\n",
    "class OutputQuantizeConfig(QuantizeConfig):\n",
    "    \"\"\"QuantizeConfig which only quantizes the output of a layer.\"\"\"\n",
    "\n",
    "    def get_weights_and_quantizers(self, layer):\n",
    "        return []\n",
    "\n",
    "    def get_activations_and_quantizers(self, layer):\n",
    "        return []\n",
    "\n",
    "    def set_quantize_weights(self, layer, quantize_weights):\n",
    "        pass\n",
    "\n",
    "    def set_quantize_activations(self, layer, quantize_activations):\n",
    "        pass\n",
    "\n",
    "    def get_output_quantizers(self, layer):\n",
    "        return [MovingAverageQuantizer(num_bits=8, per_axis=False, symmetric=False, narrow_range=False)]\n",
    "\n",
    "    def get_config(self):\n",
    "        return {}\n",
    "\n",
    "class WeightQuantizeConfig(QuantizeConfig):\n",
    "    \"\"\"QuantizeConfig which quantizes the custom weights in the patch encoder and layer normalisation layers.\"\"\"\n",
    "\n",
    "    def __init__(self):\n",
    "        self.weight_quantizer = LastValueQuantizer(num_bits=8, per_axis=False,\n",
    "                                                   symmetric=True, narrow_range=True)\n",
    "        self.activation_quantizer = MovingAverageQuantizer(num_bits=8, per_axis=False,\n",
    "                                                           symmetric=False, narrow_range=False)\n",
    "\n",
    "    def get_weights_and_quantizers(self, layer):\n",
    "        return [(layer.w, self.weight_quantizer)]\n",
    "\n",
    "    def get_activations_and_quantizers(self, layer):\n",
    "        return []\n",
    "\n",
    "    def set_quantize_weights(self, layer, quantize_weights):\n",
    "        layer.w = quantize_weights[0]\n",
    "\n",
    "    def set_quantize_activations(self, layer, quantize_activations):\n",
    "        pass\n",
    "\n",
    "    def get_output_quantizers(self, layer):\n",
    "        return [self.activation_quantizer]\n",
    "\n",
    "    def get_config(self):\n",
    "        return {}\n",
    "\n",
    "class VarianceQuantizeConfig(QuantizeConfig):\n",
    "    \"\"\"QuantizeConfig for the variance calculation in the layer normalisation layer.\"\"\"\n",
    "\n",
    "    def get_weights_and_quantizers(self, layer):\n",
    "        return []\n",
    "\n",
    "    def get_activations_and_quantizers(self, layer):\n",
    "        return []\n",
    "\n",
    "    def set_quantize_weights(self, layer, quantize_weights):\n",
    "        pass\n",
    "\n",
    "    def set_quantize_activations(self, layer, quantize_activations):\n",
    "        pass\n",
    "\n",
    "    def get_output_quantizers(self, layer):\n",
    "        return [AllValuesQuantizer(num_bits=8, per_axis=False, symmetric=False, narrow_range=False)]\n",
    "\n",
    "    def get_config(self):\n",
    "        return {}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a928d97d",
   "metadata": {},
   "source": [
    "Since custom layers and `QuantizeConfig`s are used, the whole model cannot be wrapped with the QAT wrappers directly. <br>\n",
    "Instead, we first write a function that applies a wrapper to individual layers using `tf.keras.models.clone_model`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "43837c2e",
   "metadata": {},
   "outputs": [],
   "source": [
    "def apply_wrapper(wrapper_function, layer_param_dict):\n",
    "\n",
    "    def wrap_layer(layer):\n",
    "        if layer.name in layer_param_dict.keys():\n",
    "            return wrapper_function(layer, **layer_param_dict[layer.name])\n",
    "        return layer\n",
    "\n",
    "    return wrap_layer\n",
    "\n",
    "def layer_wrapper(model, wrapper_function, layer_param_dict):\n",
    "    return tf.keras.models.clone_model(model, clone_function=apply_wrapper(wrapper_function, layer_param_dict))"
   ]
  },
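  {
   "cell_type": "markdown",
   "id": "7a4b5c6d",
   "metadata": {},
   "source": [
    "To see the closure pattern in isolation, here is a toy illustration that needs no TensorFlow (`FakeLayer` and `fake_wrapper` are hypothetical stand-ins, not TFMOT objects): layers whose names appear in `layer_param_dict` are passed to the wrapper function together with their parameters, and every other layer is returned unchanged."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8b5c6d7e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy stand-ins to exercise apply_wrapper() from the cell above\n",
    "class FakeLayer:\n",
    "    def __init__(self, name):\n",
    "        self.name = name\n",
    "\n",
    "def fake_wrapper(layer, tag):\n",
    "    return (tag, layer.name)\n",
    "\n",
    "wrap = apply_wrapper(fake_wrapper, {'dense_1': {'tag': 'quantize'}})\n",
    "print(wrap(FakeLayer('dense_1')))     # ('quantize', 'dense_1')\n",
    "print(wrap(FakeLayer('conv_1')).name) # conv_1 (left unwrapped)"
   ]
  },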
  {
   "cell_type": "markdown",
   "id": "26030a8c",
   "metadata": {},
   "source": [
    "The custom layers should be quantised with the following `QuantizeConfig` classes:\n",
    "\n",
    "| Custom Layer | QuantizeConfig |\n",
    "| :- | :-: |\n",
    "| ClipMin | NoOpQuantizeConfig |\n",
    "| Slice | NoOpQuantizeConfig |\n",
    "| StopGradient | NoOpQuantizeConfig |\n",
    "| MatMul | OutputQuantizeConfig |\n",
    "| Multiply | OutputQuantizeConfig |\n",
    "| ScalarMultiply | OutputQuantizeConfig |\n",
    "| Add | OutputQuantizeConfig |\n",
    "| ScalarAdd | OutputQuantizeConfig |\n",
    "| Subtract | OutputQuantizeConfig |\n",
    "| RSqrt | OutputQuantizeConfig |\n",
    "| Mean <br> Mean (variance) | OutputQuantizeConfig <br> VarianceQuantizeConfig |\n",
    "| BroadcastToken | WeightQuantizeConfig |\n",
    "| AddPositionalEmbedding | WeightQuantizeConfig |\n",
    "| Scale | WeightQuantizeConfig |\n",
    "| Centre | WeightQuantizeConfig |"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8ed06321",
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_quant_configs(model):\n",
    "    layer_param_dict = {}  # stores {Layer_Name: QuantizeConfig} pairs\n",
    "    scope = {}  # stores all custom objects\n",
    "\n",
    "    for layer in model.layers:\n",
    "\n",
    "        if any([x in layer.name for x in ['clip', 'slice', 'stop_gradient']]):\n",
    "            layer_param_dict[layer.name] = {'quantize_config': NoOpQuantizeConfig()}\n",
    "            scope[layer.__class__.__name__] = layer.__class__\n",
    "\n",
    "        elif any([x in layer.name for x in ['mat_mul', 'multiply', 'scalar_multiply', 'add', \\\n",
    "                                            'scalar_add', 'mean', 'subtract', 'r_sqrt']]):\n",
    "            layer_param_dict[layer.name] = {'quantize_config': OutputQuantizeConfig()}\n",
    "            scope[layer.__class__.__name__] = layer.__class__\n",
    "\n",
    "        elif any([x in layer.name for x in ['patch_encoder/cls_token', 'patch_encoder/add_pos_emb', \\\n",
    "                                            'scale', 'centre']]):\n",
    "            layer_param_dict[layer.name] = {'quantize_config': WeightQuantizeConfig()}\n",
    "            scope[layer.__class__.__name__] = layer.__class__\n",
    "\n",
    "        elif 'variance' in layer.name:\n",
    "            layer_param_dict[layer.name] = {'quantize_config': VarianceQuantizeConfig()}\n",
    "            scope[layer.__class__.__name__] = layer.__class__\n",
    "\n",
    "    scope['NoOpQuantizeConfig'] = NoOpQuantizeConfig\n",
    "    scope['OutputQuantizeConfig'] = OutputQuantizeConfig\n",
    "    scope['WeightQuantizeConfig'] = WeightQuantizeConfig\n",
    "    scope['VarianceQuantizeConfig'] = VarianceQuantizeConfig\n",
    "\n",
    "    return layer_param_dict, scope"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "55225a25",
   "metadata": {},
   "source": [
    "#### 3.2 Load the necessary API classes/functions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff279a49",
   "metadata": {},
   "outputs": [],
   "source": [
    "quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer\n",
    "quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model\n",
    "quantize_apply = tfmot.quantization.keras.quantize_apply\n",
    "quantize_scope = tfmot.quantization.keras.quantize_scope\n",
    "Default8BitClusterPreserveQuantizeScheme = tfmot.experimental.combine.Default8BitClusterPreserveQuantizeScheme\n",
    "strip_clustering_cqat = tfmot.experimental.combine.strip_clustering_cqat"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1f6598c0",
   "metadata": {},
   "source": [
    "#### 3.3 Apply QAT\n",
    "\n",
    "When calling the `quantize_apply` function, if an unsupported layer is missing from `layer_param_dict` or the `scope`, TFMOT will throw an error."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "881a8ee9",
   "metadata": {},
   "outputs": [],
   "source": [
    "layer_param_dict, scope = get_quant_configs(stripped_pruned_clustered_model)\n",
    "\n",
    "# Wrap each custom layer with the corresponding QuantizeConfig:\n",
    "pcqat_model = layer_wrapper(stripped_pruned_clustered_model, quantize_annotate_layer, layer_param_dict)\n",
    "# Quantize the rest of the model with the API defaults:\n",
    "pcqat_model = quantize_annotate_model(pcqat_model)\n",
    "\n",
    "with quantize_scope(scope):\n",
    "    pcqat_model = quantize_apply(pcqat_model, scheme=Default8BitClusterPreserveQuantizeScheme(preserve_sparsity=True))\n",
    "\n",
    "compile_and_fit(pcqat_model, epochs=2)\n",
    "pcqat_model = strip_clustering_cqat(pcqat_model)  # strip clustering variables\n",
    "\n",
    "WEIGHTS_PATH = './ViT_PCQAT.h5'\n",
    "pcqat_model.save_weights(WEIGHTS_PATH)\n",
    "print('Success')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ce333f92",
   "metadata": {},
   "source": [
    "#### 3.4. Check that the weights are still pruned and clustered"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0058a007",
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Sparse weights:')\n",
    "print_sparsity(pcqat_model)\n",
    "print('Clustered weights:')\n",
    "print_clusters(pcqat_model)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "26864702",
   "metadata": {},
   "source": [
    "### 4. Generate an int8 TFLite file\n",
    "\n",
    "If we attempt to directly generate a TFLite file using the fine-tuned model above:\n",
    "1. It will not have a correct batch size of 1.\n",
    "2. It will have operators which are unnecessary during inference. Precisely, the extra `Subtract` operators and `ClipMin` operator in the layer normalisation blocks, which were used during training and fine-tuning, should be removed from the graph before creating the TFLite file.\n",
    "\n",
    "Therefore the network should be redefined with a batch size of 1 and with the redundant operators removed. The weights of the fine-tuned optimised model can then be loaded into this new model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "86b93732",
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.keras.backend.clear_session()  # reset layer name counters\n",
    "\n",
    "net = get_vision_transformer(input_shape=(1, 28, 28, 1),  # (batch_size, height, width, channels)\n",
    "                             n_classes=10,\n",
    "                             patch_size=(4, 4),\n",
    "                             embedding_dim=16,\n",
    "                             n_layers=2,\n",
    "                             n_attention_heads=2,\n",
    "                             mlp_hidden_dim=16,\n",
    "                             trainable=False)\n",
    "layer_param_dict, scope = get_quant_configs(net)\n",
    "net = quantize_annotate_model(layer_wrapper(net, quantize_annotate_layer, layer_param_dict))\n",
    "with quantize_scope(scope):\n",
    "    net = quantize_apply(net)\n",
    "\n",
    "net.load_weights(WEIGHTS_PATH, by_name=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d00bbff7",
   "metadata": {},
   "outputs": [],
   "source": [
    "MODEL_PATH = './ViT_PCQAT_int8.tflite'\n",
    "\n",
    "converter = tf.lite.TFLiteConverter.from_keras_model(net)\n",
    "converter.optimizations = [tf.lite.Optimize.DEFAULT]\n",
    "converter.inference_input_type = tf.int8\n",
    "converter.inference_output_type = tf.int8\n",
    "\n",
    "# Experimental flag which improves efficiency for some devices\n",
    "converter._experimental_disable_batchmatmul_unfold = True\n",
    "\n",
    "tflite_model = converter.convert()\n",
    "with open(MODEL_PATH, \"wb+\") as tflite_file:\n",
    "    tflite_file.write(tflite_model)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9c7df6b2",
   "metadata": {},
   "source": [
    "### 5. Evaluate the TFLite model"
   ]
  },
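  {
   "cell_type": "markdown",
   "id": "d4e5f6a7",
   "metadata": {},
   "source": [
    "The evaluation loop below moves data between the float and int8 domains using the affine quantisation parameters $(s, z)$ (scale and zero point) read from the interpreter's `get_input_details`/`get_output_details`:\n",
    "\n",
    "$$q = \\frac{x}{s} + z \\quad \\text{(then cast to int8)}, \\qquad x \\approx s \\, (q - z)$$\n",
    "\n",
    "The first mapping quantises the input images and the second dequantises the output logits."
   ]
  },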
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7a5d2a75",
   "metadata": {},
   "outputs": [],
   "source": [
    "interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)\n",
    "interpreter.allocate_tensors()\n",
    "\n",
    "input_details = interpreter.get_input_details()\n",
    "output_details = interpreter.get_output_details()\n",
    "input_scale, input_zero_point = input_details[0]['quantization']\n",
    "output_scale, output_zero_point = output_details[0]['quantization']\n",
    "\n",
    "int8_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='int8_accuracy')\n",
    "progbar = tf.keras.utils.Progbar(len(X_test), stateful_metrics=['accuracy'])\n",
    "for step, (img, lbl) in enumerate(zip(X_test, y_test)):\n",
    "    # Set input tensor\n",
    "    img = img[np.newaxis, ...] / input_scale + input_zero_point\n",
    "    interpreter.set_tensor(input_details[0]['index'], tf.cast(img, input_details[0]['dtype']))\n",
    "    interpreter.invoke()\n",
    "\n",
    "    # Get output tensor\n",
    "    output_data = interpreter.get_tensor(output_details[0]['index'])\n",
    "    output_data = output_scale * (output_data.astype(np.float32) - output_zero_point)\n",
    "\n",
    "    # Update accuracy\n",
    "    int8_accuracy.update_state(lbl, output_data)\n",
    "    progbar.update(step + 1, values=[('accuracy', int8_accuracy.result().numpy())])\n",
    "\n",
    "print('Accuracy:', int8_accuracy.result().numpy())"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
