{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Model Compression"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The need of model compression\n",
    "\n",
    "Models are getting larger and larger everyday. State of the art models gets super large super fast. Model compression is a method to combat the stress that this trend puts on your device: it makes your model smaller, so that it can be transferred over the Internet, it can fit in your memory to run faster, or it can just save a lot of disk usage. Model compression is the science of reducing the size of a model.\n",
    "\n",
    "Of course, model compression does come with its downsides. After compressed, models will get less accurate. In many cases though, it's a sacrifice that people are willing to take."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Ways of doing model compression\n",
    "\n",
    "There are many ways of doing model compression:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Unstructured pruning\n",
    "\n",
    "Because of how deep learning models are based on linear algebra, zero values in a layer in the model simply does not do anything but waste space in memory. Pruning is the art of making the model's layers less dense and more sparse, so that it can only store things that matter. "
   ]
  },
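  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sketch (assuming NumPy; `magnitude_prune` and the threshold rule here are illustrative, not a standard API), magnitude pruning zeroes out the weights with the smallest absolute values:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def magnitude_prune(weights, sparsity=0.5):\n",
    "    # Zero out the `sparsity` fraction of weights with the smallest magnitudes.\n",
    "    flat = np.abs(weights).ravel()\n",
    "    k = int(flat.size * sparsity)  # number of weights to drop\n",
    "    if k == 0:\n",
    "        return weights.copy()\n",
    "    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude\n",
    "    return weights * (np.abs(weights) > threshold)\n",
    "\n",
    "layer = np.random.default_rng(0).normal(size=(4, 4))\n",
    "pruned = magnitude_prune(layer, sparsity=0.5)  # at least half the entries are now zero"
   ]
  },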
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Structured pruning\n",
    "\n",
    "As great as unstructured pruning is, dealing with sparse matrices (which is produced a lot in unstructured pruning) is slow because it's difficult to run it on GPU. Structured pruning does the opposite, it finds a filter/channel/matrix to prune, so that the end result is still a network that consists of dense matrices."
   ]
  },
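  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One illustrative way to do this (an assumed sketch with NumPy, pruning whole output neurons of a dense layer by their L2 norm; `prune_neurons` is a made-up helper, not a library function):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def prune_neurons(weights, bias, keep=0.5):\n",
    "    # Keep only the `keep` fraction of output neurons (rows) with the largest L2 norm.\n",
    "    norms = np.linalg.norm(weights, axis=1)  # one norm per output neuron\n",
    "    n_keep = max(1, int(round(weights.shape[0] * keep)))\n",
    "    keep_idx = np.sort(np.argsort(norms)[-n_keep:])\n",
    "    return weights[keep_idx], bias[keep_idx]\n",
    "\n",
    "W = np.random.default_rng(1).normal(size=(8, 16))  # 8 output neurons, 16 inputs each\n",
    "b = np.random.default_rng(1).normal(size=8)\n",
    "W_small, b_small = prune_neurons(W, b, keep=0.5)\n",
    "# W_small is still a dense matrix, just smaller: shape (4, 16)"
   ]
  },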
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Quantization\n",
    "\n",
    "Quantization means to store the weights of the model in a less accurate format to save weight. For example, if your model's weight is 64-bit floating point numbers, converting those numbers to 32-bit floating point numbers will slash off half the amount of space. It's as simple as that. Recently there are also 16-bit floating point models that makes storing the models efficiently even easier.\n",
    "\n",
    "Some people take quantization a bit far and use integers for storing the values of the model. It's feasible but hurts performance quite a lot."
   ]
  },
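  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The arithmetic is easy to see with NumPy (a toy illustration of affine integer quantization on random values; real frameworks ship their own quantization tooling):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "weights = np.random.default_rng(2).normal(size=1000)  # float64 by default\n",
    "\n",
    "w32 = weights.astype(np.float32)  # halving the precision halves the bytes\n",
    "\n",
    "# Going further: affine quantization maps the floats onto 256 integer levels.\n",
    "scale = (weights.max() - weights.min()) / 255.0\n",
    "zero_point = weights.min()\n",
    "q = np.round((weights - zero_point) / scale).astype(np.uint8)\n",
    "\n",
    "# Dequantize to approximate the originals; the error is bounded by the scale.\n",
    "dq = q * scale + zero_point\n",
    "\n",
    "print(weights.nbytes, w32.nbytes, q.nbytes)  # 8000 4000 1000"
   ]
  },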
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Summary\n",
    "\n",
    "These three ways (or two if you merge the two pruning methods) are the main ways people reduce the size of their model without training new ones.\n",
    "\n",
    "If training a new model is an option, also see knowledge distillation."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
