{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Compress and Evaluate Image Generation Models"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "<a target=\"_blank\" href=\"https://colab.research.google.com/github/PrunaAI/pruna/blob/v|version|/docs/tutorials/image_generation.ipynb\">\n",
    "    <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
    "</a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "| Component | Details |\n",
    "|-----------|---------|\n",
    "| **Goal** | Demonstrate a standard workflow for optimizing and evaluating an image generation model |\n",
    "| **Model** | [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) |\n",
    "| **Dataset** | [nannullna/laion_subset](https://huggingface.co/datasets/nannullna/laion_subset) |\n",
    "| **Optimization Algorithms** | cacher(deepcache), compiler(torch_compile), quantizer(hqq_diffusers) |\n",
    "| **Evaluation Metrics** | `latency`, `throughput`, `clip_score` |\n",
    "\n",
    "## Getting Started\n",
    "\n",
    "To install the dependencies, run the following command:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install pruna"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "device = \"cuda\" if torch.cuda.is_available() else \"mps\" if torch.backends.mps.is_available() else \"cpu\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "source": [
    "The device is set to the best available option to maximize the benefits of the optimization process."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Load the Model\n",
    "\n",
    "Before optimizing the model, we first ensure that it loads correctly and fits into memory. For this example, we will use a lightweight image generation model, [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), but feel free to use any [text-to-image model on Hugging Face](https://huggingface.co/models?pipeline_tag=text-to-image).\n",
    "\n",
    "Although Pruna works at least as well with much larger models, such as FLUX or SD3.5, a small model is a good starting point for walking through the steps of the optimization process."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from diffusers import DiffusionPipeline\n",
    "\n",
    "pipe = DiffusionPipeline.from_pretrained(\n",
    "    pretrained_model_name_or_path=\"stabilityai/stable-diffusion-xl-base-1.0\",\n",
    "    torch_dtype=torch.bfloat16,\n",
    ")\n",
    "pipe = pipe.to(device)"
   ]
  },
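  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before optimizing, it can help to get a rough sense of the pipeline's size. The following is a minimal sketch that counts the parameters of the UNet, which holds the bulk of SDXL's weights; it assumes the `pipe` loaded above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Count the UNet's parameters (assumes `pipe` from the cell above).\n",
    "n_params = sum(p.numel() for p in pipe.unet.parameters())\n",
    "print(f\"UNet parameters: {n_params / 1e9:.2f} B\")"
   ]
  },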
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we've loaded the pipeline, let's examine some of the outputs it can generate. We use an example from [this amazing prompt guide](https://strikingloo.github.io/stable-diffusion-vs-dalle-2)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = \"Editorial Style Photo, Bonsai Apple Tree, Task Lighting, Inspiring and Sunset, Afternoon, Beautiful, 4k\"\n",
    "image = pipe(\n",
    "    prompt,\n",
    "    generator=torch.Generator().manual_seed(42),\n",
    ")\n",
    "image.images[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we can see, the model is able to generate a beautiful image based on the provided input prompt.\n",
    "\n",
    "## 2. Define the SmashConfig\n",
    "\n",
    "Now that we've confirmed the model is functioning correctly, let's proceed with the optimization process by defining the `SmashConfig`, which will be used later to optimize the model.\n",
    "\n",
    "For diffusion models, the most important categories of optimization algorithms are cachers, compilers, and quantizers. Note that not all algorithms are compatible with every model. For Stable Diffusion models, the following options are available:\n",
    "\n",
    "<img src=\"../assets/images/stable_diffusion_algorithms.png\" alt=\"Stable Diffusion Algorithms\" height=\"100\"/>\n",
    "\n",
    "You can learn more about the various optimization algorithms and their hyperparameters in the [Algorithms Overview](https://docs.pruna.ai/en/stable/compression.html) section of the documentation.\n",
    "\n",
    "In this optimization, we'll combine [deepcache](https://docs.pruna.ai/en/stable/compression.html#deepcache), [torch_compile](https://docs.pruna.ai/en/stable/compression.html#torch-compile), and [hqq-diffusers](https://docs.pruna.ai/en/stable/compression.html#hqq-diffusers). We'll also update some of the parameters for these algorithms, setting `hqq_diffusers_weight_bits` to `4`. This is just one of many possible configurations and is intended to serve as an example.\n",
    "\n",
    "<img src=\"../assets/images/stable_diffusion_quantized.png\" alt=\"Stable Diffusion Algorithms\" height=\"100\"/>\n",
    "\n",
    "Let's define the `SmashConfig` object."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pruna import SmashConfig\n",
    "\n",
    "smash_config = SmashConfig(device=device)\n",
    "smash_config.add(\n",
    "    {\n",
    "        \"deepcache\": {\"interval\": 2},\n",
    "        \"torch_compile\": {},\n",
    "        \"hqq_diffusers\": {\"weight_bits\": 4, \"group_size\": 64, \"backend\": \"marlin\"},\n",
    "    }\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Smash the Model\n",
    "\n",
    "Now that we've defined the `SmashConfig` object, we can proceed to smash the model. We'll use the `smash` function, passing both the `model` and the `smash_config` as arguments. We first make a deep copy of the pipeline so that we can later compare the optimized model against an unmodified baseline.\n",
    "\n",
    "Let's smash the model, which should take around 20 seconds for this configuration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import copy\n",
    "\n",
    "from pruna import smash\n",
    "\n",
    "copy_pipe = copy.deepcopy(pipe).to(\"cpu\")\n",
    "smashed_pipe = smash(\n",
    "    model=pipe,\n",
    "    smash_config=smash_config,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we've smashed the model, let's verify that everything still works as expected by running inference with the smashed model.\n",
    "\n",
    "If you are using `torch_compile` as your compiler, expect the first inference call to take noticeably longer than subsequent ones, since it triggers compilation as a warmup step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = \"Editorial Style Photo, Bonsai Apple Tree, Task Lighting, Inspiring and Sunset, Afternoon, Beautiful, 4k\"\n",
    "image = smashed_pipe(\n",
    "    prompt,\n",
    "    generator=torch.Generator().manual_seed(42),\n",
    ")\n",
    "image.images[0]"
   ]
  },
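  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, you can time a few calls after the warmup above. This is a minimal sketch that assumes the `smashed_pipe` and `prompt` defined earlier; exact timings will vary with your hardware."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "# Time a few calls after warmup to estimate steady-state inference speed.\n",
    "for i in range(3):\n",
    "    start = time.perf_counter()\n",
    "    smashed_pipe(prompt, generator=torch.Generator().manual_seed(42))\n",
    "    print(f\"run {i}: {time.perf_counter() - start:.2f} s\")"
   ]
  },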
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we can see, the optimized model generates an image similar to the one from the original model.\n",
    "\n",
    "If you notice a significant difference, it can have several causes: the model, the configuration, the hardware, and so on. Since optimization can be non-deterministic, we encourage you to retry the optimization or to try different configurations and models to find the best fit for your use case. You can also reach out to us on [Discord](https://discord.gg/JFQmtFKCjd) if you have any questions or feedback.\n",
    "\n",
    "## 4. Evaluate the Smashed Model\n",
    "\n",
    "Now that the model has been optimized, we can evaluate its performance using the `EvaluationAgent`. This evaluation will include metrics like `latency` and `throughput` for speed and `clip_score` for the quality of the generated images.\n",
    "\n",
    "You can find a complete overview of all available metrics in our [documentation](https://docs.pruna.ai/)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pruna import PrunaModel\n",
    "from pruna.data.pruna_datamodule import PrunaDataModule\n",
    "from pruna.evaluation.evaluation_agent import EvaluationAgent\n",
    "from pruna.evaluation.metrics import (\n",
    "    LatencyMetric,\n",
    "    ThroughputMetric,\n",
    "    TorchMetricWrapper,\n",
    ")\n",
    "from pruna.evaluation.task import Task\n",
    "\n",
    "# Define the metrics\n",
    "metrics = [\n",
    "    LatencyMetric(n_iterations=20, n_warmup_iterations=5),\n",
    "    ThroughputMetric(n_iterations=20, n_warmup_iterations=5),\n",
    "    TorchMetricWrapper(\"clip_score\"),\n",
    "]\n",
    "\n",
    "# Define the datamodule\n",
    "datamodule = PrunaDataModule.from_string(\"LAION256\")\n",
    "datamodule.limit_datasets(10)\n",
    "\n",
    "# Define the task and evaluation agent\n",
    "task = Task(metrics, datamodule=datamodule, device=device)\n",
    "eval_agent = EvaluationAgent(task)\n",
    "\n",
    "# Evaluate base model and offload it to CPU\n",
    "wrapped_pipe = PrunaModel(model=copy_pipe)\n",
    "wrapped_pipe.move_to_device(device)\n",
    "base_model_results = eval_agent.evaluate(wrapped_pipe)\n",
    "wrapped_pipe.move_to_device(\"cpu\")\n",
    "\n",
    "# Evaluate smashed model and offload it to CPU\n",
    "smashed_pipe.move_to_device(device)\n",
    "smashed_model_results = eval_agent.evaluate(smashed_pipe)\n",
    "smashed_pipe.move_to_device(\"cpu\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can now review the evaluation results and compare the performance of the original model with the optimized version."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import Markdown, display  # noqa\n",
    "\n",
    "\n",
    "def make_comparison_table(base_model_results, smashed_model_results):  # noqa\n",
    "    header = \"| Metric | Base Model | Smashed Model | Improvement % |\\n\"\n",
    "    header += \"|\" + \"-----|\" * 4 + \"\\n\"\n",
    "    rows = []\n",
    "\n",
    "    for base, smashed in zip(base_model_results, smashed_model_results):\n",
    "        base_result = base.result\n",
    "        smashed_result = smashed.result\n",
    "        if base.higher_is_better:\n",
    "            diff = ((smashed_result - base_result) / base_result) * 100\n",
    "        else:\n",
    "            diff = ((base_result - smashed_result) / base_result) * 100\n",
    "        row = f\"| {base.name} | {base_result:.4f} {base.metric_units or ''}\"\n",
    "        row += f\"| {smashed_result:.4f} {smashed.metric_units or ''} | {diff:.2f}% |\"\n",
    "        rows.append(row)\n",
    "    return header + \"\\n\".join(rows)\n",
    "\n",
    "\n",
    "display(Markdown(make_comparison_table(base_model_results, smashed_model_results)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we can see, the optimized model is approximately 2× faster than the base model, while the CLIP score remains nearly unchanged. This is expected, given the nature of the optimization process.\n",
    "\n",
    "We can now save the optimized model to disk or share it with others:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# save the model to disk\n",
    "smashed_pipe.save_pretrained(\"sdxl-smashed\")\n",
    "# after saving the model, you can load it with\n",
    "# smashed_pipe = PrunaModel.from_pretrained(\"sdxl-smashed\")\n",
    "\n",
    "# save the model to HuggingFace\n",
    "# smashed_pipe.push_to_hub(\"PrunaAI/sdxl-smashed\")\n",
    "# smashed_pipe = PrunaModel.from_pretrained(\"PrunaAI/sdxl-smashed\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Conclusion\n",
    "\n",
    "In this tutorial, we demonstrated a standard workflow for optimizing and evaluating an image generation model using Pruna.\n",
    "\n",
    "We defined our optimization strategy using the `SmashConfig` object and applied it to the model with the `smash` function. We then evaluated the performance of the optimized model using the `EvaluationAgent`, comparing key metrics such as latency, throughput, and CLIP score.\n",
    "\n",
    "To support the workflow, we also used the `PrunaDataModule` to load the dataset and the `Task` object to configure the task and link it to the evaluation process.\n",
    "\n",
    "The results show that we can significantly improve runtime performance while maintaining a high level of output quality. This makes it easy to explore trade-offs and iterate on configurations to find the best optimization strategy for your specific use case."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Pruna",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
