{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Text to Video generation with Wan2.1 and OpenVINO\n",
    "\n",
    " Wan2.1 is a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.\n",
    "\n",
    " Built upon the mainstream diffusion transformer paradigm, Wan 2.1 achieves significant advancements in generative capabilities through a series of innovations, including a novel spatio-temporal variational autoencoder (VAE), scalable pre-training strategies, large-scale data construction, and automated evaluation metrics. These contributions collectively enhance the model's performance and versatility.\n",
    "\n",
    " You can find more details about the model in the [model card](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B) and the [original repository](https://github.com/Wan-Video/Wan2.1).\n",
    "\n",
    " In this tutorial, we consider how to convert, optimize, and run the Wan2.1 model using OpenVINO.\n",
    " Additionally, to achieve an inference speedup, we will apply the [CausVid](https://causvid.github.io/) distillation approach using LoRA.\n",
    "\n",
    " ![](https://causvid.github.io/images/methods.jpg)\n",
    "\n",
    " Current video diffusion models achieve impressive generation quality but struggle in interactive applications due to bidirectional attention dependencies: generating a single frame requires the model to process the entire sequence, including the future. CausVid addresses this limitation by adapting a pretrained bidirectional diffusion transformer into an autoregressive transformer that generates frames on the fly. To further reduce latency, the authors extend distribution matching distillation (DMD) to videos, distilling a 50-step diffusion model into a 4-step generator.\n",
    "\n",
    " The method distills a many-step, bidirectional video diffusion model into a 4-step, causal generator. The training process consists of two stages:\n",
    " 1. Student initialization: the causal student is pretrained on a small set of ODE solution pairs generated by the bidirectional teacher. This step helps stabilize the subsequent distillation training.\n",
    " 2. Asymmetric distillation: using the bidirectional teacher model, the causal student generator is trained through a distribution matching distillation loss.\n",
    "\n",
    "More details about CausVid can be found in the [paper](https://arxiv.org/abs/2412.07772), the [original repository](https://github.com/tianweiy/CausVid), and the [project page](https://causvid.github.io/).\n",
    "\n",
    "#### Table of contents:\n",
    "\n",
    "- [Prerequisites](#Prerequisites)\n",
    "- [Convert model to OpenVINO Intermediate Representation](#Convert-model-to-OpenVINO-Intermediate-Representation)\n",
    "    - [Compress model weights](#Compress-model-weights)\n",
    "- [Prepare model inference pipeline](#Prepare-model-inference-pipeline)\n",
    "    - [Select inference device](#Select-inference-device)\n",
    "- [Run OpenVINO Model Inference](#Run-OpenVINO-Model-Inference)\n",
    "- [Interactive demo](#Interactive-demo)\n",
    "\n",
    "\n",
    "### Installation Instructions\n",
    "\n",
    "This is a self-contained example that relies solely on its own code.\n",
    "\n",
    "We recommend running the notebook in a virtual environment. You only need a Jupyter server to start.\n",
    "For details, please refer to [Installation Guide](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/README.md#-installation-guide).\n",
    "\n",
    "<img referrerpolicy=\"no-referrer-when-downgrade\" src=\"https://static.scarf.sh/a.png?x-pxid=5b5a4db0-7875-4bfb-bdbd-01698b5b1a77&file=notebooks/wan2.1-text-to-video/wan2.1-text-to-video.ipynb\" />\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "[back to top ⬆️](#Table-of-contents:)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
      "test_replace": {
       "import platform\n": "import platform\n%pip uninstall -q -y tensorflow tf_keras \"tensorflow-estimator\" \"tensorflow-io-gcs-filesystem\" \"ml-dtypes\"\n"
    }
   },
   "outputs": [],
   "source": [
    "import platform\n",
    "\n",
    "%pip install -q \"torch>=2.1\" \"git+https://github.com/huggingface/diffusers.git\" \"transformers>=4.49.0\" \"accelerate\" \"safetensors\" \"sentencepiece\" \"peft>=0.15.0\" \"ftfy\" \"gradio>=4.19\" \"opencv-python\" --extra-index-url https://download.pytorch.org/whl/cpu\n",
    "%pip install --pre -U \"openvino>=2025.1.0\" \"nncf>=2.16.0\" --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly\n",
    "\n",
    "if platform.system() == \"Darwin\":\n",
    "    %pip install -q \"numpy<2.0.0\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "from pathlib import Path\n",
    "\n",
    "if not Path(\"ov_wan_helper.py\").exists():\n",
    "    r = requests.get(url=\"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/notebooks/wan2.1-text-to-video/ov_wan_helper.py\")\n",
    "    open(\"ov_wan_helper.py\", \"w\").write(r.text)\n",
    "\n",
    "if not Path(\"gradio_helper.py\").exists():\n",
    "    r = requests.get(url=\"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/notebooks/wan2.1-text-to-video/gradio_helper.py\")\n",
    "    open(\"gradio_helper.py\", \"w\").write(r.text)\n",
    "\n",
    "if not Path(\"notebook_utils.py\").exists():\n",
    "    r = requests.get(url=\"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/notebook_utils.py\")\n",
    "    open(\"notebook_utils.py\", \"w\").write(r.text)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Convert model to OpenVINO Intermediate Representation\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "\n",
    "Wan2.1 is a PyTorch model. OpenVINO supports PyTorch models via conversion to the OpenVINO Intermediate Representation (IR). The [OpenVINO model conversion API](https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html#convert-a-model-with-python-convert-model) should be used for this purpose. The `ov.convert_model` function accepts an original PyTorch model instance and example input for tracing and returns an `ov.Model` object representing this model in the OpenVINO framework. The converted model can be saved to disk using the `ov.save_model` function or loaded directly onto a device using `core.compile_model`.\n",
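    "\n",
    "As a minimal, hypothetical sketch of this workflow (using a toy `torch.nn.Linear` module as a stand-in for a real pipeline component), the conversion steps look like:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import openvino as ov\n",
    "\n",
    "# Toy PyTorch module standing in for one pipeline component\n",
    "toy_model = torch.nn.Linear(4, 2)\n",
    "example_input = torch.zeros(1, 4)\n",
    "\n",
    "# Trace the PyTorch model into an ov.Model\n",
    "ov_model = ov.convert_model(toy_model, example_input=example_input)\n",
    "\n",
    "# Either save the IR to disk ...\n",
    "ov.save_model(ov_model, \"toy_model.xml\")\n",
    "\n",
    "# ... or compile it directly for a target device\n",
    "compiled = ov.Core().compile_model(ov_model, \"CPU\")\n",
    "```\n",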
    "\n",
    "The model consists of three parts:\n",
    "* **Text Encoder** to encode the input multi-language text, incorporating cross-attention within each transformer block to embed the text into the model structure.\n",
    "* **Diffusion Transformer** for step-by-step denoising of the generated video, guided by the text instructions.\n",
    "* **VAE Decoder** to decode the generated video from its latent-space representation.\n",
    "\n",
    "The model performs the text-to-video generation task. To preserve the original model's flexibility, we will convert each part separately.\n",
    "\n",
    "The script `ov_wan_helper.py` contains helper functions for model conversion; please check its content if you are interested in the conversion details.\n",
    "\n",
    "### Compress model weights\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "To reduce memory consumption, weight compression can be applied using [NNCF](https://github.com/openvinotoolkit/nncf).\n",
    "\n",
    "<details>\n",
    "    <summary><b>Click here for more details about weight compression</b></summary>\n",
    "Weight compression aims to reduce the memory footprint of a model. It can also lead to significant performance improvement for large memory-bound models, such as Large Language Models (LLMs). LLMs and other models, which require extensive memory to store the weights during inference, can benefit from weight compression in the following ways:\n",
    "\n",
    "* enabling the inference of exceptionally large models that cannot be accommodated in the memory of the device;\n",
    "\n",
    "* improving the inference performance of the models by reducing the latency of the memory access when computing the operations with weights, for example, Linear layers.\n",
    "\n",
    "[Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf) provides 4-bit / 8-bit mixed weight quantization as a compression method primarily designed to optimize LLMs. The main difference between weight compression and full model quantization (post-training quantization) is that activations remain floating-point in the case of weight compression, which leads to better accuracy. Weight compression for LLMs provides a solid inference performance improvement which is on par with the performance of full model quantization. In addition, weight compression is data-free and does not require a calibration dataset, making it easy to use.\n",
    "\n",
    "The `nncf.compress_weights` function can be used to perform weight compression. The function accepts an OpenVINO model and other compression parameters. Compared to INT8 compression, INT4 compression improves performance even more but introduces a minor drop in prediction quality.\n",
    "\n",
    "More details about weight compression can be found in the [OpenVINO documentation](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html).\n",
    "</details>"
   ]
  },
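  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an illustrative sketch (separate from the notebook's actual conversion code in `ov_wan_helper.py`), `nncf.compress_weights` can be applied to a converted model like this; the toy `torch.nn.Linear` component and the output file name are assumptions for the example, while the INT4 parameters match the configuration used below:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import openvino as ov\n",
    "import nncf\n",
    "\n",
    "# Toy stand-in for one converted pipeline component\n",
    "ov_model = ov.convert_model(torch.nn.Linear(128, 128), example_input=torch.zeros(1, 128))\n",
    "\n",
    "# 4-bit asymmetric weight compression; activations stay floating-point\n",
    "compressed = nncf.compress_weights(ov_model, mode=nncf.CompressWeightsMode.INT4_ASYM, group_size=64, ratio=1.0)\n",
    "ov.save_model(compressed, \"toy_model_int4.xml\")\n",
    "```"
   ]
  },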
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c0fbf714670947e48155681658ab588f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Dropdown(description='Model format:', index=2, options=('FP16', 'INT8', 'INT4'), value='INT4')"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import ipywidgets as widgets\n",
    "\n",
    "# Read more about telemetry collection at https://github.com/openvinotoolkit/openvino_notebooks?tab=readme-ov-file#-telemetry\n",
    "from notebook_utils import collect_telemetry\n",
    "\n",
    "collect_telemetry(\"wan2.1-text-to-video.ipynb\")\n",
    "\n",
    "model_id = \"Wan-AI/Wan2.1-T2V-1.3B-Diffusers\"\n",
    "model_base_dir = Path(model_id.split(\"/\")[-1])\n",
    "\n",
    "model_format = widgets.Dropdown(\n",
    "    options=[\"FP16\", \"INT8\", \"INT4\"],\n",
    "    value=\"INT4\",\n",
    "    description=\"Model format:\",\n",
    ")\n",
    "\n",
    "model_format"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "import nncf\n",
    "\n",
    "model_dir = model_base_dir / model_format.value\n",
    "\n",
    "if model_format.value == \"INT4\":\n",
    "    weights_compression_config = {\"mode\": nncf.CompressWeightsMode.INT4_ASYM, \"group_size\": 64, \"ratio\": 1.0}\n",
    "elif model_format.value == \"INT8\":\n",
    "    weights_compression_config = {\"mode\": nncf.CompressWeightsMode.INT8_ASYM}\n",
    "else:\n",
    "    weights_compression_config = None"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Multiple distributions found for package optimum. Picked distribution: optimum\n",
      "The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.\n"
     ]
    }
   ],
   "source": [
    "from ov_wan_helper import convert_pipeline\n",
    "\n",
    "# Uncomment the line to see model conversion code\n",
    "# ??convert_pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Wan-AI/Wan2.1-T2V-1.3B-Diffusers model already converted. You can find results in Wan2.1-T2V-1.3B-Diffusers/INT4\n"
     ]
    }
   ],
   "source": [
    "convert_pipeline(model_id, model_dir, compression_config=weights_compression_config)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prepare model inference pipeline\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "\n",
    "`OVWanPipeline`, defined in `ov_wan_helper.py`, provides a unified interface for running model inference. It accepts the model directory and a target device map for inference."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "from ov_wan_helper import OVWanPipeline\n",
    "\n",
    "# Uncomment the line to see model inference code\n",
    "# ??OVWanPipeline"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Select inference device\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "You can specify the inference device for each pipeline component, or use the same device for all of them, using the widgets below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "4e3a666ff4004b4eb2fa75e2b362d921",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "VBox(children=(Dropdown(description='Transformer', index=1, options=('CPU', 'AUTO'), value='AUTO'), Dropdown(d…"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from notebook_utils import device_widget\n",
    "\n",
    "device_transformer = device_widget(exclude=[\"NPU\"], description=\"Transformer\")\n",
    "device_text_encoder = device_widget(exclude=[\"NPU\"], description=\"Text Encoder\")\n",
    "device_vae = device_widget(exclude=[\"NPU\"], description=\"VAE Decoder\")\n",
    "\n",
    "devices = widgets.VBox([device_transformer, device_text_encoder, device_vae])\n",
    "\n",
    "devices"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "device_map = {\"transformer\": device_transformer.value, \"text_encoder\": device_text_encoder.value, \"vae\": device_vae.value}\n",
    "\n",
    "ov_pipe = OVWanPipeline(model_dir, device_map)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Run OpenVINO Model Inference\n",
    "[back to top ⬆️](#Table-of-contents:)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from diffusers.utils import export_to_video\n",
    "\n",
    "prompt = \"A cat walks on the grass, realistic\"\n",
    "negative_prompt = \"Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards\"\n",
    "\n",
    "output = ov_pipe(prompt=prompt, negative_prompt=negative_prompt, height=480, width=832, num_frames=20, guidance_scale=1.0, num_inference_steps=4).frames[0]\n",
    "export_to_video(output, \"output.mp4\", fps=10)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<video src=\"output.mp4\" controls  >\n",
       "      Your browser does not support the <code>video</code> element.\n",
       "    </video>"
      ],
      "text/plain": [
       "<IPython.core.display.Video object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from IPython.display import Video\n",
    "\n",
    "display(Video(\"output.mp4\"))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Interactive demo\n",
    "[back to top ⬆️](#Table-of-contents:)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from gradio_helper import make_demo\n",
    "\n",
    "demo = make_demo(ov_pipe)\n",
    "\n",
    "try:\n",
    "    demo.launch(debug=True)\n",
    "except Exception:\n",
    "    demo.launch(share=True, debug=True)\n",
    "# if you are launching remotely, specify server_name and server_port\n",
    "# demo.launch(server_name='your server name', server_port='server port in int')\n",
    "# Read more in the docs: https://gradio.app/docs/"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  },
  "openvino_notebooks": {
   "imageUrl": "https://github.com/user-attachments/assets/3d1b587c-4799-442a-ac0a-2a5c9832d56e",
   "tags": {
    "categories": [
     "Model Demos",
     "AI Trends"
    ],
    "libraries": [],
    "other": [],
    "tasks": [
     "Text-to-Video"
    ]
   }
  },
  "widgets": {
   "application/vnd.jupyter.widget-state+json": {
    "state": {
     "05592f84f5104aefb789af1ac56241cc": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "DescriptionStyleModel",
      "state": {
       "description_width": ""
      }
     },
     "0c470d92ab6d4f909c03fa31a0e7bbc7": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "DescriptionStyleModel",
      "state": {
       "description_width": ""
      }
     },
     "0f298906ffd145b0b024e5c5d4de40b6": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HBoxModel",
      "state": {
       "children": [
        "IPY_MODEL_5249107cf961483f8eb8e3c54109983c",
        "IPY_MODEL_7827e06e83654ab9b12e0c3782dba700",
        "IPY_MODEL_b2bc6fb669514af8a2bb12b0ec55c618"
       ],
       "layout": "IPY_MODEL_abc2e27b9de6460fbdf561ee3213e1c1"
      }
     },
     "1055cdeb567649a8a2746539a0cf7b87": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "25a664bf06ef4a7c9a0ab503ee497759": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "2efd8892df254338b4873d33037c3e24": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "DropdownModel",
      "state": {
       "_options_labels": [
        "CPU",
        "AUTO"
       ],
       "description": "Transformer",
       "index": 1,
       "layout": "IPY_MODEL_546abc5d11bf4bc0aeff2d4ae46c230a",
       "style": "IPY_MODEL_05592f84f5104aefb789af1ac56241cc"
      }
     },
     "3decb2c6214e4232b8a236cb0115db22": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HBoxModel",
      "state": {
       "children": [
        "IPY_MODEL_91e7bedb19cb4430836adaa0ea35143e",
        "IPY_MODEL_8770dcd83a5046c4a47d8345a8201ed1",
        "IPY_MODEL_90bde2e6e00646b296c8629611adc831"
       ],
       "layout": "IPY_MODEL_4b20a03efb6c4c83a41f5873f59393a0"
      }
     },
     "48cd0b498d9942fca34c1b6cecdb5596": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "4b20a03efb6c4c83a41f5873f59393a0": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "4e3a666ff4004b4eb2fa75e2b362d921": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "VBoxModel",
      "state": {
       "children": [
        "IPY_MODEL_2efd8892df254338b4873d33037c3e24",
        "IPY_MODEL_dddb537713304325bce235618653f4ed",
        "IPY_MODEL_e6d005e13b5e4c928267665bfbd2120f"
       ],
       "layout": "IPY_MODEL_debe50f8e1b74013b4b0ab3db4e47052"
      }
     },
     "5249107cf961483f8eb8e3c54109983c": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HTMLModel",
      "state": {
       "layout": "IPY_MODEL_6e21a98be6e243d5baaa895a29397936",
       "style": "IPY_MODEL_ded4fe557ee14efebae06eb737e08d9c",
       "value": "100%"
      }
     },
     "54539ede5d2e4cbeb69b9930a1d1097a": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "ProgressStyleModel",
      "state": {
       "description_width": ""
      }
     },
     "546abc5d11bf4bc0aeff2d4ae46c230a": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "5b9c545caad74c49a4d2760419b9564d": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "637f184f36a84a138f51f30a6bae2aa7": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "DescriptionStyleModel",
      "state": {
       "description_width": ""
      }
     },
     "6e21a98be6e243d5baaa895a29397936": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "77beacf3065e4e5c8429bc6124ad9c93": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HTMLStyleModel",
      "state": {
       "description_width": "",
       "font_size": null,
       "text_color": null
      }
     },
     "7827e06e83654ab9b12e0c3782dba700": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "FloatProgressModel",
      "state": {
       "bar_style": "success",
       "layout": "IPY_MODEL_e42d849d6cc4420dba0ded96ba3f07fe",
       "max": 4,
       "style": "IPY_MODEL_54539ede5d2e4cbeb69b9930a1d1097a",
       "value": 4
      }
     },
     "8770dcd83a5046c4a47d8345a8201ed1": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "FloatProgressModel",
      "state": {
       "bar_style": "success",
       "layout": "IPY_MODEL_cd8971c29586478098064f7b5238c3fb",
       "max": 4,
       "style": "IPY_MODEL_ab82dee3c8184723ad48f7c557ccc226",
       "value": 4
      }
     },
     "90bde2e6e00646b296c8629611adc831": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HTMLModel",
      "state": {
       "layout": "IPY_MODEL_48cd0b498d9942fca34c1b6cecdb5596",
       "style": "IPY_MODEL_d114193526614448836db914c1a96ce3",
       "value": " 4/4 [05:18&lt;00:00, 79.47s/steps]"
      }
     },
     "91e7bedb19cb4430836adaa0ea35143e": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HTMLModel",
      "state": {
       "layout": "IPY_MODEL_9668c3451df24eef8c3bd2b3deb17859",
       "style": "IPY_MODEL_d6a81425fc474d9eb2a3d078e1dbd529",
       "value": "100%"
      }
     },
     "9668c3451df24eef8c3bd2b3deb17859": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "9ff69292e7114d86ac7b828532542b8e": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "ab82dee3c8184723ad48f7c557ccc226": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "ProgressStyleModel",
      "state": {
       "description_width": ""
      }
     },
     "abc2e27b9de6460fbdf561ee3213e1c1": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "b2bc6fb669514af8a2bb12b0ec55c618": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HTMLModel",
      "state": {
       "layout": "IPY_MODEL_5b9c545caad74c49a4d2760419b9564d",
       "style": "IPY_MODEL_77beacf3065e4e5c8429bc6124ad9c93",
       "value": " 4/4 [02:35&lt;00:00, 38.67s/it]"
      }
     },
     "c0fbf714670947e48155681658ab588f": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "DropdownModel",
      "state": {
       "_options_labels": [
        "FP16",
        "INT8",
        "INT4"
       ],
       "description": "Model format:",
       "index": 2,
       "layout": "IPY_MODEL_9ff69292e7114d86ac7b828532542b8e",
       "style": "IPY_MODEL_637f184f36a84a138f51f30a6bae2aa7"
      }
     },
     "cd8971c29586478098064f7b5238c3fb": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "d114193526614448836db914c1a96ce3": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HTMLStyleModel",
      "state": {
       "description_width": "",
       "font_size": null,
       "text_color": null
      }
     },
     "d29d1a223c2d4fabbe1fac8167d0dc5f": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "DescriptionStyleModel",
      "state": {
       "description_width": ""
      }
     },
     "d6a81425fc474d9eb2a3d078e1dbd529": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HTMLStyleModel",
      "state": {
       "description_width": "",
       "font_size": null,
       "text_color": null
      }
     },
     "dddb537713304325bce235618653f4ed": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "DropdownModel",
      "state": {
       "_options_labels": [
        "CPU",
        "AUTO"
       ],
       "description": "Text Encoder",
       "index": 1,
       "layout": "IPY_MODEL_25a664bf06ef4a7c9a0ab503ee497759",
       "style": "IPY_MODEL_d29d1a223c2d4fabbe1fac8167d0dc5f"
      }
     },
     "debe50f8e1b74013b4b0ab3db4e47052": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "ded4fe557ee14efebae06eb737e08d9c": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HTMLStyleModel",
      "state": {
       "description_width": "",
       "font_size": null,
       "text_color": null
      }
     },
     "e42d849d6cc4420dba0ded96ba3f07fe": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "e6d005e13b5e4c928267665bfbd2120f": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "DropdownModel",
      "state": {
       "_options_labels": [
        "CPU",
        "AUTO"
       ],
       "description": "VAE Decoder",
       "index": 1,
       "layout": "IPY_MODEL_1055cdeb567649a8a2746539a0cf7b87",
       "style": "IPY_MODEL_0c470d92ab6d4f909c03fa31a0e7bbc7"
      }
     }
    },
    "version_major": 2,
    "version_minor": 0
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
