---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
- openvino
---

# OpenVINO Stable Diffusion

## hakurei/waifu-diffusion

This repository contains the models from [hakurei/waifu-diffusion](https://huggingface.co/hakurei/waifu-diffusion) converted to
OpenVINO for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum:
[optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored in FP16
precision, which halves the size of the model.

Please check out the [source model repository](https://huggingface.co/hakurei/waifu-diffusion) for more information about the model and its license.

To install the requirements for this demo, run `pip install "optimum-intel[openvino,diffusers]"`. This installs all the necessary dependencies,
including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide).

The simplest way to generate an image with Stable Diffusion takes only two lines of code, as shown below. The first line downloads the
model from the Hugging Face Hub (if it has not been downloaded before) and loads it; the second line generates an image.

```python
from optimum.intel.openvino import OVStableDiffusionPipeline

stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/hakurei-waifu-diffusion-ov")
images = stable_diffusion("a random image").images
```
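
The pipeline returns a list of PIL images, so the first result can be saved with `images[0].save("result.png")`, as shown in the static-shapes example below.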

The following example code uses static shapes for even faster inference. Using larger image sizes will
require more memory and take longer to generate.

If you have an 11th-generation or later Intel Core processor, you can use the integrated GPU for inference, and if you have an Intel
discrete GPU, you can use that. Add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the example below.
Model loading will take some time the first time, but will be faster after that because the model will be cached. On GPU, only static
shapes are supported for Stable Diffusion at the moment.

```python
from optimum.intel.openvino import OVStableDiffusionPipeline

batch_size = 1
num_images_per_prompt = 1
height = 256
width = 256

# load the model and reshape it to static shapes for faster inference
model_id = "helenai/hakurei-waifu-diffusion-ov"
stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False)
stable_diffusion.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt)
stable_diffusion.compile()

# generate an image!
prompt = "a random image"
images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images
images[0].save("result.png")
```
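
A minimal sketch of the GPU variant described above is shown below; it assumes you have an Intel integrated or discrete GPU supported by OpenVINO, and simply moves the pipeline to the `GPU` device before compiling, as the text above prescribes.

```python
from optimum.intel.openvino import OVStableDiffusionPipeline

batch_size = 1
num_images_per_prompt = 1
height = 256
width = 256

# load the model and reshape to static shapes (required for Stable Diffusion on GPU)
model_id = "helenai/hakurei-waifu-diffusion-ov"
stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False)
stable_diffusion.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt)

# move the pipeline to the Intel GPU before compiling; the first compile takes a while,
# subsequent runs are faster because the compiled model is cached
stable_diffusion.to("GPU")
stable_diffusion.compile()

images = stable_diffusion("a random image", height=height, width=width, num_images_per_prompt=num_images_per_prompt).images
images[0].save("result_gpu.png")  # hypothetical output filename, chosen for illustration
```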